Yes, that’s a cryptic topic, even for an article that addresses the use of cryptographic algorithms, so congratulations for getting even this far! This is a report of an experiment conducted in September and October 2014 by the authors to measure the extent to which deployed DNSSEC-validating resolvers fully support the use of the Elliptic Curve Digital Signature Algorithm (ECDSA) with curve P-256.
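As background to why ECDSA P-256 is of interest in DNSSEC at all (the experiment itself measures resolver support, not motivation): ECDSA signatures are much smaller than RSA signatures of comparable strength, which matters for DNS response sizes. A rough comparison, using sizes that follow from the algorithms' definitions rather than anything measured in the article:

```python
# Illustrative comparison of per-signature sizes in DNSSEC RRSIG records.
# An ECDSA P-256 signature is two 32-byte integers (r, s); an RSA
# signature is as long as the key's modulus.
ecdsa_p256_sig = 2 * 32       # 64 bytes
rsa_1024_sig = 1024 // 8      # 128 bytes
rsa_2048_sig = 2048 // 8      # 256 bytes

for name, size in [("ECDSA P-256", ecdsa_p256_sig),
                   ("RSA-1024", rsa_1024_sig),
                   ("RSA-2048", rsa_2048_sig)]:
    print(f"{name}: {size} bytes per signature")
```

The smaller signatures shrink signed responses considerably, which is precisely why it matters whether deployed validators actually support the algorithm.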
It has been a very busy period in the domain of computer security. What with “shellshock”, “heartbleed” and NTP monlist attacks adding to the background of open DNS resolvers, port 445 viral nasties, SYN attacks and other forms of vulnerability exploits, it’s getting very hard to see the forest for the trees. We are spending large amounts of resources in reacting to various vulnerabilities and attempting to mitigate individual network attacks, but are we making overall progress? What activities would constitute “progress” anyway?
At the NANOG meeting in Baltimore this week I listened to a presentation by Patrick Gilmore on “The Open Internet Debate: Section 706 vs Title II”. It’s true that this is a title that would normally induce a comatose reaction from any audience, but don’t let the title put you off. Behind this is an impassioned debate about the nature of the retail Internet for the United States, and, I suspect, a debate about the Internet itself and the nature of the industry that provides it.
How “big” is a network? How many customers are served by an Internet Service Provider?
While some network operators openly publish such numbers, other operators regard them as commercially sensitive information. There are a number of techniques used to estimate the relative size of each Service Provider from public information sources, including the number of IP addresses announced by the network, the number of transit customers who use the network, and so on. However, the widespread use of NATs in IPv4, the varying IPv6 address plans used by IPv6 service providers, and the varying use of Autonomous Systems (ASes) by retail Service Providers add considerable uncertainty to such indirect measurement exercises.
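The first of those techniques, counting announced addresses, can be sketched with Python's standard `ipaddress` module. The prefixes below are hypothetical examples drawn from documentation address space, not any real operator's announcements:

```python
import ipaddress

# Hypothetical set of prefixes announced by a single AS (illustrative
# documentation-range data, not a real announcement set).
announced = ["192.0.2.0/24", "198.51.100.0/22"]

# Naive size estimate: sum the address span of every announced prefix.
total_addresses = sum(ipaddress.ip_network(p).num_addresses
                      for p in announced)
print(f"announced address span: {total_addresses} addresses")
```

As the paragraph above notes, this naive sum is a poor proxy for customer count: a single IPv4 address behind a NAT may serve hundreds of users, while a sparsely used IPv6 allocation inflates the count enormously.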
The 12th August 2014 was widely reported as a day when the Internet collapsed. Despite the sensational media reports the following day, the condition was not fatal, and perhaps it could be more reasonably reported that some parts of the Internet were having a bad hair day.
What was happening was that the Internet’s growth had just exceeded the default configuration limits of certain models of network switching equipment. In this article I’ll review the behaviour of the Internet’s routing system, and then look at the internal organization of packet switching equipment and see how the growth of the routing table and the scaling in the size of transmission circuits impacts on the internal components of network routing equipment.
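The arithmetic behind that bad hair day is simple. Many switching platforms of the era shipped with a default forwarding-table partition of 512K IPv4 entries, and once the global routing table approached that figure, the crossing date was a matter of straightforward extrapolation. A toy sketch, where the current table size and daily growth rate are illustrative assumptions rather than figures from the article:

```python
# Toy estimate of when a routing table exhausts a fixed forwarding-table
# limit, given a steady linear growth rate (both inputs are assumptions).
FIB_LIMIT = 512 * 1024  # 524,288 entries: a common default partition size

def days_until_limit(current_routes: int, routes_per_day: int) -> int:
    """Days until the table crosses the limit (ceiling division)."""
    remaining = FIB_LIMIT - current_routes
    return max(0, -(-remaining // routes_per_day))

# e.g. a 500,000-entry table growing by a hypothetical 150 routes/day
print(days_until_limit(500_000, 150))
```

The real story, covered in the article, is messier: growth is bursty rather than linear, and the limit that bites is a hardware memory partition, not a software constant.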
If you’re playing in the DNS game, and you haven’t done so already, then you really should be considering turning on security in your part of the DNS by enabling DNSSEC. There are various forms of insidious attack that start with perverting the DNS, and end with the misdirection of an unsuspecting user. DNSSEC certainly allows a DNS resolver to tell the difference between valid intention and misdirection. But there’s no such thing as a free lunch, and the decision to turn on DNSSEC is not without some additional cost in terms of traffic load and resolution time. In this article, I’ll take our observations from running a large scale DNSSEC adoption measurement experiment and apply them to the question: What’s the incremental cost when turning on DNSSEC?
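Part of the incremental cost is easy to reason about even before measurement: with a cold cache, a validating resolver must fetch the DNSKEY RRset for every zone in the delegation chain and the DS record at every delegation point. A back-of-the-envelope sketch of that worst case (an assumption-laden simplification, not the experiment's methodology):

```python
# Worst-case (cold cache) extra queries a validating resolver issues for
# a name whose delegation chain passes through `zones_in_chain` signed
# zones, e.g. root -> org -> example.org is 3 zones.
def extra_validation_queries(zones_in_chain: int) -> int:
    dnskey_queries = zones_in_chain       # one DNSKEY RRset per zone
    ds_queries = zones_in_chain - 1       # one DS set per delegation point
    return dnskey_queries + ds_queries

print(extra_validation_queries(3))  # 5 additional queries in the worst case
```

In practice caching amortises most of these fetches, and the larger, signed responses add their own cost; the measured figures are what the article explores.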
In around 1990 the Internet Engineering Task Force (IETF) was alerted to a looming problem: long before the Internet was a commercial reality it looked like we would hit two really solid walls if we wanted to make the Internet scale to a global communications system.
The first problem was that the Internet Protocol’s 32-bit binary address was just too small. It was looking likely that we were going to run out of addresses in the mid ’90s.
At APNIC Labs we’ve been working on developing a new approach to navigating through some of our data sets that describe aspects of IPv6 deployment, the use of DNSSEC and some measurements relating to the current state of BGP.
The recent NANOG 61 meeting was a pretty typical NANOG meeting, with a plenary stream, some interest group sessions, and an ARIN Public Policy session. The meeting attracted some 898 registered attendees, which was the biggest NANOG to date. No doubt the 70 registrations from Microsoft helped in this number, as the location for NANOG 61 was in Bellevue, Washington State, but even so the interest in NANOG continues to grow, and there was a strong European contingent, as well as some Japanese attendees and a couple of Australians.

The meeting continues to have a rich set of corridor conversations in addition to the meeting schedule. These corridor conversations are traditionally focused on peering, but these days there are a number of address brokers, content networks, vendors and niche industry service providers added to the mix. The meeting layout always includes a large number (20 or so) of round tables in the common area, and they are well used. NANOG manages to facilitate conversations extremely well, and I’m sure that the majority of the attendees attend for conversations with others, while the presentation content takes second place for many. That said, the program committee does a great job of balancing keynotes, vendor presentations, operator experience talks and research presentations.
Here are my impressions of some of the presentations at NANOG 61 that I listened to.