What’s so special about 512K?


In around 1990, the Internet Engineering Task Force (IETF) was alerted to a looming problem: long before the Internet was a commercial reality, it looked like we would hit two really solid walls if we wanted to scale the Internet into a global communications system.

The first problem was that the Internet Protocol’s 32-bit binary address was just too small. It was looking likely that we were going to run out of addresses in the mid ’90s.

Continue reading
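
As a purely illustrative aside, the arithmetic behind these walls is simple enough to sketch. Reading the "512K" in the title as a count of forwarding table entries is my assumption here, as are the names used in the snippet below.

```python
# A minimal back-of-the-envelope sketch (not taken from the article):
# the total size of the 32-bit IPv4 address pool, and the number of
# entries in a "512K" forwarding table, assuming "512K" means
# 512 * 1024 table slots.

IPV4_ADDRESS_BITS = 32
total_ipv4_addresses = 2 ** IPV4_ADDRESS_BITS   # 4,294,967,296 addresses

assumed_512k_table_slots = 512 * 1024           # 524,288 route entries

print(f"IPv4 address pool: {total_ipv4_addresses:,} addresses")
print(f"A '512K' route table: {assumed_512k_table_slots:,} entries")
```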

NANOG 61

The recent NANOG 61 meeting was a pretty typical NANOG meeting, with a plenary stream, some interest group sessions, and an ARIN Public Policy session. The meeting attracted 898 registered attendees, the biggest NANOG to date. No doubt the 70 registrations from Microsoft helped that number along, as NANOG 61 was held in Bellevue, Washington State, but even so the interest in NANOG continues to grow, and there was a strong European contingent, as well as some Japanese attendees and a couple of Australians. The meeting continues to have a rich set of corridor conversations in addition to the meeting schedule. These corridor conversations are traditionally focused on peering, but these days a number of address brokers, content networks, vendors and niche industry service providers have been added to the mix. The meeting layout always includes a large number (20 or so) of round tables in the common area, and they are well used. NANOG manages to facilitate conversations extremely well, and I’m sure that the majority of attendees come for the conversations with others, while the presentation content takes second place for many. That said, the program committee does a great job of balancing keynotes, vendor presentations, operator experience talks and research presentations.

Here are my impressions of some of the presentations I listened to at NANOG 61.

Continue reading

RIP Net Neutrality

It’s been an interesting couple of months in the ongoing tensions between Internet carriage and content service providers, particularly in the United States. The previous confident assertion was that the network neutrality regulatory measures in that country had capably addressed these tensions. While the demands of the content industry continue to escalate as the Internet rapidly expands into video content streaming models, we are seeing a certain level of reluctance from the carriage providers to continually accommodate these expanding demands through ongoing upgrades of their own network capacity without any impost on the content providers. The veneer of network neutrality is cracking under the pressure, and the arrangements that attempted to isolate content from carriage appear to be failing. What’s going on in this extended saga about the tensions between carriage and content?

Continue reading

A Reappraisal of Validation in the RPKI

I’ve often heard that security is hard. And good security is very hard. Despite the best of intentions, and the investment of considerable care and attention in the design of a secure system, sometimes it takes the critical gaze of experience to sharpen the focus and understand what’s working and what’s not. We saw this with the evolution of the security framework in the DNS, where it took multiple iterations over 10 or more years to come up with a DNSSEC framework that was able to gather a critical mass of acceptance. So before we hear cries that the deployed volume of RPKI technology means that it’s too late to change anything, let’s take a deep breath and see what we’ve learned so far from this initial experience, see if we can figure out what’s working and what’s not, and consider what we may want to revisit.

Continue reading

NTP for Evil

There was a story distributed around the newswire services at the start of February this year reporting that we had just encountered the “biggest DDoS attack ever”. This dubious distinction came as a result of the observation that this time around the attack volume reached 400Gbps of traffic, some 100Gbps more than the March 2013 DNS-based Spamhaus attack. What’s going on? Why are these supposedly innocuous, and conventionally all but invisible, services suddenly turning into venomous daemons? How have the DNS and NTP been turned against us in such a manner? And why have these attacks managed to overwhelm our conventional cyber defences?

Continue reading
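
As a rough aside, the reflection-and-amplification arithmetic that makes open DNS resolvers and NTP servers so attractive to attackers can be sketched in a few lines. All of the packet sizes and rates below are assumptions chosen for illustration, not measurements from the attacks mentioned above.

```python
# Illustrative sketch of UDP reflection/amplification: an attacker sends a
# small query with the victim's address spoofed as the source, and the
# server delivers a much larger response to the victim. Byte counts and
# rates are assumed values for illustration only.

def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Bytes delivered to the victim per byte the attacker sends."""
    return response_bytes / request_bytes

request_bytes = 200        # assumed size of a small spoofed UDP query
response_bytes = 40_000    # assumed size of a large multi-packet reply

factor = amplification_factor(request_bytes, response_bytes)
attacker_gbps = 2.0        # assumed traffic rate the attacker can source

print(f"amplification ~{factor:.0f}x")
print(f"a {attacker_gbps:g} Gbps attacker could direct "
      f"~{attacker_gbps * factor:.0f} Gbps at the victim")
```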

Protocol Basics – The Network Time Protocol

Back at the end of June 2012 there was a brief IT hiccup as the world adjusted the Coordinated Universal Time (UTC) standard by adding an extra second to the last minute of the 30th of June. Normally such an adjustment would pass unnoticed by all but a small dedicated collection of time keepers, but this time the story spread out into the popular media as numerous Linux systems hiccupped over this additional second, and some of them supported high-profile services, including a major air carrier’s reservation and ticketing backend system. The entire topic of time, time standards, and the difficulty of keeping a highly stable and regular clock standard in sync with a slightly wobbly rotating Earth has been a longstanding debate in the International Telecommunication Union Radiocommunication Sector (ITU-R), the standards body that oversees this coordinated time standard. However, I am not sure that anyone would argue that the challenges of synchronizing a strict time signal with a less than perfectly rotating planet are sufficient reason to discard the concept of a coordinated time standard and just let each computer system drift away on its own concept of time. These days we have become used to a world that operates on a consistent time standard, and we have become used to our computers operating at sub-second accuracy. But how do they do so? In this article I will look at how a consistent time standard is spread across the Internet, and examine the operation of the Network Time Protocol (NTP).

Continue reading
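
As a preview of the mechanics, the core calculation an NTP client performs (as described in RFC 5905) can be sketched as follows; the timestamp values below are made up for illustration.

```python
# A minimal sketch of the NTP offset/delay calculation: from four timestamps
# taken around a request/response exchange, a client estimates its clock
# offset from the server and the round-trip network delay.

def ntp_offset_and_delay(t0: float, t1: float, t2: float, t3: float):
    """
    t0: client's clock when the request left the client
    t1: server's clock when the request arrived at the server
    t2: server's clock when the response left the server
    t3: client's clock when the response arrived back at the client
    """
    offset = ((t1 - t0) + (t2 - t3)) / 2.0   # estimated client clock error
    delay = (t3 - t0) - (t2 - t1)            # round-trip network delay
    return offset, delay

# Illustrative timestamps (seconds): the client's clock runs ~0.05s behind.
offset, delay = ntp_offset_and_delay(t0=100.000, t1=100.060,
                                     t2=100.061, t3=100.021)
print(f"offset ≈ {offset:+.3f}s, delay ≈ {delay:.3f}s")
```

Note that the offset estimate assumes the network delay is roughly symmetric in each direction.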

BGP in 2013 – The Churn Report

Last month, in January 2014, I reported on the size of the Internet’s inter-domain routing table and looked at some projection models for the size of the default-free zone in the coming years. At present these projections point to relatively modest growth of some 7–8% per year for IPv4. Although IPv6 is growing at a faster rate, doubling in size every two years, its relatively modest size, at around 1/30th of the size of the IPv4 routing table, does not give cause for concern at the moment. But size is not the only metric of the scale of the routing space – it’s also what BGP does with this information that matters. As the routing table increases in size, do we see a corresponding increase in the number of updates generated by BGP as it attempts to converge? What can we see when we look at the profile of dynamic updates within BGP, and can we make some projections here about the likely future for BGP?

Continue reading
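
To make the projection concrete, here is a small compound-growth sketch using the rates quoted above; the starting table sizes are illustrative assumptions rather than the measured figures from the report.

```python
# Back-of-the-envelope projection of routing table growth, using the growth
# rates quoted above: ~7-8% per year for IPv4 and a doubling every two years
# for IPv6. Starting sizes are assumed values for illustration.

ipv4_routes = 500_000            # assumed starting IPv4 table size
ipv6_routes = ipv4_routes // 30  # "around 1/30th of the IPv4 table"

ipv4_growth = 1.075              # ~7.5% per year
ipv6_growth = 2 ** 0.5           # doubling every two years

for year in range(1, 6):
    ipv4_routes *= ipv4_growth
    ipv6_routes *= ipv6_growth
    print(f"year +{year}: IPv4 ≈ {ipv4_routes:,.0f} routes, "
          f"IPv6 ≈ {ipv6_routes:,.0f} routes")
```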

Addressing 2013 – That Was The Year That Was

Time for another annual roundup from the world of IP addresses. What happened in 2013 and what is likely to happen in 2014? This is an update to the reports prepared at the same time in previous years, so let’s see what has changed in the past 12 months in addressing the Internet, and look at how IP address allocation information can inform us of the changing nature of the network itself.

Continue reading

BGP in 2013

The Border Gateway Protocol, or BGP, has been toiling away, literally holding the Internet together, for more than two decades, and nothing seems to be falling off the edge of the Internet so far. As far as we can tell everyone can still see everyone else, assuming that they want to be seen, and the distributed routing system appears to be working smoothly. Everything appears to be operating within reasonable parameters, and there is no imminent danger of some routing catastrophe.

Continue reading