There are a number of reasons that both domain name administrators and vendors of client DNS software cite for not incorporating DNSSEC signing into their offerings. The added complexity of the name administration process when signatures are added to the mix, the challenges of maintaining current root trust keys, and the adverse consequences of DNSSEC signature validation failure have all been mentioned as reasons to hesitate. We have also heard concerns over the increased overhead of using DNSSEC. These concerns come from zone administrators, authoritative name server operators and suppliers of DNS resolver systems, and all point to the imposition of further overheads in the process of DNS name resolution when the name being resolved is DNSSEC-signed. While the issues of complexity are challenging to quantify, we were interested in the issues of performance. What are the performance costs of adding DNSSEC signatures to a domain? Can we measure them?
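One way to get a first feel for this overhead is simply to compare the size and elapsed time of responses with and without DNSSEC records requested. The following is a minimal sketch using the dnspython library; the resolver address (8.8.8.8) and the query name are illustrative assumptions, and a real measurement would need to account for caching, TCP fallback on truncation, and repeated trials.

```python
# Minimal sketch: compare DNS response sizes and timings with and
# without DNSSEC records requested (the EDNS DO bit), using dnspython.
# The resolver address and query name are illustrative assumptions,
# not a measurement methodology.
import time
import dns.message
import dns.query
import dns.rdatatype

RESOLVER = "8.8.8.8"   # assumption: any reachable validating resolver
QNAME = "example.com"  # assumption: a DNSSEC-signed zone

def measure(want_dnssec: bool) -> tuple[int, float]:
    """Return (response size in bytes, elapsed seconds) for one query."""
    query = dns.message.make_query(QNAME, dns.rdatatype.A,
                                   want_dnssec=want_dnssec)
    start = time.perf_counter()
    # Note: a large signed response may be truncated over UDP and
    # require TCP retry; this sketch ignores that case.
    response = dns.query.udp(query, RESOLVER, timeout=5.0)
    elapsed = time.perf_counter() - start
    return len(response.to_wire()), elapsed

plain_size, plain_time = measure(want_dnssec=False)
signed_size, signed_time = measure(want_dnssec=True)
print(f"plain:  {plain_size} bytes in {plain_time * 1000:.1f} ms")
print(f"signed: {signed_size} bytes in {signed_time * 1000:.1f} ms")
```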
There is something badly broken in today’s Internet.
At first blush that may sound like a contradiction in terms, or perhaps a wild conjecture intended only to grab your attention and get you to read on. After all, the Internet is a modern-day technical marvel. In just a couple of decades the Internet has not only transformed the global communications sector, but its reach has extended far further into our society, and it has fundamentally changed the way we do business, the nature of entertainment, the way we buy and sell, and even the structures of government and their engagement with citizens. In many ways the Internet has had a transformative effect on our society that is similar in scale and scope to that of the industrial revolution in the 19th century. How could it possibly be that this prodigious technology of the Internet is “badly broken?” Everything that worked yesterday is still working today, isn’t it? In this article I’d like to explain this situation in a little more detail and expose some cracks in the foundations of today’s Internet.
The Domain Name System, or the DNS, is a critical yet somewhat invisible component of the Internet. The world of the Internet is a world of symbols and words. We invoke applications to interact with services such as Google, Facebook and Twitter, and the interaction is phrased in human-readable symbols. But the interaction with the network itself is conducted entirely in binary. So our symbolic view of a service, such as www.google.com, has to be translated into a protocol address, such as 126.96.36.199. This mapping from symbols to protocol addresses is one of the critical functions of the DNS. We rely not only on the continued presence of the DNS, but on its correct operation as well. Entering mybank.com.au in a browser does not guarantee that your interaction will be with your intended service. One of the more insidious attack vectors for the Internet is to deliberately corrupt the operation of the DNS, and thereby dupe the user’s application into opening a session with the wrong destination. The most robust response we’ve managed to devise to mitigate this longstanding vulnerability in the DNS has been to add secure cryptographic signatures into the DNS, using a technology called DNSSEC. But are we using DNSSEC in today’s Internet?
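To make the validation question concrete, a client can ask whether a resolver validated an answer by checking the AD (Authenticated Data) flag in the response. Here is a minimal sketch using the dnspython library; the resolver address is an illustrative assumption, and the flag will only be set when the queried zone is signed and the resolver actually validates.

```python
# Sketch: query a resolver for an A record and check the AD
# (Authenticated Data) flag, which a validating resolver sets when it
# has successfully performed DNSSEC validation on the answer. The
# resolver address (8.8.8.8) is an illustrative assumption; for an
# unsigned zone the flag will simply be clear.
import dns.flags
import dns.message
import dns.query
import dns.rdatatype

query = dns.message.make_query("www.google.com", dns.rdatatype.A,
                               want_dnssec=True)
response = dns.query.udp(query, "8.8.8.8", timeout=5.0)

validated = bool(response.flags & dns.flags.AD)
addresses = [rr.address
             for rrset in response.answer
             if rrset.rdtype == dns.rdatatype.A
             for rr in rrset]
print(f"addresses: {addresses}  DNSSEC-validated: {validated}")
```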
As many who have worked with computer software would attest, software bugs come in many strange forms. This month I’d like to relate a recent experience I’ve had with one such bug that pulls together aspects of IPv6 standard specifications and interoperability.
With WCIT-12 over, the preparation for the forthcoming WTPF underway, and of course the WTDC and WTISD also coming up, one could be excused for thinking that that world-famous, but hopelessly unintelligible, cartoon character of the ’80s and ’90s, Bill the Cat, has come out of retirement to work as head of Acronym Engineering at the ITU.
However, no matter how unintelligible the acronyms of these meetings can get, the issue of how we come to terms with a technology-dense world is a serious matter. Too often we appear to use yesterday’s tools and techniques to address tomorrow’s issues, taking the view that if it worked in the past it should work now. I’d like to look at this approach in a little more detail here, and try to understand why WCIT was such a comprehensive failure and why the prospects for the next round of telecommunications sector meetings are not exactly looking rosy.
Time for another annual roundup from the world of IP addresses. What happened in 2012, and what is likely to happen in 2013? This is an update to the reports prepared at the same time in previous years, so let’s see what has changed in the past 12 months in addressing the Internet, and look at how IP address allocation information can inform us of the changing nature of the network itself.
The problem with setting expectations is that when they are not fulfilled the fallout is generally considered to be a failure, and while everyone wants to claim parenthood of success, failure is an orphan. In that sense it looks like the WCIT meeting, and the International Telecommunication Regulations (ITRs) that were being revised at that conference, are both looking a lot like orphans this week.
No, that’s not a question about Australian coffee tastes and the critically important difference between a flat white and a cappuccino. This is a question about the differences in ISP retail models for broadband Internet access, and the choice between an “unlimited” flat fee that has no volume component, and a “capped” model where the service fee provides for a certain data volume; when that volume is reached, either the user is exposed to an incremental fee, or the service is throttled back to a narrowband service for the remainder of the billing period. It seems that this is once more a critical question in the ISP world, and maybe this time the topic is best approached through television.
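As a toy illustration of the difference between the two retail models, the following sketch computes the monthly charge under each; all fees, caps, and per-gigabyte rates here are invented purely for illustration.

```python
# Toy comparison of the two broadband retail models described above.
# All prices, caps, and rates are invented for illustration only.
def flat_fee_charge(monthly_fee: float) -> float:
    """Unlimited model: the charge never varies with volume."""
    return monthly_fee

def capped_charge(monthly_fee: float, cap_gb: float,
                  used_gb: float, excess_per_gb: float) -> float:
    """Capped model: base fee plus an incremental fee past the cap.
    (The throttling variant would instead reduce speed, not add cost.)"""
    excess = max(0.0, used_gb - cap_gb)
    return monthly_fee + excess * excess_per_gb

for used in (20, 50, 120):
    print(f"{used:>3} GB:  flat = ${flat_fee_charge(60.0):.2f}  "
          f"capped = ${capped_charge(40.0, 50.0, used, 2.0):.2f}")
```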
APNIC has recently deployed some changes to its RPKI service, and is continuing development work that will be released across 2013. This article discusses those changes, and what’s on the horizon early next year.
Splitting the TAL
A highly visible recent change to the APNIC RPKI system was the splitting of our trust anchor into five discrete parts. This can quickly become a complex topic, so before discussing why we did this, a short primer on certificates and certificate validation may be helpful.
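As a rough illustration of what is involved: a relying party starts from a trust anchor locator (TAL), which (per RFC 6490) pairs a retrieval URI with the base64-encoded public key of the trust anchor certificate; the relying party fetches the certificate and checks that its key matches before treating it as a trust anchor. The sketch below, using Python’s cryptography library, shows just that key-matching step; the file names are hypothetical placeholders, and real validators do considerably more.

```python
# Sketch: verify that a retrieved trust anchor certificate matches the
# public key carried in a TAL. Per RFC 6490, a TAL is a retrieval URI
# on the first line, followed by the base64-encoded DER
# SubjectPublicKeyInfo of the trust anchor. File names below are
# hypothetical placeholders.
import base64
from cryptography import x509
from cryptography.hazmat.primitives.serialization import (
    Encoding, PublicFormat)

def parse_tal(path: str) -> tuple[str, bytes]:
    """Return (URI, DER-encoded public key) from a TAL file."""
    with open(path) as f:
        lines = f.read().splitlines()
    uri = lines[0].strip()
    key_der = base64.b64decode("".join(lines[1:]))
    return uri, key_der

uri, tal_key_der = parse_tal("apnic.tal")        # hypothetical file name
with open("ta.cer", "rb") as f:                  # hypothetical cert file
    cert = x509.load_der_x509_certificate(f.read())

# A relying party accepts the certificate as a trust anchor only if its
# subject public key matches the key published in the TAL.
cert_key_der = cert.public_key().public_bytes(
    Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
print("trust anchor matches TAL:", cert_key_der == tal_key_der)
```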