The Resilience of the Internet

As the season for making predictions approached, I found myself reflecting on some of the more famous technology predictions of the past. One of the boldest was by Ethernet inventor and networking pioneer Robert Metcalfe. In 1995, as the Internet was just beginning to reach a mass audience with the rise of Netscape, the World Wide Web, and consumer Internet access, he predicted that the Internet would suffer “catastrophic collapse” within a year. Of course, we know this didn’t happen, and Metcalfe, having promised that he would eat his words if he was wrong, very publicly turned his printed magazine column into pulp and ate it on stage. I have to say I admire his willingness to both make a bold prediction and back up his words with action. By comparison, my predictions columns have been relatively tame.

While we can all agree that the Internet has not collapsed, the foundation of Metcalfe’s prediction was reasonable: congestion-induced collapse was a known failure mode at the time. That observation led me to think about the remarkable success of the Internet in avoiding collapse — and evolving in spectacular ways — over the decades.

One of the most highly cited papers in networking is Jacobson and Karels’ “Congestion Avoidance and Control”, published in 1988. Their work was precipitated by a series of congestion collapse events in 1986, in which segments of the Internet saw their effective throughput drop by three orders of magnitude. The situation was analogous to the failure modes of many highways during rush hour, in which increased traffic dramatically degrades performance.

The 1988 paper documents the early congestion control algorithms for the Internet. It introduced a number of key concepts that endure today: packet drops as a congestion signal, slow-start, and additive increase/multiplicative decrease (AIMD). For a few years, there was a sense that congestion control was a “solved problem”, but further research in the 1990s made multiple improvements to the established algorithms, such as using increasing network delay as a congestion signal. Congestion control has continued to be an active field of research to this day. Indeed, Jacobson and Karels’ paper continues to be cited precisely because it laid the groundwork for a field that may never be fully “solved”. There will, I believe, always be room for more innovation in congestion control, in part because the way we use the Internet continues to evolve.
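The interplay of slow-start and AIMD can be illustrated with a toy simulation. This is only a sketch of the idea, not TCP’s actual implementation: the “network” is reduced to a fixed capacity, a drop is assumed whenever the window exceeds it, and all parameter values are illustrative.

```python
def aimd_trace(capacity=100.0, rounds=60):
    """Toy model of slow-start plus AIMD.

    The sender's congestion window grows exponentially during slow-start,
    then linearly (additive increase). When the window exceeds the path
    capacity, we treat that as a packet drop and halve the window
    (multiplicative decrease), producing the classic sawtooth.
    """
    cwnd = 1.0                  # congestion window, in packets
    ssthresh = float("inf")     # slow-start threshold
    trace = []
    for _ in range(rounds):
        trace.append(cwnd)
        if cwnd > capacity:     # simulated drop: the congestion signal
            ssthresh = cwnd / 2
            cwnd = ssthresh     # multiplicative decrease
        elif cwnd < ssthresh:
            cwnd *= 2           # slow-start: exponential probing
        else:
            cwnd += 1           # additive increase (congestion avoidance)
    return trace
```

Plotting the returned trace shows the familiar pattern: a rapid exponential climb, a drop, then the gentle sawtooth of additive increase and multiplicative decrease around the path capacity.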

The Challenges of Growth

One area where congestion control has proven particularly challenging is video streaming. I delivered a keynote talk at a networking conference in 2005 — a few months after the founding of (then barely-known) YouTube and before video on the Internet was working well or reaching a mass audience. Netflix was a DVD-by-mail service, still five years away from launching their streaming video service. I predicted that we were at the beginning of an era in which video streaming would become prevalent on the Internet, just as voice-over-IP had taken off in the preceding half-dozen years (at the time, I worked for Cisco, which had launched IP telephony products in 1999 and had recently shipped their 6 millionth IP phone). Part of my bullishness came from personal experience – I had recently paid a few dollars to livestream the World Championships of Athletics, and while the video quality was poor, it was worth every penny to access niche content. Video now accounts for about 60% of Internet traffic, so I’d say that probably counts as one of my best predictions.

Video presents a number of challenges for congestion control. We’re all familiar with the frustration of a video buffering or failing to stream smoothly. Classical TCP congestion control seeks to “fill the pipe” without causing congestion, i.e., it puts as much traffic as possible into the network to keep a path busy but not overloaded. The amount of bandwidth delivered to the application is therefore unpredictable, as it depends on what’s available along the path. But once a video has been encoded, it needs a certain amount of bandwidth, and if less bandwidth is available, the video will stall. If you don’t use TCP congestion control, or something “TCP-friendly”, then there is the risk of congestion collapse. The solution for streaming video has been “adaptive streaming”, in which the video is encoded at a variety of different resolutions, the level of congestion is measured, and the video stream switches among resolutions to avoid both congestion and stalling. There have also been innovations in congestion control algorithms that don’t require TCP (which is ill-suited to applications with tight latency bounds, for example).
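The core decision in adaptive streaming — pick the highest encoding the network can currently sustain — can be sketched in a few lines. The bitrate ladder and headroom factor below are illustrative assumptions, not values from any real streaming service:

```python
def pick_bitrate(measured_kbps, ladder=(250, 750, 1500, 3000, 6000),
                 headroom=0.8):
    """Choose the highest encoded bitrate (kbps) that fits within the
    measured throughput, leaving some headroom so that a transient dip
    in available bandwidth doesn't immediately stall playback.
    """
    budget = measured_kbps * headroom
    usable = [rate for rate in ladder if rate <= budget]
    # If even the lowest rung doesn't fit, stream at the lowest anyway
    # and rely on the playback buffer to absorb the shortfall.
    return usable[-1] if usable else ladder[0]
```

A real player runs this kind of decision continuously, re-measuring throughput as each segment downloads and stepping up or down the ladder — which is exactly the “switching among resolutions” the paragraph above describes.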

Continued Innovation

Another area of innovation in congestion control has come with the evolution of HTTP. In looking to improve the performance of HTTP, researchers at Google eventually determined that a new transport protocol with different properties than TCP was required. This led to the development of QUIC, now undergoing standardization at the IETF. Among the many interesting properties of QUIC is that it allows a range of congestion control algorithms and moves congestion control out of the operating system into user space. This opens up the possibility that new congestion control algorithms could be pushed out more quickly than ever, as they would be part of an incremental update to an application rather than a change to a sensitive part of the operating system kernel.

So, do I have any bold predictions to make at this time?

I confidently predict that the Internet is not going to collapse in my lifetime. The Internet’s architecture is established in a way that is likely to ensure its success. Enabling innovation at the edge has been an essential part of the Internet’s design from the beginning, and the long, successful history of innovation in handling congestion has been one of its great successes. There are plenty of other challenges that the Internet will have to address (such as the likelihood that cryptographic protocols will one day be broken by quantum computers) but I’m still going to bet on the resilience of the Internet and the innovation of the community that powers it. I promise to eat my words if I’m wrong.


