I salute Richard Bennett’s new paper, Designed for Change, in which he traces the engineering history of the end-to-end principle. It is a serious paper and deserves a serious response. Unfortunately, with Yom Kippur and various deadlines bearing down on me, that more serious response will need to come from elsewhere. I can give only a brief, surface response: reality is messy.
OK, too brief. A bit more elaboration. Richard Bennett is eminently qualified to write the technical history and draw engineering conclusions. So are a large number of other folks who take very different views on the issue of net neutrality and the virtues of end-to-end (Vint Cerf, David Reed, and kc claffy, to name a few folks of my acquaintance). The history Richard describes is layered onto an equally rich history of political and economic events, all of which interweave, and continue to interweave, to create a complex and messy reality in which public policy tries (in my opinion) to set rules that create the strongest likelihood of the best possible outcome.
More below . . . .
Bennett’s essential argument, if I grasp it correctly, is that certain difficulties that most agree are substantial problems would be far easier to solve if we gave network operators greater freedom to manipulate traffic. While possibly true in the abstract, I am much less convinced it will play out that way in reality. For one thing, when Comcast was forced to disclose its network management practices, it turned out that Comcast was not actually experiencing significant network congestion. Instead, it was preemptively addressing the fear of future congestion by going after its top 1,000 users every month and targeting the applications it considered most likely to cause congestion down the road. That had the virtue of cheap efficiency for Comcast, but it imposed significant costs on others.
Furthermore, the evolution of the wireless network and of the domain name system (DNS) bears me out. In both cases, allowing network operators unrestrained freedom to develop engineering solutions suited to their own needs has imposed a cost in terms of innovation and investment. In the case of wireless, despite a recent explosion of applications, the limits on what equipment can be attached to the network and on the applications one can run have produced a world less robust, interesting, or useful than the wireline or unlicensed wireless world. To take a simple example, I have a choice of many laptops and an infinite number of applications I could run on them via my Verizon FiOS connection and my wifi router. I have no problem watching TV in my workroom using Slingbox while working on my HP G60-235DX Notebook PC. For my Verizon wireless connection, I have a far more limited set of choices.
The situation is worse for DNS, which has undergone very limited evolution in the last 15 or so years. We are still struggling with internationalized domain names and DNSSEC, and we have introduced only a handful of new gTLDs, despite demand from many would-be TLD administrators.
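To give a small illustration of how little the underlying protocol itself has moved (my own example, not anything from Bennett’s paper): internationalized domain names do not extend the DNS wire format at all. They are bolted on at the edges by transcoding Unicode into ASCII “punycode” labels (RFC 3490/3492), so the 1980s-era ASCII DNS never has to change. A minimal Python sketch, using a made-up domain:

```python
# Hypothetical illustration: the Unicode name never touches the DNS.
# Applications convert it to an ASCII-compatible form before querying.
unicode_name = "bücher.example"  # a made-up internationalized domain name

# Python's built-in "idna" codec performs the Unicode-to-ASCII conversion
# label by label, producing the punycode form actually sent on the wire.
ascii_name = unicode_name.encode("idna").decode("ascii")
print(ascii_name)  # -> xn--bcher-kva.example
```

The point of the sketch is simply that all the evolution happened around the protocol rather than in it, which is consistent with how slowly the core system has changed.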
To all of these one may add a “yes, but,” explaining the differences in such systems or the advantages obtained from this stability. But it is stability that comes at a cost. I can think of no significant application that evolved in the licensed wireless universe in the last five years which did not first evolve in the wireline universe, whereas I can think of several in the unlicensed world (albeit these are more localized and less well known, WiFiDog being the first that comes to mind). Venture capitalists are giving up on licensed wireless, despite the continued growth in importance of mobile services, because the more controlled environment of mobile means that the sort of disruptive innovation that pays huge dividends for VCs simply cannot occur under the existing licensed wireless rules.
Which brings me to my conclusion. It is as wrong to treat the network neutrality question as solely an engineering question as it is to treat it as solely an economic question or a legal question. As I have argued before, the ability to tier traffic has serious implications for the cost of political speech, for our civil liberties generally, for the cost and structure of ecommerce, and for the potential for virtual redlining. These are not questions to be answered solely by recourse to engineering, any more than they can be answered without a serious examination of the underlying engineering. We cannot guard against these possibilities without cost, nor can we ignore them without consequences.
In the end, the network neutrality debate is a debate about what our priorities will be for our digital future. We may elevate autonomy for network operators for any of a variety of reasons, ranging from a libertarian belief that regulation is inherently wrong or harmful to a desire for engineering elegance and centralized control. We may restrict the network and empower developers at the edge for an equally wide range of reasons. It is not a debate in which the well-reasoned arguments of opponents should be lightly cast aside, whether because we consider the source suspect or the discipline of study less relevant. But for the same reason, it is not a debate in which a single argument, a single set of costs, or a single set of benefits will automatically prevail. It will be a very messy argument in which we must balance a great many things to set the course for the world in which we wish to live. For this reason I respect the arguments Richard Bennett makes, even if I think the world under the rules he proposes would be a poorer one than the world I hope will exist under the rules I would like to see.
Stay tuned . . . .