Most folks do not monitor the day-to-day filings in the Broadband Practices Notice of Inquiry, Docket No. 07-52, the proceeding that has become the open-docket portion of the regulatory discussion of Comcast's treatment of p2p uploads. Lucky them. But sifting this endless stream of regulatory filings has yielded some rather important nuggets of gold in the last few weeks, nuggets that deserve much greater attention from anyone who cares about the substance of the debate. As I discuss below, three recent filings deserve particular attention:
a) Robert Topolski demonstrates that Comcast blocks p2p uploads at a remarkably consistent rate, regardless of the time of day or night the test takes place and regardless of the nature of the content uploaded. This is utterly inconsistent with Comcast's stated position that it "delays" p2p traffic only during times of peak network congestion. Topolski adds some other interesting details as well.
b) Jon Peha, a Professor of electrical engineering and public policy at Carnegie Mellon, provides his own explanation of why Comcast's characterization of its "network management practice" as merely "delaying" p2p uploads, and its claim that this practice accords with general industry practice, are nonsense.
c) In defense of Comcast (or at least, in opposition to any government action to restrict the ability of ISPs to target p2p traffic specifically), George Ou filed this piece on how bittorrent and other p2p applications exploit certain features of TCP, a critical part of the protocol suite that makes the internet possible. Ou argues that as a result of this feature of p2p, heavy users of these applications will always be able to seize the vast majority of available bandwidth on the system to the disadvantage of all other users. Accordingly, the FCC should acknowledge that it is a "reasonable network management" practice to target p2p applications specifically, as opposed to heavy users or all applications generally.
My analysis of each filing, and something of a response to Ou, below . . . .
Last Time on “Comcast: The Misunderstood Consumer-Friendly Network Manager”
About two weeks ago Comcast and BitTorrent, Inc., announced they had reached an agreement to discuss how to handle p2p traffic to their (and therefore, of course, to everyone else's) mutual satisfaction. Also unsurprisingly, those not happy with big bad government coming in and telling "industry" what to do rejoiced at this "private sector" solution that was happening all on its own without any government interference whatsoever. Even those skeptical that this had nothing to do with the pending FCC complaint and skeptical of Comcast generally opted to reserve judgment on whether the announced agreement addressed anything relevant to the pending complaint against Comcast for its practice of targeting bittorrent (the application, not the company BitTorrent, Inc.) and other p2p applications.
FCC Chairman Martin, however, remained unimpressed. In particular, he drew attention to the fact that Comcast appeared now to admit to practices it had previously denied, and that Comcast could give no clear date on when it would stop blocking and/or degrading p2p applications. Two days later, Comcast Executive VP David Cohen sent this letter, in which he accused Kevin Martin of being a Comcast playa-hater who always got to be bringin' Comcast down. Cohen went on to say that (a) Comcast repeats that it only occasionally, only during periods of peak congestion, and only in the most non-intrusive way possible, "delays" uploads; (b) Comcast is not "arbitrary" or "discriminatory" in these practices; and (c) Comcast needs to wait until the end of the year, and provide customers with plenty of notice and preparation, rather than "put our customers at risk of network congestion." Cohen concluded by telling Martin that everyone else looooves the Comcast/BitTorrent deal (well, anyone who matters anyway), that Martin is just hatin' on cable, and that they will no doubt continue to whine about how mean Martin is to their wholly owned subsidiaries in Congress.
The Topolski Filing
This proved too much for Robert Topolski, whose initial investigation of Comcast's "network management" practices vis-a-vis bittorrent and other p2p applications got the ball rolling in the first place. Topolski sent his own letter to the FCC, in which he offers a point-by-point rebuttal of Cohen. I highly recommend reading the thing for yourself, but I will hit a few critical headlines here:
a) For all tests conducted by Topolski since he began testing until February 20, 2008, Comcast blocked p2p uploads using Gnutella 100% of the time, blocked ED2K uploads approximately 75% of the time, and blocked bittorrent uploads approximately 40% of the time. These results remained consistent no matter what time of day or night Topolski conducted the test and without regard to the size of the file or the legality of the content. This result is inconsistent with Comcast’s claim that it only targets p2p uploads during periods of peak congestion, or that the “delay” of uploads is transient.
b) On February 20, 2008, Comcast apparently changed its network management practice. It ceased all interference with Gnutella and ED2K, but interference with bittorrent uploads increased to approximately 75% of all attempted uploads. Again, this new result remains remarkably consistent: the rate of degradation for bittorrent uploads stays at roughly that level no matter the content or the time of day of the attempted uploads. This demonstrates ongoing monitoring by Comcast of its network management decisions, and an ability to alter those practices when it so desires. It also raises the question of why Comcast would need to wait until the end of the year to implement a new network management system that did not "delay" p2p uploads, since Comcast apparently had no trouble implementing a radical change on February 20.
Perhaps more importantly, Topolski's test results (which Topolski states were independently confirmed by EFF's Peter Eckersley on the monitoring website NNSQUAD.ORG) demonstrate both that Comcast is capable of targeting specific p2p applications while ignoring others, and that it is doing so. In the absence of any other information, Topolski argues that this blocking of bittorrent is "arbitrary." My personal fear is that it is not "arbitrary," but is instead motivated by Comcast's business plans rather than by engineering considerations. But in the absence of evidence, I can't prove anything. Hopefully, the FCC will look into this and ask (a) whether Comcast disputes Topolski's allegations, and (b) if Topolski is correct, why Comcast chooses the specific applications that it "delays."
c) Topolski includes statements from Comcast CTO Tony Werner that indicate that Comcast can manage its capacity on a dynamic basis by reclaiming channels from video delivery or by “virtual node splitting,” and that Comcast both anticipates network use surges and plans accordingly. If true, rather than merely being the sort of puffery common in the industry, it flatly contradicts Comcast’s repeated assertions that it suffers capacity constraints and that the only way Comcast can ensure that a handful of “bandwidth hogs” don’t destroy the network is by “delaying” p2p uploads.
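Topolski's basic point — that a blocking rate which stays flat around the clock cannot be congestion-triggered — lends itself to a simple illustration. The sketch below is mine, not Topolski's methodology: it simulates repeated upload attempts in every hour of the day against an assumed fixed block probability (loosely based on the post-February-20 bittorrent rate he reports), and shows that a blanket policy produces a roughly constant hourly rate, with none of the peak-hour spikes a congestion-based "delay" would leave behind.

```python
import random

# Assumed per-protocol block probabilities, loosely based on the rates
# Topolski reports after February 20 (illustrative, not his actual data).
BLOCK_RATE = {"bittorrent": 0.75, "gnutella": 0.0, "ed2k": 0.0}

def run_trials(protocol, attempts_per_hour=50, seed=42):
    """Simulate upload attempts in each hour of the day and return
    the observed block rate per hour."""
    rng = random.Random(seed)
    rates = {}
    for hour in range(24):
        blocked = sum(
            rng.random() < BLOCK_RATE[protocol]
            for _ in range(attempts_per_hour)
        )
        rates[hour] = blocked / attempts_per_hour
    return rates

rates = run_trials("bittorrent")
# A congestion-triggered policy would show high rates only at peak hours;
# a blanket policy shows a roughly flat rate around the clock.
spread = max(rates.values()) - min(rates.values())
print(f"hourly block rates vary by only {spread:.2f}")
```

The interesting direction is the contrapositive: if Comcast were really acting only at peak congestion, the hourly rates in a real log would cluster near zero overnight and spike in the evening, which is exactly what Topolski says his data do not show.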
The Peha Filing
Cohen’s scolding of Kevin Martin also prompted Carnegie Mellon Professor Jon Peha to explain why Comcast’s characterization of its practices does not wash. For those who love to play the “this is all heavy engineering and none of you policy wonks can ever really understand this stuff” card, I direct your attention to Peha’s engineering credentials: Ph.D. in EE from Stanford University, IEEE Fellow, etc. For those who like to play the “academics don’t understand how this stuff works in the real world” card, I note that Peha has been CTO of three tech start-ups and worked for SRI International, Bell Labs, Microsoft, and the U.S. government.
Peha’s letter also provides a point-by-point rebuttal of the Comcast claims. Again, I will commend the entire letter to folks rather than attempt to summarize it all here. But his challenge to Comcast’s claim that its practice of targeting p2p applications (and, if Topolski is correct, very specific applications for very specific types of blocking and degradation) is standard network management deserves reproduction here:
Comcast has made this assertion without evidence. I would never claim to know everything that happens in engineering circles, but I am unaware of any technical literature that has proposed that ISPs adopt this particular practice as a way of dealing with congestion, or to use this practice to address any other issues that might be important in the context of “network management.” The practice is known, but it is known in the security literature rather than the network management literature. The textbook name for this is a “man in the middle attack,” or MITM attack. It is therefore reasonable to ask whether the FCC should consider this approach as falling within the realm of network management at all, much less reasonable network management.
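For readers unfamiliar with the term, a "man in the middle attack" here means a device in the network path impersonating each endpoint to the other. The toy sketch below is my own illustration of the general shape of such an attack, not Comcast's actual implementation (which has been reported elsewhere to involve forged TCP reset packets): a middlebox injects a connection-teardown message to each side, forged to look like it came from the peer, so both sides believe the other hung up.

```python
from dataclasses import dataclass, field

@dataclass
class Endpoint:
    """A toy TCP-like endpoint: it trusts any reset that claims to
    come from its peer, which is what makes the forgery effective."""
    name: str
    peer: str
    open: bool = True
    log: list = field(default_factory=list)

    def receive(self, src, kind):
        self.log.append((src, kind))
        if kind == "RST" and src == self.peer:
            self.open = False  # tears down the connection

def mitm_reset(a: Endpoint, b: Endpoint):
    """The middlebox never reveals itself: it sends each endpoint a
    reset forged with the *other* endpoint's identity."""
    a.receive(b.name, "RST")   # forged: b never actually sent this
    b.receive(a.name, "RST")   # forged: a never actually sent this

seeder = Endpoint("seeder", peer="leecher")
leecher = Endpoint("leecher", peer="seeder")
mitm_reset(seeder, leecher)
print(seeder.open, leecher.open)  # both sides think the other hung up
```

The point Peha is making drops out of the sketch: neither endpoint chose to stop, and neither can tell from its own logs that a third party intervened, which is why this technique lives in the security literature rather than the network management literature.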
The Ou Filing
Finally, we come to George Ou’s filing on how p2p applications exploit certain aspects of TCP (a key element of the internet’s architecture) to “unfairly” (as Ou characterizes it in this article) capture bandwidth at the expense of other applications. As David Clark touched on at the Boston FCC hearing, TCP has built-in mechanisms for addressing congestion. As I understand Ou’s argument (and I trust he will once again correct me if he thinks I misrepresent his position), and to grossly oversimplify the engineering, p2p applications such as bittorrent are designed to open multiple TCP streams and thus avoid the congestion regulation mechanisms in TCP. This has the effect of allowing p2p applications that “blatantly exploit” this feature of TCP to circumvent the existing traffic controls at the expense of other users, grab up all the available bandwidth, and force all other users to endure TCP congestion delays.
Assuming Ou is correct (and I have not heard any engineering discussion to the contrary), Ou’s submission into the record provides an answer to one of the questions raised by Peha: how is it consistent with reasonable, non-arbitrary network management to target a specific type of application rather than address congestion generally? Ou argues that this is justified because p2p applications are different in a highly relevant way, and that only by targeting p2p applications (and, presumably, other applications that exploit the same features of TCP for similar effect) can Comcast (or other ISPs) address the problem. If ISPs merely address congestion generally, a handful of p2p users will continue to absorb a disproportionate amount of the resources.
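To make the argument concrete: TCP's congestion control divides a bottleneck roughly per *flow*, not per *user*, so a user who opens many parallel connections gets many shares. The back-of-the-envelope sketch below is my own illustration of that arithmetic (the flow counts are made up, not Ou's numbers), under the idealized assumption that every flow gets an equal slice of the link.

```python
def fair_share(flows_per_user):
    """Approximate per-user bandwidth share under idealized per-flow
    TCP fairness: each flow gets 1/total_flows of the bottleneck."""
    total = sum(flows_per_user.values())
    return {user: n / total for user, n in flows_per_user.items()}

# Nine web users with one flow each vs. one p2p user with 40 flows.
users = {f"web{i}": 1 for i in range(9)}
users["p2p"] = 40
shares = fair_share(users)
print(f"p2p user's share: {shares['p2p']:.0%}")   # 40/49 of the link, ~82%
print(f"each web user:    {shares['web0']:.1%}")  # 1/49 each, ~2%
```

On this simplified model, one user out of ten walks away with roughly four-fifths of the link simply by opening more connections, which is the "blatant exploitation" Ou is pointing at, and it is also why addressing congestion per flow rather than per user leaves the imbalance in place.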
Ou’s argument does not, of course, address Topolski’s observation that Comcast is not merely targeting p2p applications that exploit the TCP protocol, but is targeting very specific p2p applications with precision. But that issue is irrelevant to Ou’s real argument. Ou is not — as I understand it — defending any specific Comcast practice. He is arguing that the FCC must not prohibit ISPs from targeting specific applications like p2p, because very real differences in the nature of the applications justify this approach.
I am not an engineer, and will wait to see if anyone attempts to rebut Ou’s technical characterization. But assuming Ou is correct, I do not believe that his argument carries the day. The “arms race” between application providers and network providers is an old one, and makes itself felt in every level of network management. The question does not begin or end with engineering (a fact that causes much consternation to those who wish otherwise). As in any other policy debate — especially where critical infrastructure is involved — law and economics inform the technical decisions, the technology and the economics inform the choice of laws, and the technology and legal environment inform the business decisions.
For example, as I have observed in the past, one approach to the network congestion issue is to impose explicit limits on users while giving users tools to better manage their bandwidth. The combination of giving users incentives to be more efficient and giving them the tools to do so may alleviate congestion in a way that — to my mind at least — is a lot less dangerous than giving ISPs carte blanche to go after applications.
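The engineering expression of "explicit limits on users" is usually a per-user rate limiter such as a token bucket, which caps sustained throughput while still allowing bursts, and which never needs to know or care which application generated the traffic. A minimal sketch of the general technique (my illustration, with made-up numbers, not any ISP's actual implementation):

```python
import time

class TokenBucket:
    """Per-user rate limiter: tokens refill at `rate` bytes/sec up to
    `capacity`; a packet is allowed only if tokens cover its size."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, nbytes):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False  # excess traffic is queued or dropped, app-agnostic

# 1 MB/s sustained with a 2 MB burst allowance, per user.
bucket = TokenBucket(rate=1_000_000, capacity=2_000_000)
print(bucket.allow(1_500_000))  # True: within the burst allowance
print(bucket.allow(1_500_000))  # False: bucket nearly empty now
```

The design choice worth noticing is that the limiter above enforces the same ceiling on a p2p seeder and a video streamer alike; whatever its other costs, it does not require the ISP to decide which applications deserve to work.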
Ou hates the idea of metered pricing, and considers those of us on the net neutrality side who suggest it as an alternative way of handling congestion to be hypocrites, because metered pricing will kill the internet by chilling innovation and deterring use.
As it happens, I agree with Ou about the dangers of metered pricing. But now we are no longer having a technical argument. We are debating among possible alternatives, both technically feasible, but with very different possible real world outcomes. Ou is quite willing to agree that metering internet usage is technically possible, and that it is in fact happening in the real world. He merely thinks it will be a disaster if we require ISPs to adopt this approach as the alternative to targeting p2p applications.
Maybe. Or maybe, faced with these choices, technical folks looking to make money will get clever. Heck, that’s how we got businesses like Akamai: folks wanted to move content faster, and Akamai offered one possible solution. That happened because folks at the edge of the network (and there are an awful lot of them, many of them both very clever and very greedy in that good free-market sense of the word) offered solutions to these problems, and FCC rules prevented the network operators from interfering with those solutions. Now, broadband ISPs like Comcast would rather offer a different solution to the same problem of moving more stuff faster. Unsurprisingly, their solutions are quite different, do not involve letting people at the edge decide without their consent, and produce very different worlds.
So no, I don’t ignore the engineering realities. That would be idiotic. But it is equally idiotic to pretend that one engineer’s perception of one technical aspect of a very complex problem is a show-stopper to which all non-technical interests must yield. Just as there was never an internet in which all traffic moved at the same speed and money didn’t matter, there was never an internet where law and economics didn’t shape the technical solutions people chose to develop. Just ask anyone who used to waste “hundreds, if not thousands, of dollars” every time they posted to Usenet, back when no one was allowed to use “the backbone” for commerce. It would be equally nice if others could stop pretending that this is just about engineering; it goes to the heart of what kind of world we want to live in and what kind of applications will be permitted — either by operation of public policy or by operation of corporate policy — to evolve.
Stay tuned . . . .