Tales of the Sausage Factory:
The Market for Privacy Lemons. Why “The Market” Can’t Solve The Privacy Problem Without Regulation.

Practically every week, it seems, we get some new revelation about the mishandling of user information that makes people very upset. Indeed, people have become so upset that they are actually talking about, dare we say it, “legislating” some new privacy protections. And no, I don’t mean “codifying existing crap while preempting the states.” For those interested, I have a whitepaper outlining principles for moving forward on effective privacy legislation (which you can read here). My colleagues at my employer Public Knowledge have a few blog posts on how Congress ought to respond to the whole Facebook/Cambridge Analytica thing, and others analyzing some of the privacy bills introduced this Congress.


Unsurprisingly, we still have folks who insist that we don’t need any regulation, and that if the market doesn’t provide people with privacy protection, it must be because people don’t value privacy protection. After all, the argument goes, if people valued privacy, companies would offer services that protect it. So if we don’t see such services in the market, people must not want them. Q.E.D. Indeed, these folks will argue, we do find that — at least for some services — there are privacy-friendly alternatives. Often these cost money, since you aren’t paying with your personal information. This leads some to argue that the real issue is simply that people like “free stuff.” As a result, the current Administration continues to focus on finding “market-based solutions” rather than figuring out what regulations would actually give people greater control over their personal information and prevent the worst abuses.


But an increasing number of people are wising up to the reality that this isn’t the case. What folks lack is a vocabulary to explain why these “market approaches” don’t work. Fortunately, a Nobel Prize-winning economist named George Akerlof figured this out back in 1970 in a paper called “The Market for Lemons.” Akerlof’s later work on cognitive dissonance in economics is also relevant and valuable. (You can read what amounts to a high-level book report on Akerlof & Dickens’ “The Economics of Cognitive Dissonance” here.) To summarize: everyone knows that they can’t do anything real to protect their privacy, so they either admit defeat and resent it, or lie to themselves that they don’t care. A few believe they can protect themselves via some combination of services and avoidance I will call the “magic privacy dance,” and therefore blame everyone else for not caring enough to do their own magic privacy dance. This ignores that (a) the magic privacy dance requires specialized knowledge; (b) the magic privacy dance imposes lots of costs, ranging from the monthly subscription for a virtual private network (VPN), to the opportunity cost of forgoing services like Facebook, to the fact that Amazon and Google are so embedded in the structure of the internet at this point that blocking them causes large parts of the internet to become inaccessible or to slow down to the point of uselessness; and (c) nothing helps anyway! No matter how careful you are, a data breach at a company like Equifax, or a decision by a company you’ve invested your time and data in to change its policies, means all your magic privacy dancing amounted to an expensive waste of time.


Accordingly, the rational consumer gives up. Unless you are willing to become a hermit, “go off the grid,” pay cash for everything, and do other stuff limited to retired spies in movies, you simply cannot realistically expect to protect your privacy in any meaningful way. Hence, as predicted by Akerlof, rational consumers don’t trust “market alternatives” promising to protect privacy. Heck, thanks to Congress repealing the FCC’s privacy rules in 2017, you can’t even get on to the internet without exposing your personal information to your broadband provider. Even the happy VPN dance won’t keep all your information from leaking out. So if you are screwed from the moment you go online, why bother to try at all?
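
Since the lemons dynamic is doing all the work in this argument, here is a toy simulation of how it unravels a market. To be clear, this is my own back-of-the-envelope sketch, not anything from Akerlof’s paper: the particular numbers (privacy quality drawn uniformly from 0 to 1, honest protection costing the seller its quality level, buyers valuing quality at 1.5 times its level) are assumptions picked purely for illustration.

    # Toy model of Akerlof's "Market for Lemons" applied to privacy services.
    # My own illustrative sketch; every number here is an assumption.
    import random

    random.seed(1)

    # 1,000 services, each with a hidden privacy quality q in [0, 1].
    # Delivering quality q costs the seller q; buyers value quality q at
    # 1.5 * q, so every sale would be worthwhile IF buyers could verify q.
    services = [random.random() for _ in range(1000)]

    for round_num in range(1, 41):
        avg_q = sum(services) / len(services)
        # Buyers cannot verify privacy claims, so they will only pay for
        # the *average* quality of whatever remains on the market.
        price = 1.5 * avg_q
        # Sellers whose cost (q) exceeds the going price cannot stay in.
        services = [q for q in services if q <= price]
        print(f"round {round_num:2d}: price={price:.4f}, "
              f"services remaining={len(services)}")
        if not services:
            print("Market unraveled: no privacy-protecting service survives.")
            break

Run it and both the price and the number of surviving services collapse toward zero within a couple dozen rounds. That is the privacy market in a nutshell: because consumers cannot verify privacy claims, the most privacy-protective (and most costly) services are precisely the ones no rational consumer will pay a premium for, and they exit first.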


I explore this more fully below . . .

Continue reading

Tales of the Sausage Factory:
Net Neutrality Oral Argument Highlights Problem For Pai: You Can’t Hide The Policy Implications Of Your Actions From Judges.

On Friday, February 1, we had approximately 4.5 hours of oral argument before Judge Millett, Judge Wilkins, and Senior Judge Williams. You can listen to a recording of the oral argument here. As everyone who does this for a living will tell you, you can’t judge the outcome by what happens at oral argument. But because oral argument is the biggest set of tea leaves we get from the black box of the court’s decision-making, we all speculate shamelessly. Unsurprisingly, Williams seemed most favorable to the FCC. He dissented in USTA v. FCC, and generally prefers deregulatory policy choices. Millett, as expected, pushed both sides hard. But ultimately both she and Wilkins seemed to come down against the FCC on several issues, including a lengthy discussion of the Section 257 argument I highlighted last week.


My colleague John Bergmayer has this summary of the substance of the argument. I want to highlight just one theme: the refusal of the FCC to be honest about the expected policy consequences of its actions. I highlight this for several reasons. First, people need to understand that while the agency can always change its mind, it has to follow the Administrative Procedure Act (APA), which includes addressing the factual record, acknowledging the change in policy from the previous FCC, and explaining why it makes a different decision this time around. As I have noted for the last couple of years, there is a lot of confusion around this point. On the one hand, it doesn’t mean you have to show that the old agency decision was wrong. But on the other hand, it doesn’t mean you get to pretend that the old opinion and its old factual record don’t exist. Nor do you get to ignore the factual record established in this case.


It was on these points that Millett and Wilkins kept hammering the FCC, and it is here that the FCC is likely in the biggest trouble with the Order. Because FCC Chair Ajit Pai has pretty much made it his signature style to ignore contrary arguments and make ridiculous claims about his orders, this problem has already chomped the FCC on the rear end pretty hard (ironically, in an opinion released on Friday), and will likely continue to do so.


More below . . .

Continue reading

Tales of the Sausage Factory:
Fun Arguments To Watch At Net Neutrality Oral Argument, or Did Marsha Blackburn Accidentally Save Net Neutrality?

At last, the contest everyone has been waiting for is finally here! Get ready tomorrow (Friday, February 1) for the oral argument in Mozilla v. FCC, the challenge to the 2017 repeal of net neutrality and re-reclassification of broadband as a Title I “information service” (aka the “Restoring Internet Freedom Order,” or “RIFO”). Obviously, as one of the counsel in the case, I am utterly confident that we will totally prevail, so I am not going to try to rehash why I think we win. Besides, you can get horse race coverage and results anywhere. ToTSF is where you go for the geeky stuff and to get your policy wonk on!


So in preparation for the Superb Owl of the 2018 telecom season, I thought I would point out some of the more fun arguments that may come up. As always, keep in mind that oral argument is a perilous guide to the final outcome, and the judges on the panel have a reputation for peppering both sides with tough questions. Also, there is a lot of legal ground to cover, and many important issues raised in the briefs may not get discussed at all because of time limitations. With all that in mind, here are some things to look for if you are lucky enough to be in the courtroom tomorrow, or when you listen to the full audio after it’s released.

Continue reading

Tales of the Sausage Factory:
Why “Wi-Fi 6” Tells You Exactly What You’re Buying, But “5G” Doesn’t Tell You Anything.

Welcome to 2019, where you will find aggressively marketed to you a new upgrade in Wi-Fi called “Wi-Fi 6” and just about every mobile provider will try to sell you some “new, exciting, 5G service!” But funny thing. If you buy a new “Wi-Fi 6” wireless router you know exactly what you’re getting. It supports the latest IEEE 802.11ax protocol, operating on existing Wi-Fi frequencies of 2.4 GHz and 5 GHz, and any other frequencies listed on the package. (You can see a summary of how 802.11ax differs from 802.11ac here.) By contrast, not only does the term “5G” tell you nothing about the capabilities (or frequencies, for them what care) of the device, but what “5G” means will vary tremendously from carrier to carrier. So while you can fairly easily decide whether you want a new Wi-Fi 6 router, and then just buy one from anywhere, you are going to want to very carefully and very thoroughly interrogate any mobile carrier about what their “5G” service does and what limitations (including geographic limitations) it has.


Why the difference? It’s not simply that we live in a world where the Federal Trade Commission (FTC) lets mobile carriers get away with whatever marketing hype they can think up, such as selling “unlimited” plans that are not, in fact, unlimited. It has to do with the fact that back in the early 00s, the unlicensed spectrum/Wi-Fi community decided to solve the confusion problem by eliminating confusion, whereas the licensed/mobile carrier world decided to solve the confusion problem by embracing it. As I explain below, that wasn’t necessarily the wrong decision given the nature of licensed mobile service v. unlicensed. But it does mean that 5G will suffer from Forrest Gump Syndrome for the foreseeable future. (“5G is like a box of chocolates, you never know what to expect.”) It also means that, for the foreseeable future, consumers will basically need to become experts in a bunch of different technologies to figure out what flavor of “5G” they want, or whether to just wait a few years for the market to stabilize.

More below . . . .


Continue reading

Tales of the Sausage Factory:
Apple v. Pepper: Can Illinois Brick Survive Ohio v. Amex, or Is Antitrust On Two-Sided Platforms Possible or Effectively Dead?

Last term the Supreme Court decided Ohio v. American Express, an antitrust case in which the Court held that when analyzing whether conduct harmed consumers (and is thus a cognizable injury under the antitrust laws based on the current “consumer welfare standard“), a court examining a two-sided market must analyze both sides of the market, i.e., the consumer-facing side and the merchant-facing side, to determine whether the conduct causes harm. If vertical restraints on the merchant side of the platform produce benefits to consumers on the other side, then the restraints do not violate the antitrust law — even if they prevent new competitors from successfully emerging. In Ohio v. Amex, the Court reasoned that an “anti-steering provision” preventing merchants from directing consumers to other credit cards with lower swipe fees (the fee a merchant pays the card network on each transaction) was offset by Amex providing benefits such as travel services (at least to Platinum members) and various discount and loyalty reward programs. The Court found this consumer benefit offset the cost to merchants of the higher swipe fees (as the dissent observed, the majority did not address the district court’s finding that these higher swipe fees were passed on to consumers in the form of overall higher prices).
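
A little toy arithmetic helps show why that unaddressed pass-through finding matters. The numbers below are entirely made up for illustration (a 3% swipe fee, a 2% cardholder reward, and full pass-through of the fee into retail prices); they are my own assumptions, not figures from the case.

    # Toy arithmetic with made-up numbers (not figures from Ohio v. Amex):
    # a cardholder "benefit" on one side of a two-sided platform can be
    # funded by merchant fees passed through to ALL buyers on the other.

    base_price = 100.00  # what the merchant would charge with no card fees
    swipe_fee = 0.03     # fee rate the merchant pays the card network
    rewards = 0.02       # reward rate the network pays back to cardholders

    # Assume full pass-through: the merchant raises prices to cover the fee.
    retail_price = base_price * (1 + swipe_fee)

    cardholder_cost = retail_price * (1 - rewards)  # reward offsets some cost
    cash_buyer_cost = retail_price                  # pays more, gets nothing

    print(f"retail price after pass-through: ${retail_price:.2f}")
    print(f"net cost to rewards cardholder:  ${cardholder_cost:.2f}")
    print(f"net cost to cash buyer:          ${cash_buyer_cost:.2f}")

Under these assumptions, even the rewards cardholder ends up paying $100.94 against a $100 fee-free baseline, and the cash buyer pays $103 while getting nothing back. Whether that kind of diffuse, cross-subsidized harm counts under the consumer welfare standard is exactly the question Ohio v. Amex leaves open.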


While Ohio v. Amex dealt with credit cards, folks like Lina Khan have argued that because digital platforms such as Facebook are also “two-sided markets,” the decision will make it extremely difficult to go after digital platforms. As long as the company justifies its conduct by pointing to a consumer benefit, such as giving the product away for free (or selling at a reduced cost, in the case of companies like Amazon), it is hard to see what harm to the folks on the other side of the market would satisfy the consumer welfare standard. Or, in other words, it would appear under Ohio v. Amex that even if a firm like Amazon or Facebook does things to prevent a competitor from emerging or to extract monopoly rents from the non-consumer side, as long as consumers benefit in some way, everything is cool.

Others have argued, however, that we should not read Ohio v. Amex as bleakly as this. Since the majority did not address the findings of the district court, it did not rule out the possibility that exercising market power over the merchant side can harm consumers and thus violate the consumer welfare standard. Rather, taking the decision at face value, those more optimistic about the future of antitrust and two-sided markets maintain that the district court erred in Amex by focusing on the harm to competition, rather than on how that harm directly impacted consumers. (Again, the dissent points out the district court did focus on the harm to consumers, but the majority makes no comment on these findings, so there is no negative case law on whether merchants passing the higher swipe fees on in overall higher prices is a cognizable harm.)


Recently, the Supreme Court heard argument in Apple v. Pepper.  As I explain below, although Apple v. Pepper addresses standing rather than a finding of a violation of the antitrust law itself, it should provide further guidance on whether antitrust law remains relevant in the era of two-sided markets. More below . . . .

Continue reading

Tales of the Sausage Factory:
Tumblr, Consolidation, and the Gentrification of the Internet.

Tumblr recently announced it will ban adult content. Although the move came partially in response to the discovery of a number of communities posting child pornography, and the subsequent removal of the Tumblr app from the extremely important Apple App Store, a former engineer at Tumblr told Vox the change had been in the works for months. The change was mandated by Tumblr’s corporate parent Verizon (which acquired Tumblr when it acquired Yahoo!, Yahoo! having bought Tumblr back in 2013). Why did Verizon want to ban adult content on Tumblr after 11 years? According to the same Vox article, the new ban is an effort to attract greater advertising revenue. Tumblr has a reputation for adult content, which translates to advertisers as “porn” (unfairly, in the view of Tumblr’s supporters), and advertisers don’t like their products associated with pornography (or other types of controversial content).


I can’t blame Verizon for wanting to make more money from Tumblr. But the rendering of Tumblr “safe for work” (and therefore safe for more mainstream advertising) illustrates one of the often under-appreciated problems of widespread content and platform consolidation. Sites that become popular because they allow communities or content that challenge conventional standards become targets for acquisition. Once acquired, the acquirer seeks to expand the attractiveness of the platform to advertisers and more mainstream audiences. Like a gentrifying neighborhood, the platform’s authentic and sometimes dangerous character rapidly smooths out to become more palatable, forcing the original community either to conform to the new domesticated normal or to try to find somewhere else to go. And, as with gentrification, while each individual change may appear to have limited impact, the widespread trend ultimately impacts us all.


I explain more below . . . .

Continue reading

Tales of the Sausage Factory:
Pai Continues Radical Deregulation Agenda. Next On The Menu — SMS Texting and Short Codes

In December 2007, Public Knowledge (joined by several other public interest groups) filed a Petition For Declaratory Ruling asking the Federal Communications Commission (FCC) to clarify that both SMS text messaging and short codes are “Title II” telecommunications services. Put another way, we asked the FCC to reaffirm the basic statutory language that if you use telephones and the telephone network to send information from one telephone number to another, the service meets the definition of “telecommunications service.” (47 U.S.C. 153(53)) We did this because earlier in 2007 Verizon had blocked NARAL from using its short code for political action alerts. While we thought there might be some question about short codes, it seemed pretty obvious from reading the statute that when you send “information between or among points of the user’s choosing, without change in the form or content as sent and received” (the definition of “telecommunications”) over the phone network, using phone numbers, you have a “telecommunications service.”


Sigh.


On the anniversary of the repeal of net neutrality, FCC Chair Ajit Pai now proposes another goodie for carriers – classifying both short codes and text messages as Title I “information services” rather than Title II telecommunications services. Because this is even more ridiculous than last year’s reclassification of broadband as Title I, the draft Order relies primarily on the false claim that classifying text messaging as Title I is an anti-robocall measure. As we at PK pointed out a bunch of times when the wireless carriers first raised this argument back in 2008, this is utter nonsense. Email, the archetypal Title I information service, is (as Pai himself pointed out over here) chock full of spam. Furthermore, as Pai pointed out last month, the rise in robocalls to mobile phones has nothing to do with regulatory classification and is primarily due to the carriers not implementing existing technical fixes. (And, as the Wall Street Journal explained in this article, robocallers have figured out how to get paid just for connecting to a live number whether or not you answer, a kind of arbitrage that does not work for text messages.)


As if that were not enough, the FCC issued a declaratory ruling in 2015, reaffirmed in 2016, that carriers may block unwanted calls or texts despite being Title II common carriers. There is absolutely nothing, nada, zip, zero, that classifying text messages as Title II does that makes it harder to combat spam. By contrast, Title II does prevent the kind of anticompetitive blocking of wanted text messages that we have already seen (and that, based on the record in the relevant FCC proceeding (08-7), still occurs regularly). This includes blocking immigrants’ rights groups, blocking health alerts, blocking information about legal medical marijuana, and blocking competing services. We should therefore treat the claims by industry and the FCC that only by classifying text messaging as an “information service” can we save consumers from a rising tide of spam for what they are – self-serving nonsense designed to justify stripping away the few remaining enforceable consumer rights.


Once again, beyond the obvious free expression concerns and competition concerns, playing cutesy games with regulatory definitions will have a bunch of unintended consequences that the draft order either shrugs off or fails to consider. Notably:


  1. Classifying texting as Title I will take revenue away from the Universal Service Fund (USF). This will further undermine funds to support rural broadband.

  2. Classifying texting as Title I disrupts the current automatic roaming framework established by the FCC in 2007.

  3. Classifying texting as Title I may, ironically, take it out of the jurisdiction of the robocall statute (the Telephone Consumer Protection Act (TCPA) of 1991).

  4. Classifying texting as Title I trashes whatever consumer protections we have for text messages, and takes one more step toward the total administrative repeal of Title II. That sounds like fun if you are a carrier, but it leaves us operating without a safety net for our critical communications infrastructure (as I’ve been writing about for almost ten years).


I unpack all of this below.


Continue reading

Tales of the Sausage Factory:
We Need To Fix News Media, Not Just Social Media — Part III

This is part of a continuing series of mine on platform regulation published by my employer, Public Knowledge. You can find the whole series here. You can find the original of this blog post here. This blog post is Part 3 of a three-part series on media and social media. Part 1 is here, Part 2 is here. This version includes recommendations that are my own and have not been reviewed or endorsed by Public Knowledge.


And now . . . after more than 6,000 words of background and buildup . . . my big reveal on how to fix the problems in media! You’re welcome.


Somewhat more seriously, I’ve spent a lot of time in Part 1 and Part 2 reviewing the overall history of the last 150 years of how technology and journalism interrelate because two critically important themes jump out. First, evolution in communications technology always results in massive changes to the nature of journalism by enabling new forms of journalism and new business models. Sometimes these changes are positive, sometimes negative. But the dominance of large media corporations financing news production and distribution through advertising revenue is not a natural law of the universe, or necessarily the best thing for journalism and democracy. The Internet generally, and digital platforms such as news aggregators and social media specifically, are neither the solution to the dominance of corporate media that optimists hoped they would be, nor the source of all media’s problems, as some people seem to think. Digital platforms are tools, and they have the same potential to utterly revolutionize both the nature of journalism and the business of generating and distributing news as the telegraph or the television did.


In Part 2, I looked at how activists and journalists used social media in ways that changed how the public observed the events unfolding in Ferguson in 2014, and how this challenged the traditional media narrative around race and policing in America. Combining the lessons from this case study with the broader lessons of history, I have a set of specific policy recommendations that address both the continued solvency of the business of journalism and steps to regain public trust in journalism.


More below . . .


Continue reading

Tales of the Sausage Factory:
Why You Should Treat Any Predictions About Telecom/Tech Policy in 2019 Skeptically.

Under Section 217, Paragraph (b), sub (1) of the “wonk code of conduct,” I am required to provide some immediate analysis on what the election means for my area of expertise (telecom/tech, if you were wondering). So here goes.


  1. Everyone will still pretend to care deeply about the digital divide, particularly the rural digital divide.
  2. The MPAA, RIAA and all the usual suspects are probably already shopping their wish lists. This is great news for any recently elected member or staffer who was worried about needing to get tickets to “Fantastic Beasts” or whatever other blockbuster they will screen at MPAA HQ.
  3. Everyone will still talk about the vital importance of “winning” the “race to 5G” while having no clue what that actually means.

These predictions rank up there with “the New England Patriots will play football, and everyone outside of New England will hate them” or “the media will spend more time covering celebrity ‘feuds’ than major health crises like the famine in Yemen or the Ebola outbreak in Congo.” They are more like natural laws of the universe than actual predictions. As for substance, y’all remember that trillion-dollar infrastructure bill Trump was gonna do in 2017? I suspect predictions about how federal policy is going to sort itself out will be just as reliable.


Why? Because at this stage there are just too many dang meta-questions unresolved. So rather than try to predict things, I will explain what pieces need to fall into place first.


Also, it’s worth noting that we had action on the state level that impacts tech and telecom. Start with Phil Weiser winning the election for state AG in Colorado. As John Oliver recently pointed out, don’t underestimate the importance of state AGs. This is particularly true for a tech-savvy AG in a techie state. Then there is California’s governor-elect Gavin Newsom, who tried to address the digital divide as Mayor of San Francisco with a community wireless network back when people were trying that. Will he continue to make the digital divide a major issue? But I’ll stick to my forte of federal policy for the moment.


Anyway, rather than try to predict what the policy will be, here’s what is going to have to get sorted out first.

Continue reading

Tales of the Sausage Factory:
We Need To Fix News Media, Not Just Social Media — Part II

This is part of a continuing series of mine on platform regulation published by my employer, Public Knowledge. You can find the whole series here. You can find the original of this blog post here. This blog post is Part 2 of a three-part series on media and social media. Part 1 is here.


In Part I, I explained why blaming digital platforms generally (and Facebook and Google in particular) for the current dysfunctional news industry and the erosion of public trust in journalism is an incomplete assessment, and therefore leads to proposed solutions that do not actually address the underlying problems. To recap briefly: since the mid-1990s we have seen a steady decline in the quality of journalism and increasing public distrust of traditional newspapers and broadcast news. Massive consolidation financed by massive debt prompted an ever-smaller number of mega-companies to cut costs by firing reporters and closing newsrooms, and to shift from hard news (which is more expensive to produce) to infotainment and talking-head punditry, while the rise of unabashedly partisan talk radio hosts and cable networks encouraged the public to increasingly silo themselves in partisan echo chambers. The relentless drive of these media giants to use the news to cross-promote their products, the increasing perception that the news industry had failed to question the Bush Administration’s justification for the invasion of Iraq, and the general perception that corporate media slanted news coverage to further their corporate or political interests (an impression shared by many reporters as well) all contributed to public distrust of the media and the general decline in consumption of news from traditional outlets long before online advertising was a serious threat to revenue. Finally, the unshakably wrong perception by corporate media that the public has no interest in substantive political coverage (despite numerous surveys to the contrary) prompted an audience hungry for real reporting to look to the emerging blogosphere and away from traditional journalists.


Again, to be clear, there are genuine and serious concerns with regard to the potential gatekeeper and market power of social media and other digital platforms. The incentive of platforms to encourage “engagement” – whether by inspiring agreement or inspiring anger – warps both news reporting and news consumption. This incentive encourages these platforms to promote extreme headlines, hyper-partisanship, and radicalization, which in turn encourages those trying to attract readers to increasingly move to ever more extreme language and positions. These problems require a set of their own solutions, which I will reserve for a future installment. In this post, I want to focus on how we can begin to repair the problems with our dysfunctional news industry and the crisis of trust undermining journalism.


More below . . .

Continue reading