Will The FCC Ignore the Privacy Implications of Enhanced Geolocation In New E911 Rulemaking?

NB: This originally appeared as a blog post on the site of my employer, Public Knowledge.

Over the last three months, Motherboard’s Joseph Cox has produced an excellent series of articles on how the major mobile carriers have sold sensitive geolocation data to bounty hunters and others, including highly precise information designed for use with “Enhanced 911” (E911). As we pointed out last month when this news came to light, turning over this E911 data (called assisted GPS, or A-GPS) to third parties, whether by accident or intentionally, or using it in any way except for 911 or other purposes required by law, violates the rules the Federal Communications Commission adopted in 2015 to protect E911 data.

Just last week, Motherboard ran a new story on how stalkers, bill collectors, and anyone else who wants highly precise real-time consumer geolocation data from carriers can usually scam it out of them by pretending to be police officers. Carriers have been required to take precautions against this kind of “pretexting” since 2007. Nevertheless, according to people interviewed in the article, this tactic of pretending to be a police officer is extremely common and ridiculously easy because, as one source put it, “Telcos have been very stupid about it. They have not done due diligence.”

So you would think, with the FCC scheduled to vote this Friday on a mandate to make E911 geolocation even more precise, the FCC would (a) remind carriers that this information is super sensitive and subject to protections above and beyond the FCC’s usual privacy rules for phone information (called “customer proprietary network information,” or “CPNI”); (b) make it clear that the new information required will be covered by the rules adopted in the 2015 E911 Order; and (c) maybe even, in light of these ongoing revelations that carriers do not seem to be taking their privacy obligations seriously, solicit comment on how to improve privacy protections to prevent these kinds of problems from occurring in the future. But of course, as the phrase “you would think” indicates, the FCC’s draft Further Notice of Proposed Rulemaking (FNPRM) does none of these things. The draft doesn’t even mention privacy once.
I explain below why this has real, and potentially very serious, implications for privacy.
The Market for Privacy Lemons. Why “The Market” Can’t Solve The Privacy Problem Without Regulation.

Practically every week, it seems, we get some new revelation about the mishandling of user information that makes people very upset. Indeed, people have become so upset that they are actually talking about, dare we say it, “legislating” some new privacy protections. And no, I don’t mean “codifying existing crap while preempting the states.” For those interested, I have a whitepaper outlining principles for moving forward on effective privacy legislation (which you can read here). My colleagues at Public Knowledge have a few blog posts on how Congress ought to respond to the whole Facebook/Cambridge Analytica thing and on some of the privacy bills introduced this Congress.
Unsurprisingly, we still have folks who insist that we don’t need any regulation and that if the market doesn’t provide people with privacy protection, it must be because people don’t value privacy protection. After all, the argument goes, if people valued privacy, companies would offer services that protect it. So if we don’t see such services in the market, people must not want them. Q.E.D. Indeed, these folks will argue, we find that — at least for some services — there are privacy-friendly alternatives. Often these cost money, since you aren’t paying with your personal information. This leads some to argue that people simply like “free stuff.” As a result, the current Administration continues to focus on finding “market-based solutions” rather than figuring out what regulations would actually give people greater control over their personal information and prevent the worst abuses.
But an increasing number of people are wising up to the reality that this isn’t the case. What folks lack is a vocabulary to explain why these “market approaches” don’t work. Fortunately, a Nobel Prize-winning economist named George Akerlof figured this out back in the 1970s in a paper called “The Market for Lemons.” Akerlof’s later work on cognitive dissonance in economics is also relevant and valuable. (You can read what amounts to a high-level book report on Akerlof & Dickens’ “The Economics of Cognitive Dissonance” here.) To summarize: everyone knows that they can’t do anything real to protect their privacy, so they either admit defeat and resent it, or lie to themselves that they don’t care. A few believe they can protect themselves via some combination of services and avoidance I will call the “magic privacy dance,” and therefore blame everyone else for not caring enough to do their own magic privacy dance. This ignores that (a) the magic privacy dance requires specialized knowledge; (b) the magic privacy dance imposes lots of costs, ranging from a monthly subscription to a virtual private network (VPN), to the opportunity cost of forgoing services like Facebook, to the fact that Amazon and Google are so embedded in the structure of the internet at this point that blocking them causes large parts of the internet to become inaccessible or to slow down to the point of uselessness; and (c) nothing helps anyway! No matter how careful you are, a data breach at a company like Equifax, or a decision by a company you trusted with your data to change its policy, means all your magic privacy dancing amounted to an expensive waste of time.
Accordingly, the rational consumer gives up. Unless you are willing to become a hermit, “go off the grid,” pay cash for everything, and do other stuff limited to retired spies in movies, you simply cannot realistically expect to protect your privacy in any meaningful way. Hence, as predicted by Akerlof, rational consumers don’t trust “market alternatives” promising to protect privacy. Heck, thanks to Congress repealing the FCC’s privacy rules in 2017, you can’t even get on to the internet without exposing your personal information to your broadband provider. Even the happy VPN dance won’t protect all your information from leaking out. So if you are screwed from the moment you go online, why bother to try at all?
I explore this more fully below . . .
Better Privacy Protections Won’t Kill Free Facebook.

Once upon a time, some people developed a new technology for freely communicating with people around the world. While initially the purview of techies and hobbyists, it didn’t take long for commercial interests to notice the insanely popular new medium and rapidly move to displace the amateur stuff with professional content. But these companies had a problem. For years, people had gotten used to the idea that if you paid for the equipment to access the content, you could receive the content for free. No one wanted to pay for this new, high quality (and expensive to make) content. How could private enterprise possibly make money (other than selling equipment) in a market where people insisted on getting new content every day — heck, every minute! — for free?
Finally, a young techie turned entrepreneur came up with a crazy idea. Advertising! This fellow realized that if he could attract a big enough audience, he could get people to pay him so much for advertising that it would more than cover the cost of creating the content. Heck, he even seeded the business by paying people to take his content, just so he could sell more advertising. Everyone thought he was crazy. What? Give away content for free? How the heck can you make money giving it away for free? From advertising? Ha! Crazy kids with their wacky technology. But over the course of a decade, this young genius built one of the most lucrative and influential industries in the history of the world.
I am talking, of course, about William Paley, who invented the CBS broadcast network and figured out how to make radio broadcasting an extremely profitable business. Not only did Paley prove that you could make a very nice living giving away content supported by advertising, he also demonstrated that you didn’t need to know anything about your audience beyond the most basic raw numbers and aggregate information to do it. For the first 80 or so years of its existence, broadcast advertising depended on extrapolated guesses about total aggregate viewing audience and only the most general information about the demographics of viewership. Until the recent development of real-time information collection via set-top boxes, broadcast advertising (and cable advertising) depended on survey sampling and such broad categories as “18-25 year old males” to sell targeted advertising — and made a fortune while doing it.
We should remember this history when evaluating claims by Facebook and others that any changes to enhance user privacy will bring the digital world crashing down on us and force everyone to start paying for content. Setting aside that some people might actually like the option of paying for services in exchange for enhanced privacy protection (I will deal with why this doesn’t happen on its own in a separate blog post), history tells us that advertising can support free content just fine without needing to know every detail of our lives to serve us unique ads tailored to an algorithm’s best guess about our likes and dislikes, based on multi-year, detailed surveillance of our every eye-muscle twitch. Despite the unfortunate tendency of social media to drive toward the most extreme arguments even at the best of times, “privacy regulation” is hardly an all-or-nothing proposition. We have a lot of room to address the truly awful problems with the collection and storage of personal information before we start significantly eating into the potential revenue of Facebook and other advertising-supported media.
Mind you, I’m not promising that solid and effective privacy regulation would have no impact on the future revenue-earning power of advertising. Sometimes, and again I recognize this will sound like heresy to a bunch of folks, we find that the overall public interest actually requires that we impose limits on profit-making activities to protect people. But again, and as I find myself explaining every time we debate possible regulation in any context, we don’t face some Manichean choice between libertarian utopia and a blasted regulatory Hellscape where no business may offer a service without filling out 20 forms in triplicate. We have a lot of ways we can strike a reasonable balance that provides users with real, honest-to-God enforceable personal privacy, while keeping the advertising-supported digital economy profitable enough to thrive. My Public Knowledge colleague Allie Bohm has some concrete suggestions in this blog post. I explore some broader theoretical dimensions of this balance below . . . .