Information Fiduciaries: Good Framework, Bad Solution.

By and large, human beings reason by analogy. We learn a basic rule, usually from a specific experience, and then generalize it to any new experience we encounter that seems similar. Even in the relatively abstract area of policy, human beings depend on reasoning by analogy. As a result, when looking at various social problems, the first thing many people do is ask “what is this like?” The answer we collectively come up with then tends to drive the way we approach the problem and what solutions we think address it. Consider the differences in policy, for example, between thinking of spectrum as a “public resource” vs. “private property” vs. a “public commons” — although none of these actually describes what happens when we send a message via radio transmission.


As with all human things, this is neither good nor bad in itself. But it does mean that bad analogies drive really bad policy outcomes. By contrast, good analogies and good intellectual frameworks often lead to much better policy results. Nevertheless, most people in policy tend to ignore the impact of our policy frameworks. Indeed, those who mistake cynicism for wisdom have a tendency to dismiss these intellectual frameworks as mere post hoc rationalizations for foregone conclusions. And, in fact, sometimes they are. But even in these cases, the analogies still end up subtly influencing how the policies get developed and implemented, because law and policy get implemented by human beings, and human beings think in terms of frameworks and analogies.


I like to think of these frameworks and analogies as “deep structures” of the law. Like the way the features of geography impact the formation and course of rivers over time, the way we think about law and policy shapes how it flows in the real world. You can bulldoze through these deep structures, forcibly change them, or otherwise ignore them, but they continue to exert influence over time.


Case in point: the idea that personal information is “property.” I will confess to having used this shorthand myself since 2016, when I started work on the ISP privacy proceeding. In my 2017 white paper on privacy legislative principles, I traced the evolution of this analogy from Brandeis to the modern day, comparing it to other intangibles such as the ‘right of publicity.’ But as I also tried to explain, this was not meant as actual, real property but as shorthand for the idea of a general, continuing interest. Unfortunately, as my Public Knowledge colleague Dylan Gilbert explains here, too many people have now taken this framework as meaning ‘treat personal information like physical property that can be bought and sold and exclusively owned.’ This leads to lots of problems and bad policies, since (as Dylan explains) data is not actually like physical property or even other forms of intangible property.


Which brings me to Professor Jack Balkin of Yale Law School and his “information fiduciaries” theory. (Professor Balkin has co-written pieces about this with several different co-authors, but it’s generally regarded as his theory.) Briefly (since I get into a bit more detail, with links, below), Balkin proposes that judges can (and should) recognize that the relationship between companies that collect personal information in exchange for services and the people they collect it from is similar to professional relationships such as doctor-patient or lawyer-client, where the law imposes limitations on your ability to use the information you collect over the course of the relationship.


This theory has become popular in recent years as a possible way to move forward on privacy. As with all theories that become popular, Balkin’s information fiduciary theory has started to get some skeptical feedback. The Law and Political Economy blog held a symposium for information fiduciary skeptics and invited me to submit an article. As usual, my first draft ended up being twice as long as what they wanted. So I am now running the full-length version below.


You can find the version they published here. You can find the rest of the articles from the symposium here. Briefly, I think relying on information fiduciaries for privacy doesn’t do nearly enough, and has no advantage over passing strong privacy legislation at the state and federal levels. OTOH, I do think the idea of a fiduciary relationship between the companies that collect and use personal information and the individuals whose information gets collected provides a good framework for how to think about the relationship between the parties, and therefore what sort of legal rights should govern that relationship.


More below . . .

Continue reading

Will The FCC Ignore the Privacy Implications of Enhanced Geolocation In New E911 Rulemaking?

NB: This originally appeared as a blog post on the site of my employer, Public Knowledge.

Over the last three months, Motherboard’s Joseph Cox has produced an excellent series of articles on how the major mobile carriers have sold sensitive geolocation data to bounty hunters and others, including highly precise information designed for use with “Enhanced 911” (E911). As we pointed out last month when this news came to light, turning over this E911 data (called assisted GPS, or A-GPS) to third parties, whether by accident or intentionally, or using it in any way except for 911 or other purposes required by law, violates the rules the Federal Communications Commission adopted in 2015 to protect E911 data.

Just last week, Motherboard ran a new story on how stalkers, bill collectors, and anyone else who wants highly precise, real-time consumer geolocation data from carriers can usually scam it out of them by pretending to be police officers. Carriers have been required to take precautions against this kind of “pretexting” since 2007. Nevertheless, this tactic of pretending to be a police officer is extremely common and ridiculously easy because, according to one source interviewed in the article, “Telcos have been very stupid about it. They have not done due diligence.”

So you would think that, with the FCC scheduled to vote this Friday on a mandate to make E911 geolocation even more precise, the agency would (a) remind carriers that this information is super sensitive and subject to protections above and beyond the FCC’s usual privacy rules for phone information (called “customer proprietary network information,” or “CPNI”); (b) make clear that the newly required information will be covered by the rules adopted in the 2015 E911 Order; and (c) maybe even, in light of the ongoing revelations that carriers do not seem to take their privacy obligations seriously, solicit comment on how to improve privacy protections to prevent these kinds of problems from recurring. But of course, as the phrase “you would think” indicates, the FCC’s draft Further Notice of Proposed Rulemaking (FNPRM) does none of these things. The draft doesn’t even mention privacy once.


I explain below why this has real, and potentially really bad, implications for privacy.

Continue reading

The Market for Privacy Lemons. Why “The Market” Can’t Solve The Privacy Problem Without Regulation.

Practically every week, it seems, we get some new revelation about the mishandling of user information that makes people very upset. Indeed, people have become so upset that folks are actually talking about, dare we say it, “legislating” some new privacy protections. And no, I don’t mean “codifying existing crap while preempting the states.” For those interested, I have a white paper outlining principles for moving forward on effective privacy legislation (which you can read here). My colleagues at my employer Public Knowledge have a few blog posts addressing how Congress ought to respond to the whole Facebook/Cambridge Analytica thing and analyzing some of the privacy bills introduced this Congress.


Unsurprisingly, we still have folks who insist that we don’t need any regulation, and that if the market doesn’t provide people with privacy protection, it must be because people don’t value privacy protection. After all, the argument goes, if people valued privacy, businesses would offer services that protect it. So if we don’t see such services in the market, people must not want them. Q.E.D. Indeed, these folks will argue, we find that, at least for some services, there are privacy-friendly alternatives. Often these cost money, since you aren’t paying with your personal information. This leads some to argue that it’s simply that people like “free stuff.” As a result, the current Administration continues to focus on finding “market-based solutions” rather than figuring out what regulations would actually give people greater control over their personal information and prevent the worst abuses.


But an increasing number of people are wising up to the reality that this isn’t the case. What folks lack is a vocabulary to explain why these “market approaches” don’t work. Fortunately, a Nobel Prize-winning economist named George Akerlof figured this out back in the 1970s in a paper called “The Market for Lemons.” Akerlof’s later work on cognitive dissonance in economics is also relevant and valuable. (You can read what amounts to a high-level book report on Akerlof & Dickens’s “The Economics of Cognitive Dissonance” here.)

To summarize: everyone knows that they can’t do anything real to protect their privacy, so they either admit defeat and resent it, or lie to themselves that they don’t care. A few believe they can protect themselves via some combination of services and avoidance I will call the “magic privacy dance,” and therefore blame everyone else for not caring enough to do their own magic privacy dance. This ignores that (a) the magic privacy dance requires specialized knowledge; (b) the magic privacy dance imposes lots of costs, ranging from a monthly subscription to a virtual private network (VPN), to the opportunity cost of forgoing services like Facebook, to the fact that Amazon and Google are so embedded in the structure of the internet at this point that blocking them literally causes large parts of the internet to become inaccessible or slow down to the point of uselessness; and (c) nothing helps anyway! No matter how careful you are, a data breach by a company like Equifax, or a decision by a company you invested in to change its policy, means all your magic privacy dancing amounted to an expensive waste of time.


Accordingly, the rational consumer gives up. Unless you are willing to become a hermit, “go off the grid,” pay cash for everything, and do other stuff limited to retired spies in movies, you simply cannot realistically expect to protect your privacy in any meaningful way. Hence, as predicted by Akerlof, rational consumers don’t trust “market alternatives” promising to protect privacy. Heck, thanks to Congress repealing the FCC’s privacy rules in 2017, you can’t even get onto the internet without exposing your personal information to your broadband provider. Even the happy VPN dance won’t protect all your information from leaking out. So if you are screwed from the moment you go online, why bother to try at all?
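
For the quantitatively inclined, here is a toy simulation of the unraveling Akerlof describes, translated to privacy. (This is my own illustrative sketch in Python, not Akerlof’s actual model, and every number in it is made up.) Because users cannot verify any service’s privacy claims, they will only pay for the average level of protection they expect. That prices the genuinely protective, and therefore more expensive to operate, services out of the market first, which lowers the average, and so on:

```python
# Toy illustration of Akerlof's "market for lemons" applied to privacy.
# Every number here is invented; this sketches the adverse-selection
# dynamic, it is not a calibrated economic model.

def lemons_unraveling(qualities, value_per_quality=100, cost_per_quality=80):
    """Each service has a true privacy quality in [0, 1]. Users cannot
    observe quality, so they pay only for the *average* quality of the
    services still in the market. A service stays in the market only if
    that price covers its cost of actually protecting privacy."""
    market = sorted(qualities)
    rnd = 0
    while market:
        avg_quality = sum(market) / len(market)
        price = value_per_quality * avg_quality          # what users will pay
        survivors = [q for q in market if cost_per_quality * q <= price]
        print(f"round {rnd}: {len(market)} services, "
              f"avg quality {avg_quality:.2f}, price {price:.0f}")
        if len(survivors) == len(market):
            break                # market is stable (and possibly all lemons)
        market = survivors
        rnd += 1
    return market

# Ten services, from "sells everything to data brokers" (0.0) up to
# "genuinely privacy protective" (0.9).
print("survivors:", lemons_unraveling([i / 10 for i in range(10)]))
```

Run it and you can watch the most protective services exit round after round until only the worst one survives, which is Akerlof’s result in miniature: when buyers can’t tell good from bad, the good gets driven out.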


I explore this more fully below . . .

Continue reading

Better Privacy Protections Won’t Kill Free Facebook.

Once upon a time, some people developed a new technology for freely communicating with people around the world. While the new medium was initially the purview of techies and hobbyists, it didn’t take long for commercial interests to notice its insane popularity and move rapidly to displace the amateur stuff with professional content. But these companies had a problem. For years, people had gotten used to the idea that if you paid for the equipment to access the content, you could receive the content for free. No one wanted to pay for this new, high-quality (and expensive to make) content. How could private enterprise possibly make money (other than selling equipment) in a market where people insisted on getting new content every day — heck, every minute! — for free?


Finally, a young techie turned entrepreneur came up with a crazy idea. Advertising! This fellow realized that if he could attract a big enough audience, he could get people to pay him so much for advertising that it would more than cover the cost of creating the content. Heck, he even seeded the business by paying people to take his content, just so he could sell more advertising. Everyone thought he was crazy. What? Give away content for free? How the heck can you make money giving it away for free? From advertising? Ha! Crazy kids with their wacky technology. But over the course of a decade, this young genius built one of the most lucrative and influential industries in the history of the world.


I am talking, of course, about William Paley, who built the CBS broadcast network and figured out how to make radio broadcasting an extremely profitable business. Not only did Paley prove that you could make a very nice living giving away content supported by advertising, he also demonstrated that you didn’t need to know anything about your audience beyond the most basic raw numbers and aggregate information to do it. For the first 80 or so years of its existence, broadcast advertising depended on extrapolated guesses about the total aggregate viewing audience and only the most general information about the demographics of viewership. Until the recent development of real-time information collection via set-top boxes, broadcast advertising (and cable advertising) depended on survey sampling and such broad categories as “18-25 year old males” to sell targeted advertising — and made a fortune while doing it.
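
To put some numbers on just how little information that business model needed, here is a back-of-the-envelope sketch (in Python, with entirely hypothetical figures) of the sampling arithmetic behind a ratings point: meter a few thousand households, compute the share watching a show, scale up to the whole country, and attach an ordinary margin of error:

```python
import math

# Back-of-the-envelope sketch of panel-based ratings extrapolation.
# The panel size, viewing figures, and household count are all
# hypothetical, chosen only to show the arithmetic.

panel_size = 5000              # metered households in the sample
panel_watching = 900           # panel households tuned to the show
tv_households = 120_000_000    # national TV households to extrapolate to

share = panel_watching / panel_size          # 0.18, i.e. an "18 rating"
estimated_audience = share * tv_households

# 95% confidence interval for a sample proportion:
# p +/- 1.96 * sqrt(p * (1 - p) / n)
margin = 1.96 * math.sqrt(share * (1 - share) / panel_size)

print(f"estimated rating: {share:.1%} +/- {margin:.1%}")
print(f"estimated audience: {estimated_audience:,.0f} households "
      f"(+/- {margin * tv_households:,.0f})")
```

A panel of a few thousand households pins down the national audience to within about a percentage point, and coarse demographic buckets like “18-25 year old males” ride along the same way. That coarse, aggregate picture was all the advertising industry needed to make its fortune.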


We should remember this history when evaluating claims by Facebook and others that any changes to enhance user privacy will bring the digital world crashing down on us and force everyone to start paying for content. Setting aside that some people might actually like the option of paying for services in exchange for enhanced privacy protection (I will deal with why this doesn’t happen on its own in a separate blog post), history tells us that advertising can support free content just fine without needing to know every detail of our lives to serve us unique ads tailored to an algorithm’s best guess about our likes and dislikes based on multi-year, detailed surveillance of our every eye-muscle twitch. Despite the unfortunate tendency of social media to drive toward the most extreme arguments even at the best of times, “privacy regulation” is hardly an all-or-nothing proposition. We have a lot of room to address the truly awful problems with the collection and storage of personal information before we start significantly eating into the potential revenue of Facebook and other advertising-supported media.


Mind you, I’m not promising that solid and effective privacy regulation would have no impact on the future revenue-earning power of advertising. Sometimes, and again I recognize this will sound like heresy to a bunch of folks, we find that the overall public interest actually requires that we impose limits on profit-making activities to protect people. But again, and as I find myself explaining every time we debate possible regulation in any context, we don’t face some Manichean choice between a libertarian utopia and a blasted regulatory Hellscape where no business may offer a service without filling out 20 forms in triplicate. We have a lot of ways we can strike a reasonable balance that provides users with real, honest-to-God enforceable personal privacy, while keeping the advertising-supported digital economy profitable enough to thrive. My Public Knowledge colleague Allie Bohm has some concrete suggestions in this blog post here. I explore some broader possible theoretical dimensions of this balance below . . . .

Continue reading