Tales of the Sausage Factory:
Political Advertising In Crisis: What We Should Learn From the Warren/Facebook Ad Flap.

[This is largely a reprint from a blog post originally posted on the Public Knowledge blog.]

The last week or so has highlighted the complete inadequacy of our political advertising rules in an era when even the President of the United States has no hesitation in using both traditional broadcast media and social media to blast the world with unproven conspiracy theories about political rivals. We cannot ignore the urgency of this for maintaining fair and legitimate elections, even if we realistically cannot hope for Congress to address this in a meaningful way any time soon.


To recap for those who have not followed closely, President Trump has run an advertisement repeating a debunked conspiracy theory about former Vice President Joe Biden (a current frontrunner in the Democratic presidential primary). Some cable programming networks, such as CNN and those owned by NBCU, have refused to run the advertisement. The largest social media platforms — Facebook, Google, and Twitter — have run the advertisement, as have local broadcast stations, despite requests from the Biden campaign to remove the ads as violating platform policies against running advertising known to contain false or misleading information. Facebook provided the further explanation that it does not submit direct statements by politicians to fact checkers because it considers them “direct speech.”


Elizabeth Warren responded first with harsh criticism of Facebook, then with an advertisement of her own falsely stating that Zuckerberg had endorsed President Trump. Facebook responded that the Trump advertisement had run “on broadcast stations nearly 1,000 times as required by law,” and that Facebook agreed with the Federal Communications Commission that “it’s better to let voters — not companies — decide.” Warren responded with her own tweet that Facebook was “proving her point”: it was Facebook’s choice “whether [to] take money to promote lies. You can be in the disinformation-for-profit business or hold yourself to some standards.”


Quite a week, with quite a lot to unpack here. To summarize briefly, the Communications Act (not just the FCC) does indeed require broadcast stations that accept advertising from political candidates to run the advertisement “without censorship.” (47 U.S.C. §315(a).) While the law does not apply to social media (or to programming networks like NBCU or CNN), the principle underlying it is a balance: we want platforms to control their own content, but we also want to prevent them from selectively siding with one political candidate over another, while still allowing candidates to take their case directly to the people. At least in theory, broadcasters also labor under other restrictions that social media platforms don’t (such as limits on the size of their audience reach), which makes social media platforms more like content networks, with greater freedom to apply editorial standards. But actual broadcast licensees — the local stations that serve the viewing or listening area — effectively become “common carriers” for all “qualified candidates for public office”: they must sell every candidate the opportunity to speak directly to the audience, and they must charge all candidates the same rate.


All of this raises the real question, applicable to both traditional media and social media: How do we balance the power of these platforms to shape public opinion, the desire to let candidates make their case directly to the people, and the need to safeguard our ability to govern ourselves? Broadcast media remain powerful shapers of public opinion, but they clearly work in a very different way from social media. We need to honor the fundamental values at stake across all media, while tailoring the specific regulations to each specific medium.


Until Congress gets off its butt and actually passes some laws, we end up with two choices. Either we are totally cool with giant corporations making the decisions about which political candidates get heard and whether what they have to say is sufficiently supported and mainstream and inoffensive to get access to the public via social media, or we are totally cool with letting candidates turn social media into giant disinformation machines pushing propaganda and outright lies to the most susceptible audiences targeted by the most sophisticated placement algorithms available. It would be nice to imagine that there is some magic way out of this which doesn’t involve doing the hard work of reaching a consensus via our elected representatives on how to balance competing concerns, but there isn’t. There is no magic third option by which platforms acting “responsibly” somehow substitutes for an actual law. Either we make the choice via our democratic process, or we abdicate the choice to a handful of giant platforms run by a handful of super-rich individuals. So perhaps we could spend less time shaming big companies and more time shaming our members of Congress into actually doing their freaking jobs!!


(OK, spend more time doing both. Just stop thinking that yelling at Facebook is gonna magically solve anything.)

I unpack this below . . .

Continue reading

Tales of the Sausage Factory:
Mozilla v. FCC Reaction, or Net Neutrality Telenovela Gets Renewed For At Least Two More Seasons.

I’ve been doing network neutrality an awfully long time. More than 20 years, actually. That was when we started arguing over how to classify cable modem service. As I complained almost a decade ago, this is the issue that just will not die. I understand why, given the central importance of broadband to our society and economy. Nevertheless, my feeling on this can be summed up by the classic line from Godfather III: “Just when I thought I was out, they pull me back in.” [subtle product placement] I even went so far as to write a book on platform regulation to try to get away from this (available free here). [/subtle product placement] But no. Here we are again, with a decision that creates further muddle and guarantees this will keep going until at least after the 2020 election.

Sigh.


Getting on to the basics, you can find the decision in its 186-page glory here. You can find a good analysis of what potentially happens next for net neutrality by my colleague John Bergmayer here. The short version is that we lost the big prize (getting the Order overturned, or “vacated” as we lawyers say), but won enough to force this back to the FCC for further proceedings (which may yet result in the “Restoring Internet Freedom Order,” or RIFO, being reversed and/or vacated) and to open up new fronts in the states. The net result on balance is rather similar to what we had after the 2014 court decision that tossed out the 2010 net neutrality rules but laid the groundwork for reclassifying broadband as Title II: a curve ball that lets all sides claim some sort of win and creates enough uncertainty to likely keep the worst ISP abuses in check for the time being. (Mind you, ISPs will continue to test the boundaries, as they are already doing even without actual enforceable rules in place.)


Most importantly, industry and the FCC can’t get what they want most (preemption of state authority) without going full Title II. This puts the FCC in a bind, since it can’t deliver the thing industry most wants. It also means that various state laws (especially the comprehensive California net neutrality law) and various executive orders imposing some sort of net neutrality obligations now go into effect and will get litigated individually. As with the California privacy law passed last year, industry now has significant incentive to stop fooling around and offer real concessions to get some sort of federal law on the books. But, also as with the California privacy law, that incentive is unlikely to be enough to overcome industry reluctance to accept a law with teeth, so this is unlikely to go anywhere. So we are likely stuck until after the 2020 election.


I also want to emphasize that, as in 2014, even the parts where we lost contain the groundwork for ultimately winning. This gets lost in the headlines (particularly in the triumphant crowing of the FCC majority). But like any good telenovela, this latest dramatic plot twist has lots of foreshadowing for the next few seasons, and sets up an even BIGGER plot twist in seasons to come.


My incredibly long, highly personal and really snarky dissection of the D.C. Circuit’s opinion in Mozilla v. FCC and what it means going forward below.

Continue reading

Tales of the Sausage Factory:
A Tax on Silicon Valley Is A Dumb Way to Solve Digital Divide, But Might Be A Smart Way To Protect Privacy.

Everyone talks about the need to provide affordable broadband to all Americans. This includes not only finding ways to get networks deployed in rural areas on par with those in urban areas, but also making service affordable where networks already exist. As a recent study showed, more urban folks are locked out of home broadband by factors such as price than do without broadband because of the lack of a local access network. The simplest answer would be to simply include broadband (both residential and commercial) in the existing Universal Service Fund. Indeed, Rep. Doris Matsui has been trying to do this for about a decade. But, of course, no one wants to impose a (gasp!) tax on broadband, so this goes nowhere.


Following the Washington maxim “don’t tax you, don’t tax me, tax that fellow behind the tree,” lots of people come up with ideas for how to tax folks they hate or compete against. This usually includes streaming services such as Netflix, but these days it is more likely to include social media — particularly Facebook. The theory being “we want to tax our competitors,” or “we hates Facebook precious!” Um, I mean, “these services consume more bandwidth or otherwise disproportionately benefit from the Internet.” While this particular idea is both highly ridiculous (we all benefit from the Internet, and things like cloud storage take up more bandwidth than streaming services like Netflix) and somewhat difficult — if not impossible — to implement in any way related to network usage (which is the justification), it did get me thinking about what sort of a tax on Silicon Valley (and others) might make sense from a social policy perspective.


What about a tax on the sale of personal information, including the use of personal information for ad placement? To be clear, I’m not talking about a tax on collecting information or on using the information collected. I’m talking about a tax on two types of commercial transactions: selling information about individuals to third parties, and indirectly selling information to third parties via targeted advertising. It would be sort of a carbon tax for privacy pollution. We could even give “credits” to companies that reduce the amount of personal information they collect (although I’m not sure we want to allow firms to trade them). We could have additional fines for data breaches, the way we do for other toxic waste spills that require clean up.
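
To make the mechanics concrete, here is a minimal sketch of how such a tax-with-credits might compute a company’s liability. Everything here (the rates, the credit amount, the function itself) is a hypothetical assumption for illustration, not part of any actual proposal.

```python
# Hypothetical "privacy pollution" tax sketch. All rates are illustrative
# assumptions, not numbers from any actual proposal.

DIRECT_SALE_RATE = 0.05       # tax per dollar of personal data sold outright
TARGETED_AD_RATE = 0.03       # tax per dollar of targeted-advertising revenue
CREDIT_PER_CATEGORY = 50_000  # credit per category of data no longer collected

def privacy_tax(data_sale_revenue: float,
                targeted_ad_revenue: float,
                categories_dropped: int) -> float:
    """Tax the two transaction types; credits reduce the bill, never refund."""
    gross = (data_sale_revenue * DIRECT_SALE_RATE
             + targeted_ad_revenue * TARGETED_AD_RATE)
    credit = categories_dropped * CREDIT_PER_CATEGORY
    return max(0.0, gross - credit)

# Example: $10M in data sales, $200M in targeted-ad revenue, and 3 data
# categories the company no longer collects.
print(privacy_tax(10_000_000, 200_000_000, 3))  # 6350000.0
```

The design choice that matters, as with a carbon tax, is that the taxable event is the transaction (the sale or the targeted placement), not the collection itself, so a company can lower its bill by collecting and selling less.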


Update: I’m apparently not the first person to think of something like this, although I’ve expanded it a bit to address privacy generally and not just targeted advertising. As Tim Karr pointed out in the comments, Free Press got here ahead of me back in February — although with a more limited proposed tax on targeted advertising. Also, Paul Romer wrote an op-ed on this in the NYT last May. I have some real problems with the Romer piece, since he seems to think that an even more limited tax on targeted advertising is enough to address all the social problems, and that we should forget about either regulation or antitrust. Sorry, but just as no one serious about global climate change thinks a carbon tax alone will do the trick, no one serious about consumer protection and competition should imagine that a privacy pollution tax alone is going to force these companies to change their business models. This is a push in the right direction, not a silver bullet.


I elaborate below . . . .

Continue reading

Tales of the Sausage Factory:
Can Trump Really Have The FCC Regulate Social Media? So No.

Last week, Politico reported that the White House was considering a potential “Executive Order” (EO) to address the ongoing-yet-unproven allegations of pro-liberal, anti-conservative bias by giant Silicon Valley companies such as Facebook, Twitter, and Google. (To the extent that there is rigorous research by AI experts, it shows that social media sites are more likely to flag posts by self-identified African Americans as “hate speech” than identical wording used by whites.) Subsequent reports by CNN and The Verge have provided more detail. Putting the two together, it appears that the Executive Order would require the Federal Communications Commission to create rules limiting the ability of digital platforms to “remove or suppress content,” as well as prohibiting “anticompetitive, unfair or deceptive” practices around content moderation. The EO would also require the Federal Trade Commission to somehow open a docket and take complaints about supposed political bias claims (something it does not, at present, do, or have capacity to do – but I will save that hobby horse for another time).


(I really don’t expect I have to explain why this sort of ham-handed effort at political interference in the free flow of ideas and information is a BAD IDEA. For one thing, I’ve covered this fairly extensively in chapters five and six of my book, The Case for the Digital Platform Act. Also, Chris Lewis, President of my employer Public Knowledge, explained this at length in our press release in response to the reports that surfaced last week. But for those who still don’t get it: giving an administration that regards abuse of power for political purposes as a legitimate tool of governance the power to harass important platforms for the exchange of views and information unless they promote its political allies and suppress its critics is something of a worst case scenario for the First Amendment and democracy generally. Even the most intrusive government intervention/supervision of speech in electronic media, such as the Fairness Doctrine, had built-in safeguards to insulate the process from political manipulation. Nor are we talking about imposing common carrier-like regulations that remove the government entirely from influencing who gets to use the platform. According to what we have seen so far, we are talking about direct efforts by the government to pick winners and losers — the opposite of net neutrality. That’s not to say that viewpoint-based discrimination on speech platforms can’t be a problem — it’s just that, if it’s a problem, it’s better dealt with through the traditional tools of media policy, such as ownership caps and limits on the size of any one platform, or by using antitrust or regulation to create a more competitive marketplace with fewer bottlenecks.)


I have a number of reasons to think this EO will never actually go out. For one thing, it would completely contradict everything that the FCC said in the “Restoring Internet Freedom Order” (RIFO) repealing net neutrality. As a result, the FCC would either have to reverse its previous findings that Section 230 prohibits any government regulation of internet services (including ISPs), or see the new regulations struck down as arbitrary and capricious. And even if the FCC tried to somehow reconcile the two, Section 230 applies to ISPs as well. Any “neutrality” rule that applies to Facebook, Google, and Twitter would also apply to AT&T, Verizon, and Comcast.


But this niggles at my mind enough to ask a good old law school hypothetical. If Trump really did issue an EO similar to the one described, what could the FCC actually do under existing law?

Continue reading

Tales of the Sausage Factory:
Information Fiduciaries: Good Framework, Bad Solution.

By and large, human beings reason by analogy. We learn a basic rule, usually from a specific experience, and then generalize it to any new experience we encounter that seems similar. Even in the relatively abstract area of policy, human beings depend on reasoning by analogy. As a result, when looking at various social problems, the first thing many people do is ask “what is this like?” The answer we collectively come up with then tends to drive the way we approach the problem and what solutions we think address it. Consider the differences in policy, for example, between thinking of spectrum as a “public resource” v. “private property” v. “public commons” — although none of these actually describes what happens when we send a message via radio transmission.


As with all human things, this is neither good nor bad in itself. But it does mean that bad analogies drive really bad policy outcomes. By contrast, good analogies and good intellectual frameworks often lead to much better policy results. Nevertheless, most people in policy tend to ignore the impact of our policy frameworks. Indeed, those who mistake cynicism for wisdom have a tendency to dismiss these intellectual frameworks as mere post hoc rationalizations for foregone conclusions. And, in fact, sometimes they are. But even in these cases, the analogies still end up subtly influencing how the policies get developed and implemented. Because law and policy get implemented by human beings, and human beings think in terms of frameworks and analogies.


I like to think of these frameworks and analogies as “deep structures” of the law. Like the way the features of geography impact the formation and course of rivers over time, the way we think about law and policy shapes how it flows in the real world. You can bulldoze through it, forcibly change it, or otherwise ignore these deep structures, but they continue to exert influence over time.


Case in point: the idea that personal information is “property.” I will confess to using this shorthand myself since 2016, when I started work on the ISP privacy proceeding. In my 2017 white paper on privacy legislative principles, I traced the evolution of this analogy from Brandeis to the modern day, treating privacy as similar to other intangibles such as the “right of publicity.” But as I also tried to explain, this was never meant to describe actual, real property; it was shorthand for the idea of a general, continuing interest. Unfortunately, as my Public Knowledge colleague Dylan Gilbert explains here, too many people have now taken this framework to mean “treat personal information like physical property that can be bought and sold and owned exclusively.” This leads to lots of problems and bad policies, since (as Dylan explains) data is not actually like physical property, or even like other forms of intangible property.


Which brings me to Professor Jack Balkin of Yale Law School and his “information fiduciaries” theory. (Professor Balkin has co-written pieces about this with several different co-authors, but it’s generally regarded as his theory.) Briefly (since I get into a bit more detail with links below), Balkin proposes that judges can (and should) recognize that the relationship between companies that collect personal information in exchange for services and their users is similar to professional relationships such as doctor-patient or lawyer-client, where the law imposes limitations on the professional’s ability to use the information collected over the course of the relationship.


This theory has become popular in recent years as a possible way to move forward on privacy. As with all theories that become popular, Balkin’s information fiduciary theory has started to get some skeptical feedback. The Law and Political Economy blog held a symposium for information fiduciary skeptics and invited me to submit an article. As usual, my first draft ended up being twice as long as what they wanted. So I am now running the full length version below.


You can find the version they published here. You can find the rest of the articles from the symposium here. Briefly, I think relying on information fiduciaries for privacy doesn’t do nearly enough, and has no advantage over passing strong privacy legislation at the state and federal levels. OTOH, I do think the idea of a fiduciary relationship between the companies that collect and use personal information and the individuals whose information gets collected provides a good framework for how to think about the relationship between the parties, and therefore what sort of legal rights should govern that relationship.


More below . . .

Continue reading

Tales of the Sausage Factory:
I Accidentally Write A Book On How To Regulate Digital Platforms.

Some of you may have noticed I haven’t posted that much lately. For the last few months, I’ve been finishing up a project that I hope will contribute to the ongoing debate on “What to do about ‘Big Tech,’” aka what has now become our collective freak out at discovering that these companies we thought of as really cool turn out to control big chunks of our lives. I have now, literally, written the book on how to regulate digital platforms. Well, how to think about regulating them. As I have repeatedly observed, this stuff is really hard and involves lots of tradeoffs.


The Case for the Digital Platform Act: Market Structure and Regulation of Digital Platforms, with a Foreword by former FCC Chair (and author of From Gutenberg to Google) Tom Wheeler, covers all the hot topics (some of which I have previewed in other blog posts). How do we define digital platforms? How do we determine whether a platform is “dominant”? What can we do to promote competition in the platform space? How do we handle the very thorny problem of content moderation and filter bubbles? How do we protect consumers on digital platforms, and how do we use this technology to further traditional important goals such as public safety? Should we preempt the states to create one, uniform national policy? (Spoiler alert: no.) Alternatively, why do we need any sort of government regulation at all?


My employer, Public Knowledge, is releasing The Case for the Digital Platform Act free, under the Creative Commons Attribution-NonCommercial-ShareAlike license (v. 4.0) in partnership with the Roosevelt Institute. You can download the Foreword by Tom Wheeler here, the Executive Summary here, and the entire book here. Not since Jean Tirole’s Economics for the Common Good has there been such an amazing work of wonkdom to take to the beach for summer reading! Even better, it’s free — and we won’t collect your personal information unless you actively sign up for our mailing list!


Download the entire book here. You can also scroll down the page to links for just the executive summary (if you don’t want to print out all 216 pages) or just the Tom Wheeler foreword.


More, including spoilers!, below . . .

Continue reading

Tales of the Sausage Factory:
How Not To Train Your Agency, Or Why The FTC Is Toothless.

You know your agency is pathetic at its job when Tea Party Republicans tell you to go harder on industry — especially in a Republican Administration that makes deregulation an end in itself and where despising government interference in “the market” is religious orthodoxy. So it was quite noteworthy to see Freshman Senator Josh Hawley (R-MO) tear the Federal Trade Commission (FTC) a new one for its failure to do anything about how tech companies generally (and Google and Facebook specifically) vacuum up everyone’s personal information, crush competition, swear general allegiance to Gellert Grindelwald, and sell us out to the Kree. “The approach the FTC has taken to these issues has been toothless,” Hawley charged in his letter (apparently not meaning this adorable night fury over here).


I’m not going to argue with Senator Hawley’s characterization of the FTC. But since he is new in town, I think it is important for him to understand why the FTC (and other federal agencies charged with consumer protection) have generally gone from fearsome growling watchdogs to timorous toothless purse dogs with laryngitis. Short answer: Congress has spent the last 40 years training agencies not to do their jobs and to leave big industry players with political pull alone by abusing them at hearings, cutting their budgets, and — when necessary — passing laws to eliminate or massively restrict whatever authority the agency just exercised. Put another way, Congress has basically spent the last 40 years conditioning consumer protection agencies to think about enforcement in much the same way Alex DeLarge was conditioned to think about violence in A Clockwork Orange: keep applying negative stimulus until the very thought of trying to enforce the law against any powerful company in any meaningful way makes them positively ill.


I explain all this, and the problem with “public choice theory” as applied here in Policyland, below . . .


Continue reading

Tales of the Sausage Factory:
Will The FCC Ignore the Privacy Implications of Enhanced Geolocation In New E911 Rulemaking?

NB: This originally appeared as a blog post on the site of my employer, Public Knowledge.

Over the last three months, Motherboard’s Joseph Cox has produced an excellent series of articles on how the major mobile carriers have sold sensitive geolocation data to bounty hunters and others, including highly precise information designed for use with “Enhanced 911” (E911). As we pointed out last month when this news came to light, turning this E911 data (called assisted GPS, or A-GPS) over to third parties — whether by accident or intentionally — or using it in any way except for 911 or other purposes required by law, violates the rules the Federal Communications Commission adopted in 2015 to protect E911 data.

Just last week, Motherboard ran a new story on how stalkers, bill collectors, and anyone else who wants highly precise, real-time consumer geolocation data from carriers can usually scam it out of them by pretending to be police officers. Carriers have been required to take precautions against this kind of “pretexting” since 2007. Nevertheless, according to people interviewed in the article, this tactic of pretending to be a police officer is extremely common and ridiculously easy because, according to one source, “Telcos have been very stupid about it. They have not done due diligence.”

So you would think, with the FCC scheduled to vote this Friday on a mandate to make E911 geolocation even more precise, the FCC would (a) remind carriers that this information is super sensitive and subject to protections above and beyond the FCC’s usual privacy rules for phone information (called “customer proprietary network information,” or “CPNI”); (b) make it clear that the new information required will be covered by the rules adopted in the 2015 E911 Order; and (c) maybe even, in light of these ongoing revelations that carriers do not seem to be taking their privacy obligations seriously, solicit comment on how to improve privacy protections to prevent these kinds of problems from occurring in the future. But of course, as the phrase “you would think” indicates, the FCC’s draft Further Notice of Proposed Rulemaking (FNPRM) does none of these things. The draft doesn’t even mention privacy once.


I explain why this has real, and potentially very bad, implications for privacy below.

Continue reading

Tales of the Sausage Factory:
What Makes Elizabeth Warren’s Platform Proposal So Potentially Important.

As always when I talk politics, I remind folks that this blog is my personal blog, which I had well before I joined my current employer Public Knowledge. I’ve been commenting on Presidential campaigns since well before I joined PK, and I don’t run any of this stuff in front of my employer before I publish it.


On Friday, March 8, the Presidential campaign of Elizabeth Warren (not to be confused with the actual office of Senator Elizabeth Warren (D-MA)) announced Warren’s plan for addressing the tech giants. Warren has been drawing attention to massive concentration in industry generally, and in tech specifically, since well before it was cool, so the fact that she is out of the gate with a major proposal this early in the 2020 campaign is no surprise. Nor is it a surprise that her proposed plan would end up breaking up, in some significant ways, the largest tech platforms.


What makes Warren’s contribution a potential game changer is that she goes well beyond the standard “break ’em up” rhetoric that has dominated most of the conversation to date. Warren’s proposal addresses numerous key weaknesses I have previously pointed out in relying exclusively on antitrust, and it is the first significant effort to propose a plan for permanent, sustainable, sector-specific regulation. As my boss at Public Knowledge, Gene Kimmelman, has observed here (and as I’ve spent many tens of thousands of words explaining), antitrust alone won’t handle the problem of digital platforms and how they impact our lives. For that we need sector-specific regulation.


Warren is the first major Presidential candidate to advance a real proposal that goes beyond antitrust. As Warren herself observes, this proposal is just a first step toward tackling one of the most serious problems that has emerged in the digital platform space: the control that a handful of giant platforms exercises over digital commerce. But Warren’s proposal is already smart in a number of important ways that have the potential to trigger the debate we need to have if we hope to develop regulation that will actually work to promote competition and curb consumer abuses.


I break these out below . . . .

Continue reading

Tales of the Sausage Factory:
The Market for Privacy Lemons. Why “The Market” Can’t Solve The Privacy Problem Without Regulation.

Practically every week, it seems, we get some new revelation about the mishandling of user information that makes people very upset. Indeed, people have become so upset that they are actually talking about, dare we say it, “legislating” some new privacy protections. And no, I don’t mean “codifying existing crap while preempting the states.” For those interested, I have a whitepaper outlining principles for moving forward on effective privacy legislation (which you can read here). My colleagues at my employer Public Knowledge have a few blog posts on how Congress ought to respond to the whole Facebook/Cambridge Analytica thing and on some of the privacy bills introduced this Congress.


Unsurprisingly, we still have folks who insist that we don’t need any regulation, and that if the market doesn’t provide people with privacy protection, it must be because people don’t value privacy protection. After all, the argument goes, if people valued privacy, companies would offer services that protect it. So if we don’t see such services in the market, people must not want them. Q.E.D. Indeed, these folks will argue, we find that — at least for some services — there are privacy-friendly alternatives. Often these cost money, since you aren’t paying with your personal information. This leads some to argue that it’s simply that people like “free stuff.” As a result, the current Administration continues to focus on finding “market based solutions” rather than figuring out what regulations would actually give people greater control over their personal information and prevent the worst abuses.


But an increasing number of people are wising up to the reality that this isn’t the case. What folks lack is a vocabulary to explain why these “market approaches” don’t work. Fortunately, a Nobel Prize-winning economist named George Akerlof figured this out back in 1970 in a paper called “The Market for Lemons.” Akerlof’s later work on cognitive dissonance in economics is also relevant and valuable. (You can read what amounts to a high-level book report on Akerlof & Dickens’ “The Economic Consequences of Cognitive Dissonance” here.) To summarize: everyone knows that they can’t do anything real to protect their privacy, so they either admit defeat and resent it, or lie to themselves that they don’t care. A few believe they can protect themselves via some combination of services and avoidance I will call the “magic privacy dance,” and therefore blame everyone else for not caring enough to do their own magic privacy dance. This ignores that (a) the magic privacy dance requires specialized knowledge; (b) the magic privacy dance imposes lots of costs, ranging from a monthly subscription to a virtual private network (VPN), to the opportunity cost of forgoing services like Facebook, to the fact that Amazon and Google are so embedded in the structure of the internet at this point that blocking them literally causes large parts of the internet to become inaccessible or slow to the point of uselessness; and (c) nothing helps anyway! No matter how careful you are, a data breach at a company like Equifax, or a decision by a company you invested in to change its policy, means all your magic privacy dancing amounted to a total, expensive waste of time.
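
To see how the lemons dynamic plays out for privacy, here is a minimal sketch: a toy simulation with made-up numbers (not anything from Akerlof’s paper) in which buyers cannot verify a service’s actual privacy practices, so they will only pay for the average quality on offer, which drives the high-quality, high-cost services out first.

```python
# Toy "market for privacy lemons." All numbers are illustrative assumptions.

def simulate(rounds: int = 6) -> None:
    # Ten services with privacy quality q in [0, 1]. Respecting privacy is
    # costly (no data monetization), so operating cost rises with quality.
    services = [(10 + 40 * (i / 9), i / 9) for i in range(10)]  # (cost, q)
    for r in range(1, rounds + 1):
        if not services:
            print(f"round {r}: market unraveled -- no services left")
            return
        # Buyers can't observe quality before buying, so they pay for the
        # AVERAGE quality of whatever remains (illustrative valuation: 60*q).
        avg_q = sum(q for _, q in services) / len(services)
        price = 60 * avg_q
        # Any service whose operating cost exceeds the uniform price exits,
        # and the exits concentrate among high-privacy (high-cost) services.
        services = [(c, q) for c, q in services if c <= price]
        print(f"round {r}: avg quality={avg_q:.2f}, price={price:5.1f}, "
              f"services remaining={len(services)}")

simulate()
```

Each round, the inability to verify quality prices the best remaining services out of the market, which drags the average (and thus the price) down further, until little or nothing is left. That unraveling is exactly why, as argued above, rational consumers stop trusting “market alternatives” that promise privacy.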


Accordingly, the rational consumer gives up. Unless you are willing to become a hermit, “go off the grid,” pay cash for everything, and do other stuff limited to retired spies in movies, you simply cannot realistically expect to protect your privacy in any meaningful way. Hence, as predicted by Akerlof, rational consumers don’t trust “market alternatives” promising to protect privacy. Heck, thanks to Congress repealing the FCC’s privacy rules in 2017, you can’t even get on to the internet without exposing your personal information to your broadband provider. Even the happy VPN dance won’t protect all your information from leaking out. So if you are screwed from the moment you go online, why bother to try at all?


I explore this more fully below . . .

Continue reading