Gonzales v. Google Validates My Theory of Legislative Drafting — Be Really, Really Detailed and Longwinded.

Every now and then, I do some legislative drafting. I tend to get pushback on my habit of including a bunch of legislative findings and statements of policy, and what some consider my over-detailed definitions. The usual challenge I get is “everyone knows what we’re talking about.” My response: “I’m not writing for us. I’m writing for some judge 25 years from now with no idea what we’re talking about or trying to do.”

 

Which brings me to Gonzales v. Google, the Supreme Court case in which the Justices will take their first shot at interpreting Section 230 of the Communications Act. Distill down the thousands of pages of briefs, brush away the policy arguments, and it all boils down to one question: What did Congress actually mean when it said don’t treat online services as the “publisher or speaker” of third-party content? Does it mean (a) the plain English ‘don’t treat the provider of the online service as if that provider actually said the thing’ – so you can’t sue a provider of an “interactive computer service” (to use the actual statutory term) for anything relating to third-party content; or (b) does it mean ‘this section provides only protection from liability as a ‘publisher’ under the common law’ – but feel free to impose liability as a common law distributor of third-party content (or possibly for any other kind of liability outside the rather narrow common law universe of defamation)?

 

Because this question comes up a lot, and because I expect lots of folks to follow the Gonzales case, I decided to run through the type of analysis courts typically engage in when trying to interpret what Congress meant, and why courts can come up with wildly divergent explanations.

 

Yes, policy issues and outcome determination matter. But good judges at least try to figure these things out, and even bad judges (by which I mean those determined to reach a specific outcome no matter what the statute actually says) need to couch their opinions in the form of legislative analysis. This is why lawyers and scholars spend so much time on the subject.

 

So if you want to understand how this game works well enough to follow the arguments in Gonzales v. Google, see below. Along the way, I’ll highlight how W. Va. v. EPA may complicate things with its stupid ‘let’s look at what Congress didn’t pass’ analysis.

 

Continue reading

Yes, Facebook Wants a Digital Regulator. It’s Still A Good Idea.

This originally appeared on the blog of my employer, Public Knowledge.

Frances Haugen, the (hopefully first of many) Facebook whistleblower, made one thing abundantly clear this week in both her 60 Minutes interview and her Senate Hearing: The United States needs a specialized agency to oversee digital platforms. Antitrust enforcement alone is not enough. Breaking up Facebook would solve some problems, but without additional oversight it will also produce a bunch of smaller companies all running algorithms that maximize engagement regardless of the harm to society (something I have called the “Starfish Problem” — tear up a starfish and the pieces regenerate into lots of smaller starfish). Companies, Haugen warned, “will always put profits over people.” Haugen further emphasized that effectively regulating Facebook (and other digital platforms) requires specialized expertise about the sector. “Right now, the only people in the world trained to analyze these experiences are people who grew up inside of Facebook,” Haugen said. We don’t just need new laws, or to expand the Federal Trade Commission. As Haugen stressed multiple times, we need a specialized, sector-specific regulator to do the job right.

Back in May, Facebook V.P. of Global Public Affairs Nick Clegg wrote an op-ed also calling for the creation of a digital regulator. “Finally,” writes Clegg, “the U.S. could create a new digital regulator. Not only would a new regulator be able to navigate the competing trade-offs in the digital space, it would be able to join the dots between issues like content, data, and economic impact — much like the Federal Communications Commission has successfully exercised regulatory oversight over telecoms and media.”

How do these two diametrically opposed people arrive at the same recommendation? Does the fact that Facebook also says it wants a regulator automatically make it a bad idea? Given that Public Knowledge has repeatedly pushed for a sector-specific regulator since 2018, we obviously don’t think so. But if a sector-specific regulator is the right answer, why is Facebook also pushing for a digital regulator?

Continue reading

Ohio Lawsuit to Declare Google a Common Carrier Not Obviously Stupid – But No Sure Deal Either.

Yesterday, the Ohio Attorney General filed a lawsuit asking an Ohio state court to declare Google a common carrier and/or public utility under the laws of Ohio and Ohio common law. (News release here; complaint here.) Here’s my hot take just from reading the complaint and with zero Ohio law research: It’s novel, and not obviously stupid. But it has some real obstacles to overcome.

 

I stress this because I expect most people will find this so mind-boggling that they will be tempted to write it off. Don’t. It’s a novel application of traditional common carrier law, but that is how law evolves.

 

That said, I don’t think it’s a winner. But I would need to do some serious research on how Ohio common law has dealt with the key elements of common carriage, embodied in Ohio’s statute as serving the public “reasonably and indiscriminately.” Keep in mind I’m not saying that I think this is necessarily the right policy. Indeed, my colleague John Bergmayer at Public Knowledge has explained why treating digital platforms as common carriers could be a very bad idea.

 

A brief explanation of all this below . . . .

Continue reading

Does the Amazon “Drone Cam” Violate the FCC’s Anti-Eavesdropping Rule? And If It Does, So What?

Folks may have heard about the new Amazon prototype, the Ring Always Home Cam. Scheduled for release in early 2021, the “Drone Cam” will fly a set pattern around your house to allow you to check on things when you are away. As you might imagine, given the history of Amazon’s Alexa recording things without permission, the announcement generated plenty of pushback among privacy advocates. But what attracted my attention was this addendum at the bottom of the Amazon blog post:

“As with other devices at this stage of development, Ring Always Home Cam has not been authorized as required by the rules of the Federal Communications Commission. Ring Always Home Cam is not, and may not be, offered for sale or lease or sold or leased, until authorization is obtained.”

 

A number of folks asked me why this device needs FCC authorization. In general, any device that emits radio-frequency radiation as part of its operation requires certification under 47 U.S.C. 302a and Part 15 of the FCC’s rules (47 C.F.R. 15.1, et seq.). In addition, devices that incorporate unlicensed spectrum capability (e.g., Wi-Fi or Bluetooth) need certification from the FCC to show that they do not exceed the relevant power levels or rules of operation. So mystery easily solved. But this prompted me to ask the following question: Does the proposed Amazon “Drone Cam” violate the FCC’s rule against using electronic wireless devices to record or listen to conversations without consent?

 

As I discuss below, this would (to my knowledge) be a novel application of 47 C.F.R. 15.9. It’s hardly a slam dunk, especially with an FCC that thinks it has no business enforcing privacy rules. But we have an actual privacy law on the books, and as the history of the rule shows, the FCC intended it to prevent the erosion of personal privacy in the face of rapidly developing technology — just like this. If you are wondering why this hasn’t mattered until now, I will observe that — to the best of my knowledge — this is the only such device that relies exclusively on wireless technology. The rule applies to the use of wireless devices, not to all devices certified under the authority of Section 302a* (which did not exist until 1982).

 

I unpack this, and how the anti-eavesdropping rule might impact the certification or operation of home drone cams and similar wireless devices, below . . .

 

*technically, although codified at 47 USC 302a, the actual Section number in the Comms Act is Section 302. Long story not worth getting into here. But I will use 302a for consistency’s sake.

Continue reading

An Ounce of Preventive Regulation is Worth a Pound of Antitrust: A Proposal for Platform CPNI.

A substantially similar version of this blog was published on the blog of my employer, Public Knowledge.

 

Last year, Public Knowledge and the Roosevelt Institute published my book, The Case for the Digital Platform Act. I argued there that we could define digital platforms as a distinct sector of the economy, and that the structure of these businesses and the nature of the sector combine to encourage behaviors that create challenges for existing antitrust enforcement. In the absence of new laws and policies, the digital platform sector gives rise to “tipping points” where a single platform or small oligopoly of platforms can exercise control over a highly lucrative, difficult-to-replicate set of online businesses. For example, despite starting as an online bookseller with almost no customers in 1994, Amazon has grown into an e-commerce behemoth controlling approximately 40% of all online sales in the United States and enjoying a market capitalization of $1.52 trillion. Google has grown from a scrappy little search engine in 1998 to dominate online search and online advertising — as well as creating the most popular mobile operating system (Android) and web browser (Chrome).

 

Today, Public Knowledge released my new paper on digital platform regulation: Mind Your Own Business: Protecting Proprietary Third-Party Information from Digital Platforms. Briefly, this paper provides a solution to a specific competition problem that keeps coming up in the digital platform space: the continuing accusations against Amazon, Google, and other digital platforms that connect third-party vendors with customers that these platforms appropriate proprietary data (such as sales information, customer demographics, or whether the vendor uses associated affiliate services such as Google Ads or Amazon Fulfillment Centers) and use this data, collected for one purpose, to privilege themselves at the expense of the vendor.

 

While I’ve blogged about this problem previously, the new paper provides a detailed analysis of the problem, why the market will not find a solution without policy intervention, and a model statute to solve the problem. Congress has only to pass the draft statute in the paper’s Appendix to take a significant step forward in promoting competition in the digital marketplace. For the benefit of folks just tuning in, here is a brief refresher and summary of the new material.

 

A side note. One of the things I’ve done in the paper and draft statute in Appendix A (Feld’s First Principle of Advocacy: Always make it as easy as possible for people to do what you want them to do) is to actually define, in statutory terms, a “digital platform.” Whatever happens with this specific regulatory proposal, this definition is something I hope people will pick up and recycle. One of the challenges of regulating a specific sector is actually defining the sector. Most legislative efforts, however, think primarily in terms of “Google, Facebook, Amazon, maybe Apple, and whoever else.” But the digital platform sector includes not just the biggest providers but the smallest and everything in between. With all due respect to Justice Potter Stewart, you can’t write legislation that defines the sorts of actors covered by the legislation as “I know it when I see it.”

 

More below . . .

 

Continue reading

Breaking Down and Taking Down Trump’s Executive Order Spanking Social Media.

(A substantially similar version of this appeared first on the blog of my employer, Public Knowledge)

It’s hard to believe Trump issued this stupid Executive Order a mere week ago. Even by the standards of insanity known as the Trump Administration, the last week has reached heights of insanity that make a full frontal assault on the First Amendment with anything less than tear gas and tanks seem trivial. Nevertheless, given the vital role social media have played in publicizing the murders of George Floyd, Ahmaud Arbery, and too many others, in allowing police brutality against peaceful protesters to be broadcast live around the world from countless locations, and in allowing organizers to coordinate with one another, we need to remember how vitally important it is to protect these means of communication from being cowed and coopted by the President and others with power. At the same time, the way others have used social media to spread misinformation and promote violence highlights that we have very real problems of content moderation we need to address.

 

In both cases, Trump’s naked effort to use his authority to threaten social media companies so they will dance to his tune undermines everything good about social media while doing nothing to address any of its serious problems. So even though (as I have written previously) I don’t think the FCC has the authority to do what Trump wants (and as I write below, I don’t think the FTC does either), that doesn’t make this Executive Order (EO) something harmless we can ignore. Below, I explain what the EO basically instructs federal agencies to do, what happens next, and what people can do about it.

 

More below . . . .

Continue reading

A Slew of Minor Corrections On My Political Advertising Post From the Dean of Public Interest Telecom.

There is an expression that gets used in the Talmud to praise one’s teacher that goes: “My Rabbi is like wine and I am like vinegar,” whereupon the Rabbi actually doing the talking quotes some superior wisdom from his teacher.

 

When it comes to FCC rules governing political advertising, Andrew Jay Schwartzman is like wine and I am like vinegar. Andy knows this stuff backward and forward. So after my recent blog post on Facebook political advertising, Andy sent me a very nice note generally complimenting me on my blog post (always appreciated), but pointing out a bunch of things I either got wrong or could have said more clearly. As Andy observed in his email to me, they don’t actually impact the substance. But in the spirit of transparency, admitting error, and generally preventing the spread of misinformation, I am going to list them out here (a la Emily Ruins Adam Ruins Everything) and correct them in the actual post.

 

List of my goofs below . . . .

Continue reading

Political Advertising In Crisis: What We Should Learn From the Warren/Facebook Ad Flap.

[This is largely a reprint from a blog post originally posted on the Public Knowledge blog.]

The last week or so has highlighted the complete inadequacy of our political advertising rules in an era when even the President of the United States has no hesitation in blasting the world with unproven conspiracy theories about political rivals using both traditional broadcast media and social media. We cannot ignore the urgency of this for maintaining fair and legitimate elections, even if we realistically cannot hope for Congress to address this in a meaningful way any time soon.

 

To recap for those who have not followed closely, President Trump has run an advertisement repeating a debunked conspiracy theory about former Vice President Joe Biden (a current frontrunner in the Democratic presidential primary). Some cable programming networks such as CNN and those owned by NBCU have refused to run the advertisement. The largest social media platforms — Facebook, Google, and Twitter — have run the advertisement, as have local broadcast stations, despite requests from the Biden campaign to remove the ads as violating platform policies against running advertising known to contain false or misleading information. The social media platforms refused to drop the ads. Facebook provided further information that it does not submit direct statements by politicians to fact checkers because it considers them “direct speech.”

 

Elizabeth Warren responded first with harsh criticism for Facebook, then with an advertisement of her own falsely stating that Zuckerberg had endorsed President Trump. Facebook responded that the Trump advertisement has run “on broadcast stations nearly 1,000 times as required by law,” and that Facebook agreed with the Federal Communications Commission that “it’s better to let voters — not companies — decide.” Elizabeth Warren responded with her own tweet that Facebook was “proving her point” that it was Facebook’s choice “whether [to] take money to promote lies. You can be in the disinformation-for-profit business or hold yourself to some standards.”

 

Quite a week, with quite a lot to unpack here. To summarize briefly, the Communications Act (not just the FCC) does indeed require broadcast stations that accept advertising from political candidates to run the advertisement “without censorship.” (47 U.S.C. §315(a).) While the law does not apply to social media (or to programming networks like NBCU or CNN), the underlying principle behind the law is that we want to balance the ability of platforms to control their content against preventing platforms from selectively siding with one political candidate over another, while still allowing candidates to take their case directly to the people. But, at least in theory, broadcasters also have other restrictions that social media platforms don’t have (such as a limit on the size of their audience reach), which makes social media platforms more like content networks with greater freedom to apply editorial standards. But actual broadcast licensees — the local stations that serve the viewing or listening area — effectively become “common carriers” for all “qualified candidates for public office,” and must sell all candidates the opportunity to speak directly to the audience and charge all candidates the same rate.

 

All of this begs the real question, applicable to both traditional media and social media: How do we balance the power of these platforms to shape public opinion, the desire to let candidates make their case directly to the people, and the need to safeguard our ability to govern ourselves? Broadcast media remain powerful shapers of public opinion, but they clearly work in a very different way from social media. We need to honor the fundamental values at stake across all media, while tailoring the specific regulations to the specific media.

 

Until Congress gets off its butt and actually passes some laws, we end up with two choices. Either we are totally cool with giant corporations making the decision about which political candidates get heard and whether what they have to say is sufficiently supported and mainstream and inoffensive to get access to the public via social media, or we are totally cool with letting candidates turn social media into giant disinformation machines pushing propaganda and outright lies to the most susceptible audiences targeted by the most sophisticated placement algorithms available. It would be nice to imagine that there is some magic way out of this which doesn’t involve doing the hard work of reaching a consensus via our elected representatives on how to balance competing concerns, but there isn’t. There is no magic third option by which platforms acting “responsibly” somehow substitutes for an actual law. Either we make the choice via our democratic process, or we abdicate the choice to a handful of giant platforms run by a handful of super-rich individuals. So perhaps we could spend less time shaming big companies and more time shaming our members of Congress into actually doing their freaking jobs!!

 

(OK, spend more time doing both. Just stop thinking that yelling at Facebook is gonna magically solve anything.)

I unpack this below . . .

Continue reading

Can Trump Really Have The FCC Regulate Social Media? So No.

Last week, Politico reported that the White House was considering a potential “Executive Order” (EO) to address the ongoing-yet-unproven allegations of pro-liberal, anti-conservative bias by giant Silicon Valley companies such as Facebook, Twitter, and Google. (To the extent that there is rigorous research by AI experts, it shows that social media sites are more likely to flag posts by self-identified African Americans as “hate speech” than identical wording used by whites.) Subsequent reports by CNN and The Verge have provided more detail. Putting the two together, it appears that the Executive Order would require the Federal Communications Commission to create rules limiting the ability of digital platforms to “remove or suppress content,” as well as to prohibit “anticompetitive, unfair or deceptive” practices around content moderation. The EO would also require the Federal Trade Commission to somehow open a docket and take complaints (something it does not, at present, do, or have capacity to do – but I will save that hobby horse for another time) about supposed political bias.

 

(I really don’t expect I have to explain why this sort of ham-handed effort at political interference in the free flow of ideas and information is a BAD IDEA. For one thing, I’ve covered this fairly extensively in chapters five and six of my book, The Case for the Digital Platform Act. Also, Chris Lewis, President of my employer Public Knowledge, explained this at length in our press release in response to the reports that surfaced last week. But for those who still don’t get it: giving an administration that regards abuse of power for political purposes as a legitimate tool of governance the power to harass important platforms for the exchange of views and information unless they promote its political allies and suppress its critics is something of a worst case scenario for the First Amendment and democracy generally. Even the most intrusive government intervention/supervision of speech in electronic media, such as the Fairness Doctrine, had built-in safeguards to insulate the process from political manipulation. Nor are we talking about imposing common carrier-like regulations that remove the government entirely from influencing who gets to use the platform. According to what we have seen so far, we are talking about direct efforts by the government to pick winners and losers — the opposite of net neutrality. That’s not to say that viewpoint-based discrimination on speech platforms can’t be a problem — it’s just that, if it’s a problem, it’s better dealt with through the traditional tools of media policy, such as ownership caps and limits on the size of any one platform, or by using antitrust or regulation to create a more competitive marketplace with fewer bottlenecks.)

 

I have a number of reasons why I don’t think this EO will ever actually go out. For one thing, it would completely contradict everything the FCC said in the “Restoring Internet Freedom Order” (RIFO) repealing net neutrality. As a result, the FCC would either have to reverse its previous findings that Section 230 prohibits any government regulation of internet services (including ISPs), or see the regulations struck down as arbitrary and capricious. Even if the FCC tried to somehow reconcile the two, Section 230 applies to ISPs as well. Any “neutrality” rule that applies to Facebook, Google, and Twitter would also apply to AT&T, Verizon, and Comcast.

 

But this niggles at my mind enough to ask a good old law school hypothetical. If Trump really did issue an EO similar to the one described, what could the FCC actually do under existing law?

Continue reading

Information Fiduciaries: Good Framework, Bad Solution.

By and large, human beings reason by analogy. We learn a basic rule, usually from a specific experience, and then generalize it to any new experience we encounter that seems similar. Even in the relatively abstract area of policy, human beings depend on reasoning by analogy. As a result, when looking at various social problems, the first thing many people do is ask “what is this like?” The answer we collectively come up with then tends to drive the way we approach the problem and what solutions we think address it. Consider the differences in policy, for example, between thinking of spectrum as a “public resource” v. “private property” v. “public commons” — although none of these actually describes what happens when we send a message via radio transmission.

 

As with all human things, this is neither good nor bad in itself. But it does mean that bad analogies drive really bad policy outcomes. By contrast, good analogies and good intellectual frameworks often lead to much better policy results. Nevertheless, most people in policy tend to ignore the impact of our policy frameworks. Indeed, those who mistake cynicism for wisdom have a tendency to dismiss these intellectual frameworks as mere post hoc rationalizations for foregone conclusions. And, in fact, sometimes they are. But even in these cases, the analogies still end up subtly influencing how the policies get developed and implemented. Because law and policy get implemented by human beings, and human beings think in terms of frameworks and analogies.

 

I like to think of these frameworks and analogies as “deep structures” of the law. Like the way the features of geography impact the formation and course of rivers over time, the way we think about law and policy shapes how it flows in the real world. You can bulldoze through it, forcibly change it, or otherwise ignore these deep structures, but they continue to exert influence over time.

 

Case in point: the idea that personal information is “property.” I will confess to using this as a shorthand myself since 2016, when I started on the ISP privacy proceeding. In my 2017 white paper on privacy legislative principles, I traced the evolution of this analogy from Brandeis to the modern day, similar to other intangibles such as the ‘right of publicity.’ But as I also tried to explain, this was not meant as actual, real property but as shorthand for the idea of a general, continuing interest. Unfortunately, as my Public Knowledge colleague Dylan Gilbert explains here, too many people have now taken this framework as meaning ‘treat personal information like physical property that can be bought and sold and exclusively owned.’ This leads to lots of problems and bad policies, since (as Dylan explains) data is not actually like physical property or even other forms of intangible property.

 

Which brings me to Professor Jack Balkin of Yale Law School and his “information fiduciaries” theory. (Professor Balkin has co-written pieces about this with several different co-authors, but it’s generally regarded as his theory.) Briefly (since I get into a bit more detail with links below), Balkin proposes that judges can (and should) recognize that the nature of the relationship between companies that collect personal information in exchange for services is similar to professional relationships such as doctor-patient or lawyer-client where the law imposes limitations on your ability to use the information you collect over the course of the relationship.

 

This theory has become popular in recent years as a possible way to move forward on privacy. As with all theories that become popular, Balkin’s information fiduciary theory has started to get some skeptical feedback. The Law and Political Economy blog held a symposium for information fiduciary skeptics and invited me to submit an article. As usual, my first draft ended up being twice as long as what they wanted. So I am now running the full length version below.

 

You can find the version they published here. You can find the rest of the articles from the symposium here. Briefly, I think relying on information fiduciaries for privacy doesn’t do nearly enough, and has no advantage over passing strong privacy legislation at the state and federal levels. OTOH, I do think the idea of a fiduciary relationship between the companies that collect and use personal information and the individuals whose information gets collected provides a good framework for how to think about the relationship between the parties, and therefore what sort of legal rights should govern that relationship.

 

More below . . .

Continue reading