Why Platform Regulation Is Both Necessary and Hard.

This is the first post in a series on regulating digital platforms.

As digital platforms have become increasingly important in our everyday lives, the need for some sort of regulatory oversight has grown with them. In the past, we’ve talked about this in the context of privacy and of what general due process rights dominant platforms owe their customers. Today, we want to make clear that we have reached the point where we need sector-specific regulation focused on online digital platforms, not just the application of existing antitrust or consumer protection laws. When platforms have become so central to our lives that a single algorithm change can crash third-party businesses, when social media matters so much that entire businesses exist to pump up your follower numbers, and when a multi-billion-dollar industry exists for the sole purpose of helping businesses game search engine rankings, lawmakers need to stop talking hopefully about self-regulation and start putting in place enforceable rights to protect the public interest.

That said, we need to recognize at the outset that a lot of things make it rather challenging to figure out what kind of regulation actually makes sense in this space. Although Ecclesiastes assures us “there is nothing new under the sun,” digital platforms combine issues we’ve dealt with in electronic media (and elsewhere) in novel ways that make applying traditional solutions tricky. Before diving into the solution, therefore, we need to (a) define the problem, and (b) decide what kind of outcome we want to see.

As Jean Tirole, the economist whose Nobel Prize-winning work includes defining two-sided markets, pointed out in this interview, unless you know what you are trying to accomplish, you can’t really know whether your remedy addresses your concerns. Breaking up Facebook won’t solve the privacy problem, for example. Nor is it clear how you could prevent “baby Facebooks” or “baby Googles” from simply re-establishing their market dominance without a clear understanding of the mechanisms by which these platforms work. When we broke up AT&T, we could easily define the essential facility to be regulated (local networks) and separate out the market segments where we could have competition (e.g., long distance, “electronic publishing,” equipment manufacture). If Google’s big advantage is “search,” how exactly do you break that up? What is Facebook’s market, anyway?

It’s not that these questions don’t have answers. They do. But the big problem in Policyland is that people know what they don’t like and try to get rid of that one piece. This usually works about as well as Canute ordering back the tide. So before talking about solutions, or leaping into a debate over whether the Federal Communications Commission, the Federal Trade Commission, or some hypothetical new agency should have jurisdiction, let me run through some of the factors that we need to navigate.

What Exactly Is a “Digital Platform” Anyway? How Are They Different From Anything Else?

When we do sector-specific regulation (like telecommunications), we at least have some idea of what we are talking about, even if it is pretty broad. Section 1 of the Communications Act establishes the FCC to regulate “interstate and foreign commerce in communication by wire or radio.” Right away, I know I’m talking about the business of communication via electronic means. I’ve excluded heliographs and letters, and included a broad array of things from AM radio to (until recently) broadband. These wildly divergent technologies all have one essential element in common — they deal with the critical human activity of communications. Likewise, the Food, Drug, and Cosmetic Act empowering the Food and Drug Administration may cover an awful lot of territory, but I can define fairly easily what is a food, what is a drug, or what is a cosmetic. Yes, there will always be fun edge cases (e.g., are cigarettes a drug delivery system?), but for the most part we have a pretty good idea what we mean.

Now we come to digital platforms. Generally, people know what they definitely want covered: Google, Facebook, Amazon, and maybe Twitter. What about Cloudflare? Reddit? Netflix? That stupid app that only said “Yo”? Did Yo become a platform once it expanded to let you attach links and things? Why or why not?

It’s not enough to say “Google, we hates it precious!” We need to articulate exactly what it is we are trying to cover. Which brings us to the next problem.

A Digital Platform Is Like an Elephant, Which Is Like a Snake, or a Rope, or Something.

Intertwined with the question of what makes a digital platform is figuring out what these platforms do. When Lindsey Graham and Mark Zuckerberg sparred over whether Facebook had competitors, they each had a point. Zuckerberg argued that what Facebook does overlaps with a lot of different companies, but Graham pointed out that Facebook is unique in combining a whole bunch of different functionalities in a single service. But the question goes deeper than market definitions. It goes to the goals we set for public policy.

Traditionally, we could neatly divide activities into lines of business and determine what sort of policies would most likely promote the common good. For example, in the Communications Act, we generally had one set of public interest obligations associated with telecommunications and a different set for media. Certainly, we had (and continue to have) some overlap. We broadly care about competition and public safety in both telecommunications and mass media, for example. But traditionally, we have focused telecommunications policy on our five fundamental values of universal service, competition, consumer protection, network reliability, and public safety.

By contrast, we have focused our media policy on promoting diverse sources of news and perspectives as critical to enabling our democratic system of government to function. We treat telecommunications as infrastructure and a public utility, spending billions of dollars to ensure that everyone in the country has affordable access. We have no policy of making sure that everyone has access to a cable or satellite provider — despite the important news and public safety content they provide.

These differences inform the kind of regulation we impose to further our public policy goals. We impose strict no-interference/common carriage requirements on telecommunications. No one demands that mobile phone providers monitor the calls of everyone using their networks to block hate speech. No one has argued that Comcast or AT&T should cancel the phone service of Nazis. In fact, we have laws in place precisely to prevent such things. In exchange, we immunize common carriers from liability for the content of their customers’ speech. Again, no one proposes that Verizon Wireless should be liable for sex traffickers, or that Sprint should ensure that Russians trying to manipulate our elections don’t send texts.

On the other hand, we explicitly prohibit treating broadcasters (or cable operators) as common carriers. But we make them liable for their editorial choices and require them to promote certain social policies, such as providing educational material to children (and protecting children from ‘indecent’ content). We require broadcasters and cable operators to disclose when programming material is sponsored. We prohibit them from selling advertising to one political party’s federal candidates while refusing to sell to another’s. And — at least until recently — we have sought to promote diversity of viewpoints by setting ownership limits well below those considered dangerous under antitrust law. To quote the departing Justice Kennedy: “Federal policy, however, has long favored preserving a multiplicity of broadcast outlets regardless of whether the conduct that threatens it is motivated by anticompetitive animus or rises to the level of an antitrust violation.”

Digital platforms, depending on how broadly we define them, share elements of both straight-up telecommunications and mass media — as well as qualities found in neither. These platforms often combine the one-to-one aspect of traditional telecommunications with the potentially vast reach of mass media. Even the largest conference call Public Knowledge could host is trivial compared to the number of people who could theoretically access this blog post (insert joke about our blog being one-to-one because we have so few readers here). But digital platforms add a new element to the mix by giving me access to other content through linking. Platforms may enable organizing — for positive or negative purposes — in ways that neither traditional telecommunications nor traditional media made possible.

But it gets even more complicated when we consider the vast array of other functions performed by online platforms that we instinctively group together. Is Amazon a retailer? A shopping mall for third-party vendors? All of the above? Video sharing sites and other platforms for exchanging content look more like public storage cubes than broadcasters, in that they are simply a repository for someone else’s stuff. But we increasingly relate to them in the same way we have related to traditional mass media. Sometimes. But other times not.

Balancing Policy Objectives Makes for Messy Trade-Offs.

Finally, we need to recognize that regulating platforms ultimately means a bunch of trade-offs. Everyone hates this. Everyone loves to talk about policy options as if their own proposal is a stairway to heaven and all other options are handcarts heading down the road paved with good intentions. Everyone wants to talk about this as “curbing greedy corporations” or “protecting innovation and free expression from ravening Socialists.” And, to be fair, sometimes the answers are pretty obvious. We can all agree that free speech survives just fine under laws that prevent false or deceptive advertising. But most of the time, we are talking about balancing trade-offs and looking to maximize the probability of good results while minimizing the possibility of bad ones.

To take just one obvious example, it is impossible to have social media platforms operate as common carriers while simultaneously policing their networks for hate speech. But that doesn’t mean our choices are binary. Somewhere between blocking a quote from the Declaration of Independence and helplessly standing by while hate groups organize online harassment campaigns lies some trade-off that protects most (but invariably not all) controversial speech while simultaneously preventing most (but not all) online harassment.

But then we have trade-offs that are more economic or technical in nature. Take the question of “search neutrality.” The entire point of a search engine is to help organize things in useful ways. A search engine can’t be “neutral” in the same way a broadband network can be “neutral” because I don’t need my broadband provider to recommend websites or applications. But my typing “manage student loans without sobbing hysterically” into a search engine requires it to recommend websites or applications relevant to my request — in fact, presenting this information is exactly what I’m asking the search engine to do.

At the same time, however, we can recognize how control over internet search — whether through the design of search algorithms or by otherwise favoring affiliated content, burying rival content, or suppressing unpopular speech — has enormous implications for competition, as well as for other social policies. When we expand “search” to mean any sort of ordering and recommendation, such as how Facebook presents things in my timeline or how Amazon recommends products, we discover a new set of problems. Setting aside things we obviously want to disallow, such as secret experiments to manipulate our emotions, the very personalization that makes a recommendation engine effective can have negative social consequences. For example, should YouTube suggest related videos based on its algorithm when those videos lead to increasing radicalization? Should Facebook continue to show related news items, despite the fact that this reinforces the “bubble effect” that many say is fragmenting our society? Should we make systems more annoying and less efficient in order to prevent addiction by design?

Hard but Necessary.

This is the point where the industry lobby and those ideologically opposed to regulation usually leap in and start talking about “regulatory humility” and “unintended consequences” and “first do no harm.” The problem is, to add to the cliché storm, “refusal to act is an action.” We are living in a world rapidly devolving into a set of highly concentrated digital platforms around which major aspects of our economy and our lives revolve. As Cloudflare CEO Matthew Prince eloquently put it after terminating service to the neo-Nazi publication The Daily Stormer: “In a not-so-distant future, if we’re not there already, it may be that if you’re going to put content on the internet you’ll need to use a company with a giant network like Cloudflare, Google, Microsoft, Facebook, Amazon, or Alibaba.” Or, somewhat more directly: “Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the internet. No one should have that power.”

Prince was talking specifically about policing speech, but the same is true of competition and consumer protection. No company should have the power to determine which business models are acceptable and which ones to block as potential competition. People should have confidence that the protection of their privacy does not depend on the whims and best efforts of CEOs. Nor is this simply a question of size and market dominance. While the conversation until now has largely focused on the largest platforms, and while there are certainly concerns that apply only to dominant platforms, one of the critical aspects of sector-specific regulation is to identify when a public policy concern needs to apply to all providers regardless of size. For example, Reddit can in no way be considered “dominant,” since, measured by either subscribers or total social media traffic, it does not even come close to Facebook’s market share. But if we are trying to strike the right balance between content moderation on the one hand and fears about censorship or harm to innovation on the other, then it doesn’t matter whether we’re talking about Facebook, Reddit, or some fledgling service that doesn’t even exist yet.

And yes, we should acknowledge that such regulation may raise the cost of doing business — although both experience and research tell us that these fears are greatly exaggerated. But, as noted above, balancing policy objectives makes for trade-offs. There is no doubt that health and fire safety codes raise the cost of doing business for new restaurants. It is also true that without such codes we get more cases of food poisoning and more fires. While we can, and should, debate the trade-offs and where to set the balance, the fact that a rule may impose costs is not an automatic showstopper in any rational policy discussion.

In my next blog post, I will start trying to answer these questions, beginning with the most basic: what, exactly, are we talking about regulating? Or, what exactly is a “digital platform” anyway?

Stay tuned . . .

(This blog first appeared on the site of my employer, Public Knowledge.)
