CPNI Is More Than Just Consumer Privacy — How To Apply It To Digital Platforms.

This is the fourth blog post in a series on regulating digital platforms. A substantially similar version of this post was published by my employer, Public Knowledge. You can view the full series here. You can find the previous post in this series on Wetmachine here.


“Customer proprietary network information,” usually abbreviated as “CPNI,” refers to a very specific set of privacy regulations governing telecommunications providers (codified at 47 U.S.C. §222) and enforced by the Federal Communications Commission (FCC). But while CPNI provides some of the strongest consumer privacy protections in federal law, it also does much more than that. CPNI plays an important role in promoting competition for telecommunications services and for services that require access to the underlying telecommunications network — such as alarm services. To be clear, CPNI is neither a replacement for general privacy law nor a substitute for competition policy. Rather, these rules prohibit telecommunications providers from taking advantage of their position as a two-sided platform. As explained below, CPNI prevents telecommunications carriers from exploiting data that customers and competitors must disclose to the carrier simply for the system to work.

All of which brings us to our first concrete regulatory proposal for digital platforms. As I discuss below, the same concerns that prompted the FCC to invent CPNI rules in the 1980s and Congress to expand them in the 1990s apply to digital platforms today. First, because providers of potentially competing services must expose proprietary information to the platform for their services to work, the platform operator can mine its rivals’ data to offer competing services of its own. If someone sells novelty toothbrushes through Amazon, Amazon can track whether the product is selling well and use that information to make its own competing toothbrushes.
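
To make the mechanism concrete, here is a deliberately toy Python sketch. Every name and number in it is invented, but the point stands: the order log a marketplace necessarily keeps just to process third-party sales doubles, for the operator, as a competitive intelligence feed on every one of its sellers.

```python
from collections import Counter

# Invented sample data: the order records a marketplace must keep
# simply to process third-party sales.
orders = [
    {"seller": "NoveltyBrushCo", "sku": "glow-toothbrush", "qty": 3},
    {"seller": "NoveltyBrushCo", "sku": "glow-toothbrush", "qty": 7},
    {"seller": "NoveltyBrushCo", "sku": "glow-toothbrush", "qty": 12},
    {"seller": "SomeOtherShop", "sku": "novelty-mug", "qty": 1},
]

# Nothing but policy stops the operator from ranking its sellers'
# products by sales volume and copying the winners.
units = Counter()
for order in orders:
    units[(order["seller"], order["sku"])] += order["qty"]

for (seller, sku), total in units.most_common():
    print(f"{seller} / {sku}: {total} units sold")
```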


Second, the platform operator can compromise consumer privacy without access to the content of the communication by harvesting all sorts of information about the communication and the customer generally. For example, if I’m a mobile phone platform or service, I can tell if you are calling your mother every day like a good child should, or if you are letting her sit all alone in the dark, and whether you are having a long conversation or just blowing her off with a 30-second call. Because while I know you are so busy up in college with all your important gaming and fraternity business, would it kill you to call the woman who carried you for nine months and nearly died giving birth to you? And no, a text does not count. What, you can’t actually take the time to call and have a real conversation? I can see by tracking your iPhone that you clearly have time to hang out at your fraternity with your friends and go see Teen Titans Go! To the Movies five times this week, but you don’t have time to call your mother?
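
And lest this read as mere guilt-tripping, here is a toy Python sketch (every record in it is invented) of how much a carrier can infer from call metadata alone, without ever listening to a single conversation:

```python
from datetime import datetime

# Invented call-detail records: metadata only, no call content.
calls = [
    {"to": "mom", "start": datetime(2018, 8, 1, 19, 0), "seconds": 30},
    {"to": "pizza place", "start": datetime(2018, 8, 1, 20, 15), "seconds": 95},
    {"to": "mom", "start": datetime(2018, 8, 15, 19, 5), "seconds": 28},
]

mom_calls = [c for c in calls if c["to"] == "mom"]
avg_len = sum(c["seconds"] for c in mom_calls) / len(mom_calls)
gap = max(c["start"] for c in mom_calls) - min(c["start"] for c in mom_calls)

# From three records the carrier already knows how rarely you call
# and that you are blowing her off with sub-minute conversations.
print(f"{len(mom_calls)} calls to mom, {gap.days} days apart, "
      f"averaging {avg_len:.0f} seconds each")
```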


As you can see, both to protect consumer privacy and to promote competition and protect innovation, we should adopt a version of CPNI for digital platforms. And call your mother more often. I’m just saying.

Once again, before I dig into the substance, I warn readers that I do not intend to address either whether the regulation should apply exclusively to dominant platforms or what federal agency (if any) should enforce these regulations. Instead, in an utterly unheard-of approach for Policyland, I want to delve into the substance of why we need real CPNI for digital platforms and what that would look like.

Continue reading

Stopping the 5G Digital Divide Before It Happens.

About 10 years ago, the telcos and the cablecos argued that they needed “franchise reform” to deploy fiber-to-the-home (FTTH) high-speed broadband. Anyone offering cable services (which, at the time, were a necessary part of any bundle including broadband — yup, times change) needs to get a franchise. At the time, all franchises were local, and they usually required the franchisee to serve the entire franchise area with the same quality of service. This requirement is called an “anti-redlining” provision. It is designed to ensure that providers do not avoid traditionally unserved communities (particularly communities of color), who were on the wrong side of the “red line” drawn by real estate developers to separate the whites-only neighborhoods from the “colored” neighborhoods. (For more info, see this clip from Adam Ruins The Suburbs.) While we no longer have laws mandating segregation, the combination of stereotypes about urban neighborhoods dominated by people of color and the unfortunate economic reality that non-whites systemically earn lower incomes than whites often means that providers simply ignore these neighborhoods when they offer services and focus investment on whiter (and wealthier) areas. Anti-redlining laws are designed to prevent that from happening.


To return to the mid-00s, the telcos (later joined by cablecos demanding a level playing field) pushed states to reform their franchise laws to (a) replace local franchising with state franchising, and (b) eliminate most of the requirements of the franchise — including the anti-redlining provisions. The carriers argued that OF COURSE they intended to provide FTTH everywhere, including communities of color. But if they had to deal with local franchise authorities dictating deployment schedules and demanding all sorts of conditions to get a franchise, then — gosh darn it — they just would not be able to invest in FTTH no matter how much they wanted to. Although I and my then-employer Media Access Project worked with the handful of local and national orgs fighting repeal of local franchises generally and anti-redlining provisions specifically, we lost bigly.


Today, I am once again feeling the Cassandrefreude. As predicted 10 years ago, in the absence of anti-redlining provisions, carriers have not invested in upgrading their broadband capacity in communities of color at anything close to the same rate at which they have upgraded wealthier, whiter neighborhoods. As a result, the urban digital divide is once again growing. It’s not just that high-speed broadband is ridiculously expensive, although that is also a serious barrier to adoption in urban areas. It’s also that in many low-income and predominantly non-white neighborhoods, speeds on par with those offered in wealthier and whiter neighborhoods are not even available.


This problem is further compounded by the belief that we have solved the problem of urban deployment, and that the only place where deployment (as opposed to simply the cost of access) remains an issue is rural America. But while the problems in rural America are very real, we need to recognize that the digital divide is actually growing in urban areas as carriers rush to provide gigabit speeds in some neighborhoods while leaving others in the digital dust.


With the focus on 5G deployment, however, we have a rare opportunity to avoid repeating past mistakes. Just once, just once, we could actually take steps to prevent the inequality before it happens.

Continue reading

Using The Cost of Exclusion to Measure The Dominance of Digital Platforms.

This is the third blog post in a series on regulating digital platforms. A version of this first appeared on the blog of my employer, Public Knowledge.


In my last blog post, I explained my working definition of what constitutes a “digital platform.” Today, I focus on another concept that gets thrown around a lot: “dominant.” While many regulations promoting consumer protection and competition apply throughout a sector, some economic regulations apply only to “dominant” firms or firms with “market power.” Behavior that is harmless, or potentially even positive, when done by smaller companies or in a more competitive marketplace can be anticompetitive or harmful to consumers when done by dominant firms — regardless of the firm’s actual intent.

For reasons discussed in my previous blog posts, defining what constitutes “dominant” (or even identifying a single market in which to make such a determination) presents many challenges for the traditional tools of analysis favored by antitrust enforcers and regulators. I therefore propose that we use the cost of exclusion (“COE,” because nothing in policy is taken seriously unless it has its own acronym) as the means of determining when we need to apply regulation to “dominant” firms. That is to say, the greater the cost of exclusion from a platform to individuals and firms (whether as consumers or producers or any of the other roles they may play simultaneously on digital platforms), the greater the need for regulations to protect platform users from harm. If a firm is “too big to lose access to,” then we should treat that firm as dominant.
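
For those who want the concept made concrete, here is a purely hypothetical back-of-the-envelope sketch in Python. Every number and the threshold are invented for illustration; the point is simply that COE asks what all of a platform’s users, on every side of the market, would lose if cut off:

```python
# Hypothetical figures for illustration only; no real platform data.
user_groups = {
    "third-party sellers": {"count": 50_000, "monthly_loss_each": 2_000.00},
    "ordinary consumers": {"count": 10_000_000, "monthly_loss_each": 5.00},
}

# Toy COE: the aggregate monthly harm if these users lost access entirely.
coe = sum(g["count"] * g["monthly_loss_each"] for g in user_groups.values())

DOMINANCE_LINE = 100_000_000  # invented regulatory threshold
print(f"Estimated monthly cost of exclusion: ${coe:,.0f}")
print("Too big to lose access to" if coe > DOMINANCE_LINE else "Below the line")
```

Actually estimating those losses in the real world is, of course, the hard part; the sketch only shows what kind of question COE asks.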


Continue reading

So What The Heck *IS* A Digital Platform?

This is the second blog in a series on regulating digital platforms. A (less snarky) version first appeared on the blog of my employer, Public Knowledge.


In Part I, I explored the challenges of regulating digital platforms to promote competition, protect consumers, and encourage news production and civic engagement. Today, I plan to dive into the first set of challenges. First, I define what I mean when I talk about digital platforms. I will argue that platforms that (a) provide a two-sided or multi-sided market; (b) are accessed via the internet; and (c) have at least one side that is marketed as a “mass market” service share a set of characteristics and raise a similar set of concerns, and that we should therefore consider them a distinct set of businesses.


Let me stress at the outset something that I will repeat multiple times. First and foremost, describing the common attributes of platforms makes no value judgment about whether these attributes are bad or good. Indeed, many of the attributes I describe have enormous positive effects for consumers, competition, and civic discourse. At the same time, however, the implications of these specific attributes give rise to a number of unique concerns that we read about every day, ranging from companies using targeted advertising to stalk people to extremists using social media to radicalize and recruit.


Equally important, nothing in sector-specific regulation replaces antitrust or consumer protection laws of general applicability. Nor does it suggest that digital services that do not meet the definition of a “digital platform” need no oversight. Rather, the definitions I propose below and the sector-specific recommendations that flow from them (discussed in future blog posts) complement these laws of general applicability. The fact that many platform attributes complicate existing antitrust analysis does not mean that antitrust law has lost its utility as an important tool for protecting competition. But even embracing a broader view of antitrust law and its goals, there remains an important role for sector-specific regulation to address concerns that arise from the unique nature of digital platforms (as distinct from other sectors of the economy).


Finally, before diving in, I must caveat this with the recognition that this is a field very much in flux. I have identified what I think are the important elements which, taken together, make digital platforms different from other lines of business or even other “internet companies.” But this is not the only potentially useful distinction. In the past, for example, I have argued that we should also distinguish between “public utility” concerns (services so important the government has an affirmative responsibility to ensure affordable access for everyone) and services that, while important, do not rise to this level. Laura Moy, Deputy Director of Georgetown Law’s Center on Privacy and Technology, in testimony before the House Energy and Commerce Committee, draws an excellent distinction between “essential services” and “unavoidable services,” i.e., services so ubiquitous they are virtually impossible to avoid in one form or another. Others have different definitions of platforms, and/or different distinctions among them.


The definition I propose here is therefore not intended as a final conclusion, but as an initial working definition to debate and refine over time.


With all that out of the way, let’s move on to the good stuff . . .



Continue reading

Why Platform Regulation Is Both Necessary and Hard.

This is the first blog in a series on regulating digital platforms.


As digital platforms have become increasingly important in our everyday lives, we’ve recognized that the need for some sort of regulatory oversight has grown with them. In the past, we’ve talked about this in the context of privacy and what general sorts of due process rights dominant platforms owe their customers. Today, we make it clear that we have reached the point where we need sector-specific regulation focused on online digital platforms, not just the application of existing antitrust or consumer protection laws. When platforms have become so central to our lives that a change in an algorithm can crash third-party businesses, when social media plays such an important role in our lives that entire businesses exist to pump up your follower numbers, and when a multi-billion dollar industry exists for the sole purpose of helping businesses game search engine rankings, lawmakers need to stop talking hopefully about self-regulation and start putting in place enforceable rights to protect the public interest.


That said, we need to recognize at the outset that a lot of things make it rather challenging to figure out what kind of regulation actually makes sense in this space. Although Ecclesiastes assures us “there is nothing new under the sun,” digital platforms combine issues we’ve dealt with in electronic media (and elsewhere) in novel ways that make applying traditional solutions tricky. Before diving into the solution, therefore, we need to (a) define the problem, and (b) decide what kind of outcome we want to see.


Continue reading