The T-Mobile Data Breach and Your Basic Primer on CPNI – Part II: How Will the FCC Investigate T-Mo’s Data Breach?

In Part I, I provided the legal and political background needed to understand why the Federal Communications Commission’s (FCC’s) investigation into T-Mobile’s data breach, which impacted about 53 million existing customers, former customers, and folks who applied for credit checks but were never customers, may be politically complicated. But what are the mechanics of the investigation? How does this actually work? What are the rules, and what remedies or penalties can the FCC impose on T-Mobile?

 

I explore these questions below . . .

Continue reading

The T-Mobile Data Breach and Your Basic Primer on CPNI – Part I: The Major Background You Need to Know for This to Make Sense.

T-Mobile announced recently that it experienced a major cybersecurity breach, exposing personal information (including credit card numbers) for at least 53 million customers and former customers. Because T-Mobile is a Title II mobile phone provider, this automatically raises the question of whether T-Mobile violated the FCC’s Customer Proprietary Network Information (CPNI) rules. These rules govern, among other things, the obligation of telecommunications service providers to protect CPNI and how to respond to a data breach when one occurs. The FCC has confirmed it is conducting an investigation into the matter.

 

It’s been a long time since we’ve had to think about CPNI, largely because former FCC Chair Ajit Pai made it abundantly clear that he thought the FCC should not enforce privacy rules. Getting the FCC to crack down on even the most egregious violations, such as selling super-accurate geolocation data to bounty hunters, was like pulling teeth. But back in the Wheeler days, CPNI was a big deal, with Enforcement Bureau Chief Travis LeBlanc terrorizing incumbents by actually enforcing the law with real fines and stuff (much to the outrage of Republican Commissioners Ajit Pai and Mike O’Rielly). Given that Jessica Rosenworcel is now running the Commission, and that she and Democratic Commissioner Geoffrey Starks are both strong on consumer protection generally and privacy protection in particular, it seems like a good time to fire up the long-disused CPNI neurons with a review of how CPNI works and what might or might not happen in the T-Mo investigation.

 

Before diving in, I want to stress that getting hacked and suffering a data breach is not, in and of itself, proof of a rule violation or cause for any sort of fine or punishment. You can do everything right and still get hacked. But the CPNI rules impose obligations on carriers to take suitable precautions to protect CPNI, as well as obligations on what to do when a carrier discovers a breach. If the FCC finds that T-Mobile acted negligently in its data storage practices, or failed to follow appropriate procedures, T-Mobile could face a substantial fine, in addition to the FCC requiring it to come up with a plan to prevent this sort of hack going forward.

 

Assuming, of course, that the breach involved CPNI at all. One of the fights during the Wheeler FCC involved what I will call the “broad” view of CPNI v. the “narrow” view of CPNI. Needless to say, I am an advocate of the “broad” view, and think that’s a proper reading of the law. But I wouldn’t be providing an accurate primer if I didn’t also cover the “narrow” view advanced by the carriers and by Pai and O’Rielly.

 

Because (as usual) actually understanding what is going on and its implications requires a lot of background, I’ve broken this up into two parts. Part I gives the basic history and background of CPNI, and explains why this provides the first test of how the Biden FCC will treat CPNI enforcement. Part II will look at the application of the FCC’s rules to the T-Mobile breach and what issues are likely to emerge along the way.

 

More below . . .

Continue reading

An Ounce of Preventive Regulation is Worth a Pound of Antitrust: A Proposal for Platform CPNI.

A substantially similar version of this blog was published on the blog of my employer, Public Knowledge.

 

Last year, Public Knowledge and the Roosevelt Institute published my book, The Case for the Digital Platform Act. I argued there that we could define digital platforms as a distinct sector of the economy, and that the structure of these businesses and the nature of the sector combine to encourage behaviors that create challenges for existing antitrust enforcement. In the absence of new laws and policies, the digital platform sector gives rise to “tipping points” where a single platform or a small oligopoly of platforms can exercise control over a highly lucrative, difficult-to-replicate set of online businesses. For example, despite starting as an online bookseller with almost no customers in 1994, Amazon has grown into an e-commerce behemoth controlling approximately 40% of all online sales in the United States and enjoying a market capitalization of $1.52 trillion. Google has grown from a scrappy little search engine in 1998 to dominate online search and online advertising — as well as creating the most popular mobile operating system (Android) and web browser (Chrome).

 

Today, Public Knowledge released my new paper on digital platform regulation: Mind Your Own Business: Protecting Proprietary Third-Party Information from Digital Platforms. Briefly, this paper provides a solution to a specific competition problem that keeps coming up in the digital platform space: continuing accusations against Amazon, Google, and other digital platforms that connect third-party vendors with customers that these platforms appropriate proprietary data (such as sales information, customer demographics, or whether the vendor uses associated affiliate services such as Google Ads or Amazon Fulfillment Centers) and use data collected for one purpose to privilege themselves at the expense of the vendor.

 

While I’ve blogged about this problem previously, the new paper provides a detailed analysis of the problem, explains why the market will not find a solution without policy intervention, and offers a model statute to solve the problem. Congress has only to pass the draft statute included in the paper’s Appendix to take a significant step forward in promoting competition in the digital marketplace. For the benefit of folks just tuning in, here is a brief refresher and a summary of the new material.

 

A side note. One of the things I’ve done in the paper and the draft statute in Appendix A (Feld’s First Principle of Advocacy: Always make it as easy as possible for people to do what you want them to do) is to actually define, in statutory terms, a “digital platform.” Whatever happens with this specific regulatory proposal, this definition is something I hope people will pick up and recycle. One of the challenges of regulating a specific sector is actually defining the sector. Most legislative efforts, however, think primarily in terms of “Google, Facebook, Amazon, maybe Apple and whoever else.” But digital platforms as a sector of the economy include not just the biggest providers but the smallest and everything in between. With all due respect to Justice Potter Stewart, you can’t write legislation that defines the sorts of actors covered by the legislation as “I know it when I see it.”

 

More below . . .

 

Continue reading

CPNI Is More Than Just Consumer Privacy — How To Apply It To Digital Platforms.

This is the fourth blog post in a series on regulating digital platforms. A substantially similar version of this was published by my employer Public Knowledge. You can view the full series here. You can find the previous post in this series on Wetmachine here.

 

“Customer proprietary network information,” usually abbreviated as “CPNI,” refers to a very specific set of privacy regulations governing telecommunications providers (codified at 47 U.S.C. §222) and enforced by the Federal Communications Commission (FCC). But while CPNI provides some of the strongest consumer privacy protections in federal law, it also does much more than that. CPNI plays an important role in promoting competition for telecommunications services and for services that require access to the underlying telecommunications network — such as alarm services. To be clear, CPNI is neither a replacement for general privacy nor a substitute for competition policy. Rather, these rules prohibit telecommunications providers from taking advantage of their position as a two-sided platform. As explained below, CPNI prevents telecommunications carriers from using, for their own advantage, data that customers and competitors must disclose to the carrier for the system to work.

All of which brings us to our first concrete regulatory proposal for digital platforms. As I discuss below, the same concerns that prompted the FCC to invent CPNI rules in the 1980s and Congress to expand them in the 1990s apply to digital platforms today. First, because providers of potentially competing services must expose proprietary information to the platform for the service to work, platform operators can use their rivals’ proprietary information to offer competing services. If someone sells novelty toothbrushes through Amazon, Amazon can track if the product is selling well, and use that information to make its own competing toothbrushes.

 

Second, the platform operator can compromise consumer privacy without access to the content of the communication by harvesting all sorts of information about the communication and the customer generally. For example, if I’m a mobile phone platform or service, I can tell if you are calling your mother every day like a good child should, or if you are letting her sit all alone in the dark, and whether you are having a long conversation or just blowing her off with a 30-second call. Because while I know you are so busy up in college with all your important gaming and fraternity business, would it kill you to call the woman who carried you for nine months and nearly died giving birth to you? And no, a text does not count. What, you can’t actually take the time to call and have a real conversation? I can see by tracking your iPhone that you clearly have time to hang out at your fraternity with your friends and go see Teen Titans Go To The Movies five times this week, but you don’t have time to call your mother?

 

As you can see, both to protect consumer privacy and to promote competition and protect innovation, we should adopt a version of CPNI for digital platforms. And call your mother more often. I’m just saying.

Once again, before I dig into the substance, I warn readers that I do not intend to address either whether the regulation should apply exclusively to dominant platforms or what federal agency (if any) should enforce these regulations. Instead, in an utterly unheard-of approach for Policyland, I want to delve into the substance of why we need real CPNI for digital platforms and what that would look like.

Continue reading

Broadband Privacy Can Prevent Discrimination: The Case of Cable One and FICO Scores.

The FCC has an ongoing proceeding to apply Section 222 (47 U.S.C. §222) to broadband. For those unfamiliar with the statute, Section 222 prohibits a provider of a “telecommunications service” from either disclosing information collected from a customer without the customer’s consent or using the information for something other than providing the telecom service. While most of us think this generally means advertising, it means a heck of a lot more than that — as illustrated by this tidbit from Cable One.

 

Continue reading

Phone Industry To The Poor: “No Privacy For You!”

Back in June, the FCC released a major Order on the Lifeline program. Lifeline, for those not familiar with it by that name, is the federal program started in the Reagan era to make sure poor people could have basic phone service by providing them with a federal subsidy. Congress enshrined Lifeline (along with subsidy programs for rural areas) in 1996 as Section 254 of the Communications Act. While most of the item dealt with a proposal to expand Lifeline to broadband, a portion of the Order dealt with the traditional FCC Lifeline program.

As a result, the wireless industry trade association, CTIA, has asked the FCC to declare that poor people applying for Lifeline have no enforceable privacy protections when they provide things like their Social Security number, home address, full name, date of birth, and anything else an identity thief would need to make their lives miserable. Meanwhile, the US Telecom Association (USTA), the trade association for landline carriers, has actually sued the FCC for the right to behave utterly irresponsibly with any information poor people turn over about themselves — including the right to sell that information to third parties.

 

Not that the wireless carriers would ever want to do anything like that, of course! As CTIA, USTA, and all their members constantly assure us, protecting customer privacy is a number one priority. Unless, of course, they’re running some secret experiments on tracking without notifying customers that accidentally expose customer information to third parties. Oh, and it might take longer than promised to actually let you opt out once you discover it. And in their lawsuit against the FCC’s Net Neutrality rules, they explicitly cite the inability to use customer information for marketing, the inability to sell this information to third parties, and the requirement to protect this information generally as among the biggest burdens of classifying broadband as Title II. But other than that, there is no reason to think that CTIA’s members or USTA’s members would fail to respect and protect your privacy.

 

So how did the Lifeline Reform Order, which most people assumed was all about expanding Lifeline to broadband, become the vehicle for the phone industry to tell poor people they have no privacy protections when they apply for a federal aid program? I explain below . . .

Continue reading