CPNI Is More Than Just Consumer Privacy — How To Apply It To Digital Platforms.

This is the fourth blog post in a series on regulating digital platforms. A substantially similar version of this was published by my employer Public Knowledge. You can view the full series here. You can find the previous post in this series on Wetmachine here.

“Customer proprietary network information,” usually abbreviated as “CPNI,” refers to a very specific set of privacy regulations governing telecommunications providers (codified at 47 U.S.C. §222) and enforced by the Federal Communications Commission (FCC). But while CPNI provides some of the strongest consumer privacy protections in federal law, it also does much more than that. CPNI plays an important role in promoting competition for telecommunications services and for services that require access to the underlying telecommunications network — such as alarm services. To be clear, CPNI is neither a replacement for general privacy legislation nor a substitute for competition policy. Rather, these rules prohibit telecommunications providers from taking advantage of their position as a two-sided platform. As explained below, CPNI prevents telecommunications carriers from exploiting data that customers and competitors must disclose to the carrier simply for the system to work.

All of which brings us to our first concrete regulatory proposal for digital platforms. As I discuss below, the same concerns that prompted the FCC to invent CPNI rules in the 1980s and Congress to expand them in the 1990s apply to digital platforms today. First, because providers of potentially competing services must expose proprietary information to the platform for their services to work, platform operators can use their rivals' proprietary information to offer competing services. If someone sells novelty toothbrushes through Amazon, Amazon can track whether the product is selling well and use that information to launch its own competing toothbrush.

Second, the platform operator can compromise consumer privacy without access to the content of the communication by harvesting all sorts of information about the communication and the customer generally. For example, if I'm a mobile phone platform or service, I can tell if you are calling your mother every day like a good child should, or if you are letting her sit all alone in the dark, and whether you are having a long conversation or just blowing her off with a 30-second call. Because while I know you are so busy up in college with all your important gaming and fraternity business, would it kill you to call the woman who carried you for nine months and nearly died giving birth to you? And no, a text does not count. What, you can't actually take the time to call and have a real conversation? I can see by tracking your iPhone that you clearly have time to hang out at your fraternity with your friends and go see Teen Titans Go To The Movies five times this week, but you don't have time to call your mother?
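
To put the metadata point in more concrete terms: a carrier or platform never needs to hear a word of the conversation to learn all of this. Below is a minimal sketch of the kind of inference involved; the record fields, names, and numbers are hypothetical, invented purely for illustration, and this is not a claim about how any particular carrier actually processes its data.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical call-detail records: (caller, callee, start time, duration in seconds).
# No call content appears anywhere -- this is exactly the kind of
# "information about the communication" described above.
call_records = [
    ("you", "mom",        datetime(2018, 8, 1, 19, 5),    32),
    ("you", "fraternity", datetime(2018, 8, 1, 21, 0),  1800),
    ("you", "fraternity", datetime(2018, 8, 3, 22, 15), 2400),
    ("you", "mom",        datetime(2018, 8, 14, 18, 50),  28),
]

def relationship_profile(records, subscriber, window_days=30):
    """Summarize how often, and for how long, a subscriber talks to each contact."""
    cutoff = max(start for _, _, start, _ in records) - timedelta(days=window_days)
    stats = defaultdict(lambda: {"calls": 0, "total_seconds": 0})
    for caller, callee, start, duration in records:
        if caller == subscriber and start >= cutoff:
            stats[callee]["calls"] += 1
            stats[callee]["total_seconds"] += duration
    return dict(stats)

print(relationship_profile(call_records, "you"))
# {'mom': {'calls': 2, 'total_seconds': 60},
#  'fraternity': {'calls': 2, 'total_seconds': 4200}}
# Two 30-second calls to mom in a month versus more than an hour with the
# fraternity: the carrier learns plenty without hearing a single word.
```

The toy code is beside the point; what matters is the asymmetry it illustrates. The network operator necessarily sees this metadata for the system to work at all, and that is precisely the data CPNI walls off from secondary use.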

As you can see, both to protect consumer privacy and to promote competition and innovation, we should adopt a version of CPNI for digital platforms. And call your mother more often. I'm just saying.

Once again, before I dig into the substance, I warn readers that I do not intend to address either whether the regulation should apply exclusively to dominant platforms or what federal agency (if any) should enforce these regulations. Instead, in an utterly unheard-of approach for Policyland, I want to delve into the substance of why we need real CPNI for digital platforms and what that would look like.


Better Privacy Protections Won’t Kill Free Facebook.

Once upon a time, some people developed a new technology for freely communicating with people around the world. While initially the province of techies and hobbyists, it didn't take long for commercial interests to notice the insanely popular new medium and rapidly move to displace the amateur stuff with professional content. But these companies had a problem. For years, people had gotten used to the idea that if you paid for the equipment to access the content, you could receive the content itself for free. No one wanted to pay for this new, high-quality (and expensive to make) content. How could private enterprise possibly make money (other than by selling equipment) in a market where people insisted on getting new content every day — heck, every minute! — for free?

Finally, a young techie turned entrepreneur came up with a crazy idea. Advertising! This fellow realized that if he could attract a big enough audience, he could get people to pay him so much for advertising that it would more than cover the cost of creating the content. Heck, he even seeded the business by paying people to take his content, just so he could sell more advertising. Everyone thought he was crazy. What? Give away content for free? How the heck can you make money giving it away for free? From advertising? Ha! Crazy kids with their wacky technology. But over the course of a decade, this young genius built one of the most lucrative and influential industries in the history of the world.

I am talking, of course, about William Paley, who built the CBS broadcast network and figured out how to make radio broadcasting an extremely profitable business. Not only did Paley prove that you could make a very nice living giving away content supported by advertising, he also demonstrated that you didn't need to know anything about your audience beyond the most basic raw numbers and aggregate information to do it. For the first 80 or so years of its existence, broadcast advertising depended on extrapolated guesses about the total aggregate audience and only the most general information about its demographics. Until the recent development of real-time information collection via set-top boxes, broadcast advertising (and cable advertising) depended on survey sampling and such broad categories as "18-25-year-old males" to sell targeted advertising — and made a fortune while doing it.

We should remember this history when evaluating claims by Facebook and others that any changes to enhance user privacy will bring the digital world crashing down on us and force everyone to start paying for content. Setting aside that some people might actually like the option of paying for services in exchange for enhanced privacy protection (I will deal with why this doesn't happen on its own in a separate blog post), history tells us that advertising can support free content just fine without needing to know every detail of our lives to serve us unique ads tailored to an algorithm's best guess about our likes and dislikes, based on multi-year, detailed surveillance of our every eye-muscle twitch. Despite the unfortunate tendency of social media to drive toward the most extreme arguments even at the best of times, "privacy regulation" is hardly an all-or-nothing proposition. We have a lot of room to address the truly awful problems with data collection and storage of personal information before we start significantly eating into the potential revenue of Facebook and other advertising-supported media.

Mind you, I'm not promising that solid and effective privacy regulation would have no impact on the future revenue-earning power of advertising. Sometimes, and again I recognize this will sound like heresy to a bunch of folks, we find that the overall public interest actually requires that we impose limits on profit-making activities to protect people. But again, as I find myself explaining every time we debate possible regulation in any context, we don't face some Manichean choice between a libertarian utopia and a blasted regulatory Hellscape where no business may offer a service without filling out 20 forms in triplicate. We have a lot of ways to strike a reasonable balance that provides users with real, honest-to-God enforceable personal privacy while keeping the advertising-supported digital economy profitable enough to thrive. My Public Knowledge colleague Allie Bohm offers some concrete suggestions in this blog post. I explore some of the broader theoretical dimensions of that balance below . . . .
