The T-Mobile Data Breach and Your Basic Primer on CPNI – Part II: How Will the FCC Investigate T-Mo’s Data Breach?

In Part I, I provided the legal and political background needed to understand why the Federal Communications Commission's (FCC's) investigation into T-Mobile's data breach, which affected about 53 million current customers, former customers, and folks who applied for credit checks but never became customers, may be politically complicated. But what are the mechanics of the investigation? How does this actually work? What are the rules, and what remedies or penalties can the FCC impose on T-Mobile?

 

I explore these questions below . . . . .

Continue reading

The T-Mobile Data Breach and Your Basic Primer on CPNI – Part I: The Major Background You Need to Know for This to Make Sense.

T-Mobile announced recently that it experienced a major cybersecurity breach, exposing personal information (including credit card numbers) for at least 53 million customers and former customers. Because T-Mobile is a Title II mobile phone provider, this automatically raises the question of whether T-Mobile violated the FCC’s Customer Proprietary Network Information (CPNI) rules. These rules govern, among other things, the obligation of telecommunications service providers to protect CPNI and how to respond to a data breach when one occurs. The FCC has confirmed it is conducting an investigation into the matter.

 

It’s been a long time since we’ve had to think about CPNI, largely because former FCC Chair Ajit Pai made it abundantly clear that he thought the FCC should not enforce privacy rules. Getting the FCC to crack down on even the most egregious violations (such as selling super-accurate geolocation data to bounty hunters) was like pulling teeth. But back in the Wheeler days, CPNI was a big deal, with Enforcement Bureau Chief Travis LeBlanc terrorizing incumbents by actually enforcing the law with real fines and stuff (much to the outrage of Republican Commissioners Ajit Pai and Michael O’Rielly). Given that Jessica Rosenworcel is now running the Commission, and that both she and Democratic Commissioner Geoffrey Starks are strong on consumer protection generally and privacy protection in particular, it seems like a good time to fire up the long-disused CPNI neurons with a review of how CPNI works and what might or might not happen in the T-Mo investigation.

 

Before diving in, I want to stress that getting hacked and suffering a data breach is not, in and of itself, proof of a rule violation or cause for any sort of fine or punishment. You can do everything right and still get hacked. But the CPNI rules impose obligations on carriers to take suitable precautions to protect CPNI, as well as obligations on what to do when a carrier discovers a breach. If the FCC finds that T-Mobile acted negligently in its data storage practices, or failed to follow the appropriate procedures, T-Mobile could face a substantial fine, in addition to an FCC requirement that it come up with a plan to prevent this sort of hack going forward.

 

Assuming, of course, that the breach involved CPNI at all. One of the fights during the Wheeler FCC involved what I will call the “broad” view of CPNI v. the “narrow” view of CPNI. Needless to say, I am an advocate of the “broad” view, and think that’s the proper reading of the law. But I wouldn’t be providing an accurate primer if I didn’t also cover the “narrow” view advanced by the carriers and by Pai and O’Rielly.

 

Because (as usual) actually understanding what is going on and its implications requires a lot of background, I’ve broken this up into two parts. Part I gives the basic history and background of CPNI, and explains why this provides the first test of how the Biden FCC will treat CPNI enforcement. Part II will look at the application of the FCC’s rules to the T-Mobile breach and what issues are likely to emerge along the way.

 

More below . . .

Continue reading

Does the Amazon “Drone Cam” Violate the FCC’s Anti-Eavesdropping Rule? And If It Does, So What?

Folks may have heard about the new Amazon prototype, the Ring Always Home Cam. Scheduled for release in early 2021, the “Drone Cam” will fly a pattern around your house to let you check on things when you are away. As you might imagine, given Amazon’s history of Alexa recording things without permission, the announcement generated plenty of pushback among privacy advocates. But what attracted my attention was this addendum at the bottom of the Amazon blog post:

“As with other devices at this stage of development, Ring Always Home Cam has not been authorized as required by the rules of the Federal Communications Commission. Ring Always Home Cam is not, and may not be, offered for sale or lease or sold or leased, until authorization is obtained.”

 

A number of folks asked me why this device needs FCC authorization. In general, any device that emits radio-frequency radiation as part of its operation requires certification under 47 U.S.C. 302a and Part 15 of the FCC’s rules (47 C.F.R. 15.1, et seq.). In addition, devices that incorporate unlicensed spectrum capability (e.g., Wi-Fi or Bluetooth) need certification from the FCC to show that they do not exceed the relevant power levels or rules of operation. So, mystery easily solved. But this prompted me to ask the following question: does the proposed Amazon “Drone Cam” violate the FCC’s rule against using wireless devices to record or listen to conversations without consent?

 

As I discuss below, this would (to my knowledge) be a novel use of 47 C.F.R. 15.9. It’s hardly a slam dunk, especially with an FCC that thinks it has no business enforcing privacy rules. But we have an actual privacy law on the books, and as the history of the rule shows, the FCC intended it to prevent the erosion of personal privacy in the face of rapidly developing technology — just like this. If you are wondering why this hasn’t mattered until now, I will observe that — to the best of my knowledge — this is the only such device that relies exclusively on wireless technology. The rule applies to the use of wireless devices, not to all devices certified under the authority of Section 302a* (which did not exist until 1982).

 

I unpack this, and how the anti-eavesdropping rule might impact the certification or operation of home drone cams and similar wireless devices, below . . .

 

*technically, although codified at 47 USC 302a, the actual Section number in the Comms Act is Section 302. Long story not worth getting into here. But I will use 302a for consistency’s sake.

Continue reading

A Tax on Silicon Valley Is A Dumb Way to Solve Digital Divide, But Might Be A Smart Way To Protect Privacy.

Everyone talks about the need to provide affordable broadband to all Americans. This means not only finding ways to get networks deployed in rural areas on par with those in urban areas, but also making broadband affordable where networks already exist. As a recent study showed, more urban folks are locked out of home broadband by factors such as price than go without because of the lack of a local access network. The simplest answer would be to include broadband (both residential and commercial) in the existing Universal Service Fund. Indeed, Rep. Doris Matsui has been trying to do this for about a decade. But, of course, no one wants to impose a (gasp!) tax on broadband, so this goes nowhere.

 

Following the Washington maxim “don’t tax you, don’t tax me, tax that fellow behind the tree,” lots of people come up with ideas for how to tax folks they hate or compete against. This usually includes streaming services such as Netflix, but these days is more likely to include social media — particularly Facebook. The theory being that “we want to tax our competitors,” or “we hates Facebook precious!” Um, I mean “these services consume more bandwidth or otherwise disproportionately benefit from the Internet.” While this particular idea is both highly ridiculous (we all benefit from the Internet, and things like cloud storage take up more bandwidth than streaming services like Netflix) and somewhat difficult – if not impossible – to implement in any way related to network usage (which is the justification), it did get me thinking about what sort of tax on Silicon Valley (and others) might make sense from a social policy perspective.

 

What about a tax on the sale of personal information, including the use of personal information for ad placement? To be clear, I’m not talking about a tax on collecting information or on using the information collected. I’m talking about a tax on two types of commercial transactions: selling information about individuals to third parties, and indirectly selling information to third parties via targeted advertising. It would be sort of a carbon tax for privacy pollution. We could even give “credits” to companies that reduce the amount of personal information that they collect (although I’m not sure we want to allow firms to trade them). We could have additional fines for data breaches, the way we do for other toxic waste spills that require cleanup.
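Purely to make the “carbon tax for privacy pollution” analogy concrete, here is a back-of-the-envelope sketch in Python. Every rate, revenue figure, and credit amount in it is invented for illustration only; nothing here is an actual proposal for how such a tax would be set.

```python
# Hypothetical arithmetic for a "privacy pollution" tax. All rates, revenues,
# and credit amounts below are made up purely for illustration.

TAX_RATE = 0.05  # assumed 5% rate on the two taxed transaction types


def privacy_tax(data_sale_revenue: float,
                targeted_ad_revenue: float,
                reduction_credits: float = 0.0) -> float:
    """Tax owed on (1) selling personal information to third parties and
    (2) indirectly selling it via targeted ad placement, net of any credits
    earned by collecting less personal information in the first place."""
    taxable = data_sale_revenue + targeted_ad_revenue
    return max(TAX_RATE * taxable - reduction_credits, 0.0)


# A firm with $1M in data-sale revenue and $9M in targeted-ad revenue,
# holding $50K in credits for reducing what it collects, would owe $450K.
print(privacy_tax(1_000_000, 9_000_000, reduction_credits=50_000))  # 450000.0
```

As with carbon credits, the point of the credit is to reward collecting less in the first place, not merely to tax the sale after the fact.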

 

Update: I’m apparently not the first person to think of something like this, although I’ve expanded it a bit to address privacy generally and not just targeted advertising. As Tim Karr pointed out in the comments, Free Press got here ahead of me back in February — although with a more limited proposed tax on targeted advertising. Also, Paul Romer wrote an op-ed on this in the NYT last May. I have some real problems with the Romer piece, since he seems to think that an even more limited tax on targeted advertising is enough to address all the social problems and that we should forget about either regulation or antitrust. Sorry, but just as no one serious about global climate change thinks a carbon tax alone will do the trick, no one serious about consumer protection and competition should imagine that a privacy pollution tax alone is going to force these companies to change their business models. This is a push in the right direction, not the silver bullet.

 

I elaborate below. . . .

Continue reading

Information Fiduciaries: Good Framework, Bad Solution.

By and large, human beings reason by analogy. We learn a basic rule, usually from a specific experience, and then generalize it to any new experience we encounter that seems similar. Even in the relatively abstract area of policy, human beings depend on reasoning by analogy. As a result, when looking at various social problems, the first thing many people do is ask “what is this like?” The answer we collectively come up with then tends to drive the way we approach the problem and what solutions we think address it. Consider the differences in policy, for example, between thinking of spectrum as a “public resource” v. “private property” v. “public commons” — although none of these actually describes what happens when we send a message via radio transmission.

 

As with all human things, this is neither good nor bad in itself. But it does mean that bad analogies drive really bad policy outcomes. By contrast, good analogies and good intellectual frameworks often lead to much better policy results. Nevertheless, most people in policy tend to ignore the impact of our policy frameworks. Indeed, those who mistake cynicism for wisdom have a tendency to dismiss these intellectual frameworks as mere post hoc rationalizations for foregone conclusions. And, in fact, sometimes they are. But even in these cases, the analogies still end up subtly influencing how the policies get developed and implemented. Because law and policy gets implemented by human beings, and human beings think in terms of frameworks and analogies.

 

I like to think of these frameworks and analogies as “deep structures” of the law. Like the way the features of geography impact the formation and course of rivers over time, the way we think about law and policy shapes how it flows in the real world. You can bulldoze through them, forcibly change them, or otherwise ignore these deep structures, but they continue to exert influence over time.

 

Case in point: the idea that personal information is “property.” I will confess to using this shorthand myself since 2016, when I started on the ISP privacy proceeding. In my 2017 white paper on privacy legislative principles, I traced the evolution of this analogy from Brandeis to the modern day, similar to other intangibles such as the ‘right of publicity.’ But as I also tried to explain, this was not meant as actual, real property, but as shorthand for the idea of a general, continuing interest. Unfortunately, as my Public Knowledge colleague Dylan Gilbert explains here, too many people have now taken this framework to mean ‘treat personal information like physical property that can be bought and sold and exclusively owned.’ This leads to lots of problems and bad policies, since (as Dylan explains) data is not actually like physical property or even other forms of intangible property.

 

Which brings me to Professor Jack Balkin of Yale Law School and his “information fiduciaries” theory. (Professor Balkin has co-written pieces about this with several different co-authors, but it’s generally regarded as his theory.) Briefly (since I get into a bit more detail, with links, below), Balkin proposes that judges can (and should) recognize that the relationship between companies that collect personal information in exchange for services and the people whose information they collect is similar to professional relationships such as doctor-patient or lawyer-client, where the law imposes limitations on your ability to use the information you collect over the course of the relationship.

 

This theory has become popular in recent years as a possible way to move forward on privacy. As with all theories that become popular, Balkin’s information fiduciary theory has started to get some skeptical feedback. The Law and Political Economy blog held a symposium for information fiduciary skeptics and invited me to submit an article. As usual, my first draft ended up being twice as long as what they wanted. So I am now running the full length version below.

 

You can find the version they published here. You can find the rest of the articles from the symposium here. Briefly, I think relying on information fiduciaries for privacy doesn’t do nearly enough, and has no advantage over passing strong privacy legislation at the state and federal levels. OTOH, I do think the idea of a fiduciary relationship between the companies that collect and use personal information and the individuals whose information gets collected provides a good framework for how to think about the relationships between the parties, and therefore what sort of legal rights should govern the relationship.

 

More below . . .

Continue reading

Will The FCC Ignore the Privacy Implications of Enhanced Geolocation In New E911 Rulemaking?

NB: This originally appeared as a blog post on the site of my employer, Public Knowledge.

Over the last three months, Motherboard’s Joseph Cox has produced an excellent series of articles on how the major mobile carriers have sold sensitive geolocation data to bounty hunters and others, including highly precise information designed for use with “Enhanced 911” (E911). As we pointed out last month when this news came to light, exposing this E911 data (called assisted GPS, or A-GPS) to third parties, whether by accident or intentionally, or using it in any way except for 911 or other purposes required by law, violates the rules the Federal Communications Commission adopted in 2015 to protect E911 data.

Just last week, Motherboard ran a new story on how stalkers, bill collectors, and anyone else who wants highly precise, real-time consumer geolocation data from carriers can usually scam it out of them by pretending to be police officers. Carriers have been required to take precautions against this kind of “pretexting” since 2007. Nevertheless, according to people interviewed in the article, this tactic of pretending to be a police officer is extremely common and ridiculously easy because, according to one source, “Telcos have been very stupid about it. They have not done due diligence.”

So you would think, with the FCC scheduled to vote this Friday on a mandate to make E911 geolocation even more precise, the FCC would (a) remind carriers that this information is super sensitive and subject to protections above and beyond the FCC’s usual privacy rules for phone information (called “customer proprietary network information,” or “CPNI”); (b) make it clear that the new information required will be covered by the rules adopted in the 2015 E911 Order; and (c) maybe even, in light of these ongoing revelations that carriers do not seem to be taking their privacy obligations seriously, solicit comment on how to improve privacy protections to prevent these kinds of problems from occurring in the future. But of course, as the phrase “you would think” indicates, the FCC’s draft Further Notice of Proposed Rulemaking (FNPRM) does none of these things. The draft doesn’t even mention privacy once.

 

I explain below why this has real, and potentially really bad, implications for privacy.

Continue reading

The Market for Privacy Lemons. Why “The Market” Can’t Solve The Privacy Problem Without Regulation.

Practically every week, it seems, we get some new revelation about the mishandling of user information that makes people very upset. Indeed, people have become so upset that they are actually talking about, dare we say it, “legislating” some new privacy protections. And no, I don’t mean “codifying existing crap while preempting the states.” For those interested, I have a whitepaper outlining principles for moving forward on effective privacy legislation (which you can read here). My colleagues at my employer Public Knowledge have written a few blog posts on how Congress ought to respond to the whole Facebook/Cambridge Analytica thing, as well as analyses of some of the privacy bills introduced this Congress.

 

Unsurprisingly, we still have folks who insist that we don’t need any regulation, and that if the market doesn’t provide people with privacy protection, it must be because people don’t value privacy protection. After all, the argument goes, if people valued privacy, someone would offer services that protect it. So if we don’t see such services in the market, people must not want them. Q.E.D. Indeed, these folks will argue, we find that — at least for some services — there are privacy-friendly alternatives. Often these cost money, since you aren’t paying with your personal information. This leads some to argue that it’s simply that people like “free stuff.” As a result, the current Administration continues to focus on finding “market-based solutions” rather than figuring out what regulations would actually give people greater control over their personal information and prevent the worst abuses.

 

But an increasing number of people are wising up to the reality that this isn’t the case. What folks lack is a vocabulary to explain why these “market approaches” don’t work. Fortunately, a Nobel Prize-winning economist named George Akerlof figured this out back in the 1970s in a paper called “The Market for Lemons.” Akerlof’s later work on cognitive dissonance in economics is also relevant and valuable. (You can read what amounts to a high-level book report on Akerlof & Dickens’ “The Economic Consequences of Cognitive Dissonance” here.) To summarize: everyone knows that they can’t do anything real to protect their privacy, so they either admit defeat and resent it, or lie to themselves that they don’t care. A few believe they can protect themselves via some combination of services and avoidance I will call the “magic privacy dance,” and therefore blame everyone else for not caring enough to do their own magic privacy dance. This ignores that (a) the magic privacy dance requires specialized knowledge; (b) the magic privacy dance imposes lots of costs, ranging from a monthly subscription to a virtual private network (VPN), to the opportunity cost of forgoing services like Facebook, to the fact that Amazon and Google are so embedded in the structure of the internet at this point that blocking them literally causes large parts of the internet to become inaccessible or slow to the point of uselessness; and (c) nothing helps anyway! No matter how careful you are, a data breach by a company like Equifax, or a decision by a company you invested in to change its policy, means all your magic privacy dancing amounted to a total, expensive waste of time.

 

Accordingly, the rational consumer gives up. Unless you are willing to become a hermit, “go off the grid,” pay cash for everything, and do other stuff limited to retired spies in movies, you simply cannot realistically expect to protect your privacy in any meaningful way. Hence, as predicted by Akerlof, rational consumers don’t trust “market alternatives” promising to protect privacy. Heck, thanks to Congress repealing the FCC’s privacy rules in 2017, you can’t even get on to the internet without exposing your personal information to your broadband provider. Even the happy VPN dance won’t protect all your information from leaking out. So if you are screwed from the moment you go online, why bother to try at all?
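For readers who want the lemons dynamic in miniature, here is a minimal sketch in Python. All the numbers are invented; the point is only the structure of Akerlof’s argument: when buyers cannot verify which services actually protect privacy, they will only pay the expected value of the mix, which can fall below what an honest, privacy-protective service costs to operate.

```python
# A toy version of Akerlof's "market for lemons," applied to privacy claims.
# Every number below is invented purely to illustrate the mechanism.

services = {
    # Genuinely protects privacy: costs more to run, worth more to users.
    "protective": {"cost_to_operate": 8.0, "value_to_user": 10.0},
    # A "lemon": claims to protect privacy, quietly monetizes data, cheap to run.
    "lemon":      {"cost_to_operate": 2.0, "value_to_user": 4.0},
}

# If users could verify the claims, each service could charge up to its
# value_to_user, and the protective service would survive (10 > 8).
# Since users cannot verify, a rational user will only pay the *expected*
# value across whatever mix of honest services and lemons is on offer.
share_protective = 0.5  # assumed share of genuinely protective services

expected_value = (
    share_protective * services["protective"]["value_to_user"]
    + (1 - share_protective) * services["lemon"]["value_to_user"]
)

print(f"Price a user who can't verify claims will pay: {expected_value:.2f}")
# Prints 7.00, which is below the protective service's operating cost (8.00),
# so the honest service exits. With only lemons left, expected value (and
# trust) falls further -- the same spiral Akerlof described for used cars.
```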

 

I explore this more fully below . . .

Continue reading

Better Privacy Protections Won’t Kill Free Facebook.

Once upon a time, some people developed a new technology for freely communicating with people around the world. While initially the purview of techies and hobbyists, it didn’t take long for commercial interests to notice the insanely popular new medium and rapidly move to displace the amateur stuff with professional content. But these companies had a problem. For years, people had gotten used to the idea that if you paid for the equipment to access the content, you could receive the content for free. No one wanted to pay for this new, high quality (and expensive to make) content. How could private enterprise possibly make money (other than selling equipment) in a market where people insisted on getting new content every day — heck, every minute! — for free?

 

Finally, a young techie turned entrepreneur came up with a crazy idea. Advertising! This fellow realized that if he could attract a big enough audience, he could get people to pay him so much for advertising it would more than cover the cost of creating the content. Heck, he even seeded the business by paying people to take his content, just so he could sell more advertising. Everyone thought he was crazy. What? Give away content for free? How the heck can you make money giving it away for free? From advertising? Ha! Crazy kids with their whacky technology. But over the course of a decade, this young genius built one of the most lucrative and influential industries in the history of the world.

 

I am talking, of course, about William Paley, who invented the CBS broadcast network and figured out how to make radio broadcasting an extremely profitable business. Not only did Paley prove that you could make a very nice living giving away content supported by advertising, he also demonstrated that you didn’t need to know anything about your audience beyond the most basic raw numbers and aggregate information to do it. For the first 80 or so years of its existence, broadcast advertising depended on extrapolated guesses about total aggregate viewing audience and only the most general information about the demographics of viewership. Until the recent development of real-time information collection via set-top boxes, broadcast advertising (and cable advertising) depended on survey sampling and such broad categories as “18-25 year old males” to sell targeted advertising — and made a fortune while doing it.

 

We should remember this history when evaluating claims by Facebook and others that any changes to enhance user privacy will bring the digital world crashing down on us and force everyone to start paying for content. Setting aside that some people might actually like the option of paying for services in exchange for enhanced privacy protection (I will deal with why this doesn’t happen on its own in a separate blog post), history tells us that advertising can support free content just fine without needing to know every detail of our lives to serve us unique ads tailored to an algorithm’s best guess about our likes and dislikes, based on multi-year, detailed surveillance of our every eye-muscle twitch. Despite the unfortunate tendency of social media to drive toward the most extreme arguments even at the best of times, “privacy regulation” is hardly an all-or-nothing proposition. We have a lot of room to address the truly awful problems with the collection and storage of personal information before we start significantly eating into the potential revenue of Facebook and other advertising-supported media.

 

Mind you, I’m not promising that solid and effective privacy regulation would have no impact on the future revenue-earning power of advertising. Sometimes, and again I recognize this will sound like heresy to a bunch of folks, we find that the overall public interest actually requires that we impose limits on profit-making activities to protect people. But again, and as I find myself explaining every time we debate possible regulation in any context, we don’t face some Manichean choice between a libertarian utopia and a blasted regulatory Hellscape where no business may offer a service without filling out 20 forms in triplicate. We have a lot of ways to strike a reasonable balance that provides users with real, honest-to-God enforceable personal privacy while keeping the advertising-supported digital economy profitable enough to thrive. My Public Knowledge colleague Allie Bohm has some concrete suggestions in this blog post here. I explore some broader possible theoretical dimensions of this balance below . . . .

Continue reading