Why Canada’s C-18 Isn’t Working Out As Expected.

Back at the end of June, Canada passed C-18, aka “The Online News Act,” a law designed to make Google and Facebook negotiate with news providers for linking to news. In theory, C-18 is based on the Australia News Media Bargaining Code (NMBC), which Australia adopted in 2021. It also follows the EU adoption of Article 15 as part of its 2019 Copyright Directive — although supporters of this approach don’t seem to want to talk about Article 15 much. Supporters of the “free market” approach adopted by C-18, which requires Google and Facebook to enter into negotiations with news providers (defined in various ways), argue that the NMBC has been a huge success, forcing Goog and FB to pay AU$200 million, and that this money has been spent on reporters and other news production rather than simply going into the pockets of big news conglomerates, as critics such as my employer Public Knowledge keep warning will happen. There is a fair amount of evidence to refute this rosy tale of success, but let us set that aside for the moment.

 

The supposed success of the AU NMBC is one of the biggest arguments in support of the Journalism Competition and Preservation Act and the California version. It was a major reason why supporters of Canada’s C-18 assured everyone that FB and Goog were bluffing when they said they would simply stop linking to news if Canada passed C-18. After all, they made the same threat in Australia and, other than a brief weekend when FB stopped linking to news in Australia, they ultimately went along with the AU NMBC. So hang in there, supporters of this approach keep telling Canada! Trust us, they’ll cave, because the AU NMBC is a huge success!

 

My employer Public Knowledge has an entire resource page devoted to what’s wrong with this approach in general and JCPA in particular. I’ve written about this a couple of times as well. So I won’t rehash the problems with JCPA too much below. Instead, I want to focus on this argument that C-18 (and JCPA) are just like the amazingly awesomely successful Australia approach. After a bit of digging, I found two things:

  1. C-18 is not like the NMBC in some really critical ways, which is why Goog and FB are not reacting the way they did to the NMBC. Notably, the NMBC lets Goog and FB negotiate private deals not subject to any sort of review or mandatory arbitration. C-18 closes these loopholes by requiring mandatory arbitration and transparency. Hence the very different reaction from Goog and FB.
  2. The claim that the NMBC was a “success” comes from two primary sources: the officially mandated study by the Australian Government one year after adoption of the NMBC (available here) and a follow-up report by Rod Sims, the guy who wrote the NMBC and pushed it through. As I will explain in a separate post, and as others have noticed before me (see here and here), the biggest beneficiary was News Corp, whose subsidiaries took in the bulk of the money (Crikey! I’m shocked!), followed by Nine Entertainment, the next largest media conglomerate. Next came AU’s major public broadcaster, which is the primary source of the “the money went to news production — really!” anecdote. After that, you got increasingly smaller deals for smaller outlets, with most outlets cut out altogether, and no transparency into any of this because it relies on private deals.

So if your definition of success was “Goog and FB pay off the major outlets like they were basically doing, but with bigger buckets of loot going to Rupert Murdoch & friends,” then this worked totally great! In fact, this scheme works so much like the way the cable cartel and the broadcaster cartel negotiate with each other (including providing things like C-Span so they can threaten to take it away, and squeezing out independent minority-owned networks in favor of vertically integrated ones) that it makes me want to weep tears of Cassandrafreude. Heck, it even includes an official report full of unverified industry boasting that only true believers can take seriously — just like the old FCC cable competition report!

 

As a result, as reported by Michael Geist, the Canadian government is now apparently trying to use its rulemaking to implement the act to bring it in line with what the AU NMBC actually does: make it possible for Goog and FB to make private payments to the politically powerful media to make this issue go away. Whether that will end up being enough at this point remains to be seen. I would hope that this serves as a valuable lesson in life for those trying to replicate the “success” of the NMBC (like, maybe read the source material with the jaundiced eye that comes from 20+ years of reading similar FCC reports). More importantly, the idea that you can pass a law that actually fixes the problems with the NMBC and not have Goog and FB flip out is a delusion, because it fails to understand the economics of any of this. Yes, there is a real problem with how online advertising works, but that requires real solutions that identify the real problems and address them. (Some are already out there; you can see Public Knowledge’s “Superfund for the Internet” here, and the FAQ here.)

 

I will pick apart the claim that the Australia NMBC worked — if by “worked” we mean actually fed money to small news organizations that dedicated the money to news production, rather than simply funneled money to the biggest news media — in my next blog post. For now, I will focus on why Google and Facebook are reacting in such a radically different way to C-18, and why this isn’t just a bargaining tactic. Unlike the NMBC, C-18 actually designates Google and FB as entities subject to the law and therefore obligated to participate in the government-supervised arbitration process. The NMBC — as explained below — does not require Goog or FB to actually do anything, as long as they keep the big news producers happy.

 

More below . . .

Continue reading

Breaking Down and Taking Down Trump’s Executive Order Spanking Social Media.

(A substantially similar version of this appeared first on the blog of my employer, Public Knowledge)

It’s hard to believe Trump issued this stupid Executive Order a mere week ago. Even by the standards of insanity known as the Trump Administration, the last week has reached heights of insanity that make a full frontal assault on the First Amendment with anything less than tear gas and tanks seem trivial. Nevertheless, given the vital role social media have played in publicizing the murders of George Floyd, Ahmaud Arbery, and too many others, in allowing police brutality against peaceful protesters to be broadcast live around the world from countless locations, and in allowing organizers to coordinate with one another, we need to remember how vitally important it is to protect these means of communication from being cowed and co-opted by the President and others with power. At the same time, the way others have used social media to spread misinformation and promote violence highlights that we have very real problems of content moderation we need to address.

 

In both cases, Trump’s naked effort to use his authority to threaten social media companies so they will dance to his tune undermines everything good about social media while doing nothing to address any of its serious problems. So even though (as I have written previously) I don’t think the FCC has the authority to do what Trump wants (and as I write below, I don’t think the FTC does either), it doesn’t make this Executive Order (EO) something harmless we can ignore. Below, I explain what the EO basically instructs federal agencies to do, what happens next, and what people can do about it.

 

More below . . . .

Continue reading

The Lessig Lawsuit (sung to the tune of “The Reynolds Pamphlet”).

Cyberlaw Twitter has been mildly abuzz recently over the news that Professor Larry Lessig has decided to sue the New York Times for defamation. Specifically, Lessig claims that a NYT article misrepresented this essay on Medium, which explained his position on the mess at MIT Media Lab and an anonymous donation from the late and utterly unlamented Jeffrey Epstein. In his complaint, Lessig accuses the NYT of using a deliberately misleading headline and lede, knowing that the vast majority of people do not click through to read the actual content they share with others, and that therefore this “clickbait defamation” (as Lessig calls it) was knowingly defamatory even under the exacting standard of NYT v. Sullivan.

 

Perhaps unsurprisingly, in light of both the connection with Jeff Epstein and because newspapers don’t like to be sued, folks have reacted with particularly scathing criticism of this lawsuit. Many view this as contradictory to Lessig’s previous advocacy for an open internet and information freedom. Some have gone so far as to accuse Lessig of filing a “Strategic Lawsuit Against Public Participation” (SLAPP) complaint. Meanwhile, legal Twitter has been awash with rather melodramatic proclamations of how Lessig has lost his way by suing a newspaper, even if it did screw him over bigly.

 

Perhaps it is just the sheer overwrought nonsense that brings out my contrarian streak, but I’m going to disagree with the broader tech Twitter community on this. The Lessig Lawsuit actually raises a rather interesting new question of defamation law with a high degree of relevance in the modern world. It also highlights one of the things defamation law is concerned about — the ability of people to spread false statements that have very serious impact on your life or profession with virtually no repercussions. The complicated dance between needing defamation law to protect people from harassment and potentially having their lives destroyed, and the First Amendment protections for speech and the press, has been pumped up on steroids in the information age — but we still need to remember that it is sometimes complicated. It is also important to keep in mind that while defamation law is frequently abused, it also plays a very important role in pushing back on deliberate misinformation and on using a fairly powerful megaphone to make other people’s lives miserable — such as with the lawsuit by Sandy Hook families against Alex Jones. Defamation law requires a balance, which is why we cure the problem of SLAPP suits with anti-SLAPP statutes rather than simply eliminating ye olde common law tort of defamation.

 

So I’m going to run through the Case for the Lessig Lawsuit below. To be clear, I’m not saying I agree with Lessig. Also, as someone who himself has a tendency to overshare and think things through online, I rank trying to work out complex, highly emotionally charged issues online as up there with Hamilton’s decision to publish the Reynolds Pamphlet. On the other hand, the chilling effect on open and honest discussion from “clickbait defamation” is an argument in favor of finding for Lessig here. Indeed, I have hesitated to say anything because of the “chain of association cooties” and the ancient legal principle of “why borrow trouble.” (I am so looking forward to a headline before my Senate confirmation hearing under President Warren with the title “Nominee Supported Taking Jeff Epstein Donation at MIT” — despite the fact that nothing in this blog post could reasonably suggest such a thing, and the likelihood of my being nominated for anything requiring Senate confirmation ranks just behind my winning MegaMillions.) But I am hoping that obscurity combined with mind-numbing historical and legal discussion about one of my favorite traditional actions at common law will save me from too much opprobrium. Besides, the actual legal question is interesting and highly relevant in today’s media environment, and deserves some serious discussion rather than dismissive mockery.

 

More below . . . .

Continue reading

A Slew of Minor Corrections On My Political Advertising Post From the Dean of Public Interest Telecom.

There is an expression that gets used in the Talmud to praise one’s teacher that goes: “My Rabbi is like wine and I am like vinegar,” whereupon the Rabbi actually doing the talking quotes some superior wisdom from his teacher.

 

When it comes to FCC rules governing political advertising, Andrew Jay Schwartzman is like wine and I am like vinegar. Andy knows this stuff backward and forward. So after my recent blog post on Facebook political advertising, Andy sent me a very nice note generally complimenting me on my blog post (always appreciated), but pointing out a bunch of things I either got wrong or could have said more clearly. As Andy observed in his email to me, they don’t actually impact the substance. But in the spirit of transparency, admitting error, and generally preventing the spread of misinformation, I am going to list them out here (a la Emily Ruins Adam Ruins Everything) and correct them in the actual post.

 

List of my goofs below . . . .

Continue reading

Political Advertising In Crisis: What We Should Learn From the Warren/Facebook Ad Flap.

[This is largely a reprint from a blog post originally posted on the Public Knowledge blog.]

The last week or so has highlighted the complete inadequacy of our political advertising rules in an era when even the President of the United States has no hesitation in blasting the world with unproven conspiracy theories about political rivals using both traditional broadcast media and social media. We cannot ignore the urgency of this for maintaining fair and legitimate elections, even if we realistically cannot hope for Congress to address this in a meaningful way any time soon.

 

To recap for those who have not followed closely, President Trump has run an advertisement repeating a debunked conspiracy theory about former Vice President Joe Biden (a current frontrunner in the Democratic presidential primary). Some cable programming networks such as CNN and those owned by NBCU have refused to run the advertisement. The largest social media platforms — Facebook, Google, and Twitter — have run the advertisement, as have local broadcast stations, despite requests from the Biden campaign to remove the ads as violating the platform policy against running advertising known to contain false or misleading information. The social media platforms refused to drop the ads. Facebook provided further information that it does not submit direct statements by politicians to fact checkers because they consider that “direct speech.”

 

Elizabeth Warren responded first with harsh criticism for Facebook, then with an advertisement of her own falsely stating that Zuckerberg had endorsed President Trump. Facebook responded that the Trump advertisement has run “on broadcast stations nearly 1,000 times as required by law,” and that Facebook agreed with the Federal Communications Commission that “it’s better to let voters — not companies — decide.” Elizabeth Warren responded with her own tweet that Facebook was “proving her point” that it was Facebook’s choice “whether [to] take money to promote lies. You can be in the disinformation-for-profit business or hold yourself to some standards.”

 

Quite a week, with quite a lot to unpack here. To summarize briefly, the Communications Act (not just the FCC) does indeed require broadcast stations that accept advertising from political candidates to run the advertisement “without censorship.” (47 U.S.C. §315(a).) While the law does not apply to social media (or to programming networks like NBCU or CNN), there is an underlying principle behind the law that we want to balance the ability of platforms to control their content with preventing platforms from selectively siding with one political candidate over another while at the same time allowing candidates to take their case directly to the people. But, at least in theory, broadcasters also have other restrictions that social media platforms don’t have (such as a limit on the size of their audience reach), which makes social media platforms more like content networks with greater freedom to apply editorial standards. But actual broadcast licensees — the local station that serves the viewing or listening area — effectively become “common carriers” for all “qualified candidates for public office,” and must sell to all candidates the opportunity to speak directly to the audience and charge all candidates the same rate.

 

All of this raises the real question, applicable to both traditional media and social media: How do we balance the power of these platforms to shape public opinion, the desire to let candidates make their case directly to the people, and the need to safeguard our ability to govern ourselves? Broadcast media remain powerful shapers of public opinion, but they clearly work in a very different way from social media. We need to honor the fundamental values at stake across all media, while tailoring the specific regulations to the specific media.

 

Until Congress gets off its butt and actually passes some laws we end up with two choices. Either we are totally cool with giant corporations making the decision about which political candidates get heard and whether what they have to say is sufficiently supported and mainstream and inoffensive to get access to the public via social media, or we are totally cool with letting candidates turn social media into giant disinformation machines pushing propaganda and outright lies to the most susceptible audiences targeted by the most sophisticated placement algorithms available. It would be nice to imagine that there is some magic way out of this which doesn’t involve doing the hard work of reaching a consensus via our elected representatives on how to balance competing concerns, but there isn’t. There is no magic third option by which platforms acting “responsibly” somehow substitutes for an actual law. Either we make the choice via our democratic process, or we abdicate the choice to a handful of giant platforms run by a handful of super-rich individuals. So perhaps we could spend less time shaming big companies and more time shaming our members of Congress into actually doing their freaking jobs!!

 

(OK, spend more time doing both. Just stop thinking that yelling at Facebook is gonna magically solve anything.)

I unpack this below . . .

Continue reading

Can Trump Really Have The FCC Regulate Social Media? So No.

Last week, Politico reported that the White House was considering a potential “Executive Order” (EO) to address the ongoing-yet-unproven allegations of pro-liberal, anti-conservative bias by giant Silicon Valley companies such as Facebook, Twitter, and Google. (To the extent that there is rigorous research by AI experts, it shows that social media sites are more likely to flag posts by self-identified African Americans as “hate speech” than identical wording used by whites.) Subsequent reports by CNN and The Verge have provided more detail. Putting the two together, it appears that the Executive Order would require the Federal Communications Commission to create rules limiting the ability of digital platforms to “remove or suppress content,” as well as prohibiting “anticompetitive, unfair or deceptive” practices around content moderation. The EO would also require the Federal Trade Commission to somehow open a docket and take complaints (something it does not, at present, do, or have capacity to do – but I will save that hobby horse for another time) about supposed political bias claims.

 

(I really don’t expect I have to explain why this sort of ham-handed effort at political interference in the free flow of ideas and information is a BAD IDEA. For one thing, I’ve covered this fairly extensively in chapters five and six of my book, The Case for the Digital Platform Act. Also, Chris Lewis, President of my employer Public Knowledge, explained this at length in our press release in response to the reports that surfaced last week. But for those who still don’t get it, giving an administration that regards abuse of power for political purposes as a legitimate tool of governance the power to harass important platforms for the exchange of views and information unless they promote its political allies and suppress its critics is something of a worst-case scenario for the First Amendment and democracy generally. Even the most intrusive government intervention/supervision of speech in electronic media, such as the Fairness Doctrine, had built-in safeguards to insulate the process from political manipulation. Nor are we talking about imposing common carrier-like regulations that remove the government entirely from influencing who gets to use the platform. According to what we have seen so far, we are talking about direct efforts by the government to pick winners and losers — the opposite of net neutrality. That’s not to say that viewpoint-based discrimination on speech platforms can’t be a problem — it’s just that, if it’s a problem, it’s better dealt with through the traditional tools of media policy, such as ownership caps and limits on the size of any one platform, or by using antitrust or regulation to create a more competitive marketplace with fewer bottlenecks.)

 

I have a number of reasons why I don’t think this EO will ever actually go out. For one thing, it would completely contradict everything that the FCC said in the “Restoring Internet Freedom Order” (RIFO) repealing net neutrality. As a result, the FCC would either have to reverse its previous findings that Section 230 prohibits any government regulation of internet services (including ISPs), or see the regulations struck down as arbitrary and capricious. Even if the FCC tried to somehow reconcile the two, Section 230 applies to ISPs as well. Any “neutrality” rule that applies to Facebook, Google, and Twitter would also apply to AT&T, Verizon, and Comcast.

 

But this niggles at my mind enough to ask a good old law school hypothetical. If Trump really did issue an EO similar to the one described, what could the FCC actually do under existing law?

  Continue reading

I Accidentally Write A Book On How To Regulate Digital Platforms.

Some of you may have noticed I haven’t posted that much lately. For the last few months, I’ve been finishing up a project that I hope will contribute to the ongoing debate on “What to do about ‘Big Tech'” aka, what has now become our collective freak out at discovering that these companies we thought of as really cool turn out to control big chunks of our lives. I have now, literally, written the book on how to regulate digital platforms. Well, how to think about regulating them. As I have repeatedly observed, this stuff is really hard and involves lots of tradeoffs.

 

The Case for the Digital Platform Act: Market Structure and Regulation of Digital Platforms, with a Foreword by former FCC Chair (and author of From Gutenberg to Google) Tom Wheeler, covers all the hot topics (some of which I have previewed in other blog posts). How do we define digital platforms? How do we determine if a platform is ‘dominant’? What can we do to promote competition in the platform space? How do we handle the very thorny problem of content moderation and filter bubbles? How do we protect consumers on digital platforms, and how do we use this technology to further traditional important goals such as public safety? Should we preempt the states to create one, uniform national policy? (Spoiler alert, no.) Alternatively, why do we need any sort of government regulation at all?

 

My employer, Public Knowledge, is releasing The Case for the Digital Platform Act free, under the Creative Commons Attribution-NonCommercial-ShareAlike license (v. 4.0) in partnership with the Roosevelt Institute. You can download the Foreword by Tom Wheeler here, the Executive Summary here, and the entire book here. Not since Jean Tirole’s Economics for the Common Good has there been such an amazing work of wonkdom to take to the beach for summer reading! Even better, it’s free — and we won’t collect your personal information unless you actively sign up for our mailing list!

 

Download the entire book here. You can also scroll down the page to links for just the executive summary (if you don’t want to print out all 216 pages) or just the Tom Wheeler foreword.

 

More, including spoilers!, below . . .

Continue reading

What Makes Elizabeth Warren’s Platform Proposal So Potentially Important.

As always when I talk politics, I remind folks that this blog is my personal blog, which I had well before I joined my current employer Public Knowledge. I’ve been commenting on Presidential campaigns since well before I joined PK, and I don’t run any of this stuff in front of my employer before I publish it.

 

 

Friday March 8, the Presidential campaign of Elizabeth Warren, not to be confused with the actual office of Senator Elizabeth Warren (D-MA), announced Warren’s plan for addressing the tech giants. Warren has been drawing attention to massive concentration in industry generally and tech specifically since well before it was cool, so the fact that she is out of the gate with a major proposal on this early in the 2020 campaign is no surprise. Nor is it a surprise that her proposed plan would end up breaking up, in some significant ways, the largest tech platforms.

 

What makes Warren’s contribution a potential game changer is that she goes well beyond the standard “break ’em up” rhetoric that has dominated most of the conversation to date. Warren’s proposal addresses numerous key weaknesses I have previously pointed out in relying exclusively on antitrust, and is the first significant effort to propose a plan for permanent, sustainable sector-specific regulation. As my boss at Public Knowledge, Gene Kimmelman, has observed here (and I’ve spent many tens of thousands of words explaining), antitrust alone won’t handle the problem of digital platforms and how they impact our lives. For that we need sector-specific regulation.

 

Warren is the first major Presidential candidate to advance a real proposal that goes beyond antitrust. As Warren herself observes, this proposal is just a first step to tackle one of the most serious problems that has emerged in the digital platform space: the control that a handful of giant platforms exercises over digital commerce. But Warren’s proposal is already smart in a number of important ways that have the potential to trigger the debate we need to have if we hope to develop smart regulation that will actually work to promote competition and curb consumer abuses.

 

I break these out below . . . .

Continue reading

Tumblr, Consolidation and The Gentrification of the Internet.

Tumblr recently announced it will ban adult content. Although partially in response to the discovery of a number of communities posting child pornography, and the subsequent ban of the Tumblr app from the extremely important Apple App Store, a former engineer at Tumblr told Vox the change had been in the works for months. The change was mandated by Tumblr’s corporate parent Verizon (which acquired Tumblr when it acquired Yahoo!, after Yahoo! acquired Tumblr back in 2013). Why did Verizon want to ban adult content on Tumblr after 11 years? According to the same Vox article, the new ban is an effort to attract greater advertising revenue. Tumblr has a reputation for adult content, which translates to advertisers as “porn” (unfairly, in the view of Tumblr’s supporters), and advertisers don’t like their products associated with pornography (or other types of controversial content).

 

I can’t blame Verizon for wanting to make more money from Tumblr. But the rendering of Tumblr “safe for work” (and therefore safe for more mainstream advertising) illustrates one of the often under-appreciated problems of widespread content and platform consolidation. Sites that become popular because they allow communities or content that challenge conventional standards become targets for acquisition. Once acquired, the acquirer seeks to expand the attractiveness of the platform for advertisers and more mainstream audiences. Like a gentrifying neighborhood, the authentic and sometimes dangerous character rapidly smooths out to become more palatable — forcing the original community to either conform to the new domesticated normal or try to find somewhere else to go. And, as with gentrification, while this may appear to have limited impact, the widespread trends ultimately impact us all.

 

I explain more below . . . .

Continue reading

Pai Continues Radical Deregulation Agenda. Next On The Menu — SMS Texting and Short Codes

In December 2007, Public Knowledge (joined by several other public interest groups) filed a Petition For Declaratory Ruling asking the Federal Communications Commission (FCC) to clarify that both SMS Text Messaging and short codes are “Title II” telecommunications services. Put another way, we asked the FCC to reaffirm the basic statutory language that if you use telephones and the telephone network to send information from one telephone number to another, it meets the definition of “telecommunications service.” (47 U.S.C. § 153(53)) We did this because earlier in 2007 Verizon had blocked NARAL from using its short code for political action alerts. While we thought there might be some question about short codes, it seemed pretty obvious from reading the statute that when you send “information between or among points of the user’s choosing, without change in the form or content as sent and received” (the definition of “telecommunications”) over the phone network, using phone numbers, it is a “telecommunications service.”

 

Sigh.

 

On the anniversary of the repeal of net neutrality, FCC Chair Ajit Pai now proposes another goodie for carriers – classifying both short codes and text messages as Title I “information service” rather than a Title II telecommunications service. As this is even more ridiculous than last year’s reclassification of broadband as Title I, the draft Order relies primarily on the false claim that classifying text messaging as Title I is an anti-robocall measure. As we at PK pointed out a bunch of times when the wireless carriers first raised this argument back in 2008 – this is utter nonsense. Email, the archetypal Title I information service, is (as Pai himself pointed out over here) chock full of spam. Furthermore, as Pai pointed out last month, the rise in robocalls to mobile phones has nothing to do with regulatory classification and is primarily due to the carriers not implementing existing technical fixes. (And, as the Wall St J explained in this article, robocallers have figured out how to get paid just for connecting to a live number whether or not you answer, which involves a kind of arbitrage that does not work for text messages.)

 

As if that were not enough, the FCC issued a declaratory ruling in 2015, reaffirmed in 2016, that carriers may block unwanted calls or texts despite being Title II common carriers. There is absolutely nothing, nada, zip, zero, that classifying text messages as Title II does that makes it harder to combat spam. By contrast, Title II does prevent a bunch of anticompetitive blocking of wanted text messages, which we have already seen (and which occurs fairly regularly, based on the record in the relevant FCC proceeding (08-7)). This includes blocking immigrants rights groups, blocking health alerts, blocking information about legal medical marijuana, and blocking competing services. We should therefore treat the claims by industry and the FCC that only by classifying text messaging as “information services” can we save consumers from a rising tide of spam for what they are – self-serving nonsense designed to justify stripping away the few remaining enforceable consumer rights.

 

Once again, beyond the obvious free expression concerns and competition concerns, playing cutesy games with regulatory definitions will have a bunch of unintended consequences that the draft order either shrugs off or fails to consider. Notably:

 

  1. Classifying texting as Title I will take revenue away from the Universal Service Fund (USF). This will further undermine funds to support rural broadband.

 

  2. Classifying texting as Title I disrupts the current automatic roaming framework established by the FCC in 2007.

 

  3. Classifying texting as Title I may, ironically, take it out of the jurisdiction of the Robocall statute (the Telephone Consumer Protection Act (TCPA) of 1991).

 

  4. Trashing whatever consumer protections we have for text messages, and taking one more step toward total administrative repeal of Title II. Which sounds like fun if you are a carrier, but leaves us operating without a safety net for our critical communications infrastructure (as I’ve been writing about for almost ten years).

 

I unpack all of this below.

 

Continue reading