There are two types of public events here in DC. Those designed to actually educate people and those designed so that folks can display their talking points like dancing peacocks displaying their tail feathers to attract a mate. The Federal Communications Commission and National Science Foundation had a half-day workshop on July 13 to discuss the role of AI in telecom and — specifically — what is the FCC’s role here in promoting innovation and adoption of AI tools in telecom and what are the challenges the FCC should address. Happily, this event was of the educational kind and well worth watching if the intersection of AI and telecom is your thing. You can see the video here.
To forestall the usual panic and “regulation, we hates it precious!” response, I will point out that Chairwoman Jessica Rosenworcel is a self-described optimist about the value of AI in telecom. You can see her opening remarks here. The event was very much about showcasing the positives of AI and pushing back on the uncanny valley freakout driving the current policy discussions. I was invited to speak on the first panel, which focused on the current state of research and applications and how this fits into the FCC’s overall responsibilities for spectrum management and telecommunications network management. Below are the remarks I prepared. It didn’t quite come out this way (I appear on the video at about 57:30, for those interested in how it actually came out).
Hello, my name is Harold Feld and I am Senior Vice President at Public Knowledge. We are a public advocacy organization based in Washington DC. Public Knowledge promotes freedom of expression, an open internet, and access to affordable communications tools and creative works. We work to shape policy on behalf of the public interest.
This is a very exciting time to work in AI. We at Public Knowledge have been thinking about AI policy since 2018. Some folks have been thinking about it for even longer. But our current working environment is what I call the “uncanny valley freakout.” Previously, every now and then, for the last ten years or so, some billionaire has talked about how AI is going to kill us all or some other sort of existential threat. But we really didn’t start talking seriously about AI policy until ChatGPT went public and people were suddenly like: “Oh my God! My computer is hitting on me!”
While energy is good, panic is bad. We should not make policy by panic freakout. This is especially true for what I like to call the “insanely boring technical,” or IBT, AI applications. Policies that we make for AIs that mimic human behavior and therefore freak us out are likely to be a very poor match for IBTs. We need to consider what best matches the incredibly important and potentially very useful AIs that fall into the insanely boring technical category. Happily, at the heartland of insanely boring and technical sit the NSF and the FCC. So we are well positioned here today to have this discussion.
I am not as familiar with NSF, but I can speak a little bit to the role of the FCC here. Essentially, the FCC’s role in AI policy is the same as it has always been at the cutting edge of communications technology that goes into our critical infrastructure.
1. Promote and encourage innovation while protecting the ongoing functionality of the network. This is particularly true for spectrum access, which is a unique responsibility of the FCC. There is no such thing as a “bucket of spectrum.” Any and all spectrum access begins here, at the FCC. Therefore any policy to promote use of AI tools to enhance spectrum capacity and spectrum access must begin here. Additionally, unlike in the laboratory, we have hundreds of millions of users relying on our critical communications infrastructure 24/7. The FCC must make sure that the adoption of AI tools does not disrupt the ongoing operation of communications networks (to the greatest degree possible; nothing is perfect).
2. Overcome coordination issues when necessary to maximize the public good. We like competition. We like different networks competing with each other and different vendors offering different tools. But sometimes you need to coordinate across networks to develop standards. Sometimes — especially in AI — you need to mandate sharing of information while protecting competition. Sometimes you need to make sure we have common safeguards among and between networks. If nothing else, you need to recognize that in a network of networks my problems become your problems and your problems become my problems.
3. Ensure equity and inclusion, empower users, and protect users against abuse. There is a second panel after this one that focuses exclusively on this point. But it is critically important for us to focus on these issues in the technical discussions, not after the technology is developed. Digital inclusion, user empowerment and protecting users from abuse are not an added layer of policy once the tools are built. They must be incorporated as design principles.
As Chairwoman Rosenworcel made clear, we should be optimists about the future of AI tools deployed in communications networks. Network and spectrum engineers have spent years discussing ways that AI tools can improve network efficiency, improve spectrum capacity multi-fold, and improve the reliability of our networks. The FCC should certainly educate itself about these tools and encourage their development. At the same time, we must distinguish between optimism and reckless cheerleading. Two examples from recent history demonstrate the need to consider the implications of new tools carefully as part of the technical discussions and illustrate the difference between optimism and reckless cheerleading.
First, the mandatory inclusion of GPS in cell phones around 2007-ish. Before cell phones became ubiquitous, 911 providers knew exactly where to find the source of a call. A call came from a fixed location, so you sent a first responder to the phone’s location. That changed with mobile phones. Tower triangulation at the time offered far less precise location than GPS, so the FCC mandated inclusion of GPS in mobile phones. No one thought about the privacy implications of this. Why? Because no one in the room was thinking about privacy. They were thinking ‘how do we get ambulances and police cars to the right place in time?’ Additionally, at the time, the technology was still largely Blackberry-based. The idea of an ‘app economy’ that sucked up sensitive geolocation data and reported it back to the mothership (which then sold it to whoever wanted it) was not uppermost in people’s thoughts outside the privacy advocacy community. Had we had some consumer advocates in the room to say ‘hey, what about protecting people’s privacy?’ we might be living in a very different world.
Second, the introduction of IP technology and the negative impact on network reliability. IP technology allowed the carriers to break up the traditional network into various subnetworks with competing providers, which enabled tremendous new competition and innovation in services. This worked well for a while, until we started to have “sunny day outages” — outages unrelated to any kind of weather or outside condition. These outages can spread across multiple states, impacting millions of people. It can take hours, sometimes days, to figure out where the problem is and correct it. Why? Because the networks had grown increasingly complex and were managed by multiple different operators, none of whom had specific responsibility for making sure that the network as a whole continued to operate. Providers discovered they no longer understood how their phone networks actually worked, or even how to spot problems when they emerged, until those problems took down a sufficiently large portion of the network to attract attention.
How did that happen? Where was the FCC? Reckless cheerleaders kept the FCC from doing its job. People were like ‘no! The FCC should not even look at how IP is impacting the network! If the FCC even thinks about informing itself, never mind exercising any oversight, it will scare away the innovation gods!’ So we had to have things go wrong, and only then — over the ongoing objections of many reckless cheerleaders — did the FCC step in, educate itself, and set some basic standards ensuring that people had specific responsibilities for the smooth operation of the network. Because when something is everyone’s general responsibility, it is no one’s actual responsibility.
I’d like to suggest the following specific areas of FCC focus on the potential and responsibility for AI in spectrum and communications networks generally.
Spectrum access and interference mitigation. How will the ability of AI tools to enhance spectrum access and mitigate potential interference be considered by the Office of Engineering and Technology (OET) and the Wireless Telecommunications Bureau (WTB) — particularly in light of the new policy statement? Additionally, although it is primarily the responsibility of the NTIA, how will these new tools impact the National Spectrum Strategy?
Network reliability, competition and stability. How will we know where problems are? Whose responsibility is it to identify problems and correct them? Will this require the FCC to develop rules for cross-network cooperation and coordination?
Privacy and other forms of consumer protection by design. All these tools require data for training and will be involved in ongoing monitoring and direction of people’s communications. A network must know precisely where to deliver a message, but it does not need to be able to report that information with specificity to those who do not need to know this data. A network can vastly improve efficient operation via AI tools, but the same tools can be used to prioritize traffic for business reasons rather than for reasons of network efficiency. It is not enough to prohibit certain sorts of bad behavior, we should do what we can to design the networks to make abuse as difficult as possible.
Finally, I want to thank Chairwoman Rosenworcel for including the consumer perspective at the tech level. Not merely at this event, but consistently throughout her tenure as Chair. Doing so helps everyone in the process maintain the optimism we need to embrace the potential of these new tools, without falling into the trap of mindless cheerleading.
I look forward to this and future conversations.