AI Policy and the Uncanny Valley Freakout

We have been debating the issues around artificial intelligence and AI governance, on and off, for some time now. Here at Public Knowledge, we published our first white paper on the subject in 2018. But the last few months have seen an explosion of interest and a sudden consensus that powerful AI tools require some sort of regulation. Hardly a day goes by without a new editorial calling for regulation of AI, a high-profile story on the potential threat of AI to jobs (ranging from creative workers such as Hollywood writers and musicians to boring lawyers), a story on new AI threats to consumers, or even a piece on how AI poses an existential threat to our democracy. A recent Senate hearing produced a rare bipartisan consensus on the need for new laws and federal regulation to mitigate the threats posed by AI technology. In response, technology giants such as Google and Microsoft have published proposed codes of conduct and regulatory regimes that include not merely the usual calls for self-regulation, but actual invitations for government regulation as well.

Or, to quote my colleague Sara Collins, “AI is having a moment.” Unfortunately, that moment turns out to be a total uncanny valley freakout. Yes, there are serious issues here, and some very smart folks have been talking about them for years. But suddenly a chatbot starts hitting on people and it’s all “AAAAAGGHHH!!! We are doomed!! Doomed!!! The AIs are coming for us, and none of them look like Scarlett Johansson!” One does not have to agree completely with Adam Conover that AI hype is total baloney to recognize the signs of a full-blown irrational panic. And, as a general rule, panicked freakouts rarely lead to good policy outcomes.
