Surely AI Safety Legislation Is A No-Brainer

Radical Silicon Valley libertarianism is forcing us all to take on unnecessary risks, and the new technology is badly in need of regulation.

One thing I’ve changed some of my opinions about in the last few years is AI. I used to think that most of the claims made about its radically socially disruptive potential (both positive and negative) were hype. That was in part because they often came from the same people who made massively overstated claims about cryptocurrency. Some also resembled science fiction stories, and I think we should prioritize things we know to be problems in the here and now (climate catastrophe, nuclear weapons, pandemics) over purely speculative potential disasters. Given that Silicon Valley companies are constantly promising new revolutions, I try to always remember that there is a tendency for those with strong financial incentives to spin modest improvements, or even total frauds, as epochal breakthroughs.

But as I’ve actually used some of the various technologies lumped together as “artificial intelligence,” over and over my reaction has been: “Jesus, this stuff is actually very powerful… and this is only the beginning.” I think many of my fellow leftists tend to have a dismissive attitude toward AI’s capabilities, delighting in its failures (ChatGPT’s basic math errors and “hallucinations,” the ugliness of much AI-generated “art,” badly rendered hands from image generators, etc.). There is even a certain desire for AI to be bad at what it does, because nobody likes to think that so much of what we do on a day-to-day basis is capable of being automated. But if we are being honest, the kinds of technological breakthroughs we are seeing are shocking. If I’m preparing to debate someone, I can ask ChatGPT to play the role of my opponent, and it will deliver a virtually flawless performance. I remember not too many years ago when chatbots were so laughably inept that it was easy to believe one would never pass the Turing Test. Now, ChatGPT not only aces the test but is better at being “human” than most humans. And, again, this is only the start.

Personally, I have a lot of fun with generative AI. I use it for goofy stuff, like making fun pictures with image generators or producing parody radio stations. I enjoy writing lyrics and then having an AI music generator perform them in the style of the Beach Boys (or just having it sing "f*ck fascism" over and over), or feeding The Communist Manifesto into a voice generator and making Donald Trump read the whole thing. I even did a whole book, Echoland, which imagined what a future in which human beings put AI to good use might look like. My personal experience of new generative AI programs has been that I find them immensely liberating to my creativity. I thought I’d hate AI, but here we are and I kind of love it.

But I’m also terrified by it, and I don’t see why the alarm isn’t more widespread. The deepfake voice generators alone are quite horrible. You can upload about twenty seconds of a person’s voice and then you’re able to replicate it almost perfectly. It’s all fun and games when you’re using that capacity to make fun of Joe Rogan, but of course scammers immediately realized you could use it to impersonate people’s loved ones and trick them into giving you money by pretending their children had been in a horrible accident. There are certainly highly beneficial applications; for instance, AI voice cloning is giving people with ALS the ability to use their voices again, which is kind of miraculous.

The ability to replicate more and more of the functions of human intelligence on a machine is both very exciting and incredibly risky. Personally, I am deeply alarmed by military applications of AI in an age of great power competition. The autonomous weapons arms race strikes me as one of the most dangerous things happening in the world today, and it’s virtually undiscussed in the press. The conceivable harms from AI are endless. If a computer can replicate the capacities of a human scientist, it will be easy for rogue actors to engineer viruses that could cause pandemics far worse than COVID. They could build bombs. They could execute massive cyberattacks. From deepfake porn to the empowerment of authoritarian governments to the possibility that badly programmed AI will inflict some catastrophic new harm we haven’t even considered, the rapid advancement of these technologies is clearly hugely risky. That means we are being put at risk by institutions over which we have no control.

Newsom Vetoes a Safety Bill

In California, Gavin Newsom has just vetoed SB 1047, a bill that would have put some safety guardrails on AI development. The problem is that a lot of AI is being developed by for-profit companies whose short-term incentive to maximize returns may lead them to develop tools that cause widespread social harms (for which the companies themselves will not pay the price). It’s therefore critically important for the state to step in and make sure that AI is developed safely.

As Garrison Lovely writes in a comprehensive review for Jacobin, the California legislation would have put in place some basic measures to ensure that AI companies developed their products responsibly. They would have been liable if their products caused costly catastrophes, and whistleblowers would have been protected. But Newsom, repeating the usual free-market nostrums about stifling innovation, vetoed the bill, meaning, as Lovely writes, “that there will be no mandatory safety protocols for the development of the largest and most powerful AI models.” In doing so, Newsom “hand-delivered a massive victory to Big Tech, at the expense of democracy and every person who might someday be harmed by AI models built by companies locked in a fierce race for primacy and profit.” It is not the first time that Newsom has put business interests above the public interest.

Support for AI safety legislation does not break down along neat left-right lines. Lovely notes that this bill attracted “strange bedfellows”: labor unions, Jane Fonda, Elon Musk, a number of AI company employees, and the effective altruists were in favor of it, while Google, Nancy Pelosi, and Ro Khanna were against it. But it’s clear that companies that stand to make windfall profits from AI do not want its development to be slowed down, even by relatively mild regulation.

Lovely quotes a libertarian-leaning researcher arguing that it’s simply too early to know whether this kind of legislation is necessary. But that’s a bad way to think about risk. We don’t know how catastrophic the damage caused by uncontrolled AI development could be, and that’s precisely why we should err on the side of regulating too much rather than too little. I don’t think anyone can say for certain how likely it is that AI will be used to, for example, engineer a virus that wipes out human civilization. Maybe it’s quite unlikely. But given the scale of the risk, I don’t want to settle for quite unlikely. We need the chance of that happening to be as close to zero as possible. And I don’t think we should care if, in the process, we stifle a bit of innovation, or force companies to take five years to do what could have taken one. The stakes here are far too high, precisely because we don’t know what this technology is going to do to us. I’ve been on the record as a skeptic of the hypothesis that a rogue AI, sufficiently capable of improving its own intelligence, could turn on humanity and drive us to extinction. But you don’t need to think that scenario is especially likely to think that we should at least make sure there’s always an “off switch” built into intelligent machines. The costs of safety are so low compared to the costs of the worst outcomes that it’s an absolute no-brainer to write into law the kinds of basic AI safeguards that SB 1047 contained.

I’m more excited by AI than a lot of people in my orbit, but I’m also deeply alarmed by the fact that we are heading into a future with a virtually unregulated, incredibly powerful technology being developed by companies that have no incentive to think long-term about the potentially disastrous consequences of their actions. Unfortunately, it seems that Democratic politicians like Newsom and Pelosi are not interested in changing that situation. We may look back on this moment and wonder how they could have been so foolish.
