What’s changed since the “pause AI” letter six months ago?
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
Last Friday marked six months since the Future of Life Institute (FLI), a nonprofit focused on existential risks from artificial intelligence, shared an open letter signed by prominent figures such as Elon Musk, Steve Wozniak, and Yoshua Bengio. The letter called for tech companies to “pause” the development of AI language models more powerful than OpenAI’s GPT-4 for six months.
Well, that didn’t happen, obviously.
I sat down with MIT professor Max Tegmark, the founder and president of FLI, to take stock of what has happened since. Here are highlights of our conversation.
On shifting the Overton window on AI risk: Tegmark told me that in conversations with AI researchers and tech CEOs, it had become clear that there was a huge amount of anxiety about the existential risk AI poses, but nobody felt they could speak about it openly “for fear of being ridiculed as Luddite scaremongers.” “The key goal of the letter was to mainstream the conversation, to move the Overton window so that people felt safe expressing these concerns,” he says. “Six months later, it’s clear that part was a success.”
But that’s about it: “What’s not great is that all the companies are still going full steam ahead and we still have no meaningful regulation in America. It looks like US policymakers, for all their talk, aren’t going to pass any laws this year that meaningfully rein in the most dangerous stuff.”
Why the government should step in: Tegmark is lobbying for an FDA-style agency that would enforce rules around AI, and for the government to force tech companies to pause AI development. “It’s also clear that [AI leaders like Sam Altman, Demis Hassabis, and Dario Amodei] are very concerned themselves. But they all know they can’t pause alone,” Tegmark says. Pausing alone would be “a disaster for their company, right?” he adds. “They just get outcompeted, and then that CEO will be replaced with someone who doesn’t want to pause. The only way the pause comes about is if the governments of the world step in and put in place safety standards that force everyone to pause.”
So how about Elon … ? Musk signed the letter calling for a pause, only to set up a new AI company called X.AI to build AI systems that would “understand the true nature of the universe.” (Musk is an advisor to the FLI.) “Obviously, he wants a pause just like a lot of other AI leaders. But as long as there isn’t one, he feels he has to also stay in the game,” Tegmark says.
Why he thinks tech CEOs have the goodness of humanity in their hearts: “What makes me think that they really want a good future with AI, not a bad one? I’ve known them for many years. I talk with them regularly. And I can tell even in private conversations—I can sense it.”
Response to critics who say focusing on existential risk distracts from current harms: “It’s crucial that those who care a lot about current problems and those who care about imminent upcoming harms work together rather than infighting. I have zero criticism of people who focus on current harms. I think it’s great that they’re doing it. I care about those things very much. If people engage in this kind of infighting, it’s just helping Big Tech divide and conquer all those who want to really rein in Big Tech.”
Three mistakes we should avoid now, according to Tegmark:
1. Letting the tech companies write the legislation.
2. Turning this into a geopolitical contest of the West versus China.
3. Focusing only on existential threats or only on current harms. We have to realize they’re all part of the same threat of human disempowerment, and we all have to unite against it.
Deeper Learning
These new tools could make AI vision systems less biased
Computer vision systems are everywhere. They help classify and tag images on social media feeds, detect objects and faces in pictures and videos, and highlight relevant elements of an image. However, they are riddled with biases, and they’re less accurate when the images show Black or brown people and women.
And there’s another problem: the current ways researchers find biases in these systems are themselves biased, sorting people into broad categories that don’t properly account for the complexity that exists among human beings.
New tools could help: Sony has a tool—shared exclusively with MIT Technology Review—that expands the skin-tone scale into two dimensions, measuring both skin color (from light to dark) and skin hue (from red to yellow). Meta has built a fairness evaluation system called FACET that takes geographic location and lots of different personal characteristics into account, and it’s making its data set freely available. Read more from me here.
Bits and Bytes
Now you can chat with ChatGPT using your voice
The new feature is part of a round of updates for OpenAI’s app, including the ability to answer questions about images. You can also choose one of five lifelike synthetic voices and have a conversation with the chatbot as if you were making a call, getting responses to your spoken questions in real time. (MIT Technology Review)
Getty Images promises its new AI contains no copyrighted art
Just as authors including George R.R. Martin have filed yet another copyright lawsuit against AI companies, Getty Images is promising that its new AI system contains no copyrighted art and that it will cover customers’ legal fees if they end up in lawsuits over images it generates. (MIT Technology Review)
A Disney director tried—and failed—to use an AI Hans Zimmer to create a soundtrack
When Gareth Edwards, the director of Rogue One: A Star Wars Story, was thinking about the soundtrack for his upcoming movie about artificial intelligence, The Creator, he decided to try composing it with AI—and got “pretty damn good” results. Spoiler alert: The human Hans Zimmer won in the end. (MIT Technology Review)
How AI can help us understand how cells work—and help cure diseases
A virtual cell modeling system, powered by AI, will lead to breakthroughs in our understanding of diseases, argue Priscilla Chan and Mark Zuckerberg. (MIT Technology Review)
DeepMind is using AI to pinpoint the causes of genetic disease
Google DeepMind says it’s trained an artificial-intelligence system that can predict which DNA variations in our genomes are likely to cause disease—predictions that could speed diagnosis of rare disorders and possibly yield clues for drug development. (MIT Technology Review)
Deepfakes of Chinese influencers are livestreaming 24/7
Since last year, a swarm of Chinese startups and major tech companies have been creating deepfake avatars for e-commerce livestreaming. With just a few minutes of sample video and $1,000 in costs, brands can clone a human streamer to work round the clock. (MIT Technology Review)
AI-generated images of naked children shock the Spanish town of Almendralejo
An absolutely horrifying example of real-life harm posed by generative AI. In Spain, AI-generated nude images of young girls have been circulating on social media. The pictures were created from clothed photos of the girls taken from their own social media accounts. Depressingly, at the moment there is very little we can do about it. (BBC)
How the UN plans to shape the future of AI
There’s been a lot of chat about the need to set up an international organization that would govern AI. The UN seems like the obvious choice, and the organization’s leadership wants to step up to the challenge. This is a nice piece looking at what the UN has cooking, and the challenges that lie ahead. (Time)
Amazon is investing up to $4 billion in Anthropic
Amazon is investing up to $4 billion in the AI safety startup, according to this announcement. The move will give Amazon access to Anthropic’s powerful AI language model Claude 2, which should help it keep up with competitors Google, Meta, and Microsoft.