How US AI policy might change under Trump

This story is from The Algorithm, our weekly newsletter on AI. To get it in your inbox first, sign up here.

President Biden first witnessed the capabilities of ChatGPT in 2022, during an Oval Office demo from Arati Prabhakar, the director of the White House Office of Science and Technology Policy. That demo set a slew of events in motion and encouraged President Biden to support the US's AI sector while managing the safety risks that would come with it.

Prabhakar was a key player in shaping the president's 2023 executive order on AI, which sets rules for tech companies to make AI safer and more transparent (though it relies on voluntary participation). Before serving in President Biden's cabinet, she held a number of government roles, from advocating for domestic production of semiconductors to heading up DARPA, the Pentagon's famed research agency.

I had a chance to sit down with Prabhakar earlier this month. We discussed AI risks, immigration policies, the CHIPS Act, the public’s faith in science, and how it all may change under Trump.

The change of administrations comes at a chaotic time for AI. Trump's team has not presented a clear thesis on how it will handle artificial intelligence, but plenty of people in it want to see the executive order dismantled. Trump said as much in July, endorsing the Republican Party platform, which says the executive order "hinders AI innovation and imposes Radical Leftwing ideas on the development of this technology." Powerful industry players, like the venture capitalist Marc Andreessen, have said they support that move. Complicating that narrative, however, is Elon Musk, who has for years expressed fears about doomsday AI scenarios and has supported some regulations aiming to promote AI safety. No one knows exactly what's coming next, but Prabhakar has plenty of thoughts about what's happened so far.

For her insights on the most important AI developments of the last administration, and what might happen in the next one, read my conversation with Arati Prabhakar.


Now read the rest of The Algorithm

Deeper Learning

These AI Minecraft characters did weirdly human stuff all on their own

The video game Minecraft is increasingly popular as a testing ground for AI models and agents. The startup Altera recently embraced that trend: it unleashed up to 1,000 software agents at a time, powered by large language models (LLMs), to interact with one another. Given just a nudge through text prompting, they developed a remarkable range of personality traits, preferences, and specialist roles, with no further input from their human creators. They even spontaneously made friends, invented jobs, and spread religion.

Why this matters: AI agents can execute tasks and exhibit autonomy, taking initiative in digital environments. This is another example of how the behaviors of such agents, with minimal prompting from humans, can be both impressive and downright bizarre. The people working to bring agents into the world have bold ambitions for them. Altera's founder, Robert Yang, sees the Minecraft experiments as an early step toward large-scale "AI civilizations" with agents that can coexist and work alongside us in digital spaces. "The true power of AI will be unlocked when we have truly autonomous agents that can collaborate at scale," says Yang. Read more from Niall Firth.

Bits and Bytes

OpenAI is exploring advertising

Building and maintaining some of the world’s leading AI models doesn’t come cheap. The Financial Times has reported that OpenAI is hiring advertising talent from big tech rivals in a push to increase revenues. (Financial Times)

Landlords are using AI to raise rents, and cities are starting to push back

RealPage is a tech company that collects proprietary lease information on how much renters are paying and then uses an AI model to suggest to landlords how much to charge for apartments. Eight states and many municipalities have joined antitrust suits against the company, saying it constitutes an "unlawful information-sharing scheme" that inflates rental prices. (The Markup)

The way we measure progress in AI is terrible

Whenever new models come out, the companies that make them advertise how they perform in benchmark tests against other models. There are even leaderboards that rank them. But new research suggests these measurement methods aren’t helpful. (MIT Technology Review)

Nvidia has released a model that can create sounds and music

AI tools that make music and audio have received less attention than their counterparts that create images and video, except when the companies that make them get sued. Now, the chipmaker Nvidia has entered the space with a tool that creates impressive sound effects and music. (Ars Technica)

Artists say they leaked OpenAI’s Sora video model in protest

Many artists are outraged at the tech company for training its models on their work without compensating them. Now, a group of artists who were beta testers for OpenAI’s Sora model say they leaked it out of protest. (The Verge)
