Generative AI risks concentrating Big Tech’s power. Here’s how to stop it
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
If regulators don’t act now, the generative AI boom will concentrate Big Tech’s power even further. That’s the central argument of a new report from research institute AI Now. And it makes sense. To understand why, consider that the current AI boom depends on two things: large amounts of data, and enough computing power to process it.
Both of these resources are only really available to Big Tech companies. And although some of the most exciting applications, such as OpenAI's chatbot ChatGPT and Stability.AI's image-generation AI Stable Diffusion, are created by startups, they rely on deals with Big Tech that give them access to those companies' vast data and computing resources.
“A couple of big tech firms are poised to consolidate power through AI, rather than democratize it,” says Sarah Myers West, managing director of the AI Now Institute, a research nonprofit.
Right now, Big Tech has a chokehold on AI. But Myers West believes we’re actually at a watershed moment. It’s the start of a new tech hype cycle, and that means lawmakers and regulators have a unique opportunity to ensure the next decade of AI technology is more democratic and fair.
What separates this tech boom from previous ones is that we have a better understanding of all the catastrophic ways AI can go awry. And regulators everywhere are paying close attention.
China just unveiled a draft bill on generative AI calling for more transparency and oversight, while the European Union is negotiating the AI Act, which will require tech companies to be more transparent about how generative AI systems work. It’s also planning a bill to make them liable for AI harms.
The US has traditionally been reluctant to regulate its tech sector. But that’s changing. The Biden administration is seeking input on ways to oversee AI models such as ChatGPT, for example by requiring tech companies to produce audits and impact assessments, or by requiring AI systems to meet certain standards before they are released. It’s one of the most concrete steps the administration has taken to curb AI harms.
Meanwhile, Federal Trade Commission (FTC) chair Lina Khan has also highlighted Big Tech’s data and computing power advantage, and has vowed to ensure competition in the AI industry. The agency has dangled the threat of antitrust investigations and crackdowns on deceptive business practices.
This new focus on the AI sector is partly influenced by the fact that many members of the AI Now Institute, including Myers West, have spent time at the FTC, bringing technical expertise to the agency.
Myers West says her secondment taught her that AI regulation doesn’t have to start from a blank slate. Instead of waiting for AI-specific regulations, such as the EU’s AI Act, which will take years to put into place, regulators should ramp up enforcement of existing data protection and competition laws.
Because AI as we know it today is largely dependent on massive amounts of data, data policy is also artificial intelligence policy, says Myers West.
Case in point: ChatGPT has faced intense scrutiny from European and Canadian data protection authorities, and has been blocked in Italy for allegedly scraping personal data off the web illegally and misusing it.
The call for regulation is not just coming from government officials. Something interesting has happened: after decades of fighting regulation tooth and nail, today most tech companies, including OpenAI, claim they welcome it.
The big question everyone’s still fighting over is how AI should be regulated. Tech companies claim they support regulation, but they’re still pursuing a “release first, ask questions later” approach when it comes to launching AI-powered products. They are rushing to release image- and text-generating AI models as products despite these models’ major flaws: they make up nonsense, perpetuate harmful biases, infringe copyright, and contain security vulnerabilities.
The White House’s proposal to tackle AI accountability with post-launch measures such as algorithmic audits is not enough to mitigate AI harms, AI Now’s report argues. Stronger, swifter action is needed to ensure companies first prove their models are fit for release, Myers West says.
“We should be very wary of approaches that do not put the burden on companies. There are a lot of approaches to regulation that essentially put the onus on the broader public and on regulators to root out AI-enabled harms,” says Myers West.
And importantly, Myers West says, regulators need to take action swiftly.
“There needs to be consequences for when [tech companies] violate the law.”
How AI is helping historians better understand our past
This is cool. Historians have started using machine learning to examine historical documents smudged by centuries spent in mildewed archives. They’re using these techniques to restore ancient texts, and making significant discoveries along the way.
Connecting the dots: Historians say the application of modern computer science to the distant past helps draw broader connections across the centuries than would otherwise be possible. But there is a risk that these computer programs introduce distortions of their own, slipping bias or outright falsifications into the historical record. Read more from Moira Donovan here.
Bits and Bytes
Google is overhauling Search to compete with AI rivals
Threatened by Microsoft’s relative success with AI-powered Bing search, Google is building a new search engine that uses large language models, and is upgrading its existing search engine with AI features. It hopes the new search engine will offer users a more personalized experience. (The New York Times)
Elon Musk has created a new AI company to rival OpenAI
Over the past few months, Musk has been trying to hire researchers to join his new AI venture, X.AI. Musk was one of OpenAI’s co-founders, but was ousted in 2018 after a power struggle with CEO Sam Altman. Musk has criticized OpenAI’s chatbot ChatGPT for being politically biased, and said he wants to create “truth-seeking” AI models. What does that mean? Your guess is as good as mine. (The Wall Street Journal)
Stability.AI is at risk of going under
Stability.AI, the creator of the open-source image-generating AI model Stable Diffusion, just released a new version of its model that is slightly more photorealistic. But the business is in trouble. It’s burning through cash fast, struggling to generate revenue, and staff are losing faith in the company’s CEO. (Semafor)
Meet the world’s worst AI program
Martin, a bot on Chess.com depicted as a turtleneck-wearing Bulgarian man with bushy eyebrows, a thick beard, and a slightly receding hairline, is designed to be absolutely awful at chess. While other AI bots are programmed to dazzle, Martin is a reminder that even dumb AI systems can still surprise, delight, and teach us things. (The Atlantic)