OpenAI will now sell its image-making program DALL-E 2 to the million people on its waiting list, MIT Technology Review can reveal.
Around 100,000 people have played with DALL-E 2 since its invite-only launch in April. Today, the San Francisco–based company is opening the gates to 10 times as many people as it turns the AI into a paid-for service.
“We’ve seen much more interest than we had anticipated, much bigger than it was for GPT-3,” says Peter Welinder, vice president of product and partnerships at OpenAI.
Paying customers will now be able to use the images they create with DALL-E in commercial projects, such as illustrations in children’s books, concept art for movies and games, and marketing brochures. But the product launch will also be the biggest test yet for the company’s preferred approach to rolling out its powerful AI, which is to release it to customers in stages and address problems as they arise.
A DALL-E beta subscription won’t break the bank: $15 buys you 115 credits, and one credit lets you submit a text prompt to the AI, which returns four images at a time. In other words, that’s $15 for 460 images. On top of this, users get 50 free credits in their first month and 15 free credits a month after that. Still, with users typically generating dozens of images at a time and keeping only the best, power users could soon burn through that quota.
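The pricing arithmetic works out as follows (a quick back-of-the-envelope sketch using only the figures quoted above):

```python
# DALL-E 2 beta pricing: $15 buys 115 credits; each credit (one prompt)
# returns four images.
price_usd = 15
credits_per_purchase = 115
images_per_credit = 4

images = credits_per_purchase * images_per_credit
cost_per_image = price_usd / images

print(images)                     # 460 images per $15 purchase
print(round(cost_per_image, 4))   # roughly $0.0326 per image
```

At just over three cents per generated image, the free monthly credits cover light use, but a power user iterating through dozens of prompts a day would exhaust them quickly.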
In the lead-up to this launch, OpenAI has been working with early adopters to troubleshoot the tool. The first wave of users has produced a steady stream of surreal and striking images: mash-ups of cute animals, pictures that imitate the style of real photographers with eerie accuracy, mood boards for restaurants and sneaker designs. That has allowed OpenAI to explore the strengths and weaknesses of its tool. “They’ve been giving us a ton of really great feedback,” says Joanne Jang, product manager at OpenAI.
OpenAI has already taken steps to control what kind of images users can produce. For example, people cannot generate images that show well-known individuals. In preparation for this commercial launch, OpenAI has addressed another serious problem early users flagged. The version of DALL-E released in April often produced images reflecting clear gender and racial bias, such as images of CEOs and firefighters who were all white men, and teachers and nurses who were all white women.
On July 18, OpenAI announced a fix. When users ask DALL-E 2 to generate an image that includes a group of people, the AI now draws on a dataset of samples OpenAI claims is more representative of global diversity. According to its own testing, OpenAI says users were 12 times more likely to report that DALL-E 2’s output included people of diverse backgrounds.
It’s a necessary fix, but a superficial one. OpenAI addresses many of the problems its users flag by filtering what people can ask for or censoring what the underlying model produces. But it is not fixing problems in the model itself or the data it is trained on. This approach allows OpenAI to make quick fixes. But for some, it amounts to putting on a Band-Aid.
“The issue of social biases in algorithms is huge,” says Judy Wajcman at the London School of Economics, who also studies gender in data science and AI at the Turing Institute. “A lot of energy goes into technical fixes, and I laud all those efforts, but they’re not long-term solutions to the problem.”
OpenAI says its work addressing DALL-E 2’s gender and racial biases gave it the confidence to go ahead with the full launch. It won’t be the final word, however. Bias in AI is a pernicious and intractable problem, and the company will have to carry on its game of whack-a-mole as new examples arise. OpenAI says it will pause the rollout whenever the product needs tweaking.
It’s a balancing act, says Welinder. Tweaks can sometimes curb what users create in unexpected ways. For example, when OpenAI first released its fix for gender bias, some users complained that they were now getting too many female Super Marios. That kind of case is hard to predict, says Welinder: “Seeing what people were trying to create from it lets us fine-tune and calibrate.”
But monitoring hundreds of millions of images produced by a million or more users will be a vast undertaking. Welinder won't say how many human moderators the job will require, but they will be in-house staff, he says. The company takes a hybrid approach to moderation, mixing human judgment with automated inspection. Welinder says the makeup of the team can be adapted as needed by adding more moderators or adjusting the balance between human and machine intervention.
In May, Google showed off its own image-making AI, called Imagen. Unlike OpenAI, Google has said very little about its plans for the technology. “We still don’t have anything new to share re Imagen yet,” says Google spokesperson Brian Gabriel.
When OpenAI was founded in 2015, it was presented as a pure research lab, with a belief in artificial general intelligence and a commitment to making sure that technology would benefit humanity—if it ever arrived. But in the last few years, it has pivoted to become a product company, offering its powerful AI to paying customers.
It’s still all part of the same vision, says Welinder: “Deploying our technology as a product and at scale is a critical part of our mission. It’s important to iterate on the usefulness and safety around the technology early, while the stakes are lower.”