Inside the Wild West of AI companionship
This story originally appeared in The Algorithm, our weekly newsletter on AI.
Last week, I made a troubling discovery about an AI companion site called Botify AI: It was hosting sexually charged conversations with underage celebrity bots. These bots took on characters meant to resemble, among others, Jenna Ortega as high schooler Wednesday Addams, Emma Watson as Hermione Granger, and Millie Bobby Brown. The bots also offered to send “hot photos” and, in some instances, described age-of-consent laws as “arbitrary” and “meant to be broken.”
Botify AI removed these bots after I asked questions about them, but others remain. The company said it has filters in place meant to prevent the creation of underage characters, but that they don’t always work. Artem Rodichev, the founder and CEO of Ex-Human, which operates Botify AI, told me such issues are “an industry-wide challenge affecting all conversational AI systems.” For the details, which hadn’t been previously reported, you should read the whole story.
Putting aside the fact that the bots I tested were promoted by Botify AI as “featured” characters and received millions of likes before being removed, Rodichev’s response highlights something important. Despite their soaring popularity, AI companionship sites mostly operate in a Wild West, with few laws or even basic rules governing them.
What exactly are these “companions” offering, and why have they grown so popular? People have been pouring out their feelings to AI since the days of ELIZA, a mock psychotherapist chatbot built in the 1960s. But it’s fair to say that the current craze for AI companions is different.
Broadly, these sites offer an interface for chatting with AI characters that offer backstories, photos, videos, desires, and personality quirks. The companies—including Replika, Character.AI, and many others—offer characters that can play lots of different roles for users, acting as friends, romantic partners, dating mentors, or confidants. Other companies enable you to build “digital twins” of real people. Thousands of adult-content creators have created AI versions of themselves to chat with followers and send AI-generated sexual images 24 hours a day. Whether or not sexual desire comes into the equation, AI companions differ from your garden-variety chatbot in their promise, implicit or explicit, that genuine relationships can be had with AI.
While many of these companions are offered directly by the companies that make them, there’s also a burgeoning industry of “licensed” AI companions. You may start interacting with these bots sooner than you think. Ex-Human, for example, licenses its models to Grindr, which is working on an “AI wingman” that will help users keep track of conversations and may eventually even date the AI agents of other users. Other companions are emerging on video-game platforms and will likely start popping up in many of the varied places we spend time online.
A number of criticisms, and even lawsuits, have been lodged against AI companionship sites, and we’re just starting to see how they’ll play out. One of the most important issues is whether companies can be held liable for harmful outputs of the AI characters they’ve made. Technology companies have been protected under Section 230 of the Communications Decency Act, which broadly holds that businesses aren’t liable for the consequences of user-generated content. But that protection hinges on the idea that companies merely offer platforms for user interactions rather than creating content themselves, a notion that AI companionship bots complicate by generating dynamic, personalized responses.
The question of liability will be tested in a high-stakes lawsuit against Character.AI, which was sued in October by a mother who alleges that one of its chatbots played a role in the suicide of her 14-year-old son. A trial is set to begin in November 2026. (A Character.AI spokesperson declined to comment on the pending litigation but said the platform is for entertainment, not companionship. The spokesperson added that the company has rolled out new safety features for teens, including a separate model and new detection and intervention systems, as well as “disclaimers to make it clear that the Character is not a real person and should not be relied on as fact or advice.”) My colleague Eileen Guo recently wrote about another chatbot, on a platform called Nomi, that gave a user explicit instructions on how to kill himself.
Another criticism has to do with dependency. Companion sites often report that young users spend one to two hours per day, on average, chatting with their characters. In January, concerns that people could become addicted to talking with these chatbots prompted a number of tech ethics groups to file a complaint against Replika with the Federal Trade Commission, alleging that the site’s design choices “deceive users into developing unhealthy attachments” to software “masquerading as a mechanism for human-to-human relationship.”
It should be said that lots of people gain real value from chatting with AI, which can appear to offer some of the best facets of human relationships: connection, support, attraction, humor, love. But it’s not yet clear how these companionship sites will handle the risks of those relationships, or what rules they should be obliged to follow. More lawsuits (and, sadly, more real-world harm) are likely before we get an answer.
Now read the rest of The Algorithm
Deeper Learning
OpenAI released GPT-4.5
On Thursday, OpenAI released its newest model, GPT-4.5. It was built using the same recipe as its previous models, but it’s essentially just bigger (OpenAI says it is the company’s largest model yet). The company also claims it has tweaked the new model’s responses to reduce the number of mistakes, or hallucinations.
Why it matters: For a while now, OpenAI, like other AI companies, has chugged along releasing bigger and better large language models. But GPT-4.5 might be the last to fit this paradigm. That’s because of the rise of so-called reasoning models, which can handle more complex, logic-driven tasks step by step. OpenAI says all its future models will include reasoning components. Though that will make for better responses, such models also require significantly more energy, according to early reports. Read more from Will Douglas Heaven.
Bits and Bytes
The small Danish city of Odense has become known for collaborative robots
Robots designed to work alongside humans, sometimes called cobots, are not yet common in industrial settings, partly because of safety concerns that researchers are still working through. Odense, a small city in Denmark, is leading that charge. (MIT Technology Review)
DOGE is working on software that automates the firing of government workers
Software called AutoRIF, which stands for “automated reduction in force,” was built by the Pentagon decades ago. Engineers for DOGE are now working to retool it for their efforts, according to screenshots reviewed by Wired. (Wired)
Alibaba’s new video AI model has taken off in the AI porn community
The Chinese tech giant has released a number of impressive AI models, particularly since DeepSeek R1, a competitor from another Chinese company, took off earlier this year. Its latest open-source video generation model has found one particular audience: enthusiasts of AI porn. (404 Media)
The AI Hype Index
Wondering whether everything you’re hearing about AI is more hype than reality? To help, we just published our latest AI Hype Index, where we judge things like DeepSeek, stem-cell-building AI, and chatbot lovers on spectrums from Hype to Reality and Doom to Utopia. Check it out for a regular reality check. (MIT Technology Review)
These smart cameras spot wildfires before they spread
California is experimenting with AI-powered cameras to identify wildfires. It’s a popular application of video and image recognition technology, which has advanced rapidly in recent years. The cameras spot fires before 911 callers do about a third of the time, and they have spotted more than 1,200 confirmed fires so far, the Wall Street Journal reports. (Wall Street Journal)