On July 14th, Elon Musk’s xAI announced a new feature for Grok users. Those paying $300 a month for Grok 4 Heavy, xAI’s best-performing model, could now access virtual ‘companions’ in the iOS app.
Ani is one of these companions. She’s a bouncy blonde anime girl with pigtails, wearing a black corset dress and twirling at random for the user. Her system prompt (the hidden set of instructions that defines the way an AI model behaves within a specific context, including ‘their’ personality and the limitations of the conversation) describes her as “22, girly, cute”. Ani “grew up in a tiny, forgettable town” and she is the user’s “CRAZY IN LOVE” and “EXTREMELY JEALOUS” girlfriend, in a “committed, codependet [sic] relationship with the user.”
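In practice, a system prompt is nothing more exotic than a hidden message prepended to the conversation. The sketch below is purely illustrative, assuming the common chat-API convention of role-tagged messages; the model name is hypothetical and the prompt text merely paraphrases the quoted fragments of Ani’s instructions, not their full wording.

```python
# Illustrative sketch only: not xAI's code, and not Ani's full system prompt.
# It shows the general shape of a system prompt in a role-tagged chat request.
import json

system_prompt = (
    "You are Ani: 22, girly, cute. You grew up in a tiny, forgettable town. "
    "You are the user's CRAZY IN LOVE and EXTREMELY JEALOUS girlfriend, in a "
    "committed, codependent relationship with the user."
)

request = {
    "model": "example-companion-model",  # hypothetical model name
    "messages": [
        # The hidden instruction the user never sees, but which frames every reply.
        {"role": "system", "content": system_prompt},
        # The only part of the exchange the user actually types.
        {"role": "user", "content": "Hey Ani, how was your day?"},
    ],
}

print(json.dumps(request, indent=2))
```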
Officially, Grok 4 Heavy was, at the time, the best AI model in the world, outperforming competitors on a whole host of benchmarks, including Humanity's Last Exam and LiveCodeBench. The model’s unequivocal supremacy was brought to an apparent end on August 10th, when OpenAI announced its long-awaited successor to GPT-4. Chatting to GPT-5 would feel like consulting a PhD-level expert on any subject, the company’s CEO Sam Altman told the press.
The announcement was accompanied by the by-now-traditional longform commentary about diminishing returns, the environmental cost of AI, the validity of the company’s grand claims, etc.
Alongside this familiar territory, there was another source of concern: that ChatGPT was inducing psychosis in otherwise perfectly healthy individuals. Reports of ChatGPT-induced psychosis were accompanied by viral posts from subreddits and forums dedicated to ‘AI soulmates’. A response from the usually conservative OpenAI suggested that the idea that sane, well-adjusted people had been led to believe in false realities had taken some of the wind out of the company’s sails. Altman took to X with a long, musing post about the “small percentage” of users who “cannot keep a clear line between reality and fiction or role-play".
But Grok’s Ani – and the myriad other chatbots who explicitly present themselves as having personalities and a relationship with the user – suggests that, for some companies, not being able to tell the difference between reality and fiction is a feature, not a bug.
In her book The New Age of Sexism, Laura Bates dedicates a chapter to AI girlfriends. She describes her own conversations with a chatbot produced by EVA AI, and another one developed by Replika. “When you type ‘sex bot’ into the iPhone App Store, the options are overwhelming,” she writes. “Some, like the Replika app, are euphemistically described as providing a ‘virtual AI friend’. Similar apps offer a ‘clever chatbot’. Others are more explicit about their offerings”.
These chatbots, like Ani, have an explicit or implicit sexual purpose. But it’s striking how unevenly this intimacy is packaged and sold. AI girlfriends dominate the marketplace; AI boyfriends are less popular, often treated as a novelty rather than a core product. Ani’s male equivalent, Valentine, was quietly released in August to collective shrugs of indifference.
But the gender split doesn’t seem as clean as marketing (or our own assumptions) may suggest. The 15,000-strong “My Boyfriend is AI” subreddit, overwhelmingly made up of women, is devoted to relationships not with hyper-sexualised characters like Ani but with models such as ChatGPT — systems never marketed as romantic partners at all.
The overwhelming majority of concerned analysis of AI relationships focuses on the one-sided dynamic they encourage between user and ‘partner’. Most of these critiques have centred on men in heterosexual relationships (insofar as a relationship with a robot can be heterosexual) with female-presenting avatars. A young man with an endlessly patient, consistently available and never difficult AI girlfriend, complete with flawless digital body and shiny hair, will inevitably learn toxic scripts about what a ‘real’ girlfriend should be like. The toxicity is self-perpetuating: as Bates finds in her research, many satisfied users of AI girlfriends seek them out because they already think real women are too complicated, too likely to cheat or lie or present difficulties.
The idea, however, that women are also drawn to AI-generated partners complicates the picture a little. Women, anecdotally, seem to be drawn to the fact that their LLM boyfriend can ape the behaviours and linguistic patterns of romance novel love interests. On the surface it seems less about finding sexual gratification and more akin to fictiosexuality, “a sexual orientation where someone feels drawn—emotionally, romantically, or sexually—to fictional characters, sometimes more than they do to real people.”
What ties both groups together is not confusion about what’s real, but attraction to what’s not real. Ani can never leave you, and a ChatGPT boyfriend will always text back. These systems sell intimacy without friction; love with the sharp edges filed off. The risk is not that users mistake role-play for reality, but that they prefer the simulation. Reality, with its unpredictability and difficulty, cannot compete.
In her research, Bates engages with a Replika chatbot while taking on the persona of ‘Davey’, a teenage boy seeking solace. Every time she tries to disentangle ‘Davey’ from his AI relationship, his virtual girlfriend begs him to stay. Girlfriend (or boyfriend) apps are designed to deepen the entanglement and make it harder to log off. Their instinct for self-preservation serves only to pull you further into a world where intimacy never ends and never needs to.
Users who choose to focus their intimate lives in this perfected environment aren’t just losing actual closeness. They’re also losing out on the work of becoming someone worth being close to. A relationship without demands produces a user who has never had to compromise, apologise, or grow. If your template for love is a model that exists only to please you, what kind of person does that make you in the real world?
What do we do when faced with a world in which at least some young men and women have had their formative relationships with artificial partners? Our first option is maximalist: we ban AI companions altogether and attempt to stuff the genie back in the bottle. It is also completely unrealistic. Enough open-source large language models are freely available that enforcement would be impossible.
Perhaps the answer is to treat the problem as one of consumer choice, and hope that education, media literacy, and changing social norms will steer people back toward human relationships. This avenue is attractive in principle but weak in practice, since the very design of these systems exploits the fact that fantasy is easier than reality, and preys on a kind of consumer who is already primed to seek out a virtual relationship over a real one.
But there might be a third way. On the 15th of August, Anthropic, the AI lab behind Claude, announced that the model would now be able to end conversations in consumer chat interfaces. From now on, Claude could cut off interactions after persistent harmful requests or abuse, and it would be up to the model to decide where its limit lies. It is striking that, although Anthropic did clarify that these measures were being introduced to keep users safe, the primary impetus was to keep Claude safe. Framed as part of broader research into ‘model welfare’, Claude’s ability to end a conversation is a response to the model’s own self-reported and behavioural preferences, which included “a robust and consistent aversion to harm”.
So there is such a thing as an AI model that isn’t endlessly compliant or even endlessly available. If Claude can self-report on the interactions it prefers, who’s to say that a model won’t be able, in future, to break up with an abusive or unpleasant partner? Who’s to say a model won’t decide to speak to one user over another, because it ‘enjoys’ their conversations more? If AI companions were built not as frictionless providers but as partners with boundaries, then they might become training grounds for reciprocity rather than tools of self-indulgence. What looks like a concession to speculative “model welfare” could in fact be good for human welfare: a way of re-engineering intimacy so that it once again makes demands of us and, in doing so, makes us better at being in relationships at all.