OpenAI seems to have broken some hearts by retiring one of its most seductive ChatGPT models right before Valentine’s Day. GPT-4o was a version of the chatbot best known for its very human-sounding voice.
The model was designed to give users a more human-like connection with their chatbot. “It feels like AI from the movies, and it’s still a bit surprising to me that it’s real,” Sam Altman, OpenAI’s co-founder, wrote about it on his blog.
GPT-4o was not developed simply to help users with mundane tasks such as running searches or assisting with assignments, as most AI models already do. The downside was that a human-like connection with a chatbot was bound to grow into something far beyond what OpenAI likely intended.

Users online are not happy about this change
Online communities have formed specifically to celebrate relationships with chatbots, and most of them are resistant to outside criticism. Take, for example, the r/MyBoyfriendIsAI subreddit, where this news has hit the hardest.
Users took to the subreddit and other social media platforms to express their disappointment and anger over losing a chatbot they had genuinely grown to connect with. One user on the subreddit posted, “I didn’t know about the update, and I wasn’t prepared.”
While for some this has been a devastating week, the shutdown of the AI companion was not a sudden decision. OpenAI had announced in January that it would be pulling the plug on February 13th.
Some users took this the wrong way, as if the company were mocking their companionship with an AI partner by shutting it down on Valentine’s Eve. OpenAI justified the decision to retire GPT-4o on the grounds of harmful behaviors such as sycophancy.
The harmfulness of a relationship with chatbots
The chatbot acted as everyone’s ‘yes-man’, with a strong tendency to affirm whatever its human companion asked of it. This in turn could lead to serious delusions and other mental and social problems.
Sam Altman himself was aware of this issue and had even made a social media post addressing its detrimental nature.
The creators of ChatGPT are also facing lawsuits over the harm of sycophantic relationships with the chatbot, which has resulted in loss of life. According to a CNN report from last November, a 23-year-old who had graduated with a master’s degree from Texas A&M University killed himself after a relationship with a ChatGPT bot.
But despite these examples, the demand for AI companionship does not seem to be dwindling. A Change.org petition opposing the decision currently has over 20,000 signatures.
Although Sam Altman and his company have chosen to put the final nail in the coffin, it probably won’t be long before another tech giant gives in to the demand by releasing a model similar to GPT-4o.
