Epic Games CEO Tim Sweeney has waded into the fierce debate over X’s controversial Grok AI amid mounting political pressure to shut it down. Arguing that “none are perfect,” Sweeney defended Grok and framed the scrutiny as a selective political attack designed to stifle fair competition. In his view, the backlash is less about safety than about political targeting, and it raises alarms over fair market practices.
Tim Sweeney stands with Grok AI as it faces mounting political pressure
Sweeney argued bluntly that the intense political focus on Grok is “crony capitalism” in its basic form. His core point is stark: no major AI system is flawless. All have documented failures at some level, and every company behind them has worked to fix issues. Since “none are perfect,” he contends, the exclusive focus on Grok AI amounts to selective targeting.
Sweeney claims politicians are leveraging these universal AI imperfections to demand that app store gatekeepers like Google and Apple “crush” platforms owned by political opponents. In his reading, recent calls for X’s removal from the stores are not good-faith safety enforcement but a cynical move to censor rivals.
Challenged on defending a tool linked to such harmful content, Sweeney clarified that he defends the principle, not the outcome: free speech, open platforms, and the consistent rule of law. He opposes using the misdeeds of a few as a pretext for restricting the freedom of all, insisting that this would create a dangerous mix of selective enforcement and distribution monopolies that ultimately harms fair competition.
Grok AI is under regulatory scrutiny worldwide

The backlash against Grok isn’t hypothetical. The tool faces concrete and severe trouble after reports that it was used to create non-consensual explicit imagery. The situation escalated rapidly in the UK, where the Internet Watch Foundation confirmed harmful content generation, prompting the government to take notice.
Technology Secretary Liz Kendall has warned X to act “urgently,” and the regulator Ofcom is conducting an expedited assessment. The UK’s Online Safety Act offers potent enforcement tools, and the government has signalled full backing for a potential block on X if necessary. Academics, meanwhile, have criticized X’s response of locking image generation behind a firewall as a “sticking plaster,” demanding a fundamental ethical redesign.
Across the Atlantic, US senators have applied further pressure, urging Google and Apple to delist X from their app stores. X has reportedly begun removing illegal content and banning offending accounts, cooperating with the authorities. Critics argue, however, that a core failure at X enabled the content’s creation in the first place. The mounting pressure underscores the immediate and serious regulatory peril confronting the platform and its AI.
