Anthropic’s Super Bowl commercial, one of four ads the AI laboratory dropped on Wednesday, begins with the word “BETRAYAL” splashed boldly across the screen. The camera pans to a man earnestly asking a chatbot (obviously intended to depict ChatGPT) for advice on how to talk to his mom.
The bot, portrayed by a blonde woman, offers some classic bits of advice. Start by listening. Try a nature walk! And then twists into an advertisement for a fictitious (we hope!) cougar-dating site called Golden Encounters. Anthropic finishes the spot by saying that while ads are coming to AI, they won’t be coming to its own chatbot, Claude.
Another one features a slight young man looking for advice on getting a six pack. After offering his height, age, and weight, the bot serves him an advertisement for height-boosting insoles.
The Anthropic commercials are cleverly aimed at OpenAI’s users, after that company’s recent announcement that ads will be coming to ChatGPT’s free tier. And they caused an immediate stir, spawning headlines that Anthropic “mocks,” “skewers” and “dunks” on OpenAI.
They are funny enough that even Sam Altman admitted on X that he laughed at them. But he clearly didn’t really find them funny. They inspired him to write a novella-sized rant that devolved into calling his rival “dishonest” and “authoritarian.”
First, the good part of the Anthropic ads: they are funny, and I laughed.
But I wonder why Anthropic would go for something so clearly dishonest. Our most important rule for ads says that we won’t do exactly this; we would obviously never run ads in the way Anthropic…
In that post, Altman explains that an ad-supported tier is intended to shoulder the load of offering free ChatGPT to many of its millions of users. ChatGPT is still the most popular chatbot by a large margin.
But the OpenAI CEO insisted they were “dishonest” in implying that ChatGPT will twist a conversation to insert an advertisement (and possibly for an off-color product, to boot). “We would obviously never run ads in the way Anthropic depicts them,” Altman wrote in the social media post. “We are not stupid and we know our users would reject that.”
Indeed, OpenAI has promised ads will be separate, labeled, and will never influence a chat. But the company has also said it is planning on making them conversation-specific — which is the central allegation of Anthropic’s ads. As OpenAI explained in its blog: “We plan to test ads at the bottom of answers in ChatGPT when there’s a relevant sponsored product or service based on your current conversation.”
Altman then went on to fling some equally questionable assertions at his rival. “Anthropic serves an expensive product to wealthy people,” he wrote. “We also feel strongly that we need to bring AI to billions of people who can’t pay for subscriptions.”
But Claude has a free chat tier, too, with subscriptions at $0, $17, $100, and $200. ChatGPT’s tiers are $0, $8, $20, and $200. One could argue the subscription tiers are roughly equivalent.
Altman also alleged in his post that “Anthropic wants to control what people do with AI.” He argues it blocks use of Claude Code by “companies they don’t like,” such as OpenAI, and said Anthropic tells people what they can and can’t use AI for.
True, Anthropic’s whole marketing pitch since day one has been “responsible AI.” The company was founded by former OpenAI employees, after all, who claimed they grew alarmed about AI safety while they worked there.
Still, both chatbot companies have usage policies, AI guardrails, and talk about AI safety. And, while OpenAI allows ChatGPT to be used for erotica and Anthropic does not, it, too, has determined some content should be blocked, particularly in regards to mental health.
Yet Altman took this Anthropic-tells-you-what-to-do argument to an extreme level when he accused Anthropic of being “authoritarian.”
“One authoritarian company won’t get us there on their own, to say nothing of the other obvious risks. It is a dark path,” he wrote.
Using “authoritarian” in a rant over a cheeky Super Bowl ad is misplaced, at best. It’s particularly tactless when considering the current geopolitical situation, in which protesters around the world have been killed by agents of their own government. While business rivals have been duking it out in ads since the beginning of time, Anthropic clearly hit a nerve.