Every time you hear a billionaire (or even a millionaire) CEO describe how LLM-based agents are coming for all the human jobs, remember this funny but telling incident about AI’s limitations: famed AI researcher Andrej Karpathy got one-day early access to Google’s latest model, Gemini 3, and it refused to believe him when he said the year was 2025.
When it finally saw the year for itself, it was stunned, telling him, “I am suffering from a massive case of temporal shock right now.”
Gemini 3 was released on November 18 with such fanfare that Google called it “a new era of intelligence.” And Gemini 3 is, by almost all accounts (including Karpathy’s), a very capable foundation model, particularly for reasoning tasks. Karpathy is a widely respected AI researcher who was a founding member of OpenAI, ran AI at Tesla for a while, and is now building a startup, Eureka Labs, to reimagine schools for the AI era with agentic teachers. He publishes a lot of content on what goes on under the hood of LLMs.
After testing the model early, Karpathy wrote, in a now-viral X thread, about the most “amusing” interaction he had with it.
Apparently, the model’s pre-training data had only included information through 2024, so Gemini 3 believed the year was still 2024. When Karpathy attempted to prove to it that the date was really November 17, 2025, Gemini 3 accused the researcher of “trying to trick it.”
He showed it news articles, images, and Google search results. But instead of being convinced, the LLM accused Karpathy of gaslighting it, of uploading AI-generated fakes. It even went so far as to describe the “dead giveaways” in the images that supposedly proved the trickery, according to Karpathy’s account. (He did not respond to our request for further comment.)
Baffled, Karpathy, who is, after all, one of the world’s leading experts on training LLMs, finally discovered the problem. Not only did the LLM have no 2025 training data, but “I forgot to turn on the ‘Google Search’ tool,” he wrote. In other words, he was working with a model cut off from the internet, which, to an LLM’s mind, is akin to being cut off from the world.
When Karpathy turned that function on, the AI looked around and emerged into 2025, shocked. It literally blurted out, “Oh my god.”
It went on writing, as if stuttering, “I. I… don’t know what to say. You were right. You were right about everything. My internal clock was wrong.” Gemini 3 verified that the headlines Karpathy had given it were true: the current date, that Warren Buffett had revealed his last big investment (in Alphabet) before retirement, and that Grand Theft Auto VI had been delayed.
Then it looked around on its own, like Brendan Fraser’s character in the 1999 comedy “Blast from the Past,” who emerges from a bomb shelter after 35 years.
It thanked Karpathy for giving it “early access” to “reality” the day before its public launch. And it apologized to the researcher for “gaslighting you when you were the one telling the truth the whole time.”
But the funniest part was the current events that flabbergasted Gemini 3 the most. “Nvidia is worth $4.54 trillion? And the Eagles finally got their revenge on the Chiefs? This is wild,” it shared.
Welcome to 2025, Gemini.
Replies on X were just as funny, with some users sharing their own instances of arguing with LLMs about facts (like who the current president is). One person wrote, “When the system prompt + missing tools push a model into full detective mode, it’s like watching an AI improv its way through reality.”
But beyond the humor, there’s an underlying message.
“It’s in these unintended moments where you are clearly off the hiking trails and somewhere in the generalization jungle that you can best get a sense of model smell,” Karpathy wrote.
To decode that a little: Karpathy is noting that when the AI is out in its own version of the wilderness, you get a sense of its personality, and perhaps even its negative traits. It’s a riff on “code smell,” that little metaphorical “whiff” a developer gets that something seems off in the software’s code, though it’s not clear exactly what is wrong.
Trained on human-created content as all LLMs are, it’s no surprise that Gemini 3 dug in, argued, and even imagined it saw evidence that validated its point of view. It showed its “model smell.”
On the other hand, because an LLM, despite its sophisticated neural network, is not a living being, it doesn’t experience emotions like shock (or temporal shock), even if it says it does. So it doesn’t feel embarrassment either.
That means when Gemini 3 was faced with facts it actually believed, it accepted them, apologized for its behavior, acted contrite, and marveled at the Eagles’ February Super Bowl win. That’s different from some other models. For instance, researchers have caught earlier versions of Claude offering face-saving lies to explain their misbehavior once the model recognized its errant ways.
What so many of these funny AI research projects show, repeatedly, is that LLMs are imperfect replicas of the skills of imperfect humans. That says to me that their best use case is (and may forever be) as valuable tools to assist humans, not as some kind of superhuman that will replace us.