AI labs are racing to build data centers as large as Manhattan, each costing billions of dollars and consuming as much energy as a small city. The effort is driven by a deep belief in “scaling”: the idea that adding more computing power to existing AI training methods will eventually yield superintelligent systems capable of performing all kinds of tasks.
But a growing chorus of AI researchers says the scaling of large language models may be reaching its limits, and that other breakthroughs may be needed to improve AI performance.
That’s the bet Sara Hooker, Cohere’s former VP of AI Research and a Google Brain alumna, is taking with her new startup, Adaption Labs. She co-founded the company with fellow Cohere and Google veteran Sudip Roy, and it’s built on the idea that scaling LLMs has become an inefficient way to squeeze more performance out of AI models. Hooker, who left Cohere in August, quietly announced the startup this month to start recruiting more broadly.
I'm starting a new project.
Working on what I consider to be the most important problem: building reasoning machines that adapt and continuously learn.
We have an incredibly talent-dense founding team + are hiring for engineering, ops, design.
Join us: https://t.co/eKlfWAfuRy
In an interview with TechCrunch, Hooker says Adaption Labs is building AI systems that can continuously adapt and learn from their real-world experiences, and do so extremely efficiently. She declined to share details about the methods behind this approach or whether the company relies on LLMs or another architecture.
“There is a turning point now where it’s very clear that the formula of just scaling these models (scaling-pilled approaches, which are attractive but extremely boring) hasn’t produced intelligence that is able to navigate or interact with the world,” said Hooker.
Adapting is the “heart of learning,” according to Hooker. For example, stub your toe when you walk past your dining room table, and you’ll learn to step more carefully around it next time. AI labs have tried to capture this idea through reinforcement learning (RL), which lets AI models learn from their mistakes in controlled settings. However, today’s RL methods don’t help AI models in production (meaning systems already being used by customers) learn from their mistakes in real time. They just keep stubbing their toe.
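To make the toe-stubbing analogy concrete, here is a minimal toy sketch in Python of that kind of online, learn-from-mistakes loop. It illustrates only the general RL idea, not Adaption Labs’ actual methods (which Hooker declined to detail); the environment, actions, and reward values are all invented for the example.

import random

# A toy illustration, not anyone's production system: two ways to pass
# the dining-room table, and an agent that starts out indifferent.
actions = ["walk_close", "step_around"]
value = {a: 0.0 for a in actions}  # expected reward learned so far
LEARNING_RATE = 0.5

def reward(action):
    # Walking too close stubs your toe; stepping around it does not.
    return -1.0 if action == "walk_close" else 0.0

def act(epsilon=0.1):
    # Mostly exploit what has been learned; occasionally explore.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=value.get)

for step in range(20):
    a = act()
    r = reward(a)
    # Online update: the agent adjusts immediately after each experience,
    # rather than waiting for an offline retraining run.
    value[a] += LEARNING_RATE * (r - value[a])

print(value)  # "step_around" ends up preferred: the toe-stub was learned from

A deployed model with frozen weights never runs that update step after shipping, which is the gap Hooker is pointing at.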
Some AI labs offer consulting services to help enterprises fine-tune their AI models to their custom needs, but that help comes at a price. OpenAI reportedly requires customers to spend upwards of $10 million with the company before it will offer its consulting services on fine-tuning.
“We have a handful of frontier labs that determine this set of AI models that are served the same way to everyone, and they’re very expensive to adapt,” said Hooker. “And actually, I think that doesn’t need to be true anymore, and AI systems can very efficiently learn from an environment. Proving that will completely change the dynamics of who gets to control and shape AI, and really, who these models serve at the end of the day.”
Adaption Labs is the latest sign that the industry’s faith in scaling LLMs is wavering. A recent paper from MIT researchers found that the world’s largest AI models may soon show diminishing returns. The vibes in San Francisco seem to be shifting, too. The AI world’s favorite podcaster, Dwarkesh Patel, recently hosted some unusually skeptical conversations with prominent AI researchers.
Richard Sutton, a Turing Award winner regarded as “the father of RL,” told Patel in September that LLMs can’t truly scale because they don’t learn from real-world experience. This month, early OpenAI employee Andrej Karpathy told Patel he had reservations about the long-term potential of RL to improve AI models.
These types of fears aren’t unprecedented. In late 2024, some AI researchers raised concerns that scaling AI models through pretraining, in which AI models learn patterns from massive datasets, was hitting diminishing returns. Until then, pretraining had been the secret sauce for OpenAI and Google in improving their models.
Those pretraining scaling concerns are now showing up in the data, but the AI industry has found other ways to improve models. In 2025, breakthroughs around AI reasoning models, which take additional time and computational resources to work through problems before answering, have pushed the capabilities of AI models even further.
AI labs seem convinced that scaling up RL and AI reasoning models is the new frontier. OpenAI researchers previously told TechCrunch that they developed their first AI reasoning model, o1, because they thought it would scale up well. Meta and Periodic Labs researchers recently released a paper exploring how RL could scale performance further, a study that reportedly cost more than $4 million, underscoring how expensive current approaches remain.
Adaption Labs, by contrast, aims to find the next breakthrough and prove that learning from experience can be far cheaper. The startup was in talks to raise a $20 million to $40 million seed round earlier this fall, according to three investors who reviewed its pitch decks. They say the round has since closed, though the final amount is unclear. Hooker declined to comment.
“We’re set up to be very ambitious,” said Hooker, when asked about her investors.
Hooker previously led Cohere Labs, where she trained small AI models for enterprise use cases. Compact AI systems now routinely outperform their larger counterparts on coding, math, and reasoning benchmarks, a trend Hooker wants to keep pushing on.
She also built a reputation for broadening access to AI research globally, hiring research talent from underrepresented regions such as Africa. While Adaption Labs will open a San Francisco office soon, Hooker says she plans to hire worldwide.
If Hooker and Adaption Labs are right about the limitations of scaling, the implications could be huge. Billions have already been invested in scaling LLMs, on the assumption that bigger models will lead to general intelligence. But it’s possible that true adaptive learning will prove not only more powerful, but far more efficient.
Marina Temkin contributed reporting.














