The biggest AI stories of the year (so far)

1 month ago 19

You tin illustration a twelvemonth done merchandise launches, oregon you tin measurement it successful the greater moments that alteration the mode we look astatine AI. The AI manufacture is perpetually churning retired news, similar large acquisitions, indie developer successes, nationalist outcry against sketchy products, and existentially unsafe contract negotiations — it’s a batch to untangle, truthful we’re taking a glimpse astatine wherever we’re astatine and wherever we’ve been truthful acold this year.

Anthropic vs. the Pentagon

Once concern partners, Anthropic CEO Dario Amodei and Defense Secretary Pete Hegseth reached a bitter stalemate arsenic they renegotiated the contracts that dictate however the U.S. subject tin usage Anthropic’s AI tools successful February.

Anthropic established a hard enactment against its AI being utilized for wide surveillance of Americans oregon to powerfulness autonomous weapons that tin onslaught without quality oversight. Meanwhile, the Pentagon has argued that the Department of Defense — which President Donald Trump’s medication calls the Department of War — should beryllium permitted entree to Anthropic’s models for immoderate “lawful use.” Government representatives took discourtesy to the thought that the subject should beryllium constricted to the rules of a backstage company, but Amodei stood his ground.

“Anthropic understands that the Department of War, not backstage companies, makes subject decisions. We person ne'er raised objections to peculiar subject operations nor attempted to bounds usage of our exertion successful an advertisement hoc manner,” Amodei wrote successful a statement addressing the situation. “However, successful a constrictive acceptable of cases, we judge AI tin undermine, alternatively than defend, antiauthoritarian values.”

The Pentagon gave Anthropic a deadline to hold to their contract. Hundreds of employees astatine Google and OpenAI signed an unfastened letter urging their respective leaders to respect Amodei’s limits and garbage to budge connected issues of autonomous weapons oregon home surveillance.

The deadline passed without Anthropic agreeing to the Pentagon’s demands. Trump directed national agencies to signifier retired their usage of Anthropic tools implicit a six-month transition play and called the AI company, which is valued astatine $380 billion, a “radical left, woke company” successful an all-caps societal media post. The Pentagon past moved to state Anthropic a “supply concatenation risk,” a designation that is usually reserved for overseas adversaries and prevents immoderate institution that works with Anthropic from doing concern with the U.S. military. (Anthropic has since sued to situation the designation.)

Anthropic rival OpenAI past swooped in and announced that it had reached an statement allowing its ain models to beryllium deployed successful classified situations. It was a daze to the tech community, since reports had indicated that OpenAI would instrumentality to Anthropic’s reddish lines governing usage of AI for the military.

Techcrunch event

San Francisco, CA | October 13-15, 2026

Public sentiment would bespeak that radical recovered OpenAI’s determination fishy — ChatGPT uninstalls jumped 295% day-over-day connected the time aft OpenAI announced its deal, and Anthropic’s Claude changeable to No. 1 successful the app store. OpenAI hardware enforcement Caitlin Kalinowski discontinue successful effect to the deal, saying that it was “rushed without the guardrails defined.”

OpenAI told TechCrunch that it believes its statement “makes wide [its] redlines: nary autonomous weapons and nary autonomous surveillance.”

As this saga plays out, it volition person important implications for the aboriginal of however AI is deployed astatine war, perchance changing the people of past — you know, nary large deal…

‘Vibe-coded’ app OpenClaw accelerates the crook to agentic AI

February was the period of OpenClaw, and its interaction continues to reverberate. In speedy succession, the vibe-coded AI adjunct app went viral, spawned a clump of spinoff companies, suffered from privateness snafus, and past got acquired by OpenAI. Even 1 of the companies built connected OpenClaw, a Reddit-clone for AI agents called Moltbook, was recently acquired by Meta. This crustacean-themed ecosystem whipped Silicon Valley into a downright frenzy.

Created by Peter Steinberger — who has since joined OpenAI — OpenClaw is simply a wrapper for AI models similar Claude, ChatGPT, Google’s Gemini, oregon xAI’s Grok. What sets it isolated is that it allows radical to pass with AI agents successful earthy connection via the astir fashionable chat apps, similar iMessage, Discord, Slack, oregon WhatsApp. There’s besides a nationalist marketplace wherever radical tin codification and upload “skills” for radical to adhd to their AI agents, making it imaginable to automate fundamentally thing that tin beryllium done connected a computer.

If that seems excessively bully to beryllium true, it’s due to the fact that it benignant of is. In bid for an AI cause to beryllium effectual arsenic a idiosyncratic assistant, it needs to person entree to your email, recognition paper numbers, substance messages, machine files, etc. If it were to beryllium hacked, a batch could spell wrong, and unfortunately, there’s nary mode to afloat unafraid these agents against prompt-injection attacks.

“It is conscionable an cause sitting with a clump of credentials connected a container connected to everything — your email, your messaging platform, everything you use,” Ian Ahl, CTO astatine Permiso Security, told TechCrunch. “So what that means is, erstwhile you get an email, and possibly idiosyncratic is capable to enactment a small punctual injection method successful determination to instrumentality an action, [and] that cause sitting connected your container with entree to everything you’ve fixed it to tin present instrumentality that action.”

One AI information researcher astatine Meta said that OpenClaw ran amok connected her inbox, deleting each of her emails contempt repeated calls to stop. “I had to RUN to my Mac mini similar I was defusing a bomb” to physically unplug the device, she wrote successful a now-viral station connected X, which included images of the ignored halt prompts arsenic receipts.

Despite the information risks, the exertion piqued OpenAI’s involvement capable for an acquihire.

Other tools built connected OpenClaw, including Moltbook — a Reddit-like “social network” wherever AI agents tin pass with 1 different — ended up becoming much viral than OpenClaw itself.

In 1 instance, a post went viral in which an AI cause appeared to beryllium encouraging its chap agents to make their ain secret, end-to-end-encrypted connection wherever they could signifier amongst themselves without humans knowing.

But researchers soon revealed that the vibe-coded Moltbook wasn’t precise secure, meaning that it was precise casual for quality users to airs arsenic AIs to marque posts that would trigger viral societal hysteria.

Again, adjacent though the treatment astir Moltbook was much grounded successful panic than reality, Meta saw thing successful the app and announced that Moltbook and its creators, Matt Schlicht and Ben Parr, would articulation Meta Superintelligence Labs.

It seems unusual that Meta would bargain a societal web wherever each of the users are bots. While Meta hasn’t revealed overmuch astir the acquisition, we theorize that owning Moltbook is much astir gaining entree to the endowment down it, who are enthusiastic astir experimenting with AI cause ecosystems. CEO Mark Zuckerberg has said it himself: He thinks that 1 day, each concern volition person a concern AI.

As we ticker the hubbub astir OpenClaw, Moltbook, and NanoClaw play retired it seems arsenic though those who predicted an agentic AI aboriginal whitethorn beryllium onto something, at slightest for now.

Chip shortages, hardware drama, and information halfway demands escalate

The harsh demands of the AI manufacture — which necessitate computing powerfulness and information centers successful unprecedented volumes — are reaching a constituent wherever the mean user has nary prime but to wage attention. Now, it whitethorn not adjacent beryllium imaginable for the manufacture to fulfill the astronomical demands for representation chips, and consumers are already seeing the prices of their phones, laptops, cars, and different hardware increase.

So far, analysts from IDC and Counterpoint person predicted that smartphone shipments, for example, volition plummet astir 12 to 13 percent this year; Apple has already raised MacBook Pro prices by up to $400.

Google, Amazon, Meta, and Microsoft are readying to walk up to a combined $650 billion connected information centers unsocial this year, which is an estimated 60% summation from past year.

If the spot shortage doesn’t deed you successful your wallet, it mightiness deed your assemblage astatine large. In the U.S. alone, astir 3,000 caller information centers are nether construction, adding to the 4,000 already operating successful the country. The request for laborers to physique these information centers is important capable that “man camps” person sprung up successful Nevada and Texas, attempting to lure workers with the committedness of play simulator crippled rooms and steaks grilled on-demand.

Not lone does information halfway operation person a semipermanent interaction connected the environment, but it besides creates health hazards for adjacent residents, polluting the aerial and impacting the information of adjacent h2o sources.

All the while, 1 of the astir invaluable hardware and spot developers, Nvidia, is reshaping its narration to starring AI companies similar OpenAI and Anthropic. Nvidia has been an ongoing backer of these companies, sparking concerns astir the circularity of the AI industry, and however overmuch of those eye-popping valuations are based connected recursive deals with each other. Last year, for example, Nvidia invested $100 cardinal successful OpenAI stock, and OpenAI past said it would bargain $100 cardinal of Nvidia chips.

It was surprising, then, erstwhile Nvidia CEO Jensen Huang said that his institution would stop investing successful OpenAI and Anthropic. He said that this is due to the fact that the companies program to spell nationalist aboriginal this year, though that logic doesn’t rather marque sense, since investors typically funnel successful much wealth pre-IPO to extract arsenic overmuch worth arsenic possible.

Read Entire Article