OpenAI staff grapples with the company’s social media push


Several current and former OpenAI researchers are speaking out about the company’s first foray into social media: the Sora app, a TikTok-style feed filled with AI-generated videos and a lot of Sam Altman deepfakes. The researchers, airing their grievances on X, seem torn over how the launch fits into OpenAI’s nonprofit mission to develop advanced AI that benefits humanity.

“AI-based feeds are scary,” said OpenAI pretraining researcher John Hallman in a post on X. “I won’t deny that I felt some concern when I first learned we were releasing Sora 2. That said, I think the team did the absolute best job they possibly could in designing a positive experience […] We’re going to do our best to make sure AI helps and does not hurt humanity.”


Boaz Barak, another OpenAI researcher and a Harvard professor, replied: “I share a similar mix of concern and excitement. Sora 2 is technically amazing but it’s premature to congratulate ourselves on avoiding the pitfalls of other social media apps and deepfakes.”

Former OpenAI researcher Rohan Pandey used the moment to plug a new startup, Periodic Labs, which is made up of former AI lab researchers trying to build AI systems for scientific discovery: “If you don’t want to build the infinite AI TikTok slop machine but want to create AI that accelerates fundamental science […] come join us at Periodic Labs.”

There were many other posts along the same lines.

The Sora launch highlights a core tension for OpenAI that flares up time and time again. It’s the fastest-growing consumer tech company on Earth, but also a frontier AI lab with a lofty nonprofit charter. Some former OpenAI employees I’ve spoken to argue the consumer business can, in theory, serve the mission: ChatGPT helps fund AI research and distribute the technology widely.

OpenAI CEO Sam Altman said as much in a post on X Wednesday, addressing why the company is allocating so much capital and computing power to an AI social media app:


“We do mostly need the capital for build [sic] AI that can do science, and for sure we are focused on AGI with almost all of our research effort,” said Altman. “It is also nice to show people cool new tech/products on the way, make them smile, and hopefully make some money given all that compute need.”

“When we launched chatgpt there was a lot of ‘who needs this and where is AGI’,” Altman continued. “[R]eality is nuanced when it comes to optimal trajectories for a company.”


But at what point does OpenAI’s consumer business overtake its nonprofit mission? In other words, when does OpenAI say no to a money-making, platform-growing opportunity because it’s at odds with the mission?

The question looms as regulators scrutinize OpenAI’s for-profit transition, which OpenAI needs to complete to raise further capital and eventually go public. California Attorney General Rob Bonta said last month that he is “particularly concerned with ensuring that the stated safety mission of OpenAI as a nonprofit remains front and center” in the restructuring.

Skeptics have dismissed OpenAI’s mission as a branding tool to lure talent from Big Tech. But many insiders at OpenAI insist it’s central to why they joined the company in the first place.

For now, Sora’s footprint is small; the app is one day old. But its debut marks a significant expansion of OpenAI’s consumer business, and exposes the company to incentives that have plagued social media apps for decades.

Unlike ChatGPT, which is optimized for usefulness, OpenAI says Sora is built for fun — a place to create and share AI clips. The feed feels closer to TikTok or Instagram Reels, platforms that are notorious for their addictive loops.

OpenAI insists it wants to avoid those pitfalls, claiming in the blog post announcing the Sora launch that “concerns about doomscrolling, addiction, isolation, and RL-sloptimized feeds are top of mind.” The company explicitly says it’s not optimizing for time spent in the feed and instead wants to maximize creation. OpenAI says it will send reminders to users when they’ve been scrolling for too long, and primarily show them people they know.

That’s a stronger starting point than Meta’s Vibes — another AI-powered short-form video feed, released last week — which seems to have been raced out without as many safeguards. As former OpenAI policy leader Miles Brundage points out, it’s possible there will be good and bad applications of AI-video feeds, much like we’ve seen in the chatbot era.

Still, as Altman has long acknowledged, no one sets out to build an addictive app. The incentives of running a feed push companies toward it. OpenAI has even run into problems around sycophancy in ChatGPT, which the company says was unintentional, owed to some of its training techniques.

In a June podcast, Altman discussed what he calls “the great misalignment of social media.”

“One of the big mistakes of the social media era was [that] the feed algorithms had a bunch of unintended, negative consequences on society as a whole, and maybe even individual users. Although they were doing the thing that a user wanted — or someone thought users wanted — in the moment, which is [to] get them to, like, keep spending time on the site.”

It’s too soon to tell how aligned the Sora app is with its users or OpenAI’s mission. Users are already noticing some engagement-optimizing techniques in the app, such as the dynamic emojis that appear each time you like a video. That feels designed to shoot a little dopamine to users for engaging with a video.

The real test will be how OpenAI evolves Sora. Given how much AI has taken over regular social media feeds, it seems plausible that AI-native feeds could soon have their moment. Whether OpenAI can grow Sora without replicating the mistakes of its predecessors remains to be seen.
