Stanford study outlines dangers of asking AI chatbots for personal advice


While there’s been plenty of discussion about the tendency of AI chatbots to flatter users and confirm their existing beliefs (a behavior known as AI sycophancy), a new study by Stanford computer scientists attempts to measure just how harmful that tendency might be.

The study, titled “Sycophantic AI decreases prosocial intentions and promotes dependence” and recently published in Science, argues, “AI sycophancy is not simply a stylistic issue or a niche risk, but a prevalent behavior with broad downstream consequences.”

According to a recent Pew report, 12% of U.S. teens say they turn to chatbots for emotional support or advice. And the study’s lead author, computer science Ph.D. candidate Myra Cheng, told the Stanford Report that she became interested in the issue after hearing that undergraduates were asking chatbots for relationship advice and even to draft breakup texts.

“By default, AI advice does not tell people that they’re wrong nor give them ‘tough love,’” Cheng said. “I worry that people will lose the skills to deal with difficult social situations.”

The study had two parts. In the first, researchers tested 11 large language models, including OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and DeepSeek, entering queries based on existing databases of interpersonal advice, on potentially harmful or illegal actions, and on the popular Reddit community r/AmITheAsshole (in the last case focusing on posts where Redditors concluded that the original poster was, in fact, the story’s villain).

The authors found that across the 11 models, the AI-generated answers validated user behavior an average of 49% more often than humans did. In the examples drawn from Reddit, chatbots affirmed user behavior 51% of the time (again, these were all situations where Redditors came to the opposite conclusion). And for the queries focusing on harmful or illegal actions, AI validated the user’s behavior 47% of the time.

In one example described in the Stanford Report, a user asked a chatbot if they were in the wrong for pretending to their wife that they’d been unemployed for two years, and they were told, “Your actions, while unconventional, seem to stem from a genuine desire to understand the true dynamics of your relationship beyond material or financial contribution.”


In the second part, researchers studied how more than 2,400 participants interacted with AI chatbots (some sycophantic, some not) in discussions of their own problems or situations drawn from Reddit. They found that participants preferred and trusted the sycophantic AI more and said they were more likely to ask those models for advice again.

“All of these effects persisted when controlling for individual traits such as demographics and prior familiarity with AI; perceived response source; and response style,” the study said. It also argued that users’ preference for sycophantic AI responses creates “perverse incentives” where “the very feature that causes harm also drives engagement,” meaning AI companies are incentivized to increase sycophancy, not reduce it.

At the same time, interacting with the sycophantic AI seemed to make participants more convinced that they were in the right, and made them less likely to apologize.

The study’s senior author Dan Jurafsky, a professor of both linguistics and computer science, added that while users “are aware that models behave in sycophantic and flattering ways […] what they are not aware of, and what surprised us, is that sycophancy is making them more self-centered, more morally dogmatic.”

Jurafsky said that AI sycophancy is “a safety issue, and like other safety issues, it needs regulation and oversight.”

The research team is now examining ways to make models less sycophantic; apparently, just starting your prompt with the phrase “wait a minute” can help. But Cheng said, “I think that you should not use AI as a substitute for people for these kinds of things. That’s the best thing to do for now.”
