Dr. Sina Bari, a practicing surgeon and AI healthcare leader at data company iMerit, has seen firsthand how ChatGPT can lead patients astray with faulty medical advice.
“I recently had a patient come in, and when I recommended a medication, they had a conversation printed out from ChatGPT that said this medicine has a 45% chance of pulmonary embolism,” Dr. Bari told TechCrunch.
When Dr. Bari investigated further, he found that the statistic came from a paper about the impact of that medication in a niche subgroup of people with tuberculosis, which didn’t apply to his patient.
And yet, when OpenAI announced its dedicated ChatGPT Health chatbot last week, Dr. Bari felt more excitement than concern.
ChatGPT Health, which will roll out in the coming weeks, allows users to talk to the chatbot about their health in a more private setting, where their messages won’t be used as training data for the underlying AI model.
“I think it’s great,” Dr. Bari said. “It is something that’s already happening, so formalizing it so as to protect patient information and put some safeguards around it […] is going to make it all the more powerful for patients to use.”
Users can get more personalized guidance from ChatGPT Health by uploading their medical records and syncing with apps like Apple Health and MyFitnessPal. For the security-minded, this raises immediate red flags.
“All of a sudden there’s medical data transferring from HIPAA-compliant organizations to non-HIPAA-compliant vendors,” Itai Schwartz, co-founder of data loss prevention firm MIND, told TechCrunch. “So I’m curious to see how the regulators would approach this.”
But the way some industry professionals see it, the cat is already out of the bag. Now, instead of Googling cold symptoms, people are talking to AI chatbots; over 230 million people already talk to ChatGPT about their health each week.
“This was one of the biggest use cases of ChatGPT,” Andrew Brackin, a partner at Gradient who invests in health tech, told TechCrunch. “So it makes a lot of sense that they would want to build a more kind of private, secure, optimized version of ChatGPT for these healthcare questions.”
AI chatbots have a persistent problem with hallucinations, a particularly sensitive issue in healthcare. According to Vectara’s Factual Consistency Evaluation Model, OpenAI’s GPT-5 is more prone to hallucinations than many Google and Anthropic models. But AI companies see the potential to rectify inefficiencies in the healthcare space (Anthropic also announced a health product this week).
For Dr. Nigam Shah, a professor of medicine at Stanford and chief data scientist for Stanford Health Care, the inability of American patients to access care is more urgent than the threat of ChatGPT dispensing poor advice.
“Right now, you go to any health system and you want to meet the primary care doctor; the wait time will be three to six months,” Dr. Shah said. “If your choice is to wait six months for a real doctor, or talk to something that is not a doctor but can do some things for you, which would you pick?”
Dr. Shah thinks a clearer path to introducing AI into healthcare systems lies on the provider side, rather than the patient side.
Medical journals have often reported that administrative tasks can consume about half of a primary care physician’s time, which slashes the number of patients they can see in a given day. If that kind of work could be automated, doctors would be able to see more patients, perhaps reducing the need for people to use tools like ChatGPT Health without further input from a real doctor.
Dr. Shah leads a team at Stanford that is developing ChatEHR, software built into the electronic health record (EHR) system that allows clinicians to interact with a patient’s medical records in a more streamlined, efficient manner.
“Making the electronic medical record more user-friendly means physicians can spend less time scouring every nook and cranny of it for the information they need,” Dr. Sneha Jain, an early tester of ChatEHR, said in a Stanford Medicine article. “ChatEHR can help them get that information up front so they can spend time on what matters: talking to patients and figuring out what’s going on.”
Anthropic is also working on AI products that can be used on the clinician and insurer sides, rather than just its public-facing Claude chatbot. This week, Anthropic announced Claude for Healthcare, explaining how it could be used to reduce the time spent on tedious administrative tasks, like submitting prior authorization requests to insurance providers.
“Some of you see hundreds, thousands of these prior authorization cases a week,” said Anthropic CPO Mike Krieger in a recent presentation at J.P. Morgan’s Healthcare Conference. “So imagine cutting 20, 30 minutes out of each of them; it’s a dramatic time savings.”
As AI and medicine become more intertwined, there’s an inescapable tension between the two worlds: a doctor’s primary incentive is to help their patients, while tech companies are ultimately accountable to their shareholders, even if their intentions are noble.
“I think that tension is an important one,” Dr. Bari said. “Patients rely on us to be cynical and conservative in order to protect them.”