In Brief
Posted:
11:51 AM PDT · April 5, 2026
Image Credits: Rafael Henrique/SOPA Images/LightRocket / Getty Images

AI skeptics aren't the only ones telling users not to blindly trust models' outputs. The AI companies say it themselves in their terms of service.
Take Microsoft, which is currently focused on getting corporate customers to pay for Copilot. But it's also been getting dinged on social media over Copilot's terms of use, which appear to have last been updated on October 24, 2025.
“Copilot is for entertainment purposes only,” the company warned. “It can make mistakes, and it may not work as intended. Don't rely on Copilot for important advice. Use Copilot at your own risk.”
A Microsoft spokesperson told PCMag that the company will be updating what they described as “legacy language.”
“As the product has evolved, that language is no longer reflective of how Copilot is used today and will be changed with our next update,” the spokesperson said.
Tom's Hardware noted that Microsoft isn't the only company using this kind of disclaimer for AI. For example, both OpenAI and xAI caution users that they should not rely on their output as “the truth” (to quote xAI) or as “a sole source of truth or factual information” (OpenAI).