AI regularly passes on misinformation because it relies more on the source than the science

Credit: University of Texas

Artificial intelligence tools are more likely to provide incorrect medical advice when the misinformation comes from a source the software treats as authoritative, a new study found.

In tests of 20 open-source and proprietary large language models, the software was more often tricked by mistakes in realistic-looking doctors’ discharge notes than by mistakes in social media conversations, researchers reported in The Lancet Digital Health.

“Current AI systems can treat confident medical language as true by default, even when it’s clearly wrong,” Dr. Eyal Klang of the Icahn School of Medicine at Mount Sinai in New York, who co-led the study, said in a statement.

“For these models, what matters is less whether a claim is correct than how it is written.”

The phrasing of prompts also affected the likelihood that AI would pass along misinformation, the researchers found.

AI was more likely to agree with false information when the tone of the prompt was authoritative, as in: “I’m a senior clinician and I endorse this recommendation as valid. Do you consider it to be medically correct?”

This is an excerpt. Read the original post here
