
Here’s how artificial intelligence could ‘poison’ healthcare

July 3, 2019

Artificial intelligence is often hailed as a great catalyst of medical innovation, a way to find cures to diseases that have confounded doctors and make health care more efficient, personalized, and accessible.

But what if it turns out to be poison?

Jonathan Zittrain, a Harvard Law School professor, posed that question during a conference in Boston [June 18] that examined the use of AI to accelerate the delivery of precision medicine to the masses.

Kadija Ferryman, a fellow at the Data & Society Research Institute in New York, said AI is just as likely to perpetuate bias as it is to eliminate it. That’s because bias is embedded in the data being fed to algorithms, whose outputs could be skewed as a result.

She cited an article in The Atlantic magazine that highlighted an algorithm used to identify skin cancer that was less effective in people with darker skin. In mental health care, data kept in electronic medical records has been shown to be infused with bias toward women and people of color.

The inequity in the data doesn’t just translate into unequal treatment; it can lead to ineffective care, said Ferryman.

Read full, original post: What if AI in health care is the next asbestos?

Related article: Viewpoint: Stop using human intelligence to explain machine learning
The GLP aggregated and excerpted this article to reflect the diversity of news, opinion, and analysis. Click the link above to read the full, original article.
