Artificial intelligence is often hailed as a great catalyst of medical innovation, a way to find cures for diseases that have confounded doctors and to make health care more efficient, personalized, and accessible.
But what if it turns out to be poison?
Jonathan Zittrain, a Harvard Law School professor, posed that question during a conference in Boston [June 18] that examined the use of AI to accelerate the delivery of precision medicine to the masses.
Kadija Ferryman, a fellow at the Data & Society Research Institute in New York, said AI is just as likely to perpetuate bias as it is to eliminate it. That’s because bias is embedded in the data being fed to algorithms, whose outputs could be skewed as a result.
She cited an article in The Atlantic magazine that highlighted an algorithm used to identify skin cancer that was less effective in people with darker skin. In mental health care, data kept in electronic medical records has been shown to be infused with bias against women and people of color.
The inequity in the data doesn’t just translate to unequal treatment; it can lead to ineffective care, Ferryman said.
Read full, original post: What if AI in health care is the next asbestos?