Can Google’s medical AI improve our medical system? Laboratory results and real life offer different answers

[Image] A nurse operates the retinal scanner, taking images of the back of a patient’s eye. Credit: Google

[A] study from Google Health—the first to look at the impact of a deep-learning tool in real clinical settings—reveals that even the most accurate AIs can actually make things worse.

In the system Thailand had been using, nurses take photos of patients’ eyes during check-ups and send them off to be looked at by a specialist elsewhere—a process that can take up to 10 weeks. AI developed by Google Health can identify signs of diabetic retinopathy from an eye scan with more than 90% accuracy—which the team calls “human specialist level”—and, in principle, give a result in less than 10 minutes.

When it worked well, the AI did speed things up. But it sometimes failed to give a result at all. Like most image recognition systems, the deep-learning model had been trained on high-quality scans; to ensure accuracy, it was designed to reject images that fell below a certain threshold of quality. With nurses scanning dozens of patients an hour and often taking the photos in poor lighting conditions, more than a fifth of the images were rejected.
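
To make that failure mode concrete, here is a minimal sketch in Python of the kind of quality gate the article describes. Everything in it is a hypothetical stand-in: the function names, the threshold value, and the toy scoring heuristic are assumptions for illustration, not Google Health's actual implementation.

```python
from typing import Optional

# Assumed cutoff for the sketch; the real threshold is not public.
QUALITY_THRESHOLD = 0.8


def quality_score(scan: bytes) -> float:
    """Stand-in for a learned image-quality estimator (0.0 = unusable).

    A real system might score focus, illumination, and field of view;
    this toy heuristic exists only so the sketch runs.
    """
    return 0.5 if len(scan) < 1000 else 0.9


def grade_retinopathy(scan: bytes) -> str:
    """Stand-in for the deep-learning grader itself."""
    return "no referable diabetic retinopathy"  # placeholder result


def screen_patient(scan: bytes) -> Optional[str]:
    """Grade a retinal scan, or return None if the image is rejected."""
    if quality_score(scan) < QUALITY_THRESHOLD:
        # Photos taken quickly and in poor lighting fall below the cutoff;
        # per the article, more than a fifth of images were rejected this way.
        return None  # the patient must be rescanned or referred anyway
    return grade_retinopathy(scan)
```

The design trade-off is the point of the study: a hard rejection threshold protects the model's accuracy statistics, but in a busy clinic every rejected image turns a ten-minute result back into extra work for nurses and patients.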


“This is a crucial study for anybody interested in getting their hands dirty and actually implementing AI solutions in real-world settings,” says [AI researcher] Hamid Tizhoosh.

