Why people don’t trust artificial intelligence: It’s an ‘explainability’ problem

September 26, 2018

Despite its promise, the field of Artificial Intelligence (AI) is experiencing growing pains. In addition to the problem of bias I discussed in a previous article, there is also the ‘black box’ problem: if people don’t know how an AI system arrives at its decisions, they won’t trust it.

In fact, this lack of trust was at the heart of many failures of one of the best-known AI efforts: IBM Watson – in particular, Watson for Oncology.

If oncologists had understood how Watson had come up with its [diagnoses] – what the industry refers to as ‘explainability’ – their trust level might have been higher.

“The more complex a system is, the less explainable it will be,” says John Zerilli, a postdoctoral fellow at the University of Otago and a researcher in explainable AI. “If you want your system to be explainable, you’re going to have to make do with a simpler system that isn’t as powerful or accurate.”
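To make the tradeoff Zerilli describes concrete, here is a minimal sketch, not drawn from the article, that assumes Python and scikit-learn: a shallow decision tree whose rules can be printed and read end to end, next to a larger random-forest ensemble that is typically more accurate but offers no comparably compact explanation of any individual prediction.

```python
# Illustrative sketch only: contrasts a small, inspectable model with a more
# powerful but opaque one. Dataset and library choices are assumptions, not
# anything referenced in the article.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy dataset; the point is the contrast in explainability, not the task itself.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# A shallow decision tree: every prediction can be traced through a few printed rules.
simple = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)
print(export_text(simple, feature_names=list(data.feature_names)))
print("shallow tree accuracy:", simple.score(X_test, y_test))

# A 500-tree ensemble: usually more accurate, but no single human-readable
# rule set accounts for an individual prediction.
forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)
print("random forest accuracy:", forest.score(X_test, y_test))
```

On a dataset like this, the ensemble will usually edge out the shallow tree on accuracy while giving up the readable rule list – the tradeoff in miniature.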


The $2 billion that DARPA is investing in what it calls ‘third-wave AI systems,’ however, may well be enough to resolve this tradeoff. “[Explainable AI] is one of a handful of current DARPA programs expected to enable ‘third-wave AI systems,’ where machines understand the context and environment in which they operate, and over time build underlying explanatory models that allow them to characterize real world phenomena,” says DARPA’s David Gunning.

Read full, original post: Don’t Trust Artificial Intelligence? Time To Open The AI ‘Black Box’
