Pursuing artificial intelligence that’s free of bias

August 30, 2018
Image credit: Shutterstock
This article or excerpt is included in the GLP’s daily curated selection of ideologically diverse news, opinion and analysis of biotechnology innovation.

As the problems caused by algorithmic bias have bubbled to the surface, experts have proposed all sorts of solutions for making artificial intelligence fairer and more transparent so that it works for everyone.

Now scientists from IBM have a new safeguard that they say will make artificial intelligence safer, more transparent, fairer, and more effective. They propose that, right before developers start selling an algorithm, they should publish a Supplier’s Declaration of Conformity (SDoC). Like a report or user manual, the SDoC would show how well the algorithm performed on standardized tests of performance, fairness, risk factors, and safety. And developers should make it available to anyone who’s interested.
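To make the idea concrete, here is a minimal, hypothetical sketch of one kind of standardized fairness check an SDoC might report. The metric, function name, and toy data below are illustrative assumptions, not IBM's actual specification: disparate impact compares how often a model gives favorable outcomes to one group versus another, with a ratio near 1.0 suggesting similar treatment.

```python
def disparate_impact(outcomes, groups, protected, reference):
    """Ratio of favorable-outcome rates: protected group / reference group.

    outcomes: list of 0/1 model decisions (1 = favorable)
    groups:   list of group labels, aligned with outcomes
    """
    def rate(g):
        picks = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(picks) / len(picks)

    return rate(protected) / rate(reference)


# Toy data: group "b" receives favorable outcomes far less often than "a".
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(round(disparate_impact(outcomes, groups, "b", "a"), 2))  # → 0.33
```

A declaration could publish numbers like this for each protected attribute, letting a prospective buyer see at a glance whether the ratio falls below a threshold such as the commonly cited four-fifths (0.8) rule.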

In a research paper published [August 22], the IBM scientists argue that this kind of transparency could help build public trust and reassure prospective clients that a particular algorithm will do what it’s supposed to without screwing anyone over based on biased training data.


Will a police department like the LAPD, which has used blatantly racist policing algorithms in the past, care enough about the details of an SDoC to find a better system? The truth is, we don’t know yet.

But if you combine these reports with other tools like third-party audits, the public can demand algorithms that treat everyone fairly.

Read full, original post: To Build Trust In Artificial Intelligence, IBM Wants Developers To Prove Their Algorithms Are Fair
