The conversation about unconscious bias in artificial intelligence often focuses on algorithms that unintentionally cause disproportionate harm to entire swaths of society—those that wrongly predict black defendants will commit future crimes, for example, or facial-recognition technologies trained mainly on photos of white men that do a poor job of identifying women and people with darker skin.
But the problem could run much deeper than that. Society should be on guard for another twist: the possibility that nefarious actors could seek to attack artificial intelligence systems by deliberately introducing bias into them. … This could introduce a worrisome new dimension to cyberattacks, disinformation campaigns or the proliferation of fake news.
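The attack described above is often called data poisoning: rather than exploiting accidental bias, an adversary corrupts training data so the resulting model systematically disadvantages a target group. The sketch below is a minimal, hypothetical illustration using an invented toy "model" and made-up group names and rates; it only shows the principle that flipping labels for one subgroup skews the learned outcome for that subgroup.

```python
import random

def train_majority(examples):
    """Toy 'model': for each group, predict the majority label seen in training."""
    counts = {}
    for group, label in examples:
        pos, tot = counts.get(group, (0, 0))
        counts[group] = (pos + label, tot + 1)
    return {g: pos / tot >= 0.5 for g, (pos, tot) in counts.items()}

# Clean training data (hypothetical): both groups merit approval (label 1)
# about 80% of the time.
random.seed(0)
clean = [(g, 1 if random.random() < 0.8 else 0)
         for g in ("group_a", "group_b") for _ in range(100)]

# Poisoning: an attacker flips approvals to denials, but only for group_b.
poisoned = [(g, 0 if g == "group_b" else y) for g, y in clean]

print(train_majority(clean))     # both groups approved
print(train_majority(poisoned))  # group_b now systematically denied
```

Even this trivial majority-vote "model" inherits the attacker's bias; real machine-learning systems trained on poisoned data can absorb such skew in far less visible ways.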
According to a U.S. government study on big data and privacy, biased algorithms could make it easier to mask discriminatory lending, hiring or other unsavory business practices.
…
Academics and industry observers have called for legislative oversight that addresses technological bias. Tech companies have pledged to combat unconscious bias in their products by diversifying their workforces and providing unconscious bias training.
As with technological advances throughout history, we must continue to examine how we implement algorithms in society and what outcomes they produce. Identifying and addressing bias in those who develop algorithms, and in the data used to train them, will go a long way toward ensuring that artificial intelligence systems benefit us all.
Read full, original post: When AI Misjudgment Is Not an Accident