Rooting racism and sexism out of artificial intelligence

November 2, 2017
This article or excerpt is included in the GLP’s daily curated selection of ideologically diverse news, opinion and analysis of biotechnology innovation.

In 2016, Microsoft released Tay, a “playful” chatbot, on Twitter to show off the tech giant’s burgeoning artificial intelligence research. Within 24 hours, it had become one of the internet’s ugliest experiments.

While it was a public relations disaster for Microsoft, Tay demonstrated an important issue with machine-learning artificial intelligence: systems can become as racist, sexist and prejudiced as humans if they acquire knowledge from text written by humans. Fortunately, scientists may now have discovered a way to better understand the decision-making process of artificial intelligence algorithms and prevent such bias.

[R]esearchers say [deep learning tool] DeepXplore can be used on artificial intelligence in air traffic control systems, as well as to uncover malware disguised as benign code in antivirus software. The technology may also prove useful in eliminating racism and other discriminatory assumptions embedded within predictive policing and criminal sentencing software.

Earlier this year, a separate team of researchers from Princeton University and Bath University in the UK warned of artificial intelligence replicating the racial and gender prejudices of humans. “Don’t think that AI is some fairy godmother,” said study co-author Joanna Bryson. “AI is just an extension of our existing culture.”
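The Princeton and Bath study measured this kind of inherited prejudice in word embeddings, the numerical word representations that machine-learning systems derive from large bodies of human-written text. Words used in similar contexts end up with similar vectors, so stereotyped associations in the training text surface as measurable geometric proximity. A minimal sketch of the idea, using hypothetical toy vectors rather than real trained embeddings:

```python
import math

def cosine(u, v):
    """Cosine similarity: how closely two word vectors point in the same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-dimensional "embeddings" with made-up values, for illustration only;
# real studies use vectors trained on corpora of billions of words.
vectors = {
    "programmer": [0.9, 0.1, 0.3],
    "he":         [0.8, 0.2, 0.3],
    "she":        [0.2, 0.9, 0.4],
}

# If the training text associates "programmer" more with "he" than "she",
# that stereotype shows up as a positive similarity gap.
bias = (cosine(vectors["programmer"], vectors["he"])
        - cosine(vectors["programmer"], vectors["she"]))
print(f"association gap: {bias:.3f}")  # positive -> leans toward "he"
```

This is only a schematic of the measurement: the actual study used statistical association tests over many word sets, but the underlying signal is this same similarity gap, absorbed directly from human-written text.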

The GLP aggregated and excerpted this blog/article to reflect the diversity of news, opinion, and analysis. Read full, original post: Robots With Artificial Intelligence Become Racist and Sexist—Scientists Think They’ve Found a Way to Change Their Minds
