
Delving into the real danger of artificial intelligence

July 3, 2018

CEOs of artificial intelligence companies usually seek to minimize the threats posed by AI, rather than play them up. But on this week’s episode of Converge, Clara Labs co-founder and CEO Maran Nelson tells us there is real reason to be worried about AI — and not for the reasons that science fiction has trained us to expect.

Movies like Her and Ex Machina depict a near future in which anthropomorphic artificial intelligences manipulate our emotions and even commit violence against us. But threats like Ex Machina’s Ava will require several technological breakthroughs before they’re even remotely plausible, Nelson says. And in the meantime, actual state-of-the-art AI — which uses machine learning to make algorithmic predictions — is already causing harm.

Algorithmic predictions, such as guesses about which articles you might want to read, have already contributed to the spread of misinformation on Facebook and to the 2008 financial crisis, Nelson says. And because these algorithms operate invisibly — unlike Ava and other AI characters in fiction — they’re more pernicious.


Clara’s approach to AI is innocuous to the point of being dull: it makes a virtual assistant that schedules meetings for people.

Even a state-of-the-art AI can’t process every scheduling request with a high degree of confidence, so Clara hires people to check the AI’s work.

Nelson says this human oversight is essential to building AI that is both powerful and responsible.

Read full, original post: How science fiction is training us to ignore the real threats posed by AI

The GLP aggregated and excerpted this article to reflect the diversity of news, opinion, and analysis. Click the link above to read the full, original article.
