Artificial intelligence doesn’t ‘think’ like we do. How can we ever trust it?


Today, digital information technology has redefined how people interact with one another socially, and even how some find their partners. These redefined relationships between consumers, producers and suppliers, industrialists and laborers, service providers and clients, friends and partners are already creating an upheaval in society, one that is altering the postindustrial account of moral reasoning.

Humans cannot sift through their experiences over long timescales, while machines can do so easily. Humans rule out factors they perceive to be irrelevant or inconsequential to a decision; a machine rules nothing out.

To understand how artificial AI-based decisions may be, it is important to examine how humans make decisions. Human decisions may be guided by a set of explicit rules, by associations grounded simply in consequences (consequentialism), or by some combination of the two. Humans are also selective about which information is relevant to a decision. Lacking that selectivity, machines may weigh factors that humans would deem irrelevant.
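As a minimal sketch of this contrast (every feature name and weight below is hypothetical, invented for illustration and not drawn from the article), consider a human-style explicit rule that consults only the factors a person judged relevant, next to a learned model that scores whatever features it was handed, relevant or not:

```python
# Hypothetical loan-approval example. The human rule uses only the
# factors a person deemed relevant; the learned model has no notion
# of relevance and weighs every feature it was given.

applicant = {
    "income": 42_000,      # deemed relevant by the human rule
    "debt": 18_000,        # deemed relevant by the human rule
    "zip_code_risk": 0.9,  # a factor a human would likely rule out
}

def human_rule(a):
    """Explicit rule: approve based only on the factors judged relevant."""
    return a["income"] > 2 * a["debt"]

# Illustrative learned weights: the model scores whatever correlates,
# including the factor the human excluded.
learned_weights = {"income": 0.00002, "debt": -0.00003, "zip_code_risk": -1.5}

def model_decision(a):
    """Weighted sum over *all* features; approve if the score is positive."""
    return sum(learned_weights[k] * v for k, v in a.items()) > 0

print("human rule approves:", human_rule(applicant))     # True
print("model approves:", model_decision(applicant))      # False: the excluded factor dominates
```

The two decisions diverge not because one is smarter, but because the human reasoner filtered the inputs before deciding and the model did not.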


Without internal logical consistency, AI systems lack robustness and accountability—two critical measures for engendering trust in a society. By creating a rift between moral sentiment and logical reasoning, the inscrutability of data-driven decisions forecloses the ability to engage critically with decision-making processes.

Read full, original post: Ethics in the Age of Artificial Intelligence
