
Artificial intelligence doesn’t ‘think’ like we do. How can we ever trust it?

Today, digital information technology has redefined how people interact socially, and even how some find their partners. These redefined relationships between consumers, producers, and suppliers; industrialists and laborers; service providers and clients; and friends and partners are already creating an upheaval in society, one that is altering the postindustrial account of moral reasoning.

Humans cannot sift through their experiences over long timescales; machines can do so easily. Humans rule out factors they perceive as irrelevant or inconsequential to a decision; a machine rules nothing out.

To understand how artificial the artificial intelligence–based decisions may be, it is important to examine how humans make decisions. Human decisions may be guided by a set of explicit rules, by associations based simply on consequences, or by a combination of the two. Humans are also selective about which information is relevant to a decision. Lacking that selectivity, machines may consider factors that humans would deem irrelevant.
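As a loose sketch of this contrast (an illustration, not something from the original article), consider a hypothetical loan decision. The rule-based function below mimics human selectivity, consulting only two hand-picked factors, while the data-driven one scores every available feature, including ones a person would dismiss out of hand. All feature names, thresholds, and weights are invented for the example.

```python
def rule_based_decision(applicant):
    """Human-style decision: explicit rules over a few
    deliberately chosen, relevant factors."""
    if applicant["income"] < 30_000:
        return "deny"
    if applicant["debt_ratio"] > 0.5:
        return "deny"
    return "approve"


def data_driven_decision(applicant, weights):
    """Machine-style decision: a weighted score over every
    available feature, relevant or not."""
    score = sum(weights[k] * value for k, value in applicant.items())
    return "approve" if score > 0 else "deny"


# Hypothetical applicant and learned weights, invented for illustration.
applicant = {"income": 45_000, "debt_ratio": 0.4,
             "zip_code": 60601, "hour_of_application": 3}
weights = {"income": 0.001, "debt_ratio": -50.0,
           "zip_code": -0.0005, "hour_of_application": -2.0}

print(rule_based_decision(applicant))            # approve: only income and debt matter
print(data_driven_decision(applicant, weights))  # deny: zip code and hour tip the score
```

The two functions disagree on the same applicant precisely because the scored version weighs factors the rule-based one deliberately ignores, which is the selectivity gap the excerpt describes.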


Without internal logical consistency, AI systems lack robustness and accountability, two qualities critical for engendering trust in a society. By creating a rift between moral sentiment and logical reasoning, the inscrutability of data-driven decisions forecloses the ability to engage critically with decision-making processes.

Read full, original post: Ethics in the Age of Artificial Intelligence

The GLP aggregated and excerpted this article to reflect the diversity of news, opinion, and analysis. Click the link above to read the full, original article.
