
We don’t always know why ‘intelligent’ machines do what they do. Should we study them like animals?

Artificial intelligence algorithms are often seen as ‘black boxes’ whose rules remain inaccessible. We must therefore create a new scientific discipline to understand the behaviour of the machines that rely on them, just as we did for the study of animal behaviour. This is the view of Jean-François Bonnefon, who, along with 22 other scientists, recently signed an editorial in the journal Nature.

Understanding the behaviour of intelligent machines is a broader objective than understanding how they are programmed. Sometimes a machine’s programming is not accessible, for example when its code is a trade secret. In such cases, it is important to understand the machine from the outside, by observing its actions and measuring their consequences. At other times, it is not possible to fully predict a machine’s behaviour from its code, because that behaviour changes in complex ways as the machine adapts to its environment through a learning process that is guided but ultimately opaque. In these cases, we need to observe the behaviour continually and simulate its potential evolution.


A new scientific discipline dedicated to machine behaviour is needed to meet these challenges, just as we created the scientific discipline of animal behaviour, ethology.

Read full, original post: Is a Robot just another “Animal”?

The GLP aggregated and excerpted this article to reflect the diversity of news, opinion, and analysis. Click the link above to read the full, original article.
