The frightening thing about military AI: It may be too easily fooled, ‘turned against its owners’

October 31, 2019

Last March, Chinese researchers announced an ingenious and potentially devastating attack against one of America’s most prized technological assets—a Tesla electric car.

Tesla’s algorithms are normally brilliant at spotting drops of rain on a windshield or following the lines on the road, but they work in a way that’s fundamentally different from human perception. That makes such “deep learning” algorithms, which are rapidly sweeping through industries for applications such as facial recognition and cancer diagnosis, surprisingly easy to fool if you find their weak points.
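The weakness being exploited here is the adversarial example: tiny, carefully chosen changes to an input that flip a model's answer. A minimal sketch of the idea, using a hypothetical linear classifier rather than any real Tesla system (all names, sizes, and values below are illustrative assumptions):

```python
import numpy as np

# Hypothetical toy model: a linear "classifier" scoring a flattened input
# vector x as w . x; a positive score means the model "sees" the feature.
rng = np.random.default_rng(0)
w = rng.normal(size=100)   # model weights, assumed known to the attacker
x = rng.normal(size=100)   # an input the model currently scores

def score(v):
    return float(w @ v)

# Gradient-sign-style attack: nudge every component a tiny amount (eps)
# in the direction that most decreases the score. For a linear model,
# the gradient of the score with respect to x is simply w.
eps = 0.1
x_adv = x - eps * np.sign(w)

# Each component changes by at most eps, yet the score drops sharply,
# because the small per-pixel changes all push in the worst direction.
print(score(x), score(x_adv))
```

The per-component perturbation is capped at `eps`, so the altered input looks nearly identical to the original, but the score shifts by `eps` times the sum of the weight magnitudes. This is why knowing how a model works from the inside, as in the Tesla demonstration, is such an advantage for an attacker.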

Leading a Tesla astray might not seem like a strategic threat to the United States. But what if similar techniques were used to fool attack drones, or software that analyzes satellite images, into seeing things that aren’t there—or not seeing things that are?

The ambition to build the smartest, and deadliest, weapons is understandable, but as the Tesla hack shows, an enemy that knows how an AI algorithm works could render it useless or even turn it against its owners. The secret to winning the AI wars might rest not in making the most impressive weapons but in mastering the disquieting treachery of the software.

Read full, original post: Military artificial intelligence can be easily and dangerously fooled

The GLP aggregated and excerpted this article to reflect the diversity of news, opinion, and analysis. Click the link above to read the full, original article.