The frightening thing about military AI: It may be too easily fooled, ‘turned against its owners’

Image: Yomiuri Shimbun/AP Images

Last March, Chinese researchers announced an ingenious and potentially devastating attack against one of America’s most prized technological assets—a Tesla electric car.

Tesla’s algorithms are normally brilliant at spotting drops of rain on a windshield or following the lines on the road, but they work in a way that’s fundamentally different from human perception. That makes such “deep learning” algorithms, which are rapidly sweeping through industries for applications such as facial recognition and cancer diagnosis, surprisingly easy to fool if you find their weak points.
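Those “weak points” are what researchers call adversarial examples: tiny, carefully chosen perturbations to an input that flip a model’s prediction. A minimal sketch of the idea, using the fast gradient sign method against a toy linear classifier (the model, weights, and epsilon here are invented for illustration, not anything used in the Tesla attack):

```python
import numpy as np

# Toy classifier: p(class=1) = sigmoid(w . x + b).
# Weights are random stand-ins; a real attack targets a trained network.
rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

def fgsm(x, y_true, eps):
    """Fast gradient sign method: nudge each input feature by +/- eps
    in the direction that increases the classifier's loss."""
    # For this model, the cross-entropy gradient w.r.t. x is (p - y) * w.
    grad = (predict(x) - y_true) * w
    return x + eps * np.sign(grad)

# An input the model classifies confidently as class 1 ...
x = w / np.linalg.norm(w)
# ... and a barely perturbed copy that the model scores much lower.
x_adv = fgsm(x, y_true=1.0, eps=0.5)
print(predict(x), predict(x_adv))
```

The unsettling part, and the reason the technique generalizes from rain sensors to satellite imagery, is that the perturbation is computed directly from the model’s own gradients, so knowing how the algorithm works is enough to defeat it.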

Leading a Tesla astray might not seem like a strategic threat to the United States. But what if similar techniques were used to fool attack drones, or software that analyzes satellite images, into seeing things that aren’t there—or not seeing things that are?

The ambition to build the smartest, and deadliest, weapons is understandable, but as the Tesla hack shows, an enemy that knows how an AI algorithm works could render it useless or even turn it against its owners. The secret to winning the AI wars might rest not in making the most impressive weapons but in mastering the disquieting treachery of the software.

Read full, original post: Military artificial intelligence can be easily and dangerously fooled
