A new report by more than 20 researchers from the Universities of Oxford and Cambridge, OpenAI, and the Electronic Frontier Foundation warns that artificial intelligence creates new opportunities for criminals, political operatives, and oppressive governments—so much so that some AI research may need to be kept secret.
The report, *The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation*, includes dystopian vignettes involving artificial intelligence that seem taken straight out of the Netflix science-fiction show *Black Mirror*:
- large-scale scam operations that identify potential victims online by the truckload, using AI to spot people with wealth
- convincing news reports made up of authentic-looking but entirely fake AI-generated video and pictures
- attacks by swarms of drones that a single person controls, using an AI to manage large numbers of semi-autonomous machines
- systems that automate the drudge work of criminality—for example, negotiating ransom payments with people after infecting their computers with malware—to enable scams at scale
The study is less certain about how to counter such threats. It recommends more research and debate on the risks of AI and suggests that AI researchers adopt a strong code of ethics. But it also says they should explore ways of restricting potentially dangerous information, much as research into other "dual use" technologies with weapons potential is sometimes controlled.
Read full, original post: The “Black Mirror” scenarios that are leading some experts to call for more secrecy on AI