The release of ChatGPT … set off fears that AI could soon be capable of surpassing human intellect—and with that capability comes the potential to cause superhuman harm. Could terrorists use an AI model to learn how to build a bioweapon that kills a million people? Could hackers use it to run millions of simultaneous cyberattacks? Could the AI reprogram and even reproduce itself?
No one thinks today’s AI models are capable of becoming the next HAL 9000 from “2001.” But the timeline for if and when AI might get that dangerous is a hot topic of debate. Elon Musk and OpenAI Chief Executive Sam Altman both say artificial general intelligence, or AI that broadly exceeds human intelligence, could arrive in a few years. Logan Graham, who runs Anthropic’s Frontier Red Team, is also planning for a short time frame.