Microsoft’s research paper, provocatively titled “Sparks of Artificial General Intelligence,” goes to the heart of what technologists have been working toward — and fearing — for decades. If researchers build a machine that works like the human brain, or even better, it could change the world. But it could also be dangerous.
And it could also be nonsense. Making A.G.I. claims can be a reputation killer for computer scientists. What one researcher believes is a sign of intelligence can easily be explained away by another, and the debate often sounds more appropriate to a philosophy club than a computer lab. Last year, Google fired a researcher who claimed that a similar A.I. system was sentient, a step beyond what Microsoft has claimed. A sentient system would not just be intelligent. It would be able to sense or feel what is happening in the world around it.
But some believe the industry has in the past year or so inched toward something that can’t be explained away: a new A.I. system that comes up with humanlike answers and ideas that weren’t programmed into it.
Some A.I. experts saw the Microsoft paper as an opportunistic effort to make big claims about a technology that no one quite understood. Researchers also argue that general intelligence requires a familiarity with the physical world, which GPT-4, in theory, does not have.