Why the quest for artificial intelligence almost died in infancy

October 19, 2017

It feels as if we’re riding the wave of a novel technological era, but the current rise of neural networks is actually a renaissance of sorts.

It may be hard to believe, but artificial intelligence researchers were already seeing the promise of neural networks in their mathematical models as early as World War II. Yet by the 1970s, the field was ready to give up on them entirely.

[I]n 1959, researchers at Stanford University unveiled ADALINE, at the time the biggest artificial brain. But it, too, could handle only a few processes at once, and it was meant as a demonstration of machine learning rather than being set to a specific task.

“Unfortunately, these early successes caused people to exaggerate the potential of neural networks, particularly in light of the limitations of the electronics then available,” [professor Eyal Reingold] wrote in a history of artificial intelligence.

ADALINE and its primitive cousins may have faded from public memory as machine learning has come into its own over the past decade. But this revolution, decades in the making, wasn’t hampered by those early neural networks. Rather, they were somehow both too primitive and too advanced for their time. Their time has certainly come.

The GLP aggregated and excerpted this blog/article to reflect the diversity of news, opinion, and analysis. Read full, original post: We Almost Gave Up On Building Artificial Brains
