Neural networks are the crown jewel of the AI boom. They gorge on data and do things like transcribe speech or describe images with near-perfect accuracy.
The catch is that neural nets, which are modeled loosely on the structure of the human brain, are typically constructed in software rather than hardware, and the software runs on conventional computer chips. That slows things down.
IBM has now shown that building key features of a neural net directly in silicon can make it 100 times more efficient. Chips built this way might turbocharge machine learning in coming years.
The IBM chip, like a neural net written in software, mimics the synapses that connect individual neurons in a brain. The strength of these synaptic connections needs to be tuned in order for the network to learn. In a living brain, this happens in the form of connections growing or withering over time.
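In software, that tuning is just arithmetic: each synapse is a number (a weight) that a learning rule nudges up or down after every example. The sketch below is a generic illustration of that idea, not IBM's method: a single artificial neuron trained with the classic delta rule on a toy task, where weights "grow or wither" in proportion to their contribution to the error.

```python
# Hypothetical illustration of synaptic-weight tuning (not IBM's chip):
# a single artificial neuron learns a toy task with the delta rule.

def train_neuron(samples, lr=0.1, epochs=50):
    """Learn weights mapping inputs to targets via the delta rule."""
    n = len(samples[0][0])
    w = [0.0] * n          # synaptic strengths, all initially "weak"
    b = 0.0                # bias term
    for _ in range(epochs):
        for x, target in samples:
            out = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = target - out
            # Strengthen or weaken each connection in proportion to its
            # contribution to the error -- the software analogue of a
            # synapse growing or withering over time.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Toy task: learn a linear score for the logical-OR pattern.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_neuron(data)
```

A chip that implements this update in analog hardware avoids shuttling every weight between memory and processor on each step, which is where the claimed efficiency gain comes from.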
The researchers tested the components in a neural network on two simple image-recognition tasks: handwriting recognition and color image classification. They found the system to be as accurate as a software-based deep neural network even though it consumed only 1 percent as much energy.
IBM will still need to build and test a complete chip. Nevertheless, the work may be a significant, biologically inspired step toward a computer with AI logic burned into its core.
Read full, original post: AI could get 100 times more energy-efficient with IBM’s new artificial synapses