Can AI function like the human brain? Researchers have been asking this question for decades, and their long pursuit has drawn doubt, ridicule, and scorn.
But now, armed with Neuromorphic Computing, they are ready to show the world that their dream can change it for the better. As the benefits unfold, the success of our machine learning and AI quest seems to depend to a great extent on the success of Neuromorphic Computing.
Technologies of the future, like autonomous vehicles and robots, will need to access and process enormous amounts of data and information in real time. Today, to a limited extent, this is done by machine learning and AI systems that depend on supercomputer-class power. But these needs are growing, and speed, power consumption, and size are emerging as the prime impediments.
Neuromorphic Computing chips can process data and learn tasks and patterns at high speed. These chips are expected to consume far less power (up to 1,000 times less) while working with the efficiency of supercomputers.
Researchers know it, and so they are leaving no stone unturned. While a few are getting as literal as copying the physical form of the human brain, others are trying to replicate its function. It is the latter we are optimistic about, as they are expected to move beyond Gordon Moore’s groundbreaking idea of “packing transistors onto substrates”.
Though many innovators are committed to the cause, a few are leading the race. Here’s a look at the important developments:
Intel’s Loihi - The Future of GPUs
A 14-nanometer chip with over 2 billion transistors and three managing Lakemont cores, Loihi contains a programmable microcode engine for on-chip training of asynchronous spiking neural networks (SNNs). In total, it packs 128 neuromorphic cores. Each core has a built-in learning module, and the chip holds around 131,000 computational “neurons” that communicate with one another, allowing the chip to understand stimuli.
In the future, Loihi is expected to learn from experience and make decisions on its own. As icing on the cake, it uses a fraction of the energy of conventional hardware and is expected to replace GPUs.
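To make “spiking” concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, the standard textbook model behind SNNs. This is an illustration only, assuming arbitrary parameter values; the simulate_lif function and its constants are hypothetical and do not reflect Loihi’s actual microcode or API.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: an illustrative sketch of
# the kind of "spiking" unit a neuromorphic core implements in hardware.
# All parameters are arbitrary illustrative values, not Loihi's.

def simulate_lif(input_current, v_rest=0.0, v_threshold=1.0, leak=0.9):
    """Integrate input over time; emit a spike (1) when the membrane
    potential crosses threshold, then reset the potential."""
    v = v_rest
    spikes = []
    for i_t in input_current:
        v = leak * v + i_t       # leaky integration of incoming current
        if v >= v_threshold:     # threshold crossing -> emit a spike
            spikes.append(1)
            v = v_rest           # reset after spiking
        else:
            spikes.append(0)
    return spikes

# A constant drive produces a regular spike train; stronger input fires faster.
print(simulate_lif([0.3] * 20))
```

Because a neuron only “speaks” when it crosses threshold, most of the chip sits idle most of the time, which is where the dramatic power savings of spiking hardware come from.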
IBM’s TrueNorth - The Hercules of Transistor Count
Built on Samsung’s 28 nm process, TrueNorth has 4,096 cores and 5.4 billion transistors. It is IBM’s largest chip by transistor count, yet it draws less than 100 mW of power while simulating complex recurrent neural networks, with a power density of 20 mW/cm². (With a die area of roughly 4.3 cm², that density works out to under 100 mW, consistent with the stated figure.)
IBM says it can efficiently process “high-dimensional, noisy sensory data in real-time”, all while consuming far less power than a conventional computer.
MIT’s Brain on a Chip
A chip built from silicon germanium whose artificial synapses mimic the “more than 100 trillion synapses that mediate neuron signaling in the brain”. In one simulation, it recognized samples of human handwriting with 95 percent accuracy. It could be used in building humanoid robots and autonomous driving technology.
Qualcomm’s Zeroth Processors
Working toward three main goals of “biologically inspired learning; enabling devices to see and perceive the world as humans do; and creating and defining a Neural Processing Unit (NPU)”, Qualcomm is developing a new computer architecture that breaks the traditional mold.
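As an illustration of what “biologically inspired learning” typically means in the neuromorphic world, here is a minimal sketch of spike-timing-dependent plasticity (STDP), a classic synaptic learning rule. This is a generic textbook rule, not Qualcomm’s implementation; the stdp_weight_update function and all of its constants are hypothetical.

```python
import math

# Illustrative sketch of spike-timing-dependent plasticity (STDP):
# a synapse strengthens when the input (pre) spike arrives just before
# the output (post) spike, and weakens when the order is reversed.
# Time constants and learning rates are hypothetical.

def stdp_weight_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.055,
                       tau=20.0, w_min=0.0, w_max=1.0):
    """Update one synapse weight from the relative timing of a
    pre-synaptic and a post-synaptic spike."""
    dt = t_post - t_pre
    if dt > 0:      # pre fired first: potentiation
        w += a_plus * math.exp(-dt / tau)
    elif dt < 0:    # post fired first: depression
        w -= a_minus * math.exp(dt / tau)
    return min(max(w, w_min), w_max)  # keep the weight in bounds

w = 0.5
w = stdp_weight_update(w, t_pre=10.0, t_post=15.0)  # causal pairing -> stronger
w = stdp_weight_update(w, t_pre=30.0, t_post=25.0)  # anti-causal -> weaker
print(round(w, 4))
```

The asymmetry (strengthen when cause precedes effect, weaken otherwise) lets a network pick up temporal structure from spike timing alone, with no labeled data.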
Neuromorphic computing can greatly impact the future of machine learning and AI. As a Deloitte market analysis puts it: “These new kinds of chips should increase dramatically the use of machine learning, enabling applications to consume less power and at the same time become more responsive.”
With Neuromorphic Computing at its side, the future of AI sure looks bright.
Did you know?
Neuromorphic Computing is the 5th generation of AI.
The 1st generation of AI defined rules and followed classical logic to arrive at conclusions within a specific, narrowly defined problem domain.
The 2nd generation of AI used deep learning networks to analyze inputs and was focused on sensing and perception.
The 3rd generation of AI interpreted and adapted like the human thought process.
The 4th generation of AI used a mix of different machine learning algorithms and other forms of Artificial Intelligence to achieve its goal or mission.