Neuromorphic chips have been shown in research to be much more energy efficient at running large deep learning networks than non-neuromorphic hardware, a finding that could become important as AI adoption increases.
The study was conducted by the Institute of Theoretical Computer Science at Graz University of Technology (TU Graz) in Austria using Intel’s Loihi 2 silicon, a second-generation experimental neuromorphic chip announced by Intel Labs last year that has about one million artificial neurons.
Their research paper, “A Long Short-Term Memory for AI Applications in Spike-based Neuromorphic Hardware,” published in Nature Machine Intelligence, claims that the Intel silicon is up to 16 times more energy efficient at deep learning tasks than non-neuromorphic hardware performing the same work. The hardware tested consisted of 32 Loihi chips.
While it may not seem surprising that specialized hardware would be more efficient for deep learning tasks, TU Graz claims that this is the first time it has been demonstrated experimentally.
According to TU Graz, this matters because such deep learning models are the subject of artificial intelligence research worldwide, with a view to deploying them in real-world applications. However, the energy consumption of the hardware required to run these models is a major obstacle to their wider use.
This is also pointed out in another paper – “Brain-inspired computing needs a master plan,” published in Nature – in which the authors note that “the astonishing results of advanced AI systems like DeepMind’s AlphaGo and AlphaZero require thousands of parallel processing devices, each of which can consume about 200 watts.”
“Our system is four to 16 times more energy efficient than other AI models on conventional hardware,” said Philipp Plank, a doctoral student at TU Graz’s Institute of Theoretical Computer Science, who added that further efficiency gains are likely with the next generation of Loihi hardware.
In the TU Graz study, which was funded by Intel and the Human Brain Project, the researchers worked with algorithms involving temporal processes: for example, the system had to answer questions about a previously told story, or grasp the relationships between objects or people from context.
In this regard, the model mimicked human short-term memory, or at least a mechanism thought to underlie it in the human brain. The researchers linked two types of deep learning networks – a feedback (recurrent) neural network responsible for the short-term memory, and a feed-forward network that determines which of the relationships found are important for solving the task at hand.
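One common way to realise spike-based short-term memory of this kind is with spiking neurons whose firing thresholds temporarily rise after they fire and then decay over seconds. The NumPy sketch below is a minimal illustration of that general idea only, not the authors’ Loihi implementation; the network sizes, parameter names, and constants are all simplifying assumptions chosen for readability.

```python
# Minimal sketch of a recurrent spiking network with adaptive thresholds,
# one way to give a spiking network short-term memory. Illustrative only:
# all sizes and constants below are assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_REC = 20, 100        # input and recurrent population sizes
DT = 1e-3                    # simulation step: 1 ms
TAU_V = 20e-3                # membrane time constant (fast, ~20 ms)
TAU_A = 2.0                  # threshold adaptation time constant (slow, ~2 s)
V_TH, BETA = 1.0, 0.5        # base firing threshold and adaptation strength

W_in = rng.normal(0.0, 0.5, (N_REC, N_IN))    # input weights
W_rec = rng.normal(0.0, 0.1, (N_REC, N_REC))  # recurrent (feedback) weights
np.fill_diagonal(W_rec, 0.0)                  # no self-connections

v = np.zeros(N_REC)          # membrane potentials
a = np.zeros(N_REC)          # adaptation variables (slow decay -> memory)
spikes = np.zeros(N_REC)

def step(x):
    """Advance the network one time step given an input spike vector x."""
    global v, a, spikes
    # Leaky integration of input spikes and recurrent feedback spikes.
    v = v * np.exp(-DT / TAU_V) + W_in @ x + W_rec @ spikes
    # The effective threshold rises with the slowly decaying adaptation term.
    spikes = (v >= V_TH + BETA * a).astype(float)
    v = np.where(spikes > 0, 0.0, v)          # reset neurons that fired
    a = a * np.exp(-DT / TAU_A) + spikes      # firing raises the threshold
    return spikes

# Drive the network with random input spikes for one simulated second.
total = 0
for t in range(1000):
    x = (rng.random(N_IN) < 0.05).astype(float)  # ~50 Hz Poisson-like input
    total += step(x).sum()
print(f"recurrent spikes in 1 s: {int(total)}")
```

Because the adaptation variable decays over seconds rather than milliseconds, recent activity leaves a lingering trace in the thresholds, so the network retains information about the recent past without dedicated memory cells – the LSTM-like behaviour the paper’s title alludes to.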
Mike Davies, director of Intel’s Neuromorphic Computing Lab, said neuromorphic hardware such as the Loihi chips is well suited to the fast, sparse, and unpredictable patterns of network activity observed in the brain and needed for the most energy-efficient AI applications.
“Our work with TU Graz provides more evidence that neuromorphic technology can improve the energy efficiency of today’s deep learning workloads by rethinking their implementation from a biology perspective,” he said.
Alan Priestley, Gartner’s vice president of Emerging Technologies & Trends, agreed that neuromorphic chips have the potential to become widespread, thanks in part to their low power requirements.
“Given the challenges that current AI chip designs have in delivering the necessary performance within reasonable power envelopes, new architectures such as neuromorphic computing will be needed, and we are already seeing a number of startups applying neuromorphic chip designs to extreme low-power endpoint designs – including being integrated on sensor modules and in event-based cameras,” he told us.
According to Intel, its neuromorphic chip technology could at some point be integrated into a CPU to add energy-efficient AI processing to systems, or access to neuromorphic chips could be made available as a cloud service. ®