Published: 27 June 2024
Contributors: Rina Caballar, Cole Stryker
Neuromorphic computing, also known as neuromorphic engineering, is an approach to computing that mimics the way the human brain works. It entails designing hardware and software that simulate the neural and synaptic structures and functions of the brain to process information.
Neuromorphic computing might seem like a new field, but its origins date back to the 1980s, when Misha Mahowald and Carver Mead developed the first silicon retina and cochlea, along with the first silicon neurons and synapses, pioneering the neuromorphic computing paradigm.1
Today, as artificial intelligence (AI) systems scale, they’ll need state-of-the-art hardware and software behind them. Neuromorphic computing can act as a growth accelerator for AI, boost high-performance computing and serve as one of the building blocks of artificial superintelligence. Experiments are even underway to combine neuromorphic computing with quantum computing.2
Neuromorphic computing has been cited by management consulting company Gartner as a top emerging technology for businesses.3 Similarly, professional services firm PwC notes that neuromorphic computing is an essential technology for organizations to explore since it’s progressing quickly but not yet mature enough to go mainstream.4
Since neuromorphic computing takes inspiration from the human brain, it borrows heavily from biology and neuroscience.
According to the Queensland Brain Institute, neurons “are the fundamental units of the brain and nervous system.”5 As messengers, these nerve cells relay information between different areas of the brain and to other parts of the body. When a neuron becomes active or “spikes,” it triggers the release of chemical and electrical signals that travel via a network of connection points called synapses, allowing neurons to communicate with each other.6
These neurological and biological mechanisms are modeled in neuromorphic computing systems through spiking neural networks (SNNs). A spiking neural network is a type of artificial neural network composed of spiking neurons and synapses.
Spiking neurons store and process data similar to biological neurons, with each neuron having its own charge, delay and threshold values. Synapses create pathways between neurons and also have delay and weight values associated with them. These values—neuron charges, neuron and synaptic delays, neuron thresholds and synaptic weights—can all be programmed within neuromorphic computing systems.7
In neuromorphic architecture, synapses are represented as transistor-based synaptic devices, employing circuits to transmit electrical signals. Synapses typically include a learning component, altering their weight values over time according to activity within the spiking neural network.7
Unlike conventional neural networks, SNNs factor timing into their operation. A neuron’s charge value accumulates over time, and when that charge reaches the neuron’s associated threshold value, the neuron spikes, propagating information along its synaptic web. If the charge doesn’t reach the threshold, it dissipates and eventually “leaks” away. Additionally, SNNs are event-driven, with neuron and synaptic delay values allowing asynchronous dissemination of information.7
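To make this behavior concrete, here is a minimal sketch of a leaky integrate-and-fire neuron in Python. The parameter names and values are illustrative, not drawn from any particular neuromorphic platform:

```python
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Return the timesteps at which the neuron spikes for a stream of inputs."""
    charge = 0.0
    spike_times = []
    for t, current in enumerate(inputs):
        charge = charge * leak + current  # charge leaks, then integrates the input
        if charge >= threshold:           # threshold reached: the neuron spikes
            spike_times.append(t)
            charge = 0.0                  # charge resets after a spike
    return spike_times

# Isolated inputs leak away; only clustered inputs push the charge
# over the threshold.
print(simulate_lif([0.4, 0.0, 0.0, 0.5, 0.5, 0.3, 0.0]))  # [4]
```

Each timestep, the charge decays by the leak factor before integrating new input, so spread-out inputs fade away while closely spaced inputs accumulate enough charge to trigger a spike.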
Over the last few decades, many advancements in neuromorphic computing have come in the form of neuromorphic hardware.
In academia, early implementations included Stanford University’s Neurogrid, a mixed analog-digital multichip system that can “simulate a million neurons with billions of synaptic connections in real time.”8 Meanwhile, research hub IMEC created a self-learning neuromorphic chip.9
Government bodies have also supported neuromorphic research efforts. The European Union’s Human Brain Project, for instance, was a 10-year initiative that ended in 2023 and aimed to understand the brain better, find new treatments for brain diseases and develop new brain-inspired computing technologies.
These technologies include the large-scale SpiNNaker and BrainScaleS neuromorphic machines. SpiNNaker runs in real time on digital multi-core chips, with a packet-based network for spike exchange optimization. BrainScaleS is an accelerated machine that emulates analog electronic models of neurons and synapses. It has a first-generation wafer-scale chip system (called BrainScaleS-1) and a second-generation single chip system (called BrainScaleS-2).10
Within the technology industry, neuromorphic processors include Loihi from Intel, NeuronFlow from GrAI Matter Labs and the TrueNorth and next-generation NorthPole neuromorphic chips from IBM®.
Most neuromorphic devices are made from silicon and use CMOS (complementary metal-oxide semiconductor) technology. But researchers are also looking into new types of materials, such as ferroelectric and phase-change materials. Nonvolatile electronic memory elements called memristors (a portmanteau of “memory” and “resistor”) offer another way to colocate memory and data processing within spiking neurons.
In the software realm, developing training and learning algorithms for neuromorphic computing involves both machine learning and non-machine learning techniques. Here are a few of them:7
To perform inferencing, pretrained deep neural networks can be converted to spiking neural networks using mapping strategies such as normalizing weights or activation functions. A deep neural network can also be trained in a way that its neurons are activated like spiking neurons.
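As a rough illustration of one such mapping, the sketch below rescales a layer’s pretrained weights by the peak activation observed on calibration data, so the converted spiking layer’s firing rates stay in a bounded range. This is a simplified, data-based weight normalization; the arrays and shapes are hypothetical:

```python
import numpy as np

def normalize_layer_weights(weights, calibration_activations):
    """Rescale a layer's weights by its peak observed activation so the
    converted spiking layer's firing rates stay in a bounded range."""
    max_activation = np.max(calibration_activations)
    return weights / max_activation

# Hypothetical pretrained layer: weights plus ReLU activations
# recorded while running a calibration batch through the network.
rng = np.random.default_rng(0)
pretrained_weights = rng.normal(size=(4, 3))
relu_outputs = np.maximum(rng.normal(loc=2.0, size=(100, 3)), 0.0)
snn_weights = normalize_layer_weights(pretrained_weights, relu_outputs)
```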
Evolutionary algorithms, another bio-inspired approach, employ principles of biological evolution such as mutation, reproduction and selection. They can be used to design or train SNNs, optimizing their parameters (delays and thresholds, for example) and structure (the number of neurons and how they link via synapses, for instance) over time.
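A toy version of this idea, with hypothetical parameter names: each candidate is a dictionary of SNN parameters, mutated and selected over generations against a user-supplied fitness function:

```python
import random

def evolve(population, fitness, generations=50, sigma=0.1):
    """Optimize SNN parameter sets by repeated selection and mutation."""
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: len(ranked) // 2]          # selection: keep the fittest half
        children = [{k: v + random.gauss(0, sigma) for k, v in p.items()}
                    for p in parents]                 # mutation: perturb each parameter
        population = parents + children               # reproduction: next generation
    return max(population, key=fitness)

# Toy fitness: prefer a threshold near 1.0 and a small synaptic delay.
fitness = lambda p: -abs(p["threshold"] - 1.0) - p["delay"] ** 2
population = [{"threshold": random.uniform(0, 2), "delay": random.uniform(0, 1)}
              for _ in range(20)]
best = evolve(population, fitness)
```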
Spiking neural networks lend themselves well to a graph representation, with an SNN taking the form of a directed graph. When one node in the graph spikes, the time at which another node spikes corresponds to the length of the shortest path from the originating node.
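This equivalence between first-spike times and shortest paths can be simulated directly. The sketch below propagates a spike through a directed graph whose edge weights are synaptic delays; the first arrival time at each node equals its shortest-path distance from the source, and the event-queue simulation is effectively Dijkstra’s algorithm. The graph itself is hypothetical:

```python
import heapq

def spike_arrival_times(graph, source):
    """First-spike arrival time at each node when `source` spikes at t=0."""
    arrival = {source: 0}
    events = [(0, source)]  # (time, node) spike events
    while events:
        t, node = heapq.heappop(events)
        if t > arrival.get(node, float("inf")):
            continue  # node already spiked earlier; ignore later arrivals
        for neighbor, delay in graph.get(node, []):
            if t + delay < arrival.get(neighbor, float("inf")):
                arrival[neighbor] = t + delay
                heapq.heappush(events, (t + delay, neighbor))
    return arrival

# Toy directed SNN graph: edges carry synaptic delays.
g = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)]}
print(spike_arrival_times(g, "a"))  # {'a': 0, 'b': 2, 'c': 3}
```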
In neuroscience, neuroplasticity refers to the ability of the human brain and nervous system to modify their neural pathways and synapses in response to experience or injury. In neuromorphic architecture, synaptic plasticity is typically implemented through spike timing-dependent plasticity. This operation adjusts the weights of synapses according to neurons’ relative spike timings.
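A common form of spike timing-dependent plasticity uses an exponential window: the closer a presynaptic spike precedes the postsynaptic spike, the larger the weight increase, and the reverse ordering weakens the synapse. The sketch below implements that rule; the learning rate and time constant are illustrative:

```python
import math

def stdp_update(weight, t_pre, t_post, lr=0.01, tau=20.0):
    """Adjust a synaptic weight from the relative timing of pre- and
    postsynaptic spikes, using an exponential STDP window."""
    dt = t_post - t_pre
    if dt > 0:
        return weight + lr * math.exp(-dt / tau)  # pre before post: potentiation
    return weight - lr * math.exp(dt / tau)       # post first (or tied): depression

w = 0.5
w = stdp_update(w, t_pre=10, t_post=12)  # causal pairing: weight increases
w = stdp_update(w, t_pre=15, t_post=12)  # anti-causal pairing: weight decreases
```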
Reservoir computing, which is based on recurrent neural networks, uses a “reservoir” to cast inputs to a higher-dimension computational space, with a readout mechanism trained to read the reservoir’s output.
In neuromorphic computing, input signals are fed to a spiking neural network, which acts as the reservoir. The SNN is untrained; instead, it relies on the recurrent connections within its network along with synaptic delays to map inputs to a higher-dimension computational space.
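The structure, a fixed random recurrent reservoir plus a trained linear readout, can be sketched in a few lines. For brevity, the example below uses a rate-based (echo state network-style) reservoir rather than a spiking one; the sizes and scalings are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Fixed, untrained reservoir: random recurrent weights cast each input
# into a higher-dimensional state space.
n_inputs, n_reservoir = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
W_res = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # keep dynamics stable

def run_reservoir(sequence):
    """Drive the reservoir with an input sequence and collect its states."""
    state = np.zeros(n_reservoir)
    states = []
    for u in sequence:
        state = np.tanh(W_in @ np.atleast_1d(u) + W_res @ state)
        states.append(state.copy())
    return np.array(states)

# Only the linear readout is trained, here by least squares
# to predict the next value of a sine wave.
signal = np.sin(np.linspace(0, 8 * np.pi, 200))
states = run_reservoir(signal[:-1])
W_out, *_ = np.linalg.lstsq(states, signal[1:], rcond=None)
predictions = states @ W_out
```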
Neuromorphic systems hold a lot of computational promise. Here are some of the potential benefits this type of computing architecture offers:
As a brain-inspired technology, neuromorphic computing also involves the notion of plasticity. Neuromorphic devices are designed for real-time learning, continuously adapting to evolving stimuli in the form of inputs and parameters. This means they could excel at solving novel problems.
As mentioned previously, neuromorphic systems are event-based, with neurons and synapses processing in response to other spiking neurons. As a result, only the portion of the network that is actively processing spikes consumes power, while the rest stays idle. This leads to more efficient energy consumption.
Most modern computers, also known as von Neumann computers, have separate central processing units and memory units, and transferring data between these units can create a bottleneck that limits speed. Neuromorphic computing systems, by contrast, both store and process data in individual neurons, resulting in lower latency and swifter computation than von Neumann architecture allows.
Because of an SNN’s asynchronous nature, individual neurons can perform different operations concurrently. So theoretically, neuromorphic devices can execute as many tasks as there are neurons at a given time. As such, neuromorphic architectures have immense parallel processing capabilities, allowing them to complete functions quickly.
Neuromorphic computing is still an emerging field. And like any technology in its early stages, neuromorphic systems face a few challenges:
The process of converting deep neural networks to spiking neural networks can cause a drop in accuracy. Additionally, the memristors used in neuromorphic hardware may have cycle-to-cycle and device variations that can affect accuracy, as well as limits to synaptic weight values that can lower precision.7
As a somewhat nascent technology, neuromorphic computing lacks standards for architecture, hardware and software. Neuromorphic systems also lack clearly defined and established benchmarks, sample datasets, testing tasks and metrics, making it difficult to evaluate performance and prove effectiveness.
Most algorithmic approaches to neuromorphic computing still employ software designed for von Neumann hardware, which can confine the results to what von Neumann architecture can achieve. Meanwhile, APIs (application programming interfaces), coding models and programming languages for neuromorphic systems have yet to be developed or made more broadly available.
Neuromorphic computing is a complex domain, drawing from disciplines such as biology, computer science, electronic engineering, math, neuroscience and physics. This complexity makes the field difficult to master outside of academic labs specializing in neuromorphic research.
Current real-world applications for neuromorphic systems are sparse, but the computing paradigm shows promise in these use cases:
Because of its high performance and orders-of-magnitude gains in energy efficiency, neuromorphic computing can help improve an autonomous vehicle’s navigation, allowing quicker course correction and better collision avoidance while lowering energy consumption.
Neuromorphic systems can help detect unusual patterns or activity that could signify cyberattacks or breaches. And these threats can be thwarted rapidly owing to the low latency and swift computation of neuromorphic devices.
The characteristics of neuromorphic architecture make it suitable for edge AI. Its low power consumption can help with the short battery life of devices like smartphones and wearables, while its adaptability and event-driven nature fit the information processing methods of remote sensors, drones and other Internet of Things (IoT) devices.
Because of its extensive parallel processing capabilities, neuromorphic computing can be used in machine learning applications for recognizing patterns in natural language and speech, analyzing medical images and processing imaging signals from fMRI brain scans and electroencephalogram (EEG) tests that measure electrical activity in the brain.
As an adaptable technology, neuromorphic computing can be used to enhance a robot’s real-time learning and decision-making skills, helping it better recognize objects, navigate intricate factory layouts and operate faster in an assembly line.
1 Carver Mead Earns Lifetime Contribution Award for Neuromorphic Engineering, Caltech, 7 May 2024.
2 Neuromorphic Quantum Computing, Quromorphic, Accessed 21 June 2024.
3 30 Emerging Technologies That Will Guide Your Business Decisions, Gartner, 12 February 2024.
4 The new Essential Eight technologies: what you need to know, PwC, 15 November 2023.
5 What is a neuron?, Queensland Brain Institute, Accessed 21 June 2024.
6 Action potentials and synapses, Queensland Brain Institute, Accessed 21 June 2024.
7 Opportunities for neuromorphic computing algorithms and applications, Nature, 31 January 2022.
8 Neurogrid: A Mixed-Analog-Digital Multichip System for Large-Scale Neural Simulations, IEEE, 24 April 2014.
9 IMEC demonstrates self-learning neuromorphic chip that composes music, IMEC, 16 May 2017.
10 Neuromorphic computing, Human Brain Project, Accessed 21 June 2024.