Neuromorphic computing, also known as neuromorphic engineering, is an approach to computing that mimics the way the human brain works. It entails designing hardware and software that simulate the neural and synaptic structures and functions of the brain to process information.
Neuromorphic computing might seem like a new field, but its origins date back to the 1980s. It was the decade when Misha Mahowald and Carver Mead developed the first silicon retina and cochlea, along with the first silicon neurons and synapses, pioneering the neuromorphic computing paradigm.1
Today, as artificial intelligence (AI) systems scale, they’ll need state-of-the-art hardware and software behind them. Neuromorphic computing can act as a growth accelerator for AI, boost high-performance computing and serve as one of the building blocks of artificial superintelligence. Experiments are even underway to combine neuromorphic computing with quantum computing.2
Neuromorphic computing has been cited by management consulting company Gartner as a top emerging technology for businesses.3 Similarly, professional services firm PwC notes that neuromorphic computing is an essential technology for organizations to explore since it’s progressing quickly but not yet mature enough to go mainstream.4
Since neuromorphic computing takes inspiration from the human brain, it borrows heavily from biology and neuroscience.
According to the Queensland Brain Institute, neurons “are the fundamental units of the brain and nervous system.”5 As messengers, these nerve cells relay information between different areas of the brain and to other parts of the body. When a neuron becomes active or “spikes,” it triggers the release of chemical and electrical signals that travel via a network of connection points called synapses, allowing neurons to communicate with each other.6
These neurological and biological mechanisms are modeled in neuromorphic computing systems through spiking neural networks (SNNs). A spiking neural network is a type of artificial neural network composed of spiking neurons and synapses.
Spiking neurons store and process data much as biological neurons do, with each neuron having its own charge, delay and threshold values. Synapses create pathways between neurons and also have delay and weight values associated with them. These values—neuron charges, neuron and synaptic delays, neuron thresholds and synaptic weights—can all be programmed within neuromorphic computing systems.7
In neuromorphic architecture, synapses are represented as transistor-based synaptic devices, employing circuits to transmit electrical signals. Synapses typically include a learning component, altering their weight values over time according to activity within the spiking neural network.7
Unlike conventional neural networks, SNNs factor timing into their operation. A neuron’s charge value accumulates over time, and when that charge reaches the neuron’s associated threshold value, it spikes, propagating information along its synaptic web. If the charge never crosses the threshold, it gradually dissipates, or “leaks,” away. Additionally, SNNs are event-driven, with neuron and synaptic delay values allowing asynchronous dissemination of information.7
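To make these dynamics concrete, here’s a minimal Python sketch of a single leaky integrate-and-fire neuron, the simple spiking model implied above. The threshold and leak values are illustrative, not drawn from any particular neuromorphic platform:

```python
# A minimal sketch of one spiking (leaky integrate-and-fire) neuron.
# The threshold and leak parameters are illustrative values.

def simulate_lif_neuron(input_spikes, threshold=1.0, leak=0.1):
    """Accumulate charge from weighted inputs over discrete time steps;
    emit a spike when charge crosses the threshold, otherwise let the
    charge leak away gradually."""
    charge = 0.0
    output_spikes = []
    for t, weighted_input in enumerate(input_spikes):
        charge += weighted_input              # integrate incoming charge
        if charge >= threshold:
            output_spikes.append(t)           # neuron "spikes"
            charge = 0.0                      # reset after spiking
        else:
            charge = max(0.0, charge - leak)  # charge leaks over time
    return output_spikes

# Example: a burst of inputs pushes the neuron over its threshold.
print(simulate_lif_neuron([0.4, 0.4, 0.4, 0.0, 0.2]))  # [2]
```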
Over the last few decades, many advancements in neuromorphic computing have come in the form of neuromorphic hardware.
In academia, one of the early implementations included Stanford University’s Neurogrid, whose mixed analog-digital multichip system can “simulate a million neurons with billions of synaptic connections in real time.”8 Meanwhile, research hub IMEC created a self-learning neuromorphic chip.9
Government bodies have also supported neuromorphic research efforts. The European Union’s Human Brain Project, for instance, was a 10-year initiative that ended in 2023 and aimed to understand the brain better, find new treatments for brain diseases and develop new brain-inspired computing technologies.
These technologies include the large-scale SpiNNaker and BrainScaleS neuromorphic machines. SpiNNaker runs in real time on digital multi-core chips, with a packet-based network optimized for spike exchange. BrainScaleS is an accelerated machine that emulates analog electronic models of neurons and synapses. It has a first-generation wafer-scale chip system (called BrainScaleS-1) and a second-generation single-chip system (called BrainScaleS-2).10
Within the technology industry, neuromorphic processors include Loihi from Intel, NeuronFlow from GrAI Matter Labs and the TrueNorth and next-generation NorthPole neuromorphic chips from IBM®.
Most neuromorphic devices are made from silicon and use CMOS (complementary metal-oxide semiconductor) technology. But researchers are also looking into new types of materials, such as ferroelectric and phase-change materials. Nonvolatile electronic memory elements called memristors (a blend of “memory” and “resistor”) are another building block for colocating memory and data processing within spiking neurons.
In the software realm, developing training and learning algorithms for neuromorphic computing involves both machine learning and non-machine learning techniques. Here are a few of them:7
To perform inferencing, pretrained deep neural networks can be converted to spiking neural networks using mapping strategies such as normalizing weights or activation functions. A deep neural network can also be trained so that its neurons activate like spiking neurons.
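As a rough illustration of the first strategy, the sketch below rescales one layer’s weights by the peak activation observed on calibration data, a common step in rate-based DNN-to-SNN conversion. The function and variable names are hypothetical:

```python
# A hedged sketch of the weight-normalization step in DNN-to-SNN
# conversion: a layer's weights are rescaled by the largest activation
# seen on sample data, keeping converted spike rates within the
# neurons' dynamic range. Names here are illustrative.

import numpy as np

def normalize_layer_weights(weights, sample_activations):
    """Rescale a layer's weight matrix by its peak calibration activation."""
    max_activation = np.max(sample_activations)
    if max_activation <= 0:
        return weights                  # nothing to normalize against
    return weights / max_activation     # scaled weights for the SNN layer

# Hypothetical example: one dense layer's weights and the activations
# it produced on a batch of calibration inputs.
layer_weights = np.array([[0.8, -0.2], [0.5, 1.1]])
calibration_activations = np.array([0.3, 2.0, 1.4])
print(normalize_layer_weights(layer_weights, calibration_activations))
```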
Evolutionary algorithms are bio-inspired methods that employ principles of biological evolution, such as mutation, reproduction and selection. They can be used to design or train SNNs, optimizing their parameters (delays and thresholds, for example) and structure (the number of neurons and how they’re linked via synapses, for instance) over time.
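A toy evolutionary loop along these lines might look like the following sketch, which evolves (threshold, delay) pairs under a placeholder fitness function. The parameter ranges and mutation scheme are illustrative assumptions, not a published method:

```python
# An illustrative evolutionary loop for tuning SNN parameters such as
# neuron thresholds and synaptic delays.

import random

def evolve_snn_parameters(fitness, population_size=20, generations=50):
    """Evolve (threshold, delay) pairs by selection and mutation."""
    population = [(random.uniform(0.5, 2.0), random.randint(1, 5))
                  for _ in range(population_size)]
    for _ in range(generations):
        # Selection: keep the fittest half of the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        # Reproduction with mutation: perturb survivors' parameters.
        children = [(max(0.1, t + random.gauss(0, 0.1)),
                     max(1, d + random.choice([-1, 0, 1])))
                    for t, d in survivors]
        population = survivors + children
    return max(population, key=fitness)

# Toy fitness: prefer thresholds near 1.0 and short delays.
toy_fitness = lambda p: -abs(p[0] - 1.0) - 0.1 * p[1]
print(evolve_snn_parameters(toy_fitness))
```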
Spiking neural networks lend themselves well to a graph representation, with an SNN taking the form of a directed graph. When one node in the graph spikes, the time at which each other node first spikes corresponds to the length of the shortest path from the originating node.
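This shortest-path property can be demonstrated with a small event-driven simulation, sketched below, in which synaptic delays act as edge lengths and each neuron records the first time it is reached. The example graph is made up:

```python
# Sketch of the shortest-path property: with synaptic delays as edge
# lengths, each neuron's first spike time equals its shortest-path
# distance from the source neuron.

import heapq

def first_spike_times(synapses, source):
    """Event-driven propagation: synapses maps neuron -> [(neighbor, delay)]."""
    spike_time = {source: 0}
    events = [(0, source)]
    while events:
        t, neuron = heapq.heappop(events)
        if t > spike_time.get(neuron, float("inf")):
            continue  # this neuron already spiked earlier
        for neighbor, delay in synapses.get(neuron, []):
            arrival = t + delay
            if arrival < spike_time.get(neighbor, float("inf")):
                spike_time[neighbor] = arrival
                heapq.heappush(events, (arrival, neighbor))
    return spike_time

snn = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)]}
print(first_spike_times(snn, "a"))  # {'a': 0, 'b': 2, 'c': 3}
```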
In neuroscience, neuroplasticity refers to the ability of the human brain and nervous system to modify their neural pathways and synapses in response to experience or injury. In neuromorphic architecture, synaptic plasticity is typically implemented through spike timing-dependent plasticity (STDP), an operation that adjusts the weights of synapses according to neurons’ relative spike timings.
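The sketch below shows a minimal STDP update rule, with an illustrative learning rate and time constant: the synapse strengthens when the presynaptic neuron fires just before the postsynaptic one and weakens in the reverse case.

```python
# A minimal sketch of spike timing-dependent plasticity (STDP).
# The learning rate and time constant are illustrative values.

import math

def stdp_weight_change(pre_spike_time, post_spike_time,
                       learning_rate=0.01, tau=20.0):
    """Return the synaptic weight update for one pre/post spike pair."""
    dt = post_spike_time - pre_spike_time
    if dt > 0:    # pre fired before post: potentiate the synapse
        return learning_rate * math.exp(-dt / tau)
    elif dt < 0:  # pre fired after post: depress the synapse
        return -learning_rate * math.exp(dt / tau)
    return 0.0

print(stdp_weight_change(10, 15))  # small positive (strengthening) update
print(stdp_weight_change(15, 10))  # small negative (weakening) update
```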
Reservoir computing, which is based on recurrent neural networks, uses a “reservoir” to cast inputs into a higher-dimensional computational space, with a readout mechanism trained to interpret the reservoir’s output.
In neuromorphic computing, input signals are fed to a spiking neural network, which acts as the reservoir. The SNN is untrained; instead, it relies on the recurrent connections within its network, along with synaptic delays, to map inputs into that higher-dimensional space.
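The sketch below illustrates the idea with a classical echo state network standing in for the untrained spiking reservoir: random, fixed recurrent weights cast inputs into a higher-dimensional state, and only a linear readout is fit. The weight scales and the sine-wave task are illustrative assumptions:

```python
# Reservoir-computing sketch: only the readout is trained (here with a
# least-squares fit); the reservoir's random weights stay fixed.

import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_reservoir = 1, 100

W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))  # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))  # fixed recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))         # scale for stability

def run_reservoir(inputs):
    """Cast each input into the reservoir's higher-dimensional state."""
    state = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        state = np.tanh(W_in @ np.atleast_1d(u) + W @ state)
        states.append(state.copy())
    return np.array(states)

# Train only the readout to predict the next value of a sine wave.
u = np.sin(np.linspace(0, 8 * np.pi, 400))
X, y = run_reservoir(u[:-1]), u[1:]
readout, *_ = np.linalg.lstsq(X, y, rcond=None)
print("training error:", np.mean((X @ readout - y) ** 2))
```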
Neuromorphic systems hold a lot of computational promise. Here are some of the potential benefits this type of computing architecture offers:
As a brain-inspired technology, neuromorphic computing also involves the notion of plasticity. Neuromorphic devices are designed for real-time learning, continuously adapting to evolving stimuli in the form of inputs and parameters. This means they could excel at solving novel problems.
As mentioned previously, neuromorphic systems are event-driven, with neurons and synapses processing only in response to other spiking neurons. As a result, only the portion of the network that’s actively computing consumes power while the rest stays idle, making for more efficient energy consumption.
Most modern computers, also known as von Neumann computers, have separate central processing units and memory units, and the transfer of data between these units can create a bottleneck that limits speed. Neuromorphic computing systems, by contrast, both store and process data in individual neurons, resulting in lower latency and swifter computation than von Neumann architecture allows.
Because of an SNN’s asynchronous nature, individual neurons can perform different operations concurrently. So theoretically, neuromorphic devices can execute as many tasks as there are neurons at a given time. As such, neuromorphic architectures have immense parallel processing capabilities, allowing them to complete functions quickly.
Neuromorphic computing is still an emerging field. And like any technology in its early stages, neuromorphic systems face a few challenges:
The process of converting deep neural networks to spiking neural networks can cause a drop in accuracy. Additionally, the memristors used in neuromorphic hardware may have cycle-to-cycle and device variations that can affect accuracy, as well as limits to synaptic weight values that can lower precision.7
As a somewhat nascent technology, neuromorphic computing has a shortage of standards when it comes to architecture, hardware and software. Neuromorphic systems also don’t have clearly defined and established benchmarks, sample datasets, testing tasks and metrics, so it becomes difficult to evaluate performance and prove effectiveness.
Most algorithmic approaches to neuromorphic computing still employ software designed for von Neumann hardware, which can confine the results to what von Neumann architecture can achieve. Meanwhile, APIs (application programming interfaces), coding models and programming languages for neuromorphic systems have yet to be developed or made more broadly available.
Neuromorphic computing is a complex domain, drawing on disciplines such as biology, computer science, electronic engineering, math, neuroscience and physics. That breadth makes the field difficult to grasp outside of academic labs specializing in neuromorphic research.
Current real-world applications for neuromorphic systems are sparse, but the computing paradigm can possibly be applied in these use cases:
Because of its high performance and orders-of-magnitude gains in energy efficiency, neuromorphic computing can help improve an autonomous vehicle’s navigational skills, allowing for quicker course correction and improved collision avoidance while reducing energy consumption.
Neuromorphic systems can help detect unusual patterns or activity that could signify cyberattacks or breaches. And these threats can be thwarted rapidly owing to the low latency and swift computation of neuromorphic devices.
The characteristics of neuromorphic architecture make it suitable for edge AI. Its low power consumption can help with the short battery life of devices like smartphones and wearables, while its adaptability and event-driven nature fit the information processing methods of remote sensors, drones and other Internet of Things (IoT) devices.
Because of its extensive parallel processing capabilities, neuromorphic computing can be used in machine learning applications for recognizing patterns in natural language and speech, analyzing medical images and processing imaging signals from fMRI brain scans and electroencephalogram (EEG) tests that measure electrical activity in the brain.
As an adaptable technology, neuromorphic computing can be used to enhance a robot’s real-time learning and decision-making skills, helping it better recognize objects, navigate intricate factory layouts and operate faster in an assembly line.
1 Carver Mead Earns Lifetime Contribution Award for Neuromorphic Engineering, Caltech, 7 May 2024.
2 Neuromorphic Quantum Computing, Quromorphic, Accessed 21 June 2024.
3 30 Emerging Technologies That Will Guide Your Business Decisions, Gartner, 12 February 2024.
4 The new Essential Eight technologies: what you need to know, PwC, 15 November 2023.
5 What is a neuron?, Queensland Brain Institute, Accessed 21 June 2024.
6 Action potentials and synapses, Queensland Brain Institute, Accessed 21 June 2024.
7 Opportunities for neuromorphic computing algorithms and applications, Nature, 31 January 2022.
8 Neurogrid: A Mixed-Analog-Digital Multichip System for Large-Scale Neural Simulations, IEEE, 24 April 2014.
9 IMEC demonstrates self-learning neuromorphic chip that composes music, IMEC, 16 May 2017.
10 Neuromorphic computing, Human Brain Project, Accessed 21 June 2024.