For over a century, neuroscience has presented a straightforward narrative about memory. Neurons fire and connect, and in those connections, called synapses, memories are formed. Neurons were seen as the engines of thought, and memory was believed to reside in the strength of their wiring. But a new model from IBM researchers suggests this view may be incomplete. It also raises the possibility that the biology behind human memory could help guide the development of the next generation of artificial intelligence.
The theory places astrocytes, non-neuronal glial cells that comprise approximately half of the brain, at the center of a previously unrecognized memory system. Long considered passive support cells, astrocytes may play an active role in storing and retrieving information. The model describes a form of associative memory that shares key features with advanced AI systems, including Transformers.
“There’s a mountain of evidence showing astrocytes are involved in cognition,” Leo Kozachkov, an IBM researcher and co-author of a recent paper on the theory, told IBM Think in an interview. “We wondered if they could implement powerful memory systems, and all signs pointed to yes.”
This model builds on a long history of neuroscience research into the “tripartite synapse,” where an astrocyte envelops the connection between two neurons. In the IBM team’s formulation, astrocytes are not passive observers. Instead, they take part in the processing and distribution of information across the brain in ways that resemble the memory-handling capabilities of some of the most sophisticated AI systems in use today.
The search for where memory lives in the brain has defined neuroscience for decades. The dominant model has credited synaptic plasticity, the strengthening or weakening of connections between neurons, as the substrate of memory. This idea underlies both biological theories and many of the core assumptions in artificial intelligence.
But the reality is more complicated. Experimental studies have found that astrocytes modulate synaptic strength, respond to neurotransmitters and neuromodulators, and appear to play a role in forming and retrieving long-term memories. These findings have not always fit neatly into standard computational models, and their implications have remained difficult to integrate into a coherent theoretical framework.
This is the context in which the IBM team’s model enters. It proposes a system in which neurons, synapses and astrocyte processes interact through a shared dynamical network. Each element is governed by equations derived from energy-based mathematical principles. The resulting system evolves toward stable attractor states that correspond to stored memories.
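The article does not reproduce the model's actual neuron–astrocyte equations. As a minimal sketch of what "evolving toward stable attractor states that correspond to stored memories" means in the simplest case, the example below uses a classical Hopfield network, the neuron-only precursor of the IBM model, not the model itself: patterns are stored in a weight matrix, and a corrupted cue descends the network's energy until it settles into the nearest stored pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store a few random +/-1 patterns with a Hebbian (outer-product) rule.
N, P = 64, 3
patterns = rng.choice([-1.0, 1.0], size=(P, N))
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)  # no self-connections

# Start from a corrupted copy of pattern 0 and let the dynamics descend
# the energy toward the nearest attractor (the stored memory).
state = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
state[flip] *= -1.0

for _ in range(20):  # iterate updates until a fixed point is reached
    new = np.sign(W @ state)
    new[new == 0] = 1.0
    if np.array_equal(new, state):
        break
    state = new
```

After convergence, `state` matches the originally stored pattern: the attractor *is* the memory. The IBM model's contribution is to add astrocyte processes and their calcium dynamics to this kind of energy-based system, not captured in this toy version.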
The central insight is that astrocytes can expand the memory capacity of the system. Their internal calcium signaling networks enable the integration and propagation of information across large spatial regions. This architecture supports a more distributed and flexible type of memory storage than what is possible in neuron-only networks.
Kozachkov explained how the idea developed. “First, we listened to experimental neuroscientists who study astrocytes,” he said. “They have an ever-growing mountain of evidence suggesting that astrocytes are involved in cognition, memory and behavior. But there is only a small collection of specific, formal theories about how neurons and astrocytes compute together.”
There was also a computational angle. The team had been working with Dense Associative Memory, an advanced type of network that builds on and extends the original Hopfield model. These systems are known for their large memory capacity and robust pattern retrieval.
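What "large memory capacity" means can be made concrete. Dense Associative Memory (in Krotov and Hopfield's formulation) replaces the quadratic Hopfield energy with a steeper interaction function F, which lets the network store far more than the classical limit of roughly 0.14N patterns. The sketch below assumes a rectified-polynomial F(x) = max(x, 0)^n, one standard choice, not necessarily the one used in the IBM paper, and retrieves a pattern at a load where a classical Hopfield network would fail.

```python
import numpy as np

rng = np.random.default_rng(1)

# 40 patterns in 64 units is well past the classical ~0.14*N capacity.
N, P, n = 64, 40, 3
patterns = rng.choice([-1.0, 1.0], size=(P, N))

def dam_step(state):
    # Update for the DAM energy E = -sum_mu F(xi_mu . state),
    # with rectified-polynomial F(x) = max(x, 0)^n:
    # each unit aligns with sum_mu F'(overlap_mu) * xi_mu.
    overlaps = patterns @ state
    field = (n * np.maximum(overlaps, 0.0) ** (n - 1)) @ patterns
    out = np.sign(field)
    out[out == 0] = 1.0
    return out

# Corrupt a stored pattern, then retrieve it.
state = patterns[0].copy()
state[rng.choice(N, size=8, replace=False)] *= -1.0

for _ in range(10):
    new = dam_step(state)
    if np.array_equal(new, state):
        break
    state = new
```

The steeper F sharpens the energy landscape around each stored pattern, which is where the extra capacity comes from, and which, as Kozachkov notes below, is also where the biological plausibility becomes harder to argue.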
“Unfortunately, what these Dense Associative Memory networks gain in memory capacity, they lose in biological plausibility,” Kozachkov said. “So, we naturally wondered whether these networks could be implemented on biological hardware.”
Once the team began thinking about biological implementation, astrocytes quickly emerged as the most likely candidate. Their anatomical structure, their spatial organization and their biochemical dynamics all pointed to a potential role in memory.
Depending on how the system is tuned, the model can behave like a Dense Associative Memory or adopt the characteristics of a Transformer. This flexibility makes it more than a loose comparison to AI. It offers a practical approach to considering how the brain and modern machine learning systems might solve similar problems.
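The DAM–Transformer connection the article alludes to has a precise form in prior work (the "Hopfield Networks is All You Need" line of research, not the IBM paper itself): with an exponential interaction function, one retrieval step of a continuous modern Hopfield network is exactly the softmax-attention formula, with the cue playing the role of the query and the stored patterns serving as both keys and values. A minimal sketch of that single step:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(2)

# Store unit-norm continuous patterns; beta sharpens the retrieval.
N, P, beta = 64, 20, 8.0
patterns = rng.standard_normal((P, N))
patterns /= np.linalg.norm(patterns, axis=1, keepdims=True)

# One retrieval step is exactly softmax attention:
# query = noisy cue, keys = values = stored patterns.
query = patterns[0] + 0.1 * rng.standard_normal(N)
attn = softmax(beta * patterns @ query)  # attention weights over memories
retrieved = attn @ patterns              # weighted readout, as in a Transformer
```

Here `retrieved` lands close to the stored pattern the cue resembled. Tuning beta interpolates between soft, blended recall (attention-like) and sharp winner-take-all recall (associative-memory-like), which is the kind of flexibility the IBM model's tuning knobs are claimed to provide.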
“If our theory is correct, even in concept, if not in specific detail, it has far-reaching implications for how we think about memory in the brain,” Kozachkov said. “Our theory suggests that memories can be encoded within the intracellular signaling pathways of a single astrocyte. Synaptic weights emerge from interactions within these pathways, as well as from interactions between astrocytes and synapses.”
The theory’s implications for AI are equally provocative. Current machine learning systems struggle with memory. Neural networks have limited capacity to retain long-term information, and architectures like attention layers or external memory units are typically used to overcome this. These components increase computational cost and complexity.
Among the predictions the model makes are that disrupting intracellular signaling in astrocytes should affect memory recall, and that selective interference with astrocytic networks could impair certain kinds of learning. These ideas are testable, although technically challenging, and could guide future work in both basic neuroscience and brain-inspired computing.
Of course, the model remains theoretical. The researchers are clear that their proposal is a framework, not a conclusion.
“First and foremost, it would be great if experimentalists made a serious effort to disconfirm our model,” Kozachkov said. “That is, to try to prove it wrong. I would be very happy to collaborate in that effort.”
For now, the theory invites a broader reconsideration of how intelligence is structured.
“We’re at the beginning of a Cambrian explosion of intelligence,” Kozachkov said. “For the first time, we know how to build non-animal entities that are intelligent. This has tremendous implications for neuroscience, which are hard to overstate.”
He added that he believes neuroscience still has much more to offer machine learning. “I don’t think we’ve even come close to exhausting the ideas we can take from the brain to build more intelligent systems. Not by a long shot.”