Prototype Software for Neuron-Centric Memory Architecture in AI (Case Nos. 2025-327/328)

Summary:

Researchers in UCLA’s Departments of Integrative Biology & Physiology and Neurobiology have developed the first prototype of a novel, neuron-centric AI architecture featuring intracellular memory and adaptive computation capabilities, designed to enhance deep learning performance and efficiency.

Background:

Deep learning leverages algorithms inspired by the structure and function of human neural networks, offering powerful solutions to a wide range of contemporary computational challenges. Modern AI architectures such as multilayer perceptrons (MLPs) and convolutional neural networks (CNNs) respond to each input without retaining memory of previous ones. In the current state of the art, long-term memory is generally believed to be stored within neuronal synapses rather than in the nuclei of neurons. Artificial learning occurs by adjusting the strength of connections between artificial neurons through a method called backpropagation. In continual learning environments, backpropagation often causes memory loss, provides no context or long-term memory, yields nonadaptive behavior, incurs expensive retraining costs, and lacks the modularity needed to keep networks functional when they are modified or recombined. Because of these limitations, current AI models lack robust memory features and suffer from catastrophic forgetting. There is a need for a reliable AI system that can learn continuously without centralized retraining, offers high-level memory retention, and supports fault-tolerant, modular design.
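To make this limitation concrete, the following is a minimal, self-contained NumPy sketch (illustrative only, not from the disclosure) of a standard stateless network trained by backpropagation: all of its "knowledge" lives in the weights, so sequentially training on a second task tends to overwrite the first. The tasks, architecture, and hyperparameters here are assumptions chosen purely for illustration.

```python
# Minimal sketch (illustrative, not from the disclosure): a stateless MLP whose
# only "memory" is its weights, trained by backpropagation. Training on Task B
# after Task A typically overwrites Task A -- catastrophic forgetting.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.5, (2, 8))   # input -> hidden weights
W2 = rng.normal(0.0, 0.5, (8, 1))   # hidden -> output weights

def forward(X):
    h = np.tanh(X @ W1)
    return h, 1.0 / (1.0 + np.exp(-(h @ W2)))    # sigmoid output

def train(X, y, epochs=3000, lr=0.5):
    for _ in range(epochs):
        h, p = forward(X)
        d_out = (p - y) / len(X)                 # grad of cross-entropy loss
        d_h = (d_out @ W2.T) * (1.0 - h**2)      # backpropagate through tanh
        W2 -= lr * (h.T @ d_out)                 # in-place weight updates
        W1 -= lr * (X.T @ d_h)

def accuracy(X, y):
    return float(((forward(X)[1] > 0.5) == y).mean())

# Task A: XOR. Task B: shifted inputs with flipped labels.
XA = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
yA = np.array([[0], [1], [1], [0]], float)
XB, yB = XA + 2.0, 1.0 - yA

train(XA, yA)
print("Task A accuracy after training on A:", accuracy(XA, yA))
train(XB, yB)
print("Task A accuracy after training on B:", accuracy(XA, yA))  # typically drops
```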

Innovation:

Researchers at UCLA have developed the first AI system with stateful neurons that autonomously adapt, overcoming the limitations of current technologies. The system introduces the Neural Unit with Retentive Self-Adaptive Architecture (NURESA), a novel technology that enables internal data storage within the neuron, allowing it to retain information. The invention is based on groundbreaking research indicating that biological neurons store long-term memory within the nucleus rather than at the synapse, providing a biomimetic basis for improved computational design. Unlike traditional neural networks, NURESA neurons can autonomously store and contextualize information, giving them the ability to adapt and learn locally. By storing information directly within individual neurons, the system avoids the traditional limitations of backpropagation, including memory loss. Its internal memory allows the NURESA system to process information efficiently and dynamically adjust its behavior, reducing the likelihood that model retraining will be needed. NURESA neurons can potentially be implemented on neuromorphic chips, reducing energy consumption by providing computationally efficient AI memory. The NURESA-based technology creates a new class of AI architecture that addresses real-world demands for adaptability, interpretability, and resilience in deep learning.
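The disclosure does not publish NURESA’s internals, so the following is a purely hypothetical sketch of what a “stateful neuron” with intracellular memory and local, backprop-free adaptation could look like. The memory trace, decay schedule, and Hebbian/Oja-style local update below are the editor’s assumptions for illustration, not the inventors’ method.

```python
# Hypothetical sketch of a stateful neuron with an intracellular memory trace
# and local (backprop-free) adaptation. Every mechanism here (EMA memory,
# Hebbian/Oja-style update, context-blended drive) is an illustrative
# assumption; the actual NURESA design is not public.
import numpy as np

class StatefulNeuron:
    def __init__(self, n_inputs, memory_decay=0.9, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(0.0, 0.3, n_inputs)  # synaptic-style weights
        self.memory = np.zeros(n_inputs)         # intracellular memory trace
        self.decay = memory_decay
        self.lr = lr

    def forward(self, x):
        # Output blends the current input with the neuron's retained context.
        return np.tanh(self.w @ x + self.memory @ x)

    def adapt(self, x, out):
        # Local update: uses only the neuron's own input, output, and state --
        # no global error signal, no backpropagation.
        self.memory = self.decay * self.memory + (1.0 - self.decay) * out * x
        self.w += self.lr * out * (x - out * self.w)  # Oja-style stabilization

# Usage: the neuron's response to the same probe drifts as its memory
# accumulates context from the input stream it has seen.
neuron = StatefulNeuron(n_inputs=3)
stream = [np.array([1.0, 0.0, 0.5])] * 5 + [np.array([0.0, 1.0, 0.5])] * 5
for x in stream:
    y = neuron.forward(x)
    neuron.adapt(x, y)
probe = np.array([1.0, 0.0, 0.5])
print("Probe response after the stream:", neuron.forward(probe))
```

Because each update uses only locally available quantities, adaptation of this kind could in principle continue at inference time without any centralized retraining.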

Potential Applications:

●    Continual learning agents
●    Edge and on-device AI systems (privacy-preserving, energy efficient)
●    Modular and interoperable networks with transferable memory 
●    AI systems aligned with biological principles
●    Robotics and multi-agent systems (defense, industrial, consumer)
●    Human-AI collaboration systems that adapt continuously

 

Advantages:

●    Dynamic continual learning without catastrophic forgetting
●    Context-based learning and long-context recall
●    Adaptive computation capabilities
●    Biologically inspired memory mechanisms
●    Scalable from edge devices to data centers
●    Stateful and transferable knowledge (see the sketch after this list)
●    Energy-efficient AI optimized for mobile and IoT
●    Reduced dependence on backpropagation, centralized cloud training
●    Greater potential for interpretability and safety
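As a rough illustration of the “stateful and transferable knowledge” point above, the sketch below shows one way a module’s internal memory could be exported and loaded into a fresh module so that learned context moves without retraining. The API and state format are hypothetical; the disclosure does not specify how NURESA transfers state.

```python
# Hypothetical sketch of "transferable memory": exporting one stateful module's
# internal state and loading it into another. The MemoryModule class and its
# export/import API are illustrative assumptions, not the inventors' design.
import numpy as np

class MemoryModule:
    def __init__(self, size):
        self.memory = np.zeros(size)  # intracellular-style state vector

    def observe(self, x, decay=0.9):
        # Accumulate context from inputs as an exponential moving average.
        self.memory = decay * self.memory + (1.0 - decay) * x

    def export_state(self):
        return self.memory.copy()

    def import_state(self, state):
        self.memory = state.copy()

src = MemoryModule(4)
for _ in range(20):
    src.observe(np.array([1.0, 0.5, 0.0, -0.5]))

dst = MemoryModule(4)                  # fresh module with no experience
dst.import_state(src.export_state())   # knowledge transfer without retraining
print(np.allclose(src.memory, dst.memory))  # True
```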

Development-To-Date:

The first successful demonstration of the invention was completed in April 2025. NURESA builds directly on the lead inventor’s pioneering research demonstrating that memory may be stored at the level of individual neurons, translating this biological insight into a novel AI architecture with the potential for more human-like reasoning and long-term memory.

Related Papers:

[1] Bédécarrats, A., Chen, S., Pearce, K., Cai, D., & Glanzman, D. L. (2018). RNA from trained Aplysia can induce an epigenetic engram for long-term sensitization in untrained Aplysia. eNeuro, 5(3), ENEURO.0038-18.2018. https://doi.org/10.1523/ENEURO.0038-18.2018 

[2] Chen, S., Cai, D., Pearce, K., Sun, P. Y., Roberts, A. C., & Glanzman, D. L. (2014). Reinstatement of long-term memory following erasure of its behavioral and synaptic expression in Aplysia. eLife, 3, e03896.

[3] Gold, A. R., & Glanzman, D. L. (2021). The central importance of nuclear mechanisms in the storage of memory. Biochemical and Biophysical Research Communications, 564, 103–113.

[4] Pearce, K., Cai, D., Roberts, A., & Glanzman, D. L. (2017). Role of protein synthesis and DNA methylation in the retention and reinstatement of long-term memory. Learning & Memory, 24(9), 400–403.
 
[5] Zhang, R., Fu, T., Wang, S., Xiao, X., & Glanzman, D. L. (2025). Potential genomic changes induced during sensitization in Aplysia. Abstract, Second Symposium on Invertebrate Neuroscience (SIN), Tihany, Hungary.

Reference:

UCLA Case No. 2025-327 (Copyright: Prototype Software for Neuron-Centric Memory Architecture in AI)
UCLA Case No. 2025-328 (Neuron-Centric Artificial Intelligence Architecture)

Lead Inventor:

David Glanzman
 

For More Information:

Joel Kehle
Business Development Officer
joel.kehle@tdg.ucla.edu

Inventors:

Alain Glanzman
David Glanzman