2021-125 Analog Nonvolatile Memory-Based In-Memory Computing Multiply-And-Accumulate (MAC) Engine

Summary:

UCLA researchers in the Department of Electrical and Computer Engineering have developed a method that uses Charge-Trap Transistors (CTTs) as analog nonvolatile memory elements in an in-memory computing multiply-and-accumulate (MAC) engine for high-performance computation in AI platforms.

Background:

Advances in Machine Learning and Artificial Intelligence (AI) have largely relied on von Neumann architectures that accelerate computation with graphics processing units (GPUs) and custom application-specific integrated circuits (ASICs). While these systems have increased performance and throughput, they are limited by the von Neumann memory bottleneck. Recently, digital and analog in-memory computing has offered the possibility of overcoming this bottleneck, but current methods remain limited in how digital logic and analog memory devices can be integrated. There is therefore a need for a method that integrates nonvolatile memory devices to deliver efficient, high-performance computation in AI platforms.

Innovation:

UCLA researchers in the Department of Electrical and Computer Engineering have developed a method that uses Charge-Trap Transistors (CTTs) as analog nonvolatile memory elements in in-memory multiply-and-accumulate (MAC) systems. The method yields a system with reliable nonvolatile weight storage, low power consumption, and high-throughput computation. It improves system performance by bypassing the von Neumann memory bottleneck that has limited conventional architectures. Furthermore, because CTTs consume significantly less power, the approach is well suited to battery-powered edge devices, i.e., devices that serve as entry points to core networks. The developed method can be used to compute fully connected layers in neural networks as well as more complex layers, such as Convolutional Neural Network (CNN) layers, for AI.
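
To illustrate the underlying operation, the following minimal sketch (not the UCLA implementation) models the general analog MAC principle: weights are encoded as programmed device conductances in a crossbar array, input voltages drive the rows, and the summed column currents yield the dot products in a single analog step. All array sizes, value ranges, and variable names below are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical 4x3 crossbar: conductances G[i, j] (in siemens) encode the weights.
    G = rng.uniform(1e-6, 1e-5, size=(4, 3))

    # Input activations applied as row voltages (illustrative 0-0.5 V range).
    v_in = rng.uniform(0.0, 0.5, size=4)

    # Each column current is the sum over rows of V_i * G[i, j] (Ohm's and
    # Kirchhoff's laws), i.e., one multiply-and-accumulate result per column.
    i_out = v_in @ G

    print(i_out)  # analog dot products, one per output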

Potential Applications:

  • AI computing
  • Nonvolatile memory usage
  • Machine learning processing
  • High performance computation
  • Bioinformatics

Advantages:

  • Nonvolatile memory
  • High performance computations
  • Low integration cost
  • Complementary Metal Oxide Semiconductor (CMOS) compatibility
  • Low device variance
  • Excellent charge retention
  • High throughput computation

State of Development:

First successful demonstration of analog dot-product computation

Related Materials:

Du, Yuan, et al. "An Analog Neural Network Computing Engine Using CMOS-Compatible Charge-Trap-Transistor (CTT)." IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 38, no. 10, 2019, pp. 1811-1819. Institute of Electrical and Electronics Engineers (IEEE), doi:10.1109/TCAD.2018.2859237.

Patent Information:

For More Information:

Nikolaus Traitler
Business Development Officer (BDO)
nick.traitler@tdg.ucla.edu

Inventors:

Subramanian Iyer