Efficient Stochastic Compute-In-Memory Circuit for Multi-Level OR Accumulation (Case No. 2024-122)

Summary:

UCLA researchers in the Department of Electrical and Computer Engineering have developed novel computational hardware that combines stochastic computing with compute-in-memory techniques to improve speed and computational efficiency.

Background:

In recent years, demand for advanced computational technologies has surged, driven by data-intensive applications such as machine learning and deep neural networks. Stochastic computing (SC) is attractive for neural network acceleration because it encodes values as random bitstreams, allowing arithmetic to be performed with very simple logic gates; the resulting hardware occupies less area than conventional binary arithmetic and is therefore more cost-effective, energy-efficient, and scalable. However, SC's limited accuracy, a consequence of the inherent randomness of its computation, has hindered widespread adoption. Compute-in-memory (CIM) technologies reduce data movement by performing computations directly within memory arrays, but they are constrained by the need for power-hungry analog-to-digital converters. Combining the advantages of the two technologies would enable cheaper, more efficient computing.
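To illustrate the SC trade-off described above, the following minimal Python sketch (not the UCLA circuit; all function names are illustrative) multiplies two probabilities by ANDing their random bitstreams. The estimate converges to the true product only as the stream grows, which is the accuracy limitation noted in the background.

```python
import random

def to_bitstream(p, length, rng):
    """Encode a probability p in [0, 1] as a random unipolar bitstream."""
    return [1 if rng.random() < p else 0 for _ in range(length)]

def sc_multiply(a, b, length=256, seed=0):
    """Unipolar stochastic multiply: AND two bitstreams, then average.

    A single AND gate replaces a full binary multiplier, which is the
    source of SC's area and energy savings.
    """
    rng = random.Random(seed)
    sa = to_bitstream(a, length, rng)
    sb = to_bitstream(b, length, rng)
    return sum(x & y for x, y in zip(sa, sb)) / length

# Short streams carry noticeable random error; long streams converge
# toward the exact product 0.25.
print(sc_multiply(0.5, 0.5, length=64))
print(sc_multiply(0.5, 0.5, length=65536))
```

The single-gate multiplier is what makes SC hardware compact, while the stream-length-dependent error is what REX-SC (described below) sets out to mitigate.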

Innovation:

UCLA researchers have developed Stochastic Computing in Memory (SCIM) with Range-Extended Stochastic Computing (REX-SC), which addresses the accuracy limitations of traditional SC by integrating SC with CIM techniques. REX-SC improves computation accuracy without sacrificing the performance benefits of earlier stochastic methods: by increasing the number of output bits retained after accumulation while keeping SC compute units compact, it significantly narrows the accuracy gap relative to fixed-point computation. The approach both reduces the power and area overhead typically associated with CIM and delivers the improved accuracy required for complex machine learning models.
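The benefit of retaining more output bits after accumulation can be sketched in a few lines of Python. This is a conceptual illustration under simplifying assumptions, not the patented circuit: it contrasts classic OR-based SC accumulation, whose one-bit-per-cycle output saturates as inputs are summed, with a multi-bit count that preserves the full range.

```python
import random

def or_accumulate(bitstreams):
    """Classic OR-based SC accumulation: one output bit per cycle.
    The OR saturates once many inputs are active, losing magnitude."""
    return sum(1 if any(bits) else 0 for bits in zip(*bitstreams))

def counted_accumulate(bitstreams):
    """Range-extended idea (simplified): count all set bits per cycle,
    keeping extra output bits so large sums no longer saturate."""
    return sum(sum(bits) for bits in zip(*bitstreams))

# 16 input streams, each encoding the value 0.4, over 128 cycles.
rng = random.Random(1)
streams = [[1 if rng.random() < 0.4 else 0 for _ in range(128)]
           for _ in range(16)]

# The OR output clips near 1.0 per cycle, while counting recovers
# the true per-cycle sum of roughly 0.4 * 16 = 6.4.
print(or_accumulate(streams) / 128)       # near 1.0 (saturated)
print(counted_accumulate(streams) / 128)  # near 6.4 (range extended)
```

In hardware, the multi-bit count corresponds to widening the accumulator output inside the memory array, which is how REX-SC narrows the gap to fixed-point accuracy while the per-input compute units stay as small as in traditional SC.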

Potential Applications:

•    Deep neural network acceleration
•    Image recognition systems
•    Voice recognition systems
•    Machine translation services
•    Energy-efficient computing

Advantages:

•    Higher computational accuracy – improvements of 3-8% over previous technologies.
•    Reduced energy consumption – up to 3.6x energy reduction.
•    Minimal area and power overhead.
•    Lower data movement requirements.
•    Scalability to large models.

State of Development: 

The circuit has been designed and demonstrated in simulation. A prototype has been built, and the researchers have authored a peer-reviewed publication describing the technology.

Related Papers:

1.    T. Li, W. Romaszkan, S. Pamarti and P. Gupta, "REX-SC: Range-Extended Stochastic Computing Accumulation for Neural Network Acceleration," in IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 42, no. 12, pp. 4423-4435, Dec. 2023, doi: 10.1109/TCAD.2023.3284289.
2.    W. Romaszkan, T. Li, T. Melton, S. Pamarti, and P. Gupta, "ACOUSTIC: Accelerating Convolutional Neural Networks through Or-Unipolar Skipped Stochastic Computing," in 2020 Design, Automation & Test in Europe Conference & Exhibition (DATE), 2020, pp. 768–773. https://ieeexplore.ieee.org/document/9116289

Reference:

UCLA Case No. 2024-122

Lead Inventor:

Sudhakar Pamarti, UCLA Professor of Electrical and Computer Engineering.
 

Patent Information:

For More Information:

Nikolaus Traitler
Business Development Officer (BDO)
nick.traitler@tdg.ucla.edu

Inventors:

Sudhakar Pamarti
Jiyue Yang
Soumitra Pal
Puneet Gupta
Tianmu Li
Wojciech Romaszkan