2018-003 Deep-Learning Accelerator and Analog Neuromorphic Computation with CMOS-Compatible Charge-Trap-Transistor (CTT) Technique

SUMMARY

UCLA researchers from the Department of Electrical Engineering have invented a charge-trap-transistor (CTT) based computing architecture for neural network applications.

BACKGROUND

Deep neural networks have wide applications in industries such as machine vision, voice recognition, and artificial intelligence, all of which are billion-dollar markets growing rapidly each year. Current commercial computing processors (such as CPUs, GPUs, and accelerators) can hardly keep up with the increasing computational demands of complex deep neural network algorithms. Design limits in energy efficiency and on-chip memory density hinder the expansion of computation capabilities in existing commercial processors.

In recent years, analog computing engines have shown promise in increasing computing speed and energy efficiency, as well as decreasing design costs. However, these devices require new materials and additional manufacturing processes that are not supported by major CMOS foundries. Hence, they are incompatible with existing commercial CMOS chips.

INNOVATION

UCLA researchers from the Department of Electrical Engineering have developed a charge-trap-transistor (CTT) based computing architecture for neural network applications. This design increases computing speed and reduces power consumption. Additionally, it is compatible with existing commercial processors.

The inventors designed a CTT-based embedded non-volatile memory (eNVM) solution, built on standard high-k metal-gate (HKMG) transistors, to increase computation speed and energy efficiency in deep-learning accelerator applications. Since the weights stored on-chip in a convolutional neural network (CNN) are repeatedly read and reused during computation, the fast-read, slow-write characteristics of CTT-based eNVM fit naturally into a CNN accelerator.
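The read/write asymmetry described above can be illustrated with a toy access count. The sketch below (hypothetical, not from the patent; the naive convolution and the 3x3/32x32 sizes are assumptions for illustration) tallies how often each CNN weight is written versus read: weights are programmed into eNVM once, but read once per output pixel of every inference, so fast reads dominate.

```python
import random

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D convolution over list-of-lists inputs.
    Each kernel weight is read once per output pixel."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            out[i][j] = acc
    return out

random.seed(0)
kernel = [[random.gauss(0, 1) for _ in range(3)] for _ in range(3)]   # programmed once
image = [[random.gauss(0, 1) for _ in range(32)] for _ in range(32)]  # one input frame

out = conv2d_valid(image, kernel)
weight_writes = 3 * 3                             # one slow eNVM program per weight
weight_reads = 3 * 3 * len(out) * len(out[0])     # fast reads in a single inference

print(weight_writes, weight_reads)  # 9 8100
```

Even for this single small layer, each weight is read 900 times per inference but written only once, which is why a memory with slow writes but fast, low-energy reads is a good match.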

Additionally, the CTT-based eNVM requires no new manufacturing materials or process steps, making it fully compatible with existing CMOS chips. It could therefore be readily commercialized for high-performance, low-power computing in artificial intelligence applications.

APPLICATIONS

  • Deep neural network accelerators
  • Image/voice recognition
  • Artificial intelligence

ADVANTAGES

  • High Speed
  • Low power consumption
  • Compatible with existing commercial manufacturing processes
  • Low design costs

PATENT STATUS

United States of America      US Patent Application      20200364548      06/17/2020

Patent Information:
For More Information:
Ed Beres
Business Development Officer
edward.beres@tdg.ucla.edu
Inventors:
Yuan Du
Li Du
Mau-Chung Frank Chang