2021-212 Configurable Memory Pool System
Summary:
UCLA researchers in the Department of Electrical and Computer Engineering have developed a processing node architecture that makes the memory pool scalable and enables the downstream integration of heterogeneous memory technologies.
Background:
Modern electronic applications require high memory capacity and bandwidth, often more than a single processing node can provide. To meet this demand, multiple processing nodes can be connected using interconnection technologies. However, existing interconnects introduce bottlenecks that slow down the system and constrain the design space, resulting in worse memory utilization and performance. Furthermore, this architecture cannot be scaled and makes the downstream interconnection of heterogeneous technologies (e.g., DRAM, SRAM, Flash) difficult. A significant need remains for an architecture that can interconnect multiple processing nodes without suffering from poor memory capacity and latency.
Innovation:
UCLA researchers in the Department of Electrical and Computer Engineering have developed a densely integrated, chiplet-based networked memory pool with high intra-pool bandwidth. The chiplet architecture provides a common interface to the network, allowing the memory to be assembled in a variety of configurations. This allows different memory interfaces such as DDRx, LPDDRx, GDDRx, and PCIe, and communication interfaces such as OMI, Gen-Z, and CXL, to be accommodated without modifying the rest of the system. The memory pool can be scaled in capacity, and downstream interconnection of heterogeneous technologies can be achieved. The developed processing node architecture could also enable novel processing designs.
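The key idea, that a common chiplet interface decouples the memory technology behind each chiplet from the rest of the pool, can be illustrated with a short conceptual sketch. The Python model below is purely illustrative and is not the researchers' hardware design; the class names (MemoryChiplet, DDRChiplet, FlashChiplet, MemoryPool) and the flat address-routing scheme are assumptions made for this sketch.

```python
from abc import ABC, abstractmethod


class MemoryChiplet(ABC):
    """Common network-facing interface that every memory chiplet in the pool exposes."""

    def __init__(self, capacity: int):
        self.capacity = capacity

    @abstractmethod
    def read(self, addr: int, size: int) -> bytes: ...

    @abstractmethod
    def write(self, addr: int, data: bytes) -> None: ...


class DDRChiplet(MemoryChiplet):
    """Stand-in for a chiplet fronting a DDRx-style device."""

    def __init__(self, capacity: int):
        super().__init__(capacity)
        self._mem = bytearray(capacity)

    def read(self, addr: int, size: int) -> bytes:
        return bytes(self._mem[addr:addr + size])

    def write(self, addr: int, data: bytes) -> None:
        self._mem[addr:addr + len(data)] = data


class FlashChiplet(DDRChiplet):
    """Stand-in for a Flash-backed chiplet; only the device behind the interface differs."""


class MemoryPool:
    """Networked pool that routes a flat address space onto whichever chiplet backs each range.

    Swapping chiplet types or adding chiplets changes only this list, not the pool's clients.
    """

    def __init__(self, chiplets):
        self._map = []
        base = 0
        for chiplet in chiplets:
            self._map.append((base, chiplet))
            base += chiplet.capacity

    def _locate(self, addr: int):
        for base, chiplet in self._map:
            if base <= addr < base + chiplet.capacity:
                return chiplet, addr - base
        raise ValueError("address outside the pool")

    def read(self, addr: int, size: int) -> bytes:
        chiplet, offset = self._locate(addr)
        return chiplet.read(offset, size)

    def write(self, addr: int, data: bytes) -> None:
        chiplet, offset = self._locate(addr)
        chiplet.write(offset, data)


# Capacity scales by appending chiplets; heterogeneity stays hidden behind the shared interface.
pool = MemoryPool([DDRChiplet(1024), FlashChiplet(4096)])
pool.write(2000, b"hello")
print(pool.read(2000, 5))  # b'hello'
```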
Potential Applications:
- Dynamic random-access memory
- Flash memory
- Static random-access memory
- Magnetoresistive random-access memory
Advantages:
- High bandwidth
- Low latency
- Capable of connecting multiple processing nodes simultaneously
- Tunable performance for superior cost-performance-energy trade-off
- Connects different memory interfaces together
Development to Date:
A first description of the complete invention has been completed.
Related Papers:
- S. Pal et al., "Designing a 2048-Chiplet, 14336-Core Waferscale Processor," 2021 58th ACM/IEEE Design Automation Conference (DAC), 2021, pp. 1183-1188, doi: 10.1109/DAC18074.2021.9586194.
- S. Pal, D. Petrisko, R. Kumar and P. Gupta, "Design Space Exploration for Chiplet-Assembly-Based Processors," in IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 28, no. 4, pp. 1062-1073, April 2020, doi: 10.1109/TVLSI.2020.2968904.
- S. Pal, D. Petrisko, M. Tomei, P. Gupta, S. S. Iyer and R. Kumar, "Architecting Waferscale Processors - A GPU Case Study," 2019 IEEE International Symposium on High Performance Computer Architecture (HPCA), Washington, DC, USA, 2019, pp. 250-263, doi: 10.1109/HPCA.2019.00042.
Patent Information:
App Type | Country | Patent No. | Issued Date
Inventors: