Intel Labs Continues to Innovate Memory and Storage Technology for the Future

Highlights

  • Intel announced six new Intel Optane memory and storage products to alleviate data supply bottlenecks.

  • Intel Labs scientists continue to innovate memory and storage technology for the future, such as compute near-memory and computational storage.

Today Intel announced six new Intel Optane memory and storage products to help customers meet the challenges of 5G, network transformation, artificial intelligence (AI), and the intelligent edge. But for Intel Labs researchers, this is just one step forward in the ongoing journey of data. Innovation in memory and storage will continue as scientists strive to break data barriers by developing future technologies that improve how data is moved and processed, such as compute near-memory (CNM) and computational storage. Both research areas use similar techniques, but they apply them in different places: inside the chip design versus across the data center network.

Today, Intel Optane SSDs alleviate data supply bottlenecks and accelerate applications with fast caching and fast storage, increasing scale per server and reducing transaction costs for latency-sensitive workloads. By following the journey of data, Intel Labs researchers continue to develop a holistic understanding of how information is captured, moved, processed, stored, and secured.

Intel Labs scientists have observed that two stages in the data journey, moving data and computing on it, interact closely. For example, bottlenecks in moving data can limit how effectively that data can be processed. These kinds of bottlenecks show up throughout the storage and memory hierarchy.

Researchers are discovering that it is possible to mitigate these data movement overheads by placing computation near the memory or the storage device. Intel Labs built an AI test chip that alleviates memory bottlenecks using CNM techniques. For storage devices, Intel Labs has demonstrated that computational storage may reduce network load and improve speed in edge infrastructure.
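
To make the overhead concrete, here is a back-of-envelope Python model, not taken from the Intel work: the per-operation energies are illustrative assumptions, loosely in line with published 45 nm estimates (Horowitz, ISSCC 2014), and real figures vary widely with process node and circuit design. It compares the energy of a dot product when operands stream in from DRAM versus from a small memory sitting next to the compute unit:

```python
# Back-of-envelope model of where the energy goes in a dot product.
# The per-operation energies are illustrative assumptions, loosely based
# on published 45 nm estimates; actual values depend on the design.

DRAM_READ_PJ = 640.0   # read one 32-bit word from DRAM
SRAM_READ_PJ = 5.0     # read one 32-bit word from a small local SRAM
MAC_PJ = 4.6           # one 32-bit multiply-accumulate

def dot_product_energy_pj(n_elements: int, reads_from_dram: bool) -> float:
    """Estimated energy (pJ) for an n-element dot product.

    Each element needs two operand reads plus one multiply-accumulate.
    """
    read_pj = DRAM_READ_PJ if reads_from_dram else SRAM_READ_PJ
    return n_elements * (2 * read_pj + MAC_PJ)

n = 1_000_000
far = dot_product_energy_pj(n, reads_from_dram=True)
near = dot_product_energy_pj(n, reads_from_dram=False)
print(f"far-memory:  {far / 1e6:.1f} uJ")
print(f"near-memory: {near / 1e6:.1f} uJ")
print(f"data movement dominates by roughly {far / near:.0f}x")
```

Even under generous assumptions, the far-memory budget is dominated by the reads rather than the arithmetic, which is exactly the gap that compute near-memory and computational storage aim to close.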

The following two blogs highlight the latest research from Intel Labs on CNM and computational storage:

Compute Near-Memory Reduces Data Transfer Energy and Increases Throughput in Neural Networks
Modern AI workloads require computational systems to process neural networks with millions of parameters. Moving data between memory and compute for these workloads can limit the system's performance and energy efficiency. Intel Labs has built an AI circuit that alleviates these memory bottlenecks by inserting distributed computational units into the memory array, creating a compute near-memory system. With built-in scalability and modularity, the AI accelerator circuit can address a range of needs, from low-power IoT and edge devices all the way to data centers and servers.
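
As a rough illustration of the idea, not of the Intel Labs test chip itself (the class and method names below are hypothetical), this sketch models a weight vector partitioned across memory banks, each with a small local multiply-accumulate unit. Only partial sums cross the interconnect; the weights never leave their banks:

```python
import numpy as np

class NearMemoryBank:
    """One memory bank with a small local compute unit (hypothetical model).

    Weights stay resident in the bank; only an input slice comes in and
    only a partial sum goes out, instead of streaming every weight to a
    central compute unit.
    """

    def __init__(self, weights: np.ndarray):
        self.weights = weights  # resident in this bank's memory array

    def mac(self, inputs: np.ndarray) -> float:
        # Local multiply-accumulate over this bank's slice of the weights.
        return float(self.weights @ inputs)

class CNMDotProduct:
    """Distributes a weight vector across banks, in the spirit of CNM."""

    def __init__(self, weights: np.ndarray, n_banks: int):
        self.banks = [NearMemoryBank(w) for w in np.array_split(weights, n_banks)]
        self.splits = np.array_split(np.arange(len(weights)), n_banks)

    def forward(self, inputs: np.ndarray) -> float:
        # Each bank computes on its local slice; the host only sums a
        # handful of scalars rather than reading every weight.
        return sum(bank.mac(inputs[idx])
                   for bank, idx in zip(self.banks, self.splits))

rng = np.random.default_rng(0)
w, x = rng.standard_normal(1024), rng.standard_normal(1024)
cnm = CNMDotProduct(w, n_banks=8)
assert np.isclose(cnm.forward(x), w @ x)  # same result, far less data moved
```

The modularity the blog describes falls out naturally from this layout: adding capacity means adding more bank-plus-compute tiles rather than widening a single central datapath.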

Computational Storage Approach Uses Virtual Objects to Reduce Data Bottlenecks
Advances in storage technology have led to extremely large, fast devices. However, I/O speeds are not keeping pace, and an increasing share of time goes to transferring data between storage devices and central processing units (CPUs) in data centers. Computational storage, a form of near-data processing, could mitigate these data movement bottlenecks in edge infrastructure. At SC20, Intel Labs scientists demonstrated a block-compatible design based on virtual objects, making numerous offloads possible in block-based computational storage.
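
A minimal sketch of what such an offload path might look like follows; the class and method names are hypothetical and do not reflect the SC20 virtual-object API. A "virtual object" is modeled here as just a named range of blocks that an on-device function can interpret as records, so a filter pushed down to the device returns only matches instead of the whole object:

```python
import json

BLOCK_SIZE = 4096

class ComputationalDrive:
    """Toy block device with an on-device offload path (hypothetical API)."""

    def __init__(self):
        self.blocks: dict[int, bytes] = {}
        self.objects: dict[str, range] = {}   # virtual object -> block range

    def write_object(self, name: str, records: list[dict], start: int) -> None:
        # One JSON record per fixed-size block, padded with spaces.
        blks = range(start, start + len(records))
        for blk, rec in zip(blks, records):
            self.blocks[blk] = json.dumps(rec).encode().ljust(BLOCK_SIZE)
        self.objects[name] = blks

    def read_object(self, name: str) -> list[dict]:
        # Conventional path: every block crosses the I/O link to the host.
        return [json.loads(self.blocks[b].rstrip()) for b in self.objects[name]]

    def offload_filter(self, name: str, predicate) -> list[dict]:
        # Computational-storage path: the predicate runs on the device,
        # so only matching records cross the I/O link.
        matches = []
        for blk in self.objects[name]:
            record = json.loads(self.blocks[blk].rstrip())
            if predicate(record):
                matches.append(record)
        return matches

drive = ComputationalDrive()
drive.write_object("sensors",
                   [{"id": i, "temp": 20 + i % 50} for i in range(1000)],
                   start=0)

hot = drive.offload_filter("sensors", lambda r: r["temp"] > 60)
full = drive.read_object("sensors")
print(f"offload returned {len(hot)} records instead of {len(full)}")
```

The appeal of a block-compatible design is visible even in this toy: the device still stores and serves plain blocks, so existing software keeps working, while applications that know about the virtual objects can opt into offloads that shrink the traffic over the I/O link.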