As datasets grow in size and complexity, and the speed at which they must be processed increases, organizations face the challenge of optimizing their infrastructure to handle simultaneous growth in both volume and velocity. The rise of data-intensive workloads such as real-time analytics, AI/ML, animation, and bioinformatics is driving demand for memory-optimized hardware capable of processing large datasets in minimal time. However, present compute infrastructure cannot always support the complex configurations that these growing memory demands require.
This white paper examines current memory deployment trends and practices, providing a brief market analysis to identify the challenges that modern companies face. It also covers recent technology developments that aim to address these challenges and presents a best-practice example of memory optimization for genomics data processing. Through the use case of The Translational Genomics Research Institute (TGen), an affiliate of City of Hope, the paper examines the benefits of using next-generation computation technology developed by MemVerge on phoenixNAP’s Hardware-as-a-Service platform to virtualize Intel® Optane™ Persistent Memory.
Download “Accelerating Genomics Data Processing with Persistent Memory and Big Memory Software” ›