System Benchmarking for the New Era of Computing


Data Center Modernization Is Happening, but Benchmarks Have Not Kept Up

The industry has traditionally relied on one or two benchmarks (SPECint, SPECjbb, TPC-C, etc.) as a simple, standardized way to assess CPU performance and, by extension, overall system performance in the data center. These traditional benchmarks were designed for node-level performance measurement. However, today's modernized data centers comprise a massive build-out of IT infrastructure on a new underlying system architecture that has become more distributed, disaggregated, and heterogeneous to meet the requirements of new workloads, which are far more interactive and latency sensitive. Modern data center system architecture comprises a mix of CPUs, Infrastructure Processing Units (IPUs), and special-purpose accelerators. Traditional benchmarks fall a long way short of representing the performance of typical workloads, particularly cloud workloads running in the public/private Cloud, Edge, and/or enterprise data centers.

We rely on Total Cost of Ownership (TCO) modeling for CPU SKU selection and customization, which is typically based on per-vCPU SPECrate. However, SPECrate cannot accurately reflect the end-to-end performance of microservice-based applications. This compromises our TCO model, which in turn leads to suboptimal CPU SKU customization. Our customers also need to assess and compare the performance of different CSP offerings for competitive analysis and decision-making. Since the benchmarks in their toolbox do not reflect the end-to-end performance of cloud-native applications, their conclusions are often misleading and not in alignment with their best interests.

— Jian Chen, Architect, Alibaba Infrastructure Service
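To make the quoted concern concrete, here is a minimal sketch of the conventional per-vCPU SPECrate figure of merit that such TCO models rest on. Every SKU name, score, and cost below is a hypothetical illustration, not vendor data; the point is only that the metric carries no information about end-to-end application behavior.

```python
# Minimal sketch of a per-vCPU SPECrate-based TCO comparison.
# All SKU names, scores, and costs are hypothetical illustrations,
# not real vendor or customer data.

skus = {
    # sku: (SPECrate-style score per vCPU, cost per vCPU per year in USD)
    "sku_a": (10.0, 120.0),
    "sku_b": (12.5, 160.0),
}

for name, (perf_per_vcpu, cost_per_vcpu) in skus.items():
    # The conventional figure of merit: benchmark score per dollar.
    perf_per_dollar = perf_per_vcpu / cost_per_vcpu
    print(f"{name}: {perf_per_dollar:.4f} score/$")

# By this metric sku_a "wins" (0.0833 vs 0.0781 score/$), yet the score
# says nothing about RPC fan-out, tail latency, or accelerator offload,
# so a microservice application might deliver better end-to-end
# throughput per dollar on sku_b. That gap is what breaks the TCO model.
```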

Microservices Workload Architecture

The shift in computing paradigm includes the adoption of a microservices workload architecture: a move from complex monolithic services that encompass the entire application's functionality in a single binary to graphs of tens or hundreds of single-purpose, loosely coupled microservices. Microservices fundamentally change many of the assumptions current cloud systems are designed around, and they present both opportunities and challenges. Their growing popularity is justified for several reasons, chief among them that they promote composable software design, simplifying and accelerating development, with each microservice responsible for a small subset of the application's functionality. Despite these advantages, microservices represent a significant departure from the way cloud services are traditionally designed and have broad implications. As clusters grow in size and services become more complex, microservice requirements put increasing pressure on the underlying data center infrastructure. Low tail latency and performance predictability, that is, performance under SLA, are critical. There is therefore a pressing need to improve performance monitoring and tuning through a more representative system-level benchmarking process.
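The tail-latency pressure is easy to see in a back-of-the-envelope simulation. The sketch below assumes a request that fans out in parallel to N microservices and waits for the slowest reply; the long-tailed service-time distribution is an invented illustration, not a measurement of any real system. With 100 dependencies, roughly 1 - 0.99^100, or about 63%, of requests hit at least one slow path, so end-to-end latency is governed by the tail rather than the mean.

```python
import random
import statistics

# Sketch: a request fans out to N loosely coupled microservices in
# parallel and must wait for the slowest one. Service times follow a
# hypothetical long-tailed distribution (illustrative only).

def service_time_ms():
    # 99% of calls are fast; 1% hit a slow path (queueing, GC pause, etc.).
    if random.random() < 0.99:
        return random.uniform(1.0, 5.0)
    return random.uniform(50.0, 100.0)

def end_to_end_ms(fan_out):
    # Parallel fan-out: end-to-end latency is the max over all dependencies.
    return max(service_time_ms() for _ in range(fan_out))

for fan_out in (1, 10, 100):
    samples = sorted(end_to_end_ms(fan_out) for _ in range(10_000))
    p99 = samples[int(0.99 * len(samples))]
    print(f"fan-out={fan_out:>3}: mean={statistics.mean(samples):6.1f} ms  "
          f"p99={p99:6.1f} ms")
```

Even though each dependency is fast on average, the mean and p99 of the end-to-end latency climb sharply with fan-out, which is why a node-level average-throughput score cannot predict whether the application meets its SLA.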

It’s Time to Modernize the Benchmarks

In the increasingly complex world of data center transformation, the benchmarking process needs modernizing. The accelerating shift to microservices has exposed significant limitations in the use of traditional benchmarks as performance indicators. A system that performs well for traditional, general-purpose computing might not be optimal for modern, cloud-native applications and other data-intensive workloads that rely on system-level components beyond the CPU.

We look forward to working with Intel and other companies in the industry to build a new, more representative cloud benchmark framework which is necessary to better guide future hardware development including CPUs, accelerators, network infrastructure and storage.

— George Chrysos, Chief Architect, Azure Hardware Development, Microsoft

Intel is leading the charge, working with key industry players to define a new system-level benchmark. Intel is committed to attending to the evolving needs of public/private Cloud, Edge, and enterprise data center users as emerging workloads stress the underlying system infrastructure of the data center. A system-level benchmark is important to customers and end users alike.

We envision a system-level benchmark that captures representative microservice usages and real-world deployment practices. Such a benchmark would dramatically expand the benchmarking toolbox by providing the right measure of end-to-end performance. It would allow CSPs to develop accurate TCO models and facilitate what-if analysis to root-cause performance bottlenecks. End users also need to assess and compare the performance of different CSP offerings for competitive analysis and decision-making; the new system-level benchmark will enable them to do so.
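One plausible shape for such an end-to-end measure, sketched below purely as an assumption rather than a description of any agreed benchmark, is SLA-constrained capacity: the highest offered load at which the application's measured p99 latency still meets its SLO. All load levels and latencies in the sweep are hypothetical.

```python
# Sketch of an SLA-constrained capacity metric: the highest offered load
# (requests/s) at which measured end-to-end p99 latency stays within the
# SLO. The sweep below is hypothetical; a real benchmark would drive the
# deployed service and record p99 at each load level.

SLO_P99_MS = 100.0

# (offered load in requests/s, measured end-to-end p99 latency in ms)
load_sweep = [
    (1_000, 22.0),
    (2_000, 31.0),
    (4_000, 58.0),
    (8_000, 97.0),
    (16_000, 240.0),  # saturated: queueing pushes p99 past the SLO
]

capacity = max((rps for rps, p99 in load_sweep if p99 <= SLO_P99_MS),
               default=0)
print(f"SLA-constrained capacity: {capacity} req/s at p99 <= {SLO_P99_MS} ms")
# -> 8000 req/s: the figure a TCO model could divide cost by,
#    instead of a per-vCPU SPECrate score.
```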

For more information contact benchmarks@intel.com.