The Intel® Xeon® processor E5 v4 family of server processors introduces advanced new resource monitoring and control features designed to improve visibility into and control over how shared platform resources such as the L3 cache are used.
The Cache Monitoring Technology (CMT) feature introduced in the Intel® Xeon® processor E5 v3 family provided a starting point, improving visibility into how the L3 cache is used by threads, applications, virtual machines (VMs), and containers. The new Intel® Resource Director Technology (Intel® RDT) feature set extends these capabilities substantially.
More information on the Intel RDT feature set can be found here.
An animation illustrating the key principles behind Intel RDT is posted here.
Key Intel Resource Director Technology Features on the Intel® Xeon® Processor E5 v4 Family:
- Cache Allocation Technology (CAT)
Formerly available on a limited subset of Intel Xeon processor E5 v3 communications SKUs, an enhanced version of CAT is now available across all Intel Xeon processor E5 v4 SKUs. CAT gives software control over the placement of data in the L3 cache, enabling new usages such as prioritizing important VMs in the data center, containing “noisy neighbors,” and protecting important communications applications, such as virtual switches, from interference.
CAT enables an OS, hypervisor / virtual machine manager (VMM), or similar system management agent to specify the amount of cache space that an application can fill. This technology provides control over last level cache (LLC) allocation.
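To make the model concrete: CAT exposes classes of service (CLOS), each configured with a capacity bitmask (CBM) that selects which portions of the LLC threads in that class may fill, and architecturally the set bits in a CBM must be contiguous. The following is a minimal sketch of that arithmetic only; the function names and the equal-slice capacity assumption are illustrative, not an Intel API:

```python
def is_valid_cbm(mask: int, cbm_len: int) -> bool:
    """A capacity bitmask (CBM) must be non-zero, fit within the
    cbm_len bits the processor reports, and have contiguous set bits
    (an architectural CAT requirement)."""
    if mask == 0 or mask >> cbm_len:
        return False
    # Shift out trailing zeros, then verify the remainder is all ones.
    lowest = (mask & -mask).bit_length() - 1
    shifted = mask >> lowest
    return (shifted & (shifted + 1)) == 0


def cache_share_bytes(mask: int, cbm_len: int, llc_bytes: int) -> int:
    """Approximate LLC capacity granted by a CBM, assuming each mask
    bit covers an equal slice of the cache."""
    return llc_bytes * bin(mask).count("1") // cbm_len
```

On Linux, such masks are typically programmed through the resctrl filesystem or the intel-cmt-cat `pqos` utility rather than computed by hand.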
Why limit the amount of LLC space an application can use? When multiple applications run concurrently, they compete with each other for cache space. An application such as video streaming that requests large chunks of data but never reuses them does not use the cache well (low temporal locality), meaning it occupies LLC space that could otherwise improve the performance of other applications or VMs.
In such cases, operators can use CAT to limit how much cache space the video streaming application can use, leaving cache space available for more important applications to use. Such prioritization is one key usage of CAT, and others are possible as described in a series of articles:
- Introduction to Cache Allocation Technology (CAT)
- Key Cache Allocation Technology Usage Models
- Proof points for Cache Allocation Technology
- Software enabling and support for Cache Allocation Technology
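At the hardware level (per the Intel Software Developer's Manual), an OS or VMM associates a logical processor with a monitoring ID (RMID, bits 9:0) and a class of service (CLOS, bits 63:32) by writing the IA32_PQR_ASSOC MSR. The sketch below shows only the packing arithmetic; actually writing the MSR requires ring-0 access (e.g., via a kernel driver), and the function names are illustrative:

```python
IA32_PQR_ASSOC = 0xC8F  # MSR address, per the Intel SDM


def pack_pqr_assoc(rmid: int, clos: int) -> int:
    """Build an IA32_PQR_ASSOC value: RMID in bits 9:0 (tags the thread
    for CMT/MBM monitoring), CLOS in bits 63:32 (selects the CAT mask)."""
    assert 0 <= rmid < (1 << 10) and 0 <= clos < (1 << 32)
    return (clos << 32) | rmid


def unpack_pqr_assoc(value: int) -> tuple[int, int]:
    """Return (rmid, clos) from a packed IA32_PQR_ASSOC value."""
    return value & 0x3FF, (value >> 32) & 0xFFFFFFFF
```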
- Memory Bandwidth Monitoring (MBM)
Multi-core cache hierarchies can provide applications with substantial throughput and scalability benefits. However, when many user processes run concurrently, cache and memory bandwidth are frequently contended resources and must be monitored and used efficiently to achieve the best performance.
MBM allows monitoring of bandwidth from one level of the cache hierarchy to the next—in this case focusing on the L3 cache. The architecture is based on an extension of the existing Cache Monitoring Technology (CMT) feature, which uses per-thread tags, meaning that per-thread bandwidth can be measured for applications, VMs, or containers. Additionally, bandwidth to both local and remote memory controllers is provided (via local/total event codes), meaning bandwidth-aware scheduling and NUMA optimizations are possible.
More information about MBM can be found in a series of related articles.
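As an illustration of how the raw counters become a bandwidth figure: software selects an MBM event and an RMID via IA32_QM_EVTSEL, reads IA32_QM_CTR twice, and scales the counter delta by the per-count byte factor reported in CPUID leaf 0xF, sub-leaf 1 (EBX). A hedged sketch of that conversion, assuming a 24-bit counter width as on early implementations (the function name is illustrative):

```python
def mbm_bandwidth_mbytes_per_s(count_start: int, count_end: int,
                               interval_s: float, upscale_bytes: int,
                               counter_bits: int = 24) -> float:
    """Convert two IA32_QM_CTR samples of an MBM event into MB/s.
    `upscale_bytes` is the per-count scaling factor from CPUID(0xF, 1).EBX;
    the modulo handles counter wrap-around within one sampling interval."""
    delta = (count_end - count_start) % (1 << counter_bits)
    return delta * upscale_bytes / interval_s / 1e6
```

Sampling the total and local events separately lets software estimate remote-memory bandwidth as their difference, enabling the NUMA optimizations mentioned above.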
- Code and Data Prioritization (CDP)
Certain applications with large code footprints, or with above-average sensitivity to protecting code in the cache, may benefit from the software control provided by Code and Data Prioritization (CDP), an extension of CAT. CDP enables isolation and separate prioritization of code and data placement in the LLC in a software-configurable manner, allowing cache capacity to be tuned to the characteristics of a workload.
More information about CDP is available in a series of related articles.
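One way to see the cost of this extra control: when CDP is enabled, the IA32_L3_QOS_MASK registers (base address 0xC90 per the Intel SDM) pair up so that class of service n uses mask register 2n for data and 2n+1 for code, halving the number of classes available relative to plain CAT. A small sketch of that mapping (function name illustrative):

```python
IA32_L3_QOS_MASK_0 = 0xC90  # first L3 mask MSR, per the Intel SDM


def cdp_mask_msrs(clos: int) -> tuple[int, int]:
    """With CDP enabled, return the (data, code) mask MSR addresses
    consumed by class of service n: index 2n is data, 2n+1 is code."""
    return (IA32_L3_QOS_MASK_0 + 2 * clos,
            IA32_L3_QOS_MASK_0 + 2 * clos + 1)
```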
The Intel RDT feature set enables improved monitoring of L3 cache occupancy and memory bandwidth, while providing new levels of control over the way that applications make use of the L3 cache. Enhanced capabilities are possible using these features, including improved telemetry, resource-aware scheduling, improved performance guarantees, and enhanced fairness and determinism. The articles referenced above provide additional details on the capabilities available.
Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. Check with your system manufacturer or retailer or learn more at intel.com.
No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.
Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.
This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.
The products and services described may contain defects or errors known as errata which may cause deviations from published specifications. Current characterized errata are available on request.
Copies of documents which have an order number and are referenced in this document may be obtained by calling 1-800-548-4725 or by visiting www.intel.com/design/literature.htm.
Intel, the Intel logo, and Xeon are trademarks of Intel Corporation in the U.S. and/or other countries.
*Other names and brands may be claimed as the property of others.
© 2016 Intel Corporation.