HPC Software and Tools
When building high performance computing applications, developers and HPC practitioners seek to extract the most processing power available. Intel® software and tools enable high-performance programming and streamline the development of applications from edge to cloud.
Intel® Parallel Studio XE
Intel® Parallel Studio XE is a suite of tools designed to help developers break through performance bottlenecks by making it easier to build high-performance parallel applications for HPC and AI. It empowers developers to apply the latest techniques in vectorization, multithreading, multinode parallelization, and memory optimization. With three editions to choose from—Composer, Professional, and Cluster—developers can get the right level of support.
Intel® Parallel Studio XE Composer Edition features state-of-the-art compilers, performance libraries, parallel models, and high-performance Python* solutions. These include:
- Intel® C++ Compiler enables code that takes advantage of more cores and built-in technologies in Intel® processor-based platforms.
- Intel® Fortran Compiler makes it possible to build high-performance applications by generating optimized code for Intel® Xeon® Scalable processors and Intel® Core™ processors.
- Intel® Distribution for Python* helps developers achieve fast Python* applications and speed up core computational packages, all with minimal or no changes to the code.
- Intel® Math Kernel Library (Intel® MKL) is a ready-to-use math library that accelerates math processing routines and application performance. The library optimizes code for each Intel® processor family, so applications carry those optimizations forward to future processor generations with minimal effort.
- Intel® Data Analytics Acceleration Library (Intel® DAAL) accelerates the development of high-performance data science applications. This library helps applications make better predictions—fast—and analyze large data sets with the computing resources available.
- Intel® Integrated Performance Primitives (Intel® IPP) offers a library of ready-to-use, domain-specific functions that are optimized for diverse Intel® architectures.
- Intel® Threading Building Blocks (Intel® TBB) is a popular C++ library for shared memory parallel programming and heterogeneous computing (intranode distributed memory programming).
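To make the shared-memory data parallelism that Intel® TBB provides C++ developers more concrete, here is a rough, language-shifted sketch using Python's standard-library thread pool. The function names (`parallel_saxpy`, `saxpy_chunk`) are invented for this illustration; this is a conceptual analogue of a TBB `parallel_for`-style pattern, not TBB itself:

```python
# Conceptual sketch of shared-memory data parallelism, analogous to the
# parallel_for pattern Intel TBB provides in C++. Uses only Python's
# standard-library thread pool; this is an illustration, not TBB.
from concurrent.futures import ThreadPoolExecutor

def saxpy_chunk(args):
    """Compute a*x + y over one chunk of the input (a BLAS-style kernel)."""
    a, xs, ys = args
    return [a * x + y for x, y in zip(xs, ys)]

def parallel_saxpy(a, xs, ys, workers=4, chunk=1024):
    # Split the index space into chunks and let the pool schedule them,
    # much as TBB's parallel_for divides a blocked_range across threads.
    tasks = [(a, xs[i:i + chunk], ys[i:i + chunk])
             for i in range(0, len(xs), chunk)]
    out = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map preserves chunk order, so results reassemble correctly.
        for part in pool.map(saxpy_chunk, tasks):
            out.extend(part)
    return out
```

Note the design caveat: in CPython, threads only speed up kernels that release the global interpreter lock (as native math libraries typically do), which is one reason production kernels live in native libraries such as Intel® TBB and Intel® MKL rather than pure Python.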
Intel® Parallel Studio XE Professional Edition has everything in the Composer Edition, plus a performance profiler, a vectorization and threading advisor, and a memory and thread debugger.
- Intel® Advisor gives developers tools to build well-threaded and vectorized code that exploits Intel® hardware capabilities. Intel® Advisor is available as part of Intel® Parallel Studio XE and Intel® System Studio.
- Intel® Inspector helps developers find and debug errors in threading, memory, and persistent memory. By correcting these errors early in the application design cycle, you can help avoid costly errors later.
- Intel® VTune™ Amplifier uses advanced sampling and profiling techniques to analyze your code and provide insights for optimizing performance. This tool works by collecting profiling data and simplifying its analysis and interpretation.
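The collect-then-analyze workflow described above can be sketched with Python's built-in profiler. Note the difference in technique: Intel® VTune™ Amplifier uses low-overhead hardware sampling, while `cProfile` deterministically traces every call; the helper name `profile_hotspots` is invented for this example:

```python
# Rough illustration of the "collect profiling data, then rank hotspots"
# workflow. Intel VTune Amplifier does this with hardware sampling;
# Python's stdlib cProfile traces every call, but the workflow is similar.
import cProfile
import io
import pstats

def hotspot(n):
    # Deliberately heavy loop so it dominates the profile.
    return sum(i * i for i in range(n))

def profile_hotspots(func, *args):
    """Run func under the profiler and return (result, hotspot report)."""
    prof = cProfile.Profile()
    prof.enable()
    result = func(*args)
    prof.disable()
    buf = io.StringIO()
    # Sort by cumulative time and keep the top 5 entries, i.e. the hotspots.
    pstats.Stats(prof, stream=buf).sort_stats("cumulative").print_stats(5)
    return result, buf.getvalue()

result, report = profile_hotspots(hotspot, 100_000)
```

The report ranks functions by time spent, which is the same question a VTune hotspot analysis answers, just at far coarser granularity and with much higher overhead.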
Intel® Parallel Studio XE Cluster Edition adds capabilities to scale out performance across nodes and includes the Intel® MPI Library, MPI profiling capabilities, and an advanced cluster diagnostics tool.
- Intel® MPI Library is a multifabric message-passing library that enhances distributed application performance by implementing the open-source MPICH specification. Developers can create and test complex applications on Intel® processor-based HPC clusters.
- Intel® Trace Analyzer and Collector is a graphical tool to help developers understand MPI application behavior across its full runtime. This tool is part of Intel® Parallel Studio XE.
- Intel® Cluster Checker enhances the reliability and performance of HPC clusters based on Intel® processors by verifying that cluster components are working together seamlessly. This improves uptime and productivity while helping lower the total cost of ownership.
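The message-passing model that the Intel® MPI Library implements can be illustrated without a cluster. In the toy sketch below, each "rank" is a thread and messages travel through in-process queues; real MPI ranks are separate processes, usually on different nodes, and the `Comm` class here is an invented stand-in, not the MPI API:

```python
# Toy illustration of the message-passing model standardized by MPI.
# Each "rank" is a thread and send/recv go through queues; real MPI ranks
# are separate processes across nodes (cf. MPI_Send / MPI_Recv).
import queue
import threading

class Comm:
    """Minimal stand-in for an MPI communicator: one mailbox per rank."""
    def __init__(self, size):
        self.size = size
        self._boxes = [queue.Queue() for _ in range(size)]

    def send(self, data, dest):
        self._boxes[dest].put(data)

    def recv(self, rank):
        # Blocks until a message arrives, like a blocking MPI_Recv.
        return self._boxes[rank].get()

def worker(rank, comm, results):
    if rank == 0:
        # Rank 0 scatters one chunk of work to each other rank...
        for r in range(1, comm.size):
            comm.send(list(range(r * 10, r * 10 + 5)), dest=r)
        # ...then gathers and reduces the partial sums.
        results["total"] = sum(comm.recv(0) for _ in range(comm.size - 1))
    else:
        chunk = comm.recv(rank)
        comm.send(sum(chunk), dest=0)

size = 4
comm = Comm(size)
results = {}
threads = [threading.Thread(target=worker, args=(r, comm, results))
           for r in range(size)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The scatter-compute-gather shape shown here is the skeleton of many MPI applications; tools like Intel® Trace Analyzer and Collector exist precisely to visualize when ranks in such programs sit blocked waiting for messages.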
Intel® software, tools, and frameworks for HPC streamline the development of applications from edge to cloud.
AI and Big Data Frameworks
Intel’s optimized deep learning and big data frameworks help accelerate performance on HPC systems while reducing the amount of work for developers and data scientists.
Intel Optimizations for Deep Learning Frameworks
The Intel® Optimization for TensorFlow* provides optimization of the popular, open source TensorFlow deep learning framework for Intel® Xeon® Scalable processors. This helps data scientists and HPC practitioners solve new business and research challenges.
The Intel® Optimization for Caffe* improves the performance of the popular Caffe framework on Intel® processors. Caffe is a deep learning framework that can be run on HPC clusters to enable AI applications.
Intel Optimizations for Big Data Frameworks
Intel® software contributions to big data and analytics frameworks help applications run fast and easily on HPC systems. Intel optimizations for big data tools and techniques support popular frameworks such as Apache Hadoop and Apache Spark.
Unified Programming with Intel® oneAPI Products
Workloads are becoming more diverse, and no single architecture is best for every workload. For optimized performance, system architects need to be able to choose from a mix of scalar, vector, matrix, and spatial (SVMS) architectures deployed in CPU, GPU, accelerator, and FPGA sockets.
Intel® oneAPI products will deliver tools to deploy applications and solutions across SVMS architectures. Its set of complementary toolkits—a base kit and specialty add-ons—simplifies programming and helps developers improve efficiency and innovation.
Intel® oneAPI Base Toolkit (Base Kit)
The Intel® oneAPI Base Toolkit (Base Kit) is a core set of tools and libraries for building and deploying high-performance, data-centric applications across diverse architectures. It features the Data Parallel C++ (DPC++) language, an evolution of C++ that:
- Allows code reuse across hardware targets—CPUs, GPUs, and FPGAs
- Permits custom tuning for individual accelerators
The Base Kit also:
- Includes domain-specific libraries and the Intel® Distribution for Python* to provide drop-in acceleration across relevant architectures
- Delivers enhanced profiling, design assistance, and debug tools
Intel® oneAPI HPC Toolkit (HPC Kit)
Deliver fast applications that scale. The Intel® oneAPI HPC Toolkit helps developers build, analyze, optimize, and scale HPC applications with the latest techniques in vectorization, multithreading, multinode parallelization, and memory optimization.
Intel® oneAPI DL Framework Developer Toolkit (DLFD Kit)
Develop new—or customize existing—deep learning frameworks using common APIs with the Intel® oneAPI DL Framework Developer Toolkit. Optimize for high performance on Intel® CPUs and GPUs for either single-node or multinode distributed processing.
Intel® oneAPI IoT Toolkit (IoT Kit)
The Intel® oneAPI IoT Toolkit is tailored for developers who want to accelerate the development of smart, connected devices for healthcare, smart homes, aerospace, security, and more.
Intel® oneAPI Rendering Toolkit (Render Kit)
The Intel® oneAPI Rendering Toolkit offers open source libraries for high-performance, high-fidelity visualization. This flexible alternative to dedicated graphics accelerators reduces coding complexity and I/O constraints. It’s optimized for Intel® Xeon® Scalable processors and supports large datasets on platforms of all sizes, including HPC clusters. The toolkit includes:
- Intel® Embree, a collection of high-performance ray tracing kernels that improve the performance of photo-realistic rendering applications on Intel® processors.
- Intel® OSPRay, an open source, scalable, and portable ray tracing engine for visualization on Intel® processors.
- Intel® OpenSWR, a low-level rasterization library upstreamed to the Mesa OpenGL open source project that helps developers achieve high rendering performance when GPUs are unavailable or are too limiting.
- Intel® Open Image Denoise, an open source, high-performance denoising library for ray tracing.
- Intel® Open Volume Kernel Library (Intel® Open VKL), a collection of computation kernels that improve the performance of volume rendering applications.
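At the heart of the ray tracing kernels these libraries optimize are geometric queries such as ray-primitive intersection. The plain-Python sketch below shows the math for a ray-sphere test; the function name `ray_sphere_hit` is invented for this illustration, and libraries like Intel® Embree implement such kernels in heavily vectorized native code rather than anything resembling this:

```python
# Minimal ray-sphere intersection: the kind of inner-loop geometric query
# that ray tracing kernels (e.g., in Intel Embree) optimize heavily.
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance t to the nearest hit along the ray, or None."""
    # Solve |o + t*d - c|^2 = r^2, a quadratic a*t^2 + b*t + c = 0.
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)  # nearer of the two roots
    return t if t >= 0 else None  # hits behind the origin don't count
```

A renderer runs queries like this (or its triangle-mesh equivalent) billions of times per frame, which is why moving them into optimized, vectorized kernels dominates overall rendering performance.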
Open Source Software
As part of our commitment to supporting open source software, Intel is a member of the OpenHPC* community. OpenHPC* is open source HPC platform software for Intel® architecture-based systems. It simplifies the installation and management of HPC systems by reducing the integration and validation needed to run the software stack.