Intel® SoC Watch Analyze system power and thermal behavior on Intel® platforms with this command-line tool.
Intel® System Debugger Speed up system bring-up and validation of system hardware and software using in-depth debug and trace of BIOS/UEFI, firmware, device drivers, operating system kernels, and more.
Intel® C++ Compiler Classic Use this standards-based C++ compiler with support for OpenMP* to take advantage of more cores and built-in technologies in platforms based on Intel® Xeon® Scalable processors and Intel® Core™ processors.
Intel® FPGA Add-On for oneAPI Base Toolkit Program these reconfigurable hardware accelerators to speed up specialized, data-centric workloads. This add-on requires installation of the Intel® oneAPI Base Toolkit.
Intel® Extension for PyTorch* This distribution provides simple front-end Python* APIs and utilities to apply performance optimizations, such as graph and operator optimizations, with minimal code changes.
Intel® Extension for Scikit-learn* Seamlessly speed up your scikit-learn* applications on Intel® CPUs and GPUs in single-node and multi-node configurations. This extension dynamically patches scikit-learn estimators to use the Intel® oneAPI Data Analytics Library (oneDAL) as the underlying solver, accelerating your machine learning algorithms. The toolkit also includes stock scikit-learn to provide a comprehensive Python environment with all required packages. The extension supports the last four scikit-learn releases and works alongside your existing packages.
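The dynamic patching is a single call made before the estimators are imported; the rest of the script is unchanged scikit-learn code. A minimal sketch, assuming the `scikit-learn-intelex` package (which provides the `sklearnex` module) is installed:

```python
# Patch BEFORE importing scikit-learn estimators, so the imports
# resolve to the oneDAL-backed implementations.
from sklearnex import patch_sklearn
patch_sklearn()

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Ordinary scikit-learn code from here on; no API changes.
X, _ = make_blobs(n_samples=1000, centers=3, random_state=0)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(len(set(labels)))  # 3
```

Calling `sklearnex.unpatch_sklearn()` restores the stock implementations, which is how the drop-in design keeps existing packages usable side by side.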
Intel® Neural Compressor Use this open source Python* library to provide a unified, low-precision inference interface across multiple Intel-optimized deep learning frameworks.
Intel® Optimization for PyTorch* In collaboration with Facebook*, this popular deep learning framework now directly incorporates many optimizations from Intel to provide superior performance on Intel® architecture. This package provides binaries of the latest PyTorch* release for CPU and adds extensions and bindings from Intel, including the oneAPI Collective Communications Library (oneCCL), for efficient distributed training.
Intel® Optimization for TensorFlow* In collaboration with Google*, TensorFlow* has been directly optimized for Intel® architecture using the primitives of the oneAPI Deep Neural Network Library (oneDNN) to maximize performance. This package provides the latest TensorFlow* binaries compiled with CPU optimizations enabled (--config=mkl).
Intel® Optimization for XGBoost* This open source machine learning framework includes optimizations contributed by Intel. It runs on Intel® hardware through Intel® software acceleration powered by oneAPI libraries. No code changes are required.
Analytics, AI, and ML Libraries
GStreamer Video Analytics Plug-ins Use the GStreamer framework and build efficient, scalable video analytics applications with optimized plug-ins for video decode, encode, and inference.
Intel® Distribution of Modin* Scale data preprocessing across multiple nodes using this intelligent, distributed DataFrame library with an API identical to pandas. The library integrates with OmniSci* on the back end for accelerated analytics. Available via Anaconda*.
Model Zoo for Intel® Architecture Access pretrained models, sample scripts, best practices, and step-by-step tutorials for many popular open source machine learning models optimized by Intel to run on Intel® Xeon® Scalable processors.
XGBoost Optimized for Intel® Architecture In collaboration with the XGBoost community, Intel has been directly upstreaming many optimizations to provide superior performance on Intel® CPUs. This well-known machine learning package for gradient-boosted decision trees now includes seamless, drop-in acceleration for Intel® architectures that significantly speeds up model training and improves prediction accuracy.
Intel® Open Image Denoise Improve image quality with machine learning algorithms that selectively filter visual noise. This independent component can be used for noise reduction on 3D rendered images, with or without Intel® Embree.
Intel® OpenSWR Use a software rasterizer that's compatible with OpenGL* to work with datasets when GPU hardware isn't available or is limiting.
Note: Intel® OpenSWR is available as part of the Mesa open source OpenGL* community project at OpenSWR.
Intel® OSPRay Use this ray-tracing engine to develop interactive, high-fidelity visualization applications.
Intel® OSPRay for Hydra* Connect the Intel® oneAPI Rendering Toolkit libraries in your application to the universal scene description (USD) Hydra* rendering subsystem by using the Intel® OSPRay for Hydra* plug-in. This plug-in enables fast preview exploration for compositing and animation, as well as high-quality, physically based photorealistic rendering of USD content.
Intel® OSPRay Studio Perform high-fidelity, ray traced, interactive, and real-time rendering through a graphical user interface with this scene graph application built on Intel® OSPRay.
Eclipse* IDE Plug-Ins Simplify application development for systems and IoT edge devices with these plug-ins for the standards-based Eclipse* IDE. Requires a separate download.
IoT Connection Tools Connect sensors to devices and devices to the cloud with this collection of abstraction libraries and tools.
Linux* Kernel Build Tools Quickly create, import, and customize Linux* kernels based on the Yocto Project* for edge devices and systems, using specialized platform project wizards integrated with Eclipse*.
Intel Contributions to the Industry Specification
The following components have variants that have been documented and contributed to the industry specification.