Intel® oneAPI Video Processing Library Deliver fast, high-quality, real-time video decoding, encoding, transcoding, and processing for broadcasting, live streaming and VOD, cloud gaming, and more.
Alphabetical by Name
Intel® Advisor Design code for efficient vectorization, threading, and offloading to accelerators.
Intel® C++ Compiler Classic Use this standards-based C++ compiler with support for OpenMP* to take advantage of more cores and built-in technologies in platforms based on Intel® Xeon® Scalable processors and Intel® Core™ processors.
Intel® Cluster Checker Verify that cluster components work together seamlessly for optimal performance, improved uptime, and lower total cost of ownership.
Intel® Distribution of Modin* Scale data preprocessing across multiple nodes using this intelligent, distributed DataFrame library with an API identical to pandas. The library integrates with OmniSci on the back end for accelerated analytics. Available via Anaconda*.
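As a minimal sketch of the pandas-identical API claim (plain pandas is used here so the snippet runs anywhere; with Modin installed, the only change needed to distribute the same code is swapping the import for `import modin.pandas as pd`):

```python
# Stock pandas; replace with `import modin.pandas as pd` to scale out with Modin.
import pandas as pd

df = pd.DataFrame({"city": ["A", "B", "A"], "sales": [10, 20, 30]})
totals = df.groupby("city")["sales"].sum()
print(int(totals["A"]))  # 40
```

Because Modin mirrors the pandas API, the `DataFrame`, `groupby`, and `sum` calls above are unchanged under either import.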
Intel® Embree Improve the performance of photo-realistic rendering applications with this library of ray-tracing kernels. The kernels are optimized for the latest Intel® processors, with support from Intel® Streaming SIMD Extensions 4.2 through the latest Intel® Advanced Vector Extensions 512.
Intel® Extension for Scikit-learn* Seamlessly speed up your scikit-learn* applications on Intel® CPUs and GPUs in single-node and multinode configurations. This extension package dynamically patches scikit-learn estimators to use Intel® oneAPI Data Analytics Library (oneDAL) as the underlying solver, accelerating your machine-learning algorithms. The toolkit also includes stock scikit-learn to provide a comprehensive Python environment with all required packages. The extension supports the last four versions of scikit-learn and offers the flexibility to use it with your existing packages.
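A hedged sketch of the patching workflow described above: `patch_sklearn` is the entry point exposed by the extension's `sklearnex` package, and the guard keeps the snippet runnable on stock scikit-learn where the extension is not installed.

```python
import numpy as np

# Patch scikit-learn so supported estimators dispatch to oneDAL solvers.
# Guarded so the snippet also runs where the extension is not installed.
try:
    from sklearnex import patch_sklearn
    patch_sklearn()
except ImportError:
    pass  # stock scikit-learn is used unchanged

from sklearn.cluster import KMeans  # import estimators AFTER patching

rng = np.random.default_rng(0)
X = rng.random((1000, 3))
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(labels.shape)  # (1000,)
```

Because patching rebinds the estimator classes, it must happen before the scikit-learn imports that should be accelerated.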
Intel® FPGA Add-On for oneAPI Base Toolkit Program these reconfigurable hardware accelerators to speed up specialized, data-centric workloads. This add-on requires installation of the Intel® oneAPI Base Toolkit.
Intel® Inspector Locate and debug threading, memory, and persistent memory errors early in the design cycle to avoid costly errors later.
Intel® MPI Library Use this standards-based MPI implementation to deliver flexible, efficient, scalable cluster messaging on Intel® architecture.
Intel® Neural Compressor Use this open source Python* library to provide a unified, low-precision inference interface across multiple Intel-optimized deep-learning frameworks.
Intel® Open Image Denoise Improve image quality with machine learning algorithms that selectively filter visual noise. This independent component can be used for noise reduction on 3D rendered images, with or without Intel Embree.
Intel® OpenSWR Use this OpenGL-compatible software rasterizer to work with datasets when GPU hardware isn’t available or is limiting.
Note Intel® OpenSWR is available as part of the Mesa OpenGL open-source community project at OpenSWR.
Intel® Optimization for PyTorch* Developed in collaboration with Facebook*, this package combines the popular PyTorch* deep-learning framework with many Intel optimizations to provide superior performance on Intel® architecture. It provides the binary version of the latest PyTorch release for CPUs and adds extensions and bindings from Intel, including the oneAPI Collective Communications Library (oneCCL), for efficient distributed training.
Intel® Optimization for TensorFlow* In collaboration with Google*, TensorFlow* has been directly optimized for Intel® architecture using the primitives of the oneAPI Deep Neural Network Library (oneDNN) to maximize performance. This package provides the latest TensorFlow binary compiled with CPU-enabled settings (--config=mkl).
Intel® OSPRay Use this ray-tracing engine to develop interactive, high-fidelity visualization applications.
Intel® OSPRay for Hydra* Connect the Intel oneAPI Rendering Toolkit libraries in your application to the universal scene description (USD) Hydra rendering subsystem by using the Intel® OSPRay for Hydra* plug-in. This plug-in enables fast preview exploration for compositing and animation, as well as high-quality, physically based photorealistic rendering of USD content.
Intel® OSPRay Studio Perform high-fidelity, ray-traced, interactive, and real-time rendering through a graphical user interface with this new scene graph application built on Intel OSPRay.
Intel® SoC Watch Analyze system power and thermal behavior on Intel® platforms with this command line tool.
Intel® System Debugger Speed up system bring-up and validation of system hardware and software using in-depth debug and trace of BIOS/UEFI, firmware, device drivers, operating system kernels, and more.
Eclipse* IDE Plug-Ins Simplify application development for systems and IoT edge devices using these plug-ins for the standards-based Eclipse* IDE. Requires a separate download.
GStreamer Video Analytics Plug-ins Use the GStreamer framework and build efficient, scalable video analytics applications with optimized plug-ins for video decode, encode, and inference.
IoT Connection Tools Connect sensors to devices and devices to the cloud with this collection of abstraction libraries and tools.
Linux* Kernel Build Tools Using specialized Eclipse-integrated platform project wizards, quickly create, import, and customize Linux kernels based on the Yocto Project* for edge devices and systems.
Model Zoo for Intel® Architecture Access pretrained models, sample scripts, best practices, and step-by-step tutorials for many popular open-source, machine-learning models optimized by Intel to run on Intel® Xeon® Scalable processors.
XGBoost Optimized for Intel® Architecture In collaboration with the XGBoost community, Intel has been directly upstreaming many optimizations to provide superior performance on Intel CPUs. This well-known machine-learning package for gradient-boosted decision trees now includes seamless, drop-in acceleration for Intel architectures to significantly speed up model training and improve prediction accuracy.