AI Tools: Data Analytics, Classical Machine Learning, Deep Learning, and Inference Optimization. Packages are available through pip or conda.
Intel provides access to the AI software through a public Anaconda* repository. If you do not have an existing conda-based Python* environment, install conda or Miniconda* before running the installation command.
Libmamba Solver Configuration
Libmamba is the solver used to install Intel packages. To ensure this solver is configured, follow these steps:
Activate the base conda environment.
source <conda PATH>/bin/activate
Update conda.
conda update -n base conda
Configure libmamba as the solver.
conda config --set solver libmamba
Verify that libmamba is set by typing:
conda config --show
We strongly recommend installing into a new conda environment. Install packages in a new environment using conda create. A list of available packages is located in the Intel repository on Anaconda.org. Not all packages in the repository are at the current release; if the repository contains an outdated version of a required component, install a newer one via the command line or GUI.
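The workflow above can be sketched as follows. The environment name and package selection are examples only, not prescribed by the selector; choose the components you need from the Intel repository listing.

```shell
# Create a fresh conda environment and install an example Intel package from
# the Intel channel on Anaconda.org. "aitools" is an illustrative environment
# name; scikit-learn-intelex is one of the available components.
conda create -n aitools -c intel -c conda-forge python=3.10 scikit-learn-intelex

# Activate the new environment before using the installed packages.
conda activate aitools
```

If a component in the repository is outdated, install a newer version explicitly, for example by pinning a version string in the conda create or conda install command.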
cnvrg.io™ is a full-service machine learning operating system. The platform enables you to manage all your AI projects from one place. Using cnvrg.io requires a separate license and download.
All AI tools are available for offline installation using a stand-alone installer. Choose this option if your target installation environments are behind a firewall, you need to manage versions, or for other purposes.
Intel® Distribution for Python* is a collection of packages, including the Python interpreter and compilers, that are optimized via Intel® oneAPI Math Kernel Library (oneMKL) and Intel® oneAPI Data Analytics Library (oneDAL) to make Python applications more efficient.
Intel® Optimization for XGBoost* is an upstreamed optimization from Intel that provides superior performance on Intel® CPUs. This well-known machine learning package for gradient-boosted decision trees now includes seamless, drop-in acceleration for Intel® architecture to significantly speed up model training and improve accuracy for better predictions.
Intel® Extension for Scikit-learn* seamlessly speeds up your scikit-learn* applications on Intel® CPUs and GPUs across single nodes and multi-nodes. This extension package dynamically patches scikit-learn estimators to use Intel® oneAPI Data Analytics Library (oneDAL) as the underlying solver, accelerating your machine learning algorithms.
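The patching mechanism described above can be sketched as follows. This is a minimal example assuming scikit-learn and NumPy are installed; when the sklearnex package is absent, the code falls back to stock scikit-learn so the script still runs.

```python
import numpy as np

# If Intel Extension for Scikit-learn is installed, patch_sklearn() swaps
# supported estimators for oneDAL-backed implementations; otherwise the
# stock scikit-learn estimators are used unchanged.
try:
    from sklearnex import patch_sklearn
    patch_sklearn()
    patched = True
except ImportError:
    patched = False

# Import estimators AFTER patching so the accelerated versions are picked up
# when available.
from sklearn.cluster import KMeans

X = np.array([[1.0, 2.0], [1.5, 1.8], [8.0, 8.0], [9.0, 9.5]])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```

Because the patching is dynamic, no estimator code changes are required; the same script runs with or without the extension.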
Intel® Extension for TensorFlow* is a heterogeneous, high-performance, deep learning extension plug-in based on a TensorFlow* PluggableDevice interface that enables access to Intel CPU and GPU devices with TensorFlow for AI workload acceleration.
Intel® Extension for PyTorch* extends PyTorch with up-to-date feature optimizations for an extra performance boost on Intel hardware.
Intel® Distribution of Modin* is a distributed DataFrame library with an identical API to pandas. The library integrates with OmniSci* in the back end for accelerated analytics.
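Because the API is identical to pandas, switching is a one-line import change. A minimal sketch, assuming pandas is installed; Modin is used when available and stock pandas otherwise:

```python
# Modin is a drop-in replacement for pandas, so only the import line changes.
# Fall back to stock pandas when modin is not installed.
try:
    import modin.pandas as pd
except ImportError:
    import pandas as pd

# Everything below is unchanged pandas-style code.
df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})
total = int(df["a"].sum())
```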
By accessing, downloading, or using this software and any required dependent software (the “Software Package”), you agree to the terms and conditions of the software license agreements for the Software Package, which may also include notices, disclaimers, or license terms for third-party software included with the Software Package. Preset containers are published under Apache License 2.0.
Docker containers are available only for preset bundles. To download additional components, choose a package manager and select your component.
Intel® Distribution of Modin* (version 0.23.0), Intel® Extension for TensorFlow* (version 2.13.0), Intel® Extension for PyTorch* (version 2.0.110), Intel® Optimization for XGBoost* (version 1.7.6), Intel® Extension for Scikit-learn* (version 2023.2), Intel® Distribution for Python* (version 3.9 and 3.10), and Intel® Neural Compressor (version 2.2) have been updated to include functional and security updates. Users should update to the latest versions as they become available.
Intel® Distribution of Modin* (version 0.23.0), Intel® Extension for TensorFlow* (version 2.9.1), Intel® Extension for PyTorch* (version 2.1), Intel® Optimization for XGBoost* (version 1.2.2), Intel® Extension for Scikit-learn* (version 2023), Intel® Distribution for Python* (version 2023.2.0), and Intel® Neural Compressor (version 2.2) have been updated to include functional and security updates. Users should update to the latest versions as they become available.
Intel® Distribution of Modin* (version 0.23.0), Intel® Extension for TensorFlow* (version 2.13.0), Intel® Extension for PyTorch* (version 2.0.110), Intel® Optimization for XGBoost* (version 1.7.6), Intel® Extension for Scikit-learn* (version 2023.2), and Intel® Neural Compressor (version 2.2) have been updated to include functional and security updates. Users should update to the latest versions as they become available.
Support
Support is available if you encounter an issue with the AI Tools Selector functionality.
Share your thoughts about the preview version of this AI tools selector, and provide suggestions for improvement. Responses are anonymous, and Intel will not contact you unless you grant permission by providing an email address.