AI Tools, Libraries, and Framework Optimizations
Develop, train, and deploy your AI solutions quickly with performance- and productivity-optimized tools from Intel.
Intel provides a comprehensive portfolio of tools for all your AI needs, including data preparation, training, inference, deployment, and scaling. All tools are built on the foundation of a standards-based, unified oneAPI programming model with interoperability, openness, and extensibility as core tenets.
End-to-End Python* Data Science and AI Acceleration with Intel® AI Analytics Toolkit (AI Kit)
Accelerate end-to-end machine-learning and data-science pipelines. The toolkit includes:
- Optimized frameworks, a model repository, and a low-precision optimization tool for deep learning
- Extensions for scikit-learn* and XGBoost for machine learning
- Accelerated data analytics through the Intel® Distribution of Modin*
- Optimized core Python* libraries
- Samples for end-to-end workloads
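As a minimal sketch of how the scikit-learn* extension is typically used (assuming the scikit-learn-intelex package that ships with the AI Kit is installed), a single patch call swaps supported estimators for optimized implementations while the familiar scikit-learn API stays unchanged:

```python
import numpy as np

# patch_sklearn() replaces supported scikit-learn estimators with
# Intel-optimized implementations; call it BEFORE importing the estimators.
from sklearnex import patch_sklearn
patch_sklearn()

from sklearn.cluster import KMeans

# Synthetic data; the code is identical to stock scikit-learn usage.
X = np.random.rand(1000, 8)
labels = KMeans(n_clusters=4, random_state=0).fit_predict(X)
print(labels.shape)  # (1000,)
```

Calling `sklearnex.unpatch_sklearn()` restores the stock implementations, so the acceleration is opt-in per process.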
Write Once, Deploy Anywhere with the Intel® Distribution of OpenVINO™ Toolkit
Deploy high-performance inference applications from device to cloud. Optimize, tune, and run comprehensive AI inference using the included optimizer, runtime, and development tools. The toolkit includes:
- Repository of open-source, pretrained, and preoptimized models ready for inference
- Model optimizer for your trained model
- Inference engine to run inference and output results on multiple processors, accelerators, and environments with write-once, deploy-anywhere efficiency
Connect AI to Big Data with BigDL
Scale your AI models seamlessly to big data clusters with thousands of nodes for distributed training or inference. Built on top of Apache Spark*, TensorFlow*, PyTorch*, the Intel® Distribution of OpenVINO™ toolkit, and the open-source Ray framework, this unified analytics and AI platform has an extensible architecture that supports additional libraries and frameworks.