Speed Up Machine Learning Training on CPUs with AI Tools
Overview
Speed up tasks such as data preprocessing at scale, training, and inference on CPUs.
This session explores how two Intel® tools can be used as drop-in replacements for the stock pandas and scikit-learn* libraries to significantly speed up key tasks in machine learning model development and deployment on CPUs instead of GPUs.
- Intel® Distribution of Modin* enables data scientists to scale to distributed DataFrame processing without changing API code (see the sketch after this list).
- Intel® Extension for Scikit-learn* seamlessly accelerates scikit-learn applications for Intel CPUs and GPUs across single- and multi-node configurations.
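As a rough illustration of the drop-in usage (a minimal sketch, not code from the session; the file name data.csv and the column name category are placeholder assumptions):

```python
# Swap the pandas import for Modin; the rest of the pandas code stays the same.
# Intel Distribution of Modin distributes the work across available CPU cores.
import modin.pandas as pd  # instead of: import pandas as pd

df = pd.read_csv("data.csv")              # placeholder dataset
summary = df.groupby("category").mean()   # familiar pandas API, parallel execution
print(summary.head())
```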
This session covers:
- An overview of the tools, Intel Distribution of Modin and Intel Extension for Scikit-learn, and what you can do with them
- How to use Intel Distribution of Modin as a drop-in replacement for stock pandas
- How to use Intel Extension for Scikit-learn as a drop-in replacement for stock scikit-learn libraries (see the sketch after this list)
- A live demo showcasing the performance improvements
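The patching step for the scikit-learn extension looks roughly like this (a minimal sketch, not the session demo; the KMeans workload and synthetic data are illustrative assumptions):

```python
# Patch scikit-learn before importing any estimators so the accelerated
# implementations from Intel Extension for Scikit-learn are picked up.
from sklearnex import patch_sklearn
patch_sklearn()

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic data stands in for a real workload.
X, _ = make_blobs(n_samples=100_000, centers=8, random_state=0)
model = KMeans(n_clusters=8, random_state=0).fit(X)  # runs on the accelerated backend
print(model.inertia_)
```

Calling `sklearnex.unpatch_sklearn()` restores the stock implementations.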
Skill level: Novice
Featured Software
Get the following tools as stand-alone versions from GitHub* or as part of AI Tools.
Download Code Samples
Accelerate data science and AI pipelines, from preprocessing through machine learning, and provide interoperability for efficient model development.