Data scientists and AI developers need to explore and experiment with extremely large datasets as they converge on novel solutions for deployment in production applications. Exploration and experimentation mean a lot of iteration, which is feasible only with fast turnaround times. While model training performance is important, the entire end-to-end process must be addressed: loading, exploring, cleaning, and adding features to large datasets can be so time-consuming that it limits exploration and experimentation. And once a model is deployed, responsiveness during inference is often crucial.

Many solutions for large-scale AI development require installing new packages and rewriting code to use their APIs. For instance, data scientists and AI developers often use pandas to load data for machine learning applications. But once a dataset grows to about 100 MB or larger, loading and cleaning it slows down considerably because pandas runs on a single core.

As a result, developers must change their workflow to a different data loading and preprocessing solution, such as Apache Spark*, which requires learning the Spark API and overhauling existing code to integrate it. This is usually an inopportune time to make such changes, and it is not a good use of data scientists' and AI developers' skills.

Intel has been working to improve the performance of popular Python* libraries while maintaining Python's usability by implementing the key underlying algorithms in native code using oneAPI performance libraries. This delivers concurrency at multiple levels (vectorization, multithreading, and multiprocessing) with minimal impact on existing code. For example:

  • Intel® Distribution of Modin* scales pandas DataFrames to multiple cores with a single line of code change.
  • Intel® Optimization for PyTorch* or Intel® Optimization for TensorFlow* accelerates deep learning training and inference.
  • Intel® Extension for Scikit-learn* or XGBoost optimized for Intel® architecture speeds up machine learning algorithms with no code changes.
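The single-line change mentioned in the first bullet is the Modin drop-in import. A minimal sketch (the DataFrame contents here are illustrative; the `try`/`except` fallback is added so the same code also runs where Modin is not installed, since the APIs match):

```python
# Before: single-core pandas
#   import pandas as pd
# After: Modin drop-in replacement -- the one-line change.
try:
    import modin.pandas as pd
except ImportError:
    import pandas as pd  # fallback: identical API, single core

# Everything below is unchanged pandas-style code.
df = pd.DataFrame({
    "fare_amount": [12.5, 8.0, 30.2],
    "trip_distance": [2.1, 1.0, 7.8],
})
print(round(df["fare_amount"].mean(), 2))
```

With Modin installed, the same DataFrame operations are distributed across all available cores; no other lines change.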

In this session, see how to accelerate your end-to-end workflow with these technologies through a demonstration using the full New York City taxi fare dataset.
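The no-code-change scikit-learn acceleration mentioned above works by patching scikit-learn before estimators are imported. A minimal sketch (the KMeans example data is illustrative; the `try`/`except` lets the same script run on plain scikit-learn when the extension is not installed):

```python
# Enable Intel Extension for Scikit-learn, if available.
# patch_sklearn() must run before the sklearn estimators are imported.
try:
    from sklearnex import patch_sklearn
    patch_sklearn()  # subsequent sklearn imports use optimized kernels
except ImportError:
    pass  # plain scikit-learn; the code below is identical either way

import numpy as np
from sklearn.cluster import KMeans

# The modeling code itself needs no changes.
X = np.random.rand(1000, 4)
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(model.cluster_centers_.shape)
```

The estimator code is untouched; only the optional patch call changes which implementation runs underneath.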


Speakers
  • Rachel Oberman, technical consulting engineer, Intel
  • Todd Tomashek, machine learning engineer, Intel
  • Albert DeFusco, principal data scientist, Anaconda*

Get the Software

Get these Intel®-optimized versions of your Python libraries as part of the Intel® AI Analytics Toolkit, or download them as stand-alone components.

Additional Resources

AI Tools, Libraries, and Framework Optimizations



Intel® AI Analytics Toolkit

Accelerate end-to-end machine learning and data science pipelines with optimized deep learning frameworks and high-performing Python* libraries.
