Intel® Optimization for XGBoost*
Fast Turnaround for Gradient Boosting Machine Learning
Speed Up XGBoost Algorithms on Intel® Hardware
XGBoost is an open source gradient boosting machine learning library. Because it performs well across a wide variety of data and problem types, it is often used at scales that push the limits of compute resources.
Running XGBoost on Intel CPUs takes advantage of software accelerations powered by oneAPI, with no code changes required. These optimizations deliver maximum performance from your existing hardware, enabling faster iterations during development and training and lower latency during inference.
Using this library with Intel optimizations, you can:
- Use Python* to develop, train, and deploy machine learning models that use a parallel tree gradient boosting algorithm.
- Automatically accelerate XGBoost training and inference performance on Intel CPUs.
- Further accelerate inference on Intel CPUs with advanced features not yet available in XGBoost by importing models into daal4py.
Intel® Optimization for XGBoost* is part of the end-to-end suite of Intel® AI and machine learning development tools and resources.
Download the AI Tools
Intel® Optimization for XGBoost* is available in the AI Tools Selector, which provides accelerated machine learning and data analytics pipelines with optimized deep learning frameworks and high-performing Python* libraries.
Download the Stand-Alone Version
A stand-alone version of Intel® Optimization for XGBoost* is available. You can install it with a package manager or build it from source.
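As a sketch, the stand-alone library can typically be installed with a package manager; the channel and package names below assume the standard public distributions:

```shell
# Install XGBoost with pip (the Intel CPU optimizations ship in the
# standard upstream package):
pip install xgboost

# Or install from the conda-forge channel:
conda install -c conda-forge xgboost
```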
Develop in the Cloud
Build and optimize oneAPI multiarchitecture applications using the latest Intel-optimized oneAPI and AI tools, and test your workloads across Intel® CPUs and GPUs. No hardware installations, software downloads, or configuration necessary.
Features
XGBoost Machine Learning Library
- Implement machine learning algorithms such as classification, regression, and ranking using gradient boosting.
- Perform parallel tree boosting to solve a wide variety of machine learning problems efficiently and accurately.
- Run single-node or distributed training.
Intel® Optimizations
- Speed up XGBoost histogram tree-building with automatic memory prefetching.
- Parallelize the XGBoost split function by automatically partitioning observations across multiple processing threads.
- Reduce memory consumption when building histograms.
daal4py Inference
- Further accelerate XGBoost model inference with daal4py, which uses the latest Intel® oneAPI Data Analytics Library (oneDAL) optimizations that are not yet ported to XGBoost.
- Get started by importing pretrained or custom-trained XGBoost and LightGBM models with a few lines of code.
- Reduce inference memory consumption and use L1 and L2 caches more efficiently.
- Take advantage of the latest instruction set features from Intel.
Demos
Optimize Utility Maintenance Prediction with the AI Tools
Using the Predictive Asset Maintenance Reference Kit as an example, learn how to optimize the training cycles, prediction throughput, and accuracy of your machine learning workflow.
Python* Data Science at Scale
XGBoost optimizations for Intel® architecture are part of an accelerated end-to-end machine learning pipeline, demonstrated using the New York City taxi dataset.
Accelerate XGBoost Gradient-Boosting Training and Inference
Learn how XGBoost optimizations for Intel architecture and the AI Tools help accelerate complex gradient boosting with large datasets.
News
Intel Releases Open Source AI Reference Kits
Access full AI development solutions for applications in healthcare, manufacturing, retail, and other industries.
Optimize XGBoost Training Performance
Training performance of XGBoost on a handwritten-letters dataset improves by up to 16x on Amazon Web Services (AWS)*.
Improve Performance of XGBoost and LightGBM Inference
Speed up XGBoost prediction by up to 36x by converting models to oneDAL.
Specifications
Processors:
- All CPUs with x86 architecture
Operating systems:
- Linux*
- Windows*
Language:
- Python
Get Help
Your success is our success. Access these support resources when you need assistance.

Stay Up to Date on AI Workload Optimizations
Sign up to receive hand-curated technical articles, tutorials, developer tools, training opportunities, and more to help you accelerate and optimize your end-to-end AI and data science workflows.
Take a chance and subscribe. You can change your mind at any time.