XGBoost is an open source gradient boosting machine learning library. Because it performs well across a wide variety of data and problem types, it is often used at scale, where it can push the limits of available compute resources.
Using XGBoost on Intel CPUs takes advantage of software accelerations powered by oneAPI, without requiring any code changes. These software optimizations help you get the most performance from your existing hardware, enabling faster iteration during development and training and lower latency during inference.
Using this library with Intel optimizations, you can:
- Use Python* to develop, train, and deploy machine learning models that use a parallel tree gradient boosting algorithm.
- Automatically accelerate XGBoost training and inference performance on Intel CPUs.
- Further accelerate inference on Intel CPUs with advanced features not yet available in XGBoost by importing models into daal4py.
Intel® Optimization for XGBoost* is part of the end-to-end suite of Intel® AI and machine learning development tools and resources.