Get started with the Intel® Optimization for XGBoost using the following commands.
Starting with XGBoost v0.81, Intel has been directly upstreaming many optimizations to provide superior performance on Intel CPUs. To enable the Intel-optimized training path, set the popular hist tree method in the training parameters. The inference optimization is applied automatically with XGBoost v1.3.3 and higher.
For additional installation and tuning methods, see the Get Started Guide.
For more information, see Intel Optimization for XGBoost.
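Because the automatic inference optimization depends on the installed XGBoost version, a quick version check (a minimal sketch) can confirm that your environment meets the v1.3.3 threshold:

# Minimal sketch: verify the installed XGBoost version so that the
# automatic inference optimization (v1.3.3 and higher) is available.
import xgboost as xgb

print(xgb.__version__)  # expect 1.3.3 or higher for optimized inference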
Basic Installation Using PyPI*
pip install xgboost
Basic Installation Using Anaconda*
conda install xgboost -c conda-forge
Enable Training Optimizations from Intel Using the hist Tree Method Parameter as an Example
# Note that tree_method is set to hist
import xgboost as xgb

# Set XGBoost parameters
xgb_params = {
    'alpha': 0.9,
    'max_bin': 256,
    'scale_pos_weight': 2,
    'learning_rate': 0.1,
    'subsample': 1,
    'reg_lambda': 1,
    'min_child_weight': 0,
    'max_depth': 8,
    'max_leaves': 2**8,
    'tree_method': 'hist'
}

# Create DMatrix from the training set (in CSV format)
dtrain = xgb.DMatrix('letters_train.csv')

# Train the model
model_xgb = xgb.train(xgb_params, dtrain, 1000)
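Once the model is trained, no extra parameters are needed on the inference side; with XGBoost v1.3.3 and higher the optimized prediction path is used automatically. The sketch below continues the example above and assumes a hypothetical letters_test.csv file in the same CSV format as the training set:

# Minimal sketch: inference with the trained model. The optimized
# prediction path is used automatically in XGBoost v1.3.3 and higher.
# 'letters_test.csv' is a hypothetical file in the same format as
# the training set.
dtest = xgb.DMatrix('letters_test.csv')
predictions = model_xgb.predict(dtest)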
For more information and support, or to report any issues, see the Intel® AI Analytics Toolkit Forum.
Sign up and try this optimization for free using Intel® Developer Cloud for oneAPI.