**XGBoost** is a powerful and popular implementation of the gradient boosting ensemble algorithm.

An important aspect in configuring XGBoost models is the choice of loss function that is minimized during the training of the model.

The **loss function** must be matched to the predictive modeling problem type, in the same way we must choose appropriate loss functions based on problem types with deep learning neural networks.

In this tutorial, you will discover how to configure loss functions for XGBoost ensemble models.

After completing this tutorial, you will know:

- Specifying loss functions used when training XGBoost ensembles is a critical step, much like neural networks.
- How to configure XGBoost loss functions for binary and multi-class classification tasks.
- How to configure XGBoost loss functions for regression predictive modeling tasks.

Let’s get started.

## Tutorial Overview

This tutorial is divided into three parts; they are:

- XGBoost and Loss Functions
- XGBoost Loss for Classification
- XGBoost Loss for Regression

## XGBoost and Loss Functions

Extreme Gradient Boosting, or XGBoost for short, is an efficient open-source implementation of the gradient boosting algorithm. As such, XGBoost is an algorithm, an open-source project, and a Python library.

It was initially developed by Tianqi Chen and was described by Chen and Carlos Guestrin in their 2016 paper titled “XGBoost: A Scalable Tree Boosting System.”

It is designed to be both computationally efficient (e.g. fast to execute) and highly effective, perhaps more effective than other open-source implementations.

XGBoost supports a range of different predictive modeling problems, most notably classification and regression.

XGBoost is trained by minimizing the loss of an objective function against a dataset. As such, the choice of loss function is a critical hyperparameter and tied directly to the type of problem being solved, much like deep learning neural networks.

The implementation allows the objective function to be specified via the “*objective*” hyperparameter, and sensible defaults are used that work for most cases.

Nevertheless, there remains some confusion among beginners as to what loss function to use when training XGBoost models.

We will take a closer look at how to configure the loss function for XGBoost in this tutorial.

Before we get started, let’s get set up.

XGBoost can be installed as a standalone library and an XGBoost model can be developed using the scikit-learn API.

The first step is to install the XGBoost library if it is not already installed. This can be achieved using the pip Python package manager on most platforms; for example:

```
sudo pip install xgboost
```

You can then confirm that the XGBoost library was installed correctly and can be used by running the following script.

```
# check xgboost version
import xgboost
print(xgboost.__version__)
```

Running the script will print the version of the XGBoost library you have installed.

Your version should be the same as or higher than the version used in this tutorial. If not, you must upgrade your version of the XGBoost library.
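For example, on most platforms the library can be upgraded using the pip command:

```
sudo pip install --upgrade xgboost
```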

It is possible that you may have problems with the latest version of the library. It is not your fault.

Sometimes, the most recent version of the library imposes additional requirements or may be less stable.

If you do have errors when trying to run the above script, I recommend downgrading to version 1.0.1 (or lower). This can be achieved by specifying the version to install to the pip command, as follows:

```
sudo pip install xgboost==1.0.1
```

If you see a warning message, you can safely ignore it for now. For example, below is an example of a warning message that you may see and can ignore:

```
FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
```

If you require explicit instructions for your development environment, see the tutorial:

The XGBoost library has its own custom API, although we will use the method via the scikit-learn wrapper classes: XGBRegressor and XGBClassifier. This will allow us to use the full suite of tools from the scikit-learn machine learning library to prepare data and evaluate models.

Both models operate the same way and take the same arguments that influence how the decision trees are created and added to the ensemble.
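As a quick illustration of this interoperability, below is a minimal sketch (not one of the tutorial’s worked examples; the synthetic dataset is illustrative) that evaluates a default *XGBClassifier* with scikit-learn’s repeated stratified k-fold cross-validation:

```
# sketch: evaluating an XGBClassifier with scikit-learn tools
from numpy import mean
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, RepeatedStratifiedKFold
from xgboost import XGBClassifier
# define an illustrative binary classification dataset
X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
# define the model with default hyperparameters
model = XGBClassifier()
# evaluate the model with repeated stratified k-fold cross-validation
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
# report the mean accuracy across all folds and repeats
print('Mean accuracy: %.3f' % mean(scores))
```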

For more on how to use the XGBoost API with scikit-learn, see the tutorial:

Next, let’s take a closer look at how to configure the loss function for XGBoost on classification problems.

## XGBoost Loss for Classification

Classification tasks involve predicting a label or probability for each possible class, given an input sample.

There are two main types of classification tasks with mutually exclusive labels: binary classification that has two class labels, and multi-class classification that has more than two class labels.

- **Binary Classification**: Classification task with two class labels.
- **Multi-Class Classification**: Classification task with more than two class labels.

For more on the different types of classification tasks, see the tutorial:

XGBoost provides loss functions for each of these problem types.

It is typical in machine learning to train a model to predict the probability of class membership for probability tasks and, if the task requires crisp class labels, to post-process the predicted probabilities (e.g. use argmax to select the label with the largest probability).

This approach is used when training deep learning neural networks for classification, and is also recommended when using XGBoost for classification.
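As a minimal sketch of this post-processing step (the synthetic dataset and default model here are purely illustrative), predicted probabilities from an *XGBClassifier* can be converted into crisp class labels with argmax:

```
# sketch: converting predicted probabilities into crisp class labels
from numpy import argmax
from sklearn.datasets import make_classification
from xgboost import XGBClassifier
# define an illustrative binary classification dataset
X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
# fit a model with default hyperparameters
model = XGBClassifier()
model.fit(X, y)
# predict the probability of each class for each sample
yhat_probs = model.predict_proba(X)
# post-process probabilities into class labels via argmax
yhat_labels = argmax(yhat_probs, axis=1)
# show the probabilities and label for the first sample
print(yhat_probs[0], yhat_labels[0])
```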

The loss function used for predicting probabilities for binary classification problems is “*binary:logistic*” and the loss function for predicting class probabilities for multi-class problems is “*multi:softprob*”.

- “*binary:logistic*”: XGBoost loss function for binary classification.
- “*multi:softprob*”: XGBoost loss function for multi-class classification.

These string values can be specified via the “*objective*” hyperparameter when configuring your XGBClassifier model.

For example, for binary classification:

```
...
# define the model for binary classification
model = XGBClassifier(objective='binary:logistic')
```

And, for multi-class classification:

```
...
# define the model for multi-class classification
model = XGBClassifier(objective='multi:softprob')
```

Importantly, if you do not specify the “*objective*” hyperparameter, the *XGBClassifier* will automatically choose one of these loss functions based on the data provided during training.

We can make this concrete with a worked example.

The example below creates a synthetic binary classification dataset, fits an *XGBClassifier* on the dataset with default hyperparameters, then prints the model objective configuration.

```
# example of automatically choosing the loss function for binary classification
from sklearn.datasets import make_classification
from xgboost import XGBClassifier
# define dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1)
# define the model
model = XGBClassifier()
# fit the model
model.fit(X, y)
# summarize the model loss function
print(model.objective)
```

Running the example fits the model on the dataset and prints the loss function configuration.

We can see that the model automatically chose a loss function for binary classification.

```
binary:logistic
```

Alternately, we can specify the objective and fit the model, confirming the loss function was used.

```
# example of manually specifying the loss function for binary classification
from sklearn.datasets import make_classification
from xgboost import XGBClassifier
# define dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1)
# define the model
model = XGBClassifier(objective='binary:logistic')
# fit the model
model.fit(X, y)
# summarize the model loss function
print(model.objective)
```

Running the example fits the model on the dataset and prints the loss function configuration.

We can see that the model used the specified loss function for binary classification.

```
binary:logistic
```

Let’s repeat this example on a dataset with more than two classes. In this case, three classes.

The complete example is listed below.

```
# example of automatically choosing the loss function for multi-class classification
from sklearn.datasets import make_classification
from xgboost import XGBClassifier
# define dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1, n_classes=3)
# define the model
model = XGBClassifier()
# fit the model
model.fit(X, y)
# summarize the model loss function
print(model.objective)
```

Running the example fits the model on the dataset and prints the loss function configuration.

We can see the model automatically chose a loss function for multi-class classification.

```
multi:softprob
```

Alternately, we can manually specify the loss function and confirm it was used to train the model.

```
# example of manually specifying the loss function for multi-class classification
from sklearn.datasets import make_classification
from xgboost import XGBClassifier
# define dataset
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1, n_classes=3)
# define the model
model = XGBClassifier(objective='multi:softprob')
# fit the model
model.fit(X, y)
# summarize the model loss function
print(model.objective)
```

Running the example fits the model on the dataset and prints the loss function configuration.

We can see that the model used the specified loss function for multi-class classification.

```
multi:softprob
```

Finally, there are other loss functions you can use for classification, including: “*binary:logitraw*” and “*binary:hinge*” for binary classification and “*multi:softmax*” for multi-class classification.
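For example, below is a minimal sketch of configuring the “*multi:softmax*” objective, which predicts class labels directly rather than class probabilities (the synthetic dataset is illustrative):

```
# sketch: configuring the multi:softmax objective
from sklearn.datasets import make_classification
from xgboost import XGBClassifier
# define an illustrative dataset with three classes
X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1, n_classes=3)
# define the model with the multi:softmax objective
model = XGBClassifier(objective='multi:softmax')
# fit the model and confirm the configured objective
model.fit(X, y)
print(model.objective)
```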

You can see a full list here:

Next, let’s take a look at XGBoost loss functions for regression.

## XGBoost Loss for Regression

Regression refers to predictive modeling problems where a numerical value is predicted given an input sample.

Although predicting a probability sounds like a regression problem (i.e. a probability is a numerical value), it is often not considered a regression type predictive modeling problem.

The XGBoost objective function used when predicting numerical values is the “*reg:squarederror*” loss function.

- “*reg:squarederror*”: Loss function for regression predictive modeling problems.

This string value can be specified via the “*objective*” hyperparameter when configuring your *XGBRegressor* model.

For example:

```
...
# define the model for regression
model = XGBRegressor(objective='reg:squarederror')
```

Importantly, if you do not specify the “*objective*” hyperparameter, the *XGBRegressor* will automatically choose this objective function for you.

We can make this concrete with a worked example.

The example below creates a synthetic regression dataset, fits an *XGBRegressor* on the dataset, then prints the model objective configuration.

```
# example of automatically choosing the loss function for regression
from sklearn.datasets import make_regression
from xgboost import XGBRegressor
# define dataset
X, y = make_regression(n_samples=1000, n_features=20, n_informative=15, noise=0.1, random_state=7)
# define the model
model = XGBRegressor()
# fit the model
model.fit(X, y)
# summarize the model loss function
print(model.objective)
```

Running the example fits the model on the dataset and prints the loss function configuration.

We can see that the model automatically chose a loss function for regression.

```
reg:squarederror
```

Alternately, we can specify the objective and fit the model, confirming the loss function was used.

```
# example of manually specifying the loss function for regression
from sklearn.datasets import make_regression
from xgboost import XGBRegressor
# define dataset
X, y = make_regression(n_samples=1000, n_features=20, n_informative=15, noise=0.1, random_state=7)
# define the model
model = XGBRegressor(objective='reg:squarederror')
# fit the model
model.fit(X, y)
# summarize the model loss function
print(model.objective)
```

Running the example fits the model on the dataset and prints the loss function configuration.

We can see that the model used the specified loss function for regression.

```
reg:squarederror
```

Finally, there are other loss functions you can use for regression, including: “*reg:squaredlogerror*“, “*reg:logistic*“, “*reg:pseudohubererror*“, “*reg:gamma*“, and “*reg:tweedie*“.
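For example, below is a minimal sketch of configuring the “*reg:pseudohubererror*” objective (note: this objective may require a recent version of the XGBoost library; the synthetic dataset is illustrative):

```
# sketch: configuring the reg:pseudohubererror objective
from sklearn.datasets import make_regression
from xgboost import XGBRegressor
# define an illustrative regression dataset
X, y = make_regression(n_samples=1000, n_features=20, n_informative=15, noise=0.1, random_state=7)
# define the model with the pseudo-Huber loss
model = XGBRegressor(objective='reg:pseudohubererror')
# fit the model and confirm the configured objective
model.fit(X, y)
print(model.objective)
```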

You can see a full list here:

## Further Reading

This section provides more resources on the topic if you are looking to go deeper.

### Tutorials

### APIs

## Summary

In this tutorial, you discovered how to configure loss functions for XGBoost ensemble models.

Specifically, you learned:

- Specifying loss functions used when training XGBoost ensembles is a critical step, much like neural networks.
- How to configure XGBoost loss functions for binary and multi-class classification tasks.
- How to configure XGBoost loss functions for regression predictive modeling tasks.

**Do you have any questions?**

Ask your questions in the comments below and I will do my best to answer.
