Bagging Classifier
We have already covered how a bagging ensemble works. Now let's apply this knowledge and build a model that performs classification using such an ensemble in Python:
First, we import the BaggingClassifier class, which contains all the tools needed to work with a bagging classifier. Then, we create an instance of this class, specifying the base model, the number of base models in the ensemble, and the n_jobs parameter.
Note
We have already mentioned that bagging models can be fitted using parallel computing. Setting n_jobs=-1 means that all available processors will be used to train the model.
Now we can use the .fit() and .predict() methods of BaggingClassifier to fit the model on the available data and make predictions:
Now, let's talk about the base models of an ensemble.
What models can be used as base models?
Any model designed for classification tasks can serve as a base model of the ensemble (logistic regression, SVM, neural networks, etc.).
It is also important to note that when the .fit() method is called, the ensemble trains each base model on a different data subsample by itself, so we don't need to specify additional parameters or manually control the learning process.
When we use the .predict() method, the final prediction is created using the hard voting technique.
If we want to create a final prediction using soft voting, we must use a base estimator that implements the .predict_proba() method and call it instead of the .predict() method. As a result, we get one row per sample from the test set, each containing the aggregated probabilities of belonging to each class (we will call this the probability matrix).
Note
If we don't specify the base model of BaggingClassifier, a Decision Tree Classifier is used by default.
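We can verify this default directly by inspecting the fitted ensemble's estimators (a quick sketch, again assuming scikit-learn ≥ 1.2):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier

X, y = make_classification(n_samples=100, n_features=10, random_state=0)

# No base model specified: the default estimator is used
bag = BaggingClassifier(n_estimators=5, random_state=0).fit(X, y)
print(type(bag.estimators_[0]).__name__)  # DecisionTreeClassifier
```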
Example of usage
Let's solve a simple classification problem using a bagging ensemble with logistic regression as the base model:
Code Description
1. We create a synthetic dataset for a binary classification problem using the make_classification() function from the sklearn.datasets module. The dataset has 1000 samples, 10 features (5 of them informative), and 1 cluster per class. This synthetic dataset gives us a simple classification problem for demonstration purposes.
2. We split the data into training and testing sets using the train_test_split() function from the sklearn.model_selection module. The training set will be used to train the bagging ensemble, and the testing set will be used to evaluate its performance.
3. We import the LogisticRegression class from the sklearn.linear_model module. A Logistic Regression classifier will serve as the weak learner within the bagging ensemble.
4. We create an instance of the BaggingClassifier class with Logistic Regression as the base model. The BaggingClassifier will train multiple instances of the base model on different subsets of the training data.
5. We fit the ensemble on the training data using the .fit() method. The BaggingClassifier trains multiple instances of the base model on different subsets of the training data, independently of one another, to create the ensemble.
6. We make predictions on the test set using the .predict() method and then calculate the F1 score using the f1_score() function from the sklearn.metrics module.