Ensemble Learning

# What is Ensemble of Models?

After finishing this course, you will be able to solve the tasks in our workspaces block. To work with a workspace, follow the link and copy the workspace.
Pay attention!
It is strongly recommended to complete the Classification with Python course before starting this one.

Go here if needed.

An ensemble model is a machine learning technique that combines the predictions of multiple individual models (also known as base models or weak learners) to make more accurate and robust predictions. Instead of relying on a single model, an ensemble leverages the wisdom of the crowd, aggregating the diverse opinions of its constituent models to produce a final prediction.
The working principle of an ensemble model is illustrated in the following scheme:

We have weak learners `c1`, `c2`, and `c3` that produce predictions `p1`, `p2`, and `p3`. All these predictions are then aggregated by a meta-classifier, which gives us the final prediction.
The number of weak learners can be arbitrary and is chosen to achieve the best possible results with the available computing power.
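The aggregation step described above can be sketched in plain Python. Here the "meta-classifier" is simply a majority vote, and the predictions `p1`, `p2`, `p3` are hypothetical outputs of three weak learners for five samples:

```python
from collections import Counter

# Hypothetical predictions of three weak learners c1, c2, c3
# for five samples (binary classification: 0 or 1)
p1 = [0, 1, 1, 0, 1]
p2 = [0, 1, 0, 0, 1]
p3 = [1, 1, 1, 0, 0]

def majority_vote(*predictions):
    """Aggregate per-sample predictions by majority vote."""
    final = []
    for sample_preds in zip(*predictions):
        # Pick the label predicted by most of the weak learners
        final.append(Counter(sample_preds).most_common(1)[0][0])
    return final

final_prediction = majority_vote(p1, p2, p3)
print(final_prediction)  # [0, 1, 1, 0, 1]
```

With an odd number of binary classifiers, a majority vote always produces a unique answer; real ensembles replace this simple vote with a trained meta-classifier.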
Ensembles can be quite diverse, so they can easily be customized to solve specific problems:

1. We can use different models as weak learners: KNN, Logistic regression, SVM, decision trees, neural networks, etc.
2. We can use different aggregation methods to create final predictions. There are 3 main aggregation techniques:
• bagging;
• boosting;
• stacking.

The correct choice of aggregation method is the most interesting and effective tool when solving problems with ensembles of models. Aggregation methods will be discussed in more detail in the next chapters.
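To preview the three aggregation techniques, here is a minimal sketch using scikit-learn (assuming it is installed; the data, model choices, and hyperparameters are illustrative, not prescriptive):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier, GradientBoostingClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# A synthetic binary classification dataset
X, y = make_classification(n_samples=500, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Bagging: many trees trained on bootstrap samples, predictions averaged
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                            random_state=42)

# Boosting: trees trained sequentially, each correcting the previous errors
boosting = GradientBoostingClassifier(n_estimators=50, random_state=42)

# Stacking: diverse weak learners whose predictions feed a meta-classifier
stacking = StackingClassifier(
    estimators=[("knn", KNeighborsClassifier()),
                ("tree", DecisionTreeClassifier(random_state=42))],
    final_estimator=LogisticRegression(),
)

for name, model in [("bagging", bagging), ("boosting", boosting),
                    ("stacking", stacking)]:
    model.fit(X_train, y_train)
    print(name, round(model.score(X_test, y_test), 3))
```

Each of these estimators follows the usual scikit-learn `fit`/`predict` interface, so swapping in different weak learners or a different meta-classifier only requires changing the constructor arguments.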

What is a base model (weak learner) of the ensemble?

