KNeighborsClassifier

When building a pipeline, we used the KNeighborsClassifier model as its final estimator.
This chapter briefly explains how it works.
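
As a quick reminder, here is a minimal sketch of such a pipeline with KNeighborsClassifier as the final estimator. The toy data and the StandardScaler step are illustrative stand-ins, not the exact setup from the previous chapter:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier

# Toy data standing in for the preprocessed features (hypothetical values)
X = np.array([[4200., 18.1], [3800., 17.0], [5100., 15.2], [4950., 14.8]])
y = np.array([0, 0, 1, 1])

# Preprocessing step(s) first, KNeighborsClassifier as the final estimator
pipe = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=3))
pipe.fit(X, y)
print(pipe.predict([[4000., 17.5]]))
```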

Note

How models work is not the main topic of this course, so it is OK if something seems unclear to you. These details are explained more thoroughly in other courses, such as Linear Regression with Python or Classification with Python.

k-Nearest Neighbors

k-Nearest Neighbors is an ML algorithm that makes predictions by finding the most similar instances in the training set.
KNeighborsClassifier is the scikit-learn implementation of the k-Nearest Neighbors algorithm for classification tasks. Here is how it makes a prediction:

  1. For a new instance, find the k nearest instances (based on features) in the training set. Those k instances are called neighbors;
  2. Find the most frequent class among the k neighbors. That class becomes the prediction for the new instance.
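
To make these two steps concrete, here is a minimal sketch of the procedure in plain NumPy on hypothetical toy data. A real model would of course use KNeighborsClassifier instead:

```python
import numpy as np
from collections import Counter

# Toy training set: two features per instance, two classes (hypothetical data)
X_train = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 9.0], [1.2, 2.2]])
y_train = np.array([0, 0, 1, 1, 0])

def predict_one(x_new, k=3):
    # Step 1: find the k nearest training instances (Euclidean distance)
    distances = np.linalg.norm(X_train - x_new, axis=1)
    neighbors = np.argsort(distances)[:k]
    # Step 2: the most frequent class among the neighbors is the prediction
    return Counter(y_train[neighbors]).most_common(1)[0][0]

print(predict_one(np.array([1.1, 2.1])))  # prints 0: closest to the class-0 cluster
```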

k is the number of neighbors to consider. You specify it when initializing the model; by default, k is set to 5.
With different values of k, the model yields different predictions.
k is called a hyperparameter: a parameter you set in advance that changes the model's predictions.
You can try different k values and find the optimal one for your task.

This process of adjusting hyperparameters is known as hyperparameter tuning, and it can help you optimize your model's performance.
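
As a rough sketch of such tuning, you can loop over candidate k values and compare scores (hypothetical toy data; a proper evaluation setup is covered in the next chapter):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical toy data; in practice X and y come from your dataset
X = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 9.0], [1.2, 2.2], [5.5, 8.5]])
y = np.array([0, 0, 1, 1, 0, 1])

# Try several values of the n_neighbors hyperparameter and compare
for k in (1, 3, 5):
    model = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    print(k, model.score(X, y))  # training-set accuracy; see the caveat at the end
```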

KNeighborsClassifier during .fit()

Unlike most ML models, KNeighborsClassifier does nothing during training except store the training set.
But even though training takes no time, calling .fit(X, y) is still mandatory: without it, the model has no training set to remember.
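
A quick way to see this: calling .predict() on an unfitted KNeighborsClassifier raises an error, while .fit() returns almost instantly because it only stores the data. A minimal sketch with hypothetical toy data:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.exceptions import NotFittedError

model = KNeighborsClassifier(n_neighbors=1)
try:
    model.predict([[1.0, 2.0]])  # no training set stored yet
except NotFittedError as e:
    print('Not fitted:', e)

# .fit() only stores X and y, so it returns almost instantly
X, y = np.array([[1.0, 2.0], [5.0, 8.0]]), np.array([0, 1])
model.fit(X, y)
print(model.predict([[1.1, 2.1]]))  # now prediction works: [0]
```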

KNeighborsClassifier during .predict()

During prediction, the KNeighborsClassifier searches the stored training set for the k nearest neighbors of each new instance.
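
.predict() performs this search internally, but you can inspect the neighbors yourself with the .kneighbors() method, which returns the distances and indices of the nearest training instances. A small sketch on toy data:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 9.0]])
y = np.array([0, 0, 1, 1])

model = KNeighborsClassifier(n_neighbors=2).fit(X, y)
distances, indices = model.kneighbors([[1.2, 2.1]])
print(indices)    # indices of the 2 nearest training instances
print(distances)  # their distances to the new instance
```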

Note

In the videos above, only two features, 'body_mass_g' and 'culmen_depth_mm', are used, because higher-dimensional plots are hard to visualize.
Additional features would likely help the model separate the green and red data points better, so the KNeighborsClassifier would make better predictions.

KNeighborsClassifier coding example

Let's build a KNeighborsClassifier, train it, and get its accuracy using the .score() method.
For the sake of simplicity, the data in the .csv file is already fully preprocessed.
To specify k, use the n_neighbors argument of the KNeighborsClassifier constructor. We will try the values 5 (the default) and 1, as sketched below.
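
Here is a sketch of that workflow. The file name and target column are placeholders, since the exact preprocessed dataset is not shown here:

```python
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier

# Placeholder file and column names; adjust to your preprocessed dataset
df = pd.read_csv('penguins_preprocessed.csv')
X, y = df.drop('species', axis=1), df['species']

# k=5 is the default value of n_neighbors
knn5 = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(knn5.score(X, y))  # accuracy on the training set

# k=1: each training instance is its own nearest neighbor
knn1 = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(knn1.score(X, y))  # perfect training accuracy, 1.0
```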

We get pretty good accuracy! For the 1-nearest neighbor, the accuracy is even perfect: with k=1, each training instance's nearest neighbor is itself, so every prediction matches its true label.
Should we trust these scores? No. We evaluated the model on the training set.
The model was trained on these exact instances, so naturally, it predicts the instances it has already seen well.
To understand how well the model really performs, we should evaluate it on instances it has never seen.
Jump into the next chapter to see how!
