GridSearchCV | Modeling
ML Introduction with scikit-learn
GridSearchCV

Now it's time to improve the model's performance by finding the hyperparameters that best fit the task.
This process is called hyperparameter tuning. The standard approach is to try different hyperparameter values, compute a cross-validation score for each, and choose the value that yields the best score.
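Before using GridSearchCV, it helps to see what this loop looks like when done by hand. Here is a minimal sketch of manual tuning with cross_val_score, using the built-in iris dataset for illustration (the penguins dataset from this course would work the same way):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Try each candidate value and record its mean cross-validation score
scores = {}
for k in [1, 3, 5, 7]:
    model = KNeighborsClassifier(n_neighbors=k)
    scores[k] = cross_val_score(model, X, y, cv=5).mean()

# Keep the value with the best score
best_k = max(scores, key=scores.get)
print(best_k, scores[best_k])
```

GridSearchCV automates exactly this loop, so you don't have to write it yourself.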

This process can be done using the GridSearchCV class of the sklearn.model_selection module.

When constructing a GridSearchCV object, we need to pass the model and the parameter grid (and, optionally, the scoring metric and the number of folds).
The parameter grid (param_grid) is a dictionary containing all the hyperparameter configurations we want to try.
For example, param_grid={'n_neighbors': [1, 3, 5, 7]} will try the values 1, 3, 5, and 7 as the number of neighbors.
Then we need to train it using the .fit(X, y) method.
After training, the model with the best parameters is available via the .best_estimator_ attribute, and its cross-validation score via the .best_score_ attribute.
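A grid can also contain several hyperparameters at once, in which case every combination of the listed values is tried. Here is a small sketch, again using the iris dataset for illustration and passing the optional scoring and cv arguments mentioned above:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Every combination of the listed values is tried: 4 * 2 = 8 candidates
param_grid = {
    'n_neighbors': [1, 3, 5, 7],
    'weights': ['uniform', 'distance'],
}
grid_search = GridSearchCV(
    KNeighborsClassifier(),
    param_grid,
    scoring='accuracy',  # metric used for the cross-validation score
    cv=5,                # number of folds
)
grid_search.fit(X, y)

# Best combination and its cross-validation score
print(grid_search.best_params_)
print(grid_search.best_score_)
```

Note that the number of candidates grows multiplicatively with each added hyperparameter, so large grids can take a long time to search.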

Let's try it out!

import pandas as pd
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV

df = pd.read_csv('https://codefinity-content-media.s3.eu-west-1.amazonaws.com/a65bbc96-309e-4df9-a790-a1eb8c815a1c/penguins_pipelined.csv')
# Assign X, y variables (X is already preprocessed and y is already encoded)
X, y = df.drop('species', axis=1), df['species']
# Create the param_grid and initialize the GridSearchCV object
param_grid = {'n_neighbors': [1, 3, 5, 7, 9]}
grid_search = GridSearchCV(KNeighborsClassifier(), param_grid)
# Train the GridSearchCV object. During training it finds the best parameters
grid_search.fit(X, y)
# Print the best estimator and its cross-validation score
print(grid_search.best_estimator_)
print(grid_search.best_score_)

The next step would be to take the best_estimator_ and train it on the whole dataset, since we already know which parameters performed best (among those we tried) and what its score is.
GridSearchCV does this automatically by default (controlled by its refit parameter, which is True by default).
So the object (grid_search in our example) becomes a trained model with the best parameters, and we can use it directly for predicting or evaluating.
That's why GridSearchCV provides .predict() and .score() methods.

import pandas as pd
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV

df = pd.read_csv('https://codefinity-content-media.s3.eu-west-1.amazonaws.com/a65bbc96-309e-4df9-a790-a1eb8c815a1c/penguins_pipelined.csv')
# Assign X, y variables (X is already preprocessed and y is already encoded)
X, y = df.drop('species', axis=1), df['species']
# Create the param_grid and initialize the GridSearchCV object
param_grid = {'n_neighbors': [1, 3, 5, 7, 9]}
grid_search = GridSearchCV(KNeighborsClassifier(), param_grid)
# Train the GridSearchCV object. During training it finds the best parameters
grid_search.fit(X, y)
# Evaluate grid_search on the training set
# This is only to show that the .score() method works; evaluating on the training set is not reliable
print(grid_search.score(X, y))
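The fitted object can likewise be used for prediction. A minimal sketch, using the iris dataset for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

grid_search = GridSearchCV(KNeighborsClassifier(), {'n_neighbors': [3, 5, 7]})
grid_search.fit(X, y)

# After fitting, grid_search behaves like the refit best model:
# .predict() forwards to best_estimator_.predict()
preds = grid_search.predict(X[:5])
print(preds)
```

In practice you would call .predict() on new, unseen samples rather than on rows of the training set.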

Once you have trained a GridSearchCV object, you can use it to make predictions using the .predict() method. Is that correct?

Select the correct answer


Section 4. Chapter 6