Tune Hyperparameters with RandomizedSearchCV

RandomizedSearchCV works in the same way as GridSearchCV, but instead of trying all the combinations, it evaluates only a randomly sampled subset of them.
For example, the param_grid below contains 100 combinations.
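One possible grid of that size, assuming a KNeighborsClassifier is the estimator being tuned (the model and the specific values are illustrative, not the lesson's exact code):

```python
# A hypothetical param_grid with 25 * 2 * 2 = 100 combinations
param_grid = {
    'n_neighbors': list(range(1, 26)),   # 25 values
    'weights': ['uniform', 'distance'],  # 2 values
    'p': [1, 2],                         # 2 values
}
```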

GridSearchCV would try all of them, which is time-consuming.
With RandomizedSearchCV, you can evaluate only a randomly chosen subset of, say, 20 combinations.
This usually yields a slightly worse result, but it is much faster.
You control the number of combinations to test with the n_iter argument (set to 10 by default). Apart from that, working with it is the same as with GridSearchCV.
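A minimal sketch of constructing such a search, reusing the illustrative grid above (again an assumption, not the lesson's exact setup):

```python
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neighbors import KNeighborsClassifier

# Illustrative grid with 100 combinations (25 * 2 * 2)
param_grid = {
    'n_neighbors': list(range(1, 26)),
    'weights': ['uniform', 'distance'],
    'p': [1, 2],
}

# n_iter=20 randomly samples 20 of the 100 combinations (the default is 10)
randomized = RandomizedSearchCV(KNeighborsClassifier(), param_grid, n_iter=20)
```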

Task

Your task is to build both GridSearchCV and RandomizedSearchCV with 20 combinations and compare their results (a minimal sketch follows the note below).

  1. Initialize the RandomizedSearchCV object. Pass the parameter grid and set the number of combinations to 20.
  2. Initialize the GridSearchCV object.
  3. Train both GridSearchCV and RandomizedSearchCV objects.
  4. Print the best estimator of grid.
  5. Print the best score of randomized.

Note

You can try running the code several times and look at the difference between the two scores. Sometimes the scores are identical, because the best parameter combination happens to be among those sampled by RandomizedSearchCV.
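A minimal, self-contained sketch of the task. The iris dataset and KNeighborsClassifier stand in for the course's actual data and model, which are not shown on this page:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Illustrative grid: 25 * 2 * 2 = 100 combinations
param_grid = {
    'n_neighbors': list(range(1, 26)),
    'weights': ['uniform', 'distance'],
    'p': [1, 2],
}

# 1-2. Initialize both searches; randomized samples only 20 combinations
grid = GridSearchCV(KNeighborsClassifier(), param_grid)
randomized = RandomizedSearchCV(KNeighborsClassifier(), param_grid, n_iter=20)

# 3. Train both (each fit cross-validates every tried combination)
grid.fit(X, y)
randomized.fit(X, y)

# 4. Best estimator found by the exhaustive search
print(grid.best_estimator_)

# 5. Best cross-validated score of the randomized search
# (compare it with grid.best_score_; it is usually equal or slightly lower)
print(randomized.best_score_)
```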
