Classification with Python
Challenge: Comparing Models

Now we will compare the models we have learned on a single dataset: a breast cancer dataset. The target is the 'diagnosis' column (1 – malignant, 0 – benign).

We will apply GridSearchCV to each model to find the best parameters. In this task, we will use the recall metric for scoring, since False Negatives (missed malignant cases) are the errors we most want to avoid. GridSearchCV will choose the parameters based on recall if you set scoring='recall'.
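For instance, a recall-scored grid search for a single model could look like the minimal sketch below (the CSV file name is an assumption about how the dataset is loaded in the task environment):

import pandas as pd
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

df = pd.read_csv('breast_cancer.csv')  # hypothetical file name
y = df['diagnosis']                    # 1 – malignant, 0 – benign
X = df.drop('diagnosis', axis=1)

# Try each n_neighbors value and score every candidate by cross-validated recall
grid_search = GridSearchCV(KNeighborsClassifier(),
                           {'n_neighbors': [3, 5, 7, 12]},
                           scoring='recall')
grid_search.fit(X, y)
print(grid_search.best_params_)  # best n_neighbors according to recall
print(grid_search.best_score_)   # best cross-validated recall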

Task

The task is to build all the models we have learned and to print the best parameters along with the best recall score for each model. You will need to fill in the parameter names in the param_grid dictionaries; a sketch with the grids filled in follows the list below.

  1. For the k-NN model, find the best n_neighbors value out of [3, 5, 7, 12].
  2. For Logistic Regression, run through the values [0.1, 1, 10] for C.
  3. For the Decision Tree, configure two parameters, max_depth and min_samples_leaf. Run through the values [2, 4, 6, 10] for max_depth and [1, 2, 4, 7] for min_samples_leaf.
  4. For the Random Forest, find the best max_depth (the maximum depth of each tree) out of [2, 4, 6] and the best number of trees (n_estimators). Try the values [20, 50, 100] for the number of trees.
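
For reference, here is a sketch of the whole comparison under the same assumptions as above (the course task may structure the code differently, for example one GridSearchCV call per model instead of a loop):

import pandas as pd
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv('breast_cancer.csv')  # hypothetical file name
y = df['diagnosis']
X = df.drop('diagnosis', axis=1)

# Each model is paired with the parameter grid described in the task
models = {
    'k-NN': (KNeighborsClassifier(),
             {'n_neighbors': [3, 5, 7, 12]}),
    'Logistic Regression': (LogisticRegression(max_iter=1000),  # max_iter raised to avoid convergence warnings
                            {'C': [0.1, 1, 10]}),
    'Decision Tree': (DecisionTreeClassifier(),
                      {'max_depth': [2, 4, 6, 10], 'min_samples_leaf': [1, 2, 4, 7]}),
    'Random Forest': (RandomForestClassifier(),
                      {'max_depth': [2, 4, 6], 'n_estimators': [20, 50, 100]}),
}

# Run a recall-scored grid search for each model and report the best result
for name, (model, param_grid) in models.items():
    grid_search = GridSearchCV(model, param_grid, scoring='recall')
    grid_search.fit(X, y)
    print(name, grid_search.best_params_, grid_search.best_score_)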

Note

The code takes some time to run (less than a minute).
