Try to Evaluate | Multivariate Linear Regression
Explore Linear Regression Using Python


Try to Evaluate

Let’s see which model is better using the metrics we already know.
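
Both prediction arrays used below (y_test_predicted and y_test_predicted2) are assumed to come from the models fitted in the previous chapters, e.g. a single-feature model and a multivariate one. A minimal, hypothetical setup sketch (the training/test variable names here are illustrative, not part of this lesson):

from sklearn.linear_model import LinearRegression

# Hypothetical data: X_train/X_test hold several features, X_train_1/X_test_1 only one of them
model = LinearRegression().fit(X_train_1, Y_train)    # simple model
model2 = LinearRegression().fit(X_train, Y_train)     # multivariate model

y_test_predicted = model.predict(X_test_1)
y_test_predicted2 = model2.predict(X_test)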

MSE:

from sklearn.metrics import mean_squared_error

# MSE of the first model and of the second model
print(mean_squared_error(Y_test, y_test_predicted).round(2))
print(mean_squared_error(Y_test, y_test_predicted2).round(2))

MAE:

from sklearn.metrics import mean_absolute_error

# MAE of the first model and of the second model
print(mean_absolute_error(Y_test, y_test_predicted).round(2))
print(mean_absolute_error(Y_test, y_test_predicted2).round(2))

R-squared:

from sklearn.metrics import r2_score

# R-squared of the first model and of the second model
print(r2_score(Y_test, y_test_predicted).round(2))
print(r2_score(Y_test, y_test_predicted2).round(2))

As a general rule, the more features a model includes, the lower its MSE (RMSE) and MAE tend to be. However, be careful about adding too many features: some of them may be little more than random noise, which degrades the model's interpretability.
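
Since RMSE is simply the square root of MSE, it gives an error value on the same scale as the target. A brief sketch, assuming the same Y_test and prediction arrays as above:

import numpy as np
from sklearn.metrics import mean_squared_error

# RMSE = sqrt(MSE), expressed in the same units as the target variable
print(np.sqrt(mean_squared_error(Y_test, y_test_predicted)).round(2))
print(np.sqrt(mean_squared_error(Y_test, y_test_predicted2)).round(2))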

Task


Let’s evaluate the model from the previous task:

  1. [Line #30] Import mean_squared_error from sklearn.metrics.
  2. [Line #31] Compute the MSE with mean_squared_error(), passing Y_test and y_test_predicted2 as parameters; assign it to the variable MSE, rounded to two decimal places.
  3. [Line #32] Print the variable MSE.
  4. [Line #35] Import r2_score from sklearn.metrics.
  5. [Line #36] Compute R-squared, assign it to the variable r_squared, and round the result to two decimal places.
  6. [Line #37] Print the variable r_squared.

Solution
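
A possible sketch of the relevant lines (assuming Y_test and y_test_predicted2 are already defined earlier in the task script):

from sklearn.metrics import mean_squared_error

MSE = mean_squared_error(Y_test, y_test_predicted2).round(2)   # MSE rounded to two decimal places
print(MSE)

from sklearn.metrics import r2_score

r_squared = r2_score(Y_test, y_test_predicted2).round(2)       # R-squared rounded to two decimal places
print(r_squared)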

