Predict Prices Using Polynomial Regression | Choosing The Best Model
Linear Regression with Python

Predict Prices Using Polynomial Regression

For this challenge, you will build the same polynomial regression of degree 2 as in the previous challenge. This time, however, you will split the dataset into a training set and a test set and calculate the RMSE for each. Comparing the two errors lets you judge whether the model overfits or underfits.
Here is a reminder of the train_test_split() function you'll want to use.
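A minimal usage sketch, assuming X already holds the feature matrix and y the target (the test_size value here is only an illustrative choice):

```python
from sklearn.model_selection import train_test_split

# Split the features and target into training and test parts.
# test_size=0.3 reserves 30% of the rows for the test set.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
```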

And a reminder of the mean_squared_error() function needed to calculate RMSE:
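A minimal sketch: mean_squared_error() returns the MSE, so take its square root to get the RMSE (y_true and y_pred are placeholder names here):

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# mean_squared_error() returns the MSE; its square root is the RMSE.
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
```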

Now let's move to coding!

Task

1. Assign a DataFrame containing the single column 'age' of df to the X variable.
2. Preprocess X using the PolynomialFeatures class.
3. Split the dataset using the appropriate function from sklearn.
4. Build and train a model on the training set.
5. Predict the targets of both the training and test sets.
6. Calculate the RMSE for both the training and test sets.
7. Print the summary table (see the sketch after this list).
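Below is a hedged sketch of these steps, not the course's reference solution. It assumes df is already loaded, that the target column is named 'price' (an assumption), and it interprets "the summary table" as the statsmodels OLS summary; your setup may use a different estimator or target name.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import mean_squared_error

# 1. Feature matrix with the single 'age' column; 'price' as the target is an assumption.
X = df[['age']]
y = df['price']

# 2. Degree-2 polynomial features (the default include_bias=True adds the constant column).
X_poly = PolynomialFeatures(degree=2).fit_transform(X)

# 3. Split into training and test sets.
X_train, X_test, y_train, y_test = train_test_split(X_poly, y, test_size=0.3)

# 4. Build and train a model on the training set (OLS here; an assumption).
model = sm.OLS(y_train, X_train).fit()

# 5. Predict the targets of both sets.
y_train_pred = model.predict(X_train)
y_test_pred = model.predict(X_test)

# 6. Calculate the RMSE for both sets.
rmse_train = np.sqrt(mean_squared_error(y_train, y_train_pred))
rmse_test = np.sqrt(mean_squared_error(y_test, y_test_pred))
print(f'Train RMSE: {rmse_train:.3f}')
print(f'Test RMSE:  {rmse_test:.3f}')

# 7. Print the summary table.
print(model.summary())
```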

When you complete the task, you will notice that the test RMSE is even lower than the training RMSE. Usually, models do not perform better on unseen instances; here the difference is tiny and due to chance. Our dataset is relatively small, and during the split the test set happened to receive slightly easier-to-predict data points.
