Challenge: Predicting Prices Using Polynomial Regression
For this challenge, you will build the same Polynomial Regression of degree 2 as in the previous challenge. However, this time you will need to split the dataset into a training set and a test set and calculate the RMSE for both of them. This is required to judge whether the model overfits or underfits.
As a reminder, here is the train_test_split() function you'll want to use.
And here is the mean_squared_error() function, combined with np.sqrt(), needed to calculate RMSE:
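A typical call looks like the sketch below (the toy arrays, the 0.2 test size, and the random_state value are illustrative, not the course's exact settings):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy data just to illustrate the call: 10 samples, 1 feature
X = np.arange(10).reshape(-1, 1)
y = np.arange(10)

# Hold out 20% of the samples as a test set;
# random_state makes the split reproducible
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=1
)
print(X_train.shape, X_test.shape)  # (8, 1) (2, 1)
```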
rmse = np.sqrt(mean_squared_error(y_true, y_predicted))
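For instance, with made-up true and predicted values the formula above works out like this:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# Illustrative values only
y_true = np.array([3.0, 5.0, 7.0])
y_predicted = np.array([2.0, 5.0, 9.0])

# MSE = (1 + 0 + 4) / 3 = 5/3, so RMSE = sqrt(5/3) ~= 1.29
rmse = np.sqrt(mean_squared_error(y_true, y_predicted))
print(rmse)  # 1.2909944487358056
```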
- Assign the DataFrame with the single column 'age' of df to the X variable.
- Preprocess X using the PolynomialFeatures class.
- Split the dataset using the appropriate function from sklearn.
- Build and train a model on the training set.
- Predict the targets of both the training and test sets.
- Calculate the RMSE for both the training and test sets.
- Print the summary table.
Solution
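The full pipeline might look like the sketch below. Since the course's dataset isn't reproduced here, a small synthetic df stands in for it, and the target column name 'price' is an assumption; with your actual data, only the loading step should differ.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures

# Synthetic stand-in for the course's df with 'age' and 'price' columns
rng = np.random.default_rng(0)
age = rng.uniform(1, 30, 100)
price = 50000 - 3000 * age + 60 * age ** 2 + rng.normal(0, 1000, 100)
df = pd.DataFrame({'age': age, 'price': price})

# Assign the single-column DataFrame to X
X = df[['age']]
y = df['price']

# Preprocess X with polynomial features of degree 2
X_poly = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)

# Split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X_poly, y, test_size=0.2, random_state=1
)

# Build and train a model on the training set
model = LinearRegression().fit(X_train, y_train)

# Predict the targets of both sets
pred_train = model.predict(X_train)
pred_test = model.predict(X_test)

# Calculate the RMSE for both sets
rmse_train = np.sqrt(mean_squared_error(y_train, pred_train))
rmse_test = np.sqrt(mean_squared_error(y_test, pred_test))

# Print the summary table
print(pd.DataFrame({'RMSE': [rmse_train, rmse_test]},
                   index=['train', 'test']))
```

With this synthetic data the two RMSE values land near the noise level; whether train or test comes out lower depends on the particular split.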
When you complete the task, you will notice that the test RMSE is even lower than the training RMSE. Usually, models do not show better results on unseen instances. Here, the difference is tiny and caused by chance: our dataset is relatively small, and during splitting, the test set happened to receive slightly better (easier to predict) data points.