
Overfitting, Underfitting, and Generalization

When you train a machine learning model, you want it to learn the underlying patterns in your data so it can make accurate predictions on new, unseen data. However, the model's ability to generalize depends on how complex it is and how its hyperparameters are set. If your model is too complex, it might memorize the training data—including noise and outliers—rather than learning general trends. This is known as overfitting. On the other hand, if your model is too simple, it may fail to capture important relationships in the data, resulting in poor performance even on the training set. This is called underfitting. Hyperparameters such as tree depth in decision trees, the number of neighbors in k-nearest neighbors, or the regularization strength in linear models directly influence whether your model is likely to underfit, overfit, or strike the right balance for good generalization.
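The effect of regularization strength is easy to see directly. Below is a minimal sketch, not part of this lesson's code, that holds the feature set fixed (degree-15 polynomial features, with an added StandardScaler step for numerical stability) and varies only Ridge's alpha hyperparameter; the toy data and the specific alpha values are illustrative assumptions:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import Ridge

# Illustrative toy data: a noisy sine wave
rng = np.random.RandomState(1)
X = rng.uniform(0, 10, size=(150, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=150)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Same degree-15 features throughout; only alpha changes.
# A tiny alpha leaves the model free to chase noise (overfitting);
# a huge alpha shrinks it toward predicting the mean (underfitting).
for alpha in [1e-10, 1.0, 1e6]:
    model = make_pipeline(
        PolynomialFeatures(degree=15),
        StandardScaler(),
        Ridge(alpha=alpha),
    ).fit(X_train, y_train)
    print(f"alpha={alpha:g}: "
          f"train R^2={model.score(X_train, y_train):.2f}, "
          f"test R^2={model.score(X_test, y_test):.2f}")

You should typically see the training score stay high while the test score lags at the small-alpha end, and both scores collapse toward zero at the large-alpha end.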

Note
Overfitting vs. Underfitting

Overfitting occurs when a model performs well on training data but poorly on test data. Underfitting occurs when a model performs poorly on both.
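Both signatures can be read straight off train and test scores. The sketch below is an illustrative addition rather than part of the lesson; it uses decision tree depth as the controlling hyperparameter and complements the plotted example that follows:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Illustrative toy data: a noisy sine wave
rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=200)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Overfitting signature: near-perfect training score, much worse test score.
deep = DecisionTreeRegressor(max_depth=None, random_state=0).fit(X_train, y_train)
print(f"unlimited depth: train R^2={deep.score(X_train, y_train):.2f}, "
      f"test R^2={deep.score(X_test, y_test):.2f}")

# Underfitting signature: poor score on both sets.
stump = DecisionTreeRegressor(max_depth=1, random_state=0).fit(X_train, y_train)
print(f"max_depth=1:     train R^2={stump.score(X_train, y_train):.2f}, "
      f"test R^2={stump.score(X_test, y_test):.2f}")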

import numpy as np
import matplotlib.pyplot as plt

# Generate sample data
np.random.seed(0)
X = np.linspace(0, 10, 100)
y_true = np.sin(X)
y = y_true + np.random.normal(scale=0.3, size=X.shape)

# Underfitting: Linear model
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X.reshape(-1, 1), y)
y_pred_linear = lin_reg.predict(X.reshape(-1, 1))

# Good fit: Polynomial degree 3
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge
poly3 = PolynomialFeatures(degree=3)
X_poly3 = poly3.fit_transform(X.reshape(-1, 1))
ridge3 = Ridge(alpha=0.1)
ridge3.fit(X_poly3, y)
y_pred_poly3 = ridge3.predict(X_poly3)

# Overfitting: Polynomial degree 15
poly15 = PolynomialFeatures(degree=15)
X_poly15 = poly15.fit_transform(X.reshape(-1, 1))
ridge15 = Ridge(alpha=1e-10)
ridge15.fit(X_poly15, y)
y_pred_poly15 = ridge15.predict(X_poly15)

# Plotting
plt.figure(figsize=(15, 4))

plt.subplot(1, 3, 1)
plt.scatter(X, y, color='gray', label='Noisy data')
plt.plot(X, y_true, 'g--', label='True function')
plt.plot(X, y_pred_linear, 'r', label='Underfit (Linear)')
plt.title('Underfitting')
plt.legend()

plt.subplot(1, 3, 2)
plt.scatter(X, y, color='gray', label='Noisy data')
plt.plot(X, y_true, 'g--', label='True function')
plt.plot(X, y_pred_poly3, 'b', label='Good fit (Degree 3)')
plt.title('Good Fit')
plt.legend()

plt.subplot(1, 3, 3)
plt.scatter(X, y, color='gray', label='Noisy data')
plt.plot(X, y_true, 'g--', label='True function')
plt.plot(X, y_pred_poly15, 'm', label='Overfit (Degree 15)')
plt.title('Overfitting')
plt.legend()

plt.tight_layout()
plt.show()
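As a quick numeric companion to the plots, the snippet below (an illustrative addition that reuses the variables defined in the code above, not part of the original lesson) compares training errors. The most flexible model typically posts the lowest training error precisely because it also fits the noise, which is why training performance alone is a misleading model-selection criterion:

from sklearn.metrics import mean_squared_error

# Training error rewards flexibility: the degree-15 model typically
# scores best here even though the plot shows it chasing noise.
for name, y_pred in [("Linear", y_pred_linear),
                     ("Degree 3", y_pred_poly3),
                     ("Degree 15", y_pred_poly15)]:
    print(f"{name}: training MSE = {mean_squared_error(y, y_pred):.4f}")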

Which scenario describes overfitting in a machine learning model?

