Forecasting With Classical Models

Evaluating Forecast Performance


Evaluating the performance of your forecasting model is crucial to ensure that your predictions are reliable and useful. There are several statistical metrics commonly used to measure forecast accuracy. The mean absolute error (MAE), root mean squared error (RMSE), and mean absolute percentage error (MAPE) are among the most popular.

Mean absolute error (MAE) measures the average magnitude of errors in a set of forecasts, without considering their direction. It is calculated as:

MAE = \frac{1}{n}\sum_{i=1}^{n} |y_i - \hat{y}_i|

where $y_i$ is the actual value, $\hat{y}_i$ is the forecasted value, and $n$ is the number of observations. MAE is easy to interpret: it gives you the average absolute difference between your predictions and the actual values, in the same units as your data.
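To make the formula concrete, here is a minimal NumPy sketch that computes MAE directly from its definition; the values are illustrative only:

```python
import numpy as np

# Illustrative actual and forecasted values
y_true = np.array([120, 130, 125, 140, 138])
y_pred = np.array([118, 132, 123, 137, 142])

# MAE: the mean of the absolute differences
mae = np.mean(np.abs(y_true - y_pred))
print(mae)  # → 2.6
```

The absolute errors here are 2, 2, 2, 3, and 4, so their average is 2.6 units, in the same units as the data.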

Root mean squared error (RMSE) is similar to MAE but penalizes larger errors more heavily. It is calculated as:

RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}

Because it squares the errors before averaging, RMSE will be more sensitive to large errors than MAE. Like MAE, RMSE is also expressed in the same units as your data.
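The difference between the two metrics is easiest to see on made-up error vectors with the same average magnitude: MAE treats them identically, while RMSE reacts to the single large error. A small illustrative sketch:

```python
import numpy as np

# Two hypothetical error vectors with the same total absolute error
errors_even = np.array([2.0, 2.0, 2.0, 2.0, 2.0])
errors_spiky = np.array([0.0, 0.0, 0.0, 0.0, 10.0])

# MAE is identical for both...
print(np.mean(np.abs(errors_even)))   # → 2.0
print(np.mean(np.abs(errors_spiky)))  # → 2.0

# ...but RMSE penalizes the one large error
print(np.sqrt(np.mean(errors_even ** 2)))   # → 2.0
print(np.sqrt(np.mean(errors_spiky ** 2)))  # → ~4.47
```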

Mean absolute percentage error (MAPE) expresses forecast accuracy as a percentage. It is calculated as:

MAPE = \frac{100}{n}\sum_{i=1}^{n} \left| \frac{y_i - \hat{y}_i}{y_i} \right|

MAPE is useful because it provides a sense of the error relative to the actual values, making it easier to compare models across different datasets. However, MAPE can be misleading if actual values are close to zero, as it can inflate the error percentage.
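A small sketch with made-up numbers shows how a single near-zero actual value dominates MAPE:

```python
import numpy as np

def mape(y_true, y_pred):
    # MAPE as defined above, expressed as a percentage
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

# One ordinary point and one near-zero actual value (illustrative)
y_true = np.array([100.0, 0.5])
y_pred = np.array([98.0, 2.5])

# The first point contributes 2%; the second contributes 400%
print(mape(y_true, y_pred))  # → 201.0
```

Both forecasts miss by exactly 2 units, yet the near-zero actual value inflates the overall percentage error to over 200%.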

When interpreting these metrics:

  • Lower values indicate better model performance;
  • MAE and RMSE are best when you care about the scale of errors in the original units;
  • MAPE is helpful for expressing errors as a percentage, but should be used with caution if your data contains zeros or very small values.
```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Example actual values and ARIMA predictions
y_true = np.array([120, 130, 125, 140, 138])
y_pred = np.array([118, 132, 123, 137, 142])

# Compute MAE
mae = mean_absolute_error(y_true, y_pred)
print("Mean Absolute Error (MAE):", mae)

# Compute RMSE
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
print("Root Mean Squared Error (RMSE):", rmse)

# Compute MAPE
mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100
print("Mean Absolute Percentage Error (MAPE):", mape, "%")
```

Which metric would be most appropriate if you want to compare forecast errors across different datasets with different scales?

Select the correct answer

Section 1. Chapter 9
