# Linear Regression with Python

## Metrics

When building a model, it is important to measure its performance.

We require a score associated with the model that accurately describes how well it fits the data. This score is known as a **metric**, and there are numerous metrics available.

In this chapter, we will focus on the most commonly used ones.

We will use the following notation: $y_i$ is the actual target value of the $i$-th instance, $\hat{y}_i$ is the value the model predicts for it, $N$ is the number of instances, and $\text{residual}_i = y_i - \hat{y}_i$.

We are already familiar with one metric, **SSR** (**Sum of Squared Residuals**), which we minimized to identify the optimal parameters.

Using our notation, we can express the formula for SSR as follows:

$$\mathrm{SSR} = \sum_{i=1}^{N} (y_i - \hat{y}_i)^2$$

or equally:

$$\mathrm{SSR} = \sum_{i=1}^{N} \text{residual}_i^2$$

This metric is suitable for comparing models trained on the same number of instances. However, it does not tell us how well a model performs on its own. Here is why:

Suppose you have two models trained on different training sets (shown in the image below).

You can see that the first model fits well but still has a higher SSR than the second model, which visually fits the data worse. This happens only because the first model has many more data points, so the sum is larger, even though the first model's residuals are lower on average. Taking the average of the squared residuals as a metric therefore describes a model better. That is precisely what the **Mean Squared Error** (**MSE**) is.

## MSE

$$\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N} (y_i - \hat{y}_i)^2$$

or equally:

$$\mathrm{MSE} = \frac{\mathrm{SSR}}{N}$$

To calculate the MSE metric using Python, you can use NumPy's functions.
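The original snippet is not shown here; a minimal sketch, assuming illustrative `y_true` and `y_pred` arrays (made up for demonstration), could look like this:

```python
import numpy as np

# Illustrative actual and predicted target values
y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.5, 5.0, 4.0, 8.0])

# MSE is the mean of the squared residuals
mse = np.mean((y_true - y_pred) ** 2)
print(mse)
```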

Or you can use Scikit-learn's `mean_squared_error()` method, where `y_true` is an array of actual target values and `y_pred` is an array of predicted target values for the same features.
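A minimal sketch with illustrative arrays (the data is made up for demonstration):

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# Illustrative actual and predicted target values
y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.5, 5.0, 4.0, 8.0])

mse = mean_squared_error(y_true, y_pred)
print(mse)
```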

The problem is that the error MSE shows is squared. For example, suppose the MSE of a model predicting house prices is 49 dollars². We are interested in the price, not the price squared, so we would like to take the square root of the MSE and get 7 dollars. Now we have a metric with the same unit as the predicted value. This metric is called the **Root Mean Squared Error** (**RMSE**).

## RMSE

$$\mathrm{RMSE} = \sqrt{\mathrm{MSE}} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} (y_i - \hat{y}_i)^2}$$

To calculate the RMSE metric using Python, you can use NumPy's functions.
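A minimal sketch with illustrative `y_true` and `y_pred` arrays (not from the course):

```python
import numpy as np

# Illustrative actual and predicted target values
y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.5, 5.0, 4.0, 8.0])

# RMSE is the square root of the MSE
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
print(rmse)
```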

Or you can use Scikit-learn's `mean_squared_error()` method with `squared=False`.
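A minimal sketch with illustrative data. Note that `squared=False` was deprecated in scikit-learn 1.4 in favor of a dedicated `root_mean_squared_error()` function, so the sketch falls back between the two:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# Illustrative actual and predicted target values
y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.5, 5.0, 4.0, 8.0])

try:
    # Available since scikit-learn 1.4, where `squared=False` is deprecated
    from sklearn.metrics import root_mean_squared_error
    rmse = root_mean_squared_error(y_true, y_pred)
except ImportError:
    # Older scikit-learn versions: squared=False returns the root of the MSE
    rmse = mean_squared_error(y_true, y_pred, squared=False)
print(rmse)
```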

## MAE

In the SSR, we squared the residuals to get rid of their sign. Another approach is to take the absolute values of the residuals instead of squaring them. That is the idea behind the **Mean Absolute Error** (**MAE**).

$$\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N} \left| y_i - \hat{y}_i \right|$$

or equally:

$$\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N} \left| \text{residual}_i \right|$$

It is the same as the MSE, but instead of squaring residuals, we take their absolute values.

To calculate the MAE metric using Python, you can use NumPy's functions.
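A minimal sketch with illustrative `y_true` and `y_pred` arrays (not from the course):

```python
import numpy as np

# Illustrative actual and predicted target values
y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.5, 5.0, 4.0, 8.0])

# MAE is the mean of the absolute residuals
mae = np.mean(np.abs(y_true - y_pred))
print(mae)
```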

Or you can use Scikit-learn's `mean_absolute_error()` method.
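A minimal sketch with the same kind of illustrative data:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error

# Illustrative actual and predicted target values
y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.5, 5.0, 4.0, 8.0])

mae = mean_absolute_error(y_true, y_pred)
print(mae)
```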

For choosing the parameters, we used the SSR metric because it is convenient for mathematical derivations and allowed us to obtain the Normal Equation. But for further comparison of models, you can use any other metric.

Note

For comparing models, SSR, MSE, and RMSE will always agree on which model is better and which is worse. MAE, however, can sometimes prefer a different model than SSR/MSE/RMSE, since the latter penalize large residuals much more heavily. Usually, you want to choose one metric a priori and focus on minimizing it.
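To see how MAE can disagree with the squared-error metrics, consider two hypothetical sets of residuals (made up for illustration): one model makes a single large error, the other several moderate ones.

```python
import numpy as np

# Hypothetical residuals of two models (illustrative only)
res_a = np.array([0.0, 0.0, 10.0])  # one large error
res_b = np.array([4.0, 4.0, 4.0])   # several moderate errors

mse_a, mse_b = np.mean(res_a ** 2), np.mean(res_b ** 2)
mae_a, mae_b = np.mean(np.abs(res_a)), np.mean(np.abs(res_b))

# MSE penalizes the single large error heavily and prefers the second
# set of residuals, while MAE prefers the first
print(mse_a, mse_b)  # 33.33..., 16.0
print(mae_a, mae_b)  # 3.33..., 4.0
```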

Now you can surely tell that the second model is better, since all of its metrics are lower. However, lower metrics do not always mean the model is better. The following chapter explains why.
