Supervised Learning Essentials

Linear Regression with N Features

N-Feature Linear Regression Equation

As we have seen, adding a new feature to the linear regression model is as easy as adding it, along with a new parameter, to the model's equation. We can add many more than two features this way.

Note

Consider n to be a whole number greater than two.

$$y_{\text{pred}} = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_n x_n$$

Where:

  • $\beta_0, \beta_1, \beta_2, \dots, \beta_n$ – are the model's parameters;
  • $y_{\text{pred}}$ – is the prediction of the target;
  • $x_1$ – is the first feature value;
  • $x_2$ – is the second feature value;
  • $\dots$
  • $x_n$ – is the n-th feature value.
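The equation above is a dot product of the parameter vector with the feature values, plus the intercept. A minimal sketch of computing a single prediction (the numbers here are made up for illustration):

```python
import numpy as np

# Hypothetical model with n = 3 features.
beta = np.array([2.0, 0.5, -1.0, 3.0])   # beta_0, beta_1, beta_2, beta_3
x = np.array([4.0, 2.0, 1.5])            # x_1, x_2, x_3

# y_pred = beta_0 + beta_1*x_1 + beta_2*x_2 + beta_3*x_3
y_pred = beta[0] + beta[1:] @ x
print(y_pred)  # 2 + 0.5*4 - 1.0*2 + 3.0*1.5 = 6.5
```

The same formula works for any number of features: only the lengths of `beta` and `x` change.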

Normal Equation

The only problem is visualization. With two features, we need to build a 3D plot, but with more than two features, the plot would be more than three-dimensional — and since we live in a three-dimensional world, we cannot picture such plots. However, visualizing the result is not necessary: we only need to find the parameters for the model to work, and luckily they are relatively easy to find. The good old Normal Equation will help us:

$$\vec{\beta} = \begin{pmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_n \end{pmatrix} = (\tilde{X}^T \tilde{X})^{-1} \tilde{X}^T y_{\text{true}}$$

Where:

  • $\beta_0, \beta_1, \dots, \beta_n$ – are the model's parameters;
  • $\tilde{X}$ – is a matrix containing 1s as the first column and $X_1$ through $X_n$ as the other columns:
$$\tilde{X} = \begin{pmatrix} | & | & & | \\ 1 & X_1 & \dots & X_n \\ | & | & & | \end{pmatrix}$$
  • $X_k$ – is an array of the k-th feature's values from the training set;
  • $y_{\text{true}}$ – is an array of target values from the training set.
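The Normal Equation translates directly into a few lines of NumPy. Below is a sketch on a tiny made-up training set, where the targets are generated from known parameters so the recovered $\vec{\beta}$ can be checked:

```python
import numpy as np

# Toy training set (made-up numbers): 4 samples, 2 features.
X = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [3.0, 4.0],
              [4.0, 3.0]])
# Targets generated from y = 1 + 2*x1 + 3*x2, so we know the true parameters.
y_true = 1 + 2 * X[:, 0] + 3 * X[:, 1]

# Build X-tilde: a column of 1s followed by the feature columns.
X_tilde = np.column_stack([np.ones(len(X)), X])

# Normal Equation: beta = (X~^T X~)^{-1} X~^T y_true
beta = np.linalg.inv(X_tilde.T @ X_tilde) @ X_tilde.T @ y_true
print(beta)  # approximately [1. 2. 3.]
```

In practice, `np.linalg.solve` (or least-squares routines) is preferred over explicitly inverting $\tilde{X}^T \tilde{X}$ for numerical stability, but the formula is the same.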

X̃ Matrix

Notice that only the X̃ matrix has changed. You can think of each column of this matrix as responsible for its own β parameter. The following video explains what I mean.

The first column of 1s is needed to find the β₀ parameter.
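Why the column of 1s works: in the product $\tilde{X}\vec{\beta}$, every row multiplies $\beta_0$ by 1, which is exactly how the intercept enters each prediction. A small check with made-up numbers:

```python
import numpy as np

beta = np.array([1.0, 2.0, 3.0])          # beta_0, beta_1, beta_2 (made up)
X = np.array([[1.0, 2.0],
              [4.0, 3.0]])                # two samples, two features
X_tilde = np.column_stack([np.ones(len(X)), X])

# X~ @ beta computes beta_0*1 + beta_1*x_1 + beta_2*x_2 for each row,
# which matches adding the intercept to X @ beta[1:] directly.
preds = X_tilde @ beta
same = beta[0] + X @ beta[1:]
print(np.allclose(preds, same))  # True
```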


Section 1. Chapter 6
