Polynomial Regression

In the previous chapter, we explored quadratic regression, whose graph is a parabola. In the same way, we can add an x³ term to the equation to get Cubic Regression, which has a more complex graph. We could also add x⁴, and so on.

The Degree of a Polynomial Regression

In general, an equation of this form is called a polynomial equation, and it is the equation of Polynomial Regression. The highest power of x in the equation defines the degree of the Polynomial Regression. For example, Cubic Regression, whose highest power of x is x³, has degree three.

N-Degree Polynomial Regression

Taking n to be a whole number greater than two, we can write down the equation of an n-degree Polynomial Regression:

y_{\text{pred}} = \beta_0 + \beta_1 x + \beta_2 x^2 + \dots + \beta_n x^n

Where:

  • \beta_0, \beta_1, \beta_2, \dots, \beta_n – are the model's parameters;
  • y_{\text{pred}} – is the prediction of a target;
  • x – is the feature value;
  • n – is the Polynomial Regression's degree.
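To make this concrete, here is a minimal sketch of fitting an n-degree Polynomial Regression on one feature with NumPy's polyfit; the data values are made up purely for illustration.

```python
import numpy as np

# Hypothetical 1-D training data, for illustration only
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
y_true = np.array([1.1, 0.4, 0.6, 1.9, 4.2, 8.1, 13.5])

n = 3  # degree of the Polynomial Regression

# polyfit returns the least-squares coefficients,
# ordered from beta_n (highest power) down to beta_0
coeffs = np.polyfit(x, y_true, deg=n)

# polyval evaluates the fitted polynomial at the given points
y_pred = np.polyval(coeffs, x)
```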

Normal Equation

As always, the parameters are found using the Normal Equation:

\vec{\beta} = \begin{pmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_n \end{pmatrix} = (\tilde{X}^T \tilde{X})^{-1} \tilde{X}^T y_{\text{true}}

Where:

  • \beta_0, \beta_1, \dots, \beta_n – are the model's parameters;
  • \tilde{X} = \begin{pmatrix} | & | & | & \dots & | \\ 1 & X & X^2 & \dots & X^n \\ | & | & | & \dots & | \end{pmatrix};
  • X – is an array of feature values from the training set;
  • X^k – is the element-wise k-th power of the X array;
  • y_{\text{true}} – is an array of target values from the training set.
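For instance, here is a short NumPy sketch of the Normal Equation itself, building \tilde{X} from the element-wise powers of X (the data values are again made up for illustration):

```python
import numpy as np

# Hypothetical training data, for illustration only
X = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
y_true = np.array([1.1, 0.4, 0.6, 1.9, 4.2, 8.1, 13.5])

n = 3  # degree of the Polynomial Regression

# Build X~ with columns 1, X, X^2, ..., X^n
X_tilde = np.column_stack([X**k for k in range(n + 1)])

# Normal Equation: beta = (X~^T X~)^(-1) X~^T y_true
# (np.linalg.pinv is numerically safer if X~^T X~ is ill-conditioned)
beta = np.linalg.inv(X_tilde.T @ X_tilde) @ X_tilde.T @ y_true

# Predictions for the training points
y_pred = X_tilde @ beta
```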

Polynomial Regression with Multiple Features

To create even more complex shapes, you can use Polynomial Regression with more than one feature. But even with two features, a 2-degree Polynomial Regression already has quite a long equation:

y_{\text{pred}} = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_1^2 + \beta_4 x_1 x_2 + \beta_5 x_2^2

Most of the time, you won't need such a complex model. Simpler models (like Multiple Linear Regression) usually describe the data well enough, and they are much easier to interpret and visualize, as well as less computationally expensive.
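As a rough sketch of how quickly the number of terms grows, scikit-learn's PolynomialFeatures can list the degree-2 terms generated for two features (the sample values here are arbitrary):

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

# Three arbitrary samples with two features each
X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# Expand into all terms up to degree 2
poly = PolynomialFeatures(degree=2)
X_poly = poly.fit_transform(X)

print(poly.get_feature_names_out())
# ['1' 'x0' 'x1' 'x0^2' 'x0 x1' 'x1^2'] - six terms already
```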
