# Decision Boundary

Let's plot the results of Logistic Regression. Consider the following two-feature example:

Once we build a Logistic Regression model, we can plot a **decision boundary**. It shows, for each class, the region where new instances are predicted as that class. For example, here is the decision boundary of Logistic Regression applied to the data above.
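A plot like this can be produced by evaluating the trained model on a dense grid of points and coloring each region by its predicted class. Here is a minimal sketch with scikit-learn and matplotlib; the dataset is made up for illustration:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the script also runs headlessly
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression

# Hypothetical linearly separable two-feature data
X = np.array([[1.0, 1.0], [1.5, 2.0], [2.0, 1.5],
              [4.0, 4.5], [5.0, 4.0], [4.5, 5.0]])
y = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression().fit(X, y)

# Predict the class for every point on a 200x200 grid covering the plot area
xx, yy = np.meshgrid(np.linspace(0, 6, 200), np.linspace(0, 6, 200))
Z = model.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)

# Shade each class's region and overlay the training points
plt.contourf(xx, yy, Z, alpha=0.3)
plt.scatter(X[:, 0], X[:, 1], c=y)
plt.show()
```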

We can see that the line perfectly separates the two classes here. When that happens, the dataset is called **linearly separable**. However, that is not always the case. What if the dataset looks like this:

Above is a decision boundary for a slightly different dataset. Here the data is not linearly separable; hence the predictions made by Logistic Regression are imperfect.

Unfortunately, by default, Logistic Regression cannot produce more complex decision boundaries, so this is the best prediction we can get.

But remember that Logistic Regression is derived from Linear Regression, which has a solution to the problem of the model being too simple: Polynomial Regression. We can use its equation for calculating **z** to get a more complex decision boundary shape!
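For instance, with two features and a second-degree polynomial, **z** (the value fed into the sigmoid) expands from a linear combination into:

```latex
z = \theta_0 + \theta_1 x_1 + \theta_2 x_2
      + \theta_3 x_1^2 + \theta_4 x_1 x_2 + \theta_5 x_2^2
```

Since z is now quadratic in the features, the curve where z = 0 is no longer a straight line, which is exactly what gives the decision boundary a more complex shape.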

Just like in Polynomial Regression, we only need to apply a `PolynomialFeatures` transformer to our `X`. Here is the syntax:
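A minimal sketch of this, assuming a small made-up dataset `X`, `y` in place of the lesson's data:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LogisticRegression

# Hypothetical two-feature training data and binary labels
X = np.array([[0.5, 1.2], [1.5, 0.3], [2.0, 2.5], [0.2, 0.4],
              [1.8, 1.9], [0.9, 2.2], [2.4, 0.8], [0.3, 1.7]])
y = np.array([0, 0, 1, 0, 1, 1, 1, 0])

# Expand the two features into second-degree polynomial features:
# x1, x2, x1^2, x1*x2, x2^2
X_poly = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)

# Train Logistic Regression on the transformed features
model = LogisticRegression().fit(X_poly, y)
print(X_poly.shape)  # each sample now has 5 features instead of 2
```

Note that any new instance must be transformed with the same `PolynomialFeatures` settings before calling `model.predict()`.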

It uses the equation of a second-degree polynomial regression. You can get even more complex decision boundaries by increasing the degree, but, as shown in the next chapter, the model may then suffer from overfitting.
