
Taylor Expansion and Gradient Descent

To understand the mechanics of gradient descent, you need to see how Taylor expansion connects the gradient of a function to the steps you take during optimization. When you have a differentiable function f(x) that you want to minimize, you can use the Taylor expansion to approximate the function near a point x. The first-order Taylor expansion of f at x for a small change Ξ”x is:

f(x + Ξ”x) β‰ˆ f(x) + βˆ‡f(x)α΅— Ξ”x

Here, βˆ‡f(x) is the gradient of f at x, and Ξ”x is a small vector representing how you move from x. The term βˆ‡f(x)α΅— Ξ”x tells you how much f will increase or decrease if you move in the direction of Ξ”x.
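
You can check this approximation numerically. The sketch below uses an illustrative quadratic f(x₁, xβ‚‚) = x₁² + 3xβ‚‚Β² (an assumption for this example, not a function from this chapter) and compares f(x + Ξ”x) against the first-order estimate f(x) + βˆ‡f(x)α΅— Ξ”x for a small Ξ”x:

import numpy as np

# Illustrative quadratic, chosen for this sketch (not from the chapter)
def f(x):
    return x[0]**2 + 3 * x[1]**2

def grad_f(x):
    return np.array([2 * x[0], 6 * x[1]])

x = np.array([1.0, -2.0])
dx = np.array([0.01, 0.005])  # a small step

exact = f(x + dx)
approx = f(x) + grad_f(x) @ dx  # first-order Taylor estimate

print(f"Exact value:     {exact:.6f}")
print(f"Taylor estimate: {approx:.6f}")
print(f"Error:           {abs(exact - approx):.2e}")

The error shrinks quadratically as Ξ”x gets smaller, which is exactly why the linear approximation is trustworthy for small steps.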

To decrease the value of f(x), you want to choose Ξ”x so that f(x + Ξ”x) is less than f(x). The Taylor expansion shows that the change in f is roughly βˆ‡f(x)α΅— Ξ”x, so you should pick a step whose dot product with the gradient is negative. The steepest decrease happens when Ξ”x points exactly opposite to the gradient, so you set:

Ξ”x = βˆ’Ξ± βˆ‡f(x)

where Ξ± is a small positive number called the learning rate. Plugging this back into the Taylor expansion gives:

f(x βˆ’ Ξ± βˆ‡f(x)) β‰ˆ f(x) βˆ’ Ξ± ||βˆ‡f(x)||Β²

This shows that moving a small step against the gradient reduces the function value, and, to first order, the decrease is proportional to the squared norm of the gradient.
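
A quick numerical check of this prediction, reusing the illustrative quadratic from the sketch above: for a small Ξ±, the actual value after one step should closely match f(x) βˆ’ Ξ± ||βˆ‡f(x)||Β².

import numpy as np

# Same illustrative quadratic as before (an assumption of this sketch)
def f(x):
    return x[0]**2 + 3 * x[1]**2

def grad_f(x):
    return np.array([2 * x[0], 6 * x[1]])

x = np.array([1.0, -2.0])
alpha = 0.01

g = grad_f(x)
actual = f(x - alpha * g)           # true value after one step
predicted = f(x) - alpha * (g @ g)  # first-order prediction

print(f"f(x) before:     {f(x):.6f}")
print(f"Actual after:    {actual:.6f}")
print(f"Predicted after: {predicted:.6f}")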

The update rule for gradient descent is then:

x ← x βˆ’ Ξ± βˆ‡f(x)

You repeat this step, updating x each time, until the gradient is close to zero or you reach a stopping criterion.
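
A minimal sketch of the full loop, again on the illustrative quadratic; the tolerance and iteration cap below are arbitrary choices:

import numpy as np

def f(x):
    return x[0]**2 + 3 * x[1]**2

def grad_f(x):
    return np.array([2 * x[0], 6 * x[1]])

x = np.array([1.0, -2.0])
alpha = 0.1       # learning rate
tol = 1e-6        # stop when the gradient norm falls below this
max_iters = 1000  # safety cap on the number of updates

for i in range(max_iters):
    g = grad_f(x)
    if np.linalg.norm(g) < tol:
        break
    x = x - alpha * g  # the gradient descent update

print(f"Stopped after {i} iterations at x = {x}, f(x) = {f(x):.2e}")

On this function the loop converges to the minimizer at the origin, since each update contracts both coordinates toward zero.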

Note

Intuitive summary: the gradient points in the direction of steepest increase of the objective. By moving in the exact opposite direction, you take the fastest path downhill, ensuring that the objective decreases most rapidly for a small step. This is why gradient descent works: each update moves you toward a local minimum by following the negative gradient.
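
You can make "steepest" concrete by sampling. The sketch below (same assumed quadratic; the sample size and step length are arbitrary) measures the change in f along many random unit directions and along the normalized negative gradient; no sampled direction should beat the latter:

import numpy as np

def f(x):
    return x[0]**2 + 3 * x[1]**2

def grad_f(x):
    return np.array([2 * x[0], 6 * x[1]])

rng = np.random.default_rng(0)
x = np.array([1.0, -2.0])
eps = 1e-3  # small step length

# Sample random unit directions and record the change in f along each
dirs = rng.normal(size=(1000, 2))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
changes = np.array([f(x + eps * u) - f(x) for u in dirs])

g = grad_f(x)
steepest = -g / np.linalg.norm(g)

print(f"Best sampled decrease:        {changes.min():.6f}")
print(f"Decrease along -βˆ‡f direction: {f(x + eps * steepest) - f(x):.6f}")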

import numpy as np

# Define a simple quadratic function: f(x) = (x - 3)**2
def f(x):
    return (x - 3)**2

# Its gradient: f'(x) = 2*(x - 3)
def grad_f(x):
    return 2 * (x - 3)

# Start at x = 0
x = 0.0
learning_rate = 0.1

# Compute gradient at x
gradient = grad_f(x)

# Take one gradient descent step
x_new = x - learning_rate * gradient

print(f"Initial x: {x}")
print(f"Gradient at x: {gradient}")
print(f"Updated x after one step: {x_new}")
print(f"Function value before: {f(x)}")
print(f"Function value after: {f(x_new)}")
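
With these values, the gradient at x = 0 is 2 Β· (0 βˆ’ 3) = βˆ’6, so one step moves x to 0 βˆ’ 0.1 Β· (βˆ’6) = 0.6 and the function value drops from 9.0 to 5.76. Repeating the update keeps shrinking the gap to the minimizer at x = 3.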
