Taylor Expansion and Gradient Descent
To understand the mechanics of gradient descent, you need to see how the Taylor expansion connects the gradient of a function to the steps you take during optimization. When you have a differentiable function f(x) that you want to minimize, you can use a Taylor expansion to approximate the function near a point x. The first-order Taylor expansion of f at x, for a small change Δx, is:
f(x + Δx) ≈ f(x) + ∇f(x)ᵀΔx

Here, ∇f(x) is the gradient of f at x, and Δx is a small vector representing how you move from x. The term ∇f(x)ᵀΔx tells you how much f will increase or decrease if you move in the direction of Δx.
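To make this concrete, here is a minimal numerical check. The function f(x₁, x₂) = x₁² + 3x₂², the point x, and the step Δx below are arbitrary illustrative choices, not anything fixed by the text above; the point is only to see how close the first-order approximation comes to the true value for a small step.

import numpy as np

# Illustrative function f(x) = x1^2 + 3*x2^2 and its gradient
def f(x):
    return x[0]**2 + 3 * x[1]**2

def grad_f(x):
    return np.array([2 * x[0], 6 * x[1]])

x = np.array([1.0, 2.0])      # point of expansion
dx = np.array([0.01, -0.02])  # a small step

exact = f(x + dx)
approx = f(x) + grad_f(x) @ dx  # first-order Taylor approximation

print(f"f(x + dx):          {exact:.6f}")
print(f"first-order approx: {approx:.6f}")

For a step this small the two values differ only in the third decimal place, and the gap shrinks quadratically as Δx shrinks, which is exactly what "first-order" means.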
To decrease the value of f, you want to choose Δx so that f(x + Δx) is less than f(x). The Taylor expansion shows that the change in f is roughly ∇f(x)ᵀΔx, so you should pick Δx to make this term negative: a step whose dot product with the gradient is negative. Among all steps of a given length, the steepest decrease comes from pointing Δx exactly opposite the gradient, so you set:
Δx = −α∇f(x)

where α is a small positive number called the learning rate. Plugging this back into the Taylor expansion gives:
f(x − α∇f(x)) ≈ f(x) − α‖∇f(x)‖²

This shows that moving a small step against the gradient reduces the function value, and the amount of decrease is proportional to the squared norm of the gradient.
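Both parts of this claim are easy to check numerically: a step against the gradient should reduce f more than another step of the same length, and the decrease should be close to α‖∇f(x)‖². A minimal sketch, reusing the illustrative quadratic from above; the random comparison direction is just one arbitrary alternative.

import numpy as np

def f(x):
    return x[0]**2 + 3 * x[1]**2

def grad_f(x):
    return np.array([2 * x[0], 6 * x[1]])

x = np.array([1.0, 2.0])
alpha = 0.01
g = grad_f(x)

# Step against the gradient, and a random step of the same length
step_gd = -alpha * g
rng = np.random.default_rng(0)
d = rng.standard_normal(2)
step_rand = np.linalg.norm(step_gd) * d / np.linalg.norm(d)

print(f"decrease along -gradient:  {f(x) - f(x + step_gd):.6f}")
print(f"decrease along random dir: {f(x) - f(x + step_rand):.6f}")
print(f"predicted alpha*||g||^2:   {alpha * (g @ g):.6f}")

The gradient step comes out ahead of the same-length alternative, and its decrease lands near the predicted α‖∇f(x)‖²; the small mismatch is the second-order term the first-order approximation drops.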
The update rule for gradient descent is then:
x ← x − α∇f(x)

You repeat this step, updating x each time, until the gradient is close to zero or you reach some other stopping criterion.
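Putting the update inside a loop gives the complete procedure. Here is a minimal sketch, again on the illustrative quadratic, that stops once the gradient norm drops below a tolerance or an iteration cap is reached; the learning rate, tolerance, and cap are all arbitrary choices.

import numpy as np

def f(x):
    return x[0]**2 + 3 * x[1]**2

def grad_f(x):
    return np.array([2 * x[0], 6 * x[1]])

x = np.array([1.0, 2.0])
alpha = 0.1        # learning rate
tol = 1e-6         # stop when the gradient norm is this small
max_iters = 1000   # safety cap on the number of updates

for i in range(max_iters):
    g = grad_f(x)
    if np.linalg.norm(g) < tol:  # stopping criterion
        break
    x = x - alpha * g            # the gradient descent update

print(f"stopped after {i} iterations: x = {x}, f(x) = {f(x):.2e}")

Each pass applies x ← x − α∇f(x) once; for this quadratic the iterates contract toward the minimizer at the origin, so the loop exits well before the cap.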
Intuitive summary: the gradient points in the direction of steepest increase of the objective. By moving in the exact opposite direction, you take the fastest path downhill, ensuring that the objective decreases most rapidly for a small step. This is why gradient descent works: each update moves you toward a local minimum by following the negative gradient.
import numpy as np

# Define a simple quadratic function: f(x) = (x - 3)**2
def f(x):
    return (x - 3)**2

# Its gradient: f'(x) = 2*(x - 3)
def grad_f(x):
    return 2 * (x - 3)

# Start at x = 0
x = 0.0
learning_rate = 0.1

# Compute gradient at x
gradient = grad_f(x)

# Take one gradient descent step
x_new = x - learning_rate * gradient

print(f"Initial x: {x}")
print(f"Gradient at x: {gradient}")
print(f"Updated x after one step: {x_new}")
print(f"Function value before: {f(x)}")
print(f"Function value after: {f(x_new)}")
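Running this one-step example, the gradient at x = 0 is 2·(0 − 3) = −6, so the update moves x to 0 − 0.1·(−6) = 0.6, and the function value drops from 9.0 to 5.76. Note that the actual decrease, 3.24, is somewhat smaller than the first-order prediction α‖∇f(x)‖² = 0.1·36 = 3.6: the step is large enough that the curvature term the Taylor approximation ignores becomes visible.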