Challenge: Compare Convergence Speed
Task
You will simulate gradient descent on a simple linear regression problem to compare how feature scaling affects convergence speed.
Steps:
- Generate synthetic data X (one feature) and y using the relation y = 3 * X + noise.
- Implement a simple gradient descent function that minimizes the MSE loss:

```python
import numpy as np

def gradient_descent(X, y, lr, steps):
    w = 0.0                     # single weight, no intercept term
    history = []
    for _ in range(steps):
        # Gradient of the MSE loss mean((y - w * X) ** 2) with respect to w
        grad = -2 * np.mean(X * (y - w * X))
        w -= lr * grad
        history.append(w)
    return np.array(history)
```

- Run gradient descent twice:
  - on the original X,
  - and on the standardized X_scaled = (X - mean) / std.
- Plot or print the loss decrease for both to see that scaling accelerates convergence (a small helper for this follows the list).
- Compute and print the final weights and losses for both cases.
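Note that gradient_descent records the weight at each step, not the loss, so the last two steps need a way to evaluate the MSE along the recorded trajectory. A minimal sketch of such a helper (the name mse is our own choice, not part of the task):

```python
import numpy as np

def mse(X, y, w):
    # MSE of the no-intercept model y_hat = w * X
    return np.mean((y - w * X) ** 2)

# Given a weight history returned by gradient_descent,
# the loss curve is: losses = [mse(X, y, w) for w in history]
```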
Solution
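The platform's reference solution is not shown here; the following is one possible sketch, reusing gradient_descent and mse from above. The data range, noise level, seed, learning rate, and step count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)              # seed assumed for reproducibility
X = rng.uniform(0, 10, size=200)            # one feature; range is an assumption
y = 3 * X + rng.normal(0, 1, size=200)      # y = 3 * X + noise

X_scaled = (X - X.mean()) / X.std()         # standardized copy of the feature

lr, steps = 0.05, 100                       # shared hyperparameters (assumed values)
hist_raw = gradient_descent(X, y, lr, steps)
hist_scaled = gradient_descent(X_scaled, y, lr, steps)

loss_raw = [mse(X, y, w) for w in hist_raw]
loss_scaled = [mse(X_scaled, y, w) for w in hist_scaled]

# Print the loss every 20 steps to compare convergence behavior.
print("raw X loss:   ", [f"{l:.3g}" for l in loss_raw[::20]])
print("scaled X loss:", [f"{l:.3g}" for l in loss_scaled[::20]])

print("raw X:    final w =", hist_raw[-1], " final loss =", loss_raw[-1])
print("scaled X: final w =", hist_scaled[-1], " final loss =", loss_scaled[-1])
```

At a learning rate this large, the raw-feature run typically diverges (the loss grows step over step) while the standardized run converges smoothly, which is the point of the comparison: scaling widens the range of learning rates at which gradient descent is stable. One caveat: because the model has no intercept, centering X shifts the minimum attainable loss, so compare the shape of the two loss curves rather than the final loss values alone.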