Challenge: Compare Convergence Speed
Task
You will simulate gradient descent on a simple linear regression problem to compare how feature scaling affects convergence speed.
Steps:
- Generate synthetic data X (one feature) and y using the relation y = 3 * X + noise.
- Implement a simple gradient descent function that minimizes the MSE loss:

```python
import numpy as np

def gradient_descent(X, y, lr, steps):
    w = 0.0
    history = []
    for _ in range(steps):
        # Gradient of the MSE loss mean((y - w * X)**2) with respect to w
        grad = -2 * np.mean(X * (y - w * X))
        w -= lr * grad
        history.append(w)
    return np.array(history)
```

- Run gradient descent twice:
  - on the original X,
  - and on the standardized X_scaled = (X - mean) / std.
- Plot or print the loss decrease for both to see that scaling accelerates convergence (a short derivation of why follows this list).
- Compute and print final weights and losses for both cases.
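Why scaling changes the convergence speed (a short derivation, not spelled out in the task itself; $\eta$ denotes the learning rate `lr` and overbars denote sample means): substituting the gradient into the update rule gives

$$w_{t+1} = w_t + 2\eta\,\overline{x(y - w_t x)} = \left(1 - 2\eta\,\overline{x^2}\right) w_t + 2\eta\,\overline{xy},$$

a linear recurrence that contracts toward $w^* = \overline{xy}/\overline{x^2}$ exactly when $|1 - 2\eta\,\overline{x^2}| < 1$, i.e. $\eta < 1/\overline{x^2}$. Standardizing makes $\overline{x^2} = 1$, so a moderate rate such as 0.1 is stable and contracts quickly, while on a large raw feature scale the same rate diverges and only a much smaller one converges.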
Solution
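A minimal end-to-end sketch of one possible solution, assuming only NumPy (it prints the loss trajectory rather than plotting it). The seed, sample size, feature scale, noise level, learning rate, and step count are illustrative assumptions, not values prescribed by the course:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: one zero-mean feature with a large scale (std = 10),
# and targets following y = 3 * X + noise.
X = rng.normal(0, 10, size=200)
y = 3 * X + rng.normal(0, 3, size=200)

def gradient_descent(X, y, lr, steps):
    w = 0.0
    history = []
    for _ in range(steps):
        # Gradient of the MSE loss mean((y - w * X)**2) with respect to w
        grad = -2 * np.mean(X * (y - w * X))
        w -= lr * grad
        history.append(w)
    return np.array(history)

def mse(X, y, w):
    return np.mean((y - w * X) ** 2)

# Standardize the feature: zero mean, unit variance.
X_scaled = (X - X.mean()) / X.std()

# Same moderate learning rate for both runs. The update contracts only if
# lr < 1 / mean(X**2): roughly 0.01 on the raw scale here, but 1.0 after
# standardization, since mean(X_scaled**2) == 1.
lr, steps = 0.1, 100
hist_raw = gradient_descent(X, y, lr, steps)
hist_scaled = gradient_descent(X_scaled, y, lr, steps)

# Print the loss every 10 steps to compare how quickly each run improves.
for step in range(0, steps, 10):
    print(f"step {step:3d}  raw loss: {mse(X, y, hist_raw[step]):.3e}  "
          f"scaled loss: {mse(X_scaled, y, hist_scaled[step]):.3e}")

# Final weights and losses for both cases.
print(f"final raw:    w = {hist_raw[-1]:.3e}, loss = {mse(X, y, hist_raw[-1]):.3e}")
print(f"final scaled: w = {hist_scaled[-1]:.4f}, loss = {mse(X_scaled, y, hist_scaled[-1]):.4f}")
```

With this shared learning rate, the raw-scale run diverges while the standardized run settles near w ≈ 3 * X.std(), the slope expressed in standardized units. Note that with a single feature, standardization is effectively a learning-rate rescaling, so a carefully tuned lr could make the raw run converge as well; the practical payoff, which grows with multiple differently scaled features, is that after standardization a common default such as lr = 0.1 works without per-feature tuning.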