Challenge: Compare Convergence Speed
Task
You will simulate gradient descent on a simple linear regression problem to compare convergence speed with and without feature scaling.
Steps:
- Generate synthetic data: X (one feature) and y, using the relation y = 3 * X + noise.
- Implement a simple gradient descent function that minimizes the MSE loss:

```python
def gradient_descent(X, y, lr, steps):
    w = 0.0
    history = []
    for _ in range(steps):
        grad = -2 * np.mean(X * (y - w * X))  # gradient of MSE for the model y ≈ w * X
        w -= lr * grad
        history.append(w)
    return np.array(history)
```

- Run gradient descent twice:
- on the original X,
- and on the standardized X_scaled = (X - mean) / std.
- Plot or print the loss decrease for both to see that scaling accelerates convergence.
- Compute and print the final weights and losses for both cases (a sketch of one possible solution appears under Solution below).
Solution
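A minimal sketch of one possible solution, using only NumPy. The data scale, noise level, learning rate, and step count are illustrative assumptions not fixed by the task; X is given a deliberately small spread so that, at the same learning rate, the unscaled run converges visibly more slowly than the standardized one.

```python
import numpy as np

def gradient_descent(X, y, lr, steps):
    """Minimize MSE for the no-intercept model y ≈ w * X, recording w after each step."""
    w = 0.0
    history = []
    for _ in range(steps):
        grad = -2 * np.mean(X * (y - w * X))
        w -= lr * grad
        history.append(w)
    return np.array(history)

def mse(X, y, w):
    return np.mean((y - w * X) ** 2)

# Synthetic data: one small-scale feature, y = 3 * X + noise (scale and noise are assumptions)
rng = np.random.default_rng(0)
X = rng.normal(0.0, 0.2, size=200)
y = 3 * X + rng.normal(0.0, 0.1, size=200)

# Standardized copy of the feature
X_scaled = (X - X.mean()) / X.std()

lr, steps = 0.1, 100  # assumed hyperparameters
hist_raw = gradient_descent(X, y, lr, steps)
hist_scaled = gradient_descent(X_scaled, y, lr, steps)

# Loss at selected steps: with the same lr, the standardized run drops much faster
loss_raw = [mse(X, y, w) for w in hist_raw]
loss_scaled = [mse(X_scaled, y, w) for w in hist_scaled]
for step in (0, 9, 49, 99):
    print(f"step {step + 1:3d}: raw loss = {loss_raw[step]:.4f}, scaled loss = {loss_scaled[step]:.4f}")

# Final weights and losses for both cases
print("final weight (raw):   ", hist_raw[-1], "loss:", loss_raw[-1])
print("final weight (scaled):", hist_scaled[-1], "loss:", loss_scaled[-1])
```

With these choices, the standardized run is essentially converged well before 100 steps, while the raw run is still far from its optimum, so its loss stays higher. The two final weights are not expected to match: standardization rescales X, so the scaled weight is roughly 3 times the standard deviation of X. To plot instead of print, the loss_raw and loss_scaled lists can be passed to matplotlib's plt.plot.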