Challenge: Compare Convergence Speed | Scaling and Model Performance
Feature Scaling and Normalization Deep Dive

Challenge: Compare Convergence Speed

Task


You will simulate gradient descent on a simple linear regression problem to compare how feature scaling affects convergence speed.

Steps:

  1. Generate synthetic data X (one feature) and y using the relation y = 3 * X + noise.
  2. Implement a simple gradient descent function that minimizes MSE loss:
    import numpy as np

    def gradient_descent(X, y, lr, steps):
        w = 0.0                                   # single weight, no bias term
        history = []
        for _ in range(steps):
            grad = -2 * np.mean(X * (y - w * X))  # gradient of mean((y - w*X)**2) w.r.t. w
            w -= lr * grad
            history.append(w)
        return np.array(history)
    
  3. Run gradient descent twice:
    • on the original X,
    • and on the standardized X_scaled = (X - mean) / std.
  4. Plot or print the loss decrease for both to see that scaling accelerates convergence.
  5. Compute and print final weights and losses for both cases (a sketch of one possible approach follows this list).
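
For reference, here is a minimal, self-contained sketch of how these steps could be wired together. The random seed, feature scale, noise level, learning rate, and step count are illustrative assumptions rather than values prescribed by the task, chosen so that a single shared learning rate converges quickly on the standardized feature but only crawls on the raw one; the small mse helper is likewise added here just for reporting losses.

    import numpy as np

    def gradient_descent(X, y, lr, steps):
        """Minimize MSE for the bias-free model y_hat = w * X."""
        w = 0.0
        history = []
        for _ in range(steps):
            grad = -2 * np.mean(X * (y - w * X))
            w -= lr * grad
            history.append(w)
        return np.array(history)

    def mse(X, y, w):
        # Helper used only to report the loss along the weight history.
        return np.mean((y - w * X) ** 2)

    # Step 1: synthetic data, y = 3 * X + noise. The feature is kept on a small
    # scale so the effect of standardization on convergence is easy to see.
    rng = np.random.default_rng(0)
    X = rng.normal(0.0, 0.05, size=200)
    y = 3 * X + rng.normal(0.0, 0.05, size=200)

    # Step 3: standardize the feature.
    X_scaled = (X - X.mean()) / X.std()

    # Run gradient descent with the same hyperparameters on both versions.
    lr, steps = 0.2, 100
    w_raw = gradient_descent(X, y, lr, steps)
    w_scl = gradient_descent(X_scaled, y, lr, steps)

    # Steps 4-5: compare how fast the loss decreases and print the final results.
    loss_raw = np.array([mse(X, y, w) for w in w_raw])
    loss_scl = np.array([mse(X_scaled, y, w) for w in w_scl])
    checkpoints = [0, 9, 49, 99]
    print("loss on raw X    at steps 1/10/50/100:", loss_raw[checkpoints])
    print("loss on scaled X at steps 1/10/50/100:", loss_scl[checkpoints])
    print("final weight (raw)   :", w_raw[-1], "| final loss:", loss_raw[-1])
    print("final weight (scaled):", w_scl[-1], "| final loss:", loss_scl[-1])

With these settings the raw-feature run barely reduces the loss in 100 steps (its gradients are tiny because the feature values are small), while the standardized run drops to roughly the noise variance within a few dozen steps. The two final weights are not directly comparable: the weight learned on X_scaled is expressed in standardized units, and dividing it by X.std() maps it back to a slope of about 3 on the original scale. If the raw feature were instead very large in magnitude, the same learning rate could make the raw run diverge rather than crawl, which is the other common symptom of unscaled features.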


