Challenge: Compare Convergence Speed | Scaling and Model Performance
Feature Scaling and Normalization Deep Dive

Challenge: Compare Convergence Speed

Task


You will simulate gradient descent on a simple linear regression problem to compare how feature scaling affects convergence speed.

Steps:

  1. Generate synthetic data X (one feature) and y using the relation y = 3 * X + noise.
  2. Implement a simple gradient descent function that minimizes MSE loss:
    import numpy as np

    def gradient_descent(X, y, lr, steps):
        w = 0.0                                   # single weight, no intercept
        history = []
        for _ in range(steps):
            grad = -2 * np.mean(X * (y - w * X))  # dMSE/dw for y_hat = w * X
            w -= lr * grad
            history.append(w)
        return np.array(history)
    
  3. Run gradient descent twice:
    • on the original X,
    • and on the standardized X_scaled = (X - mean) / std.
  4. Plot or print the loss decrease for both to see that scaling accelerates convergence (a short note on why follows this list).
  5. Compute and print final weights and losses for both cases.
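Why scaling changes the convergence speed: for this one-weight model the MSE loss is an exact parabola in w, and every gradient step multiplies the error w - w_optimal by |1 - 2 * lr * mean(X**2)|. When the feature's scale pushes mean(X**2) far from 1, a fixed learning rate is either far too timid (slow convergence) or too aggressive (divergence); standardization brings mean(X**2) close to 1, so an ordinary rate such as 0.1 works well. The snippet below is a minimal sketch of that factor; the helper name contraction_factor, the feature scale, and the learning rate are illustrative assumptions, not part of the task.

    import numpy as np

    # Hypothetical illustration: each gradient-descent step multiplies the
    # weight error (w - w_optimal) by |1 - 2 * lr * mean(X**2)|, so the
    # convergence speed is governed by lr * mean(X**2).
    def contraction_factor(X, lr):
        return abs(1 - 2 * lr * np.mean(X ** 2))

    rng = np.random.default_rng(0)
    X = rng.normal(0.0, 0.2, size=200)        # small-scale raw feature (assumed)
    X_scaled = (X - X.mean()) / X.std()       # standardized copy, mean(X**2) ~ 1

    lr = 0.1
    print(contraction_factor(X, lr))          # ~0.99: error shrinks ~1% per step
    print(contraction_factor(X_scaled, lr))   # ~0.8: error shrinks ~20% per step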

Solution
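The platform's reference solution is not reproduced here; the sketch below simply follows the steps above in plain numpy. The specific numbers (200 samples, X drawn from a normal distribution with standard deviation 0.2, noise standard deviation 0.1, lr = 0.1, 100 steps) are assumptions chosen so that both runs converge and the difference in speed is easy to see; with a large-scale raw feature the same learning rate would instead diverge on the unscaled run and would have to be reduced.

    import numpy as np

    def gradient_descent(X, y, lr, steps):
        w = 0.0
        history = []
        for _ in range(steps):
            grad = -2 * np.mean(X * (y - w * X))   # dMSE/dw for y_hat = w * X
            w -= lr * grad
            history.append(w)
        return np.array(history)

    def mse(X, y, w):
        return np.mean((y - w * X) ** 2)

    # Step 1: synthetic data y = 3 * X + noise (sample size and scales are assumptions)
    rng = np.random.default_rng(42)
    X = rng.normal(0.0, 0.2, size=200)
    y = 3 * X + rng.normal(0.0, 0.1, size=200)

    # Standardized copy of the feature
    X_scaled = (X - X.mean()) / X.std()

    # Step 3: same learning rate and step budget for both runs
    lr, steps = 0.1, 100
    hist_raw = gradient_descent(X, y, lr, steps)
    hist_scaled = gradient_descent(X_scaled, y, lr, steps)

    # Step 4: loss after every step; the scaled run reaches the noise floor much sooner
    loss_raw = np.array([mse(X, y, w) for w in hist_raw])
    loss_scaled = np.array([mse(X_scaled, y, w) for w in hist_scaled])
    for step in (0, 9, 49, 99):
        print(f"step {step + 1:3d}  raw loss {loss_raw[step]:.4f}  scaled loss {loss_scaled[step]:.4f}")

    # Step 5: final weights and losses
    print("final weight raw:   ", hist_raw[-1], " loss:", loss_raw[-1])
    print("final weight scaled:", hist_scaled[-1], " loss:", loss_scaled[-1])

Note that the converged weight on the standardized feature is roughly 3 * std(X) rather than 3, because standardizing rescales the feature that the weight multiplies; comparing losses, not raw weights, is what shows the speed difference.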
