Feature Scaling and Normalization Deep Dive
Scaling and Model Performance

Challenge: Compare Convergence Speed

Task


You will simulate gradient descent on a simple linear regression problem to compare how feature scaling affects convergence speed.

Steps:

  1. Generate synthetic data X (one feature) and y using the relation y = 3 * X + noise.
  2. Implement a simple gradient descent function that minimizes MSE loss:
    import numpy as np

    def gradient_descent(X, y, lr, steps):
        w = 0.0              # single weight, no intercept term
        history = []
        for _ in range(steps):
            # gradient of the MSE loss mean((y - w * X) ** 2) with respect to w
            grad = -2 * np.mean(X * (y - w * X))
            w -= lr * grad
            history.append(w)
        return np.array(history)

  3. Run gradient descent twice:
    • on the original X,
    • and on the standardized X_scaled = (X - mean) / std.
  4. Plot or print how the loss decreases in both runs to see that scaling accelerates convergence.
  5. Compute and print the final weights and losses for both cases (a minimal end-to-end sketch follows this list).
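
The steps above can be tied together in one short script. The sketch below is illustrative and not the course's reference solution: the random seed, sample size, feature scale, noise level, learning rate, step count, and the mse helper are all assumptions made here. The feature is deliberately generated on a small scale and centered at zero so that a single shared learning rate converges slowly on raw X but quickly on standardized X; with a large-scale feature the raw run would diverge at this learning rate instead, because the curvature of the MSE in w grows with mean(X ** 2).

    import numpy as np

    rng = np.random.default_rng(42)

    # Step 1: one small-scale, mean-zero feature and targets y = 3 * X + noise
    # (mean-zero so the no-intercept model w * X can actually fit the data).
    X = rng.normal(loc=0.0, scale=0.05, size=300)
    y = 3 * X + rng.normal(loc=0.0, scale=0.05, size=300)

    # Step 2: gradient descent for the single-weight model y ≈ w * X.
    def gradient_descent(X, y, lr, steps):
        w = 0.0
        history = []
        for _ in range(steps):
            grad = -2 * np.mean(X * (y - w * X))  # d(MSE)/dw
            w -= lr * grad
            history.append(w)
        return np.array(history)

    def mse(X, y, w):
        return np.mean((y - w * X) ** 2)

    # Step 3: run twice with the same learning rate, on raw and standardized X.
    X_scaled = (X - X.mean()) / X.std()
    hist_raw = gradient_descent(X, y, lr=0.1, steps=100)
    hist_scaled = gradient_descent(X_scaled, y, lr=0.1, steps=100)

    # Step 4: print the loss every 20 steps to compare convergence speed.
    for i in range(0, 100, 20):
        print(f"step {i:3d}  raw MSE = {mse(X, y, hist_raw[i]):.5f}  "
              f"scaled MSE = {mse(X_scaled, y, hist_scaled[i]):.5f}")

    # Step 5: final weights and losses. The weights are in different units:
    # the scaled weight corresponds to a slope of w / X.std() on the raw feature.
    print(f"raw:    w = {hist_raw[-1]:.4f}  MSE = {mse(X, y, hist_raw[-1]):.5f}")
    print(f"scaled: w = {hist_scaled[-1]:.4f}  MSE = {mse(X_scaled, y, hist_scaled[-1]):.5f}")

With these settings the per-step contraction factor is 1 - 2 * lr * mean(X ** 2), which is roughly 0.9995 for raw X but 0.8 for the standardized copy, so the scaled run settles within a few dozen steps while the raw run is still far from its optimum after all 100 steps.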

