Feature Scaling and Normalization Deep Dive
Scaling and Model Performance

Challenge: Compare Convergence Speed

Task


You will simulate gradient descent on a simple linear regression problem to compare how feature scaling affects convergence speed.

Steps:

  1. Generate synthetic data X (one feature) and y using the relation y = 3 * X + noise.
  2. Implement a simple gradient descent function that minimizes the MSE loss (assumes numpy is imported as np):
    def gradient_descent(X, y, lr, steps):
        # Fit y ≈ w * X (no intercept) by gradient descent on the MSE.
        w = 0.0
        history = []
        for _ in range(steps):
            # d/dw of mean((y - w*X)**2) is -2 * mean(X * (y - w*X))
            grad = -2 * np.mean(X * (y - w * X))
            w -= lr * grad
            history.append(w)
        return np.array(history)
  3. Run gradient descent twice:
    • on the original X,
    • and on the standardized X_scaled = (X - mean) / std.
  4. Plot or print the loss decrease for both runs to see that scaling accelerates convergence; a short note on why this happens follows this list.
  5. Compute and print the final weights and losses for both cases; an example solution sketch appears under Solution below.
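Why scaling changes the speed: for this one-parameter model the update rule can be rewritten as w ← (1 - 2 * lr * mean(X**2)) * w + 2 * lr * mean(X * y), so the distance to the optimal weight shrinks by a factor of |1 - 2 * lr * mean(X**2)| per step, and the iteration is only stable when lr < 1 / mean(X**2). A large feature scale makes mean(X**2) large and forces a tiny learning rate; standardization brings mean(X_scaled**2) to about 1, so a much larger step is safe and the same accuracy is reached in far fewer steps.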

Solution
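
Below is a minimal solution sketch. Everything not fixed by the task is an assumption made here for illustration: the feature is drawn from Normal(0, 50) so that standardization mainly rescales it, the noise is Normal(0, 5), n = 200 samples, and the learning rates (1e-5 for raw X, 0.1 for scaled X) and 100 steps are example choices, not the only ones that work. Print statements stand in for the optional plot.

    import numpy as np

    np.random.seed(42)

    # Step 1: one feature on a large scale (std = 50), target y = 3 * X + noise.
    n = 200
    X = np.random.normal(0.0, 50.0, size=n)
    y = 3.0 * X + np.random.normal(0.0, 5.0, size=n)

    def gradient_descent(X, y, lr, steps):
        # Fit y ≈ w * X by gradient descent on the MSE loss.
        w = 0.0
        history = []
        for _ in range(steps):
            grad = -2 * np.mean(X * (y - w * X))
            w -= lr * grad
            history.append(w)
        return np.array(history)

    def mse(X, y, w):
        return np.mean((y - w * X) ** 2)

    # Step 3: raw features vs. standardized features.
    X_scaled = (X - X.mean()) / X.std()

    # Illustrative learning rates: with raw X, mean(X**2) is about 2500, so
    # lr must stay below roughly 1/2500 to avoid divergence; after scaling,
    # mean(X_scaled**2) is 1 and lr = 0.1 is comfortably stable.
    hist_raw = gradient_descent(X, y, lr=1e-5, steps=100)
    hist_scaled = gradient_descent(X_scaled, y, lr=0.1, steps=100)

    # Steps 4-5: loss along the way and the final fit for both runs.
    for name, Xi, hist in [("raw", X, hist_raw), ("scaled", X_scaled, hist_scaled)]:
        losses = [mse(Xi, y, w) for w in hist]
        print(f"{name:>6}: loss after 10 steps = {losses[9]:.2f}, "
              f"final loss = {losses[-1]:.2f}, final w = {hist[-1]:.3f}")

    # The scaled run learns w ≈ 3 * X.std(); dividing by X.std() maps it
    # back to the slope ≈ 3 on the original feature scale.

Because the raw run is pinned to a step size four orders of magnitude smaller, its loss falls far more slowly; the printed loss after 10 steps makes that gap visible, which is exactly the convergence-speed difference the challenge asks you to observe.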
