Challenge: Compare Convergence Speed | Scaling and Model Performance
Feature Scaling and Normalization Deep Dive

Challenge: Compare Convergence Speed

Task


You will simulate gradient descent on a simple linear regression problem to compare how feature scaling affects convergence speed.

Steps:

  1. Generate synthetic data X (one feature) and y using the relation y = 3 * X + noise.
  2. Implement a simple gradient descent function that minimizes MSE loss:
    import numpy as np

    def gradient_descent(X, y, lr, steps):
        w = 0.0                  # single weight, no intercept
        history = []
        for _ in range(steps):
            # gradient of mean((y - w*X)**2) with respect to w
            grad = -2 * np.mean(X * (y - w * X))
            w -= lr * grad       # gradient descent update
            history.append(w)
        return np.array(history)
  3. Run gradient descent twice:
    • on the original X,
    • and on the standardized X_scaled = (X - mean) / std.
  4. Plot or print the loss decrease for both to see that scaling accelerates convergence; the note after this list explains why the usable learning rate depends on the feature scale.
  5. Compute and print final weights and losses for both cases.
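
Why scaling changes the picture: for this one-parameter model the MSE loss is a parabola in w whose curvature is 2 * mean(X ** 2), so plain gradient descent with a fixed learning rate is only stable when lr < 1 / mean(X ** 2). Standardizing the feature makes mean(X_scaled ** 2) equal to 1, so a textbook learning rate such as 0.1 works, while on the raw feature the usable range shifts with the feature's scale. A quick sanity check (assuming NumPy is imported as np and that X and X_scaled are already defined as in the steps above):

    # approximate largest stable learning rate for this one-parameter MSE problem
    print("max stable lr, raw:   ", 1 / np.mean(X ** 2))
    print("max stable lr, scaled:", 1 / np.mean(X_scaled ** 2))  # close to 1 after standardization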

Solution
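
A minimal sketch of one way to complete the challenge (it assumes only NumPy; the random seed, the 0.05 feature scale, the noise level, lr=0.1, and 100 steps are illustrative choices, not prescribed by the task):

    import numpy as np

    # Step 1: one feature on a deliberately small scale, plus noise
    # (with a very large-scale feature, the unscaled run would diverge at this lr instead)
    rng = np.random.default_rng(0)
    X = rng.normal(0.0, 0.05, size=200)          # single feature
    y = 3 * X + rng.normal(0.0, 0.1, size=200)   # y = 3 * X + noise

    # Step 2: gradient descent on the MSE loss, recording the weight at every step
    def gradient_descent(X, y, lr, steps):
        w = 0.0
        history = []
        for _ in range(steps):
            grad = -2 * np.mean(X * (y - w * X))  # d/dw of mean((y - w*X)**2)
            w -= lr * grad
            history.append(w)
        return np.array(history)

    def mse(X, y, w):
        return np.mean((y - w * X) ** 2)

    # Step 3: same learning rate and step budget for both runs
    X_scaled = (X - X.mean()) / X.std()
    history_raw = gradient_descent(X, y, lr=0.1, steps=100)
    history_scaled = gradient_descent(X_scaled, y, lr=0.1, steps=100)

    # Steps 4-5: report how the loss falls and the final weight/loss of each run
    for name, X_used, hist in [("raw", X, history_raw), ("scaled", X_scaled, history_scaled)]:
        losses = np.array([mse(X_used, y, w) for w in hist])
        print(f"{name:>6}: loss at steps 1/10/100 = "
              f"{losses[0]:.4f} / {losses[9]:.4f} / {losses[-1]:.4f}, "
              f"final w = {hist[-1]:.4f}")

Note that the two final weights are not supposed to match: the scaled run learns a weight of roughly 3 * std(X) because the feature was rescaled, so the meaningful comparison is how quickly each loss curve drops, not the weights themselves.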

