Challenge: Training the Perceptron | Neural Network from Scratch
Introduction to Neural Networks

Challenge: Training the Perceptron

Before proceeding with training the perceptron, keep in mind that it uses the binary cross-entropy loss function discussed earlier. The final key concept before implementing backpropagation is the formula for the derivative of this loss function with respect to the output activations, a^n. Below are the formulas for the loss function and its derivative:
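In standard notation, with y the target label and a^n the output activation, the binary cross-entropy loss and its derivative with respect to a^n are:

```latex
L = -\left( y \ln(a^n) + (1 - y) \ln(1 - a^n) \right)

\frac{dL}{da^n} = \frac{a^n - y}{a^n (1 - a^n)}
```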

To verify that the perceptron is training correctly, the fit() method also prints the average loss at each epoch. This is calculated by averaging the loss over all training examples in that epoch:

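A minimal sketch of that computation, assuming the predictions for the whole epoch are collected in a NumPy array output alongside the labels y_train (the variable and function names here are illustrative, not the course's exact code):

```python
import numpy as np

def average_bce_loss(output, y_train):
    # Binary cross-entropy of each training example, averaged over the epoch.
    losses = -(y_train * np.log(output) + (1 - y_train) * np.log(1 - output))
    return np.mean(losses)
```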

Finally, the formulas for computing gradients are as follows:
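Assuming the usual layer definition z = W a^{prev} + b with activation a = f(z), the standard backpropagation gradients for the quantities named in the task (dz, d_weights, d_biases, da_prev) are:

```latex
dz = da \odot f'(z)

dW = dz \cdot (a^{prev})^T

db = dz

da^{prev} = W^T \cdot dz
```

Here da is the gradient flowing in from the layer above (or from the loss, for the output layer), and ⊙ denotes element-wise multiplication.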

The sample training data (X_train) and the corresponding labels (y_train) are stored as NumPy arrays in the utils.py file. Instances of the activation functions are also defined there:

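A rough sketch of what utils.py could look like; the dataset values and the activation classes below are placeholders chosen for illustration, not the course's actual file:

```python
import numpy as np

# Hypothetical sample data: each row of X_train is one training example,
# and y_train holds the corresponding binary labels.
X_train = np.array([[0, 0],
                    [0, 1],
                    [1, 0],
                    [1, 1]])
y_train = np.array([0, 1, 1, 0])


class ReLU:
    # ReLU activation and its derivative (needed for backpropagation).
    def __call__(self, z):
        return np.maximum(0, z)

    def derivative(self, z):
        return (z > 0).astype(float)


class Sigmoid:
    # Sigmoid activation and its derivative.
    def __call__(self, z):
        return 1 / (1 + np.exp(-z))

    def derivative(self, z):
        s = self(z)
        return s * (1 - s)


# Instances of the activation functions, ready to be passed to layers.
relu = ReLU()
sigmoid = Sigmoid()
```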
Task


  1. Compute the following gradients: dz, d_weights, d_biases, and da_prev in the backward() method of the Layer class.
  2. Compute the output of the model in the fit() method of the Perceptron class.
  3. Compute da (da^n), the gradient of the loss with respect to the output activations, before the loop.
  4. Compute da and perform backpropagation in the loop by calling the appropriate method for each of the layers.

If training is implemented correctly, with a learning rate of 0.01 the loss should decrease steadily with each epoch. A rough sketch of how these steps could fit together is given below.
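One possible arrangement of these steps, under the assumption that each Layer stores its weights, biases, inputs, and pre-activations during the forward pass and that the activation objects expose a derivative() method (the class layout and attribute names are assumptions; the actual course solution may differ):

```python
import numpy as np

class Layer:
    def __init__(self, n_inputs, n_neurons, activation):
        self.weights = np.random.randn(n_neurons, n_inputs) * 0.1
        self.biases = np.zeros((n_neurons, 1))
        self.activation = activation

    def forward(self, a_prev):
        # Cache the inputs and pre-activations for backpropagation.
        self.a_prev = a_prev
        self.z = self.weights @ a_prev + self.biases
        return self.activation(self.z)

    def backward(self, da, learning_rate):
        # Gradients for this layer, following the formulas above.
        dz = da * self.activation.derivative(self.z)
        d_weights = dz @ self.a_prev.T
        d_biases = dz
        da_prev = self.weights.T @ dz
        # Gradient descent update.
        self.weights -= learning_rate * d_weights
        self.biases -= learning_rate * d_biases
        return da_prev


class Perceptron:
    def __init__(self, layers):
        self.layers = layers

    def fit(self, X_train, y_train, epochs, learning_rate):
        for epoch in range(epochs):
            total_loss = 0
            for x, y in zip(X_train, y_train):
                a = x.reshape(-1, 1)
                # Forward pass: the output of the model.
                for layer in self.layers:
                    a = layer.forward(a)
                total_loss += -(y * np.log(a) + (1 - y) * np.log(1 - a)).item()
                # Gradient of the loss with respect to the output activation.
                da = (a - y) / (a * (1 - a))
                # Backward pass: each layer returns da for the previous one.
                for layer in reversed(self.layers):
                    da = layer.backward(da, learning_rate)
            print(f"Epoch {epoch + 1}, average loss: {total_loss / len(X_train)}")
```

Each call to backward() returns da_prev, which becomes da for the next layer down, so a single da variable can be reused throughout the backward loop. As a usage example under the same assumptions, Perceptron([Layer(2, 4, relu), Layer(4, 1, sigmoid)]).fit(X_train, y_train, epochs=100, learning_rate=0.01) would train a small two-layer network on the sample data.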
