Challenge: Creating a Perceptron | Neural Network from Scratch
Introduction to Neural Networks


Challenge: Creating a Perceptron

Since our goal is to implement a multilayer perceptron, creating a Perceptron class will simplify model initialization. Its only attribute, layers, is essentially a list of the Layer objects that define the structure of the network:

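A minimal sketch of what this class might look like is shown below; the constructor signature and the idea of building the layers inside __init__ are assumptions, and the list itself is filled in as part of the task further down:

```python
class Perceptron:
    def __init__(self, input_size, hidden_size, output_size):
        # The network's only attribute: a list of Layer objects
        # (populated in the task below).
        self.layers = []
```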

The variables used to initialize the layers are the following (an illustrative set of values is shown after the list):

  • input_size: the number of input features;

  • hidden_size: the number of neurons in each hidden layer (both hidden layers have the same number of neurons in this case);

  • output_size: the number of neurons in the output layer.
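
For concreteness, these variables could be set as follows; the numbers are purely illustrative and depend on the dataset being used:

```python
input_size = 2    # illustrative: two input features per sample
hidden_size = 4   # illustrative: four neurons in each hidden layer
output_size = 1   # illustrative: a single output neuron (e.g., binary classification)
```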

The resulting perceptron should therefore consist of an input with input_size features, two hidden layers with hidden_size neurons each, and an output layer with output_size neurons.

Task


Your goal is to set up the basic structure of the perceptron by implementing its layers (a sketch of one possible implementation follows the list):

  1. Initialize the weights (a matrix) and biases (a vector) with random values from a uniform distribution in range [-1, 1) using NumPy.
  2. Compute the raw output values of the neurons in the forward() method of the Layer class.
  3. Apply the activation function to the raw outputs in the forward() method of the Layer class and return the result.
  4. Define three layers in the Perceptron class: two hidden layers with the same number of neurons and one output layer. Both hidden layers should use the relu activation function, while the output layer should use sigmoid.

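One possible way to put these pieces together is sketched below. This is not the course's reference solution: the parameter names (n_inputs, n_neurons, activation_function), the column-vector shape convention, and the relu/sigmoid helpers are all assumptions made so the example is self-contained and runnable.

```python
import numpy as np

# relu and sigmoid are presumably provided by the course; minimal
# versions are included here so the sketch runs on its own.
def relu(z):
    return np.maximum(0, z)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))


class Layer:
    def __init__(self, n_inputs, n_neurons, activation_function):
        # Task 1: weights (matrix) and biases (vector) drawn uniformly from [-1, 1).
        # Shapes assume inputs arrive as column vectors of shape (n_inputs, 1);
        # the course may use the transposed layout instead.
        self.weights = np.random.uniform(-1, 1, (n_neurons, n_inputs))
        self.biases = np.random.uniform(-1, 1, (n_neurons, 1))
        self.activation_function = activation_function

    def forward(self, inputs):
        # Task 2: compute the raw (pre-activation) outputs of the neurons.
        raw_output = np.dot(self.weights, inputs) + self.biases
        # Task 3: apply the activation function and return the result.
        return self.activation_function(raw_output)


class Perceptron:
    def __init__(self, input_size, hidden_size, output_size):
        # Task 4: two ReLU hidden layers of equal size and one sigmoid output layer.
        self.layers = [
            Layer(input_size, hidden_size, relu),
            Layer(hidden_size, hidden_size, relu),
            Layer(hidden_size, output_size, sigmoid),
        ]


# Quick check with the illustrative sizes from above: pass one sample through every layer.
model = Perceptron(input_size=2, hidden_size=4, output_size=1)
x = np.random.uniform(-1, 1, (2, 1))  # a single sample as a column vector
for layer in model.layers:
    x = layer.forward(x)
print(x.shape)  # (1, 1)
```

Choosing the (n_neurons, n_inputs) weight shape keeps the forward pass a single matrix-vector product per layer; if the course stores samples as rows instead, the shapes and the order of the dot product simply flip.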
