Challenge: Creating a Perceptron | Neural Network from Scratch
Introduction to Neural Networks with Python

Challenge: Creating a Perceptron

To build a multilayer perceptron (MLP), it is helpful to define a Perceptron class. It stores a list of Layer objects that make up the network:

class Perceptron:
    def __init__(self, layers):
        self.layers = layers

The MLP is parameterized by three values:

  • input_size: number of input features;
  • hidden_size: number of neurons in each hidden layer;
  • output_size: number of neurons in the output layer.

Thus, the model consists of:

  1. An input layer;
  2. Two hidden layers (same neuron count, ReLU);
  3. An output layer (sigmoid).
Task


Your task is to implement the basic structure of this MLP.

1. Initialize layer parameters (__init__)

  • Create a weight matrix of shape (n_neurons, n_inputs);
  • Create a bias vector of shape (n_neurons, 1);
  • Fill them with random values in [-1, 1) using np.random.uniform().
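A minimal sketch of this initialization, assuming the layer also stores its activation function as a callable (the constructor signature and the `activation` attribute are assumptions; match them to the course template):

```python
import numpy as np

class Layer:
    def __init__(self, n_inputs, n_neurons, activation):
        # One row of weights per neuron, one column per input feature
        self.weights = np.random.uniform(-1, 1, size=(n_neurons, n_inputs))
        # One bias per neuron, stored as a column vector
        self.biases = np.random.uniform(-1, 1, size=(n_neurons, 1))
        self.activation = activation
```

`np.random.uniform(low, high, size)` draws from the half-open interval `[low, high)`, which matches the `[-1, 1)` range the task asks for.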

2. Implement forward propagation (forward)

  • Compute raw neuron outputs: np.dot(self.weights, self.inputs) + self.biases;
  • Apply the assigned activation function and return the output.
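One way to sketch the forward method (the constructor is repeated so the snippet stands alone; caching the inputs on self and the `relu` helper are assumptions built from the formula above, not the course's exact code):

```python
import numpy as np

def relu(x):
    # Hypothetical activation helper: element-wise max(0, x)
    return np.maximum(0, x)

class Layer:
    def __init__(self, n_inputs, n_neurons, activation):
        self.weights = np.random.uniform(-1, 1, size=(n_neurons, n_inputs))
        self.biases = np.random.uniform(-1, 1, size=(n_neurons, 1))
        self.activation = activation

    def forward(self, inputs):
        # Cache inputs so the raw-output formula from the task applies verbatim
        self.inputs = inputs
        raw = np.dot(self.weights, self.inputs) + self.biases
        return self.activation(raw)
```

With inputs shaped `(n_inputs, 1)`, the dot product yields a `(n_neurons, 1)` column, so the bias vector adds without broadcasting surprises.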

3. Define the MLP layers

  • Two hidden layers, each with hidden_size neurons and ReLU activation;
  • One output layer with output_size neurons and sigmoid activation.
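Putting the pieces together, the three layers might be wired up like this (a self-contained sketch under the assumptions above; the concrete sizes and the `sigmoid` helper are illustrative, not from the course):

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

class Layer:
    def __init__(self, n_inputs, n_neurons, activation):
        self.weights = np.random.uniform(-1, 1, size=(n_neurons, n_inputs))
        self.biases = np.random.uniform(-1, 1, size=(n_neurons, 1))
        self.activation = activation

    def forward(self, inputs):
        self.inputs = inputs
        return self.activation(np.dot(self.weights, self.inputs) + self.biases)

class Perceptron:
    def __init__(self, layers):
        self.layers = layers

# Hypothetical sizes chosen purely for illustration
input_size, hidden_size, output_size = 4, 8, 1

model = Perceptron([
    Layer(input_size, hidden_size, relu),     # hidden layer 1
    Layer(hidden_size, hidden_size, relu),    # hidden layer 2
    Layer(hidden_size, output_size, sigmoid)  # output layer
])

# Feed a column vector through the layers in order
x = np.random.uniform(-1, 1, size=(input_size, 1))
out = x
for layer in model.layers:
    out = layer.forward(out)
```

Note that each layer's `n_inputs` must equal the previous layer's `n_neurons`, which is why `hidden_size` appears as both the output width of the first hidden layer and the input width of the second.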



Section 2. Chapter 4
