Challenge: Creating a Perceptron
To build a multilayer perceptron (MLP), it is helpful to define a `Perceptron` class. It stores a list of `Layer` objects that make up the network:
```python
class Perceptron:
    def __init__(self, layers):
        self.layers = layers
```
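The challenge only asks the `Perceptron` to store its layers, but it helps to see how they would be used. As a minimal sketch (the `forward` method and its chaining behavior are assumptions of this example, not part of the snippet above), each layer's output feeds the next layer's input:

```python
class Perceptron:
    def __init__(self, layers):
        self.layers = layers

    # Illustrative helper (not required by the task): run a column
    # vector of inputs through every layer in order.
    def forward(self, inputs):
        for layer in self.layers:
            inputs = layer.forward(inputs)
        return inputs
```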
The MLP will use three values:

- `input_size`: number of input features;
- `hidden_size`: number of neurons in each hidden layer;
- `output_size`: number of neurons in the output layer.
Thus, the model consists of:
- An input layer;
- Two hidden layers (same neuron count, ReLU);
- An output layer (sigmoid).
Your task is to implement the basic structure of this MLP.
1. Initialize layer parameters (`__init__`)
   - Create a weight matrix of shape `(n_neurons, n_inputs)`;
   - Create a bias vector of shape `(n_neurons, 1)`;
   - Fill both with random values in [-1, 1) using `np.random.uniform()` (see the sketch after this list).
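A minimal sketch of this initializer, assuming the constructor takes `n_inputs`, `n_neurons`, and an `activation` callable (those parameter names are illustrative, not given by the task):

```python
import numpy as np

class Layer:
    def __init__(self, n_inputs, n_neurons, activation):
        # Weight matrix: one row per neuron, one column per input.
        self.weights = np.random.uniform(-1, 1, (n_neurons, n_inputs))
        # Bias column vector: one bias per neuron.
        self.biases = np.random.uniform(-1, 1, (n_neurons, 1))
        # Activation function applied later in forward().
        self.activation = activation
```

Note that `np.random.uniform(low, high, size)` samples from the half-open interval [low, high), which matches the [-1, 1) requirement.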
2. Implement forward propagation (`forward`)
   - Compute the raw neuron outputs: `np.dot(self.weights, self.inputs) + self.biases`;
   - Apply the layer's assigned activation function and return the result (a sketch follows this list).
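Continuing the `Layer` sketch above, `forward` could look like this (caching the inputs on `self.inputs` is an assumption taken from the formula in the task):

```python
    def forward(self, inputs):
        # Cache the inputs so the task's formula reads verbatim.
        self.inputs = inputs
        # Raw (pre-activation) outputs, shape (n_neurons, 1).
        raw = np.dot(self.weights, self.inputs) + self.biases
        # Element-wise activation gives the layer's final output.
        return self.activation(raw)
```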
3. Define the MLP layers
   - Two hidden layers, each with `hidden_size` neurons and ReLU activation;
   - One output layer with `output_size` neurons and sigmoid activation (an example follows this list).
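Putting the sketches together, the network from the description could be assembled like this (the concrete sizes and the `relu`/`sigmoid` helpers are illustrative assumptions, not given by the task):

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Hypothetical sizes, for illustration only.
input_size, hidden_size, output_size = 4, 8, 1

model = Perceptron([
    Layer(input_size, hidden_size, relu),      # first hidden layer
    Layer(hidden_size, hidden_size, relu),     # second hidden layer
    Layer(hidden_size, output_size, sigmoid),  # output layer
])
```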