Neural Network from Scratch
Introduction to Neural Networks

Challenge: Creating a Perceptron

Since our goal is to implement a multilayer perceptron, creating a Perceptron class will simplify model initialization. Its only attribute, layers, is a list of the Layer objects that define the structure of the network:

python
class Perceptron:
    def __init__(self, layers):
        self.layers = layers

The variables used to initialize the layers are the following:

  • input_size: the number of input features;
  • hidden_size: the number of neurons in each hidden layer (both hidden layers will have the same number of neurons in this case);
  • output_size: the number of neurons in the output layer.

The resulting perceptron consists of two hidden layers with the same number of neurons, followed by a single output layer:
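The original structure diagram is not reproduced here. Assuming the sizes used in the solution (input_size=2, hidden_size=3, output_size=1), the layer shapes can be sketched as:

```python
# Data flow: input (2 features) -> hidden_1 (3 neurons) -> hidden_2 (3 neurons) -> output (1 neuron)
input_size, hidden_size, output_size = 2, 3, 1

# Each layer's weight matrix has shape (n_neurons, n_inputs)
shapes = [
    (hidden_size, input_size),   # hidden layer 1: (3, 2)
    (hidden_size, hidden_size),  # hidden layer 2: (3, 3)
    (output_size, hidden_size),  # output layer:   (1, 3)
]
for i, shape in enumerate(shapes, start=1):
    print(f"Layer {i} weights: {shape}")
```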

Task


Your goal is to set up the basic structure of the perceptron by implementing its layers:

  1. Initialize the weights (a matrix) and biases (a vector) with random values drawn from a uniform distribution over the range [-1, 1) using NumPy.
  2. Compute the raw output values of the neurons in the forward() method of the Layer class.
  3. Apply the activation function to the raw outputs in the forward() method of the Layer class and return the result.
  4. Define three layers in the Perceptron class: two hidden layers with the same number of neurons and one output layer. Both hidden layers should use the relu activation function, while the output layer should use sigmoid.
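Steps 1–3 can be sketched in isolation. The following minimal example uses an inline ReLU standing in for the course's activations module; the seed, layer sizes, and input vector here are arbitrary:

```python
import numpy as np

np.random.seed(0)  # arbitrary seed, for reproducibility of this sketch

def relu(z):
    # Element-wise ReLU: max(0, z)
    return np.maximum(0, z)

n_inputs, n_neurons = 2, 3

# Step 1: uniform initialization in [-1, 1); weight matrix shape is (n_neurons, n_inputs)
weights = np.random.uniform(-1, 1, (n_neurons, n_inputs))
biases = np.random.uniform(-1, 1, (n_neurons, 1))

# Step 2: raw outputs z = W·x + b for a column-vector input
x = np.array([[0.5], [-0.2]])
z = np.dot(weights, x) + biases

# Step 3: apply the activation, producing one value per neuron
a = relu(z)
print(a.shape)  # (3, 1)
```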

Solution

import numpy as np
import os
os.system('wget https://codefinity-content-media.s3.eu-west-1.amazonaws.com/f9fc718f-c98b-470d-ba78-d84ef16ba45f/section_2/activations.py 2>/dev/null')
from activations import relu, sigmoid

# Fix the seed for reproducibility
np.random.seed(10)

class Layer:
    def __init__(self, n_inputs, n_neurons, activation_function):
        self.inputs = np.zeros((n_inputs, 1))
        self.outputs = np.zeros((n_neurons, 1))
        # 1. Initialize the weight matrix and the bias vector with random values
        self.weights = np.random.uniform(-1, 1, (n_neurons, n_inputs))
        self.biases = np.random.uniform(-1, 1, (n_neurons, 1))
        self.activation = activation_function

    def forward(self, inputs):
        self.inputs = np.array(inputs).reshape(-1, 1)
        # 2. Compute the raw output values of the neurons
        self.outputs = np.dot(self.weights, self.inputs) + self.biases
        # 3. Apply the activation function
        return self.activation(self.outputs)

class Perceptron:
    def __init__(self, layers):
        self.layers = layers

input_size = 2
hidden_size = 3
output_size = 1
# 4. Define three layers: 2 hidden layers and 1 output layer
hidden_1 = Layer(input_size, hidden_size, relu)
hidden_2 = Layer(hidden_size, hidden_size, relu)
output_layer = Layer(hidden_size, output_size, sigmoid)

layers = [hidden_1, hidden_2, output_layer]
# A perceptron with 3 layers
perceptron = Perceptron(layers)

print("Weights of the third neuron in the second hidden layer:")
print(np.round(perceptron.layers[1].weights[2], 2))

print("Weights of the neuron in the output layer:")
print(np.round(perceptron.layers[2].weights[0], 2))
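Once the layers exist, a forward pass simply chains each layer's output into the next layer's forward() call. A self-contained sketch of this (inline relu/sigmoid stand in for the downloaded activations module, and the input [0.5, -0.2] is arbitrary):

```python
import numpy as np

np.random.seed(10)

def relu(z):
    return np.maximum(0, z)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

class Layer:
    def __init__(self, n_inputs, n_neurons, activation_function):
        self.weights = np.random.uniform(-1, 1, (n_neurons, n_inputs))
        self.biases = np.random.uniform(-1, 1, (n_neurons, 1))
        self.activation = activation_function

    def forward(self, inputs):
        self.inputs = np.array(inputs).reshape(-1, 1)
        self.outputs = np.dot(self.weights, self.inputs) + self.biases
        return self.activation(self.outputs)

hidden_1 = Layer(2, 3, relu)
hidden_2 = Layer(3, 3, relu)
output_layer = Layer(3, 1, sigmoid)

# Chain the forward calls: each layer consumes the previous layer's activations
a = hidden_1.forward([0.5, -0.2])
a = hidden_2.forward(a)
prediction = output_layer.forward(a)
print(prediction.shape)  # (1, 1): a single sigmoid output in (0, 1)
```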


Section 2. Chapter 4
Starter code for the exercise, with blanks (___) to fill in:

import numpy as np
import os
os.system('wget https://codefinity-content-media.s3.eu-west-1.amazonaws.com/f9fc718f-c98b-470d-ba78-d84ef16ba45f/section_2/activations.py 2>/dev/null')
from activations import relu, sigmoid

# Fix the seed for reproducibility
np.random.seed(10)

class Layer:
    def __init__(self, n_inputs, n_neurons, activation_function):
        self.inputs = np.zeros((n_inputs, 1))
        self.outputs = np.zeros((n_neurons, 1))
        # 1. Initialize the weight matrix and the bias vector with random values
        self.weights = ___
        self.biases = ___
        self.activation = activation_function

    def forward(self, inputs):
        self.inputs = np.array(inputs).reshape(-1, 1)
        # 2. Compute the raw output values of the neurons
        self.outputs = ___
        # 3. Apply the activation function
        return ___

class Perceptron:
    def __init__(self, layers):
        self.layers = layers

input_size = 2
hidden_size = 3
output_size = 1
# 4. Define three layers: 2 hidden layers and 1 output layer
hidden_1 = ___
hidden_2 = ___
output_layer = ___

layers = [hidden_1, hidden_2, output_layer]
# A perceptron with 3 layers
perceptron = Perceptron(layers)

print("Weights of the third neuron in the second hidden layer:")
print(np.round(perceptron.layers[1].weights[2], 2))

print("Weights of the neuron in the output layer:")
print(np.round(perceptron.layers[2].weights[0], 2))