Learn Forward Propagation | Neural Network from Scratch
Introduction to Neural Networks

Forward Propagation

You have already implemented forward propagation for a single layer in the previous chapter. Now, the goal is to implement complete forward propagation, from inputs to outputs.
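As a quick recap, a single layer's forward pass might look roughly like this (a minimal sketch, assuming a fully connected layer with randomly initialized weights and a sigmoid activation; the actual layer class from the previous chapter may differ in details):

python
import numpy as np

class Layer:
    def __init__(self, n_inputs, n_neurons):
        # Randomly initialized weights and zero biases (illustrative choice)
        self.weights = np.random.randn(n_inputs, n_neurons)
        self.biases = np.zeros((1, n_neurons))

    def forward(self, inputs):
        # Weighted sum of the inputs followed by a sigmoid activation
        z = np.dot(inputs, self.weights) + self.biases
        return 1 / (1 + np.exp(-z))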

To implement the entire forward propagation process, you need to define the forward() method in the Perceptron class. This method performs forward propagation layer by layer, calling each layer's forward() method in turn:

python
class Perceptron:
    def __init__(self, layers):
        self.layers = layers

    def forward(self, inputs):
        x = inputs
        for layer in ...:
            # Pass x layer by layer
            x = ...

        return ...

The inputs first pass through the first hidden layer; each layer's outputs then serve as the inputs for the next layer, until the final layer produces the network's output.
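In other words, the loop inside forward() simply composes the layers' forward() calls. For the three-layer network used in this chapter, it is equivalent to the nested call below (a sketch, assuming the layers hidden_1, hidden_2, and output_layer from the task each expose a forward() method):

python
# The loop x = layer.forward(x) over [hidden_1, hidden_2, output_layer]
# unrolls into one nested expression:
output = output_layer.forward(hidden_2.forward(hidden_1.forward(inputs)))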

Task


Your goal is to implement forward propagation for the perceptron:

  1. Iterate over the layers of the perceptron.
  2. Pass x through each layer in the network sequentially.
  3. Return the final output after all layers have processed the input.

If the forward() method is implemented correctly, the perceptron should output a single number between 0 and 1 when given some inputs (e.g., [1, 0]).

Solution

import numpy as np
import os
os.system('wget https://codefinity-content-media.s3.eu-west-1.amazonaws.com/f9fc718f-c98b-470d-ba78-d84ef16ba45f/section_2/layers.py 2>/dev/null')
from layers import hidden_1, hidden_2, output_layer

# Fix the seed of the "random" library, so it will be easier to test our code
np.random.seed(10)

class Perceptron:
    def __init__(self, layers):
        self.layers = layers

    def forward(self, inputs):
        x = inputs
        for layer in self.layers:
            # 1. Pass x layer by layer
            x = layer.forward(x)
        # 2. Return the result
        return x

layers = [hidden_1, hidden_2, output_layer]
perceptron = Perceptron(layers)
# Testing the perceptron with two inputs: 1 and 0
inputs = [1, 0]
print(f'Inputs: {inputs}')
print(f'Output: {perceptron.forward(inputs)[0, 0]:.2f}')
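Note the [0, 0] indexing before printing: it assumes the output layer returns its activations as a 2D NumPy array (here of shape 1x1), so the indexing extracts the single scalar prediction between 0 and 1.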