Introduction to Neural Networks with Python

Single Neuron Implementation

Definition

A neuron is the basic computational unit of a neural network. It processes multiple inputs and generates a single output, enabling the network to learn and make predictions.

For this example, we build a neural network with one neuron for a binary classification task (e.g., spam detection). The neuron receives numerical features and outputs a value between 0 and 1, representing the probability that an email is spam (1) or ham (0).

Step-by-step:

  1. Multiply each input by its weight;
  2. Sum all weighted inputs;
  3. Add a bias to shift the output;
  4. Pass the result through a sigmoid activation, which converts it to the (0, 1) range for probability output.
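
To make the four steps concrete, here is a small numeric sketch (the feature values, weights, and bias are made up for illustration): with inputs [1.0, 2.0], weights [0.5, -0.25], and bias 0.1, the weighted sum is 1.0 · 0.5 + 2.0 · (-0.25) + 0.1 = 0.1, and sigmoid(0.1) ≈ 0.525, so this email would be scored as slightly more likely spam than ham.
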
Note

The bias of the neuron is also a trainable parameter.

Neuron Class

A neuron needs to store its weights and bias, making a class a natural way to group these related properties.

Note

While this class won't be part of the final neural network implementation, it effectively illustrates key principles.

import numpy as np

class Neuron:
    def __init__(self, n_inputs):
        # One weight per input, drawn uniformly from [-1, 1] to break symmetry
        self.weights = np.random.uniform(-1, 1, n_inputs)
        # A single trainable bias, drawn from the same range
        self.bias = np.random.uniform(-1, 1)
  • weights: randomly initialized values (one per input);
  • bias: a single random value.

Both parameters are drawn from a uniform distribution in [-1, 1] using np.random.uniform() to break symmetry.
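
As a quick sanity check, a minimal usage sketch might look like this (the printed values are illustrative and will differ from run to run):

neuron = Neuron(3)     # a neuron expecting three input features
print(neuron.weights)  # three random values in [-1, 1], e.g. [ 0.41 -0.72  0.18]
print(neuron.bias)     # a single random value in [-1, 1], e.g. -0.35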

Forward Propagation

The neuron’s activate() method computes the weighted sum and applies the sigmoid. The weighted sum uses the dot product of the weights and inputs:

input_sum_with_bias = np.dot(self.weights, inputs) + self.bias

Then we apply the activation to get the neuron’s final output.

Using np.dot() avoids loops and computes the entire weighted sum in one line. The sigmoid function then transforms this raw value into a probability:

def activate(self, inputs):
    # Weighted sum of all inputs plus the bias
    input_sum_with_bias = np.dot(self.weights, inputs) + self.bias
    # Squash the raw sum into (0, 1) with the sigmoid (defined below)
    output = sigmoid(input_sum_with_bias)
    return output
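
For example, calling the method might look like this (a sketch with made-up feature values; it assumes the sigmoid() helper defined below is in scope):

neuron = Neuron(3)
inputs = np.array([0.5, 0.2, 0.8])  # hypothetical feature values
print(neuron.activate(inputs))      # some probability in (0, 1), e.g. 0.63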

Sigmoid Activation

Given the raw output z, the sigmoid is:

\sigma(z) = \frac{1}{1 + e^{-z}}

It maps any number to (0, 1), making it ideal for binary classification, where the neuron’s output must represent a probability.

Using this formula, sigmoid can be implemented as a simple function in Python:

def sigmoid(z):
    return 1 / (1 + np.exp(-z))
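
A few sample values illustrate its behavior:

print(sigmoid(0))    # 0.5 (exactly at the decision midpoint)
print(sigmoid(10))   # approximately 0.99995 (saturates toward 1)
print(sigmoid(-10))  # approximately 0.00005 (saturates toward 0)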

Another common activation is the ReLU function, which sets the output equal to z if it is positive and 0 otherwise:

\text{ReLU}(z) = \max(0, z)

def relu(z):
    return np.maximum(0, z)
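
Since np.maximum() is applied element-wise, relu() works on single numbers and whole arrays alike:

print(relu(-3.0))                         # 0.0
print(relu(2.5))                          # 2.5
print(relu(np.array([-2.0, 0.0, 3.0])))   # [0. 0. 3.]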

1. What is the role of the bias term in a single neuron?

2. Why do we initialize weights with small random values rather than zeros?
