Single Neuron Implementation
A neuron is the basic computational unit of a neural network. It processes multiple inputs and generates a single output, enabling the network to learn and make predictions.
For this example, we build a neural network with one neuron for a binary classification task (e.g., spam detection). The neuron receives numerical features and outputs a value between 0 and 1, representing the probability that an email is spam (1) or ham (0).
Step-by-step:
- Multiply each input by its weight;
- Sum all weighted inputs;
- Add a bias to shift the output;
- Pass the result through a sigmoid activation, which converts it to the (0, 1) range for probability output.
The neuron's bias, like its weights, is a trainable parameter.
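To make these steps concrete, here is a minimal sketch of the forward pass with made-up numbers (the inputs, weights, and bias below are purely illustrative):
import numpy as np

inputs = np.array([0.5, 0.3, 0.8])    # illustrative feature values
weights = np.array([0.4, -0.6, 0.9])  # one weight per input
bias = 0.1

weighted_sum = np.dot(weights, inputs) + bias  # 0.2 - 0.18 + 0.72 + 0.1 = 0.84
output = 1 / (1 + np.exp(-weighted_sum))       # sigmoid squashes the sum into (0, 1)
print(output)  # ~0.698, interpreted as the spam probability for these inputs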
Neuron Class
A neuron needs to store its weights and bias, making a class a natural way to group these related properties.
While this class won't be part of the final neural network implementation, it effectively illustrates key principles.
import numpy as np

class Neuron:
    def __init__(self, n_inputs):
        # One random weight per input and a single random bias, both drawn from [-1, 1]
        self.weights = np.random.uniform(-1, 1, n_inputs)
        self.bias = np.random.uniform(-1, 1)
- weights: randomly initialized values (one per input);
- bias: a single random value.
Both are drawn from a uniform distribution in [-1, 1] using np.random.uniform() to break symmetry.
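As a quick sanity check (a hypothetical snippet, not part of the final implementation), you can create a neuron and inspect its parameters:
neuron = Neuron(n_inputs=3)
print(neuron.weights)  # 3 random values, each in [-1, 1]
print(neuron.bias)     # a single random value in [-1, 1]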
Forward Propagation
The neuron's activate() method computes the weighted sum and applies the sigmoid.
The weighted sum uses the dot product of the weights and inputs:
input_sum_with_bias = np.dot(self.weights, inputs) + self.bias
Then we apply the activation to get the neuron's final output.
Using np.dot() avoids loops and computes the entire weighted sum in one line.
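For intuition, the dot product gives the same result as an explicit loop over the inputs (a small illustrative comparison):
weights = np.array([0.4, -0.6, 0.9])
inputs = np.array([0.5, 0.3, 0.8])

loop_sum = sum(w * x for w, x in zip(weights, inputs))  # explicit loop
dot_sum = np.dot(weights, inputs)                       # vectorized one-liner
print(np.isclose(loop_sum, dot_sum))                    # True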
The sigmoid function then transforms this raw value into a probability:
def activate(self, inputs):
    # Weighted sum of the inputs plus the bias
    input_sum_with_bias = np.dot(self.weights, inputs) + self.bias
    # Sigmoid (defined in the next section) maps the raw sum into (0, 1)
    output = sigmoid(input_sum_with_bias)
    return output
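Putting it together, a neuron built from the class above can be used like this (the feature values are purely illustrative, and sigmoid() is the function defined in the next section):
neuron = Neuron(n_inputs=3)
features = np.array([0.5, 0.3, 0.8])  # hypothetical email features
probability = neuron.activate(features)
print(probability)  # a value in (0, 1): the estimated probability of spam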
Sigmoid Activation
Given raw output z, the sigmoid is:
σ(z) = 1 / (1 + e^(-z))
It maps any number to (0, 1), making it ideal for binary classification, where the neuron's output must represent a probability.
Using this formula, the sigmoid can be implemented as a simple Python function:
def sigmoid(z):
return 1 / (1 + np.exp(-z))
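A few sample inputs show how the sigmoid squashes its argument (outputs rounded):
print(sigmoid(0))   # 0.5   -- no evidence either way
print(sigmoid(4))   # ~0.982 -- large positive input maps close to 1
print(sigmoid(-4))  # ~0.018 -- large negative input maps close to 0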
The ReLU function, another common activation, sets the output equal to z when z is positive and to 0 otherwise:
ReLU(z) = max(0, z)
def relu(z):
    return np.maximum(0, z)
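For comparison, ReLU simply zeroes out negative values while leaving positive ones unchanged (a quick illustrative call):
z = np.array([-2.0, 0.0, 3.0])
print(relu(z))  # [0. 0. 3.]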
1. What is the role of the bias term in a single neuron?
2. Why do we initialize weights with small random values rather than zeros?