Single Neuron Implementation
A neuron is the basic computational unit of a neural network. It processes multiple inputs and generates a single output, enabling the network to learn and make predictions.
For this example, the goal is to build a neural network with a single neuron. It will be used for a binary classification task, such as spam detection, where 0 corresponds to a ham (non-spam) email and 1 corresponds to a spam email.
The neuron will receive numerical features extracted from emails as inputs and produce an output between 0 and 1, representing the probability that a given email is spam.
Here is what happens step by step:
- Each input is multiplied by a corresponding weight; the weights are learnable parameters that determine the importance of each input;
- All weighted inputs are summed together;
- A bias term is added to the input sum, allowing the neuron to shift its output and providing additional flexibility to the model;
- The sum is passed through an activation function. Because a single neuron directly produces the final output (a probability), the sigmoid function is used to compress values into the range (0,1).
The bias of the neuron is also a trainable parameter.
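Putting these steps together, for inputs x1, ..., xn with weights w1, ..., wn and bias b, the neuron computes the following (σ denotes the sigmoid function):
output = σ(w1·x1 + w2·x2 + ... + wn·xn + b)
As a small illustrative example with made-up numbers: inputs [0.5, 0.2], weights [0.4, -0.6], and bias 0.1 give a raw sum of 0.5·0.4 + 0.2·(-0.6) + 0.1 = 0.18, and σ(0.18) ≈ 0.545, so this email would be judged slightly more likely to be spam than ham.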
Neuron Class
A neuron needs to store its weights and bias, making a class a natural way to group these related properties.
While this class won't be part of the final neural network implementation, it effectively illustrates key principles.
class Neuron:
    def __init__(self, n_inputs):
        self.weights = ...
        self.bias = ...
- weights: a list of randomly initialized values that determine how important each input is to the neuron (n_inputs is the number of inputs);
- bias: a randomly initialized value that helps the neuron make flexible decisions.
Weights and bias should be randomly initialized with small values between -1 and 1, drawn from a uniform distribution, to break symmetry and ensure that different neurons learn different features.
To recap, NumPy provides the random.uniform() function to generate a single random number or, by specifying the size argument, an array of random numbers drawn from a uniform distribution within the [low, high) range.
import numpy as np
np.random.uniform(low, high, size=...)
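For reference, here is one possible way to fill in the placeholders above (a minimal sketch; any values drawn uniformly from [-1, 1) match the description):
import numpy as np

class Neuron:
    def __init__(self, n_inputs):
        # One weight per input, drawn uniformly from [-1, 1)
        self.weights = np.random.uniform(-1, 1, size=n_inputs)
        # A single bias value, also drawn uniformly from [-1, 1)
        self.bias = np.random.uniform(-1, 1)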
Forward Propagation
The Neuron class should include an activate() method that calculates the weighted sum of the inputs and then applies the activation function (the sigmoid function in this case).
When two vectors of equal length (weights and inputs) are available, the weighted sum can be efficiently computed using the dot product of these vectors.
This allows us to compute the weighted sum in a single line of code using the numpy.dot() function, eliminating the need for a loop. The bias can then be directly added to the result to get input_sum_with_bias. Finally, the output is computed by applying the sigmoid activation function:
def activate(self, inputs):
    input_sum_with_bias = ...
    output = ...
    return output
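One possible completion of these placeholders, applying the sigmoid formula (introduced in the next section) inline:
def activate(self, inputs):
    # Weighted sum of inputs plus bias, computed with a dot product
    input_sum_with_bias = np.dot(self.weights, inputs) + self.bias
    # Squash the raw sum into (0, 1) with the sigmoid function
    output = 1 / (1 + np.exp(-input_sum_with_bias))
    return output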
Activation Functions
The formula for the sigmoid function is as follows, given that z represents the weighted sum of inputs with bias added (raw output value) for this particular neuron:
σ(z) = 1 / (1 + e^(-z))
Using this formula, sigmoid can be implemented as a simple function in Python:
def sigmoid(z):
    return 1 / (1 + np.exp(-z))
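As a quick sanity check, sigmoid behaves as expected at a few reference points:
print(sigmoid(0))    # 0.5
print(sigmoid(10))   # ~0.99995, close to 1
print(sigmoid(-10))  # ~0.00005, close to 0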
The formula for the ReLU function is as follows; it sets the output equal to z if z is positive and to 0 otherwise:
ReLU(z) = max(0, z)
Using NumPy, ReLU can be implemented just as simply:
def relu(z):
    return np.maximum(0, z)
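Putting everything together, here is an illustrative run using the completed Neuron class sketched above; the feature values are made up, and the seed only makes the random initialization reproducible:
np.random.seed(42)  # reproducible random weights and bias
neuron = Neuron(n_inputs=3)
email_features = np.array([0.5, 0.2, 0.1])  # hypothetical email features
spam_probability = neuron.activate(email_features)
print(spam_probability)  # a single value in (0, 1)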
1. What is the role of the bias term in a single neuron?
2. Why do we initialize weights with small random values rather than zeros?