Loss Function
In training a neural network, it is necessary to measure how accurately the model predicts the correct results. This is done using a loss function, which calculates the difference between the model's predictions and the actual target values. The objective of training is to minimize this loss, making the predictions as close as possible to the true outputs.
For binary classification tasks, one of the most widely used loss functions is the cross-entropy loss, which is particularly effective for models that output probabilities.
Derivation of Cross-Entropy Loss
To understand the cross-entropy loss, consider the maximum likelihood principle. In a binary classification problem, the goal is to train a model that estimates the probability $\hat{y}$ that a given input belongs to class 1. The true label $y$ can take one of two values: 0 or 1.
An effective model should assign high probabilities to correct predictions. This idea is formalized through the likelihood function, which represents the probability of observing the actual data given the model's predictions.
Assuming the training examples are independent, the likelihood of a single example can be expressed as:
$$P(y \mid x) = \hat{y}^{\,y}(1 - \hat{y})^{1 - y}$$

This expression means the following:
- If $y = 1$, then $P(y \mid x) = \hat{y}$, so the model should assign a high probability to class 1;
- If $y = 0$, then $P(y \mid x) = 1 - \hat{y}$, so the model should assign a high probability to class 0.
In both cases, the objective is to maximize the probability that the model assigns to the correct class.
$P(y \mid x)$ denotes the probability of observing the actual class label $y$ given the inputs $x$.
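As a minimal sketch of how this expression behaves, the snippet below plugs in assumed values for the predicted probability and the label (y_hat and y here are illustrative values, not part of the lesson's code):

y_hat = 0.9   # assumed predicted probability that the input belongs to class 1
y = 1         # assumed true label

# P(y | x) = y_hat^y * (1 - y_hat)^(1 - y) selects the factor for the observed class
likelihood = y_hat**y * (1 - y_hat)**(1 - y)
print(likelihood)                          # 0.9 when y = 1
print(y_hat**0 * (1 - y_hat)**(1 - 0))     # 0.1 when y = 0 (reduces to 1 - y_hat)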
To simplify optimization, the log-likelihood is used instead of the likelihood function because taking the logarithm converts products into sums, making differentiation more straightforward:
$$\log P(y \mid x) = y \log(\hat{y}) + (1 - y) \log(1 - \hat{y})$$

Since training aims to maximize the log-likelihood, the loss function is defined as its negative value so that the optimization process becomes a minimization problem:
$$L = -\bigl(y \log(\hat{y}) + (1 - y) \log(1 - \hat{y})\bigr)$$

This is the binary cross-entropy loss function, commonly used for classification problems.
Given that the output variable represents $\hat{y}$ for a particular training example, and the target variable represents $y$ for this training example, this loss function can be implemented as follows:
import numpy as np

# output is the predicted probability (y_hat); target is the true label (0 or 1)
loss = -(target * np.log(output) + (1 - target) * np.log(1 - output))
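As a quick check, hypothetical values can be plugged into this line (output = 0.8 and target = 1 below are assumptions chosen purely for illustration):

import numpy as np

output = 0.8   # hypothetical predicted probability for class 1
target = 1     # hypothetical true label
loss = -(target * np.log(output) + (1 - target) * np.log(1 - output))
print(loss)    # approximately 0.223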
Why This Formula?
Cross-entropy loss has a clear intuitive interpretation:
- If $y = 1$, the loss simplifies to $-\log(\hat{y})$, meaning the loss is low when $\hat{y}$ is close to 1 and very high when $\hat{y}$ is close to 0;
- If $y = 0$, the loss simplifies to $-\log(1 - \hat{y})$, meaning the loss is low when $\hat{y}$ is close to 0 and very high when it is close to 1.
Since logarithms grow negatively large as their input approaches zero, incorrect predictions are heavily penalized, encouraging the model to make confident, correct predictions.
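To see this penalty concretely, the short loop below evaluates the loss for the case $y = 1$, i.e. $-\log(\hat{y})$, at a few assumed predicted probabilities:

import numpy as np

# Loss for y = 1 at a few illustrative predicted probabilities
for y_hat in [0.99, 0.9, 0.5, 0.1, 0.01]:
    print(y_hat, -np.log(y_hat))
# 0.99 -> ~0.01, 0.9 -> ~0.11, 0.5 -> ~0.69, 0.1 -> ~2.30, 0.01 -> ~4.61

The loss barely changes while the prediction stays near the correct class, but grows rapidly as the predicted probability approaches zero.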
If multiple examples are passed during forward propagation, the total loss is computed as the average loss across all examples:
$$L = -\frac{1}{N} \sum_{i=1}^{N} \bigl(y_i \log(\hat{y}_i) + (1 - y_i) \log(1 - \hat{y}_i)\bigr)$$

where $N$ is the number of training samples.
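With NumPy, this averaged loss can be computed in a vectorized way; the output and target arrays below are assumed to hold a small batch of predicted probabilities and true labels, purely for illustration:

import numpy as np

output = np.array([0.9, 0.2, 0.7, 0.4])   # assumed predicted probabilities
target = np.array([1, 0, 1, 0])           # assumed true labels

# Mean binary cross-entropy over the batch
loss = -np.mean(target * np.log(output) + (1 - target) * np.log(1 - output))
print(loss)  # approximately 0.30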