# Introduction to Neural Networks

## Summary

## Concept of a Neural Network

A **neuron** is the basic unit of information processing in a neural network.

**Weights** are special coefficients that determine how important each input is to the neuron.

The **process of training a neural network** consists of adjusting the weights of each neuron so that the network's outputs are as accurate as possible.

The **activation function** converts the neuron's weighted sum of inputs into its output value.

Examples of activation functions include:

- **Sigmoid Function**;
- **ReLU**;
- **Hyperbolic Tangent**.
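As an illustration, these three activation functions can be written in a few lines of Python (a standard-library-only sketch):

```python
import math

def sigmoid(x):
    # Squashes any real number into the interval (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    # Passes positive values through unchanged, zeroes out negatives
    return max(0.0, x)

def tanh(x):
    # Squashes any real number into the interval (-1, 1)
    return math.tanh(x)
```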

**Forward propagation** is the process by which information passes through the Neural Network from the input layer to the output layer. When the information reaches the output layer, the network makes a prediction or inference based on the data it has processed.
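Forward propagation through a single neuron can be sketched in Python as follows (the weights, bias, and choice of a sigmoid activation here are illustrative, not values from the course):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, weights, bias):
    # Weighted sum of the inputs plus the bias, passed through the activation
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(total)

# A two-input neuron with arbitrary illustrative parameters
output = forward([1.0, 2.0], weights=[0.5, -0.25], bias=0.1)
```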

**Backpropagation** is the process in which error information travels back through the network and is used to adjust the neurons' weights.

## Neural Network from Scratch

The **bias** allows the neuron to shift its output, adding flexibility to its modeling capability. Like the weights, the bias of a neuron is a trainable parameter.

A **Multilayer Perceptron** consists of several types of layers:

- **Input layer:** receives the input data;
- **Hidden layers:** process the data and extract patterns;
- **Output layer:** produces the final prediction or classification.

Each layer consists of **multiple neurons**, and the output from one layer becomes the input for the next layer.
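A sketch of how one layer's outputs become the next layer's inputs (the layer sizes, weights, and sigmoid activation below are arbitrary illustrative choices):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(inputs, weights, biases):
    # Each neuron computes its own weighted sum plus bias, then the activation
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [1.0, 0.5]
# Hidden layer: 3 neurons, each with 2 weights (one per input)
hidden = layer_forward(x, [[0.2, -0.4], [0.7, 0.1], [-0.5, 0.9]], [0.0, 0.1, -0.2])
# Output layer: 1 neuron that takes the 3 hidden outputs as its inputs
output = layer_forward(hidden, [[0.3, -0.6, 0.8]], [0.05])
```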

We can split **backpropagation algorithm** into several steps:

- **Forward Propagation**;
- **Error Computing**;
- **Calculating the Gradient (Delta)**;
- **Modifying Weights and Biases (Taking a Step in Gradient Descent)**.
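The four steps above can be sketched as a training loop for a single sigmoid neuron (an illustrative example that learns the logical OR function with a mean-squared-error-style gradient; not the course's original code):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy dataset: the logical OR function
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w = [0.0, 0.0]   # weights
b = 0.0          # bias
lr = 0.5         # learning rate

for _ in range(5000):
    for x, target in data:
        # 1. Forward propagation
        y = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        # 2. Error computing
        error = y - target
        # 3. Calculating the gradient (delta): error times the sigmoid derivative
        delta = error * y * (1 - y)
        # 4. Modifying weights and bias: a step in gradient descent
        w[0] -= lr * delta * x[0]
        w[1] -= lr * delta * x[1]
        b -= lr * delta
```

After training, the neuron's rounded outputs match the OR truth table.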

The **learning rate** is an integral component of the **gradient descent** algorithm; it can be visualized as the pace of training. A higher learning rate accelerates the training process, but an excessively high rate might cause the neural network to overlook valuable insights and patterns within the data.
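The effect of the learning rate can be seen on a toy one-dimensional problem, minimizing f(x) = x² (the step count and the two rates below are illustrative):

```python
def gradient_descent(lr, steps=20, x=5.0):
    # Minimize f(x) = x**2, whose gradient is 2*x
    for _ in range(steps):
        x -= lr * 2 * x
    return x

small_lr = gradient_descent(lr=0.1)   # moves steadily toward the minimum at 0
large_lr = gradient_descent(lr=1.1)   # each step overshoots, so |x| keeps growing
```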

There are many different ways to **calculate the quality of a model**:

- **Accuracy**;
- **Mean Squared Error (MSE)**;
- **Cross-Entropy**;
- and many others...
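Minimal Python sketches of the first three metrics (illustrative implementations; the cross-entropy version assumes binary labels and predicted probabilities strictly between 0 and 1):

```python
import math

def accuracy(y_true, y_pred):
    # Fraction of predictions that match the labels
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    # Mean of the squared differences
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def binary_cross_entropy(y_true, y_prob):
    # Heavily penalizes confident but wrong probability estimates
    return -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
                for t, p in zip(y_true, y_prob)) / len(y_true)
```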

**Creating a model:**

**Training a model:**

**Predict output values:**
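The course's code for these three steps is omitted above; as a stand-in, here is a minimal from-scratch sketch of a single-neuron model that shows all three (the class name `TinyNetwork` and every hyperparameter are illustrative, not from the course):

```python
import math
import random

class TinyNetwork:
    """A minimal single-neuron 'network' (an illustrative sketch)."""

    def __init__(self, n_inputs, lr=0.5, seed=0):
        # Creating a model: random weights and a zero bias
        rng = random.Random(seed)
        self.w = [rng.uniform(-1, 1) for _ in range(n_inputs)]
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        # Predicting output values: forward propagation
        total = sum(w * xi for w, xi in zip(self.w, x)) + self.b
        return 1.0 / (1.0 + math.exp(-total))

    def train(self, data, epochs=3000):
        # Training a model: gradient descent via backpropagation
        for _ in range(epochs):
            for x, target in data:
                y = self.predict(x)
                delta = (y - target) * y * (1 - y)
                self.w = [w - self.lr * delta * xi
                          for w, xi in zip(self.w, x)]
                self.b -= self.lr * delta

net = TinyNetwork(n_inputs=2)
net.train([([0, 0], 0), ([1, 1], 1)])
```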

## Conclusion

What to look for when choosing between **Traditional Models** and **Neural Networks**:

- **Dataset Size**;
- **Complexity of a Problem**;
- **Interpretability**;
- **Resources**.

The most commonly used **types of neural networks**:

- **Feedforward Neural Networks (FNN)** or **Multi-layer Perceptrons (MLP)**;
- **Convolutional Neural Networks (CNN)**;
- **Recurrent Neural Networks (RNN)**;
- **Autoencoders (AE)**;
- **Generative Adversarial Networks (GAN)**;
- **Modular Neural Networks (MNN)**.

**Libraries** for Deep Learning:

- **TensorFlow**;
- **PyTorch**.
