How do we humans train ourselves? Well, the simple answer: by experience. But how do we experience? The way we react to inputs shapes our experience.
We reinforce our reactions to those inputs until they reach the desired levels (set by society, or by our parents). Simple. In just the same way, a neural network is trained by feeding it inputs whose weights are initialised at random.
The hidden layers react according to the activation function, and the output produced is compared with the desired outcome.
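To make this concrete, here is a minimal sketch of the idea in Python. The network shape (2 inputs, 3 hidden units, 1 output) and the sigmoid activation are my own illustrative choices, not anything prescribed above:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # Activation function: squashes any real number into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical tiny network: 2 inputs -> 3 hidden units -> 1 output,
# with weights initialised at random, as described above.
W1 = rng.normal(size=(2, 3))
W2 = rng.normal(size=(3, 1))

def forward(x):
    hidden = sigmoid(x @ W1)       # hidden layer reacts via the activation
    output = sigmoid(hidden @ W2)  # predicted outcome
    return output

x = np.array([0.5, -1.0])
print(forward(x))  # prediction, to be compared with the desired outcome
```

With random weights, this prediction will usually be far from the desired outcome; measuring how far is the next step.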
The difference between the predicted outcome (by the machine) and the desired outcome is measured by what is technically known as the loss function, which needs to be minimised.
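One common choice of loss function, used here purely as an example, is the mean squared error:

```python
import numpy as np

def mse_loss(predicted, desired):
    # Mean squared error: the average squared difference between
    # the machine's prediction and the desired outcome.
    return float(np.mean((np.asarray(predicted) - np.asarray(desired)) ** 2))

# Hypothetical prediction vs. desired outcome.
print(mse_loss([0.8, 0.2], [1.0, 0.0]))  # small number; 0 would mean a perfect match
```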
This particular process is known as the forward pass, or feed-forward propagation. But the predicted outcome seldom conforms to the desired outcome. By now you are aware that the weights of the inputs play a critical role in deciding the machine's prediction.
To complete the training, the weights are calibrated by the method of backward propagation, repeated until the difference between the predicted outcome (by the machine) and the desired outcome is minimised. This completes the training of the neural network.
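The loop of "predict, measure the loss, nudge the weights, repeat" can be sketched with a single-weight toy model. The numbers here (the training pair, starting weight, and learning rate) are made up for illustration, and the gradient is computed by hand; backpropagation is the technique that automates this chain-rule calculation for networks with many layers:

```python
# Toy model y = w * x, trained by repeatedly reducing the loss.
x, desired = 2.0, 6.0   # hypothetical training pair (the ideal w is 3)
w = 0.1                 # randomly chosen starting weight
lr = 0.05               # learning rate: how big each nudge is

for _ in range(100):
    predicted = w * x
    grad = 2 * (predicted - desired) * x  # d(loss)/dw for squared loss
    w -= lr * grad                        # nudge w to shrink the loss

print(round(w, 3))  # converges close to 3.0
```

Each pass shrinks the gap between prediction and desired outcome, which is exactly the minimisation described above.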
Well, we still need to take up the question of how exactly the weights are calibrated. That is backpropagation, and we will talk about it later. Until then, goodbye.
To know more, you can email smartsubu2020@gmail.com.