This 60-second short video follows a supervised learning model, specifically a feedforward neural network, as it is trained to recognize handwritten digits.
The network has three layers (a code sketch of this layout follows the list):
- An input layer with 9 neurons, which receives data from a nine-segment sensor pad.
- A hidden layer with 9 neurons.
- An output layer with 2 neurons, which can light two bulbs to represent the digits 0 or 1.
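For readers who want to see the same layout in code, below is a minimal sketch of a 9-9-2 feedforward network in PyTorch. The choice of framework and the sigmoid activation are assumptions; the video illustrates the network with a sensor pad, knobs, and bulbs rather than software.

```python
import torch.nn as nn

# Hypothetical 9-9-2 feedforward network mirroring the video's layout:
# 9 sensor-pad inputs -> 9 hidden neurons -> 2 output "bulbs" (digit 0 or 1).
model = nn.Sequential(
    nn.Linear(9, 9),   # input layer -> hidden layer
    nn.Sigmoid(),      # assumed activation; the video does not specify one
    nn.Linear(9, 2),   # hidden layer -> two output neurons
)
```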
The video shows the model being trained on labeled data points, each consisting of a handwritten input and its corresponding label (i.e., the correct answer). Initially, the model gets most of the classifications wrong, and a backpropagation algorithm adjusts knobs, which represent the network's parameters, to improve the model's accuracy.
The demo illustrates the training phase of a neural network: when the model guesses wrong, the training algorithm computes a loss between the correct answer and the model's guess, and backpropagation uses that loss to adjust the parameters so that future guesses are more accurate.
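This loop can be sketched in code as well. The snippet below is illustrative only: the data are random placeholders standing in for the handwritten sensor-pad readings, and the cross-entropy loss, SGD optimizer, and learning rate are assumptions rather than details taken from the video.

```python
import torch
import torch.nn as nn

# Same assumed 9-9-2 network as in the earlier sketch.
model = nn.Sequential(nn.Linear(9, 9), nn.Sigmoid(), nn.Linear(9, 2))

# Placeholder training data: 16 random nine-segment patterns with 0/1 labels.
inputs = torch.rand(16, 9)
labels = torch.randint(0, 2, (16,))

loss_fn = nn.CrossEntropyLoss()                          # gap between the guess and the correct answer
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # turns the "knobs" (parameters)

for epoch in range(100):
    optimizer.zero_grad()
    guesses = model(inputs)           # forward pass: the model's current guesses
    loss = loss_fn(guesses, labels)   # loss between the guesses and the correct labels
    loss.backward()                   # backpropagation: gradient of the loss for each parameter
    optimizer.step()                  # nudge the parameters to reduce the loss
```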
Once trained, the network can be used for "inference", where the parameters are no longer adjusted.
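A minimal sketch of the inference step, assuming the PyTorch model from the sketches above has already been trained: the parameters stay fixed, and the network simply maps a new sensor-pad reading to one of the two output neurons.

```python
import torch
import torch.nn as nn

# The 9-9-2 network from above; in practice its weights would already be trained.
model = nn.Sequential(nn.Linear(9, 9), nn.Sigmoid(), nn.Linear(9, 2))
model.eval()

new_pattern = torch.rand(1, 9)        # one hypothetical nine-segment reading
with torch.no_grad():                 # inference: no gradients, no parameter updates
    logits = model(new_pattern)
    predicted_digit = logits.argmax(dim=1).item()  # index of the brighter "bulb": 0 or 1
print(predicted_digit)
```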