Gradient Descent



Supported by

Carl Zeiss Foundation

A treasure chest lies at the deepest point of the seabed. You have a limited number of tries to find it. Special probes will help you. They tell you how steep and deep the seabed is at the current position.

Just as humans can learn from mistakes, so can neural networks. When a computation differs from the expected solution, a neural network can measure this error and reduce it over further tries. That is, it searches for the minimum of the error function (in our game, the deepest spot on the seabed, where the treasure chest lies). This minimum is approached by taking samples (here: sending down probes). This method is called "gradient descent", and it is used to find a local or, ideally, the global minimum. It is one of the most important methods in AI.
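The idea above can be sketched in a few lines of Python. This is a minimal, hypothetical example, not the exhibit's actual source code: the "seabed" is an invented depth function with its deepest point at x = 2, and each probe reports the slope (the gradient) at the current position. Repeatedly stepping downhill against the slope approaches the minimum.

```python
def seabed(x):
    # hypothetical seabed profile; its deepest point is at x = 2
    return (x - 2) ** 2 + 1

def slope(x):
    # derivative of seabed(x): how steep the seabed is at position x
    return 2 * (x - 2)

def gradient_descent(start, learning_rate=0.1, tries=50):
    x = start
    for _ in range(tries):             # a limited number of probes
        x -= learning_rate * slope(x)  # step downhill, against the slope
    return x

x_min = gradient_descent(start=10.0)
print(round(x_min, 3))  # close to 2, the deepest point
```

The learning rate controls the step size: too small and the limited tries run out before reaching the bottom, too large and the steps overshoot the minimum.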
