In this representation there is no bias term: the green arrow represents the model weights, which define a hyperplane orthogonal to them, centered at the coordinates [0,0]. In each iteration the sample being tested is shown in orange. If the sample lies on the wrong side of the hyperplane, the vector representing the sample is shown; the red vector is that same vector rescaled by the learning rate, and it is added to the weight vector. Then the next sample is tested.
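The update described above can be sketched in a few lines. This is a minimal, hypothetical helper (the function name `perceptron_step` and the toy samples are illustrative, not from the original demo), assuming labels in {-1, +1} and a hyperplane through the origin:

```python
def perceptron_step(w, x, y, lr=0.1):
    """One update of the bias-free perceptron rule.

    The weights w define a hyperplane through the origin [0, 0].
    If the sample x (with label y in {-1, +1}) lands on the wrong
    side, the sample vector rescaled by the learning rate lr is
    added to (or subtracted from) the weight vector.
    """
    activation = sum(wi * xi for wi, xi in zip(w, x))
    if y * activation <= 0:  # sample is on the wrong side of the hyperplane
        w = [wi + lr * y * xi for wi, xi in zip(w, x)]
    return w

# Toy run with two linearly separable samples (illustrative data)
w = [0.0, 1.0]
samples = [([1.0, -1.0], 1), ([-1.0, 1.0], -1)]
for _ in range(10):
    for x, y in samples:
        w = perceptron_step(w, x, y)

# After a few passes both samples sit on the correct side
converged = all(y * sum(wi * xi for wi, xi in zip(w, x)) > 0
                for x, y in samples)
```

Note that without a bias the hyperplane is forced through the origin, so this sketch can only separate data that is linearly separable around [0,0].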
Download this example here: activation_functions.ipynb
Download this example here: mlp_example_linear.ipynb
Download this example here: mlp_example_sin.ipynb (it also needs the file mlp.py)
Download this example here: mlp_example_sin_cos.ipynb (it also needs the file mlp.py)
Download this example here: generalization.ipynb
Download this example here: error_surface.ipynb
Or visualize it better here: nbviewer