The Perceptron Algorithm

But one question still remains — how do we tweak the weights optimally? We can’t just play around randomly with the weights until the correct combination magically pops up. There needs to be a way to guarantee that the perceptron improves its performance over time.

This is where the Perceptron Algorithm comes in. The math behind why this works is outside the scope of this lesson, so we’ll directly apply the algorithm to optimally tweak the weights and nudge the perceptron towards zero error.

The most important part of the algorithm is the update rule where the weights get updated:

weight = weight + (error * input)

We keep tweaking the weights until the perceptron predicts the correct label for every point in the training set. This means that multiple passes might need to be made through the training_set before the Perceptron Algorithm comes to a halt.
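
To see the algorithm end to end before implementing it inside the class, here is a minimal standalone sketch. It is not the lesson's Perceptron class; the dictionary data format (each 2-D point mapped to its +1/-1 label), the starting weights, and the sign activation are assumptions made for illustration only.

def train_weights(training_set):
    # Arbitrary starting weights, chosen so the first pass makes a mistake.
    weights = [1, -1]
    found_line = False
    while not found_line:
        total_error = 0
        for inputs, label in training_set.items():
            # Predict the label as the sign of the weighted sum.
            weighted_sum = sum(w * x for w, x in zip(weights, inputs))
            prediction = 1 if weighted_sum >= 0 else -1
            error = label - prediction
            total_error += abs(error)
            # The update rule: weight = weight + (error * input)
            for i in range(len(weights)):
                weights[i] += error * inputs[i]
        if total_error == 0:
            # A full pass with no mistakes: a separating line has been found.
            found_line = True
    return weights

# A tiny linearly separable set of points; training stops after two passes here.
toy_set = {(1, 1): 1, (2, 3): 1, (-1, -2): -1, (-3, -1): -1}
print(train_weights(toy_set))  # [5, 5]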

In this exercise, you will continue to work on the .training() method. We have made the following changes to this method from the last exercise (a sketch of the updated method follows the list):

  • foundLine = False (a boolean that indicates whether the perceptron has found a line to separate the positive and negative labels)
  • while not foundLine: (a while loop that continues to train the perceptron until the line is found)
  • total_error = 0 (to count the total error the perceptron makes in each round)
  • total_error += abs(error) (to update the total error the perceptron makes in each round)
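
Putting those changes together, the method you are starting from looks roughly like the sketch below. The helpers self.weighted_sum() and self.activation(), and the fact that training_set maps each point to its label, are assumptions carried over from earlier exercises rather than something shown here; the commented lines mark where the numbered steps below will go.

def training(self, training_set):
    foundLine = False                 # no separating line found yet
    while not foundLine:              # keep making passes until one is found
        total_error = 0               # mistakes made during this pass
        for inputs in training_set:
            prediction = self.activation(self.weighted_sum(inputs))
            actual = training_set[inputs]
            error = actual - prediction
            total_error += abs(error)   # accumulate this pass's error
            # Steps 2 and 3: update each weight here
        # Step 1: set foundLine once a pass produces no error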

Instructions

1.

If the algorithm doesn’t find an error, the perceptron must have correctly predicted the labels for all points.

Outside the for loop (but inside the while loop), change the value of foundLine to True if total_error equals 0.
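
Relative to the sketch above, that check sits at the same indentation level as total_error = 0, after the for loop has completed a full pass:

        # after the for loop, still inside the while loop
        if total_error == 0:
            foundLine = True   # every point was predicted correctly; stop training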

2.

In order to update the weight for each input, create another for loop (inside the existing for loop) that iterates a loop variable i through range(self.num_inputs).
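
In the sketch above, this inner loop sits just after total_error is updated; its body is filled in by step 3:

            for i in range(self.num_inputs):
                pass   # replaced by the update rule in step 3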

3.

Inside the second for loop, update each weight self.weights[i] by applying the update rule:

weight = weight + (error * input)
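
Combining steps 2 and 3, and assuming inputs is the current training point (a tuple holding self.num_inputs values, as in the earlier sketch):

            for i in range(self.num_inputs):
                # weight = weight + (error * input)
                self.weights[i] += error * inputs[i]
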
4.

Great job! Now give it a try for yourself.

Train cool_perceptron using small_training_set.

You can also print out the optimal weights to see for yourself!
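
Assuming cool_perceptron and small_training_set are already defined in the exercise workspace (their exact contents are not shown in this excerpt), the final step might look like:

cool_perceptron.training(small_training_set)
print(cool_perceptron.weights)   # the weights the algorithm settled on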
