Now that we have our training set, we can start feeding inputs into the perceptron and comparing the actual outputs against the expected labels!
Whenever the output does not match the expected label, we say that the perceptron has made a training error, a quantity that measures how badly the perceptron is performing.
As mentioned in the last exercise, the goal is to nudge the perceptron towards zero training error. The training error is calculated by subtracting the predicted label value from the actual label value.
For each point in the training set, the perceptron outputs either a +1 or a -1 (since we are using the Sign Activation Function). Because the labels are also either +1 or -1, there are four different possibilities for the error the perceptron makes:

- actual +1, predicted +1: error 0
- actual +1, predicted -1: error 2
- actual -1, predicted +1: error -2
- actual -1, predicted -1: error 0
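Since the error is simply actual - prediction, the four cases can be enumerated directly. This is a quick illustrative sketch, not part of the lesson's code:

```python
# Enumerate every (actual, prediction) pairing of +1/-1 labels and
# compute the training error as actual - prediction.
for actual in (1, -1):
    for prediction in (1, -1):
        error = actual - prediction
        print(f"actual: {actual:+d}  prediction: {prediction:+d}  error: {error:+d}")
```

Notice that the error is 0 whenever the prediction is correct, and +2 or -2 otherwise.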
These training error values will be crucial in improving the perceptron’s performance as we will see in the upcoming exercises.
Inside the .training() method, let's find the perceptron's error on each inputs.

First, we need the perceptron's predicted output for a point. Inside the for loop, create a variable called prediction and assign it the correct label value using inputs in a single statement.

Create a variable named actual and assign it the actual label for each inputs.

Create a variable called error and assign it the value of actual - prediction.
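Put together, the loop body might look like the sketch below. The class shape, the helper names weighted_sum() and activation(), the starting weights, and the training set being a dictionary mapping each inputs tuple to its label are all assumptions for illustration, not the lesson's actual implementation; the sketch also returns the errors just so it can be inspected:

```python
class Perceptron:
    """Minimal perceptron sketch; helper names and defaults are assumptions."""

    def __init__(self, weights=None):
        # Two-input perceptron with hypothetical starting weights
        self.weights = weights if weights is not None else [1, 1]

    def weighted_sum(self, inputs):
        # Dot product of the weights and the input coordinates
        return sum(w * x for w, x in zip(self.weights, inputs))

    def activation(self, weighted_sum):
        # Sign Activation Function: +1 for non-negative sums, -1 otherwise
        return 1 if weighted_sum >= 0 else -1

    def training(self, training_set):
        errors = []
        for inputs in training_set:
            # Predicted label for this point, in a single statement
            prediction = self.activation(self.weighted_sum(inputs))
            # Actual label stored in the training set
            actual = training_set[inputs]
            # Training error: actual - prediction
            error = actual - prediction
            errors.append(error)
        return errors  # returned only to make the sketch inspectable


perceptron = Perceptron()
print(perceptron.training({(1, 1): 1, (-2, -2): -1}))
```

With starting weights [1, 1], the point (1, 1) has weighted sum 2 and prediction +1, while (-2, -2) has weighted sum -4 and prediction -1, so both errors in this toy training set are 0.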