### Learn

Now that we have our training set, we can start feeding inputs into the perceptron and comparing the actual outputs against the expected labels!

Every time the output does not match the expected label, we say that the perceptron has made a training error, a quantity that measures how badly the perceptron is performing.

As mentioned in the last exercise, the goal is to nudge the perceptron towards zero training error. The training error is calculated by subtracting the predicted label value from the actual label value.

$\text{training error} = \text{actual label} - \text{predicted label}$
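For instance, here is a minimal sketch of this subtraction in Python; the variable names are purely illustrative and not part of the lesson's code:

```python
# A single labeled point: the expected label and the perceptron's prediction.
actual = 1       # expected label from the training set
prediction = -1  # label the perceptron produced for this point

# The training error is simply the difference between the two.
error = actual - prediction
print(error)  # prints 2, signalling a misclassification
```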

For each point in the training set, the perceptron produces either a +1 or a -1 (as we are using the Sign Activation Function). Since the labels are also either +1 or -1, there are four different possibilities for the error the perceptron makes:

| Actual | Predicted | Training Error |
| ------ | --------- | -------------- |
| +1     | +1        | 0              |
| +1     | -1        | 2              |
| -1     | -1        | 0              |
| -1     | +1        | -2             |
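If you want to check the table yourself, a short standalone loop over the four label combinations (not part of the exercise code) reproduces every row:

```python
from itertools import product

# Try every combination of actual and predicted labels a Sign Activation
# Function can produce, and print the resulting training error.
for actual, prediction in product([1, -1], repeat=2):
    error = actual - prediction
    print(f"actual: {actual:+d}, predicted: {prediction:+d}, error: {error}")
```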

These training error values will be crucial in improving the perceptron’s performance, as we will see in the upcoming exercises.

### Instructions

1.

In the .training() method, let’s find the perceptron’s error for each inputs in training_set.

First, we need the perceptron’s predicted output for a point. Inside the for loop, create a variable called prediction and assign it the predicted label, computed from .activation(), .weighted_sum(), and inputs in a single statement.

2.

Create a variable named actual and assign it the actual label for each inputs in training_set.

3.

Create a variable called error and assign it the value of actual - prediction.
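Putting the three steps together, the loop body might look like the sketch below. It assumes training_set is a dictionary mapping each inputs tuple to its +1/-1 label and that the Perceptron class already defines .weighted_sum() and .activation(); the exact class in your workspace may differ, so treat this as a reference rather than the official solution.

```python
class Perceptron:
    def __init__(self, num_inputs=2, weights=(1, 1)):
        self.num_inputs = num_inputs
        self.weights = weights

    def weighted_sum(self, inputs):
        # Dot product of the weights with the inputs of a single point.
        return sum(self.weights[i] * inputs[i] for i in range(self.num_inputs))

    def activation(self, weighted_sum):
        # Sign Activation Function: +1 for non-negative sums, -1 otherwise.
        return 1 if weighted_sum >= 0 else -1

    def training(self, training_set):
        for inputs in training_set:
            # 1. The perceptron's predicted label for this point.
            prediction = self.activation(self.weighted_sum(inputs))
            # 2. The actual label, assuming training_set maps inputs -> label.
            actual = training_set[inputs]
            # 3. The training error for this point.
            error = actual - prediction


# Hypothetical usage with a tiny hand-made training set.
training_set = {(0, 3): 1, (3, 0): -1, (0, -3): -1, (-3, 0): 1}
Perceptron().training(training_set)
```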