Perceptron

Perceptron Bias Term

The bias term is an adjustable, numerical term added to a perceptron’s weighted sum of inputs and weights that can increase classification model accuracy.

The addition of the bias term is helpful because it serves as another model parameter (in addition to weights) that can be tuned to make the model’s performance on training data as good as possible.

The bias's input value is fixed at 1, while its weight is adjustable and is tuned during training like any other weight.

weighted_sum = x1*w1 + x2*w2 + x3*w3 + 1*wbias
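
A minimal sketch of the formula above with concrete values; the inputs and weights here are hypothetical, not trained. Appending a constant 1 to the inputs makes the bias just another weighted term:

inputs = [2.0, 5.0, 3.0]       # x1, x2, x3 (hypothetical feature values)
weights = [0.5, -1.0, 0.25]    # w1, w2, w3 (hypothetical weights)
w_bias = 4.0                   # the adjustable bias weight

# The bias input is fixed at 1, so the bias is one more product in the sum.
weighted_sum = sum(x * w for x, w in zip(inputs + [1], weights + [w_bias]))
print(weighted_sum)  # 0.5*2.0 + (-1.0)*5.0 + 0.25*3.0 + 4.0*1 = 0.75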

Perceptrons as Linear Classifiers

At the end of successful training, a perceptron is able to act as a linear classifier between data samples. It finds this decision boundary by using the linear combination (or weighted sum) of each sample's features. The perceptron separates the training data set into two distinct classes of samples, bounded by the linear classifier.

An example of perceptrons as linear classifiers

[Figure: An animated graph labeled 'Iteration: All'. A straight blue decision-boundary line shifts its position and angle each training iteration until it cleanly separates 12 green plus signs (below the line) from 17 red minus signs (above the line).]
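
To sketch how the weighted sum defines that line: with two features and a bias, the decision boundary is the set of points where the weighted sum equals zero. The weights below are hypothetical, chosen only to produce a line within the plot's range:

w1, w2, w_bias = 1.0, 1.0, -40.0   # hypothetical learned parameters

# The boundary satisfies w1*x1 + w2*x2 + w_bias == 0,
# so solving for x2 gives the line x2 = -(w1*x1 + w_bias) / w2.
for x1 in [0, 20, 40]:
    x2 = -(w1 * x1 + w_bias) / w2
    print(f"boundary passes through ({x1}, {x2})")

# Samples on one side of this line give a positive weighted sum (one class);
# samples on the other side give a negative sum (the other class).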

Adjusting Perceptron Weights

The main goal of a perceptron is to make accurate classifications. To train a model to do this, the perceptron's weights must be optimized for the specific classification task at hand.

The best weight values can be found by training a perceptron on labeled training data, in which every data sample is assigned an appropriate label. Each label is compared to the perceptron's output for that sample, and the weights are adjusted based on the difference. Once this is done, a better classification model is created!

training_set = {(18, 49): -1, (2, 17): 1, (24, 35): -1, (14, 26): 1, (17, 34): -1}
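
As a sketch of that comparison, the loop below scores naive, untrained starting weights against the labeled set; the mismatches it prints are exactly what drives the weight adjustments (the weights, bias, and predict() helper are hypothetical):

weights, w_bias = [1.0, 1.0], 0.0   # naive starting values, not trained

def predict(inputs):
    # Weighted sum followed by a sign activation (+1 or -1).
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + w_bias
    return 1 if weighted_sum > 0 else -1

training_set = {(18, 49): -1, (2, 17): 1, (24, 35): -1, (14, 26): 1, (17, 34): -1}
for inputs, actual_label in training_set.items():
    print(inputs, "predicted:", predict(inputs), "actual:", actual_label)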

Perceptron Weighted Sum

The first step in the perceptron classification process is calculating the weighted sum of the perceptron’s inputs and weights.

To do this, multiply each input value by its respective weight and then add all of these products together. This sum gives an appropriate representation of the inputs based on their importance.

inputs = [x1, x2, x3]     # one numerical value per feature
weights = [w1, w2, w3]    # one weight per feature
weighted_sum = x1*w1 + x2*w2 + x3*w3
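
The same calculation as a runnable sketch with concrete (hypothetical) values:

inputs = [24, 55, 3]           # hypothetical feature values
weights = [0.5, -0.2, 3.0]     # hypothetical weights

# Multiply each input by its weight, then add the products together.
weighted_sum = sum(x * w for x, w in zip(inputs, weights))
print(weighted_sum)  # 0.5*24 + (-0.2)*55 + 3.0*3 = 10.0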

Optimizing Perceptron Weights

To increase the accuracy of a perceptron's classifications, its weights need to be adjusted slightly in the direction of decreasing training error. Repeating these small adjustments eventually minimizes the training error and therefore produces optimized weight values.

Each weight is appropriately updated with this formula:

weight = weight + (error * input)
An example graph of optimizing perceptron weights.

[Figure: A U-shaped cost curve J(w) plotted against weight w. From an 'Initial weight' point on the upper right, arrows follow the gradient down the curve toward the 'Global cost minimum', J_min(w), at the bottom.]
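
A minimal sketch of one application of the update rule, using one sample from the training set above and hypothetical current weights:

inputs = [18, 49]          # one training sample
weights = [0.3, -0.1]      # hypothetical current weights
error = -2                 # actual_label (-1) minus predicted_label (+1)

# Nudge each weight in the direction that reduces the error.
for i in range(len(weights)):
    weights[i] = weights[i] + (error * inputs[i])

print(weights)  # [0.3 + (-2*18), -0.1 + (-2*49)] = [-35.7, -98.1]

Many implementations also scale the update by a small learning rate so each step stays small; the formula above omits it for simplicity.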

Introduction to Perceptrons

Perceptrons are the building blocks of neural networks. They are artificial models of biological neurons that simulate the task of decision-making. Perceptrons aim to solve binary classification problems given their input.

The idea of the perceptron is rooted in the words perception (the ability to sense something) and neuron (a nerve cell in the human brain that turns sensory input into meaningful information).

An example of how a perceptron works.

[Figure: Inputs x1, x2, …, xn feed through weights w1, w2, …, wn into a unit that computes the weighted sum (Σ) and applies an activation function (f), producing the output y.]

Perceptron Activation Functions

The second step of the perceptron classification process involves an activation function. One of these special functions is applied to the weighted sum of inputs and weights to constrain perceptron output to a value in a certain range, depending on the problem.

Some example ranges are [0,1], [-1,1], [0,100].

The sign activation function is a common activation function that constrains the perceptron output to be either 1 or -1 (see the sketch after these rules):

  • If weighted sum > 0, return 1.
  • If weighted sum < 0, return -1.
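
A minimal sketch of the sign activation function (the behavior at a weighted sum of exactly 0 is a matter of convention; -1 is used here):

def sign_activation(weighted_sum):
    # Return +1 for positive sums and -1 otherwise.
    return 1 if weighted_sum > 0 else -1

print(sign_activation(10.0))   # 1
print(sign_activation(-2.5))   # -1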

Perceptron Training Error

Training error measures how accurate a perceptron's classification is for a specific training data sample. It essentially measures "how badly" the perceptron is performing and helps determine what adjustments need to be made to the weights to improve classification accuracy on that sample.

training_error = actual_label - predicted_label

The goal of a perceptron is to have a training error of 0; this indicates that a perceptron is performing well on a data sample.

Actual Label | Predicted Label | Training Error
+1           | +1              |  0
+1           | -1              |  2
-1           | -1              |  0
-1           | +1              | -2
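
A short sketch reproducing the table rows with the formula above:

cases = [(1, 1), (1, -1), (-1, -1), (-1, 1)]   # (actual, predicted) pairs

for actual_label, predicted_label in cases:
    training_error = actual_label - predicted_label
    print(actual_label, predicted_label, training_error)
# Prints errors 0, 2, 0, -2 -- any nonzero error signals a misclassification.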

Perceptron Main Components

Perceptrons use three main components for classification (combined in the sketch after this list):

  • Input: Numerical input values correspond to features, e.g. [22, 130] could represent a person's age and weight features.

  • Weights: Each feature is assigned a weight that determines the feature's importance, e.g. in a class, homework might be weighted 30% but a final exam 50%, so the final matters more to the overall grade (output).

  • Output: This is computed using the inputs and weights. The output is either binary (1 or 0) or a value in a continuous range (e.g. 70-90).
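
A minimal sketch combining the three components into one class; this is illustrative, with hypothetical names and starting values, not a definitive implementation:

class Perceptron:
    def __init__(self, num_inputs):
        self.weights = [1.0] * num_inputs   # one weight per input feature
        self.bias_weight = 1.0              # weight for the fixed bias input of 1

    def weighted_sum(self, inputs):
        total = sum(x * w for x, w in zip(inputs, self.weights))
        return total + 1 * self.bias_weight

    def activation(self, weighted_sum):
        return 1 if weighted_sum > 0 else -1   # sign activation

    def predict(self, inputs):
        return self.activation(self.weighted_sum(inputs))

print(Perceptron(2).predict([22, 130]))   # 1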

An example of the main components of perceptrons.

[Figure: The same perceptron diagram as above: inputs, weights, weighted sum, activation function, and output y.]
