Learn

Once we have a confusion matrix, there are a few different statistics we can use to summarize the four values in the matrix. These include accuracy, precision, recall, and F1 score. We won’t go into much detail about these metrics here, but a quick summary is shown below (T = true, F = false, P = positive, N = negative). For all of these metrics, a value closer to 1 is better and closer to 0 is worse.

  • Accuracy = (TP + TN)/(TP + FP + TN + FN)
  • Precision = TP/(TP + FP)
  • Recall = TP/(TP + FN)
  • F1 score = (2 × Precision × Recall)/(Precision + Recall), the harmonic mean of precision and recall (worked through in the sketch below)
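
As a quick check of these formulas, here is a small worked example in Python. The counts below (TP = 4, FN = 1, FP = 2, TN = 3) are hypothetical, chosen only for illustration; they are not from the lesson's data.

# Hypothetical confusion-matrix counts (illustration only)
TP, FN, FP, TN = 4, 1, 2, 3

accuracy = (TP + TN) / (TP + FP + TN + FN)          # 7/10 = 0.7
precision = TP / (TP + FP)                          # 4/6 ≈ 0.67
recall = TP / (TP + FN)                             # 4/5 = 0.8
f1 = 2 * precision * recall / (precision + recall)  # ≈ 0.73

print(accuracy, precision, recall, f1)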

In sklearn, we can calculate these metrics as follows:

# accuracy:
from sklearn.metrics import accuracy_score
print(accuracy_score(y_true, y_pred))
# output: 0.7

# precision:
from sklearn.metrics import precision_score
print(precision_score(y_true, y_pred))
# output: 0.67

# recall:
from sklearn.metrics import recall_score
print(recall_score(y_true, y_pred))
# output: 0.8

# F1 score:
from sklearn.metrics import f1_score
print(f1_score(y_true, y_pred))
# output: 0.73
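
The snippet above assumes y_true and y_pred already exist. To make it runnable end to end, here is one hypothetical pair of label lists that happens to reproduce those outputs (TP = 4, FN = 1, FP = 2, TN = 3); the lesson's actual labels live in the workspace.

from sklearn.metrics import confusion_matrix

# Hypothetical labels (illustration only), giving TP = 4, FN = 1, FP = 2, TN = 3
y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 0, 1, 1, 0, 0, 0]

print(confusion_matrix(y_true, y_pred))
# output:
# [[3 2]
#  [1 4]]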

Instructions

1.

In the workspace, we’ve fit the same logistic regression model on the codecademyU training data and made predictions for the test data. y_pred contains the predicted classes and y_test contains the true classes.

Also, note that we’ve changed the train-test split (by using a different value for the random_state parameter), making the confusion matrix different from the one you saw in the previous exercise.

Calculate the accuracy for the model and print it out.
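
If you want to check your work, here is a minimal sketch, assuming the workspace defines y_test and y_pred as described above:

from sklearn.metrics import accuracy_score

# Fraction of test predictions that match the true classes
print(accuracy_score(y_test, y_pred))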

2.

Calculate the F1 score for the model and print it out.
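
Again as a minimal sketch, under the same assumption that y_test and y_pred are defined:

from sklearn.metrics import f1_score

# Harmonic mean of precision and recall on the test set
print(f1_score(y_test, y_pred))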
