TLDR
In this Mage Academy lesson on model evaluation, we’ll learn how to calculate recall using a confusion matrix and understand what it means for a model to have a high or low recall value.
Glossary
Definition
Calculation
How to code
Definition
Recall is a classification metric used to measure your model’s performance. It’s calculated as the number of true positives divided by the total number of actual positives. The name fits: recall measures how much of the past (the actual positive cases) the model can remember, or recall. For a binary classification model, where there are only two possible predictions, recall is also known as sensitivity.
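As a formula:
Recall = True Positives / (True Positives + False Negatives)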
A high recall value means the model is sensitive to positive cases: it catches most of the actual positives and produces few false negatives. A model with low recall misses many of the actual positives, which is a problem in applications where failing to flag a positive case is costly.
Calculation
In our confusion matrix, the true positives lie in the first quadrant, and the actual positives are the sum of the true positives and false negatives. Using this example, we can calculate recall as 515 / (515 + 32) = 515 / 547, which gives a recall of 94.15%.
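As a quick sanity check, here’s the same arithmetic in Python (the counts come from the example above; the variable names are our own):
# Counts read off the example confusion matrix
true_pos = 515     # correctly predicted positives
false_neg = 32     # actual positives the model missed
recall = true_pos / (true_pos + false_neg)
print(recall)  # ≈ 0.9415, or 94.15%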
How to code
Recall can be calculated from scratch or with scikit-learn (sklearn), a popular Python machine learning library.
Example data
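The lesson’s original dataset isn’t reproduced here, so the lists below are a minimal, hypothetical stand-in; the snippets that follow will run against any pair of equal-length 0/1 label lists.
# Hypothetical example labels (1 = positive, 0 = negative)
y_true = [1, 0, 1, 1, 0, 1]   # actual labels
y_pred = [1, 0, 0, 1, 1, 1]   # model predictions
total = len(y_true)           # number of samples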
Let’s start by calculating the true positives: the cases where the model correctly predicted a positive value. Here, 1 is positive and 0 is negative.
# Calculate True Positives
true_pos = 0
for i in range(total):
    # Prediction matches the actual label, and the label is positive
    if y_pred[i] == y_true[i] and y_true[i] == 1:
        true_pos += 1
Next, we’ll calculate the actual positives: the cases where the true label is supposed to be positive, aka 1.
# Calculate Actual Positives
actual_pos = 0
for i in range(total):
    # True value is positive
    if y_true[i] == 1:
        actual_pos += 1
Finally, we’ll use the two values to calculate recall.
print("True Positives:", true_pos)
print("Actual Positives:", actual_pos)
print("Recall:", true_pos / actual_pos)
scikit-learn, imported as sklearn, is a Python library that can calculate the recall of a model with its recall_score function.
from sklearn.metrics import recall_score

y_pred = [1, 0, 1, 0]
y_true = [0, 1, 1, 0]
print(recall_score(y_true, y_pred))  # 0.5
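Only one of the two actual positives (the 1s in y_true) was predicted correctly, so the recall here is 0.5.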
Related Lessons
Confusion matrix (Beginner)
Sensitivity (Intermediate)
F1-Score (Advanced)
Start building for free
No need for a credit card to get started.
Trying out Mage to build ranking models won’t cost a cent.