Confusion Matrix
A confusion matrix is a performance-measurement tool for classification problems. It summarizes how well a model predicts each class by comparing the predicted labels against the actual (true) labels.
A typical confusion matrix for a binary classification problem is a 2x2 table with the following structure:

                     Predicted Positive      Predicted Negative
Actual Positive      True Positive (TP)      False Negative (FN)
Actual Negative      False Positive (FP)     True Negative (TN)
Key Terms:
True Positive (TP): The number of positive instances that were correctly classified as positive.
True Negative (TN): The number of negative instances that were correctly classified as negative.
False Positive (FP): The number of negative instances that were incorrectly classified as positive (also called a Type I error).
False Negative (FN): The number of positive instances that were incorrectly classified as negative (also called a Type II error).
In an anomaly-detection setting, for example, the confusion matrix shows how well the model distinguishes normal from anomalous data. False positives (normal data classified as anomalous) and false negatives (anomalous data classified as normal) are especially critical there.
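To make the four counts concrete, here is a minimal sketch in Python (using made-up toy labels, not the spam data used later) that tallies TP, TN, FP, and FN directly from a pair of label arrays:

import numpy as np

# Toy ground-truth and predicted labels (1 = positive, 0 = negative)
actual    = np.array([1, 1, 0, 1, 0, 0, 1, 0])
predicted = np.array([1, 0, 0, 1, 1, 0, 1, 0])

tp = np.sum((actual == 1) & (predicted == 1))  # positives correctly flagged
tn = np.sum((actual == 0) & (predicted == 0))  # negatives correctly rejected
fp = np.sum((actual == 0) & (predicted == 1))  # Type I errors
fn = np.sum((actual == 1) & (predicted == 0))  # Type II errors

print(tp, tn, fp, fn)  # 3 3 1 1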
Accuracy, Precision, Recall, and F1-Score are key performance metrics for evaluating classification models, and they are especially important for understanding how well a model handles different types of errors. Let's break each of them down, with a focus on how they are calculated and when they are useful (the F1-score is simply the harmonic mean of precision and recall):
Accuracy
Accuracy is the proportion of correct predictions (both true positives and true negatives) to the total number of predictions.
Accuracy = (TP + TN)/(TP + TN + FP + FN)
Precision
Precision is the proportion of correct positive predictions out of all the instances that were predicted as positive. In other words, it answers the question: Of all the instances the model classified as positive, how many were actually positive?
Precision = TP/(TP + FP)
Recall (Sensitivity or True Positive Rate)
Recall is the proportion of actual positive instances that were correctly predicted by the model. It answers the question: Of all the actual positives, how many did the model correctly identify?
Recall = TP/(TP + FN)
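All of these formulas (including the F1-score, computed as the harmonic mean of precision and recall, which is what sklearn's f1_score returns for binary labels) can be evaluated directly from the four counts. A minimal sketch, using counts that happen to match the spam example worked through below:

# Counts taken from the spam example later in this post: TP=5, TN=3, FP=2, FN=4
TP, TN, FP, FN = 5, 3, 2, 4

accuracy  = (TP + TN) / (TP + TN + FP + FN)          # 8/14 ≈ 0.571
precision = TP / (TP + FP)                           # 5/7  ≈ 0.714
recall    = TP / (TP + FN)                           # 5/9  ≈ 0.556
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean ≈ 0.625

print(accuracy, precision, recall, f1)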
Confusion Matrix with Python
Import the libraries
Import required libraries
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix, precision_score,
                             recall_score, f1_score, classification_report)
import seaborn as sns
import matplotlib.pyplot as plt
Spam Data
Sample data with actual and predicted values.
actual = np.array(['Spam', 'Spam', 'Spam', 'Not Spam', 'Spam', 'Not Spam', 'Spam', 'Spam', 'Not Spam', 'Not Spam', 'Spam', 'Not Spam','Spam','Spam'])
predicted = np.array(['Spam', 'Not Spam', 'Spam', 'Not Spam', 'Spam', 'Spam', 'Spam', 'Spam', 'Not Spam', 'Not Spam', 'Not Spam','Spam','Not Spam','Not Spam'])
Confusion Matrix
Get the confusion matrix using sklearn.metrics
# 2. Confusion Matrix
conf_matrix = confusion_matrix(actual, predicted, labels=['Spam', 'Not Spam'])
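Because labels=['Spam', 'Not Spam'] lists the positive class first, the rows of conf_matrix correspond to the actual class and the columns to the predicted class, so the four counts can be read straight out of the array. A quick sketch (the printed values assume the sample arrays above):

# Rows = actual, columns = predicted, ordered as in `labels`
tp, fn, fp, tn = conf_matrix.ravel()
print(conf_matrix)     # [[5 4]
                       #  [2 3]]
print(tp, fn, fp, tn)  # 5 4 2 3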
Display confusion matrix
Display the confusion matrix as a seaborn heatmap.
sns.heatmap(conf_matrix,
            annot=True,
            cmap='viridis',
            fmt='g',
            xticklabels=['Spam', 'Not Spam'],
            yticklabels=['Spam', 'Not Spam'])
plt.ylabel('Actual', fontsize=14)
plt.title('Confusion Matrix', fontsize=17, pad=20)
plt.gca().xaxis.set_label_position('top')
plt.xlabel('Prediction', fontsize=13)
plt.gca().xaxis.tick_top()
plt.gca().figure.subplots_adjust(bottom=0.2)
plt.show()
Calculate Accuracy, Precision, Recall
# 3. Accuracy, Precision, Recall
accuracy = accuracy_score(actual, predicted)
precision = precision_score(actual, predicted, pos_label='Spam')
recall = recall_score(actual, predicted, pos_label='Spam')
print(f'Accuracy: {accuracy}')
print(f'Precision: {precision}')
print(f'Recall: {recall}')
Output
Accuracy: 0.5714285714285714
Precision: 0.7142857142857143
Recall: 0.5555555555555556
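The f1_score and classification_report functions imported earlier can round out the evaluation; here is a short sketch that continues from the same actual and predicted arrays (the F1 value shown in the comment is for this sample data):

# 4. F1-score and per-class summary
f1 = f1_score(actual, predicted, pos_label='Spam')
print(f'F1-score: {f1}')  # ≈ 0.625 for the sample data above

print(classification_report(actual, predicted, labels=['Spam', 'Not Spam']))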
Conclusion
The confusion matrix is a simple way to describe a model's performance in exact mathematical terms: from its four counts you can derive accuracy, precision, recall, and the F1-score, which together give a clearer picture of the model's effectiveness than accuracy alone.