Performance Analysis of Classification Models

Once a machine learning model is built, the next step is to evaluate it using appropriate performance criteria. Because a classification model outputs a discrete value, the following methods are used for classification performance analysis:

- Confusion matrix
- Accuracy
- Precision
- Recall (sensitivity)
- Specificity
- ROC curve (AUC): the area under the ROC curve is useful when we are not especially concerned with whether the smaller class is the positive one, in contrast to the F1 score, where the identity of the positive class matters.
- F-score: the F1 score is useful when the positive class is relatively small.

Performance metrics should be chosen based on the problem domain and the project's goals and objectives.

Confusion matrix

A confusion matrix is a table used to describe the performance of an algorithm (or "classifier") on a set of test data for which the true values/targets are known.
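The metrics listed above can be sketched from scratch in a few lines of Python. This is a minimal illustration with made-up labels (`y_true`, `y_pred` are hypothetical, standing in for a model's test-set predictions); in practice one would use a library such as scikit-learn.

```python
# Hypothetical true labels and model predictions for a binary classifier.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 0, 0, 1, 1, 0, 1, 0]

# Confusion-matrix cells: true/false positives and negatives.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

accuracy    = (tp + tn) / (tp + tn + fp + fn)  # fraction correct overall
precision   = tp / (tp + fp)                   # correctness of positive calls
recall      = tp / (tp + fn)                   # sensitivity: positives found
specificity = tn / (tn + fp)                   # negatives correctly rejected
f1          = 2 * precision * recall / (precision + recall)

print(tp, tn, fp, fn)         # 3 4 1 2
print(round(accuracy, 2))     # 0.7
print(round(precision, 2))    # 0.75
print(round(recall, 2))       # 0.6
print(round(specificity, 2))  # 0.8
print(round(f1, 2))           # 0.67
```

Note how the toy example separates the metrics: accuracy, precision, recall, and specificity all differ, which is exactly why the metric must be matched to the problem's goals.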