A concise and unambiguous assessment of a machine learning algorithm is key to classifier design and performance improvement. In the multi-class classification task, where each instance can be labeled as only one class, the confusion matrix is a powerful tool for performance assessment because it quantifies the classification overlap. In the multi-label classification task, however, where each instance can be labeled with more than one class, the confusion matrix is undefined. Performance assessment of a multi-label classifier currently relies on aggregate measures such as Hamming loss, precision, recall, and F-score. While these measures provide a reasonable representation of per-class and overall performance, their aggregate nature leads to ambiguity when identifying false negative (FN) and false positive (FP) results.
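For reference, the sketch below (an illustration of the current aggregate measures, not the method proposed here) computes Hamming loss and macro-averaged precision, recall, and F-score with scikit-learn on a small hypothetical three-class label-indicator matrix; the scores summarize per-class errors but do not reveal which classes are confused with which.

```python
# Illustrative sketch of aggregate multi-label metrics (hypothetical data).
import numpy as np
from sklearn.metrics import hamming_loss, precision_score, recall_score, f1_score

# Hypothetical ground truth and predictions: rows are instances, columns are classes.
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0],
                   [0, 0, 1]])
y_pred = np.array([[1, 0, 0],   # class 2 missed (FN)
                   [0, 1, 1],   # class 2 wrongly predicted (FP)
                   [1, 1, 0],
                   [1, 0, 1]])  # class 0 wrongly predicted (FP)

print("Hamming loss:", hamming_loss(y_true, y_pred))
print("Precision   :", precision_score(y_true, y_pred, average="macro", zero_division=0))
print("Recall      :", recall_score(y_true, y_pred, average="macro", zero_division=0))
print("F1-score    :", f1_score(y_true, y_pred, average="macro", zero_division=0))
```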
To address this gap, we define a method for creating a multi-label confusion matrix (MLCM) based on three proposed categories of multi-label problems. After establishing the shortcomings of the current methods in identifying FN and FP results, we demonstrate the use of the MLCM on the classification of two publicly available multi-label data sets: i) a 12-lead ECG data set with nine classes, and ii) a movie poster data set with eighteen classes. A comparison of the MLCM results against statistics obtained from the current techniques shows its effectiveness in providing a concise and unambiguous understanding of a multi-label classifier's behavior.