### Multilabel ranking metrics¶

• Multilabel learning: each sample can have any number of ground-truth labels. The goal is to score and rank the true labels higher than the false ones.

### Coverage Error¶

• Returns the average number of top-scored labels that must be included in the prediction so that all true labels are covered.

• Useful if you want to know how many top-scored labels you need to predict to avoid missing any true label.
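As a minimal sketch (assuming scikit-learn and NumPy are installed, with illustrative scores), `coverage_error` computes this directly:

```python
import numpy as np
from sklearn.metrics import coverage_error

# Binary indicator matrix of true labels: 2 samples, 3 labels.
y_true = np.array([[1, 0, 0],
                   [0, 0, 1]])
# Predicted score for each label.
y_score = np.array([[0.75, 0.50, 1.00],
                    [1.00, 0.20, 0.10]])

# Sample 1: the true label ranks 2nd -> need the top-2 labels.
# Sample 2: the true label ranks 3rd -> need the top-3 labels.
cov = coverage_error(y_true, y_score)
print(cov)  # (2 + 3) / 2 = 2.5
```

The best possible value equals the average number of true labels per sample.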

### Label Ranking Avg Precision (LRAP)¶

• For each ground-truth label, what fraction of the labels ranked at or above it are also true labels? These fractions are averaged over labels and samples; the best value is 1.
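A sketch of the same idea with scikit-learn's `label_ranking_average_precision_score`, reusing the illustrative scores from above:

```python
import numpy as np
from sklearn.metrics import label_ranking_average_precision_score

y_true = np.array([[1, 0, 0],
                   [0, 0, 1]])
y_score = np.array([[0.75, 0.50, 1.00],
                    [1.00, 0.20, 0.10]])

# Sample 1: the true label ranks 2nd; 1 of the top-2 labels is true -> 1/2.
# Sample 2: the true label ranks 3rd; 1 of the top-3 labels is true -> 1/3.
lrap = label_ranking_average_precision_score(y_true, y_score)
print(lrap)  # (1/2 + 1/3) / 2 ≈ 0.4167
```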

### Label Ranking Loss¶

• Returns the average fraction of label pairs that are incorrectly ordered, i.e., a true label receiving a lower score than a false label. The best value is 0.
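Sketched with `label_ranking_loss` on the same illustrative data:

```python
import numpy as np
from sklearn.metrics import label_ranking_loss

y_true = np.array([[1, 0, 0],
                   [0, 0, 1]])
y_score = np.array([[0.75, 0.50, 1.00],
                    [1.00, 0.20, 0.10]])

# Sample 1: of the 2 (true, false) label pairs, 1 is mis-ordered -> 1/2.
# Sample 2: both (true, false) label pairs are mis-ordered -> 2/2 = 1.
loss = label_ranking_loss(y_true, y_score)
print(loss)  # (0.5 + 1.0) / 2 = 0.75
```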

### Discounted Cumulative Gain (DCG) and Normalized DCG¶

• Ranking metrics. They compare a predicted ordering to ground-truth scores (such as the relevance of answers to a query).

• DCG arranges the true targets (e.g. the relevance of query answers) in the predicted order, multiplies them by a logarithmic discount, and sums the result. The sum can be truncated after the first $K$ results, in which case it is called DCG@K. NDCG (or NDCG@K) is DCG divided by the DCG of a perfect prediction, so it always lies between 0 and 1. NDCG is usually preferred to DCG.
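The definitions above can be sketched with scikit-learn's `dcg_score` and `ndcg_score` (the relevance values and scores below are illustrative, for a single query with five answers):

```python
import numpy as np
from sklearn.metrics import dcg_score, ndcg_score

# True relevance of 5 answers to one query, and the predicted scores.
y_true = np.array([[10, 0, 0, 1, 5]])
y_score = np.array([[0.1, 0.2, 0.3, 4.0, 70.0]])

# DCG sums the true relevances in predicted order,
# each discounted by log2(rank + 1).
dcg = dcg_score(y_true, y_score)
# NDCG divides by the DCG of the perfect ordering -> a score in [0, 1].
ndcg = ndcg_score(y_true, y_score)
# NDCG@K truncates the sums after the top K results.
ndcg_at_3 = ndcg_score(y_true, y_score, k=3)
print(dcg, ndcg, ndcg_at_3)
```

Note that a perfect ordering yields NDCG = 1 regardless of the scale of the relevance values, which is why NDCG is easier to compare across queries than raw DCG.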