Difference MeanIoU vs Jaccard Score #2963
Hey there, I was wondering why `MeanIoU` and `BinaryJaccardIndex` differ. Shouldn't both, by definition, yield the exact same result?

```python
from torchmetrics.segmentation import MeanIoU
from torchmetrics.classification import BinaryJaccardIndex
import torch

torch.manual_seed(37)
num_classes = 2
preds = torch.randint(0, num_classes, (4, num_classes, 4, 4))
target = torch.randint(0, num_classes, (4, num_classes, 4, 4))

miou = MeanIoU(num_classes=num_classes)
miou_per_class = MeanIoU(num_classes=num_classes, per_class=True, include_background=True)
miou_per_class_no_background = MeanIoU(num_classes=num_classes, per_class=True, include_background=False)

miou(preds, target)
miou_per_class(preds, target)
miou_per_class_no_background(preds, target)

jaccard = BinaryJaccardIndex()
jaccard(preds, target)
```

Best, rot8
Replies: 1 comment
Thanks for your question! Although `MeanIoU` and `BinaryJaccardIndex` are mathematically related (both are based on the Jaccard index), they may differ slightly in practice due to implementation details. Key points to consider:

- `MeanIoU` in torchmetrics computes the Intersection-over-Union per class and averages over classes. It has options to include or exclude the background class.
- `BinaryJaccardIndex` is designed for binary classification and computes the Jaccard similarity (IoU) directly on the binary outputs.
- `MeanIoU` expects integer class labels, while `BinaryJaccardIndex` …

If you want the exact same result, make sure the inputs and parameters (e.g., number of classes, inclusion of background) match the expectations of each metric, and consider using `MeanIoU` with …
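To see where the two numbers can drift apart, the Jaccard/IoU definition can be checked by hand, independent of either metric's implementation. Below is a minimal sketch in plain PyTorch; it assumes integer-label inputs of shape `(N, H, W)` rather than the one-hot-shaped tensors in the original snippet:

```python
import torch

def iou_per_class(preds, target, num_classes):
    # Per-class intersection-over-union on integer label tensors.
    ious = []
    for c in range(num_classes):
        p, t = preds == c, target == c
        inter = (p & t).sum().item()
        union = (p | t).sum().item()
        ious.append(inter / union if union else float("nan"))
    return ious

torch.manual_seed(37)
preds = torch.randint(0, 2, (4, 4, 4))    # integer class labels, shape (N, H, W)
target = torch.randint(0, 2, (4, 4, 4))

ious = iou_per_class(preds, target, num_classes=2)
mean_iou = sum(ious) / len(ious)

# The binary Jaccard index is the IoU of the positive class (class 1) alone,
# so it equals the mean IoU only when class 0's IoU happens to be identical.
binary_jaccard = ious[1]
print(ious, mean_iou, binary_jaccard)
```

In the binary case, the Jaccard score corresponds to the IoU of the positive class, whereas a mean IoU averages over both classes (unless the background is excluded), which by itself is enough to produce different values even on identical inputs.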