We are excited to announce that TorchMetrics v0.8 is now available. The release includes several new metrics in the classification and image domains and some performance improvements for those working with metric collections.
Metric collections just got faster
Common wisdom dictates that you should never evaluate the performance of your models with a single metric alone but rather with a collection of metrics. For example, in classification it is common to simultaneously evaluate accuracy, precision, recall, and the F1 score. TorchMetrics has long provided the MetricCollection object for chaining such metrics together, giving a single interface for computing them all at once. However, the metrics in such a collection often share underlying computations that were previously repeated for every metric in the collection. TorchMetrics v0.8 introduces the concept of compute_groups in MetricCollection: by default, metrics that share some of the same computations are auto-detected and grouped so the shared work is done only once.
Thus, if you are using MetricCollection in your code, upgrading to TorchMetrics v0.8 should automatically make your code run faster without any code changes.
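The idea behind compute groups can be pictured with a pure-Python sketch (this illustrates the concept only, not TorchMetrics' internals): precision and recall both derive from the same confusion counts, so a collection can update those counts once and let every metric in the group read from the shared state.

```python
class ConfusionState:
    """Shared state for one binary-classification 'compute group'."""

    def __init__(self):
        self.tp = self.fp = self.fn = 0

    def update(self, preds, target):
        # Accumulate confusion counts once; every metric reuses them.
        for p, t in zip(preds, target):
            if p == 1 and t == 1:
                self.tp += 1
            elif p == 1 and t == 0:
                self.fp += 1
            elif p == 0 and t == 1:
                self.fn += 1


def precision(state):
    return state.tp / (state.tp + state.fp)


def recall(state):
    return state.tp / (state.tp + state.fn)


state = ConfusionState()  # shared by the whole compute group
state.update([1, 1, 0, 0], [1, 0, 1, 0])
results = {"precision": precision(state), "recall": recall(state)}
print(results)  # -> {'precision': 0.5, 'recall': 0.5}
```

Without the shared state, each metric would have re-counted the same confusion statistics on every update; sharing it is what makes collections faster in v0.8.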
Many exciting new metrics
TorchMetrics v0.8 includes several new metrics within the classification and image domain, both for the functional and modular API. We refer to the documentation for the full description of all metrics if you want to learn more about them.
SpectralAngleMapper or SAM was added to the image package. This metric can calculate the spectral similarity between given reference spectra and estimated spectra.
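The spectral angle itself is just the arccosine of the cosine similarity between the two spectra. A minimal stdlib-only sketch of that formula (the function name is ours; the TorchMetrics version operates band-wise on image tensors):

```python
import math


def spectral_angle(reference, estimate):
    """Angle in radians between a reference spectrum and an estimated one."""
    dot = sum(r * e for r, e in zip(reference, estimate))
    norm_ref = math.sqrt(sum(r * r for r in reference))
    norm_est = math.sqrt(sum(e * e for e in estimate))
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    cos_sim = max(-1.0, min(1.0, dot / (norm_ref * norm_est)))
    return math.acos(cos_sim)


print(spectral_angle([1.0, 0.0], [0.0, 1.0]))  # orthogonal spectra -> pi/2
```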
CoverageError was added to the classification package. This metric can be used when you are working with multi-label data. It works similarly to its sklearn counterpart and computes how far you need to go through the ranked scores such that all true labels are covered.
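Concretely: for each sample, take the lowest score assigned to any true label and count how many labels score at least that high; the metric averages those counts over samples. A stdlib-only sketch of that definition (function name ours):

```python
def coverage_error(y_true, y_score):
    """Average depth of the descending-score ranking needed to cover all true labels."""
    total = 0.0
    for truths, scores in zip(y_true, y_score):
        relevant = [s for s, t in zip(scores, truths) if t]
        if not relevant:
            continue  # a sample with no true labels adds nothing to the sum
        threshold = min(relevant)  # score of the worst-ranked true label
        total += sum(1 for s in scores if s >= threshold)
    return total / len(y_true)


print(coverage_error([[1, 0, 0], [0, 0, 1]],
                     [[0.75, 0.5, 1.0], [1.0, 0.2, 0.1]]))  # -> 2.5
```

In the example, the first sample needs a ranking depth of 2 (scores 1.0 and 0.75 outrank the true label's 0.75) and the second needs 3, giving an average of 2.5.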
LabelRankingAveragePrecision and LabelRankingLoss were added to the classification package. Both metrics are used in multi-label ranking problems, where the goal is to give a better rank to the labels associated with each sample. Each metric gives a measure of how well your model is doing this.
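To make "better rank" concrete: for every true label, label ranking average precision computes the fraction of labels ranked at or above it (by descending score) that are themselves true, then averages over true labels and samples. A stdlib-only illustration of that definition (not the TorchMetrics code):

```python
def label_ranking_average_precision(y_true, y_score):
    """Mean over samples of the average precision of each true label's rank."""
    total, n_samples = 0.0, 0
    for truths, scores in zip(y_true, y_score):
        true_idx = [i for i, t in enumerate(truths) if t]
        if not true_idx:
            continue
        precisions = []
        for j in true_idx:
            # 1-based rank of label j among all labels, by descending score
            rank = sum(1 for s in scores if s >= scores[j])
            # how many *true* labels are ranked at or above label j
            hits = sum(1 for i in true_idx if scores[i] >= scores[j])
            precisions.append(hits / rank)
        total += sum(precisions) / len(precisions)
        n_samples += 1
    return total / n_samples


score = label_ranking_average_precision([[1, 0, 0], [0, 0, 1]],
                                        [[0.75, 0.5, 1.0], [1.0, 0.2, 0.1]])
print(score)  # (1/2 + 1/3) / 2 ~ 0.4167
```

A perfect ranker, which places every true label above every false one, scores 1.0.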
ErrorRelativeGlobalDimensionlessSynthesis or ERGAS was added to the image package. This metric can be used to calculate the accuracy of pan-sharpened images, considering the normalized average error of each band of the resulting image.
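The metric aggregates, over the spectral bands, the RMSE of each band normalized by that band's mean, scaled by the resolution ratio of the pan-sharpening step. A stdlib-only sketch (scaling conventions vary between implementations; `ratio` is assumed here to be the high-to-low resolution ratio):

```python
import math


def ergas(reference, estimate, ratio=4.0):
    """reference/estimate: one flat pixel list per spectral band.

    Note: some formulations scale by 100 * (h / l) rather than 100 / ratio.
    """
    terms = []
    for ref_band, est_band in zip(reference, estimate):
        n = len(ref_band)
        rmse = math.sqrt(sum((r - e) ** 2 for r, e in zip(ref_band, est_band)) / n)
        band_mean = sum(ref_band) / n
        terms.append((rmse / band_mean) ** 2)  # normalized error of this band
    return 100.0 / ratio * math.sqrt(sum(terms) / len(terms))


print(ergas([[10.0, 12.0], [20.0, 22.0]],
            [[10.0, 12.0], [20.0, 22.0]]))  # perfect reconstruction -> 0.0
```

Lower is better: a perfect pan-sharpening result gives an ERGAS of zero.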
UniversalImageQualityIndex was added to the image package. This metric can assess the difference between two images, which considers three different factors when computed: loss of correlation, luminance distortion, and contrast distortion.
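Those three factors fall out of one closed form, Q = 4 * cov(x, y) * mean(x) * mean(y) / ((var(x) + var(y)) * (mean(x)^2 + mean(y)^2)), which equals 1 only for identical images. A global (single-window) stdlib sketch; the TorchMetrics version applies this over sliding windows:

```python
def uqi(x, y):
    """Global universal image quality index over two flat pixel lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    var_x = sum((v - mean_x) ** 2 for v in x) / n
    var_y = sum((v - mean_y) ** 2 for v in y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y)) / n
    # Correlation, luminance, and contrast terms combined in one expression.
    return 4 * cov * mean_x * mean_y / ((var_x + var_y) * (mean_x ** 2 + mean_y ** 2))


print(uqi([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0]))  # identical images -> 1.0
```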
ClasswiseWrapper was added to the wrapper package. This wrapper can be used in combination with metrics that return multiple values (such as classification metrics with the average=None argument). The wrapper will unwrap the result into a dict with a label for each value.
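The unwrapping behaviour is easy to picture with a small helper (ours, not the TorchMetrics API): given one value per class and optional class labels, produce a flat dict that logs cleanly.

```python
def unwrap_classwise(metric_name, values, labels=None):
    """Turn a per-class result vector into {'<metric>_<label>': value}."""
    labels = labels if labels is not None else range(len(values))
    return {f"{metric_name}_{label}": value for label, value in zip(labels, values)}


print(unwrap_classwise("recall", [0.5, 0.75], labels=["cat", "dog"]))
# -> {'recall_cat': 0.5, 'recall_dog': 0.75}
```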
[0.8.0] - 2022-04-14
Added
Added WeightedMeanAbsolutePercentageError to regression package (New metric: WMAPE #948)
CoverageError (Multilabel Ranking metrics #787)
LabelRankingAveragePrecision and LabelRankingLoss (Multilabel Ranking metrics #787)
SpectralAngleMapper (Add new metrics: SAM #885)
ErrorRelativeGlobalDimensionlessSynthesis (Adds new image metric - ERGAS #894)
UniversalImageQualityIndex (Added new image metric - UQI #824)
SpectralDistortionIndex (Adds new image metric - d_lambda #873)
MetricCollection in MetricTracker (Support for collection in Tracker #718)
StructuralSimilarityIndexMeasure (3D extension for SSIM #818)
MetricCollection (Smart update of metric collection #709)
ClasswiseWrapper for better logging of classification metrics with multiple output values (Better support for classwise logging #832)
**kwargs argument for passing additional arguments to base class (Refactor/move args to kwargs #833)
ignore_index for the Accuracy metric (Negative ignore_index for the Accuracy metric #362)
adaptive_k for the RetrievalPrecision metric (Added adaptive_k argument to IR Precision metric #910)
reset_real_features argument to image quality assessment metrics (Optionally Avoid recomputing features #722)
compute_on_cpu to all metrics (New argument compute_on_cpu #867)
Changed
num_classes in jaccard_index is now a required argument (Update num_classes in jaccard score to be a required argument #853, Remove get_num_classes in jaccard_index #914)
Improved shape checking of permutation_invariant_training (Improved shape checking of permutation_invariant_training #864)
Allow reduction None (Refactor: allow reduction None #891)
MetricTracker.best_metric will now give a warning when computing on metrics that do not have a best (Make best_metric in MetricTracker more robust #913)
Deprecated
compute_on_step (Deprecate/compute on step #792)
dist_sync_on_step, process_group, dist_sync_fn direct argument (Refactor/move args to kwargs #833)
Removed
WER and functional.wer
SSIM and functional.ssim
PSNR and functional.psnr
FBeta and functional.fbeta
F1 and functional.f1
Hinge and functional.hinge
IoU and functional.iou
MatthewsCorrcoef
PearsonCorrcoef
SpearmanCorrcoef
MAP and functional.pairwise.manhatten
PESQ and functional.audio.pesq
PIT and functional.audio.pit
SDR and functional.audio.sdr and functional.audio.si_sdr
SNR and functional.audio.snr and functional.audio.si_snr
STOI and functional.audio.stoi
Fixed
MAP metric in specific cases (Fix MAP device placement #950)
ClasswiseWrapper with the prefix argument of MetricCollection (Fix compatibility between ClasswiseWrapper and prefix/postfix arg in MetricCollection #843)
BertScore on GPU (Fix BertScore on GPU #912)
ROUGEScore (Fix RougeL/RougeLSum implementation #944)
Contributors
@ankitaS11, @ashutoshml, @Borda, @hookSSi, @justusschock, @lucadiliello, @quancs, @rusty1s, @SkafteNicki, @stancld, @vumichien, @weningerleon, @yassersouri
If we forgot someone due to not matching commit email with GitHub account, let us know :]
This discussion was created from the release Faster collection and more metrics!.