
I understand that the automatic call to compute inside metric(preds, targets) can be costly and you want to avoid it until the end of the evaluation epoch.
By design, calling the metric object directly (its forward) both updates the internal state and computes the value for the current batch. To avoid that per-batch computation, don't use the metric as a callable on every batch. Instead, call .update(preds, targets) to accumulate state without triggering any computation, and call .compute() only once when you need the final result, e.g. at the end of the evaluation epoch.

metric = YourMetric(...)
for batch in dataloader:
    preds, targets = ...  # unpack predictions and targets from the batch
    metric.update(preds, targets)  # accumulate state only, no computation yet
final_result = metric.compute()  # compute the final value once, at the end of the epoch

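If it helps, here is a minimal self-contained sketch of the full pattern. The MulticlassAccuracy metric, the tiny linear model and the random data below are only stand-ins to make the example runnable end to end; the point is that compute() runs once per evaluation pass and reset() clears the accumulated state before the next one.

import torch
from torch.utils.data import DataLoader, TensorDataset
from torchmetrics.classification import MulticlassAccuracy

# Stand-in data and model, just so the sketch runs end to end.
num_classes = 10
dataset = TensorDataset(torch.randn(256, 32), torch.randint(0, num_classes, (256,)))
dataloader = DataLoader(dataset, batch_size=64)
model = torch.nn.Linear(32, num_classes)

metric = MulticlassAccuracy(num_classes=num_classes)

model.eval()
with torch.no_grad():
    for inputs, targets in dataloader:
        preds = model(inputs)
        metric.update(preds, targets)  # accumulate state only; nothing is computed here
final_acc = metric.compute()           # the single, potentially expensive, computation
print(f"accuracy = {final_acc:.4f}")
metric.reset()                         # clear state before the next evaluation epoch
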
Answer selected by Borda