feat(metrics): [Trace Metrics 5] Add batch processor for Metrics #4984
base: 12-17-add_transport_types_for_metrics
Conversation
Instructions and example for changelog

Please add an entry to the changelog. Example:

```md
## Unreleased

### Features

- [Trace Metrics 5] Add batch processor for Metrics ([#4984](https://github.com/getsentry/sentry-java/pull/4984))
```

If none of the above apply, you can opt out of this check by adding …
The branch was force-pushed from ede7fc3 to d7a5d68, and later from f3fe6f7 to b324629.
markushi left a comment
Looking good, just one concern within MetricsBatchProcessor.close() which I would like to discuss before merging.
sentry/src/main/java/io/sentry/metrics/DefaultMetricsBatchProcessorFactory.java (resolved review comment)
```java
  executorService.submit(() -> executorService.close(options.getShutdownTimeoutMillis()));
} else {
  executorService.close(options.getShutdownTimeoutMillis());
  while (!queue.isEmpty()) {
```
h: Hmm, if a customer emits metrics in an endless loop and the app/SDK gets closed, wouldn't this continue to call flushBatch()? Maybe it makes sense to have an isClosed flag to guard against that and early-exit here and within add().
In MetricsApi there's a check that should prevent adding anything after the SDK has closed down:
```java
if (!scopes.isEnabled()) {
  options
      .getLogger()
      .log(SentryLevel.WARNING, "Instance is disabled and this 'metrics' call is a no-op.");
  return;
}
```
There might be an issue where this loops and the SDK can't set enabled to false; I'm going to verify.
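For illustration, a minimal sketch of the isClosed guard suggested above. Only add(), flushBatch(), and the isClosed flag come from the discussion; the class shape, field names, and everything else are assumed and are not the PR's actual MetricsBatchProcessor:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of the suggested isClosed guard; not the PR's real processor.
final class BatchProcessorSketch<T> {
  private final Queue<T> queue = new ConcurrentLinkedQueue<>();
  private final AtomicBoolean isClosed = new AtomicBoolean(false);

  void add(final T metric) {
    if (isClosed.get()) {
      return; // early exit: metrics emitted after close() are dropped
    }
    queue.offer(metric);
    // ... schedule a flush as the real processor does
  }

  void close() {
    if (!isClosed.compareAndSet(false, true)) {
      return; // close() was already called
    }
    // add() no longer enqueues, so this drain loop terminates even if
    // callers keep emitting metrics in a loop.
    while (!queue.isEmpty()) {
      flushBatch();
    }
  }

  private void flushBatch() {
    // Placeholder: the real implementation would batch and send; here we just drain one item.
    queue.poll();
  }
}
```

With the flag flipped before the drain loop, add() stops enqueueing, so the while (!queue.isEmpty()) loop in close() stays bounded.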
```java
if (hasScheduled && !forceSchedule) {
  return;
}
try (final @NotNull ISentryLifecycleToken ignored = scheduleLock.acquire()) {
  final @Nullable Future<?> latestScheduledFlush = scheduledFlush;
```
Looks like this is the only place where hasScheduled is used outside the scheduleLock; for consistency it could make sense to move it inside.
Suggested change:

```diff
-if (hasScheduled && !forceSchedule) {
-  return;
-}
-try (final @NotNull ISentryLifecycleToken ignored = scheduleLock.acquire()) {
-  final @Nullable Future<?> latestScheduledFlush = scheduledFlush;
+try (final @NotNull ISentryLifecycleToken ignored = scheduleLock.acquire()) {
+  if (hasScheduled && !forceSchedule) {
+    return;
+  }
+  final @Nullable Future<?> latestScheduledFlush = scheduledFlush;
```
The idea here was to avoid waiting for the lock when scheduling has already happened anyway.
The code checks whether the queue is empty before setting hasScheduled = false.
Do you think making that check fully consistent warrants the performance hit of acquiring the lock for every metric added?
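For context, a hedged sketch of the double-checked variant implied by this trade-off: the unsynchronized hasScheduled read keeps the per-metric fast path lock-free, and the check is repeated under the lock so a racing caller cannot schedule twice. Field and method names mirror the diff above; the surrounding class, the plain ReentrantLock, and the executor wiring are assumptions, not the PR's actual code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of "check outside the lock, re-check inside"; not the PR's implementation.
final class FlushSchedulerSketch {
  private final ReentrantLock scheduleLock = new ReentrantLock();
  private final ExecutorService executor = Executors.newSingleThreadExecutor();
  private volatile boolean hasScheduled = false;
  private Future<?> scheduledFlush = null;

  void maybeScheduleFlush(final boolean forceSchedule) {
    // Fast path: skip the lock entirely when a flush is already scheduled.
    if (hasScheduled && !forceSchedule) {
      return;
    }
    scheduleLock.lock();
    try {
      // Re-check under the lock: another caller may have scheduled in the meantime.
      if (hasScheduled && !forceSchedule) {
        return;
      }
      final Future<?> latestScheduledFlush = scheduledFlush;
      if (forceSchedule && latestScheduledFlush != null) {
        latestScheduledFlush.cancel(false);
      }
      hasScheduled = true;
      scheduledFlush = executor.submit(this::flushBatch);
    } finally {
      scheduleLock.unlock();
    }
  }

  private void flushBatch() {
    hasScheduled = false;
    // ... collect and send the current batch (omitted)
  }
}
```

This keeps the cheap early return on the hot path while the decisive check stays inside the lock, at the cost of one extra comparison whenever the lock is actually taken.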
The branch was force-pushed from b324629 to ff0fe29.

📜 Description
💡 Motivation and Context
💚 How did you test it?
📝 Checklist
… sendDefaultPII is enabled.
🔮 Next steps