
feat(redis_admin): celery broker clear hardening (consolidates #907/#908/#909)#905

Merged
thomasrockhu-codecov merged 12 commits into main from redis-admin/visible-chart-loader on May 2, 2026

Conversation


@thomasrockhu-codecov thomasrockhu-codecov commented May 1, 2026

Summary

This PR is the single merge candidate for the celery_broker clear-by-filter follow-up work. It folds the diffs from PRs #908 and #909 onto this branch via merge commits (#907 is fully superseded by #909 and intentionally not re-applied; see "Supersedes relationship" below) so reviewers can see one stack instead of four. The three sibling PRs stay open as the per-feature audit / review record — do not close them.

What lands on main

  1. fix(redis_admin): restore pre-#899 pipeline-LSET clear; drop fragile verify guard #908 — pipeline-LSET clear restoration (abdc77ef7, 0526b0cb9, cb5c07052)
    Restores the pre-#899 pipelined-LSET clear pattern in _streaming_celery_clear, drops the fragile verify-before-LSET guard, bumps _CELERY_MAX_PASSES to 10, and adds per-pass logging plus regression tests for the out-of-range drift branch and worker-thread join. Already approved on the source PR.

  2. perf(redis_admin): drop clear-by-filter preview load; reuse chart numbers #909 — skip clear-by-filter preview (344d50afa, 458856869, 59b02f10c, 25a8f8672, 25d99499b)
    Drops the entire clear-by-filter preview surface (lazy JSON endpoint, skeleton render, count helpers, two settings, the preview-only streaming_celery_count function) and renders the confirmation page synchronously off the four chart-passed bucket numbers (bucket_count, bucket_pct, total_visible, total_depth). The chart-hint flow is documented in redis_admin/README.md. Already approved on the source PR.

  3. feat(redis_admin): visible loading card for celery broker frequency chart #905 — visible chart-loader card (already on main)
    The original chart-loader card from this PR was independently merged onto main via the #903 → #904 → #906 chain (the loader CSS, change_list.html markup, and corresponding tests are already on main). The reset onto main made #905's original commit a no-op against current main, so it was skipped during the cherry-pick step rather than re-applied as cargo-cult churn.

Supersedes relationship

#909 supersedes #907 entirely. #909 deletes the lazy-preview JSON endpoint, the skip-count endpoint, the preview JS/CSS, the cache helpers, and the two settings that #907 introduced. The right final state is "skip-preview synchronous render" (#909). Re-applying #907's lazy preview here only to delete it again in the same PR would be cargo-cult, so #907 is not included in this stack. #907 stays open as the iterative record of the lazy-preview design that #909 then replaced.

Conflict resolution decisions

See the consolidation merge commit's "Conflict resolution notes" below for the per-file decisions (services.py docstrings, test_celery_broker_queue.py structure).

Test plan

  • uv run ruff check redis_admin/ — passes
  • uv run ruff format --check redis_admin/ — 19 files already formatted
  • uv run pytest -q apps/codecov-api/redis_admin/tests/ locally — 178 passed; the 119 errors are all `could not translate host name "postgres"` connection failures (the sandbox lacks a DB; CI will run them)
  • CI green on this PR
  • Bugbot review triaged

Risk

High Risk
Touches superuser-only but destructive Celery broker deletion paths and intentionally trades verify-before-LSET safety for faster queue draining, which can drop unintended messages under heavy consumer drift. Also changes the clear-by-filter UX/flow (GET confirmation + all actions async), so regressions could impact on-call operations.

Overview
Celery broker clear-by-filter is redesigned to avoid slow preview renders. The frequency chart’s per-bucket action now GETs a confirmation page and passes chart-derived count hints (bucket_count, bucket_pct, total_visible, total_depth) so the page can show an approximate impact callout with zero Redis I/O; the template is updated to display a structured filter summary and round-trip the hints across failed typed-confirm attempts.

All clear-by-filter actions (including dry-run) now spawn the chunked background clear job and immediately redirect to the progress page, removing the old synchronous preview/count/sample behavior and deleting the preview-only streaming_celery_count usage.

The streaming clear implementation is changed back to pipelined LSET (dropping verify-before-LSET/LINDEX), increases _CELERY_MAX_PASSES to 10, adjusts convergence behavior, and adds per-pass/LREM diagnostic logging; tests are updated/added to pin queue-drain progress, drift/out-of-range handling, and the new view behavior.

Reviewed by Cursor Bugbot for commit d5fa854. Bugbot is set up for automated code reviews on this repo.


sentry Bot commented May 1, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 91.90%. Comparing base (6fb832f) to head (d5fa854).
✅ All tests successful. No failed tests found.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #905      +/-   ##
==========================================
+ Coverage   91.89%   91.90%   +0.01%     
==========================================
  Files        1316     1316              
  Lines       50586    50584       -2     
  Branches     1625     1625              
==========================================
+ Hits        46485    46490       +5     
+ Misses       3795     3788       -7     
  Partials      306      306              
Flag      Coverage Δ
apiunit   94.93% <100.00%> (+0.03%) ⬆️

Flags with carried forward coverage won't be shown.


@codecov-notifications

codecov-notifications Bot commented May 1, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ All tests successful. No failed tests found.


thomasrockhu-codecov added a commit that referenced this pull request May 1, 2026
The earlier dry-run fix on this branch made
`celery_broker_clear(dry_run=True).count` reflect the streaming
counter's full-queue scan, but the clear-by-filter preview page
itself was still showing `len(matches)` (capped by
`CELERY_BROKER_DISPLAY_LIMIT = 2_000`). On a queue with 190k
matches the preview said "1903" while "Clear all" then drained
190k. Move `match_count` and `kept_index` to a streaming pass
via the new `streaming_celery_count` helper. Keep the queryset
only for the small `sample_targets` preview table (<= 20 rows).
Adds `first_match_index` to `_StreamingClearStats` so the helper
can surface the lowest-index match for the "Clear all but first"
action's `kept_index` callout.

Also folds in #905: replaces the subtle "Loading frequency
chart..." help-text line with a styled loader card (animated
spinner, themed via Django admin CSS custom properties,
reduced-motion fallback). Loader styles ship as an external
stylesheet referenced via {% extrastyle %} so they survive
production CSP - same pattern PR #902 used for the script.

Adds two regressions: the clear-by-filter preview reports the
full-queue count (`Matched: N message(s)` for N > display
limit), and the changelist body contains the new loader card
markup + CSS link.
…verify guard

Restore PR #895's `_execute_celery_clear` pattern (pipelined
`LSET key idx tombstone` per chunk via
`pipeline(transaction=False).execute(raise_on_error=False)`) inside
`_streaming_celery_clear`, replacing the per-match LINDEX / verify /
LSET sequence introduced by PR #899. PR #899 retains its streaming
architecture (we still walk past the display-limit cap to drain deep
filters); this PR only drops the verify step.

Trade-off (operator directive):

> I'm ok if we lose a few tasks along the way and we make secondary
> passes, but I do care that if I'm trying to remove 50% of the
> queue, that there is a significant drop.

Verify-before-LSET turned every pipelined LSET into LINDEX + LSET
RTTs per match and, on a busy `bundles_analysis` broker (78k matches
with consumers actively BLPOPing), pushed the LSET success rate to
~3% (`matched=2038`, `drift=77475`). The operator saw the queue stay
full because most matches were silently swallowed by the drift
counter. Pipelined unconditional LSET trades this safety guard for
actual queue drain: the bounded data-loss surface area (consumer pops
between LRANGE snapshot and pipelined LSET shift indexes; our LSET
overwrites the new occupant of the target slot, which then gets
LREM'd) is documented in the docstring and explicitly accepted.

Same change set:

* `_CELERY_MAX_PASSES` 3 → 10. The `prev_lset == matches_lset`
  plateau exit is removed; on a busy queue it fired against
  legitimate per-pass progress (each pass cleared ~the same number
  of matches). Convergence is now bounded by `matches_found == 0`
  plus the pass cap.
* `keep_one` early-exit retained. With verify gone, `matches_drifted`
  only counts genuine LSET out-of-range failures, so
  `keep_one and matches_drifted == 0` still means "non-keeper
  matches all tombstoned, queue converged".
* Per-pass diagnostic logging:
    redis_admin.streaming_clear: queue=<q> pass=<n> depth=<d>
    redis_admin.streaming_clear: queue=<q> pass=<n> found=<f> lset=<l> drifted=<dr> pass_first_kept=<bool>
    redis_admin.streaming_clear: queue=<q> pass=<n> lrem_removed=<r>
  On a quiet queue `lrem_removed` should equal `lset`, giving on-call
  a single grep that ties `LSET → LREM → LLEN delta` together.
* Docstring documents the data-loss surface area and references PR
  #895 / #899 for the audit trail.
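The pass structure described in these bullets (LRANGE snapshot, pipelined unconditional LSET, then an LREM sweep) can be sketched with a plain Python list standing in for the Redis queue. This is an illustrative model only, not the real redis-py pipeline code; `TOMBSTONE` and the `matches` predicate are stand-ins:

```python
# Illustrative model: a Python list plays the Redis list so one clear pass is
# runnable without a broker. Real code pipelines LSET/LREM via redis-py.
TOMBSTONE = b"__cleared__"

def run_clear_pass(queue, matches):
    """One pass: snapshot, tombstone matching indices, then sweep tombstones.

    An LSET against an index past the current tail (a consumer drained the
    slot mid-pass) comes back as an out-of-range reply; with the verify step
    gone, that reply is the only thing counted as drift.
    """
    snapshot = list(queue)                     # LRANGE key 0 -1
    found = lset_ok = drifted = 0
    for idx, item in enumerate(snapshot):
        if not matches(item):
            continue
        found += 1
        if idx < len(queue):                   # LSET key idx TOMBSTONE
            queue[idx] = TOMBSTONE
            lset_ok += 1
        else:                                  # out-of-range reply
            drifted += 1
    lrem_removed = queue.count(TOMBSTONE)      # LREM key 0 TOMBSTONE
    queue[:] = [m for m in queue if m != TOMBSTONE]
    return found, lset_ok, drifted, lrem_removed

queue = [b"keep", b"drop", b"keep", b"drop"]
found, lset_ok, drifted, lrem_removed = run_clear_pass(queue, lambda m: m == b"drop")
# On a quiet queue lrem_removed equals lset_ok, the same LSET -> LREM -> LLEN
# invariant the per-pass logs let on-call grep for.
```

The outer loop (not shown) would repeat passes until no matches are found or the `_CELERY_MAX_PASSES` cap of 10 is hit.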

Tests:

* New `test_streaming_clear_drops_50pct_of_queue_in_first_pass` in
  `test_celery_broker_clear_job.py`: seeds 100 envelopes (50
  matching, 50 not), runs the chunked clear job with no concurrency,
  asserts `LLEN` drops from 100 to 50 and that every survivor parses
  as a non-matching envelope.
* New `test_streaming_clear_makes_progress_under_concurrent_drift`:
  monkey-patches `LINDEX` to always return non-matching bytes (the
  worst-case scenario the verify-before-LSET path was designed to
  catch); confirms the pipelined path tombstones every match and
  drains the queue to zero, because LINDEX is no longer in the loop.
* Removed `test_streaming_clear_verify_before_lset_skips_drifted_entry`
  in `test_celery_broker_queue.py` — it asserted a semantic that no
  longer exists.

Operator one-liner to verify the LSET → LREM round-trip in a live
broker connection (paste into `python manage.py shell_plus`):

    from redis_admin import conn as _c
    r = _c.get_connection(kind="broker")
    before = r.llen("bundles_analysis")
    r.rpush("bundles_analysis", "__test_diag_value__")
    removed = r.lrem("bundles_analysis", 0, "__test_diag_value__")
    print(f"before={before} removed={removed} after={r.llen('bundles_analysis')}")
…bers

Render the clear-by-filter confirmation page synchronously from the
display hints the frequency chart already has (bucket count, bucket
share, sampled-window total, queue depth) instead of doing a streaming
LRANGE scan on the request thread. Operators confirm based on the
identity-defining (queue, task, repoid, commit) filter, not on a
precise count, so the approximation "will tombstone ~X (~Y% of queue
at depth Z)" is sufficient for the human in the loop. The chunked
background-job worker that ALL three actions (dry-run / keep-one /
clear-all) fan out to still computes the exact match count as it
walks, so the audit log entry stays precise.

Why: the previous synchronous streaming-count walk held the gunicorn
worker for many seconds per click on a 200-500k-deep queue, and a
separate lazy-preview attempt (PR #907) still costs network + cache
round-trips per render. The chart already has these numbers — we just
needed to pass them through.

Chart form (`_frequency_chart.html`):

- Switched from POST to GET (the four `bucket_*` / `total_*` hints
  are display-only, non-secret, idempotent, bookmarkable; CSRF is
  unnecessary).
- Added four hidden inputs: `bucket_count`, `bucket_pct`,
  `total_depth`, `total_visible`.

Confirmation view (`clear_by_filter_view`):

- GET path now does ZERO Redis I/O. Removed the queryset materialise
  + `_CLEAR_BY_FILTER_SAMPLE_SIZE` slice, the `streaming_celery_count`
  call, and the synthetic-target `_substitute_filter_any` wiring (the
  view never calls `celery_broker_clear` directly anymore so the
  synthetic target is moot — the chunked job builds its own filter
  from the URL params).
- Added `_coerce_int` / `_coerce_float` helpers for graceful URL-hint
  parsing; missing / malformed hints fall back to "Approximate count
  not available" rather than 500.
- All three POST actions (`dry_run`, `clear_keep_one`, `clear_all`)
  now uniformly fan out to `start_celery_broker_clear_job` and 302
  to the progress page. Dry-run no longer keeps the synchronous
  shape — the chunked worker reports the exact count there too.

Confirmation template (`clear_by_filter.html`):

- Replaced the "Matched: N message(s)" header + sample-targets table
  with a static filter-rows fieldset (queue / task / repo-with-link /
  truncated-commit-with-tooltip) and an approximate-count callout.
- Callout shape: "Will tombstone approximately X messages (~Y% of
  the queue at depth ~Z)" when all four hints are present, or
  "Approximate count not available — the chunked clear will compute
  the exact match count as it scans" otherwise.
- Kept the typed-confirmation pattern and the three submit actions
  exactly as-is; removed the conditional rendering that hid
  `clear_keep_one` for sub-2 buckets (we no longer know the count
  on render).

Migration note: the chart-form method change (POST → GET) means
operators with stale tabs may submit one final POST after this
deploys. The view continues to read scope params from POST when
present so those POSTs still work.

@calvin-codecov calvin-codecov left a comment


nit: regarding 100K -> 20K value

@thomasrockhu-codecov
Contributor Author

Closing as superseded — the rebased branch produces a zero diff against main. The visible-loading-card UI shipped to main via the merge of #903 (which carried over the loader markup, the external celery_chart_fragment.css/.js, and the extrastyle block originally drafted here), and #906 then templatized the sample-count hint as {{ scan_limit_label }}. The only "delta" this branch still brings is downgrading that templated hint back to a hardcoded 100,000, which we don't want.

No further work needed; closing.

…ueue

`CeleryBrokerClearByFilterViewTest`'s `test_clear_all_*` /
`test_clear_keep_one_*` / `test_legacy_confirm_*` /
`test_task_name_*` tests were racing the chunked clear's daemon
worker. The view returns 302 the moment
`start_celery_broker_clear_job` spawns the thread, but the LSET /
LREM that actually drains
the queue runs in that worker; the tests were asserting on
`self.redis.lrange("celery", ...)` immediately and getting the
pre-clear queue back when the runner was a few ms slower than
usual.

Add a `_post_clear` helper on the test class that submits the POST,
parses the redirect URL's `<job_id>` segment, and `thread.join`s
the matching worker thread (10s timeout, with a fallback to joining
every entry in `_clear_job_threads` if the redirect target isn't a
progress page). All six racy tests are now driven through this
helper.

Surfaced when the pre-#899 pipeline-LSET restoration in this
branch tightened the worker's per-pass timing enough for the race
to bite reliably in CI.
`streaming_celery_count` was a thin dry-run wrapper over
`_streaming_celery_clear` whose only caller was the synchronous
preview path in `clear_by_filter_view`. With the preview now
rendered from chart-supplied display hints (no Redis I/O at all),
the helper is dead code: the chunked clear job calls
`_streaming_celery_clear` directly with `dry_run=True` for its
own count pass, and per-message clear paths use materialised
`CeleryBrokerQueue` rows instead of a streaming triple.

Also tightens the `_FilterWildcard` / `_substitute_filter_any`
docstrings to reflect the single remaining call site
(`_run_celery_broker_clear_job_body`).
Replaces the destructive-action assertions in
`CeleryBrokerClearByFilterViewTest` with a focused contract suite
that pins the new view's behaviour:

* GET with chart hints renders the approximate-count callout and
  makes ZERO Redis calls (asserted via a `MagicMock` proxy on
  `redis_admin_conn.get_connection` whose `method_calls` must be
  empty after the request).
* GET without hints renders "Approximate count not available",
  still no Redis calls.
* GET with malformed hints (non-numeric, divide-by-zero) coerces to
  `None` via `_coerce_int` / `_coerce_float` and falls back to the
  "not available" branch instead of raising.
* GET resolves the `core.Repository` admin link when `repoid` is
  set so the chart's repo cell and the confirmation page link
  round-trip to the same URL.
* All three POST actions (`clear_all`, `clear_keep_one`, `dry_run`)
  return 302 to `…/clear-by-filter/job/<uuid>/` (the chunked
  background-job worker handles the actual clearing).
* Typed-confirm gate still applies to both destructive actions.
* Legacy `action=confirm` shim still 302s to the progress page.
* Refusal of unscoped clears keeps the per-bucket flow non-
  overlapping with `clear_by_scope`.
* Superuser gating preserved.
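The zero-Redis-calls assertion can be sketched with `unittest.mock`. The helper below is a simplified stand-in: it hands the mocked connection to the callable directly instead of patching the project's `get_connection` factory, but the `method_calls == []` check is the same:

```python
from unittest import mock

def assert_no_redis_io(view_fn):
    """Run view_fn against a MagicMock connection and fail on any Redis call."""
    proxy = mock.MagicMock(name="redis_connection")
    view_fn(proxy)
    assert proxy.method_calls == [], f"unexpected Redis calls: {proxy.method_calls}"

# A hint-only render never touches the connection:
assert_no_redis_io(lambda conn: None)
```

A view that slipped in even one `conn.llen("celery")` would trip the assertion, which is how the real test pins the zero-I/O contract.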

End-to-end exercise of the chunked worker (queue actually drained,
audit-log shape, `keep_one` survivor placement, etc.) lives in
`test_celery_broker_clear_job.py`. The destructive-mutation tests
that used to live on `CeleryBrokerClearByFilterViewTest` were
racing the chunked worker thread anyway -- this PR resolves that
race by shifting their coverage to the dedicated chunked-job
test file (where the existing `_wait_for_job` helper waits for
worker completion).

Also drops the two `streaming_celery_count` tests
(`test_streaming_celery_count_returns_full_queue_match_count`,
`test_streaming_celery_count_wildcard_task_name_matches_any_task`)
because the helper itself is gone -- the chunked-job worker calls
`_streaming_celery_clear` directly, and the underlying wildcard
semantic is already covered by
`test_celery_broker_clear_synthetic_target_with_wildcard_task_name_drains_all_matches`.

`test_clear_by_filter_view_dry_run_preview_unchanged` in
`test_celery_broker_clear_job.py` updated to
`test_clear_by_filter_view_dry_run_redirects_to_progress_page`:
asserts the new 302-to-progress shape and that
`start_celery_broker_clear_job` is invoked with `dry_run=True`.
Updates the URLs table row for `clear-by-filter/` and the
"chart-driven targeted clear" section to reflect that the
confirmation page now:

* Renders fully synchronously with **zero Redis I/O** on GET.
* Receives `bucket_count` / `bucket_pct` / `total_visible` /
  `total_depth` from the chart's per-row form, extrapolates
  `count ≈ bucket_count / total_visible * total_depth`, and
  surfaces an "approximately N (~Y% of queue at depth Z)"
  callout.
* Falls back to "Approximate count not available" when the
  hints are absent or malformed (hand-crafted URL).
* Spawns the chunked worker for **all three** POST actions
  (`dry_run` included) -- the destructive 200-render shape is
  gone; everything 302s to the progress page.

Also amends the per-bucket "Clear queue…" bullet under
"Frequency chart" to mention the new hint passthrough so the
docs explain why the page renders instantly even when the
queue is hundreds of thousands deep.
The pipelined `LSET` reply loop in `_streaming_celery_clear`
counts every `Exception` reply (out-of-range — consumer drained
the slot mid-clear) as `matches_drifted`. That branch was the
only patch line left uncovered after this PR's restoration
(codecov reported 85% patch coverage, target 90%, 3 missing
lines).

Add `test_streaming_clear_counts_pipelined_lset_out_of_range_as_drift`:
wraps `redis.pipeline()` to `LTRIM` the queue down
to half its length right before the pipeline flush, so every
LSET against an index past the new tail comes back as an
out-of-range Exception reply. Asserts `total_lset == 25` /
`total_drifted == 25` against a 50-match seed, exercising the
drift counter directly without monkey-patching individual
replies (which would diverge from real fakeredis pipeline
semantics).

Also tag the `(TypeError, ValueError)` fallback around the
`int(removed or 0)` cast on `LREM`'s return as `# pragma: no
cover - defensive`. Real Redis `LREM` always returns int; that
branch only fires for misbehaving test doubles and isn't
worth a contrived test.
Two trivial whitespace nits surfaced by `ruff format --check`:
the helper-block separator after `_coerce_float` and an inlined
`reverse(...)` call in `clear_by_filter_view`. No semantic
change.
…t numbers)

Conflict resolution notes:
- services.py docstrings for `_substitute_filter_any` and the
  chunked clear job: combined #908's "dry-run vs live-clear must
  agree" framing with #909's more general "rule survives a future
  field/truthiness change" framing. Both are accurate for the
  post-merge call sites; the merged wording covers both modes.
- test_celery_broker_queue.py: took #909's structure (no preview
  test class) wholesale because the preview surface is gone.
  #908's `_post_clear` thread-join helper is unnecessary in the
  view tests because the post-#909 view tests only assert on the
  302 redirect URL, not on post-worker queue state. End-to-end
  drain assertions live in `test_celery_broker_clear_job.py`,
  which #908 already wired up with a thread-join helper that
  survives this merge.
@thomasrockhu-codecov thomasrockhu-codecov force-pushed the redis-admin/visible-chart-loader branch from 43a0960 to 4249c1b Compare May 2, 2026 10:59
@thomasrockhu-codecov thomasrockhu-codecov changed the title feat(redis_admin): visible loading card for celery broker frequency chart feat(redis_admin): celery broker clear hardening (consolidates #907/#908/#909) May 2, 2026
@thomasrockhu-codecov
Contributor Author

Reopened with a new purpose: this branch is now the consolidation merge candidate for the celery-broker clear hardening stack (#907/#908/#909). See the updated description for the full breakdown.

To address my own earlier "closing as superseded" comment and the Bugbot "Hardcoded '100,000'" finding: both are accurate against the original 43a0960f7 commit, which has now been hard-reset away. The current redis-admin/visible-chart-loader head consists of origin/main + a merge of #908 + a merge of #909; it carries no hand-edits to change_list.html, so the parameterised {{ scan_limit_label|default:"20,000" }} from #906 is preserved verbatim. The "100,000" literal is gone.

Bugbot will re-run against the new HEAD and that finding should drop.

Comment thread apps/codecov-api/redis_admin/admin.py
… form

Address Bugbot Low finding on the consolidated branch
("Approximate count hints lost on POST re-render"): the four
chart-form display hints (`bucket_count`, `bucket_pct`,
`total_visible`, `total_depth`) are read from `params`, which is
`request.POST` for POST requests. The destructive POST form in
`clear_by_filter.html` didn't carry them as hidden inputs, so a
typed-confirm failure fell through to the render path with hints
absent — the approximate-count callout silently degraded to
"Approximate count not available" even though the operator just
saw a precise number on the GET.

The fix mirrors the four hints into the destructive POST form
as hidden inputs and passes them to the template via four new
`*_hint` context strings (coerced to "" when absent so the
template can use a single uniform `{% if %}` guard).

Adds a regression test
`test_typed_confirm_failure_preserves_chart_hint_callout` that
posts a clear-by-filter request with a wrong typed_confirm but
the four hints in the POST body, asserts that the re-render
keeps the precise approximate-count callout
("828381" / "Will tombstone approximately"), and pins the four
hidden inputs on the re-rendered form so a second submit still
carries the hints.
Comment thread apps/codecov-api/redis_admin/tests/test_celery_broker_queue.py Outdated
… maths

Two CI failures on the consolidated branch HEAD:

1. `test_streaming_clear_verify_before_lset_skips_drifted_entry`
   was a left-over from PR #899 that asserted the verify-before-
   LSET guard skipped drifted slots. PR #908 deletes that guard
   in `_streaming_celery_clear` (the whole point of the
   restoration), so the test now sees `total_lset == 1` for a
   drifted slot and fails. The merge picked up #909's copy of
   the test file (it was the conflict-resolution baseline) which
   re-introduced the test. Apply #908's deletion verbatim and
   keep its replacement comment in place.

2. `test_typed_confirm_failure_preserves_chart_hint_callout`
   asserted `round(78500 / 20000 * 211052) == 828381`. The
   correct value is 828379 (the docstring arithmetic was off-
   by-two). The test logic and the production behaviour are
   both correct; the expected literal is what was wrong.
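The corrected literal falls straight out of the extrapolation the callout uses (`count ≈ bucket_count / total_visible * total_depth`):

```python
bucket_count, total_visible, total_depth = 78_500, 20_000, 211_052

approx = round(bucket_count / total_visible * total_depth)
assert approx == 828_379  # not 828_381; the docstring arithmetic was off by two
```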

@cursor cursor Bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.



    try:
        return int(value)
    except (TypeError, ValueError):
        return None


Redundant _coerce_int duplicates existing helper in families.py

Low Severity

The newly added _coerce_int in admin.py duplicates an existing _coerce_int already defined in families.py within the same redis_admin package. Both convert an input to int | None with the same fallback-to-None semantics. The existing helper could be imported instead of re-implementing the same logic, reducing maintenance surface and divergence risk.



Contributor Author


Disagreeing with this finding — the two helpers are not interchangeable for our use case:

  • families._coerce_int(value: Any) -> int | None is the envelope-meta parser. It's defensive against arbitrary JSON shapes (isinstance(value, bool) reject, isinstance(value, str) and value.isdigit() accept-only-non-negative-digit-strings).
  • admin._coerce_int(value: str | None) -> int | None is the query-string hint parser. It's permissive (plain int(value)), so a hand-edited ?bucket_count=-5 parses as -5 and the math degrades to the "not available" fallback later, instead of being silently rejected at parse time.

The two helpers serve different conceptual surfaces (envelope normalisation vs URL-hint ingestion) and the divergent semantics matter — the families version's .isdigit() check would refuse anything with whitespace, a leading +, or a leading -, which is appropriate for parsing an opaque JSON envelope but the wrong shape for forgiving operator-typed URLs. Keeping them separate to avoid coupling the two unrelated features.
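Side by side, with bodies reconstructed from the description above (the exact `families.py` source is not in this thread, so treat both as assumptions):

```python
def families_coerce_int(value):
    """Envelope-meta parser: defensive, accepts only non-negative digit strings."""
    if isinstance(value, bool):          # bools are ints in Python; reject first
        return None
    if isinstance(value, int):
        return value
    if isinstance(value, str) and value.isdigit():
        return int(value)
    return None

def admin_coerce_int(value):
    """URL-hint parser: permissive; bad values degrade later, at math time."""
    try:
        return int(value)
    except (TypeError, ValueError):
        return None

# Inputs that split the two: "-5" and " 7" pass admin (int() accepts signs
# and strips whitespace) but fail families' .isdigit() gate.
```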

@thomasrockhu-codecov thomasrockhu-codecov added this pull request to the merge queue May 2, 2026
Merged via the queue into main with commit 086bd33 May 2, 2026
40 checks passed
@thomasrockhu-codecov thomasrockhu-codecov deleted the redis-admin/visible-chart-loader branch May 2, 2026 12:24