feat(search_issues): Add flags field to search_issues dataset #7757
sentry[bot] wants to merge 1 commit into master.
Conversation
```python
    def _process_flags(
        self, event_data: IssueEventData, processed: MutableMapping[str, Any]
    ) -> None:
        existing_flags = event_data.get("flags", None)
        flags: Mapping[str, Any] = _as_dict_safe(cast(Dict[str, Any], existing_flags))
        if not existing_flags:
            processed["flags.key"], processed["flags.value"] = [], []
        else:
            processed["flags.key"], processed["flags.value"] = extract_nested(
                flags, lambda s: _unicodify(s) or None
            )
```
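To make the assumed semantics concrete, here is a minimal, self-contained stand-in for what `extract_nested` appears to do in this diff (flatten a mapping into the parallel key/value arrays a ClickHouse `Nested` column expects). The helper name and exact signature are assumptions for illustration, not Snuba's actual implementation:

```python
from typing import Any, Callable, List, Mapping, Optional, Tuple

def extract_nested_sketch(
    flags: Mapping[str, Any],
    transform: Callable[[Any], Optional[str]],
) -> Tuple[List[str], List[Optional[str]]]:
    # Assumed behavior: produce parallel key/value arrays suitable for
    # writing into a Nested(key String, value String) column.
    keys: List[str] = []
    values: List[Optional[str]] = []
    for k, v in flags.items():
        keys.append(k)
        values.append(transform(v))
    return keys, values
```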
Bug: The PR adds logic to process flags for search issues but is missing the required ClickHouse migration to add the `flags` and `_flags_hash_map` columns to the table.
Severity: CRITICAL
Suggested Fix
Add a new database migration for the `search_issues` storage. This migration should add the `flags` nested column and the `_flags_hash_map` materialized column with the `FLAGS_HASH_MAP_COLUMN` expression to the ClickHouse table schema.
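For orientation, the schema change the bot is asking for would look roughly like the DDL below. This is a sketch only: the table name, column names, and hash expression are assumed to mirror Snuba's existing tags hash-map pattern, and the real change must go through Snuba's migration framework (using its `FLAGS_HASH_MAP_COLUMN` constant) rather than raw SQL:

```sql
-- Sketch, not the actual migration: names and the materialized
-- expression are assumptions modeled on the tags hash-map columns.
ALTER TABLE search_issues_local
    ADD COLUMN IF NOT EXISTS `flags.key` Array(String),
    ADD COLUMN IF NOT EXISTS `flags.value` Array(String),
    ADD COLUMN IF NOT EXISTS `_flags_hash_map` Array(UInt64)
        MATERIALIZED arrayMap(
            (k, v) -> cityHash64(concat(
                replaceRegexpAll(k, '(\\=|\\\\)', '\\\\\\1'), '=', v
            )),
            `flags.key`, `flags.value`
        );
```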
Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI
agent.
Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not
valid.
Location: snuba/datasets/processors/search_issues_processor.py#L162-L173
Potential issue: The `_process_flags` method populates `flags.key` and `flags.value` for
search issues. However, the pull request is missing a corresponding database migration
to add the necessary columns to the `search_issues` ClickHouse table. Without the
migration, the `flags` nested column and the `_flags_hash_map` materialized column will
not exist. This will cause any attempt to insert an event with flags to fail due to a
column mismatch error, breaking data ingestion for those events.
Cursor Bugbot has reviewed your changes and found 1 potential issue.
```python
        else:
            processed["flags.key"], processed["flags.value"] = extract_nested(
                flags, lambda s: _unicodify(s) or None
            )
```
Flags extracted from wrong location with wrong format
High Severity
The `_process_flags` method reads flags from `event_data["flags"]` as a flat dict, but Sentry's event payload stores feature flags inside `data.contexts.flags.values` as a list of `{"flag": "name", "result": value}` objects. This is confirmed by the Rust errors processor and its tests in `test_errors_processor.py`. Since there's no `flags` key at the top level of event data, `event_data.get("flags", None)` will always return `None`, and `flags.key`/`flags.value` will always be empty lists, making the flags feature silently non-functional for search issues.
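A hedged sketch of reading flags from the location Bugbot describes. `extract_flags_from_contexts` is a hypothetical helper name for illustration, not Snuba's actual code; it assumes the `contexts.flags.values` payload shape stated above:

```python
from typing import Any, List, Mapping, Tuple

def extract_flags_from_contexts(
    event_data: Mapping[str, Any],
) -> Tuple[List[str], List[str]]:
    # Feature flags live under contexts.flags.values as a list of
    # {"flag": <name>, "result": <value>} objects (assumed payload shape).
    values = (
        event_data.get("contexts", {}).get("flags", {}).get("values", [])
    )
    keys: List[str] = []
    vals: List[str] = []
    for entry in values:
        flag = entry.get("flag")
        if flag is None:
            continue
        keys.append(str(flag))
        vals.append(str(entry.get("result")))
    return keys, vals
```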


Fixes SNUBA-9VD. The issue was that the `search_issues` entity lacks a `flags_key` mapping, causing Snuba query validation failure.
This fix was generated by Seer in Sentry, triggered by pierre.massat@sentry.io. Run ID: 10536125
Legal Boilerplate
Look, I get it. The entity doing business as "Sentry" was incorporated in the State of Delaware in 2015 as Functional Software, Inc. and is gonna need some rights from me in order to utilize my contributions in this here PR. So here's the deal: I retain all rights, title and interest in and to my contributions, and by keeping this boilerplate intact I confirm that Sentry can use, modify, copy, and redistribute my contributions, under Sentry's choice of terms.