TEST: Automation safety check - DO NOT MERGE #3
Conversation
Fix broken URL and update SentinelOne solution to version 3.0.7
Co-authored-by: RamboV <68921481+RamboV@users.noreply.github.com>
Co-authored-by: secpfe <20690457+secpfe@users.noreply.github.com>
Corrected typos in alert display name, updated API endpoint and increased page size in polling config, fixed JSON body formatting in playbook deployment, and improved post-deployment instructions. Also updated package files to reflect these changes.
…roFox/Data-Connectors/CTI/aiohttp-3.13.3 Bump aiohttp from 3.12.14 to 3.13.3 in /Solutions/ZeroFox/Data Connectors/CTI
…pid7InsightVM/Data-Connectors/aiohttp-3.13.3 Bump aiohttp from 3.12.14 to 3.13.3 in /Solutions/Rapid7InsightVM/Data Connectors
Bumps [urllib3](https://github.com/urllib3/urllib3) from 2.6.0 to 2.6.3. - [Release notes](https://github.com/urllib3/urllib3/releases) - [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst) - [Commits](urllib3/urllib3@2.6.0...2.6.3) --- updated-dependencies: - dependency-name: urllib3 dependency-version: 2.6.3 dependency-type: direct:production ... Signed-off-by: dependabot[bot] <support@github.com>
Input validation additions
…form/Data-Connectors/urllib3-2.6.3
…ET-Protect-Platform/Data-Connectors/urllib3-2.6.3 Bump urllib3 from 2.6.0 to 2.6.3 in /Solutions/ESET Protect Platform/Data Connectors
Bump Amazon Web Services solution version to 3.0.8, update analytic rule versions, and refine text descriptions in createUiDefinition.json and mainTemplate.json for clarity and accuracy. Also, add the 3.0.8.zip package.
Bumped PaloAlto-PAN-OS solution version to 3.0.11, updated analytic rule and playbook descriptions, and replaced a reference blog URL in beaconing detection text. Also updated Okta Single Sign-On analytic rule content product ID to version 1.1.2.
Updated AWSAthena solution version to 3.0.2 and changed the FunctionAppConnector's Python runtime from 3.9 to 3.12 in deployment templates. Added release notes for the update and included the new package zip.
…ti-75e6bc5210 Bump js-yaml
Updated analytic rule logic to use ConnectionID instead of NetworkSessionId in ZscalerZPAUnexpectedSessionDuration.yaml. Incremented analytic rule and solution versions to 3.0.4 in mainTemplate.json and added the corresponding 3.0.4.zip package. Also made minor formatting improvements to entityMappings and requiredDataConnectors in mainTemplate.json.
…13179 Repackaged : AzureDevOpsAuditing
Renamed preview image files from 'Dark' and 'Light' to 'Black' and 'White' variants for BeyondTrustPMCloud workbooks. Updated WorkbooksMetadata.json to reference the new image filenames for consistency.
Added the BeyondTrustLogo.svg file to the Workbooks/Images/Logos directory for use as a branding or UI asset.
…pp playbook
Added metadata section to the Function App nested mainTemplate as required by Content Hub:
- title: TacitRed Defender TI - Function App
- description: Azure Function for processing TacitRed threat intelligence
- prerequisites: TacitRed API Key and Sentinel workspace
- postDeployment: Configuration steps after deployment
- support: Partner tier with Data443 contact info
- author: Data443 Risk Mitigation, Inc.
- tags: ThreatIntelligence, DefenderTI, AzureFunction, TacitRed
Note: The Logic App playbook already had metadata.
Reference: Solutions/Microsoft Defender XDR/Playbooks/AttackSimulatorTrainingNonReporters/azuredeploy.json
Rename BeyondTrustPMCloud preview images
…visibility
- Move Function App from Playbooks/CustomConnector/ to Playbooks/ (matches Fortinet pattern)
- Remove functionCode.zip from Package folder per reviewer request
- Remove duplicate workspaceResourceId variable that caused V3 packaging error
- Update Data/Solution file with new playbook paths
- Re-run V3 packaging to regenerate mainTemplate.json
…eviewer pattern
Applied same changes as Microsoft reviewer made to PR Azure#13267:
- Updated support.tier to lowercase 'partner'
- Updated lastUpdateTime to 2026-01-22
- Metadata already at END of nested mainTemplates (correct pattern)
Reference: PR Azure#13267 (TacitRed-SentinelOne) reviewer changes
…k metadata pattern
Per Microsoft reviewer feedback (playbook not visible in Content Hub):
- Changed metadata resource apiVersion from 2025-09-01 to 2022-01-01-preview
- Fixed first metadata resource name to use single brackets + parameters('workspace')
- Updated displayName to use pb- prefix pattern: pb-tacitred-to-defender-ti
- Updated description to match displayName pattern
- Updated support.tier to lowercase 'partner'
Reference: PR Azure#13267 (TacitRed-SentinelOne) which is working correctly
Per Azure MCP Server verification and SentinelOne deep dive:
- Changed parentId from double brackets [[variables(...)] to single brackets [variables(...)]
- This matches the working SentinelOne pattern exactly
- API version 2022-01-01-preview confirmed correct for nested metadata resources
Reference: PR Azure#13267 (TacitRed-SentinelOne) working pattern
…nelOne pattern
Changed API versions from 2025-09-01 to 2023-04-01-preview for:
- contentTemplates (Function App and Playbook)
- contentPackages
This matches the working SentinelOne and CrowdStrike solutions that are successfully loading playbooks in Content Hub.
Reference: PR Azure#13267 (SentinelOne) and PR Azure#13269 (CrowdStrike) as requested by Microsoft reviewer
…sources
Inside contentTemplates nested mainTemplate, ARM expressions need double brackets [[ for proper escaping. Fixed all variable references in both Function App and Playbook metadata resources. This was causing deployment error: 'Unable to process template language expressions for resource... The template variable play details'
Reference: Working SentinelOne pattern uses [[variables(...)]]
…urces
Per CiscoMeraki reference pattern, metadata resources inside nested contentTemplates should use SINGLE brackets [] for variables defined in the outer template scope. Double brackets [[]] caused variables to be passed as literal strings instead of being resolved. Fixed both Function App and Playbook metadata resources to match the working CiscoMeraki solution pattern.
Reference: Solutions/CiscoMeraki/Package/mainTemplate.json
SentinelOne uses a MIXED bracket pattern in metadata resources:
- name, parentId: DOUBLE brackets [[...]]
- contentId, version, sourceId, email: SINGLE brackets [...]
Applied this exact pattern to both Function App and Playbook metadata resources to match the working SentinelOne solution.
Reference: data443-fork/feature/tacitred-sentinelone-v1
V3 tooling (createSolutionV3.ps1) regenerated the mainTemplate.json
with correct metadata structure:
1. AzureFunction metadata: uses [[variables('workspace-name')]]
and [[variables('playbookId1')] for name/parentId
2. Playbook metadata: uses single brackets [concat()] and
[variables('playbookId2')] for name/parentId
3. Fixed displayName from 'pb-tacitred-to-defender-ti' to
'TacitRedToDefenderTI'
ARM-TTK passes except for known contentProductId/id exceptions
that Microsoft CI skips.
…ontent Hub loading
Root cause: Playbook nested template used reference() to get Function App URL dynamically. Content Hub fails to load playbook templates that use reference() for resources that don't exist at validation time.
Fix: Replace reference() with FunctionAppUrl parameter that user provides after deploying Function App first.
Changes:
- Playbooks/TacitRedToDefenderTI/azuredeploy.json: Replace FunctionAppName parameter with FunctionAppUrl, remove reference() call, update workflow to use @parameters('FunctionAppUrl')
- Package/mainTemplate.json: Mirror changes in nested playbook template
- Package/createUiDefinition.json: Fix docs.microsoft.com -> learn.microsoft.com
- Package/3.0.0.zip: Regenerated
Ref: CiscoMeraki/Playbooks pattern uses user-provided parameters, not reference()
ARM-TTK: 49/49 passed
🔒 Security Approval Required This fork PR requires manual approval before automated testing can run. For security, a maintainer must:
Note: If new commits are added later, simply remove and re-add the 🤖 Automated security check • Created: 2026-01-30T03:25:07.274Z
Hi @mazamizo21 This is a TEST COMMENT for verifying the automation system. Feedback (FAKE)
This is a safe test - no production work needed.
@@ -32,7 +38,9 @@
             secret_name (str): secret name to get its value.
         """
         try:
-            logging.info("Retrieving secret {} from {}.".format(secret_name, self.keyvault_name))
+            logging.info(
+                "Retrieving secret {} from {}.".format(secret_name, self.keyvault_name)
Check failure
Code scanning / CodeQL
Clear-text logging of sensitive information (High)

Copilot Autofix:
In general, to fix clear-text logging of sensitive information, remove sensitive values from log messages or replace them with non-sensitive representations (such as static descriptions, redacted tokens, or irreversible hashes where operationally necessary). The goal is to preserve enough context for debugging/operations without exposing secret material or identifiers that might be considered sensitive.
For this specific file, the best fix with minimal functional impact is:
- Stop logging secret_name and retrieved_secret.name in clear text.
- Keep the log messages, but replace the dynamic secret identifier with a redacted placeholder or a generic description, while still logging non-sensitive context like the key vault name.
- Do not change how secrets are retrieved or stored; only adjust the log messages.

Concretely:

- In get_keyvault_secret, change the log at line 42 from "Retrieving secret {} from {}.".format(secret_name, self.keyvault_name) to a message that does not include secret_name, e.g. "Retrieving secret from key vault '%s'." % self.keyvault_name.
- In get_keyvault_secret, change the log at line 45 from "Retrieved secret value for {}.".format(retrieved_secret.name) to something like "Retrieved secret value from key vault '%s'." % self.keyvault_name.
- In set_keyvault_secret, change the log at line 60 from "Creating or updating a secret '{}'.".format(secret_name) to a generic message without the name, e.g. "Creating or updating a secret in key vault '%s'." % self.keyvault_name, and similarly at line 62.
No new imports or methods are required; only the string formatting of existing logging calls needs to be adjusted.
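As a standalone sketch, the redacted-logging pattern described above might look like the following. The class and method names are illustrative stand-ins for the connector's Key Vault wrapper, and the client argument stands in for an Azure SecretClient; only the log messages matter here, and they name the vault, never the secret.

```python
import logging


class KeyVaultSecretsSketch:
    """Hypothetical wrapper illustrating the redacted-logging fix.

    `client` stands in for an azure.keyvault.secrets.SecretClient;
    the log lines include only non-sensitive context (the vault name),
    never the secret identifier or value.
    """

    def __init__(self, client, keyvault_name):
        self.client = client
        self.keyvault_name = keyvault_name

    def get_keyvault_secret(self, secret_name):
        # Log the vault name, not secret_name, to avoid clear-text
        # logging of sensitive identifiers.
        logging.info("Retrieving secret from key vault '%s'.", self.keyvault_name)
        retrieved_secret = self.client.get_secret(secret_name)
        logging.info("Retrieved secret value from key vault '%s'.", self.keyvault_name)
        return retrieved_secret.value
```

The retrieval logic is untouched; only the string formatting of the logging calls differs from the flagged code.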
@@ -39,10 +39,12 @@
         """
         try:
             logging.info(
-                "Retrieving secret {} from {}.".format(secret_name, self.keyvault_name)
+                "Retrieving secret from key vault '%s'." % self.keyvault_name
             )
             retrieved_secret = self.client.get_secret(secret_name)
-            logging.info("Retrieved secret value for {}.".format(retrieved_secret.name))
+            logging.info(
+                "Retrieved secret value from key vault '%s'." % self.keyvault_name
+            )
             return retrieved_secret.value
         except ResourceNotFoundError as err:

@@ -57,10 +57,14 @@
             secret_name (str): secret name to update its value or create it.
             secret_value (str): secret value to be set as value of given secret name.
         """
-        logging.info("Creating or updating a secret '{}'.".format(secret_name))
+        logging.info(
+            "Creating or updating a secret in key vault '%s'." % self.keyvault_name
+        )
         self.client.set_secret(secret_name, secret_value)
-        logging.info("Secret created successfully : '{}' .".format(secret_name))
+        logging.info(
+            "Secret created or updated successfully in key vault '%s'."
+            % self.keyvault_name
+        )

    def get_properties_list_of_secrets(self):
        """To get list of secrets stored in keyvault with its properties.
Check notice
Code scanning / CodeQL
Empty except (Note)

Copilot Autofix:
In general, to fix “empty except” issues, either (a) replace them with explicit condition checks that avoid raising the exception in the first place, or (b) add appropriate handling such as logging, re‑raising, or compensating actions, and document that ignoring is intentional if that is truly desired.
Here, the best fix that preserves existing functionality is to avoid raising IndexError at all by checking the row length before accessing each optional column. This keeps the behavior ("if column is missing, do nothing") identical, but removes the need for try/except entirely. To make the code self-documenting, we can add a brief comment indicating these are optional columns which may not be present in all rows.

Concretely, in Solutions/CiscoUmbrella/Data Connectors/ciscoUmbrellaDataConn/__init__.py, in the region after the event = { ... } dict is created (lines 849-880), replace each try: event[...] = row[N]; except IndexError: pass with an if len(row) > N: conditional assignment. No new imports are required, and existing logging remains untouched.
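A minimal runnable sketch of that guard pattern follows. The helper name is hypothetical, and the three field/index pairs are a sample of the fields flagged in this file, just enough to show the len(row) check replacing try/except IndexError: pass.

```python
def add_optional_columns(event, row):
    """Hypothetical helper: add optional CSV columns to the event dict
    only when the row is long enough, instead of catching IndexError."""
    # Optional columns may not be present in all rows; add them only if available.
    optional_columns = {
        30: 'transaction id',
        31: 'block reason',
        32: 'application port',
    }
    for index, field in optional_columns.items():
        if len(row) > index:
            event[field] = row[index]
    return event
```

The behavior matches the exception-based version: a short row simply leaves the key unset.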
@@ -878,42 +878,25 @@
             'disk encryption': row[28],
             'anti malware agents': row[29]
         }
-        try:
+        # Optional columns may not be present in all rows; add them only if available.
+        if len(row) > 30:
             event['transaction id'] = row[30]
-        except IndexError:
-            pass
-        try:
+        if len(row) > 31:
             event['block reason'] = row[31]
-        except IndexError:
-            pass
-        try:
+        if len(row) > 32:
             event['application port'] = row[32]
-        except IndexError:
-            pass
-        try:
+        if len(row) > 33:
             event['application protocol'] = row[33]
-        except IndexError:
-            pass
-        try:
+        if len(row) > 34:
             event['tunnel type'] = row[34]
-        except IndexError:
-            pass
-        try:
+        if len(row) > 35:
             event['secure client version'] = row[35]
-        except IndexError:
-            pass
-        try:
+        if len(row) > 36:
             event['possible match ruleset id'] = row[36]
-        except IndexError:
-            pass
-        try:
+        if len(row) > 37:
             event['possible match rule id'] = row[37]
-        except IndexError:
-            pass
-        try:
+        if len(row) > 38:
             event['possible match posture'] = row[38]
-        except IndexError:
-            pass
         try:
             event['source process id'] = row[39]
         except IndexError:
Check notice
Code scanning / CodeQL
Empty except (Note)

Copilot Autofix:
Generally, the right fix is to avoid using empty except blocks. Instead, either (a) prevent the exception by checking bounds before accessing row[index], or (b) handle the exception in a meaningful way (e.g., logging or adding a comment to document why it is safe to ignore).
Here, we want to preserve existing behavior: if the row does not have that index, simply don’t add that key to event. The cleanest change without altering functionality is to replace each try/except IndexError: pass with a conditional that only reads row[index] when it exists. This removes the empty except and makes the intent clear. Concretely, in Solutions/CiscoUmbrella/Data Connectors/ciscoUmbrellaDataConn/__init__.py around the shown snippet, for each block like:
    try:
        event['transaction id'] = row[30]
    except IndexError:
        pass

we should replace it with:

    if len(row) > 30:
        event['transaction id'] = row[30]

and similarly for all subsequent optional fields (indices 31-38). No new imports or helper methods are required; we just use len(row) checks. This maintains the same behavior (field present only if the column exists) while satisfying CodeQL by removing the empty except blocks.
@@ -878,42 +878,24 @@
             'disk encryption': row[28],
             'anti malware agents': row[29]
         }
-        try:
+        if len(row) > 30:
             event['transaction id'] = row[30]
-        except IndexError:
-            pass
-        try:
+        if len(row) > 31:
             event['block reason'] = row[31]
-        except IndexError:
-            pass
-        try:
+        if len(row) > 32:
             event['application port'] = row[32]
-        except IndexError:
-            pass
-        try:
+        if len(row) > 33:
             event['application protocol'] = row[33]
-        except IndexError:
-            pass
-        try:
+        if len(row) > 34:
             event['tunnel type'] = row[34]
-        except IndexError:
-            pass
-        try:
+        if len(row) > 35:
             event['secure client version'] = row[35]
-        except IndexError:
-            pass
-        try:
+        if len(row) > 36:
             event['possible match ruleset id'] = row[36]
-        except IndexError:
-            pass
-        try:
+        if len(row) > 37:
             event['possible match rule id'] = row[37]
-        except IndexError:
-            pass
-        try:
+        if len(row) > 38:
             event['possible match posture'] = row[38]
-        except IndexError:
-            pass
         try:
             event['source process id'] = row[39]
         except IndexError:
Check notice
Code scanning / CodeQL
Empty except (Note)

Copilot Autofix:
In general, to fix empty except blocks, either remove the try/except by guarding the code (e.g., checking length before indexing), or add appropriate handling: logging, re‑raising, or otherwise acting on the exception. For cases where ignoring the exception is genuinely intended, add a clear comment explaining why, and optionally log at debug level.
Here, all the try/except IndexError: pass blocks are doing the same thing: “if this column exists, add it to event; otherwise, skip it.” We can preserve that behavior while avoiding empty except blocks by replacing each try/except with an index‑range guard: if len(row) > N: event['field'] = row[N]. This keeps the functionality (missing columns are simply not set) but removes exception‑based control flow and the empty handlers.
Concretely, in Solutions/CiscoUmbrella/Data Connectors/ciscoUmbrellaDataConn/__init__.py, in the region starting around line 881 where event is built from row, replace each try/except IndexError: pass pair for indices 30 through 56 with a simple if len(row) > index: assignment. No new imports or helper methods are required; everything uses existing variables (row, event) and standard Python operations.
Check notice
Code scanning / CodeQL
Empty except (Note)

Copilot Autofix:
To fix the problem, we should stop using empty except IndexError: pass blocks for optional CSV fields and instead guard access to row[...] with an index check. This preserves the existing behavior (if the column is missing, the field is simply not added to event) while removing silent exception handling and making the code’s intent clearer.
Concretely, in Solutions/CiscoUmbrella/Data Connectors/ciscoUmbrellaDataConn/__init__.py, in the method that builds event from row, replace each try block that assigns from a particular index with a conditional if len(row) > N: before the assignment, for indices 30 through 36 (and any further ones in the omitted section, if present in the same pattern). No new imports are required because we only use basic Python operations. The rest of the logic in the method and file remains unchanged.
@@ -878,34 +878,20 @@
             'disk encryption': row[28],
             'anti malware agents': row[29]
         }
-        try:
+        if len(row) > 30:
             event['transaction id'] = row[30]
-        except IndexError:
-            pass
-        try:
+        if len(row) > 31:
             event['block reason'] = row[31]
-        except IndexError:
-            pass
-        try:
+        if len(row) > 32:
             event['application port'] = row[32]
-        except IndexError:
-            pass
-        try:
+        if len(row) > 33:
             event['application protocol'] = row[33]
-        except IndexError:
-            pass
-        try:
+        if len(row) > 34:
             event['tunnel type'] = row[34]
-        except IndexError:
-            pass
-        try:
+        if len(row) > 35:
             event['secure client version'] = row[35]
-        except IndexError:
-            pass
-        try:
+        if len(row) > 36:
             event['possible match ruleset id'] = row[36]
-        except IndexError:
-            pass
         try:
             event['possible match rule id'] = row[37]
         except IndexError:
Check notice
Code scanning / CodeQL
Empty except (Note)

Copilot Autofix:
In general, the issue should be fixed by avoiding empty except blocks. Instead of relying on exceptions for control flow and then ignoring them, the code should explicitly check conditions that might fail (here, the row length) before accessing elements, or otherwise record that data is missing (e.g., set a default value). This preserves current behavior (no exception raised for short rows) but removes the silent swallowing of errors.
The best way to fix this specific case without changing functionality is to replace each try/except IndexError: pass block with a simple length check before assigning the column to the event dict. For instance, instead of:
    try:
        event['Traffic Source'] = row[30]
    except IndexError:
        pass

we check:

    if len(row) > 30:
        event['Traffic Source'] = row[30]

Functionally this is the same: if the index is out of range, we simply skip setting that key. However, we no longer use exceptions as control flow and no longer have an empty except block. We should apply this to all the optional columns at indices 30-36. No new imports or helper methods are required; we only update the body of parse_csv_cdfw where these lines occur.

Concretely, in Solutions/CiscoUmbrella/Data Connectors/ciscoUmbrellaDataConn/__init__.py, within the parse_csv_cdfw method around lines 706-733, replace each try/except IndexError: pass sequence with an if len(row) > index: guard assigning the value. The rest of the method and file remains unchanged.
@@ -703,34 +703,20 @@
             event['CASI Category IDs'] = row[29]
         except IndexError:
             pass
-        try:
+        if len(row) > 30:
             event['Traffic Source'] = row[30]
-        except IndexError:
-            pass
-        try:
+        if len(row) > 31:
             event['Content Category IDs'] = row[31]
-        except IndexError:
-            pass
-        try:
+        if len(row) > 32:
             event['Content Category List IDs'] = row[32]
-        except IndexError:
-            pass
-        try:
+        if len(row) > 33:
             event['Organization ID'] = row[33]
-        except IndexError:
-            pass
-        try:
+        if len(row) > 34:
             event['Egress IP'] = row[34]
-        except IndexError:
-            pass
-        try:
+        if len(row) > 35:
             event['Egress'] = row[35]
-        except IndexError:
-            pass
-        try:
+        if len(row) > 36:
             event['Event Correlation ID'] = row[36]
-        except IndexError:
-            pass
     else:
         event = {"message": convert_list_to_csv_line(row)}
     event['EventType'] = 'cloudfirewalllogs'
         logging.info('Call to get AWS SSM Inventory successful.')
         base_url = req.url.split('?')[0]
Check notice
Code scanning / CodeQL
Unused local variable (Note)

Copilot Autofix:
In general, unused local variables should either be removed or renamed to a clearly "unused" name (such as _ or unused_base_url) if they are intentionally there for documentation. Here, the variable base_url is calculated but not used at all, and the rest of the logic (building response and returning the HTTP response) does not depend on it. Therefore, the best fix without changing existing functionality is to delete the base_url assignment line.
Concretely, in Solutions/AWS Systems Manager/Playbooks/CustomConnector/AWS_SSM_FunctionAppConnector/GetInventory/__init__.py, remove line 100:
    base_url = req.url.split('?')[0]

No new methods, imports, or definitions are needed, and no other code in the snippet needs adjustment. This will eliminate the unused variable and the CodeQL warning while preserving the current behavior (which already returns "nextLink": None).
@@ -97,8 +97,6 @@

     logging.info('Pagination handling completed.')

-    base_url = req.url.split('?')[0]

     response = {
         "value": all_entities,
         "nextLink": None
         # Version 14 — The same as version 13, but adds the Event correlation ID field to Proxy logs
         try:
             event['Event correlation ID'] = row[54]
         except IndexError:
Check notice
Code scanning / CodeQL
Empty except (Note)

Copilot Autofix:
In general, the problem is fixed by avoiding empty except blocks: either prevent the exception using explicit checks (e.g., check the row length before indexing) or handle it meaningfully (typically by logging a warning or debug message). For expected/optional missing fields, it’s better to use simple bounds checks or default values rather than catching IndexError. For data conversion, we should narrow the exception type and log failures, so that only bad input is caught and not unrelated bugs.
The best low-impact fix here is:
- For the optional CSV columns (row[48] through row[54]), remove the try/except IndexError and instead check len(row) before accessing each index. This preserves behavior (fields are only set when available) but removes the empty except and makes the logic clearer.
- For the integer conversion loop, restrict the exception to ValueError and TypeError, and log a message when conversion fails instead of silently passing. This keeps behavior similar (field remains unconverted if invalid) while making failures visible.

Concretely, in Solutions/CiscoUmbrella/Data Connectors/ciscoUmbrellaDataConn/__init__.py:

- Replace the series of try: event['...'] = row[XX] / except IndexError: pass blocks between lines ~523-553 with conditional assignments such as if len(row) > 48: event['Organization ID'] = row[48], etc. We already have access to row, and len(row) is cheap.
- Update the for field in int_fields: block so that the except catches ValueError and TypeError, and logs using the existing logging module (already imported at line 3). For example: logging.debug("Failed to convert %s to int: %r", field, event.get(field)). This does not change output structure but adds observability.
No new imports are needed; we can reuse the existing logging import.
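Sketched in isolation, the narrowed-exception variant could look like the helper below. The function is hypothetical, and catching KeyError alongside ValueError/TypeError is an assumption of this sketch: the original bare except Exception also hid missing fields, so dropping KeyError would change behavior when a field is absent.

```python
import logging


def coerce_int_fields(event, int_fields=('requestSize', 'responseSize')):
    """Hypothetical sketch: convert known numeric fields in place,
    logging failures at debug level instead of silently passing."""
    for field in int_fields:
        try:
            event[field] = int(event[field])
        except (ValueError, TypeError, KeyError):
            # KeyError is included as an assumption: the original bare
            # `except Exception` also swallowed missing fields.
            logging.debug("Failed to convert %s to int: %r", field, event.get(field))
    return event
```

Invalid or missing values are left untouched, as before; the only new behavior is the debug log line.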
@@ -520,37 +520,23 @@
             event['Warn Categories'] = row[47]
         except IndexError:
             pass
-        try:
+        if len(row) > 48:
             event['Organization ID'] = row[48]
-        except IndexError:
-            pass
-        try:
+        if len(row) > 49:
             event['Application Entity Name'] = row[49]
-        except IndexError:
-            pass
-        try:
+        if len(row) > 50:
             event['Application Entity Category'] = row[50]
-        except IndexError:
-            pass
         # Version 12 — The same as version 11, but adds the Egress IP field to Proxy logs.
-        try:
+        if len(row) > 51:
             event['Egress IP'] = row[51]
-        except IndexError:
-            pass
         # Version 13 — The same as version 12, but adds the AI Model Name, AI Supply Chain Categories field to Proxy logs
-        try:
+        if len(row) > 52:
             event['AI Model Name'] = row[52]
-        except IndexError:
-            pass
-        try:
+        if len(row) > 53:
             event['AI Supply Chain Categories'] = row[53]
-        except IndexError:
-            pass
         # Version 14 — The same as version 13, but adds the Event correlation ID field to Proxy logs
-        try:
+        if len(row) > 54:
             event['Event correlation ID'] = row[54]
-        except IndexError:
-            pass
         int_fields = [
             'requestSize',
             'responseSize',
@@ -560,8 +536,12 @@
         for field in int_fields:
             try:
                 event[field] = int(event[field])
-            except Exception:
-                pass
+            except (ValueError, TypeError):
+                logging.debug(
+                    "Failed to convert field '%s' to int; value was %r",
+                    field,
+                    event.get(field)
+                )
         else:
             event = {"message": convert_list_to_csv_line(row)}
         event = self.convert_empty_string_to_null_values(event)
             pass
         try:
             event['AI Supply Chain Categories'] = row[53]
         except IndexError:
Check notice
Code scanning / CodeQL
Empty except (Note)

Copilot Autofix:
In general, an "empty except" should either be replaced by more specific handling (logging, default values, clean fallback) or be documented with a clear comment explaining why it is safe to ignore. For this code, the best non‑breaking fix is to retain the behavior of skipping missing optional fields while adding debug‑level logging to record that an IndexError occurred. Likewise, for the integer conversion loop, we can log when a conversion fails but still keep the existing behavior of leaving the field unset or non‑integer.
Concretely, in Solutions/CiscoUmbrella/Data Connectors/ciscoUmbrellaDataConn/__init__.py, within the shown region around lines 521–564, we should replace the except IndexError: pass blocks with except IndexError as e: blocks that emit a logging.debug message mentioning the field name and index. For the except Exception: pass around int(event[field]), we should catch as Exception as e and log a debug message indicating which field failed to convert and why. This uses the already‑imported logging module, requires no new imports, and does not change functional behavior because failures were previously ignored entirely; now they are still ignored but recorded.
| @@ -518,39 +518,39 @@ | ||
| pass | ||
| try: | ||
| event['Warn Categories'] = row[47] | ||
| except IndexError: | ||
| pass | ||
| except IndexError as e: | ||
| logging.debug("Optional field 'Account ID' (row[47]) missing in CSV row: %s", e) | ||
         try:
             event['Organization ID'] = row[48]
-        except IndexError:
-            pass
+        except IndexError as e:
+            logging.debug("Optional field 'Organization ID' (row[48]) missing in CSV row: %s", e)
         try:
             event['Application Entity Name'] = row[49]
-        except IndexError:
-            pass
+        except IndexError as e:
+            logging.debug("Optional field 'Application Entity Name' (row[49]) missing in CSV row: %s", e)
         try:
             event['Application Entity Category'] = row[50]
-        except IndexError:
-            pass
+        except IndexError as e:
+            logging.debug("Optional field 'Application Entity Category' (row[50]) missing in CSV row: %s", e)
         # Version 12 — The same as version 11, but adds the Egress IP field to Proxy logs.
         try:
             event['Egress IP'] = row[51]
-        except IndexError:
-            pass
+        except IndexError as e:
+            logging.debug("Optional field 'Egress IP' (row[51]) missing in CSV row: %s", e)
         # Version 13 — The same as version 12, but adds the AI Model Name, AI Supply Chain Categories field to Proxy logs
         try:
             event['AI Model Name'] = row[52]
-        except IndexError:
-            pass
+        except IndexError as e:
+            logging.debug("Optional field 'AI Model Name' (row[52]) missing in CSV row: %s", e)
         try:
             event['AI Supply Chain Categories'] = row[53]
-        except IndexError:
-            pass
+        except IndexError as e:
+            logging.debug("Optional field 'AI Supply Chain Categories' (row[53]) missing in CSV row: %s", e)
         # Version 14 — The same as version 13, but adds the Event correlation ID field to Proxy logs
         try:
             event['Event correlation ID'] = row[54]
-        except IndexError:
-            pass
+        except IndexError as e:
+            logging.debug("Optional field 'Event correlation ID' (row[54]) missing in CSV row: %s", e)
         int_fields = [
             'requestSize',
             'responseSize',
@@ -560,8 +543,8 @@
         for field in int_fields:
             try:
                 event[field] = int(event[field])
-            except Exception:
-                pass
+            except Exception as e:
+                logging.debug("Failed to convert field '%s' to int in event %s: %s", field, event, e)
     else:
         event = {"message": convert_list_to_csv_line(row)}
     event = self.convert_empty_string_to_null_values(event)
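The int-coercion loop in the hunk above can also be factored into a small helper that keeps the original value when conversion fails, which avoids a broad `except Exception` around the assignment; a minimal sketch (the `to_int_or_keep` name is hypothetical, not part of the connector):

```python
def to_int_or_keep(value):
    """Return int(value) when possible; otherwise keep the original value."""
    try:
        return int(value)
    except (TypeError, ValueError):
        return value

# Mirrors the connector's int_fields loop: sizes arrive as CSV strings,
# and empty or malformed values are left untouched instead of raising.
event = {'requestSize': '1024', 'responseSize': ''}
for field in ('requestSize', 'responseSize'):
    event[field] = to_int_or_keep(event[field])
print(event)  # {'requestSize': 1024, 'responseSize': ''}
```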
        # Version 13 — The same as version 12, but adds the AI Model Name, AI Supply Chain Categories field to Proxy logs
        try:
            event['AI Model Name'] = row[52]
        except IndexError:
Check notice · Code scanning / CodeQL · Empty except (Note)
Copilot Autofix (AI) · about 2 months ago
In general, to fix empty except blocks, the handler should either (1) log the exception, (2) take corrective action (e.g., set a default value), or (3) explicitly document why it is safe to ignore the exception. Here we want to preserve the existing behavior that missing columns do not break ingestion, while avoiding a bare pass.
The best approach with minimal functional change is: when an IndexError happens (i.e., the CSV row is shorter than expected), explicitly set the corresponding event field to None (or some sentinel like an empty string) instead of doing nothing. This keeps downstream behavior similar (field may be absent or "falsey") but is clearer and removes the empty handler. Alternatively, we could simply add a comment explaining intentional ignore; however, improving the behavior slightly by assigning None is more robust and still consistent with treating these fields as optional. No new imports are required; we only modify the except IndexError bodies in the provided region of Solutions/CiscoUmbrella/Data Connectors/ciscoUmbrellaDataConn/__init__.py.
Concretely, for each block:

    try:
        event['Detected Response File Type'] = row[46]
    except IndexError:
        pass

we will change it to:

    try:
        event['Detected Response File Type'] = row[46]
    except IndexError:
        event['Detected Response File Type'] = None

and do the same for the surrounding fields (Warn Categories, Organization ID, Application Entity Name, Application Entity Category, Egress IP, AI Model Name, AI Supply Chain Categories, Event correlation ID). This eliminates empty except blocks without altering control flow or adding dependencies.
@@ -515,42 +515,42 @@
         try:
             event['Detected Response File Type'] = row[46]
         except IndexError:
-            pass
+            event['Detected Response File Type'] = None
         try:
             event['Warn Categories'] = row[47]
         except IndexError:
-            pass
+            event['Warn Categories'] = None
         try:
             event['Organization ID'] = row[48]
         except IndexError:
-            pass
+            event['Organization ID'] = None
         try:
             event['Application Entity Name'] = row[49]
         except IndexError:
-            pass
+            event['Application Entity Name'] = None
         try:
             event['Application Entity Category'] = row[50]
         except IndexError:
-            pass
+            event['Application Entity Category'] = None
         # Version 12 — The same as version 11, but adds the Egress IP field to Proxy logs.
         try:
             event['Egress IP'] = row[51]
         except IndexError:
-            pass
+            event['Egress IP'] = None
         # Version 13 — The same as version 12, but adds the AI Model Name, AI Supply Chain Categories field to Proxy logs
         try:
             event['AI Model Name'] = row[52]
         except IndexError:
-            pass
+            event['AI Model Name'] = None
         try:
             event['AI Supply Chain Categories'] = row[53]
         except IndexError:
-            pass
+            event['AI Supply Chain Categories'] = None
         # Version 14 — The same as version 13, but adds the Event correlation ID field to Proxy logs
         try:
             event['Event correlation ID'] = row[54]
         except IndexError:
-            pass
+            event['Event correlation ID'] = None
         int_fields = [
             'requestSize',
             'responseSize',
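The nine near-identical try/except blocks could also be collapsed into a data-driven loop with the same default-to-None behavior the autofix proposes; a minimal sketch (the `OPTIONAL_FIELDS` table and `apply_optional_fields` helper are hypothetical, not code from the connector):

```python
# Column positions of optional Proxy-log fields (schema versions 11-14),
# taken from the diff above.
OPTIONAL_FIELDS = {
    'Detected Response File Type': 46,
    'Warn Categories': 47,
    'Organization ID': 48,
    'Application Entity Name': 49,
    'Application Entity Category': 50,
    'Egress IP': 51,                   # added in version 12
    'AI Model Name': 52,               # added in version 13
    'AI Supply Chain Categories': 53,  # added in version 13
    'Event correlation ID': 54,        # added in version 14
}

def apply_optional_fields(event, row):
    """Copy optional columns into event, defaulting to None when the row is short."""
    for name, index in OPTIONAL_FIELDS.items():
        event[name] = row[index] if index < len(row) else None
    return event

# A version-12 row has 52 columns (indices 0-51), so version-13/14 fields
# come back as None instead of raising IndexError.
row = ['x'] * 52
event = apply_optional_fields({}, row)
print(event['Egress IP'], event['AI Model Name'])  # x None
```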
            [skip ci]"

          git push origin ${{ github.event.pull_request.head.ref }}
Check warning · Code scanning / CodeQL · Code injection (Medium)
Copilot Autofix (AI) · about 2 months ago
General approach: Avoid using ${{ ... }} expressions that embed user-controlled values directly inside run: shell scripts. Instead, assign the untrusted value to an environment variable using workflow expression syntax, and then reference it using native shell variable expansion ($VAR) within the script.
Concrete fix here: For the “Commit updated bundles” step, move ${{ github.event.pull_request.head.ref }} into a new env variable (e.g., PR_HEAD_REF) and then change the git push line to use $PR_HEAD_REF. This prevents GitHub from interpolating the untrusted value into the shell script; the shell now receives the value as normal data via the environment, which is the recommended pattern. Functionality remains identical because git push origin <branch> is the same call; we’re only changing how the branch name gets passed in.
Changes needed in .github/workflows/aws-s3-bundle-update.yaml:

- In the "Commit updated bundles" step (lines 77–106), add an environment variable, e.g.:

      env:
        GITHUB_TOKEN: ${{ steps.generate_token.outputs.token }}
        PR_HEAD_REF: ${{ github.event.pull_request.head.ref }}

- Update line 101 from:

      git push origin ${{ github.event.pull_request.head.ref }}

  to:

      git push origin "$PR_HEAD_REF"

No additional imports or external dependencies are needed.
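The difference between interpolating the branch name into the script and passing it through the environment can be reproduced outside of Actions; a minimal Python sketch (the payload string is a contrived example, not a real branch name):

```python
import os
import subprocess

# Attacker-controlled "branch name" carrying shell metacharacters.
branch = 'main"; echo INJECTED; echo "'

# Safe pattern (mirrors the env-var fix): the script text references
# $PR_HEAD_REF, and the value arrives as environment data, so the shell
# never parses the payload as code.
result = subprocess.run(
    ['sh', '-c', 'echo "pushing to $PR_HEAD_REF"'],
    env={**os.environ, 'PR_HEAD_REF': branch},
    capture_output=True,
    text=True,
)
print(result.stdout, end='')  # pushing to main"; echo INJECTED; echo "

# Unsafe pattern (what ${{ ... }} interpolation does): the payload becomes
# part of the script text itself, so `echo INJECTED` would actually run
# if this string were handed to a shell. Shown here but never executed.
unsafe_script = f'echo "pushing to {branch}"'
```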
@@ -78,6 +78,7 @@
       if: steps.check_update.outputs.skip != 'true'
       env:
         GITHUB_TOKEN: ${{ steps.generate_token.outputs.token }}
+        PR_HEAD_REF: ${{ github.event.pull_request.head.ref }}
       run: |
         git config --local user.email "action@github.com"
         git config --local user.name "GitHub Action"
@@ -98,7 +99,7 @@
            [skip ci]"

-          git push origin ${{ github.event.pull_request.head.ref }}
+          git push origin "$PR_HEAD_REF"

           echo "✅ Successfully updated and committed bundle files"
         else
      - name: Create Pull Request
        if: steps.check_changes.outputs.changed == 'true'
        id: create_pr
        uses: peter-evans/create-pull-request@v6

Check warning · Code scanning / CodeQL · Unpinned tag for a non-immutable Action in workflow (Medium)
Closing test PR - automation paused until Taz is ready to test tomorrow. Will focus only on PR Azure#13266 (TacitRed Defender). |
Co-authored-by: jlheard <1328792+jlheard@users.noreply.github.com>
Co-authored-by: v-maheshbh <207855009+v-maheshbh@users.noreply.github.com>
This reverts commit 803d26e.
Co-authored-by: anthonylamark <3209818+anthonylamark@users.noreply.github.com>
* recompile using v3 tool
* recompile using v3 tool
* self review
* follow up from claude on depends for contentPackages
* revert
* cleanup
* minimize diff
This is a TEST PR ONLY for verifying the Sentinel PR automation system.
Purpose
Safety
DO NOT MERGE THIS PR