
Conversation

@jeanschmidt (Contributor) commented on Nov 8, 2025

Currently, we ignore all job signals classified as 'pytest failure'. We do this because we parse test signals independently, on a separate track. This was introduced to keep autorevert from being confused by noisy/flaky signals: a typical case is flakiness happening right before a streak of test failures, where, without the current behaviour, autorevert would act on the flaky commit instead of on the commit that introduced the failure.

With the test track, we noticed a few cases where, for some reason, we do not obtain information for tests. We are investigating all of them with the goal of improving quality and fixing all possible gaps. That said, it is bad that in a few of these cases we could have reacted based on the job signal, but failed to do so because we ignore job signals.

So, after discussion, we decided to re-include job signals that have pytest failures. But to avoid problems and flakiness, we opted to separate jobs into different columns, one per classification rule, within the lookback window.
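
Below is a minimal sketch of the column-split idea, assuming hypothetical names (`JobSignal`, `classification_rule`, `LOOKBACK`); the actual autorevert implementation may differ:

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical signal record; the field names are illustrative and not
# the actual autorevert schema.
@dataclass
class JobSignal:
    commit_sha: str
    job_name: str
    classification_rule: str  # e.g. "pytest failure", "infra error"
    timestamp: datetime

# Assumed lookback window size; the real value is a config detail
# not stated in this PR.
LOOKBACK = timedelta(hours=24)

def split_into_columns(
    signals: list[JobSignal], now: datetime
) -> dict[tuple[str, str], list[JobSignal]]:
    """Group signals into one 'column' per (job, classification rule),
    keeping only those inside the lookback window.

    Because each classification rule gets its own column, a flaky
    'pytest failure' on one commit can no longer be mistaken for the
    start of a failure streak detected under a different rule.
    """
    columns: dict[tuple[str, str], list[JobSignal]] = defaultdict(list)
    for signal in signals:
        if now - signal.timestamp <= LOOKBACK:
            columns[(signal.job_name, signal.classification_rule)].append(signal)
    return columns
```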

@pytorch-bot added the ci-no-td label on Nov 8, 2025
@vercel bot commented on Nov 8, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

1 Skipped Deployment

| Project | Deployment | Preview | Updated (UTC) |
| --- | --- | --- | --- |
| torchci | Ignored | Ignored (Preview) | Nov 8, 2025 0:52am |

@meta-cla bot added the CLA Signed label on Nov 8, 2025