[Issue 11312] Data test run results always show failures #11313
Conversation
Previously, a RunResult for a data test always showed 0 failures when the test passed, regardless of what the DataTestResult returned. For example, a test configured to error or warn if '> 10' that produces 4 failing rows reports 0 failures on the RunResult, because the test is passing. On style: I added this line to comply with the existing code style. However, this makes setting status (pre-existing) and failures (as of this change) before the control flow redundant. An alternative style would be to set status and failures to their passing values before the control flow and drop the else branch.
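The two styles discussed above could be sketched as follows. This is an illustrative sketch only, not the actual dbt-core code; the function and parameter names are invented for clarity:

```python
# Hypothetical sketch of the two styles under discussion. Names are
# illustrative, not the actual dbt-core implementation.

def evaluate_style_a(result_failures: int, threshold: int) -> tuple[str, int]:
    """Current style: set defaults first, then assign again in both branches.

    The pre-branch defaults become redundant because every branch overwrites
    them.
    """
    status = "pass"
    failures = 0
    if result_failures > threshold:
        status = "fail"
        failures = result_failures
    else:
        status = "pass"
        # The change in this PR: report the actual count even when passing.
        failures = result_failures
    return status, failures


def evaluate_style_b(result_failures: int, threshold: int) -> tuple[str, int]:
    """Alternative style: set the passing values up front and skip the else."""
    status = "pass"
    failures = result_failures  # always the actual failure count
    if result_failures > threshold:
        status = "fail"
    return status, failures
```

Both variants produce identical results; the second simply avoids the redundant assignments.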
Thanks for your pull request, and welcome to our community! We require contributors to sign our Contributor License Agreement and we don't seem to have your signature on file. Check out this article for more information on why we have a CLA. In order for us to review and merge your code, please submit the Individual Contributor License Agreement form attached above. If you have questions about the CLA, or if you believe you've received this message in error, please reach out through a comment on this PR. CLA has not been signed by users: @vglocus
Thank you for your pull request! We could not find a changelog entry for this change. For details on how to document a change, see the contributing guide.
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```
@@           Coverage Diff            @@
##             main   #11313    +/-  ##
==========================================
- Coverage   88.96%   86.42%   -2.55%
==========================================
  Files         189      190       +1
  Lines       24170    24194      +24
==========================================
- Hits        21504    20910     -594
- Misses       2666     3284     +618
```
@vglocus Thanks for (re)opening this PR! I agree this is a good change. As @dbeatty10 explained in #9808 (comment), it is highly unlikely to represent a behavior change for anyone who's currently relying on the (IMO surprising & incorrect) behavior. Could you please update some of the functional tests here, so we can ensure this works going forward? (You can look at #9657 for inspiration; I'd recommend giving credit by adding @tbog357 as a co-contributor in the changelog entry.)
Resolves #11312
Problem
Previously, a RunResult for a data test always showed 0 failures when the test passed, regardless of what the DataTestResult returned. For example, a test configured to error or warn if '> 10' that produces 4 failing rows reports 0 failures on the RunResult, because the test is passing.
Solution
For data tests, always set failures to the actual failure count, even when the test is not failing.
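The behavioral change can be sketched as a before/after comparison. This is a hypothetical illustration with invented names, not the actual dbt-core code:

```python
# Hypothetical before/after sketch of how RunResult.failures is populated
# from a data test's failing-row count. Names are illustrative only.

def run_result_failures_old(num_failures: int, warn_threshold: int) -> int:
    """Old behavior: failures is reported only when the test does not pass."""
    passing = num_failures <= warn_threshold
    return 0 if passing else num_failures


def run_result_failures_new(num_failures: int, warn_threshold: int) -> int:
    """New behavior: failures always carries the actual failure count,
    regardless of pass/fail status (status is tracked separately)."""
    return num_failures
```

For the example from the problem description (a test that warns if '> 10', with 4 failing rows): the old behavior reports 0 failures, while the new behavior reports 4.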
Checklist