[Issue 11312] Data test run results always show failures #11313


Open · vglocus wants to merge 3 commits into main

Conversation


@vglocus vglocus commented Feb 15, 2025

Resolves #11312

Problem

Previously, a RunResult for a data test always reported 0 failures when the test passed, regardless of what the DataTestResult returned. For example, for a test configured to warn or error if failures are '> 10', a run that finds 4 failing records still reports 0 failures in the RunResult, because the test is passing.

Solution

For data tests, always set failures to the actual failure count, even when that count does not cross the warn/error threshold.
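
For context, a minimal, self-contained sketch of the behavior being changed (illustrative only: `DataTestResult` mirrors a dbt-core name, but `summarize` and its thresholds are hypothetical, not the actual `TestRunner` code):

```python
from dataclasses import dataclass

@dataclass
class DataTestResult:
    failures: int  # failing-record count returned by the test query

def summarize(result: DataTestResult, warn_if: int = 10, error_if: int = 10) -> tuple[str, int]:
    # Hypothetical helper; the real logic lives in dbt-core's test runner.
    if result.failures > error_if:
        status, failures = "fail", result.failures
    elif result.failures > warn_if:
        status, failures = "warn", result.failures
    else:
        status = "pass"
        # Before this PR: failures was left at 0 for a passing test.
        # With this change: report the real count even though the test passes.
        failures = result.failures
    return status, failures

# A test configured '> 10' that finds 4 failing records: passing, but 4 failures.
print(summarize(DataTestResult(failures=4)))  # -> ('pass', 4)
```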

Checklist

  • I have read the contributing guide and understand what's expected of me.
  • I have run this code in development, and it appears to resolve the stated issue.
  • This PR includes tests, or tests are not required or relevant for this PR.
  • This PR has no interface changes (e.g., macros, CLI, logs, JSON artifacts, config files, adapter interface, etc.) or this PR has already received feedback and approval from Product or DX.
  • This PR includes type annotations for new and modified functions.

On style: I added this line to comply with the existing code style. However, this makes setting status (pre-existing) and failures (as of this change) before the control flow redundant.
An alternative style would be to set status and failures to their passing values before the control flow and drop the else branch, as sketched below.
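
To make the two styles concrete, a hypothetical comparison (reusing the string statuses from the sketch above; neither snippet is the actual dbt-core code):

```python
def style_in_this_pr(failures: int, warn_if: int = 10, error_if: int = 10):
    # Defaults assigned first, then every branch (including the now-redundant
    # else) assigns status and the failure count again.
    status, count = "pass", 0
    if failures > error_if:
        status, count = "fail", failures
    elif failures > warn_if:
        status, count = "warn", failures
    else:
        status, count = "pass", failures
    return status, count

def alternative_style(failures: int, warn_if: int = 10, error_if: int = 10):
    # Passing values set once before the control flow; the else disappears.
    status, count = "pass", failures
    if failures > error_if:
        status = "fail"
    elif failures > warn_if:
        status = "warn"
    return status, count

assert style_in_this_pr(4) == alternative_style(4) == ("pass", 4)
```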
@vglocus vglocus requested a review from a team as a code owner February 15, 2025 08:17

cla-bot bot commented Feb 15, 2025

Thanks for your pull request, and welcome to our community! We require contributors to sign our Contributor License Agreement and we don't seem to have your signature on file. Check out this article for more information on why we have a CLA.

In order for us to review and merge your code, please submit the Individual Contributor License Agreement form attached above. If you have questions about the CLA, or if you believe you've received this message in error, please reach out through a comment on this PR.

CLA has not been signed by users: @vglocus


Thank you for your pull request! We could not find a changelog entry for this change. For details on how to document a change, see the contributing guide.

@github-actions github-actions bot added the community This PR is from a community member label Feb 15, 2025

@cla-bot cla-bot bot added the cla:yes label Mar 11, 2025

codecov bot commented Mar 17, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 86.42%. Comparing base (aa89740) to head (0ebbf88).
Report is 13 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main   #11313      +/-   ##
==========================================
- Coverage   88.96%   86.42%   -2.55%     
==========================================
  Files         189      190       +1     
  Lines       24170    24194      +24     
==========================================
- Hits        21504    20910     -594     
- Misses       2666     3284     +618     
Flag          Coverage           Δ
integration   82.78% <100.00%>   (-3.53%) ⬇️
unit          62.71% <0.00%>     (+0.17%) ⬆️

Flags with carried forward coverage won't be shown. Click here to find out more.

Components          Coverage           Δ
Unit Tests          62.71% <0.00%>     (+0.17%) ⬆️
Integration Tests   82.78% <100.00%>   (-3.53%) ⬇️

Contributor

jtcohen6 commented Apr 7, 2025

@vglocus Thanks for (re)opening this PR! I agree this is a good change. As @dbeatty10 explained in #9808 (comment), it is highly unlikely to represent a behavior change for anyone who is currently relying on the (IMO surprising & incorrect) failures=0 to represent "success" for tests that return a nonzero number of failing records below the configured warn/error threshold.

Could you please update some of the functional tests here, so we can ensure this works going forward? (You can look at #9657 for inspiration; I'd recommend giving credit by adding @tbog357 as a co-contributor in the changelog entry.)
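
For anyone picking this up, a rough sketch of what such a functional test could look like, using dbt's functional-testing fixtures and `dbt.tests.util.run_dbt`. The model and schema contents, class name, and expected counts are hypothetical; the thresholds mirror the '> 10' example from the PR description:

```python
import pytest
from dbt.tests.util import run_dbt

# Hypothetical fixtures: a model with 4 null ids, and a not_null test whose
# warn/error thresholds ('> 10') make 4 failing records a passing result.
MY_MODEL_SQL = """
select cast(null as integer) as id
union all select cast(null as integer)
union all select cast(null as integer)
union all select cast(null as integer)
"""

SCHEMA_YML = """
version: 2
models:
  - name: my_model
    columns:
      - name: id
        data_tests:
          - not_null:
              config:
                warn_if: "> 10"
                error_if: "> 10"
"""

class TestPassingTestReportsFailures:
    @pytest.fixture(scope="class")
    def models(self):
        return {"my_model.sql": MY_MODEL_SQL, "schema.yml": SCHEMA_YML}

    def test_failures_populated_for_passing_test(self, project):
        run_dbt(["run"])
        results = run_dbt(["test"])
        result = results[0]
        assert result.status == "pass"
        # With this PR, the passing test should surface the real count (4), not 0.
        assert result.failures == 4
```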

Labels: cla:yes, community (this PR is from a community member)
Successfully merging this pull request may close these issues.

[Feature] Test run results should include failure value, even if not fail or warning
2 participants