Fix benchmark function to safely access max_pass_run_data and no successful runs #91
skalthoff wants to merge 1 commit into BotBlake:develop from
Conversation
BotBlake left a comment
Since we only talked about this PR in private until now, let me comment on GitHub as well.
Implementation-wise, it seems to do the same as #87.
However, I am not a fan of that implementation approach.
Specifically, I want to understand WHY these values could be missing.
Since these values are IMPORTANT for the test results, I personally do not think it is a good idea to have them be anything BUT the values provided directly by ffmpeg.
In my opinion, a test where ANY ffmpeg process does not return all required values should be marked as FAILED, with a failure reason that explains the situation (e.g. "missing_values").
Perhaps mark the PR as a Draft if you intend to work on this.
Otherwise I suggest closing it for the time being and re-opening it when necessary.
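For illustration, a minimal sketch of the fail-fast behavior suggested above — the helper name validate_run_data and the "missing_values" reason string are hypothetical, not part of the actual jellybench_py API:

```python
from typing import Optional

# Values ffmpeg must report for a run to count as successful.
REQUIRED_KEYS = ("speed", "rss_kb")

def validate_run_data(run_data: dict) -> Optional[str]:
    """Return a failure reason if ffmpeg did not report every required
    value, otherwise None. Hypothetical helper, not the actual API."""
    missing = [key for key in REQUIRED_KEYS if key not in run_data]
    if missing:
        # Fail the test instead of substituting defaults, so incomplete
        # ffmpeg output is never silently recorded as a valid result.
        return "missing_values: " + ", ".join(missing)
    return None
```

A caller would record the returned reason on the test result and mark the run as FAILED, rather than continuing with fabricated values.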
This pull request enhances error handling and robustness in the benchmark function of jellybench_py/core.py. The main adjustment ensures dictionary keys are safely accessed, preventing runtime errors caused by missing keys.

Previously, encountering missing keys (speed or rss_kb) in the max_pass_run_data dictionary triggered a KeyError, interrupting execution.

Improvements:
- In the benchmark function, dictionary lookups now use the get method with appropriate default values (0.0 for speed and 0 for rss_kb) to gracefully handle missing keys.

These changes isolate the fix within the affected function, enhancing code stability without side effects elsewhere.
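As a rough sketch of the change, assuming max_pass_run_data is a plain dict — the function below is a simplified stand-in, not the actual benchmark code in jellybench_py/core.py:

```python
def summarize_max_pass(max_pass_run_data: dict) -> tuple:
    """Simplified stand-in for the affected lookups in benchmark()."""
    # Before the fix, direct indexing raised KeyError when ffmpeg
    # output was incomplete:
    #   speed = max_pass_run_data["speed"]
    #   rss_kb = max_pass_run_data["rss_kb"]
    # After the fix, .get() supplies safe defaults so execution continues.
    speed = max_pass_run_data.get("speed", 0.0)
    rss_kb = max_pass_run_data.get("rss_kb", 0)
    return speed, rss_kb
```

For example, summarize_max_pass({}) now returns (0.0, 0) instead of raising — which is exactly the silent-default behavior the review comment above objects to.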