Record Return Values in Trial #314
Conversation
Is the feature implemented in this PR, in principle, something that could be merged, and if so, what are the requirements?
Hey @AlexanderNenninger, sorry for the neglect, and thanks for the PR!
Hi @gdalle, so you mean keeping the `return_values` field? Then there's still the issue of how we deal with a lack of computation results. I just checked, and a `Vector{Nothing}` takes up virtually no memory: `Nothing` is a zero-size singleton type, so the vector stores no per-element data. If we make `push!` default to pushing `nothing`, the overhead stays negligible:

```julia
mutable struct ContainsVec
    v::AbstractVector
end

cv = ContainsVec(fill(nothing, 2^32))
Base.summarysize(cv)  # = sizeof(cv.v) + sizeof(cv) = 48, despite 2^32 elements
cv.v[2^32]            # = nothing
```
Given the discussion on Discourse, I'm really skeptical about the benefit/risk ratio of this feature. What do you think?
The feature would be pretty useful, and the implementation was a pretty minor change to the code base. I think some empirical evidence is necessary at this point; to gather it, we would need to measure the actual impact. If we can show that there is only a negligible (to be defined) impact on benchmark quality and an acceptable (to be defined) impact on the simplicity of the code base, this PR makes sense; otherwise it does not.
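One rough way to start probing that overhead, sketched here under assumptions (this is illustrative, not the evidence the thread calls for), is to compare pushing `nothing` into a `Vector{Nothing}` against recording actual values in a `Vector{Any}`:

```julia
using BenchmarkTools

# Per-sample cost of recording nothing vs. recording a real value.
@btime push!(v, nothing) setup = (v = Vector{Nothing}())
@btime push!(v, x) setup = (v = Vector{Any}(); x = "some result")
```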
For me this is a non-goal of BenchmarkTools, and I also don't understand how recording the values helps with that. There is a maintenance cost and a mental-model cost associated with changing the inner benchmarking code. You could write your own harness code to do what you want, but as the maintainers of BT we must ask ourselves whether a feature for a few people is worth it.
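For illustration, here is a minimal sketch of the kind of external harness suggested above; the `benchmark_with_returns` helper is hypothetical and not part of BenchmarkTools:

```julia
# Hypothetical harness: time each call ourselves and keep the return values.
function benchmark_with_returns(f, n::Integer)
    times = Vector{Float64}(undef, n)   # seconds per call
    values = Vector{Any}(undef, n)      # recorded return values
    for i in 1:n
        t0 = time_ns()
        values[i] = f()
        times[i] = (time_ns() - t0) / 1e9
    end
    return times, values
end

times, values = benchmark_with_returns(() -> rand() < 0.5 ? :success : :failure, 1_000)
p_success = count(==(:success), values) / length(values)
```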
I should add that I do appreciate you putting in the effort! I also handed over the maintenance burden almost entirely to @gdalle, so I will leave the final call up to him.
Alright.
Sorry for the wasted effort, and thank you for the contribution nonetheless!
#335 is also tangentially related.
No effort was wasted. We're using the code from this PR anyway; it just stays forked.
This PR is intended as a minor extension to the `Trial` and `TrialEstimate` types. It enables benchmarking of non-deterministic functions, which is useful in Monte Carlo settings to calculate success probabilities and expected runtimes. Currently, given code like the example below, there is no way to determine post-benchmark what the return value of a function was.
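The original snippet was not preserved in this page; the following is a hedged reconstruction of the kind of benchmark the description has in mind (the `flaky` function is illustrative):

```julia
using BenchmarkTools

# An illustrative non-deterministic function: succeeds about half the time.
flaky() = rand() < 0.5 ? :success : :failure

t = @benchmark flaky()
# `t` records timings for every sample, but not what `flaky()` returned,
# so a success probability cannot be recovered from the trial afterwards.
```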
Summary of Changes

- Added a `return_values::Vector{Any}` field to `Trial` and `TrialEstimate` (see the sketch after this list).
- Updated the methods of `Trial` and `TrialEstimate` where it makes sense: `minimum` and `maximum` will retain their return values.
- `copy` makes a `deepcopy` of the return values. This is due to `copy` not being implemented for `String` (s. copy not implemented for String JuliaLang/julia#31995 (comment)).
- `push!` et al. default to pushing `nothing`. This adds a little memory overhead, since most `Trial`s are expected to contain a `Vector` of `nothing`s.
- `tests/SerializationTests.jl` relied on all fields of `Trial` being comparable using `Base.isapprox`. This invariant is now broken, so the local `eq` function has been modified: `eq` now falls back to an `isequal` comparison if `isapprox` is not defined for its arguments.
- In `tests/TrialsTests.jl`, `copy(::TrialJudgement)` was broken independently of the changes listed above. Fixed and added a test.
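To make the intended use concrete, here is a hypothetical sketch based on the field described above; `return_values` is this PR's addition, not a released BenchmarkTools API, so the exact access pattern is an assumption:

```julia
using BenchmarkTools

flaky() = rand() < 0.5 ? :success : :failure

t = @benchmark flaky()
# With this PR applied, each sample's result would be kept alongside its
# timing, so a Monte Carlo success probability can be estimated afterwards:
p_success = count(==(:success), t.return_values) / length(t.return_values)
```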