Move old API (push!, delete!, ...) to benchmarks.jl #90
Closed
Conversation
… getindex on non power of two weights
…ing, and eliminate unintentional infinite recursion to fix some tests
… draft of this commit broke)
…e rng test much more robust now that it can pass that way
…bug in stage two weight storage that also significantly hurts performance)
…nificands (fixes test)
* Add check of presence in push! (#87)
* Convert to Int for indexing
* Update src/DynamicDiscreteSamplers.jl (Co-authored-by: Lilith Orion Hafner <[email protected]>)
* Update DynamicDiscreteSamplers.jl
* Update benchmarks.jl
* Update runtests.jl
* use % in rand
* remove formatting
* Add test
* Revert "Add test" (reverts commit b300f78)
* debugging1
* Update DynamicDiscreteSamplers.jl (a long run of fixup commits)
* remove onee
* fix?

Co-authored-by: Lilith Orion Hafner <[email protected]>
…g m2 (#88)

* Insert 32 words into metadata to store level_weights_nonzero
* Save the non-zero weights information on updates
* Update m2 using that weights information

Co-authored-by: Adriano Meligrana <[email protected]>
Co-authored-by: Lilith Orion Hafner <[email protected]>
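The `level_weights_nonzero` bookkeeping above amounts to a packed bitmask: 32 64-bit metadata words give one bit per weight level (32 × 64 = 2048 bits). A minimal sketch in Python — the package itself is Julia, and the function names and 2048-level count here are assumptions for illustration, not the package's actual internals:

```python
WORD_BITS = 64
N_LEVELS = 2048  # assumption: one bit per level, packed into 32 words

def set_nonzero(words, level):
    """Mark `level` as having a nonzero weight."""
    words[level // WORD_BITS] |= 1 << (level % WORD_BITS)

def clear_nonzero(words, level):
    """Mark `level` as having no nonzero weights."""
    words[level // WORD_BITS] &= ~(1 << (level % WORD_BITS))

def is_nonzero(words, level):
    return (words[level // WORD_BITS] >> (level % WORD_BITS)) & 1 == 1

words = [0] * (N_LEVELS // WORD_BITS)  # the 32 metadata words
set_nonzero(words, 100)
assert is_nonzero(words, 100)
clear_nonzero(words, 100)
assert not is_nonzero(words, 100)
```

Storing this alongside the weights lets an update path answer "is this level occupied?" with one word load and a bit test instead of scanning the level's weights.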
…to that path. (#104)

The point of adjusting `m[3]` and recomputing approximate weights to push `m[4]` above 2^32 is to ensure we hit the slow path infrequently, so we only need to do it when we are actually hitting the slow path. This means that pathological cases which repeatedly insert and remove a weight to stress `m[4]` now require calls to `rand` in order to trigger approximate level weight recomputation and remain pathological. This fixes all the pathological cases we currently benchmark, bringing them to within a factor of 2-3 of main. It does not fix some slightly harder-to-produce pathological cases, but those are somewhat less severe, and this PR includes benchmarks for them. This PR does somewhat regress those new worst cases.
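The deferral scheme described above can be sketched as follows — in Python rather than Julia, with a scaled-down threshold and invented names (`LazySampler`, `THRESHOLD`) standing in for the package's `m[3]`/`m[4]` machinery. The one property the sketch demonstrates is the PR's point: update churn alone never recomputes; only sampling does.

```python
import random

THRESHOLD = 2 ** 10  # scaled-down stand-in for the "m[4] above 2^32" target

class LazySampler:
    def __init__(self):
        self.weights = {}
        self.scaled_total = THRESHOLD  # stand-in for the m[4]-style counter
        self.recomputes = 0            # count slow-path recomputations

    def push(self, i, w):
        self.weights[i] = w            # cheap: no recomputation on update
        self.scaled_total -= 1         # pretend each update erodes the counter

    def delete(self, i):
        del self.weights[i]
        self.scaled_total -= 1

    def _recompute(self):              # the slow path
        self.recomputes += 1
        self.scaled_total = THRESHOLD  # restore the invariant

    def rand(self):
        if self.scaled_total < THRESHOLD // 2:
            self._recompute()          # only sampling triggers recomputation
        total = sum(self.weights.values())
        r = random.uniform(0.0, total)
        acc = 0.0
        for i, w in self.weights.items():
            acc += w
            if r <= acc:
                return i
        return i                       # guard against float rounding

s = LazySampler()
for _ in range(10 ** 5):               # heavy insert/remove churn, no rand...
    s.push(1, 1.0)
    s.delete(1)
assert s.recomputes == 0               # ...so no recomputation happened
s.push(1, 1.0)
s.rand()
assert s.recomputes == 1               # sampling paid the cost exactly once
```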
Optimize weight updating in `_rand_slow_path` by using only the top 64 bits of the significand sums instead of the top 65. This reduces accuracy by one bit but still reliably sets the new `m[4]` within an acceptable range.

Also:
- Manually hoist bounds checking
- Remove the obsolete `0 < m4` check
- Remove the TODO about checking results (they are checked with an assert)
- Rename `x2` to `x` (the original `x` was deleted a while ago)
- Move the initial assignment to `x` into the main loop
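To make the "64 vs. 65 top bits" trade-off concrete, here is a minimal sketch in Python (arbitrary-precision integers stand in for the wide significand sums; `top_bits` is a hypothetical helper, not a function from the package). For any sum wider than 65 bits, the 64-bit extraction is exactly the 65-bit one with its lowest bit dropped — one bit of accuracy lost, as the commit message says.

```python
def top_bits(x: int, n: int = 64) -> int:
    """Return the n most significant bits of a positive integer x."""
    shift = max(x.bit_length() - n, 0)
    return x >> shift

s = (1 << 100) + 12345        # a wide significand sum (101 bits)
t64 = top_bits(s, 64)
t65 = top_bits(s, 65)
# One fewer bit of accuracy: t64 is t65 with its lowest bit dropped.
assert t64 == t65 >> 1
```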
I found a source of inequivalence with the previous API tests, but now it should be fully equivalent as far as I can tell.
* Make recompute_weights faster (and more complex) by splitting the loop into regions based on where different words of the significand sums are relevant
* Add some tests that failed during development of this PR
…uting them (#112)

* Update `set_global_shift_decrease!` to shift weights rather than recomputing them, taking advantage of the fact that the shift is always downward (note: this includes a bug), and add comments about invariants and a TODO
* Add test
* Inline the only remaining call site of recompute_range

Note: this does include some regressions, though the gestalt is positive and 5B′ is an improvement.
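The core idea of that change — when the global shift only ever decreases, the stored approximate weights can be updated by a plain right shift rather than recomputed from their sources — can be sketched as below. This is an illustrative Python analogue with an invented helper name, not the package's `set_global_shift_decrease!` itself.

```python
def apply_shift_decrease(approx_weights, delta):
    """Shift every stored approximate weight down by `delta` bits, in place.

    Valid only for a shift decrease: shifting down can lose low bits but
    never needs information that the stored values no longer carry, which
    is why this fast path can replace a full recomputation.
    """
    assert delta > 0, "fast path applies only when the shift decreases"
    for i, w in enumerate(approx_weights):
        approx_weights[i] = w >> delta

weights = [1 << 40, 3 << 20, 7]
apply_shift_decrease(weights, 4)
assert weights == [1 << 36, 3 << 16, 0]  # small weights may underflow to 0
```

The underflow in the last element hints at why invariants need care here (and, per the note above, the real change initially shipped with a bug): weights shifted to zero must stay consistent with any nonzero-level bookkeeping.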