Move old API (push!, delete!, ...) to benchmarks.jl #90

Closed
wants to merge 155 commits

Conversation

Tortar
Collaborator

@Tortar Tortar commented Feb 14, 2025

No description provided.

…ing, and eliminate unintentional infinite recursion to fix some tests
…e rng test much more robust now that it can pass that way
…bug in stage two weight storage that also significantly hurts performance)
LilithHafner and others added 19 commits February 18, 2025 07:22
* Add check of presence in push! (#87)

* Convert to Int for indexing

* Update src/DynamicDiscreteSamplers.jl

Co-authored-by: Lilith Orion Hafner <[email protected]>

* Update DynamicDiscreteSamplers.jl

* Update benchmarks.jl

* Update runtests.jl

* Update DynamicDiscreteSamplers.jl

* use % in rand

* remove formatting

* Add test

* Revert "Add test"

This reverts commit b300f78.

* debugging1

* Update DynamicDiscreteSamplers.jl

* Update DynamicDiscreteSamplers.jl

* Update DynamicDiscreteSamplers.jl

* Update DynamicDiscreteSamplers.jl

* Update DynamicDiscreteSamplers.jl

* Update DynamicDiscreteSamplers.jl

* Update DynamicDiscreteSamplers.jl

* Update DynamicDiscreteSamplers.jl

* Update DynamicDiscreteSamplers.jl

* Update DynamicDiscreteSamplers.jl

* Update DynamicDiscreteSamplers.jl

* Update DynamicDiscreteSamplers.jl

* Update DynamicDiscreteSamplers.jl

* Update DynamicDiscreteSamplers.jl

* Update DynamicDiscreteSamplers.jl

* remove onee

* Update DynamicDiscreteSamplers.jl

* Update DynamicDiscreteSamplers.jl

* Update DynamicDiscreteSamplers.jl

* Update DynamicDiscreteSamplers.jl

* Update DynamicDiscreteSamplers.jl

* Update DynamicDiscreteSamplers.jl

* Update DynamicDiscreteSamplers.jl

* fix?

---------

Co-authored-by: Lilith Orion Hafner <[email protected]>
…g m2 (#88)

* Insert 32 words into metadata to store level_weights_nonzero

* Save the non-zero weights information on updates

* Update m2 using that weights information

---

Co-authored-by: Adriano Meligrana <[email protected]>
---------

Co-authored-by: Lilith Orion Hafner <[email protected]>
…to that path. (#104)

The point of adjusting `m[3]` and recomputing approximate weights to increase `m[4]` above 2^32 is to ensure we hit the slow path infrequently, so we only need to do it when we are actually hitting the slow path. This means that pathological cases which repeatedly insert and remove a weight to stress `m[4]` now require calls to `rand` in order to trigger approximate level weight recomputation and remain pathological.

This fixes all the pathological cases we currently benchmark, bringing them to within a factor of 2-3 of main! It does not, however, fix some slightly harder-to-produce pathological cases; those cases are also slightly less pathological, and this PR includes benchmarks for them. This PR does somewhat regress those new worst cases.
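As an illustration of the idea (a hedged, self-contained Python toy, not the package's actual Julia implementation; `LazySampler`, `THRESHOLD`, and the field names are all invented stand-ins for `m[3]`/`m[4]`): updates only maintain a running scaled total, and the expensive renormalization runs lazily inside `rand`, only when sampling actually finds the total outside the acceptable range.

```python
import random

class LazySampler:
    """Toy sketch of deferring renormalization to the sampling path."""
    THRESHOLD = 2**32  # stand-in for the 2^32 bound on m[4]

    def __init__(self):
        self.weights = {}      # id -> exact weight
        self.shift = 0         # stand-in for m[3], a global scaling shift
        self.total_approx = 0  # stand-in for m[4], the scaled total weight

    def push(self, i, w):
        # Updates only adjust the running total; no renormalization here,
        # so repeated push/delete churn stays cheap.
        self.weights[i] = w
        self.total_approx += w << self.shift

    def delete(self, i):
        self.total_approx -= self.weights.pop(i) << self.shift

    def rand(self):
        # Renormalize lazily: only when sampling actually finds the
        # scaled total below the acceptable range.
        if self.total_approx < self.THRESHOLD:
            self._renormalize()
        target = random.randrange(self.total_approx)
        for i, w in self.weights.items():
            target -= w << self.shift
            if target < 0:
                return i

    def _renormalize(self):
        # Expensive path: pick a shift that lifts the total above 2^32.
        total = sum(self.weights.values())
        self.shift = 0
        while (total << self.shift) < self.THRESHOLD:
            self.shift += 1
        self.total_approx = total << self.shift
```

In this toy, a pathological push/delete loop only pays for cheap counter updates; the cost of renormalization is charged to the `rand` calls that actually need it, matching the rationale above.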
Optimize weight updating in _rand_slow_path by using only the top 64 bits instead of the top 65 bits of significand sums

This reduces accuracy by one bit but continues to reliably set the new m[4] within an acceptable range. Also:
- Manually hoist bounds checking
- Remove obsolete 0 < m4 check
- Remove TODO about checking results (they are checked with an assert)
- Rename x2 to x (the original x was deleted a while ago)
- Move the initial assignment to x into the main loop
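The core of the trick can be sketched as follows (a hedged Python toy, not the package's code; `top_bits` is an invented name): truncate a wide significand sum to its highest 64 bits, losing at most one bit of accuracy relative to keeping 65, while the leading bits still pin down the magnitude.

```python
def top_bits(x: int, nbits: int = 64) -> tuple[int, int]:
    """Return (top, shift): the highest `nbits` bits of x and how far
    they were shifted down.

    Toy illustration of truncating a wide significand sum to its top
    64 bits: the low bits discarded by the shift cost at most one bit
    of accuracy, but the leading bits still determine the magnitude.
    """
    if x == 0:
        return 0, 0
    shift = max(x.bit_length() - nbits, 0)
    return x >> shift, shift
```

Working with a fixed 64-bit top word is attractive because it fits a single machine register, whereas 65 bits would force multi-word arithmetic on every operation.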
@Tortar
Collaborator Author

Tortar commented Feb 22, 2025

I found a source of inequivalence with the previous API's tests, but now it should be totally equivalent as far as I can tell.

LilithHafner and others added 8 commits February 23, 2025 07:28
* Make recompute_weights faster and more complex by splitting the loop into regions based on where different words of significand sums are relevant

* Add some tests that failed during development of this PR
…uting them (#112)

* Update `set_global_shift_decrease!` to shift weights rather than recomputing them, taking advantage of the fact that the shift is always down (note: this includes a bug) and add comments about invariants and a TODO

* Add test

* inline the only remaining call-site of recompute_range

Note: this does include some regressions, though the gestalt is positive and 5B′ is an improvement.
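For intuition about why shifting can replace recomputation here (a hedged, self-contained Python toy, not the package's Julia implementation; `recompute` and `shift_down` are invented names): when the stored approximations were produced by a left shift and the global shift only ever decreases, a right shift of the stored values reproduces exactly what a full recomputation would give.

```python
def recompute(exact_weights, shift):
    # Baseline: rebuild each approximate weight from its exact value.
    return [w << shift for w in exact_weights]

def shift_down(approx_weights, delta):
    # Cheaper: reuse the already-computed approximations. Because the
    # global shift only ever decreases on this path, right-shifting by
    # `delta` lands on the same values as recomputing (in this toy the
    # shifted-out low bits are all zero, so nothing is lost).
    return [w >> delta for w in approx_weights]

exact = [3, 17, 1024]
old = recompute(exact, 8)   # global shift 8
new = shift_down(old, 3)    # decrease the shift by 3
assert new == recompute(exact, 5)
```

In the real data structure the stored sums carry fractional bits, so shifting trades a little accuracy for skipping the recomputation loop entirely; the note about a bug and the invariant comments in the commit reflect exactly that subtlety.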
Base automatically changed from lh/anew-dev-2 to main May 25, 2025 00:03
@Tortar Tortar closed this May 27, 2025
2 participants