Document bias and entropy-exhausted behavior #184
base: main
Conversation
`choose` and `choose_iter` incorrectly claimed to return `Error::NotEnoughData` when they in fact default to the first choice. This PR also documents that default in various other APIs.

Additionally, `int_in_range` (and APIs that rely on it) is biased for non-power-of-two ranges. For example, `u.int_in_range(0..=170)` consumes one byte of entropy and takes its value modulo 171 (the size of the range) to produce the returned integer. As a result, values in `0..=84` (the first ~half of the range) are twice as likely to be chosen as the rest (assuming the underlying bytes are uniform). In general, the result distribution is uniform only if the range size is a power of two (where the modulo just masks some bits). It would be accurate to document that return values are biased towards lower values when the range size is not a power of two, but do we want this much detail in the documented “contract” of this method?

Similarly, I just called `ratio` “approximate”. `u.ratio(5, 7)` returns true for 184 out of 256 possible underlying byte values, ~0.6% too often. In the worst case, `u.ratio(84, 170)` returns true ~33% too often.

Notably, `#[derive(Arbitrary)]` chooses enum variants not with `choose_index` (although that seems most appropriate from reading the `Unstructured` docs) but by always consuming 4 bytes of entropy:

```rust
// Use a multiply + shift to generate a ranged random number
// with slight bias. For details, see:
// https://lemire.me/blog/2016/06/30/fast-random-shuffling
Ok(match (u64::from(<u32 as arbitrary::Arbitrary>::arbitrary(u)?) * #count) >> 32 {
    #(#variants,)*
    _ => unreachable!()
})
```

`int_in_range` tries to minimize entropy consumption based on the range size, but that contributes to its having more bias than multiply + shift. Is this a real trade-off worth having two methods?
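A minimal sketch of the two kinds of bias, on a one-byte analogue of the range `0..=170` (this is not the `arbitrary` crate's actual code, and it assumes `ratio(84, 170)` boils down to comparing a modulo-reduced byte against the numerator):

```rust
fn main() {
    // Modulo reduction, as in `int_in_range`: one byte, reduced mod 171.
    let mut modulo = [0u32; 171];
    for b in 0u32..256 {
        modulo[(b % 171) as usize] += 1;
    }
    // The extra probability mass all lands on the low half of the range:
    // values 0..=84 are hit by two byte values each, 85..=170 by one.
    assert!(modulo[..85].iter().all(|&c| c == 2));
    assert!(modulo[85..].iter().all(|&c| c == 1));

    // Multiply + shift, as in the derive (scaled down to 8 bits):
    // (b * 171) >> 8 maps 0..=255 onto 0..=170 with the same total
    // bias, but the over-represented values are spread across the range.
    let mut mul_shift = [0u32; 171];
    for b in 0u32..256 {
        mul_shift[((b * 171) >> 8) as usize] += 1;
    }
    assert!(mul_shift.iter().all(|&c| c == 1 || c == 2));
    // Interleaved rather than clustered at the low end:
    assert!(mul_shift[..85].iter().any(|&c| c == 1));
    assert!(mul_shift[85..].iter().any(|&c| c == 2));

    // The `ratio(84, 170)` figure: if the check is `b % 170 < 84`,
    // then 168 of 256 bytes return true (~0.656) versus the intended
    // 84/170 (~0.494), i.e. roughly 33% too often.
    let hits = (0u32..256).filter(|b| b % 170 < 84).count();
    assert_eq!(hits, 168);
}
```

The same arithmetic scales up to the real implementation: modulo reduction piles all of its bias onto the lowest values of the range, while multiply + shift distributes it.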
Thanks!
In general, PRs that improve the accuracy of output distributions when given actually-random input bytes are appreciated (modulo performance regressions). Note, however, that because the input is usually provided by a grey-box fuzzer like libFuzzer, it is not made of actually-random bytes, so the output distribution is going to be skewed either way. That is roughly the intention, since we want the grey-box fuzzer's knowledge of code coverage (or whatever) to bias the results towards things the fuzzer deems more interesting.
Do you have a sense of whether |