I'm implementing test vectors for ChillDKG. To ensure reproducibility, I use fixed values for the `random` and `hostseckey` inputs, so every run of the test vector generation script yields the same vectors.
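For concreteness, the generation script pins these inputs to constants along these lines (a minimal sketch; the variable names and placeholder values are mine, not from the reference code):

```python
# Hypothetical fixed inputs for test vector generation. Any constants work,
# as long as they are identical on every run of the script.
fixed_random = bytes.fromhex("00" * 32)      # stands in for fresh randomness
fixed_hostseckey = bytes.fromhex("01" * 32)  # fixed host secret key
```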
However, two internal functions, `certeq_participant_step` and `pop_prove`, currently call `schnorr_sign` with a freshly generated random 32-byte value for the `aux_rand` parameter. This fresh randomness prevents consistent reproduction of the test vectors. (See bip-frost-dkg/python/chilldkg_ref/chilldkg.py, lines 81 to 83, and bip-frost-dkg/python/chilldkg_ref/simplpedpop.py, lines 38 to 41, at commit 1e34161.)
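In other words, the signing calls follow this pattern (a hypothetical sketch assuming the BIP 340 reference interface `schnorr_sign(msg, seckey, aux_rand)`; the wrapper name is mine and the signing function is injected to keep the sketch self-contained):

```python
import secrets
from typing import Callable

def sign_with_fresh_aux_rand(
    schnorr_sign: Callable[[bytes, bytes, bytes], bytes],
    msg: bytes,
    seckey: bytes,
) -> bytes:
    # Fresh aux_rand on every call: the resulting signature, and hence the
    # generated test vectors, differ between runs even for fixed inputs.
    aux_rand = secrets.token_bytes(32)
    return schnorr_sign(msg, seckey, aux_rand)
```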
Specifically:

- `participant_step1` produces varying outputs (`pmsg1`) between runs because the `Pop` component generated by `pop_prove` differs each time, even with fixed inputs.
- `participant_step2` outputs vary because of the randomness introduced by `certeq_participant_step`.

These output variations propagate, affecting other participant and coordinator APIs that rely on `participant_step1` and `participant_step2`.
Is it a good idea to replace the fresh randomness with derived values (via a tagged hash) to achieve reproducibility?
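For illustration, here is a minimal sketch of such a derivation, assuming a BIP 340-style tagged hash; the helper names and the tag string are hypothetical placeholders, not taken from the spec or the reference code:

```python
import hashlib

def tagged_hash(tag: str, msg: bytes) -> bytes:
    # BIP 340-style tagged hash: SHA256(SHA256(tag) || SHA256(tag) || msg).
    tag_hash = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(tag_hash + tag_hash + msg).digest()

def derive_aux_rand(seckey: bytes, msg: bytes) -> bytes:
    # Derive aux_rand deterministically from the signing inputs, so that
    # identical inputs always yield an identical signature, which in turn
    # makes the generated test vectors reproducible.
    return tagged_hash("ChillDKG/aux_rand", seckey + msg)
```

With this change, `schnorr_sign(msg, seckey, derive_aux_rand(seckey, msg))` becomes a pure function of its inputs, though it gives up the defense-in-depth that fresh `aux_rand` provides against fault attacks and weak RNG states.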