
Conversation

@agnxsh (Contributor) commented Oct 19, 2025

No description provided.

  # Uncategorized helper functions from the spec
  import
-   chronos, chronicles, results, taskpools,
+   chronos, chronicles, results, taskpools, times,
Contributor:

times is a stdlib library; it should be on a separate line and prefixed by std/:

import
  std/times,
  chronos, chronicles, results, taskpools,

@github-actions commented:

Unit Test Results

    15 files  ±0     3 030 suites  ±0    1h 38m 59s ⏱️ + 13m 48s
12 061 tests  ±0    11 491 ✔️ ±0      570 💤 ±0    0 ±0
76 481 runs   ±0    75 629 ✔️ ±0      852 💤 ±0    0 ±0

Results for commit 4bedb57. ± Comparison against base commit d71ea30.

  let startTime = Moment.now()
- const reconstructionTimeout = 2.seconds
+ let reconstructionTimeout =
+   (initDuration(nanoseconds = slotDuration * 100_000_000)).inNanoseconds()
Contributor:

100_000_000 is off by a factor of 10.

More generally, initDuration supports milliseconds too, so there's no particular reason to use nanoseconds here.

inNanoseconds just converts all this back into an int64 anyway, so this whole line is a roundabout way of multiplying by what's supposed to be 1_000_000_000, invoking initDuration/inNanoseconds for no obvious reason.

That is, one should be able to do something like

  let reconstructionTimeout = slotDuration * 1000

or similar, rather than round-tripping through these std/times types. Also, as SLOT_DURATION_MS (in milliseconds) will come in, this will have to support milliseconds anyway, so that's probably the correct granularity. Seconds have insufficient granularity (already, in some cases), microseconds are needlessly fine-grained, and the times library likes to operate in integers.

While the two

    if (now - startTime).nanoseconds > reconstructionTimeout

lines use nanoseconds, they could use milliseconds just as easily without the evidently easy-to-typo large numbers.
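
A minimal runnable sketch of that milliseconds-based approach, assuming chronos's Moment/Duration API and a stand-in slotDuration value (in the PR it would come from configuration):

  import chronos

  # Stand-in for the configured slot duration, in seconds (assumed value).
  let slotDuration = 12'i64
  # Timeout in milliseconds: seconds * 1_000, rather than the
  # typo-prone nanoseconds * 1_000_000_000.
  let reconstructionTimeout = slotDuration * 1000

  let startTime = Moment.now()
  # ... reconstruction work would happen here ...
  let now = Moment.now()
  # Compare elapsed milliseconds against the millisecond timeout.
  if (now - startTime).milliseconds > reconstructionTimeout:
    echo "reconstruction timed out"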

  let
    currentCgc = node.dataColumnQuarantine.custodyColumns.lenu64
    nonSupernode =
      currentCgc > node.dag.cfg.NUMBER_OF_CUSTODY_GROUPS div 2 and
Contributor:

This boundary condition seems to create an odd hole for node.dataColumnQuarantine.custodyColumns.lenu64 == node.dag.cfg.NUMBER_OF_CUSTODY_GROUPS div 2. It won't be rejected by the earlier

  if node.dataColumnQuarantine.custodyColumns.lenu64 <
      node.dag.cfg.NUMBER_OF_CUSTODY_GROUPS div 2:
    return

and it won't trigger nonSupernode status here ((64 > 64 and 64 < 128) == false, since 64 > 64 is false).

Maybe >= was meant? But see comment below for a slightly broader point, related to this.
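
A toy reproduction of the hole, assuming NUMBER_OF_CUSTODY_GROUPS = 128 (the value implied by the 64/128 figures above):

  const NUMBER_OF_CUSTODY_GROUPS = 128'u64  # assumed from the 64/128 figures
  let currentCgc = 64'u64                   # exactly NUMBER_OF_CUSTODY_GROUPS div 2

  # The earlier guard does not return: 64 < 64 is false.
  doAssert not (currentCgc < NUMBER_OF_CUSTODY_GROUPS div 2)
  # nonSupernode as written is also false: 64 > 64 is false.
  doAssert not (currentCgc > NUMBER_OF_CUSTODY_GROUPS div 2 and
                currentCgc < NUMBER_OF_CUSTODY_GROUPS)
  # So a custody count of exactly 64 slips through both checks.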

      currentCgc < node.dag.cfg.NUMBER_OF_CUSTODY_GROUPS

  if not(node.config.peerdasSupernode) or
      nonSupernode:
Contributor:

  >>> len('  if not(node.config.peerdasSupernode) or nonSupernode:')
  55

so I'm not really sure why it's split across two lines.

But, beyond the most superficial cosmetics, earlier in the function there's already a

  if node.dataColumnQuarantine.custodyColumns.lenu64 <
      node.dag.cfg.NUMBER_OF_CUSTODY_GROUPS div 2:
    return

which is effectively part of this logic too. I'm not sure why these two chunks of custody column checking should exist separately, and it's simpler if they're combined: to achieve a similar effect, check once for not(node.config.peerdasSupernode) or node.dataColumnQuarantine.custodyColumns.lenu64 < node.dag.cfg.NUMBER_OF_CUSTODY_GROUPS. Nothing about this function's current behavior changes between a custody count of 63 and 65 (why 64? is that intentional?); it's just ruling everything under 128 out.
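
A self-contained sketch of that combined guard; the types and field names here are simplified stand-ins for the ones in the diff:

  type
    Cfg = object
      NUMBER_OF_CUSTODY_GROUPS: uint64
    Node = object
      peerdasSupernode: bool      # stand-in for node.config.peerdasSupernode
      custodyColumnCount: uint64  # stand-in for ...custodyColumns.lenu64
      cfg: Cfg

  proc shouldContinue(node: Node): bool =
    # One combined early exit: bail unless this is a supernode
    # with a full complement of custody columns.
    if not node.peerdasSupernode or
        node.custodyColumnCount < node.cfg.NUMBER_OF_CUSTODY_GROUPS:
      return false
    true

  let cfg = Cfg(NUMBER_OF_CUSTODY_GROUPS: 128)
  # 64 columns (the former boundary hole) and 127 columns both exit early;
  # only the full 128 proceeds.
  doAssert not shouldContinue(Node(peerdasSupernode: true, custodyColumnCount: 64, cfg: cfg))
  doAssert not shouldContinue(Node(peerdasSupernode: true, custodyColumnCount: 127, cfg: cfg))
  doAssert shouldContinue(Node(peerdasSupernode: true, custodyColumnCount: 128, cfg: cfg))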

I'm not really sure in what circumstances node.config.peerdasSupernode could hold while node.dataColumnQuarantine.custodyColumns.lenu64 < node.dag.cfg.NUMBER_OF_CUSTODY_GROUPS also holds, though.
