
Conversation

@kamilsa kamilsa (Collaborator) commented Jan 14, 2026

🗒️ Description

Introduces aggregator role and subnet aggregation.

  • Every validator is assigned to one of the attestation subnets
  • Aggregators collect signatures that correspond to validators from their subnet
    * If aggregators have collected 90% of the signatures from their subnet by the beginning of interval 2, they produce an aggregated attestation and propagate it to the aggregation topic
  • Validators subscribe to the aggregation topic, so that the next block proposer may include committee aggregations in the block (without recursive aggregation for now)
| Interval | Devnet 2 | Devnet 3 |
|----------|----------|----------|
| 0 | Aggregate signatures and put proofs into the block | Put collected subnet proofs into the block |
| 1 | Cast a vote and publish to attestation topic | Cast a vote and publish to attestation topic + publish to attestation_{subnet_id} topic |
| 2 | Update safe target | Update safe target. Aggregator: aggregate collected signatures and broadcast a subnet proof into the aggregation topic |
| 3 | Attestations accepted | Attestations accepted |
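The devnet-3 interval schedule above can be sketched as a small dispatcher. This is a hypothetical illustration; `on_interval` and the duty strings are not part of the spec:

```python
INTERVALS_PER_SLOT = 4


def on_interval(interval: int, is_aggregator: bool) -> list[str]:
    """Return the devnet-3 duties a validator performs at a given interval."""
    duties = {
        0: ["put collected subnet proofs into the block"],
        1: [
            "publish vote to attestation topic",
            "publish vote to attestation_{subnet_id} topic",
        ],
        2: ["update safe target"],
        3: ["accept attestations"],
    }[interval]
    # Only aggregators have the extra interval-2 duty.
    if interval == 2 and is_aggregator:
        duties.append("aggregate collected signatures and broadcast subnet proof")
    return duties
```

An aggregator thus has two duties at interval 2, while a plain validator has one.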

Remaining work

- [ ] Process the case when aggregators did not observe enough signatures by the beginning of interval 2
- [ ] Add predicates for gossipsub propagation of aggregated signatures (e.g. do not propagate an aggregation if we already observed one for the same committee that proves more validator signatures). Not needed for now, as we have only a single aggregator per committee, so only one aggregation

  • Update to latest master
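If the propagation predicate from the second unchecked item becomes necessary later (e.g. with multiple aggregators per committee), it could look roughly like this. `should_propagate` and its arguments are illustrative names, not the spec's API:

```python
def should_propagate(
    new_participants: set[int],
    seen: dict[bytes, set[int]],
    committee_root: bytes,
) -> bool:
    """Propagate an aggregation only if it proves validators we have not yet
    observed for this committee's attestation data."""
    best = seen.get(committee_root, set())
    if new_participants <= best:
        return False  # nothing new for this committee: drop
    # Remember the union of everything proven so far.
    seen[committee_root] = best | new_participants
    return True
```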

🔗 Related Issues or PRs

leanEthereum/pm#56
leanEthereum/pm#58

✅ Checklist

  • Ran tox checks to avoid unnecessary CI fails:
    uvx tox
  • Considered adding appropriate tests for the changes (will add in a separate PR)
  • Considered updating the online docs in the ./docs/ directory.

Contributor

@jihoonsong jihoonsong left a comment

Great work! Please excuse leaving some comments while it's still in draft. Just wanted to help iterate faster :)

# Configure the genesis state.
genesis_config = Config(
    genesis_time=genesis_time,
    attestation_subnet_count=AGGREGATION_COMMITTEE_COUNT,
Contributor

I admit that I was the one who advocated for attestation committee, but given that validators only push their attestations to aggregators in their subnet without subscribing to it, I now think aggregation committee gives us a slightly better description.

I don't mind whichever we choose—either attestation committee or aggregation committee—but I do think we need to stick to one thing consistently in the Lean spec and pq-devnet-3.md in the pm repo.

Collaborator Author

I don't have a strong preference either; however, I think the rationale for topic names in the beacon chain spec is based on the type of messages that are propagated to the topic. For consistency we should probably stick to the same logic and keep using attestation subnets and attestation committees.

Contributor

Sounds great! In the same vein, what do you think about this one?

Collaborator Author

Looks good to me; applied your suggestion.

# Conflicts:
#	src/lean_spec/subspecs/forkchoice/store.py
#	src/lean_spec/subspecs/networking/__init__.py

When aggregation is added, aggregators will collect attestations and combine them.
Aggregated attestations will be broadcast separately.
Devnet-2 introduces signatures aggregation. Aggregators will collect attestations and combine them. Aggregated attestations will be broadcast separately.
Contributor

Suggested change
- Devnet-2 introduces signatures aggregation. Aggregators will collect attestations and combine them. Aggregated attestations will be broadcast separately.
+ Devnet-3 introduces signatures aggregation. Aggregators will collect attestations and combine them. Aggregated attestations will be broadcast separately.


In the devnet-3 design, however, there is one global subnet for signed
attestations propagation, in addition to publishing into per-committee subnets.
This is due to the 3SF-mini consensus design, which requires 2/3+ of all
Contributor

@g11tech g11tech Jan 21, 2026

This global bit is not required; once the aggregators publish signed attestations in the 2nd interval, they can be imported by all validators in the 3rd interval.

Collaborator Author

Unfortunately, without a global topic for attestations, we might not be able to receive proofs in time to update the safe target during interval 2:

  • Interval 0: block propagation
  • Interval 1: votes propagation
  • Interval 2: signatures aggregation (up to one second for 1000 validators in a subnet, at an expected aggregation rate of 1000 sigs/second) + proof distribution => no time left to update the safe target => in the next slot the validator votes for an old target
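The interval-2 budget in that last bullet is simple arithmetic, using the rate assumed in the comment:

```python
# Back-of-the-envelope check of the interval-2 time budget.
validators_in_subnet = 1000
aggregation_rate = 1000  # assumed expected aggregation throughput, sigs/second

# Aggregation alone consumes the full second, before proof distribution.
aggregation_time = validators_in_subnet / aggregation_rate  # seconds
```

Proof distribution then has to happen on top of this, which is why the reply argues there is no slack left for a safe-target update without the global topic.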

for data, validator_ids in data_to_validator_ids.items()
]

class SignedAggregatedAttestation(Container):
Contributor

@anshalshukla / @GrapeBaBa do we already have this type?

Contributor

Also better to use message/signature terminology.

Contributor

I think we also need aggregated bit vector here as well,

Collaborator Author

I think we also need aggregated bit vector here as well,

AggregatedSignatureProof contains AggregationBits

Contributor

@anshalshukla / @GrapeBaBa do we already have this type?

no

@unnawut unnawut added the specs Scope: Changes to the specifications label Jan 23, 2026
@unnawut unnawut added this to the pq-devnet-3 milestone Jan 23, 2026

| Topic Name | Message Type | Encoding |
|------------------------------------------------------------|-----------------------------|--------------|
| /lean/consensus/devnet3/blocks/ssz_snappy | SignedBlockWithAttestation | SSZ + Snappy |
Contributor

The original prefix is /leanconsensus; is the change to /lean/consensus expected?

Collaborator Author

No. Thanks for noticing, fixed.

@kamilsa kamilsa changed the title [WIP] Committee aggregation Committee aggregation Jan 28, 2026
@kamilsa kamilsa marked this pull request as ready for review January 28, 2026 09:44
Copilot AI review requested due to automatic review settings January 28, 2026 09:44
Copilot AI left a comment

Pull request overview

Introduces committee-level attestation aggregation and subnet-aware validator behavior, updating forkchoice, networking, and node layers to support an aggregator role and per-validator store context in preparation for devnet-3.

Changes:

  • Extend the forkchoice Store and State to track a local validator_id, manage per-attester XMSS signatures and aggregated proofs, and use SignedAggregatedAttestation/aggregated_payloads for block production and attestation processing.
  • Add subnet and topic abstractions for per-committee attestation gossip and aggregation (compute_subnet_id, new gossipsub topic kinds) and wire validator-aware stores into Node, SyncService, and networking.
  • Update tests, fixtures, and documentation to reflect the new aggregation model, Store.get_forkchoice_store API, and gossip decoding behavior.

Reviewed changes

Copilot reviewed 32 out of 32 changed files in this pull request and generated 2 comments.

Show a summary per file
File Description
tests/lean_spec/subspecs/validator/test_service.py Adjusts validator service tests to use a test validator_id and the new Store.get_forkchoice_store signature; updates attestation tests to work with aggregated payloads instead of raw gossip signatures.
tests/lean_spec/subspecs/ssz/test_state.py Updates the SSZ encoding round-trip test to a single-line expected hex string consistent with current State encoding.
tests/lean_spec/subspecs/node/test_node.py Updates node tests to pass validator_id into Node._try_load_from_database, validating that store time still uses intervals per slot.
tests/lean_spec/subspecs/networking/test_network_service.py Extends the mock networking store with a validator_id and on_gossip_attestation signature compatible with the aggregator-aware SyncService.
tests/lean_spec/subspecs/networking/client/test_gossip_reception.py Allows decode_message to return None in type hints and asserts non-None for valid block/attestation topics.
tests/lean_spec/subspecs/forkchoice/test_validator.py Ensures all forkchoice validator tests construct Store with a validator_id, exercising block production and attestation with the updated store API.
tests/lean_spec/subspecs/forkchoice/test_time_management.py Adds validator_id to stores and updates get_forkchoice_store usage and time-based tests to the new parameter names.
tests/lean_spec/subspecs/forkchoice/test_store_attestations.py Switches attestation storage tests from gossip_signatures to explicit aggregated proofs and aggregated_payloads, including immutability checks with the new API.
tests/lean_spec/subspecs/containers/test_state_aggregation.py Rewrites block-building aggregation tests to use aggregated proofs only (aggregated_payloads) and the new gossip-aggregation separation, dropping direct gossip_signatures usage.
tests/lean_spec/helpers/__init__.py Introduces TEST_VALIDATOR_ID and exports it for consistent test construction of validator-aware stores.
tests/lean_spec/conftest.py Updates the shared base_store fixture to use Store.get_forkchoice_store with an explicit ValidatorIndex as validator_id.
src/lean_spec/subspecs/sync/service.py Refactors imports for Store, PeerId, and SignedAttestation and routes gossip attestations through Store.on_gossip_attestation with an is_aggregator flag derived from the store’s validator_id.
src/lean_spec/subspecs/node/node.py Adds get_local_validator_id, threads validator_id into Store.get_forkchoice_store and _try_load_from_database, and ensures nodes construct validator-aware stores from genesis or checkpoint.
src/lean_spec/subspecs/node/helpers.py Introduces an is_aggregator(validator_id) helper (currently a placeholder that always returns False) for future ENR-based aggregator selection.
src/lean_spec/subspecs/node/__init__.py Re-exports get_local_validator_id alongside Node and NodeConfig.
src/lean_spec/subspecs/networking/subnet.py Adds compute_subnet_id to map validator indices to attestation subnet IDs based on committee count.
src/lean_spec/subspecs/networking/service/service.py Updates gossip attestation handling to call the new SyncService.on_gossip_attestation signature using keyword arguments.
src/lean_spec/subspecs/networking/gossipsub/topic.py Defines ATTESTATION_SUBNET and AGGREGATED_ATTESTATION topic kinds and a committee_aggregation factory, extending the topic system for subnet and aggregation channels.
src/lean_spec/subspecs/networking/client/event_source.py Changes GossipHandler.decode_message to return Optional and adjusts tests/callers, while still only decoding block and global attestation topics.
src/lean_spec/subspecs/networking/__init__.py Re-exports compute_subnet_id from the networking package for use by forkchoice and other subspecs.
src/lean_spec/subspecs/forkchoice/store.py Extends Store with validator_id, updates get_forkchoice_store, rewires attestation handling to distinguish gossip vs aggregated proofs, adds on_gossip_aggregated_attestation, committee signature aggregation, and aggregator-aware ticking, and changes block processing to conditionally cache proposer signatures by committee.
src/lean_spec/subspecs/containers/state/state.py Splits signature aggregation into gossip aggregation vs selection from aggregated_payloads, updates build_block to rely solely on aggregated proofs, and adds helper methods for reusing/combining proofs.
src/lean_spec/subspecs/containers/attestation/attestation.py Imports aggregation types and introduces SignedAggregatedAttestation to carry attestation data plus an aggregated signature proof.
src/lean_spec/subspecs/containers/attestation/__init__.py Exports SignedAggregatedAttestation from the attestation package.
src/lean_spec/subspecs/containers/__init__.py Re-exports SignedAggregatedAttestation at the top-level containers namespace.
src/lean_spec/subspecs/chain/config.py Adds ATTESTATION_COMMITTEE_COUNT configuration constant (currently set to 1) for committee/subnet computations.
src/lean_spec/main.py Threads validator_id into Store.get_forkchoice_store when initializing from checkpoint and imports get_local_validator_id from the node package.
packages/testing/src/consensus_testing/test_fixtures/verify_signatures.py Simplifies block-fixture building by dropping direct gossip_signatures input to State.build_block in favor of empty aggregated_payloads.
packages/testing/src/consensus_testing/test_fixtures/state_transition.py Removes ad-hoc gossip signature generation from fixture-based state transitions, aligning with the new block-building API.
packages/testing/src/consensus_testing/test_fixtures/fork_choice.py Adapts fixture-based fork choice tests to validator-aware stores, passes current_validator into on_block, and uses the new committee aggregation APIs to build aggregated payloads before calling State.build_block.
docs/client/validator.md Updates validator docs to describe committees, attestation subnets, aggregator role, and the new attestation/aggregation flow for devnet-3.
docs/client/networking.md Expands networking docs with committee assignment metadata, detailed gossip topics (including subnet and aggregation topics), and the corresponding SSZ message types.
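For readers skimming the file summary, `compute_subnet_id` is likely a simple modulo mapping from validator index to subnet. This is a guess at its shape based on the description above, not the PR's actual implementation:

```python
def compute_subnet_id(validator_index: int, attestation_committee_count: int) -> int:
    """Map a validator to an attestation subnet.

    Hypothetical sketch: assign validators round-robin across the configured
    committee count (ATTESTATION_COMMITTEE_COUNT is 1 in this PR, so every
    validator would land in subnet 0).
    """
    return validator_index % attestation_committee_count
```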
Comments suppressed due to low confidence (1)

src/lean_spec/subspecs/networking/client/event_source.py:388

  • GossipHandler.decode_message only handles TopicKind.BLOCK and TopicKind.ATTESTATION, but now that TopicKind also includes ATTESTATION_SUBNET and AGGREGATED_ATTESTATION the function silently falls through and returns None for those topics. Together with _handle_gossip_message only matching on BLOCK/ATTESTATION, this means subnet and aggregation topics described in the updated networking docs are effectively ignored rather than yielding SignedAttestation/SignedAggregatedAttestation or a clear error. It would be more robust either to add explicit branches for the new topic kinds (decoding to the expected SSZ types) or to raise a GossipMessageError for unsupported kinds, and to update the docstring to reflect the | None return behavior if you keep the None path.
    def decode_message(
        self,
        topic_str: str,
        compressed_data: bytes,
    ) -> SignedBlockWithAttestation | SignedAttestation | None:
        """
        Decode a gossip message from topic and compressed data.

        Processing proceeds in order:

        1. Parse topic to determine message type.
        2. Decompress Snappy-framed data.
        3. Decode SSZ bytes using the appropriate schema.

        Each step can fail independently. Failures are wrapped in
        GossipMessageError for uniform handling.

        Args:
            topic_str: Full topic string (e.g., "/leanconsensus/0x.../block/ssz_snappy").
            compressed_data: Snappy-compressed SSZ data.

        Returns:
            Decoded block or attestation.

        Raises:
            GossipMessageError: If the message cannot be decoded.
        """
        # Step 1: Parse topic to determine message type.
        #
        # The topic string contains the fork digest and message kind.
        # Invalid topics are rejected before any decompression work.
        # This prevents wasting CPU on malformed messages.
        try:
            topic = GossipTopic.from_string(topic_str)
        except ValueError as e:
            raise GossipMessageError(f"Invalid topic: {e}") from e

        # Step 2: Decompress Snappy-framed data.
        #
        # Gossipsub uses raw Snappy compression (not framed).
        #
        # Raw Snappy has no stream identifier or CRC checksums.
        # Decompression fails if:
        #   - Compressed data is corrupted or truncated.
        #   - Copy offsets reference data beyond buffer bounds.
        #
        # Failed decompression indicates network corruption or a malicious peer.
        try:
            ssz_bytes = decompress(compressed_data)
        except SnappyDecompressionError as e:
            raise GossipMessageError(f"Snappy decompression failed: {e}") from e

        # Step 3: Decode SSZ based on topic kind.
        #
        # SSZ decoding fails if the bytes don't match the expected schema.
        # For example: wrong length, invalid field values, or truncation.
        #
        # The topic determines which schema to use. This is why topic
        # validation must happen first.
        try:
            match topic.kind:
                case TopicKind.BLOCK:
                    return SignedBlockWithAttestation.decode_bytes(ssz_bytes)
                case TopicKind.ATTESTATION:
                    return SignedAttestation.decode_bytes(ssz_bytes)
        except SSZSerializationError as e:


Comment on lines +18 to +20
return (
    False  # Placeholder implementation, in future should be defined by node operator settings
)
Copilot AI Jan 28, 2026

is_aggregator is currently hard-coded to always return False, while the validator/networking docs now describe aggregators being selected via an is_aggregator ENR flag. As a result, the runtime node never enters the aggregator code paths in Store.on_gossip_attestation/tick_interval (collection of committee signatures and aggregation), which could be confusing given the documentation. If this placeholder behavior is intentional for devnet-3, consider adding an explicit TODO or brief comment tying this helper to the future ENR-based configuration so that the divergence from the docs is clear.

Suggested change
- return (
-     False  # Placeholder implementation, in future should be defined by node operator settings
- )
+ # TODO(devnet-3): This is a placeholder. Once ENR-based configuration is wired in,
+ # this helper should consult the validator's ENR `is_aggregator` flag (or equivalent
+ # node-operator setting) instead of always returning False.
+ return False

Comment on lines +947 to +979
new_aggregated_payloads = dict(self.aggregated_payloads)

attestations = self.latest_new_attestations
committee_signatures = self.gossip_signatures

attestation_list = [
    Attestation(validator_id=vid, data=data) for vid, data in attestations.items()
]

head_state = self.states[self.head]
# Perform aggregation
aggregated_results = head_state.aggregate_gossip_signatures(
    attestation_list,
    committee_signatures,
)

# iterate to broadcast aggregated attestations
for aggregated_attestation, aggregated_signature in aggregated_results:
    _ = SignedAggregatedAttestation(
        data=aggregated_attestation.data,
        proof=aggregated_signature,
    )
    # Note: here we should broadcast the aggregated signature to committee_aggregators topic

# Compute new aggregated payloads
for aggregated_attestation, aggregated_signature in aggregated_results:
    data_root = aggregated_attestation.data.data_root_bytes()
    validator_ids = aggregated_signature.participants.to_validator_indices()
    for vid in validator_ids:
        sig_key = SignatureKey(vid, data_root)
        if sig_key not in new_aggregated_payloads:
            new_aggregated_payloads[sig_key] = []
        new_aggregated_payloads[sig_key].append(aggregated_signature)
Copilot AI Jan 28, 2026

aggregate_committee_signatures builds new_aggregated_payloads via a shallow dict(self.aggregated_payloads) copy and then appends to the inner lists, which means previous Store snapshots can observe mutations to their aggregated_payloads lists. Other code paths (e.g. on_gossip_aggregated_attestation) deep-copy the map to preserve Store immutability, so this shallow copy is inconsistent with that pattern and risks subtle bugs when older Store instances are reused in tests or logic. Consider deep-copying the values (or using copy.deepcopy) before mutating so that each Store instance has its own independent aggregated_payloads structure.
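The shallow-copy hazard Copilot describes is easy to reproduce in isolation:

```python
import copy

original = {"k": [1, 2]}

# dict() is a shallow copy: the inner list is still shared with `original`.
shallow = dict(original)
shallow["k"].append(3)
assert original["k"] == [1, 2, 3]  # mutating the "copy" changed the original

# copy.deepcopy gives each snapshot its own inner lists.
deep = copy.deepcopy(original)
deep["k"].append(4)
assert original["k"] == [1, 2, 3]  # original untouched this time
```

The same sharing applies to the inner lists of `aggregated_payloads` when copied with `dict(...)`, which is why older `Store` snapshots can observe the mutation.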


return results

def compute_aggregated_signatures(
Collaborator

We should remove this compute_aggregated_signatures() function since it's only serving the spec's unit tests. I can have a go at this after the PR is merged, but just flagging that clients don't need to implement this function.


# Return store with updated signature map
return store.model_copy(update={"gossip_signatures": new_gossip_sigs})
# Return store with updated signature maps
Collaborator

super nit: it's still a single map

Suggested change
- # Return store with updated signature maps
+ # Return store with updated signature map

def on_block(
    self,
    signed_block_with_attestation: SignedBlockWithAttestation,
    current_validator: ValidatorIndex | None = None,
Collaborator

Should this use the internal self.validator_id similar to on_gossip_attestation() above instead of allowing a custom arg?
