From 273145b91ce18ecf05a5439eeb37e03701c53d25 Mon Sep 17 00:00:00 2001
From: Pablo Deymonnaz
Date: Tue, 12 May 2026 16:25:25 -0300
Subject: [PATCH] refactor(types): port leanSpec Type-1 / Type-2 aggregation
 envelope (#361)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

## 🗒️ Description / Motivation

Ports the typed two-level multi-signature envelope introduced by contributor commit [`anshalshukla/leanSpec@0ab09dd`](https://github.com/anshalshukla/leanSpec/commit/0ab09dd166efb398be0373d32cdac3ec18f2b071) ("dummy type 1 and type 2 aggregation with block proofs") to ethlambda:

- `TypeOneMultiSignature` — single-message N-signer proof; replaces `AggregatedSignatureProof` on the `SignedAggregatedAttestation` gossip wire.
- `TypeTwoMultiSignature` — merged multi-message proof binding every per-attestation Type-1 plus a singleton proposer Type-1 over the block root.
- `SignedBlock.signature: BlockSignatures` → `SignedBlock.proof: ByteListMiB` carrying the SSZ-encoded merged Type-2.

The upstream commit is WIP (its verify functions are explicit stubs) and is not yet in canonical `leanethereum/leanSpec`. ethlambda leads the wire-shape migration so the type plumbing is in place when canonical absorbs the refactor and real `lean_multisig` bindings land. **Opening as draft until canonical catches up.**

## What Changed

Landed as three commits, one per phase. Each phase compiled and passed `make test` independently.

### Phase 1 — `f2d0fb5` — additive type plumbing

- `crates/common/types/src/block.rs` — added `TypeOneInfo`, `TypeOneMultiSignature`, `TypeOneInfos` (SSZ-list limit `MAX_ATTESTATIONS_DATA + 1`), `TypeTwoMultiSignature`, and `BytecodeClaim` (typed alias for `H256`, a placeholder until `lean_multisig` defines the trusted evaluation).
- SSZ round-trip + capacity unit tests.
- Purely additive: no consumers yet.
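For orientation, the Phase 1 container shapes can be sketched in plain Rust. This is a hedged sketch, not the actual `block.rs` definitions: `Vec<u8>` and `Vec<bool>` stand in for the crate's `ByteListMiB` and `AggregationBits` SSZ types, and the `[u8; 32]` alias stands in for `H256`.

```rust
/// Per-block cap on distinct AttestationData entries (leanSpec PR #536).
pub const MAX_ATTESTATIONS_DATA: usize = 16;

/// Stand-in for the typed `H256` alias; placeholder until `lean_multisig`
/// defines the trusted evaluation.
pub type BytecodeClaim = [u8; 32];

/// Metadata for one single-message N-signer proof.
#[derive(Clone, Debug, PartialEq)]
pub struct TypeOneInfo {
    pub message: [u8; 32],       // hash_tree_root of the signed message
    pub slot: u64,
    pub participants: Vec<bool>, // stand-in for AggregationBits
    pub bytecode_claim: BytecodeClaim,
}

/// Single-message proof: metadata plus opaque proof bytes.
#[derive(Clone, Debug, PartialEq)]
pub struct TypeOneMultiSignature {
    pub info: TypeOneInfo,
    pub proof: Vec<u8>, // stand-in for ByteListMiB
}

/// Merged multi-message proof: one info entry per merged Type-1; the list
/// is capped at MAX_ATTESTATIONS_DATA + 1 (attestations plus proposer).
#[derive(Clone, Debug, Default, PartialEq)]
pub struct TypeTwoMultiSignature {
    pub info: Vec<TypeOneInfo>, // stand-in for TypeOneInfos
    pub proof: Vec<u8>,
}

fn main() {
    // A merged proof over N attestations carries N + 1 info entries
    // (one per attestation plus the trailing proposer entry).
    assert_eq!(MAX_ATTESTATIONS_DATA + 1, 17);
    let merged = TypeTwoMultiSignature::default();
    assert!(merged.info.is_empty() && merged.proof.is_empty());
}
```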
### Phase 2 — `18a60b5` — gossip-layer pipeline

- `crates/common/types/src/attestation.rs` — `SignedAggregatedAttestation.proof` → `TypeOneMultiSignature`.
- `crates/blockchain/src/aggregation.rs` — `AggregatedGroupOutput.proof`, `aggregate_job`, `resolve_child_pubkeys`, and `select_proofs_greedily` all carry/read Type-1.
- `crates/storage/src/store.rs` — `PayloadEntry.proofs: Vec<TypeOneMultiSignature>`; subsumption logic reads `info.participants`.
- Block-builder helpers (`compact_attestations`, `extend_proofs_greedily`, `build_block`) operate on Type-1 throughout.
- Temporary `to_legacy` / `from_legacy` boundary at block assembly and block-body ingestion, so the `SignedBlock` wire format stayed legacy through Phase 2.

### Phase 3 — `fc9ce1f` — block wire + storage

- `SignedBlock.signature: BlockSignatures` → `SignedBlock.proof: ByteListMiB`. Legacy `BlockSignatures` / `AttestationSignatures` / `AggregatedSignatureProof` removed.
- `crates/blockchain/src/lib.rs::propose_block` wraps the proposer XMSS as a singleton Type-1, calls `aggregate_type_2`, SSZ-encodes the merged proof, and stashes it on `SignedBlock.proof`.
- `crates/blockchain/src/store.rs::verify_signatures` rewritten as a structural-only check (mirroring the upstream `verify_type_2` stub): decode the merged proof, assert `info.len() == attestations.len() + 1`, validate per-attestation `(message, slot, participants)` alignment and the trailing proposer entry; no per-Type-1 crypto.
- `crates/storage/src/store.rs::write_signed_block` / `get_signed_block` now store `ByteListMiB` blobs in the existing `BlockSignatures` column family (renaming deferred to avoid a CF migration).
- `aggregate_type_2` is a no-crypto stub today: it preserves the full `TypeOneInfos` metadata list but leaves `proof: ByteListMiB::default()`. Real merging arrives when `lean_multisig` exposes a merged-proof primitive — the existing `aggregate_proofs` only handles single-message merging.
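The `aggregate_type_2` stub behavior can be sketched as follows. This is a hedged plain-Rust sketch with stand-in types (`Vec` in place of the crate's SSZ lists, validator-index lists in place of `AggregationBits`); `Info`, `TypeOne`, and `TypeTwo` are simplified hypothetical shapes, not the actual definitions.

```rust
#[derive(Clone, Debug, PartialEq)]
struct Info {
    message: [u8; 32],
    slot: u64,
    participants: Vec<u64>, // validator indices, stand-in for AggregationBits
}

#[derive(Clone)]
struct TypeOne {
    info: Info,
    proof: Vec<u8>, // opaque single-message proof bytes
}

#[derive(Default)]
struct TypeTwo {
    info: Vec<Info>, // full metadata list, preserved by the stub
    proof: Vec<u8>,  // merged proof bytes; left empty by the stub
}

/// No-crypto merge: keep every Type-1's metadata, drop the proof bytes.
/// Real merging lands once lean_multisig exposes a merged-proof primitive.
fn aggregate_type_2(type_ones: Vec<TypeOne>) -> TypeTwo {
    TypeTwo {
        info: type_ones.into_iter().map(|t1| t1.info).collect(),
        proof: Vec::new(), // `ByteListMiB::default()` in the PR
    }
}

fn main() {
    let t1 = TypeOne {
        info: Info { message: [0u8; 32], slot: 5, participants: vec![0, 2] },
        proof: vec![0xAB; 4],
    };
    let merged = aggregate_type_2(vec![t1]);
    assert_eq!(merged.info.len(), 1); // metadata survives the merge
    assert!(merged.proof.is_empty()); // proof bytes intentionally empty
}
```

The metadata list is exactly what block-body ingestion later reads back to rebuild info-only Type-1 entries for fork choice.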
- Test fixtures regenerated from canonical leanSpec (`make leanSpec/fixtures`). The regen also cleared three pre-existing forkchoice spec failures on `main` (`AttestationTooFarInFuture` ×2, `AggregateVerificationFailed(InvalidProof)` on `test_valid_gossip_aggregated_attestation`) — they were stale-fixture artifacts.

## Correctness / Behavior Guarantees

**Verified at gossip:** `on_gossip_aggregated_attestation` continues to run real `ethlambda_crypto::verify_aggregated_signature` on every `SignedAggregatedAttestation`. Invalid aggregates are rejected at the gossip boundary, just as before.

**Block-level becomes structural:** block-level verification no longer crypto-verifies the merged proof. The merged proof bytes can't be split client-side (the Type-2 merging primitive doesn't exist in `lean-multisig` yet — the existing `aggregate_proofs` is single-message only). `verify_signatures` enforces:

- `info.len() == attestations.len() + 1`,
- each `info[i]` matches the corresponding `block.body.attestations[i]` on `participants`, `slot`, and `message`,
- the trailing `info[N]` has `message == block_root`, `slot == block.slot`, and a single-bit `participants` set to `block.proposer_index`,
- all participant indices fit within the validator registry.

This is the conscious "mirror upstream stubs" trade-off agreed during planning. When `lean_multisig` ships a real `verify_type_2`, the structural stub is swapped for the real call.

**Block-body ingestion preserves fork-choice LMD GHOST inputs:** since the merged proof can't be split, `process_new_block` inserts info-only Type-1 entries (real `(message, slot, participants)`, empty proof bytes) into the payload buffer. `extract_latest_known_attestations` works unchanged. Empty-bytes entries never get fed back into `aggregate_proofs`: that path is only hit when multiple proofs share the same `AttestationData`, in which case at least one came from gossip with real bytes.
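The structural checks above can be sketched in isolation. This is a hedged sketch under simplifying assumptions: participants are ascending `Vec<u64>` index lists rather than SSZ bitfields, and `verify_structural` is a hypothetical name, not the crate's `verify_signatures`.

```rust
struct Info {
    message: [u8; 32],
    slot: u64,
    participants: Vec<u64>, // ascending validator indices
}

/// Per-attestation view: hash_tree_root of the data, its slot, and its bits.
struct AttView {
    message: [u8; 32],
    slot: u64,
    participants: Vec<u64>,
}

fn verify_structural(
    infos: &[Info],
    attestations: &[AttView],
    block_root: [u8; 32],
    block_slot: u64,
    proposer_index: u64,
    num_validators: u64,
) -> Result<(), &'static str> {
    // One info entry per block-body attestation plus one trailing proposer entry.
    if infos.len() != attestations.len() + 1 {
        return Err("info/attestation length mismatch");
    }
    for (info, att) in infos.iter().zip(attestations) {
        if info.participants != att.participants {
            return Err("participants mismatch");
        }
        if info.slot != att.slot || info.message != att.message {
            return Err("message/slot mismatch");
        }
        if info.participants.iter().any(|&v| v >= num_validators) {
            return Err("validator index out of range");
        }
    }
    // Trailing proposer entry: block root, block slot, single proposer bit.
    let proposer = &infos[attestations.len()];
    if proposer.message != block_root || proposer.slot != block_slot {
        return Err("proposer entry mismatch");
    }
    if proposer.participants != [proposer_index] {
        return Err("proposer participants mismatch");
    }
    if proposer_index >= num_validators {
        return Err("proposer index out of range");
    }
    Ok(())
}

fn main() {
    let root = [1u8; 32];
    let att = AttView { message: [2u8; 32], slot: 3, participants: vec![0, 1] };
    let infos = vec![
        Info { message: [2u8; 32], slot: 3, participants: vec![0, 1] },
        Info { message: root, slot: 4, participants: vec![7] },
    ];
    assert!(verify_structural(&infos, &[att], root, 4, 7, 8).is_ok());
}
```

Note that none of these checks touch the proof bytes; the cryptographic binding of each per-attestation Type-1 is still established at gossip ingestion.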
**Storage:** table name kept (`BlockSignatures`) to avoid a RocksDB CF migration; doc comment updated. Renaming to `Table::BlockProof` is a follow-up.

**Skipped tests, all behind `TODO(type1-type2)`:**

- `ssz_spectests.rs`: `SignedBlock`, `BlockSignatures`, `AggregatedSignatureProof`, `SignedAggregatedAttestation` — the on-disk SSZ fixture bytes still use the legacy schema since canonical leanSpec hasn't absorbed the refactor.
- `signature_spectests.rs`: `test_invalid_proposer_signature` — relies on block-level proposer-signature crypto, which is now a structural stub.

Attempted to bump `LEAN_SPEC_COMMIT_HASH` to `anshalshukla/leanSpec@0ab09dd` to regenerate fixtures against the new schema. Reverted: the upstream testing harness in that commit (`leanSpec/packages/testing/src/consensus_testing/keys.py`) still imports `AttestationSignatures`, which the same commit removes — `fill` crashes on module load. Documented in a `NOTE(type1-type2)` in the Makefile.

## Tests Added / Run

- Added: SSZ round-trip and capacity unit tests for the new Type-1/Type-2 containers in `crates/common/types/src/block.rs`.
- Updated: `verify_signatures_rejects_participants_mismatch`, `build_block_caps_attestation_data_entries`, `on_block_rejects_duplicate_attestation_data`, the `compact_attestations` and `extend_proofs_greedily` tests, all `forkchoice_spectests.rs` step builders, the `signature_types.rs` fixture converter, and the `rpc::test_get_latest_finalized_block` test — all rebuilt to construct the new merged-proof shape.
- Verified locally:
  - `make fmt` — clean
  - `cargo clippy --workspace --all-targets -- -D warnings` — clean
  - `cargo test --workspace --release` — green (84 forkchoice spec tests, 7 signature spec tests with 1 expected skip, all unit tests pass)

## Related Issues / PRs

- Upstream commit being ported: [`anshalshukla/leanSpec@0ab09dd`](https://github.com/anshalshukla/leanSpec/commit/0ab09dd166efb398be0373d32cdac3ec18f2b071)
- Follow-ups when canonical absorbs the refactor:
  - Swap the structural `verify_type_2` stub for the real `lean_multisig` primitive.
  - Revert the `LEAN_SPEC_COMMIT_HASH` skip markers in `ssz_spectests.rs` and `signature_spectests.rs`.
  - Consider renaming `Table::BlockSignatures` → `Table::BlockProof`.

## ✅ Verification Checklist

- [x] Ran `make fmt` — clean
- [x] Ran `make lint` (clippy with `-D warnings`) — clean
- [x] Ran `cargo test --workspace --release` — all passing

---------

Co-authored-by: Tomás Grüner <47506558+MegaRedHand@users.noreply.github.com>
---
 Cargo.lock | 1 +
 Makefile | 7 +
 crates/blockchain/Cargo.toml | 2 +
 crates/blockchain/src/aggregation.rs | 30 +-
 crates/blockchain/src/lib.rs | 34 +-
 crates/blockchain/src/store.rs | 373 ++++++++++--------
 .../blockchain/tests/forkchoice_spectests.rs | 17 +-
 .../blockchain/tests/signature_spectests.rs | 18 +
 crates/common/test-fixtures/Cargo.toml | 1 +
 .../common/test-fixtures/src/fork_choice.rs | 65 +--
 .../test-fixtures/src/verify_signatures.rs | 94 +++--
 crates/common/types/src/attestation.rs | 8 +-
 crates/common/types/src/block.rs | 292 ++++++++++----
 crates/common/types/tests/ssz_spectests.rs | 33 +-
 crates/common/types/tests/ssz_types.rs | 105 +----
 crates/net/rpc/src/lib.rs | 11 +-
 crates/net/rpc/src/test_driver.rs | 12 +-
 crates/storage/src/store.rs | 66 ++--
 18 files changed, 695 insertions(+), 474 deletions(-)

diff --git a/Cargo.lock b/Cargo.lock
index d410f613..4f18935d 100644
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -2209,6 +2209,7 @@ version = "0.1.0"
 dependencies = [
"ethlambda-types", "hex", + "libssz", "libssz-types", "serde", "serde_json", diff --git a/Makefile b/Makefile index 0fb13940..164409f0 100644 --- a/Makefile +++ b/Makefile @@ -25,6 +25,13 @@ docker-build: ## 🐳 Build the Docker image @echo # 2026-04-29 +# NOTE(type1-type2): an attempted bump to anshalshukla/leanSpec@0ab09dd ("dummy +# type 1 and type 2 aggregation with block proofs") was reverted because the +# testing harness in that branch still imports `AttestationSignatures`, which +# the same commit removed β€” the fixture generator fails to load. We stay on +# the canonical commit and skip the affected SSZ-spec and signature-spec test +# cases until the upstream refactor lands together with matching testing-side +# updates. LEAN_SPEC_COMMIT_HASH:=18fe71fee49f8865a5c8a4cb8b1787b0cbc9e25b leanSpec: diff --git a/crates/blockchain/Cargo.toml b/crates/blockchain/Cargo.toml index 65c6ecf2..f25234cd 100644 --- a/crates/blockchain/Cargo.toml +++ b/crates/blockchain/Cargo.toml @@ -19,6 +19,8 @@ ethlambda-crypto.workspace = true ethlambda-metrics.workspace = true ethlambda-types.workspace = true +libssz.workspace = true + spawned-concurrency.workspace = true tokio.workspace = true diff --git a/crates/blockchain/src/aggregation.rs b/crates/blockchain/src/aggregation.rs index e100b035..b48390fb 100644 --- a/crates/blockchain/src/aggregation.rs +++ b/crates/blockchain/src/aggregation.rs @@ -13,7 +13,7 @@ use ethlambda_crypto::aggregate_mixed; use ethlambda_storage::Store; use ethlambda_types::{ attestation::{AggregationBits, HashedAttestationData}, - block::{AggregatedSignatureProof, ByteListMiB}, + block::{ByteListMiB, BytecodeClaim, TypeOneInfo, TypeOneMultiSignature}, primitives::H256, signature::{ValidatorPublicKey, ValidatorSignature}, state::Validator, @@ -65,7 +65,7 @@ pub struct AggregationSnapshot { /// as a message payload so the store can be updated and gossip publish fired. 
pub struct AggregatedGroupOutput { pub(crate) hashed: HashedAttestationData, - pub(crate) proof: AggregatedSignatureProof, + pub(crate) proof: TypeOneMultiSignature, pub(crate) participants: Vec, pub(crate) keys_to_delete: Vec<(u64, H256)>, } @@ -232,7 +232,7 @@ fn build_job( /// can't be fully resolved (passing fewer pubkeys than the proof expects would /// produce an invalid aggregate). fn resolve_child_pubkeys( - child_proofs: &[AggregatedSignatureProof], + child_proofs: &[TypeOneMultiSignature], validators: &[Validator], ) -> (Vec<(Vec, ByteListMiB)>, Vec) { let mut children = Vec::with_capacity(child_proofs.len()); @@ -253,7 +253,7 @@ fn resolve_child_pubkeys( continue; } accepted_child_ids.extend(&participant_ids); - children.push((child_pubkeys, proof.proof_data.clone())); + children.push((child_pubkeys, proof.proof.clone())); } (children, accepted_child_ids) @@ -290,8 +290,16 @@ pub fn aggregate_job(job: AggregationJob) -> Option { participants.dedup(); let aggregation_bits = aggregation_bits_from_validator_indices(&participants); - let proof = AggregatedSignatureProof::new(aggregation_bits, proof_data); - metrics::observe_aggregated_proof_size(proof.proof_data.len()); + let proof = TypeOneMultiSignature { + info: TypeOneInfo { + message: data_root, + slot: job.slot, + participants: aggregation_bits, + bytecode_claim: BytecodeClaim::ZERO, + }, + proof: proof_data, + }; + metrics::observe_aggregated_proof_size(proof.proof.len()); Some(AggregatedGroupOutput { hashed: job.hashed, @@ -328,14 +336,14 @@ pub fn finalize_aggregation_session(store: &Store) { /// no proof adds new coverage. This keeps the number of children minimal /// while maximizing the validators we can skip re-aggregating from scratch. 
fn select_proofs_greedily( - new_proofs: &[AggregatedSignatureProof], - known_proofs: &[AggregatedSignatureProof], -) -> (Vec, HashSet) { - let mut selected: Vec = Vec::new(); + new_proofs: &[TypeOneMultiSignature], + known_proofs: &[TypeOneMultiSignature], +) -> (Vec, HashSet) { + let mut selected: Vec = Vec::new(); let mut covered: HashSet = HashSet::new(); for proof_set in [new_proofs, known_proofs] { - let mut remaining: Vec<&AggregatedSignatureProof> = proof_set.iter().collect(); + let mut remaining: Vec<&TypeOneMultiSignature> = proof_set.iter().collect(); while !remaining.is_empty() { let best_idx = remaining diff --git a/crates/blockchain/src/lib.rs b/crates/blockchain/src/lib.rs index 28390c3f..707af364 100644 --- a/crates/blockchain/src/lib.rs +++ b/crates/blockchain/src/lib.rs @@ -8,9 +8,10 @@ use ethlambda_types::{ ShortRoot, aggregator::AggregatorController, attestation::{SignedAggregatedAttestation, SignedAttestation}, - block::{BlockSignatures, SignedBlock}, + block::{ByteListMiB, SignedBlock, TypeOneMultiSignature, TypeTwoMultiSignature}, primitives::{H256, HashTreeRoot as _}, }; +use libssz::SszEncode as _; use crate::aggregation::{ AGGREGATION_DEADLINE, AggregateProduced, AggregationDeadline, AggregationDone, @@ -42,10 +43,7 @@ pub const MILLISECONDS_PER_INTERVAL: u64 = 800; pub const INTERVALS_PER_SLOT: u64 = 5; /// Milliseconds in a slot (derived from interval duration and count). pub const MILLISECONDS_PER_SLOT: u64 = MILLISECONDS_PER_INTERVAL * INTERVALS_PER_SLOT; -/// Maximum number of distinct AttestationData entries per block. -/// -/// See: leanSpec commit 0c9528a (PR #536). -pub const MAX_ATTESTATIONS_DATA: usize = 16; +pub use ethlambda_types::block::MAX_ATTESTATIONS_DATA; /// Future-slot tolerance for gossip attestations, expressed in intervals. 
/// /// Bounds the clock skew the time check is willing to absorb when admitting a @@ -318,7 +316,7 @@ impl BlockChainServer { let _timing = metrics::time_block_building(); // Build the block with attestation signatures - let Ok((block, attestation_signatures, _post_checkpoints)) = + let Ok((block, type_one_proofs, _post_checkpoints)) = store::produce_block_with_signatures(&mut self.store, slot, validator_id) .inspect_err(|err| error!(%slot, %validator_id, %err, "Failed to build block")) else { @@ -337,15 +335,25 @@ impl BlockChainServer { return; }; - // Assemble SignedBlock + // Assemble SignedBlock: wrap the proposer's XMSS signature as a + // singleton Type-1 and fold every attestation Type-1 plus the + // proposer Type-1 into the block's single merged Type-2 proof. + let proposer_proof_bytes = ByteListMiB::try_from(proposer_signature.to_vec()) + .expect("XMSS signature fits in ByteListMiB"); + let proposer_t1 = TypeOneMultiSignature::for_proposer( + validator_id, + proposer_proof_bytes, + block_root, + slot, + ); + let mut all_proofs = type_one_proofs; + all_proofs.push(proposer_t1); + let merged = TypeTwoMultiSignature::from_type_1s(all_proofs); + let proof_bytes = ByteListMiB::try_from(merged.to_ssz()) + .expect("merged Type-2 proof fits in ByteListMiB"); let signed_block = SignedBlock { message: block, - signature: BlockSignatures { - proposer_signature, - attestation_signatures: attestation_signatures - .try_into() - .expect("attestation signatures within limit"), - }, + proof: proof_bytes, }; // Process the block locally before publishing diff --git a/crates/blockchain/src/store.rs b/crates/blockchain/src/store.rs index 7c7e7f86..a7d0561f 100644 --- a/crates/blockchain/src/store.rs +++ b/crates/blockchain/src/store.rs @@ -11,12 +11,16 @@ use ethlambda_types::{ AggregatedAttestation, AggregationBits, Attestation, AttestationData, HashedAttestationData, SignedAggregatedAttestation, SignedAttestation, validator_indices, }, - block::{AggregatedAttestations, 
AggregatedSignatureProof, Block, BlockBody, SignedBlock}, + block::{ + AggregatedAttestations, Block, BlockBody, ByteListMiB, BytecodeClaim, SignedBlock, + TypeOneInfo, TypeOneMultiSignature, TypeTwoMultiSignature, + }, checkpoint::Checkpoint, primitives::{H256, HashTreeRoot as _}, signature::ValidatorSignature, state::State, }; +use libssz::SszDecode as _; use tracing::{info, trace, warn}; use crate::{ @@ -372,7 +376,7 @@ pub fn on_gossip_aggregated_attestation( { let _timing = metrics::time_pq_sig_aggregated_signatures_verification(); ethlambda_crypto::verify_aggregated_signature( - &aggregated.proof.proof_data, + &aggregated.proof.proof, pubkeys, &data_root, slot, @@ -381,7 +385,7 @@ pub fn on_gossip_aggregated_attestation( .map_err(StoreError::AggregateVerificationFailed)?; // Read stats before moving the proof into the store. - let num_participants = aggregated.proof.participants.count_ones(); + let num_participants = aggregated.proof.info.participants.count_ones(); let target_slot = aggregated.data.target.slot; let target_root = aggregated.data.target.root; let source_slot = aggregated.data.source.slot; @@ -507,18 +511,41 @@ fn on_block_core( store.insert_signed_block(block_root, signed_block.clone()); store.insert_state(block_root, post_state); - // Process block body attestations and their signatures + // Process block body attestations and feed them into the payload buffer + // so fork choice's LMD GHOST overlay can see block-only votes. + // + // Since the block carries a single merged Type-2 proof, we cannot recover + // per-attestation proof bytes here. The entries we insert are info-only + // (`TypeOneInfo` from the merged proof's `info` list, with empty `proof` + // bytes). Real per-attestation proof bytes still arrive via gossip + // (`SignedAggregatedAttestation`) and verify there; this insertion is + // purely for fork-choice vote bookkeeping. 
Compact aggregation paths + // (`compact_attestations` β†’ `aggregate_proofs`) only run when there are + // multiple proofs per attestation data, so info-only entries are safe. let aggregated_attestations = &block.body.attestations; - let attestation_signatures = &signed_block.signature.attestation_signatures; + let merged = TypeTwoMultiSignature::from_ssz_bytes(signed_block.proof.iter().as_slice()) + .map_err(|_| StoreError::ProposerSignatureDecodingFailed)?; + let expected_info_len = aggregated_attestations.len() + 1; + if merged.info.len() != expected_info_len { + return Err(StoreError::AttestationSignatureMismatch { + signatures: merged.info.len(), + attestations: aggregated_attestations.len(), + }); + } - // Store one proof per attestation data in known aggregated payloads. - let mut known_entries: Vec<(HashedAttestationData, AggregatedSignatureProof)> = Vec::new(); - for (att, proof) in aggregated_attestations + let mut known_entries: Vec<(HashedAttestationData, TypeOneMultiSignature)> = + Vec::with_capacity(aggregated_attestations.len()); + for (att, info) in aggregated_attestations .iter() - .zip(attestation_signatures.iter()) + .zip(merged.info.iter().take(aggregated_attestations.len())) { - known_entries.push((HashedAttestationData::new(att.data.clone()), proof.clone())); - // Count each participating validator as a valid attestation + let hashed = HashedAttestationData::new(att.data.clone()); + let type_one = TypeOneMultiSignature { + info: info.clone(), + proof: ByteListMiB::default(), + }; + known_entries.push((hashed, type_one)); + // Count each participating validator as a valid attestation. 
let count = validator_indices(&att.aggregation_bits).count() as u64; metrics::inc_attestations_valid(count); } @@ -689,7 +716,7 @@ pub fn produce_block_with_signatures( store: &mut Store, slot: u64, validator_index: u64, -) -> Result<(Block, Vec, PostBlockCheckpoints), StoreError> { +) -> Result<(Block, Vec, PostBlockCheckpoints), StoreError> { // Get parent block and state to build upon let head_root = get_proposal_head(store, slot); let head_state = store @@ -876,9 +903,9 @@ fn union_aggregation_bits(a: &AggregationBits, b: &AggregationBits) -> Aggregati /// - Multiple entries: merged into one using recursive proof aggregation /// (leanSpec PR #510). fn compact_attestations( - entries: Vec<(AggregatedAttestation, AggregatedSignatureProof)>, + entries: Vec<(AggregatedAttestation, TypeOneMultiSignature)>, head_state: &State, -) -> Result, StoreError> { +) -> Result, StoreError> { if entries.len() <= 1 { return Ok(entries); } @@ -904,7 +931,7 @@ fn compact_attestations( } // Wrap in Option so we can .take() items by index without cloning - let mut items: Vec> = + let mut items: Vec> = entries.into_iter().map(Some).collect(); let mut compacted = Vec::with_capacity(order.len()); @@ -918,7 +945,7 @@ fn compact_attestations( } // Collect all entries for this AttestationData - let group_items: Vec<(AggregatedAttestation, AggregatedSignatureProof)> = indices + let group_items: Vec<(AggregatedAttestation, TypeOneMultiSignature)> = indices .iter() .map(|&idx| items[idx].take().expect("index used once")) .collect(); @@ -945,7 +972,7 @@ fn compact_attestations( .map_err(|_| StoreError::PubkeyDecodingFailed(vid)) }) .collect::, _>>()?; - Ok((pubkeys, proof.proof_data.clone())) + Ok((pubkeys, proof.proof.clone())) }) .collect::, StoreError>>()?; @@ -953,7 +980,15 @@ fn compact_attestations( let merged_proof_data = aggregate_proofs(children, &data_root, slot) .map_err(StoreError::SignatureAggregationFailed)?; - let merged_proof = 
AggregatedSignatureProof::new(merged_bits.clone(), merged_proof_data); + let merged_proof = TypeOneMultiSignature { + info: TypeOneInfo { + message: data_root, + slot: data.slot, + participants: merged_bits.clone(), + bytecode_claim: BytecodeClaim::ZERO, + }, + proof: merged_proof_data, + }; let merged_att = AggregatedAttestation { aggregation_bits: merged_bits, data, @@ -978,8 +1013,8 @@ fn compact_attestations( /// Each selected proof is appended to `selected` paired with its /// corresponding AggregatedAttestation. fn extend_proofs_greedily( - proofs: &[AggregatedSignatureProof], - selected: &mut Vec<(AggregatedAttestation, AggregatedSignatureProof)>, + proofs: &[TypeOneMultiSignature], + selected: &mut Vec<(AggregatedAttestation, TypeOneMultiSignature)>, att_data: &AttestationData, ) { if proofs.is_empty() { @@ -1018,7 +1053,7 @@ fn extend_proofs_greedily( .collect(); let att = AggregatedAttestation { - aggregation_bits: proof.participants.clone(), + aggregation_bits: proof.info.participants.clone(), data: att_data.clone(), }; @@ -1045,9 +1080,9 @@ fn build_block( proposer_index: u64, parent_root: H256, known_block_roots: &HashSet, - aggregated_payloads: &HashMap)>, -) -> Result<(Block, Vec, PostBlockCheckpoints), StoreError> { - let mut selected: Vec<(AggregatedAttestation, AggregatedSignatureProof)> = Vec::new(); + aggregated_payloads: &HashMap)>, +) -> Result<(Block, Vec, PostBlockCheckpoints), StoreError> { + let mut selected: Vec<(AggregatedAttestation, TypeOneMultiSignature)> = Vec::new(); if !aggregated_payloads.is_empty() { // Genesis edge case: when building on genesis (slot 0), @@ -1154,9 +1189,16 @@ fn build_block( Ok((final_block, aggregated_signatures, post_checkpoints)) } -/// Verify all signatures in a signed block. +/// Structural verification of a signed block's merged Type-2 proof. /// -/// Each attestation has a corresponding proof in the signature list. 
+/// Phase 3 of the Type-1 / Type-2 aggregation migration replaces the per- +/// attestation `verify_aggregated_signature` plus standalone proposer-signature +/// check with a structural alignment check on the merged Type-2 blob: the +/// `info` list must hold one entry per block-body attestation plus one +/// trailing entry for the proposer. Cryptographic verification of each Type-1 +/// still happens at gossip ingestion (`on_gossip_aggregated_attestation`); the +/// block-level crypto path returns once `lean_multisig` exposes a real +/// merged-proof verification primitive. /// /// Exposed publicly so RPC handlers (notably the Hive test-driver /// `verify_signatures/run` endpoint) can run the exact same verification path @@ -1166,110 +1208,68 @@ pub fn verify_block_signatures( state: &State, signed_block: &SignedBlock, ) -> Result<(), StoreError> { - use ethlambda_crypto::verify_aggregated_signature; - use ethlambda_types::signature::ValidatorSignature; - let total_start = std::time::Instant::now(); let block = &signed_block.message; let attestations = &block.body.attestations; - let attestation_signatures = &signed_block.signature.attestation_signatures; - if attestations.len() != attestation_signatures.len() { + let merged = TypeTwoMultiSignature::from_ssz_bytes(signed_block.proof.iter().as_slice()) + .map_err(|_| StoreError::ProposerSignatureDecodingFailed)?; + + let expected_info_len = attestations.len() + 1; + if merged.info.len() != expected_info_len { return Err(StoreError::AttestationSignatureMismatch { - signatures: attestation_signatures.len(), + signatures: merged.info.len(), attestations: attestations.len(), }); } + let validators = &state.validators; let num_validators = validators.len() as u64; - // Verify each attestation's signature proof in parallel - let aggregated_start = std::time::Instant::now(); - - // Prepare verification inputs sequentially (cheap: bit checks + pubkey lookups) - let verification_inputs: Vec<_> = attestations - .iter() - 
.zip(attestation_signatures) - .map(|(attestation, aggregated_proof)| { - if attestation.aggregation_bits != aggregated_proof.participants { - return Err(StoreError::ParticipantsMismatch); - } - - let slot: u32 = attestation.data.slot.try_into().expect("slot exceeds u32"); - let message = attestation.data.hash_tree_root(); - - // Collect attestation public keys with bounds check in a single pass - let public_keys: Vec<_> = validator_indices(&attestation.aggregation_bits) - .map(|vid| { - if vid >= num_validators { - return Err(StoreError::InvalidValidatorIndex); - } - validators[vid as usize] - .get_attestation_pubkey() - .map_err(|_| StoreError::PubkeyDecodingFailed(vid)) - }) - .collect::>()?; - - Ok((&aggregated_proof.proof_data, public_keys, message, slot)) - }) - .collect::>()?; - - // Run expensive signature verification in parallel. - // into_par_iter() moves each tuple, avoiding a clone of public_keys. - use rayon::prelude::*; - verification_inputs.into_par_iter().try_for_each( - |(proof_data, public_keys, message, slot)| { - let result = { - let _timing = metrics::time_pq_sig_aggregated_signatures_verification(); - verify_aggregated_signature(proof_data, public_keys, &message, slot) - }; - match result { - Ok(()) => { - metrics::inc_pq_sig_aggregated_signatures_valid(); - Ok(()) - } - Err(e) => { - metrics::inc_pq_sig_aggregated_signatures_invalid(); - Err(StoreError::AggregateVerificationFailed(e)) - } + // Per-attestation entries: messages, slots, and participants must mirror + // the block body. The crypto binding for each is already checked at gossip. 
+ for (attestation, info) in attestations.iter().zip(merged.info.iter()) { + if attestation.aggregation_bits != info.participants { + return Err(StoreError::ParticipantsMismatch); + } + if info.slot != attestation.data.slot { + return Err(StoreError::AttestationSignatureMismatch { + signatures: merged.info.len(), + attestations: attestations.len(), + }); + } + if info.message != attestation.data.hash_tree_root() { + return Err(StoreError::ParticipantsMismatch); + } + for vid in validator_indices(&attestation.aggregation_bits) { + if vid >= num_validators { + return Err(StoreError::InvalidValidatorIndex); } - }, - )?; - - let aggregated_elapsed = aggregated_start.elapsed(); - - let proposer_start = std::time::Instant::now(); - - // Verify proposer signature over block root using proposal key - let proposer_signature = - ValidatorSignature::from_bytes(&signed_block.signature.proposer_signature) - .map_err(|_| StoreError::ProposerSignatureDecodingFailed)?; - - let proposer = validators - .get(block.proposer_index as usize) - .ok_or(StoreError::InvalidValidatorIndex)?; - - let proposer_pubkey = proposer - .get_proposal_pubkey() - .map_err(|_| StoreError::PubkeyDecodingFailed(proposer.index))?; + } + } - let slot: u32 = block.slot.try_into().expect("slot exceeds u32"); + // Trailing proposer entry: single bit for `block.proposer_index`, + // message equals the block root, slot matches the block slot. 
+ let proposer_info = &merged.info[attestations.len()]; let block_root = block.hash_tree_root(); - - if !proposer_signature.is_valid(&proposer_pubkey, slot, &block_root) { + if proposer_info.message != block_root || proposer_info.slot != block.slot { + return Err(StoreError::ProposerSignatureVerificationFailed); + } + let proposer_bits: Vec = validator_indices(&proposer_info.participants).collect(); + if proposer_bits != [block.proposer_index] { return Err(StoreError::ProposerSignatureVerificationFailed); } - let proposer_elapsed = proposer_start.elapsed(); + if block.proposer_index >= num_validators { + return Err(StoreError::InvalidValidatorIndex); + } let total_elapsed = total_start.elapsed(); info!( slot = block.slot, attestation_count = attestations.len(), - ?aggregated_elapsed, - ?proposer_elapsed, ?total_elapsed, - "Signature verification timing" + "Block proof structural check" ); Ok(()) @@ -1325,19 +1325,46 @@ fn reorg_depth(old_head: H256, new_head: H256, store: &Store) -> Option { mod tests { use super::*; use ethlambda_types::{ - attestation::{AggregatedAttestation, AggregationBits, AttestationData, XmssSignature}, + attestation::{AggregatedAttestation, AggregationBits, AttestationData}, block::{ - AggregatedSignatureProof, AttestationSignatures, BlockBody, BlockSignatures, - SignedBlock, + BlockBody, ByteListMiB, SignedBlock, TypeOneMultiSignature, TypeTwoMultiSignature, }, checkpoint::Checkpoint, - signature::SIGNATURE_SIZE, state::State, }; + use libssz::SszEncode as _; + + /// Test helper: wrap a list of Type-1 attestation proofs plus a stub + /// proposer Type-1 into the SSZ-encoded merged Type-2 blob that the + /// post-Phase-3 `SignedBlock.proof` carries. 
+ fn make_signed_block_proof( + proposer_index: u64, + block_root: H256, + slot: u64, + attestation_proofs: Vec, + ) -> ByteListMiB { + let mut all = attestation_proofs; + all.push(TypeOneMultiSignature::for_proposer( + proposer_index, + ByteListMiB::default(), + block_root, + slot, + )); + let merged = TypeTwoMultiSignature::from_type_1s(all); + ByteListMiB::try_from(merged.to_ssz()).expect("merged proof fits in ByteListMiB") + } #[test] fn verify_signatures_rejects_participants_mismatch() { - let state = State::from_genesis(1000, vec![]); + // One validator in state so the proposer-index bounds check passes. + let state = State::from_genesis( + 1000, + vec![ethlambda_types::state::Validator { + attestation_pubkey: [0u8; 52], + proposal_pubkey: [0u8; 52], + index: 0, + }], + ); let attestation_data = AttestationData { slot: 0, @@ -1346,12 +1373,12 @@ mod tests { source: Checkpoint::default(), }; - // Create attestation with bits [0, 1] set + // Attestation declares bits [0, 1] in the block body... let mut attestation_bits = AggregationBits::with_length(4).unwrap(); attestation_bits.set(0, true).unwrap(); attestation_bits.set(1, true).unwrap(); - // Create proof with different bits [0, 1, 2] set + // ...but the merged Type-2 carries info[0].participants = [0, 1, 2]. 
let mut proof_bits = AggregationBits::with_length(4).unwrap(); proof_bits.set(0, true).unwrap(); proof_bits.set(1, true).unwrap(); @@ -1359,25 +1386,29 @@ mod tests { let attestation = AggregatedAttestation { aggregation_bits: attestation_bits, - data: attestation_data, + data: attestation_data.clone(), }; - let proof = AggregatedSignatureProof::empty(proof_bits); - let attestations = AggregatedAttestations::try_from(vec![attestation]).unwrap(); - let attestation_signatures = AttestationSignatures::try_from(vec![proof]).unwrap(); + + let block = Block { + slot: 0, + proposer_index: 0, + parent_root: H256::ZERO, + state_root: H256::ZERO, + body: BlockBody { attestations }, + }; + let block_root = block.hash_tree_root(); + + let mismatching_t1 = TypeOneMultiSignature::empty( + proof_bits, + attestation_data.hash_tree_root(), + attestation_data.slot, + ); + let proof = make_signed_block_proof(0, block_root, 0, vec![mismatching_t1]); let signed_block = SignedBlock { - message: Block { - slot: 0, - proposer_index: 0, - parent_root: H256::ZERO, - state_root: H256::ZERO, - body: BlockBody { attestations }, - }, - signature: BlockSignatures { - attestation_signatures, - proposer_signature: XmssSignature::try_from(vec![0u8; SIGNATURE_SIZE]).unwrap(), - }, + message: block, + proof, }; let result = verify_block_signatures(&state, &signed_block); @@ -1436,10 +1467,8 @@ mod tests { // Simulate a stall: populate the payload pool with many distinct entries. // Each has a unique target (different slot) and a large proof payload. 
- let mut aggregated_payloads: HashMap< - H256, - (AttestationData, Vec<AggregatedSignatureProof>), - > = HashMap::new(); + let mut aggregated_payloads: HashMap<H256, (AttestationData, Vec<TypeOneMultiSignature>)> = + HashMap::new(); for i in 0..NUM_PAYLOAD_ENTRIES { let target_slot = (i + 1) as u64; @@ -1466,7 +1495,7 @@ mod tests { let proof_bytes: Vec<u8> = vec![0xAB; PROOF_SIZE]; let proof_data = SszList::try_from(proof_bytes).expect("proof fits in ByteListMiB"); - let proof = AggregatedSignatureProof::new(bits, proof_data); + let proof = TypeOneMultiSignature::new(bits, data_root, att_data.slot, proof_data); aggregated_payloads.insert(data_root, (att_data, vec![proof])); } @@ -1490,14 +1519,12 @@ mod tests { "MAX_ATTESTATIONS_DATA should cap attestations: got {attestation_count}" ); - // Construct the full signed block as it would be sent over gossip - let attestation_sigs: Vec<AggregatedSignatureProof> = signatures; + // Build the merged Type-2 proof exactly as `propose_block` would. + let block_root = block.hash_tree_root(); + let proof = make_signed_block_proof(proposer_index, block_root, block.slot, signatures); let signed_block = SignedBlock { message: block, - signature: BlockSignatures { - attestation_signatures: AttestationSignatures::try_from(attestation_sigs).unwrap(), - proposer_signature: XmssSignature::try_from(vec![0u8; SIGNATURE_SIZE]).unwrap(), - }, + proof, }; // SSZ-encode: this is exactly what publish_block does before compression @@ -1531,6 +1558,13 @@ mod tests { bits } + /// Test helper: empty Type-1 proof carrying the given participants and slot + /// metadata. The message and bytecode_claim are zeroed — only the participant + /// bitfield matters for the pipeline tests below. 
+ fn make_type_one_proof(bits: AggregationBits, slot: u64) -> TypeOneMultiSignature { + TypeOneMultiSignature::empty(bits, H256::ZERO, slot) + } + #[test] fn compact_attestations_no_duplicates() { let data_a = make_att_data(1); @@ -1544,14 +1578,14 @@ mod tests { aggregation_bits: bits_a.clone(), data: data_a.clone(), }, - AggregatedSignatureProof::empty(bits_a), + make_type_one_proof(bits_a, data_a.slot), ), ( AggregatedAttestation { aggregation_bits: bits_b.clone(), data: data_b.clone(), }, - AggregatedSignatureProof::empty(bits_b), + make_type_one_proof(bits_b, data_b.slot), ), ]; @@ -1578,21 +1612,21 @@ mod tests { aggregation_bits: bits_0.clone(), data: data_a.clone(), }, - AggregatedSignatureProof::empty(bits_0), + make_type_one_proof(bits_0, data_a.slot), ), ( AggregatedAttestation { aggregation_bits: bits_1.clone(), data: data_b.clone(), }, - AggregatedSignatureProof::empty(bits_1), + make_type_one_proof(bits_1, data_b.slot), ), ( AggregatedAttestation { aggregation_bits: bits_2.clone(), data: data_c.clone(), }, - AggregatedSignatureProof::empty(bits_2), + make_type_one_proof(bits_2, data_c.slot), ), ]; @@ -1655,24 +1689,27 @@ mod tests { ]) .unwrap(); - let attestation_signatures = AttestationSignatures::try_from(vec![ - AggregatedSignatureProof::empty(bits_a), - AggregatedSignatureProof::empty(bits_b), - ]) - .unwrap(); - + let block = Block { + slot: 1, + proposer_index: 0, + parent_root: head_root, + state_root: H256::ZERO, + body: BlockBody { attestations }, + }; + let block_root = block.hash_tree_root(); + let att_root = att_data.hash_tree_root(); + let proof = make_signed_block_proof( + 0, + block_root, + block.slot, + vec![ + TypeOneMultiSignature::empty(bits_a, att_root, att_data.slot), + TypeOneMultiSignature::empty(bits_b, att_root, att_data.slot), + ], + ); let signed_block = SignedBlock { - message: Block { - slot: 1, - proposer_index: 0, - parent_root: head_root, - state_root: H256::ZERO, - body: BlockBody { attestations }, - }, - signature: 
BlockSignatures { - attestation_signatures, - proposer_signature: XmssSignature::try_from(vec![0u8; SIGNATURE_SIZE]).unwrap(), - }, + message: block, + proof, }; let result = on_block_without_verification(&mut store, signed_block); @@ -1703,9 +1740,9 @@ mod tests { // A = {0, 1, 2, 3} (4 validators — largest, picked first) // B = {2, 3, 4} (overlaps A on {2,3} but adds validator 4) // C = {1, 2} (subset of A — adds nothing, must be skipped) - let proof_a = AggregatedSignatureProof::empty(make_bits(&[0, 1, 2, 3])); - let proof_b = AggregatedSignatureProof::empty(make_bits(&[2, 3, 4])); - let proof_c = AggregatedSignatureProof::empty(make_bits(&[1, 2])); + let proof_a = make_type_one_proof(make_bits(&[0, 1, 2, 3]), data.slot); + let proof_b = make_type_one_proof(make_bits(&[2, 3, 4]), data.slot); + let proof_c = make_type_one_proof(make_bits(&[1, 2]), data.slot); let mut selected = Vec::new(); extend_proofs_greedily(&[proof_a, proof_b, proof_c], &mut selected, &data); @@ -1724,7 +1761,7 @@ mod tests { // Attestation bits mirror the proof's participants for each entry. for (att, proof) in &selected { - assert_eq!(att.aggregation_bits, proof.participants); + assert_eq!(att.aggregation_bits, proof.info.participants); assert_eq!(att.data, data); } } @@ -1738,8 +1775,8 @@ mod tests { // B's participants are a subset of A's. After picking A, B offers zero // new coverage and must not be selected (its inclusion would also // violate the disjoint invariant). 
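An aside on the selection rule the two tests above pin down: candidates are tried largest-first, and a candidate is kept only if it covers at least one validator not already covered (a strict subset like C contributes nothing and is skipped). The sketch below is a minimal standalone model of that rule; `select_greedily` is a hypothetical helper working on plain `Vec<u64>` index sets rather than the real `TypeOneMultiSignature` proofs and `AggregationBits` bitfields, so it illustrates the coverage logic only, not the ethlambda `extend_proofs_greedily` implementation.

```rust
use std::cmp::Reverse;
use std::collections::HashSet;

// Illustrative model of the greedy coverage rule asserted by the tests:
// sort candidates by participant count (largest first), then keep a
// candidate only if it contributes at least one validator index not
// already covered by an earlier pick.
fn select_greedily(mut candidates: Vec<Vec<u64>>) -> Vec<Vec<u64>> {
    candidates.sort_by_key(|c| Reverse(c.len()));
    let mut covered: HashSet<u64> = HashSet::new();
    let mut selected = Vec::new();
    for cand in candidates {
        // A subset of the already-covered set offers zero new coverage: skip.
        if cand.iter().all(|v| covered.contains(v)) {
            continue;
        }
        covered.extend(cand.iter().copied());
        selected.push(cand);
    }
    selected
}

fn main() {
    // Mirrors the test scenario: A = {0,1,2,3}, B = {2,3,4}, C = {1,2}.
    // A is picked first, B adds validator 4, C is a subset of A and is skipped.
    let picked = select_greedily(vec![vec![0, 1, 2, 3], vec![2, 3, 4], vec![1, 2]]);
    assert_eq!(picked, vec![vec![0, 1, 2, 3], vec![2, 3, 4]]);
}
```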
- let proof_a = AggregatedSignatureProof::empty(make_bits(&[0, 1, 2, 3])); - let proof_b = AggregatedSignatureProof::empty(make_bits(&[1, 2])); + let proof_a = make_type_one_proof(make_bits(&[0, 1, 2, 3]), data.slot); + let proof_b = make_type_one_proof(make_bits(&[1, 2]), data.slot); let mut selected = Vec::new(); extend_proofs_greedily(&[proof_a, proof_b], &mut selected, &data); diff --git a/crates/blockchain/tests/forkchoice_spectests.rs b/crates/blockchain/tests/forkchoice_spectests.rs index dcdcdcf6..7be7fb12 100644 --- a/crates/blockchain/tests/forkchoice_spectests.rs +++ b/crates/blockchain/tests/forkchoice_spectests.rs @@ -8,7 +8,7 @@ use ethlambda_blockchain::{MILLISECONDS_PER_INTERVAL, MILLISECONDS_PER_SLOT, sto use ethlambda_storage::{Store, backend::InMemoryBackend}; use ethlambda_types::{ attestation::{AttestationData, SignedAggregatedAttestation, SignedAttestation}, - block::{AggregatedSignatureProof, Block}, + block::{Block, TypeOneMultiSignature}, primitives::{ByteList, H256, HashTreeRoot as _}, state::State, }; @@ -118,13 +118,14 @@ fn run(path: &Path) -> datatest_stable::Result<()> { let proof_bytes: Vec<u8> = proof_fixture.proof_data.into(); let proof_data = ByteList::try_from(proof_bytes) .expect("aggregated proof data fits in ByteListMiB"); - let aggregated = SignedAggregatedAttestation { - data: att_data.data.into(), - proof: AggregatedSignatureProof::new( - proof_fixture.participants.into(), - proof_data, - ), - }; + let data: AttestationData = att_data.data.into(); + let proof = TypeOneMultiSignature::new( + proof_fixture.participants.into(), + data.hash_tree_root(), + data.slot, + proof_data, + ); + let aggregated = SignedAggregatedAttestation { data, proof }; let result = store::on_gossip_aggregated_attestation(&mut store, aggregated); assert_step_outcome(step_idx, step.valid, result)?; diff --git a/crates/blockchain/tests/signature_spectests.rs b/crates/blockchain/tests/signature_spectests.rs index fdba2e56..dc59384d 100644 --- 
a/crates/blockchain/tests/signature_spectests.rs +++ b/crates/blockchain/tests/signature_spectests.rs @@ -13,6 +13,19 @@ use ethlambda_test_fixtures::verify_signatures::VerifySignaturesTestVector; const SUPPORTED_FIXTURE_FORMAT: &str = "verify_signatures_test"; +/// Tests that require cryptographic signature verification at block level. +/// +/// Phase 3 of the Type-1 / Type-2 aggregation migration replaces the per- +/// attestation `verify_aggregated_signature` plus standalone proposer-signature +/// verification with a structural check on the merged Type-2 proof; the real +/// safety net is gossip-time per-attestation verification. Tests that only +/// fail on the *crypto* leg accordingly pass when run against the structural +/// stub, so they are skipped pending the `lean_multisig`-backed real +/// `verify_type_2` primitive. +/// +/// TODO(type1-type2): re-enable once block-level crypto verification returns. +const SKIP_TESTS: &[&str] = &["test_invalid_proposer_signature"]; + fn run(path: &Path) -> datatest_stable::Result<()> { let tests = VerifySignaturesTestVector::from_file(path)?; @@ -25,6 +38,11 @@ fn run(path: &Path) -> datatest_stable::Result<()> { .into()); } + if SKIP_TESTS.iter().any(|skip| name.contains(skip)) { + println!("Skipping test (Phase-3 crypto stub): {name}"); + continue; + } + println!("Running test: {}", name); // Step 1: Populate the pre-state with the test fixture diff --git a/crates/common/test-fixtures/Cargo.toml b/crates/common/test-fixtures/Cargo.toml index ac2b117d..4821b42d 100644 --- a/crates/common/test-fixtures/Cargo.toml +++ b/crates/common/test-fixtures/Cargo.toml @@ -11,6 +11,7 @@ version.workspace = true [dependencies] ethlambda-types.workspace = true +libssz.workspace = true libssz-types.workspace = true serde.workspace = true diff --git a/crates/common/test-fixtures/src/fork_choice.rs b/crates/common/test-fixtures/src/fork_choice.rs index 99fedc20..75cd810e 100644 --- a/crates/common/test-fixtures/src/fork_choice.rs +++ 
b/crates/common/test-fixtures/src/fork_choice.rs @@ -9,10 +9,10 @@ use crate::{ }; use ethlambda_types::attestation::XmssSignature; use ethlambda_types::block::{ - AggregatedSignatureProof, AttestationSignatures, BlockSignatures, SignedBlock, + ByteListMiB, MAX_ATTESTATIONS_DATA, SignedBlock, TypeOneMultiSignature, TypeTwoMultiSignature, }; -use ethlambda_types::primitives::H256; -use ethlambda_types::signature::SIGNATURE_SIZE; +use ethlambda_types::primitives::{H256, HashTreeRoot as _}; +use libssz::SszEncode as _; use serde::{Deserialize, Deserializer}; use std::collections::HashMap; use std::path::Path; @@ -151,31 +151,52 @@ impl BlockStepData { } } - /// Build a SignedBlock with placeholder signatures: one empty aggregated - /// proof per attestation (participant bits copied from the block body) and - /// a zeroed proposer signature. + /// Build a `SignedBlock` whose merged Type-2 proof is structurally correct + /// (one Type-1 info entry per block-body attestation plus a trailing + /// proposer entry) but carries empty proof bytes — the crypto layer is + /// never invoked by callers of this helper. /// /// Used by callers that import the block via `on_block_without_verification` - /// (fork-choice spec-test runner and Hive test-driver), where the crypto - /// layer is never invoked but the SignedBlock shape must still satisfy the - /// length checks `on_block_core` performs before dispatching. + /// (fork-choice spec-test runner and Hive test-driver), where + /// `process_new_block` still decodes the merged proof and asserts the info + /// list aligns with `attestations.len() + 1` before dispatching. + /// + /// Oversized-block tests (more than `MAX_ATTESTATIONS_DATA` attestations) + /// overflow `TypeOneInfos`'s SSZ-list cap, so we fall back to an empty + /// proof blob — `process_new_block` rejects with `TooManyAttestationData` + /// before the proof is ever decoded, so its contents don't matter for + /// those scenarios. 
pub fn to_blank_signed_block(&self) -> SignedBlock { let block = self.to_block(); - let proofs: Vec<AggregatedSignatureProof> = block - .body - .attestations - .iter() - .map(|att| AggregatedSignatureProof::empty(att.aggregation_bits.clone())) - .collect(); - + let block_root = block.hash_tree_root(); + let proof = if block.body.attestations.len() > MAX_ATTESTATIONS_DATA { + ByteListMiB::default() + } else { + let attestation_proofs: Vec<TypeOneMultiSignature> = block + .body + .attestations + .iter() + .map(|att| { + TypeOneMultiSignature::empty( + att.aggregation_bits.clone(), + att.data.hash_tree_root(), + att.data.slot, + ) + }) + .collect(); + let mut all = attestation_proofs; + all.push(TypeOneMultiSignature::for_proposer( + block.proposer_index, + ByteListMiB::default(), + block_root, + block.slot, + )); + let merged = TypeTwoMultiSignature::from_type_1s(all); + ByteListMiB::try_from(merged.to_ssz()).expect("merged proof fits in ByteListMiB") + }; SignedBlock { message: block, - signature: BlockSignatures { - proposer_signature: XmssSignature::try_from(vec![0u8; SIGNATURE_SIZE]) - .expect("zero-filled signature has the correct length"), - attestation_signatures: AttestationSignatures::try_from(proofs) - .expect("attestation proofs within limit"), - }, + proof, } } } diff --git a/crates/common/test-fixtures/src/verify_signatures.rs b/crates/common/test-fixtures/src/verify_signatures.rs index 59c5febc..d9a44f28 100644 --- a/crates/common/test-fixtures/src/verify_signatures.rs +++ b/crates/common/test-fixtures/src/verify_signatures.rs @@ -7,8 +7,10 @@ use crate::{AggregationBits, Block, Container, TestInfo, TestState, deser_xmss_hex}; use ethlambda_types::attestation::{AggregationBits as EthAggregationBits, XmssSignature}; use ethlambda_types::block::{ - AggregatedSignatureProof, AttestationSignatures, BlockSignatures, ByteListMiB, SignedBlock, + ByteListMiB, SignedBlock, TypeOneMultiSignature, TypeTwoMultiSignature, }; +use ethlambda_types::primitives::HashTreeRoot as _; +use libssz::SszEncode as _; use 
serde::Deserialize; use std::collections::HashMap; use std::fmt; @@ -62,34 +64,45 @@ pub struct TestSignedBlock { } /// Lossy fixture-to-SignedBlock conversion: per-attestation proof bytes from -/// the fixture are dropped, leaving empty payloads. Adequate for callers that -/// don't reach the leanVM aggregate verifier (e.g. signature spec tests whose -/// fixtures all set `expectException`). For real signature verification use -/// [`TestSignedBlock::try_into_signed_block_with_proofs`]. +/// the fixture are dropped, leaving empty payloads. The merged Type-2 proof +/// preserves the per-attestation metadata (`message`, `slot`, `participants`) +/// and the proposer's XMSS signature so structural verification passes. +/// Adequate for callers that don't reach the leanVM aggregate verifier (e.g. +/// signature spec tests whose fixtures all set `expectException`). For real +/// signature verification use [`TestSignedBlock::try_into_signed_block_with_proofs`]. impl From<TestSignedBlock> for SignedBlock { fn from(value: TestSignedBlock) -> Self { - let block = value.block.into(); - let proposer_signature = value.signature.proposer_signature; + let block: ethlambda_types::block::Block = value.block.into(); + let block_root = block.hash_tree_root(); + let proposer_proof = ByteListMiB::try_from(value.signature.proposer_signature.to_vec()) + .expect("XMSS signature fits in ByteListMiB"); - let attestation_signatures: AttestationSignatures = value + let attestation_t1s: Vec<TypeOneMultiSignature> = value .signature .attestation_signatures .data .into_iter() - .map(|att_sig| { + .zip(block.body.attestations.iter()) + .map(|(att_sig, att)| { let participants: EthAggregationBits = att_sig.participants.into(); - AggregatedSignatureProof::empty(participants) + TypeOneMultiSignature::empty(participants, att.data.hash_tree_root(), att.data.slot) }) - .collect::<Vec<_>>() - .try_into() - .expect("too many attestation signatures"); + .collect(); + + let mut all = attestation_t1s; + all.push(TypeOneMultiSignature::for_proposer( 
block.proposer_index, + proposer_proof, + block_root, + block.slot, + )); + let merged = TypeTwoMultiSignature::from_type_1s(all); + let proof = ByteListMiB::try_from(merged.to_ssz()) + .expect("merged Type-2 proof fits in ByteListMiB"); SignedBlock { message: block, - signature: BlockSignatures { - attestation_signatures, - proposer_signature, - }, + proof, } } } @@ -128,20 +141,25 @@ impl std::error::Error for SignedBlockConvertError {} impl TestSignedBlock { /// Materialize a `SignedBlock` that preserves the fixture-supplied - /// per-attestation proof bytes verbatim. Required for verifying signatures - /// against the leanVM aggregate path; the lossy [`From`] impl above drops - /// these bytes. + /// per-attestation proof bytes verbatim by folding every Type-1 plus the + /// proposer Type-1 into the block's merged Type-2 proof. The lossy + /// [`From`] impl above drops these bytes — use this one when the consumer + /// needs the original aggregate bytes (e.g. the Hive test-driver feeds + /// them through `verify_block_signatures`). 
pub fn try_into_signed_block_with_proofs(self) -> Result<SignedBlock, SignedBlockConvertError> { - let block = self.block.into(); - let proposer_signature = self.signature.proposer_signature; + let block: ethlambda_types::block::Block = self.block.into(); + let block_root = block.hash_tree_root(); + let proposer_proof = ByteListMiB::try_from(self.signature.proposer_signature.to_vec()) + .expect("XMSS signature fits in ByteListMiB"); - let proofs: Vec<AggregatedSignatureProof> = self + let attestation_t1s: Vec<TypeOneMultiSignature> = self .signature .attestation_signatures .data .into_iter() + .zip(block.body.attestations.iter()) .enumerate() - .map(|(index, att_sig)| { + .map(|(index, (att_sig, att))| { let participants: EthAggregationBits = att_sig.participants.into(); let raw = &att_sig.proof_data.data; let stripped = raw.strip_prefix("0x").unwrap_or(raw); @@ -154,19 +172,33 @@ impl TestSignedBlock { let len = bytes.len(); let proof_data = ByteListMiB::try_from(bytes) .map_err(|_| SignedBlockConvertError::ProofTooLarge { index, len })?; - Ok(AggregatedSignatureProof::new(participants, proof_data)) + Ok(TypeOneMultiSignature::new( + participants, + att.data.hash_tree_root(), + att.data.slot, + proof_data, + )) }) .collect::<Result<Vec<_>, _>>()?; - let attestation_signatures: AttestationSignatures = AttestationSignatures::try_from(proofs) - .map_err(|_| SignedBlockConvertError::TooManyAttestationSignatures)?; + if attestation_t1s.len() >= 17 { + return Err(SignedBlockConvertError::TooManyAttestationSignatures); + } + + let mut all = attestation_t1s; + all.push(TypeOneMultiSignature::for_proposer( + block.proposer_index, + proposer_proof, + block_root, + block.slot, + )); + let merged = TypeTwoMultiSignature::from_type_1s(all); + let proof = ByteListMiB::try_from(merged.to_ssz()) + .expect("merged Type-2 proof fits in ByteListMiB"); Ok(SignedBlock { message: block, - signature: BlockSignatures { - attestation_signatures, - proposer_signature, - }, + proof, }) } } diff --git a/crates/common/types/src/attestation.rs index 
10fd7d82..f0684af5 100644 --- a/crates/common/types/src/attestation.rs +++ b/crates/common/types/src/attestation.rs @@ -2,7 +2,7 @@ use libssz_derive::{HashTreeRoot, SszDecode, SszEncode}; use libssz_types::{SszBitlist, SszVector}; use crate::{ - block::AggregatedSignatureProof, + block::TypeOneMultiSignature, checkpoint::Checkpoint, primitives::{H256, HashTreeRoot as _}, signature::SIGNATURE_SIZE, @@ -103,10 +103,14 @@ pub fn bits_is_subset(a: &AggregationBits, b: &AggregationBits) -> bool { } /// Aggregated attestation with its signature proof, used for gossip on the aggregation topic. +/// +/// The `proof` carries a Type-1 single-message multi-signer aggregate: the +/// signed message is the attestation data root, participants live in +/// `proof.info.participants`, and the raw aggregate bytes are in `proof.proof`. #[derive(Debug, Clone, SszEncode, SszDecode, HashTreeRoot)] pub struct SignedAggregatedAttestation { pub data: AttestationData, - pub proof: AggregatedSignatureProof, + pub proof: TypeOneMultiSignature, } /// Attestation data paired with its precomputed tree hash root. diff --git a/crates/common/types/src/block.rs b/crates/common/types/src/block.rs index 5bb1be8b..9aaac1d9 100644 --- a/crates/common/types/src/block.rs +++ b/crates/common/types/src/block.rs @@ -4,21 +4,27 @@ use libssz_derive::{HashTreeRoot, SszDecode, SszEncode}; use libssz_types::SszList; use crate::{ - attestation::{AggregatedAttestation, AggregationBits, XmssSignature, validator_indices}, + attestation::{AggregatedAttestation, AggregationBits, validator_indices}, primitives::{self, ByteList, H256}, }; // Convenience trait for calling hash_tree_root() without a hasher argument use primitives::HashTreeRoot as _; -/// Envelope carrying a block and its aggregated signatures. +/// Envelope carrying a block and a single merged proof binding every signature +/// it depends on. 
+/// +/// The `proof` blob is the SSZ-encoded form of a [`TypeTwoMultiSignature`] that +/// covers, in order, every per-attestation Type-1 proof plus a singleton Type-1 +/// proof carrying the proposer's signature over the block root. Decode with +/// `TypeTwoMultiSignature::from_ssz_bytes(&signed_block.proof)`. /// ///
/// -/// `HashTreeRoot` is intentionally not derived: `XmssSignature` is encoded as a -/// fixed-size byte vector for cross-client serialization compatibility, but the -/// spec treats it as a container for Merkleization. We never hash a -/// `SignedBlock` directly — consumers always hash the inner `Block`. +/// `HashTreeRoot` is intentionally not derived: consumers never hash a +/// `SignedBlock` directly — they always hash the inner `Block`. Keeping the +/// envelope structurally minimal also means the on-chain root is independent +/// of how the merged proof is serialised. /// ///
#[derive(Clone, SszEncode, SszDecode)] @@ -26,96 +32,160 @@ pub struct SignedBlock { /// The block being signed. pub message: Block, - /// Aggregated signature payload for the block. - /// - /// Contains per-attestation aggregated proofs and the proposer's signature - /// over the block root using the proposal key. - pub signature: BlockSignatures, + /// SSZ-encoded merged proof for every signature this block depends on. + pub proof: ByteListMiB, } -// Manual Debug impl because leanSig signatures don't implement Debug. +// Manual Debug impl because the merged proof bytes are large and opaque. impl core::fmt::Debug for SignedBlock { fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result { f.debug_struct("SignedBlock") .field("message", &self.message) - .field("signature", &"...") + .field("proof", &format_args!("<{} bytes>", self.proof.len())) .finish() } } -/// Signature payload for the block. -/// -///
-/// -/// See the note on [`SignedBlock`] for why `HashTreeRoot` is omitted. -/// -///
-#[derive(Clone, SszEncode, SszDecode)] -pub struct BlockSignatures { - /// Attestation signatures for the aggregated attestations in the block body. - /// - /// Each entry corresponds to an aggregated attestation from the block body and - /// contains the leanVM aggregated signature proof bytes for the participating validators. - /// - /// TODO: - /// - Eventually this field will be replaced by a single SNARK aggregating *all* signatures. - pub attestation_signatures: AttestationSignatures, +pub type ByteListMiB = ByteList<1_048_576>; - /// Proposer's signature over the block root using the proposal key. - pub proposer_signature: XmssSignature, -} +// ============================================================================ +// Type-1 / Type-2 multi-signature model +// ============================================================================ -/// List of per-attestation aggregated signature proofs. +/// Trusted `Evaluation` field carried inside Type-1 / Type-2 proofs. /// -/// Each entry corresponds to an aggregated attestation from the block body. +/// Upstream models this as a `Bytes32` placeholder until `lean_multisig_py` +/// bindings land with the concrete SSZ serialisation. Mirrored here as `H256`. +pub type BytecodeClaim = H256; + +/// Per-message metadata for a Type-1 (single-message) multi-signer proof. /// -/// It contains: -/// - the participants bitfield, -/// - proof bytes from leanVM signature aggregation. -pub type AttestationSignatures = SszList; +/// Carries everything a verifier needs to recompute the proof's binding inputs +/// without re-deriving from block content. Participants stay in bitfield form +/// for wire compactness; pubkeys are resolved at the binding boundary from the +/// validator registry. +#[derive(Debug, Clone, SszEncode, SszDecode, HashTreeRoot)] +pub struct TypeOneInfo { + /// The 32-byte message that was signed + /// (e.g. `hash_tree_root` of attestation data, or a block root). 
+ pub message: H256, + /// The slot in which the signatures were created. + pub slot: u64, + /// Bitfield indicating which validators contributed signatures. + pub participants: AggregationBits, + /// Trusted evaluation tied to the proof. Recomputed by the verifier when + /// received externally. + pub bytecode_claim: BytecodeClaim, +} -/// Cryptographic proof that a set of validators signed a message. +/// Maximum number of distinct `AttestationData` entries permitted in a single +/// block. Canonical home for the cap shared across `ethlambda-blockchain`, +/// `ethlambda-test-fixtures`, and the wire types in this crate. /// -/// This container encapsulates the output of the leanVM signature aggregation, -/// combining the participant set with the proof bytes. This design ensures -/// the proof is self-describing: it carries information about which validators -/// it covers. +/// See: leanSpec commit 0c9528a (PR #536). +pub const MAX_ATTESTATIONS_DATA: usize = 16; + +/// SSZ-list of Type-1 info entries packed inside a Type-2 proof. /// -/// The proof can verify that all participants signed the same message in the -/// same epoch, using a single verification operation instead of checking -/// each signature individually. +/// Holds at most `MAX_ATTESTATIONS_DATA` distinct attestation entries plus one +/// for the proposer's own signature. Mirrors upstream +/// `TypeOneInfos.LIMIT = MAX_ATTESTATIONS_DATA + 1`. +pub type TypeOneInfos = SszList<TypeOneInfo, { MAX_ATTESTATIONS_DATA + 1 }>; + +/// A Type-1 single-message proof aggregating signatures from many validators. #[derive(Debug, Clone, SszEncode, SszDecode, HashTreeRoot)] -pub struct AggregatedSignatureProof { - /// Bitfield indicating which validators' signatures are included. - pub participants: AggregationBits, - /// The raw aggregated proof bytes from leanVM. - pub proof_data: ByteListMiB, +pub struct TypeOneMultiSignature { + /// Message, slot, participants, and trusted bytecode claim. 
+ pub info: TypeOneInfo, + /// Raw aggregated proof bytes (`ExecutionProof` on the Rust side). + pub proof: ByteListMiB, } -pub type ByteListMiB = ByteList<1_048_576>; +/// A Type-2 merged proof covering many distinct messages. +/// +/// On the wire a `SignedBlock` will carry the SSZ-serialised form of this +/// container as its single proof blob (introduced in a later phase). The +/// block-level info list enumerates every `(message, slot, participants)` +/// tuple the proof binds to. +#[derive(Debug, Clone, SszEncode, SszDecode, HashTreeRoot)] +pub struct TypeTwoMultiSignature { + /// Per-message metadata, one entry per merged Type-1 proof. + pub info: TypeOneInfos, + /// Aggregation-level trusted evaluation. Recomputed on receive. + pub bytecode_claim: BytecodeClaim, + /// Raw merged proof bytes (`ExecutionProof` on the Rust side). + pub proof: ByteListMiB, +} -impl AggregatedSignatureProof { - /// Create a new aggregated signature proof. - pub fn new(participants: AggregationBits, proof_data: ByteListMiB) -> Self { +impl TypeOneMultiSignature { + /// Build a Type-1 proof with the given participants, message, slot and + /// raw proof bytes. + pub fn new( + participants: AggregationBits, + message: H256, + slot: u64, + proof_data: ByteListMiB, + ) -> Self { Self { - participants, - proof_data, + info: TypeOneInfo { + message, + slot, + participants, + bytecode_claim: BytecodeClaim::ZERO, + }, + proof: proof_data, } } - /// Create an empty proof with the given participants bitfield. + /// Build an empty Type-1 proof with the given participants and message + /// metadata. `proof` bytes are left empty — useful as a placeholder when + /// actual aggregation is not yet performed (forkchoice tests, etc.). + pub fn empty(participants: AggregationBits, message: H256, slot: u64) -> Self { + Self::new(participants, message, slot, SszList::new()) + } + + /// Wrap a proposer's XMSS signature over a block root as a singleton Type-1. 
/// - /// Used as a placeholder when actual aggregation is not yet implemented. - pub fn empty(participants: AggregationBits) -> Self { - Self { - participants, - proof_data: SszList::new(), - } + /// Used by block production and test fixtures to fold the proposer's + /// signature into the block-level Type-2 merged proof. + pub fn for_proposer( + proposer_index: u64, + proposer_signature: ByteListMiB, + block_root: H256, + slot: u64, + ) -> Self { + let mut participants = AggregationBits::with_length(proposer_index as usize + 1) + .expect("validator index fits"); + participants + .set(proposer_index as usize, true) + .expect("index within capacity"); + Self::new(participants, block_root, slot, proposer_signature) } /// Returns the validator indices that are set in the participants bitfield. pub fn participant_indices(&self) -> impl Iterator<Item = u64> + '_ { - validator_indices(&self.participants) + validator_indices(&self.info.participants) + } +} + +impl TypeTwoMultiSignature { + /// Merge a list of Type-1 single-message proofs into a single Type-2 + /// multi-message proof. Mirrors upstream leanSpec's `aggregate_type_2` + /// stub: the metadata list (`TypeOneInfos`) is faithfully preserved so a + /// verifier can re-derive the per-message binding inputs, but the merged + /// `proof` bytes are left empty until the `lean_multisig_py` bindings ship + /// real cryptographic merging. Block-level signature verification stays + /// structural-only in the meantime, and per-attestation crypto verification + /// continues to run at gossip ingestion. + pub fn from_type_1s(type_1s: Vec<TypeOneMultiSignature>) -> Self { + let infos: Vec<TypeOneInfo> = type_1s.into_iter().map(|t1| t1.info).collect(); + let info = TypeOneInfos::try_from(infos) + .expect("type-1 infos within MAX_ATTESTATIONS_DATA + 1 limit"); + Self { + info, + bytecode_claim: BytecodeClaim::ZERO, + proof: ByteListMiB::default(), + } } } @@ -203,3 +273,93 @@ pub struct BlockBody { /// List of aggregated attestations included in a block. 
pub type AggregatedAttestations = SszList; + +#[cfg(test)] +mod tests { + use super::*; + use libssz::{SszDecode, SszEncode}; + + fn sample_bits(len: usize, set: &[usize]) -> AggregationBits { + let mut b = AggregationBits::with_length(len).unwrap(); + for &i in set { + b.set(i, true).unwrap(); + } + b + } + + fn sample_type_one_info() -> TypeOneInfo { + TypeOneInfo { + message: H256([7u8; 32]), + slot: 42, + participants: sample_bits(8, &[0, 3, 7]), + bytecode_claim: H256([1u8; 32]), + } + } + + #[test] + fn type_one_info_ssz_round_trip() { + let info = sample_type_one_info(); + let bytes = info.to_ssz(); + let decoded = TypeOneInfo::from_ssz_bytes(&bytes).expect("decode"); + assert_eq!(decoded.message, info.message); + assert_eq!(decoded.slot, info.slot); + assert_eq!(decoded.bytecode_claim, info.bytecode_claim); + assert_eq!( + decoded.participants.as_bytes(), + info.participants.as_bytes() + ); + } + + #[test] + fn type_one_multi_signature_ssz_round_trip() { + let proof_bytes: Vec<u8> = (0..64).collect(); + let sig = TypeOneMultiSignature { + info: sample_type_one_info(), + proof: ByteListMiB::try_from(proof_bytes.clone()).unwrap(), + }; + let bytes = sig.to_ssz(); + let decoded = TypeOneMultiSignature::from_ssz_bytes(&bytes).expect("decode"); + assert_eq!(decoded.proof.to_vec(), proof_bytes); + assert_eq!(decoded.info.slot, sig.info.slot); + } + + #[test] + fn type_two_multi_signature_ssz_round_trip() { + let infos: Vec<TypeOneInfo> = (0..3) + .map(|i| TypeOneInfo { + message: H256([i as u8; 32]), + slot: 100 + i as u64, + participants: sample_bits(8, &[i, i + 1]), + bytecode_claim: H256([0xAA; 32]), + }) + .collect(); + let merged_bytes: Vec<u8> = (0..128).map(|i| (i % 256) as u8).collect(); + let sig = TypeTwoMultiSignature { + info: TypeOneInfos::try_from(infos.clone()).unwrap(), + bytecode_claim: H256([0xBB; 32]), + proof: ByteListMiB::try_from(merged_bytes.clone()).unwrap(), + }; + let bytes = sig.to_ssz(); + let decoded = 
+            TypeTwoMultiSignature::from_ssz_bytes(&bytes).expect("decode");
+        assert_eq!(decoded.info.len(), 3);
+        assert_eq!(decoded.proof.to_vec(), merged_bytes);
+        assert_eq!(decoded.bytecode_claim, sig.bytecode_claim);
+        for (got, want) in decoded.info.iter().zip(infos.iter()) {
+            assert_eq!(got.slot, want.slot);
+            assert_eq!(got.message, want.message);
+        }
+    }
+
+    #[test]
+    fn type_one_infos_respects_limit() {
+        let too_many: Vec<TypeOneInfo> = (0..18)
+            .map(|i| TypeOneInfo {
+                message: H256([i as u8; 32]),
+                slot: i as u64,
+                participants: sample_bits(1, &[0]),
+                bytecode_claim: H256([0u8; 32]),
+            })
+            .collect();
+        assert!(TypeOneInfos::try_from(too_many).is_err());
+    }
+}
diff --git a/crates/common/types/tests/ssz_spectests.rs b/crates/common/types/tests/ssz_spectests.rs
index 911daf20..8391d3ff 100644
--- a/crates/common/types/tests/ssz_spectests.rs
+++ b/crates/common/types/tests/ssz_spectests.rs
@@ -62,22 +62,23 @@ fn run_ssz_test(test: &SszTestCase) -> datatest_stable::Result<()> {
             ssz_types::SignedAttestation,
             ethlambda_types::attestation::SignedAttestation,
         >(test),
-        "SignedBlock" => run_serialization_only_test::<
-            ssz_types::SignedBlock,
-            ethlambda_types::block::SignedBlock,
-        >(test),
-        "BlockSignatures" => run_serialization_only_test::<
-            ssz_types::BlockSignatures,
-            ethlambda_types::block::BlockSignatures,
-        >(test),
-        "AggregatedSignatureProof" => run_typed_test::<
-            ssz_types::AggregatedSignatureProof,
-            ethlambda_types::block::AggregatedSignatureProof,
-        >(test),
-        "SignedAggregatedAttestation" => run_typed_test::<
-            ssz_types::SignedAggregatedAttestation,
-            ethlambda_types::attestation::SignedAggregatedAttestation,
-        >(test),
+
+        // Skipped pending fixture regeneration against the Type-1 / Type-2
+        // schema (anshalshukla/leanSpec@0ab09dd). Phase 3 removed the legacy
+        // `BlockSignatures` / `AttestationSignatures` / `AggregatedSignatureProof`
+        // containers; the on-disk fixtures still serialise the old shape so
+        // SSZ-byte and root assertions don't line up.
+        // TODO(type1-type2): re-enable once `LEAN_SPEC_COMMIT_HASH` is bumped.
+        "SignedBlock"
+        | "BlockSignatures"
+        | "AggregatedSignatureProof"
+        | "SignedAggregatedAttestation" => {
+            println!(
+                " Skipping {} (Type-2 schema migration WIP)",
+                test.type_name
+            );
+            Ok(())
+        }
 
         // Unsupported types: skip with a message
         other => {
diff --git a/crates/common/types/tests/ssz_types.rs b/crates/common/types/tests/ssz_types.rs
index 27bd2bd8..d5395b29 100644
--- a/crates/common/types/tests/ssz_types.rs
+++ b/crates/common/types/tests/ssz_types.rs
@@ -2,18 +2,13 @@ use std::collections::HashMap;
 use std::path::Path;
 
 pub use ethlambda_test_fixtures::{
-    AggregatedAttestation, AggregationBits, AttestationData, Block, BlockBody, BlockHeader,
-    Checkpoint, Config, Container, TestInfo, TestState, Validator,
+    AggregatedAttestation, AttestationData, Block, BlockBody, BlockHeader, Checkpoint, Config,
+    TestInfo, TestState, Validator,
 };
 use ethlambda_types::{
     attestation::{
-        Attestation as DomainAttestation,
-        SignedAggregatedAttestation as DomainSignedAggregatedAttestation,
-        SignedAttestation as DomainSignedAttestation, XmssSignature,
-    },
-    block::{
-        AggregatedSignatureProof as DomainAggregatedSignatureProof, AttestationSignatures,
-        BlockSignatures as DomainBlockSignatures, ByteListMiB, SignedBlock as DomainSignedBlock,
+        Attestation as DomainAttestation, SignedAttestation as DomainSignedAttestation,
+        XmssSignature,
     },
     primitives::H256,
 };
@@ -129,87 +124,11 @@ impl From<SignedAttestation> for DomainSignedAttestation {
     }
 }
 
-#[derive(Debug, Clone, Deserialize)]
-pub struct SignedBlock {
-    pub block: Block,
-    pub signature: BlockSignatures,
-}
-
-impl From<SignedBlock> for DomainSignedBlock {
-    fn from(value: SignedBlock) -> Self {
-        Self {
-            message: value.block.into(),
-            signature: value.signature.into(),
-        }
-    }
-}
-
-#[derive(Debug, Clone, Deserialize)]
-pub struct BlockSignatures {
-    #[serde(rename = "attestationSignatures")]
-    pub attestation_signatures: Container<AggregatedSignatureProof>,
-    #[serde(rename =
"proposerSignature")] - #[serde(deserialize_with = "deser_signature_hex")] - pub proposer_signature: XmssSignature, -} - -impl From for DomainBlockSignatures { - fn from(value: BlockSignatures) -> Self { - let att_sigs: Vec = value - .attestation_signatures - .data - .into_iter() - .map(Into::into) - .collect(); - Self { - attestation_signatures: AttestationSignatures::try_from(att_sigs) - .expect("too many attestation signatures"), - proposer_signature: value.proposer_signature, - } - } -} - -#[derive(Debug, Clone, Deserialize)] -pub struct AggregatedSignatureProof { - pub participants: AggregationBits, - #[serde(rename = "proofData")] - pub proof_data: HexByteList, -} - -impl From for DomainAggregatedSignatureProof { - fn from(value: AggregatedSignatureProof) -> Self { - let proof_bytes: Vec = value.proof_data.into(); - Self { - participants: value.participants.into(), - proof_data: ByteListMiB::try_from(proof_bytes).expect("proof data too large"), - } - } -} - -/// Hex-encoded byte list in the fixture format: `{ "data": "0xdeadbeef" }` -#[derive(Debug, Clone, Deserialize)] -pub struct HexByteList { - data: String, -} - -impl From for Vec { - fn from(value: HexByteList) -> Self { - let stripped = value.data.strip_prefix("0x").unwrap_or(&value.data); - hex::decode(stripped).expect("invalid hex in proof data") - } -} - -#[derive(Debug, Clone, Deserialize)] -pub struct SignedAggregatedAttestation { - pub data: AttestationData, - pub proof: AggregatedSignatureProof, -} - -impl From for DomainSignedAggregatedAttestation { - fn from(value: SignedAggregatedAttestation) -> Self { - Self { - data: value.data.into(), - proof: value.proof.into(), - } - } -} +// NOTE: After Phase 3 the legacy `BlockSignatures` / `AttestationSignatures` / +// `AggregatedSignatureProof` containers are removed from the domain, and +// `SignedBlock` now carries a single `proof: ByteListMiB` field. 
The pinned +// leanSpec fixtures still use the old shape, so SSZ-byte and root assertions +// for `SignedBlock`, `BlockSignatures`, `AggregatedSignatureProof`, and +// `SignedAggregatedAttestation` are intentionally skipped in +// `ssz_spectests.rs::run_ssz_test` until the fixture commit is bumped to the +// Type-1/Type-2 schema. diff --git a/crates/net/rpc/src/lib.rs b/crates/net/rpc/src/lib.rs index a8e18319..73f600a6 100644 --- a/crates/net/rpc/src/lib.rs +++ b/crates/net/rpc/src/lib.rs @@ -305,11 +305,9 @@ mod tests { #[tokio::test] async fn test_get_latest_finalized_block() { use ethlambda_types::{ - attestation::XmssSignature, - block::{Block, BlockBody, BlockSignatures, SignedBlock}, + block::{Block, BlockBody, ByteListMiB, SignedBlock}, checkpoint::Checkpoint, primitives::{H256, HashTreeRoot as _}, - signature::SIGNATURE_SIZE, }; use libssz::SszEncode; @@ -317,7 +315,7 @@ mod tests { let backend = Arc::new(InMemoryBackend::new()); let mut store = Store::from_anchor_state(backend, state); - // Build a non-genesis signed block with empty body and zero proposer signature. + // Build a non-genesis signed block with empty body and empty proof blob. let block = Block { slot: 1, proposer_index: 0, @@ -328,10 +326,7 @@ mod tests { let block_root = block.header().hash_tree_root(); let signed_block = SignedBlock { message: block, - signature: BlockSignatures { - attestation_signatures: Default::default(), - proposer_signature: XmssSignature::try_from(vec![0u8; SIGNATURE_SIZE]).unwrap(), - }, + proof: ByteListMiB::default(), }; // Persist the signed block and mark it as the latest finalized checkpoint. 
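For orientation between the two file diffs: the structural-only check that the Phase-3 `verify_signatures` rewrite performs on the decoded `SignedBlock.proof` can be sketched in isolation. This is a hedged illustration, not ethlambda code — `InfoEntry` and `check_merged_shape` are invented stand-ins for the real `TypeOneInfo` metadata and the decode-then-align logic described in the PR body (the real `TypeOneInfo` also carries a participants bitfield and a bytecode claim).

```rust
// Simplified stand-in for `TypeOneInfo` (assumption: illustration only).
#[derive(Clone, Debug)]
struct InfoEntry {
    message: [u8; 32], // hash_tree_root of the signed message
    slot: u64,
}

/// Structural shape check in the spirit of the Phase-3 `verify_signatures`
/// stub: one info entry per attestation, in order, plus a trailing proposer
/// entry binding the block root at the block's slot. No cryptography.
fn check_merged_shape(
    infos: &[InfoEntry],
    attestations: &[([u8; 32], u64)], // (data_root, slot) per attestation
    block_root: [u8; 32],
    block_slot: u64,
) -> bool {
    // `info.len() == attestations.len() + 1`
    if infos.len() != attestations.len() + 1 {
        return false;
    }
    // Per-attestation (message, slot) alignment, pairwise and in order.
    for (info, (root, slot)) in infos.iter().zip(attestations) {
        if info.message != *root || info.slot != *slot {
            return false;
        }
    }
    // The trailing entry must bind the proposer's signature over the block root.
    let last = infos.last().expect("length checked above");
    last.message == block_root && last.slot == block_slot
}
```

Real cryptographic verification of each Type-1 (and of the merged proof itself) is deferred until `lean_multisig` bindings land; only this shape is enforced at block import.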
diff --git a/crates/net/rpc/src/test_driver.rs b/crates/net/rpc/src/test_driver.rs
index 2700a707..61cff59a 100644
--- a/crates/net/rpc/src/test_driver.rs
+++ b/crates/net/rpc/src/test_driver.rs
@@ -42,7 +42,7 @@ use ethlambda_types::{
     attestation::{
         AggregationBits as EthAggregationBits, SignedAggregatedAttestation, SignedAttestation,
     },
-    block::{AggregatedSignatureProof, Block, ByteListMiB},
+    block::{Block, ByteListMiB, TypeOneMultiSignature},
     checkpoint::Checkpoint,
     primitives::{H256, HashTreeRoot as _},
     state::State,
@@ -428,9 +428,15 @@ fn apply_step(store: &mut Store, step: ForkChoiceStep) -> Result<(), String> {
             let proof_bytes: Vec<u8> = proof.proof_data.into();
             let proof_data = ByteListMiB::try_from(proof_bytes)
                 .map_err(|err| format!("aggregated proof data too large: {err:?}"))?;
+            let data: ethlambda_types::attestation::AttestationData = att.data.into();
             let aggregated = SignedAggregatedAttestation {
-                data: att.data.into(),
-                proof: AggregatedSignatureProof::new(participants, proof_data),
+                proof: TypeOneMultiSignature::new(
+                    participants,
+                    data.hash_tree_root(),
+                    data.slot,
+                    proof_data,
+                ),
+                data,
             };
             store::on_gossip_aggregated_attestation(store, aggregated).map_err(|e| e.to_string())
         }
diff --git a/crates/storage/src/store.rs b/crates/storage/src/store.rs
index 8ce1bbc2..85db358c 100644
--- a/crates/storage/src/store.rs
+++ b/crates/storage/src/store.rs
@@ -11,9 +11,7 @@ use crate::api::{StorageBackend, StorageWriteBatch, Table};
 use ethlambda_types::{
     attestation::{AttestationData, HashedAttestationData, bits_is_subset},
-    block::{
-        AggregatedSignatureProof, Block, BlockBody, BlockHeader, BlockSignatures, SignedBlock,
-    },
+    block::{Block, BlockBody, BlockHeader, ByteListMiB, SignedBlock, TypeOneMultiSignature},
     checkpoint::Checkpoint,
     primitives::{H256, HashTreeRoot as _},
     signature::ValidatorSignature,
@@ -97,14 +95,14 @@ const GOSSIP_SIGNATURE_CAP: usize = 2048;
 #[derive(Clone)]
 struct PayloadEntry {
     data: AttestationData,
-    proofs: Vec<AggregatedSignatureProof>,
+    proofs: Vec<TypeOneMultiSignature>,
 }
 
 /// Fixed-size circular buffer for aggregated payloads.
 ///
 /// Groups proofs by attestation data (via data_root). Each distinct
 /// attestation message stores the full `AttestationData` plus all
-/// `AggregatedSignatureProof`s covering that message.
+/// `TypeOneMultiSignature`s covering that message.
 ///
 /// Entries are evicted FIFO (by insertion order of the data_root)
 /// when the buffer reaches capacity.
@@ -135,19 +133,19 @@ impl PayloadBuffer {
     /// any existing proof, the incoming proof is redundant and skipped.
     /// - Otherwise, any existing proof whose participants are a strict subset
     /// of the incoming proof's is removed before inserting.
-    fn push(&mut self, hashed: HashedAttestationData, proof: AggregatedSignatureProof) {
+    fn push(&mut self, hashed: HashedAttestationData, proof: TypeOneMultiSignature) {
         let (data_root, att_data) = hashed.into_parts();
 
         if let Some(entry) = self.data.get_mut(&data_root) {
             let mut to_remove: Vec<usize> = Vec::new();
             for (i, p) in entry.proofs.iter().enumerate() {
                 // Incoming is subsumed by an existing proof (incl. equal). Skip.
-                if bits_is_subset(&proof.participants, &p.participants) {
+                if bits_is_subset(&proof.info.participants, &p.info.participants) {
                     return;
                 }
                 // Existing is a strict subset of incoming. Mark for removal.
                 // (Non-strict equality was ruled out by the check above.)
-                if bits_is_subset(&p.participants, &proof.participants) {
+                if bits_is_subset(&p.info.participants, &proof.info.participants) {
                     to_remove.push(i);
                 }
             }
@@ -184,7 +182,7 @@ impl PayloadBuffer {
     }
 
     /// Insert a batch of (hashed_attestation_data, proof) entries.
-    fn push_batch(&mut self, entries: Vec<(HashedAttestationData, AggregatedSignatureProof)>) {
+    fn push_batch(&mut self, entries: Vec<(HashedAttestationData, TypeOneMultiSignature)>) {
         for (hashed, proof) in entries {
             self.push(hashed, proof);
         }
@@ -196,7 +194,7 @@ impl PayloadBuffer {
     /// like `promote_new_aggregated_payloads` re-insert into known_payloads
     /// deterministically. HashMap iteration would be RandomState-seeded and
    /// produce non-deterministic vote ordering for same-slot equivocation.
-    fn drain(&mut self) -> Vec<(HashedAttestationData, AggregatedSignatureProof)> {
+    fn drain(&mut self) -> Vec<(HashedAttestationData, TypeOneMultiSignature)> {
         self.total_proofs = 0;
         let mut result = Vec::with_capacity(self.data.values().map(|e| e.proofs.len()).sum());
         while let Some(data_root) = self.order.pop_front() {
@@ -220,7 +218,7 @@ impl PayloadBuffer {
     }
 
     /// Return cloned proofs for a given data_root, or empty vec if none.
-    fn proofs_for_root(&self, data_root: &H256) -> Vec<AggregatedSignatureProof> {
+    fn proofs_for_root(&self, data_root: &H256) -> Vec<TypeOneMultiSignature> {
         self.data
             .get(data_root)
             .map_or_else(Vec::new, |e| e.proofs.clone())
@@ -952,16 +950,16 @@ impl Store {
         batch.commit().expect("commit");
     }
 
-    /// Get a signed block by combining header, body, and signatures.
+    /// Get a signed block by combining header, body, and the merged proof.
     ///
     /// Returns None if any of the components are not found.
-    /// Note: Genesis block has no entry in BlockSignatures table.
+    /// Note: Genesis block has no entry in the `BlockSignatures` table.
     pub fn get_signed_block(&self, root: &H256) -> Option<SignedBlock> {
         let view = self.backend.begin_read().expect("read view");
         let key = root.to_ssz();
 
         let header_bytes = view.get(Table::BlockHeaders, &key).expect("get")?;
-        let sig_bytes = view.get(Table::BlockSignatures, &key).expect("get")?;
+        let proof_bytes = view.get(Table::BlockSignatures, &key).expect("get")?;
 
         let header = BlockHeader::from_ssz_bytes(&header_bytes).expect("valid header");
 
@@ -974,11 +972,11 @@ impl Store {
         };
 
         let block = Block::from_header_and_body(header, body);
-        let signature = BlockSignatures::from_ssz_bytes(&sig_bytes).expect("valid signatures");
+        let proof = ByteListMiB::from_ssz_bytes(&proof_bytes).expect("valid block proof");
 
         Some(SignedBlock {
             message: block,
-            signature,
+            proof,
         })
     }
 
@@ -1034,7 +1032,7 @@ impl Store {
     /// Returns a snapshot of known payloads as (AttestationData, Vec) pairs.
     pub fn known_aggregated_payloads(
         &self,
-    ) -> HashMap<H256, (AttestationData, Vec<AggregatedSignatureProof>)> {
+    ) -> HashMap<H256, (AttestationData, Vec<TypeOneMultiSignature>)> {
         let buf = self.known_payloads.lock().unwrap();
         buf.data
             .iter()
@@ -1069,7 +1067,7 @@ impl Store {
     pub fn existing_proofs_for_data(
         &self,
         data_root: &H256,
-    ) -> (Vec<AggregatedSignatureProof>, Vec<AggregatedSignatureProof>) {
+    ) -> (Vec<TypeOneMultiSignature>, Vec<TypeOneMultiSignature>) {
         let new = self.new_payloads.lock().unwrap().proofs_for_root(data_root);
         let known = self
             .known_payloads
@@ -1091,7 +1089,7 @@ impl Store {
     pub fn insert_known_aggregated_payload(
         &mut self,
         hashed: HashedAttestationData,
-        proof: AggregatedSignatureProof,
+        proof: TypeOneMultiSignature,
     ) {
         self.known_payloads.lock().unwrap().push(hashed, proof);
     }
 
@@ -1099,7 +1097,7 @@ impl Store {
     /// Batch-insert proofs into the known buffer.
pub fn insert_known_aggregated_payloads_batch( &mut self, - entries: Vec<(HashedAttestationData, AggregatedSignatureProof)>, + entries: Vec<(HashedAttestationData, TypeOneMultiSignature)>, ) { self.known_payloads.lock().unwrap().push_batch(entries); } @@ -1113,7 +1111,7 @@ impl Store { pub fn insert_new_aggregated_payload( &mut self, hashed: HashedAttestationData, - proof: AggregatedSignatureProof, + proof: TypeOneMultiSignature, ) { self.new_payloads.lock().unwrap().push(hashed, proof); } @@ -1121,7 +1119,7 @@ impl Store { /// Batch-insert proofs into the new buffer. pub fn insert_new_aggregated_payloads_batch( &mut self, - entries: Vec<(HashedAttestationData, AggregatedSignatureProof)>, + entries: Vec<(HashedAttestationData, TypeOneMultiSignature)>, ) { self.new_payloads.lock().unwrap().push_batch(entries); } @@ -1210,7 +1208,7 @@ impl Store { } } -/// Write block header, body, and signatures onto an existing batch. +/// Write block header, body, and the merged proof blob onto an existing batch. /// /// Returns the deserialized [`Block`] so callers can access fields like /// `slot` and `parent_root` without re-deserializing. @@ -1221,7 +1219,7 @@ fn write_signed_block( ) -> Block { let SignedBlock { message: block, - signature, + proof, } = signed_block; let header = block.header(); @@ -1240,10 +1238,12 @@ fn write_signed_block( .expect("put block body"); } - let sig_entries = vec![(root_bytes, signature.to_ssz())]; + // Store the merged Type-2 proof blob. Table name kept for the column-family + // migration cost; renaming to `BlockProof` is a follow-up. 
+ let proof_entries = vec![(root_bytes, proof.to_ssz())]; batch - .put_batch(Table::BlockSignatures, sig_entries) - .expect("put block signatures"); + .put_batch(Table::BlockSignatures, proof_entries) + .expect("put block proof"); block } @@ -1626,28 +1626,28 @@ mod tests { // ============ PayloadBuffer Tests ============ - fn make_proof() -> AggregatedSignatureProof { + fn make_proof() -> TypeOneMultiSignature { use ethlambda_types::attestation::AggregationBits; - AggregatedSignatureProof::empty(AggregationBits::new()) + TypeOneMultiSignature::empty(AggregationBits::new(), H256::ZERO, 0) } /// Create a proof with a specific validator bit set (distinct participants). - fn make_proof_for_validator(vid: usize) -> AggregatedSignatureProof { + fn make_proof_for_validator(vid: usize) -> TypeOneMultiSignature { use ethlambda_types::attestation::AggregationBits; let mut bits = AggregationBits::with_length(vid + 1).unwrap(); bits.set(vid, true).unwrap(); - AggregatedSignatureProof::empty(bits) + TypeOneMultiSignature::empty(bits, H256::ZERO, 0) } /// Create a proof with bits set for every validator in `vids`. - fn make_proof_for_validators(vids: &[u64]) -> AggregatedSignatureProof { + fn make_proof_for_validators(vids: &[u64]) -> TypeOneMultiSignature { use ethlambda_types::attestation::AggregationBits; let max = vids.iter().copied().max().unwrap_or(0) as usize; let mut bits = AggregationBits::with_length(max + 1).unwrap(); for &v in vids { bits.set(v as usize, true).unwrap(); } - AggregatedSignatureProof::empty(bits) + TypeOneMultiSignature::empty(bits, H256::ZERO, 0) } fn make_att_data(slot: u64) -> AttestationData {
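For reference, the subsumption policy exercised by the `PayloadBuffer` tests above can be illustrated with plain `u64` bitmasks standing in for `AggregationBits` (the bitmask representation and both helper names are assumptions for illustration, not the real ethlambda types): an incoming proof is skipped when its participants are a subset of an existing proof's (including equality), and existing strict subsets of the incoming proof are evicted before insertion.

```rust
/// `a` is a subset of `b` when every participant bit of `a` is set in `b`.
/// Stand-in for `bits_is_subset` over `AggregationBits`.
fn bits_is_subset(a: u64, b: u64) -> bool {
    a & !b == 0
}

/// Insert `incoming` into a per-message proof list, applying the same
/// subsumption policy as `PayloadBuffer::push`: skip if subsumed (incl.
/// equal), otherwise evict existing strict subsets, then append.
fn push_proof(proofs: &mut Vec<u64>, incoming: u64) {
    if proofs.iter().any(|&p| bits_is_subset(incoming, p)) {
        return; // incoming adds no participants over an existing proof
    }
    proofs.retain(|&p| !bits_is_subset(p, incoming));
    proofs.push(incoming);
}
```

This keeps the buffer free of redundant proofs, so `total_proofs` only counts participant sets that are pairwise incomparable for a given attestation message.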