Turbine deep dive + Alpenglow overview
P2P Networking Collaborators
Raúl Kripalani
P2P networking, Ethereum Foundation
Turbine Deep Dive
Goals and priors
High-level concepts
TVU stages
Tree-based topology
Routing rules
Transport and connectivity
FEC
Gulf Stream
CRDS
Takeaways and Ideas
Turbine goals and priors
Context
Chain timing
Consensus
Tx ordering
Mempool
Problem framing: how to quickly disseminate large amounts of block data to thousands of validators globally with minimal latency.
High-level concepts
TVU stages
Anza’s Agave validator pipelines heavily revolve around the notion of stages and phases. TVU (Transaction Validation Unit) stages:
1. Shred Fetch Stage (Turbine): receives shreds from the network via Turbine. Listens for incoming shreds propagated by other validators (parents in the Turbine tree, or directly from the leader) and buffers them for processing.
2. SigVerify Shreds Stage: verifies shred signatures and prepares them for further processing. Deduplicates incoming shreds, verifies the original leader's signature on each shred, verifies the signature of the immediate retransmitter node (if applicable), and re-signs shreds that the validator will itself retransmit. Forwards verified shreds to the Retransmit Stage and the Window Stage.
3. Retransmit Stage (Turbine): propagates shreds further down the Turbine tree. Deduplicates shreds again with a more nuanced filter, determines the next set of child validators using ClusterNodes and AddrCache (stake-weighted, deterministic tree logic), and forwards the verified, re-signed shreds to those children (UDP/XDP/QUIC).
4. Window Stage (block reconstruction & replay): reconstructs blocks, verifies PoH, and replays transactions. Inserts verified shreds into the local Blockstore, issues repair requests to peers for missing shreds (or reconstructs them using erasure codes), reconstructs PoH entries from complete sets of shreds, verifies the Proof of History sequence, and replays the transactions from the verified entries against the local bank state to confirm that the leader's execution was correct.
5. Voting Stage: participates in consensus by voting on validated forks. Once a block (or a fork derived from it) is validated through replay, the validator casts a vote. Votes are part of Solana's Tower BFT consensus mechanism and contribute to finality; they are themselves transactions and are gossiped to the network.
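To make the stage-based design concrete, here is a minimal sketch (not the Agave code; the struct, channel wiring, and omitted verification logic are illustrative) of stages connected by channels, mirroring the fetch → sigverify → retransmit/window flow above:

```rust
// Illustrative sketch of a stage-based pipeline: each stage runs on its own
// thread and hands work to the next stage over a channel.
use std::sync::mpsc;
use std::thread;

#[derive(Debug, Clone)]
struct Shred {
    slot: u64,
    index: u32,
    payload: Vec<u8>,
}

fn main() {
    let (fetch_tx, fetch_rx) = mpsc::channel::<Shred>();
    let (verified_tx, verified_rx) = mpsc::channel::<Shred>();

    // SigVerify stage: dedup + verify signatures, then forward (verification elided here).
    let sigverify = thread::spawn(move || {
        for shred in fetch_rx {
            // ... verify leader + retransmitter signatures, re-sign for retransmission ...
            verified_tx.send(shred).unwrap();
        }
    });

    // Retransmit/Window stage: in Agave the verified shreds fan out to both stages;
    // here a single consumer keeps the sketch short.
    let downstream = thread::spawn(move || {
        for shred in verified_rx {
            println!(
                "retransmit + store shred {}:{} ({} bytes)",
                shred.slot, shred.index, shred.payload.len()
            );
        }
    });

    // Shred Fetch stage: normally reads packets from the Turbine sockets.
    for index in 0..3 {
        fetch_tx.send(Shred { slot: 42, index, payload: vec![0u8; 1228] }).unwrap();
    }
    drop(fetch_tx); // closing the channel lets the downstream stages drain and exit

    sigverify.join().unwrap();
    downstream.join().unwrap();
}
```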
Tree-based deterministic topology
Permissioned: participation in Turbine is restricted to active validators only.
Every shred S that the leader L intends to transmit at slot N receives its own propagation tree, with a fanout of 200 (DATA_PLANE_FANOUT).
The shred’s tree is calculated deterministically by performing a stake-weighted shuffle of the validator set, seeded with the shred ID (slot, index, type) and the leader’s pubkey as the source of pseudo-randomness (see the sketch after this list).
Every validator derives the same routing table for the shred in question.
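A minimal sketch of what such a deterministic, stake-weighted shuffle can look like. Agave seeds a ChaCha-based RNG from the shred ID and the leader's pubkey; the tiny splitmix64 PRNG, the seed derivation, and the names below are self-contained stand-ins, not the actual implementation:

```rust
// Deterministic stake-weighted shuffle: same inputs -> same ordering on every node.

/// splitmix64: small deterministic PRNG, good enough for a sketch.
fn splitmix64(state: &mut u64) -> u64 {
    *state = state.wrapping_add(0x9E3779B97F4A7C15);
    let mut z = *state;
    z = (z ^ (z >> 30)).wrapping_mul(0xBF58476D1CE4E5B9);
    z = (z ^ (z >> 27)).wrapping_mul(0x94D049BB133111EB);
    z ^ (z >> 31)
}

/// Derive a seed from the shred ID (slot, index, type) and the leader's pubkey bytes.
fn seed(slot: u64, index: u32, shred_type: u8, leader_pubkey: &[u8; 32]) -> u64 {
    let mut state = slot ^ ((index as u64) << 32) ^ ((shred_type as u64) << 16);
    let mut out = 0u64;
    for chunk in leader_pubkey.chunks(8) {
        let mut buf = [0u8; 8];
        buf[..chunk.len()].copy_from_slice(chunk);
        state ^= u64::from_le_bytes(buf);
        out = splitmix64(&mut state);
    }
    out
}

/// Repeatedly sample a validator with probability proportional to its stake,
/// without replacement (stakes assumed non-zero).
fn stake_weighted_shuffle(mut validators: Vec<(String, u64)>, mut rng_state: u64) -> Vec<String> {
    let mut order = Vec::with_capacity(validators.len());
    while !validators.is_empty() {
        let total: u64 = validators.iter().map(|&(_, stake)| stake).sum();
        let mut pick = splitmix64(&mut rng_state) % total;
        let idx = validators
            .iter()
            .position(|&(_, stake)| {
                if pick < stake { true } else { pick -= stake; false }
            })
            .expect("pick is always within total stake");
        order.push(validators.swap_remove(idx).0);
    }
    order
}

fn main() {
    let validators = vec![
        ("validator-a".to_string(), 700u64),
        ("validator-b".to_string(), 200),
        ("validator-c".to_string(), 100),
    ];
    let s = seed(123, 7, 0, &[1u8; 32]);
    // The first `fanout` nodes in the ordering are the leader's children; each of
    // them serves the next layer, and so on (DATA_PLANE_FANOUT = 200 in Agave).
    println!("{:?}", stake_weighted_shuffle(validators, s));
}
```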
Routing rules
Propagation:
Processes:
Transport and connectivity
Transports:
Remarks:
Some insightful data from the Alpenglow paper:
With a bandwidth of 1 Gb/s, transmitting n = 1,500 shreds takes 18 ms (well below the average network delay of about 80 ms). To get to 80% of the total stake we need to reach n ≈ 150 nodes, which takes only about 2 ms. [suggests high stake concentration]
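Back-of-the-envelope check of those figures, assuming roughly MTU-sized shreds of ~1,500 bytes: 1,500 shreds × 1,500 B × 8 bit/B = 18 Mbit, which takes 18 ms to push out at 1 Gb/s; 150 shred transmissions are 150 × 1,500 B × 8 ≈ 1.8 Mbit, i.e. roughly 2 ms at the same rate.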
FEC
Distinction between data shreds and coding shreds at the protocol level.
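Data shreds carry the actual block payload, while coding shreds are Reed-Solomon parity computed over an FEC set (32 data + 32 coding shreds per set, if I recall the current Agave defaults correctly), so receiving any num_data shreds of a set is enough to recover all of it. The sketch below models only that recovery condition; the set sizing and names are assumptions, and the erasure-coding math itself is omitted:

```rust
// Conceptual sketch of an FEC set: with an MDS code like Reed-Solomon, any
// `num_data` shreds out of the set suffice to reconstruct every data shred.

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum ShredKind {
    Data,
    Coding,
}

#[derive(Debug)]
struct FecSet {
    num_data: usize,                 // e.g. 32 data shreds
    num_coding: usize,               // e.g. 32 coding (parity) shreds
    received: Vec<(ShredKind, u32)>, // (kind, index) of shreds received so far
}

impl FecSet {
    fn new(num_data: usize, num_coding: usize) -> Self {
        Self { num_data, num_coding, received: Vec::new() }
    }

    fn insert(&mut self, kind: ShredKind, index: u32) {
        let limit = match kind {
            ShredKind::Data => self.num_data,
            ShredKind::Coding => self.num_coding,
        } as u32;
        assert!(index < limit, "shred index out of range for this FEC set");
        if !self.received.contains(&(kind, index)) {
            self.received.push((kind, index));
        }
    }

    /// True once enough shreds (of either kind) have arrived to reconstruct
    /// every data shred via Reed-Solomon decoding.
    fn can_recover(&self) -> bool {
        self.received.len() >= self.num_data
    }

    /// How many more shreds are still needed; the Window Stage would issue
    /// repair requests if these never arrive on their own.
    fn missing(&self) -> usize {
        self.num_data.saturating_sub(self.received.len())
    }
}

fn main() {
    let mut set = FecSet::new(32, 32);
    // Suppose 20 data shreds and 12 coding shreds arrived via the tree:
    for i in 0..20 {
        set.insert(ShredKind::Data, i);
    }
    for i in 0..12 {
        set.insert(ShredKind::Coding, i);
    }
    println!("recoverable: {} (missing {})", set.can_recover(), set.missing());
}
```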
Gulf Stream (tx forwarding)
Key properties:
Protocol: QUIC streams.
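A rough sketch of the forwarding logic: the sender consults the leader schedule, resolves the upcoming leaders' addresses (learned via gossip/CRDS), and pushes the transaction to them over QUIC. The QUIC client is abstracted behind a trait here, and the names plus the "next 2 leaders" choice are illustrative assumptions rather than protocol constants:

```rust
// Gulf Stream-style forwarding sketch: no global mempool; transactions go
// straight to the next few leaders according to the leader schedule.
use std::collections::HashMap;
use std::net::SocketAddr;

type Pubkey = [u8; 32];

struct LeaderSchedule {
    // slot -> leader identity for the current epoch
    slots: HashMap<u64, Pubkey>,
}

impl LeaderSchedule {
    fn upcoming_leaders(&self, current_slot: u64, count: u64) -> Vec<Pubkey> {
        (current_slot + 1..=current_slot + count)
            .filter_map(|slot| self.slots.get(&slot).copied())
            .collect()
    }
}

/// Stand-in for a QUIC client (e.g. one unidirectional stream per transaction).
trait QuicSender {
    fn send(&self, peer: SocketAddr, payload: &[u8]);
}

fn forward_transaction(
    tx_bytes: &[u8],
    current_slot: u64,
    schedule: &LeaderSchedule,
    contact_info: &HashMap<Pubkey, SocketAddr>, // addresses learned via gossip/CRDS
    quic: &impl QuicSender,
) {
    for leader in schedule.upcoming_leaders(current_slot, 2) {
        if let Some(addr) = contact_info.get(&leader) {
            quic.send(*addr, tx_bytes);
        }
    }
}

struct LoggingSender;
impl QuicSender for LoggingSender {
    fn send(&self, peer: SocketAddr, payload: &[u8]) {
        println!("would send {} bytes to {}", payload.len(), peer);
    }
}

fn main() {
    let leader: Pubkey = [7; 32];
    let schedule = LeaderSchedule { slots: HashMap::from([(101, leader)]) };
    let contacts: HashMap<Pubkey, SocketAddr> =
        HashMap::from([(leader, "127.0.0.1:8009".parse().unwrap())]); // arbitrary port
    forward_transaction(b"tx", 100, &schedule, &contacts, &LoggingSender);
}
```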
CRDS (Cluster Replicated Data Store)
A gossip mechanism that maintains an eventually consistent, shared view of the members of the “Solana cluster” through timestamped, self-certified data entries called CrdsValues.
Key data propagated:
CRDS does not appear to be permissioned, unlike Turbine.
CRDS (Cluster Replicated Data Store)
Two propagation mechanisms: push (actively gossiping new/updated values to a set of peers) and pull (periodically requesting values a node is missing, advertising what it already holds via bloom filters).
Data integrity and updates: All CrdsValue entries are signed by the originator. Wallclock timestamps are used to determine the latest version of an entry (ensuring eventual consistency).
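A minimal sketch of that rule (signed entries keyed by originator and label, newest wallclock wins); signature verification is stubbed out and the type/field names are illustrative, not the Agave definitions:

```rust
// CRDS-style conflict resolution: every value is signed by its originator and
// carries a wallclock timestamp; a newer wallclock for the same (origin, label)
// replaces the older entry.
use std::collections::HashMap;

type Pubkey = [u8; 32];

#[derive(Debug, Clone)]
struct CrdsValue {
    origin: Pubkey,     // who created (and signed) this entry
    label: String,      // e.g. "ContactInfo", "Vote", "EpochSlots"
    wallclock_ms: u64,  // originator's wallclock; newest wins
    signature: Vec<u8>, // over the serialized contents
    data: Vec<u8>,
}

fn verify_signature(_value: &CrdsValue) -> bool {
    true // stub: a real node checks the ed25519 signature against `origin`
}

#[derive(Default)]
struct CrdsTable {
    entries: HashMap<(Pubkey, String), CrdsValue>,
}

impl CrdsTable {
    /// Insert the value if it is valid and newer than what we already hold.
    /// Returns true if the table changed (i.e. the value is worth re-gossiping).
    fn upsert(&mut self, value: CrdsValue) -> bool {
        if !verify_signature(&value) {
            return false;
        }
        let key = (value.origin, value.label.clone());
        match self.entries.get(&key) {
            Some(existing) if existing.wallclock_ms >= value.wallclock_ms => false,
            _ => {
                self.entries.insert(key, value);
                true
            }
        }
    }
}

fn main() {
    let mut table = CrdsTable::default();
    let v1 = CrdsValue {
        origin: [1; 32],
        label: "ContactInfo".into(),
        wallclock_ms: 1_000,
        signature: vec![],
        data: vec![],
    };
    let mut v2 = v1.clone();
    v2.wallclock_ms = 2_000;
    println!("{} {}", table.upsert(v1), table.upsert(v2)); // true true: newer wins
}
```

The boolean return mirrors the idea that only values which actually updated the table need to be propagated further via push.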
Interesting takeaways and ideas
Alpenglow
High-level notes
Top-level goals:
Two key changes at the networking layer:
Rotor
Replaces Turbine for block dissemination.
Slices:
2-hop broadcast via relay layer:
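As a toy illustration of the 2-hop pattern: the leader erasure-codes a slice into shreds, sends each shred to one relay (sampled stake-weighted per slice in the paper), and each relay rebroadcasts its shred to every other node. The relay selection below is deliberately simplified and the names are illustrative:

```rust
// Rotor sketch: hop 1 is leader -> relay (one relay per shred of a slice),
// hop 2 is relay -> every other node.

#[derive(Debug, Clone)]
struct Node {
    name: String,
    stake: u64,
}

/// Hop 1: pick one relay per shred. Placeholder: take the highest-stake nodes;
/// the paper samples relays stake-weighted and per slice so the role rotates.
fn assign_relays<'a>(nodes: &'a [Node], num_shreds: usize) -> Vec<&'a Node> {
    let mut sorted: Vec<&Node> = nodes.iter().collect();
    sorted.sort_by(|a, b| b.stake.cmp(&a.stake));
    sorted.into_iter().cycle().take(num_shreds).collect()
}

/// Hop 2: each relay forwards its shred to all other nodes.
fn broadcast(relay: &Node, shred_index: usize, nodes: &[Node]) {
    for node in nodes.iter().filter(|n| n.name != relay.name) {
        println!("{} forwards shred {} to {}", relay.name, shred_index, node.name);
    }
}

fn main() {
    let nodes = vec![
        Node { name: "a".into(), stake: 500 },
        Node { name: "b".into(), stake: 300 },
        Node { name: "c".into(), stake: 200 },
    ];
    let relays = assign_relays(&nodes, 4); // e.g. 4 shreds in this slice
    for (i, relay) in relays.iter().enumerate() {
        // Leader sends shred i to its relay (hop 1) ...
        println!("leader sends shred {} to {}", i, relay.name);
        // ... which then rebroadcasts it to everyone else (hop 2).
        broadcast(relay, i, &nodes);
    }
}
```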
CRDS
Node information updates:
Vote and certificate dissemination:
Thank you!