EigenDA
EigenDA is a Data Availability (DA) service, implemented as an actively validated service (AVS) on EigenLayer, that provides secure and scalable DA for L2s on Ethereum.
What is DA?
In informal terms, DA is a guarantee that a given piece of data will be available to anyone who wishes to retrieve it.
A DA system accepts blobs of data (via some interface) and then makes them available to retrievers (through another interface).
Two important aspects of a DA system are
- Security: The security of a DA system constitutes the set of conditions which are sufficient to ensure that all data blobs certified by the system as available are indeed available for honest retrievers to download.
- Throughput: The throughput of a DA system is the rate at which the system is able to accept blobs of data, typically measured in bytes/second.
An EigenLayer AVS for DA
EigenDA is implemented as an actively validated service on EigenLayer, which is a restaking protocol for Ethereum.
Because of this, EigenDA makes use of the EigenLayer state, which is stored on Ethereum, for consensus about the state of operators and as a callback for consensus about the availability of data. This means that EigenDA can be simpler in implementation than many existing DA solutions: EigenDA doesn't need to build its own chain or consensus protocol; it rides on the back of Ethereum.
A first of its kind, horizontally scalable DA solution
Among extant DA solutions, EigenDA takes an approach to scalability which is unique in that it yields true horizontal scalability: Every additional unit of capacity contributed by an operator can increase the total system capacity.
This property is achieved by using a Reed-Solomon erasure encoding scheme to shard the blob data across the DA nodes. While other systems such as Celestia and Danksharding (planned) also make use of Reed-Solomon encoding, they do so only to support certain observability properties of Data Availability Sampling (DAS) by light nodes; in those systems, all incentivized/full nodes still download, store, and serve the full system bandwidth.
Horizontal scalability provides the promise for the technological bottlenecks of DA capacity to continually track demand, which has enormous implications for Layer 2 ecosystems.
Security Model
EigenDA produces a DA attestation which asserts that a given blob or collection of blobs is available. Attestations are anchored to one or more "Quorums," each of which defines a set of EigenLayer stakers which underwrite the security of the attestation. Quorums should be considered as redundant: Each quorum linked to an attestation provides an independent guarantee of availability as if the other quorums did not exist.
Each attestation is characterized by safety and liveness tolerances:
- Liveness tolerance: Conditions under which the system will produce an availability attestation.
- Safety tolerance: Conditions under which an availability attestation implies that data is indeed available.
EigenDA defines two properties of each blob attestation which relate to its liveness and safety tolerance:
- Liveness threshold: The liveness threshold defines the minimum percentage of stake which an attacker must control in order to mount a liveness attack on the system.
- Safety threshold: The safety threshold defines the total percentage of stake which an attacker must control in order to mount a first-order safety attack on the system.
The term "first-order attack" alludes to the fact that exceeding the safety threshold may represent only a contingency rather than an actual safety failure due to the presence of recovery mechanisms that would apply during such a contingency. Discussion of such mechanisms is outside of the scope of the current documentation.
Safety thresholds can translate directly into cryptoeconomic safety properties for quorums consisting of tokens which experience toxicity in the event of publicly observable attacks by a large coalition of token holders. This and other discussions of cryptoeconomic security are also beyond the scope of this technical documentation. We restrict the discussion to illustrating how the protocol preserves the given safety and liveness thresholds.
Glossary
Rollup Batcher
Sequencer rollup node component responsible for constructing user transaction batches and submitting them to the settlement chain.
Rollup Nodes
Refers to any rollup node (e.g., validator, verifier) which syncs current chain state through an onchain sequencer inbox.
EigenDA Proxy
Sidecar server run as part of the rollup and used for secure and trustless communication with EigenDA.
EigenDA Client
Refers to a collection of client modules used for securely dispersing and reading EigenDA blobs.
EigenDA Protocol
Broken down into 2 main sections.
Core Services
EigenDA Protocol consists of a suite of services that allow for data to be securely stored and retrieved from the validators.
Contracts
System Architecture
Core Components
- DA nodes are the service providers of EigenDA, storing chunks of blob data for a predefined time period and serving these chunks upon request.
- The disperser is responsible for encoding blobs, distributing them to the DA nodes, and aggregating their digital signatures into a DA attestation. As the disperser is currently centralized, it is trusted for system liveness; the disperser will be decentralized over time.
- The disperser and the DA nodes both depend on the Ethereum L1 for shared state about the DA node registration and stake delegation. The L1 is also currently used to bridge DA attestations to L2 end-user applications such as rollup chains.
Essential flows
Dispersal. This is the flow by which data is made available and consists of the following steps:
- The Disperser receives a collection of blobs, [encodes them], constructs a batch of encoded blobs and headers, and sends the sharded batch to the DA nodes.
- The DA nodes validate their shares of the batch, and return an attestation consisting of a BLS signature of the batch header.
- The disperser collects the attestations from the DA nodes and aggregates them into a single aggregate attestation.
Bridging. For a DA attestation to be consumed by the L2 end-user (e.g. a rollup), it must be bridged to a chain from which the L2 can read. This might simply be the Ethereum L1 itself, but in many cases it is more economical to bridge directly into the L2 since this drastically decreases signature verification costs. For the time being all attestations are bridged to the L1 by the disperser.
Retrieval. Interested parties such as rollup challengers that want to obtain rollup blob data can retrieve a blob by downloading the encoded chunks from the DA nodes and decoding them. The blob lookup information contained in the retrieval request sent to the DA nodes is obtained from the bridged attestation.
Protocol Overview
For expositional purposes, we will divide the protocol into two conceptual layers:
- Attestation Layer: Modules to ensure that whenever a DA attestation is accepted by an end-user (e.g. a rollup), then the data is indeed available. More specifically, the attestation layer ensures that the system observes the safety and liveness tolerances defined in the Security Model section.
- Network Layer: The communications protocol which ensures that the liveness and safety of the protocol are robust against network-level events and threats.
Attestation Layer
The attestation layer is responsible for ensuring that when the network-level assumptions and safety and liveness tolerances are observed, the system properly makes data available.
The primary responsibility of the attestation layer is to enable consensus about whether a given blob of data is fully within the custody of a set of honest nodes. (Here, what can be taken to be a set of honest nodes is defined by the system safety tolerance and the assurance that these honest nodes will be able to transmit the data to honest retrievers is handled by the network layer.) Since EigenDA is an EigenLayer AVS it does not need its own actual consensus protocol, but can instead piggy-back off of Ethereum's consensus. As a result, the attestation layer decomposes into two fairly straightforward pieces:
- Attestation Logic: The attestation logic allows us to answer the question of whether a given blob is available, given both a DA attestation and the validator state at the associated Ethereum block. The attestation logic can be understood as simply a function of these inputs which outputs yes or no, depending on whether these inputs imply that data is available. Naturally, this function is grounded upon assumptions about the behavior of honest nodes, which must perform certain validation actions as part of the attestation layer. The attestation logic further decomposes into two major modules:
- Encoding: The encoding module defines a procedure for blobs to be encoded in such a way that their successful reconstruction can be guaranteed given a large enough collection of unique encoded chunks. The procedure also allows for the chunks to be trustlessly verified against a blob commitment so that the disperser cannot violate the protocol.
- Assignment: The assignment module provides a deterministic mapping from validator state to an allocation of encoded chunks to DA nodes. The mapping is designed to uphold safety and liveness properties with minimal data-inefficiency.
- Bridging: Bridging describes how the attestation is bridged to the consumer protocol, such as that of the rollup. In principle, bridging can be performed in one of several different ways in order to optimize efficiency and composability. At the moment, only bridging via the Ethereum L1 is directly supported.
The desired behavior of the attestation logic can be formally described as follows (Ignore this if you're happy with the high level ideas): Let \(\alpha\) denote the safety threshold, i.e. the maximum proportion of adversarial stake that the system is able to tolerate. Likewise, let \(\beta\) represent the amount of stake that we require to be held by the signing operators in order to accept an attestation, i.e. one minus the liveness threshold. Also, let \(O\) denote the set of EigenDA operators.
We need to guarantee that for any set of signing operators \(U_q \subseteq O\) such that
$$ \sum_{i \in U_q} S_i \ge \beta \sum_{i \in O}S_i$$
and any set of adversarial operators \(U_a \subseteq U_q\) such that
$$ \sum_{i \in U_a} S_i \le \alpha \sum_{i \in O}S_i$$
we can reconstruct the original data blob from the chunks held by \( U_q \setminus U_a \).
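To make this concrete with the default thresholds given later in this document (\(\beta = 55\%\), \(\alpha = 33\%\)), the signers that remain after removing any tolerated adversarial set hold at least

$$ \sum_{i \in U_q \setminus U_a} S_i \ \ge\ (\beta - \alpha)\sum_{i \in O} S_i \ =\ 0.22 \sum_{i \in O} S_i, $$

so the encoding and assignment modules described next must guarantee that the chunks held by any 22% of stake are sufficient to reconstruct the blob.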
Encoding Module
The encoding module defines a procedure for blobs to be encoded in such a way that their successful reconstruction can be guaranteed given a large enough collection of unique encoded chunks. The procedure also allows for the chunks to be trustlessly verified against a blob commitment so that the disperser cannot violate the protocol.
Assignment Module
The assignment module is nothing more than a rule which takes in the Ethereum chain state and outputs an allocation of chunks to DA operators.
Signature verification and bridging
See the integration contracts section for details on how the attestation is bridged to the consumer protocol, such as that of the rollup.
Network Layer
This section is under construction.
Encoding Module
The encoding module defines a procedure for blobs to be encoded in such a way that their successful reconstruction can be guaranteed given a large enough collection of unique encoded chunks. The procedure also allows for the chunks to be trustlessly verified against a blob commitment so that the disperser cannot violate the protocol.
One way to think of the encoding module is that it must satisfy the following security requirements:
- Adversarial tolerance for DA nodes: We need to have tolerance to arbitrary adversarial behavior by any number of DA nodes up to some threshold. Note that while simple sharding approaches such as duplicating slices of the blob data have good tolerance to random node dropout, they have poor tolerance to worst-case adversarial behavior.
- Adversarial tolerance for disperser: We do not want to put trust assumptions on the encoder or rely on fraud proofs to detect if an encoding is done incorrectly.
Trustless Encoding via KZG and Reed-Solomon
EigenDA uses a combination of Reed-Solomon (RS) erasure coding and KZG polynomial commitments to perform trustless encoding. In this section, we provide a high level overview of how the EigenDA encoding module works and how it achieves these properties.
Reed Solomon Encoding
Basic RS encoding is used to achieve the first requirement of Adversarial tolerance for DA nodes. This looks like the following:
- The blob data is represented as a string of symbols, where each symbol is an element of a certain finite field. The number of symbols is called the `BlobLength`.
- These symbols are interpreted as the coefficients of a degree `BlobLength - 1` polynomial.
- This polynomial is evaluated at `NumChunks * ChunkLength` distinct indices.
- Chunks are constructed, where each chunk consists of the polynomial evaluations at `ChunkLength` distinct indices.

Notice that given any number of chunks $M$ such that $M \times \text{ChunkLength} \ge \text{BlobLength}$, via polynomial interpolation it is possible to reconstruct the original polynomial, and therefore its coefficients, which represent the original blob.
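As a small worked example (with arbitrary parameters, not protocol constants): take `BlobLength` = 8, `NumChunks` = 6, and `ChunkLength` = 4, so the blob polynomial has degree 7 and is evaluated at 24 indices. Any

$$ M \ \ge\ \left\lceil \frac{\text{BlobLength}}{\text{ChunkLength}} \right\rceil \ =\ \left\lceil \frac{8}{4} \right\rceil \ =\ 2 $$

of the 6 chunks provide at least 8 evaluations, which is enough to interpolate the degree-7 polynomial and recover the blob.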
Validation via KZG
Addressing the requirement of Adversarial tolerance for disperser using RS encoding alone would require fraud proofs: a challenger would have to download all of the encoded chunks and check that they lie on a polynomial corresponding to the blob commitment.
To avoid the need for fraud proofs, EigenDA follows the trail blazed by the Ethereum DA sharding roadmap in using KZG polynomial commitments.
Chunk Validation
Blobs sent to EigenDA are identified by their KZG commitment (which can be calculated by the disperser and easily validated by the rollup sequencer). When the disperser generates the encoded blob chunks, it also generates a collection of opening proofs which the DA nodes can use to trustlessly verify that their chunks fall on the blob polynomial at the correct indices (note: the indices are jointly derived by the disperser and DA nodes from the chain state using the logic in the Assignment module to ensure that the evaluation indices for each node are unique).
Blob Size Verification
KZG commitments can also be used to verify the degree of the original polynomial, which in turn corresponds to the size of the original blob. Having a trustlessly verifiable upper bound on the size of the blob is necessary for DA nodes to verify the correctness of the chunk assignment defined by the assignment module.
The KZG commitment relies on a structured reference string (SRS) containing a generator point $G$ multiplied by all of the powers of some secret field element $\tau$, up to some maximum power $n$. This means that it is not possible to use this SRS to commit to a polynomial of degree greater than $n$. A consequence of this is that if $p(x)$ is a polynomial of degree greater than $m$, it will not be possible to commit to the polynomial $x^{n-m}p(x)$. A "valid" commitment to the polynomial $x^{n-m}p(x)$ thus constitutes a proof that the polynomial $p(x)$ is of degree less than or equal to $m$.
In practice, this looks like the following:
- If the disperser wishes to claim that the polynomial $p(x)$ is of degree less than or equal to $m$, they must provide along with the commitment $C_1$ to $p$, a commitment $C_2$ to $q(x) = x^{n-m}p(x)$.
- The verifier then performs the pairing check $e(C_1,[x^{n-m}]_2) = e(C_2,H)$, where $H$ is the G2 generator and $[x^{n-m}]_2$ is the G2 element of the SRS corresponding to $\tau^{n-m}$. This pairing check will only pass when $C_2$ was constructed as described above and $\deg(p) \le m$.
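To see why this check works, note that $C_1 = [p(\tau)]_1$ and, if honestly constructed, $C_2 = [\tau^{n-m}p(\tau)]_1$; bilinearity of the pairing then gives

$$ e(C_1, [\tau^{n-m}]_2) \ =\ e(G, H)^{\tau^{n-m}\,p(\tau)} \ =\ e(C_2, H), $$

while a prover who does not know $\tau$ cannot produce a valid $C_2$ when $\deg(p) > m$, since committing to $x^{n-m}p(x)$ would require SRS powers beyond $\tau^n$.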
Note: The blob length verification here allows for the blob length to be upper-bounded; it cannot be used to prove the exact blob length.
Prover Optimizations
EigenDA makes use of the results of Fast Amortized Kate Proofs, developed for Ethereum's sharding roadmap, to reduce the computational complexity for proof generation.
See the full discussion
Verifier Optimizations
Without any optimizations, the KZG verification complexity can lead to a computational bottleneck for the DA nodes. Fortunately, the Universal Verification Equation developed for Danksharding data availability sampling dramatically reduces the complexity. EigenDA has implemented this optimization to eliminate this bottleneck for the DA nodes.
Amortized KZG Prover Backend
It is important that the encoding and commitment tasks are able to be performed in seconds and that the dominating complexity of the computation is nearly linear in the degree of the polynomial. This is done using algorithms based on the Fast Fourier Transform (FFT).
This document describes how the KZG-FFT encoder backend implements the `Encode(data [][]byte, params EncodingParams) (BlobCommitments, []*Chunk, error)` interface, which 1) transforms the blob into a list of `params.NumChunks` `Chunks`, where each chunk is of length `params.ChunkLength`, and 2) produces the associated polynomial commitments and proofs.

We will also highlight the additional constraints on the Encoding interface which arise from the KZG-FFT encoder backend.
Deriving the polynomial coefficients and commitment
As described in the Encoding Module Specification, given a blob of data, we convert the blob to a polynomial $p(X) = \sum_{i=0}^{m-1} c_iX^i$ by simply slicing the data into a string of symbols, and interpreting this list of symbols as the tuple $(c_i)_{i=0}^{m-1}$.
In the case of the KZG-FFT encoder, the polynomial lives on the scalar field associated with the BN254 elliptic curve, which has order 21888242871839275222246405745257275088548364400416034343698204186575808495617 (the same modulus that bounds valid field elements in the encoding section below).
Given this polynomial representation, the KZG commitment can be calculated as in KZG polynomial commitments.
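Concretely, writing the SRS points as $[\tau^i]_1 = \tau^i G$, the commitment is the multi-scalar multiplication

$$ C \ =\ \sum_{i=0}^{m-1} c_i\,[\tau^i]_1 \ =\ [p(\tau)]_1, $$

i.e. the polynomial evaluated "in the exponent" at the secret point $\tau$.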
Polynomial Evaluation with the FFT
In order to use a Discrete Fourier Transform (DFT) to evaluate a polynomial, the indices of the polynomial evaluations which will make up the Chunks must be members of a cyclic group, which we will call $S$. A cyclic group is the group generated by taking all of the integer powers of some generator $v$, i.e., $\{v^k \mid k \in \mathbb{Z}\}$ (for this reason, the elements of a cyclic group $S$ of order $|S|=m$ will sometimes be referred to as the $m$'th roots of unity). Notice that since our polynomial lives on the BN254 field, the group $S$ must be a subgroup of that field (i.e. all of its elements must lie within that field).
Given a cyclic group $S$ of order $m$, we can evaluate a polynomial $p(X)$ having $n$ coefficients at the indices contained in $S$ via the DFT,
$$ p_k = \sum_{i=0}^{n-1}c_i (v^k)^i $$
where $p_k$ gives the evaluation of the polynomial at $v^k \in S$. Letting $c$ denote the vector of polynomial coefficients and $p$ the vector of polynomial evaluations, we can use the shorthand $p = DFT[c]$. The inverse relation also holds, i.e., $c = DFT^{-1}[p]$.
To evaluate the DFT programmatically, we want $m = n$. Notice that we can achieve this when $m > n$ by simply padding $c$ with zeros to be of length $m$.
The use of the FFT can levy an additional requirement on the size of the group $S$. In our implementation, we require the size of $S$ to be a power of 2. For this, we can make use of the fact that the prime field associated with BN254 contains a subgroup of order $2^{28}$, which in turn contains subgroups of orders spanning every power of 2 less than $2^{28}$.
As the encoding interface calls for the construction of `NumChunks` chunks of length `ChunkLength`, our application requires that $S$ be of size `NumChunks * ChunkLength`, which in turn must be a power of 2.
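As a minimal illustration of this constraint (a sketch, not code from the eigenda repo), the domain size is simply rounded up to the next power of two:

```go
// nextPowerOf2 rounds n up to a power of two, as required by the radix-2 FFT
// used by the encoder; this is an illustrative helper, not the repo's implementation.
func nextPowerOf2(n uint64) uint64 {
	p := uint64(1)
	for p < n {
		p <<= 1
	}
	return p
}
```

For instance, with `NumChunks` = 4 and a requested `ChunkLength` = 3, the chunk length is rounded to `nextPowerOf2(3) = 4`, giving an evaluation domain of size 4*4 = 16, consistent with the worked example below.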
Amortized Multireveal Proof Generation with the FFT
The construction of the multireveal proofs can also be performed using a DFT (as in “Fast Amortized Kate Proofs”). Leaving the full details of this process to the referenced document, we describe here only 1) the index-assignment scheme used by the amortized multiproof generation approach and 2) the constraints that this creates for the overall encoder interface.
Given the group $S$ corresponding to the indices of the polynomial evaluations and a cyclic group $C$ which is a subgroup of $S$, the cosets of $C$ in $S$ are given by
$$ s+C = \{s+c : c \in C\} \text{ for } s \in S. $$
Each coset $s+C$ has size $|C|$, and there are $|S|/|C|$ unique and disjoint cosets.
Given a polynomial $p(X)$ and the groups $S$ and $C$, the Amortized Kate Proofs approach generates $|S|/|C|$ different KZG multi-reveal proofs, where each proof is associated with the evaluation of $p(X)$ at the indices contained in a single coset $s+C$ for $s \in S$. Because the Amortized Kate Proofs approach uses the FFT under the hood, $C$ itself must have an order which is a power of 2.
For the purposes of the KZG-FFT encoder, this means that we must choose $S$ to be of size `NumChunks * ChunkLength` and $C$ to be of size `ChunkLength`, each of which must be a power of 2.
Worked Example
As a simple illustrative example, suppose that `AssignmentCoordinator` provides the following parameters in order to meet the security requirements of a given blob:
- `ChunkLength` = 3
- `NumChunks` = 4

Supplied with these parameters, `Encoder.ParamsFromMins` will upgrade `ChunkLength` to the next highest power of 2, i.e., `ChunkLength` = 4, and leave `NumChunks` unchanged. The following figure illustrates how the indices will be assigned across the chunks in this scenario.
Assignment Module
Warning: this page describes the assignment logic for EigenDA V1. We need to update it with Blazar assignment logic which is very different.
The assignment module is essentially a rule which takes in the Ethereum chain state and outputs an allocation of chunks to DA operators. This can be generalized to a function that outputs a set of valid allocations.
A chunk assignment has the following parameters:
- Indices: the chunk indices that will be assigned to each DA node. Some DA nodes receive more than one chunk.
- ChunkLength: the length of each chunk (measured in number of symbols, as defined by the encoding module). We currently require all chunks to be of the same length, so this parameter is a scalar.
The assignment module is implemented by the AssignmentCoordinator
interface.
Assignment Logic
The standard assignment coordinator implements a very simple logic for determining the number of chunks per node and the chunk length, which we describe here.
Chunk Length
Chunk lengths must be sufficiently small that operators with a small proportion of stake will be able to receive a quantity of data commensurate with their stake share. For each operator $i$, let $S_i$ signify the amount of stake held by that operator.
We require that the chunk size $C$ satisfy
$$ C \le \text{NextPowerOf2}\left(\frac{B}{\gamma}\max\left(\frac{\min_jS_j}{\sum_jS_j}, \frac{1}{M_\text{max}} \right) \right) $$
where $\gamma = \beta-\alpha$, with $\alpha$ and $\beta$ the adversary and quorum thresholds as defined in the Overview.
This means that as long as an operator has a stake share of at least $1/M_\text{max}$, then the encoded data that they will receive will be within a factor of 2 of their share of stake. Operators with less than $1/M_\text{max}$ of stake will receive no more than a $1/M_\text{max}$ of the encoded data. $M_\text{max}$ represents the maximum number of chunks that the disperser can be required to encode per blob. This limit is included because proving costs scale somewhat super-linearly with the number of chunks.
In the future, additional constraints on chunk length may be added; for instance, the chunk length may be set in order to maintain a fixed number of chunks per blob across all system states. Currently, the protocol does not mandate a specific value for the chunk length, but will accept any chunk length in the range satisfying the above constraint. The `CalculateChunkLength` function is provided as a convenience that can be used to find a chunk length satisfying the protocol requirements.
Index Assignment
For each operator $i$, let $S_i$ signify the amount of stake held by that operator. We want the number of chunks $m_i$ assigned to operator $i$ to satisfy
$$ \frac{\gamma m_i C}{B} \ge \frac{S_i}{\sum_j S_j} $$
Let
$$ m_i = \text{ceil}\left(\frac{B S_i}{C\gamma \sum_j S_j}\right)\tag{1} $$
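As an illustration with made-up numbers (not protocol constants): for a blob of $B = 1024$ symbols, chunk length $C = 16$, and $\gamma = 0.22$, an operator holding 1% of the total stake is assigned

$$ m_i = \left\lceil \frac{1024 \cdot 0.01}{16 \cdot 0.22} \right\rceil = 3, \qquad \text{and indeed}\quad \frac{\gamma m_i C}{B} = \frac{0.22 \cdot 3 \cdot 16}{1024} \approx 0.0103 \ \ge\ 0.01 . $$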
Correctness. Let's show that for any sets $U_q$ and $U_a$ satisfying the constraints in the Consensus Layer Overview, the data held by the operators in $U_q \setminus U_a$ will constitute an entire blob. The amount of data held by these operators is given by
$$ \sum_{i \in U_q \setminus U_a} m_i C $$
We have from (1) and from the definitions of $U_q$ and $U_a$ that
$$ \sum_{i \in U_q \setminus U_a} m_i C \ge \frac{B}{\gamma}\sum_{i \in U_q \setminus U_a}\frac{S_i}{\sum_j S_j} = \frac{B}{\gamma}\frac{\sum_{i \in U_q} S_i - \sum_{i \in U_a} S_i}{\sum_jS_j} \ge B \frac{\beta-\alpha}{\gamma} = B \tag{2} $$
Since the unique data held by these operators exceeds the size of a blob, the encoding module ensures that the original blob can be reconstructed from this data.
Validation Actions
Validation with respect to assignments is performed at different layers of the protocol:
DA Nodes
When the DA node receives a `StoreChunks` request, it performs the following validation actions relative to each blob header:
- It uses `ValidateChunkLength` to validate that the `ChunkLength` for the blob satisfies the above constraints.
- It uses `GetOperatorAssignment` to calculate the chunk indices for which it is responsible, and verifies that each of the chunks that it has received lies on the polynomial at these indices (see Encoding validation actions).
This step ensures that each honest node has received the blobs for which it is accountable.
Since the DA nodes will allow a range of `ChunkLength` values, as long as they satisfy the constraints of the protocol, it is necessary for there to be consensus on the `ChunkLength` that is in use for a particular blob and quorum. For this reason, the `ChunkLength` is included in the `BlobQuorumParam` which is hashed to create the merkle root contained in the `BatchHeaderHash` signed by the DA nodes.
Rollup Smart Contract
When the rollup confirms its blob against the EigenDA batch, it checks that the `ConfirmationThreshold` for the blob is greater than the `AdversaryThreshold`. This means that if the `ChunkLength` determined by the disperser is invalid, the batch cannot be confirmed, as a sufficient number of nodes will not sign.
EigenDA Managed Contracts
This page describes EigenDA contracts that are managed by EigenDA related actors (see the exact roles). For EigenDA-related contracts that are managed by rollups, see the rollup managed contracts page.
Warning: This page is incomplete and a work in progress as we are undergoing refactors of our contracts as well as some protocol upgrades. The details will change, but the information contained here should at least help to understand the important concepts.
Middlewares Contracts
We make use of eigenlayer-middleware contracts, which are fully documented here.
EigenDA Specific Contracts
The smart contracts can be found in our repo, and the deployment addresses on different chains can be found in the Networks section of our docs.
EigenDAThreshold Registry
The EigenDAThresholdRegistry contains two sets of fundamental parameters:
```solidity
/// @notice mapping of blob version id to the params of the blob version
mapping(uint16 => VersionedBlobParams) public versionedBlobParams;

struct VersionedBlobParams {
    uint32 maxNumOperators;
    uint32 numChunks;
    uint8 codingRate;
}

/// @notice Immutable security thresholds for quorums
SecurityThresholds public defaultSecurityThresholdsV2;

struct SecurityThresholds {
    uint8 confirmationThreshold;
    uint8 adversaryThreshold;
}
```
The securityThresholds are currently immutable. Confirmation and adversary thresholds are sometimes also referred to as liveness and safety thresholds:
- Confirmation Threshold (aka liveness threshold): minimum percentage of stake which an attacker must control in order to mount a liveness attack on the system.
- Adversary Threshold (aka safety threshold): total percentage of stake which an attacker must control in order to mount a first-order safety attack on the system.
Their default values are currently set as:
```
defaultSecurityThresholdsV2 = {
    confirmationThreshold = 55,
    adversaryThreshold = 33,
}
```
A new BlobParam version is rarely introduced by the EigenDA Foundation Governance. When dispersing a blob, rollups explicitly specify the version they wish to use. Currently, only version `0` is defined, with the following parameters ([reference](https://etherscan.io/address/0xdb4c89956eEa6F606135E7d366322F2bDE609F1)):
```
versionedBlobParams[0] = {
    maxNumOperators = 3537,
    numChunks = 8192,
    codingRate = 8,
}
```
The five parameters are intricately related by this formula which is also verified onchain by the verifyBlobSecurityParams function:
$$ numChunks \cdot \left(1 - \frac{100}{\gamma \cdot codingRate}\right) \geq maxNumOperators $$
where $\gamma = confirmationThreshold - adversaryThreshold$
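Plugging in the current default values as a sanity check:

$$ 8192 \cdot \left(1 - \frac{100}{22 \cdot 8}\right) \ =\ 8192 \cdot \frac{76}{176} \ \approx\ 3537.5 \ \ge\ 3537 = maxNumOperators, $$

so the deployed parameters satisfy the constraint with essentially no slack.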
EigenDARelayRegistry
Contains the Ethereum address and DNS hostname (or IP address) of each Relay registered on the EigenDA network. `BlobCertificates` contain `relayKeys`, which can be transformed into that relay's URL by calling `relayKeyToUrl`.
EigenDADisperserRegistry
Contains the Ethereum address of each Disperser registered on the EigenDA network. The EigenDA network currently only supports a single Disperser, hosted by EigenLabs. The Disperser's URL is currently static and unchanging, and can be found in the Networks section of our docs site.
Governance Roles
TODO
EigenDA V1
The EigenDA V1 system is deprecated and in the process of being completely sunset. We recommend all users migrate to EigenDA Blazar ("V2"), which is what is described in this book.
For completeness, and for those interested in comparing the V1 and V2 systems, we leave the V1 architecture diagram below.
EigenDA Integrations
This section is meant to be read by eigenda and rollup developers who are writing or extending an integration with EigenDA. Users and developers who just want to understand how an integration works at a high level, and need to learn how to configure their own integration, should instead visit our Integrations Guides.
EigenDA V2 Integration Spec
Overview
The EigenDA V2 release documentation describes the architectural changes that allow for important network performance increases. From the point of view of rollup integrations, there are three important new features:
- Blob batches are no longer bridged to Ethereum; dispersals are now confirmed once a batch has been `CERTIFIED` (i.e., signed over by the operator set). This operation takes 10-20 seconds, providing lower confirmation latency and higher throughput for the rollup. Verification of the blobs now needs to be done by the rollup stack.
- Centralized (accounting done by disperser) payments model
- A new relay API from which to retrieve blobs (distinct from the disperser API, which is now only used to disperse blobs)
Diagrams
We will refer to the below diagrams throughout the spec.
High Level Diagram
Sequence Diagram
```mermaid
sequenceDiagram
    box Rollup Sequencer
        participant B as Batcher
        participant SP as Proxy
    end
    box EigenDA Network
        participant D as Disperser
        participant R as Relay
        participant DA as DA Nodes
    end
    box Ethereum
        participant BI as Batcher Inbox
        participant BV as EigenDABlobVerifier
    end
    box Rollup Validator
        participant VP as Proxy
        participant V as Validator
    end

    %% Blob Creation and Dispersal Flow
    B->>SP: Send payload
    Note over SP: Encode payload into blob
    alt
        SP->>D: GetBlobCommitment(blob)
        D-->>SP: blob_commitment
    else
        SP->>SP: Compute commitment locally
    end
    Note over SP: Create blob_header including payment_header
    SP->>D: DisperseBlob(blob, blob_header)
    D-->>SP: QUEUED status + blob_header_hash

    %% Parallel dispersal to Relay and DA nodes
    par Dispersal to Storage
        R->>D: Pull blob
    and Dispersal to DA nodes
        D->>DA: Send Headers
        DA->>R: Pull Chunks
        DA->>D: Signature
    end

    loop Until CERTIFIED status
        SP->>D: GetBlobStatus
        D-->>SP: status + signed_batch + blob_verification_info
    end
    SP->>BV: getNonSignerStakesAndSignature(signed_batch)
    SP->>BV: verifyBlobV2(batch_header, blob_verification_info, nonSignerStakesAndSignature)
    SP->>BI: Submit cert = (batch_header, blob_verification_info, nonSignerStakesAndSignature)

    %% Validation Flow
    V->>BI: Read cert
    V->>VP: GET /get/{cert} → cert
    activate V
    Note over VP: Extract relay_key + blob_header_hash from cert
    VP->>R: GetBlob(blob_header_hash)
    R-->>VP: Return blob
    VP->>BV: verifyBlobV2
    VP-->>V: Return validated blob
    deactivate V
```
Ultra High Resolution Diagram
APIs
Below we give a summary of the APIs relevant to understanding the EigenDA high-level diagram
Proxy
See our gorilla/mux routes for full detail, but the gist is that the proxy presents a REST endpoint based on the OP da-server spec to rollup batchers:
```
# OP
POST /put body: <preimage_bytes> → <hex_encoded_commitment>
GET /get/{hex_encoded_commitment} → <preimage_bytes>

# NITRO
Same as OP but add a `?commitment_mode=standard` query param
to both POST and GET methods.
```
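As a rough sketch of how a batcher-side client might use these routes (the proxy URL below is an assumption; adjust it to your deployment, and note this is not code from the proxy repo):

```go
package main

import (
	"bytes"
	"io"
	"net/http"
)

const proxyURL = "http://localhost:3100" // hypothetical local proxy address

// putPayload POSTs raw payload bytes and returns the hex-encoded commitment
// that the proxy hands back, to be stored in the rollup's batcher inbox.
func putPayload(payload []byte) (string, error) {
	resp, err := http.Post(proxyURL+"/put", "application/octet-stream", bytes.NewReader(payload))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	commitment, err := io.ReadAll(resp.Body)
	return string(commitment), err
}

// getPayload fetches the original payload bytes back using that commitment.
func getPayload(commitment string) ([]byte, error) {
	resp, err := http.Get(proxyURL + "/get/" + commitment)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}
```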
Disperser
The disperser presents a grpc v2 service endpoint
```bash
$ EIGENDA_DISPERSER_PREPROD=disperser-preprod-holesky.eigenda.xyz:443
$ grpcurl $EIGENDA_DISPERSER_PREPROD list disperser.v2.Disperser
disperser.v2.Disperser.DisperseBlob
disperser.v2.Disperser.GetBlobCommitment
disperser.v2.Disperser.GetBlobStatus
disperser.v2.Disperser.GetPaymentState
```
Relay
Relays similarly present a grpc service endpoint
```bash
$ EIGENDA_RELAY_PREPROD=relay-1-preprod-holesky.eigenda.xyz:443
$ grpcurl $EIGENDA_RELAY_PREPROD list relay.Relay
relay.Relay.GetBlob
relay.Relay.GetChunks
```
Contracts
Immutable Cert Verifier
The most important contract for rollup integrations is the `EigenDACertVerifier`, which presents a function to validate DACerts:
```solidity
/// @notice Check a DA cert's validity
/// @param abiEncodedCert The ABI encoded certificate. Any cert verifier should decode this ABI encoding based on the certificate version.
/// @return status An enum value. Success is always mapped to 1, and other values are errors specific to each CertVerifier.
function checkDACert(bytes calldata abiEncodedCert) external view returns (uint8 status);

/// @notice Returns the EigenDA certificate version. Used off-chain to identify how to encode a certificate for this CertVerifier.
/// @return The EigenDA certificate version.
function certVersion() external view returns (uint8);
```
Upgradable Router
`EigenDACertVerifierRouter` acts as an intermediary contract that maintains an internal mapping of `activation_block_number -> EigenDACertVerifier`. This contract can be used to enable seamless upgrades to new `EigenDACertVerifier`s and provides a way for a rollup to securely introduce custom quorums and/or modify their security thresholds.
```solidity
/// @notice Returns the address for the active cert verifier at a given reference block number.
///         The reference block number must not be in the future.
function getCertVerifierAt(uint32 referenceBlockNumber) external view returns (address);

/// @notice Check a DA cert's validity
/// @param abiEncodedCert The ABI encoded certificate. Any cert verifier should decode this ABI encoding based on the certificate version.
/// @return status An enum value. Success is always mapped to 1, and other values are errors specific to each CertVerifier.
function checkDACert(bytes calldata abiEncodedCert) external view returns (uint8 status);
```
Rollup Payload Lifecycle
How is a rollup’s payload (compressed batches of transactions or state transition diffs) encoded and made available on the EigenDA network?
```mermaid
flowchart TD
    subgraph Rollups[Rollup Domain]
        RS["Rollup Sequencer<br/>[Software System]<br/>Sequences the rollup; submits rollup payloads to EigenDA for data availability"]
        RV["Rollup Validator<br/>[Software System]<br/>Runs a derivation pipeline to validate the rollup"]
        Payload[("Rollup Payload<br/>[Data]<br/>Batches of tx data or state transition diffs")]
    end

    %% Standalone proxy
    Proxy["Proxy<br/>[Software System]<br/>Bridges domains by encoding/decoding payloads/blobs"]

    subgraph EigenDA[Data Availability Domain]
        EN["EigenDA Network<br/>[Software System]<br/>Provides decentralized data availability by storing and serving blobs"]
        Blob[("Blob<br/>[Data]<br/>Rollup payload encoded into bn254 field element array")]
        Cert[("DA Certificate<br/>[Data]<br/>Proof of Data Availability. Used to retrieve and validate blobs.")]
        ETH["Ethereum<br/>[Software System]<br/>Stores EigenDA network properties like operator stakes, etc. Also validates DA Certs."]
    end

    %% Sequencer Flow
    RS -->|"(1) Creates"| Payload
    Payload -->|"(2) Sent to"| Proxy
    Proxy -->|"(3) Encodes into"| Blob
    Blob -->|"(4) Dispersed across"| EN
    EN -->|"(5) Verifies signatures according to stakes stored on"| ETH
    EN -->|"(6) Returns cert"| Proxy
    Proxy -->|"(7) Submits"| Cert
    Cert -->|"(8) Posted to"| ETH

    %% Validator Flow
    RV -->|"(9) Reads certificates"| ETH
    RV -->|"(10) Retrieve Compressed Batch from Certificate"| Proxy

    %% Styling
    classDef system fill:#1168bd,stroke:#0b4884,color:white
    classDef container fill:#23a,stroke:#178,color:white
    classDef data fill:#f9f,stroke:#c6c,color:black
    classDef red fill:#916,stroke:#714,color:white
    class RS,RV,EN,ETH,S1,Proxy system
    class Rollups,EigenDA container
    class Batch,Blob,Cert,D1 data
```
At a high level, a rollup sequencer needs to make its `payload` available for download by validators of its network. The EigenDA network makes use of cryptographic concepts such as KZG commitments as fundamental building blocks. Because of this, it can only work with eigenda `blobs` (hereafter referred to simply as `blobs`; see the technical definition below) of data. The EigenDA proxy is used to bridge the rollup domain (which deals with payloads) and the EigenDA domain (which deals with blobs).
As an example, an op-stack Ethereum rollup's `payload` is a compressed batch of txs (called a frame). This frame gets sent to Ethereum to be made available either as a simple tx, or as a 4844 `blob` (using a blob tx). Using EigenDA instead of Ethereum for data availability works similarly: the payload is encoded into an eigenda `blob` and dispersed to the EigenDA network via an EigenDA disperser. The disperser eventually returns a `DACert` containing signatures of EigenDA operators certifying the availability of the data, which is then posted to Ethereum as the `input` field of a normal tx. Note that because the rollup settles on Ethereum, Ethereum DA is still needed, but only to make the `DACert` available, which is much smaller than the `blob` itself.
- `Payload`: piece of data that an EigenDA client (rollup, avs, etc.) wants to make available. This is typically compressed batches of transactions or state transition diffs.
- `EncodedPayload`: payload encoded into a list of bn254 field elements (each 32 bytes), typically with a prefixed field element containing the payload length in bytes, such that the payload can be decoded.
- `PayloadPolynomial`: encodedPayload padded with 0s to the next power of 2 (if needed) and interpreted either as evaluations (`PolyEval`) or coefficients (`PolyCoeff`) of a polynomial. Because the EigenDA network interprets blobs as coefficients, a `PolyEval` will need to be IFFT'd into a `PolyCoeff` before being dispersed.
- `(EigenDA) Blob`: array of bn254 field elements whose length is a power of two. Interpreted by the network as coefficients of a polynomial. Equivalent to `PolyCoeff`.
- `Blob Header`: contains the information necessary to uniquely identify a BlobDispersal request.
- `Blob Certificate`: signed BlobHeader along with relayKeys, which uniquely identify a relay service for DA Nodes to retrieve chunks from and clients to retrieve full blobs from.
- `Batch`: batch of blobs whose blob certs are aggregated into a merkle tree and dispersed together for better network efficiency.
- `DA Certificate` (or `DACert`): contains the information necessary to retrieve and verify a blob from the EigenDA network, along with a proof of availability.
- `AltDACommitment`: RLP-serialized `DACert` prepended with rollup-specific header bytes. This commitment is what gets sent to the rollup's batcher inbox.
- `EigenDACertVerifier`: contains one main important function checkDACert which is used to verify `DACert`s.
- `EigenDACertVerifierRouter`: contains a router mapping of activation block number to `EigenDACertVerifier`, and allows for securely and deterministically upgrading CertVerification constants (security thresholds and custom quorums) over time.
- `EigenDAThresholdRegistry`: contains signature related thresholds and blob→chunks encoding related parameters.
- `EigenDARelayRegistry`: contains an Ethereum address and DNS hostname (or IP address) for each registered Relay.
- `EigenDADisperserRegistry`: contains an Ethereum address for each registered Disperser.
- Sequencer:
  - `Encoding`: Payload → Blob
  - `BlobHeader Construction`: Blob → BlobHeader
  - `Dispersal`: (Blob, BlobHeader) → Certificate
  - Certificate+Blob `Validation`
    - Unhappy path: `Failover` to EthDA
  - Certificate+Blob `Posting`: Certificate → Ethereum tx
- Validator (exact reverse of sequencer):
  - `Reading`: Ethereum tx → Certificate
  - `Retrieval`: Certificate → Blob
  - Certificate+Blob `Validation`
  - Certificate+Blob `Decoding`: Blob → Payload
Data Structs
The diagram below represents the transformation from a rollup `payload` to the different structs that are allowed to be dispersed.
Payload
A client `payload` is whatever piece of data the EigenDA client wants to make available. For optimistic rollups this would be compressed batches of txs (frames). For (most) zk-rollups this would be compressed state transitions. For AVSs it could be proofs, pictures, or any arbitrary data.

A `payload` must fit inside an EigenDA blob to be dispersed. See the allowed blob sizes in the Blob section.
EncodedPayload
An `encodedPayload` is the bn254 encoding of the `payload`. This is an intermediary processing step, but it is useful to give it a name. The encoding must respect the same constraints as those on the blob:
Every 32 bytes of data is interpreted as an integer in big endian format. Each such integer must stay in the valid range to be interpreted as a field element on the bn254 curve. The valid range is 0 <= x < 21888242871839275222246405745257275088548364400416034343698204186575808495617.
The golang payload clients provided in the eigenda repo currently only support encoding version 0x0, which encodes as follows:
```
[0x00, version_byte, big-endian uint32 len(payload), 0x00, 0x00,...] +
[0x00, payload[0:31], 0x00, payload[32:63],...,
 0x00, payload[n:len(payload)], 0x00, ..., 0x00]
```

where the last chunk is padded with 0s such that the total length is a multiple of 32 bytes.
For example, the payload `hello` would be encoded as

```
[0x00, 0x00, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00,...] +
[0x00, 'h', 'e', 'l', 'l', 'o', 0x00 * 26]
```
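A minimal sketch of this encoding in Go (the function name is illustrative and not taken from the eigenda client libraries):

```go
package codec

import "encoding/binary"

// encodePayloadV0 implements encoding version 0x0 as described above:
// a 32-byte header followed by the payload split into 31-byte pieces,
// each prefixed with 0x00 so every 32-byte word is a valid bn254 field element.
func encodePayloadV0(payload []byte) []byte {
	header := make([]byte, 32)
	header[1] = 0x00 // version byte
	binary.BigEndian.PutUint32(header[2:6], uint32(len(payload)))

	encoded := header
	for i := 0; i < len(payload); i += 31 {
		end := i + 31
		if end > len(payload) {
			end = len(payload)
		}
		word := make([]byte, 32)
		copy(word[1:], payload[i:end]) // trailing bytes of the last word stay zero
		encoded = append(encoded, word...)
	}
	return encoded
}
```

Running this on the payload `hello` reproduces the two 32-byte words shown above.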
PayloadPolynomial
EigenDA uses KZG commitments, which represent a commitment to a function. Abstractly speaking, we thus need to represent the encodedPayload as a polynomial. We have two choices: either treat the data as the coefficients of a polynomial, or as evaluations of a polynomial. In order to convert between these two representations, we make use of FFTs, which require the data length to be a power of 2. Thus, `PolyEval` and `PolyCoeff` are defined as an `encodedPayload` padded with 0s to the next power of 2 (if needed) and interpreted as desired.
Once an interpretation of the data has been chosen, one can convert between them as follows:
```
PolyCoeff --FFT--> PolyEval
PolyCoeff <--IFFT-- PolyEval
```
Whereas Ethereum treats 4844 blobs as evaluations of a polynomial, EigenDA instead interprets EigenDA blobs as coefficients of a polynomial. Thus, only `PolyCoeff`s can be submitted as a `blob` to the Disperser. Each rollup integration must thus decide whether to interpret their `encodedPayload`s as `PolyCoeff`, which can directly be dispersed, or as `PolyEval`, which will require IFFT'ing into a `PolyCoeff` before being dispersed.
Typically, optimistic rollups will interpret the data as being evaluations. This allows creating point opening proofs to reveal a single field element (32 byte chunk) at a time, which is needed for interactive fraud proofs (e.g. see how optimism fraud proves 4844 blobs). ZK rollups, on the flip side, don't require point opening proofs and thus can safely save on the extra IFFT compute costs and instead interpret their data as coefficients directly.
Blob
A `blob` is an array of bn254 field elements whose length is a power of 2. It is interpreted by the EigenDA network as containing the coefficients of a polynomial (unlike Ethereum, which treats blobs as being evaluations of a polynomial).
An `encodedPayload` can thus be transformed into a `blob` by padding it with 0s to a power of 2, with size currently limited to 16MiB. There is no minimum size, but any blob smaller than 128KiB will be charged for 128KiB.
BlobHeader
The `blobHeader` is submitted alongside the `blob` as part of the `DisperseBlob` request, and the hash of its ABI encoding (`blobKey`, also known as `blobHeaderHash`) serves as a unique identifier for a blob dispersal. This identifier is used to retrieve the blob.

The `BlobHeader` contains four main sections that must be constructed. It is passed into the `DisperseBlobRequest` and is signed over for payment authorization.
Refer to the eigenda protobufs for full details of this struct.
Version
The `blobHeader` version refers to one of the `versionedBlobParams` structs defined in the `EigenDAThresholdRegistry` contract.
QuorumNumbers
`QuorumNumbers` represents a list of quorums required to sign and make the blob available. Quorum 0 represents the ETH quorum, quorum 1 represents the EIGEN quorum; both are always required. Custom quorums can also be added to this list.
BlobCommitment
The `BlobCommitment` is a binding commitment to an EigenDA Blob. Due to the length field, a `BlobCommitment` uniquely represents a single `Blob`. The length field is added to the kzgCommitment to respect the binding property. It is used by the disperser to prove to EigenDA validators that the chunks they received belong to the original blob (or its Reed-Solomon extension). This commitment can either be computed locally by the EigenDA Client from the blob, or generated by the disperser via the `GetBlobCommitment` endpoint.
```protobuf
message BlobCommitment {
  // A G1 commitment to the blob data.
  bytes commitment = 1;
  // A G2 commitment to the blob data.
  bytes length_commitment = 2;
  // Used with length_commitment to assert the correctness of the `length` field below.
  bytes length_proof = 3;
  // Length in bn254 field elements (32 bytes) of the blob. Must be a power of 2.
  uint32 length = 4;
}
```
Unlike Ethereum blobs, which are all 128KiB, EigenDA blobs can currently be any power-of-2 length between 32KiB and 16MiB, and so the `commitment` alone is not sufficient to prevent certain attacks:
- Why is a commitment to the length of the blob necessary?

  There are different variants of the attack. The basic invariant the system needs to satisfy is that, given the chunks from a sufficient set of validators, you can get back the full blob. So the total size of the chunks held by these validators needs to exceed the blob size. If the blob size (or at least an upper bound on it) is unknown, there's no way for the system to validate this invariant. Here's a simple example. Assume a network of 8 DA nodes and a coding ratio of 1/2. For a `blob` containing 128 field elements (FEs), each node gets 128*2/8 = 32 FEs, meaning that any 4 nodes can join forces and reconstruct the data. Now assume a world without a length proof; a malicious disperser receives the same blob, uses the same commitment, but claims that the blob only had length 4 FEs. He sends each node 4*2/8 = 1 FE. The chunks submitted to the nodes match the commitment, so the nodes accept and sign over the blob's batch. But now there are only 8 FEs in the system, which is not enough to reconstruct the original blob (at least 128 are needed for that).
Note that the length here is the length of the blob (a power of 2), which is different from the payload_length encoded as part of the `PayloadHeader` in the `blob` itself (see the encoding section).
PaymentHeader
The paymentHeader specifies how the blob dispersal to the network will be paid for. There are two modes of payment: the permissionless pay-per-blob model and the permissioned reserved-bandwidth approach. See the Payments release doc for full details; we will only describe how to set these 3 fields here.
```protobuf
message PaymentHeader {
  // The account ID of the disperser client. This should be a hex-encoded string of the ECDSA public key
  // corresponding to the key used by the client to sign the BlobHeader.
  string account_id = 1;
  // UNIX timestamp in nanoseconds at the time of the dispersal request.
  // Used to determine the reservation period, for the reserved-bandwidth payment model.
  int64 timestamp = 2;
  // Total amount of tokens paid by the requesting account, including the current request.
  // Used for the pay-per-blob payment model.
  bytes cumulative_payment = 3;
}
```
Users who want to pay per blob need to set the cumulative_payment. `timestamp` is used by users who have paid for reserved bandwidth. If both are set, reserved bandwidth will be used first, and the cumulative_payment will only be used once the entire bandwidth for the current reservation period has been used up.
NOTE: There will be a lot of subtleties added to this logic with the new separate-payment-per-quorum model that is actively being worked on.
An RPC call to the Disperser's `GetPaymentState` method can be made to query the current state of an `account_id`. A client can query this information on startup, cache it, and then update it manually when making dispersals. In this way, it can keep track of its reserved bandwidth usage and current cumulative_payment and set them correctly for subsequent dispersals.
EigenDA Certificate (`DACert`)
An `EigenDA Certificate` (or `DACert` for short) contains all the information needed to retrieve a blob from the EigenDA network, as well as to validate it.
A `DACert` contains the four data structs needed to call checkDACert on the EigenDACertVerifier.sol contract. Please refer to the eigenda core spec for more details, but in short, the `BlobCertificate` is included as a leaf inside the merkle tree identified by the `batch_root` in the `BatchHeader`. The `BlobInclusionInfo` contains the information needed to prove this merkle tree inclusion. The `NonSignerStakesAndSignature` contains the aggregated BLS signature `sigma` of the EigenDA validators; `sigma` is a signature over the `BatchHeader`. The `signedQuorumNumbers` contains the quorum IDs that DA nodes signed over for the blob.
AltDACommitment
In order to be understood by each rollup stack's derivation pipeline, the encoded `DACert` must be prepended with header bytes, to turn it into an `altda-commitment` specific to each stack:
- op prepends 3 bytes: `version_byte`, `commitment_type`, `da_layer_byte`
- nitro prepends 1 byte: `version_byte`
NOTE: In the future we plan to support a custom encoding byte which allows a user to specify different encoding formats for the `DACert` (e.g., RLP, ABI).
Rollup Managed Contracts
This page describes contracts that are managed by rollups, but are needed to secure the EigenDA integration. For EigenDA-managed core contracts, see the core contracts page.
EigenDACertVerifier
This contract's main use case is exposing a function checkDACert which is used to verify `DACert`s. This function's logic is described in the Cert Validation section.
The contract also exposes a `certVersion` method which is called by the payload disperser client to know which cert version to build in order to be verifiable by that contract.
EigenDACertVerifierRouter
This contract primarily facilitates secure upgrades of EigenDACertVerifier contracts while enabling custom quorum and threshold configurations in a format that maintains cross-version compatibility. This is done through maintaining a stateful mapping:
```solidity
/// @notice A mapping from an activation block number (ABN) to a cert verifier address.
mapping(uint32 => address) public certVerifiers;

/// @notice The list of Activation Block Numbers (ABNs) for the cert verifiers.
/// @dev The list is guaranteed to be in ascending order
///      and corresponds to the keys of the certVerifiers mapping.
uint32[] public certVerifierABNs;
```
where each key refers to an `activation_block_number` (ABN). When calling `checkDACert`, the reference block number is decoded from the `DACert` bytes and is used to find the unique CertVerifier active at that RBN (a reverse linear search over the `certVerifierABNs` is performed). Once found, the `EigenDACertVerifier` at the particular ABN is used for calling `checkDACert` to verify the DA Cert.
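The lookup rule can be sketched in Go as follows (illustrative only; the canonical logic lives in the `EigenDACertVerifierRouter` Solidity contract):

```go
package router

import "errors"

// getCertVerifierAt mirrors the router's rule described above: the cert verifier
// whose activation block number (ABN) is the largest one not exceeding the
// certificate's reference block number (RBN) is the one used for verification.
// certVerifierABNs is assumed to be sorted in ascending order, as the contract guarantees.
func getCertVerifierAt(certVerifierABNs []uint32, certVerifiers map[uint32]string, rbn uint32) (string, error) {
	for i := len(certVerifierABNs) - 1; i >= 0; i-- {
		if certVerifierABNs[i] <= rbn {
			return certVerifiers[certVerifierABNs[i]], nil
		}
	}
	return "", errors.New("no cert verifier active at the given reference block number")
}
```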
The `EigenDACertVerifierRouter` enables the use of a certificate's Reference Block Number (RBN) as a commitment to the specific `EigenDACertVerifier` that should be used for verification. This mechanism ensures backward compatibility with older DA Certs, allowing an optimistic rollup to continue verifying historical data availability proofs accurately across verifier upgrades.
Lifecycle Phases
Secure interaction between a rollup and EigenDA is composed of three distinct system flows:
- Dispersal: Submitting payload data to the DA network
- Retrieval: Fetching payload data from the DA network
- Verification: Ensuring the integrity and quorum-based certification of data availability. Where and how verification is performed is often contingent on how an integration is implemented; e.g.:
  - Pessimistic Verification, where a `DACert` is checked as a pre-inclusion check for a sequencer inbox
  - Optimistic Verification, where a `DACert` is only verified in a worst-case challenge
Secure Dispersal
Diagram
System Flow
- EigenDA Client takes raw payload bytes and converts them into a blob.
- Using the `latest_block_number` (lbn) fetched from an ETH RPC node, the EigenDA Client calls the router to get the `EigenDACertVerifier` contract address most likely (if using `EigenDACertVerifierRouter`) to be committed to by the `reference_block_number` (rbn) returned by the EigenDA disperser.
- Using the `verifier`, the EigenDA Client fetches the `required_quorums` and embeds them into the `BlobHeader` as part of the disperser request.
- The EigenDA Client submits the payload blob request to the EigenDA disperser via the `DisperseBlob` endpoint and polls for a `BlobStatusReply` (BSR).
- While querying the disperser's `GetBlobStatus` endpoint, the EigenDA Client periodically checks against the confirmation threshold as it's updated in real-time by the disperser, using the rbn returned in the `BlobStatusReply` for fetching thresholds. (ref)
- Once confirmation thresholds are fulfilled, the EigenDA Client calls the `verifier`'s `certVersion()` method to get the `cert_version` and casts the `DACert` into a structured ABI binding type, using the `cert_version` to dictate which certificate representation to use. (ref)
- The EigenDA Client then passes the ABI encoded cert bytes via a call to the `verifier`'s `checkDACert` function, which performs the onchain cert verification logic and returns a uint `verification_status_code`.
- Using the `verification_status_code`, the EigenDA Client determines whether to return the certificate (`CertV2Lib.StatusCode.SUCCESS`) to the Rollup Batcher or retry with a subsequent dispersal attempt.
Payload to Blob Encoding
This phase occurs inside the eigenda-proxy, because the proxy acts as the “bridge” between the Rollup Domain and Data Availability Domain (see lifecycle diagram).
A `payload` consists of an arbitrary byte array. The DisperseBlob endpoint accepts a `blob`, which needs to be an encoded bn254 field element array.
Disperser polling
The `DisperseBlob` method takes a `blob` and a `blob_header` as input. Under the hood, the disperser performs the following steps:
- Batching: The blob is aggregated into a Merkle tree along with other blobs.
- Reed-Solomon Encoding: The blob is erasure-coded into chunks for fault tolerance.
- Dispersal to Validators: The chunks are distributed to EigenDA validator nodes based on the required quorum settings.
- Signature Collection: The disperser collects BLS signatures from participating validators.
- Status Reporting: A `BlobStatusReply` is returned to the client to reflect progress or terminal status.
The disperser batches blobs for a few seconds before dispersing them to nodes, so an entire dispersal process can exceed 10 seconds. For this reason, the API has been designed asynchronously with 2 relevant methods.
```protobuf
// Async call which queues up the blob for processing and immediately returns.
rpc DisperseBlob(DisperseBlobRequest) returns (DisperseBlobReply) {}
// Polled for the blob status updates, until a terminal status is received
rpc GetBlobStatus(BlobStatusRequest) returns (BlobStatusReply) {}

// Intermediate states: QUEUED, ENCODED, GATHERING_SIGNATURES
// Terminal states: UNKNOWN, COMPLETE, FAILED
enum BlobStatus {
  UNKNOWN = 0;              // functionally equivalent to FAILED but for unknown unknown bugs
  QUEUED = 1;               // Initial state after a DisperseBlob call returns
  ENCODED = 2;              // Reed-Solomon encoded into chunks ready to be dispersed to DA Nodes
  GATHERING_SIGNATURES = 3; // blob chunks are actively being transmitted to validators
  COMPLETE = 4;             // blob has been dispersed and attested by DA nodes
  FAILED = 5;
}
```
After a successful DisperseBlob RPC call, the disperser returns BlobStatus.QUEUED. To retrieve a valid BlobStatusReply, the GetBlobStatus RPC endpoint should be polled until a terminal status is reached.
If BlobStatus.GATHERING_SIGNATURES is returned, the signed_batch and blob_verification_info fields will be present in the BlobStatusReply. These can be used to construct a DACert, which may be verified immediately against the configured threshold parameters stored in the EigenDACertVerifier contract. If the verification passes, the certificate can be accepted early. If verification fails, polling should continue.
Once BlobStatus.COMPLETE is returned, it indicates that the disperser has stopped collecting additional signatures, typically due to reaching a timeout or encountering an issue. While the signed_batch and blob_verification_info fields will be populated and can be used to construct a DACert, the DACert could still be invalid if an insufficient amount of signatures was collected with respect to the threshold parameters.
Any other terminal status indicates failure, and a new blob dispersal will need to be made.
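A minimal polling-loop sketch of the above, with early acceptance during GATHERING_SIGNATURES. The disperser_client and verifier bindings, status strings, and helper names are illustrative assumptions; the real client APIs may differ.

# Hypothetical sketch of GetBlobStatus polling; names and bindings are illustrative.
import time

TERMINAL = {"UNKNOWN", "COMPLETE", "FAILED"}

def poll_for_cert(blob_key, disperser_client, verifier, poll_interval_secs=2):
    while True:
        reply = disperser_client.get_blob_status(blob_key)
        if reply.status in ("GATHERING_SIGNATURES", "COMPLETE"):
            # Construct the DACert from the reply fields (see the BlobStatusReply -> Cert section).
            cert = build_da_cert(reply)
            if verifier.check_da_cert(abi_encode(cert)) == SUCCESS:
                return cert                                 # accepted early or at COMPLETE
        if reply.status in TERMINAL:
            raise DispersalFailed(reply.status)             # retry with a new dispersal
        time.sleep(poll_interval_secs)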
Failover to Native Rollup DA
Proxy can be configured to retry BlobStatus.UNKNOWN, BlobStatus.FAILED, and BlobStatus.COMPLETE (if the threshold check failed) dispersals n times, after which it returns a 503 HTTP status code to the rollup, which rollup batchers can use to fail over to EthDA or native rollup DA offerings (e.g., Arbitrum AnyTrust). See here for more info on the OP implementation and here for Arbitrum.
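A small illustrative sketch of the batcher-side reaction to that 503. The proxy URL, write endpoint, and fallback helper are assumptions for illustration, not the actual batcher code.

# Illustrative only: batcher-side handling of the proxy's 503 failover signal.
import requests

PROXY_URL = "http://localhost:3100"        # hypothetical eigenda-proxy endpoint

def post_batch(batch: bytes) -> bytes:
    resp = requests.post(f"{PROXY_URL}/put", data=batch)   # hypothetical write route
    if resp.status_code == 503:
        # EigenDA dispersal kept failing after the proxy's configured retries;
        # fall back to EthDA / the rollup's native DA (e.g. Arbitrum AnyTrust).
        return post_to_native_da(batch)    # hypothetical fallback helper
    resp.raise_for_status()
    return resp.content                    # DA commitment bytes to post to the inbox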
BlobStatusReply → Cert
Implementation Note: While not mandated by the EigenDA spec, clients must currently reconstruct the DACert from fields in the BlobStatusReply, as the disperser does not return a cert directly. The transformation is visualized in the Ultra High Res Diagram.
In the updated implementation, a CertBuilder constructs the DA Cert through direct communication with the OperatorStateRetriever contract, which provides the necessary information about operator stake states. This approach ensures accurate onchain data for certificate verification. The following pseudocode demonstrates this process:
class DACert:
    batch_header: any
    blob_verification_proof: any
    nonsigner_stake_sigs: any
    cert_version: uint8
    signed_quorum_numbers: bytes

def get_da_cert(blob_header_hash, operator_state_retriever, cert_version_uint8) -> DACert:
    """
    DA Cert construction pseudocode with OperatorStateRetriever
    @param blob_header_hash: key used for referencing blob status from disperser
    @param operator_state_retriever: ABI contract binding for retrieving operator state data
    @param cert_version_uint8: uint8 version of the certificate format to use
    @return DACert: EigenDA certificate used by rollup
    """
    # Call the disperser for the info needed to construct the cert
    blob_status_reply = disperser_client.get_blob_status(blob_header_hash)

    # Validate the blob_header received, since it uniquely identifies
    # an EigenDA dispersal.
    blob_header_hash_from_reply = blob_status_reply.blob_verification_info.blob_certificate.blob_header.Hash()
    if blob_header_hash != blob_header_hash_from_reply:
        raise ValueError("blob header hash mismatch")

    # Extract the first 2 cert fields from the blob status reply
    batch_header = blob_status_reply.signed_batch.batch_header
    blob_verification_proof = blob_status_reply.blob_verification_info

    # Get the reference block number from the batch header
    reference_block_number = batch_header.reference_block_number

    # Get quorum IDs from the blob header
    quorum_numbers = blob_verification_proof.blob_certificate.blob_header.quorum_numbers

    # Retrieve operator state data directly from the OperatorStateRetriever contract
    operator_states = operator_state_retriever.getOperatorState(
        reference_block_number,
        quorum_numbers,
        blob_status_reply.signed_batch.signatures,
    )

    # Construct NonSignerStakesAndSignature using the operator state data
    nonsigner_stake_sigs = construct_nonsigner_stakes_and_signature(
        operator_states,
        blob_status_reply.signed_batch.signatures,
    )

    signed_quorum_numbers = blob_status_reply.signed_batch.quorum_numbers

    return DACert(batch_header, blob_verification_proof, nonsigner_stake_sigs, cert_version_uint8, signed_quorum_numbers)
Secure Retrieval
System Diagram
System Flow
- A Rollup Node queries Proxy's /get endpoint to fetch the batch contents associated with an encoded DA commitment.
- Proxy decodes the cert_version from the DA commitment and uses an internal mapping of cert_version ⇒ cert_abi_struct to deserialize it into the structured binding cert type.
- Proxy submits the ABI-encoded cert bytes to the EigenDACertVerifier via a read call to the checkDACert method, which returns a verification_status_code.
- Proxy interprets the verification_status_code to decide how to acknowledge the certificate's validity. If the verification fails, Proxy returns an HTTP 418 I'm a teapot status code, indicating to a secure rollup that it should disregard the certificate and treat it as an empty batch in its derivation pipeline.
- Assuming a valid certificate, Proxy queries the EigenDA retrieval paths for the underlying blob contents.
- Once fetched, Proxy verifies the blob's KZG commitment to ensure tamper resistance (i.e., confirming that what's returned from EigenDA matches what was committed to during dispersal).
- Proxy decodes the underlying blob into a payload type, which is returned to the Rollup Node. A simplified sketch of this flow follows the list.
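A minimal pseudocode sketch of the proxy's /get handling. The helper names (decode_commitment, abi_struct_for, retrieve_blob, kzg_commit, decode_blob_to_payload) are hypothetical and do not correspond to the actual eigenda-proxy code.

# Illustrative sketch of the proxy read path; helpers are hypothetical.
def handle_get(commitment: bytes) -> bytes:
    cert_version, cert_bytes = decode_commitment(commitment)
    cert = abi_struct_for(cert_version).decode(cert_bytes)   # cert_version => cert_abi_struct

    if verifier.check_da_cert(cert_bytes) != SUCCESS:
        return http_response(status=418)                      # rollup treats cert as an empty batch

    blob = retrieve_blob(cert)                                # relay or validator retrieval path
    assert kzg_commit(blob) == cert.blob_commitment           # tamper-resistance check
    return decode_blob_to_payload(blob)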
Retrieval Paths
There are two main blob retrieval paths:
- Decentralized retrieval: retrieve erasure-coded chunks from Validators and recreate the blob from them.
- Centralized retrieval: the same Relay API that Validators use to download chunks can also be used to retrieve full blobs.

EigenDA V2 has a new Relay API for retrieving blobs from the disperser. The GetBlob method takes a blob_key as input, which is a synonym for blob_header_hash. Note that BlobCertificate (different from DACert!) contains an array of relay_keys, which are the relays that can serve that specific blob. A relay's URL can be retrieved from the relayKeyToUrl function on the EigenDARelayRegistry.sol contract.
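A hedged sketch of centralized retrieval through a relay. The relay_registry contract binding and relay_client helper below are assumptions for illustration, not taken from the actual client libraries.

# Illustrative sketch: fetch a full blob from one of the relays listed in the BlobCertificate.
def fetch_blob_via_relay(da_cert) -> bytes:
    blob_cert = da_cert.blob_verification_proof.blob_certificate
    blob_key = blob_cert.blob_header.Hash()                  # blob_key == blob_header_hash
    for relay_key in blob_cert.relay_keys:
        # Look up the relay URL via relayKeyToUrl on EigenDARelayRegistry.sol (hypothetical binding).
        url = relay_registry.functions.relayKeyToUrl(relay_key).call()
        try:
            return relay_client(url).get_blob(blob_key)      # Relay API GetBlob
        except RelayUnavailable:
            continue                                         # try the next relay
    raise AllRelaysFailed(blob_key)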
Decoding
Decoding performs the exact reverse of the operations performed during Encoding.
Secure Integration
This page is meant to be read by EigenDA and rollup developers who are writing a secure integration and need to understand the details. For users who just want a high-level understanding of what a secure integration is, please visit our secure integration overview page instead.
Validity Conditions
EigenDA is a service that assures the availability and integrity of payloads posted to it for 14 days.
When deriving a rollup chain by running its derivation pipeline, only EigenDA DACerts that satisfy the three validity conditions are considered valid and used:
- RBN Recency Validation - ensure that the DA Cert's reference block number (RBN) is not too old with respect to the L1 block at which the cert was included in the rollup's batcher-inbox. This ensures that the blob on EigenDA has sufficient availability time left (out of the 14 day period) in order to be downloadable if needed during a rollup fault proof window.
- Cert Validation - ensures sufficient operator stake has signed to make the blob available, for all specified quorums. The stake is obtained onchain at a given reference block number (RBN) specified inside the cert.
- Blob Validation - ensures that the blob used is consistent with the KZG commitment inside the Cert.
If #1 or #2 fails, then the DA Cert is treated as invalid and MUST be discarded from the rollup's derivation pipeline.
1. RBN Recency Validation
This check is related to time guarantees. It is important for both optimistic and zk rollup validators to have sufficient time to download the blob from EigenDA once a cert lands in the batcher inbox.
We will use fault proofs as our base example to reason about the necessity of the recency check.
Looking at the timing diagram above, we need the EigenDA availability period to overlap the ~7-day challenge period. To uphold this guarantee, rollups' derivation pipelines simply need to reject certs whose DA availability period started too long ago. However, from the cert itself there is no way to know when the cert was signed and made available. The only information available on the cert is cert.RBN, the reference block number chosen by the disperser at which to anchor operator stakes. Since the RBN is set before validators sign, it is sufficient to bound how far the RBN can be from the cert's L1 inclusion block.
Rollups must thus enforce that
certL1InclusionBlock - cert.RBN <= RecencyWindowSize
This has a second security implication. A malicious EigenDA disperser could have chosen a reference block number (RBN) that is very old, where the stake of operators was very different from the current one, due to operators withdrawing stake for example.
To give a concrete example with a rollup stack, Optimism has a sequencerWindow which forces batches to land onchain in a timely fashion (12h). This filtering, however, happens in the BatchQueue stage of the derivation pipeline (DP), and doesn't prevent the DP from being stalled in the L1Retrieval stage by an old cert having been submitted whose blob is no longer available on EigenDA. To prevent this, we need the recencyWindow filtering to happen during the L1Retrieval stage of the DP.
Despite their semantics being slightly different, sequencerWindow and recencyWindow are related concepts, and in order not to force another config change on OP Alt-DA forks, we suggest using the same value as the SequencerWindowSize for the RecencyWindowSize, namely 12h.
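As a sketch, the recency rule applied in the derivation pipeline's L1Retrieval stage can be expressed as follows; variable names are illustrative.

# Illustrative recency check applied when a cert is pulled from the batcher inbox.
def is_cert_recent(cert_l1_inclusion_block: int, cert_rbn: int, recency_window_size: int) -> bool:
    # e.g. recency_window_size set to the same value as SequencerWindowSize (~12h of L1 blocks)
    return cert_l1_inclusion_block - cert_rbn <= recency_window_size

# Certs failing this check are discarded, i.e. treated as if no batch was posted.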
2. Cert Validation
Cert validation is done inside the EigenDACertVerifier contract, which EigenDA deploys as-is, but is also available for rollups to modify and deploy on their own. Specifically, checkDACert is the entry point for validation. This could either be called during a normal eth transaction (either for pessimistic “bridging” like EigenDA V1 used to do, or when uploading a Blob Field Element to a one-step-proof’s preimage contract), or be zk proven using a library like Steel.
The checkDACert function accepts an ABI-encoded []byte certificate as input. This design allows the underlying DACert structure to evolve across versions, enabling seamless upgrades without requiring changes to the EigenDACertVerifierRouter interface.
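A hedged example of invoking checkDACert as an eth_call from an offchain client using web3.py. The RPC URL, verifier address, ABI fragment, and the uint8 return type are assumptions made for illustration based on the description above.

# Illustrative only: read-only call to the verifier's checkDACert with ABI-encoded cert bytes.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://example-eth-rpc"))                # hypothetical RPC endpoint
VERIFIER_ADDRESS = "0x0000000000000000000000000000000000000000"        # placeholder verifier address

# Minimal ABI fragment assumed from the description above (bytes in, status code out).
CHECK_DA_CERT_ABI = [{
    "name": "checkDACert",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "certBytes", "type": "bytes"}],
    "outputs": [{"name": "status", "type": "uint8"}],
}]

verifier = w3.eth.contract(address=VERIFIER_ADDRESS, abi=CHECK_DA_CERT_ABI)

def verify_cert(abi_encoded_cert: bytes) -> int:
    # Returns the verification_status_code; SUCCESS means the cert passed the onchain checks.
    return verifier.functions.checkDACert(abi_encoded_cert).call()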
The cert verification logic consists of:
- verify the blob batch merkle inclusion proof
- verify sigma (the operators' BLS signature) over the batchRoot using the NonSignerStakesAndSignature struct
- verify the blob security params (blob_params + security thresholds)
- verify that each quorum listed in the blob_header has met its threshold
3. Blob Validation
There are different required validation steps, depending on whether the client is retrieving or dispersing a blob.
Retrieval (whether data is coming from relays or directly from DA nodes) requires the following checks (a sketch follows this list):
- Verify that the received blob length is ≤ the length in the cert's BlobCommitment.
- Verify that the blob length claimed in the BlobCommitment is greater than 0.
- Verify that the blob length claimed in the BlobCommitment is a power of two.
- Verify that the payload length claimed in the encoded payload header is ≤ the maximum permissible payload length, as calculated from the length in the BlobCommitment.
  - The maximum permissible payload length is computed by looking at the claimed blob length and determining how many bytes would remain if you were to remove the encoding performed when converting a payload into an encodedPayload. This gives an upper bound for the payload length: e.g. "if the payload were any bigger than X, then converting it to an encodedPayload would have yielded a blob larger than claimed".
- If the bytes received for the blob are longer than necessary to convey the payload, as determined by the claimed payload length, verify that all extra bytes are 0x0.
  - Due to how blob padding works, there may be trailing 0x0 bytes, but there shouldn't be any trailing bytes that aren't equal to 0x0.
- Verify the KZG commitment. This can be done either:
  - directly: by recomputing the commitment using SRS points and checking that the two commitments match (the currently implemented approach), or
  - indirectly: by verifying a point opening using Fiat-Shamir (see this issue).
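A hedged sketch of the retrieval-side length and padding checks. The cert field names, the assumption that the committed length is in field elements, and the codec helpers are illustrative, not the actual client implementation; the KZG commitment check is omitted for brevity.

# Illustrative retrieval-side blob validation (KZG commitment check omitted).
def validate_retrieved_blob(blob: bytes, cert) -> None:
    claimed_len = cert.blob_commitment.length               # assumed to be in field elements

    assert len(blob) // 32 <= claimed_len                   # received blob <= committed length
    assert claimed_len > 0
    assert claimed_len & (claimed_len - 1) == 0              # power of two

    payload_len = parse_payload_length(blob)                 # from the encoded payload header
    assert payload_len <= max_payload_len_for(claimed_len)   # bound implied by the blob length

    used = encoded_payload_size(payload_len)                  # bytes actually needed for the payload
    assert all(b == 0 for b in blob[used:])                   # any trailing bytes must be 0x0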
Dispersal:
- If the BlobCertificate was generated using the disperser's GetBlobCommitment RPC endpoint, verify its contents:
  - verify the KZG commitment
  - verify that length matches the expected value, based on the blob that was actually sent
  - verify the lengthProof using the length and lengthCommitment
- After dispersal, verify that the BlobKey actually dispersed by the disperser matches the locally computed BlobKey.

Note: The verification steps in point 1 for dispersal are not currently implemented. This route only makes sense for clients that want to avoid holding large amounts of SRS data, but KZG commitment verification via Fiat-Shamir is required to perform the verification without that data. Until the alternate verification method is implemented, usage of GetBlobCommitment places a correctness trust assumption on the disperser generating the commitment.
Upgradable Quorums and Thresholds for Optimistic Verification
The EigenDACertVerifierRouter contract enables secure upgrades to a rollup's required quorums and thresholds without compromising the integrity of previously submitted state commitments. It achieves this by routing certificate verification to the appropriate EigenDACertVerifier instance based on the reference_block_number embedded in the cert, which dictates the verifier whose activation block was in effect at that time. This ensures backward compatibility, allowing older DACerts to be validated against the verifier version that was active at the time of their creation.
The router is typically deployed behind an upgradable admin proxy and should use the same ProxyAdmin multisig as the rollup for consistent and secure access control.
Adding New Verifiers — Synchronization Risk
There is a synchronization risk that can temporarily cause dispersals to fail when adding a new verifier' to the EigenDACertVerifierRouter at a future activation block number (abn'). If latest_block < abn' and rbn >= abn', dispersals may fail if the required_quorums set differs between verifier and verifier'. In this case, the quorums included in the client's BlobHeader (based on the old verifier) would not match those expected by checkDACert (using the new verifier). This mismatch results in at most a few failed dispersals, which resolve once latest_block >= abn' and reference_block_number >= abn', ensuring verifier consistency. The EigenDA integrations team will explore mitigations in the future.
Rollup Stack Secure Integrations
| | Nitro V1 | OP V1 (insecure) | Nitro V2 | OP V2 |
|---|---|---|---|---|
| Cert Verification | SequencerInbox | x | one-step proof | one-step proof: done in preimage oracle contract when uploading a blob field element |
| Blob Verification | one-step proof | x | one-step proof | one-step proof |
| Timing Verification | SequencerInbox | x | SequencerInbox | one-step proof (?) |
Rollup Stacks
OP Stack
Links:
Arbitrum Nitro
Our up-to-date Nitro docs are available at docs.eigenda.xyz.
We maintain fork diffs for the different nitro repos that we fork:
ZKsync ZK Stack
ZKSync-era currently supports and maintains a validium mode, which means we don't need to fork ZKSync, unlike the other stacks.
The zksync eigenda client is implemented here. It makes use of our eigenda-client-rs repo.
EigenDA OP Secure integration
This document presents an overview of how EigenDA plugs into the Optimism (OP) Stack. It covers:
- the write and read path in an L2 rollup
- why the read path must stay live (even with a misbehaving op-batcher)
- adding an EigenDA stage to the OP derivation pipeline
- Hokulea, a Rust library that defines and implements the EigenDA derivation rules
- how Hokulea works in both interactive fault-proof VMs and zkVMs
Write and Read path in L2 consensus
A rollup system can be split into two parts: a write path to L1 and a read path from L1.

| Path | Direction | Purpose | Main actor |
|---|---|---|---|
| Write | L2 → L1 | Low-cost L2 block production with user transactions | op-batcher + EigenDA proxy |
| Write | Direct on L1 | Censorship resistance + deposits | Rollup users + Optimism Portal |
| Read | L1 → L2 | Safety – all nodes see the same block list | OP derivation pipeline |
- The write path ensures the liveness of the L2 consensus. It consists of L2 batches produced by the op-batcher and L1 deposit transactions.
- The read path controls the safety of the L2 consensus. It ensures that all L2 consensus nodes see an identical list of L2 batches and L1 system and deposit transactions, so that the EVM can produce identical L2 state.
If the read path stalls, honest nodes can’t reach the block height needed to dispute a bad state root.
L2 Write path (happy-flow)
- op-batcher bundles user txs.
- The op-batcher sends compressed batches to the EigenDA proxy, which converts them into an EigenDA blob. The proxy sends the blob to EigenDA and forwards the returned certificate to the op-batcher.
- EigenDA certificates are posted to the L1 Rollup Inbox.
L2 Read path
The read path from L1 determines L2 consensus. OP has defined a derivation pipeline in the OP spec. Both op-program in Golang and Kona in Rust implement the derivation pipeline. As shown in the diagram above, the derivation pipeline consists of stages that transform L1 transactions into Payload Attributes, which are L2 blocks.
To support a secure integration, we have defined and inserted an EigenDA section in the OP derivation pipeline. In the diagram above, we have sketched where and which rules EigenDA inserts into the OP derivation pipeline. Both the EigenDA proxy and Hokulea implement the EigenDA blob derivation.
L2 Read path with EigenDA
As shown in the diagram, op-nodes use the read path of the eigenda-proxy to fetch EigenDA blobs. The proxy:
- checks that the certificate has sufficient stake and is valid
- checks that the certificate is not stale
- retrieves the blob from EigenDA and verifies its KZG commitment
- decodes the blob and passes the data onward
More information can be found on the secure integration page. The key properties which the EigenDA derivation strives to maintain are:
- Determinism – one unique blob per DA certificate.
- Liveness – discard anything that could halt the chain.
Both the eigenda-proxy and Hokulea uphold these properties.
Proving correctness on L1
The security of a rollup is determined by whether there are provable ways to challenge an incorrect L2 state posted on L1. In this section, we discuss our OP secure integration library, Hokulea.
Short intro to OP FPVM
The correctness of an L2 state is determined by the derivation rules, which are implemented in both the Go op-program and the Rust Kona.
With interactive fault proofs, the derivation logic is packaged into an ELF binary, which can be run inside an FPVM (Cannon, Asterisc, etc.).
The FPVM requires both the ELF binary and data (L2 batches and L1 deposits) to be able to rerun the derivation pipeline. The idea is to repeat what the op-node has done to reach consensus, except that in the FPVM every execution step is traceable and challengeable.
Data is provided to the FPVM in the form of a preimage oracle. The OP spec defines rules such that all data in the preimage oracle is verifiable on L1.
Hokulea
Hokulea uses traits exposed by the Kona derivation pipeline to integrate EigenDA as a Data Availability source. Hokulea provides traits and implementations for the EigenDA part of the derivation pipeline, such that this logic can be compiled into an ELF together with Kona.
Hokulea also extends the preimage oracle for EigenDA, providing a verifiable interface for answering:
- whether a DA cert is correct
- what the current recency window is, to determine whether a cert is stale
More information about the communication spec can be found in Hokulea. Together, the preimage oracle extension and the derivation logic allow for:
- deterministically deriving a rollup payload from an EigenDA certificate
- discarding DA Certs that can stall the derivation pipeline
Canoe
We developed a Rust library called Canoe that uses zk validity proofs to efficiently verify cert validity on L1 or inside a zkVM.
Hokulea integration with zkVM
Unlike an interactive challenge game with fault proofs, a zk proof has the property that only an honest party can create a valid proof with respect to the correct derivation rules. Hence, a malicious party can raise a challenge but is unable to defend its position.
- The Hokulea+Kona derivation logic is compiled into an ELF for the required environment (a RISC-V zkVM or one of the FPVMs)
- The Hokulea+Kona preimage oracle is prepared, where the validity of the DA cert is provided by Canoe
- The zkVM takes the preimage, verifies it, and then feeds the data into the ELF containing the derivation logic
- The zkVM produces a proof of the execution
Hokulea is currently integrating with OP-succinct and OP-Kailua. For an integration guide, please refer to the preloader example for zk integration.
Rust Kzg Bn254 library
The constraint also requires verifying every EigenDA blob against the KZG commitment in the DA cert. We developed a library similar to c-kzg, called rust-kzg-bn254, which offers functionality similar to the 4844 spec.