EigenDA

EigenDA is a Data Availability (DA) service, implemented as an actively validated service (AVS) on EigenLayer, that provides secure and scalable DA for L2s on Ethereum.

What is DA?

In informal terms, DA is a guarantee that a given piece of data will be available to anyone who wishes to retrieve it.

A DA system accepts blobs of data (via some interface) and then makes them available to retrievers (through another interface).

Two important aspects of a DA system are

  1. Security: The security of a DA system constitutes the set of conditions which are sufficient to ensure that all data blobs certified by the system as available are indeed available for honest retrievers to download.
  2. Throughput: The throughput of a DA system is the rate at which the system is able to accept blobs of data, typically measured in bytes/second.

An EigenLayer AVS for DA

EigenDA is implemented as an actively validated service on EigenLayer, which is a restaking protocol for Ethereum.

Because of this, EigenDA makes use of the EigenLayer state, which is stored on Ethereum, for consensus about the state of operators and as a callback for consensus about the availability of data. This means that EigenDA can be simpler in implementation than many existing DA solutions: EigenDA doesn't need to build its own chain or consensus protocol; it rides on the back of Ethereum.

A first of its kind, horizontally scalable DA solution

Among extant DA solutions, EigenDA takes an approach to scalability which is unique in that it yields true horizontal scalability: Every additional unit of capacity contributed by an operator can increase the total system capacity.

This property is achieved by using a Reed-Solomon erasure encoding scheme to shard the blob data across the DA nodes. While other systems such as Celestia and Danksharding (planned) also make use of Reed-Solomon encoding, they do so only to support certain observability properties of Data Availability Sampling (DAS) by light nodes; in those systems, all incentivized/full nodes still download, store, and serve the full system bandwidth.

Horizontal scalability holds the promise that DA capacity can continually track demand, which has enormous implications for Layer 2 ecosystems.

Security Model

EigenDA produces a DA attestation which asserts that a given blob or collection of blobs is available. Attestations are anchored to one or more "Quorums," each of which defines a set of EigenLayer stakers which underwrite the security of the attestation. Quorums should be considered as redundant: Each quorum linked to an attestation provides an independent guarantee of availability as if the other quorums did not exist.

Each attestation is characterized by safety and liveness tolerances:

  • Liveness tolerance: Conditions under which the system will produce an availability attestation.
  • Safety tolerance: Conditions under which an availability attestation implies that data is indeed available.

EigenDA defines two properties of each blob attestation which relate to its liveness and safety tolerance:

  • Liveness threshold: The liveness threshold defines the minimum percentage of stake which an attacker must control in order to mount a liveness attack on the system.
  • Safety threshold: The safety threshold defines the total percentage of stake which an attacker must control in order to mount a first-order safety attack on the system.

The term "first-order attack" alludes to the fact that exceeding the safety threshold may represent only a contingency rather than an actual safety failure due to the presence of recovery mechanisms that would apply during such a contingency. Discussion of such mechanisms is outside of the scope of the current documentation.

Safety thresholds can translate directly into cryptoeconomic safety properties for quorums consisting of tokens which experience toxicity in the event of publicly observable attacks by a large coalition of token holders. This and other discussions of cryptoeconomic security are also beyond the scope of this technical documentation. We restrict the discussion to illustrating how the protocol preserves the given safety and liveness thresholds.

EigenDA Protocol

The protocol documentation is broken down into two main sections: core services and contracts.

Core Services

The EigenDA protocol consists of a suite of services that allow data to be securely stored on and retrieved from the validators.

System Architecture

[Figure: EigenDA system architecture]

Core Components

  • DA nodes are the service providers of EigenDA, storing chunks of blob data for a predefined time period and serving these chunks upon request.
  • The disperser is responsible for encoding blobs, distributing them to the DA nodes, and aggregating their digital signatures into a DA attestation. As the disperser is currently centralized, it is trusted for system liveness; the disperser will be decentralized over time.
  • The disperser and the DA nodes both depend on the Ethereum L1 for shared state about the DA node registration and stake delegation. The L1 is also currently used to bridge DA attestations to L2 end-user applications such as rollup chains.

Essential flows

Dispersal. This is the flow by which data is made available; it consists of the following steps:

  1. The disperser receives a collection of blobs, encodes them, constructs a batch of encoded blobs and headers, and sends the sharded batch to the DA nodes.
  2. The DA nodes validate their shares of the batch, and return an attestation consisting of a BLS signature of the batch header.
  3. The disperser collects the attestations from the DA nodes and aggregates them into a single aggregate attestation.
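
The following Go sketch mirrors these three steps with hypothetical types; the real wire formats are defined in the disperser and node protobufs documented later on this page, and real aggregation is a BLS group operation rather than the raw collection shown here.

```go
// Schematic of the dispersal flow. All type and method names here are
// illustrative, not EigenDA's actual API.
package dispersal

type EncodedChunk []byte

// BatchHeader carries the fields that DA nodes sign over.
type BatchHeader struct {
	BatchRoot            []byte // merkle root over the blob headers in the batch
	ReferenceBlockNumber uint64 // L1 block anchoring the operator/stake state
}

// Node abstracts a DA node: it validates its shard of the batch and, if the
// chunks verify against the blob commitments, signs the batch header.
type Node interface {
	StoreChunks(header BatchHeader, shard []EncodedChunk) (blsSig []byte, err error)
}

// Disperse sends each node its shard (step 1), collects per-node BLS
// signatures (step 2), and returns them for aggregation (step 3).
func Disperse(nodes []Node, header BatchHeader, shards [][]EncodedChunk) [][]byte {
	var sigs [][]byte
	for i, n := range nodes {
		sig, err := n.StoreChunks(header, shards[i])
		if err != nil {
			continue // stragglers are tolerated; a quorum of stake suffices
		}
		sigs = append(sigs, sig)
	}
	// Aggregation of sigs into a single BLS signature is elided here; in BLS
	// it amounts to summing the signature group elements.
	return sigs
}
```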

Bridging. For a DA attestation to be consumed by the L2 end-user (e.g. a rollup), it must be bridged to a chain from which the L2 can read. This might simply be the Ethereum L1 itself, but in many cases it is more economical to bridge directly into the L2 since this drastically decreases signature verification costs. For the time being all attestations are bridged to the L1 by the disperser.

Retrieval. Interested parties such as rollup challengers that want to obtain rollup blob data can retrieve a blob by downloading the encoded chunks from the DA nodes and decoding them. The blob lookup information contained in the request to the DA nodes is obtained from the bridged attestation.

Protocol Overview

For expositional purposes, we will divide the protocol into two conceptual layers:

  • Attestation Layer: Modules to ensure that whenever a DA attestation is accepted by an end-user (e.g. a rollup), then the data is indeed available. More specifically, the attestation layer ensures that the system observes the safety and liveness tolerances defined in the Security Model section.
  • Network Layer: The communications protocol which ensures that the liveness and safety of the protocol are robust against network-level events and threats.

[Figure: Protocol overview]

Attestation Layer

The attestation layer is responsible for ensuring that, when the network-level assumptions hold and the safety and liveness tolerances are observed, the system properly makes data available.

The primary responsibility of the attestation layer is to enable consensus about whether a given blob of data is fully within the custody of a set of honest nodes. (Here, what can be taken to be a set of honest nodes is defined by the system safety tolerance, and the assurance that these honest nodes will be able to transmit the data to honest retrievers is handled by the network layer.) Since EigenDA is an EigenLayer AVS it does not need its own actual consensus protocol, but can instead piggy-back off of Ethereum's consensus. As a result, the attestation layer decomposes into two fairly straightforward pieces:

  • Attestation Logic: The attestation logic allows us to answer the question of whether a given blob is available, given both a DA attestation and the validator state at the associated Ethereum block. The attestation logic can be understood as simply a function of these inputs which outputs yes or no, depending on whether these inputs imply that data is available. Naturally, this function is grounded upon assumptions about the behavior of honest nodes, which must perform certain validation actions as part of the attestation layer. The attestation logic further decomposes into two major modules:
    • Encoding: The encoding module defines a procedure for blobs to be encoded in such a way that their successful reconstruction can be guaranteed given a large enough collection of unique encoded chunks. The procedure also allows for the chunks to be trustlessly verified against a blob commitment so that the disperser cannot violate the protocol.
    • Assignment: The assignment module provides a deterministic mapping from validator state to an allocation of encoded chunks to DA nodes. The mapping is designed to uphold safety and liveness properties with minimal data-inefficiency.
  • Bridging: Bridging describes how the attestation is bridged to the consumer protocol, such as that of the rollup. In principle, bridging can be performed in one of several different ways in order to optimize efficiency and composability. At the moment, only bridging via the Ethereum L1 is directly supported.

[Figure: Attestation layer modules]

The desired behavior of the attestation logic can be formally described as follows (Ignore this if you're happy with the high level ideas): Let \(\alpha\) denote the safety threshold, i.e. the maximum proportion of adversarial stake that the system is able to tolerate. Likewise, let \(\beta\) represent the amount of stake that we require to be held by the signing operators in order to accept an attestation, i.e. one minus the liveness threshold. Also, let \(O\) denote the set of EigenDA operators.

We need to guarantee that for any set of signing operators $U_q \subseteq O$ such that

$$ \sum_{i \in U_q} S_i \ge \beta \sum_{i \in O}S_i$$

and any set of adversarial operators $U_a \subseteq U_q$ such that

$$ \sum_{i \in U_a} S_i \le \alpha \sum_{i \in O}S_i$$

it is possible to reconstruct the original data blob from the chunks held by $U_q \setminus U_a$.
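
As a concrete instance (the percentages here are illustrative, not protocol values): with $\beta = 55\%$ and $\alpha = 33\%$, any qualifying signing set minus any tolerated adversarial set retains

$$ \sum_{i \in U_q \setminus U_a} S_i \;\ge\; (\beta - \alpha) \sum_{i \in O} S_i \;=\; 0.22 \sum_{i \in O} S_i, $$

so the encoding and assignment modules must ensure that operators holding any 22% of stake collectively hold enough unique chunks to reconstruct the blob.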

Encoding Module

The encoding module defines a procedure for blobs to be encoded in such a way that their successful reconstruction can be guaranteed given a large enough collection of unique encoded chunks. The procedure also allows for the chunks to be trustlessly verified against a blob commitment so that the disperser cannot violate the protocol.

Assignment Module

The assignment module is nothing more than a rule which takes in the Ethereum chain state and outputs an allocation of chunks to DA operators.

Signature verification and bridging

Network Layer

This section is under construction.

Encoding Module

The encoding module defines a procedure for blobs to be encoded in such a way that their successful reconstruction can be guaranteed given a large enough collection of unique encoded chunks. The procedure also allows for the chunks to be trustlessly verified against a blob commitment so that the disperser cannot violate the protocol.

[Figure: Encoding module]

One way to think of the encoding module is that it must satisfy the following security requirements:

  1. Adversarial tolerance for DA nodes: We need to have tolerance to arbitrary adversarial behavior by any number of DA nodes up to some threshold. Note that while simple sharding approaches such as duplicating slices of the blob data have good tolerance to random node dropout, they have poor tolerance to worst-case adversarial behavior.
  2. Adversarial tolerance for disperser: We do not want to put trust assumptions on the encoder or rely on fraud proofs to detect if an encoding is done incorrectly.

Trustless Encoding via KZG and Reed-Solomon

EigenDA uses a combination of Reed-Solomon (RS) erasure coding and KZG polynomial commitments to perform trustless encoding. In this section, we provide a high level overview of how the EigenDA encoding module works and how it achieves these properties.

Reed Solomon Encoding

Basic RS encoding is used to achieve the first requirement of Adversarial tolerance for DA nodes. This looks like the following:

  1. The blob data is represented as a string of symbols, where each symbol is an element of a certain finite field. The number of symbols is called the BlobLength.
  2. These symbols are interpreted as the coefficients of a polynomial of degree BlobLength-1.
  3. This polynomial is evaluated at NumChunks*ChunkLength distinct indices.
  4. Chunks are constructed, where each chunk consists of the polynomial evaluations at ChunkLength distinct indices.

Notice that given any $M$ chunks such that $M \cdot \text{ChunkLength} \ge \text{BlobLength}$, via polynomial interpolation it is possible to reconstruct the original polynomial, and therefore its coefficients which represent the original blob.
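
The following self-contained Go toy illustrates this reconstruction property over a small prime field (a stand-in for the real BN254 field; the production encoder is FFT-based and entirely different code):

```go
package main

import "fmt"

// Toy illustration of the Reed-Solomon property described above. Any
// BlobLength evaluations of the blob polynomial determine it everywhere.
const p int64 = 65537 // small prime standing in for the BN254 field modulus

func pmod(a int64) int64 { return ((a % p) + p) % p }

// inv computes the modular inverse via Fermat's little theorem: a^(p-2) mod p.
func inv(a int64) int64 {
	r, e := int64(1), p-2
	for b := pmod(a); e > 0; e >>= 1 {
		if e&1 == 1 {
			r = r * b % p
		}
		b = b * b % p
	}
	return r
}

// eval computes the blob polynomial sum c_i x^i at a point (Horner's rule).
func eval(c []int64, x int64) int64 {
	y := int64(0)
	for i := len(c) - 1; i >= 0; i-- {
		y = pmod(y*x + c[i])
	}
	return y
}

// lagrange evaluates, at point x, the unique polynomial through (xs, ys).
func lagrange(xs, ys []int64, x int64) int64 {
	y := int64(0)
	for i := range xs {
		num, den := int64(1), int64(1)
		for j := range xs {
			if i == j {
				continue
			}
			num = pmod(num * pmod(x-xs[j]))
			den = pmod(den * pmod(xs[i]-xs[j]))
		}
		y = pmod(y + ys[i]*num%p*inv(den))
	}
	return y
}

func main() {
	blob := []int64{7, 3, 9, 4} // BlobLength = 4 symbols = coefficients
	// Evaluate at NumChunks*ChunkLength = 8 indices (a 2x coding ratio).
	var xs, ys []int64
	for x := int64(1); x <= 8; x++ {
		xs, ys = append(xs, x), append(ys, eval(blob, x))
	}
	// Keep only 4 of the 8 evaluations (any BlobLength-sized subset works).
	kxs, kys := []int64{xs[1], xs[3], xs[4], xs[6]}, []int64{ys[1], ys[3], ys[4], ys[6]}
	// The surviving evaluations determine the polynomial everywhere,
	// e.g. they reproduce a dropped evaluation exactly.
	fmt.Println(lagrange(kxs, kys, xs[0]) == ys[0]) // true
}
```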

Validation via KZG

Addressing the second requirement, Adversarial tolerance for disperser, with RS encoding alone would require fraud proofs: a challenger would have to download all of the encoded chunks and check that they lie on a polynomial corresponding to the blob commitment.

To avoid the need for fraud proofs, EigenDA follows the trail blazed by the Ethereum DA sharding roadmap in using KZG polynomial commitments.

Chunk Validation

Blobs sent to EigenDA are identified by their KZG commitment (which can be calculated by the disperser and easily validated by the rollup sequencer). When the disperser generates the encoded blob chunks, it also generates a collection of opening proofs which the DA nodes can use to trustlessly verify that their chunks fall on the blob polynomial at the correct indices (note: the indices are jointly derived by the disperser and DA nodes from the chain state using the logic in the Assignment module to ensure that the evaluation indices for each node are unique).

Blob Size Verification

KZG commitments can also be used to verify the degree of the original polynomial, which in turn corresponds to the size of the original blob. Having a trustlessly verifiable upper bound on the size of the blob is necessary for DA nodes to verify the correctness of the chunk assignment defined by the assignment module.

The KZG commitment relies on a structured reference string (SRS) containing a generator point $G$ multiplied by all of the powers of some secret field element $\tau$, up to some maximum power $n$. This means that it is not possible to use this SRS to commit to a polynomial of degree greater than $n$. A consequence of this is that if $p(x)$ is a polynomial of degree greater than $m$, it will not be possible to commit to the polynomial $x^{n-m}p(x)$. A "valid" commitment to the polynomial $x^{n-m}p(x)$ thus constitutes a proof that the polynomial $p(x)$ is of degree less than or equal to $m$.

In practice, this looks like the following:

  1. If the disperser wishes to claim that the polynomial $p(x)$ is of degree less than or equal to $m$, they must provide along with the commitment $C_1$ to $p$, a commitment $C_2$ to $q(x) = x^{n-m}p(x)$.
  2. The verifier then performs the pairing check $e(C_1,[x^{n-m}]_2) = e(C_2,H)$, where $H$ is the G2 generator and $[x^{n-m}]_2$ is the commitment to $x^{n-m}$ from the G2 SRS (i.e., $\tau^{n-m}$ times the G2 generator). This check passes only when $C_2$ was constructed as described above, which is possible only if $\deg(p) \le m$.

Note: The blob length verification here allows for the blob length to be upper-bounded; it cannot be used to prove the exact blob length.
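
A minimal sketch of the pairing check in step 2, written against the gnark-crypto BN254 package (an assumption for illustration; EigenDA's actual verifier code differs, and tauPowG2 here denotes the SRS element $[x^{n-m}]_2$):

```go
package degreecheck

import "github.com/consensys/gnark-crypto/ecc/bn254"

// VerifyDegreeBound checks e(c1, [tau^{n-m}]_2) == e(c2, H), where c1 commits
// to p(x), c2 claims to commit to x^{n-m}*p(x), and H is the G2 generator.
// Success implies deg(p) <= m, as explained above.
func VerifyDegreeBound(c1, c2 bn254.G1Affine, tauPowG2, h bn254.G2Affine) (bool, error) {
	// Rewrite the equality as a product of pairings equal to one:
	// e(c1, tauPowG2) * e(-c2, H) == 1.
	var negC2 bn254.G1Affine
	negC2.Neg(&c2)
	return bn254.PairingCheck(
		[]bn254.G1Affine{c1, negC2},
		[]bn254.G2Affine{tauPowG2, h},
	)
}
```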

Prover Optimizations

EigenDA makes use of the results of Fast Amortized Kate Proofs, developed for Ethereum's sharding roadmap, to reduce the computational complexity for proof generation.

See the full discussion

Verifier Optimizations

Without any optimizations, the KZG verification complexity can lead to a computational bottleneck for the DA nodes. Fortunately, the Universal Verification Equation developed for Danksharding data availability sampling dramatically reduces the complexity. EigenDA has implemented this optimization to eliminate this bottleneck for the DA nodes.

Amortized KZG Prover Backend

It is important that the encoding and commitment tasks are able to be performed in seconds and that the dominating complexity of the computation is nearly linear in the degree of the polynomial. This is done using algorithms based on the Fast Fourier Transform (FFT).

This document describes how the KZG-FFT encoder backend implements the Encode(data [][]byte, params EncodingParams) (BlobCommitments, []*Chunk, error) interface, which 1) transforms the blob into a list of params.NumChunks Chunks, where each chunk is of length params.ChunkLength, and 2) produces the associated polynomial commitments and proofs.

We will also highlight the additional constraints on the Encoding interface which arise from the KZG-FFT encoder backend.

Deriving the polynomial coefficients and commitment

As described in the Encoding Module Specification, given a blob of data, we convert the blob to a polynomial $p(X) = \sum_{i=0}^{m-1} c_iX^i$ by simply slicing the data into a string of symbols, and interpreting this list of symbols as the tuple $(c_i)_{i=0}^{m-1}$.

In the case of the KZG-FFT encoder, the polynomial lives on the scalar field associated with the BN254 elliptic curve, which has order 21888242871839275222246405745257275088548364400416034343698204186575808495617 (the same modulus that bounds valid symbol values in the DisperseBlobRequest documentation below).

Given this polynomial representation, the KZG commitment can be calculated as in KZG polynomial commitments.

Polynomial Evaluation with the FFT

In order to use a Discrete Fourier Transform (DFT) to evaluate a polynomial, the indices of the polynomial evaluations which will make up the Chunks must be members of a cyclic group, which we will call $S$. A cyclic group is the group generated by taking all of the integer powers of some generator $v$, i.e., $\{v^k \mid k \in \mathbb{Z}\}$ (for this reason, the elements of a cyclic group $S$ of order $|S|=m$ will sometimes be referred to as the $m$'th roots of unity). Notice that since our polynomial lives on the BN254 field, the group $S$ must be a subgroup of that field (i.e. all of its elements must lie within that field).

Given a cyclic group $S$ of order $m$, we can evaluate a polynomial $p(X)$ with $n$ coefficients at the indices contained in $S$ via the DFT,

$$ p_k = \sum_{i=0}^{n-1}c_i (v^k)^i $$

where $p_k$ gives the evaluation of the polynomial at $v^k \in S$. Letting $c$ denote the vector of polynomial coefficients and $p$ the vector of polynomial evaluations, we can use the shorthand $p = DFT[c]$. The inverse relation also holds, i.e., $c = DFT^{-1}[p]$.

To evaluate the DFT programmatically, we want $m = n$. Notice that we can achieve this when $m > n$ by simply padding $c$ with zeros to be of length $m$.

The use of the FFT can levy an additional requirement on the size of the group $S$. In our implementation, we require the size of $S$ to be a power of 2. For this, we can make use of the fact that the prime field associated with BN254 contains a subgroup of order $2^{28}$, which in turn contains subgroups of orders spanning every power of 2 less than $2^{28}$.

As the encoding interface calls for the construction of NumChunks Chunks of length ChunkLength, our application requires that $S$ be of size NumChunks*ChunkLength, which in turn must be a power of 2.
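
As an illustration, the evaluation $p = DFT[c]$ over a power-of-2 domain can be computed with gnark-crypto's FFT package (an assumption for this sketch; EigenDA ships its own encoder backend):

```go
package main

import (
	"fmt"

	"github.com/consensys/gnark-crypto/ecc/bn254/fr"
	"github.com/consensys/gnark-crypto/ecc/bn254/fr/fft"
)

func main() {
	const m = 16                    // |S| = NumChunks*ChunkLength, a power of 2
	coeffs := make([]fr.Element, m) // coefficients c, zero-padded to length m
	for i := 0; i < 4; i++ {
		coeffs[i].SetUint64(uint64(i + 1)) // toy blob with BlobLength = 4
	}
	domain := fft.NewDomain(m)  // precomputes a generator v of the size-m subgroup
	domain.FFT(coeffs, fft.DIF) // in-place: coeffs becomes the evaluations p
	fft.BitReverse(coeffs)      // restore natural order p_0, p_1, ..., p_{m-1}
	// p_0 = p(v^0) = p(1) = 1+2+3+4 = 10
	fmt.Println(coeffs[0].String())
}
```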

Amortized Multireveal Proof Generation with the FFT

The construction of the multireveal proofs can also be performed using a DFT (as in “Fast Amortized Kate Proofs”). Leaving the full details of this process to the referenced document, we describe here only 1) the index-assignment scheme used by the amortized multiproof generation approach and 2) the constraints that this creates for the overall encoder interface.

Given the group $S$ corresponding to the indices of the polynomial evaluations and a cyclic group $C$ which is a subgroup of $S$, the cosets of $C$ in $S$ are given by

$$ sC = \{sc : c \in C\} \quad \text{for } s \in S. $$

Each coset $sC$ has size $|C|$, and there are $|S|/|C|$ unique and disjoint cosets.

Given a polynomial $p(X)$ and the groups $S$ and $C$, the Amortized Kate Proofs approach generates $|S|/|C|$ different KZG multi-reveal proofs, where each proof is associated with the evaluation of $p(X)$ at the indices contained in a single coset $sC$ for $s \in S$. Because the Amortized Kate Proofs approach uses the FFT under the hood, $C$ itself must have an order which is a power of 2.

For the purposes of the KZG-FFT encoder, this means that we must choose $S$ to be of size NumChunks*ChunkLength and $C$ to be of size ChunkLength, each of which must be powers of 2.

Worked Example

As a simple illustrative example, suppose that AssignmentCoordinator provides the following parameters in order to meet the security requirements of a given blob:

  • ChunkLength = 3
  • NumChunks = 4

Supplied with these parameters, Encoder.ParamsFromMins will round ChunkLength up to the next power of 2, i.e., ChunkLength = 4, and leave NumChunks unchanged. The following figure illustrates how the indices will be assigned across the chunks in this scenario.

[Figure: Worked example of chunk indices for ChunkLength=4, NumChunks=4]
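
A sketch of the rounding step (hypothetical helper; the actual Encoder.ParamsFromMins implementation may differ):

```go
// NextPowerOf2 returns the smallest power of 2 greater than or equal to x.
func NextPowerOf2(x uint64) uint64 {
	p := uint64(1)
	for p < x {
		p <<= 1
	}
	return p
}

// For the worked example: NextPowerOf2(3) == 4 and NextPowerOf2(4) == 4,
// so ChunkLength is upgraded to 4 while NumChunks stays 4, giving
// |S| = 16 and |C| = 4.
```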

Assignment Module

The assignment module is essentially a rule which takes in the Ethereum chain state and outputs an allocation of chunks to DA operators. This can be generalized to a function that outputs a set of valid allocations.

A chunk assignment has the following parameters:

  1. Indices: the chunk indices that will be assigned to each DA node. Some DA nodes receive more than one chunk.
  2. ChunkLength: the length of each chunk (measured in number of symbols, as defined by the encoding module). We currently require all chunks to be of the same length, so this parameter is a scalar.

The assignment module is implemented by the AssignmentCoordinator interface.

[Figure: Assignment module]

Assignment Logic

The standard assignment coordinator implements a very simple logic for determining the number of chunks per node and the chunk length, which we describe here.

Chunk Length

Chunk lengths must be sufficiently small that operators with a small proportion of stake will be able to receive a quantity of data commensurate with their stake share. For each operator $i$, let $S_i$ signify the amount of stake held by that operator.

We require that the chunk size $C$ satisfy

$$ C \le \text{NextPowerOf2}\left(\frac{B}{\gamma}\max\left(\frac{\min_jS_j}{\sum_jS_j}, \frac{1}{M_\text{max}} \right) \right) $$

where $\gamma = \beta-\alpha$, with $\alpha$ and $\beta$ the adversary and quorum thresholds as defined in the Overview.

This means that as long as an operator has a stake share of at least $1/M_\text{max}$, then the encoded data that they will receive will be within a factor of 2 of their share of stake. Operators with less than $1/M_\text{max}$ of stake will receive no more than a $1/M_\text{max}$ fraction of the encoded data. $M_\text{max}$ represents the maximum number of chunks that the disperser can be required to encode per blob. This limit is included because proving costs scale somewhat super-linearly with the number of chunks.

In the future, additional constraints on chunk length may be added; for instance, the chunk length may be set in order to maintain a fixed number of chunks per blob across all system states. Currently, the protocol does not mandate a specific value for the chunk length, but will accept the range satisfying the above constraint. The CalculateChunkLength function is provided as a convenience function that can be used to find a chunk length satisfying the protocol requirements.
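
A sketch of this bound in Go (a hypothetical helper under the stated formula; the protocol's CalculateChunkLength may differ in rounding and units):

```go
// MaxChunkLength computes the upper bound on chunk length C from the
// inequality above. blobLen is B in symbols, stakes holds each S_i,
// gamma = beta - alpha (as fractions), and mMax is M_max.
func MaxChunkLength(blobLen uint64, stakes []uint64, gamma float64, mMax uint64) uint64 {
	minStake, total := stakes[0], uint64(0)
	for _, s := range stakes {
		if s < minStake {
			minStake = s
		}
		total += s
	}
	share := float64(minStake) / float64(total) // min_j S_j / sum_j S_j
	if floor := 1 / float64(mMax); share < floor {
		share = floor // max(share, 1/M_max)
	}
	bound := float64(blobLen) / gamma * share
	c := uint64(1)
	for float64(c) < bound { // NextPowerOf2 of the bound
		c <<= 1
	}
	return c
}
```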

Index Assignment

For each operator $i$, let $S_i$ signify the amount of stake held by that operator. We want for the number of chunks assigned to operator $i$ to satisfy

$$ \frac{\gamma m_i C}{B} \ge \frac{S_i}{\sum_j S_j} $$

Let

$$ m_i = \text{ceil}\left(\frac{B S_i}{C\gamma \sum_j S_j}\right)\tag{1} $$
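
Equation (1) translates directly into a small helper (a sketch; production code works with stake amounts as big integers rather than float64):

```go
package assignment

import "math"

// ChunksForOperator returns m_i = ceil(B*S_i / (C*gamma*sum_j S_j)).
func ChunksForOperator(blobLen, chunkLen, stakeI, totalStake uint64, gamma float64) uint64 {
	num := float64(blobLen) * float64(stakeI)
	den := float64(chunkLen) * gamma * float64(totalStake)
	return uint64(math.Ceil(num / den))
}
```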

Correctness. Let's show that for any sets $U_q$ and $U_a$ satisfying the constraints in the Consensus Layer Overview, the data held by the operators $U_q \setminus U_a$ will constitute an entire blob. The amount of data held by these operators is given by

$$ \sum_{i \in U_q \setminus U_a} m_i C $$

We have from (1) and from the definitions of $U_q$ and $U_a$ that

$$ \sum_{i \in U_q \setminus U_a} m_i C \ge \frac{B}{\gamma}\sum_{i \in U_q \setminus U_a}\frac{S_i}{\sum_j S_j} = \frac{B}{\gamma}\frac{\sum_{i \in U_q} S_i - \sum_{i \in U_a} S_i}{\sum_jS_j} \ge B \frac{\beta-\alpha}{\gamma} = B \tag{2} $$

Since the unique data held by these operators equals or exceeds the size of a blob, the encoding module ensures that the original blob can be reconstructed from this data.

Validation Actions

Validation with respect to assignments is performed at different layers of the protocol:

DA Nodes

When the DA node receives a StoreChunks request, it performs the following validation actions relative to each blob header:

  • It uses ValidateChunkLength to verify that the ChunkLength for the blob satisfies the above constraints.
  • It uses GetOperatorAssignment to calculate the chunk indices for which it is responsible, and verifies that each of the chunks that it has received lies on the polynomial at these indices (see Encoding validation actions)

This step ensures that each honest node has received the blobs for which it is accountable.

Since the DA nodes will allow a range of ChunkLength values, as long as they satisfy the constraints of the protocol, it is necessary for there to be consensus on the ChunkLength that is in use for a particular blob and quorum. For this reason, the ChunkLength is included in the BlobQuorumParam which is hashed to create the merkle root contained in the BatchHeaderHash signed by the DA nodes.

Rollup Smart Contract

When the rollup confirms its blob against the EigenDA batch, it checks that the ConfirmationThreshold for the blob is greater than the AdversaryThreshold. This means that if the ChunkLength determined by the disperser is invalid, the batch cannot be confirmed as a sufficient number of nodes will not sign.

Signature verification and bridging

[Figure: Signature verification and bridging]

L1 Bridging

Bridging a DA attestation for a specific blob requires the following stages:

  • Bridging the batch attestation. This involves checking the aggregate signature of the DA nodes for the batch, and tallying up the total amount of stake held by the signing nodes.
  • Verifying the blob inclusion. Each batch contains a root of a Merkle tree whose leaves correspond to the blob headers contained in the batch. To verify blob inclusion, the associated Merkle proof must be supplied and evaluated. Furthermore, the specific quorum threshold requirement for the blob must be checked against the total amount of signing stake for the batch.

For the first stage, EigenDA makes use of EigenLayer's default utilities for managing operator state, verifying aggregate BLS signatures, and checking the total stake held by the signing operators.

For the second stage, EigenDA provides a utility contract with a verifyBlob method which rollups would typically integrate into their fraud proof pathway in the following manner:

  1. The rollup sequencer posts all lookup data needed to verify a blob against a batch to the rollup inbox contract.
  2. To initiate a fraud proof, the challenger must call the verifyBlob method with the supplied lookup data. If the blob does not verify correctly, the blob is considered invalid.

Reorg behavior (this section is outdated)

One aspect of the chain behavior of which the attestation protocol must be aware is that of chain reorganization. The following requirements relate to chain reorganizations:

  1. Signed attestations should remain valid under reorgs so that a disperser never needs to resend the data and gather new signatures.
  2. If an attestation is reorged out, a disperser should always be able to simply resubmit it after a specific waiting period.
  3. Payloads constructed by a disperser and sent to DA nodes should never be rejected due to reorgs.

These requirements result in the following design choices:

  • Chunk allotments should be based on registration state from a finalized block.
  • If an attestation is reorged out, the disperser should start a new dispersal for that blob of data only if the transaction containing the batch header is not present within BLOCK_STALE_MEASURE blocks after referenceBlockNumber and the block that is BLOCK_STALE_MEASURE blocks after referenceBlockNumber is finalized. Otherwise, the disperser must not re-submit another transaction containing the header of a batch associated with the same blob of data.
  • Payment payloads sent to DA nodes should only take into account finalized attestations.

The first and second decisions satisfy requirements 1 and 2. The three decisions together satisfy requirement 3.

Whenever the confirmBatch method of ServiceManager.sol is called, the following checks are used to ensure that only finalized registration state is utilized:

  • Stake staleness check. The referenceBlockNumber is verified to be within BLOCK_STALE_MEASURE blocks before the confirmation block. This is to make sure that batches using outdated stakes are not confirmed. Stakes from within BLOCK_STALE_MEASURE blocks before confirmation are guaranteed to be valid because removal of stakes is delayed by BLOCK_STALE_MEASURE + MAX_DURATION_BLOCKS.
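
In Go form, the check amounts to the following (a sketch; the actual check is implemented in Solidity in ServiceManager.sol, and the names here are illustrative):

```go
// stakeIsFresh reports whether referenceBlockNumber falls within
// BLOCK_STALE_MEASURE blocks before the confirmation block.
func stakeIsFresh(referenceBlockNumber, confirmationBlockNumber, blockStaleMeasure uint64) bool {
	return referenceBlockNumber <= confirmationBlockNumber &&
		confirmationBlockNumber-referenceBlockNumber <= blockStaleMeasure
}
```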

EigenDA Managed Contracts

This page describes EigenDA contracts that are managed by EigenDA-related actors (see the exact roles). For EigenDA-related contracts that are managed by rollups, see the rollup managed contracts page.

Middlewares Contracts

EigenDA Specific Contracts

The smart contracts can be found here.


EigenDACertVerifier

Contains a single function verifyDACertV2 which is used to verify certs. This function’s logic is described in the Cert Validation section.

EigenDAThresholdRegistry

The EigenDAThresholdRegistry contains two sets of fundamental parameters:

/// @notice mapping of blob version id to the params of the blob version
mapping(uint16 => VersionedBlobParams) public versionedBlobParams;
struct VersionedBlobParams {
    uint32 maxNumOperators;
    uint32 numChunks;
    uint8 codingRate;
}

/// @notice Immutable security thresholds for quorums
SecurityThresholds public defaultSecurityThresholdsV2;
struct SecurityThresholds {
    uint8 confirmationThreshold;
    uint8 adversaryThreshold;
}

The security thresholds are currently immutable. They correspond to what were previously called the liveness and safety thresholds:

  • Confirmation Threshold (fka liveness threshold): minimum percentage of stake which an attacker must control in order to mount a liveness attack on the system.
  • Adversary Threshold (fka safety threshold): total percentage of stake which an attacker must control in order to mount a first-order safety attack on the system.

Their values are

defaultSecurityThresholdsV2 = {
	confirmationThreshold = ??,
	adversaryThreshold = ??,
}

A new BlobParam version is very infrequently introduced by the EigenDA Foundation Governance, and rollups can choose which version they wish to use when dispersing a blob. Currently there is only version 0 defined, with parameters:

versionedBlobParams[0] = {
	maxNumOperators = ??,
	numChunks = 8192,
	codingRate = ??,
}

The five parameters are intricately related by this formula which is also verified onchain by the verifyBlobSecurityParams function:

$$ \text{numChunks} \cdot \left(1 - \frac{100}{\gamma \cdot \text{codingRate}}\right) \ge \text{maxNumOperators} $$

where $\gamma = \text{confirmationThreshold} - \text{adversaryThreshold}$.
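
A sketch of the check in Go form (the real verifyBlobSecurityParams is Solidity; the struct fields mirror the contracts above, with thresholds expressed in percent):

```go
type VersionedBlobParams struct {
	MaxNumOperators uint32
	NumChunks       uint32
	CodingRate      uint8
}

type SecurityThresholds struct {
	ConfirmationThreshold uint8
	AdversaryThreshold    uint8
}

// blobParamsAreSecure checks numChunks * (1 - 100/(gamma*codingRate)) >= maxNumOperators.
func blobParamsAreSecure(p VersionedBlobParams, t SecurityThresholds) bool {
	gamma := float64(t.ConfirmationThreshold) - float64(t.AdversaryThreshold)
	return float64(p.NumChunks)*(1-100/(gamma*float64(p.CodingRate))) >= float64(p.MaxNumOperators)
}
```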

EigenDARelayRegistry

Contains the Ethereum address and DNS hostname or IP address of each Relay registered with the EigenDA network. BlobCertificates contain relayKey(s), which can be transformed into that relay’s URL by calling relayKeyToUrl.

EigenDADisperserRegistry

Contains the Ethereum address of each Disperser registered with the EigenDA network. The EigenDA network currently supports only a single Disperser, hosted by EigenLabs. The Disperser’s URL is currently static and unchanging, and can be found on our docs site in the Networks section.

Deployments

Governance Roles

Protocol Documentation


churner/churner.proto

ChurnReply

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| signature_with_salt_and_expiry | SignatureWithSaltAndExpiry |  | The signature signed by the Churner. |
| operators_to_churn | OperatorToChurn | repeated | A list of existing operators that get churned out. This list will contain all quorums specified in the ChurnRequest even if some quorums may not have any churned out operators. If a quorum has available space, the OperatorToChurn object will contain the quorum ID and empty operator and pubkey. The smart contract should only churn out the operators for quorums that are full. For example, if the ChurnRequest specifies quorums 0 and 1 where quorum 0 is full and quorum 1 has available space, the ChurnReply will contain two OperatorToChurn objects with the respective quorums. OperatorToChurn for quorum 0 will contain the operator to churn out and OperatorToChurn for quorum 1 will contain an empty operator (zero address) and pubkey. The smart contract should only churn out the operators for quorum 0 because quorum 1 has available space without having any operators churned. Note: it's possible an operator gets churned out just for one or more quorums (rather than entirely churned out for all quorums). |

ChurnRequest

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| operator_address | string |  | The Ethereum address (in hex like "0x123abcdef...") of the operator. |
| operator_to_register_pubkey_g1 | bytes |  | The operator making the churn request. |
| operator_to_register_pubkey_g2 | bytes |  |  |
| operator_request_signature | bytes |  | The operator's BLS signature signed on the keccak256 hash of concat("ChurnRequest", operator address, g1, g2, salt). |
| salt | bytes |  | The salt used as part of the message to sign on for operator_request_signature. |
| quorum_ids | uint32 | repeated | The quorums to register for. Note: - If any of the quorums here has already been registered, this entire request will fail to proceed. - If any of the quorums fails to register, this entire request will fail. - Regardless of whether the specified quorums are full or not, the Churner will return parameters for all quorums specified here. The smart contract will determine whether it needs to churn out existing operators based on whether the quorums have available space. The IDs must be in range [0, 254]. |

OperatorToChurn

This describes an operator to churn out for a quorum.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| quorum_id | uint32 |  | The ID of the quorum of the operator to churn out. |
| operator | bytes |  | The address of the operator. |
| pubkey | bytes |  | BLS pubkey (G1 point) of the operator. |

SignatureWithSaltAndExpiry

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| signature | bytes |  | Churner's signature on the Operator's attributes. |
| salt | bytes |  | Salt is the keccak256 hash of concat("churn", time.Now(), operatorToChurn's OperatorID, Churner's ECDSA private key) |
| expiry | int64 |  | When this churn decision will expire. |

Churner

The Churner is a service that handles churn requests from new operators trying to join the EigenDA network. When the EigenDA network reaches the maximum number of operators, any new operator trying to join will have to make a churn request to this Churner, which acts as the sole decision maker to decide whether this new operator could join, and if so, which existing operator will be churned out (so the max number of operators won't be exceeded). The max number of operators, as well as the rules to make churn decisions, are defined onchain, see details in OperatorSetParam at: https://github.com/Layr-Labs/eigenlayer-middleware/blob/master/src/interfaces/IBLSRegistryCoordinatorWithIndices.sol#L24.

| Method Name | Request Type | Response Type | Description |
| --- | --- | --- | --- |
| Churn | ChurnRequest | ChurnReply |  |


common/common.proto

BlobCommitment

BlobCommitment represents the commitment of a specific blob, containing its KZG commitment, degree proof, the actual degree, and data length in number of symbols.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| commitment | bytes |  | A commitment to the blob data. |
| length_commitment | bytes |  | A commitment to the blob data with G2 SRS, used to work with length_proof such that the claimed length below is verifiable. |
| length_proof | bytes |  | A proof that the degree of the polynomial used to generate the blob commitment is valid. It is computed such that the coefficient of the polynomial is committing with the G2 SRS at the end of the highest order. |
| length | uint32 |  | The length specifies the degree of the polynomial used to generate the blob commitment. The length must equal the degree + 1, and it must be a power of 2. |

G1Commitment

A KZG commitment

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| x | bytes |  | The X coordinate of the KZG commitment. This is the raw byte representation of the field element. |
| y | bytes |  | The Y coordinate of the KZG commitment. This is the raw byte representation of the field element. |


common/v2/common_v2.proto

Batch

Batch is a batch of blob certificates

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| header | BatchHeader |  | header contains metadata about the batch |
| blob_certificates | BlobCertificate | repeated | blob_certificates is the list of blob certificates in the batch |

BatchHeader

BatchHeader is the header of a batch of blobs

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| batch_root | bytes |  | batch_root is the root of the merkle tree of the hashes of blob certificates in the batch |
| reference_block_number | uint64 |  | reference_block_number is the block number that the state of the batch is based on for attestation |

BlobCertificate

BlobCertificate contains a full description of a blob and how it is dispersed. Part of the certificate is provided by the blob submitter (i.e. the blob header), and part is provided by the disperser (i.e. the relays). Validator nodes eventually sign the blob certificate once they are in custody of the required chunks (note that the signature is indirect; validators sign the hash of a Batch, which contains the blob certificate).

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| blob_header | BlobHeader |  | blob_header contains data about the blob. |
| signature | bytes |  | signature is an ECDSA signature signed by the blob request signer's account ID over the BlobHeader's blobKey, which is a keccak hash of the serialized BlobHeader, and used to verify against the blob dispersal request's account ID |
| relay_keys | uint32 | repeated | relay_keys is the list of relay keys that are in custody of the blob. The relays custodying the data are chosen by the Disperser to which the DisperseBlob request was submitted. It needs to contain at least 1 relay number. To retrieve a blob from the relay, one can find that relay's URL in the EigenDARelayRegistry contract: https://github.com/Layr-Labs/eigenda/blob/master/contracts/src/core/EigenDARelayRegistry.sol |

BlobHeader

BlobHeader contains the information describing a blob and the way it is to be dispersed.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| version | uint32 |  | The blob version. Blob versions are pushed onchain by EigenDA governance in an append only fashion and store the maximum number of operators, number of chunks, and coding rate for a blob. On blob verification, these values are checked against supplied or default security thresholds to validate the security assumptions of the blob's availability. |
| quorum_numbers | uint32 | repeated | quorum_numbers is the list of quorum numbers that the blob is part of. Each quorum will store the data, hence adding quorum numbers adds redundancy, making the blob more likely to be retrievable. Each quorum requires separate payment. On-demand dispersal is currently limited to using a subset of the following quorums: - 0: ETH - 1: EIGEN. Reserved-bandwidth dispersal is free to use multiple quorums, however those must be reserved ahead of time. The quorum_numbers specified here must be a subset of the ones allowed by the on-chain reservation. Check the allowed quorum numbers by looking up the reservation struct: https://github.com/Layr-Labs/eigenda/blob/1430d56258b4e814b388e497320fd76354bfb478/contracts/src/interfaces/IPaymentVault.sol#L10 |
| commitment | common.BlobCommitment |  | commitment is the KZG commitment to the blob |
| payment_header | PaymentHeader |  | payment_header contains payment information for the blob |

PaymentHeader

PaymentHeader contains payment information for a blob. At least one of reservation_period or cumulative_payment must be set, and reservation_period is always considered before cumulative_payment. If reservation_period is set but not valid, the server will reject the request and not proceed with dispersal. If reservation_period is not set and cumulative_payment is set but not valid, the server will reject the request and not proceed with dispersal. Once the server has accepted the payment header, a client cannot cancel or roll back the payment. Every dispersal request will be charged as a multiple of the minNumSymbols field defined by the payment vault contract. If the request blob size is smaller than or not a multiple of minNumSymbols, the server will charge the user for the next multiple of minNumSymbols (https://github.com/Layr-Labs/eigenda/blob/1430d56258b4e814b388e497320fd76354bfb478/contracts/src/payments/PaymentVaultStorage.sol#L9).

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| account_id | string |  | The account ID of the disperser client. This account ID is an eth wallet address of the user, corresponding to the key used by the client to sign the BlobHeader. |
| timestamp | int64 |  | The timestamp should be set as the UNIX timestamp in units of nanoseconds at the time of the dispersal request, and will be used to determine the reservation period, and compared against the reservation's active start and end timestamps. On-chain reservation timestamps are in units of seconds, while the payment header timestamp is in nanoseconds for greater precision. If the timestamp is not set or is not part of the previous or current reservation period, the request will be rejected. The reservation period of the dispersal request is used for rate-limiting the user's account against their dedicated bandwidth. This method requires users to set up reservation accounts with the EigenDA team, and the team will set up an on-chain record of reserved bandwidth for the user for some period of time. The dispersal client's accountant will set this value to the current timestamp in nanoseconds. The disperser server will find the corresponding reservation period by taking the nearest lower multiple of the on-chain configured reservation period interval, mapping each request to a time-based window; the period is serialized and parsed as a uint32. The disperser server then validates that it matches either the current or the previous period, and checks it against the user's reserved bandwidth. Example Usage Flow: 1. The user sets up a reservation with the EigenDA team, including throughput (symbolsPerSecond), startTimestamp, endTimestamp, and reservationPeriodInterval. 2. When sending a dispersal request at time t, the client fills in the timestamp field with t. 3. The disperser takes timestamp t and checks the reservation period and the user's bandwidth capacity: - The reservation must be active (t >= startTimestamp and t < endTimestamp). - After rounding up to the nearest multiple of minNumSymbols defined by the payment vault contract, the user must still have enough bandwidth capacity (i.e., hasn't exceeded symbolsPerSecond * reservationPeriodInterval). - The request is rate-limited against the current reservation period, calculated as reservation_period = floor(t / reservationPeriodInterval) * reservationPeriodInterval; the request's reservation period must be either the disperser server's current reservation period or the previous one. 4. The server always records the received request in the current reservation period, and then categorizes the scenarios: - If the remaining bandwidth is sufficient for the request, the dispersal request proceeds. - If the remaining bandwidth is not enough, the server fills up the current bin and overflows the extra into a future bin. - If the bandwidth has already been exhausted, the request is rejected. 5. Once the dispersal request signature has been verified, the server will not roll back the payment or the usage records. Users should be aware of this when planning their usage. The dispersal client written by the EigenDA team takes account of this. 6. When the reservation ends or usage is exhausted, the client must wait for the next reservation period or switch to on-demand. |
| cumulative_payment | bytes |  | Cumulative payment is the total amount of tokens paid by the requesting account, including the current request. This value is serialized as a uint256 and parsed as a big integer, and must match the user's on-chain deposit limits as well as the recorded payments for all previous requests. Because it is a cumulative (not incremental) total, requests can arrive out of order and still unambiguously declare how much of the on-chain deposit can be deducted. Example Decision Flow: 1. In the setup phase, the user must deposit tokens into the EigenDA PaymentVault contract. The payment vault contract specifies the minimum number of symbols charged per dispersal, the pricing per symbol, and the maximum global rate for on-demand dispersals. The user should calculate the amount of tokens they would like to deposit based on their usage. The first time a user makes a request, the server will immediately read the contract for the on-chain balance. When a user runs out of on-chain balance, the server will reject the request and not proceed with dispersal. When a user tops up on-chain, the server will only refresh every few minutes for the top-up to take effect. 2. The disperser client accounts for how many tokens they've already paid (previousCumPmt). 3. They should calculate the payment by rounding up the blob size to the nearest multiple of minNumSymbols defined by the payment vault contract, and calculate the incremental amount of tokens the current request needs based on protocol-defined pricing. 4. They take the sum of previousCumPmt + the new incremental payment and place it in the cumulative_payment field. 5. The disperser checks this new cumulative total against on-chain deposits and prior records (the largest previous payment, and the smallest later payment if one exists). 6. If the payment number is valid, the request is confirmed and the disperser proceeds with dispersal; otherwise it's rejected. |


disperser/disperser.proto

AuthenticatedReply

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| blob_auth_header | BlobAuthHeader |  |  |
| disperse_reply | DisperseBlobReply |  |  |

AuthenticatedRequest

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| disperse_request | DisperseBlobRequest |  |  |
| authentication_data | AuthenticationData |  |  |

AuthenticationData

AuthenticationData contains the signature of the BlobAuthHeader.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| authentication_data | bytes |  |  |

BatchHeader

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| batch_root | bytes |  | The root of the merkle tree with the hashes of blob headers as leaves. |
| quorum_numbers | bytes |  | All quorums associated with blobs in this batch. Sorted in ascending order. Ex. [0, 2, 1] => 0x000102 |
| quorum_signed_percentages | bytes |  | The percentage of stake that has signed for this batch. The quorum_signed_percentages[i] is the percentage for the quorum_numbers[i]. |
| reference_block_number | uint32 |  | The Ethereum block number at which the batch was created. The Disperser will encode and disperse the blobs based on the onchain info (e.g. operator stakes) at this block number. |

BatchMetadata

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| batch_header | BatchHeader |  |  |
| signatory_record_hash | bytes |  | The hash of all public keys of the operators that did not sign the batch. |
| fee | bytes |  | The fee payment paid by users for dispersing this batch. It's the bytes representation of a big.Int value. |
| confirmation_block_number | uint32 |  | The Ethereum block number at which the batch is confirmed onchain. |
| batch_header_hash | bytes |  | This is the hash of the ReducedBatchHeader defined onchain, see: https://github.com/Layr-Labs/eigenda/blob/master/contracts/src/interfaces/IEigenDAServiceManager.sol#L43 This is the message that the operators will sign their signatures on. |

BlobAuthHeader

BlobAuthHeader contains information about the blob for the client to verify and sign.

  • Once payments are enabled, the BlobAuthHeader will contain the KZG commitment to the blob, which the client will verify and sign. Having the client verify the KZG commitment instead of calculating it avoids the need for the client to have the KZG structured reference string (SRS), which can be large. The signed KZG commitment prevents the disperser from sending a different blob to the DA Nodes than the one the client sent.
  • In the meantime, the BlobAuthHeader contains a simple challenge parameter that is used to prevent replay attacks in the event that a signature is leaked.
| Field | Type | Label | Description |
| --- | --- | --- | --- |
| challenge_parameter | uint32 |  |  |

BlobHeader

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| commitment | common.G1Commitment |  | KZG commitment of the blob. |
| data_length | uint32 |  | The length of the blob in symbols (each symbol is 32 bytes). |
| blob_quorum_params | BlobQuorumParam | repeated | The params of the quorums that this blob participates in. |

BlobInfo

BlobInfo contains information needed to confirm the blob against the EigenDA contracts

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| blob_header | BlobHeader |  |  |
| blob_verification_proof | BlobVerificationProof |  |  |

BlobQuorumParam

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| quorum_number | uint32 |  | The ID of the quorum. |
| adversary_threshold_percentage | uint32 |  | The max percentage of stake within the quorum that can be held by or delegated to adversarial operators. Currently, this and the next parameter are standardized across the quorum using values read from the EigenDA contracts. |
| confirmation_threshold_percentage | uint32 |  | The min percentage of stake that must attest in order to consider the dispersal successful. |
| chunk_length | uint32 |  | The length of each chunk. |

BlobStatusReply

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| status | BlobStatus |  | The status of the blob. |
| info | BlobInfo |  | The blob info needed for clients to confirm the blob against the EigenDA contracts. |

BlobStatusRequest

BlobStatusRequest is used to query the status of a blob.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| request_id | bytes |  | Refer to the documentation for DisperseBlobReply.request_id. Note that because the request_id depends on the timestamp at which the disperser received the request, it is not possible to compute it locally from the cert and blob. Clients should thus store this request_id if they plan on requerying the status of the blob in the future. |

BlobVerificationProof

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| batch_id | uint32 |  | batch_id is an incremental ID assigned to a batch by EigenDAServiceManager |
| blob_index | uint32 |  | The index of the blob in the batch (which is logically an ordered list of blobs). |
| batch_metadata | BatchMetadata |  |  |
| inclusion_proof | bytes |  | inclusion_proof is a merkle proof for a blob header's inclusion in a batch |
| quorum_indexes | bytes |  | indexes of quorums in BatchHeader.quorum_numbers that match the quorums in BlobHeader.blob_quorum_params. Ex. BlobHeader.blob_quorum_params = [ { quorum_number = 0, ... }, { quorum_number = 3, ... }, { quorum_number = 5, ... }, ] BatchHeader.quorum_numbers = [0, 5, 3] => 0x000503 Then, quorum_indexes = [0, 2, 1] => 0x000201 |

DisperseBlobReply

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| result | BlobStatus |  | The status of the blob associated with the request_id. Will always be PROCESSING. |
| request_id | bytes |  | The request ID generated by the disperser. Once a request is accepted, a unique request ID is generated. request_id = string(blob_key) = (hash(blob), hash(metadata)) where metadata contains a requestedAt timestamp and the requested quorum numbers and their adversarial thresholds. BlobKey definition: https://github.com/Layr-Labs/eigenda/blob/6b02bf966afa2b9bf2385db8dd01f66f17334e17/disperser/disperser.go#L87 BlobKey computation: https://github.com/Layr-Labs/eigenda/blob/6b02bf966afa2b9bf2385db8dd01f66f17334e17/disperser/common/blobstore/shared_storage.go#L83-L84 Different DisperseBlobRequests have different IDs, including two identical DisperseBlobRequests sent at different times. Clients should thus store this ID and use it to query the processing status of the request via the GetBlobStatus API. |

DisperseBlobRequest

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| data | bytes |  | The data to be dispersed. The size of data must be <= 16MiB. Every 32 bytes of data is interpreted as an integer in big endian format where the lower address has more significant bits. The integer must stay in the valid range to be interpreted as a field element on the bn254 curve. The valid range is 0 <= x < 21888242871839275222246405745257275088548364400416034343698204186575808495617. If any one of the 32-byte elements is outside the range, the whole request is deemed invalid and rejected. |
| custom_quorum_numbers | uint32 | repeated | The quorums to which the blob will be sent, in addition to the required quorums which are configured on the EigenDA smart contract. If required quorums are included here, an error will be returned. The disperser will ensure that the encoded blobs for each quorum are all processed within the same batch. |
| account_id | string |  | The account ID of the client. This should be a hex-encoded string of the ECDSA public key corresponding to the key used by the client to sign the BlobAuthHeader. |

RetrieveBlobReply

RetrieveBlobReply contains the retrieved blob data

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| data | bytes |  |  |

RetrieveBlobRequest

RetrieveBlobRequest contains parameters to retrieve the blob.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| batch_header_hash | bytes |  |  |
| blob_index | uint32 |  |  |

BlobStatus

BlobStatus represents the status of a blob. The status of a blob is updated as the blob is processed by the disperser. The status of a blob can be queried by the client using the GetBlobStatus API. Intermediate states are states that the blob can be in while being processed, and it can be updated to a different state:

  • PROCESSING
  • DISPERSING
  • CONFIRMED

Terminal states are states that will not be updated to a different state:

  • FAILED
  • FINALIZED
  • INSUFFICIENT_SIGNATURES
| Name | Number | Description |
| --- | --- | --- |
| UNKNOWN | 0 |  |
| PROCESSING | 1 | PROCESSING means that the blob is currently being processed by the disperser |
| CONFIRMED | 2 | CONFIRMED means that the blob has been dispersed to DA Nodes and the dispersed batch containing the blob has been confirmed onchain |
| FAILED | 3 | FAILED means that the blob has failed permanently (for reasons other than insufficient signatures, which is a separate state). This status is somewhat of a catch-all category, containing (but not necessarily exclusively, as errors can be added in the future): - blob has expired - internal logic error while requesting encoding - blob retry has exceeded its limit while waiting for blob finalization after confirmation. Most likely triggered by a chain reorg: see https://github.com/Layr-Labs/eigenda/blob/master/disperser/batcher/finalizer.go#L179-L189. |
| FINALIZED | 4 | FINALIZED means that the block containing the blob's confirmation transaction has been finalized on Ethereum |
| INSUFFICIENT_SIGNATURES | 5 | INSUFFICIENT_SIGNATURES means that the confirmation threshold for the blob was not met for at least one quorum. |
| DISPERSING | 6 | The DISPERSING state is comprised of two separate phases: - Dispersing to DA nodes and collecting signatures - Submitting the transaction on chain and waiting for the tx receipt |

Disperser

Disperser defines the public APIs for dispersing blobs.

| Method Name | Request Type | Response Type | Description |
| --- | --- | --- | --- |
| DisperseBlob | DisperseBlobRequest | DisperseBlobReply | DisperseBlob accepts a single blob to be dispersed. This executes the dispersal async, i.e. it returns once the request is accepted. The client should use the GetBlobStatus() API to poll the processing status of the blob. DisperseBlob returns the following error codes: INVALID_ARGUMENT (400): request is invalid for a reason specified in the error msg. RESOURCE_EXHAUSTED (429): request is rate limited for the quorum specified in the error msg; user should retry after the specified duration. INTERNAL (500): serious error, user should NOT retry. |
| DisperseBlobAuthenticated | AuthenticatedRequest stream | AuthenticatedReply stream | DisperseBlobAuthenticated is similar to DisperseBlob, except that it requires the client to authenticate itself via the AuthenticationData message. The protocol is as follows: 1. The client sends a DisperseBlobAuthenticated request with the DisperseBlobRequest message 2. The Disperser sends back a BlobAuthHeader message containing information for the client to verify and sign. 3. The client verifies the BlobAuthHeader and sends back the signed BlobAuthHeader in an AuthenticationData message. 4. The Disperser verifies the signature and returns a DisperseBlobReply message. |
| GetBlobStatus | BlobStatusRequest | BlobStatusReply | This API is meant to be polled for the blob status. |
| RetrieveBlob | RetrieveBlobRequest | RetrieveBlobReply | This retrieves the requested blob from the Disperser's backend. This is a more efficient way to retrieve blobs than directly retrieving from the DA Nodes (see detail about this approach in api/proto/retriever/retriever.proto). The blob should have been initially dispersed via this Disperser service for this API to work. |


disperser/v2/disperser_v2.proto

Attestation

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| non_signer_pubkeys | bytes | repeated | Serialized bytes of non-signer public keys (G1 points). |
| apk_g2 | bytes | | Serialized bytes of the G2 point that represents the aggregate public key of all signers. |
| quorum_apks | bytes | repeated | Serialized bytes of the aggregate public keys (G1 points) from all nodes for each quorum. The order of the quorum_apks must match the order of the quorum_numbers. |
| sigma | bytes | | Serialized bytes of the aggregate signature. |
| quorum_numbers | uint32 | repeated | Relevant quorum numbers for the attestation. |
| quorum_signed_percentages | bytes | | The attestation rate for each quorum. Each quorum's signing percentage is represented by an 8-bit unsigned integer: the fraction of the quorum that has signed, with 100 representing 100% and 0 representing 0%. The first byte in the byte array corresponds to the first quorum in the quorum_numbers array, the second byte to the second quorum, and so on (see the decoding sketch below). |
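The pairing between quorum_numbers and quorum_signed_percentages is purely positional. A tiny illustrative helper (the function name is ours, not from the EigenDA codebase):

```go
package example

// signedPercentages pairs each quorum number with its signing percentage
// (0–100): byte i of percentages belongs to quorumNumbers[i].
func signedPercentages(quorumNumbers []uint32, percentages []byte) map[uint32]uint8 {
	out := make(map[uint32]uint8, len(quorumNumbers))
	for i, q := range quorumNumbers {
		if i < len(percentages) {
			out[q] = percentages[i]
		}
	}
	return out
}
```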

BlobCommitmentReply

The result of a GetBlobCommitment() call.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| blob_commitment | common.BlobCommitment | | The commitment of the blob. |

BlobCommitmentRequest

The input for a GetBlobCommitment() call. The returned commitment can be used to construct a BlobHeader.commitment.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| blob | bytes | | The blob data to compute the commitment for. |

BlobInclusionInfo

BlobInclusionInfo is the information needed to verify the inclusion of a blob in a batch.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| blob_certificate | common.v2.BlobCertificate | | |
| blob_index | uint32 | | blob_index is the index of the blob in the batch. |
| inclusion_proof | bytes | | inclusion_proof is the inclusion proof of the blob in the batch. |

BlobStatusReply

BlobStatusReply is the reply to a BlobStatusRequest.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| status | BlobStatus | | The status of the blob. |
| signed_batch | SignedBatch | | The signed batch. signed_batch and blob_inclusion_info are only set if the blob status is GATHERING_SIGNATURES or COMPLETE. When the blob is in GATHERING_SIGNATURES status, the attestation object in signed_batch contains the attestation information at that point in time; as more signatures are gathered, the attestation object is updated to the latest attestation status. The client can use such an intermediate attestation to verify a blob if it has gathered enough signatures; otherwise, it should poll the GetBlobStatus API until the desired level of attestation has been gathered or the status is COMPLETE. When the blob is in COMPLETE status, the attestation object in signed_batch contains the final attestation information. If the final attestation does not meet the client's requirements, the client should try a new dispersal. |
| blob_inclusion_info | BlobInclusionInfo | | The information needed to verify the inclusion of the blob in a batch. Only set if the blob status is GATHERING_SIGNATURES or COMPLETE. |

BlobStatusRequest

BlobStatusRequest is used to query the status of a blob.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| blob_key | bytes | | The unique identifier for the blob. |

DisperseBlobReply

A reply to a DisperseBlob request.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| result | BlobStatus | | The status of the blob associated with the blob key. |
| blob_key | bytes | | The unique 32-byte identifier for the blob. The blob_key is the keccak hash of the RLP serialization of the BlobHeader, as computed here: https://github.com/Layr-Labs/eigenda/blob/0f14d1c90b86d29c30ff7e92cbadf2762c47f402/core/v2/serialization.go#L30. The blob_key must thus be unique for every request, even if the same blob is being dispersed, meaning the blob_header must be different for each request. Note that attempting to disperse a blob with the same blob key as a previously dispersed blob may cause the disperser to reject the blob (the DisperseBlob() RPC will return an error). A sketch of the derivation follows this table. |
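The derivation above is keccak256 over the RLP-serialized header. A minimal sketch in Go using go-ethereum's rlp and crypto packages; the blobHeader struct here is an illustrative stand-in, since the canonical field set and ordering live in core/v2/serialization.go (linked above):

```go
package example

import (
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/rlp"
)

// blobHeader is illustrative only; see core/v2/serialization.go for the real type.
type blobHeader struct {
	Version       uint32
	QuorumNumbers []uint32
	// ... commitments, payment header, etc.
}

// blobKey computes keccak256(rlp(header)), per the description above.
func blobKey(h *blobHeader) ([32]byte, error) {
	var key [32]byte
	encoded, err := rlp.EncodeToBytes(h)
	if err != nil {
		return key, err
	}
	copy(key[:], crypto.Keccak256(encoded))
	return key, nil
}
```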

DisperseBlobRequest

A request to disperse a blob.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| blob | bytes | | The blob to be dispersed. This byte array may be of any size as long as it does not exceed the maximum length of 16MiB. While the data being dispersed is only required to be greater than 0 bytes, the blob size charged against the payment method will be rounded up to the nearest multiple of minNumSymbols defined by the payment vault contract (https://github.com/Layr-Labs/eigenda/blob/1430d56258b4e814b388e497320fd76354bfb478/contracts/src/payments/PaymentVaultStorage.sol#L9). Every 32 bytes of data is interpreted as an integer in big-endian format, where the lower address has the more significant bits. Each such integer must stay in the valid range of field elements on the bn254 curve: 0 <= x < 21888242871839275222246405745257275088548364400416034343698204186575808495617. If any one of the 32-byte elements is outside this range, the whole request is deemed invalid and rejected (see the validity-check sketch after this table). |
| blob_header | common.v2.BlobHeader | | The header contains metadata about the blob. This header can be thought of as an "eigenDA tx", in that it plays a purpose similar to an eth_tx for dispersing a 4844 blob: a call to DisperseBlob requires the blob and the blobHeader, much like dispersing a blob to Ethereum requires sending a tx whose data contains the hash of the KZG commitment of the blob, which is dispersed separately. |
| signature | bytes | | Signature over the keccak hash of the blob_header, which can be verified against blob_header.payment_header.account_id. |
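The field-element constraint above is easy to pre-validate client-side. A minimal, dependency-free sketch (function and variable names are ours):

```go
package example

import (
	"fmt"
	"math/big"
)

// bn254Modulus is the scalar field modulus quoted in the table above.
var bn254Modulus, _ = new(big.Int).SetString(
	"21888242871839275222246405745257275088548364400416034343698204186575808495617", 10)

// validateBlob checks that every 32-byte big-endian word of the blob is a
// valid bn254 field element, mirroring the disperser's rejection rule.
func validateBlob(blob []byte) error {
	for off := 0; off < len(blob); off += 32 {
		end := off + 32
		if end > len(blob) {
			end = len(blob) // trailing partial word
		}
		word := new(big.Int).SetBytes(blob[off:end])
		if word.Cmp(bn254Modulus) >= 0 {
			return fmt.Errorf("bytes %d..%d encode a value outside the field range", off, end)
		}
	}
	return nil
}
```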

GetPaymentStateReply

GetPaymentStateReply contains the payment state of an account.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| payment_global_params | PaymentGlobalParams | | Global payment vault parameters. |
| period_records | PeriodRecord | repeated | Off-chain account reservation usage records. |
| reservation | Reservation | | On-chain account reservation setting. |
| cumulative_payment | bytes | | Off-chain on-demand payment usage. |
| onchain_cumulative_payment | bytes | | On-chain on-demand payment deposited. |

GetPaymentStateRequest

GetPaymentStateRequest contains parameters to query the payment state of an account.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| account_id | string | | The ID of the account being queried. This account ID is an eth wallet address of the user. |
| signature | bytes | | Signature over the account ID. |

PaymentGlobalParams

Global constant parameters defined by the payment vault.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| global_symbols_per_second | uint64 | | Global rate limit for on-demand dispersals. |
| min_num_symbols | uint64 | | Minimum number of symbols charged for any dispersal. |
| price_per_symbol | uint64 | | Price charged per symbol for on-demand dispersals. |
| reservation_window | uint64 | | Reservation window for all reservations. |
| on_demand_quorum_numbers | uint32 | repeated | Quorums allowed to make on-demand dispersals. |

PeriodRecord

PeriodRecord is the usage record of an account in a bin. The API should return the active bin record and the subsequent two records, which contain potential overflows.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| index | uint32 | | Period index of the reservation. |
| usage | uint64 | | Symbol usage recorded. |

Reservation

Reservation parameters of an account, used to determine the rate limit for the account.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| symbols_per_second | uint64 | | Rate limit for the account. |
| start_timestamp | uint32 | | Start timestamp of the reservation. |
| end_timestamp | uint32 | | End timestamp of the reservation. |
| quorum_numbers | uint32 | repeated | Quorums allowed to make reserved dispersals. |
| quorum_splits | uint32 | repeated | Quorum splits describe how the payment is split among the quorums. |

SignedBatch

SignedBatch is a batch of blobs with a signature.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| header | common.v2.BatchHeader | | header contains metadata about the batch. |
| attestation | Attestation | | attestation on the batch. |

BlobStatus

BlobStatus represents the status of a blob. The status of a blob is updated as the blob is processed by the disperser, and can be queried by the client using the GetBlobStatus API.

Intermediate states, which the blob passes through while being processed and which can transition to another state:

  • QUEUED
  • ENCODED
  • GATHERING_SIGNATURES

Terminal states, which will not transition to another state:

  • UNKNOWN
  • COMPLETE
  • FAILED

| Name | Number | Description |
| ---- | ------ | ----------- |
| UNKNOWN | 0 | UNKNOWN means that the status of the blob is unknown. This is a catch-all and should not be encountered absent a bug. It is functionally equivalent to FAILED, but indicates that the failure is due to an unanticipated bug. |
| QUEUED | 1 | QUEUED means that the blob has been queued by the disperser for processing. The DisperseBlob API is asynchronous: after request validation, but before any processing, the blob is stored in a queue and a response is immediately returned to the client. |
| ENCODED | 2 | ENCODED means that the blob has been Reed-Solomon encoded into chunks and is ready to be dispersed to DA Nodes. |
| GATHERING_SIGNATURES | 3 | GATHERING_SIGNATURES means that the blob chunks are actively being transmitted to validators, which are in turn asked to sign to acknowledge receipt of the blob. Requests that time out or receive errors are resubmitted to DA nodes for a period of time set by the disperser, after which the BlobStatus becomes COMPLETE. |
| COMPLETE | 4 | COMPLETE means the blob has been dispersed to DA nodes and the GATHERING_SIGNATURES period has ended. This status does not guarantee any signer percentage, so a client should check that the attestation has met its required threshold, and resubmit a new dispersal request if not. |
| FAILED | 5 | FAILED means that the blob has failed permanently. Note that this is a terminal state; to retry, the client must submit the blob again (the blob key is required to be unique). |

Disperser

Disperser defines the public APIs for dispersing blobs.

| Method Name | Request Type | Response Type | Description |
| ----------- | ------------ | ------------- | ----------- |
| DisperseBlob | DisperseBlobRequest | DisperseBlobReply | DisperseBlob accepts a blob to be dispersed. The dispersal executes asynchronously, i.e. the call returns once the request is accepted. The client should use the GetBlobStatus() API to poll the processing status of the blob. |
| GetBlobStatus | BlobStatusRequest | BlobStatusReply | GetBlobStatus is meant to be polled for the blob status (see the polling sketch below). |
| GetBlobCommitment | BlobCommitmentRequest | BlobCommitmentReply | GetBlobCommitment is a utility method that calculates the commitment for a blob payload. |
| GetPaymentState | GetPaymentStateRequest | GetPaymentStateReply | GetPaymentState is a utility method to get the payment state of a given account. |
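A polling sketch for the v2 flow: poll GetBlobStatus until the blob is COMPLETE, or stop early if an intermediate attestation already meets the client's own threshold, per the BlobStatusReply documentation above. The import path is an assumption following the usual protoc-generated layout in the EigenDA repo; message, enum, and getter names follow from the proto definitions on this page.

```go
package example

import (
	"context"
	"fmt"
	"time"

	dispv2 "github.com/Layr-Labs/eigenda/api/grpc/disperser/v2" // assumed import path
)

// waitForAttestation polls until every quorum's signing percentage reaches
// requireSigned (e.g. 67), the client's own policy, or a terminal failure.
func waitForAttestation(ctx context.Context, c dispv2.DisperserClient, blobKey []byte, requireSigned uint8) error {
	for {
		reply, err := c.GetBlobStatus(ctx, &dispv2.BlobStatusRequest{BlobKey: blobKey})
		if err != nil {
			return err
		}
		switch reply.GetStatus() {
		case dispv2.BlobStatus_GATHERING_SIGNATURES, dispv2.BlobStatus_COMPLETE:
			// signed_batch is set in these two states; check the per-quorum percentages.
			pcts := reply.GetSignedBatch().GetAttestation().GetQuorumSignedPercentages()
			ok := len(pcts) > 0
			for _, p := range pcts {
				ok = ok && p >= requireSigned
			}
			if ok {
				return nil // threshold met; safe to proceed with verification
			}
			if reply.GetStatus() == dispv2.BlobStatus_COMPLETE {
				return fmt.Errorf("final attestation below threshold; try a new dispersal")
			}
		case dispv2.BlobStatus_FAILED, dispv2.BlobStatus_UNKNOWN:
			return fmt.Errorf("dispersal failed with status %s", reply.GetStatus())
		}
		time.Sleep(2 * time.Second) // QUEUED / ENCODED / still gathering: keep polling
	}
}
```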


node/node.proto

AttestBatchReply

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| signature | bytes | | |

AttestBatchRequest

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| batch_header | BatchHeader | | Header of the batch. |
| blob_header_hashes | bytes | repeated | The header hashes of all blobs in the batch. |

BatchHeader

BatchHeader (see core/data.go#BatchHeader)

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| batch_root | bytes | | The root of the merkle tree with hashes of blob headers as leaves. |
| reference_block_number | uint32 | | The Ethereum block number at which the batch is dispersed. |

Blob

In EigenDA, the original blob to disperse is encoded as a polynomial by taking different point evaluations (i.e. erasure coding). These points are split into disjoint subsets which are assigned to different operator nodes in the EigenDA network. The data in this message is the subset of these points that are assigned to a single operator node.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| header | BlobHeader | | Which (original) blob this is for. |
| bundles | Bundle | repeated | Each bundle contains all chunks for a single quorum of the blob. The number of bundles must equal the total number of quorums associated with the blob, and the ordering must be the same as BlobHeader.quorum_headers. Note: an operator may be in some but not all of the quorums; in that case the bundle corresponding to that quorum will be empty. |

BlobHeader

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| commitment | common.G1Commitment | | The KZG commitment to the polynomial representing the blob. |
| length_commitment | G2Commitment | | The KZG commitment to the polynomial representing the blob on G2; it is used for proving the degree of the polynomial. |
| length_proof | G2Commitment | | The low degree proof. It is the KZG commitment to the polynomial shifted to the largest SRS degree. |
| length | uint32 | | The length of the original blob in number of symbols (in the field where the polynomial is defined). |
| quorum_headers | BlobQuorumInfo | repeated | The params of the quorums that this blob participates in. |
| account_id | string | | The ID of the user who is dispersing this blob to EigenDA. |
| reference_block_number | uint32 | | The reference block number whose state is used to encode the blob. |

BlobQuorumInfo

See BlobQuorumParam as defined in api/proto/disperser/disperser.proto

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| quorum_id | uint32 | | |
| adversary_threshold | uint32 | | |
| confirmation_threshold | uint32 | | |
| chunk_length | uint32 | | |
| ratelimit | uint32 | | |

Bundle

A Bundle is the collection of chunks associated with a single blob, for a single operator and a single quorum.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| chunks | bytes | repeated | Each chunk corresponds to a collection of points on the polynomial. Each chunk has the same number of points. |
| bundle | bytes | | All chunks of the bundle encoded in a byte array. |

G2Commitment

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| x_a0 | bytes | | The A0 element of the X coordinate of the G2 point. |
| x_a1 | bytes | | The A1 element of the X coordinate of the G2 point. |
| y_a0 | bytes | | The A0 element of the Y coordinate of the G2 point. |
| y_a1 | bytes | | The A1 element of the Y coordinate of the G2 point. |

GetBlobHeaderReply

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| blob_header | BlobHeader | | The header of the blob requested per GetBlobHeaderRequest. |
| proof | MerkleProof | | Merkle proof that the returned blob header belongs to the batch and is the batch's MerkleProof.index-th blob. This can be checked against the batch root onchain. |

GetBlobHeaderRequest

See RetrieveChunksRequest for documentation of each parameter of GetBlobHeaderRequest.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| batch_header_hash | bytes | | |
| blob_index | uint32 | | |
| quorum_id | uint32 | | |

MerkleProof

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| hashes | bytes | repeated | The proof itself. |
| index | uint32 | | Which index (i.e. which leaf of the Merkle tree) this proof is for. |

NodeInfoReply

Node info reply

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| semver | string | | |
| arch | string | | |
| os | string | | |
| num_cpu | uint32 | | |
| mem_bytes | uint64 | | |

NodeInfoRequest

Node info request

RetrieveChunksReply

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| chunks | bytes | repeated | All chunks the Node is storing for the requested blob per RetrieveChunksRequest. |
| chunk_encoding_format | ChunkEncodingFormat | | How the above chunks are encoded. |

RetrieveChunksRequest

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| batch_header_hash | bytes | | The hash of the ReducedBatchHeader defined onchain, see: https://github.com/Layr-Labs/eigenda/blob/master/contracts/src/interfaces/IEigenDAServiceManager.sol#L43. This identifies which batch to retrieve from. |
| blob_index | uint32 | | Which blob in the batch to retrieve (note: a batch is logically an ordered list of blobs). |
| quorum_id | uint32 | | Which quorum of the blob to retrieve for (note: a blob can have multiple quorums, and the chunks for different quorums at a Node can be different). The ID must be in range [0, 254]. |

StoreBlobsReply

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| signatures | google.protobuf.BytesValue | repeated | The operator's BLS signature signed on the blob header hashes. The ordering of the signatures must match the ordering of the blobs sent in the request, with empty signatures in the places for discarded blobs. |

StoreBlobsRequest

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| blobs | Blob | repeated | Blobs to store. |
| reference_block_number | uint32 | | The reference block number whose state is used to encode the blobs. |

StoreChunksReply

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| signature | bytes | | The operator's BLS signature signed on the batch header hash. |

StoreChunksRequest

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| batch_header | BatchHeader | | Which batch this request is for. |
| blobs | Blob | repeated | The chunks for each blob in the batch to be stored in an EigenDA Node. |

ChunkEncodingFormat

This describes how the chunks returned in RetrieveChunksReply are encoded. Used to facilitate the decoding of chunks.

| Name | Number | Description |
| ---- | ------ | ----------- |
| UNKNOWN | 0 | |
| GNARK | 1 | |
| GOB | 2 | |

Dispersal

| Method Name | Request Type | Response Type | Description |
| ----------- | ------------ | ------------- | ----------- |
| StoreChunks | StoreChunksRequest | StoreChunksReply | StoreChunks validates that the chunks match what the Node is supposed to receive (different Nodes are responsible for different chunks, as EigenDA is horizontally sharded) and are correctly coded (e.g. each chunk must be a valid KZG multiproof) according to the EigenDA protocol. It also stores the chunks along with metadata for the protocol-defined length of custody. It returns a signature attesting to the data in the request it has processed. |
| StoreBlobs | StoreBlobsRequest | StoreBlobsReply | StoreBlobs is similar to StoreChunks, but stores the blobs using a different storage schema so that the stored blobs can later be aggregated by the AttestBatch method into a bigger batch. StoreBlobs + AttestBatch was intended to eventually replace and deprecate the StoreChunks method. DEPRECATED: the StoreBlobs method is not used. |
| AttestBatch | AttestBatchRequest | AttestBatchReply | AttestBatch is used to aggregate the batches stored by the StoreBlobs method into a bigger batch. It returns a signature attesting to the aggregated batch. DEPRECATED: the AttestBatch method is not used. |
| NodeInfo | NodeInfoRequest | NodeInfoReply | Retrieve node info metadata. |

Retrieval

| Method Name | Request Type | Response Type | Description |
| ----------- | ------------ | ------------- | ----------- |
| RetrieveChunks | RetrieveChunksRequest | RetrieveChunksReply | RetrieveChunks retrieves the chunks for a blob custodied at the Node. |
| GetBlobHeader | GetBlobHeaderRequest | GetBlobHeaderReply | GetBlobHeader is similar to RetrieveChunks, but only returns the header of the blob. |
| NodeInfo | NodeInfoRequest | NodeInfoReply | Retrieve node info metadata. |


relay/relay.proto

ChunkRequest

A request for chunks within a specific blob. Requests are fulfilled in all-or-nothing fashion. If any of the requested chunks are not found or are unable to be fetched, the entire request will fail.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| by_index | ChunkRequestByIndex | | Request chunks by their individual indices. |
| by_range | ChunkRequestByRange | | Request chunks by a range of indices. |

ChunkRequestByIndex

A request for chunks within a specific blob. Each chunk is requested individually by its index.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| blob_key | bytes | | The blob key. |
| chunk_indices | uint32 | repeated | The indices of the chunks to fetch within the blob. |

ChunkRequestByRange

A request for chunks within a specific blob. Chunks are requested by a contiguous range of indices.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| blob_key | bytes | | The blob key. |
| start_index | uint32 | | The first index to start fetching chunks from. |
| end_index | uint32 | | One past the last index to fetch chunks from. Similar semantics to golang slices (see the sketch below). |
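Since end_index is exclusive, translating a by-range request into concrete chunk indices mirrors Go's half-open slice bounds. A tiny illustrative helper:

```go
package example

// rangeIndices expands [startIndex, endIndex) into explicit chunk indices,
// matching the half-open semantics described above: endIndex is excluded.
func rangeIndices(startIndex, endIndex uint32) []uint32 {
	idx := make([]uint32, 0, endIndex-startIndex)
	for i := startIndex; i < endIndex; i++ {
		idx = append(idx, i)
	}
	return idx
}
```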

GetBlobReply

The reply to a GetBlob request.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| blob | bytes | | The blob requested. |

GetBlobRequest

A request to fetch a blob.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| blob_key | bytes | | The key of the blob to fetch. |

GetChunksReply

The reply to a GetChunks request.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| data | bytes | repeated | The chunks requested, in the same order as the requested chunks. data is the raw data of the bundle (i.e. the serialized byte array of the frames). |

GetChunksRequest

Request chunks from blobs stored by this relay.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| chunk_requests | ChunkRequest | repeated | The chunk requests. Chunks are returned in the same order as they are requested. |
| operator_id | bytes | | If this is an authenticated request, this should hold the ID of the operator. If this is an unauthenticated request, this field should be empty. Relays may choose to reject unauthenticated requests. |
| timestamp | uint32 | | Timestamp of the request in seconds since the Unix epoch. If too far out of sync with the server's clock, the request may be rejected. |
| operator_signature | bytes | | If this is an authenticated request, this field holds a BLS signature by the requester over the hash of this request. Relays may choose to reject unauthenticated requests. |

The following describes the schema for computing the hash of this request. The algorithm is implemented in golang in relay.auth.HashGetChunksRequest(). All integers are encoded as unsigned 4-byte big-endian values.

Perform a keccak256 hash over the following data, in the following order:

  1. the length of the operator ID in bytes
  2. the operator ID
  3. the number of chunk requests
  4. for each chunk request:
     a. if the chunk request is a request by index:
        i. a one-byte ASCII representation of the character "i" (i.e. 0x69)
        ii. the length of the blob key in bytes
        iii. the blob key
        iv. each requested chunk index, in order
     b. if the chunk request is a request by range:
        i. a one-byte ASCII representation of the character "r" (i.e. 0x72)
        ii. the length of the blob key in bytes
        iii. the blob key
        iv. the start index
        v. the end index
  5. the timestamp (seconds since the Unix epoch, encoded as a 4-byte big-endian value)
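A sketch of that digest in Go, mirroring the schema above; the flattened chunkRequest struct and function names here are illustrative (the authoritative implementation is relay.auth.HashGetChunksRequest() in the EigenDA repo):

```go
package example

import (
	"encoding/binary"
	"hash"

	"golang.org/x/crypto/sha3"
)

// chunkRequest is an illustrative flattening of the ChunkRequest oneof above.
type chunkRequest struct {
	blobKey    []byte
	isRange    bool
	byIndex    []uint32 // set for by-index requests
	start, end uint32   // set for by-range requests
}

// u32be writes an unsigned 4-byte big-endian integer into the hash.
func u32be(h hash.Hash, v uint32) {
	var b [4]byte
	binary.BigEndian.PutUint32(b[:], v)
	h.Write(b[:])
}

func hashGetChunksRequest(operatorID []byte, reqs []chunkRequest, timestamp uint32) []byte {
	h := sha3.NewLegacyKeccak256()
	u32be(h, uint32(len(operatorID))) // 1. length of the operator ID
	h.Write(operatorID)               // 2. the operator ID
	u32be(h, uint32(len(reqs)))       // 3. number of chunk requests
	for _, r := range reqs {          // 4. each chunk request
		if !r.isRange {
			h.Write([]byte{'i'}) // by index: 0x69
			u32be(h, uint32(len(r.blobKey)))
			h.Write(r.blobKey)
			for _, idx := range r.byIndex { // each requested index, in order
				u32be(h, idx)
			}
		} else {
			h.Write([]byte{'r'}) // by range: 0x72
			u32be(h, uint32(len(r.blobKey)))
			h.Write(r.blobKey)
			u32be(h, r.start)
			u32be(h, r.end)
		}
	}
	u32be(h, timestamp) // 5. the timestamp
	return h.Sum(nil)
}
```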

Relay

Relay is a service that provides access to public relay functionality.

| Method Name | Request Type | Response Type | Description |
| ----------- | ------------ | ------------- | ----------- |
| GetBlob | GetBlobRequest | GetBlobReply | GetBlob retrieves a blob stored by the relay. |
| GetChunks | GetChunksRequest | GetChunksReply | GetChunks retrieves chunks from blobs stored by the relay. |


retriever/retriever.proto

BlobReply

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| data | bytes | | The blob retrieved and reconstructed from the EigenDA Nodes per BlobRequest. |

BlobRequest

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| batch_header_hash | bytes | | The hash of the ReducedBatchHeader defined onchain, see: https://github.com/Layr-Labs/eigenda/blob/master/contracts/src/interfaces/IEigenDAServiceManager.sol#L43. This identifies the batch that this blob belongs to. |
| blob_index | uint32 | | Which blob in the batch is being requested (note: a batch is logically an ordered list of blobs). |
| reference_block_number | uint32 | | The Ethereum block number at which the batch for this blob was constructed. |
| quorum_id | uint32 | | Which quorum of the blob is being requested (note: a blob can participate in multiple quorums). |

Retriever

The Retriever is a service for retrieving the chunks corresponding to a blob from the EigenDA operator nodes and reconstructing the original blob from those chunks. It is a client-side library that users are expected to operate themselves.

Note: Users generally have two ways to retrieve a blob from EigenDA:

  1. Retrieve from the Disperser that the user initially used for dispersal: the API is Disperser.RetrieveBlob() as defined in api/proto/disperser/disperser.proto
  2. Retrieve directly from the EigenDA Nodes, which is supported by this Retriever.

Disperser.RetrieveBlob() (the 1st approach) is generally faster and cheaper, as the Disperser manages the blobs that it has processed, whereas Retriever.RetrieveBlob() (the 2nd approach) removes the need to trust the Disperser, at the cost of worse performance and higher retrieval cost.

| Method Name | Request Type | Response Type | Description |
| ----------- | ------------ | ------------- | ----------- |
| RetrieveBlob | BlobRequest | BlobReply | RetrieveBlob fans out requests to the EigenDA Nodes to retrieve the chunks, and returns the reconstructed original blob in response. |


retriever/v2/retriever_v2.proto

BlobReply

A reply to a RetrieveBlob() request.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| data | bytes | | The blob retrieved and reconstructed from the EigenDA Nodes per BlobRequest. |

BlobRequest

A request to retrieve a blob from the EigenDA Nodes via RetrieveBlob().

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| blob_header | common.v2.BlobHeader | | Header of the blob to be retrieved. |
| reference_block_number | uint32 | | The Ethereum block number at which the batch for this blob was constructed. |
| quorum_id | uint32 | | Which quorum of the blob is being requested (note: a blob can participate in multiple quorums). |

Retriever

The Retriever is a service for retrieving the chunks corresponding to a blob from the EigenDA operator nodes and reconstructing the original blob from those chunks. It is a client-side library that users are expected to operate themselves.

Note: Users generally have two ways to retrieve a blob from EigenDA V2:

  1. Retrieve from the relay that the blob is assigned to: the API is Relay.GetBlob() as defined in api/proto/relay/relay.proto
  2. Retrieve directly from the EigenDA Nodes, which is supported by this Retriever.

Relay.GetBlob() (the 1st approach) is generally faster and cheaper, as the relay manages the blobs that it has processed, whereas Retriever.RetrieveBlob() (the 2nd approach) removes the need to trust the relay, at the cost of worse performance and higher retrieval cost.

| Method Name | Request Type | Response Type | Description |
| ----------- | ------------ | ------------- | ----------- |
| RetrieveBlob | BlobRequest | BlobReply | RetrieveBlob fans out requests to the EigenDA Nodes to retrieve the chunks, and returns the reconstructed original blob in response. |


validator/node_v2.proto

GetChunksReply

The response to the GetChunks() RPC.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| chunks | bytes | repeated | All chunks the Node is storing for the requested blob per GetChunksRequest. |
| chunk_encoding_format | ChunkEncodingFormat | | The format in which the above chunks are encoded. |

GetChunksRequest

The parameter for the GetChunks() RPC.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| blob_key | bytes | | The unique identifier for the blob whose chunks are being requested. The blob_key is the keccak hash of the RLP serialization of the BlobHeader, as computed here: https://github.com/Layr-Labs/eigenda/blob/0f14d1c90b86d29c30ff7e92cbadf2762c47f402/core/v2/serialization.go#L30 |
| quorum_id | uint32 | | Which quorum of the blob to retrieve for (note: a blob can have multiple quorums, and the chunks for different quorums at a Node can be different). The ID must be in range [0, 254]. |

GetNodeInfoReply

Node info reply

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| semver | string | | The version of the node. |
| arch | string | | The architecture of the node. |
| os | string | | The operating system of the node. |
| num_cpu | uint32 | | The number of CPUs on the node. |
| mem_bytes | uint64 | | The amount of memory on the node in bytes. |

GetNodeInfoRequest

The parameter for the GetNodeInfo() RPC.

StoreChunksReply

StoreChunksReply is the message type used to respond to a StoreChunks() RPC.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| signature | bytes | | The validator's BLS signature signed on the batch header hash. |

StoreChunksRequest

Request that the Node store a batch of chunks.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| batch | common.v2.Batch | | Batch of blobs to store. |
| disperserID | uint32 | | ID of the disperser that is requesting storage of the batch. |
| timestamp | uint32 | | Timestamp of the request in seconds since the Unix epoch. If too far out of sync with the server's clock, the request may be rejected. |
| signature | bytes | | Signature using the disperser's ECDSA key over the keccak hash of the batch. The purpose of this signature is to prevent hooligans from tricking validators into storing data that they shouldn't be storing. |

The algorithm for computing the hash is as follows. All integer values are serialized in big-endian order (unsigned unless noted otherwise). A reference implementation (golang) can be found at https://github.com/Layr-Labs/eigenda/blob/master/disperser/auth/request_signing.go

  1. digest len(batch.BatchHeader.BatchRoot) (4 bytes, unsigned big endian)
  2. digest batch.BatchHeader.BatchRoot
  3. digest batch.BatchHeader.ReferenceBlockNumber (8 bytes, unsigned big endian)
  4. digest len(batch.BlobCertificates) (4 bytes, unsigned big endian)
  5. for each certificate in batch.BlobCertificates:
     a. digest certificate.BlobHeader.Version (4 bytes, unsigned big endian)
     b. digest len(certificate.BlobHeader.QuorumNumbers) (4 bytes, unsigned big endian)
     c. for each quorum_number in certificate.BlobHeader.QuorumNumbers:
        i. digest quorum_number (4 bytes, unsigned big endian)
     d. digest len(certificate.BlobHeader.Commitment.Commitment) (4 bytes, unsigned big endian)
     e. digest certificate.BlobHeader.Commitment.Commitment
     f. digest len(certificate.BlobHeader.Commitment.LengthCommitment) (4 bytes, unsigned big endian)
     g. digest certificate.BlobHeader.Commitment.LengthCommitment
     h. digest len(certificate.BlobHeader.Commitment.LengthProof) (4 bytes, unsigned big endian)
     i. digest certificate.BlobHeader.Commitment.LengthProof
     j. digest certificate.BlobHeader.Commitment.Length (4 bytes, unsigned big endian)
     k. digest len(certificate.BlobHeader.PaymentHeader.AccountId) (4 bytes, unsigned big endian)
     l. digest certificate.BlobHeader.PaymentHeader.AccountId
     m. digest certificate.BlobHeader.PaymentHeader.Timestamp (4 bytes, signed big endian)
     n. digest len(certificate.BlobHeader.PaymentHeader.CumulativePayment) (4 bytes, unsigned big endian)
     o. digest certificate.BlobHeader.PaymentHeader.CumulativePayment
     p. digest len(certificate.BlobHeader.Signature) (4 bytes, unsigned big endian)
     q. digest certificate.BlobHeader.Signature
     r. digest len(certificate.Relays) (4 bytes, unsigned big endian)
     s. for each relay in certificate.Relays:
        i. digest relay (4 bytes, unsigned big endian)
  6. digest disperserID (4 bytes, unsigned big endian)
  7. digest timestamp (4 bytes, unsigned big endian)

Note that the signature itself is not included in the hash, for obvious reasons.
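A skeleton of the two serialization conventions the algorithm above relies on (length-prefixed byte fields and fixed-width big-endian integers), sketched against steps 1–4 and 6–7; the per-certificate fields of step 5 follow the same two helpers. Function names are ours; the authoritative implementation is linked above.

```go
package example

import (
	"encoding/binary"
	"hash"

	"golang.org/x/crypto/sha3"
)

// digestU32 writes an unsigned 4-byte big-endian integer into the hash.
func digestU32(h hash.Hash, v uint32) {
	var b [4]byte
	binary.BigEndian.PutUint32(b[:], v)
	h.Write(b[:])
}

// digestBytes writes a 4-byte length prefix followed by the bytes themselves.
func digestBytes(h hash.Hash, b []byte) {
	digestU32(h, uint32(len(b)))
	h.Write(b)
}

// hashStoreChunksRequest sketches the outer digest for a batch with no blob
// certificates; step 5 would insert the per-certificate fields before step 6.
func hashStoreChunksRequest(batchRoot []byte, refBlockNumber uint64, disperserID, timestamp uint32) []byte {
	h := sha3.NewLegacyKeccak256()
	digestBytes(h, batchRoot) // steps 1–2
	var rbn [8]byte           // step 3: 8 bytes, unsigned big endian
	binary.BigEndian.PutUint64(rbn[:], refBlockNumber)
	h.Write(rbn[:])
	digestU32(h, 0)           // step 4: len(BlobCertificates) == 0 in this sketch
	digestU32(h, disperserID) // step 6
	digestU32(h, timestamp)   // step 7
	return h.Sum(nil)
}
```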

ChunkEncodingFormat

This describes how the chunks returned in GetChunksReply are encoded. Used to facilitate the decoding of chunks.

| Name | Number | Description |
| ---- | ------ | ----------- |
| UNKNOWN | 0 | A valid response should never use this value. If encountered, the client should treat it as an error. |
| GNARK | 1 | A chunk encoded in GNARK has the following format: [KZG proof: 32 bytes][Coeff 1: 32 bytes][Coeff 2: 32 bytes]...[Coeff n: 32 bytes]. The KZG proof is a point on G1 and is serialized with bn254.G1Affine.Bytes(). The coefficients are field elements in bn254 and are serialized with fr.Element.Marshal(). References: bn254.G1Affine: github.com/consensys/gnark-crypto/ecc/bn254; fr.Element: github.com/consensys/gnark-crypto/ecc/bn254/fr. Golang serialization and deserialization can be found in Frame.SerializeGnark() and Frame.DeserializeGnark() in package github.com/Layr-Labs/eigenda/encoding. |
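Splitting a GNARK-encoded chunk per the layout above is plain byte slicing; deserializing the pieces into curve/field types is then done with bn254.G1Affine.SetBytes and fr.Element.SetBytes from gnark-crypto (see the references above). A minimal sketch with our own function name:

```go
package example

import "fmt"

// splitGnarkChunk separates a GNARK-encoded chunk into its 32-byte KZG proof
// and its 32-byte coefficient words, per the layout described above.
func splitGnarkChunk(chunk []byte) (proof []byte, coeffs [][]byte, err error) {
	const word = 32
	if len(chunk) < word || len(chunk)%word != 0 {
		return nil, nil, fmt.Errorf("chunk length %d is not a positive multiple of 32", len(chunk))
	}
	proof = chunk[:word] // serialized G1 point (bn254.G1Affine.Bytes() form)
	for off := word; off < len(chunk); off += word {
		coeffs = append(coeffs, chunk[off:off+word]) // fr.Element.Marshal() form
	}
	return proof, coeffs, nil
}
```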

Dispersal

Dispersal is utilized to disperse chunk data.

| Method Name | Request Type | Response Type | Description |
| ----------- | ------------ | ------------- | ----------- |
| StoreChunks | StoreChunksRequest | StoreChunksReply | StoreChunks instructs the validator to store a batch of chunks. The call blocks until the validator either acquires the chunks or determines that it is unable to acquire them. If the validator is able to acquire and validate the chunks, it returns a signature over the batch header. This RPC describes which chunks the validator should store, but does not itself carry the chunk data; the validator is expected to fetch the chunk data from one of the relays that is in possession of the chunks. |
| GetNodeInfo | GetNodeInfoRequest | GetNodeInfoReply | GetNodeInfo fetches metadata about the node. |

Retrieval

Retrieval is utilized to retrieve chunk data.

| Method Name | Request Type | Response Type | Description |
| ----------- | ------------ | ------------- | ----------- |
| GetChunks | GetChunksRequest | GetChunksReply | GetChunks retrieves the chunks for a blob custodied at the Node. Note that where possible, it is generally faster to retrieve chunks from the relay service, if it is available. |
| GetNodeInfo | GetNodeInfoRequest | GetNodeInfoReply | Retrieve node info metadata. |

Scalar Value Types

| .proto Type | Notes | C++ | Java | Python | Go | C# | PHP | Ruby |
| ----------- | ----- | --- | ---- | ------ | -- | -- | --- | ---- |
| double | | double | double | float | float64 | double | float | Float |
| float | | float | float | float | float32 | float | float | Float |
| int32 | Uses variable-length encoding. Inefficient for encoding negative numbers – if your field is likely to have negative values, use sint32 instead. | int32 | int | int | int32 | int | integer | Bignum or Fixnum (as required) |
| int64 | Uses variable-length encoding. Inefficient for encoding negative numbers – if your field is likely to have negative values, use sint64 instead. | int64 | long | int/long | int64 | long | integer/string | Bignum |
| uint32 | Uses variable-length encoding. | uint32 | int | int/long | uint32 | uint | integer | Bignum or Fixnum (as required) |
| uint64 | Uses variable-length encoding. | uint64 | long | int/long | uint64 | ulong | integer/string | Bignum or Fixnum (as required) |
| sint32 | Uses variable-length encoding. Signed int value. These more efficiently encode negative numbers than regular int32s. | int32 | int | int | int32 | int | integer | Bignum or Fixnum (as required) |
| sint64 | Uses variable-length encoding. Signed int value. These more efficiently encode negative numbers than regular int64s. | int64 | long | int/long | int64 | long | integer/string | Bignum |
| fixed32 | Always four bytes. More efficient than uint32 if values are often greater than 2^28. | uint32 | int | int | uint32 | uint | integer | Bignum or Fixnum (as required) |
| fixed64 | Always eight bytes. More efficient than uint64 if values are often greater than 2^56. | uint64 | long | int/long | uint64 | ulong | integer/string | Bignum |
| sfixed32 | Always four bytes. | int32 | int | int | int32 | int | integer | Bignum or Fixnum (as required) |
| sfixed64 | Always eight bytes. | int64 | long | int/long | int64 | long | integer/string | Bignum |
| bool | | bool | boolean | boolean | bool | bool | boolean | TrueClass/FalseClass |
| string | A string must always contain UTF-8 encoded or 7-bit ASCII text. | string | String | str/unicode | string | string | string | String (UTF-8) |
| bytes | May contain any arbitrary sequence of bytes. | string | ByteString | str | []byte | ByteString | string | String (ASCII-8BIT) |


churner/churner.proto

ChurnReply

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| signature_with_salt_and_expiry | SignatureWithSaltAndExpiry | | The signature signed by the Churner. |
| operators_to_churn | OperatorToChurn | repeated | A list of existing operators to churn out. This list covers all quorums specified in the ChurnRequest, even if some of those quorums have no churned-out operators. If a quorum has available space, its OperatorToChurn object will contain the quorum ID with an empty operator and pubkey; the smart contract should only churn out operators for quorums that are full. For example, if the ChurnRequest specifies quorums 0 and 1, where quorum 0 is full and quorum 1 has available space, the ChurnReply will contain two OperatorToChurn objects for the respective quorums: the one for quorum 0 will contain the operator to churn out, while the one for quorum 1 will contain an empty operator (zero address) and pubkey. The smart contract should only churn out the operator for quorum 0, because quorum 1 has available space without churning. Note: it is possible for an operator to be churned out for just one or more quorums, rather than being churned out of all quorums entirely. |

ChurnRequest

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| operator_address | string | | The Ethereum address (in hex, e.g. "0x123abcdef...") of the operator. |
| operator_to_register_pubkey_g1 | bytes | | The G1 public key of the operator making the churn request. |
| operator_to_register_pubkey_g2 | bytes | | |
| operator_request_signature | bytes | | The operator's BLS signature signed on the keccak256 hash of concat("ChurnRequest", operator address, g1, g2, salt). |
| salt | bytes | | The salt used as part of the message to sign for operator_request_signature. |
| quorum_ids | uint32 | repeated | The quorums to register for. Note: if any of these quorums has already been registered, or if any of them fails to register, the entire request will fail. Regardless of whether the specified quorums are full or not, the Churner will return parameters for all quorums specified here; the smart contract determines whether it needs to churn out existing operators based on whether the quorums have available space. The IDs must be in range [0, 254]. |

OperatorToChurn

This describes an operator to churn out for a quorum.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| quorum_id | uint32 | | The ID of the quorum of the operator to churn out. |
| operator | bytes | | The address of the operator. |
| pubkey | bytes | | The BLS pubkey (G1 point) of the operator. |

SignatureWithSaltAndExpiry

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| signature | bytes | | The Churner's signature on the operator's attributes. |
| salt | bytes | | The salt is the keccak256 hash of concat("churn", time.Now(), operatorToChurn's OperatorID, the Churner's ECDSA private key). |
| expiry | int64 | | When this churn decision will expire. |

Churner

The Churner is a service that handles churn requests from new operators trying to join the EigenDA network. When the EigenDA network reaches the maximum number of operators, any new operator trying to join will have to make a churn request to this Churner, which acts as the sole decision maker to decide whether this new operator could join, and if so, which existing operator will be churned out (so the max number of operators won't be exceeded). The max number of operators, as well as the rules to make churn decisions, are defined onchain, see details in OperatorSetParam at: https://github.com/Layr-Labs/eigenlayer-middleware/blob/master/src/interfaces/IBLSRegistryCoordinatorWithIndices.sol#L24.

| Method Name | Request Type | Response Type | Description |
| ----------- | ------------ | ------------- | ----------- |
| Churn | ChurnRequest | ChurnReply | |


common/common.proto

BlobCommitment

BlobCommitment represents the commitment of a specific blob, containing its KZG commitment, degree proof, the actual degree, and the data length in number of symbols.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| commitment | bytes | | A commitment to the blob data. |
| length_commitment | bytes | | A commitment to the blob data with the G2 SRS, used together with length_proof so that the claimed length below is verifiable. |
| length_proof | bytes | | A proof that the degree of the polynomial used to generate the blob commitment is valid. It is computed by committing to the coefficients of the polynomial with the G2 SRS shifted to the highest orders. |
| length | uint32 | | The length specifies the degree of the polynomial used to generate the blob commitment. The length must equal degree + 1, and it must be a power of 2 (see the check below). |
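The power-of-two constraint on length is a one-line bit check. A tiny illustrative helper:

```go
package example

// validCommitmentLength checks the constraint described above: length must be
// a nonzero power of two (exactly one bit set), and equals degree + 1.
func validCommitmentLength(length uint32) bool {
	return length != 0 && length&(length-1) == 0
}
```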

G1Commitment

A KZG commitment

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| x | bytes | | The X coordinate of the KZG commitment. This is the raw byte representation of the field element. |
| y | bytes | | The Y coordinate of the KZG commitment. This is the raw byte representation of the field element. |


common/v2/common_v2.proto

Batch

Batch is a batch of blob certificates.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| header | BatchHeader | | header contains metadata about the batch. |
| blob_certificates | BlobCertificate | repeated | blob_certificates is the list of blob certificates in the batch. |

BatchHeader

BatchHeader is the header of a batch of blobs.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| batch_root | bytes | | batch_root is the root of the merkle tree of the hashes of the blob certificates in the batch. |
| reference_block_number | uint64 | | reference_block_number is the block number that the state of the batch is based on for attestation. |

BlobCertificate

BlobCertificate contains a full description of a blob and how it is dispersed. Part of the certificate is provided by the blob submitter (i.e. the blob header), and part is provided by the disperser (i.e. the relays). Validator nodes eventually sign the blob certificate once they are in custody of the required chunks (note that the signature is indirect; validators sign the hash of a Batch, which contains the blob certificate).

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| blob_header | BlobHeader | | blob_header contains data about the blob. |
| signature | bytes | | signature is an ECDSA signature by the blob request signer's account over the BlobHeader's blobKey (the keccak hash of the serialized BlobHeader); it is used to verify against the blob dispersal request's account ID. |
| relay_keys | uint32 | repeated | relay_keys is the list of relay keys that are in custody of the blob. The relays custodying the data are chosen by the Disperser to which the DisperseBlob request was submitted. It needs to contain at least 1 relay number. To retrieve a blob from a relay, one can find that relay's URL in the EigenDARelayRegistry contract: https://github.com/Layr-Labs/eigenda/blob/master/contracts/src/core/EigenDARelayRegistry.sol |

BlobHeader

BlobHeader contains the information describing a blob and the way it is to be dispersed.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| version | uint32 | | The blob version. Blob versions are pushed onchain by EigenDA governance in an append-only fashion and store the maximum number of operators, number of chunks, and coding rate for a blob. On blob verification, these values are checked against supplied or default security thresholds to validate the security assumptions of the blob's availability. |
| quorum_numbers | uint32 | repeated | quorum_numbers is the list of quorum numbers that the blob is part of. Each quorum will store the data, so adding quorum numbers adds redundancy, making the blob more likely to be retrievable. Each quorum requires separate payment. On-demand dispersal is currently limited to using a subset of the following quorums: 0 (ETH) and 1 (EIGEN). Reserved-bandwidth dispersal is free to use multiple quorums, but these must be reserved ahead of time, and the quorum_numbers specified here must be a subset of the ones allowed by the on-chain reservation. Check the allowed quorum numbers by looking up the reservation struct: https://github.com/Layr-Labs/eigenda/blob/1430d56258b4e814b388e497320fd76354bfb478/contracts/src/interfaces/IPaymentVault.sol#L10 |
| commitment | common.BlobCommitment | | commitment is the KZG commitment to the blob. |
| payment_header | PaymentHeader | | payment_header contains payment information for the blob. |

PaymentHeader

PaymentHeader contains payment information for a blob. At least one of reservation_period or cumulative_payment must be set, and reservation_period is always considered before cumulative_payment. If reservation_period is set but not valid, the server rejects the request and does not proceed with dispersal. If reservation_period is not set and cumulative_payment is set but not valid, the server likewise rejects the request. Once the server has accepted the payment header, a client cannot cancel or roll back the payment. Every dispersal request is charged as a multiple of the minNumSymbols field defined by the payment vault contract: if the request's blob size is smaller than, or not a multiple of, minNumSymbols, the server charges the user for the next multiple of minNumSymbols (https://github.com/Layr-Labs/eigenda/blob/1430d56258b4e814b388e497320fd76354bfb478/contracts/src/payments/PaymentVaultStorage.sol#L9).

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| account_id | string | | The account ID of the disperser client. This account ID is an eth wallet address of the user, corresponding to the key used by the client to sign the BlobHeader. |
| timestamp | int64 | | The timestamp should be set to the UNIX timestamp in nanoseconds at the time of the dispersal request. It is used to determine the reservation period and is compared against the reservation's active start and end timestamps. (On-chain reservation timestamps are in units of seconds, while the payment header timestamp is in nanoseconds for greater precision.) If the timestamp is not set, or is not part of the previous or current reservation period, the request will be rejected. The reservation period of the dispersal request is used for rate-limiting the user's account against their dedicated bandwidth. This method requires users to set up a reservation account with the EigenDA team, who will create an on-chain record of reserved bandwidth for the user for some period of time. The dispersal client's accountant sets this value to the current timestamp in nanoseconds. The disperser server finds the corresponding reservation period by taking the nearest lower multiple of the on-chain configured reservation period interval, mapping each request to a time-based window (serialized and parsed as a uint32). The disperser server then validates that it matches either the current or the previous period, and checks it against the user's reserved bandwidth. |
| cumulative_payment | bytes | | The cumulative payment is the total amount of tokens paid by the requesting account, including the current request. This value is serialized as a uint256 and parsed as a big integer, and must match the user's on-chain deposit limits as well as the recorded payments for all previous requests. Because it is a cumulative (not incremental) total, requests can arrive out of order and still unambiguously declare how much of the on-chain deposit can be deducted. |

Example usage flow for reservation-based payment:

  1. The user sets up a reservation with the EigenDA team, including throughput (symbolsPerSecond), startTimestamp, endTimestamp, and reservationPeriodInterval.
  2. When sending a dispersal request at time t, the client fills in the timestamp field with t.
  3. The disperser takes timestamp t and checks the reservation period and the user's bandwidth capacity (see the arithmetic sketch after these flows):
     - The reservation must be active (t >= startTimestamp and t < endTimestamp).
     - After rounding up to the nearest multiple of minNumSymbols defined by the payment vault contract, the user must still have enough bandwidth capacity (must not have exceeded symbolsPerSecond * reservationPeriodInterval).
     - The request is rate-limited against the current reservation period, calculated as reservation_period = floor(t / reservationPeriodInterval) * reservationPeriodInterval. The request's reservation period must be either the disperser server's current reservation period or the previous one.
  4. The server always records the received request in the current reservation period, and then categorizes the scenarios:
     - If the remaining bandwidth is sufficient for the request, the dispersal request proceeds.
     - If the remaining bandwidth is not enough for the request, the server fills up the current bin and overflows the extra into a future bin.
     - If the bandwidth has already been exhausted, the request is rejected.
  5. Once the dispersal request signature has been verified, the server will not roll back the payment or the usage records. Users should be aware of this when planning their usage. The dispersal client written by the EigenDA team takes account of this.
  6. When the reservation ends or usage is exhausted, the client must wait for the next reservation period or switch to on-demand.

Example decision flow for on-demand payment:

  1. In the setup phase, the user must deposit tokens into the EigenDA PaymentVault contract. The payment vault contract specifies the minimum number of symbols charged per dispersal, the price per symbol, and the maximum global rate for on-demand dispersals. The user should calculate the amount of tokens to deposit based on their expected usage. The first time a user makes a request, the server immediately reads the contract for the on-chain balance. When a user runs out of on-chain balance, the server rejects the request and does not proceed with dispersal. When a user tops up on-chain, the server only refreshes every few minutes, so the top-up takes some time to take effect.
  2. The disperser client accounts for how many tokens it has already paid (previousCumPmt).
  3. It calculates the payment by rounding the blob size up to the nearest multiple of minNumSymbols defined by the payment vault contract, and computes the incremental amount of tokens needed for the current request based on the protocol-defined pricing.
  4. It takes the sum previousCumPmt + the new incremental payment and places it in the cumulative_payment field.
  5. The disperser checks this new cumulative total against on-chain deposits and prior records (the largest previous payment, and the smallest later payment if one exists).
  6. If the payment is valid, the request is confirmed and the disperser proceeds with dispersal; otherwise it is rejected.
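The arithmetic in both flows is simple enough to sketch directly. A hedged Go illustration of the reservation-period mapping and the cumulative-payment extension; minNumSymbols, pricePerSymbol, and reservationPeriodInterval come from the payment vault / global params above, and the function names are ours:

```go
package example

import "math/big"

// reservationPeriod maps a dispersal timestamp to its reservation period:
// reservation_period = floor(t / interval) * interval, with t converted from
// nanoseconds (the payment header unit) to seconds (the on-chain unit).
func reservationPeriod(tNanos int64, intervalSeconds uint64) uint64 {
	tSeconds := uint64(tNanos / 1_000_000_000)
	return tSeconds / intervalSeconds * intervalSeconds
}

// roundUpToMultiple rounds a blob's symbol count up to the charged size,
// the next multiple of minNumSymbols.
func roundUpToMultiple(numSymbols, minNumSymbols uint64) uint64 {
	return (numSymbols + minNumSymbols - 1) / minNumSymbols * minNumSymbols
}

// nextCumulativePayment computes the value for the cumulative_payment field:
// previous cumulative total plus chargedSymbols * pricePerSymbol.
func nextCumulativePayment(prev *big.Int, numSymbols, minNumSymbols, pricePerSymbol uint64) *big.Int {
	charged := new(big.Int).SetUint64(roundUpToMultiple(numSymbols, minNumSymbols))
	charged.Mul(charged, new(big.Int).SetUint64(pricePerSymbol))
	return charged.Add(charged, prev)
}
```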


disperser/disperser.proto

AuthenticatedReply

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| blob_auth_header | BlobAuthHeader | | |
| disperse_reply | DisperseBlobReply | | |

AuthenticatedRequest

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| disperse_request | DisperseBlobRequest | | |
| authentication_data | AuthenticationData | | |

AuthenticationData

AuthenticationData contains the signature of the BlobAuthHeader.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| authentication_data | bytes | | |

BatchHeader

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| batch_root | bytes | | The root of the merkle tree with the hashes of blob headers as leaves. |
| quorum_numbers | bytes | | All quorums associated with blobs in this batch, sorted in ascending order, e.g. quorums [0, 2, 1] are serialized as 0x000102. |
| quorum_signed_percentages | bytes | | The percentage of stake that has signed for this batch. quorum_signed_percentages[i] is the percentage for quorum_numbers[i]. |
| reference_block_number | uint32 | | The Ethereum block number at which the batch was created. The Disperser will encode and disperse the blobs based on the onchain info (e.g. operator stakes) at this block number. |

BatchMetadata

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| batch_header | BatchHeader | | |
| signatory_record_hash | bytes | | The hash of all public keys of the operators that did not sign the batch. |
| fee | bytes | | The fee payment paid by users for dispersing this batch. It is the bytes representation of a big.Int value. |
| confirmation_block_number | uint32 | | The Ethereum block number at which the batch is confirmed onchain. |
| batch_header_hash | bytes | | The hash of the ReducedBatchHeader defined onchain, see: https://github.com/Layr-Labs/eigenda/blob/master/contracts/src/interfaces/IEigenDAServiceManager.sol#L43. This is the message that the operators sign their signatures on. |

BlobAuthHeader

BlobAuthHeader contains information about the blob for the client to verify and sign.

  • Once payments are enabled, the BlobAuthHeader will contain the KZG commitment to the blob, which the client will verify and sign. Having the client verify the KZG commitment instead of calculating it avoids the need for the client to have the KZG structured reference string (SRS), which can be large. The signed KZG commitment prevents the disperser from sending a different blob to the DA Nodes than the one the client sent.
  • In the meantime, the BlobAuthHeader contains a simple challenge parameter used to prevent replay attacks in the event that a signature is leaked.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| challenge_parameter | uint32 | | |

BlobHeader

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| commitment | common.G1Commitment | | KZG commitment of the blob. |
| data_length | uint32 | | The length of the blob in symbols (each symbol is 32 bytes). |
| blob_quorum_params | BlobQuorumParam | repeated | The params of the quorums that this blob participates in. |

BlobInfo

BlobInfo contains information needed to confirm the blob against the EigenDA contracts.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| blob_header | BlobHeader | | |
| blob_verification_proof | BlobVerificationProof | | |

BlobQuorumParam

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| quorum_number | uint32 | | The ID of the quorum. |
| adversary_threshold_percentage | uint32 | | The max percentage of stake within the quorum that can be held by or delegated to adversarial operators. Currently, this and the next parameter are standardized across the quorum using values read from the EigenDA contracts. |
| confirmation_threshold_percentage | uint32 | | The min percentage of stake that must attest for the dispersal to be considered successful. |
| chunk_length | uint32 | | The length of each chunk. |

BlobStatusReply

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| status | BlobStatus | | The status of the blob. |
| info | BlobInfo | | The blob info needed for clients to confirm the blob against the EigenDA contracts. |

BlobStatusRequest

BlobStatusRequest is used to query the status of a blob.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| request_id | bytes | | Refer to the documentation for DisperseBlobReply.request_id. Note that because the request_id depends on the timestamp at which the disperser received the request, it is not possible to compute it locally from the cert and blob. Clients should thus store this request_id if they plan on requerying the status of the blob in the future. |

BlobVerificationProof

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| batch_id | uint32 | | batch_id is an incremental ID assigned to a batch by EigenDAServiceManager. |
| blob_index | uint32 | | The index of the blob in the batch (which is logically an ordered list of blobs). |
| batch_metadata | BatchMetadata | | |
| inclusion_proof | bytes | | inclusion_proof is a merkle proof of the blob header's inclusion in the batch. |
| quorum_indexes | bytes | | Indexes of the quorums in BatchHeader.quorum_numbers that match the quorums in BlobHeader.blob_quorum_params. Ex.: if BlobHeader.blob_quorum_params = [{ quorum_number = 0, ... }, { quorum_number = 3, ... }, { quorum_number = 5, ... }] and BatchHeader.quorum_numbers = [0, 5, 3] => 0x000503, then quorum_indexes = [0, 2, 1] => 0x000201. |

DisperseBlobReply

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| result | BlobStatus |  | The status of the blob associated with the request_id. Will always be PROCESSING. |
| request_id | bytes |  | The request ID generated by the disperser. See the notes below. |

Once a request is accepted, a unique request ID is generated: request_id = string(blob_key) = (hash(blob), hash(metadata)), where metadata contains a requestedAt timestamp and the requested quorum numbers and their adversarial thresholds. BlobKey definition: https://github.com/Layr-Labs/eigenda/blob/6b02bf966afa2b9bf2385db8dd01f66f17334e17/disperser/disperser.go#L87. BlobKey computation: https://github.com/Layr-Labs/eigenda/blob/6b02bf966afa2b9bf2385db8dd01f66f17334e17/disperser/common/blobstore/shared_storage.go#L83-L84.

Different DisperseBlobRequests have different IDs; even two identical DisperseBlobRequests sent at different times receive different IDs. Clients should thus store this ID and use it to query the processing status of the request via the GetBlobStatus API.

DisperseBlobRequest

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| data | bytes |  | The data to be dispersed. The size of data must be <= 16MiB. Every 32 bytes of data is interpreted as an integer in big endian format, where the lower address has more significant bits. Each such integer must stay in the valid range to be interpreted as a field element on the bn254 curve: 0 <= x < 21888242871839275222246405745257275088548364400416034343698204186575808495617. If any 32-byte element is outside this range, the whole request is deemed invalid and rejected. A validation sketch follows this table. |
| custom_quorum_numbers | uint32 | repeated | The quorums to which the blob will be sent, in addition to the required quorums which are configured on the EigenDA smart contract. If required quorums are included here, an error will be returned. The disperser will ensure that the encoded blobs for each quorum are all processed within the same batch. |
| account_id | string |  | The account ID of the client. This should be a hex-encoded string of the ECDSA public key corresponding to the key used by the client to sign the BlobAuthHeader. |
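The field-element constraint above is easy to check client-side before dispersing. A minimal Go sketch, which validates each 32-byte big-endian word against the modulus (a trailing chunk shorter than 32 bytes is always in range, since it is below 2^248):

package main

import (
	"fmt"
	"math/big"
)

// bn254 scalar field modulus, from the valid range stated above.
var frModulus, _ = new(big.Int).SetString(
	"21888242871839275222246405745257275088548364400416034343698204186575808495617", 10)

// validateBlobData checks that every 32-byte big-endian word of data is a
// valid bn254 field element.
func validateBlobData(data []byte) error {
	for i := 0; i < len(data); i += 32 {
		end := i + 32
		if end > len(data) {
			end = len(data)
		}
		x := new(big.Int).SetBytes(data[i:end])
		if x.Cmp(frModulus) >= 0 {
			return fmt.Errorf("word at offset %d is not a valid bn254 field element", i)
		}
	}
	return nil
}

func main() {
	fmt.Println(validateBlobData(make([]byte, 64))) // <nil>: zeroed words are in range
}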

RetrieveBlobReply

RetrieveBlobReply contains the retrieved blob data

FieldTypeLabelDescription
databytes

RetrieveBlobRequest

RetrieveBlobRequest contains parameters to retrieve the blob.

FieldTypeLabelDescription
batch_header_hashbytes
blob_indexuint32

BlobStatus

BlobStatus represents the status of a blob. The status of a blob is updated as the blob is processed by the disperser, and can be queried by the client using the GetBlobStatus API.

Intermediate states, which can still be updated to a different state:

  • PROCESSING
  • DISPERSING
  • CONFIRMED

Terminal states, which will not be updated to a different state:

  • FAILED
  • FINALIZED
  • INSUFFICIENT_SIGNATURES
| Name | Number | Description |
| --- | --- | --- |
| UNKNOWN | 0 |  |
| PROCESSING | 1 | PROCESSING means that the blob is currently being processed by the disperser. |
| CONFIRMED | 2 | CONFIRMED means that the blob has been dispersed to DA Nodes and the dispersed batch containing the blob has been confirmed onchain. |
| FAILED | 3 | FAILED means that the blob has failed permanently (for reasons other than insufficient signatures, which is a separate state). This status is somewhat of a catch-all category, containing (but not necessarily exclusively, as errors can be added in the future): the blob has expired; an internal logic error occurred while requesting encoding; the blob retry exceeded its limit while waiting for blob finalization after confirmation, most likely triggered by a chain reorg (see https://github.com/Layr-Labs/eigenda/blob/master/disperser/batcher/finalizer.go#L179-L189). |
| FINALIZED | 4 | FINALIZED means that the block containing the blob's confirmation transaction has been finalized on Ethereum. |
| INSUFFICIENT_SIGNATURES | 5 | INSUFFICIENT_SIGNATURES means that the confirmation threshold for the blob was not met for at least one quorum. |
| DISPERSING | 6 | The DISPERSING state comprises two separate phases: dispersing to DA nodes and collecting signatures, then submitting the transaction onchain and waiting for the tx receipt. |
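The split between intermediate and terminal states suggests a simple polling loop. Below is a minimal Go sketch with the enum values transcribed from the table above; getStatus is a stand-in for a real GetBlobStatus RPC call, not the generated client API:

package main

import (
	"fmt"
	"time"
)

// BlobStatus values transcribed from the table above.
type BlobStatus int32

const (
	UNKNOWN                 BlobStatus = 0
	PROCESSING              BlobStatus = 1
	CONFIRMED               BlobStatus = 2
	FAILED                  BlobStatus = 3
	FINALIZED               BlobStatus = 4
	INSUFFICIENT_SIGNATURES BlobStatus = 5
	DISPERSING              BlobStatus = 6
)

// isTerminal reports whether a status will no longer change.
func isTerminal(s BlobStatus) bool {
	switch s {
	case FAILED, FINALIZED, INSUFFICIENT_SIGNATURES:
		return true
	}
	return false
}

// pollUntilTerminal calls getStatus (a stand-in for the GetBlobStatus RPC)
// until the blob reaches a terminal state.
func pollUntilTerminal(getStatus func() BlobStatus, interval time.Duration) BlobStatus {
	for {
		if s := getStatus(); isTerminal(s) {
			return s
		}
		time.Sleep(interval)
	}
}

func main() {
	// Simulated status sequence standing in for successive RPC responses.
	seq := []BlobStatus{PROCESSING, DISPERSING, CONFIRMED, FINALIZED}
	i := 0
	getStatus := func() BlobStatus {
		s := seq[i]
		if i < len(seq)-1 {
			i++
		}
		return s
	}
	fmt.Println(pollUntilTerminal(getStatus, time.Millisecond)) // 4 (FINALIZED)
}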

Disperser

Disperser defines the public APIs for dispersing blobs.

| Method Name | Request Type | Response Type | Description |
| --- | --- | --- | --- |
| DisperseBlob | DisperseBlobRequest | DisperseBlobReply | DisperseBlob accepts a single blob to be dispersed. This executes the dispersal asynchronously, i.e. it returns once the request is accepted. The client should use the GetBlobStatus() API to poll the processing status of the blob. DisperseBlob may return the following error codes: INVALID_ARGUMENT (400): the request is invalid for a reason specified in the error msg; RESOURCE_EXHAUSTED (429): the request is rate limited for the quorum specified in the error msg, and the user should retry after the specified duration; INTERNAL (500): serious error, the user should NOT retry. |
| DisperseBlobAuthenticated | AuthenticatedRequest stream | AuthenticatedReply stream | DisperseBlobAuthenticated is similar to DisperseBlob, except that it requires the client to authenticate itself via the AuthenticationData message. The protocol is as follows: 1. The client sends a DisperseBlobAuthenticated request with the DisperseBlobRequest message. 2. The Disperser sends back a BlobAuthHeader message containing information for the client to verify and sign. 3. The client verifies the BlobAuthHeader and sends back the signed BlobAuthHeader in an AuthenticationData message. 4. The Disperser verifies the signature and returns a DisperseBlobReply message. |
| GetBlobStatus | BlobStatusRequest | BlobStatusReply | This API is meant to be polled for the blob status. |
| RetrieveBlob | RetrieveBlobRequest | RetrieveBlobReply | This retrieves the requested blob from the Disperser's backend. This is a more efficient way to retrieve blobs than directly retrieving from the DA Nodes (see details about this approach in api/proto/retriever/retriever.proto). The blob must have been initially dispersed via this Disperser service for this API to work. |

Scalar Value Types

| .proto Type | Notes | C++ | Java | Python | Go | C# | PHP | Ruby |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| double |  | double | double | float | float64 | double | float | Float |
| float |  | float | float | float | float32 | float | float | Float |
| int32 | Uses variable-length encoding. Inefficient for encoding negative numbers – if your field is likely to have negative values, use sint32 instead. | int32 | int | int | int32 | int | integer | Bignum or Fixnum (as required) |
| int64 | Uses variable-length encoding. Inefficient for encoding negative numbers – if your field is likely to have negative values, use sint64 instead. | int64 | long | int/long | int64 | long | integer/string | Bignum |
| uint32 | Uses variable-length encoding. | uint32 | int | int/long | uint32 | uint | integer | Bignum or Fixnum (as required) |
| uint64 | Uses variable-length encoding. | uint64 | long | int/long | uint64 | ulong | integer/string | Bignum or Fixnum (as required) |
| sint32 | Uses variable-length encoding. Signed int value. These more efficiently encode negative numbers than regular int32s. | int32 | int | int | int32 | int | integer | Bignum or Fixnum (as required) |
| sint64 | Uses variable-length encoding. Signed int value. These more efficiently encode negative numbers than regular int64s. | int64 | long | int/long | int64 | long | integer/string | Bignum |
| fixed32 | Always four bytes. More efficient than uint32 if values are often greater than 2^28. | uint32 | int | int | uint32 | uint | integer | Bignum or Fixnum (as required) |
| fixed64 | Always eight bytes. More efficient than uint64 if values are often greater than 2^56. | uint64 | long | int/long | uint64 | ulong | integer/string | Bignum |
| sfixed32 | Always four bytes. | int32 | int | int | int32 | int | integer | Bignum or Fixnum (as required) |
| sfixed64 | Always eight bytes. | int64 | long | int/long | int64 | long | integer/string | Bignum |
| bool |  | bool | boolean | boolean | bool | bool | boolean | TrueClass/FalseClass |
| string | A string must always contain UTF-8 encoded or 7-bit ASCII text. | string | String | str/unicode | string | string | string | String (UTF-8) |
| bytes | May contain any arbitrary sequence of bytes. | string | ByteString | str | []byte | ByteString | string | String (ASCII-8BIT) |


disperser/v2/disperser_v2.proto

Attestation

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| non_signer_pubkeys | bytes | repeated | Serialized bytes of non-signer public keys (G1 points). |
| apk_g2 | bytes |  | Serialized bytes of the G2 point that represents the aggregate public key of all signers. |
| quorum_apks | bytes | repeated | Serialized bytes of aggregate public keys (G1 points) from all nodes for each quorum. The order of the quorum_apks should match the order of the quorum_numbers. |
| sigma | bytes |  | Serialized bytes of the aggregate signature. |
| quorum_numbers | uint32 | repeated | Relevant quorum numbers for the attestation. |
| quorum_signed_percentages | bytes |  | The attestation rate for each quorum. Each quorum's signing percentage is represented by an 8-bit unsigned integer: the fraction of the quorum that has signed, with 100 representing 100% and 0 representing 0%. The first byte in the byte array corresponds to the first quorum in the quorum_numbers array, the second byte corresponds to the second quorum, and so on. A parsing sketch follows this table. |
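The byte-for-byte correspondence between quorum_numbers and quorum_signed_percentages can be made explicit with a small helper. A minimal Go sketch (names are illustrative, not part of the generated API):

package main

import "fmt"

// signedPercentages pairs each quorum number with its signing percentage,
// using the positional correspondence described above.
func signedPercentages(quorumNumbers []uint32, percentages []byte) (map[uint32]uint8, error) {
	if len(quorumNumbers) != len(percentages) {
		return nil, fmt.Errorf("have %d quorums but %d percentage bytes",
			len(quorumNumbers), len(percentages))
	}
	out := make(map[uint32]uint8, len(quorumNumbers))
	for i, q := range quorumNumbers {
		out[q] = percentages[i]
	}
	return out, nil
}

func main() {
	m, _ := signedPercentages([]uint32{0, 1}, []byte{100, 67})
	fmt.Println(m) // map[0:100 1:67]
}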

BlobCommitmentReply

The result of a BlobCommitmentRequest().

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| blob_commitment | common.BlobCommitment |  | The commitment of the blob. |

BlobCommitmentRequest

The input for a BlobCommitmentRequest(). This can be used to construct a BlobHeader.commitment.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| blob | bytes |  | The blob data to compute the commitment for. |

BlobInclusionInfo

BlobInclusionInfo is the information needed to verify the inclusion of a blob in a batch.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| blob_certificate | common.v2.BlobCertificate |  |  |
| blob_index | uint32 |  | blob_index is the index of the blob in the batch. |
| inclusion_proof | bytes |  | inclusion_proof is the inclusion proof of the blob in the batch. |

BlobStatusReply

BlobStatusReply is the reply to a BlobStatusRequest.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| status | BlobStatus |  | The status of the blob. |
| signed_batch | SignedBatch |  | The signed batch. signed_batch and blob_inclusion_info are only set if the blob status is GATHERING_SIGNATURES or COMPLETE. When the blob is in GATHERING_SIGNATURES status, the attestation object in signed_batch contains the attestation information at that point in time; as more signatures are gathered, the attestation object is updated accordingly. The client can use this intermediate attestation to verify a blob if it has gathered enough signatures; otherwise, it should poll the GetBlobStatus API until the desired level of attestation has been gathered or the status is COMPLETE. When the blob is in COMPLETE status, the attestation object in signed_batch contains the final attestation information. If the final attestation does not meet the client's requirement, the client should try a new dispersal. |
| blob_inclusion_info | BlobInclusionInfo |  | The information needed to verify the inclusion of a blob in a batch. Only set if the blob status is GATHERING_SIGNATURES or COMPLETE. |

BlobStatusRequest

BlobStatusRequest is used to query the status of a blob.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| blob_key | bytes |  | The unique identifier for the blob. |

DisperseBlobReply

A reply to a DisperseBlob request.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| result | BlobStatus |  | The status of the blob associated with the blob key. |
| blob_key | bytes |  | The unique 32-byte identifier for the blob. See the notes below. |

The blob_key is the keccak hash of the rlp serialization of the BlobHeader, as computed here: https://github.com/Layr-Labs/eigenda/blob/0f14d1c90b86d29c30ff7e92cbadf2762c47f402/core/v2/serialization.go#L30. The blob_key must thus be unique for every request, even if the same blob is being dispersed; this means the blob_header must be different for each request.

Note that attempting to disperse a blob with the same blob key as a previously dispersed blob may cause the disperser to reject the blob (the DisperseBlob() RPC will return an error).

DisperseBlobRequest

A request to disperse a blob.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| blob | bytes |  | The blob to be dispersed. See the size and encoding constraints below. |
| blob_header | common.v2.BlobHeader |  | The header contains metadata about the blob. See the notes below. |
| signature | bytes |  | Signature over the keccak hash of the blob_header, verifiable against blob_header.payment_header.account_id. |

The blob byte array may be of any size as long as it does not exceed the maximum length of 16MiB. While the data being dispersed is only required to be greater than 0 bytes, the blob size charged against the payment method will be rounded up to the nearest multiple of minNumSymbols defined by the payment vault contract (https://github.com/Layr-Labs/eigenda/blob/1430d56258b4e814b388e497320fd76354bfb478/contracts/src/payments/PaymentVaultStorage.sol#L9).

Every 32 bytes of data is interpreted as an integer in big endian format, where the lower address has more significant bits. Each such integer must stay in the valid range to be interpreted as a field element on the bn254 curve: 0 <= x < 21888242871839275222246405745257275088548364400416034343698204186575808495617. If any 32-byte element is outside this range, the whole request is deemed invalid and rejected.

The blob_header can be thought of as an "eigenDA tx", in that it plays a purpose similar to an eth_tx used to disperse a 4844 blob. Note that a call to DisperseBlob requires both the blob and the blob_header, which is similar to how dispersing a blob to Ethereum requires sending a tx whose data contains the hash of the KZG commitment of the blob, the blob itself being dispersed separately.

GetPaymentStateReply

GetPaymentStateReply contains the payment state of an account.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| payment_global_params | PaymentGlobalParams |  | Global payment vault parameters. |
| period_records | PeriodRecord | repeated | Off-chain account reservation usage records. |
| reservation | Reservation |  | On-chain account reservation setting. |
| cumulative_payment | bytes |  | Off-chain on-demand payment usage. |
| onchain_cumulative_payment | bytes |  | On-chain on-demand payment deposited. |

GetPaymentStateRequest

GetPaymentStateRequest contains parameters to query the payment state of an account.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| account_id | string |  | The ID of the account being queried. This account ID is an eth wallet address of the user. |
| signature | bytes |  | Signature over the account ID. |

PaymentGlobalParams

Global constant parameters defined by the payment vault.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| global_symbols_per_second | uint64 |  | Global rate limit for on-demand dispersals. |
| min_num_symbols | uint64 |  | Minimum number of symbols accounted for in all dispersals. |
| price_per_symbol | uint64 |  | Price charged per symbol for on-demand dispersals. |
| reservation_window | uint64 |  | Reservation window for all reservations. |
| on_demand_quorum_numbers | uint32 | repeated | Quorums allowed to make on-demand dispersals. |

PeriodRecord

PeriodRecord is the usage record of an account in a bin. The API should return the active bin record and the subsequent two records that contain potential overflows.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| index | uint32 |  | Period index of the reservation. |
| usage | uint64 |  | Symbol usage recorded. |

Reservation

Reservation parameters of an account, used to determine the rate limit for the account.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| symbols_per_second | uint64 |  | Rate limit for the account. |
| start_timestamp | uint32 |  | Start timestamp of the reservation. |
| end_timestamp | uint32 |  | End timestamp of the reservation. |
| quorum_numbers | uint32 | repeated | Quorums allowed to make reserved dispersals. |
| quorum_splits | uint32 | repeated | Quorum splits describe how the payment is split among the quorums. |

SignedBatch

SignedBatch is a batch of blobs with a signature.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| header | common.v2.BatchHeader |  | Header contains metadata about the batch. |
| attestation | Attestation |  | Attestation on the batch. |

BlobStatus

BlobStatus represents the status of a blob. The status of a blob is updated as the blob is processed by the disperser, and can be queried by the client using the GetBlobStatus API.

Intermediate states, which can still be updated to a different state:

  • QUEUED
  • ENCODED
  • GATHERING_SIGNATURES

Terminal states, which will not be updated to a different state:

  • UNKNOWN
  • COMPLETE
  • FAILED
| Name | Number | Description |
| --- | --- | --- |
| UNKNOWN | 0 | UNKNOWN means that the status of the blob is unknown. This is a catch-all and should not be encountered absent a bug. This status is functionally equivalent to FAILED, but is used to indicate that the failure is due to an unanticipated bug. |
| QUEUED | 1 | QUEUED means that the blob has been queued by the disperser for processing. The DisperseBlob API is asynchronous: after request validation, but before any processing, the blob is stored in a queue of some sort, and a response is immediately returned to the client. |
| ENCODED | 2 | ENCODED means that the blob has been Reed-Solomon encoded into chunks and is ready to be dispersed to DA Nodes. |
| GATHERING_SIGNATURES | 3 | GATHERING_SIGNATURES means that the blob chunks are actively being transmitted to validators, who are asked to sign to acknowledge receipt of the blob. Requests that time out or receive errors are resubmitted to DA nodes for some period of time set by the disperser, after which the BlobStatus becomes COMPLETE. |
| COMPLETE | 4 | COMPLETE means the blob has been dispersed to DA nodes, and the GATHERING_SIGNATURES period of time has completed. This status does not guarantee any signer percentage, so a client should check that the signature has met its required threshold, and resubmit a new blob dispersal request if not. |
| FAILED | 5 | FAILED means that the blob has failed permanently. Note that this is a terminal state; to retry, the client must submit the blob again (the blob key is required to be unique). |

Disperser

Disperser defines the public APIs for dispersing blobs.

| Method Name | Request Type | Response Type | Description |
| --- | --- | --- | --- |
| DisperseBlob | DisperseBlobRequest | DisperseBlobReply | DisperseBlob accepts a blob to disperse from clients. This executes the dispersal asynchronously, i.e. it returns once the request is accepted. The client can use the GetBlobStatus() API to poll the processing status of the blob. |
| GetBlobStatus | BlobStatusRequest | BlobStatusReply | GetBlobStatus is meant to be polled for the blob status. |
| GetBlobCommitment | BlobCommitmentRequest | BlobCommitmentReply | GetBlobCommitment is a utility method that calculates the commitment for a blob payload. |
| GetPaymentState | GetPaymentStateRequest | GetPaymentStateReply | GetPaymentState is a utility method to get the payment state of a given account. |



node/node.proto

AttestBatchReply

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| signature | bytes |  |  |

AttestBatchRequest

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| batch_header | BatchHeader |  | Header of the batch. |
| blob_header_hashes | bytes | repeated | The header hashes of all blobs in the batch. |

BatchHeader

BatchHeader (see core/data.go#BatchHeader)

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| batch_root | bytes |  | The root of the merkle tree with hashes of blob headers as leaves. |
| reference_block_number | uint32 |  | The Ethereum block number at which the batch is dispersed. |

Blob

In EigenDA, the original blob to disperse is encoded as a polynomial by taking different point evaluations (i.e. erasure coding). These points are split into disjoint subsets which are assigned to different operator nodes in the EigenDA network. The data in this message is the subset of these points assigned to a single operator node.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| header | BlobHeader |  | Which (original) blob this is for. |
| bundles | Bundle | repeated | Each bundle contains all chunks for a single quorum of the blob. The number of bundles must be equal to the total number of quorums associated with the blob, and the ordering must be the same as BlobHeader.quorum_headers. Note: an operator may be in some but not all of the quorums; in that case the bundle corresponding to that quorum will be empty. |

BlobHeader

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| commitment | common.G1Commitment |  | The KZG commitment to the polynomial representing the blob. |
| length_commitment | G2Commitment |  | The KZG commitment to the polynomial representing the blob on G2; it is used for proving the degree of the polynomial. |
| length_proof | G2Commitment |  | The low degree proof. It is the KZG commitment to the polynomial shifted to the largest SRS degree. |
| length | uint32 |  | The length of the original blob in number of symbols (in the field where the polynomial is defined). |
| quorum_headers | BlobQuorumInfo | repeated | The params of the quorums that this blob participates in. |
| account_id | string |  | The ID of the user who is dispersing this blob to EigenDA. |
| reference_block_number | uint32 |  | The reference block number whose state is used to encode the blob. |

BlobQuorumInfo

See BlobQuorumParam as defined in api/proto/disperser/disperser.proto

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| quorum_id | uint32 |  |  |
| adversary_threshold | uint32 |  |  |
| confirmation_threshold | uint32 |  |  |
| chunk_length | uint32 |  |  |
| ratelimit | uint32 |  |  |

Bundle

A Bundle is the collection of chunks associated with a single blob, for a single operator and a single quorum.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| chunks | bytes | repeated | Each chunk corresponds to a collection of points on the polynomial. Each chunk has the same number of points. |
| bundle | bytes |  | All chunks of the bundle encoded in a byte array. |

G2Commitment

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| x_a0 | bytes |  | The A0 element of the X coordinate of the G2 point. |
| x_a1 | bytes |  | The A1 element of the X coordinate of the G2 point. |
| y_a0 | bytes |  | The A0 element of the Y coordinate of the G2 point. |
| y_a1 | bytes |  | The A1 element of the Y coordinate of the G2 point. |

GetBlobHeaderReply

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| blob_header | BlobHeader |  | The header of the blob requested per GetBlobHeaderRequest. |
| proof | MerkleProof |  | Merkle proof that the returned blob header belongs to the batch and is the batch's MerkleProof.index-th blob. This can be checked against the batch root onchain. |

GetBlobHeaderRequest

See RetrieveChunksRequest for documentation of each parameter of GetBlobHeaderRequest.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| batch_header_hash | bytes |  |  |
| blob_index | uint32 |  |  |
| quorum_id | uint32 |  |  |

MerkleProof

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| hashes | bytes | repeated | The proof itself. |
| index | uint32 |  | Which index (the leaf of the Merkle tree) this proof is for. |

NodeInfoReply

Node info reply

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| semver | string |  |  |
| arch | string |  |  |
| os | string |  |  |
| num_cpu | uint32 |  |  |
| mem_bytes | uint64 |  |  |

NodeInfoRequest

Node info request

RetrieveChunksReply

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| chunks | bytes | repeated | All chunks the Node is storing for the requested blob per RetrieveChunksRequest. |
| chunk_encoding_format | ChunkEncodingFormat |  | How the above chunks are encoded. |

RetrieveChunksRequest

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| batch_header_hash | bytes |  | The hash of the ReducedBatchHeader defined onchain, see: https://github.com/Layr-Labs/eigenda/blob/master/contracts/src/interfaces/IEigenDAServiceManager.sol#L43. This identifies which batch to retrieve from. |
| blob_index | uint32 |  | Which blob in the batch to retrieve (note: a batch is logically an ordered list of blobs). |
| quorum_id | uint32 |  | Which quorum of the blob to retrieve (note: a blob can have multiple quorums and the chunks for different quorums at a Node can be different). The ID must be in range [0, 254]. |

StoreBlobsReply

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| signatures | google.protobuf.BytesValue | repeated | The operator's BLS signature signed on the blob header hashes. The ordering of the signatures must match the ordering of the blobs sent in the request, with empty signatures in the places for discarded blobs. |

StoreBlobsRequest

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| blobs | Blob | repeated | Blobs to store. |
| reference_block_number | uint32 |  | The reference block number whose state is used to encode the blobs. |

StoreChunksReply

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| signature | bytes |  | The operator's BLS signature signed on the batch header hash. |

StoreChunksRequest

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| batch_header | BatchHeader |  | Which batch this request is for. |
| blobs | Blob | repeated | The chunks for each blob in the batch to be stored in an EigenDA Node. |

ChunkEncodingFormat

This describes how the chunks returned in RetrieveChunksReply are encoded. Used to facilitate the decoding of chunks.

| Name | Number | Description |
| --- | --- | --- |
| UNKNOWN | 0 |  |
| GNARK | 1 |  |
| GOB | 2 |  |

Dispersal

| Method Name | Request Type | Response Type | Description |
| --- | --- | --- | --- |
| StoreChunks | StoreChunksRequest | StoreChunksReply | StoreChunks validates that the chunks match what the Node is supposed to receive (different Nodes are responsible for different chunks, as EigenDA is horizontally sharded) and are correctly coded (e.g. each chunk must be a valid KZG multiproof) according to the EigenDA protocol. It also stores the chunks along with metadata for the protocol-defined length of custody. It returns a signature attesting to the data it has processed in this request. |
| StoreBlobs | StoreBlobsRequest | StoreBlobsReply | StoreBlobs is similar to StoreChunks, but it stores the blobs using a different storage schema so that the stored blobs can later be aggregated by the AttestBatch method into a bigger batch. StoreBlobs + AttestBatch will eventually replace and deprecate the StoreChunks method. DEPRECATED: the StoreBlobs method is not used. |
| AttestBatch | AttestBatchRequest | AttestBatchReply | AttestBatch is used to aggregate the batches stored by the StoreBlobs method into a bigger batch. It returns a signature attesting to the aggregated batch. DEPRECATED: the AttestBatch method is not used. |
| NodeInfo | NodeInfoRequest | NodeInfoReply | Retrieve node info metadata. |

Retrieval

| Method Name | Request Type | Response Type | Description |
| --- | --- | --- | --- |
| RetrieveChunks | RetrieveChunksRequest | RetrieveChunksReply | RetrieveChunks retrieves the chunks for a blob custodied at the Node. |
| GetBlobHeader | GetBlobHeaderRequest | GetBlobHeaderReply | GetBlobHeader is similar to RetrieveChunks, but it returns only the header of the blob. |
| NodeInfo | NodeInfoRequest | NodeInfoReply | Retrieve node info metadata. |



validator/node_v2.proto

GetChunksReply

The response to the GetChunks() RPC.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| chunks | bytes | repeated | All chunks the Node is storing for the requested blob per GetChunksRequest. |
| chunk_encoding_format | ChunkEncodingFormat |  | How the above chunks are encoded. |

GetChunksRequest

The parameter for the GetChunks() RPC.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| blob_key | bytes |  | The unique identifier for the blob the chunks are being requested for. The blob_key is the keccak hash of the rlp serialization of the BlobHeader, as computed here: https://github.com/Layr-Labs/eigenda/blob/0f14d1c90b86d29c30ff7e92cbadf2762c47f402/core/v2/serialization.go#L30 |
| quorum_id | uint32 |  | Which quorum of the blob to retrieve (note: a blob can have multiple quorums and the chunks for different quorums at a Node can be different). The ID must be in range [0, 254]. |

GetNodeInfoReply

Node info reply

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| semver | string |  | The version of the node. |
| arch | string |  | The architecture of the node. |
| os | string |  | The operating system of the node. |
| num_cpu | uint32 |  | The number of CPUs on the node. |
| mem_bytes | uint64 |  | The amount of memory on the node in bytes. |

GetNodeInfoRequest

The parameter for the GetNodeInfo() RPC.

StoreChunksReply

StoreChunksReply is the message type used to respond to a StoreChunks() RPC.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| signature | bytes |  | The validator's BLS signature signed on the batch header hash. |

StoreChunksRequest

Request that the Node store a batch of chunks.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| batch | common.v2.Batch |  | Batch of blobs to store. |
| disperserID | uint32 |  | ID of the disperser that is requesting the storage of the batch. |
| timestamp | uint32 |  | Timestamp of the request in seconds since the Unix epoch. If too far out of sync with the server's clock, the request may be rejected. |
| signature | bytes |  | Signature using the disperser's ECDSA key over the keccak hash of the batch. The purpose of this signature is to prevent hooligans from tricking validators into storing data that they shouldn't be storing. See the hashing algorithm below. |

The algorithm for computing the hash is as follows. All integer values are serialized in big-endian order (unsigned unless noted). A reference implementation (golang) can be found at https://github.com/Layr-Labs/eigenda/blob/master/disperser/auth/request_signing.go

  1. digest len(batch.BatchHeader.BatchRoot) (4 bytes, unsigned big endian)
  2. digest batch.BatchHeader.BatchRoot
  3. digest batch.BatchHeader.ReferenceBlockNumber (8 bytes, unsigned big endian)
  4. digest len(batch.BlobCertificates) (4 bytes, unsigned big endian)
  5. for each certificate in batch.BlobCertificates:
     a. digest certificate.BlobHeader.Version (4 bytes, unsigned big endian)
     b. digest len(certificate.BlobHeader.QuorumNumbers) (4 bytes, unsigned big endian)
     c. for each quorum_number in certificate.BlobHeader.QuorumNumbers: digest quorum_number (4 bytes, unsigned big endian)
     d. digest len(certificate.BlobHeader.Commitment.Commitment) (4 bytes, unsigned big endian)
     e. digest certificate.BlobHeader.Commitment.Commitment
     f. digest len(certificate.BlobHeader.Commitment.LengthCommitment) (4 bytes, unsigned big endian)
     g. digest certificate.BlobHeader.Commitment.LengthCommitment
     h. digest len(certificate.BlobHeader.Commitment.LengthProof) (4 bytes, unsigned big endian)
     i. digest certificate.BlobHeader.Commitment.LengthProof
     j. digest certificate.BlobHeader.Commitment.Length (4 bytes, unsigned big endian)
     k. digest len(certificate.BlobHeader.PaymentHeader.AccountId) (4 bytes, unsigned big endian)
     l. digest certificate.BlobHeader.PaymentHeader.AccountId
     m. digest certificate.BlobHeader.PaymentHeader.Timestamp (4 bytes, signed big endian)
     n. digest len(certificate.BlobHeader.PaymentHeader.CumulativePayment) (4 bytes, unsigned big endian)
     o. digest certificate.BlobHeader.PaymentHeader.CumulativePayment
     p. digest len(certificate.BlobHeader.Signature) (4 bytes, unsigned big endian)
     q. digest certificate.BlobHeader.Signature
     r. digest len(certificate.Relays) (4 bytes, unsigned big endian)
     s. for each relay in certificate.Relays: digest relay (4 bytes, unsigned big endian)
  6. digest disperserID (4 bytes, unsigned big endian)
  7. digest timestamp (4 bytes, unsigned big endian)

Note that this signature itself is not included in the hash, for obvious reasons.
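A Go sketch of the digest routine described above, using simplified stand-in structs for the generated protobuf types. The use of keccak256 here is an assumption for illustration; the linked reference implementation in disperser/auth/request_signing.go is authoritative:

package main

import (
	"encoding/binary"
	"fmt"

	"golang.org/x/crypto/sha3"
)

// Simplified stand-ins for the generated protobuf types.
type PaymentHeader struct {
	AccountId         string
	Timestamp         int32 // signed, 4 bytes in the digest
	CumulativePayment []byte
}

type Commitment struct {
	Commitment, LengthCommitment, LengthProof []byte
	Length                                    uint32
}

type BlobHeader struct {
	Version       uint32
	QuorumNumbers []uint32
	Commitment    Commitment
	PaymentHeader PaymentHeader
	Signature     []byte
}

type BlobCertificate struct {
	BlobHeader BlobHeader
	Relays     []uint32
}

type BatchHeader struct {
	BatchRoot            []byte
	ReferenceBlockNumber uint64
}

type Batch struct {
	BatchHeader      BatchHeader
	BlobCertificates []BlobCertificate
}

func hashStoreChunksRequest(batch Batch, disperserID, timestamp uint32) []byte {
	h := sha3.NewLegacyKeccak256() // assumption: keccak256; see the reference implementation
	u32 := func(v uint32) { _ = binary.Write(h, binary.BigEndian, v) }
	withLen := func(b []byte) { u32(uint32(len(b))); h.Write(b) }

	withLen(batch.BatchHeader.BatchRoot)                                          // steps 1-2
	_ = binary.Write(h, binary.BigEndian, batch.BatchHeader.ReferenceBlockNumber) // step 3 (8 bytes)
	u32(uint32(len(batch.BlobCertificates)))                                      // step 4
	for _, c := range batch.BlobCertificates { // step 5
		u32(c.BlobHeader.Version)                    // 5a
		u32(uint32(len(c.BlobHeader.QuorumNumbers))) // 5b
		for _, q := range c.BlobHeader.QuorumNumbers {
			u32(q) // 5c
		}
		withLen(c.BlobHeader.Commitment.Commitment)           // 5d-e
		withLen(c.BlobHeader.Commitment.LengthCommitment)     // 5f-g
		withLen(c.BlobHeader.Commitment.LengthProof)          // 5h-i
		u32(c.BlobHeader.Commitment.Length)                   // 5j
		withLen([]byte(c.BlobHeader.PaymentHeader.AccountId)) // 5k-l
		_ = binary.Write(h, binary.BigEndian, c.BlobHeader.PaymentHeader.Timestamp) // 5m (signed)
		withLen(c.BlobHeader.PaymentHeader.CumulativePayment) // 5n-o
		withLen(c.BlobHeader.Signature)                       // 5p-q
		u32(uint32(len(c.Relays)))                            // 5r
		for _, relay := range c.Relays {
			u32(relay) // 5s
		}
	}
	u32(disperserID) // step 6
	u32(timestamp)   // step 7
	return h.Sum(nil)
}

func main() {
	fmt.Printf("%x\n", hashStoreChunksRequest(Batch{}, 1, 1700000000))
}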

ChunkEncodingFormat

This describes how the chunks returned in GetChunksReply are encoded. Used to facilitate the decoding of chunks.

| Name | Number | Description |
| --- | --- | --- |
| UNKNOWN | 0 | A valid response should never use this value. If encountered, the client should treat it as an error. |
| GNARK | 1 | A chunk encoded in GNARK has the format described below. |

A chunk encoded in GNARK has the following format:

[KZG proof: 32 bytes][Coeff 1: 32 bytes][Coeff 2: 32 bytes]...[Coeff n: 32 bytes]

The KZG proof is a point on G1 and is serialized with bn254.G1Affine.Bytes(). The coefficients are field elements in bn254 and serialized with fr.Element.Marshal().

References:

  • bn254.G1Affine: github.com/consensys/gnark-crypto/ecc/bn254
  • fr.Element: github.com/consensys/gnark-crypto/ecc/bn254/fr

Golang serialization and deserialization can be found in Frame.SerializeGnark() and Frame.DeserializeGnark() in package github.com/Layr-Labs/eigenda/encoding.
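Given this layout, splitting a GNARK-encoded chunk is plain byte slicing. A minimal Go sketch (deserialization of the slices into gnark-crypto curve/field types is omitted):

package main

import "fmt"

// splitGnarkChunk slices a GNARK-encoded chunk into its KZG proof and
// coefficients per the layout above: 32 proof bytes followed by n 32-byte
// coefficients.
func splitGnarkChunk(chunk []byte) (proof []byte, coeffs [][]byte, err error) {
	if len(chunk) < 32 || len(chunk)%32 != 0 {
		return nil, nil, fmt.Errorf("chunk length %d is not a positive multiple of 32", len(chunk))
	}
	proof = chunk[:32]
	for i := 32; i < len(chunk); i += 32 {
		coeffs = append(coeffs, chunk[i:i+32])
	}
	return proof, coeffs, nil
}

func main() {
	proof, coeffs, _ := splitGnarkChunk(make([]byte, 96))
	fmt.Println(len(proof), len(coeffs)) // 32 2
}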

Dispersal

Dispersal is utilized to disperse chunk data.

| Method Name | Request Type | Response Type | Description |
| --- | --- | --- | --- |
| StoreChunks | StoreChunksRequest | StoreChunksReply | StoreChunks instructs the validator to store a batch of chunks. This call blocks until the validator either acquires the chunks or determines that it is unable to acquire them. If the validator is able to acquire and validate the chunks, it returns a signature over the batch header. This RPC describes which chunks the validator should store but does not contain the chunk data itself; the validator is expected to fetch the chunk data from one of the relays that is in possession of the chunks. |
| GetNodeInfo | GetNodeInfoRequest | GetNodeInfoReply | GetNodeInfo fetches metadata about the node. |

Retrieval

Retrieval is utilized to retrieve chunk data.

| Method Name | Request Type | Response Type | Description |
| --- | --- | --- | --- |
| GetChunks | GetChunksRequest | GetChunksReply | GetChunks retrieves the chunks for a blob custodied at the Node. Note that where possible, it is generally faster to retrieve chunks from the relay service if that service is available. |
| GetNodeInfo | GetNodeInfoRequest | GetNodeInfoReply | Retrieve node info metadata. |



relay/relay.proto

ChunkRequest

A request for chunks within a specific blob. Requests are fulfilled in all-or-nothing fashion. If any of the requested chunks are not found or are unable to be fetched, the entire request will fail.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| by_index | ChunkRequestByIndex |  | Request chunks by their individual indices. |
| by_range | ChunkRequestByRange |  | Request chunks by a range of indices. |

ChunkRequestByIndex

A request for chunks within a specific blob. Each chunk is requested individually by its index.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| blob_key | bytes |  | The blob key. |
| chunk_indices | uint32 | repeated | The indices of the chunks to fetch within the blob. |

ChunkRequestByRange

A request for chunks within a specific blob. Chunks are requested by a contiguous range of indices.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| blob_key | bytes |  | The blob key. |
| start_index | uint32 |  | The first index to start fetching chunks from. |
| end_index | uint32 |  | One past the last index to fetch chunks from. Similar semantics to golang slices. |

GetBlobReply

The reply to a GetBlob request.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| blob | bytes |  | The blob requested. |

GetBlobRequest

A request to fetch a blob.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| blob_key | bytes |  | The key of the blob to fetch. |

GetChunksReply

The reply to a GetChunks request.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| data | bytes | repeated | The chunks requested. The order of these chunks is the same as the order of the requested chunks. data is the raw data of the bundle (i.e. a serialized byte array of the frames). |

GetChunksRequest

Request chunks from blobs stored by this relay.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| chunk_requests | ChunkRequest | repeated | The chunk requests. Chunks are returned in the same order as they are requested. |
| operator_id | bytes |  | If this is an authenticated request, this should hold the ID of the operator. If this is an unauthenticated request, this field should be empty. Relays may choose to reject unauthenticated requests. |
| timestamp | uint32 |  | Timestamp of the request in seconds since the Unix epoch. If too far out of sync with the server's clock, the request may be rejected. |
| operator_signature | bytes |  | If this is an authenticated request, this field will hold a BLS signature by the requester on the hash of this request. Relays may choose to reject unauthenticated requests. See the hashing schema below. |

The following describes the schema for computing the hash of this request. This algorithm is implemented in golang in relay.auth.HashGetChunksRequest().

All integers are encoded as unsigned 4-byte big endian values.

Perform a keccak256 hash on the following data, in the following order:

  1. the length of the operator ID in bytes
  2. the operator ID
  3. the number of chunk requests
  4. for each chunk request:
     a. if the chunk request is a request by index:
        i. a one byte ASCII representation of the character "i" (i.e. 0x69)
        ii. the length of the blob key in bytes
        iii. the blob key
        iv. each requested chunk index, in order
     b. if the chunk request is a request by range:
        i. a one byte ASCII representation of the character "r" (i.e. 0x72)
        ii. the length of the blob key in bytes
        iii. the blob key
        iv. the start index
        v. the end index
  5. the timestamp (seconds since the Unix epoch encoded as a 4-byte big endian value)
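A Go sketch of this hashing schema, with simplified stand-ins for the generated request types (relay.auth.HashGetChunksRequest(), named above, is the authoritative implementation):

package main

import (
	"encoding/binary"
	"fmt"

	"golang.org/x/crypto/sha3"
)

// Simplified stand-ins for the generated relay request messages.
type ChunkRequestByIndex struct {
	BlobKey      []byte
	ChunkIndices []uint32
}

type ChunkRequestByRange struct {
	BlobKey    []byte
	StartIndex uint32
	EndIndex   uint32
}

type ChunkRequest struct {
	ByIndex *ChunkRequestByIndex // exactly one of ByIndex / ByRange is set
	ByRange *ChunkRequestByRange
}

func hashGetChunksRequest(operatorID []byte, reqs []ChunkRequest, timestamp uint32) []byte {
	h := sha3.NewLegacyKeccak256()
	u32 := func(v uint32) { _ = binary.Write(h, binary.BigEndian, v) }
	withLen := func(b []byte) { u32(uint32(len(b))); h.Write(b) }

	withLen(operatorID)    // 1. length of operator ID, 2. operator ID
	u32(uint32(len(reqs))) // 3. number of chunk requests
	for _, r := range reqs { // 4. each chunk request
		switch {
		case r.ByIndex != nil:
			h.Write([]byte{'i'}) // 0x69
			withLen(r.ByIndex.BlobKey)
			for _, idx := range r.ByIndex.ChunkIndices {
				u32(idx)
			}
		case r.ByRange != nil:
			h.Write([]byte{'r'}) // 0x72
			withLen(r.ByRange.BlobKey)
			u32(r.ByRange.StartIndex)
			u32(r.ByRange.EndIndex)
		}
	}
	u32(timestamp) // 5. timestamp
	return h.Sum(nil)
}

func main() {
	fmt.Printf("%x\n", hashGetChunksRequest(nil, nil, 0))
}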

Relay

Relay is a service that provides access to public relay functionality.

| Method Name | Request Type | Response Type | Description |
| --- | --- | --- | --- |
| GetBlob | GetBlobRequest | GetBlobReply | GetBlob retrieves a blob stored by the relay. |
| GetChunks | GetChunksRequest | GetChunksReply | GetChunks retrieves chunks from blobs stored by the relay. |



retriever/retriever.proto

BlobReply

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| data | bytes |  | The blob retrieved and reconstructed from the EigenDA Nodes per BlobRequest. |

BlobRequest

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| batch_header_hash | bytes |  | The hash of the ReducedBatchHeader defined onchain, see: https://github.com/Layr-Labs/eigenda/blob/master/contracts/src/interfaces/IEigenDAServiceManager.sol#L43. This identifies the batch that this blob belongs to. |
| blob_index | uint32 |  | Which blob in the batch this is requesting (note: a batch is logically an ordered list of blobs). |
| reference_block_number | uint32 |  | The Ethereum block number at which the batch for this blob was constructed. |
| quorum_id | uint32 |  | Which quorum of the blob this is requesting (note: a blob can participate in multiple quorums). |

Retriever

The Retriever is a service for retrieving chunks corresponding to a blob from the EigenDA operator nodes and reconstructing the original blob from the chunks. This is a client-side library that the users are supposed to operationalize.

Note: Users generally have two ways to retrieve a blob from EigenDA:

  1. Retrieve from the Disperser that the user initially used for dispersal: the API is Disperser.RetrieveBlob() as defined in api/proto/disperser/disperser.proto
  2. Retrieve directly from the EigenDA Nodes, which is supported by this Retriever.

Disperser.RetrieveBlob() (the 1st approach) is generally faster and cheaper, as the Disperser manages the blobs that it has processed, whereas Retriever.RetrieveBlob() (the 2nd approach) removes the need to trust the Disperser, at the cost of worse performance and higher expense.

| Method Name | Request Type | Response Type | Description |
| --- | --- | --- | --- |
| RetrieveBlob | BlobRequest | BlobReply | This fans out requests to the EigenDA Nodes to retrieve the chunks and returns the reconstructed original blob in response. |



retriever/v2/retriever_v2.proto

BlobReply

A reply to a RetrieveBlob() request.

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| data | bytes |  | The blob retrieved and reconstructed from the EigenDA Nodes per BlobRequest. |

BlobRequest

A request to retrieve a blob from the EigenDA Nodes via RetrieveBlob().

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| blob_header | common.v2.BlobHeader |  | Header of the blob to be retrieved. |
| reference_block_number | uint32 |  | The Ethereum block number at which the batch for this blob was constructed. |
| quorum_id | uint32 |  | Which quorum of the blob this is requesting (note: a blob can participate in multiple quorums). |

Retriever

The Retriever is a service for retrieving chunks corresponding to a blob from the EigenDA operator nodes and reconstructing the original blob from the chunks. This is a client-side library that the users are supposed to operationalize.

Note: Users generally have two ways to retrieve a blob from EigenDA V2:

  1. Retrieve from the relay that the blob is assigned to: the API is Relay.GetBlob() as defined in api/proto/relay/relay.proto
  2. Retrieve directly from the EigenDA Nodes, which is supported by this Retriever.

Relay.GetBlob() (the 1st approach) is generally faster and cheaper, as the relay manages the blobs that it has processed, whereas Retriever.RetrieveBlob() (the 2nd approach) removes the need to trust the relay, at the cost of worse performance and higher expense.

| Method Name | Request Type | Response Type | Description |
| --- | --- | --- | --- |
| RetrieveBlob | BlobRequest | BlobReply | This fans out requests to the EigenDA Nodes to retrieve the chunks and returns the reconstructed original blob in response. |


EigenDA V2 Integration Spec

Overview

The EigenDA V2 documentation describes the architectural changes that allow for important network performance increases. From the point of view of rollup integrations, there are three important new features:

  1. Blob batches are no longer bridged to Ethereum; dispersals are now confirmed once a batch has been CERTIFIED (i.e., signed over by the operator set). This operation takes 10-20 seconds, providing lower confirmation latency and higher throughput for the rollup. Verification of the blobs now needs to be done by the rollup stack.
  2. Centralized (accounting done by disperser) payments model
  3. A new relayer API from which to retrieve blobs (distinct from the disperser API which is now only used to disperse blobs)

Diagrams

We will refer to the below diagrams throughout the text.

High Level Diagram


Sequence Diagram

sequenceDiagram
  box Rollup Sequencer
  participant B as Batcher
  participant SP as Proxy
  end
  box EigenDA Network
  participant D as Disperser
  participant R as Relay
  participant DA as DA Nodes
  end
  box Ethereum
  participant BI as Batcher Inbox
  participant BV as EigenDABlobVerifier
  end
  box Rollup Validator
  participant VP as Proxy
  participant V as Validator
  end

  %% Blob Creation and Dispersal Flow
  B->>SP: Send payload
  Note over SP: Encode payload into blob
  alt
          SP->>D: GetBlobCommitment(blob)
          D-->>SP: blob_commitment
    else
            SP->>SP: Compute commitment locally
    end
  Note over SP: Create blob_header including payment_header
  SP->>D: DisperseBlob(blob, blob_header)
  D-->>SP: QUEUED status + blob_header_hash
  
  %% Parallel dispersal to Relay and DA nodes
  par Dispersal to Storage
      R->>D: Pull blob
  and Dispersal to DA nodes
      D->>DA: Send Headers
      DA->>R: Pull Chunks
      DA->>D: Signature
  end

  loop Until CERTIFIED status
          SP->>D: GetBlobStatus
          D-->>SP: status + signed_batch + blob_verification_info
  end
  SP->>BV: getNonSignerStakesAndSignature(signed_batch)
  SP->>BV: verifyBlobV2(batch_header, blob_verification_info, nonSignerStakesAndSignature)
  SP->>BI: Submit cert = (batch_header, blob_verification_info, nonSignerStakesAndSignature)

  %% Validation Flow
  V->>BI: Read cert
  V->>VP: GET /get/{cert} → cert
  activate V
  Note over VP: Extract relay_key + blob_header_hash from cert
  VP->>R: GetBlob(blob_header_hash)
  R-->>VP: Return blob
  VP->>BV: verifyBlobV2
  VP-->>V: Return validated blob
  deactivate V

Ultra High Resolution Diagram


APIs

Below we give a summary of the APIs relevant to understanding the high-level diagram above.

Proxy

See our gorilla/mux routes for full detail, but the gist is that the proxy presents a REST endpoint based on the op da-server spec to rollup batchers:

# OP
POST /put body: <preimage_bytes> → <hex_encoded_commitment>
GET /get/{hex_encoded_commitment} → <preimage_bytes>
# NITRO
Same as OP but add a `?commitment_mode=standard` query param 
to both POST and GET methods.
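A minimal Go round trip against these routes, assuming a proxy listening on 127.0.0.1:3100 and an octet-stream content type (both are assumptions; adjust to your deployment):

package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	const proxy = "http://127.0.0.1:3100" // assumed proxy address
	payload := []byte("some rollup payload")

	// POST /put: disperse the payload; the body of the response is the
	// hex-encoded commitment per the route above.
	resp, err := http.Post(proxy+"/put", "application/octet-stream", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	commitment, _ := io.ReadAll(resp.Body)
	resp.Body.Close()

	// GET /get/{hex_encoded_commitment}: fetch the payload back.
	resp, err = http.Get(proxy + "/get/" + string(commitment))
	if err != nil {
		panic(err)
	}
	roundTripped, _ := io.ReadAll(resp.Body)
	resp.Body.Close()

	fmt.Println(bytes.Equal(payload, roundTripped)) // true once the blob is served back
}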

Disperser

The disperser presents a gRPC v2 service endpoint:

$ EIGENDA_DISPERSER_PREPROD=disperser-preprod-holesky.eigenda.xyz:443
$ grpcurl $EIGENDA_DISPERSER_PREPROD list disperser.v2.Disperser
disperser.v2.Disperser.DisperseBlob
disperser.v2.Disperser.GetBlobCommitment
disperser.v2.Disperser.GetBlobStatus
disperser.v2.Disperser.GetPaymentState

Relay

Relays similarly present a gRPC service endpoint:

$ EIGENDA_RELAY_PREPROD=relay-1-preprod-holesky.eigenda.xyz:443
$ grpcurl $EIGENDA_RELAY_PREPROD list relay.Relay
relay.Relay.GetBlob
relay.Relay.GetChunks

Contracts

The most important contract for rollup integrations is the EigenDACertVerifier, which presents a function to validate certs:

/**
 * @notice Verifies a blob cert for the specified quorums with the default security thresholds
 * @param batchHeader The batch header of the blob 
 * @param blobInclusionInfo The inclusion proof for the blob cert
 * @param nonSignerStakesAndSignature The nonSignerStakesAndSignature to verify the blob cert against
 */
function verifyDACertV2(
    BatchHeaderV2 calldata batchHeader,
    BlobInclusionInfo calldata blobInclusionInfo,
    NonSignerStakesAndSignature calldata nonSignerStakesAndSignature
) external view

Rollup Payload Lifecycle

How is a rollup’s payload (compressed batches of transactions or state transition diffs) encoded and made available on the EigenDA network?

flowchart TD
    subgraph Rollups[Rollup Domain]
        RS["Rollup Sequencer<br/>[Software System]<br/>Sequences the rollup; submits rollup payloads to EigenDA for data availability"]
        RV["Rollup Validator<br/>[Software System]<br/>Runs a derivation pipeline to validate the rollup"]
        Payload[("Rollup Payload<br/>[Data]<br/>Batches of tx data or state transition diffs")]
    end

    %% Standalone proxy
    Proxy["Proxy<br/>[Software System]<br/>Bridges domains by encoding/decoding payloads/blobs"]

    subgraph EigenDA[Data Availability Domain]
        EN["EigenDA Network<br/>[Software System]<br/>Provides decentralized data availability by storing and serving blobs"]
        Blob[("Blob<br/>[Data]<br/>Rollup payload encoded into bn254 field element array")]
        Cert[("DA Certificate<br/>[Data]<br/>Proof of Data Availability. Used to retrieve and validate blobs.")]
        ETH["Ethereum<br/>[Software System]<br/>Stores EigenDA network properties like operator stakes, etc. Also validates certs."]
    end

    %% Sequencer Flow
    RS -->|"(1) Creates"| Payload
    Payload -->|"(2) Sent to"| Proxy
    Proxy -->|"(3) Encodes into"| Blob
    Blob -->|"(4) Dispersed across"| EN
    EN -->|"(5) Verifies signatures according to stakes stored on"| ETH
    EN -->|"(6) Returns cert"| Proxy
    Proxy -->|"(7) Submits"| Cert
    Cert -->|"(8) Posted to"| ETH
    
    %% Validator Flow
    RV -->|"(9) Reads certificates"| ETH
    RV -->|"(10) Retrieve Compressed Batch from Certificate"| Proxy

    %% Styling
    classDef system fill:#1168bd,stroke:#0b4884,color:white
    classDef container fill:#23a,stroke:#178,color:white
    classDef data fill:#f9f,stroke:#c6c,color:black
    classDef red fill:#916,stroke:#714,color:white
    
    class RS,RV,EN,ETH,S1,Proxy system
    class Rollups,EigenDA container
    class Batch,Blob,Cert,D1 data

At a high level, a rollup sequencer needs to make its payload available for download by the validators of its network. The EigenDA network uses cryptographic constructs such as KZG commitments as fundamental building blocks, so it can only work with EigenDA blobs (hereafter referred to simply as blobs; see the technical definition below). The EigenDA proxy is used to bridge the rollup domain (which deals with payloads) and the EigenDA domain (which deals with blobs).

As an example, an op-stack Ethereum rollup's payload is a compressed batch of txs (called a frame). This frame gets sent to Ethereum to be made available either as a simple tx, or as a 4844 blob (using a blob tx). Using EigenDA instead of Ethereum for data availability works similarly: the payload is encoded into an EigenDA blob and dispersed to the EigenDA network via an EigenDA disperser. The disperser eventually returns a certificate containing signatures of EigenDA operators certifying the availability of the data, which is then posted to Ethereum as the input field of a normal tx. Note that because the rollup settles on Ethereum, Ethereum DA is still needed, but only to make the cert available; the cert is much smaller than the blob containing the payload, which is made available on EigenDA instead.

Data structs:

  • Payload: piece of data that an EigenDA client (rollup, avs, etc.) wants to make available. This is typically compressed batches of transactions or state transition diffs.
  • EncodedPayload: payload encoded into a list of bn254 field elements (each 32 bytes), typically with a prefixed field element containing the payload length in bytes, such that the payload can be decoded.
  • PayloadPolynomial: encodedPayload padded with 0s to the next power of 2 (if needed) and interpreted either as coefficients (PolyCoeff) or evaluations (PolyEval) of a polynomial. Because the EigenDA network interprets blobs as coefficients, a PolyEval will need to be IFFT’d into a PolyCoeff before being dispersed.
  • (EigenDA) Blob: array of bn254 field elements of length a power of two. Interpreted by the network as coefficients of a polynomial. Equivalent to PolyCoeff.
  • Blob Header: contains the information necessary to uniquely identify a BlobDispersal request.
  • Blob Certificate: Signed BlobHeader along with relayKeys, which uniquely identify a relay service for DA Nodes to retrieve chunks from and clients to retrieve full blobs from.
  • Batch: Batch of blobs whose blob certs are grouped into a merkle tree and dispersed together for better network efficiency.
  • DA Certificate (or DACert): contains the information necessary to retrieve and verify a blob from the EigenDA network, along with a proof of availability.
  • AltDACommitment: RLP serialized DACert prepended with rollup-specific header bytes. This commitment is what gets sent to the rollup’s batcher inbox.

Contracts

  • EigenDACertVerifier: contains one main important function verifyDACertV2 which is used to verify certs.
  • EigenDAThresholdRegistry: contains signature related thresholds and blob→chunks encoding related parameters.
  • EigenDARelayRegistry: contains EigenDA network registered Relays’ Ethereum address and DNS hostname or IP address.
  • EigenDADisperserRegistry: contains EigenDA network registered Dispersers’ Ethereum address.

Lifecycle phases:

  • Sequencer:
    • Encoding: Payload → Blob
    • BlobHeader Construction: Blob → BlobHeader
    • Dispersal: (Blob, BlobHeader) → Certificate
      • Certificate+Blob Validation
      • Unhappy path: Failover to EthDA
    • Posting: Certificate → Ethereum tx
  • Validator (exact reverse of sequencer):
    • Reading: Ethereum tx → Certificate
    • Retrieval: Certificate → Blob
      • Certificate+Blob Validation
    • Decoding: Blob → Payload

Data Structs

The below diagram represents the transformation from a rollup payload to the different structs that are allowed to be dispersed

[Figure: transformation from a rollup payload to the dispersible data structs]

Payload

A client payload is whatever piece of data the EigenDA client wants to make available. For optimistic rollups this is compressed batches of txs (frames). For (most) zk-rollups it is compressed state transition diffs. For AVSs it could be proofs, images, or any other arbitrary data.

A payload must fit inside an EigenDA blob to be dispersed. See the allowed blob sizes in the Blob section.

EncodedPayload

An encodedPayload is the bn254 encoding of the payload. This is an intermediary processing step, but useful to give a name to it. The encoding must respect the same constraints as those on the blob:

Every 32 bytes of data is interpreted as an integer in big endian format. Each such integer must stay in the valid range to be interpreted as a field element on the bn254 curve. The valid range is 0 <= x < 21888242871839275222246405745257275088548364400416034343698204186575808495617.

The eigenda codebase currently only supports encoding version 0x0, which encodes as follows:

[0x00, version_byte, big-endian uint32 len(payload), 0x00, 0x00,...] +
    [0x00, payload[0:31], 0x00, payload[32:63],..., 
        0x00, payload[n:len(payload)], 0x00, ..., 0x00]

where the last chunk is padded with 0s to be a multiple of 32.

So for eg, the payload hello would be encoded as

[0x00, 0x00, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00,...] +
    [0x00, 'h', 'e', 'l', 'l', 'o', 0x00 * 26]
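
To make the layout concrete, here is a minimal sketch of version 0x0 encoding, assuming the 0x00 version byte and the padding rules described above (an illustration, not the eigenda codebase’s implementation):

def encode_payload_v0(payload: bytes) -> bytes:
    # Header field element: 0x00, version byte (0x00), big-endian uint32 payload
    # length, padded with 0s to 32 bytes.
    header = (bytes([0x00, 0x00]) + len(payload).to_bytes(4, "big")).ljust(32, b"\x00")
    # Each body field element is 0x00 followed by 31 payload bytes, which keeps every
    # 32-byte big-endian integer below the bn254 field modulus.
    body = b"".join(
        b"\x00" + payload[i:i + 31].ljust(31, b"\x00")
        for i in range(0, len(payload), 31)
    )
    return header + body

# The "hello" example above:
assert encode_payload_v0(b"hello")[32:] == b"\x00hello" + b"\x00" * 26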

PayloadPolynomial

EigenDA uses KZG commitments, which represent a commitment to a function. Abstractly speaking, we thus need to represent the encodedPayload as a polynomial. We have two choices: either treat the data as the coefficients of a polynomial, or as evaluations of a polynomial. In order to convert between these two representations, we make use of FFTs, which require the data length to be a power of 2. Thus, PolyCoeff and PolyEval are defined as an encodedPayload padded with 0s to the next power of 2 (if needed) and interpreted as desired.

Once an interpretation of the data has been chosen, one can convert between them as follows:

PolyCoeff --FFT--> PolyEval
PolyCoeff <--IFFT-- PolyEval

Because the EigenDA network, unlike Ethereum, interprets blobs as coefficients, only PolyCoeffs can be submitted as blobs to the Disperser. A PolyEval will thus need to be IFFT’d into a PolyCoeff before being dispersed. One benefit of interpreting the data as evaluations is that one can use point opening proofs to reveal a single field element (32 byte chunk) of data at a time. This is useful for interactive fraud proofs (e.g. see how optimism fraud proves 4844 blobs), but less so for the validity proofs that zk rollups use.
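
To make the conversion concrete, here is a naive inverse-DFT sketch over the bn254 scalar field, assuming 5 as the field’s multiplicative generator (the choice made by common bn254 libraries); production code would use an optimized radix-2 IFFT:

R = 21888242871839275222246405745257275088548364400416034343698204186575808495617

def ifft(evals: list[int]) -> list[int]:
    # PolyEval -> PolyCoeff: coeff_j = n^{-1} * sum_i evals_i * omega^{-ij} (mod R)
    n = len(evals)
    assert n & (n - 1) == 0, "length must be a power of 2"
    omega_inv = pow(pow(5, (R - 1) // n, R), R - 2, R)  # inverse primitive n-th root of unity
    n_inv = pow(n, R - 2, R)
    return [n_inv * sum(e * pow(omega_inv, i * j, R) for i, e in enumerate(evals)) % R
            for j in range(n)]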

Blob

A blob is an array of bn254 field elements whose length is a power of 2. It is interpreted by the EigenDA network as containing the coefficients of a polynomial (unlike Ethereum, which treats blobs as evaluations of a polynomial).

An encodedPayload can thus be transformed into a blob by being padded with 0s to a power of 2, with size at most 16MiB. There is no minimum size, but any blob smaller than 128KiB will be charged for 128KiB.

BlobHeader And BlobCertificate

The blobHeader is submitted alongside the blob as part of the DisperseBlob request, and the hash of its rlp serialization (blobKey, aka blobHeaderHash) is a unique identifier for a blob dispersal. This unique identifier is used to retrieve the blob.

[Figure: BlobHeader and BlobCertificate structs]

We refer to the eigenda core spec for full details of this struct. The version field refers to one of the versionedBlobParams structs defined in the EigenDAThresholdRegistry contract (see BlobHeader Construction below).

DACertificate

A certificate (or cert for short) contains all the information needed to retrieve a blob from the EigenDA network, as well as to validate it.

[Figure: DACert struct]

A cert contains the three data structs needed to call verifyDACertV2 on the EigenDACertVerifier.sol contract. Please refer to the eigenda core spec for more details, but in short, the BlobCertificate is included as a leaf inside the merkle tree identified by the batch_root in the BatchHeader. The BlobInclusionInfo contains the information needed to prove this merkle tree inclusion. The NonSignerStakesAndSignature contains the aggregated BLS signature sigma of the EigenDA validators. sigma is a signature over the BatchHeader.

[Figure: relationship between the BatchHeader, BlobInclusionInfo, and NonSignerStakesAndSignature structs]

AltDACommitment

In order to be understood by each rollup stack’s derivation pipeline, the cert must be prepended with header bytes, turning it into an altda-commitment specific to each stack:

  • op prepends 3 bytes: version_byte, commitment_type, da_layer_byte
  • nitro prepends 1 byte: version_byte
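
As a minimal sketch (the concrete header byte values are stack-specific; the function names below are illustrative, not normative):

def to_op_altda_commitment(rlp_cert: bytes, version_byte: int,
                           commitment_type: int, da_layer_byte: int) -> bytes:
    # op: 3 header bytes, then the RLP-serialized DACert
    return bytes([version_byte, commitment_type, da_layer_byte]) + rlp_cert

def to_nitro_altda_commitment(rlp_cert: bytes, version_byte: int) -> bytes:
    # nitro: 1 header byte, then the RLP-serialized DACert
    return bytes([version_byte]) + rlp_cert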

Smart Contracts

The smart contracts can be found here.

[Figure: EigenDA smart contracts overview]

EigenDACertVerifier

Contains a single function verifyDACertV2 which is used to verify certs. This function’s logic is described in the Cert Validation section.

EigenDAThresholdRegistry

The EigenDAThresholdRegistry contains two sets of fundamental parameters:

/// @notice mapping of blob version id to the params of the blob version
mapping(uint16 => VersionedBlobParams) public versionedBlobParams;
struct VersionedBlobParams {
    uint32 maxNumOperators;
    uint32 numChunks;
    uint8 codingRate;
}

/// @notice Immutable security thresholds for quorums
SecurityThresholds public defaultSecurityThresholdsV2;
struct SecurityThresholds {
    uint8 confirmationThreshold;
    uint8 adversaryThreshold;
}

The securityThresholds are currently immutable. These are the same as what were previously called the liveness and safety thresholds:

  • Confirmation Threshold (fka liveness threshold): minimum percentage of stake which an attacker must control in order to mount a liveness attack on the system.
  • Adversary Threshold (fka safety threshold): total percentage of stake which an attacker must control in order to mount a first-order safety attack on the system.

Their values are

defaultSecurityThresholdsV2 = {
    confirmationThreshold = ??,
    adversaryThreshold = ??,
}

A new BlobParam version is very infrequently introduced by the EigenDA Foundation Governance, and rollups can choose which version they wish to use when dispersing a blob. Currently there is only version 0 defined, with parameters:

versionedBlobParams[0] = {
    maxNumOperators = ??,
    numChunks = 8192,
    codingRate = ??,
}

The five parameters are intricately related by the following formula, which is also verified onchain by the verifyBlobSecurityParams function:

$$ numChunks \cdot (1 - \frac{100}{\gamma \cdot codingRate}) \geq maxNumOperators $$

where $\gamma = confirmationThreshold - adversaryThreshold$
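
A plain-integer sketch of this check, with thresholds expressed in percentage points and names mirroring the structs above (an illustration of the formula, not the contract’s code):

def verify_blob_security_params(max_num_operators: int, num_chunks: int, coding_rate: int,
                                confirmation_threshold: int, adversary_threshold: int) -> bool:
    # gamma: the stake-percentage gap between the confirmation and adversary thresholds
    gamma = confirmation_threshold - adversary_threshold
    return num_chunks * (1 - 100 / (gamma * coding_rate)) >= max_num_operators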

EigenDARelayRegistry

Contains EigenDA network registered Relays’ Ethereum address and DNS hostname or IP address. BlobCertificates contain relayKey(s), which can be transformed into that relay’s URL by calling relayKeyToUrl.

EigenDADisperserRegistry

Contains EigenDA network registered Dispersers’ Ethereum address. The EigenDA Network currently only supports a single Disperser, hosted by EigenLabs. The Disperser’s URL is currently static and unchanging, and can be found on our docs site in the Networks section.

Phases

Encoding

This phase is done inside the eigenda-proxy, because the proxy acts as the “bridge” between the Rollup Domain and Data Availability Domain (see diagram above).

A payload is an arbitrary byte array. The DisperseBlob endpoint, however, only accepts bn254-encoded data; the proxy therefore performs the payload → encodedPayload → blob transformation described in the Data Structs section above before dispersing.

BlobHeader Construction

The BlobHeader contains 4 main sections that we need to construct.

Version

The blobHeader version refers to one of the versionedBlobParams struct defined in the EigenDAThresholdRegistry contract.

QuorumNumbers

QuorumNumbers represents a list of quorums that are required to sign over and make the blob available. Quorum 0 represents the ETH quorum, quorum 1 represents the EIGEN quorum, and both of these are required. Custom quorums can also be added to this list.

BlobCommitment

The BlobCommitment is the unique identifier for an EigenDA Blob. It can either be computed locally from the blob, or one can ask the disperser to generate it via the GetBlobCommitment endpoint.

message BlobCommitment {
  // A G1 commitment to the blob data.
  bytes commitment = 1;
  // A G2 commitment to the blob data.
  bytes length_commitment = 2;
  // Used with length_commitment to assert the correctness of the `length` field below.
  bytes length_proof = 3;
  // Length in bn254 field elements (32 bytes) of the blob. Must be a power of 2.
  uint32 length = 4;
}

Unlike Ethereum blobs which are all 128KiB, EigenDA blobs can be any power of 2 length between 32KiB and 16MiB (currently), and so the commitment alone is not sufficient to prevent certain attacks:

  • Why is a commitment to the length of the blob necessary?

    There are different variants of the attack. The basic invariant the system needs to satisfy is that the chunks from a sufficient set of validators can be used to recover the full blob. For this, the total size of the chunks held by these validators needs to exceed the blob size, so if the blob size is not known (or at least an upper bound on it), there is no way for the system to validate this invariant. Here’s a simple example (see the arithmetic sketch below). Assume a network of 8 DA nodes and a coding ratio of 1/2. For a blob containing 128 field elements (FEs), each node gets 128·2/8 = 32 FEs, meaning that any 4 nodes can join forces and reconstruct the data. Now assume a world without length proofs: a malicious disperser receives the same blob, uses the same commitment, but claims that the blob only had length 4 FEs. He sends each node 4·2/8 = 1 FE. The chunks submitted to the nodes match the commitment, so the nodes accept and sign over the blob’s batch. But now there are only 8 FEs in the system, which is not enough to reconstruct the original blob (at least 128 are needed for that).
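
The chunk arithmetic from this example, as a sketch:

def fes_per_node(blob_len_fes: int, num_nodes: int = 8, inverse_coding_ratio: int = 2) -> int:
    # With coding ratio 1/2, the encoded chunks total 2x the blob size,
    # split evenly across the nodes.
    return blob_len_fes * inverse_coding_ratio // num_nodes

assert fes_per_node(128) == 32  # honest dispersal: any 4 nodes jointly hold 128 FEs
assert fes_per_node(4) == 1     # claimed length 4: only 8 FEs exist in total, too few to reconstruct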

Note that the length here is the length of the blob (power of 2), which is different from the payload_length encoded as part of the PayloadHeader in the blob itself (see the encoding section).

PaymentHeader

The paymentHeader specifies how the blob dispersal to the network will be paid for. There are 2 main modes of payment: the permissionless pay-per-blob model and the permissioned reserved-bandwidth approach. See the Payments Spec for full details; we only describe how to set these 4 fields here.

message PaymentHeader {
  // The account ID of the disperser client. This should be a hex-encoded string of the ECDSA public key
  // corresponding to the key used by the client to sign the BlobHeader.
  string account_id = 1;
  uint32 reservation_period = 2;
  bytes cumulative_payment = 3;
  // The salt is used to ensure that the payment header is intentionally unique.
  uint32 salt = 4;
}

Users who want to pay-per-blob need to set the cumulative_payment. Users who have already paid for reserved-bandwidth can set the reservation_period.

An RPC call to the Disperser’s GetPaymentState method can be made to query the current state of an account_id. A client can query this information on startup, cache it, and then update it locally when making pay-per-blob payments, keeping track of its current cumulative_payment so as to set it correctly for subsequent dispersals.

The salt is needed for reserved bandwidth users: given that the blob_header_hash uniquely identifies a dispersal, if a dispersal ever fails (see next section), then to redisperse the same blob, a new unique blob_header must be created. If the reservation_period is still the same (currently set to 300 second intervals), then the salt must be increased (or randomly changed) to allow resubmitting the same blob.
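
A sketch of how a client might fill these fields in the two modes. Field names follow the proto above, but the types are simplified and the reservation-period computation is an assumption based on the 300-second intervals mentioned above:

import secrets
import time

def build_payment_header(account_id: str, pay_per_blob: bool,
                         cumulative_paid_wei: int, blob_price_wei: int) -> dict:
    if pay_per_blob:
        # cumulative_payment must cover all previous payments plus this blob
        return {"account_id": account_id,
                "cumulative_payment": cumulative_paid_wei + blob_price_wei,
                "reservation_period": 0,
                "salt": 0}
    # Reserved bandwidth: identify the current 300s reservation period, and salt the
    # header so a re-dispersal of the same blob yields a new blob_header_hash.
    return {"account_id": account_id,
            "cumulative_payment": 0,
            "reservation_period": int(time.time()) // 300 * 300,
            "salt": secrets.randbits(32)}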

Blob Dispersal

The DisperseBlob method takes a blob and blob_header as input. Dispersal entails taking a blob, reed-solomon encoding it into chunks, dispersing those to the EigenDA nodes, retrieving their signatures, creating a cert from them, and returning that cert to the client. The disperser batches blobs for a few seconds before dispersing them to nodes, so an entire dispersal process can take north of 10 seconds. For this reason, the API has been designed asynchronously, with 2 relevant methods:

// Async call which queues up the blob for processing and immediately returns.
rpc DisperseBlob(DisperseBlobRequest) returns (DisperseBlobReply) {}
// Polled for the blob status updates, until a terminal status is received
rpc GetBlobStatus(BlobStatusRequest) returns (BlobStatusReply) {}

message DisperseBlobRequest {
  bytes blob = 1;
  common.v2.BlobHeader blob_header = 2;
  bytes signature = 3;
}
message BlobStatusReply {
  BlobStatus status = 1;
  SignedBatch signed_batch = 2;
  BlobVerificationInfo blob_verification_info = 3;
}

// Intermediate states: QUEUED, ENCODED
// Terminal states: CERTIFIED, UNKNOWN, FAILED, INSUFFICIENT_SIGNATURES
enum BlobStatus {
  UNKNOWN = 0; // functionally equivalent to FAILED, but reserved for unanticipated bugs
  QUEUED = 1; // Initial state after a DisperseBlob call returns
  ENCODED = 2; // Reed-Solomon encoded into chunks ready to be dispersed to DA Nodes
  CERTIFIED = 3; // blob has been dispersed and attested by DA nodes
  FAILED = 4; // permanent failure (for reasons other than insufficient signatures)
  INSUFFICIENT_SIGNATURES = 5;
}

After a successful DisperseBlob rpc call, BlobStatus.QUEUED is returned. To retrieve a cert, the GetBlobStatus rpc shall be polled until a terminal status is reached. If BlobStatus.CERTIFIED is received, the signed_batch and blob_verification_info fields of the BlobStatusReply will be returned and can be used to create the cert. Any other terminal status indicates failure, and a new blob dispersal will need to be made.
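
A minimal polling sketch of this flow. The disperser_client methods and reply fields mirror the protos above, but the transport, the enum representation, and the .Hash() helper are assumptions:

import time

TERMINAL_STATUSES = {"CERTIFIED", "UNKNOWN", "FAILED", "INSUFFICIENT_SIGNATURES"}

def disperse_and_wait_for_cert(disperser_client, blob, blob_header, signature,
                               poll_interval_secs: float = 2.0):
    disperser_client.DisperseBlob(blob=blob, blob_header=blob_header, signature=signature)
    blob_header_hash = blob_header.Hash()  # uniquely identifies this dispersal
    while True:
        reply = disperser_client.GetBlobStatus(blob_header_hash)
        if reply.status == "CERTIFIED":
            # These two fields are what the cert is built from (see the next sections).
            return reply.signed_batch, reply.blob_verification_info
        if reply.status in TERMINAL_STATUSES:
            raise RuntimeError(f"dispersal failed with terminal status {reply.status}")
        time.sleep(poll_interval_secs)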

Failover to EthDA

The proxy can be configured to retry any failed terminal status n times, after which it returns a 503 HTTP status code to the rollup, which rollup batchers can use to failover to EthDA. See here for more info.

BlobStatusReply → Cert

This is not necessarily part of the spec but is currently needed given that the disperser doesn’t actually return a cert, so a bit of data processing is required to transform its returned value into a Cert. The transformation is visualized in the Ultra High Res Diagram. The main step is calling the getNonSignerStakesAndSignature helper function on the new EigenDACertVerifier contract to create the NonSignerStakesAndSignature struct. The following pseudocode exemplifies this preprocessing step:


from dataclasses import dataclass
from typing import Any

@dataclass
class CertV2:
    batch_header: Any
    blob_verification_proof: Any
    nonsigner_stake_sigs: Any

def get_cert_v2(blob_header_hash, blob_verifier_binding) -> CertV2:
    """
    V2 cert construction pseudocode
    @param blob_header_hash: key used for referencing blob status from disperser
    @param blob_verifier_binding: ABI contract binding used for generating nonsigner metadata
    @return: EigenDA V2 certificate used by rollup
    """
    # Call the disperser for the info needed to construct the cert
    blob_status_reply = disperser_client.get_blob_status(blob_header_hash)

    # Validate the blob_header received, since it uniquely identifies
    # an EigenDA dispersal.
    blob_header_hash_from_reply = (blob_status_reply
                                   .blob_verification_info
                                   .blob_certificate
                                   .blob_header
                                   .Hash())
    if blob_header_hash != blob_header_hash_from_reply:
        raise ValueError("blob header hash mismatch")

    # Extract the first 2 cert fields from the blob status reply
    batch_header = blob_status_reply.signed_batch.batch_header
    blob_verification_proof = blob_status_reply.blob_verification_info

    # Construct the NonSignerStakesAndSignature via the cert verifier contract helper
    nonsigner_stake_sigs = blob_verifier_binding.getNonSignerStakesAndSignature(
        blob_status_reply.signed_batch)

    return CertV2(batch_header, blob_verification_proof, nonsigner_stake_sigs)

Posting to Ethereum

The proxy converts the cert to an altda-commitment ready to be submitted to the batcher’s inbox without any further modifications by the rollup stack.

Retrieval

There are two main blob retrieval paths:

  1. decentralized trustless retrieval: retrieve chunks from Validators and recreate the blob from them.
  2. centralized trustful retrieval: the same Relay API that Validators use to download chunks can also be used to retrieve full blobs.

EigenDA V2 has a new Relay API for retrieving blobs. The GetBlob method takes a blob_key as input, which is a synonym for blob_header_hash. Note that the BlobCertificate (different from the Cert!) contains an array of relay_keys, which identify the relays that can serve that specific blob. A relay’s URL can be retrieved from the relayKeyToUrl function on the EigenDARelayRegistry.sol contract.
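
A sketch of the centralized retrieval path. relayKeyToUrl mirrors the registry function above; the relay client construction and error handling are assumptions:

def retrieve_blob(cert, relay_registry, make_relay_client) -> bytes:
    blob_certificate = cert.blob_verification_proof.blob_certificate
    blob_key = blob_certificate.blob_header.Hash()  # blob_key aka blob_header_hash
    # Try each relay registered as able to serve this specific blob.
    for relay_key in blob_certificate.relay_keys:
        url = relay_registry.relayKeyToUrl(relay_key)
        try:
            return make_relay_client(url).GetBlob(blob_key)
        except Exception:
            continue  # fall through to the next relay
    raise RuntimeError("no relay could serve the blob")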

Decoding

Decoding performs the exact inverse of the Encoding operations.

Cert+Blob+Timing Validation

Blob and Cert verification is done for both the write (sequencer) and read (validator) paths. Given this duplication in the lifecycle, and given its importance, it deserves its own section.

The validation process is:

  1. Validate the Cert (against state on ethereum)
  2. Validate the Blob (against the Cert)

Cert Validation

Cert validation is done inside the EigenDACertVerifier contract, which EigenDA deploys as-is, but which is also available for rollups to modify and deploy on their own. Specifically, the verifyDACertV2 function is the entry point for validation. It could either be called during a normal eth transaction (either for pessimistic “bridging” like EigenDA V1 used to do, or when uploading a Blob Field Element to a one-step-proof’s preimage contract), or be zk proven using a library like Steel.

The cert verification logic consists of the following steps (sketched in pseudocode after the list):

  1. merkleInclusion: verify the BlobCertificate’s inclusion as a leaf in the merkle tree identified by the batchRoot, using the BlobInclusionInfo
  2. verify sigma (operators’ bls signature) over batchRoot using the NonSignerStakesAndSignature struct
  3. verify relay keys
  4. verify blob security params (blob_params + security thresholds)
  5. verify each quorum part of the blob_header has met its threshold
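
A pseudocode sketch of these five steps. Every helper here is an illustrative assumption bundled into deps, not the contract’s actual internals:

def verify_da_cert_v2(batch_header, blob_inclusion_info, nonsigner_stakes_and_signature, deps):
    blob_cert = blob_inclusion_info.blob_certificate
    blob_header = blob_cert.blob_header
    # 1. merkle inclusion of the blob cert under the batch root
    assert deps.verify_merkle_inclusion(blob_inclusion_info, batch_header.batch_root)
    # 2. sigma: aggregated BLS signature of the operators over the batch header
    assert deps.verify_bls_signature(batch_header, nonsigner_stakes_and_signature)
    # 3. relay keys are registered in the EigenDARelayRegistry
    assert all(deps.is_registered_relay(k) for k in blob_cert.relay_keys)
    # 4. blob params + security thresholds satisfy the formula from the
    #    EigenDAThresholdRegistry section above
    assert deps.verify_blob_security_params(blob_header.version)
    # 5. each quorum listed in the blob header met its confirmation threshold
    assert all(deps.signed_stake_pct(q, nonsigner_stakes_and_signature)
               >= deps.confirmation_threshold(q)
               for q in blob_header.quorum_numbers)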

Blob Validation

There are different required validation steps, depending on whether the client is retrieving or dispersing a blob.

Retrieval (whether data is coming from relays, or directly from DA nodes; a code sketch follows this list):

  1. Verify that received blob length is ≤ the length in the cert’s BlobCommitment
  2. Verify that the blob length claimed in the BlobCommitment is greater than 0
  3. Verify that the blob length claimed in the BlobCommitment is a power of two
  4. Verify that the payload length claimed in the encoded payload header is ≤ the maximum permissible payload length, as calculated from the length in the BlobCommitment
    1. The maximum permissible payload length is computed by looking at the claimed blob length, and determining how many bytes would remain if you were to remove the encoding which is performed when converting a payload into an encodedPayload. This presents an upper bound for payload length: e.g. “If the payload were any bigger than X, then the process of converting it to an encodedPayload would have yielded a blob of larger size than claimed”
  5. If the bytes received for the blob are longer than necessary to convey the payload, as determined by the claimed payload length, then verify that all extra bytes are 0x0.
    1. Due to how padding of a blob works, it’s possible that there may be trailing 0x0 bytes, but there shouldn’t be any trailing bytes that aren’t equal to 0x0.
  6. Verify the KZG commitment. This can either be done:
    1. directly: recomputing the commitment using SRS points and checking that the two commitments match (this is the current implemented way)
    2. indirectly: verifying a point opening using Fiat-Shamir (see this issue)
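
A sketch of these retrieval-side checks, assuming the version-0x0 encoding described earlier; kzg_commit is a caller-supplied (hypothetical) SRS-based commitment function:

def validate_retrieved_blob(blob: bytes, blob_commitment, kzg_commit) -> None:
    length_fes = blob_commitment.length
    # 1. received blob length <= committed length
    assert (len(blob) + 31) // 32 <= length_fes
    # 2. + 3. committed length is a non-zero power of two
    assert length_fes > 0 and length_fes & (length_fes - 1) == 0
    # 4. claimed payload length (bytes 2..6 of the version-0x0 header field element)
    #    fits in the blob: each non-header field element carries at most 31 payload bytes
    claimed_payload_len = int.from_bytes(blob[2:6], "big")
    assert claimed_payload_len <= (length_fes - 1) * 31
    # 5. all bytes beyond those needed for the payload are 0x0
    #    (sketch: the real check also covers padding inside the final chunk)
    used = 32 + ((claimed_payload_len + 30) // 31) * 32
    assert all(b == 0 for b in blob[used:])
    # 6. recompute the KZG commitment from the blob and compare (the direct path)
    assert kzg_commit(blob) == blob_commitment.commitment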

Dispersal:

  1. If the BlobCertificate was generated using the disperser’s GetBlobCommitment RPC endpoint, verify its contents:
    1. verify KZG commitment
    2. verify that length matches the expected value, based on the blob that was actually sent
    3. verify the lengthProof using the length and lengthCommitment
  2. After dispersal, verify that the BlobKey actually dispersed by the disperser matches the locally computed BlobKey

Note: The verification steps in point 1. for dispersal are not currently implemented. This route only makes sense for clients that want to avoid having large amounts of SRS data, but KZG commitment verification via Fiat-Shamir is required to do the verification without this data. Until the alternate verification method is implemented, usage of GetBlobCommitment places a correctness trust assumption on the disperser generating the commitment.

Timing Verification

Certs need to be included in the rollup’s batcher-inbox in a timely manner; otherwise, a malicious batcher could wait until the blobs have expired on EigenDA before posting the cert to the rollup.

[Figure: cert inclusion timing verification]

Rollup Stack Secure Integrations

|                     | Nitro V1       | OP V1 (insecure) | Nitro V2       | OP V2 |
|---------------------|----------------|------------------|----------------|-------|
| Cert Verification   | SequencerInbox | x                | one-step proof | one-step proof: done in preimage oracle contract when uploading a blob field element |
| Blob Verification   | one-step proof | x                | one-step proof | one-step proof |
| Timing Verification | SequencerInbox | x                | SequencerInbox | one-step proof (?) |

See our rollup-stack-specific specs for more detail.

V1 → V2 Migration

TODO

Proxy

Proxy is the main entry point for most customers of the EigenDA protocol. It provides a very simple POST/GET REST interface to abstract away most of the complexity of interfacing with the different services correctly.
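
A usage sketch of that interface. The host/port and route shapes are assumptions that vary by proxy version and rollup stack; consult the proxy README for the exact routes:

import requests

PROXY_URL = "http://localhost:3100"  # hypothetical local proxy endpoint
payload = b"compressed batch of rollup txs"

# POST the raw payload; the proxy encodes it into a blob, disperses it, and
# returns the altda-commitment to be posted to the batcher inbox.
altda_commitment = requests.post(f"{PROXY_URL}/put", data=payload).content

# GET the payload back by commitment; the proxy retrieves the blob, validates
# the cert and blob, and decodes back to the original payload.
assert requests.get(f"{PROXY_URL}/get/0x{altda_commitment.hex()}").content == payload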


Rollup Managed Contracts

TODO: describe the contracts managed by rollups.


V1

Link to previous documentation?