Data Structs
The diagram below represents the transformation from a rollup payload to the different structs that are allowed to be dispersed.
Payload
A client payload is whatever piece of data the EigenDA client wants to make available. For optimistic rollups this would be compressed batches of txs (frames); for (most) zk-rollups this would be compressed state transitions. For AVSs it could be proofs, pictures, or any arbitrary data.
A payload must fit inside an EigenDA blob to be dispersed. See the allowed blob sizes in the Blob section.
EncodedPayload
An encodedPayload is the bn254 encoding of the payload. This is an intermediary processing step, but one that is useful to name. The encoding must respect the same constraints as those on the blob: every 32 bytes of data is interpreted as an integer in big endian format, and each such integer must stay in the valid range to be interpreted as a field element on the bn254 curve, namely 0 <= x < 21888242871839275222246405745257275088548364400416034343698204186575808495617.
The golang payload clients provided in the eigenda repo currently only support encoding version 0x0, which encodes as follows:
[0x00, version_byte, big-endian uint32 len(payload), 0x00, 0x00,...] +
[0x00, payload[0:31], 0x00, payload[32:63],...,
0x00, payload[n:len(payload)], 0x00, ..., 0x00]
where the last chunk is padded with 0s such that the total length is a multiple of 32 bytes.
For example, the payload hello would be encoded as
[0x00, 0x00, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00,...] +
[0x00, 'h', 'e', 'l', 'l', 'o', 0x00 * 26]
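For concreteness, here is a minimal Go sketch of encoding version 0x0 following the layout above (encodePayload is a hypothetical helper, not the exact API of the eigenda payload clients):

import "encoding/binary"

// encodePayload produces the version-0x0 encoding: a 32-byte header followed
// by 32-byte chunks, each carrying a leading 0x00 and up to 31 payload bytes.
func encodePayload(payload []byte) []byte {
	header := make([]byte, 32)
	// header[0] is 0x00 so that, like every chunk, its big-endian value stays
	// below the bn254 modulus; header[1] is the encoding version byte (0x00).
	binary.BigEndian.PutUint32(header[2:6], uint32(len(payload)))

	encoded := header
	for i := 0; i < len(payload); i += 31 {
		end := i + 31
		if end > len(payload) {
			end = len(payload)
		}
		chunk := make([]byte, 32)       // zero-initialized: trailing bytes stay 0x00
		copy(chunk[1:], payload[i:end]) // leading byte remains 0x00
		encoded = append(encoded, chunk...)
	}
	return encoded
}

encodePayload([]byte("hello")) returns exactly the 64 bytes shown in the example above.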
PayloadPolynomial
EigenDA uses KZG commitments, which commit to a polynomial. Abstractly speaking, we thus need to represent the encodedPayload as a polynomial. We have two choices: either treat the data as the coefficients of a polynomial, or as the evaluations of a polynomial. In order to convert between these two representations we make use of FFTs, which require the data length to be a power of 2. Thus, PolyEval and PolyCoeff are defined as an encodedPayload padded with 0s to the next power of 2 (if needed) and interpreted accordingly.
Once an interpretation of the data has been chosen, one can convert between them as follows:
PolyCoeff --FFT--> PolyEval
PolyCoeff <--IFFT-- PolyEval
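For illustration, a minimal Go sketch of the padding step (FieldElement and padToNextPowerOf2 are hypothetical names, assuming the 32-byte big-endian field elements described above):

type FieldElement [32]byte

// padToNextPowerOf2 zero-pads a slice of field elements up to the next power
// of 2, so that the FFT/IFFT conversions between PolyCoeff and PolyEval are
// well-defined. The zero value of FieldElement is the field element 0.
func padToNextPowerOf2(elems []FieldElement) []FieldElement {
	n := 1
	for n < len(elems) {
		n *= 2
	}
	padded := make([]FieldElement, n)
	copy(padded, elems)
	return padded
}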
Whereas Ethereum treats 4844 blobs as evaluations of a polynomial, EigenDA instead interprets its blobs as coefficients of a polynomial. Thus, only a PolyCoeff can be submitted as a blob to the Disperser. Each rollup integration must therefore decide whether to interpret its encodedPayloads as PolyCoeffs, which can be dispersed directly, or as PolyEvals, which require IFFT'ing into PolyCoeffs before being dispersed.
Typically, optimistic rollups will interpret the data as evaluations. This allows creating point opening proofs to reveal a single field element (32-byte chunk) at a time, which is needed for interactive fraud proofs (e.g. see how Optimism fraud-proves 4844 blobs). ZK rollups, on the flip side, don't require point opening proofs and can thus safely skip the extra IFFT compute cost and interpret their data directly as coefficients.
Blob
A blob is an array of bn254 field elements whose length is a power of 2. It is interpreted by the EigenDA network as containing the coefficients of a polynomial (unlike Ethereum, which treats blobs as evaluations of a polynomial).
An encodedPayload can thus be transformed into a blob by being padded with 0s to a power of 2, with size currently limited to 16MiB. There is no minimum size, but any blob smaller than 128KiB will be charged for 128KiB.
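As an illustration of these sizing rules, here is a hypothetical helper computing the billable size of a blob from its encodedPayload length, assuming the power-of-2 padding and 128KiB minimum-billing rule described above:

// billedBlobSize returns the number of bytes a dispersal is charged for.
func billedBlobSize(encodedPayloadBytes int) int {
	const minBilled = 128 * 1024 // blobs under 128KiB are charged as 128KiB
	size := 32                   // at least one 32-byte field element
	for size < encodedPayloadBytes {
		size *= 2 // pad to the next power of 2
	}
	if size < minBilled {
		return minBilled
	}
	return size
}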
BlobHeader
The blobHeader is submitted alongside the blob as part of the DisperseBlob request, and the hash of its ABI encoding (blobKey, also known as blobHeaderHash) serves as a unique identifier for a blob dispersal. This identifier is used to retrieve the blob.
The BlobHeader contains four main sections that must be constructed. It is passed into the DisperseBlobRequest and is signed over for payment authorization.
Refer to the eigenda protobufs for full details of this struct.
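As a rough sketch, the struct can be pictured as follows (a simplified local mirror with placeholder types; the actual protobuf message is authoritative):

// BlobHeader mirrors the four main sections described below.
type BlobHeader struct {
	Version       uint32   // index into the EigenDAThresholdRegistry's versionedBlobParams
	QuorumNumbers []uint32 // quorums required to sign, e.g. {0, 1}
	Commitment    []byte   // serialized BlobCommitment (see below)
	PaymentHeader []byte   // serialized PaymentHeader (see below)
}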
Version
The blobHeader version refers to one of the versionedBlobParams structs defined in the EigenDAThresholdRegistry contract.
QuorumNumbers
QuorumNumbers represents a list of quorums required to sign and make the blob available. Quorum 0 represents the ETH quorum and quorum 1 represents the EIGEN quorum; both are always required. Custom quorums can also be added to this list.
BlobCommitment
The BlobCommitment is a binding commitment to an EigenDA blob. Due to the length field, a BlobCommitment uniquely represents a single Blob. The length field is added to the kzgCommitment to preserve the binding property. The BlobCommitment is used by the disperser to prove to EigenDA validators that the chunks they received belong to the original blob (or its Reed-Solomon extension). It can either be computed locally by the EigenDA client from the blob, or generated by the disperser via the GetBlobCommitment endpoint.
message BlobCommitment {
// A G1 commitment to the blob data.
bytes commitment = 1;
// A G2 commitment to the blob data.
bytes length_commitment = 2;
// Used with length_commitment to assert the correctness of the `length` field below.
bytes length_proof = 3;
// Length in bn254 field elements (32 bytes) of the blob. Must be a power of 2.
uint32 length = 4;
}
Unlike Ethereum blobs, which are all 128KiB, EigenDA blobs can be any power-of-2 length between 32KiB and 16MiB (currently), and so the commitment alone is not sufficient to prevent certain attacks.
Why is a commitment to the length of the blob necessary?
There are different variants of the attack. The basic invariant the system needs to satisfy is that the chunks from a sufficient set of validators can recover the full blob. The total size of the chunks held by these validators therefore needs to exceed the blob size. Without knowing the blob size (or at least an upper bound), there is no way for the system to validate this invariant. Here's a simple example. Assume a network of 8 DA nodes and a coding ratio of 1/2. For a blob containing 128 field elements (FEs), each node gets 128*2/8 = 32 FEs, meaning that any 4 nodes can join forces and reconstruct the data. Now assume a world without a length proof: a malicious disperser receives the same blob, uses the same commitment, but claims that the blob only had length 4 FEs. He sends each node 4*2/8 = 1 FE. The chunks submitted to the nodes match the commitment, so the nodes accept and sign over the blob's batch. But now there are only 8 FEs in the system, which is not enough to reconstruct the original blob (at least 128 are needed for that).
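The arithmetic in this example can be captured in a small (illustration-only) helper:

// chunkFEsPerNode returns how many field elements each node receives, given
// the claimed blob length, the node count, and the inverse coding ratio.
func chunkFEsPerNode(blobLenFEs, numNodes, invCodingRatio int) int {
	return blobLenFEs * invCodingRatio / numNodes
}

// chunkFEsPerNode(128, 8, 2) == 32: any 4 nodes jointly hold 128 FEs, enough to reconstruct.
// chunkFEsPerNode(4, 8, 2) == 1: all 8 nodes jointly hold only 8 FEs, not enough
// to reconstruct the actual 128-FE blob.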
Note that the length here is the length of the blob (a power of 2), which is different from the payload_length encoded as part of the PayloadHeader in the blob itself (see the encoding section).
PaymentHeader
The paymentHeader specifies how the blob dispersal to the network will be paid for. There are two modes of payment: the permissionless pay-per-blob model and the permissioned reserved-bandwidth approach. See the Payments release doc for full details; here we only describe how to set these three fields.
message PaymentHeader {
// The account ID of the disperser client. This should be a hex-encoded string of the ECDSA public key
// corresponding to the key used by the client to sign the BlobHeader.
string account_id = 1;
// UNIX timestamp in nanoseconds at the time of the dispersal request.
// Used to determine the reservation period, for the reserved-bandwidth payment model.
int64 timestamp = 2;
// Total amount of tokens paid by the requesting account, including the current request.
// Used for the pay-per-blob payment model.
bytes cumulative_payment = 3;
}
Users who want to pay per blob need to set the cumulative_payment field. The timestamp is used by users who have paid for reserved bandwidth. If both are set, reserved bandwidth will be used first, and cumulative_payment will only be used once the entire bandwidth for the current reservation period has been used up.
NOTE: There will be a lot of subtleties added to this logic with the new separate-payment-per-quorum model that is actively being worked on.
An RPC call to the Disperser's GetPaymentState method can be made to query the current state of an account_id. A client can query this information on startup, cache it, and then update it manually when making dispersals. In this way, it can keep track of its reserved bandwidth usage and current cumulative_payment, and set them correctly for subsequent dispersals.
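A minimal sketch of how a client might populate these fields (the struct is a local mirror of the proto; newPaymentHeader is a hypothetical helper, not the eigenda client's API):

import (
	"math/big"
	"time"
)

type PaymentHeader struct {
	AccountID         string // hex-encoded ECDSA public key of the signing account
	Timestamp         int64  // UNIX nanoseconds; selects the reservation period
	CumulativePayment []byte // big-endian total paid so far, including this request
}

func newPaymentHeader(accountID string, cumulative *big.Int) PaymentHeader {
	h := PaymentHeader{
		AccountID: accountID,
		Timestamp: time.Now().UnixNano(), // always set; used for reserved bandwidth
	}
	if cumulative != nil { // only set by pay-per-blob users
		h.CumulativePayment = cumulative.Bytes()
	}
	return h
}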
EigenDA Certificate (DACert)
An EigenDA Certificate (or DACert for short) contains all the information needed to retrieve a blob from the EigenDA network, as well as to validate it.
A DACert contains the four data structs needed to call checkDACert on the EigenDACertVerifier.sol contract. Please refer to the eigenda core spec for more details, but in short: the BlobCertificate is included as a leaf inside the merkle tree identified by the batch_root in the BatchHeader. The BlobInclusionInfo contains the information needed to prove this merkle tree inclusion. The NonSignerStakesAndSignature contains the aggregated BLS signature sigma of the EigenDA validators; sigma is a signature over the BatchHeader. The signedQuorumNumbers contains the quorum IDs that the DA nodes signed over for the blob.
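Schematically (a simplified local mirror with placeholder types; see the core spec and the contract ABI for the real definitions):

// DACert gathers the four structs passed to checkDACert.
type DACert struct {
	BatchHeader                 []byte // contains the batch_root merkle root
	BlobInclusionInfo           []byte // merkle proof of the BlobCertificate leaf
	NonSignerStakesAndSignature []byte // includes the aggregated BLS signature sigma
	SignedQuorumNumbers         []byte // quorum IDs the DA nodes signed over
}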
AltDACommitment
In order to be understood by each rollup stack's derivation pipeline, the encoded DACert must be prepended with header bytes to turn it into an altda-commitment specific to each stack (see the sketch below):
- op prepends 3 bytes: version_byte, commitment_type, da_layer_byte
- nitro prepends 1 byte: version_byte
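A hedged sketch of this framing (the concrete byte values are defined by each stack and are assumed here purely for illustration):

// opAltDACommitment frames a serialized DACert for the op stack.
func opAltDACommitment(cert []byte, versionByte, commitmentType, daLayerByte byte) []byte {
	return append([]byte{versionByte, commitmentType, daLayerByte}, cert...)
}

// nitroAltDACommitment frames a serialized DACert for nitro.
func nitroAltDACommitment(cert []byte, versionByte byte) []byte {
	return append([]byte{versionByte}, cert...)
}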
NOTE: In the future we plan to support a custom encoding byte which allows a user to specify different encoding formats for the DACert (e.g., RLP, ABI).