# Secure Integration
This page is meant for EigenDA and rollup developers who are building a secure integration and need to understand the details. For a high-level understanding of what a secure integration is, see our secure integration overview page instead.
## Validity Conditions
EigenDA is a service that assures the availability and integrity of payloads posted to it for 14 days. When deriving a rollup chain by running its derivation pipeline, only EigenDA DACerts that satisfy the following three validity conditions are considered valid and used:
- RBN Recency Validation - ensure that the DA Cert's reference block number (RBN) is not too old with respect to the L1 block at which the cert was included in the rollup's batcher-inbox. This ensures that the blob on EigenDA has sufficient availability time left (out of the 14 day period) in order to be downloadable if needed during a rollup fault proof window.
- Cert Validation - ensures sufficient operator stake has signed to make the blob available, for all specified quorums. The stake is obtained onchain at a given reference block number (RBN) specified inside the cert.
- Blob Validation - ensures that the blob used is consistent with the KZG commitment inside the Cert.
If the recency check (#1) or cert validation (#2) fails, the DACert is treated as invalid and MUST be discarded from the rollup's derivation pipeline.
### 1. RBN Recency Validation
This check is related to time guarantees. It is important for both optimistic and zk rollup validators to have sufficient time to download the blob from EigenDA once a cert lands in the batcher inbox.
We will use fault proofs as our base example to reason about the necessity of the recency check.
Looking at the timing diagram above, we need the EigenDA availability period to overlap the ~7 day challenge period. To uphold this guarantee, rollups' derivation pipelines must reject certs whose DA availability period started too long ago. However, the cert itself does not record when it was signed and made available. The only timing information on the cert is `cert.RBN`, the reference block number chosen by the disperser at which to anchor operator stakes. Since the RBN is chosen before validators sign, bounding how far the RBN can be from the cert's inclusion block also bounds the age of the cert.
Rollups must thus enforce that `certL1InclusionBlock - cert.RBN <= RecencyWindowSize`.
The recency check has a second security implication: a malicious EigenDA disperser could otherwise choose a very old reference block number, at which operator stakes were very different from current ones (for example, because operators have since withdrawn stake).
To give a concrete example with a rollup stack, optimism has a sequencerWindow which forces batches to land onchain in a timely fashion (12h). This filtering, however, happens in the BatchQueue stage of the derivation pipeline (DP), and doesn't prevent the DP from being stalled in the L1Retrieval stage by an old cert having been submitted whose blob is no longer available on EigenDA. To prevent this, we need the recencyWindow filtering to happen during the L1Retrieval stage of the DP.
Despite slightly different semantics, `sequencerWindow` and `recencyWindow` are related concepts, and in order to not force another config change on OP altda forks, we suggest using the same value as the `SequencerWindowSize` for the `RecencyWindowSize`, namely 12h.
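The recency constraint above can be sketched as a small predicate. This is a minimal illustration, not EigenDA's actual client code; the function and parameter names are hypothetical.

```go
package main

import "fmt"

// isRecent reports whether a DACert's reference block number (RBN) is
// recent enough relative to the L1 block at which the cert landed in
// the rollup's batcher inbox. Names are illustrative only.
func isRecent(certL1InclusionBlock, rbn, recencyWindowSize uint64) bool {
	// A cert whose RBN is ahead of its inclusion block is malformed.
	if rbn > certL1InclusionBlock {
		return false
	}
	return certL1InclusionBlock-rbn <= recencyWindowSize
}

func main() {
	// 12h of L1 blocks at 12s per block = 3600 blocks.
	const window = uint64(3600)
	fmt.Println(isRecent(1_000_000, 999_000, window)) // true: 1000 <= 3600
	fmt.Println(isRecent(1_000_000, 990_000, window)) // false: 10000 > 3600
}
```

A derivation pipeline would run this check in its L1Retrieval stage and discard any cert for which it returns false.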
### 2. Cert Validation
Cert validation is done inside the `EigenDACertVerifier` contract, which EigenDA deploys as-is, but which rollups can also modify and deploy on their own. Specifically, `checkDACert` is the entry point for validation. It can either be called during a normal Ethereum transaction (either for pessimistic "bridging" as EigenDA V1 used to do, or when uploading a blob field element to a one-step proof's preimage contract), or be zk-proven using a library like Steel.
The `checkDACert` function accepts an ABI-encoded `[]byte` certificate input. This design allows the underlying `DACert` structure to evolve across versions, enabling seamless upgrades without requiring changes to the `EigenDACertVerifierRouter` interface.
The cert verification logic consists of:
- verify the blob batch merkle inclusion proof
- verify `sigma` (the operators' BLS signature) over `batchRoot` using the `NonSignerStakesAndSignature` struct
- verify the blob security params (blob_params + security thresholds)
- verify that each quorum listed in the blob_header has met its threshold
### 3. Blob Validation
There are different required validation steps, depending on whether the client is retrieving or dispersing a blob.
Retrieval (whether data is coming from relays, or directly from DA nodes):
- Verify that the received blob length is ≤ the `length` in the cert's `BlobCommitment`.
- Verify that the blob length claimed in the `BlobCommitment` is greater than `0`.
- Verify that the blob length claimed in the `BlobCommitment` is a power of two.
- Verify that the payload length claimed in the encoded payload header is ≤ the maximum permissible payload length, as calculated from the `length` in the `BlobCommitment`.
  - The maximum permissible payload length is computed by looking at the claimed blob length, and determining how many bytes would remain if you were to remove the encoding which is performed when converting a `payload` into an `encodedPayload`. This yields an upper bound for payload length: e.g. "if the `payload` were any bigger than `X`, then the process of converting it to an `encodedPayload` would have yielded a `blob` of larger size than claimed."
- If the bytes received for the blob are longer than necessary to convey the payload, as determined by the claimed payload length, verify that all extra bytes are `0x0`.
  - Due to how padding of a blob works, it's possible that there are trailing `0x0` bytes, but there shouldn't be any trailing bytes that aren't equal to `0x0`.
- Verify the KZG commitment. This can either be done:
  - directly: by recomputing the commitment using SRS points and checking that it matches the commitment in the cert (this is the currently implemented way)
  - indirectly: by verifying a point opening using Fiat-Shamir (see this issue)
Dispersal:
- If the `BlobCertificate` was generated using the disperser's `GetBlobCommitment` RPC endpoint, verify its contents:
  - verify the KZG commitment
  - verify that `length` matches the expected value, based on the blob that was actually sent
  - verify the `lengthProof` using the `length` and `lengthCommitment`
- After dispersal, verify that the `BlobKey` actually dispersed by the disperser matches the locally computed `BlobKey`.
Note: the verification steps in point 1 for dispersal are not currently implemented. This route only makes sense for clients that want to avoid storing large amounts of SRS data, but KZG commitment verification via Fiat-Shamir is required to do the verification without that data. Until that alternate verification method is implemented, usage of `GetBlobCommitment` places a correctness trust assumption on the disperser generating the commitment.
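The post-dispersal `BlobKey` comparison can be sketched as below. This is illustrative only: `computeBlobKey` here uses sha256 over an opaque serialized header as a stand-in for however the key is actually derived, and the function names are hypothetical.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// computeBlobKey is a stand-in for locally deriving the BlobKey from the
// serialized blob header; the hash function and serialization here are
// illustrative, not EigenDA's actual derivation.
func computeBlobKey(serializedBlobHeader []byte) [32]byte {
	return sha256.Sum256(serializedBlobHeader)
}

// checkDispersalReply guards against a disperser returning a key for a
// different blob than the one the client actually sent.
func checkDispersalReply(serializedBlobHeader []byte, replyKey [32]byte) error {
	local := computeBlobKey(serializedBlobHeader)
	if !bytes.Equal(local[:], replyKey[:]) {
		return fmt.Errorf("disperser returned blob key %x, expected %x", replyKey, local)
	}
	return nil
}

func main() {
	header := []byte("serialized-blob-header")
	fmt.Println(checkDispersalReply(header, computeBlobKey(header))) // <nil>
}
```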
## Upgradable Quorums and Thresholds for Optimistic Verification
The `EigenDACertVerifierRouter` contract enables secure upgrades to a rollup's required quorums and thresholds without compromising the integrity of previously submitted state commitments. It achieves this by routing certificate verification to the appropriate `EigenDACertVerifier` instance based on the `reference_block_number` embedded in the cert, which dictates the verifier whose activation block was effective at that time. This ensures backward compatibility, allowing older `DACert`s to be validated against the verifier version that was active at the time of their creation.
The router is typically deployed behind an upgradable admin proxy and should use the same `ProxyAdmin` multisig as the rollup for consistent and secure access control.
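The router's lookup rule (route each cert to the verifier with the largest activation block ≤ its RBN) can be sketched as follows. Types and names here are illustrative, not the Solidity contract's actual storage layout.

```go
package main

import (
	"fmt"
	"sort"
)

// verifierEntry pairs a verifier address with the block number at which
// it becomes active. Illustrative types only.
type verifierEntry struct {
	activationBlock uint64
	address         string
}

// verifierFor returns the verifier active at the given RBN, assuming
// entries is sorted by ascending activationBlock.
func verifierFor(entries []verifierEntry, rbn uint64) (string, bool) {
	// Find the first entry activated strictly after rbn; the entry
	// just before it is the one active at rbn.
	i := sort.Search(len(entries), func(i int) bool {
		return entries[i].activationBlock > rbn
	})
	if i == 0 {
		return "", false // no verifier was active at this RBN
	}
	return entries[i-1].address, true
}

func main() {
	entries := []verifierEntry{
		{activationBlock: 0, address: "verifierV1"},
		{activationBlock: 5_000_000, address: "verifierV2"},
	}
	v1, _ := verifierFor(entries, 4_999_999)
	v2, _ := verifierFor(entries, 5_000_000)
	fmt.Println(v1, v2) // verifierV1 verifierV2
}
```

An old cert keeps resolving to its original verifier forever, which is what preserves the validity of previously submitted state commitments across upgrades.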
### Adding New Verifiers: Synchronization Risk
There is a synchronization risk that can temporarily cause dispersals to fail when adding a new verifier (`verifier'`) to the `EigenDACertVerifierRouter` at a future activation block number (`abn'`). If `latest_block < abn'` and `rbn >= abn'`, dispersals may fail if the `required_quorums` set differs between `verifier` and `verifier'`. In this case, the quorums included in the client's `BlobHeader` (based on the old verifier) would not match those expected by `checkDACert` (using the new verifier). This mismatch results in at most a few failed dispersals, which resolve once `latest_block >= abn'` and `reference_block_number >= abn'`, ensuring verifier consistency. The EigenDA integrations team will explore mitigations in the future.
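The race window above can be captured as a small predicate. This is a sketch for reasoning about the condition only; the names are illustrative, and quorum sets are represented as plain strings for brevity.

```go
package main

import "fmt"

// mismatchRisk captures the synchronization window: the client picks
// quorums from the verifier active at its own view of the chain head
// (latestBlock), while checkDACert routes by the cert's RBN. A dispersal
// is at risk only when the two sides resolve different verifiers AND
// those verifiers require different quorum sets. Illustrative names.
func mismatchRisk(latestBlock, rbn, abnNew uint64, quorumsOld, quorumsNew string) bool {
	clientUsesOld := latestBlock < abnNew // client still sees the old verifier
	routerUsesNew := rbn >= abnNew        // router selects the new verifier
	return clientUsesOld && routerUsesNew && quorumsOld != quorumsNew
}

func main() {
	// RBN is past the new verifier's activation block, but the client's
	// chain head is not: dispersal can fail if quorum sets differ.
	fmt.Println(mismatchRisk(99, 100, 100, "0,1", "0,1,2")) // true
	// Once the client's head passes abn', both sides agree again.
	fmt.Println(mismatchRisk(101, 100, 100, "0,1", "0,1,2")) // false
}
```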
## Rollup Stack Secure Integrations
| | Nitro V1 | OP V1 (insecure) | Nitro V2 | OP V2 |
|---|---|---|---|---|
| Cert Verification | SequencerInbox | x | one-step proof | one-step proof: done in preimage oracle contract when uploading a blob field element |
| Blob Verification | one-step proof | x | one-step proof | one-step proof |
| Timing Verification | SequencerInbox | x | SequencerInbox | one-step proof (?) |