# Interop Transaction Handling: Failure Modes and Recovery Path Analysis

| | |
|--------|--------------|
| Author | Axel Kingsley |
| Created at | 2025-03-31 |
| Needs Approval From | |
| Other Reviewers | |
| Status | Draft |

## Introduction

This document covers new considerations for Chain Operators when processing transactions in Interop-enabled environments.

## Context and Problem

In an OP Stack chain, block builders (Sequencers) build blocks from user-submitted transactions. Most
Chain Operators arrange their infrastructure defensively so that RPC requests aren't handled directly by the
Supervisor, with the mempool instead being built over P2P.

In an Interop-enabled context, Executing Messages (Interop Transactions) hold special meaning within the
protocol. For every Executing Message in a block, there must be a matching "Initiating Message" (a plain log event)
which matches the specified index and content hash.

Including an Executing Message which *does not* match an Initiating Message is invalid under the protocol.
The result is that the *entire block* containing the Invalid Message is replaced, producing a reorg on the chain
at this height.
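
To make the matching requirement concrete, here is a minimal Go sketch of the check; the `Identifier` and `Log` types and the `logAt` lookup are hypothetical stand-ins for the real interop primitives (the real identifier also carries origin address and timestamp), assuming a keccak256 payload hash:

```go
package interop

import (
	"bytes"
	"errors"

	"golang.org/x/crypto/sha3"
)

// Identifier points at the initiating log that an Executing Message claims
// to reference. Hypothetical, simplified field set for illustration.
type Identifier struct {
	ChainID     uint64
	BlockNumber uint64
	LogIndex    uint32
}

// Log is a simplified Initiating Message: a plain log event on the source chain.
type Log struct {
	Payload []byte
}

// ValidateExecutingMessage checks that a log exists at the claimed index and
// that its payload hash matches the hash the Executing Message commits to.
// Any block that includes a message failing this check is invalid and will
// be replaced, reorging the chain at that height.
func ValidateExecutingMessage(id Identifier, payloadHash [32]byte, logAt func(Identifier) (*Log, error)) error {
	initiating, err := logAt(id)
	if err != nil || initiating == nil {
		return errors.New("no initiating message at claimed index")
	}
	h := sha3.NewLegacyKeccak256()
	h.Write(initiating.Payload)
	if !bytes.Equal(h.Sum(nil), payloadHash[:]) {
		return errors.New("payload hash mismatch")
	}
	return nil
}
```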

Because the consequence of an Invalid Message is so severe, Chain Operators are highly incentivized to check Executing
Messages before they are added to a block. *However*, being excessive with these checks can interrupt the
chain's regular forward progress. A balance must be struck in checking messages.

### Two Extremes

To understand the purpose of these decisions, let's consider the extreme validity-checking policies we could adopt:

**Check Every Message Exhaustively**
- In this model, every Executing Message is checked constantly, at a maximum rate, and each message is checked as close to block-building as possible.
- The compute cost to check every message during block building adds time to *every* transaction, blowing out our ability to build a block in under 2s.
- The validity of the included Executing Messages would be *as correct* as possible. However! Even *after* the block is built, the data being relied upon (cross-unsafe data) could change on the Initiating Chain if it suffers a reorg. So while this policy is *most correct*, it is not *totally correct*.

**Never Check Any Messages**
- In this model, we optimize only for avoiding any additional compute, instead simply trusting every message.
- Naturally, there is no impact on block building or any other process, BUT...
- Blocks would easily become invalid, because an attacker could submit Invalid Messages, even just one per 2s, and prevent the Sequencer from ever building a valid block.

So, no matter what solution we pick, we deal with *some* amount of uncertainty and take on *some* amount of additional compute load.

## The Solution Design

The [Interop Topology and Tx Flow for Interop Chains Design Doc](https://github.com/ethereum-optimism/design-docs/pull/218)
describes the solution design we plan on going with:

- All Executing Messages are checked once at `proxyd` ingress.
- All Executing Messages are checked once at Node Mempool ingress (not counting the Sequencer).
- All Executing Messages in Node Mempools are batch-checked on a regular interval (see the sketch below).
- If an Executing Message is ever Invalid, it is discarded and not retried.
- *No* checks are done at block building time.
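
As a rough illustration of the recurring mempool check, here is a sketch of a background loop that periodically batch-validates pending interop transactions and drops invalid ones. The `Supervisor` and `Pool` interfaces are hypothetical stand-ins for the real APIs:

```go
package mempool

import (
	"context"
	"time"
)

// Supervisor is a hypothetical client interface; CheckMessages batch-validates
// executing messages and returns one verdict per transaction hash.
type Supervisor interface {
	CheckMessages(ctx context.Context, txHashes [][32]byte) (map[[32]byte]bool, error)
}

// Pool is a minimal, hypothetical view of a node mempool for this sketch.
type Pool interface {
	PendingInteropTxs() [][32]byte
	Drop(hash [32]byte) // dropped transactions are not retried
}

// RevalidateLoop batch-checks all pending interop transactions on a fixed
// interval. Too high an interval risks stale (now-invalid) messages surviving
// until block building; too low an interval wastes Supervisor capacity.
func RevalidateLoop(ctx context.Context, pool Pool, sup Supervisor, interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			hashes := pool.PendingInteropTxs()
			if len(hashes) == 0 {
				continue
			}
			verdicts, err := sup.CheckMessages(ctx, hashes)
			if err != nil {
				continue // Supervisor unavailable: keep txs, retry next tick
			}
			for hash, valid := range verdicts {
				if !valid {
					pool.Drop(hash) // discarded, never retried
				}
			}
		}
	}
}
```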

This FMA describes the potential negative consequences of this design. We have selected this design because it maximizes
the opportunities for Invalid Messages to be caught and discarded, while also keeping the block-building hot path from
having to take on new compute.

# FMs

## FM1: Checks Fail to Catch an Invalid Executing Message
- Description
    - This overlaps with the Supervisor FMA's FM2a, where an invalid message is included in a built block.
    - An interop transaction is checked multiple times (Proxyd, Sentry Node, Sequencer), and each check must fail in order for the transaction to make it all the way to block building.
    - The most likely cause for all filter layers to fail is a bug in the Supervisor.
    - An individual layer may fail to filter a transaction if it decides the transaction doesn't need to be checked for Interop Validity. For example, a transaction which appears to have no Access List does not need to be checked.
    - Recurring filters (ones which regularly re-validate Transactions in a Mempool) may fail to filter a Transaction if the frequency of the check is too low.
- Risk Assessment
    - It is Low Likelihood that every individual filter layer has a different failure, leading to a total filter failure.
    - It is more likely that a Supervisor bug would make every check ineffective.
    - Has no effect on our ability to process other transactions. Supervisor effects are described in the Supervisor FMA.
    - Negative UX and Customer Perception from building invalid block content.
- Action Items
    - Monitor for invalid messages, as a percentage of all incoming messages.
    - Critically, monitor for *any* messages which will cause a reorg (i.e. an invalid message which was included in a block). See the monitoring sketch below.
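
One hedged way to implement these action items, sketched with the Prometheus Go client (all metric names are hypothetical):

```go
package monitor

import "github.com/prometheus/client_golang/prometheus"

// Hypothetical counters, incremented wherever messages are checked or blocks
// are inspected after building.
var (
	checkedMessages = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "interop_messages_checked_total",
		Help: "Executing messages seen by any filter layer.",
	})
	invalidMessages = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "interop_messages_invalid_total",
		Help: "Executing messages that failed a validity check.",
	})
	// Critical: an invalid message discovered in a built block forces a reorg.
	invalidIncluded = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "interop_messages_invalid_included_total",
		Help: "Invalid executing messages found in already-built blocks.",
	})
)

func init() {
	prometheus.MustRegister(checkedMessages, invalidMessages, invalidIncluded)
}
```

An alert on any increase of the included-invalid counter covers the critical reorg case; a ratio alert of invalid over checked covers the first action item.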

## FM2: Checks Discard Valid Messages
- Description
    - Due to a bug in either the Supervisor or the Callers, some or all Executing Messages aren't being included in blocks.
    - When this happens, nothing invalid is being produced by block builders, but no Interop Messages are being included.
- Risk Assessment
    - More Negative UX and Customer Perception if Interop Messages aren't making it into the chain.
    - Failed transactions would cause customers to redrive transactions, potentially overwhelming infrastructure capacity.
- Action Items
    - Monitor for a lack of interop messages in blocks. A suspicious lack of them may indicate a failure in the filtering. See the sketch below.
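
A minimal sketch of that detection, assuming hypothetical `Block` and pending-count inputs: flag when interop transactions are pending but none have been included for a full window.

```go
package monitor

import (
	"log"
	"time"
)

// Block is a minimal, hypothetical view of a sealed block for this sketch.
type Block struct {
	Time           time.Time
	InteropTxCount int
}

// WarnOnMissingInterop flags a suspicious absence of interop messages:
// interop txs are pending, but none were included for the whole window (FM2).
func WarnOnMissingInterop(recent []Block, pendingInterop int, window time.Duration) {
	cutoff := time.Now().Add(-window)
	for _, b := range recent {
		if b.Time.After(cutoff) && b.InteropTxCount > 0 {
			return // interop messages are landing; nothing to report
		}
	}
	if pendingInterop > 0 {
		log.Printf("no interop txs included in %s despite %d pending: possible filter failure", window, pendingInterop)
	}
}
```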

## FM3a: Transaction Volume causes DOS Failures of Proxyd
- Description
    - Due to the new validation requirements on `proxyd` to check Interop Messages, an influx of Interop Messages may arrive, causing `proxyd` to become overwhelmed with work.
    - When `proxyd` becomes overwhelmed, it may reject customer requests or crash outright, affecting liveness for Tx Inclusion and RPC.
- Risk Assessment
    - Low Impact; Medium Likelihood
    - If a `proxyd` instance should go down, we should be able to replace it quickly, as it is a stateless service.
    - When `proxyd` goes down, it takes outstanding requests with it, meaning some requests fail to be handled, but excess load is shed.
- Mitigations
    - `proxyd` could feature a pressure-relief setting, where if too much time is spent waiting on the Supervisor, no additional Interop Messages will be accepted through this gateway. See the sketch below.
    - We should deploy at least one "Utility Supervisor" to respond to Checks from `proxyd` instances. The size and quantity of the Supervisor(s) could be scaled if needed. (Note: a Supervisor also requires Nodes of each network to function.)
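
Such a pressure-relief setting is proposed, not an existing `proxyd` feature; here is a minimal sketch of one, where a slow check trips the gate and interop transactions are shed at ingress until a cool-off elapses:

```go
package gate

import (
	"context"
	"errors"
	"sync/atomic"
	"time"
)

var ErrShedding = errors.New("interop checks disabled: supervisor too slow")

// PressureRelief sheds interop messages while Supervisor checks exceed a
// latency budget, so proxyd queues don't grow without bound.
type PressureRelief struct {
	Budget  time.Duration // max acceptable check latency
	CoolOff time.Duration // how long to shed after tripping
	tripped atomic.Bool
}

// Check runs one Supervisor validation via the supplied function, tripping
// the gate when the call is slower than the budget.
func (p *PressureRelief) Check(ctx context.Context, check func(context.Context) error) error {
	if p.tripped.Load() {
		return ErrShedding // shedding: reject without calling the Supervisor
	}
	start := time.Now()
	err := check(ctx)
	if time.Since(start) > p.Budget {
		p.tripped.Store(true)
		time.AfterFunc(p.CoolOff, func() { p.tripped.Store(false) })
	}
	return err
}
```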

## FM3b: Transaction Volume causes DOS Failures of Supervisor
- Description
    - In any section of infrastructure, calls to the Supervisor to Check a given Interop Message might overwhelm the Supervisor.
    - If this happens, the Supervisor may become slow to respond, slow to perform its other sync duties, or may crash outright.
    - When a Supervisor crashes, any connected Nodes can't keep their Cross-Heads up to date, and Managed Nodes won't get L1 updates either.
- Risk Assessment
    - Medium Impact; Low Likelihood
    - The Supervisor is designed to respond to Check requests. Even though it hasn't been load tested in realistic settings, there is very little computational overhead when responding to an RPC request.
    - Supervisors can be scaled and replicated to serve high-need sections of the infrastructure. Supervisors sync identically (assuming a matching L1), so two of them should be able to share traffic.
    - Chain Liveness is not threatened, because block builders (Sequencers) who cannot determine Interop Validity (e.g. if the Supervisor is unavailable) should proactively drop these transactions. *Interop Liveness* is interrupted in order to preserve chain liveness.
    - Nodes that rely on this Supervisor to advance their Cross-Heads will not be able to until the service is restored.
- Mitigations
    - Callers should avoid making requests of the Supervisor whenever possible.
    - If a message has no Access List, it certainly doesn't require Interop Validation.
    - If messages are invalid for other reasons, they can be rejected without Interop Validation.
    - Callers can establish rate limits to enforce a maximum number of Access Lists (or maximum number of Access List Items) which they will ask the Supervisor about. See the rate-limiter sketch below.
    - When the rate limit is hit, the caller can drop further interop messages in place, eliminating excess traffic.
    - We can use heuristics to allow some transactions through despite rate limiting. For example, a sender who has recently submitted a valid interop transaction may earn the ability to (check and) submit interop transactions even when the rate limit is hit.
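
A sketch of this mitigation, assuming a token-bucket limiter over Access List items (via `golang.org/x/time/rate`) with a hypothetical recent-valid-sender bypass:

```go
package limiter

import (
	"sync"
	"time"

	"golang.org/x/time/rate"
)

// InteropLimiter caps how many Access List items per second a caller will ask
// the Supervisor to check; senders with a recent valid interop tx bypass it.
type InteropLimiter struct {
	bucket *rate.Limiter
	mu     sync.Mutex
	recent map[string]time.Time // sender address -> last valid interop tx
}

func New(itemsPerSecond float64, burst int) *InteropLimiter {
	return &InteropLimiter{
		bucket: rate.NewLimiter(rate.Limit(itemsPerSecond), burst),
		recent: make(map[string]time.Time),
	}
}

// Allow reports whether a transaction's access-list items may be checked now.
// Over-limit transactions are dropped in place, shedding Supervisor traffic.
func (l *InteropLimiter) Allow(sender string, accessListItems int) bool {
	l.mu.Lock()
	last, trusted := l.recent[sender]
	l.mu.Unlock()
	if trusted && time.Since(last) < 10*time.Minute {
		return true // heuristic: recently-valid senders bypass the limit
	}
	return l.bucket.AllowN(time.Now(), accessListItems)
}

// RecordValid marks a sender as having recently submitted a valid interop tx.
func (l *InteropLimiter) RecordValid(sender string) {
	l.mu.Lock()
	l.recent[sender] = time.Now()
	l.mu.Unlock()
}
```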

## FM3c: Transaction Volume causes DOS Failures of Node
- Description
    - Transactions make it past `proxyd` and arrive at a Sentry Node or the Sequencer.
    - Due to the high volume of Interop Messages, the work to check them causes a delay in the handling of other work, or causes the Node to crash outright.
- Risk Assessment
    - Medium/Low Impact; Low Likelihood
    - It's not good if a Sequencer goes down, but the average node can crash without issue.
    - Conductor Sets keep block production healthy even when a Sequencer goes down.
- Mitigations
    - Callers should use Batched RPC requests to the Supervisor when they are regularly validating groups of Transactions (see the sketch below). This minimizes the network latency experienced, allowing other work to get done.
    - Mempool transactions which fail checks should be dropped and not retried. This prevents malicious transactions from using more than one check against the Supervisor.
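
A sketch of the batched check using go-ethereum's `rpc` client; the `supervisor_checkAccessList` method name and the boolean result shape are assumptions about the Supervisor's API, not confirmed by this doc:

```go
package checker

import (
	"context"
	"errors"

	"github.com/ethereum/go-ethereum/rpc"
)

// BatchCheck validates many access lists against the Supervisor in one round
// trip, instead of paying network latency once per transaction.
func BatchCheck(ctx context.Context, client *rpc.Client, accessLists [][]string) ([]error, error) {
	batch := make([]rpc.BatchElem, len(accessLists))
	results := make([]bool, len(accessLists))
	for i, al := range accessLists {
		batch[i] = rpc.BatchElem{
			// Assumed method name; consult the Supervisor API for the real one.
			Method: "supervisor_checkAccessList",
			Args:   []interface{}{al},
			Result: &results[i],
		}
	}
	if err := client.BatchCallContext(ctx, batch); err != nil {
		return nil, err // transport-level failure: no verdicts at all
	}
	verdicts := make([]error, len(accessLists))
	for i := range batch {
		if batch[i].Error != nil {
			verdicts[i] = batch[i].Error // per-element failure: treat as unverified
		} else if !results[i] {
			verdicts[i] = errors.New("executing message invalid") // drop, don't retry
		}
	}
	return verdicts, nil
}
```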

## FM4: Additional Checks Lead to Increased Delay
- Description
    - An interop transaction is checked once at `proxyd`, and at least once more at Sentry Node Mempool ingress.
    - It may be checked additional times if it stays in the mempool long enough.
    - Each call to the Supervisor takes some amount of time, which synchronously delays the transaction.
- Risk Assessment
    - Some latency is unavoidable; the impact depends on customer sentiment toward that latency.

# Notes From Review
- We are comfortable disabling interop as needed, when processing interop leads to negative externalities for the rest of the chain.
    - Expiring messages are the largest concern.
    - But we can have the app layer re-emit messages, so this can be solved outside of the protocol.
- One big risk we'd like to track and understand is a chaotic cycle of sequencers initiating reorgs.
    - The "smarter" a sequencer is about avoiding invalid blocks, the more reorgs get made.
    - If there were too many reorgs, chains may stomp on each other and cause widespread issues.
    - To mitigate this, we want lots of testing of the emergent behaviors, using dev networks with test sequencers.