Commit 0bd8235

feat: add docs about L1 follower node
1 parent da760fa commit 0bd8235

File tree

1 file changed: +155 −0 lines changed

src/content/docs/en/developers/guides/running-a-scroll-node.mdx

Lines changed: 155 additions & 0 deletions
@@ -139,6 +139,161 @@ Running the node in Docker might have a significant impact on node performance.
- Check logs: `docker logs --tail 10 -f l2geth-docker`
- Stop the container: `docker stop l2geth-docker`

---
## Run L2geth in L1 follower mode

<Aside type="tip">
L1 follower mode runs a node that only follows the finalized L1 chain and DA to read and derive the L2 chain. In this mode the node does not participate in the L2 network directly.
This mode is useful for reconstructing the L2 state purely from L1 data.
</Aside>

Run `l2geth` with the `--da.sync` flag. Provide blob APIs and a beacon node with:
- `--da.blob.beaconnode "<L1 beacon node>"` (recommended, if the beacon node supports historical blobs)
- `--da.blob.blobscan "https://api.blobscan.com/blobs/"` and `--da.blob.blocknative "https://api.ethernow.xyz/v1/blob/"` for mainnet
- `--da.blob.blobscan "https://api.sepolia.blobscan.com/blobs/"` for Sepolia

Strictly speaking, only one blob provider is necessary, but during testing Blobscan and Blocknative were not fully reliable. Using a beacon node with historical blob data is therefore recommended (it can be used in addition to Blobscan and Blocknative).
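Before starting a multi-day sync, it can be worth checking that the configured blob providers actually respond. A minimal sketch, assuming the beacon node exposes the standard Beacon API blob sidecar endpoint (`/eth/v1/beacon/blob_sidecars/{block_id}`); replace the placeholders with your own endpoints:

```bash
# check that the beacon node serves blob sidecars (standard Beacon API endpoint);
# replace <L1 beacon node> with your beacon node URL
curl -s "<L1 beacon node>/eth/v1/beacon/blob_sidecars/head" | head -c 200; echo

# check that the Blobscan fallback is reachable (expect an HTTP status code, not a timeout)
curl -s -o /dev/null -w "%{http_code}\n" "https://api.blobscan.com/blobs/"
```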
### Mainnet
```bash
./build/bin/geth --scroll \
    --datadir "tmp/mainnet-l2geth-datadir" \
    --gcmode archive \
    --http --http.addr "0.0.0.0" --http.port 8545 --http.api "eth,net,web3,debug,scroll" \
    --da.sync=true \
    --da.blob.blobscan "https://api.blobscan.com/blobs/" --da.blob.blocknative "https://api.ethernow.xyz/v1/blob/" \
    --da.blob.beaconnode "<L1 beacon node>" \
    --l1.endpoint "<L1 RPC node>" \
    --verbosity 3
```

A full sync takes about 2 weeks, depending on the speed of the RPC node, the beacon node, and the local machine. Progress is reported as follows for every 1000 blocks applied:

```bash
INFO [08-01|16:44:42.173] L1 sync progress blockhain height=87000 block hash=608eec..880ebd root=218215..9a58a2
```
### Sepolia
```bash
./build/bin/geth --scroll-sepolia \
    --datadir "tmp/sepolia-l2geth-datadir" \
    --gcmode archive \
    --http --http.addr "0.0.0.0" --http.port 8545 --http.api "eth,net,web3,debug,scroll" \
    --da.sync=true \
    --da.blob.blobscan "https://api.sepolia.blobscan.com/blobs/" \
    --da.blob.beaconnode "<L1 beacon node>" \
    --l1.endpoint "<L1 RPC node>" \
    --verbosity 3
```

A full sync takes about 2-3 days, depending on the speed of the RPC node, the beacon node, and the local machine. Progress is reported as follows for every 1000 blocks applied:

```bash
INFO [08-01|16:44:42.173] L1 sync progress blockhain height=87000 block hash=608eec..880ebd root=218215..9a58a2
```
### Troubleshooting

You should see something like the following shortly after starting:
- The node (APIs, geth console, etc.) will not be responsive until all L1 messages have been synced.
- However, the derivation pipeline starts right away, which can be seen in the `L1 sync progress [...]` lines.
- On Sepolia it might take a little longer (10-20 minutes) for the first `L1 sync progress [...]` line to appear, as L1 blocks are more sparse at the beginning.

```bash
INFO [09-18|13:41:34.039] Starting L1 message sync service latestProcessedBlock=20,633,529
WARN [09-18|13:41:34.551] Running initial sync of L1 messages before starting l2geth, this might take a while...
INFO [09-18|13:41:45.249] Syncing L1 messages processed=20,634,929 confirmed=20,777,179 collected=71 progress(%)=99.315
INFO [09-18|13:41:55.300] Syncing L1 messages processed=20,637,029 confirmed=20,777,179 collected=145 progress(%)=99.325
INFO [09-18|13:42:05.400] Syncing L1 messages processed=20,638,329 confirmed=20,777,179 collected=220 progress(%)=99.332
INFO [09-18|13:42:15.610] Syncing L1 messages processed=20,640,129 confirmed=20,777,179 collected=303 progress(%)=99.340
INFO [09-18|13:42:24.324] L1 sync progress "blockhain height"=1000 "block hash"=a28c48..769cee root=174edb..9d9fbd
INFO [09-18|13:42:25.555] Syncing L1 messages processed=20,641,529 confirmed=20,777,179 collected=402 progress(%)=99.347
```
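Once the initial L1 message sync has completed, the JSON-RPC endpoint becomes responsive and progress can also be polled directly. A sketch, assuming the HTTP flags from the commands above:

```bash
# returns the latest derived L2 block number; while the initial L1 message
# sync is still running, this call will not get a response
curl -s localhost:8545 -X POST -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":0}'
```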
**Temporary errors**

Errors like the one below might appear in the console, especially at the beginning of a sync. This is expected: the derivation pipeline relies on the L1 messages, and if they are not yet synced far enough, this error pops up. The pipeline continues once the L1 messages are available.

```
WARN [09-18|13:52:25.843] syncing pipeline step failed due to temporary error, retrying err="temporary: failed to process logs to DA, error: failed to get commit batch da: 7, err: failed to get L1 messages for v0 batch 7: EOF: <nil>"
```
## Limitations

When syncing in this mode, the `state root` of a block can be reproduced, but currently the `block hash` cannot. This is because the header fields `difficulty` and `extraData` are currently not stored on DA, yet they are used by the [Clique consensus](https://eips.ethereum.org/EIPS/eip-225) that the Scroll protocol relies on. This will be fixed in a future upgrade; the main implementation in l2geth is already done: https://github.com/scroll-tech/go-ethereum/pull/903 https://github.com/scroll-tech/go-ethereum/pull/913.

To verify the locally computed `state root` against mainnet (use `https://sepolia-rpc.scroll.io/` for Sepolia), compare the block headers as follows:

```bash
# query local block info
curl localhost:8545 -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","method":"eth_getHeaderByNumber","params":["0x2AF8"],"id":0}' | jq

# query mainnet block info
curl https://rpc.scroll.io -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","method":"eth_getHeaderByNumber","params":["0x2AF8"],"id":0}' | jq
```
By comparing the headers, we can see that, most importantly, **`state root`, `receiptsRoot`, and everything related to the state match**. However, the following fields will differ:

- `difficulty`, and therefore `totalDifficulty`
- `extraData`
- `size`, due to the difference in header size
- `hash`, and therefore `parentHash`
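This comparison can be scripted. The sketch below inlines the relevant fields from the example outputs for block `0x2AF8`; in practice you would fill `local_json` and `remote_json` from the two curl responses above. `get_field` is a hypothetical helper, not part of any Scroll tooling:

```bash
#!/bin/sh
# Sketch: check that the state-related header fields match between the local
# follower node and the remote RPC. Assumes pretty-printed JSON (one field
# per line, as `| jq` emits).

# hypothetical helper: pull a quoted string field out of pretty-printed JSON
get_field() {
  printf '%s\n' "$2" | sed -n "s/.*\"$1\": *\"\([^\"]*\)\".*/\1/p"
}

local_json='{
  "stateRoot": "0x0f387e78e4a7457a318c7bce7cde0b05c3609347190144a7e105ef05194ae218",
  "receiptsRoot": "0xd95b673818fa493deec414e01e610d97ee287c9421c8eff4102b1647c1a184e4",
  "hash": "0xf3cdafbe35d5e7c18d8274bddad9dd12c94b83a81cefeb82ebb73fa799ff9fcc"
}'
remote_json='{
  "stateRoot": "0x0f387e78e4a7457a318c7bce7cde0b05c3609347190144a7e105ef05194ae218",
  "receiptsRoot": "0xd95b673818fa493deec414e01e610d97ee287c9421c8eff4102b1647c1a184e4",
  "hash": "0xb7848d5b300247d7c33aeba0f1b33375e1cb3113b950dffc140945e9d3d88d58"
}'

# stateRoot and receiptsRoot must match; hash is expected to differ for now
for field in stateRoot receiptsRoot; do
  if [ "$(get_field "$field" "$local_json")" = "$(get_field "$field" "$remote_json")" ]; then
    echo "$field: match"
  else
    echo "$field: MISMATCH"
  fi
done
```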
Example local output for block 11000 (`0x2AF8`):

```json
{
  "jsonrpc": "2.0",
  "id": 0,
  "result": {
    "difficulty": "0xa",
    "extraData": "0x0102030405060708",
    "gasLimit": "0x989680",
    "gasUsed": "0xa410",
    "hash": "0xf3cdafbe35d5e7c18d8274bddad9dd12c94b83a81cefeb82ebb73fa799ff9fcc",
    "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
    "miner": "0x0000000000000000000000000000000000000000",
    "mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
    "nonce": "0x0000000000000000",
    "number": "0x2af8",
    "parentHash": "0xde244f7e8bc54c8809e6c2ce65c439b58e90baf11f6cf9aaf8df33a827bd01ab",
    "receiptsRoot": "0xd95b673818fa493deec414e01e610d97ee287c9421c8eff4102b1647c1a184e4",
    "sha3Uncles": "0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347",
    "size": "0x252",
    "stateRoot": "0x0f387e78e4a7457a318c7bce7cde0b05c3609347190144a7e105ef05194ae218",
    "timestamp": "0x6526db8e",
    "totalDifficulty": "0x1adb1",
    "transactionsRoot": "0x6a81c9342456693d57963883983bba024916f4d277392c9c1dc497e3518a78e3"
  }
}
```
Example remote output:

```json
{
  "id": 0,
  "jsonrpc": "2.0",
  "result": {
    "difficulty": "0x2",
    "extraData": "0xd883050000846765746888676f312e31392e31856c696e7578000000000000009920319c246ec8ae4d4f73f07d79f68b2890e9c2033966efe5a81aedddae12875c3170f0552f48b7e5d8e92ac828a6008b2ba7c5b9c4a0af1692337bbdc792be01",
    "gasLimit": "0x989680",
    "gasUsed": "0xa410",
    "hash": "0xb7848d5b300247d7c33aeba0f1b33375e1cb3113b950dffc140945e9d3d88d58",
    "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
    "miner": "0x0000000000000000000000000000000000000000",
    "mixHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
    "nonce": "0x0000000000000000",
    "number": "0x2af8",
    "parentHash": "0xa93e6143ab213a044eb834cdd391a6ef2c818de25b04a3839ee44a75bd28a2c7",
    "receiptsRoot": "0xd95b673818fa493deec414e01e610d97ee287c9421c8eff4102b1647c1a184e4",
    "sha3Uncles": "0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347",
    "size": "0x2ab",
    "stateRoot": "0x0f387e78e4a7457a318c7bce7cde0b05c3609347190144a7e105ef05194ae218",
    "timestamp": "0x6526db8e",
    "totalDifficulty": "0x55f1",
    "transactionsRoot": "0x6a81c9342456693d57963883983bba024916f4d277392c9c1dc497e3518a78e3"
  }
}
```
---
## Configuration Reference
