fix: correctly track reverted hashes #498
Conversation
fix reverted cache
let inner_pool = op_pool_builder.build_pool(ctx, evm_config).await?;
let reverted_cache = enable_revert_protection.then_some(setup_revert_protection(
why do we need a reverted cache? is it for easier test assertions?
no, this is a core feature of revert protection. it's so senders can track whether their txs have been evicted from the pool because they reverted
we don't expose rpc methods for transaction receipts though? rollup-boost only forwards bundles and raw transactions
if it's for debug purposes, the cache size should be configurable - right now the bundle request rate is 300 req/s for reverted transactions on mainnet, while the cache max_capacity is only 100
this pr is meant to just fix the bug, not add features. i think the cache capacity should be configured in terms of storage size (e.g. capped at 4 MB).
and i thought the plan was to expose the rpc for receipts at some point
no plans for receipts rpc
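The capacity discussion above (entry count vs. storage size) can be sketched as a byte-budgeted cache. This is a hypothetical illustration, not the actual op-rbuilder types: `RevertedCache`, `HASH_SIZE`, and the FIFO eviction policy are assumptions for the sketch only.

```rust
use std::collections::{HashSet, VecDeque};

// A tx hash is 32 bytes.
const HASH_SIZE: usize = 32;

/// Reverted-hash cache bounded by total storage size rather than entry count.
struct RevertedCache {
    max_bytes: usize,
    order: VecDeque<[u8; HASH_SIZE]>, // insertion order, for FIFO eviction
    set: HashSet<[u8; HASH_SIZE]>,    // O(1) membership checks
}

impl RevertedCache {
    fn new(max_bytes: usize) -> Self {
        Self { max_bytes, order: VecDeque::new(), set: HashSet::new() }
    }

    fn insert(&mut self, hash: [u8; HASH_SIZE]) {
        if !self.set.insert(hash) {
            return; // already tracked
        }
        self.order.push_back(hash);
        // Evict oldest entries once the byte budget is exceeded.
        while self.order.len() * HASH_SIZE > self.max_bytes {
            if let Some(old) = self.order.pop_front() {
                self.set.remove(&old);
            }
        }
    }

    fn contains(&self, hash: &[u8; HASH_SIZE]) -> bool {
        self.set.contains(hash)
    }
}

fn main() {
    // A 4 MiB budget holds 131072 32-byte hashes, versus the current 100.
    assert_eq!(4 * 1024 * 1024 / HASH_SIZE, 131072);

    // A tiny budget (64 bytes = 2 hashes) demonstrates FIFO eviction.
    let mut small = RevertedCache::new(64);
    small.insert([1u8; 32]);
    small.insert([2u8; 32]);
    small.insert([3u8; 32]); // evicts [1u8; 32]
    assert!(!small.contains(&[1u8; 32]));
    assert!(small.contains(&[2u8; 32]));
    assert!(small.contains(&[3u8; 32]));
    println!("ok");
}
```

A size-based budget keeps memory bounded regardless of request rate, which is the point of the 4 MB suggestion.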
📝 Summary
Use the new pool wrapper to manage the reverted txs cache. This fixes a bug where tracking the reverted hashes was dependent on the --builder.log-pool-transactions flag. This was not caught by our integration tests because the integration tests spin up the tx pool monitor regardless of the value of that flag.
💡 Motivation and Context
The reverted cache should be managed close to the pool, so that reverted hashes are recorded wherever evictions happen, independent of any logging configuration.
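The shape of the fix can be sketched as a pool wrapper that records reverted hashes itself on eviction, so tracking no longer depends on a separate monitor being enabled. All type and method names here are illustrative stand-ins, not the real op-rbuilder API.

```rust
use std::collections::HashSet;

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct TxHash([u8; 32]);

/// Minimal stand-in for the inner transaction pool.
struct InnerPool {
    txs: Vec<TxHash>,
}

impl InnerPool {
    /// Remove a tx by hash; returns true if it was present.
    fn remove(&mut self, hash: TxHash) -> bool {
        let before = self.txs.len();
        self.txs.retain(|h| *h != hash);
        self.txs.len() != before
    }
}

/// Wrapper that owns the reverted cache alongside the pool.
struct RevertTrackingPool {
    inner: InnerPool,
    reverted: HashSet<TxHash>,
}

impl RevertTrackingPool {
    /// Evict a reverted tx and record its hash. Because the cache is
    /// updated here, inside the wrapper, it is exercised on every
    /// eviction regardless of any logging flag.
    fn evict_reverted(&mut self, hash: TxHash) {
        if self.inner.remove(hash) {
            self.reverted.insert(hash);
        }
    }

    fn was_reverted(&self, hash: &TxHash) -> bool {
        self.reverted.contains(hash)
    }
}

fn main() {
    let tx = TxHash([7u8; 32]);
    let mut pool = RevertTrackingPool {
        inner: InnerPool { txs: vec![tx] },
        reverted: HashSet::new(),
    };
    pool.evict_reverted(tx);
    assert!(pool.was_reverted(&tx));
    println!("ok");
}
```

Keeping the cache inside the pool wrapper also means integration tests hit the same code path as production, closing the gap described in the summary.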
✅ I have completed the following steps:
make lint
make test