Multi-Threaded Scheduler Design
Abstract
The current validator implementation lacks efficient transaction execution
scheduling, operating under a global state lock that creates a single pipeline
path for the transaction lifecycle. This design constraint limits execution to
one OS thread at a time, failing to leverage available hardware parallelization
capabilities. Additionally, the pipeline presents optimization opportunities
given the nature of ephemeral nodes.
This MIMD proposes an efficient multi-threaded scheduler implementation that
utilizes available OS threads to achieve near-linear scaling with CPU core
count under low-contention scenarios (minimal account lock conflicts).
Architecture and Control Flow
The proposed scheduler centers around a thread-safe account lock database
managed by a dedicated scheduler thread. The high-level execution flow follows
these stages:
Transaction Processing Pipeline
Signature Verification: Performed on the receiving thread without acquiring locks
Account Validation: Ensures all transaction accounts exist in the accounts database without lock acquisition
Queue Submission: Transactions are enqueued for processing by the scheduler thread
Dual-Queue Processing: The scheduler thread manages two distinct queues:
External Transaction Queue: Populated by client requests
Internal Priority Queue: Contains transactions blocked by account lock conflicts
Lock Management: The scheduler maintains an in-memory account lock database supporting both exclusive and shared access patterns
Work Distribution: Transactions are dispatched to available SVM worker threads with pre-acquired account locks
Lock Release: SVM workers release account locks upon completion and signal availability to the scheduler
Queue Reevaluation: When workers complete execution, their associated priority queues are reevaluated for immediate execution or redistribution
This architecture enables optimal parallelization while preserving transaction
ordering semantics where possible.
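To make the control flow concrete, the following is a minimal Rust sketch of the scheduling loop. All names here (`SchedulerEvent`, `LockDb`, `WorkerPool`, `scheduler_loop`) are illustrative placeholders rather than the proposed types, and the two scheduler inputs described above are merged into a single enum-typed channel for brevity; it is an outline of the dispatch / park / re-evaluate cycle, not the implementation.

```rust
use std::collections::BinaryHeap;
use std::sync::mpsc::Receiver;

// Placeholder transaction: ordering by priority_fee drives the internal
// priority queue (a real transaction would carry accounts, payload, etc.).
#[derive(PartialEq, Eq, PartialOrd, Ord)]
struct Tx {
    priority_fee: u64,
}

// The two inputs to the scheduler thread, merged into one channel here.
enum SchedulerEvent {
    NewTx(Tx),         // external transaction queue (from RPC handlers)
    WorkerDone(usize), // an SVM worker finished and released its locks
}

// Stand-ins for the account lock database and the SVM worker pool.
struct LockDb;
impl LockDb {
    // Try to take every account lock the transaction needs; true on success.
    fn try_lock_all(&mut self, _tx: &Tx) -> bool {
        true
    }
}

struct WorkerPool;
impl WorkerPool {
    fn any_idle(&self) -> Option<usize> {
        Some(0)
    }
    // Hand a transaction (with pre-acquired locks) to an idle SVM worker.
    fn dispatch(&self, _worker: usize, _tx: Tx) {}
}

fn scheduler_loop(events: Receiver<SchedulerEvent>, mut locks: LockDb, workers: WorkerPool) {
    // Internal priority queue: transactions blocked by account lock
    // conflicts, ordered so the highest priority fee is retried first.
    let mut blocked: BinaryHeap<Tx> = BinaryHeap::new();

    while let Ok(event) = events.recv() {
        match event {
            SchedulerEvent::NewTx(tx) => match workers.any_idle() {
                Some(worker) if locks.try_lock_all(&tx) => workers.dispatch(worker, tx),
                // Lock conflict or no idle worker: park it by priority fee.
                _ => blocked.push(tx),
            },
            SchedulerEvent::WorkerDone(worker) => {
                // Queue reevaluation: retry blocked transactions now that
                // the finished worker has released its account locks.
                let mut still_blocked = Vec::new();
                while let Some(tx) = blocked.pop() {
                    if locks.try_lock_all(&tx) {
                        workers.dispatch(worker, tx);
                        break; // one transaction per freed worker in this sketch
                    }
                    still_blocked.push(tx);
                }
                blocked.extend(still_blocked);
            }
        }
    }
}
```

In this shape the single scheduler thread owns the lock database and both queues, which is what lets the SVM workers run without shared mutable state beyond the atomic lock words.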
Account Lock Implementation
Lock Primitive
The core synchronization primitive utilizes an atomic 32-bit integer wrapped in an Arc:
type AccountLock = Arc<AtomicU32>;
Bit Field Layout
The 32-bit integer employs the following bit field structure:
Field Specifications
Identifies the write lock holder or last read lock acquirer (supports up to 256 workers)
Tracks the number of pending lock acquisition requests
This design implements a reader-writer lock with additional metadata to
facilitate efficient scheduling decisions.
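The exact bit assignment is not given above, so the Rust sketch below is only one hypothetical packing consistent with the fields this section mentions (reader/writer state, an accessor identifier sized for 256 workers, pending acquisition requests, and the front_runs counter used later). Every field width, position, and name here is an assumption.

```rust
// Hypothetical packing of the 32-bit AccountLock word; widths are illustrative.
//
//  bits  0..8   reader count       (number of concurrent shared holders)
//  bit   8      writer flag        (exclusive lock held)
//  bits  9..17  accessor id        (write lock holder or last read lock
//                                   acquirer; 8 bits => up to 256 workers)
//  bits 17..25  pending requests   (queued lock acquisition requests)
//  bits 25..32  front_runs counter (reads admitted ahead of a waiting writer)

const READERS_SHIFT: u32 = 0;
const READERS_MASK: u32 = 0xFF;
const WRITER_BIT: u32 = 1 << 8;
const ACCESSOR_SHIFT: u32 = 9;
const ACCESSOR_MASK: u32 = 0xFF;
const PENDING_SHIFT: u32 = 17;
const PENDING_MASK: u32 = 0xFF;
const FRONT_RUNS_SHIFT: u32 = 25;
const FRONT_RUNS_MASK: u32 = 0x7F;

#[derive(Debug, Clone, Copy)]
struct LockState {
    readers: u32,
    writer: bool,
    accessor: u32,
    pending: u32,
    front_runs: u32,
}

fn unpack(word: u32) -> LockState {
    LockState {
        readers: (word >> READERS_SHIFT) & READERS_MASK,
        writer: (word & WRITER_BIT) != 0,
        accessor: (word >> ACCESSOR_SHIFT) & ACCESSOR_MASK,
        pending: (word >> PENDING_SHIFT) & PENDING_MASK,
        front_runs: (word >> FRONT_RUNS_SHIFT) & FRONT_RUNS_MASK,
    }
}

fn pack(s: LockState) -> u32 {
    ((s.readers & READERS_MASK) << READERS_SHIFT)
        | (if s.writer { WRITER_BIT } else { 0 })
        | ((s.accessor & ACCESSOR_MASK) << ACCESSOR_SHIFT)
        | ((s.pending & PENDING_MASK) << PENDING_SHIFT)
        | ((s.front_runs & FRONT_RUNS_MASK) << FRONT_RUNS_SHIFT)
}
```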
Lock Acquisition Protocol
Locks are acquired by the scheduler before a transaction is dispatched and held through execution and the resulting AccountsDB updates; the SVM worker releases them upon completion.
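The acquisition steps themselves are not spelled out above. One plausible protocol, sketched here over a deliberately simplified two-field view of the lock word (reader count plus a writer flag; the accessor, pending, and front_runs fields are omitted), is a lock-free compare-and-swap loop. Function names are illustrative, not part of the proposal.

```rust
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::Arc;

type AccountLock = Arc<AtomicU32>;

// Simplified view of the lock word for this sketch:
// bits 0..8 = reader count, bit 8 = writer flag.
const READERS_MASK: u32 = 0xFF;
const WRITER_BIT: u32 = 1 << 8;

/// Try to take the lock exclusively (no readers, no writer).
fn try_lock_exclusive(lock: &AccountLock) -> bool {
    lock.fetch_update(Ordering::AcqRel, Ordering::Acquire, |word| {
        if (word & (READERS_MASK | WRITER_BIT)) == 0 {
            Some(word | WRITER_BIT)
        } else {
            None // already read- or write-locked
        }
    })
    .is_ok()
}

/// Try to take the lock shared (no writer; reader count not saturated).
fn try_lock_shared(lock: &AccountLock) -> bool {
    lock.fetch_update(Ordering::AcqRel, Ordering::Acquire, |word| {
        let readers = word & READERS_MASK;
        if (word & WRITER_BIT) == 0 && readers < READERS_MASK {
            Some(word + 1) // the reader count lives in the low byte
        } else {
            None
        }
    })
    .is_ok()
}

/// Release: clear the writer bit, or decrement the reader count.
fn unlock(lock: &AccountLock, exclusive: bool) {
    if exclusive {
        lock.fetch_and(!WRITER_BIT, Ordering::Release);
    } else {
        lock.fetch_sub(1, Ordering::Release); // assumes at least one reader held
    }
}
```

Because fetch_update retries the closure only when the word changes underneath it, a failed attempt returns immediately, letting the scheduler park the transaction instead of blocking.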
Fairness and Front-Running Prevention
Transaction Ordering Fairness
The lock mechanism implements fairness guarantees through transaction ID registration:
Registered IDs take precedence over newer read lock acquisitions
Registered IDs prevent newer write lock acquisitions from overtaking older transactions
Controlled Front-Running
To maximize throughput while preventing starvation, the system allows limited read transaction front-running:
Front-Running Control Mechanism
Counter Increment: The front_runs counter increments with each front-running event
Threshold Enforcement: Upon reaching the configured limit, waiting write transactions block all subsequent readers
Counter Reset: Occurs upon successful write lock acquisition
Priority Fee Mechanism
Transactions blocked by account locks are inserted into priority queues ordered
by the included priority fees, enabling higher-fee transactions to front-run
lower-fee transactions operating on the same accounts.
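A small illustration of the fee ordering using Rust's standard BinaryHeap; the BlockedTx wrapper and its fields are hypothetical stand-ins for the real transaction type.

```rust
use std::cmp::Ordering;
use std::collections::BinaryHeap;

// Hypothetical wrapper: orders blocked transactions by their priority fee.
struct BlockedTx {
    priority_fee: u64,
    tx_id: u64, // stand-in for the full transaction
}

impl PartialEq for BlockedTx {
    fn eq(&self, other: &Self) -> bool {
        self.priority_fee == other.priority_fee
    }
}
impl Eq for BlockedTx {}
impl PartialOrd for BlockedTx {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}
impl Ord for BlockedTx {
    // BinaryHeap is a max-heap, so the highest fee is popped first.
    fn cmp(&self, other: &Self) -> Ordering {
        self.priority_fee.cmp(&other.priority_fee)
    }
}

fn main() {
    let mut queue: BinaryHeap<BlockedTx> = BinaryHeap::new();
    queue.push(BlockedTx { priority_fee: 5_000, tx_id: 1 });
    queue.push(BlockedTx { priority_fee: 20_000, tx_id: 2 });
    queue.push(BlockedTx { priority_fee: 1_000, tx_id: 3 });

    // The highest-fee blocked transaction is the first retry candidate.
    assert_eq!(queue.pop().unwrap().tx_id, 2);
}
```

With a max-heap keyed on the fee, the highest-fee blocked transaction is always the first candidate whenever conflicting locks free up.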
Inter-Thread Communication
The scheduler employs the Actor pattern with isolated state management,
communicating exclusively through message-passing channels:
Channel Architecture
Transaction Ingress Channel: RPC handlers → Scheduler
Work Distribution Channel: Scheduler → SVM Workers; carries transactions with pre-acquired locks for execution or simulation
Worker Availability Channel: SVM Workers → Scheduler; signals worker readiness and triggers scheduling evaluation
This design eliminates shared mutable state except for the atomic account lock
integers, reducing synchronization overhead and improving cache locality.
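The wiring below is a minimal std::sync::mpsc sketch of the three channels named above, with a single worker and placeholder message types; the names and payloads are assumptions, and lock handling is elided.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical message payloads; the real types are not specified in this MIMD.
struct Tx; // sanitized, signature-verified transaction
#[allow(dead_code)]
struct ScheduledWork {
    tx: Tx, // transaction plus (elided) pre-acquired account locks
}
type WorkerId = usize;

fn main() {
    // Transaction Ingress Channel: RPC handlers -> Scheduler
    let (ingress_tx, ingress_rx) = mpsc::channel::<Tx>();
    // Work Distribution Channel: Scheduler -> SVM workers (one worker here)
    let (work_tx, work_rx) = mpsc::channel::<ScheduledWork>();
    // Worker Availability Channel: SVM workers -> Scheduler
    let (avail_tx, avail_rx) = mpsc::channel::<WorkerId>();

    // SVM worker: execute, release locks (elided), then signal readiness.
    let worker = thread::spawn(move || {
        while let Ok(_work) = work_rx.recv() {
            // ... execute the transaction via the SVM, release account locks ...
            avail_tx.send(0).expect("scheduler is gone");
        }
    });

    // Scheduler: forward ingress work and consume availability signals.
    let scheduler = thread::spawn(move || {
        while let Ok(tx) = ingress_rx.recv() {
            work_tx.send(ScheduledWork { tx }).expect("worker is gone");
            let _ready: WorkerId = avail_rx.recv().expect("worker is gone");
            // ... trigger a scheduling evaluation of blocked queues here ...
        }
    });

    // RPC side: submit one transaction, then hang up so both threads exit.
    ingress_tx.send(Tx).unwrap();
    drop(ingress_tx);
    scheduler.join().unwrap();
    worker.join().unwrap();
}
```

Each thread owns only its receiving end and shares nothing else, matching the actor-style isolation described above.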
Internal Priority Queue Structure
The internal priority queue maintains per-worker transaction queues for blocked
transactions:
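The concrete layout of those queues is not shown above. One way to sketch it, assuming blocked transactions are parked behind the worker currently holding their conflicting locks and are identified here only by a fee and an id, is:

```rust
use std::collections::{BinaryHeap, HashMap};

type WorkerId = usize;

// Per-worker queues of blocked transactions, keyed by the worker whose
// completion is expected to free the conflicting account locks. Each entry
// is (priority_fee, tx_id), so the max-heap pops the highest fee first.
struct InternalPriorityQueue {
    per_worker: HashMap<WorkerId, BinaryHeap<(u64, u64)>>,
}

impl InternalPriorityQueue {
    fn new() -> Self {
        Self { per_worker: HashMap::new() }
    }

    /// Park a blocked transaction behind the worker holding its locks.
    fn park(&mut self, worker: WorkerId, priority_fee: u64, tx_id: u64) {
        self.per_worker.entry(worker).or_default().push((priority_fee, tx_id));
    }

    /// Called when `worker` finishes: yield its blocked transactions,
    /// highest fee first, for re-evaluation by the scheduler.
    fn drain_for(&mut self, worker: WorkerId) -> impl Iterator<Item = (u64, u64)> {
        self.per_worker
            .remove(&worker)
            .unwrap_or_default()
            .into_sorted_vec()
            .into_iter()
            .rev()
    }
}
```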
Queue Management
Performance Characteristics
Scalability
Throughput Optimization