Description
Specification
Quiche's connection `send` and `recv` involve processing one or more QUIC packets (mapped 1-to-1 to UDP datagrams), each of which can contain multiple QUIC frames.
Processing a single packet ends up dispatching into a single connection to handle its data.
The `QUICSocket.handleSocketMessage` handler therefore needs to run as fast as possible in order to drain the kernel's UDP receive buffer.
However, at the moment all of the quiche operations run on the main thread, so all of this work is done synchronously and blocks Node's main thread.
This can be quite heavy, because processing the received frames involves cryptographic decryption, and the frames sent back immediately in response require cryptographic encryption.
JS multi-threading can be quite slow, with per-call overheads around 1.4 ms: MatrixAI/js-workers#1 (comment). This means an operation needs to take longer than 1.4 ms for offloading to be worth it. It would also need to use the zero-copy transfer capability, so that buffers are transferred rather than copied.
┌───────────────────────┐
│ Main Thread │
│ │
│ ┌───────────┐ │
│ │Node Buffer│ │
│ └─────┬─────┘ │ ┌────────────────────────┐
│ │ │ │ │
│ Slice │ Copy │ │ Worker Thread │
│ │ │ │ │
│ ┌────────▼────────┐ │ Transfer │ ┌──────────────────┐ │
│ │Input ArrayBuffer├──┼──────────┼──► │ │
│ └─────────────────┘ │ │ └─────────┬────────┘ │
│ │ │ │ │
│ │ │ Compute │ │
│ │ │ │ │
│ ┌─────────────────┐ │ │ ┌─────────▼────────┐ │
│ │ ◄──┼──────────┼──┤Output ArrayBuffer│ │
│ └─────────────────┘ │ Transfer │ └──────────────────┘ │
│ │ │ │
│ │ │ │
└───────────────────────┘ └────────────────────────┘
Native multi-threading is likely to be faster. The napi-rs
bridge offers the ability to create native OS threads, separate from Node.js's libuv thread pool: the libuv pool is intended for IO, while quiche's operations are all CPU bound. However, benchmarking this will be important to understand how fast the operations are.
Naive benchmarks of quiche recv and send pairs between client and server indicated these levels of performance:
- Optimised native code: 250000 iterations per second
- Optimised FFI native code: 35000 ops per second
- Optimised js-quic: only 1896 ops per second
Each iteration is 2 recv and send pairs; it is 2 because both client and server are exercised.
The goal is to get js-quic code as close as possible to FFI native code, and perhaps exceed it.
Another source of slowdown might be the FFI overhead of napi-rs itself. That would be a separate problem though.
Additional context
- Events Refactoring (integrating js-events) #53 (comment)
- https://chat.openai.com/share/b10af494-cdc5-46f1-b845-f4a5b1d594c0
- https://github.com/nodejs/node-addon-api/blob/main/doc/threadsafe_function.md#example
- https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes
Tasks
- Investigate napi-rs threading: create a separate thread on the Rust side, and run the quiche connection in it
- Put all quiche connection processing into a single separate thread, rather than creating a new thread for each quiche connection; the primary goal is to unblock Node's main thread from processing the quiche operations, so that Node can continue processing IO
- Benchmark with a single separate thread
- Now create a thread pool, and send every quiche connection execution to the thread pool.
- If a thread pool is used, it is likely that all quiche connection memory has to be shared, rather than being pinned to a long-running thread
- If you are using a thread pool, this might get complicated to manage, as it means the native code holds state... if this is too complicated, try js-workers to see if the 1.4 ms overhead is still worth it?