
Description
We currently use a function called bufferify to convert "large" JSON objects into small Buffer chunks that can be yielded and serialized for the wire as they are processed. This works around a Node limitation: JSON.stringify can't produce strings greater than ~1GB in size (Buffers have a similar limit). However, this approach is much slower than JSON.stringify, so we try to only use it on "large" traces. The threshold at which we switch to bufferification is arbitrary and prone to issues where a trace with few steps still has too much data to JSON.stringify, causing the error. Here is an example: #3924
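For illustration only, here is a minimal sketch of what a bufferify-style generator could look like: a recursive serializer that yields small Buffer chunks instead of building one giant string. The name and shape are assumptions for this sketch, not Ganache's actual implementation:

```typescript
// Hypothetical sketch: recursively serialize a value into small Buffer
// chunks so that no single string ever approaches V8's ~1GB string limit.
function* bufferify(value: unknown): Generator<Buffer> {
  if (Array.isArray(value)) {
    yield Buffer.from("[");
    for (let i = 0; i < value.length; i++) {
      if (i > 0) yield Buffer.from(",");
      yield* bufferify(value[i]);
    }
    yield Buffer.from("]");
  } else if (value !== null && typeof value === "object") {
    yield Buffer.from("{");
    const entries = Object.entries(value as Record<string, unknown>);
    for (let i = 0; i < entries.length; i++) {
      if (i > 0) yield Buffer.from(",");
      yield Buffer.from(JSON.stringify(entries[i][0]) + ":");
      yield* bufferify(entries[i][1]);
    }
    yield Buffer.from("}");
  } else {
    // Primitives (strings, numbers, booleans, null) are small enough to
    // stringify directly.
    yield Buffer.from(JSON.stringify(value));
  }
}
```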
We could try JSON.stringify first and fall back to bufferify on failure, but there is a better way: start sending the data as soon as it is available to send. Example: #3997
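A rough sketch of that idea, assuming a plain Node http ServerResponse and a chunk-producing generator like the one above (the function name and the hardcoded envelope are illustrative, not the actual server code):

```typescript
import { ServerResponse } from "http";

// Hypothetical sketch: write each serialized chunk to the wire as soon as it
// is produced, instead of accumulating the entire payload in memory first.
function streamJsonRpcResult(
  res: ServerResponse,
  id: number,
  chunks: Iterable<Buffer>
) {
  res.writeHead(200, { "Content-Type": "application/json" });
  // The JSON-RPC envelope is written immediately...
  res.write(`{"jsonrpc":"2.0","id":${id},"result":`);
  // ...and the (potentially huge) result follows chunk by chunk.
  for (const chunk of chunks) {
    res.write(chunk);
  }
  res.end("}");
}
```

This might be called as something like `streamJsonRpcResult(res, request.id, bufferify(trace))`; the catch is that once the first write has gone out the response is committed, which is exactly the first problem below.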
However, this has some new problems:
- if we've already started sending some data and then encounter an error, what do we do? We can't change our response to a JSON-RPC error response!
- if the logic is moved from the server, where it is now, to the debug_traceTransaction implementation itself, what do we do for provider usage? It needs the JSON object itself, not Buffers or chunks of Buffers.
- for smaller traces that don't need to be bufferified, buffering all the data into a JSON object and then calling JSON.stringify might still be faster (a sketch of this hybrid follows the list).
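To illustrate that last point, a minimal sketch of the hybrid approach, assuming the oversize failure surfaces as a RangeError and reusing the hypothetical bufferify generator sketched above:

```typescript
// Hypothetical hybrid: try the fast JSON.stringify path first and only fall
// back to chunked serialization when the result is too large to stringify.
declare function bufferify(value: unknown): Generator<Buffer>; // sketched above

function* serialize(result: unknown): Generator<Buffer> {
  try {
    // Fast path: small and medium traces stringify in one shot.
    yield Buffer.from(JSON.stringify(result));
  } catch (e) {
    // V8 rejects strings past its maximum length with a RangeError
    // ("Invalid string length"); fall back to chunking in that case.
    if (e instanceof RangeError) {
      yield* bufferify(result);
    } else {
      throw e;
    }
  }
}
```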
A similar issue: #381