Description
I was curious what was causing the outlier results, especially this one:
```
test goser::capnp::bench_serialize ... bench: 32 ns/iter (+/- 19) = 14000 MB/s
```
Then I came across this:
> These benchmarks measure the cost of translating between in-memory and on-wire representations of data. For Cap'n Proto, the two representations are exactly the same, so the translation is either a no-op or a memcpy. - Source
The README for this project should probably point that out? I see one of your old articles about the project goes into more detail, in particular this explanation of Cap'n Proto:
> Cap’n Proto doesn’t really do serialization, but lays the serialized data out just like it is in memory so it has nearly zero serialization speed.
A link to the article might be helpful? The README here says this is a JSON serialization benchmark, but as that quote points out, the Cap'n Proto results aren't really representative of that kind of work.
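For anyone else who was puzzled, my rough mental model of why the Cap'n Proto "deserialize" numbers are so different is that reading a message is just wrapping the buffer. A sketch only, using a current-ish `capnp` API rather than whatever this repo actually does, and with `log_capnp` standing in for the module `capnpc` would generate from the schema:

```rust
use capnp::message::ReaderOptions;
use capnp::serialize;

fn decode(buf: &[u8]) -> capnp::Result<()> {
    // No parsing and no allocation here: the reader is just a view over `buf`,
    // which is why "deserialize" is nearly free for Cap'n Proto.
    let mut slice = buf;
    let message = serialize::read_message_from_flat_slice(&mut slice, ReaderOptions::new())?;

    // Field access happens lazily, reading straight out of `buf`.
    // `log_capnp::log::Reader` is a placeholder for the generated schema type.
    let _log = message.get_root::<log_capnp::log::Reader>()?;
    Ok(())
}
```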
I am also a little confused about the zero-copy deserialize support with Serde and MessagePack. If I understand correctly, it would be implemented similarly to the deserialize benchmark, but with a line like this dropped into the closure:
```rust
let _log: Log = ::rmp_serde::from_slice(&*buf).unwrap();
```
This only gives me a small (~10%) improvement, nothing like what Cap'n Proto shows for its deserialize benchmark. Is that a different kind of zero-copy?
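For reference, what I mean by zero-copy with Serde is something like the following, where string fields borrow from the input buffer instead of allocating. The field names are made up, not the real `Log` definition in this repo:

```rust
use serde::Deserialize;

// Hypothetical Log-like struct whose string fields borrow from the
// MessagePack buffer instead of allocating new Strings.
#[derive(Deserialize)]
struct BorrowedLog<'a> {
    #[serde(borrow)]
    address: &'a str,
    #[serde(borrow)]
    user_agent: &'a str,
    code: u32,
    size: u64,
}

fn decode(buf: &[u8]) -> BorrowedLog<'_> {
    // rmp_serde can hand out &str slices pointing into `buf`, so the String
    // allocations go away -- but it still has to walk the MessagePack
    // framing, unlike Cap'n Proto, which doesn't parse at all.
    ::rmp_serde::from_slice(buf).unwrap()
}
```

That's my current understanding of "zero-copy" on the Serde side, which would explain why it doesn't come anywhere near the Cap'n Proto numbers, but please correct me if I've got that wrong.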