Allow configuration of BoundedChannelOptions.Capacity in SocketFrameHandler #1826

@mtapkanov

Description

Is your feature request related to a problem? Please describe.

Problem

In the current implementation of SocketFrameHandler (RabbitMQ.Client 7.0.0), the channel used for outgoing messages is created with a hardcoded bounded capacity of 128:

var channel = Channel.CreateBounded<RentedMemory>(
    new BoundedChannelOptions(128) { ... });

Because this capacity is fixed, outgoing frames can accumulate in memory whenever the writer loop is delayed (backpressure, network issues, etc.).
If the process is shut down or crashes before these frames are flushed to the socket, they are silently lost.
This is particularly critical in high-throughput systems where reliability matters more than latency.
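To make the failure mode concrete, here is a minimal standalone sketch (plain System.Threading.Channels, not RabbitMQ.Client code) of what happens while the reader side is stalled:

using System;
using System.Threading.Channels;

// A bounded channel buffers up to its capacity entirely in memory.
var channel = Channel.CreateBounded<int>(new BoundedChannelOptions(128));

// With no reader running, the first 128 writes succeed and sit in memory only.
for (int i = 0; i < 128; i++)
{
    channel.Writer.TryWrite(i);
}

// The 129th write is rejected (the default FullMode is Wait, so WriteAsync
// would block instead). If the process exits now, all 128 buffered items are gone.
Console.WriteLine(channel.Writer.TryWrite(128)); // False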

Describe the solution you'd like

I’d like the channel capacity (currently hardcoded as 128) to be configurable.
For example, this could be done by adding a new optional property to AmqpTcpEndpoint, such as:

public int? SocketWriteBufferCapacity { get; set; }

This value can then be passed to BoundedChannelOptions when creating the internal channel.
The default value can remain 128 to preserve backward compatibility.
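A rough sketch of how that wiring could look (illustrative only: the helper name is made up, TFrame stands in for the library's internal RentedMemory type, and the elided options would stay as they are today):

using System.Threading.Channels;

// Sketch: SocketFrameHandler would pass the endpoint's configured capacity
// (or null) into its channel construction, falling back to today's 128.
static Channel<TFrame> CreateOutgoingChannel<TFrame>(int? configuredCapacity)
{
    return Channel.CreateBounded<TFrame>(
        new BoundedChannelOptions(configuredCapacity ?? 128)
        {
            // remaining options unchanged from the current implementation
        });
}

// e.g. SocketWriteBufferCapacity = 16 would produce a 16-slot buffer;
// byte[] is used here only as a stand-in frame type.
var outgoing = CreateOutgoingChannel<byte[]>(16);

Applications that never set the property would keep today's behavior unchanged, while those that need tighter (or larger) buffering could opt in per endpoint.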

Describe alternatives you've considered

Using the Harmony library to patch the hardcoded BoundedChannelOptions capacity. While this allowed us to dynamically adjust the value at runtime, it’s not an ideal solution because it relies on modifying the library’s internal behavior, which can be error-prone and incompatible with future versions of RabbitMQ.Client.

This alternative, while functional in certain contexts, does not provide a clean, officially supported solution, which is why a configurable option would be the preferred approach.
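For reference, the workaround was roughly of this shape (a sketch only; the capacity check and replacement value are assumptions, and the patch affects every BoundedChannelOptions created in the process, not just SocketFrameHandler's):

using System.Threading.Channels;
using HarmonyLib;

// Prefix patch on the BoundedChannelOptions(int) constructor that rewrites
// the requested capacity before the options object is built.
[HarmonyPatch(typeof(BoundedChannelOptions), MethodType.Constructor, new Type[] { typeof(int) })]
static class BoundedCapacityPatch
{
    static void Prefix(ref int capacity)
    {
        if (capacity == 128)   // heuristic: only touch the value SocketFrameHandler uses
        {
            capacity = 1024;   // the capacity we actually want
        }
    }
}

// Applied once at startup:
// new Harmony("my.app.socket-buffer-capacity").PatchAll(typeof(BoundedCapacityPatch).Assembly);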

Additional context

We observed this issue during a controlled RabbitMQ server restart in a production-like environment.
The application had written several messages into the buffer (up to the hardcoded 128 limit), and upon restart, they were not delivered.
This resulted in data loss and inconsistent state across services.

I’m happy to contribute a PR for this change if it’s accepted. Thanks!
