Unexpectedly high memory usage per channel after pileup #10879
-
We have noticed that following larger pileups, the memory consumed per channel grows and never goes back down until the node is restarted. For example, one of our clusters has ~4300 total channels with ~200 MiB of memory consumption according to the node memory details. After a pileup, with the same number of channels, memory consumption goes up to 9 GiB, a ~40x increase, and stays there even after all queues have been cleared, with channels taking up by far the most memory on a node that reports a total Runtime Used memory of 11 GiB. We do pool channels; no connection uses more than 63 and most use fewer than 10, we just have a lot of services connected. Is this a known issue, or a non-issue where the memory will be reclaimed at the last minute if needed? So far we usually resort to a rolling restart during slow hours so we don't have to find out when the next pileup happens. This is specifically on RabbitMQ 3.8.9.
Replies: 4 comments 3 replies
-
Hello, thanks for using RabbitMQ. We can't assist you unless we know more about your environment.
What does "larger pileups" mean? Do you mean you have many messages in the "Ready" state?
Finally, we can't spend time assisting users who are on an unsupported version of RabbitMQ. RabbitMQ 3.8.9 has been out of support for quite a while now.
-
@BalintHarmatAtBetssonGroup RabbitMQ 3.8 has long reached EOL. Start with collecting relevant data instead of guessing; the docs explain where to start. For a large number of connections, see this section; otherwise, only having good monitoring in place will allow you to make an informed decision. Obviously we recommend upgrading to 3.13.x, currently 3.13.1.
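As one concrete way to collect that data, below is a minimal sketch of pulling each node's per-category memory breakdown over the management HTTP API (the same information `rabbitmq-diagnostics memory_breakdown` prints). It assumes the management plugin is enabled; the host, port, and credentials are placeholders.

```python
# Minimal sketch: fetch the per-category memory breakdown for each node.
# Assumes the management plugin is enabled on localhost:15672 and that
# the credentials below have monitoring permissions (placeholders).
import requests

API = "http://localhost:15672/api"
AUTH = ("guest", "guest")  # replace with real monitoring credentials

nodes = requests.get(f"{API}/nodes", auth=AUTH, timeout=30).json()
for node in nodes:
    name = node["name"]
    detail = requests.get(f"{API}/nodes/{name}", params={"memory": "true"},
                          auth=AUTH, timeout=30).json()
    breakdown = detail.get("memory", {})
    # Keep only integer byte counts; some entries (e.g. totals) may be nested.
    categories = {k: v for k, v in breakdown.items() if isinstance(v, int)}
    print(f"== {name} ==")
    for category, bytes_used in sorted(categories.items(),
                                       key=lambda kv: kv[1], reverse=True):
        print(f"  {category}: {bytes_used / 1024**2:.1f} MiB")
```

Tracking these categories over time (before, during, and after a pileup) is what makes it possible to say whether channel processes, binaries, or something else is holding the memory.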
-
Hi, so the problem is that whenever we do a server reboot we get a high channel count of at least 12K, the node stops accepting connections, and we are not able to launch any secrets. What is the best solution to fix the issue with the high channel count?
-
@jignesh-sen RabbitMQ 4.0.x is out of community support. The best solution to fix an issue is to understand it, and we don't have any data to even make an educated guess. However, we do know that except for shovels and federation links, both of which are ultimately user-controlled, RabbitMQ does not open channels; your applications do. So it's your applications that open 12K channels, and we cannot know why. Limiting the number of channels allowed on a connection, monitoring connection churn, identifying connections that use an excessive number of channels, and ultimately identifying which applications leak or excessively use channels is the only recommendation Team RabbitMQ can provide given a three-sentence-long problem description. Making all connections your applications open use easy-to-identify names will help.
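For illustration only, here is a minimal sketch of those last two points in Python with pika: giving a connection an easy-to-identify name and capping how many channels it may open. The host, credentials, name, and the channel_max value are placeholders, not recommended settings; the broker can also enforce a global per-connection cap via the channel_max key in rabbitmq.conf.

```python
# Minimal sketch: name the connection so it is easy to identify in the
# management UI / API, and cap how many channels this connection may open.
# Host, credentials, connection name, and the cap are placeholders.
import pika

params = pika.ConnectionParameters(
    host="localhost",
    credentials=pika.PlainCredentials("guest", "guest"),
    # Shows up as the connection name in the management UI and in
    # /api/connections, making channel-leaking applications easy to spot.
    client_properties={"connection_name": "order-service"},
    # Client-side cap on channels for this connection.
    channel_max=32,
)

connection = pika.BlockingConnection(params)
channel = connection.channel()
# ... declare queues / publish / consume as usual ...
connection.close()
```

Once connections are named, watching /api/connections (each entry reports its channel count and the client-provided connection_name) makes it straightforward to spot which applications hold an unusual number of channels.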