Possible Memory Leak - heapdumps show continuous growth in compiled code being retained by dummy job #3040
Comments
Can you provide the complete source code? The one you posted is not enough, as it is not adding any jobs, and so on.
Of course:

```ts
import IORedis from 'ioredis'
import { Queue, Worker } from 'bullmq'

// producer
const myQueueName = 'my-queue'

const queue = new Queue(myQueueName, {
  connection: new IORedis({
    host: process.env.REDIS_HOST || 'localhost',
    port: Number(process.env.REDIS_PORT) || 6379,
    maxRetriesPerRequest: null,
  }),
  defaultJobOptions: {
    removeOnComplete: true,
  },
})

setInterval(() => queue.add(myQueueName, {}), 100)

// worker
new Worker(
  myQueueName,
  job => {
    // empty
  },
  {
    connection: new IORedis({
      host: process.env.REDIS_HOST || 'localhost',
      port: Number(process.env.REDIS_PORT) || 6379,
      maxRetriesPerRequest: null,
    }),
    concurrency: 5,
    limiter: {
      max: 100,
      duration: 1000,
    },
  }
)
```
Btw, jobs are not "drained"; the drained event is emitted when the queue is empty, i.e. when there are no more jobs to be processed.
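For illustration, a minimal sketch of observing that event on a worker; the queue name and connection options are placeholders matching the snippets above:

```ts
import { Worker } from 'bullmq'

const worker = new Worker(
  'my-queue',
  async job => {
    // process the job
  },
  { connection: { host: 'localhost', port: 6379, maxRetriesPerRequest: null } }
)

// 'drained' fires when the worker finds no more waiting jobs,
// i.e. the queue is empty; it is not emitted once per completed job.
worker.on('drained', () => {
  console.log('queue is empty')
})
```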
Furthermore, by definition this code is going to generate a leak:

```ts
{
  connection: new IORedis({
    host: process.env.REDIS_HOST || 'localhost',
    port: Number(process.env.REDIS_PORT) || 6379,
    maxRetriesPerRequest: null,
  }),
```

as you are passing an instance of IORedis that you never close (BullMQ only closes connections created by itself).
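A minimal sketch of the cleanup this implies, assuming you keep references to the queue, the worker, and the shared IORedis instance:

```ts
import IORedis from 'ioredis'
import { Queue, Worker } from 'bullmq'

const connection = new IORedis({ maxRetriesPerRequest: null })

const queue = new Queue('my-queue', { connection })
const worker = new Worker('my-queue', async job => {}, { connection })

async function shutdown() {
  await worker.close()     // stop processing
  await queue.close()      // release the queue's resources
  await connection.quit()  // BullMQ will not close a connection it did not create
}

process.on('SIGTERM', shutdown)
```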
I am running this code, which creates roughly 1k jobs per second, and I have kept it running for some time, about 10 minutes, running the garbage collector from time to time, and the memory stays quite stable at around 10-11 MB. So I am going to need more proof that there is indeed a memory leak in this code.

```ts
import { Queue, Worker } from "bullmq";

const queueName = "test-leaks";

// producer
const queue = new Queue(queueName, {
  connection: {
    host: "localhost",
    port: 6379,
    maxRetriesPerRequest: null,
  },
  defaultJobOptions: {
    removeOnComplete: true,
  },
});

setInterval(() => queue.add(queueName, {}), 1);

// worker
new Worker(
  queueName,
  (job) => {
    // empty
  },
  {
    connection: {
      host: "localhost",
      port: 6379,
      maxRetriesPerRequest: null,
    },
    concurrency: 5,
    limiter: {
      max: 100,
      duration: 1000,
    },
  }
);
```
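As a side note, a minimal sketch of how that kind of measurement can be taken, assuming the process is started with --expose-gc so the garbage collector can be triggered manually:

```ts
// Run alongside the producer/worker above, e.g.: node --expose-gc test-leaks.js
// Force a GC pass periodically and log the retained heap size, so real growth
// stands out from garbage that simply has not been collected yet.
setInterval(() => {
  if (global.gc) {
    global.gc()
  }
  const { heapUsed } = process.memoryUsage()
  console.log(`heapUsed: ${(heapUsed / 1024 / 1024).toFixed(1)} MB`)
}, 30_000)
```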
Now, there could be a leak, but unfortunately it is too small to be debugged. This is the state of the NodeJS ecosystem: some leaks you have to live with as long as they are small enough, because we do not have tools that can guarantee 100% that there are no leaks.
Reopened in case the author can provide more evidence.
Version
v5.39.1
Platform
NodeJS
What happened?
As seen in the heap dump comparison after ~100k dummy job completions, there is compiled code that looks like Redis commands being retained. Tested with concurrency set to 1 and 5, rate-limited to 1 or 100 jobs per second.

How to reproduce.
Relevant log output
Code of Conduct