
Commit `56182fe` — Merge branch 'master' of https://github.com/taskforcesh/bullmq into remove-ac-and-uuid (2 parents: `a667a96` + `14019dc`)


67 files changed: +1588 −531 lines

**`.github/workflows/test.yml`** (+54)

Appends a new `python-dragonflydb` job after the existing Python test job (hunk `@@ -212,3 +212,57 @@ jobs:`), running the Python test suite against DragonflyDB:

```yaml
python-dragonflydb:
  runs-on: ubuntu-latest
  name: testing python@${{ matrix.python-version }}, dragonflydb@latest
  strategy:
    matrix:
      node-version: [lts/*]
      python-version: ['3.10']
  services:
    dragonflydb:
      image: docker.dragonflydb.io/dragonflydb/dragonfly:v1.24.0
      env:
        DFLY_cluster_mode: emulated
        DFLY_lock_on_hashtags: true
        HEALTHCHECK_PORT: 6379
      ports:
        - 6379:6379
  steps:
    - name: Checkout repository
      uses: actions/checkout@c85c95e3d7251135ab7dc9ce3241c5835cc595a9 # v3
    - name: Use Node.js ${{ matrix.node-version }}
      uses: actions/setup-node@1e60f620b9541d16bece96c5465dc8ee9832be0b # v3
      with:
        node-version: ${{ matrix.node-version }}
        cache: 'yarn'
    - run: yarn install --ignore-engines --frozen-lockfile --non-interactive
    - run: yarn build
    - run: yarn copy:lua:python
    - name: Set up Python ${{ matrix.python-version }}
      uses: actions/setup-python@61a6322f88396a6271a6ee3565807d608ecaddd1 # v4
      with:
        python-version: ${{ matrix.python-version }}
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install flake8 mypy types-redis
        pip install -r python/requirements.txt
    - name: Lint with flake8
      run: |
        # stop the build if there are Python syntax errors or undefined names
        flake8 ./python --count --select=E9,F63,F7,F82 --show-source --statistics
        # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
        flake8 ./python --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
    - name: Test with pytest
      run: |
        cd python
        ./run_tests_dragonfly.sh
```

**`docs/gitbook/SUMMARY.md`** (+1)

Adds the new page to the navigation:

```diff
@@ -98,6 +98,7 @@
 * [Groups](bullmq-pro/groups/README.md)
 * [Getters](bullmq-pro/groups/getters.md)
 * [Rate limiting](bullmq-pro/groups/rate-limiting.md)
+* [Local group rate limit](bullmq-pro/groups/local-group-rate-limit.md)
 * [Concurrency](bullmq-pro/groups/concurrency.md)
 * [Local group concurrency](bullmq-pro/groups/local-group-concurrency.md)
 * [Max group size](bullmq-pro/groups/max-group-size.md)
```

**`docs/gitbook/bullmq-pro/batches.md`** (+51 −13)

````diff
@@ -4,11 +4,9 @@ description: Processing jobs in batches
 
 # Batches
 
-It is possible to configure workers so that instead of processing one job at a time they can process up to a number of jobs (a so-called _batch_) in one go.
+It is possible to configure workers so that instead of processing one job at a time they can process up to a number of jobs (a so-called _batch_) in one go. Workers using batches have slightly different semantics and behavior than normal workers, so read carefully the following examples to avoid pitfalls.
 
-Workers using batches have slightly different semantics and behavior than normal workers, so read carefully the following examples to avoid pitfalls.
-
-In order to enable batches you must pass the `batches` option with a size representing the maximum amount of jobs per batch:
+To enable batches, pass the `batch` option with a `size` property representing the maximum number of jobs per batch:
 
 ```typescript
 const worker = new WorkerPro(
@@ -26,14 +24,54 @@ const worker = new WorkerPro(
 ```
 
 {% hint style="info" %}
-There is no maximum limit for the size of the batches, however, keep in mind that there is an overhead proportional to the size of the batch, so really large batches could create performance issues. A typical value would be something between 10 and 50 jobs per batch.
+There is no strict maximum limit for the size of batches; however, keep in mind that larger batches introduce overhead proportional to their size, which could lead to performance issues. Typical batch sizes range between 10 and 50 jobs.
+{% endhint %}
+
+### New Batch Options: `minSize` and `timeout`
+
+In addition to the `size` option, two new options, `minSize` and `timeout`, provide greater control over batch processing:
+
+* `minSize`: Specifies the minimum number of jobs required before the worker processes a batch. The worker will wait until at least `minSize` jobs are available before fetching and processing them, up to the `size` limit. If fewer than `minSize` jobs are available, the worker waits indefinitely unless a timeout is also set.
+* `timeout`: Defines the maximum time (in milliseconds) the worker will wait for `minSize` jobs to accumulate. If the timeout expires before `minSize` is reached, the worker processes whatever jobs are available, up to the `size` limit. If `minSize` is not set, the `timeout` option is effectively ignored, as the worker batches only available jobs.
+
+{% hint style="info" %}
+Important: `minSize` and `timeout` are not compatible with groups. When groups are used, the worker ignores `minSize` and tries to batch available jobs without waiting.
 {% endhint %}
 
+Here is an example configuration using both `minSize` and `timeout`:
+
+```typescript
+const worker = new WorkerPro(
+  'My Queue',
+  async (job: JobPro) => {
+    const batch = job.getBatch();
+    for (let i = 0; i < batch.length; i++) {
+      const batchedJob = batch[i];
+      await doSomethingWithBatchedJob(batchedJob);
+    }
+  },
+  {
+    connection,
+    batch: {
+      size: 10, // Maximum jobs per batch
+      minSize: 5, // Wait for at least 5 jobs
+      timeout: 30_000, // Wait up to 30 seconds
+    },
+  },
+);
+```
+
+In this example:
+
+* The worker waits for at least 5 jobs to become available, up to a maximum of 10 jobs per batch.
+* If 5 or more jobs are available within 30 seconds, it processes the batch (up to 10 jobs).
+* If fewer than 5 jobs are available after 30 seconds, it processes whatever jobs are present, even if below `minSize`.
+
 ### Failing jobs
 
-When using batches, the default is that if the processor throws an exception, **all the jobs in the batch will fail.**
+When using batches, the default is that if the processor throws an exception, **all jobs in the batch will fail.**
 
-Sometimes it is useful to just fail specific jobs in a batch, we can accomplish this by using the job's method `setAsFailed`. See how the example above can be modified to fail specific jobs:
+To fail specific jobs instead, use the `setAsFailed` method on individual jobs within the batch:
 
 ```typescript
 const worker = new WorkerPro(
@@ -54,13 +92,13 @@ const worker = new WorkerPro(
 );
 ```
 
-Only the jobs that are `setAsFailed` will fail, the rest will be moved to _complete_ when the processor for the batch job completes.
+Only jobs explicitly marked with `setAsFailed` will fail; the remaining jobs in the batch will complete successfully once the processor finishes.
 
 ### Handling events
 
-Batches are handled by wrapping all the jobs in a batch into a dummy job that keeps all the jobs in an internal array. This approach simplifies the mechanics of running batches, however, it also affects things like how events are handled. For instance, if you need to listen for individual jobs that have completed or failed you must use global events, as the event handler on the worker instance will only report on the events produced by the wrapper batch job, and not the jobs themselves.
+Batches are managed by wrapping all jobs in a batch into a dummy job that holds the jobs in an internal array. This simplifies batch processing but affects event handling. For example, worker-level event listeners (e.g., `worker.on('completed', ...)`) report events for the dummy batch job, not the individual jobs within it.
 
-It is possible, however, to call the `getBatch` function in order to retrieve all the jobs that belong to a given batch.
+To retrieve the jobs in a batch from an event handler, use the `getBatch` method:
 
 ```typescript
 worker.on('completed', job => {
@@ -84,6 +122,6 @@ queueEvents.on('completed', (jobId, err) => {
 
 Currently, all worker options can be used with batches, however, there are some unsupported features that may be implemented in the future:
 
-- [Dynamic rate limit](https://docs.bullmq.io/guide/rate-limiting#manual-rate-limit)
-- [Manually processing jobs](https://docs.bullmq.io/patterns/manually-fetching-jobs)
-- [Dynamically delay jobs](https://docs.bullmq.io/patterns/process-step-jobs#delaying).
+* [Dynamic rate limit](https://docs.bullmq.io/guide/rate-limiting#manual-rate-limit)
+* [Manually processing jobs](https://docs.bullmq.io/patterns/manually-fetching-jobs)
+* [Dynamically delay jobs](https://docs.bullmq.io/patterns/process-step-jobs#delaying).
````
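The `minSize`/`timeout` semantics documented above can be summarized as a small decision function. The sketch below is illustrative only, not BullMQ Pro's actual implementation; the names `shouldDispatch` and `batchSize` are hypothetical:

```typescript
// Hypothetical sketch of the dispatch decision implied by the batch
// options documented above (not BullMQ Pro's real internals).
interface BatchOptions {
  size: number;     // maximum jobs per batch
  minSize?: number; // wait until at least this many jobs are available
  timeout?: number; // max wait (ms) for minSize to be reached
}

function shouldDispatch(
  available: number,
  waitedMs: number,
  opts: BatchOptions,
): boolean {
  if (available === 0) return false;
  // Without minSize, whatever is available is batched immediately.
  if (opts.minSize === undefined) return true;
  // Enough jobs have accumulated: dispatch up to `size` of them.
  if (available >= opts.minSize) return true;
  // Timeout expired: dispatch the partial batch anyway.
  if (opts.timeout !== undefined && waitedMs >= opts.timeout) return true;
  // Otherwise, keep waiting for more jobs.
  return false;
}

function batchSize(available: number, opts: BatchOptions): number {
  // A batch never exceeds the configured maximum size.
  return Math.min(available, opts.size);
}
```

With `{ size: 10, minSize: 5, timeout: 30_000 }`, three available jobs at t=0 keep the worker waiting, while the same three jobs at t=30s are dispatched as a partial batch.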

**`docs/gitbook/bullmq-pro/changelog.md`** (+116)

New entries added at the top of the changelog:

## [7.30.4](https://github.com/taskforcesh/bullmq-pro/compare/v7.30.3...v7.30.4) (2025-03-01)

### Bug Fixes

* **job-scheduler:** consider removing current job from wait, paused or prioritized ([#3066](https://github.com/taskforcesh/bullmq/issues/3066)) ([97cd2b1](https://github.com/taskforcesh/bullmq/commit/97cd2b147d541e0984d1c2e107110e1a9d56d9b5))

### Performance Improvements

* **delayed:** add marker once when promoting delayed jobs ([#3096](https://github.com/taskforcesh/bullmq/issues/3096)) (python) ([38912fb](https://github.com/taskforcesh/bullmq/commit/38912fba969d614eb44d05517ba2ec8bc418a16e))

## [7.30.3](https://github.com/taskforcesh/bullmq-pro/compare/v7.30.2...v7.30.3) (2025-02-21)

### Bug Fixes

* **repeat:** use JobPro class when creating delayed job ([#292](https://github.com/taskforcesh/bullmq-pro/issues/292)) ([ce9eff8](https://github.com/taskforcesh/bullmq-pro/commit/ce9eff8a7c000afb5bc23173267f44b2040a0c6a))
* **worker:** do not execute run method when no processor is defined when resuming ([#3089](https://github.com/taskforcesh/bullmq/issues/3089)) ([4a66933](https://github.com/taskforcesh/bullmq/commit/4a66933496db68a84ec7eb7c153fcedb7bd14c7b))
* **worker:** do not resume when closing ([#3080](https://github.com/taskforcesh/bullmq/issues/3080)) ([024ee0f](https://github.com/taskforcesh/bullmq/commit/024ee0f3f0e808c256712d3ccb1bcadb025eb931))
* **job:** set processedBy when moving job to active in moveToFinished ([#3077](https://github.com/taskforcesh/bullmq/issues/3077)) fixes [#3073](https://github.com/taskforcesh/bullmq/issues/3073) ([1aa970c](https://github.com/taskforcesh/bullmq/commit/1aa970ced3c55949aea6726c4ad29531089f5370))
* **drain:** pass delayed key for redis cluster ([#3074](https://github.com/taskforcesh/bullmq/issues/3074)) ([05ea32b](https://github.com/taskforcesh/bullmq/commit/05ea32b7e4f0cd4099783fd81d2b3214d7a293d5))
* **job-scheduler:** restore limit option to be saved ([#3071](https://github.com/taskforcesh/bullmq/issues/3071)) ([3e649f7](https://github.com/taskforcesh/bullmq/commit/3e649f7399514b343447ed2073cc07e4661f7390))
* **job-scheduler:** return undefined in getJobScheduler when it does not exist ([#3065](https://github.com/taskforcesh/bullmq/issues/3065)) fixes [#3062](https://github.com/taskforcesh/bullmq/issues/3062) ([548cc1c](https://github.com/taskforcesh/bullmq/commit/548cc1ce8080042b4b44009ea99108bd24193895))
* fix return type of getNextJob ([b970281](https://github.com/taskforcesh/bullmq/commit/b9702812e6961f0f5a834f66d43cfb2feabaafd8))

### Features

* **job:** add moveToWait method for manual processing ([#2978](https://github.com/taskforcesh/bullmq/issues/2978)) ([5a97491](https://github.com/taskforcesh/bullmq/commit/5a97491a0319df320b7858657e03c357284e0108))
* **queue:** support removeGlobalConcurrency method ([#3076](https://github.com/taskforcesh/bullmq/issues/3076)) ([ece8532](https://github.com/taskforcesh/bullmq/commit/ece853203adb420466dfaf3ff8bccc73fb917147))

### Performance Improvements

* **add-job:** add job into wait or prioritized state when delay is provided as 0 ([#3052](https://github.com/taskforcesh/bullmq/issues/3052)) ([3e990eb](https://github.com/taskforcesh/bullmq/commit/3e990eb742b3a12065110f33135f282711fdd7b9))

## [7.30.2](https://github.com/taskforcesh/bullmq-pro/compare/v7.30.1...v7.30.2) (2025-02-20)

### Bug Fixes

* **worker:** wait fetched jobs to be processed when closing ([#3059](https://github.com/taskforcesh/bullmq/issues/3059)) ([d4de2f5](https://github.com/taskforcesh/bullmq/commit/d4de2f5e88d57ea00274e62ab23d09f4806196f8))

## [7.30.1](https://github.com/taskforcesh/bullmq-pro/compare/v7.30.0...v7.30.1) (2025-02-20)

### Bug Fixes

* **job:** save processedBy attribute when preparing for processing ([#300](https://github.com/taskforcesh/bullmq-pro/issues/300)) ([c947f6e](https://github.com/taskforcesh/bullmq-pro/commit/c947f6eab80ecd7124e77a589e23f50909e0dee8))

# [7.30.0](https://github.com/taskforcesh/bullmq-pro/compare/v7.29.0...v7.30.0) (2025-02-19)

### Features

* **groups:** support local limiter options ([#262](https://github.com/taskforcesh/bullmq-pro/issues/262)) ([fed293c](https://github.com/taskforcesh/bullmq-pro/commit/fed293cceb575caa7be4987cb65c488faf700075))

# [7.29.0](https://github.com/taskforcesh/bullmq-pro/compare/v7.28.0...v7.29.0) (2025-02-18)

### Features

* **job-scheduler:** revert add delayed job and update in the same script ([9f0f1ba](https://github.com/taskforcesh/bullmq/commit/9f0f1ba9b17874a757ac38c1878792c0df3c5a9a))

# [7.28.0](https://github.com/taskforcesh/bullmq-pro/compare/v7.27.0...v7.28.0) (2025-02-15)

### Bug Fixes

* **worker:** evaluate if a job needs to be fetched when moving to failed ([#3043](https://github.com/taskforcesh/bullmq/issues/3043)) ([406e21c](https://github.com/taskforcesh/bullmq/commit/406e21c9aadd7670f353c1c6b102a401fc327653))
* **retry-job:** consider updating failures in job ([#3036](https://github.com/taskforcesh/bullmq/issues/3036)) ([21e8495](https://github.com/taskforcesh/bullmq/commit/21e8495b5f2bf5418d86f60b59fad25d306a0298))
* **flow-producer:** add support for skipWaitingForReady ([6d829fc](https://github.com/taskforcesh/bullmq/commit/6d829fceda9f204f193c533ffc780962692b8f16))

### Features

* **job-scheduler:** save limit option ([#3033](https://github.com/taskforcesh/bullmq/issues/3033)) ([a1571ea](https://github.com/taskforcesh/bullmq/commit/a1571ea03be6c6c41794fa272c38c29588351bbf))
* **queue:** add option to skip wait until connection ready ([e728299](https://github.com/taskforcesh/bullmq/commit/e72829922d4234b92290346dce5d33f5b98ee373))

# [7.27.0](https://github.com/taskforcesh/bullmq-pro/compare/v7.26.6...v7.27.0) (2025-02-12)

### Bug Fixes

* **worker:** avoid possible hazard in closing worker ([0f07467](https://github.com/taskforcesh/bullmq/commit/0f0746727176d7ff285ae2d1f35048109b4574c5))

### Features

* **queue-getters:** add prometheus exporter ([078ae9d](https://github.com/taskforcesh/bullmq/commit/078ae9db80f6ca64ff0a8135b57a6dc71d71cb1e))
* **job-scheduler:** save iteration count ([#3018](https://github.com/taskforcesh/bullmq/issues/3018)) ([ad5c07c](https://github.com/taskforcesh/bullmq/commit/ad5c07cc7672a3f7a7185310b1250763a5fef76b))
* **sandbox:** add support for getChildrenValues ([dcc3b06](https://github.com/taskforcesh/bullmq/commit/dcc3b0628f992546d7b93f509795e5d4eb3e1b15))

## [7.26.6](https://github.com/taskforcesh/bullmq-pro/compare/v7.26.5...v7.26.6) (2025-02-03)

### Bug Fixes

* **worker:** add missing otel trace when extending locks ([#290](https://github.com/taskforcesh/bullmq-pro/issues/290)) ([efbf948](https://github.com/taskforcesh/bullmq-pro/commit/efbf948585fee4614311db7789d4d351ecc87767))

## [7.26.5](https://github.com/taskforcesh/bullmq-pro/compare/v7.26.4...v7.26.5) (2025-02-02)

### Bug Fixes

* **worker:** remove the use of multi in extend locks ([3862075](https://github.com/taskforcesh/bullmq-pro/commit/3862075ab4e41cfa4c1f6b3f87ba50a5087f8c0d))

## [7.26.4](https://github.com/taskforcesh/bullmq-pro/compare/v7.26.3...v7.26.4) (2025-01-30)

### Bug Fixes

* **retry-job:** pass stalled key instead of limiter ([#291](https://github.com/taskforcesh/bullmq-pro/issues/291)) ([e981c69](https://github.com/taskforcesh/bullmq-pro/commit/e981c69067afa68f86be7599b3f835e53406dd9b))

(The existing entries, beginning with [7.26.3](https://github.com/taskforcesh/bullmq-pro/compare/v7.26.2...v7.26.3) (2025-01-26), follow unchanged.)
**`docs/gitbook/bullmq-pro/groups/local-group-rate-limit.md`** (new file, +28)

---
description: How to rate-limit each group with a different limit per group.
---

# Local group rate limit

Sometimes different groups require different rate limits. This can be the case, for example, when a group represents a given user in the system and, depending on the user's quota or other factors, we would like to apply a different rate limit to it.

In this case you can use a local group rate limit, which applies only to the specific groups for which it has been set up. For example:

```typescript
import { QueuePro } from '@taskforcesh/bullmq-pro';

const queue = new QueuePro('myQueue', { connection });
const groupId = 'my group';
const maxJobsPerDuration = 100;

const duration = 1000; // duration in ms.
await queue.setGroupRateLimit(groupId, maxJobsPerDuration, duration);
```

This code sets a rate limit of at most 100 jobs per second on the group "my group". Note that you can still have a ["default" rate limit](rate-limiting.md) specified for the rest of the groups; the call to `setGroupRateLimit` overrides that rate limit for the given group.

### Read more

* [Local Rate Limit Group API Reference](https://api.bullmq.pro/classes/v7.QueuePro.html#setGroupRateLimit)
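The per-group override semantics described above can be illustrated with a small in-memory sketch. This is not BullMQ Pro's implementation (the real limiter lives in Redis); the `GroupRateLimiter` class and its `tryAcquire` method are hypothetical, with `setGroupRateLimit` mirroring the documented `(groupId, maxJobsPerDuration, duration)` signature:

```typescript
// Hypothetical in-memory sketch of per-group fixed-window rate limiting:
// each group may start at most `max` jobs within any window of `duration` ms.
interface GroupLimit {
  max: number;      // max jobs per window
  duration: number; // window length in ms
}

class GroupRateLimiter {
  private limits = new Map<string, GroupLimit>();
  private windows = new Map<string, { start: number; count: number }>();

  // The default limit applies to any group without a local override.
  constructor(private defaultLimit?: GroupLimit) {}

  // Analogous to setGroupRateLimit: overrides the default for one group.
  setGroupRateLimit(groupId: string, max: number, duration: number): void {
    this.limits.set(groupId, { max, duration });
  }

  // Returns true if a job from this group may start at time `now` (ms).
  tryAcquire(groupId: string, now: number): boolean {
    const limit = this.limits.get(groupId) ?? this.defaultLimit;
    if (!limit) return true; // no limit configured for this group
    const w = this.windows.get(groupId);
    if (!w || now - w.start >= limit.duration) {
      // Start a fresh window for this group.
      this.windows.set(groupId, { start: now, count: 1 });
      return true;
    }
    if (w.count < limit.max) {
      w.count++;
      return true;
    }
    return false; // window exhausted: job must wait
  }
}
```

For instance, with a default of 2 jobs/second, calling `setGroupRateLimit('vip', 3, 1000)` lets the "vip" group start a third job within the same second while other groups are already throttled.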
