Commit 44d0be4

docs: update docs with latest changes

1 parent 5d877d7 commit 44d0be4
8 files changed: +293 −27 lines

README.md (+18 −3)

````diff
@@ -14,6 +14,7 @@
 - 🛡️ Grace period and timeouts. Serve stale data when the store is dead or slow
 - 🤓 SWR-like caching strategy
 - 🗂️ Namespaces. Group your keys by categories.
+- 🏷️ Tagging. Easy invalidations.
 - 🛑 Cache stampede protection.
 - 🏷️ Named caches
 - 📖 Well documented + handy JSDoc annotations
@@ -89,15 +90,29 @@ See the [drivers documentation](https://bentocache.dev/docs/cache-drivers) for l
 
 If your factory takes too long to execute, you can return a little bit of stale data while keeping the factory running in the background. The next time the entry is requested, it will already be computed and can be served immediately.
 
+### Tagging
+
+Allows associating a cache entry with one or more tags to simplify invalidation. Instead of managing individual keys, entries can be grouped under multiple tags and invalidated in a single operation.
+
+```ts
+await bento.getOrSet({
+  key: 'foo',
+  factory: () => getFromDb(),
+  tags: ['tag-1', 'tag-2']
+});
+
+await bento.deleteByTags({ tags: ['tag-1'] });
+```
+
 ### Namespaces
 
-The ability to create logical groups for cache keys, so you can invalidate everything at once later:
+Another way to group your keys is to use namespaces. This allows you to invalidate everything at once later:
 
 ```ts
 const users = bento.namespace('users')
 
-users.set('32', { name: 'foo' })
-users.set('33', { name: 'bar' })
+users.set({ key: '32', value: { name: 'foo' } })
+users.set({ key: '33', value: { name: 'bar' } })
 
 users.clear()
 ```
````

docs/content/docs/adaptive_caching.md (+3 −3)

````diff
@@ -31,7 +31,7 @@ const authToken = await bento.getOrSet({
   key: 'token',
   factory: async (options) => {
     const token = await fetchAccessToken();
-    options.setTtl(token.expiresIn);
+    options.setOptions({ ttl: token.expiresIn });
     return token;
   }
 });
@@ -53,9 +53,9 @@ const news = await namespace.getOrSet({
     const newsItem = await fetchNews(newsId);
 
     if (newsItem.hasBeenUpdatedRecently) {
-      options.setTtl('5m');
+      options.setOptions({ ttl: '5m' });
     } else {
-      options.setTtl('2d');
+      options.setOptions({ ttl: '2d' });
     }
 
     return newsItem;
````

docs/content/docs/db.json (+6)

````diff
@@ -41,6 +41,12 @@
     "contentPath": "./methods.md",
     "category": "Guides"
   },
+  {
+    "permalink": "tagging",
+    "title": "Tagging",
+    "contentPath": "./tags.md",
+    "category": "Guides"
+  },
   {
     "permalink": "namespaces",
     "title": "Namespaces",
````

docs/content/docs/introduction.md (+18 −1)

````diff
@@ -14,6 +14,7 @@ Bentocache is a robust multi-tier caching library for Node.js applications
 - 🛡️ Grace period and timeouts. Serve stale data when the store is dead or slow
 - 🤓 SWR-like caching strategy
 - 🗂️ Namespaces. Group your keys by categories.
+- 🏷️ Tagging. Easy invalidations.
 - 🛑 Cache stampede protection.
 - 🏷️ Named caches
 - 📖 Well documented + handy JSDoc annotations
@@ -102,9 +103,23 @@ Only a Redis driver for the bus is currently available. We probably have drivers
 
 If your factory takes too long to execute, you can return a little bit of stale data while keeping the factory running in the background. The next time the entry is requested, it will already be computed and can be served immediately.
 
+### Tagging
+
+Allows associating a cache entry with one or more tags to simplify invalidation. Instead of managing individual keys, entries can be grouped under multiple tags and invalidated in a single operation.
+
+```ts
+await bento.getOrSet({
+  key: 'foo',
+  factory: () => getFromDb(),
+  tags: ['tag-1', 'tag-2']
+});
+
+await bento.deleteByTags({ tags: ['tag-1'] });
+```
+
 ### Namespaces
 
-The ability to create logical groups for cache keys, so you can invalidate everything at once later:
+Another way to group your keys is to use namespaces. This allows you to invalidate everything at once later:
 
 ```ts
 const users = bento.namespace('users')
@@ -165,6 +180,8 @@ If you like this project, [please consider supporting it by sponsoring it](https
 
 ## Prior art and inspirations
 
+Bentocache was inspired by several other caching libraries and systems, especially [FusionCache](https://github.com/ZiggyCreatures/FusionCache), which is probably the most advanced caching library I've ever seen, no matter the language. Huge kudos to the author for his amazing work.
+
 - https://github.com/ZiggyCreatures/FusionCache
 - https://laravel.com/docs/10.x/cache
 - https://github.com/TurnerSoftware/CacheTower
````

docs/content/docs/methods.md (+67 −19)

````diff
@@ -83,25 +83,6 @@ const products = await bento.getOrSet({
 
 The `getOrSet` factory function accepts a `ctx` object as argument that can be used to do multiple things:
 
-### ctx.setTtl
-
-`setTtl` allows you to set the TTL of the key dynamically. This is useful when the TTL depends on the value itself.
-
-```ts
-const products = await bento.getOrSet({
-  key: 'token',
-  factory: async (ctx) => {
-    const token = await fetchAccessToken()
-
-    ctx.setTtl(token.expiresIn)
-
-    return token
-  }
-})
-```
-
-Auth tokens are a perfect example of this use case. The cached token should expire when the token itself expires, and we know the expiration time only after fetching the token. See the [Adaptive Caching docs](./adaptive_caching.md) for more information.
-
 ### ctx.skip
 
 Returning `skip` in a factory will not cache the value, and `getOrSet` will return `undefined` even if there is a stale item in the cache.
@@ -143,6 +124,47 @@ cache.getOrSet({
 })
 ```
 
+### ctx.setOptions
+
+`setOptions` allows you to update the options of the cache entry. This is useful when you want to update the TTL, grace period, or tags, and they depend on the value itself.
+
+```ts
+const products = await bento.getOrSet({
+  key: 'token',
+  factory: async (ctx) => {
+    const token = await fetchAccessToken()
+
+    ctx.setOptions({
+      ttl: token.expiresIn,
+      tags: ['auth', 'token'],
+    })
+
+    return token
+  }
+})
+```
+
+Auth tokens are a perfect example of this use case. The cached token should expire when the token itself expires, and we know the expiration time only after fetching the token. See the [Adaptive Caching docs](./adaptive_caching.md) for more information.
+
+### ctx.gracedEntry
+
+If a stale value is available in the cache, `ctx.gracedEntry` will contain it. This can be useful if you want to do something based on the stale value.
+
+```ts
+const products = await bento.getOrSet({
+  key: 'products',
+  factory: async (ctx) => {
+    if (ctx.gracedEntry?.value === 'bar') {
+      return 'foo'
+    }
+
+    return 'bar'
+  }
+})
+```
+
 ## getOrSetForever
 
 Same as `getOrSet`, but the value will never expire.
@@ -186,6 +208,32 @@ Delete a key from the cache.
 await bento.delete({ key: 'products' })
 ```
 
+## expire
+
+This method is slightly different from `delete`:
+
+When we delete a key, it is completely removed and forgotten. This means that even if we use grace periods, the value will no longer be available.
+
+`expire` works like `delete`, except that instead of completely removing the value, we just mark it as expired/stale but keep it for the [grace period](./grace_periods.md). For example:
+
+```ts
+// Set a value with a grace period of 6 minutes
+await cache.set({
+  key: 'hello',
+  value: 'world',
+  grace: '6m'
+})
+
+// Expire the value. It is kept in the cache but marked as STALE for 6 minutes
+await cache.expire({ key: 'hello' })
+
+// Here, a get with `grace: false` will return nothing, because we only have a stale value
+const r1 = await cache.get({ key: 'hello', grace: false })
+
+// Here it will return the value, because it is still within the grace period
+const r2 = await cache.get({ key: 'hello' })
+```
+
 ## deleteMany
 
 Delete multiple keys from the cache.
````
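The `expire` vs `delete` semantics described above can be illustrated with a minimal in-memory sketch. This is not Bentocache's implementation, just a toy model of the behavior: `delete` forgets the entry entirely, while `expire` only marks it stale so the grace period can still serve it.

```typescript
// Toy model of delete vs expire with a grace period (not Bentocache internals).
type Entry = { value: unknown; staleAt: number; graceUntil: number }

class MiniCache {
  private store = new Map<string, Entry>()

  set(key: string, value: unknown, ttlMs: number, graceMs: number) {
    const now = Date.now()
    this.store.set(key, { value, staleAt: now + ttlMs, graceUntil: now + ttlMs + graceMs })
  }

  // delete: the entry is gone, grace period or not
  delete(key: string) {
    this.store.delete(key)
  }

  // expire: keep the entry but mark it stale immediately
  expire(key: string) {
    const entry = this.store.get(key)
    if (entry) entry.staleAt = Date.now()
  }

  get(key: string, opts: { grace?: boolean } = {}) {
    const entry = this.store.get(key)
    if (!entry) return undefined
    const now = Date.now()
    if (now < entry.staleAt) return entry.value // fresh
    // stale, but still within the grace period and grace not disabled
    if (opts.grace !== false && now < entry.graceUntil) return entry.value
    return undefined
  }
}
```

After `expire`, a `get` with `grace: false` returns nothing while a plain `get` still serves the stale value, mirroring the `r1`/`r2` example above.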

docs/content/docs/options.md (+28)

````diff
@@ -132,6 +132,34 @@ const bento = new BentoCache({
 })
 ```
 
+### `l2CircuitBreakerDuration`
+
+Default: `undefined` (disabled)
+
+Levels: `global`, `store`, `operation`
+
+This option enables a simple circuit breaker system for the L2 cache. If defined, the circuit breaker will open when a call to the distributed cache fails, and it will stay open for `l2CircuitBreakerDuration` seconds.
+
+If you're not familiar with circuit breakers, to summarize very simply: if an operation on the L2 cache fails and this option is enabled, all subsequent operations on the L2 cache are rejected for `l2CircuitBreakerDuration` seconds, to avoid overloading the L2 cache with operations that are likely to fail.
+
+Once the `l2CircuitBreakerDuration` seconds have passed, the circuit breaker closes and operations on the L2 cache can resume.
+
+### `skipL2Write`
+
+Default: `false`
+
+Levels: `operation`
+
+If `true`, the L2 cache will not be called to write a value.
+
+### `skipBusNotify`
+
+Default: `false`
+
+Levels: `operation`
+
+If `true`, no notification will be sent to the bus after an operation.
+
 ### `serializer`
 
 Default: `JSON.stringify` and `JSON.parse`
````
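The circuit breaker behavior described for `l2CircuitBreakerDuration` can be sketched roughly like this. This is a simplified illustration of the pattern, not Bentocache's actual implementation; the class and method names are hypothetical:

```typescript
// Simplified circuit breaker sketch: after a failure, reject all
// further operations until `breakerDurationMs` has elapsed.
class CircuitBreaker {
  private openedAt: number | undefined

  constructor(private breakerDurationMs: number) {}

  isOpen(now = Date.now()): boolean {
    if (this.openedAt === undefined) return false
    if (now - this.openedAt >= this.breakerDurationMs) {
      this.openedAt = undefined // duration elapsed: close the breaker
      return false
    }
    return true
  }

  async run<T>(operation: () => Promise<T>): Promise<T> {
    if (this.isOpen()) throw new Error('Circuit open: skipping L2 call')
    try {
      return await operation()
    } catch (err) {
      this.openedAt = Date.now() // open the breaker on failure
      throw err
    }
  }
}
```

Wrapping every L2 call in `run` means that after one failure, subsequent calls fail fast instead of hammering a backend that is likely still down.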

docs/content/docs/tags.md (new file, +137)

---
summary: Associate tags with your cache keys to easily invalidate a bunch of keys at once
---

# Tags

:::warning
Tags are available since v1.2.0 and are still **experimental**.

We will **not** make breaking changes without a major version, but no guarantees are made about the stability of this feature yet.

If you find any bugs, please report them on GitHub issues.
:::

Tagging allows associating a cache entry with one or more tags to simplify invalidation. Instead of managing individual keys, entries can be grouped under multiple tags and invalidated in a single operation.

## Usage

```ts
await bento.getOrSet({
  key: 'foo',
  factory: () => getFromDb(),
  tags: ['tag-1', 'tag-2']
});

await bento.set({ key: 'foo', value: 'bar', tags: ['tag-1'] });
```

To invalidate all entries linked to a tag:

```ts
await bento.deleteByTags({ tags: ['tag-1'] });
```

Now, imagine that the tags depend on the cached value itself. In that case, you can use [adaptive caching](./adaptive_caching.md) to update the tags dynamically based on the computed value.

```ts
const product = await bento.getOrSet({
  key: `product:${id}`,
  factory: async (ctx) => {
    const product = await fetchProduct(id);
    ctx.setOptions({ tags: product.tags });
    return product;
  }
})
```

## How it works

If you are interested in how Bentocache handles tags internally, read on.

Generally, there are two ways to implement tagging in a cache system:

- **Server-side tagging**: The cache backend (e.g., Redis, Memcached, databases) is responsible for managing tags and their associated entries. However, most distributed caches do not natively support tagging. When it's not supported, workarounds exist, but they are either inefficient or complex to implement.

- **Client-side tagging**: The caching library manages tags internally. This is the approach used by Bentocache.

Bentocache implements **client-side tagging**, making it fully backend-agnostic. Instead of relying on the cache backend to track and delete entries by tags, Bentocache tracks invalidation timestamps for each tag and filters out stale data dynamically.

This means all Bentocache drivers automatically support tagging without any modification. If someone implements a custom driver, tagging will work out of the box, without requiring any additional logic.

### Why avoid server-side tagging

Among all the cache backends Bentocache supports, none provide a native tagging system without significant overhead. Something could probably be hacked together on top of each driver, but it would likely be inefficient and fairly complex to implement, depending on the backend.

For example:

- In Redis, tagging could be hacked together using Redis sets, but this would require complex management and would not be efficient for large datasets.
- In databases, a separate table mapping cache keys to tags could be used, but this would significantly increase query complexity and also reduce performance.

By performance, I mean that to delete all keys associated with a tag, you'd typically need to run a query like:

```sql
DELETE FROM cache WHERE 'my-tag' = ANY(tags);
```

This approach does not scale in a distributed cache with millions of entries, as scanning large datasets in real time would be extremely slow and inefficient.

### How Bentocache handles tags

Instead of directly deleting entries with a given tag, Bentocache uses a more efficient approach.

The core idea is pretty simple:

- When a tag is invalidated, Bentocache stores an **invalidation timestamp** in the cache.
- When fetching an entry, Bentocache checks whether it was cached before or after its associated tags were invalidated.

Let's take a concrete example. Here we just cached an entry with the tags `tag-1` and `tag-2`:

```ts
await bento.getOrSet({
  key: 'foo',
  factory: () => getFromDb(),
  tags: ['tag-1', 'tag-2']
});
```

Internally, Bentocache stores something like:

```ts
foo = { value: 'bar', tags: ['tag-1', 'tag-2'], createdAt: 1700000 }
```

Note that we also store the creation date of the entry as `createdAt`.

Now, we invalidate the `tag-1` tag:

```ts
await bento.deleteByTags({ tags: ['tag-1'] });
```

Instead of scanning and deleting every entry associated with `tag-1`, Bentocache simply stores the invalidation timestamp under a special cache key:

```ts
__bentocache:tags:tag-1 = { invalidatedAt: 1701234 }
```

So, we store the invalidation timestamp of the tag under the key `__bentocache:tags:tag-1`. This means that any cache entry associated with `tag-1` created before `1701234` is now considered stale.

Now, when fetching an entry, Bentocache checks if it was created before any of its tags was invalidated. If it was, Bentocache considers the entry stale and ignores it.

In fact, the implementation is a bit more complex than that, but that's the general idea.
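The timestamp comparison described above can be sketched like this. This is an illustration of the idea, not Bentocache's internals; the names are hypothetical:

```typescript
// Sketch of client-side tag invalidation: an entry is stale if any of
// its tags was invalidated at or after the entry's creation time.
type TaggedEntry = { value: unknown; tags: string[]; createdAt: number }

// Invalidation timestamps, conceptually stored under special cache
// keys like `__bentocache:tags:tag-1`.
const tagInvalidations = new Map<string, number>()

function deleteByTag(tag: string, now = Date.now()) {
  // No scan over entries: just record when the tag was invalidated.
  tagInvalidations.set(tag, now)
}

function isStale(entry: TaggedEntry): boolean {
  return entry.tags.some((tag) => {
    const invalidatedAt = tagInvalidations.get(tag)
    return invalidatedAt !== undefined && invalidatedAt >= entry.createdAt
  })
}
```

With the example above, an entry created at `1700000` with tags `tag-1` and `tag-2` becomes stale as soon as `deleteByTag('tag-1', 1701234)` records the newer invalidation timestamp.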
## Limitations

The main limitation of this system is that you should avoid using too many tags on a single entry. The more tags you use per entry, the more invalidation timestamps Bentocache needs to store, and especially to check when fetching an entry. This can increase lookup times and impact performance.

In fact, the same issue exists in other systems like Loki, OpenTelemetry, and TimescaleDB, where it's known as the "high cardinality" problem. To maintain optimal performance, it's recommended to **keep the number of tags per entry reasonable**.

## Acknowledgements

The concept of client-side tagging in Bentocache was **heavily inspired** by the huge amount of work done by Jody Donetti on [FusionCache](https://github.com/ZiggyCreatures/FusionCache).

Reading his detailed explanations and discussions on GitHub provided invaluable insights into the challenges and solutions involved in implementing an efficient tagging system.

A **huge thanks** for sharing his expertise and paving the way for innovative caching strategies.
