Commit 1c2e4e4: docs: update for new version

Parent: f3d48d3

15 files changed, +215 −255 lines

.github/workflows/checks.yml (+1 −1)

```diff
@@ -12,7 +12,7 @@ jobs:
       - name: Setup Node.js
         uses: actions/setup-node@v4
         with:
-          node-version: 21
+          node-version:

       - name: Install pnpm
         run: |
```

docs/content/docs/adaptive_caching.md (+26 −16)

````diff
@@ -9,10 +9,14 @@ Adaptive caching is a method to dynamically change the cache options based on th
 For example, authentication tokens are a perfect example of this use case. Consider the following scenario:

 ```ts
-const authToken = await bento.getOrSet('token', async () => {
-  const token = await fetchAccessToken()
-  return token
-}, { ttl: '10m' })
+const authToken = await bento.getOrSet({
+  key: 'token',
+  factory: async () => {
+    const token = await fetchAccessToken()
+    return token
+  },
+  ttl: '10m'
+})
 ```

 In this example, we are fetching an authentication token that will expire after some time. The problem is, we have no idea when the token will expire until we fetch it. So, we decide to cache the token for 10 minutes, but this approach has multiple issues:
@@ -23,10 +27,13 @@ In this example, we are fetching an authentication token that will expire after
 This is where adaptive caching comes in. Instead of setting a fixed TTL, we can set it dynamically based on the token's expiration time:

 ```ts
-const authToken = await bento.getOrSet('token', async (options) => {
-  const token = await fetchAccessToken();
-  options.setTtl(token.expiresIn);
-  return token;
+const authToken = await bento.getOrSet({
+  key: 'token',
+  factory: async (options) => {
+    const token = await fetchAccessToken();
+    options.setTtl(token.expiresIn);
+    return token;
+  }
 });
 ```

@@ -40,15 +47,18 @@ Let's see how we can achieve this with BentoCache:

 ```ts
 const namespace = bento.namespace('news');
-const news = await namespace.getOrSet(newsId, async (options) => {
-  const newsItem = await fetchNews(newsId);
+const news = await namespace.getOrSet({
+  key: newsId,
+  factory: async (options) => {
+    const newsItem = await fetchNews(newsId);

-  if (newsItem.hasBeenUpdatedRecently) {
-    options.setTtl('5m');
-  } else {
-    options.setTtl('2d');
-  }
+    if (newsItem.hasBeenUpdatedRecently) {
+      options.setTtl('5m');
+    } else {
+      options.setTtl('2d');
+    }

-  return newsItem;
+    return newsItem;
+  }
 });
 ```
````
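The adaptive-caching changes above move `getOrSet` to an options-object signature where the factory can call `options.setTtl()` after it has fetched the data. The mechanism can be pictured with a small self-contained sketch; `MiniCache` and its types are invented for illustration and are not BentoCache's actual internals:

```typescript
// Sketch of adaptive TTL resolution: the factory decides the TTL once the
// data is in hand. Invented for illustration; not BentoCache's internals.
type FactoryOptions = { setTtl: (ttlMs: number) => void }

interface Entry<T> {
  value: T
  expiresAt: number
}

class MiniCache {
  private store = new Map<string, Entry<unknown>>()

  async getOrSet<T>(opts: {
    key: string
    factory: (options: FactoryOptions) => Promise<T> | T
    ttl?: number
  }): Promise<T> {
    const hit = this.store.get(opts.key)
    if (hit && hit.expiresAt > Date.now()) return hit.value as T

    // Start from the default TTL; the factory may override it via setTtl().
    let ttl = opts.ttl ?? 60_000
    const value = await opts.factory({ setTtl: (t) => { ttl = t } })
    this.store.set(opts.key, { value, expiresAt: Date.now() + ttl })
    return value
  }
}

// Usage mirroring the token example: the cache lifetime follows the token's
// own expiry instead of a fixed guess.
const cache = new MiniCache()
const authToken = await cache.getOrSet({
  key: 'token',
  factory: async (options) => {
    const token = { value: 'abc', expiresIn: 5_000 } // stand-in for fetchAccessToken()
    options.setTtl(token.expiresIn)
    return token.value
  },
})
```

The design point is that the TTL is resolved after the factory runs, so the cached lifetime can depend on the fetched payload itself.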

docs/content/docs/cache_drivers.md (+27 −27)

````diff
@@ -50,9 +50,9 @@ const bento = new BentoCache({
 })
 ```

-| Option | Description | Default |
-| --- | --- | --- |
-| `connection` | The connection options to use to connect to Redis or an instance of `ioredis` | N/A |
+| Option       | Description                                                                   | Default |
+|--------------|-------------------------------------------------------------------------------|---------|
+| `connection` | The connection options to use to connect to Redis or an instance of `ioredis` | N/A     |

 ## Filesystem

@@ -74,10 +74,10 @@ const bento = new BentoCache({
 })
 ```

-| Option | Description | Default |
-| --- | --- | --- |
-| `directory` | The directory where the cache files will be stored. | N/A |
-| `pruneInterval` | The interval in milliseconds to prune expired entries. false to disable. | false |
+| Option          | Description                                                              | Default |
+|-----------------|--------------------------------------------------------------------------|---------|
+| `directory`     | The directory where the cache files will be stored.                      | N/A     |
+| `pruneInterval` | The interval in milliseconds to prune expired entries. false to disable. | false   |

 ### Prune Interval

@@ -105,11 +105,11 @@ const bento = new BentoCache({
 })
 ```

-| Option | Description | Default |
-| --- | --- | --- |
-| `maxSize` | The maximum size of the cache **in bytes**. | N/A |
-| `maxItems` | The maximum number of entries that the cache can contain. Note that fewer items may be stored if you are also using `maxSize` and the cache is full. | N/A |
-| `maxEntrySize` | The maximum size of a single entry in bytes. | N/A |
+| Option         | Description                                                                                                                                          | Default |
+|----------------|------------------------------------------------------------------------------------------------------------------------------------------------------|---------|
+| `maxSize`      | The maximum size of the cache **in bytes**.                                                                                                          | N/A     |
+| `maxItems`     | The maximum number of entries that the cache can contain. Note that fewer items may be stored if you are also using `maxSize` and the cache is full. | N/A     |
+| `maxEntrySize` | The maximum size of a single entry in bytes.                                                                                                         | N/A     |

 ## DynamoDB

@@ -143,16 +143,16 @@ You will also need to create a DynamoDB table with a string partition key named

 **Make sure to also enable [Time To Live (TTL)](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html) on the table, on the `ttl` attribute. This will allow DynamoDB to automatically delete expired items.**

-| Option | Description | Default |
-| --- | --- | --- |
-| `table.name` | The name of the table that will be used to store the cache. | `cache` |
-| `credentials` | The credentials to use to connect to DynamoDB. | N/A |
-| `endpoint` | The endpoint to use to connect to DynamoDB. | N/A |
-| `region` | The region to use to connect to DynamoDB. | N/A |
+| Option        | Description                                                 | Default |
+|---------------|-------------------------------------------------------------|---------|
+| `table.name`  | The name of the table that will be used to store the cache. | `cache` |
+| `credentials` | The credentials to use to connect to DynamoDB.              | N/A     |
+| `endpoint`    | The endpoint to use to connect to DynamoDB.                 | N/A     |
+| `region`      | The region to use to connect to DynamoDB.                   | N/A     |


 :::warning
-Be careful with the `.clear()` function of the DynamoDB driver. We do not recommend using it. Dynamo does not offer a "native" `clear`, so we are forced to make several API calls to: retrieve the keys and delete them, 25 by 25 (max per `BatchWriteItemCommand`).
+You should be careful with the `.clear()` function of the DynamoDB driver. We do not recommend using it. Dynamo does not offer a "native" `clear`, so we are forced to make several API calls to: retrieve the keys and delete them, 25 by 25 (max per `BatchWriteItemCommand`).

 So using this function can be costly, both in terms of execution time and API request cost. And also pose rate-limiting problems. Use at your own risk.
 :::
@@ -163,18 +163,18 @@ We offer several drivers to use a database as a cache. The database store should

 :::note

-Note that you can easily create your own adapter by implementing the `DatabaseAdapter` interface if you are using another library not supported by Bentocache. See the [documentation](/docs/custom-cache-driver) for more details.
+Note that you can easily create your own adapter by implementing the `DatabaseAdapter` interface if you are using another library not supported by Bentocache. See the [documentation](./extend/custom_cache_driver.md) for more details.

 :::

 All SQL drivers accept the following options:

-| Option | Description | Default |
-| --- | --- | --- |
-| `tableName` | The name of the table that will be used to store the cache. | `bentocache` |
-| `autoCreateTable` | If the cache table should be automatically created if it does not exist. | `true` |
-| `connection` | An instance of `knex` or `Kysely` based on the driver. | N/A |
-| `pruneInterval` | The [Duration](./options.md#ttl-formats) in milliseconds to prune expired entries. | false |
+| Option            | Description                                                                        | Default      |
+|-------------------|------------------------------------------------------------------------------------|--------------|
+| `tableName`       | The name of the table that will be used to store the cache.                        | `bentocache` |
+| `autoCreateTable` | If the cache table should be automatically created if it does not exist.           | `true`       |
+| `connection`      | An instance of `knex` or `Kysely` based on the driver.                             | N/A          |
+| `pruneInterval`   | The [Duration](./options.md#ttl-formats) in milliseconds to prune expired entries. | false        |

 ### Knex

@@ -226,7 +226,7 @@ const bento = new BentoCache({

 ### Orchid ORM

-You must provide a Orchid ORM instance to use the Orchid driver. Feel free to check the [Orchid ORM documentation](https://orchid-orm.netlify.app/) for more details about the configuration. Orchid support the following databases : PostgreSQL.
+You must provide an Orchid ORM instance to use the Orchid driver. Feel free to check the [Orchid ORM documentation](https://orchid-orm.netlify.app/) for more details about the configuration. Orchid support the following databases : PostgreSQL.

 You will need to install `orchid-orm` to use this driver.
````
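Both the filesystem and SQL driver tables above expose a `pruneInterval` option. The idea behind such an option is a periodic sweep that deletes expired entries; the sketch below is a hypothetical in-memory store written purely to illustrate that pattern, not how the real file or SQL drivers work:

```typescript
// Illustrative pruning sketch; the real drivers sweep files or SQL rows.
interface StoredRow {
  value: string
  expiresAt: number | null // null means "never expires"
}

class PrunableStore {
  private rows = new Map<string, StoredRow>()
  private timer?: ReturnType<typeof setInterval>

  // `false` disables periodic pruning, mirroring the documented default.
  constructor(pruneInterval: number | false) {
    if (pruneInterval !== false) {
      this.timer = setInterval(() => this.prune(), pruneInterval)
    }
  }

  set(key: string, value: string, ttlMs?: number) {
    this.rows.set(key, { value, expiresAt: ttlMs ? Date.now() + ttlMs : null })
  }

  // Delete every entry whose expiry is in the past.
  prune() {
    const now = Date.now()
    for (const [key, row] of this.rows) {
      if (row.expiresAt !== null && row.expiresAt <= now) this.rows.delete(key)
    }
  }

  size() {
    return this.rows.size
  }

  dispose() {
    if (this.timer) clearInterval(this.timer)
  }
}
```

Without such a sweep, expired entries are only removed lazily when read, so a write-heavy cache can accumulate dead data on disk or in the table.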

docs/content/docs/extend/custom_cache_driver.md (+2 −2)

````diff
@@ -65,9 +65,9 @@ Similarly, the `L1CacheDriver` interface is the same, except that it is not asyn

 So this should be quite easy to implement. Feel free to take a lot at [the existing drivers](https://github.com/Julien-R44/bentocache/tree/main/packages/bentocache/src/drivers) implementations for inspiration.

-Also note that your driver will receive two additional parameters in the constructor : `ttl` and `prefix`. These parameters are common to every drivers and their purpose is explained in the [options](../options.md) page.
+Also note that your driver will receive two additional parameters in the constructor : `ttl` and `prefix`. These parameters are common to every driver and their purpose is explained in the [options](../options.md) page.

-Once you defined you driver, you can create a factory function that will be used by Bentocache to create instances of your driver at runtime. The factory function must be something like this:
+Once you defined your driver, you can create a factory function that will be used by Bentocache to create instances of your driver at runtime. The factory function must be something like this:

 ```ts
 import type { CreateDriverResult } from 'bentocache/types'
````
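The hunk above notes that drivers receive `ttl` and `prefix` in their constructor and are registered through a factory function. That shape can be sketched as follows; the `SimpleDriver` interface and `mapDriver` helper are invented for illustration and are not the exact `L1CacheDriver` or `CreateDriverResult` contracts:

```typescript
// Simplified driver shape for illustration; not the exact BentoCache contract.
interface SimpleDriver {
  get(key: string): string | undefined
  set(key: string, value: string): void
  delete(key: string): boolean
  clear(): void
}

// As the docs note, drivers receive common constructor parameters such as
// `prefix`; here the prefix simply scopes every key.
class MapDriver implements SimpleDriver {
  private store = new Map<string, string>()

  constructor(private prefix: string = '') {}

  private scoped(key: string) {
    return this.prefix + key
  }

  get(key: string) {
    return this.store.get(this.scoped(key))
  }

  set(key: string, value: string) {
    this.store.set(this.scoped(key), value)
  }

  delete(key: string) {
    return this.store.delete(this.scoped(key))
  }

  clear() {
    this.store.clear()
  }
}

// A factory wraps construction so the library can instantiate drivers lazily,
// at runtime, with whatever options the user configured.
function mapDriver(options: { prefix?: string } = {}) {
  return { factory: () => new MapDriver(options.prefix) }
}
```

The factory indirection is the design point: configuration happens eagerly, but the driver itself is only constructed when the cache store is actually created.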

docs/content/docs/grace_periods.md (+26 −7)

````diff
@@ -14,7 +14,7 @@ Then, when the cache entry is requested again but not found, you will have to fe

 But what if your source of truth is down? Wouldn't it be nice to be able to serve stale data for a little while, until your source of truth is back up?

-This is what grace periods are for. Basically, you can specify a grace period when you set a cache entry. This grace period will be the duration during which an entry will still be considered as servable, even if it is stale, if anything goes wrong.
+This is what grace periods are for. This grace period will be the duration during which an entry will still be available, even if it is stale.

 ## How to use grace periods

@@ -23,26 +23,45 @@ Grace periods can be configured at global, driver and operations levels. See the
 Let's imagine you have a simple controller that fetches a user from the database, and caches it for 10 minutes:

 ```ts
-bento.getOrSet('users:1', () => User.find(1), { ttl: '10m' })
+bento.getOrSet({
+  key: 'users:1',
+  factory: () => User.find(1),
+  ttl: '10m',
+})
 ```

 Now, let's say your cache is empty. Someone request the user with id 1, but, the database is down, for any reasons. Without grace periods, this request will just fail and **will display an error to the user**.

 Now what if we add a grace period ?

 ```ts
-bento.getOrSet('users:1', () => User.find(1), {
+bento.getOrSet({
+  key: 'users:1',
+  factory: () => User.find(1),
   ttl: '10m',
-  gracePeriod: { enabled: true, duration: '6h', fallbackDuration: '5m' }
+  grace: '6h',
 })
 ```

 - First time this code is executed, user will be fetched from database, stored in cache for **10 minutes** with a grace period of **6 hours**.
 - **11 minutes later**, someone request the same user. The cache entry is logically expired, but the grace period is still valid.
-- So, we try to call the factory again to refresh the cache entry. But oops, **the database is down** ( or factory is failling for any other reasons ).
+- So, we try to call the factory again to refresh the cache entry. But oops, **the database is down** ( or factory is failing for any other reasons ).
 - Since we are still in the grace period of 6h, we will serve the stale data from the cache.
-- We also reconsider the stale cache entry as valid for 5m ( the `fallbackDuration` property ). In other words, subsequent requests for the same user will serve the same stale data for 5 minutes. This prevents overwhelming the database with multiple calls while it's down or overloaded.

 As a result, instead of displaying an error page to the user, we are serving data that's a little out of date. Depending on your use case, this can result in a much better user experience.

-So, grace period is a practical solution that can help you maintain a positive user experience even during unforeseen downtimes or heavy loads on your data source. By understanding how to configure this feature, you can make your application more resilient and user-friendly.
+## Backoff strategy
+
+If the factory is failing, you can also use a backoff strategy to retry the factory only after a certain amount of time. This is useful to avoid hammering your database or API when it's down.
+
+```ts
+bento.getOrSet({
+  key: 'users:1',
+  factory: () => User.find(1),
+  ttl: '10m',
+  grace: '6h',
+  graceBackoff: '5m',
+})
+```
+
+In this example, if the factory fails, we will wait 5 minutes before trying again.
````
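The grace and backoff behavior documented above boils down to three rules: serve fresh data when possible, fall back to stale data inside the grace window when the factory throws, and skip retries while a backoff window is open. A self-contained model of those rules, invented for illustration and not BentoCache's implementation:

```typescript
// Illustrative grace-period model; not BentoCache's actual implementation.
interface GraceEntry<T> {
  value: T
  expiresAt: number    // normal TTL boundary
  graceUntil: number   // stale entry is still servable until here
  backoffUntil: number // after a factory failure, don't retry before here
}

class GraceCache {
  private store = new Map<string, GraceEntry<unknown>>()

  async getOrSet<T>(opts: {
    key: string
    factory: () => Promise<T> | T
    ttl: number
    grace: number
    graceBackoff?: number
  }): Promise<T> {
    const now = Date.now()
    const hit = this.store.get(opts.key) as GraceEntry<T> | undefined

    // Fresh hit, or a recent failure still inside its backoff window:
    // serve what we have without touching the source of truth.
    if (hit && (hit.expiresAt > now || hit.backoffUntil > now)) return hit.value

    try {
      const value = await opts.factory()
      this.store.set(opts.key, {
        value,
        expiresAt: now + opts.ttl,
        graceUntil: now + opts.grace,
        backoffUntil: 0,
      })
      return value
    } catch (error) {
      // Factory failed: serve stale data if the grace window is still open,
      // and open a backoff window so the next requests don't retry at once.
      if (hit && hit.graceUntil > now) {
        hit.backoffUntil = now + (opts.graceBackoff ?? 0)
        return hit.value
      }
      throw error
    }
  }
}
```

Note how the backoff window doubles as stampede protection for a failing source: while it is open, every caller gets the stale value without re-running the factory.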

docs/content/docs/introduction.md (+7 −10)

````diff
@@ -26,7 +26,7 @@ There are already caching libraries for Node: [`keyv`](https://keyv.org/), [`cac

 Not to knock them, on the contrary, they have their use cases and cool. Some are even "marketed" as such and are still very handy for simple caching system.

-Bentocache, on the other hand, is a **full-featured caching library**. We indeed have this notion of unified access to differents drivers, but in addition to that, we have a ton of features that will allow you to do robust caching.
+Bentocache, on the other hand, is a **full-featured caching library**. We indeed have this notion of unified access to different drivers, but in addition to that, we have a ton of features that will allow you to do robust caching.

 With that in mind, then I believe there is no serious alternative to Bentocache in the JavaScript ecosystem. Which is regrettable, because all other languages have powerful solutions. This is why Bentocache was created.

@@ -84,7 +84,7 @@ Multi-layer caching allows you to combine the speed of in-memory caching with th

 Many drivers available to suit all situations: Redis, Upstash, Database (MySQL, SQLite, PostgreSQL), DynamoDB, Filesystem, In-memory (LRU Cache), Vercel KV...

-See the [drivers documentation](./cache_drivers.md) for list of available drivers. Also very easy to extend the library and [add your own driver](tbd)
+See the [drivers documentation](./cache_drivers.md) for list of available drivers. Also, very easy to extend the library and [add your own driver](./extend/custom_cache_driver.md)

 <!-- :::warning
 Only a Redis driver for the bus is currently available. We probably have drivers for other backends like Zookeeper, Kafka, RabbitMQ... Let us know with an issue if you are interested in this.
@@ -97,7 +97,7 @@ Only a Redis driver for the bus is currently available. We probably have drivers

 - [Cache stamped prevention](./stampede_protection.md): Ensuring that only one factory is executed at the same time.

-- [Retry queue](./multi_tier.md#retry-queue-strategy) : When a application fails to publish something to the bus, it is added to a queue and retried later.
+- [Retry queue](./multi_tier.md#retry-queue-strategy) : When an application fails to publish something to the bus, it is added to a queue and retried later.

 ### Timeouts

@@ -110,8 +110,8 @@ The ability to create logical groups for cache keys together, so you can invalid
 ```ts
 const users = bento.namespace('users')

-users.set('32', { name: 'foo' })
-users.set('33', { name: 'bar' })
+users.set({ key: '32', value: { name: 'foo' } })
+users.set({ key: '33', value: { name: 'bar' } })

 users.clear()
 ```
@@ -135,14 +135,14 @@ All TTLs can be passed in a human-readable string format. We use [lukeed/ms](htt
 ```ts
 bento.getOrSet({
   key: 'foo',
-  ttl: '2.5h',
   factory: () => getFromDb(),
+  ttl: '2.5h',
 })
 ```

 In this case, when only 20% or less of the TTL remains and the entry is requested :

-- It will returns the cached value to the user.
+- It will return the cached value to the user.
 - Start a background refresh by calling the factory.
 - Next time the entry is requested, it will be already computed, and can be returned immediately.

@@ -164,9 +164,6 @@ See the [logging documentation](./digging_deeper/logging.md) for more informatio

 If you like this project, [please consider supporting it by sponsoring it](https://github.com/sponsors/Julien-R44/). It will help a lot to maintain and improve it. Thanks a lot !

-
-
-
 ## Prior art and inspirations

 - https://github.com/ZiggyCreatures/FusionCache
````
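The namespace hunk in the introduction groups keys so they can be invalidated together with one `clear()`. One plausible way to model that is key prefixing, sketched below with an invented `MiniStore`; this is not BentoCache's actual namespace implementation:

```typescript
// Illustrative namespace-by-prefix model; not BentoCache's implementation.
class MiniStore {
  private data = new Map<string, unknown>()

  namespace(prefix: string) {
    const scoped = (key: string) => `${prefix}:${key}`
    return {
      set: (opts: { key: string; value: unknown }) => {
        this.data.set(scoped(opts.key), opts.value)
      },
      get: (key: string) => this.data.get(scoped(key)),
      // Only keys under this namespace's prefix are removed.
      clear: () => {
        for (const k of [...this.data.keys()]) {
          if (k.startsWith(`${prefix}:`)) this.data.delete(k)
        }
      },
    }
  }
}

// Usage mirroring the introduction's example.
const store = new MiniStore()
const users = store.namespace('users')
users.set({ key: '32', value: { name: 'foo' } })
users.set({ key: '33', value: { name: 'bar' } })
users.clear()
```

The scoped `clear()` is what makes namespaces useful: invalidating a whole logical group never touches keys belonging to other namespaces.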
