In this example, we are fetching an authentication token that will expire after some time. The problem is that we have no idea when the token will expire until we fetch it. So we decide to cache the token for 10 minutes, but this approach has multiple issues.
This is where adaptive caching comes in. Instead of setting a fixed TTL, we can set it dynamically based on the token's expiration time:
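A rough sketch of the pattern (hedged: `fetchAuthToken` is a hypothetical helper, and the factory is assumed to receive a context object exposing a `setTtl` method):

```ts
const token = await bento.getOrSet({
  key: 'auth:token',
  factory: async (ctx) => {
    // fetchAuthToken() is a placeholder for your real token endpoint call.
    const token = await fetchAuthToken()

    // Align the cache TTL with the token's actual lifetime instead of a fixed 10 minutes.
    ctx.setTtl(token.expiresIn)

    return token
  },
})
```
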
## File

| Option | Description | Default |
|---|---|---|
|`directory`| The directory where the cache files will be stored. | N/A |
|`pruneInterval`| The interval in milliseconds to prune expired entries. Set to `false` to disable. | `false` |
### Prune Interval
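For example, a file store that sweeps expired entries once per hour could be configured like this (a sketch; it assumes the `fileDriver` export from `bentocache/drivers/file`, and the directory is only illustrative):

```ts
import { fileDriver } from 'bentocache/drivers/file'

// Store entries on disk and sweep expired files every hour.
const driver = fileDriver({
  directory: './cache',
  pruneInterval: 60 * 60 * 1000, // 1 hour, in milliseconds
})
```
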
## Memory

The memory driver stores entries in memory using an LRU cache. It accepts the following options:

| Option | Description | Default |
|---|---|---|
|`maxSize`| The maximum size of the cache **in bytes**. | N/A |
|`maxItems`| The maximum number of entries that the cache can contain. Note that fewer items may be stored if you are also using `maxSize` and the cache is full. | N/A |
|`maxEntrySize`| The maximum size of a single entry in bytes. | N/A |
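
As a sketch of how these limits combine (assuming the `memoryDriver` export from `bentocache/drivers/memory`; the numbers are only illustrative):

```ts
import { memoryDriver } from 'bentocache/drivers/memory'

// Keep at most 10 MB of data and 1,000 entries, and reject any
// single entry larger than 100 kB.
const driver = memoryDriver({
  maxSize: 10 * 1024 * 1024,
  maxItems: 1_000,
  maxEntrySize: 100 * 1024,
})
```
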
## DynamoDB
You will also need to create a DynamoDB table with a string partition key.
**Make sure to also enable [Time To Live (TTL)](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html) on the table, using the `ttl` attribute. This allows DynamoDB to automatically delete expired items.**
| Option | Description | Default |
|---|---|---|
|`table.name`| The name of the table that will be used to store the cache. |`cache`|
|`credentials`| The credentials to use to connect to DynamoDB. | N/A |
|`endpoint`| The endpoint to use to connect to DynamoDB. | N/A |
|`region`| The region to use to connect to DynamoDB. | N/A |
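
As a hedged sketch of how these options fit together (the `dynamoDbDriver` import name and path are assumptions, and the endpoint, region, and credentials are placeholders for a local development setup):

```ts
import { dynamoDbDriver } from 'bentocache/drivers/dynamodb' // assumed import path and name

const driver = dynamoDbDriver({
  table: { name: 'cache' },
  region: 'eu-west-1',
  endpoint: 'http://localhost:8000', // e.g. a local DynamoDB instance
  credentials: {
    accessKeyId: 'accessKeyId',
    secretAccessKey: 'secretAccessKey',
  },
})
```
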
:::warning
You should be careful with the `.clear()` function of the DynamoDB driver. We do not recommend using it. DynamoDB does not offer a "native" `clear`, so we are forced to make several API calls to retrieve the keys and delete them, 25 at a time (the maximum per `BatchWriteItemCommand`).
Using this function can therefore be costly, both in execution time and in API request cost, and it can also cause rate-limiting problems. Use it at your own risk.
:::
## Database

We offer several drivers to use a database as a cache.
:::note
Note that you can easily create your own adapter by implementing the `DatabaseAdapter` interface if you are using another library not supported by Bentocache. See the [documentation](./extend/custom_cache_driver.md) for more details.
:::
All SQL drivers accept the following options:
| Option | Description | Default |
|---|---|---|
|`tableName`| The name of the table that will be used to store the cache. |`bentocache`|
|`autoCreateTable`| If the cache table should be automatically created if it does not exist. |`true`|
|`connection`| An instance of `knex` or `Kysely`, based on the driver. | N/A |
|`pruneInterval`| The [Duration](./options.md#ttl-formats) in milliseconds to prune expired entries. | `false` |
### Knex
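
A hedged sketch of wiring an existing Knex connection into the driver (the `knexDriver` import path and name are assumptions, and the connection string is only illustrative; the options come from the table above):

```ts
import knex from 'knex'
import { knexDriver } from 'bentocache/drivers/knex' // assumed import path and name

// Reuse an existing Knex connection for the cache table.
const connection = knex({
  client: 'pg',
  connection: 'postgres://user:password@localhost:5432/app',
})

const driver = knexDriver({
  connection,
  tableName: 'bentocache',
  autoCreateTable: true,
})
```
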
### Orchid ORM
You must provide an Orchid ORM instance to use the Orchid driver. Feel free to check the [Orchid ORM documentation](https://orchid-orm.netlify.app/) for more details about the configuration. Orchid supports the following databases: PostgreSQL.
You will need to install `orchid-orm` to use this driver.
`docs/content/docs/extend/custom_cache_driver.md`
Similarly, the `L1CacheDriver` interface is the same, except that it is not asynchronous.
So this should be quite easy to implement. Feel free to take a look at [the existing driver implementations](https://github.com/Julien-R44/bentocache/tree/main/packages/bentocache/src/drivers) for inspiration.
Also note that your driver will receive two additional parameters in the constructor: `ttl` and `prefix`. These parameters are common to every driver and their purpose is explained in the [options](../options.md) page.
Once you have defined your driver, you can create a factory function that Bentocache will use to create instances of your driver at runtime. The factory function must look something like this:
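As a loose, hedged sketch (`MyDriver` and `MyDriverOptions` are placeholder names, and the exact return shape Bentocache expects should be checked against the existing drivers linked above):

```ts
import { MyDriver, type MyDriverOptions } from './my_driver.js' // hypothetical driver module

// Factory function: Bentocache calls this to build driver instances at runtime.
export function myDriver(options: MyDriverOptions) {
  return {
    options,
    factory: () => new MyDriver(options),
  }
}
```
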
`docs/content/docs/grace_periods.md`
Then, when the cache entry is requested again but not found, you will have to fetch the data from your source of truth again.
But what if your source of truth is down? Wouldn't it be nice to be able to serve stale data for a little while, until your source of truth is back up?
This is what grace periods are for: a grace period is the duration during which an entry remains available, even after it has become stale.
## How to use grace periods
Grace periods can be configured at the global, driver and operation levels. See the [options](./options.md) documentation for more details.
Let's imagine you have a simple controller that fetches a user from the database, and caches it for 10 minutes:
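A minimal sketch of such a lookup (`User.find` stands in for your database query, as in the backoff example further down):

```ts
const user = await bento.getOrSet({
  key: 'users:1',
  factory: () => User.find(1),
  ttl: '10m',
})
```
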
Now, let's say your cache is empty. Someone requests the user with id 1, but the database is down for some reason. Without grace periods, this request will just fail and **will display an error to the user**.
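With a grace period added to the same call (a sketch; the shape matches the backoff example further down), the entry stays servable for up to 6 hours after it expires:

```ts
const user = await bento.getOrSet({
  key: 'users:1',
  factory: () => User.find(1),
  ttl: '10m',
  grace: '6h',
})
```
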
- The first time this code is executed, the user is fetched from the database and stored in the cache for **10 minutes**, with a grace period of **6 hours**.
- **11 minutes later**, someone requests the same user. The cache entry is logically expired, but the grace period is still valid.
- So we try to call the factory again to refresh the cache entry. But oops, **the database is down** (or the factory is failing for some other reason).
- Since we are still within the 6-hour grace period, we serve the stale data from the cache.
As a result, instead of displaying an error page to the user, we are serving data that's a little out of date. Depending on your use case, this can result in a much better user experience.
So, grace periods are a practical solution that can help you maintain a positive user experience even during unforeseen downtime or heavy load on your data source. By understanding how to configure this feature, you can make your application more resilient and user-friendly.
## Backoff strategy
If the factory is failing, you can also use a backoff strategy to retry the factory only after a certain amount of time. This is useful to avoid hammering your database or API when it's down.
```ts
bento.getOrSet({
  key: 'users:1',
  factory: () => User.find(1),
  ttl: '10m',
  grace: '6h',
  graceBackoff: '5m',
})
```
In this example, if the factory fails, we will wait 5 minutes before trying again.
`docs/content/docs/introduction.md`
There are already caching libraries for Node: [`keyv`](https://keyv.org/), among others.
Not to knock them; on the contrary, they have their use cases and are cool. Some are even "marketed" as such and are still very handy for simple caching systems.
Bentocache, on the other hand, is a **full-featured caching library**. We indeed have this notion of unified access to different drivers, but in addition to that, we have a ton of features that will allow you to do robust caching.
With that in mind, I believe there is no serious alternative to Bentocache in the JavaScript ecosystem, which is regrettable because all other languages have powerful solutions. This is why Bentocache was created.
Multi-layer caching allows you to combine the speed of in-memory caching with the persistence of a distributed cache such as Redis.
Many drivers are available to suit all situations: Redis, Upstash, Database (MySQL, SQLite, PostgreSQL), DynamoDB, Filesystem, In-memory (LRU Cache), Vercel KV...
See the [drivers documentation](./cache_drivers.md) for the list of available drivers. It is also very easy to extend the library and [add your own driver](./extend/custom_cache_driver.md).
<!-- :::warning
Only a Redis driver for the bus is currently available. We could add drivers for other backends like Zookeeper, Kafka, or RabbitMQ; let us know with an issue if you are interested.
::: -->
- [Cache stampede prevention](./stampede_protection.md): Ensures that only one factory is executed at the same time.
- [Retry queue](./multi_tier.md#retry-queue-strategy): When an application fails to publish something to the bus, it is added to a queue and retried later.
### Timeouts
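
Factories can be given soft and hard timeouts so that a slow refresh does not block responses indefinitely. A hedged sketch (the `timeout` and `hardTimeout` option names are assumptions; check the timeouts documentation for the exact API):

```ts
await bento.getOrSet({
  key: 'users:1',
  factory: () => User.find(1),
  ttl: '10m',
  timeout: '200ms',  // soft timeout (assumed name): prefer returning a stale value over waiting longer than this
  hardTimeout: '2s', // hard timeout (assumed name): give up on the factory entirely past this point
})
```
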
### Namespaces

The ability to group cache keys logically, so you can invalidate a whole group of keys at once:
```ts
const users = bento.namespace('users')

users.set({ key: '32', value: { name: 'foo' } })
users.set({ key: '33', value: { name: 'bar' } })

users.clear()
```
### Human-readable TTLs

All TTLs can be passed in a human-readable string format. We use [lukeed/ms](https://github.com/lukeed/ms) to parse them:
```ts
bento.getOrSet({
  key: 'foo',
  factory: () => getFromDb(),
  ttl: '2.5h',
})
```
In this case, when only 20% or less of the TTL remains and the entry is requested:
- It will return the cached value to the user.
- Start a background refresh by calling the factory.
- The next time the entry is requested, it will already be computed and can be returned immediately.
See the [logging documentation](./digging_deeper/logging.md) for more information.
If you like this project, [please consider supporting it by sponsoring it](https://github.com/sponsors/Julien-R44/). It will help a lot to maintain and improve it. Thanks a lot!
0 commit comments