Optimised reading from cache #19627
base: main
Conversation
```js
    cache: redisCacheInstanceStub
});

const fetchData = sinon.stub().resolves('Da Value');
```
I'm not sure how this test works reliably. Shouldn't we at least wait after both `get` calls before resolving the `fetchData` method?
We don't need to do that because we're not `await`ing the calls to `get` - this test consistently fails without the changes and passes with them.
Okay, I'm still a bit skeptical because everything is still very synchronous and execution already starts before awaiting the `get` calls, but we can just fix it if it ever starts failing.
```js
@@ -98,6 +99,40 @@ describe('Adapter Cache Redis', function () {
        }
    });

    it('can wait for execution in the case of duplicate reads', async function () {
```
Preferably, we should also add a test with a `fetchData` error.
Our bot has automatically marked this PR as stale because there has not been any activity here in some time. If we’ve missed reviewing your PR & you’re still interested in working on it, please let us know. Otherwise this PR will be closed shortly, but can always be reopened later. Thank you for understanding 🙂
Walkthrough

The changes focus on improving the concurrency and efficiency of the Redis cache adapter in the Ghost project. A new `currentlyExecutingReads` map tracks in-flight reads so that concurrent reads for the same key are deduplicated.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant CacheAdapter
    participant RedisCache
    participant FetchFunction

    Client->>CacheAdapter: get(key)
    alt Key not in currentlyExecutingReads
        CacheAdapter->>RedisCache: Check cache
        alt Cache miss
            CacheAdapter->>FetchFunction: Fetch data
            FetchFunction-->>CacheAdapter: Return value
            CacheAdapter->>RedisCache: Store value
        end
        CacheAdapter-->>Client: Return value
    else Key already in currentlyExecutingReads
        CacheAdapter-->>Client: Return existing promise
    end
```
The sequence diagram illustrates the enhanced caching mechanism, showing how concurrent reads for the same key are managed, and how the fetch operation is performed only once when needed.
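The flow in the diagram can be sketched as follows. This is a simplified, hypothetical adapter illustrating the deduplication pattern, not the actual Ghost `AdapterCacheRedis` implementation; the class name, the `currentlyExecutingReads` field layout, and the `get(key, fetchData)` signature are illustrative.

```javascript
// Minimal sketch of the read-deduplication pattern shown in the diagram.
// `cache` is any backing store with async get/set (e.g. a Redis wrapper).
class SimpleCacheAdapter {
    constructor(cache) {
        this.cache = cache;
        this.currentlyExecutingReads = new Map();
    }

    async get(key, fetchData) {
        // Reuse an in-flight read for the same key instead of starting another.
        if (this.currentlyExecutingReads.has(key)) {
            return this.currentlyExecutingReads.get(key);
        }

        const read = (async () => {
            const cached = await this.cache.get(key);
            if (cached !== undefined) {
                return cached;
            }
            const value = await fetchData();
            await this.cache.set(key, value);
            return value;
        })();

        this.currentlyExecutingReads.set(key, read);
        try {
            return await read;
        } finally {
            this.currentlyExecutingReads.delete(key);
        }
    }
}
```

Two overlapping `get` calls for the same key share one promise, so `fetchData` runs at most once per miss; when the read settles, the map entry is removed so later misses can fetch again.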
Actionable comments posted: 0
🔭 Outside diff range comments (1)
ghost/adapter-cache-redis/lib/AdapterCacheRedis.js (1)

Line range hint 181-207: Re-throw or handle errors properly in `#get`.

When `fetchData()` or `this.cache.get()` fails, the error is logged but never exposed to the caller. This can mask problems in upstream code. Consider re-throwing the error or returning a fallback value to ensure failures aren't silently swallowed.

A possible fix:

```diff
 try {
     const result = await this.cache.get(internalKey);
     ...
 } catch (err) {
     logging.error(err);
+    throw err; // or return a fallback, depending on the desired behavior
 }
```
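In runnable form, the suggested re-throw amounts to something like the sketch below; `logging` is stubbed here, and `getFromCache` is a simplified stand-in for the adapter's `#get`, not the real method.

```javascript
// Simplified stand-in for the adapter's #get with the error re-thrown.
// `logging` is a stub for Ghost's logging module.
const logging = {error: (err) => console.error(err.message)};

async function getFromCache(cache, internalKey) {
    try {
        return await cache.get(internalKey);
    } catch (err) {
        logging.error(err);
        throw err; // surface the failure instead of masking it
    }
}
```

With the re-throw in place, callers can distinguish "cache miss" from "cache broken" instead of silently receiving nothing.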
🧹 Nitpick comments (1)
ghost/adapter-cache-redis/test/adapter-cache-redis.test.js (1)

Line range hint 62-99: Great coverage for cache miss update scenario.

This test provides clear evidence that the cache is only updated once on a miss and subsequent reads reuse the cached data. This greatly improves efficiency. You might also consider adding a negative test (e.g., when `fetchData` throws an error) to ensure error conditions are handled gracefully.
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📒 Files selected for processing (2)

- ghost/adapter-cache-redis/lib/AdapterCacheRedis.js (2 hunks)
- ghost/adapter-cache-redis/test/adapter-cache-redis.test.js (2 hunks)
🔇 Additional comments (3)
ghost/adapter-cache-redis/test/adapter-cache-redis.test.js (1)

102-134: Consider adding a fetchData error scenario test.

As previously mentioned in older review feedback, testing how your code behaves when `fetchData` rejects ensures resilience under non-ideal conditions (e.g., network failures). Verifying that subsequent calls either retry or surface the error will improve reliability coverage.

ghost/adapter-cache-redis/lib/AdapterCacheRedis.js (2)

64-64: Good approach to handle concurrent reads.

Storing active read operations in a map prevents duplicate fetches, thereby reducing the load on the data store. This is aligned with your PR objective of optimizing concurrent cache reads.

170-179: Validate potential race conditions.

Although JavaScript is single-threaded, consider a quick defensive check in case a promise is removed from `currentlyExecutingReads` before it's retrieved (for instance, if code structure changes over time). A small verification can help you detect unusual concurrency issues in complex scenarios.
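The defensive check suggested there could look roughly like this; `getInflightRead` and `startRead` are hypothetical names sketching the idea, not code from the PR.

```javascript
// Hypothetical defensive lookup around a currentlyExecutingReads map:
// if the tracked promise has unexpectedly disappeared, start and track
// a fresh read instead of failing.
function getInflightRead(currentlyExecutingReads, key, startRead) {
    const existing = currentlyExecutingReads.get(key);
    if (existing !== undefined) {
        return existing;
    }
    // Defensive path: nothing tracked for this key, so begin a new read.
    const read = startRead(key);
    currentlyExecutingReads.set(key, read);
    // Clean up on settlement, whether the read resolves or rejects.
    read.then(
        () => currentlyExecutingReads.delete(key),
        () => currentlyExecutingReads.delete(key)
    );
    return read;
}
```

The two-callback `then` (rather than `finally` on a side chain) avoids an unhandled-rejection warning when the read fails, since the cleanup chain handles the rejection itself.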
80c4e91 to d6d5838
🍦
This ensures that concurrent reads from the cache do not make concurrent reads from the underlying data store.