Conversation

Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. This behavior can be configured in the CodeRabbit review settings.
📝 Walkthrough

Adds a copy-on-write in-memory MemStore backed by a BTree, with a snapshot pool and manager, typed and prefix wrappers, integration into multi-store/BaseApp, module warmup hooks, snapshot-related options, and extensive tests for lifecycle, concurrency, and iterators.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant App as BaseApp/Module
    participant RootMS as RootMultiStore
    participant Manager as MemStoreManager
    participant Branch as memStore(branch)
    participant Pool as SnapshotPool
    App->>RootMS: Create CacheMultiStore()
    RootMS->>Manager: Branch()
    Manager->>Branch: copy root -> branch
    RootMS-->>App: Cache context with Branch
    App->>Branch: Set/Delete (mutations)
    App->>RootMS: Commit(height)
    RootMS->>Branch: Commit()
    Branch->>Manager: Commit(height)
    Manager->>Manager: atomic swap root ← current
    Manager->>Pool: Set(height, snapshot)
    Manager->>RootMS: Branch() -> new isolated memStore
    RootMS-->>App: New cache memStore
    App->>Manager: GetSnapshotBranch(height)
    Manager->>Pool: Get(height)
    Pool-->>Manager: snapshot tree
    Manager-->>App: immutable memStore view
```
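The branch/commit lifecycle in the diagram can be sketched as a toy Go program. This is a simplified illustration only: the real store uses a BTree with atomic pointers, while the map-based `memStoreManager`, `Branch`, `Commit`, and `GetSnapshotBranch` below merely mirror the API names from the PR.

```go
package main

import "fmt"

// Minimal copy-on-write sketch of the branch/commit lifecycle shown
// above. A plain map copied on Branch() is enough to illustrate the
// isolation and snapshot semantics.
type memStore struct {
	data    map[string]string
	manager *memStoreManager
}

type memStoreManager struct {
	root      map[string]string
	snapshots map[int64]map[string]string
}

func newMemStoreManager() *memStoreManager {
	return &memStoreManager{
		root:      map[string]string{},
		snapshots: map[int64]map[string]string{},
	}
}

func copyMap(m map[string]string) map[string]string {
	c := make(map[string]string, len(m))
	for k, v := range m {
		c[k] = v
	}
	return c
}

// Branch returns an isolated view; mutations stay local until Commit.
func (m *memStoreManager) Branch() *memStore {
	return &memStore{data: copyMap(m.root), manager: m}
}

func (s *memStore) Set(k, v string) { s.data[k] = v }

func (s *memStore) Get(k string) string { return s.data[k] }

// Commit swaps the branch's state in as the new root and records a
// snapshot for the given height.
func (s *memStore) Commit(height int64) {
	s.manager.root = s.data
	s.manager.snapshots[height] = copyMap(s.data)
}

// GetSnapshotBranch returns the immutable view recorded at height.
func (m *memStoreManager) GetSnapshotBranch(height int64) (map[string]string, bool) {
	snap, ok := m.snapshots[height]
	return snap, ok
}

func main() {
	mgr := newMemStoreManager()
	b := mgr.Branch()
	b.Set("k", "v1")
	b.Commit(1)

	dropped := mgr.Branch()
	dropped.Set("k", "leaked?") // never committed, so invisible to the root

	snap, ok := mgr.GetSnapshotBranch(1)
	fmt.Println(ok, snap["k"], mgr.root["k"]) // true v1 v1
}
```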
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Comment

@gorgos your pull request is missing a changelog!
Actionable comments posted: 5
🧹 Nitpick comments (11)
types/module/module.go (1)

881-889: Consider adding ordering control for warmup execution.

The `WarmupMemStore` implementation correctly iterates modules and invokes warmup on those implementing `HasWarmupMemStore`. However, unlike other lifecycle hooks (`BeginBlock`, `EndBlock`), there's no ordering mechanism (e.g., `OrderWarmupMemStore`).

If warmup order matters for your use case (e.g., one module's warmup depends on another's completion), consider adding a `SetOrderWarmupMemStore` method similar to lines 397-406.

types/context.go (1)
371-373: Add nil safety check for MultiStore.

The `MemStore` method calls `c.ms.GetMemStore()` without checking if `c.ms` is nil. While `NewContext` initializes the multistore, defensive programming suggests adding a nil check here, consistent with other Context accessor patterns.

🔎 Proposed fix

```diff
 func (c Context) MemStore(key []byte) storetypes.MemStore {
+	if c.ms == nil {
+		panic("multistore is nil")
+	}
 	return prefix.NewMemStore(c.ms.GetMemStore(), key)
 }
```

server/mock/store.go (1)
12-13: Redundant import alias.

Both `types` and `storetypes` refer to the same package `cosmossdk.io/store/types`. Consider using a single alias for consistency.

🔎 Proposed fix

```diff
-	"cosmossdk.io/store/types"
-	storetypes "cosmossdk.io/store/types"
+	storetypes "cosmossdk.io/store/types"
```

Then update the new methods to use `storetypes` instead of `types`:

```diff
-func (ms multiStore) SetMemStoreManager(types.MemStoreManager) {
+func (ms multiStore) SetMemStoreManager(storetypes.MemStoreManager) {
 	panic("not implemented")
 }

-func (ms multiStore) GetMemStore() types.MemStore {
+func (ms multiStore) GetMemStore() storetypes.MemStore {
 	panic("not implemented")
 }
```

store/prefix/memstore_test.go (1)
60-63: Variable naming inconsistency.

`NewBranch` uses PascalCase, which is typically reserved for exported identifiers. Use camelCase for local variables.

🔎 Proposed fix

```diff
-	NewBranch := tree.Branch()
-	newPrefixBatch := NewMemStore(NewBranch, prefix)
+	newBranch := tree.Branch()
+	newPrefixBatch := NewMemStore(newBranch, prefix)
```

store/memstore/snapshop_pool.go (1)
1-1: Filename typo.

The filename `snapshop_pool.go` should be `snapshot_pool.go`.

store/memstore/memstore_test.go (2)
225-244: Non-idiomatic assertion usage.

Using `fmt.Errorf` as the message argument to `assert.NotNil` and `assert.Equal` is unusual. These functions expect a format string and variadic args, not an error value.

🔎 Proposed fix

```diff
-assert.NotNil(t, batch, fmt.Errorf("worker %d: nil batch created", workerID))
+assert.NotNil(t, batch, "worker %d: nil batch created", workerID)
 ...
-assert.Equal(t, readValue, value, fmt.Errorf("worker %d: batch %d failed to read written value", workerID, i))
+assert.Equal(t, readValue, value, "worker %d: batch %d failed to read written value", workerID, i)
 ...
-assert.Equal(t, batch.Get([]byte(initKey)), "init-value-0", fmt.Errorf("worker %d: batch %d failed to read initial value", workerID, i))
+assert.Equal(t, batch.Get([]byte(initKey)), "init-value-0", "worker %d: batch %d failed to read initial value", workerID, i)
```
667-690: Large benchmark setup may cause slow CI runs.

The benchmark pre-populates 50 million entries before measuring. Consider parameterizing or reducing this for faster feedback, or ensuring it only runs with explicit `-bench` flags.

```diff
-	for i := 0; i < 50_000_000; i++ {
+	const setupSize = 1_000_000 // Reduce for faster CI; increase for production benchmarks
+	for i := 0; i < setupSize; i++ {
```

store/prefix/memstore.go (1)
154-170: Inconsistent behavior between Key() and Value() when invalid.

`Key()` panics when the iterator is invalid (line 156), but `Value()` returns `nil` (line 166). This inconsistency could confuse callers. Consider aligning the behavior: either both panic or both return nil/zero values.

🔎 Option 1: Both panic (consistent with Key)

```diff
 func (pi *prefixMemStoreIterator) Value() any {
 	if !pi.valid {
-		return nil
+		panic("prefixIterator invalid, cannot call Value()")
 	}
 	return pi.iter.Value()
 }
```

🔎 Option 2: Both return nil (lenient behavior)

```diff
 func (pi *prefixMemStoreIterator) Key() []byte {
 	if !pi.valid {
-		panic("prefixIterator invalid, cannot call Key()")
+		return nil
 	}
 	key := pi.iter.Key()
 	return stripPrefix(key, pi.prefix)
 }
```

store/memstore/internal/btree.go (1)
48-54: Consider using pointer receiver for consistency

`Get` uses a value receiver while `Set` and `Delete` use pointer receivers. While functionally correct (the `tree` field is a pointer), using a pointer receiver consistently would avoid unnecessary struct copying and improve consistency.

🔎 Suggested change

```diff
-func (bt BTree) Get(key []byte) any {
+func (bt *BTree) Get(key []byte) any {
 	i, found := bt.tree.Get(newItem[any](key, nil))
 	if !found {
 		return nil
 	}
 	return i.value
 }
```

store/memstore/typed_memstore.go (1)
115-121: Consider using a sentinel error

The `Error()` method creates a new error on each call when the iterator is invalid. A package-level sentinel error would be more efficient and allow for error comparison.

🔎 Suggested change

```diff
+var errInvalidIterator = errors.New("invalid typedMemStoreIterator")
+
 // Error returns an error if the iterator is invalid
 func (ti *typedMemStoreIterator[T]) Error() error {
 	if !ti.Valid() {
-		return errors.New("invalid typedMemStoreIterator")
+		return errInvalidIterator
 	}
 	return nil
 }
```

store/memstore/memstore.go (1)
87-99: Clarify the comment on GetSnapshotBranch

The comment "Committing a branch created here is unsafe" at line 87 is slightly misleading. Since the result is wrapped in `UncommittableMemStore`, calling `Commit()` will panic rather than cause unsafe behavior. Consider rewording to clarify that commits are prevented.

🔎 Suggested comment improvement

```diff
-// Committing a branch created here is unsafe.
+// GetSnapshotBranch returns an UncommittableMemStore that panics on Commit(),
+// as snapshot branches represent immutable historical state.
 func (t *memStoreManager) GetSnapshotBranch(height int64) (types.MemStore, bool) {
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- `go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (20)
- baseapp/abci_test.go
- baseapp/options.go
- go.mod
- server/mock/store.go
- store/cachemulti/store.go
- store/memstore/internal/btree.go
- store/memstore/internal/memiterator.go
- store/memstore/internal/types.go
- store/memstore/memstore.go
- store/memstore/memstore_test.go
- store/memstore/snapshop_pool.go
- store/memstore/typed_memstore.go
- store/memstore/unusable_memstore.go
- store/prefix/memstore.go
- store/prefix/memstore_test.go
- store/rootmulti/store.go
- store/types/memstore.go
- store/types/store.go
- types/context.go
- types/module/module.go
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-05-04T02:34:26.648Z
Learnt from: maxim-inj
Repo: InjectiveLabs/cosmos-sdk PR: 55
File: x/bank/keeper/virtual_balances.go:90-105
Timestamp: 2025-05-04T02:34:26.648Z
Learning: The ObjectStore in Cosmos SDK doesn't persist data between blocks. It's implemented in the transient store package (store/transient/store.go) as ObjStore, and its Commit() method clears the internal data structure at the end of each block without writing to persistent storage.
Applied to files:
- store/memstore/unusable_memstore.go
- store/rootmulti/store.go
- types/context.go
- go.mod
- store/cachemulti/store.go
🧬 Code graph analysis (17)
store/memstore/memstore_test.go (2)
- store/memstore/memstore.go (1)
  - NewMemStoreManager (65-81)
- store/types/memstore.go (1)
  - MemStore (112-123)
store/memstore/unusable_memstore.go (1)
- store/types/memstore.go (2)
  - MemStore (112-123)
  - MemStoreIterator (127-151)
store/memstore/snapshop_pool.go (1)
- store/types/memstore.go (1)
  - MemStoreManager (10-43)
types/module/module.go (1)
- types/context.go (1)
  - Context (42-76)
store/prefix/memstore.go (1)
- store/types/memstore.go (3)
  - MemStore (112-123)
  - TypedMemStore (174-205)
  - MemStoreIterator (127-151)
store/types/store.go (1)
- store/types/memstore.go (2)
  - MemStore (112-123)
  - MemStoreManager (10-43)
store/rootmulti/store.go (3)
- store/memstore/memstore.go (1)
  - NewMemStoreManager (65-81)
- store/cachemulti/store.go (2)
  - Store (23-32)
  - NewFromKVStore (39-56)
- store/memstore/unusable_memstore.go (1)
  - NewUnusableMemstore (17-21)
server/mock/store.go (1)
- store/types/memstore.go (2)
  - MemStoreManager (10-43)
  - MemStore (112-123)
types/context.go (2)
- store/types/memstore.go (1)
  - MemStore (112-123)
- store/prefix/memstore.go (1)
  - NewMemStore (24-29)
baseapp/abci_test.go (4)
- store/types/memstore.go (1)
  - MemStore (112-123)
- store/memstore/typed_memstore.go (1)
  - NewTypedMemStore (20-24)
- store/prefix/memstore.go (2)
  - NewTypedMemStore (18-21)
  - NewMemStore (24-29)
- store/memstore/memstore.go (1)
  - NewMemStoreManager (65-81)
store/cachemulti/store.go (2)
- store/types/memstore.go (1)
  - MemStore (112-123)
- store/rootmulti/store.go (1)
  - Store (60-86)
store/memstore/memstore.go (3)
- store/types/memstore.go (3)
  - MemStoreManager (10-43)
  - MemStore (112-123)
  - SnapshotPool (156-169)
- store/memstore/internal/btree.go (2)
  - BTree (24-27)
  - NewBTree (30-38)
- store/memstore/snapshop_pool.go (1)
  - SnapshotPool (15-21)
store/memstore/internal/btree.go (2)
- store/types/store.go (1)
  - Iterator (321-321)
- store/memstore/internal/types.go (1)
  - TypedMemIterator (3-10)
store/memstore/typed_memstore.go (1)
- store/types/memstore.go (4)
  - TypedMemStore (174-205)
  - TypedMemStoreIterator (210-238)
  - MemStore (112-123)
  - MemStoreIterator (127-151)
baseapp/options.go (1)
- baseapp/baseapp.go (1)
  - BaseApp (69-217)
store/types/memstore.go (1)
- store/types/store.go (1)
  - Iterator (321-321)
store/memstore/internal/memiterator.go (3)
- store/memstore/internal/types.go (1)
  - TypedMemIterator (3-10)
- store/memstore/internal/btree.go (1)
  - BTree (24-27)
- collections/item.go (1)
  - Item (13-13)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (13)
- GitHub Check: tests (03)
- GitHub Check: tests (01)
- GitHub Check: tests (00)
- GitHub Check: tests (02)
- GitHub Check: test-e2e
- GitHub Check: test-sim-nondeterminism
- GitHub Check: liveness-test
- GitHub Check: test-integration
- GitHub Check: build (arm64)
- GitHub Check: build (amd64)
- GitHub Check: build (arm)
- GitHub Check: Analyze
- GitHub Check: Gosec
🔇 Additional comments (41)
types/module/module.go (1)

238-241: LGTM! Clean interface addition.

The `HasWarmupMemStore` interface follows the established pattern of other module lifecycle interfaces in this file.

store/memstore/internal/types.go (1)
1-10: LGTM! Well-designed generic iterator interface.

The `TypedMemIterator[T any]` interface provides a clean, type-safe abstraction for iterating over in-memory store data. The method signatures align with standard iterator patterns.

store/memstore/unusable_memstore.go (1)
1-49: LGTM! Panic-based sentinel implementation is correctly structured.

The `unusableMemstore` provides a fail-fast mechanism for accessing non-existent snapshot versions. The implementation correctly:
- Uses unexported type with exported constructor for encapsulation
- Panics consistently with height-specific error messages
- Includes compile-time interface compliance check (line 9)
This aligns with the design goal stated in the comment at lines 11-12.
baseapp/abci_test.go (2)
221-314: LGTM! Comprehensive test for MemStore cache context lifecycle.

This test thoroughly exercises:
- Prefixed MemStore usage via `ctx.MemStore(blockPrefix)`
- Nested `CacheContext` isolation (uncommitted changes are dropped)
- Committed cache writes are visible after `writeCache()`
- State is not visible before commit but persists after commit
The test validates the copy-on-write and branching semantics critical to the MemStore design.
316-451: LGTM! Excellent coverage of MemStore warmup and restart scenarios.

This test validates:
- Warmup function populating MemStore from external state (DB)
- Persistence of MemStore contents across commits
- Snapshot capture and restoration after app restart
- Correct iteration and retrieval of warmed-up data
The test covers the full lifecycle of MemStore snapshot management, which is essential for ensuring correctness across node restarts.
baseapp/options.go (1)
132-134: LGTM! Snapshot pool limit setters follow established patterns.

Both `SetupSnapshotPoolLimit` and `SetSnapshotPoolLimit` correctly:
- Follow the functional option pattern used throughout this file
- Check `app.sealed` before modification (line 412)
- Delegate to the CommitMultiStore (line 416)

The implementation is consistent with similar configuration setters like `SetIAVLCacheSize` and `SetPruning`.

Also applies to: 411-417
store/types/store.go (1)
143-143: No action needed - all interface implementations are complete.

The interface changes have been properly implemented:
- `GetMemStore()`, added to the `MultiStore` interface, is implemented in both `store/rootmulti/store.go` (line 786) and `store/cachemulti/store.go` (line 262)
- `SetMemStoreManager()` and `SetSnapshotPoolLimit()`, added to the `CommitMultiStore` interface, are implemented in `store/rootmulti/store.go` (lines 157, 161) and `server/mock/store.go` (lines 26, 30)

Note: `cachemulti/store.go` only implements `CacheMultiStore` (which extends `MultiStore`), not `CommitMultiStore`, so it correctly only has `GetMemStore()` and does not need the other two methods.

server/mock/store.go (1)
26-36: LGTM!

The stub methods correctly implement the interface contract for the mock store. The `panic("not implemented")` pattern is consistent with other methods in this file.

store/prefix/memstore_test.go (1)
1-9: LGTM!

Comprehensive test coverage for the prefix memstore including basic operations, iterators, nested batches, edge cases, snapshot handling, and random data testing. The tests properly verify isolation semantics and commit propagation.
Also applies to: 367-397
store/rootmulti/store.go (3)
544-554: Review the CAS panic behavior for concurrent modification.

The `CompareAndSwap` failure triggers a panic, which is appropriate for detecting a programming error (concurrent modification of the memstore pointer during commit). However, this is a hard crash scenario.

Ensure documentation or comments clarify that `Commit` must not be called concurrently and that this panic represents an invariant violation rather than a recoverable error.
637-650: LGTM!

The memstore branching in `CacheMultiStore` correctly creates an isolated branch for each cache context. The nil check guards against invariant violations, and the dereference pattern is safe since `Load()` returns a non-nil pointer to the stored value.
715-728: LGTM!

Proper fallback to `NewUnusableMemstore` when a snapshot doesn't exist ensures graceful handling of queries at historical heights where the memstore state isn't available. The uncommittable wrapper prevents accidental modifications.

store/memstore/internal/memiterator.go (2)
24-60: LGTM!

The iterator initialization correctly handles both ascending and descending modes with proper boundary positioning. The descending case correctly treats `end` as exclusive by seeking to the end and moving backwards.
82-104: LGTM!

The `Next()` and `keyInRange()` methods correctly advance the iterator and validate bounds. The ascending mode checks against the exclusive end bound, while descending mode checks against the inclusive start bound.
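The bound checks described can be sketched as follows (an illustrative `keyInRange`, assuming ascending iterators use an exclusive `end` and descending iterators an inclusive `start`):

```go
package main

import (
	"bytes"
	"fmt"
)

// keyInRange mirrors the validation described above: ascending iteration
// checks the key against the exclusive end bound, descending iteration
// against the inclusive start bound. A nil bound means unbounded.
func keyInRange(key, start, end []byte, ascending bool) bool {
	if ascending {
		return end == nil || bytes.Compare(key, end) < 0
	}
	return start == nil || bytes.Compare(key, start) >= 0
}

func main() {
	fmt.Println(keyInRange([]byte("b"), []byte("a"), []byte("c"), true))  // true: b < c
	fmt.Println(keyInRange([]byte("c"), []byte("a"), []byte("c"), true))  // false: end is exclusive
	fmt.Println(keyInRange([]byte("a"), []byte("a"), []byte("c"), false)) // true: start is inclusive
}
```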
21-61: LGTM!

Excellent test coverage including:
- L1/L2 batch relationship and commit propagation
- Uncommitted batch isolation (both L1 and L2)
- Snapshot pool functionality with height-based retrieval
- Proper panic behavior for concurrent modification detection
The tests effectively validate the copy-on-write semantics and isolation guarantees.
Also applies to: 301-416, 497-570
store/prefix/memstore.go (2)
55-58: Commit() commits the entire parent store.

Calling `Commit()` on a prefixed store delegates to the parent's `Commit()`, which applies all changes in the parent, not just those under this prefix. This is correct for the branching model but may be surprising if multiple prefixed stores share the same parent branch.

Ensure callers understand that committing a prefixed store affects the entire parent branch, not just the prefixed keys.
69-108: LGTM!

The iterator implementations correctly:
- Compute prefixed bounds using `cloneAppend` for start/end
- Use `cpIncr` to create an exclusive upper bound when end is nil
- Delegate to the parent iterator with the prefixed range
store/cachemulti/store.go (4)
140-142: LGTM!

The `Write()` method correctly commits the memStore after writing all KV stores. This ensures the memory-backed state is persisted to the parent in the correct order.
161-167: LGTM!

Excellent comment explaining why `memStore.Branch()` is necessary. This prevents the critical bug where writes in a dropped `CacheContext` would leak into the parent context.
261-264: LGTM!

Simple accessor for the memStore field, enabling external access to the memory-backed store for prefix wrapping and other operations.
246-249: memStore commit semantics in Restore() are correct.

When `Restore()` commits `other.memStore`, it correctly propagates changes back to the parent because `Clone()` creates a nested memStore branch (via `cms.memStore.Branch()`), and calling `Commit()` on a nested branch updates its parent's `current` pointer (`parent.current = branch.current`). Since memStore is always initialized as a nested branch in this codebase (see rootmulti/store.go), the parent Store's memStore will reflect the committed changes.

store/memstore/internal/btree.go (4)
19-38: LGTM - BTree wrapper design

The design choice of `NoLocks: true` with a dedicated `copyLock` for COW operations is appropriate for the performance-critical path. The documentation clearly states that external synchronization is required for concurrent access.
56-68: LGTM - Iterator boundary validation

The validation correctly distinguishes between `nil` (unbounded) and empty `[]byte{}` (invalid) boundaries, which is a sensible design choice.
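The distinction can be sketched with a small validation helper (`validateBounds` is a hypothetical name, not the PR's function):

```go
package main

import (
	"errors"
	"fmt"
)

// validateBounds sketches the check described above: a nil bound means
// unbounded, while an empty non-nil slice is rejected as invalid.
func validateBounds(start, end []byte) error {
	if (start != nil && len(start) == 0) || (end != nil && len(end) == 0) {
		return errors.New("iterator boundary must be nil or non-empty")
	}
	return nil
}

func main() {
	fmt.Println(validateBounds(nil, nil))             // <nil>
	fmt.Println(validateBounds([]byte{}, nil) != nil) // true: empty is invalid
}
```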
70-82: LGTM - Copy-on-write implementation

The `Copy()` method correctly:
- Locks during the underlying tree copy (which modifies internal state per the comment)
- Creates a fresh mutex for the new instance, ensuring independent synchronization
88-102: LGTM - Item type and comparison

The generic `item[T]` type and `byKeys` comparator using `bytes.Compare` correctly implement the required ordering for the B-tree.

store/memstore/typed_memstore.go (4)
9-24: LGTM - Interface assertions and constructor

The compile-time interface assertions at lines 10-11 properly ensure type conformance, and the constructor is clean.
32-40: Type assertion may panic on type mismatch

The unchecked type assertion `val.(T)` at line 39 will panic if the stored value is not of type `T`. While this is acceptable for a typed store (mismatched types indicate a programming error), consider whether a checked assertion with a meaningful panic message would improve debuggability.

This is likely acceptable given the "typed" contract, but verify that all usages ensure type consistency.
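The checked-assertion suggestion could look roughly like this; `typedGet` and the map-backed store are hypothetical stand-ins for the typed memstore:

```go
package main

import "fmt"

// Sketch of the checked type assertion suggested above: instead of the
// bare val.(T), which panics with a generic runtime message, use the
// two-value form and panic with context about the key and both types.
func typedGet[T any](store map[string]any, key string) T {
	var zero T
	val, ok := store[key]
	if !ok || val == nil {
		return zero
	}
	typed, ok := val.(T)
	if !ok {
		panic(fmt.Sprintf("typed memstore: value at %q is %T, not %T", key, val, zero))
	}
	return typed
}

func main() {
	store := map[string]any{"height": int64(42)}
	fmt.Println(typedGet[int64](store, "height"))  // 42
	fmt.Println(typedGet[int64](store, "missing")) // 0
}
```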
42-65: LGTM - Delegation methods

The `Set`, `Delete`, `Commit`, `Branch`, `Iterator`, and `ReverseIterator` methods correctly delegate to the underlying `MemStore` while preserving type safety.
99-108: LGTM - Typed iterator Value method

The `Value()` method correctly handles `nil` values and performs type assertion. The same type mismatch consideration applies as noted for `Get()`.

store/types/memstore.go (5)
3-43: LGTM - MemStoreManager interface

The interface is well-documented with clear contracts for branching semantics, concurrent modification detection, and snapshot management. The documentation of panic behavior for concurrent commits is particularly helpful.
45-105: LGTM - Reader/Writer interfaces

Good separation of concerns between `MemStoreReader` and `MemStoreWriter`. The documentation clearly explains iterator snapshot semantics and the different commit behaviors for nested vs. top-level branches.
107-151: LGTM - MemStore and MemStoreIterator

The `MemStore` interface cleanly composes branching with reader/writer capabilities. The iterator interface is complete with proper documentation about resource cleanup via `Close()`.
153-169: LGTM - SnapshotPool interface

Simple and focused interface for height-based snapshot management with limit-based pruning support.
171-238: LGTM - Typed interfaces

The generic `TypedMemStore[T]` and `TypedMemStoreIterator[T]` interfaces provide a clean type-safe API. The additional `Error()` method on the typed iterator (compared to `MemStoreIterator`) makes sense for surfacing validity state.

store/memstore/memstore.go (7)
10-58: LGTM - Type definitions

The type structure is well-designed:
- Atomic pointers for `root` and `base` enable lock-free COW semantics
- The `parent` field in `memStore` supports nested branching
- The `UncommittableMemStore` wrapper prevents commits on snapshot branches
64-81: LGTM - Manager initialization

The initialization correctly sets up:
- `root` and `base` pointing to the same tree (for concurrent modification detection)
- `current` as a COW copy for accumulating changes
- A fresh snapshot pool
101-121: LGTM - Branch creation

The branch correctly:
- Creates a COW copy of the current root for isolation
- Captures the `base` for concurrent modification detection during commit
- Establishes the manager reference for top-level commit coordination
166-219: LGTM - MemStore data operations

The data operations are correctly implemented:
- `Get`, `Set`, `Delete` delegate to the current B-tree
- `Iterator` and `ReverseIterator` create a snapshot copy to ensure immutability during iteration, which is important for consistency
221-237: LGTM - Nested branch creation

Nested branches correctly:
- Create a COW copy of the parent's B-tree
- Set `parent` for later commit propagation
- Don't need `base`/`manager` since they commit to the parent, not to the manager
123-164: LGTM - Manager commit with caveats

The commit logic correctly:
- Validates non-negative height
- Uses CAS for atomic root update
- Creates immutable snapshots with `current = nil` to prevent commits
- Updates `base` for subsequent branch creation

The snapshot manager stored in the pool has `snapshotPool: nil`, which prevents nested snapshot access; this is appropriate.
239-260: Race condition in concurrent L1 commits exists but is a documented usage constraint

The race condition described is technically possible: if two L1 branches both call `memStore.Commit()` before `memStoreManager.Commit()`, both will pass the `manager.base.Load() != b.base` check and the second will overwrite `manager.current`, silently losing the first branch's changes.

However, this violates a documented design assumption. The code explicitly states "`current` is implemented with the assumption that it is accessed only by a single writer." The intended usage pattern is: `batch.Commit()` → `tree.Commit()` → next `batch.Commit()`. Within this pattern, the concurrent modification detection works correctly: the second batch's commit will panic when it detects the base has changed.

This is not a hidden bug but a documented constraint on single-threaded writer access. The test `TestSimpleConcurrentL1BatchCommitPanic` validates the correct behavior under proper sequencing. If concurrent L1 branch commits occur without intervening `tree.Commit()` calls, data loss silently follows, which warrants either enforcing the constraint at the API level or updating the synchronization mechanism.
```
// replace broken goleveldb
github.com/syndtr/goleveldb => github.com/syndtr/goleveldb v1.0.1-0.20210819022825-2ae1ddf74ef7

github.com/tidwall/btree => github.com/InjectiveLabs/btree v0.0.0-20250402045845-4690b506608a
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Check if the InjectiveLabs btree fork exists and verify the commit
echo "=== Checking InjectiveLabs/btree repository ==="
git ls-remote https://github.com/InjectiveLabs/btree.git 2>&1 | head -20
echo -e "\n=== Checking for the specific commit ==="
git ls-remote https://github.com/InjectiveLabs/btree.git 4690b506608a 2>&1
echo -e "\n=== Checking available tags ==="
git ls-remote --tags https://github.com/InjectiveLabs/btree.git 2>&1 | tail -20
```

Repository: InjectiveLabs/cosmos-sdk
Length of output: 295
Use a tagged release of the InjectiveLabs/btree fork for better reproducibility.
The replace directive uses a pseudo-version referencing an untagged commit (4690b506608a), which is the current HEAD of the fork's master branch. The InjectiveLabs/btree fork has no tagged releases. Consider creating a tagged release in the fork instead of relying on an untagged commit reference to ensure reproducible builds and clearer version tracking.
🤖 Prompt for AI Agents
In go.mod around line 228, the replace directive pins github.com/tidwall/btree
to an untagged pseudo-version of the InjectiveLabs fork (commit 4690b506608a);
create a proper tag in the InjectiveLabs/btree fork (e.g., v0.1.0) pointing to
that commit (or the intended commit) and then update go.mod to reference that
tag (or run `go get github.com/InjectiveLabs/btree@v0.1.0`), run `go mod tidy`
to update modules, verify builds, and commit the updated go.mod and go.sum so
consumers use a reproducible tagged release; if tagging the fork is not
possible, document the reason and consider vendoring or submitting the change
upstream instead.
store/memstore/snapshop_pool.go (Outdated)
```go
func (p *snapshotPool) Limit(limit int64) {
	if limit <= 0 {
		panic("snapshot pool limit must be positive")
	}

	p.limit = limit
	p.list = make([]*snapshotItem, limit)

	for i := int64(0); i < limit; i++ {
		p.list[i] = &snapshotItem{
			mtx:    &sync.RWMutex{},
			store:  nil,
			height: 0,
		}
	}
}
```
`Limit()` is not thread-safe.

Calling `Limit()` while concurrent `Get`/`Set` operations are in progress can cause data races. The pool's `list` slice is replaced without synchronization, and in-flight operations may access stale or invalid indices.
Consider either:
- Using a global mutex to protect `Limit()` and coordinate with `Get`/`Set`
- Documenting that `Limit()` must only be called during initialization before concurrent access
- Using `sync/atomic` for the limit and implementing lock-free resizing
🔎 Proposed fix (option 2 - documentation)
```diff
+// Limit sets the maximum number of snapshots to retain.
+// WARNING: This method is NOT thread-safe. It must only be called during
+// initialization before any concurrent Get/Set operations begin.
 func (p *snapshotPool) Limit(limit int64) {
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
// Limit sets the maximum number of snapshots to retain.
// WARNING: This method is NOT thread-safe. It must only be called during
// initialization before any concurrent Get/Set operations begin.
func (p *snapshotPool) Limit(limit int64) {
	if limit <= 0 {
		panic("snapshot pool limit must be positive")
	}

	p.limit = limit
	p.list = make([]*snapshotItem, limit)

	for i := int64(0); i < limit; i++ {
		p.list[i] = &snapshotItem{
			mtx:    &sync.RWMutex{},
			store:  nil,
			height: 0,
		}
	}
}
```
🤖 Prompt for AI Agents
In store/memstore/snapshop_pool.go around lines 68–83, Limit() replaces the
pool's list unsafely causing data races with concurrent Get/Set; fix by
introducing a pool-level mutex (e.g., p.mu) and use it to coordinate resizing:
allocate and initialize the new slice locally without holding the lock, then
acquire the mutex and swap p.list and update p.limit atomically; also update
Get/Set to use the same mutex (RLock/RUnlock for readers) so they cannot access
p.list while it's being swapped.
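Option 1 (mutex coordination) sketched concretely; `snapshotPool` fields here are simplified stand-ins, and a real pool would also track slot occupancy:

```go
package main

import (
	"fmt"
	"sync"
)

// Sketch of the mutex-coordinated resize described in the prompt above:
// build the new slice without the lock, then swap it in under the lock;
// readers take the read lock so they never observe a half-swapped pool.
type snapshotItem struct {
	mtx    sync.RWMutex
	store  any
	height int64
}

type snapshotPool struct {
	mu    sync.RWMutex
	limit int64
	list  []*snapshotItem
}

func (p *snapshotPool) Limit(limit int64) {
	if limit <= 0 {
		panic("snapshot pool limit must be positive")
	}
	// Allocate and initialize outside the critical section.
	newList := make([]*snapshotItem, limit)
	for i := range newList {
		newList[i] = &snapshotItem{}
	}
	p.mu.Lock()
	p.limit = limit
	p.list = newList
	p.mu.Unlock()
}

func (p *snapshotPool) Get(height int64) (any, bool) {
	p.mu.RLock()
	defer p.mu.RUnlock()
	item := p.list[height%p.limit]
	if item.height != height {
		return nil, false // slot holds a different (or no) height
	}
	return item.store, true
}

func main() {
	p := &snapshotPool{}
	p.Limit(4)
	_, ok := p.Get(7)
	fmt.Println(ok) // false: nothing stored yet
}
```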
```go
memStoreSnapshot, exists := rs.memStoreManager.GetSnapshotBranch(version)

if !exists {
	memStoreSnapshot = memstore.NewUnusableMemstore(version)
```
I think this means that anything calling ctx.MemStore() at an older height can crash the node given height is controlled by the user via query - is this intended?
Good question, I think a query panic wouldn't crash the node, but the design is still questionable.
```go
	return gaskv.NewStore(c.ms.GetKVStore(key), c.gasMeter, c.kvGasConfig)
}

func (c Context) MemStore(key []byte) storetypes.MemStore {
```
I think it's okay to leave out for performance and simplicity reasons.
Actionable comments posted: 0
🧹 Nitpick comments (4)
baseapp/abci_test.go (1)
426-437: Consider using defer for iterator cleanup to ensure proper resource release on test failures.

If any assertion between `iter := typedEkv.Iterator(nil, nil)` and `iter.Close()` fails, the iterator won't be closed, potentially leaking resources.

🔎 Proposed fix

```diff
 iter := typedEkv.Iterator(nil, nil)
+defer iter.Close()
 for ; iter.Valid(); iter.Next() {
 	memStoreSnapshot = append(memStoreSnapshot, struct {
 		key   []byte
 		value *cmtproto.Block
 	}{
 		key:   iter.Key(),
 		value: iter.Value(),
 	})
 }
-iter.Close()
```

store/memstore/typed_memstore.go (1)
117-121: Error() creates a new error instance on every call when the iterator is invalid.

Consider caching the error or using a package-level sentinel error for better performance and easier error comparison.

🔎 Proposed improvement

```diff
+var errInvalidIterator = errors.New("invalid typedMemStoreIterator")
+
 // Error returns an error if the iterator is invalid
 func (ti *typedMemStoreIterator[T]) Error() error {
 	if !ti.Valid() {
-		return errors.New("invalid typedMemStoreIterator")
+		return errInvalidIterator
 	}
 	return nil
 }
```

store/types/memstore.go (1)
125-151: MemStoreIterator lacks Error() method while TypedMemStoreIterator has it.

The `TypedMemStoreIterator[T]` interface includes an `Error() error` method (line 237), but the base `MemStoreIterator` interface does not. This inconsistency may cause issues when wrapping iterators or checking for errors uniformly.

Consider adding `Error() error` to `MemStoreIterator` for consistency.

🔎 Proposed addition

```diff
 MemStoreIterator interface {
 	// ... existing methods ...

 	// Close releases any resources associated with the iterator.
 	// It must be called when done using the iterator.
 	Close() error
+
+	// Error returns an error if the iterator is invalid.
+	// Returns nil if the iterator is valid.
+	Error() error
 }
```

store/memstore/internal/btree.go (1)
54-60: Get method uses a value receiver while Set/Delete use pointer receivers.
Get at line 54 uses a value receiver (bt BTree) while Set and Delete use pointer receivers (bt *BTree). While this works due to Go's auto-dereferencing, it's inconsistent and could cause subtle issues if someone takes the address of a BTree value.

🔎 Proposed fix for consistency

-func (bt BTree) Get(key []byte) any {
+func (bt *BTree) Get(key []byte) any {
 	i, found := bt.tree.Get(newItem[any](key, nil))
 	if !found {
 		return nil
 	}
 	return i.value
 }

-func (bt BTree) Iterator(start, end []byte) (TypedMemIterator[any], error) {
+func (bt *BTree) Iterator(start, end []byte) (TypedMemIterator[any], error) {

-func (bt BTree) ReverseIterator(start, end []byte) (TypedMemIterator[any], error) {
+func (bt *BTree) ReverseIterator(start, end []byte) (TypedMemIterator[any], error) {

-func (bt BTree) Copy() *BTree {
+func (bt *BTree) Copy() *BTree {

-func (bt BTree) Clear() {
+func (bt *BTree) Clear() {
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- baseapp/abci_test.go
- store/memstore/internal/btree.go
- store/memstore/internal/memiterator.go
- store/memstore/memstore.go
- store/memstore/typed_memstore.go
- store/rootmulti/store.go
- store/types/memstore.go
🚧 Files skipped from review as they are similar to previous changes (1)
- store/memstore/internal/memiterator.go
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-05-04T02:34:26.648Z
Learnt from: maxim-inj
Repo: InjectiveLabs/cosmos-sdk PR: 55
File: x/bank/keeper/virtual_balances.go:90-105
Timestamp: 2025-05-04T02:34:26.648Z
Learning: The ObjectStore in Cosmos SDK doesn't persist data between blocks. It's implemented in the transient store package (store/transient/store.go) as ObjStore, and its Commit() method clears the internal data structure at the end of each block without writing to persistent storage.
Applied to files:
store/rootmulti/store.go
🧬 Code graph analysis (5)
store/types/memstore.go (4)
  version/version.go (1)
    Commit (37-37)
  store/types/store.go (1)
    Iterator (321-321)
  store/reexport.go (1)
    Key (22-22)
  client/v2/autocli/flag/interface.go (1)
    Value (20-23)

store/memstore/typed_memstore.go (1)
  store/types/memstore.go (4)
    TypedMemStore (174-205)
    TypedMemStoreIterator (210-238)
    MemStore (112-123)
    MemStoreIterator (127-151)

baseapp/abci_test.go (4)
  store/types/memstore.go (1)
    MemStore (112-123)
  store/memstore/typed_memstore.go (1)
    NewTypedMemStore (20-24)
  store/prefix/memstore.go (2)
    NewTypedMemStore (18-21)
    NewMemStore (24-29)
  store/memstore/memstore.go (1)
    NewMemStoreManager (73-89)

store/memstore/memstore.go (3)
  store/types/memstore.go (4)
    MemStoreManager (10-43)
    MemStore (112-123)
    SnapshotPool (156-169)
    MemStoreIterator (127-151)
  store/memstore/internal/btree.go (2)
    BTree (30-33)
    NewBTree (36-44)
  store/memstore/snapshop_pool.go (1)
    SnapshotPool (15-21)

store/memstore/internal/btree.go (2)
  store/types/store.go (1)
    Iterator (321-321)
  store/memstore/internal/types.go (1)
    TypedMemIterator (3-10)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (16)
- GitHub Check: tests (02)
- GitHub Check: tests (01)
- GitHub Check: tests (00)
- GitHub Check: tests (03)
- GitHub Check: test-simapp
- GitHub Check: liveness-test
- GitHub Check: test-sim-nondeterminism
- GitHub Check: test-e2e
- GitHub Check: test-integration
- GitHub Check: build (amd64)
- GitHub Check: build (arm64)
- GitHub Check: build (arm)
- GitHub Check: Gosec
- GitHub Check: Analyze
- GitHub Check: dependency-review
- GitHub Check: golangci-lint
🔇 Additional comments (10)
baseapp/abci_test.go (1)
221-314: Test coverage for MemStore cache context lifecycle looks solid.
The test properly validates:
- Prefixed MemStore operations within EndBlocker
- Dropped vs committed cache contexts (cacheCtx discarded, cacheCtx2 committed)
- Post-commit state verification showing data properly isolated and committed
The assertions correctly verify that block-2 (from the discarded cache) is nil while block-1 and block-3 are present after commit.
553-563: Commit defer block executes after the flushMetadata defer - verify ordering intent.
The defer at line 553 executes after the defer rs.flushMetadata(...) at line 551, meaning the memstore commit happens after metadata is flushed. If the memstore commit panics (e.g., concurrent modification at line 561), metadata would already be written but memstore state would be inconsistent.
646-658: LGTM - CacheMultiStore properly branches from a pre-isolated memstore.
The nil guard and the branching pattern ensure each CacheMultiStore gets an isolated view of the memstore state. This correctly implements copy-on-write semantics.
724-737: Snapshot branch fallback uses an unusable memstore when the height is not found.
When a snapshot doesn't exist for the requested version, NewUnusableMemstore(version) is returned. Based on the past review discussion, this can panic if query code attempts write operations. This is acceptable for read-only query contexts, but ensure callers are aware of this limitation.
32-40: Type assertion will panic on type mismatch - ensure consistent usage.
The type assertion val.(T) at line 39 will panic if the stored value is not of type T. This is acceptable for a typed store where callers are expected to use consistent types, but it means mixing types under the same prefix would cause runtime panics.
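As a side note, callers that cannot guarantee type consistency can fall back to the comma-ok assertion form, which reports a mismatch instead of panicking. A small illustrative sketch; getTyped is hypothetical, not the store's actual API:

```go
package main

import "fmt"

// getTyped is a hypothetical helper showing the comma-ok assertion form:
// on a type mismatch it returns the zero value and false rather than panicking.
func getTyped[T any](raw any) (T, bool) {
	v, ok := raw.(T)
	return v, ok
}

func main() {
	var stored any = "hello"

	s, ok := getTyped[string](stored) // matches: returns the value and true
	fmt.Println(s, ok)

	n, ok := getTyped[int](stored) // mismatch: zero value and false, no panic
	fmt.Println(n, ok)
}
```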
1-43: Well-documented interface contracts for MemStoreManager.
The documentation clearly explains:
- CoW snapshot semantics for branching
- Single-commit guarantee with concurrent modification detection
- Height validation requirements
The interfaces provide a solid foundation for the memstore abstraction.
store/memstore/memstore.go (3)
60-70: Good - UncommittableMemStore now properly panics on Set/Delete operations.
This addresses the past review comment about allowing Set/Delete while panicking only on Commit. The behavior is now consistent - all mutating operations panic.
153-158: CompareAndSwap pattern may have a subtle race with concurrent Branch calls.
The CAS at line 154 uses t.base.Load() as the expected value. However, between t.base.Load() being called and the CAS completing, another goroutine could call Branch(), which also loads from t.root. This is likely intentional for the design (branches get a snapshot), but the t.base.Store(current) at line 158 updates base after the CAS, which could affect in-flight Branch calls.
Verify this is the intended behavior for concurrent Branch + Commit scenarios.
250-268: memStore.Commit correctly propagates changes to the parent or manager.
The logic properly handles:
- Nested branches: updates parent's current pointer
- Top-level branches: validates base hasn't changed before updating manager
The unreachable panic at line 267 is a good defensive measure.
store/memstore/internal/btree.go (1)
19-33: Thread safety documentation is clear and addresses the design intent.
The documentation properly explains:
- BTree is NOT thread-safe for concurrent mutations
- Single goroutine access per instance is enforced via isolated branches
- copyLock only serializes Copy() calls
This addresses the past review comment about documenting the thread-safety invariant.
kakysha
left a comment
There was a problem hiding this comment.
- Was it tested with -race in the context of multiple reading goroutines? We added copyLock inside internal/btree, so I assume it should work, but we need to test that.
| } | ||
|
|
||
| if other.memStore != nil { | ||
| other.memStore.Commit() |
There was a problem hiding this comment.
Not sure if this is what we actually want/need to do here? Restore(other) means we're resetting our current cms to the state of the other store, but calling other.memStore.Commit() will commit to other.memStore.parent, and that is not always the current cms.memStore?
There was a problem hiding this comment.
Can you explain when other.memStore.parent would not be from cms? But I assume you just mean it's theoretically possible, because in practice you would always call it from where the Clone was called.
So are you fine if I just add a panic for the case it doesn't match?
There was a problem hiding this comment.
Something like that, yes. I haven't checked all the call sites, but I know this mechanism is used in the EVM restore-from-snapshot logic. Maybe there we can end up in a situation where the snapshot has a store not directly branched from the current cms?
There was a problem hiding this comment.
The only relevant use case for this inside the EVM that I found is when reverting a native change to a snapshot. And since snapshots are created via cachems.Clone(), which is implemented a couple of lines above via memstore.Branch(), it must be safe, since all snapshots are effectively branches of the current cms memstore.
store/memstore/snapshop_pool.go
Outdated
|
|
||
| snapshotItem struct { | ||
| mtx *sync.RWMutex | ||
| store types.MemStoreManager |
There was a problem hiding this comment.
Why do we store managers in the pool instead of memstores? It provokes unnecessary Branch() calls on GetSnapshotBranch() with tree copying, even though the snapshots are read-only?
There was a problem hiding this comment.
It's an uncommittable manager. But yeah, storing the memstore directly may still be better; you still need to copy it anyway, though. Will add this refactor.
Edit: Actually, maybe it's possible without the Copy, to be honest. I don't see why it's needed; going to verify with Jesse...
|
|
||
| // Get retrieves a value for the given key from the current branch. | ||
| func (b *memStore) Get(key []byte) any { | ||
| return b.current.Get(key) |
There was a problem hiding this comment.
BIG note: we don't return a copy of the data; instead we return a pointer to the data stored in the memstore. DO NOT MODIFY it directly.
There was a problem hiding this comment.
what are you proposing here?
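To make the aliasing hazard concrete, a minimal sketch; sketchStore is a stand-in for the btree-backed memstore, not the real type:

```go
package main

import "fmt"

// sketchStore hands back the stored reference from Get, not a copy, so
// mutating the returned value mutates what the store holds - the hazard
// the note above warns about.
type sketchStore struct{ data map[string][]byte }

func (s *sketchStore) Set(k string, v []byte) { s.data[k] = v }
func (s *sketchStore) Get(k string) []byte    { return s.data[k] }

func main() {
	s := &sketchStore{data: map[string][]byte{}}
	s.Set("k", []byte("abc"))

	got := s.Get("k")
	got[0] = 'X' // caller "just reads" but actually mutates the stored bytes

	fmt.Println(string(s.Get("k"))) // prints Xbc - the store saw the mutation
}
```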
store/rootmulti/store.go
Outdated
| defer func() { | ||
| height := rs.lastCommitInfo.Version | ||
| current := rs.memStore.Load() | ||
| (*current).Commit() |
There was a problem hiding this comment.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@store/memstore/internal/btree.go`:
- Around line 88-90: Clear() is defined with a value receiver (func (bt BTree)
Clear()) which prevents modifications to the original BTree when the underlying
tree reference might be replaced and is inconsistent with Set and Delete; change
the receiver to a pointer (func (bt *BTree) Clear()) so the method operates on
the original BTree instance and any reassignment of the tree field (bt.tree)
will persist, mirroring how Set and Delete use pointer receivers.
In `@store/memstore/snapshop_pool.go`:
- Line 1: The file is misnamed "snapshop_pool.go"; rename it to
"snapshot_pool.go" to fix the typo and ensure consistent imports and tooling;
update any references or imports if they rely on the filename (e.g., CI scripts,
build targets) and run `go vet`/`go test` to verify package memstore still
compiles with the renamed source file.
🧹 Nitpick comments (3)
store/memstore/internal/btree.go (1)
44-50: Inconsistent empty key validation between mutation and iteration methods.
Set and Delete do not validate empty keys, but Iterator and ReverseIterator return errKeyEmpty for empty (non-nil, zero-length) keys. This inconsistency could lead to data being stored with empty keys that cannot be iterated over. Consider adding the same validation to Set and Delete for consistency, or document that empty keys are valid for storage but not for iteration bounds.

🔧 Optional: Add empty key validation to Set

 func (bt *BTree) Set(key []byte, value any) {
+	if key != nil && len(key) == 0 {
+		panic("key cannot be empty")
+	}
 	bt.tree.Set(newItem(key, value))
 }

store/memstore/internal/memiterator.go (1)
70-75: Error() semantics are unconventional.
The Error() method returns an error when the iterator is invalid, rather than for actual operational errors. This differs from typical Go iterator patterns where Error() reports errors encountered during iteration. The current implementation is used internally by assertValid() to trigger panics.
store/memstore/manager.go (1)
111-124: Line 123 (t.current = current) appears redundant.
At line 112, current := t.current copies the reference. After the CAS succeeds, line 123 assigns t.current = current, which sets it to the same value it already held. This line has no effect. If the intent was to prepare for the next block cycle, the assignment would need to be a new copy. However, looking at the overall flow, t.current is only updated externally when an L1 memStore commits, so this line seems unnecessary.

🔧 Consider removing redundant assignment

 	t.base.Store(current)
-	t.current = current
 	t.snapshotPool.Set(height, current.Copy())
 }
| func (bt BTree) Clear() { | ||
| bt.tree.Clear() | ||
| } |
There was a problem hiding this comment.
Clear() uses value receiver - changes won't affect the original BTree.
The Clear() method uses a value receiver (bt BTree) instead of a pointer receiver (bt *BTree). Since BTree contains a pointer to the underlying tree, the Clear() call will work on the tree data, but if the underlying btree implementation replaces the tree reference, changes may not propagate correctly. More importantly, this is inconsistent with Set and Delete which use pointer receivers.
🔧 Suggested fix
-func (bt BTree) Clear() {
+func (bt *BTree) Clear() {
bt.tree.Clear()
}

🤖 Prompt for AI Agents
In `@store/memstore/internal/btree.go` around lines 88 - 90, Clear() is defined
with a value receiver (func (bt BTree) Clear()) which prevents modifications to
the original BTree when the underlying tree reference might be replaced and is
inconsistent with Set and Delete; change the receiver to a pointer (func (bt
*BTree) Clear()) so the method operates on the original BTree instance and any
reassignment of the tree field (bt.tree) will persist, mirroring how Set and
Delete use pointer receivers.
| @@ -0,0 +1,85 @@ | |||
| package memstore | |||
There was a problem hiding this comment.
Filename typo: "snapshop_pool.go" should be "snapshot_pool.go".
The filename contains a typo - "snapshop" instead of "snapshot".
🤖 Prompt for AI Agents
In `@store/memstore/snapshop_pool.go` at line 1, The file is misnamed
"snapshop_pool.go"; rename it to "snapshot_pool.go" to fix the typo and ensure
consistent imports and tooling; update any references or imports if they rely on
the filename (e.g., CI scripts, build targets) and run `go vet`/`go test` to
verify package memstore still compiles with the renamed source file.
There was a problem hiding this comment.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
store/cachemulti/store.go (1)
132-142: Add nil guards in Write() and CacheMultiStore() for consistency and robustness.
The memStore field can be nil, as evidenced by defensive nil checks in the Clone() and Restore() methods. However, Write() (line 137) calls cms.memStore.Commit() and CacheMultiStore() (line 167) calls cms.memStore.Branch() without nil checks, risking panics if a nil memStore reaches these code paths. While the current rootmulti caller enforces non-nil invariants via checks before instantiation, direct construction or legacy call paths could bypass this protection.

Proposed fix (nil-safe memStore handling)

 func (cms Store) Write() {
 	if cms.branched {
 		panic("cannot Write on branched store")
 	}
 	for _, store := range cms.stores {
 		store.Write()
 	}
-
-	cms.memStore.Commit()
+	if cms.memStore != nil {
+		cms.memStore.Commit()
+	}
 }

 func (cms Store) CacheMultiStore() types.CacheMultiStore {
 	// Important: create an isolated MemStore branch for nested cache contexts.
 	// Reusing the same memStore instance causes writes in a dropped CacheContext
 	// to leak into the parent context. Branching ensures writes are only
 	// propagated when Write() is called on the child cache store.
-	return NewFromParent(cms.getCacheWrapper, cms.traceWriter, cms.traceContext, cms.memStore.Branch())
+	var memStore types.MemStore
+	if cms.memStore != nil {
+		memStore = cms.memStore.Branch()
+	}
+	return NewFromParent(cms.getCacheWrapper, cms.traceWriter, cms.traceContext, memStore)
 }
There was a problem hiding this comment.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@store/memstore/manager.go`:
- Around line 74-94: Branch() currently loads t.root and t.base separately so
current := root.Copy() can end up out-of-sync with base if Commit() swaps root
concurrently; change the logic so the branch's base always refers to the same
snapshot used to build current—e.g., after doing root := t.root.Load() and
current := root.Copy(), set base = root (instead of calling t.base.Load()), or
implement a retry loop that reloads root/base until they are consistent; update
the memStore returned to use that base, and keep references to memStoreManager,
Branch(), t.root.Load(), t.base.Load(), current := root.Copy(), and Commit()
when making the change.
| // Branch creates a top-level branch. | ||
| // It creates a copy-on-write snapshot of the tree's root btree as its working copy. | ||
| func (t *memStoreManager) Branch() types.MemStore { | ||
| root := t.root.Load() | ||
| // Create a copy-on-write snapshot for the current | ||
| current := root.Copy() | ||
|
|
||
| var base *btree | ||
| if t.base != nil { | ||
| base = t.base.Load() | ||
| } | ||
|
|
||
| return &memStore{ | ||
| // This is a top-level branch, so parent is nil | ||
| parent: nil, | ||
|
|
||
| current: current, | ||
| base: base, | ||
| manager: t, | ||
| } | ||
| } |
There was a problem hiding this comment.
Ensure Branch() captures a consistent root/base snapshot under concurrent Commit().
On Line 77-84, root and base are loaded separately. If Commit() swaps root between those loads, current can be built from the old root while base points at the new one, breaking the invariant that base matches the snapshot used to build current. That can allow a stale branch to pass commit checks or panic unexpectedly. Consider retrying until root == base (or using base = root) before copying.
🔧 Proposed fix (retry until root/base are consistent)
func (t *memStoreManager) Branch() types.MemStore {
- root := t.root.Load()
- // Create a copy-on-write snapshot for the current
- current := root.Copy()
-
- var base *btree
- if t.base != nil {
- base = t.base.Load()
- }
+ var root, base *btree
+ for {
+ root = t.root.Load()
+ if t.base == nil {
+ base = root
+ break
+ }
+ base = t.base.Load()
+ if base == root {
+ break
+ }
+ // commit in flight; retry for a consistent snapshot
+ }
+
+ // Create a copy-on-write snapshot for the current
+ current := root.Copy()
return &memStore{
// This is a top-level branch, so parent is nil
parent: nil,
current: current,
base: base,
manager: t,
}
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| // Branch creates a top-level branch. | |
| // It creates a copy-on-write snapshot of the tree's root btree as its working copy. | |
| func (t *memStoreManager) Branch() types.MemStore { | |
| root := t.root.Load() | |
| // Create a copy-on-write snapshot for the current | |
| current := root.Copy() | |
| var base *btree | |
| if t.base != nil { | |
| base = t.base.Load() | |
| } | |
| return &memStore{ | |
| // This is a top-level branch, so parent is nil | |
| parent: nil, | |
| current: current, | |
| base: base, | |
| manager: t, | |
| } | |
| } | |
| // Branch creates a top-level branch. | |
| // It creates a copy-on-write snapshot of the tree's root btree as its working copy. | |
| func (t *memStoreManager) Branch() types.MemStore { | |
| var root, base *btree | |
| for { | |
| root = t.root.Load() | |
| if t.base == nil { | |
| base = root | |
| break | |
| } | |
| base = t.base.Load() | |
| if base == root { | |
| break | |
| } | |
| // commit in flight; retry for a consistent snapshot | |
| } | |
| // Create a copy-on-write snapshot for the current | |
| current := root.Copy() | |
| return &memStore{ | |
| // This is a top-level branch, so parent is nil | |
| parent: nil, | |
| current: current, | |
| base: base, | |
| manager: t, | |
| } | |
| } |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@store/memstore/manager.go` around lines 74 - 94, Branch() currently loads
t.root and t.base separately so current := root.Copy() can end up out-of-sync
with base if Commit() swaps root concurrently; change the logic so the branch's
base always refers to the same snapshot used to build current—e.g., after doing
root := t.root.Load() and current := root.Copy(), set base = root (instead of
calling t.base.Load()), or implement a retry loop that reloads root/base until
they are consistent; update the memStore returned to use that base, and keep
references to memStoreManager, Branch(), t.root.Load(), t.base.Load(), current
:= root.Copy(), and Commit() when making the change.
https://injective-labs.atlassian.net/browse/CP-323
Summary by CodeRabbit
New Features
Tests