feat: add memstore #77

Open
gorgos wants to merge 25 commits into v0.50.x-inj from CP-323/memstore-integration

Conversation

@gorgos gorgos commented Dec 17, 2025

https://injective-labs.atlassian.net/browse/CP-323

Summary by CodeRabbit

  • New Features

    • Added a copy-on-write in-memory store with branching/commit semantics, snapshot pool, typed views, prefixed scoped stores, B‑tree-backed iterators, and controls for snapshot pool limits and trace recording. Exposed memstore access via context and integrated memstore management into multi-store flows; modules can opt into memstore warm-up.
  • Tests

    • Added extensive tests covering memstore lifecycle, warmup, branching/commit semantics, snapshots, isolation, iteration behavior, concurrency, and BaseApp memstore integration.


coderabbitai bot commented Dec 17, 2025

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds a copy-on-write in-memory MemStore backed by a BTree with snapshot pool and manager, typed and prefix wrappers, integration into multi-store/BaseApp, module warmup hooks, snapshot-related options, and extensive tests for lifecycle, concurrency, and iterators.

Changes

Cohort / File(s) Summary
Core interfaces & types
store/types/memstore.go, store/types/store.go
New memstore interfaces (MemStore, MemStoreManager, SnapshotPool, TypedMemStore, iterators) and MultiStore/CommitMultiStore API extensions (GetMemStore, SetMemStoreManager, SetSnapshotPoolLimit).
MemStore implementation & manager
store/memstore/memstore.go, store/memstore/manager.go, store/memstore/unusable_memstore.go
Copy‑on‑write memStore implementation with Branch/Commit semantics, memStoreManager with snapshot pool, GetSnapshotBranch/Branch/Commit/SetSnapshotPoolLimit, and an unusable snapshot stub.
BTree & iterator internals
store/memstore/internal/btree.go, store/memstore/internal/types.go, store/memstore/internal/memiterator.go
New generic BTree wrapper and TypedMemIterator interface plus memIterator implementation (forward/reverse, bounds, copy semantics).
Snapshot pool
store/memstore/snapshop_pool.go, store/memstore/manager.go
Fixed-size, per-item locked snapshot pool with Get/Set/ResizeAndClear; manager wires snapshots into commits and retrieval.
Typed & prefix wrappers
store/memstore/typed_memstore.go, store/prefix/memstore.go
Generic typedMemStore and iterators; prefixMemStore wrapper that prefixes/strips keys and provides prefixed iterators.
CacheMultiStore integration
store/cachemulti/store.go
CacheMultiStore now carries a MemStore, branches via memStore.Branch(), commits/clone/restore also handle memStore, and exposes GetMemStore().
RootMultiStore integration
store/rootmulti/store.go
Wires MemStoreManager into root store, pre-creates branched isolated memStore, exposes SetMemStoreManager/SetSnapshotPoolLimit/GetMemStore, and commits memStore on Commit.
Context & module hooks
types/context.go, types/module/module.go
Context.MemStore(key) returns a prefix-wrapped MemStore; added HasWarmupMemStore and Manager.WarmupMemStore to call module warmups.
BaseApp options & tests
baseapp/options.go, baseapp/abci_test.go
New SetupSnapshotPoolLimit option and BaseApp.SetSnapshotPoolLimit; SetTraceFlightRecorder added; tests for MemStore cache lifecycle and warmup added.
Mocks & tests
server/mock/store.go, store/memstore/*_test.go, store/prefix/memstore_test.go
Mock multiStore extended with MemStore methods; extensive memstore and prefix tests covering concurrency, snapshots, iterators, warmup, and edge cases.
Build config
go.mod
Added replace mapping for github.com/tidwall/btree => github.com/InjectiveLabs/btree (plus preserved store replace entry).

Sequence Diagram(s)

sequenceDiagram
    participant App as BaseApp/Module
    participant RootMS as RootMultiStore
    participant Manager as MemStoreManager
    participant Branch as memStore(branch)
    participant Pool as SnapshotPool

    App->>RootMS: Create CacheMultiStore()
    RootMS->>Manager: Branch()
    Manager->>Branch: copy root -> branch
    RootMS-->>App: Cache context with Branch

    App->>Branch: Set/Delete (mutations)

    App->>RootMS: Commit(height)
    RootMS->>Branch: Commit()
    Branch->>Manager: Commit(height)
    Manager->>Manager: atomic swap root ← current
    Manager->>Pool: Set(height, snapshot)
    Manager->>RootMS: Branch() -> new isolated memStore
    RootMS-->>App: New cache memStore

    App->>Manager: GetSnapshotBranch(height)
    Manager->>Pool: Get(height)
    Pool-->>Manager: snapshot tree
    Manager-->>App: immutable memStore view

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 I hopped through roots and branches of code,

I planted snapshots where small trees grow,
I branched, I committed, then warmed up the store,
Prefixed my keys and then bounced out the door.
🥕✨

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)

  • Description Check — ✅ Passed. Check skipped: CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed. The title 'feat: add memstore' accurately describes the main objective of the pull request, which adds a complete in-memory store implementation across multiple files.
  • Docstring Coverage — ✅ Passed. Docstring coverage is 80.95%, which meets the required threshold of 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

@gorgos gorgos marked this pull request as ready for review December 30, 2025 08:22
@github-actions

@gorgos your pull request is missing a changelog!

@gorgos gorgos requested review from kakysha and ze97286 December 30, 2025 08:22
@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🧹 Nitpick comments (11)
types/module/module.go (1)

881-889: Consider adding ordering control for warmup execution.

The WarmupMemStore implementation correctly iterates modules and invokes warmup on those implementing HasWarmupMemStore. However, unlike other lifecycle hooks (BeginBlock, EndBlock), there's no ordering mechanism (e.g., OrderWarmupMemStore).

If warmup order matters for your use case (e.g., one module's warmup depends on another's completion), consider adding a SetOrderWarmupMemStore method similar to lines 397-406.

types/context.go (1)

371-373: Add nil safety check for MultiStore.

The MemStore method calls c.ms.GetMemStore() without checking if c.ms is nil. While NewContext initializes the multistore, defensive programming suggests adding a nil check here, consistent with other Context accessor patterns.

🔎 Proposed fix
 func (c Context) MemStore(key []byte) storetypes.MemStore {
+	if c.ms == nil {
+		panic("multistore is nil")
+	}
 	return prefix.NewMemStore(c.ms.GetMemStore(), key)
 }
server/mock/store.go (1)

12-13: Redundant import alias.

Both types and storetypes alias the same package cosmossdk.io/store/types. Consider using a single alias for consistency.

🔎 Proposed fix
-	"cosmossdk.io/store/types"
-	storetypes "cosmossdk.io/store/types"
+	storetypes "cosmossdk.io/store/types"

Then update the new methods to use storetypes instead of types:

-func (ms multiStore) SetMemStoreManager(types.MemStoreManager) {
+func (ms multiStore) SetMemStoreManager(storetypes.MemStoreManager) {
 	panic("not implemented")
 }

-func (ms multiStore) SetSnapshotPoolLimit(int64) {
+func (ms multiStore) SetSnapshotPoolLimit(int64) {
 	panic("not implemented")
 }

-func (ms multiStore) GetMemStore() types.MemStore {
+func (ms multiStore) GetMemStore() storetypes.MemStore {
 	panic("not implemented")
 }
store/prefix/memstore_test.go (1)

60-63: Variable naming inconsistency.

NewBranch uses PascalCase which is typically reserved for exported identifiers. Use camelCase for local variables.

🔎 Proposed fix
-	NewBranch := tree.Branch()
-	newPrefixBatch := NewMemStore(NewBranch, prefix)
+	newBranch := tree.Branch()
+	newPrefixBatch := NewMemStore(newBranch, prefix)
store/memstore/snapshop_pool.go (1)

1-1: Filename typo.

The filename snapshop_pool.go should be snapshot_pool.go.

store/memstore/memstore_test.go (2)

225-244: Non-idiomatic assertion usage.

Using fmt.Errorf as the message argument to assert.NotNil and assert.Equal is unusual. These functions expect a format string and variadic args, not an error value.

🔎 Proposed fix
-				assert.NotNil(t, batch, fmt.Errorf("worker %d: nil batch created", workerID))
+				assert.NotNil(t, batch, "worker %d: nil batch created", workerID)
...
-				assert.Equal(t, readValue, value, fmt.Errorf("worker %d: batch %d failed to read written value", workerID, i))
+				assert.Equal(t, readValue, value, "worker %d: batch %d failed to read written value", workerID, i)
...
-				assert.Equal(t, batch.Get([]byte(initKey)), "init-value-0", fmt.Errorf("worker %d: batch %d failed to read initial value", workerID, i))
+				assert.Equal(t, batch.Get([]byte(initKey)), "init-value-0", "worker %d: batch %d failed to read initial value", workerID, i)

667-690: Large benchmark setup may cause slow CI runs.

The benchmark pre-populates 50 million entries before measuring. Consider parameterizing or reducing this for faster feedback, or ensuring it only runs with explicit -bench flags.

-	for i := 0; i < 50_000_000; i++ {
+	const setupSize = 1_000_000 // Reduce for faster CI; increase for production benchmarks
+	for i := 0; i < setupSize; i++ {
store/prefix/memstore.go (1)

154-170: Inconsistent behavior between Key() and Value() when invalid.

Key() panics when the iterator is invalid (line 156), but Value() returns nil (line 166). This inconsistency could confuse callers. Consider aligning the behavior - either both panic or both return nil/zero values.

🔎 Option 1: Both panic (consistent with Key)
 func (pi *prefixMemStoreIterator) Value() any {
 	if !pi.valid {
-		return nil
+		panic("prefixIterator invalid, cannot call Value()")
 	}

 	return pi.iter.Value()
 }
🔎 Option 2: Both return nil (lenient behavior)
 func (pi *prefixMemStoreIterator) Key() []byte {
 	if !pi.valid {
-		panic("prefixIterator invalid, cannot call Key()")
+		return nil
 	}

 	key := pi.iter.Key()
 	return stripPrefix(key, pi.prefix)
 }
store/memstore/internal/btree.go (1)

48-54: Consider using pointer receiver for consistency

Get uses a value receiver while Set and Delete use pointer receivers. While functionally correct (the tree field is a pointer), using a pointer receiver consistently would avoid unnecessary struct copying and improve consistency.

🔎 Suggested change
-func (bt BTree) Get(key []byte) any {
+func (bt *BTree) Get(key []byte) any {
 	i, found := bt.tree.Get(newItem[any](key, nil))
 	if !found {
 		return nil
 	}
 	return i.value
 }
store/memstore/typed_memstore.go (1)

115-121: Consider using a sentinel error

The Error() method creates a new error on each call when the iterator is invalid. A package-level sentinel error would be more efficient and allow for error comparison.

🔎 Suggested change
+var errInvalidIterator = errors.New("invalid typedMemStoreIterator")
+
 // Error returns an error if the iterator is invalid
 func (ti *typedMemStoreIterator[T]) Error() error {
 	if !ti.Valid() {
-		return errors.New("invalid typedMemStoreIterator")
+		return errInvalidIterator
 	}
 	return nil
 }
store/memstore/memstore.go (1)

87-99: Clarify the comment on GetSnapshotBranch

The comment "Committing a branch created here is unsafe" at line 87 is slightly misleading. Since the result is wrapped in UncommittableMemStore, calling Commit() will panic rather than cause unsafe behavior. Consider rewording to clarify that commits are prevented.

🔎 Suggested comment improvement
-// Committing a branch created here is unsafe.
+// GetSnapshotBranch returns an UncommittableMemStore that panics on Commit(),
+// as snapshot branches represent immutable historical state.
 func (t *memStoreManager) GetSnapshotBranch(height int64) (types.MemStore, bool) {
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b80153c and d2e93f0.

⛔ Files ignored due to path filters (1)
  • go.sum is excluded by !**/*.sum
📒 Files selected for processing (20)
  • baseapp/abci_test.go
  • baseapp/options.go
  • go.mod
  • server/mock/store.go
  • store/cachemulti/store.go
  • store/memstore/internal/btree.go
  • store/memstore/internal/memiterator.go
  • store/memstore/internal/types.go
  • store/memstore/memstore.go
  • store/memstore/memstore_test.go
  • store/memstore/snapshop_pool.go
  • store/memstore/typed_memstore.go
  • store/memstore/unusable_memstore.go
  • store/prefix/memstore.go
  • store/prefix/memstore_test.go
  • store/rootmulti/store.go
  • store/types/memstore.go
  • store/types/store.go
  • types/context.go
  • types/module/module.go
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-05-04T02:34:26.648Z
Learnt from: maxim-inj
Repo: InjectiveLabs/cosmos-sdk PR: 55
File: x/bank/keeper/virtual_balances.go:90-105
Timestamp: 2025-05-04T02:34:26.648Z
Learning: The ObjectStore in Cosmos SDK doesn't persist data between blocks. It's implemented in the transient store package (store/transient/store.go) as ObjStore, and its Commit() method clears the internal data structure at the end of each block without writing to persistent storage.

Applied to files:

  • store/memstore/unusable_memstore.go
  • store/rootmulti/store.go
  • types/context.go
  • go.mod
  • store/cachemulti/store.go
🧬 Code graph analysis (17)
store/memstore/memstore_test.go (2)
store/memstore/memstore.go (1)
  • NewMemStoreManager (65-81)
store/types/memstore.go (1)
  • MemStore (112-123)
store/memstore/unusable_memstore.go (1)
store/types/memstore.go (2)
  • MemStore (112-123)
  • MemStoreIterator (127-151)
store/memstore/snapshop_pool.go (1)
store/types/memstore.go (1)
  • MemStoreManager (10-43)
types/module/module.go (1)
types/context.go (1)
  • Context (42-76)
store/prefix/memstore.go (1)
store/types/memstore.go (3)
  • MemStore (112-123)
  • TypedMemStore (174-205)
  • MemStoreIterator (127-151)
store/types/store.go (1)
store/types/memstore.go (2)
  • MemStore (112-123)
  • MemStoreManager (10-43)
store/rootmulti/store.go (3)
store/memstore/memstore.go (1)
  • NewMemStoreManager (65-81)
store/cachemulti/store.go (2)
  • Store (23-32)
  • NewFromKVStore (39-56)
store/memstore/unusable_memstore.go (1)
  • NewUnusableMemstore (17-21)
server/mock/store.go (1)
store/types/memstore.go (2)
  • MemStoreManager (10-43)
  • MemStore (112-123)
types/context.go (2)
store/types/memstore.go (1)
  • MemStore (112-123)
store/prefix/memstore.go (1)
  • NewMemStore (24-29)
baseapp/abci_test.go (4)
store/types/memstore.go (1)
  • MemStore (112-123)
store/memstore/typed_memstore.go (1)
  • NewTypedMemStore (20-24)
store/prefix/memstore.go (2)
  • NewTypedMemStore (18-21)
  • NewMemStore (24-29)
store/memstore/memstore.go (1)
  • NewMemStoreManager (65-81)
store/cachemulti/store.go (2)
store/types/memstore.go (1)
  • MemStore (112-123)
store/rootmulti/store.go (1)
  • Store (60-86)
store/memstore/memstore.go (3)
store/types/memstore.go (3)
  • MemStoreManager (10-43)
  • MemStore (112-123)
  • SnapshotPool (156-169)
store/memstore/internal/btree.go (2)
  • BTree (24-27)
  • NewBTree (30-38)
store/memstore/snapshop_pool.go (1)
  • SnapshotPool (15-21)
store/memstore/internal/btree.go (2)
store/types/store.go (1)
  • Iterator (321-321)
store/memstore/internal/types.go (1)
  • TypedMemIterator (3-10)
store/memstore/typed_memstore.go (1)
store/types/memstore.go (4)
  • TypedMemStore (174-205)
  • TypedMemStoreIterator (210-238)
  • MemStore (112-123)
  • MemStoreIterator (127-151)
baseapp/options.go (1)
baseapp/baseapp.go (1)
  • BaseApp (69-217)
store/types/memstore.go (1)
store/types/store.go (1)
  • Iterator (321-321)
store/memstore/internal/memiterator.go (3)
store/memstore/internal/types.go (1)
  • TypedMemIterator (3-10)
store/memstore/internal/btree.go (1)
  • BTree (24-27)
collections/item.go (1)
  • Item (13-13)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (13)
  • GitHub Check: tests (03)
  • GitHub Check: tests (01)
  • GitHub Check: tests (00)
  • GitHub Check: tests (02)
  • GitHub Check: test-e2e
  • GitHub Check: test-sim-nondeterminism
  • GitHub Check: liveness-test
  • GitHub Check: test-integration
  • GitHub Check: build (arm64)
  • GitHub Check: build (amd64)
  • GitHub Check: build (arm)
  • GitHub Check: Analyze
  • GitHub Check: Gosec
🔇 Additional comments (41)
types/module/module.go (1)

238-241: LGTM! Clean interface addition.

The HasWarmupMemStore interface follows the established pattern of other module lifecycle interfaces in this file.

store/memstore/internal/types.go (1)

1-10: LGTM! Well-designed generic iterator interface.

The TypedMemIterator[T any] interface provides a clean, type-safe abstraction for iterating over in-memory store data. The method signatures align with standard iterator patterns.

store/memstore/unusable_memstore.go (1)

1-49: LGTM! Panic-based sentinel implementation is correctly structured.

The unusableMemstore provides a fail-fast mechanism for accessing non-existent snapshot versions. The implementation correctly:

  • Uses unexported type with exported constructor for encapsulation
  • Panics consistently with height-specific error messages
  • Includes compile-time interface compliance check (line 9)

This aligns with the design goal stated in the comment at lines 11-12.

baseapp/abci_test.go (2)

221-314: LGTM! Comprehensive test for MemStore cache context lifecycle.

This test thoroughly exercises:

  • Prefixed MemStore usage via ctx.MemStore(blockPrefix)
  • Nested CacheContext isolation (uncommitted changes are dropped)
  • Committed cache writes are visible after writeCache()
  • State is not visible before commit but persists after commit

The test validates the copy-on-write and branching semantics critical to the MemStore design.


316-451: LGTM! Excellent coverage of MemStore warmup and restart scenarios.

This test validates:

  • Warmup function populating MemStore from external state (DB)
  • Persistence of MemStore contents across commits
  • Snapshot capture and restoration after app restart
  • Correct iteration and retrieval of warmed-up data

The test covers the full lifecycle of MemStore snapshot management, which is essential for ensuring correctness across node restarts.

baseapp/options.go (1)

132-134: LGTM! Snapshot pool limit setters follow established patterns.

Both SetupSnapshotPoolLimit and SetSnapshotPoolLimit correctly:

  • Follow the functional option pattern used throughout this file
  • Check app.sealed before modification (line 412)
  • Delegate to the CommitMultiStore (line 416)

The implementation is consistent with similar configuration setters like SetIAVLCacheSize and SetPruning.

Also applies to: 411-417

store/types/store.go (1)

143-143: No action needed - all interface implementations are complete.

The interface changes have been properly implemented:

  • GetMemStore() added to MultiStore interface is implemented in both store/rootmulti/store.go (line 786) and store/cachemulti/store.go (line 262)
  • SetMemStoreManager() and SetSnapshotPoolLimit() added to CommitMultiStore interface are implemented in store/rootmulti/store.go (lines 157, 161) and server/mock/store.go (lines 26, 30)

Note: cachemulti/store.go only implements CacheMultiStore (which extends MultiStore), not CommitMultiStore, so it correctly only has GetMemStore() and does not need the other two methods.

server/mock/store.go (1)

26-36: LGTM!

The stub methods correctly implement the interface contract for the mock store. The panic("not implemented") pattern is consistent with other methods in this file.

store/prefix/memstore_test.go (1)

1-9: LGTM!

Comprehensive test coverage for the prefix memstore including basic operations, iterators, nested batches, edge cases, snapshot handling, and random data testing. The tests properly verify isolation semantics and commit propagation.

Also applies to: 367-397

store/rootmulti/store.go (3)

544-554: Review the CAS panic behavior for concurrent modification.

The CompareAndSwap failure triggers a panic, which is appropriate for detecting a programming error (concurrent modification of the memstore pointer during commit). However, this is a hard crash scenario.

Ensure documentation or comments clarify that Commit must not be called concurrently and that this panic represents an invariant violation rather than a recoverable error.


637-650: LGTM!

The memstore branching in CacheMultiStore correctly creates an isolated branch for each cache context. The nil check guards against invariant violations, and the dereference pattern is safe since Load() returns a non-nil pointer to the stored value.


715-728: LGTM!

Proper fallback to NewUnusableMemstore when a snapshot doesn't exist ensures graceful handling of queries at historical heights where the memstore state isn't available. The uncommittable wrapper prevents accidental modifications.

store/memstore/internal/memiterator.go (2)

24-60: LGTM!

The iterator initialization correctly handles both ascending and descending modes with proper boundary positioning. The descending case correctly treats end as exclusive by seeking to the end and moving backwards.


82-104: LGTM!

The Next() and keyInRange() methods correctly advance the iterator and validate bounds. The ascending mode checks against the exclusive end bound, while descending mode checks against the inclusive start bound.

store/memstore/memstore_test.go (1)

21-61: LGTM!

Excellent test coverage including:

  • L1/L2 batch relationship and commit propagation
  • Uncommitted batch isolation (both L1 and L2)
  • Snapshot pool functionality with height-based retrieval
  • Proper panic behavior for concurrent modification detection

The tests effectively validate the copy-on-write semantics and isolation guarantees.

Also applies to: 301-416, 497-570

store/prefix/memstore.go (2)

55-58: Commit() commits the entire parent store.

Calling Commit() on a prefixed store delegates to the parent's Commit(), which applies all changes in the parent - not just those under this prefix. This is correct for the branching model but may be surprising if multiple prefixed stores share the same parent branch.

Ensure callers understand that committing a prefixed store affects the entire parent branch, not just the prefixed keys.


69-108: LGTM!

The iterator implementations correctly:

  1. Compute prefixed bounds using cloneAppend for start/end
  2. Use cpIncr to create an exclusive upper bound when end is nil
  3. Delegate to the parent iterator with the prefixed range
store/cachemulti/store.go (4)

140-142: LGTM!

The Write() method correctly commits the memStore after writing all KV stores. This ensures the memory-backed state is persisted to the parent in the correct order.


161-167: LGTM!

Excellent comment explaining why memStore.Branch() is necessary. This prevents the critical bug where writes in a dropped CacheContext would leak into the parent context.


261-264: LGTM!

Simple accessor for the memStore field, enabling external access to the memory-backed store for prefix wrapping and other operations.


246-249: memStore commit semantics in Restore() are correct.

When Restore() commits other.memStore, it correctly propagates changes back to the parent because Clone() creates a nested memStore branch (via cms.memStore.Branch()), and calling Commit() on a nested branch updates its parent's current pointer (parent.current = branch.current). Since memStore is always initialized as a nested branch in this codebase (see rootmulti/store.go), the parent Store's memStore will reflect the committed changes.

store/memstore/internal/btree.go (4)

19-38: LGTM - BTree wrapper design

The design choice of NoLocks: true with a dedicated copyLock for COW operations is appropriate for the performance-critical path. The documentation clearly states that external synchronization is required for concurrent access.


56-68: LGTM - Iterator boundary validation

The validation correctly distinguishes between nil (unbounded) and empty []byte{} (invalid) boundaries, which is a sensible design choice.


70-82: LGTM - Copy-on-write implementation

The Copy() method correctly:

  1. Locks during the underlying tree copy (which modifies internal state per the comment)
  2. Creates a fresh mutex for the new instance, ensuring independent synchronization

88-102: LGTM - Item type and comparison

The generic item[T] type and byKeys comparator using bytes.Compare correctly implement the required ordering for the B-tree.

store/memstore/typed_memstore.go (4)

9-24: LGTM - Interface assertions and constructor

The compile-time interface assertions at lines 10-11 properly ensure type conformance, and the constructor is clean.


32-40: Type assertion may panic on type mismatch

The unchecked type assertion val.(T) at line 39 will panic if the stored value is not of type T. While this is acceptable for a typed store (mismatched types indicate a programming error), consider whether a checked assertion with a meaningful panic message would improve debuggability.

This is likely acceptable given the "typed" contract, but verify that all usages ensure type consistency.


42-65: LGTM - Delegation methods

The Set, Delete, Commit, Branch, Iterator, and ReverseIterator methods correctly delegate to the underlying MemStore while preserving type safety.


99-108: LGTM - Typed iterator Value method

The Value() method correctly handles nil values and performs type assertion. The same type mismatch consideration applies as noted for Get().

store/types/memstore.go (5)

3-43: LGTM - MemStoreManager interface

The interface is well-documented with clear contracts for branching semantics, concurrent modification detection, and snapshot management. The documentation of panic behavior for concurrent commits is particularly helpful.


45-105: LGTM - Reader/Writer interfaces

Good separation of concerns between MemStoreReader and MemStoreWriter. The documentation clearly explains iterator snapshot semantics and the different commit behaviors for nested vs. top-level branches.


107-151: LGTM - MemStore and MemStoreIterator

The MemStore interface cleanly composes branching with reader/writer capabilities. The iterator interface is complete with proper documentation about resource cleanup via Close().


153-169: LGTM - SnapshotPool interface

Simple and focused interface for height-based snapshot management with limit-based pruning support.


171-238: LGTM - Typed interfaces

The generic TypedMemStore[T] and TypedMemStoreIterator[T] interfaces provide a clean type-safe API. The additional Error() method on the typed iterator (compared to MemStoreIterator) makes sense for surfacing validity state.

store/memstore/memstore.go (7)

10-58: LGTM - Type definitions

The type structure is well-designed:

  • Atomic pointers for root and base enable lock-free COW semantics
  • The parent field in memStore supports nested branching
  • UncommittableMemStore wrapper prevents commits on snapshot branches

64-81: LGTM - Manager initialization

The initialization correctly sets up:

  • root and base pointing to the same tree (for concurrent modification detection)
  • current as a COW copy for accumulating changes
  • A fresh snapshot pool

101-121: LGTM - Branch creation

The branch correctly:

  • Creates a COW copy of the current root for isolation
  • Captures the base for concurrent modification detection during commit
  • Establishes the manager reference for top-level commit coordination

166-219: LGTM - MemStore data operations

The data operations are correctly implemented:

  • Get, Set, Delete delegate to the current B-tree
  • Iterator and ReverseIterator create a snapshot copy to ensure immutability during iteration, which is important for consistency

221-237: LGTM - Nested branch creation

Nested branches correctly:

  • Create a COW copy of the parent's B-tree
  • Set parent for later commit propagation
  • Don't need base/manager since they commit to parent, not to the manager

123-164: LGTM - Manager commit with caveats

The commit logic correctly:

  • Validates non-negative height
  • Uses CAS for atomic root update
  • Creates immutable snapshots with current=nil to prevent commits
  • Updates base for subsequent branch creation

The snapshot manager stored in the pool has snapshotPool: nil which prevents nested snapshot access, which is appropriate.


239-260: Race condition in concurrent L1 commits exists but is documented usage constraint

The race condition described is technically possible: if two L1 branches both call memStore.Commit() before memStoreManager.Commit(), both will pass the manager.base.Load() != b.base check and the second will overwrite manager.current, silently losing the first branch's changes.

However, this violates a documented design assumption. The code explicitly states "current is implemented with the assumption that it is accessed only by a single writer." The intended usage pattern is: batch.Commit() → tree.Commit() → next batch.Commit(). Within this pattern, the concurrent modification detection works correctly—the second batch's commit will panic when it detects the base has changed.

This is not a hidden bug but a documented constraint on single-threaded writer access. The test TestSimpleConcurrentL1BatchCommitPanic validates the correct behavior under proper sequencing. If concurrent L1 branch commits without intervening tree.Commit() calls occur, data loss silently follows—which warrants either enforcing the constraint at the API level or updating the synchronization mechanism.

// replace broken goleveldb
github.com/syndtr/goleveldb => github.com/syndtr/goleveldb v1.0.1-0.20210819022825-2ae1ddf74ef7

github.com/tidwall/btree => github.com/InjectiveLabs/btree v0.0.0-20250402045845-4690b506608a

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if the InjectiveLabs btree fork exists and verify the commit
echo "=== Checking InjectiveLabs/btree repository ==="
git ls-remote https://github.com/InjectiveLabs/btree.git 2>&1 | head -20

echo -e "\n=== Checking for the specific commit ==="
git ls-remote https://github.com/InjectiveLabs/btree.git 4690b506608a 2>&1

echo -e "\n=== Checking available tags ==="
git ls-remote --tags https://github.com/InjectiveLabs/btree.git 2>&1 | tail -20

Repository: InjectiveLabs/cosmos-sdk

Length of output: 295


Use a tagged release of the InjectiveLabs/btree fork for better reproducibility.

The replace directive uses a pseudo-version referencing an untagged commit (4690b506608a), which is the current HEAD of the fork's master branch. The InjectiveLabs/btree fork has no tagged releases. Consider creating a tagged release in the fork instead of relying on an untagged commit reference to ensure reproducible builds and clearer version tracking.

🤖 Prompt for AI Agents
In go.mod around line 228, the replace directive pins github.com/tidwall/btree
to an untagged pseudo-version of the InjectiveLabs fork (commit 4690b506608a);
create a proper tag in the InjectiveLabs/btree fork (e.g., v0.1.0) pointing to
that commit (or the intended commit) and then update go.mod to reference that
tag (or run `go get github.com/InjectiveLabs/btree@v0.1.0`), run `go mod tidy`
to update modules, verify builds, and commit the updated go.mod and go.sum so
consumers use a reproducible tagged release; if tagging the fork is not
possible, document the reason and consider vendoring or submitting the change
upstream instead.

Comment on lines +68 to +83
func (p *snapshotPool) Limit(limit int64) {
if limit <= 0 {
panic("snapshot pool limit must be positive")
}

p.limit = limit
p.list = make([]*snapshotItem, limit)

for i := int64(0); i < limit; i++ {
p.list[i] = &snapshotItem{
mtx: &sync.RWMutex{},
store: nil,
height: 0,
}
}
}
⚠️ Potential issue | 🟠 Major

Limit() is not thread-safe.

Calling Limit() while concurrent Get/Set operations are in progress can cause data races. The pool's list slice is replaced without synchronization, and in-flight operations may access stale or invalid indices.

Consider either:

  1. Using a global mutex to protect Limit() and coordinate with Get/Set
  2. Documenting that Limit() must only be called during initialization before concurrent access
  3. Using sync/atomic for the limit and implementing lock-free resizing
🔎 Proposed fix (option 2 - documentation)
+// Limit sets the maximum number of snapshots to retain.
+// WARNING: This method is NOT thread-safe. It must only be called during
+// initialization before any concurrent Get/Set operations begin.
 func (p *snapshotPool) Limit(limit int64) {
🤖 Prompt for AI Agents
In store/memstore/snapshop_pool.go around lines 68–83, Limit() replaces the
pool's list unsafely causing data races with concurrent Get/Set; fix by
introducing a pool-level mutex (e.g., p.mu) and use it to coordinate resizing:
allocate and initialize the new slice locally without holding the lock, then
acquire the mutex and swap p.list and update p.limit atomically; also update
Get/Set to use the same mutex (RLock/RUnlock for readers) so they cannot access
p.list while it's being swapped.
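A minimal sketch of option 1, a pool-level mutex coordinating Limit with Get/Set. The snapshotItem fields and the height-modulo indexing here are simplified assumptions, not the PR's exact code:

```go
package main

import (
	"fmt"
	"sync"
)

// Simplified stand-ins; the real pool's fields and indexing may differ.
type snapshotItem struct {
	store  any
	height int64
}

type snapshotPool struct {
	mu    sync.RWMutex
	limit int64
	list  []*snapshotItem
}

// Limit resizes the pool under the write lock so concurrent Get/Set can
// never observe a half-swapped list. Allocation happens outside the
// critical section; only the swap is serialized.
func (p *snapshotPool) Limit(limit int64) {
	if limit <= 0 {
		panic("snapshot pool limit must be positive")
	}
	fresh := make([]*snapshotItem, limit)
	for i := range fresh {
		fresh[i] = &snapshotItem{}
	}
	p.mu.Lock()
	p.limit = limit
	p.list = fresh
	p.mu.Unlock()
}

// Set stores a snapshot in its height slot under the write lock.
func (p *snapshotPool) Set(height int64, store any) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.list[height%p.limit] = &snapshotItem{store: store, height: height}
}

// Get returns the snapshot for height, if that exact height is cached.
func (p *snapshotPool) Get(height int64) (any, bool) {
	p.mu.RLock()
	defer p.mu.RUnlock()
	item := p.list[height%p.limit]
	if item == nil || item.height != height {
		return nil, false
	}
	return item.store, true
}

func main() {
	p := &snapshotPool{}
	p.Limit(4)
	p.Set(7, "snapshot-at-7")
	v, ok := p.Get(7)
	fmt.Println(v, ok)
}
```

The trade-off versus option 2 is one lock acquisition per access in exchange for making Limit callable at any time.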

memStoreSnapshot, exists := rs.memStoreManager.GetSnapshotBranch(version)

if !exists {
memStoreSnapshot = memstore.NewUnusableMemstore(version)
I think this means that anything calling ctx.MemStore() at an older height can crash the node, given that the height is controlled by the user via query. Is this intended?

Member Author

Good question, I think a query panic wouldn't crash the node, but the design is still questionable.

return gaskv.NewStore(c.ms.GetKVStore(key), c.gasMeter, c.kvGasConfig)
}

func (c Context) MemStore(key []byte) storetypes.MemStore {
should this be gas metered?

Member Author

I think it's okay to leave out for performance and simplicity reasons.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (4)
baseapp/abci_test.go (1)

426-437: Consider using defer for iterator cleanup to ensure proper resource release on test failures.

If any assertion between iter := typedEkv.Iterator(nil, nil) and iter.Close() fails, the iterator won't be closed, potentially leaking resources.

🔎 Proposed fix
 		iter := typedEkv.Iterator(nil, nil)
+		defer iter.Close()
 		for ; iter.Valid(); iter.Next() {
 			memStoreSnapshot = append(memStoreSnapshot, struct {
 				key   []byte
 				value *cmtproto.Block
 			}{
 				key:   iter.Key(),
 				value: iter.Value(),
 			})
 		}
-		iter.Close()
store/memstore/typed_memstore.go (1)

117-121: Error() creates a new error instance on every call when the iterator is invalid.

Consider caching the error or using a package-level sentinel error for better performance and easier error comparison.

🔎 Proposed improvement
+var errInvalidIterator = errors.New("invalid typedMemStoreIterator")
+
 // Error returns an error if the iterator is invalid
 func (ti *typedMemStoreIterator[T]) Error() error {
 	if !ti.Valid() {
-		return errors.New("invalid typedMemStoreIterator")
+		return errInvalidIterator
 	}
 	return nil
 }
store/types/memstore.go (1)

125-151: MemStoreIterator lacks Error() method while TypedMemStoreIterator has it.

The TypedMemStoreIterator[T] interface includes an Error() error method (line 237), but the base MemStoreIterator interface does not. This inconsistency may cause issues when wrapping iterators or checking for errors uniformly.

Consider adding Error() error to MemStoreIterator for consistency.

🔎 Proposed addition
 	MemStoreIterator interface {
 		// ... existing methods ...
 
 		// Close releases any resources associated with the iterator.
 		// It must be called when done using the iterator.
 		Close() error
+
+		// Error returns an error if the iterator is invalid.
+		// Returns nil if the iterator is valid.
+		Error() error
 	}
store/memstore/internal/btree.go (1)

54-60: Get method uses value receiver while Set/Delete use pointer receivers.

Get at line 54 uses a value receiver (bt BTree) while Set and Delete use pointer receivers (bt *BTree). While this works due to Go's automatic dereferencing, it's inconsistent, and the mixed method set means a plain BTree value (as opposed to *BTree) will not satisfy any interface that includes Set or Delete.

🔎 Proposed fix for consistency
-func (bt BTree) Get(key []byte) any {
+func (bt *BTree) Get(key []byte) any {
 	i, found := bt.tree.Get(newItem[any](key, nil))
 	if !found {
 		return nil
 	}
 	return i.value
 }

-func (bt BTree) Iterator(start, end []byte) (TypedMemIterator[any], error) {
+func (bt *BTree) Iterator(start, end []byte) (TypedMemIterator[any], error) {

-func (bt BTree) ReverseIterator(start, end []byte) (TypedMemIterator[any], error) {
+func (bt *BTree) ReverseIterator(start, end []byte) (TypedMemIterator[any], error) {

-func (bt BTree) Copy() *BTree {
+func (bt *BTree) Copy() *BTree {

-func (bt BTree) Clear() {
+func (bt *BTree) Clear() {
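The receiver distinction above can be demonstrated with a toy struct holding a pointer field; the types here are illustrative, not the PR's BTree:

```go
package main

import "fmt"

type tree struct{ items map[string]int }

type BTree struct{ t *tree }

// Value receiver: bt is a copy, but bt.t points at the shared tree,
// so mutations through the pointer are visible to the original.
func (bt BTree) SetShared(k string, v int) { bt.t.items[k] = v }

// Value receiver reassigning the pointer field: only the copy changes.
func (bt BTree) ResetLost() { bt.t = &tree{items: map[string]int{}} }

// Pointer receiver: the reassignment persists on the original.
func (bt *BTree) ResetKept() { bt.t = &tree{items: map[string]int{}} }

func main() {
	b := BTree{t: &tree{items: map[string]int{}}}
	b.SetShared("a", 1)
	fmt.Println(len(b.t.items)) // 1: shared mutation visible

	b.ResetLost()
	fmt.Println(len(b.t.items)) // 1: reassignment on the copy was lost

	b.ResetKept()
	fmt.Println(len(b.t.items)) // 0: reassignment persisted
}
```

This is why Clear() works today (it mutates through the shared pointer) but would silently break if the method ever needed to replace bt.tree itself.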
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d2e93f0 and 4270312.

📒 Files selected for processing (7)
  • baseapp/abci_test.go
  • store/memstore/internal/btree.go
  • store/memstore/internal/memiterator.go
  • store/memstore/memstore.go
  • store/memstore/typed_memstore.go
  • store/rootmulti/store.go
  • store/types/memstore.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • store/memstore/internal/memiterator.go
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-05-04T02:34:26.648Z
Learnt from: maxim-inj
Repo: InjectiveLabs/cosmos-sdk PR: 55
File: x/bank/keeper/virtual_balances.go:90-105
Timestamp: 2025-05-04T02:34:26.648Z
Learning: The ObjectStore in Cosmos SDK doesn't persist data between blocks. It's implemented in the transient store package (store/transient/store.go) as ObjStore, and its Commit() method clears the internal data structure at the end of each block without writing to persistent storage.

Applied to files:

  • store/rootmulti/store.go
🧬 Code graph analysis (5)
store/types/memstore.go (4)
version/version.go (1)
  • Commit (37-37)
store/types/store.go (1)
  • Iterator (321-321)
store/reexport.go (1)
  • Key (22-22)
client/v2/autocli/flag/interface.go (1)
  • Value (20-23)
store/memstore/typed_memstore.go (1)
store/types/memstore.go (4)
  • TypedMemStore (174-205)
  • TypedMemStoreIterator (210-238)
  • MemStore (112-123)
  • MemStoreIterator (127-151)
baseapp/abci_test.go (4)
store/types/memstore.go (1)
  • MemStore (112-123)
store/memstore/typed_memstore.go (1)
  • NewTypedMemStore (20-24)
store/prefix/memstore.go (2)
  • NewTypedMemStore (18-21)
  • NewMemStore (24-29)
store/memstore/memstore.go (1)
  • NewMemStoreManager (73-89)
store/memstore/memstore.go (3)
store/types/memstore.go (4)
  • MemStoreManager (10-43)
  • MemStore (112-123)
  • SnapshotPool (156-169)
  • MemStoreIterator (127-151)
store/memstore/internal/btree.go (2)
  • BTree (30-33)
  • NewBTree (36-44)
store/memstore/snapshop_pool.go (1)
  • SnapshotPool (15-21)
store/memstore/internal/btree.go (2)
store/types/store.go (1)
  • Iterator (321-321)
store/memstore/internal/types.go (1)
  • TypedMemIterator (3-10)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (16)
  • GitHub Check: tests (02)
  • GitHub Check: tests (01)
  • GitHub Check: tests (00)
  • GitHub Check: tests (03)
  • GitHub Check: test-simapp
  • GitHub Check: liveness-test
  • GitHub Check: test-sim-nondeterminism
  • GitHub Check: test-e2e
  • GitHub Check: test-integration
  • GitHub Check: build (amd64)
  • GitHub Check: build (arm64)
  • GitHub Check: build (arm)
  • GitHub Check: Gosec
  • GitHub Check: Analyze
  • GitHub Check: dependency-review
  • GitHub Check: golangci-lint
🔇 Additional comments (10)
baseapp/abci_test.go (1)

221-314: Test coverage for MemStore cache context lifecycle looks solid.

The test properly validates:

  • Prefixed MemStore operations within EndBlocker
  • Dropped vs committed cache contexts (cacheCtx discarded, cacheCtx2 committed)
  • Post-commit state verification showing data properly isolated and committed

The assertions correctly verify that block-2 (from discarded cache) is nil while block-1 and block-3 are present after commit.

store/rootmulti/store.go (3)

553-563: Commit defer ordering relative to the flushMetadata defer - verify intent.

Go runs defers in LIFO order, so the defer at line 553 executes before the defer rs.flushMetadata(...) at line 551. However, if the memstore commit panics (e.g., concurrent modification at line 561), the flushMetadata defer still runs while the panic unwinds, so metadata would be written even though memstore state is inconsistent.

Consider whether the memstore operations should be part of the main commit flow (before defer) or if this ordering is intentional for your consistency model.
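The two Go properties this question hinges on, defers run LIFO and they still run while a panic unwinds, can be checked with a small sketch; commitFlow and the step names are hypothetical analogues, not the PR's code:

```go
package main

import "fmt"

// commitFlow is a hypothetical analogue of the Commit method: the
// first-registered defer stands in for flushMetadata, the
// second-registered one for the memstore commit.
func commitFlow() (steps []string, err error) {
	defer func() { // registered first: runs last, even during a panic
		steps = append(steps, "flushMetadata")
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered: %v", r)
		}
	}()
	defer func() { // registered second: runs first (LIFO)
		steps = append(steps, "memstore commit")
		panic("concurrent modification")
	}()
	return steps, nil
}

func main() {
	steps, err := commitFlow()
	fmt.Println(steps, err) // [memstore commit flushMetadata] recovered: concurrent modification
}
```

So regardless of registration order, a panicking memstore commit cannot prevent the metadata flush; only pulling the flush out of a defer and into the main flow would do that.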


646-658: LGTM - CacheMultiStore properly branches from pre-isolated memstore.

The nil guard and the branching pattern ensure each CacheMultiStore gets an isolated view of the memstore state. This correctly implements copy-on-write semantics.


724-737: Snapshot branch fallback uses unusable memstore when height not found.

When a snapshot doesn't exist for the requested version, NewUnusableMemstore(version) is returned. Based on the past review discussion, this can panic if query code attempts write operations. This is acceptable for read-only query contexts, but ensure callers are aware of this limitation.

store/memstore/typed_memstore.go (1)

32-40: Type assertion will panic on type mismatch - ensure consistent usage.

The type assertion val.(T) at line 39 will panic if the stored value is not of type T. This is acceptable for a typed store where callers are expected to use consistent types, but it means mixing types under the same prefix would cause runtime panics.
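If panics on mismatch are ever undesirable, a comma-ok assertion turns a type mismatch into a cache miss instead. A minimal sketch with an assumed map-backed store, not the PR's TypedMemStore API:

```go
package main

import "fmt"

// rawStore stands in for an untyped memstore; illustrative only.
type rawStore map[string]any

type TypedStore[T any] struct{ raw rawStore }

func (s TypedStore[T]) Set(key string, v T) { s.raw[key] = v }

// Get uses a comma-ok assertion, so a value of the wrong type
// is reported as a miss rather than panicking.
func (s TypedStore[T]) Get(key string) (T, bool) {
	var zero T
	v, ok := s.raw[key]
	if !ok {
		return zero, false
	}
	t, ok := v.(T) // comma-ok: no panic on mismatch
	return t, ok
}

func main() {
	raw := rawStore{}
	ints := TypedStore[int]{raw: raw}
	strs := TypedStore[string]{raw: raw}

	ints.Set("k", 42)
	if v, ok := ints.Get("k"); ok {
		fmt.Println(v) // 42
	}
	if _, ok := strs.Get("k"); !ok {
		fmt.Println("type mismatch detected, no panic")
	}
}
```

The trade-off is that a miss is ambiguous between "absent" and "wrong type", whereas the panicking variant surfaces prefix-sharing bugs immediately.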

store/types/memstore.go (1)

1-43: Well-documented interface contracts for MemStoreManager.

The documentation clearly explains:

  • CoW snapshot semantics for branching
  • Single-commit guarantee with concurrent modification detection
  • Height validation requirements

The interfaces provide a solid foundation for the memstore abstraction.

store/memstore/memstore.go (3)

60-70: Good - UncommittableMemStore now properly panics on Set/Delete operations.

This addresses the past review comment about allowing Set/Delete while panicking only on Commit. The behavior is now consistent - all mutating operations panic.


153-158: CompareAndSwap pattern may have subtle race with concurrent Branch calls.

The CAS at line 154 uses t.base.Load() as the expected value. However, between t.base.Load() being called and the CAS completing, another goroutine could call Branch() which also loads from t.root. This is likely intentional for the design (branches get a snapshot), but the t.base.Store(current) at line 158 updates base after CAS, which could affect in-flight Branch calls.

Verify this is the intended behavior for concurrent Branch + Commit scenarios.


250-268: memStore.Commit correctly propagates changes to parent or manager.

The logic properly handles:

  • Nested branches: updates parent's current pointer
  • Top-level branches: validates base hasn't changed before updating manager

The unreachable panic at line 267 is a good defensive measure.

store/memstore/internal/btree.go (1)

19-33: Thread safety documentation is clear and addresses the design intent.

The documentation properly explains:

  • BTree is NOT thread-safe for concurrent mutations
  • Single goroutine access per instance is enforced via isolated branches
  • copyLock only serializes Copy() calls

This addresses the past review comment about documenting the thread-safety invariant.

@kakysha kakysha left a comment

  • Was it tested with -race in the context of multiple reading goroutines? We added copyLock inside internal/btree, so I assume it should work, but we need to test that.

}

if other.memStore != nil {
other.memStore.Commit()
Not sure if this is what we actually want / need to do here? Restore(other) means we are "resetting" our current cms to the state of the other store, but other.memStore.Commit() will commit to other.memStore.parent, and that is not always the current cms.memStore?

Member Author

Can you explain when other.memStore.parent would not be from cms? But I assume you just mean it's theoretically possible, because in practice you would always call it from where Clone was called.

So are you fine if I just add a panic for the case it doesn't match?

Something like that, yes. I haven’t checked all the call sites, but I know this mechanism is used in EVM restore from snapshot logic. Maybe in there we can end up in a situation when snapshot has a store not directly branched from current cms?

The only relevant use case for this inside the EVM I found is reverting native changes to a snapshot. And since snapshots are taken via cachems.Clone(), which is implemented a couple of lines above via memstore.Branch(), it must be safe, since all snapshots are effectively branches from the current cms memstore.


snapshotItem struct {
mtx *sync.RWMutex
store types.MemStoreManager
Why do we store managers in the pool instead of memstores? It provokes unnecessary Branch() calls on GetSnapshotBranch() with tree copying, even though the snapshots are read-only?

Member Author

@gorgos gorgos Jan 28, 2026

It's an uncommittable manager. But yeah, storing the memstore directly may still be better; you still need to copy it anyway. Will add this refactor.

Edit: Actually, maybe it's possible without the Copy, to be honest. I don't actually see why it's needed; going to verify with Jesse...


// Get retrieves a value for the given key from the current branch.
func (b *memStore) Get(key []byte) any {
return b.current.Get(key)
BIG note: we don't return a copy of the data; we return a pointer to the data stored in the memstore. DO NOT MODIFY it directly.

Member Author

what are you proposing here?

defer func() {
height := rs.lastCommitInfo.Version
current := rs.memStore.Load()
(*current).Commit()
why de-referencing?

Member Author

That's what Load returns.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@store/memstore/internal/btree.go`:
- Around line 88-90: Clear() is defined with a value receiver (func (bt BTree)
Clear()) which prevents modifications to the original BTree when the underlying
tree reference might be replaced and is inconsistent with Set and Delete; change
the receiver to a pointer (func (bt *BTree) Clear()) so the method operates on
the original BTree instance and any reassignment of the tree field (bt.tree)
will persist, mirroring how Set and Delete use pointer receivers.

In `@store/memstore/snapshop_pool.go`:
- Line 1: The file is misnamed "snapshop_pool.go"; rename it to
"snapshot_pool.go" to fix the typo and ensure consistent imports and tooling;
update any references or imports if they rely on the filename (e.g., CI scripts,
build targets) and run `go vet`/`go test` to verify package memstore still
compiles with the renamed source file.
🧹 Nitpick comments (3)
store/memstore/internal/btree.go (1)

44-50: Inconsistent empty key validation between mutation and iteration methods.

Set and Delete do not validate empty keys, but Iterator and ReverseIterator return errKeyEmpty for empty (non-nil, zero-length) keys. This inconsistency could lead to data being stored with empty keys that cannot be iterated over.

Consider adding the same validation to Set and Delete for consistency, or document that empty keys are valid for storage but not for iteration bounds.

🔧 Optional: Add empty key validation to Set
 func (bt *BTree) Set(key []byte, value any) {
+	if key != nil && len(key) == 0 {
+		panic("key cannot be empty")
+	}
 	bt.tree.Set(newItem(key, value))
 }
store/memstore/internal/memiterator.go (1)

70-75: Error() semantics are unconventional.

The Error() method returns an error when the iterator is invalid, rather than for actual operational errors. This differs from typical Go iterator patterns where Error() reports errors encountered during iteration. The current implementation is used internally by assertValid() to trigger panics.

This is functional but may surprise callers expecting standard iterator error semantics.

store/memstore/manager.go (1)

111-124: Line 123 (t.current = current) appears redundant.

At line 112, current := t.current copies the reference. After the CAS succeeds, line 123 assigns t.current = current, which sets it to the same value it already held. This line has no effect.

If the intent was to prepare for the next block cycle, the assignment would need to be a new copy. However, looking at the overall flow, t.current is only updated externally when an L1 memStore commits, so this line seems unnecessary.

🔧 Consider removing redundant assignment
 	t.base.Store(current)
 
-	t.current = current
 	t.snapshotPool.Set(height, current.Copy())
 }

Comment on lines +88 to +90
func (bt BTree) Clear() {
bt.tree.Clear()
}
⚠️ Potential issue | 🟡 Minor

Clear() uses value receiver - changes won't affect the original BTree.

The Clear() method uses a value receiver (bt BTree) instead of a pointer receiver (bt *BTree). Since BTree contains a pointer to the underlying tree, the Clear() call will work on the tree data, but if the underlying btree implementation replaces the tree reference, changes may not propagate correctly. More importantly, this is inconsistent with Set and Delete which use pointer receivers.

🔧 Suggested fix
-func (bt BTree) Clear() {
+func (bt *BTree) Clear() {
 	bt.tree.Clear()
 }
🤖 Prompt for AI Agents
In `@store/memstore/internal/btree.go` around lines 88 - 90, Clear() is defined
with a value receiver (func (bt BTree) Clear()) which prevents modifications to
the original BTree when the underlying tree reference might be replaced and is
inconsistent with Set and Delete; change the receiver to a pointer (func (bt
*BTree) Clear()) so the method operates on the original BTree instance and any
reassignment of the tree field (bt.tree) will persist, mirroring how Set and
Delete use pointer receivers.

@@ -0,0 +1,85 @@
package memstore
⚠️ Potential issue | 🟡 Minor

Filename typo: "snapshop_pool.go" should be "snapshot_pool.go".

The filename contains a typo - "snapshop" instead of "snapshot".

🤖 Prompt for AI Agents
In `@store/memstore/snapshop_pool.go` at line 1, The file is misnamed
"snapshop_pool.go"; rename it to "snapshot_pool.go" to fix the typo and ensure
consistent imports and tooling; update any references or imports if they rely on
the filename (e.g., CI scripts, build targets) and run `go vet`/`go test` to
verify package memstore still compiles with the renamed source file.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
store/cachemulti/store.go (1)

132-142: Add nil guards in Write() and CacheMultiStore() for consistency and robustness.

The memStore field can be nil, as evidenced by defensive nil checks in Clone() and Restore() methods. However, Write() (line 137) calls cms.memStore.Commit() and CacheMultiStore() (line 167) calls cms.memStore.Branch() without nil checks, risking panics if a nil memStore reaches these code paths. While the current rootmulti caller enforces non-nil invariants via checks before instantiation, direct construction or legacy call paths could bypass this protection.

Proposed fix (nil-safe memStore handling)
 func (cms Store) Write() {
 	if cms.branched {
 		panic("cannot Write on branched store")
 	}
 	for _, store := range cms.stores {
 		store.Write()
 	}
-
-	cms.memStore.Commit()
+	if cms.memStore != nil {
+		cms.memStore.Commit()
+	}
 }

 func (cms Store) CacheMultiStore() types.CacheMultiStore {
 	// Important: create an isolated MemStore branch for nested cache contexts.
 	// Reusing the same memStore instance causes writes in a dropped CacheContext
 	// to leak into the parent context. Branching ensures writes are only
 	// propagated when Write() is called on the child cache store.
-	return NewFromParent(cms.getCacheWrapper, cms.traceWriter, cms.traceContext, cms.memStore.Branch())
+	var memStore types.MemStore
+	if cms.memStore != nil {
+		memStore = cms.memStore.Branch()
+	}
+	return NewFromParent(cms.getCacheWrapper, cms.traceWriter, cms.traceContext, memStore)
 }

@gorgos gorgos requested a review from kakysha January 28, 2026 20:32
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@store/memstore/manager.go`:
- Around line 74-94: Branch() currently loads t.root and t.base separately so
current := root.Copy() can end up out-of-sync with base if Commit() swaps root
concurrently; change the logic so the branch's base always refers to the same
snapshot used to build current—e.g., after doing root := t.root.Load() and
current := root.Copy(), set base = root (instead of calling t.base.Load()), or
implement a retry loop that reloads root/base until they are consistent; update
the memStore returned to use that base, and keep references to memStoreManager,
Branch(), t.root.Load(), t.base.Load(), current := root.Copy(), and Commit()
when making the change.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 52d7cff and e052b7e.

📒 Files selected for processing (1)
  • store/memstore/manager.go

Comment on lines +74 to +94
// Branch creates a top-level branch.
// It creates a copy-on-write snapshot of the tree's root btree as its working copy.
func (t *memStoreManager) Branch() types.MemStore {
root := t.root.Load()
// Create a copy-on-write snapshot for the current
current := root.Copy()

var base *btree
if t.base != nil {
base = t.base.Load()
}

return &memStore{
// This is a top-level branch, so parent is nil
parent: nil,

current: current,
base: base,
manager: t,
}
}
⚠️ Potential issue | 🟠 Major

Ensure Branch() captures a consistent root/base snapshot under concurrent Commit().

On Line 77-84, root and base are loaded separately. If Commit() swaps root between those loads, current can be built from the old root while base points at the new one, breaking the invariant that base matches the snapshot used to build current. That can allow a stale branch to pass commit checks or panic unexpectedly. Consider retrying until root == base (or using base = root) before copying.

🔧 Proposed fix (retry until root/base are consistent)
 func (t *memStoreManager) Branch() types.MemStore {
-	root := t.root.Load()
-	// Create a copy-on-write snapshot for the current
-	current := root.Copy()
-
-	var base *btree
-	if t.base != nil {
-		base = t.base.Load()
-	}
+	var root, base *btree
+	for {
+		root = t.root.Load()
+		if t.base == nil {
+			base = root
+			break
+		}
+		base = t.base.Load()
+		if base == root {
+			break
+		}
+		// commit in flight; retry for a consistent snapshot
+	}
+
+	// Create a copy-on-write snapshot for the current
+	current := root.Copy()
 
 	return &memStore{
 		// This is a top-level branch, so parent is nil
 		parent: nil,
 
 		current: current,
 		base:    base,
 		manager: t,
 	}
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@store/memstore/manager.go` around lines 74 - 94, Branch() currently loads
t.root and t.base separately so current := root.Copy() can end up out-of-sync
with base if Commit() swaps root concurrently; change the logic so the branch's
base always refers to the same snapshot used to build current—e.g., after doing
root := t.root.Load() and current := root.Copy(), set base = root (instead of
calling t.base.Load()), or implement a retry loop that reloads root/base until
they are consistent; update the memStore returned to use that base, and keep
references to memStoreManager, Branch(), t.root.Load(), t.base.Load(), current
:= root.Copy(), and Commit() when making the change.
