Commit 14ffbc3

fafk and MartinquaXD committed
Optimize batching for CoinGecko (#4215)
# Description

I ran a bunch of experiments whose goal was to make CoinGecko batches bigger (fetch prices for more tokens at once) to save on costs. In a previous experiment I increased `concurrent_requests` on the `CachingNativePriceEstimator` to 500. That worked, and CoinGecko debounces all these requests nicely, but it also means we would spam solvers with a lot of requests at once when we start the service.

The first issue at hand is that before this PR we would create futures that hit the cache and return immediately. This was bad because they take up a slot in the queue when using `buffered()`, so fewer useful futures that actually issue a native price request get to run. I changed this to `buffer_unordered()` and that helped a little, but then the queue fills up, and after that new futures only get in as previous futures finish. Execution is therefore spread out in time, and our CoinGecko debouncing can't gather enough tokens to form a full batch.

This PR introduces a solution where we omit non-expired tokens altogether, which solves the issue of cache-hitting futures taking up a spot in the queue, and we simply run batches of 19 sequentially. We wait for each batch to finish (max ~3s, limited by `QUERY_TIMEOUT`) and only then issue a new batch. As a tradeoff we get a slower cache warm-up/refresh, but we save a lot of money on CoinGecko. In addition to this change, I raised the refresh rate from 1s to 30s so we can gather more expired tokens at once.

<img width="2103" height="480" alt="Screenshot 2026-03-02 at 11 30 33" src="https://github.com/user-attachments/assets/190bbefe-ac26-4837-b2c2-05a7c63ebbcb" />

# Changes

* [x] Don't try to fetch non-expired prices
* [x] Run batches sequentially
* [x] `NATIVE_PRICE_CACHE_REFRESH` set to 30s instead of 1s in the service config

---------

Co-authored-by: Martin Magnus <martin.beckmann@protonmail.com>
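The pre-filter-then-chunk strategy described above can be sketched in isolation. This is a minimal, synchronous sketch: the token ids, the `expired_batches` helper, and the age map standing in for the real `Cache` are all hypothetical, and the real updater awaits each chunk's price requests rather than just collecting it.

```rust
use std::{collections::HashMap, time::Duration};

/// Returns the tokens whose cached price is missing or older than `max_age`,
/// split into sequential batches of at most `chunk_size` tokens.
fn expired_batches(
    tokens: &[u32],
    ages: &HashMap<u32, Duration>, // token -> age of its cache entry
    max_age: Duration,
    chunk_size: usize,
) -> Vec<Vec<u32>> {
    // Pre-filter: a token is expired if it has no cache entry or the
    // entry is older than `max_age`. Fresh tokens never enter the queue,
    // so they can't take up slots that useful requests need.
    let expired: Vec<u32> = tokens
        .iter()
        .copied()
        .filter(|t| ages.get(t).map_or(true, |age| *age > max_age))
        .collect();
    // Chunk the expired tokens. The real updater awaits each chunk before
    // issuing the next, so all tokens of a chunk arrive at the debouncer
    // together and form one full CoinGecko batch.
    expired.chunks(chunk_size).map(|c| c.to_vec()).collect()
}

fn main() {
    let max_age = Duration::from_secs(30);
    let mut ages = HashMap::new();
    ages.insert(1, Duration::from_secs(5)); // fresh -> skipped
    ages.insert(2, Duration::from_secs(60)); // expired -> refetched
    // token 3 has no cache entry -> refetched

    let batches = expired_batches(&[1, 2, 3], &ages, max_age, 19);
    println!("{batches:?}"); // two expired tokens fit in one batch of 19
}
```

With a CoinGecko batch limit of 19 and a 30s refresh interval, each pass produces a handful of full batches instead of a steady trickle of small ones.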
1 parent cb58b60 commit 14ffbc3

1 file changed

Lines changed: 32 additions & 7 deletions

File tree

crates/shared/src/price_estimation/native_price_cache.rs

```diff
@@ -485,15 +485,40 @@ impl NativePriceUpdater {
         let max_age = cache.max_age().saturating_sub(prefetch_time);
         let timeout = self.estimator.0.quote_timeout;
-        self.estimator
-            .estimate_prices_and_update_cache(tokens_to_update.iter().copied(), max_age, timeout)
-            // Drive the stream to completion. Results are written to the cache as
-            // a side effect, so we don't need to inspect them here.
-            .for_each(|_| async {})
-            .await;
+
+        // Pre-filter to only tokens whose cache entries have expired.
+        let now = Instant::now();
+        let expired_tokens: Vec<_> = tokens_to_update
+            .iter()
+            .copied()
+            .filter(|token| Cache::get_cached_price(*token, now, &cache.0.data, &max_age).is_none())
+            .collect();
+
+        // Process expired tokens in chunks, waiting for each chunk to complete
+        // before starting the next. This ensures all tokens in a chunk reach
+        // the BufferedRequest channel simultaneously, producing full CoinGecko
+        // API batches instead of trickling tokens in one by one. Normally it's
+        // better to pass the whole vector to `estimate_prices_and_update_cache`
+        // and let it handle the chunking, but in this case we want to ensure
+        // that all tokens go in in chunks of 19 to fetch as many tokens from
+        // CoinGecko as is allowed. We debounce requests to CoinGecko to happen
+        // once every 100ms to build the batches; if we use `buffer_unordered`
+        // deeper in the stack, execution happens faster but we build more,
+        // smaller batches to send to CoinGecko. This happens because we also
+        // use estimations from solvers, which vary in time, so as requests in
+        // the `buffer_unordered` stream finish, new ones start immediately,
+        // tokens "trickle in", and we build smaller batches.
+        let chunk_size = self.estimator.0.concurrent_requests;
+        for chunk in expired_tokens.chunks(chunk_size) {
+            self.estimator
+                .estimate_prices_and_update_cache(chunk.iter().copied(), max_age, timeout)
+                .for_each(|_| async {})
+                .await;
+        }
 
         metrics
             .native_price_cache_background_updates
-            .inc_by(tokens_to_update.len() as u64);
+            .inc_by(expired_tokens.len() as u64);
     }
 }
```
0 commit comments
