CodeRabbit review
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/src/main.rs`:
- Around line 433-449: tokens_in_radius currently uses a non-strict boundary
check (distance_from_center(...) <= radius) which is inconsistent with other
object queries that use a strict check (< radius); update the filter in
tokens_in_radius to use a strict comparison (< radius) so Token objects are
excluded when exactly on the radius boundary and behavior matches the other
functions (match the check used elsewhere like in the ship/pellet/asteria
filters), leaving the rest of the pipeline (Token::try_from, Token.amount > 0,
mapping to PositionalInterface::Token) unchanged.
- Around line 672-680: The current code maps each user-supplied tokens entry
into an unbounded set of futures and calls join_all (see the tokens variable,
tokens_in_radius function, and join_all usage), which can trigger
Pagination::all() for every token concurrently; instead either truncate the
input (e.g., limit tokens.iter().take(MAX_TOKENS)) or replace join_all with a
bounded concurrency pattern such as converting the futures vector into a stream
and using buffer_unordered(CONCURRENCY_LIMIT) (or FuturesUnordered with a
semaphore) to limit how many tokens_in_radius calls run at once; add a
configurable MAX_TOKENS or CONCURRENCY_LIMIT constant and apply it where the
tokens are mapped before awaiting results.
```rust
async fn tokens_in_radius(
    api: &BlockfrostAPI,
    pellet_address: &str,
    token: &TokenInput,
    radius: i32,
    center: &PositionInput,
) -> Result<Vec<PositionalInterface>, Error> {
    Ok(fetch_utxos_by_policy(api, pellet_address, &token.policy_id)
        .await?
        .into_iter()
        .map(|utxo| Token::try_from((token.clone(), utxo)))
        .collect::<Result<Vec<Token>, Error>>()?
        .into_iter()
        .filter(|token| distance_from_center(token.position.x, token.position.y, center) <= radius)
        .filter(|token| token.amount > 0)
        .map(PositionalInterface::Token)
        .collect())
}
```
Align the radius boundary check across object types.
Line 446 uses <= radius, while Lines 378, 401, and 424 use a strict < radius. Tokens on the boundary are therefore returned while ships, pellets, and Asteria at the same distance are not.
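The suggested fix is a one-character change (`<=` to `<`). The behavioral difference can be illustrated with a minimal stand-alone sketch; `Position` and the integer distance metric here are simplified assumptions, not the real `main.rs` types:

```rust
// Simplified stand-ins for the types in main.rs (assumptions, not the real definitions).
struct Position { x: i32, y: i32 }

// Assumed metric: Euclidean distance truncated to i32, mirroring the i32 radius
// in the real signature.
fn distance_from_center(x: i32, y: i32, center: &Position) -> i32 {
    let (dx, dy) = ((x - center.x) as f64, (y - center.y) as f64);
    (dx * dx + dy * dy).sqrt() as i32
}

fn in_radius(points: Vec<Position>, center: &Position, radius: i32) -> Vec<Position> {
    points
        .into_iter()
        // Strict `<` matches the ship/pellet/asteria filters: a point exactly
        // on the boundary is excluded.
        .filter(|p| distance_from_center(p.x, p.y, center) < radius)
        .collect()
}

fn main() {
    let center = Position { x: 0, y: 0 };
    let pts = vec![
        Position { x: 3, y: 0 }, // distance 3: inside
        Position { x: 5, y: 0 }, // distance 5: exactly on the boundary
    ];
    let kept = in_radius(pts, &center, 5);
    assert_eq!(kept.len(), 1); // the boundary point is excluded under strict `<`
}
```

With `<= radius` the boundary point would be kept, which is the inconsistency the comment flags.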
```rust
async {
    match tokens.as_ref().map(|tokens| {
        tokens
            .iter()
            .map(|token| tokens_in_radius(api, &pellet_address, token, radius, &center))
            .collect::<Vec<_>>()
    }) {
        Some(futs) => join_all(futs).await,
        None => Vec::new(),
    }
}
```
Bound the token fan-out before calling Blockfrost.
This branch turns a user-supplied tokens array into one Pagination::all() upstream scan per entry, all in flight at once. A large request can burn rate limits and amplify latency for the whole resolver. Please cap the list size or switch to bounded concurrency instead of join_all.