⚡ Bolt: Stream indexing results with concurrent task pool#405
AhmmedSamier wants to merge 1 commit into master from
Conversation
…e blocking Co-authored-by: AhmmedSamier <17784876+AhmmedSamier@users.noreply.github.com>
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job! For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with
New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task.
📝 Walkthrough
The indexing worker refactored its file processing model from sequential batch-chunked iteration to a concurrency-limited task pool using `pLimit`.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 5 passed
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@language-server/src/core/indexer-worker.ts`, around lines 40-68: the current eager `filePaths.map(limit(...))` approach should be replaced with a pull-based runner pool to bound memory and fix the `any[]` type. Change `batchItems` from `any[]` to the parser item type (e.g., `SymbolInfo[]` or the exact type returned by `parser.parseFile`) and remove the map/limit pattern. Instead, spawn `N = Math.min(CONCURRENCY, filePaths.length)` async runners that each loop: take `idx = nextIndex++`, break when `idx >= filePaths.length`, parse the file with `parser.parseFile(filePaths[idx])`, push its items into `batchItems`, and ignore parse errors; in a `finally` block increment `totalCompleted` and `batchProcessedFiles`, and whenever `batchProcessedFiles >= BATCH_SIZE` or `totalCompleted === filePaths.length`, post `{ type: 'result', items: batchItems, count: batchProcessedFiles, isPartial: totalCompleted < filePaths.length }` to `parentPort` and reset `batchItems` and `batchProcessedFiles`. Finally, `await Promise.all(runners)` to finish. Keep references to `parser.parseFile`, `batchItems`, `totalCompleted`, `batchProcessedFiles`, `CONCURRENCY`, and `BATCH_SIZE` to locate and update the code.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 6da07f1a-7313-4072-bb7d-f2920bef085c
📒 Files selected for processing (1)
language-server/src/core/indexer-worker.ts
```diff
 await Promise.all(
   filePaths.map((filePath) =>
     limit(async () => {
       try {
-        return await parser.parseFile(filePath);
+        const items = await parser.parseFile(filePath);
+        // ⚡ Bolt: Fast manual loop pushing to avoid Maximum Call Stack Size Exceeded
+        for (let i = 0; i < items.length; i++) {
+          batchItems.push(items[i]);
+        }
       } catch {
-        return [];
+        // Ignore
       } finally {
+        totalCompleted++;
+        batchProcessedFiles++;
+
+        if (batchProcessedFiles >= BATCH_SIZE || totalCompleted === filePaths.length) {
+          parentPort?.postMessage({
+            type: 'result',
+            items: batchItems,
+            count: batchProcessedFiles,
+            isPartial: totalCompleted < filePaths.length,
+          });
+          batchItems = [];
+          batchProcessedFiles = 0;
+        }
       }
     }),
   ),
 );
-
-    const chunkItems = results.flat();
-
-    // Send back chunk result immediately to keep main thread unblocked but processing
-    parentPort?.postMessage({
-      type: 'result',
-      items: chunkItems,
-      count: chunk.length,
-      isPartial: i + BATCH_SIZE < filePaths.length,
-    });
-  }
-),
-);
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
# Verify how many files can be sent to one worker task at a time.
rg -n -C4 '\bassignWorkerTask\s*\(|\bgetWorkerChunkSize\s*\(|filePaths:\s*batchFiles' language-server/src/core/workspace-indexer.ts
```

Repository: AhmmedSamier/DeepLens — Length of output: 1818

🏁 Script executed:

```bash
# Check the actual type declarations in indexer-worker.ts, especially around batchItems
sed -n '1,70p' language-server/src/core/indexer-worker.ts
```

Repository: AhmmedSamier/DeepLens — Length of output: 2624

🏁 Script executed:

```bash
# Get the complete getWorkerChunkSize implementation
sed -n '752,770p' language-server/src/core/workspace-indexer.ts
```

Repository: AhmmedSamier/DeepLens — Length of output: 695

🏁 Script executed:

```bash
# Check CONCURRENCY constant definition
rg -n 'CONCURRENCY|const.*Concurrency' language-server/src/core/indexer-worker.ts
```

Repository: AhmmedSamier/DeepLens — Length of output: 146
Replace the eager map/limit pattern with a pull-based runner pool to bound memory allocation.

The code violates two guidelines:

- Type safety: Line 39 uses `any[]` for `batchItems`. This removes type safety from the worker-to-parent message contract and contradicts the guideline "Ensure no new `any` types are introduced during development". Use an explicit type (e.g., `SymbolInfo[]` or the appropriate item type from the parser).
- Memory efficiency: `filePaths.map(limit(...))` creates a closure and promise for every file upfront, even though only `CONCURRENCY` (15) tasks run concurrently. When a worker receives 200+ files (e.g., `getWorkerChunkSize` returns 60 for batches ≥ 200), this allocates ~200 closures that remain queued in memory. A pull-based model that spawns only 15 runners pulling files one by one keeps queue size O(CONCURRENCY) and reduces GC pressure in this hot path.

The suggested implementation spawns exactly `Math.min(CONCURRENCY, filePaths.length)` runners, each polling the next file index until exhausted, rather than eagerly mapping all files.
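The pull-based pool the review proposes can be sketched as follows. This is a hypothetical illustration, not the actual DeepLens code: `Item`, `parseFile`, the `emit` callback, and the two constants stand in for the worker's real types, parser, and `parentPort` messaging.

```typescript
// Sketch of the suggested pull-based runner pool. Item and parseFile
// are stand-ins for the worker's real parser types and dependencies.
type Item = { file: string; symbol: string };

const CONCURRENCY = 4;
const BATCH_SIZE = 10;

async function parseFile(path: string): Promise<Item[]> {
  // Fabricated: the real worker extracts symbols from the file.
  return [{ file: path, symbol: `sym:${path}` }];
}

async function indexAll(
  filePaths: string[],
  emit: (items: Item[], count: number, isPartial: boolean) => void,
): Promise<void> {
  let nextIndex = 0;
  let totalCompleted = 0;
  let batchItems: Item[] = []; // typed, not any[]
  let batchProcessedFiles = 0;

  // Only min(CONCURRENCY, n) runners exist; each pulls the next file
  // index, so at most CONCURRENCY closures are alive at any moment.
  const runners = Array.from(
    { length: Math.min(CONCURRENCY, filePaths.length) },
    async () => {
      while (true) {
        const idx = nextIndex++;
        if (idx >= filePaths.length) break;
        try {
          const items = await parseFile(filePaths[idx]);
          for (let i = 0; i < items.length; i++) batchItems.push(items[i]);
        } catch {
          // Ignore unparseable files, as the original worker does.
        } finally {
          totalCompleted++;
          batchProcessedFiles++;
          if (
            batchProcessedFiles >= BATCH_SIZE ||
            totalCompleted === filePaths.length
          ) {
            emit(batchItems, batchProcessedFiles, totalCompleted < filePaths.length);
            batchItems = [];
            batchProcessedFiles = 0;
          }
        }
      }
    },
  );
  await Promise.all(runners);
}
```

In the real worker, `emit` would be `parentPort?.postMessage({ type: 'result', ... })`; it is a plain callback here so the sketch runs standalone.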
💡 What:
Replaced the `for` loop and chunk-based `Promise.all` logic in `indexer-worker.ts` with a continuous `pLimit` concurrency task pool. It now tracks `totalCompleted` across all files and maintains a bounded batch state array, flushing it whenever the batch limit is reached or when all files are completed.

🎯 Why:
The chunk-based `Promise.all` approach suffers from head-of-line blocking: if a single file in a 25-file chunk is extremely large and slow to parse, the other 24 threads/results sit idle until that specific chunk resolves. This delays the streaming of early parsed results back to the parent thread. Using a `pLimit` pool keeps the maximum number of concurrent tasks running constantly and streams batches back to the parent without chunk-based bottlenecks.

📊 Impact:
Significantly reduces initial indexing latency and provides smoother UI progress indicators by streamlining result yields, maximizing concurrent throughput instead of bottlenecking on the slowest file per chunk.
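The `pLimit` pool pattern the PR relies on can be approximated with a minimal limiter. This is a hypothetical sketch of the pattern, not the `p-limit` package itself: wrapped tasks start immediately while fewer than `concurrency` are running, and otherwise wait in a queue until a slot frees up.

```typescript
// Minimal concurrency limiter in the style of p-limit (illustrative
// sketch, not the real package). At most `concurrency` wrapped tasks
// run at once; the rest are queued and started as slots free up.
function pLimit(concurrency: number) {
  let active = 0;
  const queue: Array<() => void> = [];

  // Called when a task settles: free the slot and start the next task.
  const next = () => {
    active--;
    queue.shift()?.();
  };

  return function limit<T>(fn: () => Promise<T>): Promise<T> {
    return new Promise<T>((resolve, reject) => {
      const run = () => {
        active++;
        fn().then(resolve, reject).finally(next);
      };
      if (active < concurrency) {
        run();
      } else {
        queue.push(run);
      }
    });
  };
}
```

Because each finished task immediately admits the next queued one, a single slow file only occupies one slot while the other slots keep draining the queue, which is exactly the head-of-line-blocking fix described above.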
🔬 Measurement:
Run the workspace indexer test suite (`cd language-server && bun test`) and verify the indexing tests complete correctly without hanging. In actual usage, tracking the rate at which `result` events fire back to the parent thread compared to the previous chunked execution demonstrates faster time-to-first-yield.

PR created automatically by Jules for task 985315326601958411 started by @AhmmedSamier
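The time-to-first-yield claim can be sanity-checked with a self-contained simulation. This is a hypothetical harness with fabricated delays, not the DeepLens test suite: one artificially slow file holds back the whole chunk under the chunked model, while the pooled model flushes its first batch as soon as enough fast files finish.

```typescript
// Fabricated timing harness contrasting chunked vs. pooled streaming.
const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// One slow "file" (150 ms) among seven fast ones (5 ms each).
const files = Array.from({ length: 8 }, (_, i) => ({
  name: `f${i}.ts`,
  delay: i === 0 ? 150 : 5,
}));

// Chunked model: one Promise.all over the whole chunk; the first result
// batch can only be emitted after the slowest file resolves.
async function chunkedFirstEmit(): Promise<number> {
  const start = Date.now();
  await Promise.all(files.map((f) => sleep(f.delay)));
  return Date.now() - start; // first (and only) emit happens here
}

// Pooled model: 4 runners pull files and flush every 4 completions, so
// fast files stream out while the slow one is still "parsing".
async function pooledFirstEmit(): Promise<number> {
  const start = Date.now();
  let next = 0;
  let completed = 0;
  let firstEmit = -1;
  const runners = Array.from({ length: 4 }, async () => {
    while (true) {
      const idx = next++;
      if (idx >= files.length) break;
      await sleep(files[idx].delay);
      completed++;
      if ((completed % 4 === 0 || completed === files.length) && firstEmit < 0) {
        firstEmit = Date.now() - start;
      }
    }
  });
  await Promise.all(runners);
  return firstEmit;
}
```

With these delays, the pooled model's first flush lands well before the slow file finishes, whereas the chunked model emits nothing until it does.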