Advanced Voice Optimization (BBBA)#1015
trummerschlunk wants to merge 2 commits into suitenumerique:main from
Conversation
With our latest commit to this PR, we introduce different handling of audio processing on android devices. The situation is as follows:
@coderabbitai full review
✅ Actions performed: Full review triggered.
Walkthrough
This pull request adds WebAssembly-based audio processing infrastructure to the frontend. It introduces three new WASM loader modules (…).
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
Detailed notes: The changes introduce several interconnected patterns that require careful review:
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing Touches: ⚔️ Resolve merge conflicts
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/frontend/src/locales/en/settings.json (1)
Lines 23-29: ⚠️ Potential issue | 🟡 Minor
Inconsistent aria labels after terminology change.
The `label` and `heading` have been updated to "Advanced voice optimization" and "Audio filters", but the `ariaLabel.enable` and `ariaLabel.disable` still reference "noise reduction". This creates an inconsistent experience for screen reader users. Consider updating the aria labels to match the new terminology:
Proposed fix

```diff
 "noiseReduction": {
   "label": "Advanced voice optimization",
   "heading": "Audio filters",
   "ariaLabel": {
-    "enable": "Enable noise reduction",
-    "disable": "Disable noise reduction"
+    "enable": "Enable advanced voice optimization",
+    "disable": "Disable advanced voice optimization"
   }
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/frontend/src/locales/en/settings.json` around lines 23 - 29, The ariaLabel entries under the noiseReduction JSON block are inconsistent with the updated terminology: update "ariaLabel.enable" and "ariaLabel.disable" to use the same language as "label" and "heading" (e.g., replace "Enable noise reduction"/"Disable noise reduction" with phrases like "Enable advanced voice optimization"/"Disable advanced voice optimization" or "Enable audio filters"/"Disable audio filters") so screen-reader text matches the "noiseReduction" -> label/heading changes; adjust the two string values for the noiseReduction.ariaLabel keys accordingly.

src/frontend/src/features/rooms/livekit/processors/RnnNoiseProcessor.ts (1)
Lines 290-318: ⚠️ Potential issue | 🟠 Major
Missing cleanup message to audio processor.
The `destroy()` method disconnects nodes but never sends `{ type: 'destroy' }` to the processor's port. Combined with the earlier issues in the processor's destroy handler, WASM resources will leak.
Proposed fix

```diff
 async destroy() {
   // Clean up audio nodes and context
+  // Signal the processor to release WASM resources
+  this.noiseSuppressionNode?.port?.postMessage({ type: 'destroy' });
+
   this.sourceNode?.disconnect()
   this.noiseSuppressionNode?.disconnect()
   this.destinationNode?.disconnect()
```

This should be combined with fixes to the worklet's `destroy()` handler to actually free WASM allocations.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/frontend/src/features/rooms/livekit/processors/RnnNoiseProcessor.ts` around lines 290 - 318, The destroy() method must notify the audio worklet processor to free WASM resources before disconnecting/suspending; post a { type: 'destroy' } message on the worklet port (e.g., noiseSuppressionNode.port.postMessage({ type: 'destroy' })) and optionally await an acknowledgement event from the processor before disconnecting nodes and calling audioContext.suspend(); update the cleanup sequence in destroy() so the processor receives the destroy message while noiseSuppressionNode is still connected and only then clear sourceNode, destinationNode, processedTrack and set nodes to undefined.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/frontend/public/wasm/mapi-proc.js`:
- Around line 140-143: The destroy() method currently only flips disconnected
and nulled bbba but never frees WASM-native resources; update
MapiProcessorInstance.destroy() to call the wasm module's _mapi_destroy(handle)
(using the instance field that stores the native handle allocated via
_malloc/_mapi_create) and free any pointers allocated with _malloc (use
Module._free for the buffer pointers created in the constructor: the bbba
backing pointer/byte buffer and any other _malloc-returning fields), then set
those fields (native handle, bbba pointer, any size fields) to null/0 and mark
disconnected; ensure you reference the MapiProcessorInstance constructor fields
(the native handle and bbba pointer) and invoke Module._mapi_destroy and
Module._free before nulling to avoid leaks.
- Around line 71-74: Replace loose inequality operators with strict ones for the
IO checks: change occurrences of "options.numberOfInputs !=
options.numberOfOutputs" and "options.numberOfInputs != 1" (and the similar
checks at the other two locations around lines where the same comparisons
appear) to use !== so they become "options.numberOfInputs !==
options.numberOfOutputs" and "options.numberOfInputs !== 1"; update the three
matching expressions in the mapi-proc.js file where these validation checks
occur to avoid type-coercion surprises.
- Around line 51-62: Replace the per-sample loops in the process method with
TypedArray bulk ops: instead of iterating to copy into module.HEAPF32, create a
Float32Array view or use module.HEAPF32.subarray(...) starting at audioData/4
and call .set(buffer.subarray(bufferOffset, bufferOffset + bufferSize)); call
module._mapi_process(this.handle, this.audioPtrs, this.audioPtrs, bufferSize);
then copy the output back with buffer.set(module.HEAPF32.subarray(audioData/4,
audioData/4 + bufferSize), bufferOffset). Update uses of this.audioData and
shifts (i << 2) accordingly to compute the 32-bit float index (divide byte
offset by 4) and ensure buffer is a Float32Array or converted before bulk
operations; keep references to process, module.HEAPF32, audioData,
_mapi_process, and audioPtrs to locate the changes.
In `@src/frontend/src/features/rooms/components/Join.tsx`:
- Around line 220-229: Remove the extra blank line between the audio constraints
after voiceIsolation: false and the "Audio quality optimized for voice" comment,
and add a brief comment near the constraint block (around noiseSuppression,
echoCancellation, autoGainControl, voiceIsolation) that documents the
intentional layered noise-suppression strategy (browser WebRTC APM + WASM BBBA
processor) and why both are enabled for defense-in-depth, or alternatively
change noiseSuppression to false if you decide to rely solely on the BBBA
processor; update the comment to reference the WASM BBBA processor by name so
future readers understand the design choice.
In `@src/frontend/src/features/rooms/livekit/processors/RnnNoiseProcessor.ts`:
- Around line 259-264: The hardcoded magic parameters sent to the audio worker
in RnnNoiseProcessor (the nn.port!.postMessage calls with symbols "intensity",
"leveler_target", "sb_strength", "mb_strength", "pre_gain", "post_gain") should
be extracted into a named constant (e.g., BBBA_PARAMS or RNN_NOISE_PARAMS) with
inline comments explaining each field's meaning and value range; update the
postMessage calls to reference those constants so the values are documented and
easier to maintain, and add a short comment above the constant indicating these
were tuned for voice optimization and where the values came from.
- Around line 59-65: catchHandler currently types its parameter as string which
mismatches what Promise.catch provides (typically an Error); change the handler
to accept Error|unknown (or just unknown) and normalize the value before storing
and rejecting — e.g., in catchHandler(error: unknown) compute const msg = error
instanceof Error ? error.message : String(error) and set loadedFiles.error = msg
then call reject(msg); update the catchHandler signature and usages to use this
normalized message so you don't store "[object Error]" or lose details.
- Around line 108-117: The MAPIModule interface incorrectly extends
WebAssembly.Module and uses untyped Emscripten runtime fields; remove the
WebAssembly.Module inheritance and instead declare MAPIModule as the
Emscripten-style runtime interface (or extend a proper Emscripten Module type)
so runtime helpers belong to the module object, update
_malloc/_mapi_create/_mapi_process/_mapi_set_parameter signatures to be
numeric-returning/callable members, and type HEAPF32 as Float32Array and HEAPU32
as Uint32Array; keep lengthBytesUTF8 and stringToUTF8 signatures as-is but
ensure their parameter types match the numeric buffer pointer types used
elsewhere.
- Around line 242-245: The Blob URL created for the worklet ("processorBlob" ->
"processorURL") is never revoked; wrap the module load in a try/finally (or add
a .finally) so that after awaiting
audioContext.audioWorklet.addModule(processorURL) you call
URL.revokeObjectURL(processorURL) to avoid the minor memory leak, keeping use of
loadedFiles.worklet and audioContext.audioWorklet.addModule unchanged.
- Around line 193-195: The 'destroy' message handler must free the WASM
resources allocated by the processor: call the module destructor (e.g.
Module._mapi_destroy(mapiHandle) — verify _mapi_destroy is exported) and free
any malloc'd buffers (Module._free(audioData), Module._free(audioPtrs),
Module._free(csymbolData) or any other pointer variables created), then null out
those variables (audioData = audioPtrs = csymbolData = mapiHandle = null) to
avoid reuse; implement this cleanup in the 'destroy' case of the postMessage
handler in RnnNoiseProcessor.ts.
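To illustrate the named-constant extraction suggested for the tuning parameters above, here is a hedged, standalone sketch. The symbol names come from the review; the numeric values and the `set_parameter` message shape are placeholders, not the PR's actual tuned values or worklet protocol.

```javascript
// Hypothetical BBBA_PARAMS extraction. Symbol names mirror the review;
// every numeric value below is a placeholder, NOT the PR's tuned value.
const BBBA_PARAMS = {
  intensity: 0.85,     // placeholder: overall suppression strength, 0..1
  leveler_target: -18, // placeholder: leveler target loudness
  sb_strength: 0.5,    // placeholder: single-band strength
  mb_strength: 0.5,    // placeholder: multi-band strength
  pre_gain: 1.0,       // placeholder: gain applied before processing
  post_gain: 1.0,      // placeholder: gain applied after processing
};

// Send each parameter to the worklet port; the message shape is illustrative.
function sendParams(port, params) {
  for (const [symbol, value] of Object.entries(params)) {
    port.postMessage({ type: 'set_parameter', symbol, value });
  }
}
```

Keeping the values in one documented object makes retuning a one-place change instead of a hunt through scattered `postMessage` calls.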
---
Outside diff comments:
In `@src/frontend/src/features/rooms/livekit/processors/RnnNoiseProcessor.ts`:
- Around line 290-318: The destroy() method must notify the audio worklet
processor to free WASM resources before disconnecting/suspending; post a { type:
'destroy' } message on the worklet port (e.g.,
noiseSuppressionNode.port.postMessage({ type: 'destroy' })) and optionally await
an acknowledgement event from the processor before disconnecting nodes and
calling audioContext.suspend(); update the cleanup sequence in destroy() so the
processor receives the destroy message while noiseSuppressionNode is still
connected and only then clear sourceNode, destinationNode, processedTrack and
set nodes to undefined.
In `@src/frontend/src/locales/en/settings.json`:
- Around line 23-29: The ariaLabel entries under the noiseReduction JSON block
are inconsistent with the updated terminology: update "ariaLabel.enable" and
"ariaLabel.disable" to use the same language as "label" and "heading" (e.g.,
replace "Enable noise reduction"/"Disable noise reduction" with phrases like
"Enable advanced voice optimization"/"Disable advanced voice optimization" or
"Enable audio filters"/"Disable audio filters") so screen-reader text matches
the "noiseReduction" -> label/heading changes; adjust the two string values for
the noiseReduction.ariaLabel keys accordingly.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 3345cb97-47e0-4a0e-b3a6-15010ac665b3
⛔ Files ignored due to path filters (2)
src/frontend/public/wasm/BBBA-mapi.wasm is excluded by !**/*.wasm
src/frontend/public/wasm/BBBA-nosimd-mapi.wasm is excluded by !**/*.wasm
📒 Files selected for processing (6)
src/frontend/public/wasm/BBBA-mapi.js
src/frontend/public/wasm/BBBA-nosimd-mapi.js
src/frontend/public/wasm/mapi-proc.js
src/frontend/src/features/rooms/components/Join.tsx
src/frontend/src/features/rooms/livekit/processors/RnnNoiseProcessor.ts
src/frontend/src/locales/en/settings.json
```js
process(buffer, bufferSize, bufferOffset) {
  if (! this.enabled)
    return;
  for (let i = 0; i < bufferSize; ++i)
    this.module.HEAPF32[this.audioData + (i << 2) >> 2] = buffer[bufferOffset + i];
  this.module._mapi_process(this.handle, this.audioPtrs, this.audioPtrs, bufferSize);
  for (let i = 0; i < bufferSize; ++i)
    buffer[bufferOffset + i] = this.module.HEAPF32[this.audioData + (i << 2) >> 2];
}
```
🧹 Nitpick | 🔵 Trivial
Consider using TypedArray methods for better performance.
The manual loops for copying audio data could be replaced with more efficient TypedArray operations.
Proposed optimization

```diff
 process(buffer, bufferSize, bufferOffset) {
   if (! this.enabled)
     return;
-  for (let i = 0; i < bufferSize; ++i)
-    this.module.HEAPF32[this.audioData + (i << 2) >> 2] = buffer[bufferOffset + i];
+  // Create a view into the WASM heap for direct copy
+  const heapView = new Float32Array(
+    this.module.HEAPF32.buffer,
+    this.audioData,
+    bufferSize
+  );
+  heapView.set(buffer.subarray(bufferOffset, bufferOffset + bufferSize));
   this.module._mapi_process(this.handle, this.audioPtrs, this.audioPtrs, bufferSize);
-  for (let i = 0; i < bufferSize; ++i)
-    buffer[bufferOffset + i] = this.module.HEAPF32[this.audioData + (i << 2) >> 2];
+  buffer.set(heapView, bufferOffset);
 }
```

This would eliminate per-sample loop overhead in the audio processing hot path.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/frontend/public/wasm/mapi-proc.js` around lines 51 - 62, Replace the
per-sample loops in the process method with TypedArray bulk ops: instead of
iterating to copy into module.HEAPF32, create a Float32Array view or use
module.HEAPF32.subarray(...) starting at audioData/4 and call
.set(buffer.subarray(bufferOffset, bufferOffset + bufferSize)); call
module._mapi_process(this.handle, this.audioPtrs, this.audioPtrs, bufferSize);
then copy the output back with buffer.set(module.HEAPF32.subarray(audioData/4,
audioData/4 + bufferSize), bufferOffset). Update uses of this.audioData and
shifts (i << 2) accordingly to compute the 32-bit float index (divide byte
offset by 4) and ensure buffer is a Float32Array or converted before bulk
operations; keep references to process, module.HEAPF32, audioData,
_mapi_process, and audioPtrs to locate the changes.
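As a standalone illustration of the bulk-copy pattern suggested above (not code from this PR): here `heap` is a plain `Float32Array` standing in for Emscripten's `module.HEAPF32`, and `audioData` is a byte offset into it, so dividing by 4 (`>> 2`) yields the float index.

```javascript
// Copy a slice of `buffer` into the heap at byte offset `audioData`.
function copyIn(heap, audioData, buffer, bufferOffset, bufferSize) {
  // audioData is a byte offset; >> 2 converts it to a Float32 index
  const view = heap.subarray(audioData >> 2, (audioData >> 2) + bufferSize);
  view.set(buffer.subarray(bufferOffset, bufferOffset + bufferSize));
}

// Copy the processed samples back out of the heap into `buffer`.
function copyOut(heap, audioData, buffer, bufferOffset, bufferSize) {
  buffer.set(heap.subarray(audioData >> 2, (audioData >> 2) + bufferSize), bufferOffset);
}
```

Both directions become a single `TypedArray.prototype.set` call, which engines implement as a memcpy, instead of a per-sample loop in the audio hot path.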
```js
if (options.numberOfInputs != options.numberOfOutputs)
  throw Error('Mis-matching IO, number of inputs must match outputs');
if (options.numberOfInputs != 1)
  throw Error('Invalid IO, must be mono');
```
🧹 Nitpick | 🔵 Trivial
Prefer strict equality operators.
Use !== instead of != for consistency and to avoid type coercion surprises.
Proposed fix

```diff
 // validity checks
-if (options.numberOfInputs != options.numberOfOutputs)
+if (options.numberOfInputs !== options.numberOfOutputs)
   throw Error('Mis-matching IO, number of inputs must match outputs');
-if (options.numberOfInputs != 1)
+if (options.numberOfInputs !== 1)
   throw Error('Invalid IO, must be mono');
```

Same applies to lines 155 and 166.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
if (options.numberOfInputs !== options.numberOfOutputs)
  throw Error('Mis-matching IO, number of inputs must match outputs');
if (options.numberOfInputs !== 1)
  throw Error('Invalid IO, must be mono');
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/frontend/public/wasm/mapi-proc.js` around lines 71 - 74, Replace loose
inequality operators with strict ones for the IO checks: change occurrences of
"options.numberOfInputs != options.numberOfOutputs" and "options.numberOfInputs
!= 1" (and the similar checks at the other two locations around lines where the
same comparisons appear) to use !== so they become "options.numberOfInputs !==
options.numberOfOutputs" and "options.numberOfInputs !== 1"; update the three
matching expressions in the mapi-proc.js file where these validation checks
occur to avoid type-coercion surprises.
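A minimal sketch of why the strict operators matter here. `checkIO` is a hypothetical standalone version of the validation, mirroring the review's suggestion rather than the PR's actual code: with `!=`, an option passed as the string `'1'` coerces to the number `1` and slips through; `!==` also compares types and rejects it.

```javascript
// Hypothetical validity check with strict operators (illustrative only).
function checkIO(options) {
  if (options.numberOfInputs !== options.numberOfOutputs)
    throw Error('Mis-matching IO, number of inputs must match outputs');
  // `'1' != 1` is false (coerced equal), so a loose check would pass here;
  // `'1' !== 1` is true, so the strict check throws on string-typed options
  if (options.numberOfInputs !== 1)
    throw Error('Invalid IO, must be mono');
}
```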
```js
destroy() {
  this.disconnected = true;
  this.bbba = null;
}
```
WASM handle not destroyed - potential memory leak.
The destroy() method sets this.bbba = null but doesn't call the WASM module's _mapi_destroy function to properly release native resources. The MapiProcessorInstance allocates memory via _malloc (lines 34-38) that will leak.
Proposed fix

```diff
 destroy() {
   this.disconnected = true;
+  if (this.bbba) {
+    // Note: Would need to add _mapi_destroy call and free allocated memory
+    // this.bbba.module._mapi_destroy(this.bbba.handle);
+    // this.bbba.module._free(this.bbba.audioData);
+    // this.bbba.module._free(this.bbba.audioPtrs);
+    // this.bbba.module._free(this.bbba.csymbolData);
+  }
   this.bbba = null;
 }
```

Consider adding a destroy() method to MapiProcessorInstance that cleans up WASM allocations.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/frontend/public/wasm/mapi-proc.js` around lines 140 - 143, The destroy()
method currently only flips disconnected and nulled bbba but never frees
WASM-native resources; update MapiProcessorInstance.destroy() to call the wasm
module's _mapi_destroy(handle) (using the instance field that stores the native
handle allocated via _malloc/_mapi_create) and free any pointers allocated with
_malloc (use Module._free for the buffer pointers created in the constructor:
the bbba backing pointer/byte buffer and any other _malloc-returning fields),
then set those fields (native handle, bbba pointer, any size fields) to null/0
and mark disconnected; ensure you reference the MapiProcessorInstance
constructor fields (the native handle and bbba pointer) and invoke
Module._mapi_destroy and Module._free before nulling to avoid leaks.
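A hedged sketch of the paired allocate/free lifecycle this comment asks for. `MapiInstance` and the stub `module` are illustrative only: `_mapi_create`, `_mapi_destroy`, `_malloc`, and `_free` are assumed Emscripten exports, not verified against the actual BBBA build.

```javascript
// Illustrative teardown pattern: every _malloc/_mapi_create in the constructor
// gets a matching _free/_mapi_destroy in destroy(), and destroy() is idempotent.
class MapiInstance {
  constructor(module, sampleRate, bufferSize) {
    this.module = module;
    this.handle = module._mapi_create(sampleRate, bufferSize); // assumed export
    this.audioData = module._malloc(bufferSize * 4);           // Float32 samples
  }
  destroy() {
    if (!this.module) return; // already destroyed; safe to call twice
    this.module._mapi_destroy(this.handle); // assumed export
    this.module._free(this.audioData);
    this.handle = 0;
    this.audioData = 0;
    this.module = null;
  }
}
```

Nulling the fields after freeing prevents accidental reuse of stale pointers, which in WASM would silently corrupt the heap rather than crash.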
```ts
noiseSuppression: true,
echoCancellation: true,
autoGainControl: true,
voiceIsolation: false,


// Audio quality optimized for voice
sampleRate: 48000, // High quality sample rate
channelCount: 1, // Mono for voice calls (saves bandwidth)
sampleSize: 16 // 16-bit audio
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Check if there's documentation or comments about the dual noise suppression strategy
rg -n -C3 'noiseSuppression|noiseReduction' --type=ts --type=tsx -g '!node_modules/**'
```

Repository: suitenumerique/meet
Length of output: 1786

🏁 Script executed:

```shell
#!/bin/bash
# Search for noiseSuppression and noiseReduction patterns to understand the dual processing approach
rg 'noiseSuppression|useNoiseReduction|RnnNoiseProcessor|BBBA' --type=ts --type=tsx
```

Repository: suitenumerique/meet
Length of output: 90
🌐 Web query:
MediaTrackConstraints noiseSuppression browser audio processing effects
💡 Result:
MediaTrackConstraints.noiseSuppression is a ConstrainBoolean property used in getUserMedia or applyConstraints to request or require noise suppression on audio tracks, typically from microphones. It filters out background noise, hum, and similar artifacts before delivering audio to the application. When set to true (preferred) or {exact: true} (mandatory), the browser enables noise suppression if supported by the hardware or software. Browsers ignore unsupported constraints without failing unless exact is mandated and unmet. Support is available in Chromium-based browsers (Chrome 67+, Edge 79+, Opera 54+) and Firefox (55+), but not in Safari. Check support via navigator.mediaDevices.getSupportedConstraints.noiseSuppression. In browsers like Chrome, it leverages WebRTC's Audio Processing Module (APM), which includes configurable noise suppression. Disabling it (false) provides raw audio for custom processing. Example usage: navigator.mediaDevices.getUserMedia({ audio: { noiseSuppression: true, echoCancellation: true, autoGainControl: true } }).then(stream => { // Use stream }); The current applied setting is readable via MediaTrackSettings.noiseSuppression (boolean). As of 2026, compatibility remains limited in Safari per MDN documentation.
Citations:
- 1: https://developer.mozilla.org/en-US/docs/Web/API/MediaTrackConstraints/noiseSuppression
- 2: https://developer.mozilla.org/en-US/docs/Web/API/MediaTrackSupportedConstraints/noiseSuppression
- 3: https://developer.mozilla.org/en-US/docs/Web/API/MediaTrackConstraints
- 4: https://developer.mozilla.org/en-US/docs/Web/API/MediaTrackSettings/noiseSuppression
- 5: http://docs.w3cub.com/dom/mediatrackconstraints/noisesuppression
- 6: https://w3c.github.io/mediacapture-main/
- 7: noiseSuppression property missing in MediaTrackConstraintSet microsoft/TypeScript-DOM-lib-generator#1148
- 8: https://stackoverflow.com/questions/29936416/webrtc-disable-all-audio-processing
🏁 Script executed:

```shell
#!/bin/bash
# Retry with correct file types - search typescript files
rg 'noiseSuppression|useNoiseReduction|RnnNoiseProcessor|BBBA' -t typescript
```

Repository: suitenumerique/meet
Length of output: 97

🏁 Script executed:

```shell
#!/bin/bash
# Search for the Join.tsx file and show the audio constraints section
fd 'Join\.tsx' -type f --exec head -250 {} \; | tail -50
```

Repository: suitenumerique/meet
Length of output: 233

🏁 Script executed:

```shell
#!/bin/bash
# Find Join.tsx file
fd Join.tsx
```

Repository: suitenumerique/meet
Length of output: 114

🏁 Script executed:

```shell
#!/bin/bash
# Search for noiseSuppression, useNoiseReduction, BBBA patterns in all files
rg 'noiseSuppression|useNoiseReduction|BBBA' -C2
```

Repository: suitenumerique/meet
Length of output: 50376

🏁 Script executed:

```shell
#!/bin/bash
# Read the Join.tsx file and extract lines around 220-229
wc -l src/frontend/src/features/rooms/components/Join.tsx
```

Repository: suitenumerique/meet
Length of output: 118

🏁 Script executed:

```shell
#!/bin/bash
# Show lines 200-240 to see context around the audio constraints
sed -n '200,240p' src/frontend/src/features/rooms/components/Join.tsx
```

Repository: suitenumerique/meet
Length of output: 1123
Remove unnecessary blank lines (224-225) and document the layered noise suppression approach if intentional.
Lines 224-225 contain two consecutive blank lines between voiceIsolation: false, and the audio quality comment—remove one.
The dual-processing setup (browser noiseSuppression: true via WebRTC APM + WASM BBBA processor) is valid but lacks documentation. If this layered approach is intentional for defense-in-depth audio cleanup, add a brief comment explaining why both are active. Otherwise, clarify whether the browser-level constraint should be disabled to avoid potential over-processing.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/frontend/src/features/rooms/components/Join.tsx` around lines 220 - 229,
Remove the extra blank line between the audio constraints after voiceIsolation:
false and the "Audio quality optimized for voice" comment, and add a brief
comment near the constraint block (around noiseSuppression, echoCancellation,
autoGainControl, voiceIsolation) that documents the intentional layered
noise-suppression strategy (browser WebRTC APM + WASM BBBA processor) and why
both are enabled for defense-in-depth, or alternatively change noiseSuppression
to false if you decide to rely solely on the BBBA processor; update the comment
to reference the WASM BBBA processor by name so future readers understand the
design choice.
```ts
const catchHandler = (error: string) => {
  // only reject Promise once
  if (!loadedFiles.error) {
    loadedFiles.error = error;
    reject(loadedFiles.error);
  }
};
```
Type mismatch in error handler.
catchHandler expects a string parameter, but fetch().catch() and Promise.catch() typically pass Error objects. This could result in loadedFiles.error being set to "[object Error]" or losing error details.
Proposed fix

```diff
-const catchHandler = (error: string) => {
+const catchHandler = (error: unknown) => {
+  const errorMessage = error instanceof Error ? error.message : String(error);
   // only reject Promise once
   if (!loadedFiles.error) {
-    loadedFiles.error = error;
+    loadedFiles.error = errorMessage;
     reject(loadedFiles.error);
   }
 };
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
const catchHandler = (error: unknown) => {
  const errorMessage = error instanceof Error ? error.message : String(error);
  // only reject Promise once
  if (!loadedFiles.error) {
    loadedFiles.error = errorMessage;
    reject(loadedFiles.error);
  }
};
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/frontend/src/features/rooms/livekit/processors/RnnNoiseProcessor.ts`
around lines 59 - 65, catchHandler currently types its parameter as string which
mismatches what Promise.catch provides (typically an Error); change the handler
to accept Error|unknown (or just unknown) and normalize the value before storing
and rejecting — e.g., in catchHandler(error: unknown) compute const msg = error
instanceof Error ? error.message : String(error) and set loadedFiles.error = msg
then call reject(msg); update the catchHandler signature and usages to use this
normalized message so you don't store "[object Error]" or lose details.
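The normalization step in isolation: a tiny helper (the name is ours, not from the PR) that keeps `Error` messages and stringifies everything else, so a rejected fetch never ends up stored as "[object Error]".

```javascript
// Accept whatever Promise.catch delivers and reduce it to a useful string.
function normalizeError(error) {
  return error instanceof Error ? error.message : String(error);
}
```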
```ts
interface MAPIModule extends WebAssembly.Module {
  _malloc: (size: number) => number;
  _mapi_process: (handle: number, inputPtr: number, outputPtr: number, numFrames: number) => void;
  _mapi_set_parameter: (handle: number, symbol: number, value: number) => void;
  _mapi_create: (sampleRate: number, bufferSize: number) => number;
  lengthBytesUTF8: (str: string) => number;
  stringToUTF8: (str: string, buffer: number, bufferSize: number) => void;
  HEAPF32: any;
  HEAPU32: any;
};
```
🧹 Nitpick | 🔵 Trivial
Interface incorrectly extends WebAssembly.Module.
MAPIModule extends WebAssembly.Module, but the properties (_malloc, HEAPF32, etc.) are from the Emscripten runtime Module object, not the WebAssembly.Module. Also, HEAPF32/HEAPU32 should have proper types.
Proposed fix

```diff
-interface MAPIModule extends WebAssembly.Module {
+interface MAPIModule {
   _malloc: (size: number) => number;
+  _free: (ptr: number) => void;
   _mapi_process: (handle: number, inputPtr: number, outputPtr: number, numFrames: number) => void;
   _mapi_set_parameter: (handle: number, symbol: number, value: number) => void;
   _mapi_create: (sampleRate: number, bufferSize: number) => number;
+  _mapi_destroy?: (handle: number) => void;
   lengthBytesUTF8: (str: string) => number;
   stringToUTF8: (str: string, buffer: number, bufferSize: number) => void;
-  HEAPF32: any;
-  HEAPU32: any;
+  HEAPF32: Float32Array;
+  HEAPU32: Uint32Array;
 };
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
interface MAPIModule {
  _malloc: (size: number) => number;
  _free: (ptr: number) => void;
  _mapi_process: (handle: number, inputPtr: number, outputPtr: number, numFrames: number) => void;
  _mapi_set_parameter: (handle: number, symbol: number, value: number) => void;
  _mapi_create: (sampleRate: number, bufferSize: number) => number;
  _mapi_destroy?: (handle: number) => void;
  lengthBytesUTF8: (str: string) => number;
  stringToUTF8: (str: string, buffer: number, bufferSize: number) => void;
  HEAPF32: Float32Array;
  HEAPU32: Uint32Array;
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/frontend/src/features/rooms/livekit/processors/RnnNoiseProcessor.ts`
around lines 108 - 117, The MAPIModule interface incorrectly extends
WebAssembly.Module and uses untyped Emscripten runtime fields; remove the
WebAssembly.Module inheritance and instead declare MAPIModule as the
Emscripten-style runtime interface (or extend a proper Emscripten Module type)
so runtime helpers belong to the module object, update
_malloc/_mapi_create/_mapi_process/_mapi_set_parameter signatures to be
numeric-returning/callable members, and type HEAPF32 as Float32Array and HEAPU32
as Uint32Array; keep lengthBytesUTF8 and stringToUTF8 signatures as-is but
ensure their parameter types match the numeric buffer pointer types used
elsewhere.
```ts
case 'destroy':
  break;
}
```
ScriptProcessor destroy case doesn't release WASM resources.
The 'destroy' case in the postMessage handler does nothing, but the module has allocated WASM memory (audioData, audioPtrs, csymbolData) and created a handle via _mapi_create. These should be freed to prevent memory leaks.
Proposed fix

```diff
 case 'destroy':
+  module._mapi_destroy?.(handle);
+  module._free?.(audioData);
+  module._free?.(audioPtrs);
+  module._free?.(csymbolData);
   break;
```

Note: Verify _mapi_destroy is exported by the WASM module.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
case 'destroy':
  module._mapi_destroy?.(handle);
  module._free?.(audioData);
  module._free?.(audioPtrs);
  module._free?.(csymbolData);
  break;
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/frontend/src/features/rooms/livekit/processors/RnnNoiseProcessor.ts`
around lines 193 - 195, The 'destroy' message handler must free the WASM
resources allocated by the processor: call the module destructor (e.g.
Module._mapi_destroy(mapiHandle) — verify _mapi_destroy is exported) and free
any malloc'd buffers (Module._free(audioData), Module._free(audioPtrs),
Module._free(csymbolData) or any other pointer variables created), then null out
those variables (audioData = audioPtrs = csymbolData = mapiHandle = null) to
avoid reuse; implement this cleanup in the 'destroy' case of the postMessage
handler in RnnNoiseProcessor.ts.
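The cleanup the review asks for can be exercised in isolation — a sketch with a hypothetical WasmModule type standing in for the Emscripten exports (_mapi_destroy and _free are assumed to exist, per the reviewer's own note):

```typescript
// Hypothetical stand-ins for the Emscripten exports; the real module is
// loaded inside the worklet and its export names should be verified.
type WasmModule = {
  _mapi_destroy?: (handle: number) => void;
  _free?: (ptr: number) => void;
};

function destroyProcessor(
  module: WasmModule,
  state: { handle: number; ptrs: number[] },
): void {
  module._mapi_destroy?.(state.handle); // release the processor handle first
  for (const ptr of state.ptrs) {
    module._free?.(ptr); // then every malloc'd buffer
  }
  state.ptrs = []; // guard against double-free on a repeated 'destroy' message
}

// Usage with a mock module that records what was released:
const freed: number[] = [];
const mockModule: WasmModule = {
  _mapi_destroy: (h) => { freed.push(h); },
  _free: (p) => { freed.push(p); },
};
const state = { handle: 1, ptrs: [100, 200, 300] };
destroyProcessor(mockModule, state);
console.log(freed.join(',')); // → 1,100,200,300
```

Clearing the pointer list makes a second 'destroy' message a no-op for the buffers, which matches the reviewer's suggestion to null out freed pointers.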
```ts
const processorBlob = new Blob([loadedFiles.worklet!], { type: 'text/javascript' });
const processorURL = URL.createObjectURL(processorBlob);

await audioContext.audioWorklet.addModule(processorURL)
```
Blob URL never revoked - minor memory leak.
The Blob URL created for the worklet processor is never revoked with URL.revokeObjectURL(). While minor, this can accumulate over repeated processor initializations.
Proposed fix

```diff
 const processorBlob = new Blob([loadedFiles.worklet!], { type: 'text/javascript' });
 const processorURL = URL.createObjectURL(processorBlob);
 await audioContext.audioWorklet.addModule(processorURL)
+URL.revokeObjectURL(processorURL)
 this.noiseSuppressionNode = new AudioWorkletNode(
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/frontend/src/features/rooms/livekit/processors/RnnNoiseProcessor.ts`
around lines 242 - 245, The Blob URL created for the worklet ("processorBlob" ->
"processorURL") is never revoked; wrap the module load in a try/finally (or add
a .finally) so that after awaiting
audioContext.audioWorklet.addModule(processorURL) you call
URL.revokeObjectURL(processorURL) to avoid the minor memory leak, keeping use of
loadedFiles.worklet and audioContext.audioWorklet.addModule unchanged.
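The try/finally shape the prompt describes can be sketched without a browser — loadWorklet and its urls parameter are hypothetical stand-ins for audioContext.audioWorklet.addModule and URL.createObjectURL / URL.revokeObjectURL:

```typescript
// Sketch of the revoke-after-load pattern; the injected callbacks stand in
// for the browser APIs so the control flow can be tested anywhere.
async function loadWorklet(
  source: string,
  addModule: (url: string) => Promise<void>,
  urls: { create: (src: string) => string; revoke: (url: string) => void },
): Promise<void> {
  const url = urls.create(source);
  try {
    await addModule(url);
  } finally {
    urls.revoke(url); // revoke even if addModule rejects
  }
}

// Usage with mocks that track the URL lifecycle:
const revoked: string[] = [];
await loadWorklet(
  'processor-source',
  async () => { /* pretend the worklet compiled */ },
  { create: (src) => `blob:${src}`, revoke: (url) => { revoked.push(url); } },
);
console.log(revoked[0]); // → blob:processor-source
```

The finally branch is what makes repeated processor initializations leak-free even on a failed addModule.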
```ts
nn.port!.postMessage({ type: 'param', symbol: "intensity", value: 100 });
nn.port!.postMessage({ type: 'param', symbol: "leveler_target", value: -18 });
nn.port!.postMessage({ type: 'param', symbol: "sb_strength", value: 60 });
nn.port!.postMessage({ type: 'param', symbol: "mb_strength", value: 60 });
nn.port!.postMessage({ type: 'param', symbol: "pre_gain", value: 2 });
nn.port!.postMessage({ type: 'param', symbol: "post_gain", value: 0 });
```
🧹 Nitpick | 🔵 Trivial
Document the magic parameter values.
These hardcoded audio processing parameters lack documentation explaining their purpose and why these specific values were chosen:
- intensity: 100
- leveler_target: -18
- sb_strength: 60, mb_strength: 60
- pre_gain: 2, post_gain: 0
Consider adding comments or extracting these to named constants for maintainability.
Example documentation

```ts
// BBBA audio processor parameters (tuned for voice optimization)
const BBBA_PARAMS = {
  intensity: 100, // Noise reduction intensity (0-100)
  leveler_target: -18, // Target loudness in dBFS
  sb_strength: 60, // Single-band compressor strength
  mb_strength: 60, // Multi-band compressor strength
  pre_gain: 2, // Input gain in dB
  post_gain: 0, // Output gain in dB
} as const;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/frontend/src/features/rooms/livekit/processors/RnnNoiseProcessor.ts`
around lines 259 - 264, The hardcoded magic parameters sent to the audio worker
in RnnNoiseProcessor (the nn.port!.postMessage calls with symbols "intensity",
"leveler_target", "sb_strength", "mb_strength", "pre_gain", "post_gain") should
be extracted into a named constant (e.g., BBBA_PARAMS or RNN_NOISE_PARAMS) with
inline comments explaining each field's meaning and value range; update the
postMessage calls to reference those constants so the values are documented and
easier to maintain, and add a short comment above the constant indicating these
were tuned for voice optimization and where the values came from.
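Taken one step further, such a constant also lets the six repeated postMessage calls collapse into a loop — a sketch with a mock port, since the real AudioWorkletNode port only exists in the browser (the parameter descriptions in the comments follow the reviewer's reading and should be checked against the BBBA docs):

```typescript
// Named constant replacing the six hardcoded values from the PR.
const BBBA_PARAMS = {
  intensity: 100,      // noise reduction intensity (0-100)
  leveler_target: -18, // target loudness in dBFS
  sb_strength: 60,     // single-band compressor strength
  mb_strength: 60,     // multi-band compressor strength
  pre_gain: 2,         // input gain in dB
  post_gain: 0,        // output gain in dB
} as const;

type ParamMessage = { type: 'param'; symbol: string; value: number };

function sendParams(port: { postMessage: (m: ParamMessage) => void }): void {
  // Object.entries preserves insertion order for string keys.
  for (const [symbol, value] of Object.entries(BBBA_PARAMS)) {
    port.postMessage({ type: 'param', symbol, value });
  }
}

// Usage with a mock port that records the outgoing messages:
const sent: ParamMessage[] = [];
sendParams({ postMessage: (m) => { sent.push(m); } });
console.log(sent.length); // → 6
console.log(sent[1].symbol, sent[1].value); // → leveler_target -18
```

Adding a parameter then means touching one object literal instead of adding another postMessage line.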
|
Hi @trummerschlunk, thanks a lot for your PR. I am sorry we have been very busy lately and haven't had the time to put the required effort into it. I have tested it today and it works pretty well from my point of view.
On the technical side, I haven't dived in too much, but would it be possible for you to package your processing pipeline / wasm / etc. directly on npm, so that we don't add those binary files directly to our repo? I am also seeing some mentions of a specific license in the files, which I am not sure we can pull into this repo like this. In all cases, thanks a lot for your work so far! |
great, happy to hear :)
Yes, we were planning to do this and are looking into it. Might take a week or two though...
I'll check. Cheers :) |
|
Awesome, thanks @trummerschlunk ! |
|
What are your license requirements for BBBA? |


Purpose
Better Sound for LaSuite Meet.
Proposal
This pull request introduces BBBA, a powerful voice optimization audio processor for LaSuite Meet. The solution runs entirely on the client, utilizing a WASM AudioWorklet inserted into the LaSuite Meet client's Livekit audio chain.
Features:

- Noise reduction (intensity)
- Automatic loudness leveling (leveler_target)
- Single-band and multi-band compression (sb_strength, mb_strength)
- Pre- and post-gain staging (pre_gain, post_gain)

The result is significantly improved speech clarity and a better listening experience for all participants.
Demo Video
PR.LaSuite.05.compact04.mp4