- Can be configured in Global Plugin Settings.
- Uses another texture for the Blink and Mouth Atlas for poses
- StartBlink now sets a MaterialPropertyBlock on the renderer
- EndBlink clears it with SetPropertyBlock(null). A single static s_Mpb is reused to avoid per-blink allocations.
Grabacion.de.pantalla.2026-03-10.062607.mp4
- Uses a 1024 atlas of 16 textures of 256.
- Each letter has a mouth pose associated with it.
- The character talks when chatting.
Mouth720p.mp4
- Added Avatar Face Debug panel
- Added Expressions system
- Added Eyebrows Atlas
- Added Expressions config file (scriptable)
- Press Y + Number to execute an expression
- Moves the mouth with hardcoded text while recognizing voice chat.
- Alert! This uses the nametag "is talking" flag to determine whether the avatar should be talking, which I don't think is the best way to do it.
The two "Talking" components exist because of assembly boundaries. NametagPlacementSystem and VoiceChatNametagsHandler live in separate assemblies that don't know about each other, so they can't share a single signal component. The bridge-component pattern lets each system write into the AvatarShape assembly without creating a dependency chain between them. AvatarMouthAnimationComponent is different in nature: it's not a signal, it's runtime state that only AvatarFacialAnimationSystem ever touches. Merging the signals into it would mean the chat and voice systems would need to write directly into animation state, which breaks encapsulation (they'd need to know about CharacterIndex, CharacterTimer, etc.). If both chat and voice were in the same assembly, you could collapse the two bridge components into one. As it stands, the split is the cost of keeping the assembly graph clean.
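Engine-free, the bridge-component pattern described above can be sketched like this. Component names mirror the PR, but the merge logic is illustrative, not the actual system code:

```csharp
// Sketch: two signal components written by systems in different assemblies;
// only the facial-animation system reads them and updates its private state.
public struct ChatTalkingSignal { public bool IsTalking; }   // written from the chat side
public struct VoiceTalkingSignal { public bool IsTalking; }  // written from the voice side

// Runtime state, touched only by the animation system.
public struct MouthAnimationState
{
    public bool IsAnimating;
    public int CharacterIndex;
    public float CharacterTimer;
}

public static class FacialAnimation
{
    // The animation system merges the signals; the signal writers never see
    // CharacterIndex / CharacterTimer, which is the encapsulation the text describes.
    public static void Update(in ChatTalkingSignal chat, in VoiceTalkingSignal voice,
                              ref MouthAnimationState state)
    {
        state.IsAnimating = chat.IsTalking || voice.IsTalking;

        if (!state.IsAnimating)
        {
            state.CharacterIndex = 0;
            state.CharacterTimer = 0f;
        }
    }
}
```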
Created custom VoiceChat handler for the AvatarMouth instead of reusing the one on the nametag
- Using the same input component for the mouth movement from chat and voice chat
- Decoupled avatar mouth animation system from the chat bubble / nametag, added a service directly in the chat services to control input
Previously any unhandled character was defaulted to pose 0. This is now defaulted to the face expression override, which feels more natural.
Screen.Recording.2026-03-13.015532.mp4
Signed-off-by: Alvaro Luque <41490935+olavra@users.noreply.github.com>
@claude review
Claude finished @olavra's task in 3m 35s — PR Review: feat: avatars now blink and talk
6 issues found. 2 are bugs that must be fixed before merge, 2 are ECS rule violations, and 2 are performance/architecture concerns. Details inline below.

REVIEW_RESULT: FAIL ❌
[None(typeof(DeleteEntityIntention))]
private void ApplyExpressionToPlayer([Data] int expressionIndex, ref AvatarFaceExpressionComponent expression, ref LocalPlayerFacialExpressionComponent networkExpression)
{
    AvatarFaceExpressionDefinition def = expressions[expressionIndex];
Bug: IndexOutOfRangeException when expressions is empty
ResolveExpressionIndex returns 0 when expressions.Length == 0, then expressions[0] throws. The guard in ResolveExpressionIndex handles the "nothing to pick" case by returning 0, but that value is then blindly used as an array index.
Fix: skip the query entirely if there are no expressions to apply:
if (triggeredSlot < 0)
    return;

int expressionIndex = ResolveExpressionIndex(triggeredSlot);
triggeredSlot = -1;

if (expressions.Length == 0)
    return;

ApplyExpressionToPlayerQuery(World, expressionIndex);

Why: a server build with no expression config, or a missing asset, will crash on any Y+1..9 keypress.
/// Also handles re-initialisation of the eyebrows renderer after avatar re-instantiation.
/// </summary>
[Query]
[All(typeof(AvatarBlinkComponent), typeof(AvatarMouthAnimationComponent))]
Bug: eyebrows are never applied when eye or mouth renderer is missing
[All(typeof(AvatarBlinkComponent), typeof(AvatarMouthAnimationComponent))] means UpdateFaceExpression is silently skipped for any avatar that doesn't have both components. SetupBlinkComponent and SetupMouthComponent bail out early when eyeRenderer == null / mouthRenderer == null, so an avatar without a Mask_Eyes or Mask_Mouth mesh will never have its eyebrows updated, even if the eyebrows renderer exists.
The eyebrows and expression-base-index propagation logic should not depend on the blink and mouth components being present. Either:
- Split UpdateFaceExpression into two queries (one for eyebrows that doesn't require Blink/Mouth, one for propagating base indices), or
- Add a fallback query for entities that have AvatarFaceExpressionComponent but not Blink/Mouth components, applying at least the eyebrows and clamping blink/mouth index syncing.
private void SetMouthSpeaking(Entity entity, bool isSpeaking)
{
    if (world.Has<AvatarMouthInputComponent>(entity))
ECS violation: entity manipulation from outside a system
Per project rules, entity manipulation (including world.Has, world.Get, world.Add) is strictly forbidden outside of ECS systems. This method is called from a LiveKit event callback on a potentially non-main thread (depends on LiveKit's threading model). Even on the main thread, bypassing the ECS update cycle can cause structural changes mid-frame that invalidate outstanding refs in other systems.
The recommended pattern is to write the desired state into a thread-safe queue or a pending-writes list that a dedicated system flushes each frame, keeping all world mutations inside BaseUnityLoopSystem.Update.
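A minimal, engine-free sketch of that queue pattern follows. The type and member names are hypothetical; in the real codebase the flush would run inside a BaseUnityLoopSystem.Update and write ECS components rather than a dictionary:

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;

// Sketch: the LiveKit callback only enqueues; a system drains the queue
// on the main thread once per frame, so all world mutations stay inside Update.
public sealed class SpeakingStateBridge
{
    private readonly ConcurrentQueue<(string walletId, bool isSpeaking)> pending = new ();

    // Safe to call from any thread (e.g. a LiveKit event callback).
    public void SetMouthSpeaking(string walletId, bool isSpeaking)
        => pending.Enqueue((walletId, isSpeaking));

    // Called once per frame from inside a system's Update, on the main thread.
    // The dictionary stands in for the ECS world in this sketch.
    public void Flush(IDictionary<string, bool> speakingByWallet)
    {
        while (pending.TryDequeue(out (string walletId, bool isSpeaking) change))
            speakingByWallet[change.walletId] = change.isSpeaking;
    }
}
```

Because the queue preserves order, the last write for a given wallet wins, matching what direct calls would have produced.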
private void SetPendingMessage(Entity entity, string message)
{
    if (world.Has<AvatarMouthInputComponent>(entity))
ECS violation: entity manipulation from outside a system
Same issue as VoiceChatMouthAnimationHandler: world.Has, world.Get, and world.Add are called from a chatHistory.MessageAdded event callback, not from inside an ECS system's Update. This can cause mid-frame structural changes that corrupt outstanding component references in other systems running the same frame.
The fix is the same pattern: cache the pending message + sender wallet in a plain field/queue inside this service, then have a dedicated ECS system read and apply it each frame.
private readonly Dictionary<string, int> slotByActionName = new (SLOT_COUNT);
private readonly DCLInput.FaceExpressionsActions faceExpressionsActions;

private int triggeredSlot = -1;
ECS violation: system holds persistent state
triggeredSlot, faceExpressionsActions, expressions, and slotByActionName are all instance fields that persist across frames. Per project rules: "Systems must not contain state — all state goes into ECS."
triggeredSlot should be a singleton ECS component (e.g. PendingFaceExpressionSlot) that the input callback sets and the query reads+clears. faceExpressionsActions and the action subscriptions belong in a service/plugin layer, not the system itself. This also makes the system testable via the normal UnitySystemTestBase<T> pattern.
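The suggested singleton-component flow might look like the engine-free sketch below. PendingFaceExpressionSlot is the reviewer's proposed (hypothetical) name; the real version would live on a singleton entity and be read by the system's query:

```csharp
// Sketch: the pending slot lives in a component, not in a system field.
public struct PendingFaceExpressionSlot
{
    public int Slot; // -1 = nothing pending
}

public static class FaceExpressionInput
{
    // The input callback writes into the component instead of a system instance field.
    public static void OnSlotPressed(ref PendingFaceExpressionSlot pending, int slot)
        => pending.Slot = slot;

    // The system's update reads the pending slot and clears it in the same frame.
    public static int ConsumePending(ref PendingFaceExpressionSlot pending)
    {
        int slot = pending.Slot;
        pending.Slot = -1;
        return slot;
    }
}
```

This keeps the system stateless, so it can be constructed fresh in a UnitySystemTestBase<T>-style test with a known component value.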
private void OnActiveSpeakersUpdated()
{
    var newActiveSpeakers = new HashSet<string>();
Performance: new HashSet<string>() allocated on every active-speakers update
OnActiveSpeakersUpdated fires whenever the set of speakers changes (can be frequent during a voice call). Replacing activeSpeakers with a newly allocated HashSet each time means the old one is handed to the GC every call.
Fix: reuse the existing instance by swapping two pooled sets or by using Clear + Add directly on activeSpeakers:
private HashSet<string> activeSpeakers = new ();
private HashSet<string> previousSpeakers = new ();

private void OnActiveSpeakersUpdated()
{
    // Swap the two pooled sets so previousSpeakers holds the last update's state.
    (previousSpeakers, activeSpeakers) = (activeSpeakers, previousSpeakers);
    activeSpeakers.Clear();

    foreach (string speakerId in voiceChatRoom.ActiveSpeakers)
    {
        if (activeSpeakers.Add(speakerId) && !previousSpeakers.Contains(speakerId))
            SetIsSpeaking(speakerId, true); // started speaking this update
    }

    // Anyone in the old set but not the new one has stopped speaking.
    foreach (string oldSpeakerId in previousSpeakers)
        if (!activeSpeakers.Contains(oldSpeakerId))
            SetIsSpeaking(oldSpeakerId, false);
}

(This only enumerates voiceChatRoom.ActiveSpeakers, so it doesn't require Contains or random access on it.)
movement.GlideState = (Decentraland.Kernel.Comms.Rfc4.Movement.Types.GlideState)message.animState.GlideState;

// GlideStateValue uses values 0-3 (2 bits). Facial expression indices (4 bits each) are
// packed into bits 2-13 of the GlideState int field to avoid adding new proto fields.
int packedFaceExpression = (message.eyebrowsExpressionIndex & 0xF)
Fragile proto field overloading — silent breakage risk
Facial expression indices are packed into bits 2-13 of GlideState, which is defined in a shared protobuf schema. If GlideStateValue ever gains new enum values that use bits ≥ 2, or if a partner client decodes GlideState directly (e.g. the old uncompressed path or a mobile client), facial expression data will either be misread as a GlideState or the GlideState will be corrupted.
The same packing is mirrored in UncompressedMovementMessage (decode side, line ~156) and needs to stay in sync manually. This is a maintenance trap.
If adding a new proto field is off the table for this PR, at least add a defensive assertion:
// Ensure GlideState values only occupy the low 2 bits we reserved.
Debug.Assert((int)message.animState.GlideState <= 3, "GlideState exceeded 2-bit budget; facial expression packing is broken.");And document the bit layout as a constant with a // TODO: migrate to dedicated proto field comment so it doesn't get forgotten.
Bugs

1. IndexOutOfRangeException in UpdateFaceExpressionInputSystem (line 83) — ResolveExpressionIndex returns 0 even when expressions is empty, so expressions[0] throws if any Y+slot key is pressed with no expression config loaded. → inline comment
2. Eyebrows never applied when eye or mouth renderer is missing (AvatarFacialExpressionSystem.cs line 247) — UpdateFaceExpression has [All(typeof(AvatarBlinkComponent), typeof(AvatarMouthAnimationComponent))]. If either renderer is absent (→ component not added), the query skips that avatar entirely — including the eyebrows update, even if the eyebrows renderer exists. → inline comment

ECS Rule Violations

3. VoiceChatMouthAnimationHandler.SetMouthSpeaking calls world.Add/world.Get from a LiveKit event callback — entity manipulation outside systems is strictly forbidden. → inline comment
4. ChatAvatarMouthService.SetPendingMessage has the same violation — world mutations from a MessageAdded event callback. → inline comment
5. UpdateFaceExpressionInputSystem stores per-frame state in instance fields (triggeredSlot, faceExpressionsActions, etc.) — systems must not hold state. → inline comment

Performance / Architecture

6. VoiceChatMouthAnimationHandler.OnActiveSpeakersUpdated allocates new HashSet<string>() on every speaker-change event — reuse the existing instance with Clear + in-place diffing. → inline comment
7. Facial expression indices packed into GlideState proto field bits 2-13 — fragile coupling between the facial feature and movement protocol layers. Any future addition to GlideStateValue silently corrupts expressions. A defensive assertion + TODO comment would reduce the risk. → inline comment

REVIEW_RESULT: FAIL ❌
@claude review
Claude finished @olavra's task in 4m 26s — PR Review: feat: avatars now blink and talk
The previous review's major issues (ECS violations in …) are resolved. 4 remaining issues found: 2 are bugs/wrong code that must be fixed; 2 carry over from the previous review.

Bugs (must fix)

1. Unused …
2. Wrong writer in …

Carried from previous review

3. …
4. …

REVIEW_RESULT: FAIL ❌
@claude review
Claude finished @olavra's task in 3m 32s — PR Review: feat: avatars now blink and talk
Good progress — almost all previous issues are resolved. One new bug found.

Previously flagged issues — resolved ✅

All bugs and ECS violations from the two prior reviews are addressed.
New bug (must fix)

Thread-safety race in …

Carried from previous review (non-blocking)

GlideState bit-packing still has no defensive …

REVIEW_RESULT: FAIL ❌
@claude review
Claude finished @olavra's task in 1m 58s — PR Review: feat: avatars now blink and talk
All issues from the three previous reviews are resolved. No new issues found.

Previously flagged issues — all resolved ✅
Non-blocking carry-over
REVIEW_RESULT: PASS ✅
Added a test feature to make avatars blink and talk. This is an experimental feature.
Can be configured in Global Plugin Settings.
Uses another texture for the Blink and Mouth Atlas for poses
StartBlink now sets a MaterialPropertyBlock on the renderer
EndBlink clears it with SetPropertyBlock(null). A single static s_Mpb is reused to avoid per-blink allocations.
Mouth atlas is 1024, divided into chunks of 256 to fit the texture array
Each character is associated with a mouth pose
You can see it in action while speaking
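As a sketch of the atlas math described above: a 1024px atlas of 256px tiles gives a 4x4 grid of 16 poses, and a pose index maps to a UV offset as below. Row-major layout from the bottom-left tile is an assumption; the PR doesn't state the origin convention:

```csharp
// Sketch: mapping a pose index (0-15) to its tile rect in a 1024x1024 atlas
// of 256px tiles, as described in the PR. Layout origin is assumed.
public static class MouthAtlas
{
    public const int AtlasSize = 1024;
    public const int TileSize = 256;
    public const int TilesPerRow = AtlasSize / TileSize; // 4

    // Returns (uOffset, vOffset, scale), suitable for a material's texture
    // scale/offset (e.g. via a MaterialPropertyBlock in the real code).
    public static (float u, float v, float scale) GetTileUv(int poseIndex)
    {
        int x = poseIndex % TilesPerRow;
        int y = poseIndex / TilesPerRow;
        float scale = (float)TileSize / AtlasSize; // 0.25
        return (x * scale, y * scale, scale);
    }
}
```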