
Feature Investigation: Music Reactivity Support #3

@wporter82

Overview

The official Lotus Lamp X app and the physical remote control both include music reactivity features that allow the lamp to respond to audio input. I'd like to investigate how this works and whether it can be implemented in this Python library.

Current App Capabilities

The app provides three audio input sources:

1. Device MIC (Lamp's Built-in Microphone)

  • Uses the microphone visible on the lamp base
  • Sensitivity adjustment: 0-100%
  • 8 visualization modes:
    • Energy1
    • Rhythm1
    • Spectrum1
    • Scroll1
    • Energy2
    • Rhythm2
    • Spectrum2
    • Scroll2

2. Phone MIC (Phone's Microphone)

  • Uses phone's microphone to pick up ambient sound
  • Sensitivity adjustment: 0-100%
  • No mode selection (unclear whether the phone app or the lamp firmware chooses the visualization)

3. Music (Pre-loaded Songs)

  • Built-in songs play through phone's speakers
  • Lamp reacts to the music
  • Limited song library
  • Cannot use custom music files or streaming apps like Spotify
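The three sources and their parameters described above can be captured up front as a small data model. Everything here (names, the `MusicModeConfig` class, validation rules) is a placeholder for a hypothetical library API, not anything derived from the actual protocol:

```python
from __future__ import annotations
from dataclasses import dataclass
from enum import Enum

class AudioSource(Enum):
    DEVICE_MIC = "device_mic"   # lamp's built-in microphone
    PHONE_MIC = "phone_mic"     # phone picks up ambient sound
    MUSIC = "music"             # pre-loaded songs played by the app

# The 8 visualization modes the app offers for Device MIC
DEVICE_MIC_MODES = [
    "Energy1", "Rhythm1", "Spectrum1", "Scroll1",
    "Energy2", "Rhythm2", "Spectrum2", "Scroll2",
]

@dataclass
class MusicModeConfig:
    source: AudioSource
    sensitivity: int            # 0-100 (%)
    mode: str | None = None     # only meaningful for DEVICE_MIC

    def __post_init__(self):
        if not 0 <= self.sensitivity <= 100:
            raise ValueError("sensitivity must be 0-100")
        if self.source is AudioSource.DEVICE_MIC and self.mode not in DEVICE_MIC_MODES:
            raise ValueError(f"mode must be one of {DEVICE_MIC_MODES}")
```

Validating at construction time keeps bad values out of whatever BLE layer eventually sends them.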

Investigation Questions

1. Audio Processing Architecture

  • Device MIC mode: Does the lamp's firmware perform all audio analysis internally?
  • Phone MIC mode: Does the phone analyze audio and send real-time visualization commands via BLE?
  • Music mode: How does this differ from Phone MIC mode if the phone is playing the audio?

2. Visualization Modes

What do the 8 modes actually do?

  • Energy1/2: Overall volume/energy level visualization?
  • Rhythm1/2: Beat/tempo detection?
  • Spectrum1/2: Frequency spectrum analysis (bass/mid/treble)?
  • Scroll1/2: Scrolling animation effect based on audio?
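My guesses above about what the modes compute can be prototyped with NumPy. This is speculation about the algorithms, not what the lamp's firmware actually does; it just makes the Energy/Spectrum/Rhythm hypotheses concrete:

```python
import numpy as np

def energy(samples: np.ndarray) -> float:
    """Overall loudness: RMS of one audio frame (the 'Energy' hypothesis)."""
    return float(np.sqrt(np.mean(samples ** 2)))

def spectrum_bands(samples: np.ndarray, rate: int, n_bands: int = 8) -> np.ndarray:
    """Coarse frequency bands from an FFT (the 'Spectrum' hypothesis)."""
    mags = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    return np.array([band.mean() for band in np.array_split(mags, n_bands)])

def is_beat(frame_energy: float, recent_energies: list[float], k: float = 1.5) -> bool:
    """Naive beat detector: an energy spike well above the recent
    average (the 'Rhythm' hypothesis)."""
    if not recent_energies:
        return False
    return frame_energy > k * (sum(recent_energies) / len(recent_energies))
```

If Phase 1 shows the phone does the analysis, functions like these would be the starting point for a desktop equivalent.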

3. BLE Command Structure

Need to discover commands for:

  • Audio source selection (Device MIC / Phone MIC / Music)
  • Visualization mode selection (8 modes for Device MIC)
  • Sensitivity adjustment (0-100%)
  • Real-time audio data streaming (if Phone MIC sends audio data to lamp)
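Once discovered, those bytes could slot into a frame builder like the sketch below. Every opcode is a 0x00 placeholder awaiting discovery, and the frame layout itself (opcode, length, payload, additive checksum) is an assumption, not the lamp's known protocol:

```python
# Placeholder opcodes -- real values must come from APK analysis / BLE sniffing.
OP_AUDIO_SOURCE = 0x00   # unknown: selects Device MIC / Phone MIC / Music
OP_VIS_MODE = 0x00       # unknown: selects one of the 8 visualization modes
OP_SENSITIVITY = 0x00    # unknown: sets sensitivity 0-100

def build_frame(opcode: int, payload: bytes) -> bytes:
    """Assumed layout: [opcode][len][payload][checksum]. Only shows where
    discovered bytes would go once the real structure is known."""
    body = bytes([opcode, len(payload)]) + payload
    checksum = sum(body) & 0xFF
    return body + bytes([checksum])

def sensitivity_frame(percent: int) -> bytes:
    if not 0 <= percent <= 100:
        raise ValueError("sensitivity must be 0-100")
    return build_frame(OP_SENSITIVITY, bytes([percent]))
```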

Investigation Approach

Phase 1: APK Code Analysis

  • Search decompiled APK for audio/music-related code
  • Look for keywords: "music", "mic", "audio", "spectrum", "fft", "beat", "rhythm"
  • Identify BLE command bytes for mode selection and sensitivity
  • Determine if phone sends audio data or just control commands

Phase 2: BLE Command Discovery

  • Use app to test each audio mode while monitoring for BLE commands
  • Document command structure for:
    • Switching between Device MIC / Phone MIC / Music
    • Selecting visualization modes (Energy1, Rhythm1, etc.)
    • Adjusting sensitivity slider
  • Test if commands work when sent from Python library

Phase 3: Audio Processing Investigation

If Phone MIC/Music modes send audio data to lamp:

  • Determine audio data format (raw PCM, frequency bands, etc.)
  • Identify sample rate and bit depth
  • Understand how app processes audio before sending to lamp
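If the phone does stream audio-derived data, it almost certainly compresses each frame first. One plausible (entirely assumed) scheme reduces a PCM frame to a handful of quantized band energies that fit a single default 20-byte BLE ATT payload:

```python
import numpy as np

def frame_to_payload(samples: np.ndarray, n_bands: int = 8) -> bytes:
    """Reduce one PCM frame to n_bands bytes (0-255 per band).
    Hypothetical encoding -- the app's real format is still unknown."""
    mags = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    bands = np.array([b.mean() for b in np.array_split(mags, n_bands)])
    peak = bands.max() or 1.0          # avoid dividing by zero on silence
    return bytes((bands / peak * 255).astype(np.uint8))
```

Comparing sniffed Phone MIC traffic against output like this would quickly confirm or rule out a band-energy encoding.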

If lamp does all processing:

  • Commands only need to enable/configure modes
  • Much simpler implementation!

Potential Implementation

Basic Mode Control

```python
# Enable Device MIC mode with a specific visualization
await lamp.set_music_mode(
    source="device_mic",
    mode="Spectrum1",
    sensitivity=75,
)

# Enable Phone MIC mode
await lamp.set_music_mode(
    source="phone_mic",
    sensitivity=50,
)

# Disable music reactivity
await lamp.disable_music_mode()
```

Advanced Audio Processing (if feasible)

```python
# Use the PC's microphone for music reactivity (sketch only --
# analyze_spectrum and the lamp methods below are hypothetical)
import sounddevice as sd
import numpy as np

async def audio_callback(audio_data):
    # Analyze audio (FFT, beat detection, etc.)
    spectrum = analyze_spectrum(audio_data)
    await lamp.send_audio_spectrum(spectrum)

# Stream PC audio to the lamp; note sounddevice invokes callbacks on a
# separate audio thread, so the library would need to bridge them into
# the asyncio event loop
await lamp.start_audio_stream(callback=audio_callback)
```

Potential Enhancements

If I can reverse-engineer the protocol, potential improvements over the official app:

  • Desktop audio reactivity: React to PC/Mac audio output
  • Spotify/streaming integration: Real-time visualization for any audio source
  • Custom visualization modes: Create new modes beyond the built-in 8
  • Advanced audio analysis: Better beat detection, frequency isolation, etc.
  • Multi-lamp sync: Synchronized music visualization across multiple lamps

Investigation Timeline

I'll start with APK analysis to understand the architecture before attempting BLE command discovery. This will determine whether advanced features are feasible.

Technical Challenges

If Phone Sends Audio Data:

  • BLE bandwidth limitations: raw audio streaming over BLE is likely too slow; audio would probably need to be summarized before transmission
  • Audio processing complexity: Need FFT, beat detection libraries
  • Real-time requirements: Low latency needed for good visualization
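Rough arithmetic shows why raw streaming is probably off the table while summarized data is easy; the sample rate and BLE throughput figures are assumptions:

```python
# Back-of-the-envelope BLE bandwidth check (all figures are assumptions).
RAW_RATE = 44_100 * 16 * 1     # mono 16-bit PCM at 44.1 kHz: 705,600 bits/s
BLE_PRACTICAL = 100_000        # ~100 kbps, a conservative practical BLE throughput

# Raw PCM needs several times what BLE reliably delivers.
assert RAW_RATE / BLE_PRACTICAL > 7

# A summarized stream (8 band bytes at 30 frames/s) is trivially cheap.
SUMMARY_RATE = 8 * 8 * 30      # 1,920 bits/s
assert SUMMARY_RATE / BLE_PRACTICAL < 0.02
```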

If Lamp Does Processing:

  • Much simpler: Just send mode/sensitivity commands
  • Limited customization: Can't create new visualization modes
  • Hardware dependent: Relies on lamp's firmware capabilities

Community Input Welcome

If anyone has:

  • Experience with BLE audio streaming
  • Knowledge of audio visualization algorithms
  • Ideas for creative use cases
  • Other BLE LED devices with similar features

Please share your thoughts!


Labels: enhancement, investigation, needs-testing, priority: medium
