Overview
The official Lotus Lamp X app and the physical remote control both include music reactivity features that allow the lamp to respond to audio input. I'd like to investigate how this works and whether it can be implemented in this Python library.
Current App Capabilities
The app provides three audio input sources:
1. Device MIC (Lamp's Built-in Microphone)
- Uses the microphone visible on the lamp base
- Sensitivity adjustment: 0-100%
- 8 Visualization modes:
- Energy1
- Rhythm1
- Spectrum1
- Scroll1
- Energy2
- Rhythm2
- Spectrum2
- Scroll2
2. Phone MIC (Phone's Microphone)
- Uses phone's microphone to pick up ambient sound
- Sensitivity adjustment: 0-100%
- No mode selection (handled by app or lamp?)
3. Music (Pre-loaded Songs)
- Built-in songs play through phone's speakers
- Lamp reacts to the music
- Limited song library
- Cannot use custom music files or streaming apps like Spotify
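The sources and modes above can be captured as simple constants for the library. The string values and names here are my own placeholders, not confirmed protocol identifiers:

```python
from enum import Enum

class AudioSource(Enum):
    DEVICE_MIC = "device_mic"  # lamp's built-in microphone
    PHONE_MIC = "phone_mic"    # phone microphone, captured by the app
    MUSIC = "music"            # pre-loaded songs played by the app

# The eight Device MIC visualization modes, as named in the app UI
DEVICE_MIC_MODES = [
    "Energy1", "Rhythm1", "Spectrum1", "Scroll1",
    "Energy2", "Rhythm2", "Spectrum2", "Scroll2",
]
```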
Investigation Questions
1. Audio Processing Architecture
- Device MIC mode: Does the lamp's firmware perform all audio analysis internally?
- Phone MIC mode: Does the phone analyze audio and send real-time visualization commands via BLE?
- Music mode: How does this differ from Phone MIC mode if the phone is playing the audio?
2. Visualization Modes
What do the 8 modes actually do?
- Energy1/2: Overall volume/energy level visualization?
- Rhythm1/2: Beat/tempo detection?
- Spectrum1/2: Frequency spectrum analysis (bass/mid/treble)?
- Scroll1/2: Scrolling animation effect based on audio?
3. BLE Command Structure
Need to discover commands for:
- Audio source selection (Device MIC / Phone MIC / Music)
- Visualization mode selection (8 modes for Device MIC)
- Sensitivity adjustment (0-100%)
- Real-time audio data streaming (if Phone MIC sends audio data to lamp)
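As a working hypothesis for what we're looking for: many inexpensive BLE LED controllers use short fixed-length frames with an opcode, parameters, and a simple checksum. The sketch below is a template to fill in during discovery; every byte value is a placeholder, not a real Lotus Lamp X opcode:

```python
# Hypothetical frame layout -- all opcodes are UNKNOWN placeholders
# until captured from the official app.
OPCODE_MUSIC_MODE = 0x00  # TODO: discover real value

def build_music_command(source: int, mode: int, sensitivity: int) -> bytes:
    """Build a candidate music-mode frame: opcode, source, mode, sensitivity, checksum."""
    if not 0 <= sensitivity <= 100:
        raise ValueError("sensitivity must be 0-100")
    payload = bytes([OPCODE_MUSIC_MODE, source, mode, sensitivity])
    checksum = sum(payload) & 0xFF  # common pattern; may not match this lamp
    return payload + bytes([checksum])
```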
Investigation Approach
Phase 1: APK Code Analysis
Phase 2: BLE Command Discovery
Phase 3: Audio Processing Investigation
If Phone MIC/Music modes send audio data to lamp:
If lamp does all processing:
Potential Implementation
Basic Mode Control
```python
# Enable Device MIC mode with a specific visualization
await lamp.set_music_mode(
    source="device_mic",
    mode="Spectrum1",
    sensitivity=75,
)

# Enable Phone MIC mode
await lamp.set_music_mode(
    source="phone_mic",
    sensitivity=50,
)

# Disable music reactivity
await lamp.disable_music_mode()
```
Advanced Audio Processing (if feasible)
```python
# Use the PC's microphone for music reactivity
import sounddevice as sd
import numpy as np

async def audio_callback(audio_data):
    # Analyze audio (FFT, beat detection, etc.)
    spectrum = analyze_spectrum(audio_data)
    await lamp.send_audio_spectrum(spectrum)

# Stream PC audio to the lamp
await lamp.start_audio_stream(callback=audio_callback)
```
Potential Enhancements
If I can reverse-engineer the protocol, potential improvements over the official app:
- Desktop audio reactivity: React to PC/Mac audio output
- Spotify/streaming integration: Real-time visualization for any audio source
- Custom visualization modes: Create new modes beyond the built-in 8
- Advanced audio analysis: Better beat detection, frequency isolation, etc.
- Multi-lamp sync: Synchronized music visualization across multiple lamps
Investigation Timeline
I'll start with APK analysis to understand the architecture before attempting BLE command discovery. This will determine whether advanced features are feasible.
Technical Challenges
If Phone Sends Audio Data:
- BLE bandwidth limitations: Raw audio streaming over BLE is likely impractical; the app probably sends compact summaries (levels or spectrum bins) at a modest update rate
- Audio processing complexity: Need FFT, beat detection libraries
- Real-time requirements: Low latency needed for good visualization
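If the phone-side analysis turns out to be simple band energies, the FFT part is straightforward with NumPy. This is a minimal sketch of that analysis; the band edges are arbitrary choices of mine, and how the result maps onto lamp commands is still unknown:

```python
# Split one mono audio frame into bass/mid/treble energies via FFT.
import numpy as np

def band_energies(frame: np.ndarray, sample_rate: int = 44100):
    """Return (bass, mid, treble) magnitudes for one windowed frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    bass = spectrum[(freqs >= 20) & (freqs < 250)].sum()
    mid = spectrum[(freqs >= 250) & (freqs < 2000)].sum()
    treble = spectrum[(freqs >= 2000) & (freqs < 8000)].sum()
    return bass, mid, treble
```

A 100 Hz test tone should show up almost entirely in the bass band, which makes this easy to sanity-check before wiring it to real microphone input.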
If Lamp Does Processing:
- Much simpler: Just send mode/sensitivity commands
- Limited customization: Can't create new visualization modes
- Hardware dependent: Relies on lamp's firmware capabilities
Community Input Welcome
If anyone has:
- Experience with BLE audio streaming
- Knowledge of audio visualization algorithms
- Ideas for creative use cases
- Other BLE LED devices with similar features
Please share your thoughts!
Labels: enhancement, investigation, needs-testing