- Changed from magnifying glass (🔍) to brain icon (🧠)
- Indicates AI-powered capabilities
- Mobile-friendly positioning fixed (proper spacing between icons)
The search now automatically detects query type:
Traditional Search (fast, direct DB):
- "John Smith" → finds voter by name
- "123 Main St" → finds voters at address
- "1234567890" → finds voter by VUID
AI Search (natural language):
- "Show me voters in TX-15 who voted in 2024 but not 2026"
- "How many voters switched from Republican to Democratic?"
- "Find new voters in Hidalgo County"
When AI detects a question, it shows:
- ✨ AI Response section at top (like Google's AI Overview)
- Natural language explanation of results
- Collapsible SQL query (for transparency)
- Follow-up question suggestions
- Data results below (tables or voter cards)
- Bottom icons now properly spaced on mobile
- No more overlapping buttons
- Touch-friendly sizing (44px)
- New 🧠 AI Assistant tab in admin dashboard
- Check Ollama service status
- Check for and install updates with one click
- Manage installed models (pull, delete, test)
- View performance statistics
- Action log for all operations
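These model operations map onto the local Ollama API. A rough sketch of how the backend could back those dashboard buttons with the ollama Python package (the function names and prompt below are assumptions for illustration, not the actual app.py code):

```python
import ollama  # Python client for the local Ollama service (installed by the setup script)

def list_models():
    """Data for the installed-models panel (name, size, modified date)."""
    return ollama.list()["models"]   # dict-style access; newer clients also expose .models

def pull_model(name: str):
    """'Pull New Model' button, e.g. pull_model("llama3.2:3b-instruct")."""
    return ollama.pull(name)

def test_model(name: str) -> str:
    """'Test' button: run a trivial prompt and return the raw completion."""
    resp = ollama.generate(model=name, prompt="Reply with OK if you can read this.")
    return resp["response"]

def delete_model(name: str):
    """'Delete' button (superadmin only in the dashboard)."""
    return ollama.delete(name)
```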
- public/search.js - Added hybrid search logic with AI detection
- public/index.html - Added llm-chat.js script tag
- public/styles.css - Added AI response styles + fixed mobile icon spacing
- backend/app.py - Added /api/llm/query, /api/llm/status, and Ollama management endpoints
- backend/llm_query.py - Already created (QueryAssistant class)
- backend/llm_api_endpoint.py - Reference implementation (integrated into app.py)
- backend/admin/dashboard.html - Added AI Assistant tab
- backend/admin/dashboard.js - Added Ollama management functions
- deploy/setup_llm_assistant.sh - Automated setup script
- AI_SEARCH_IMPLEMENTATION.md - Complete technical documentation
- AI_SEARCH_READY.md - This file
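For orientation, the query endpoint essentially passes the frontend question to the QueryAssistant and returns everything the UI renders: explanation, SQL, suggestions, and rows. A hedged sketch of that shape - the ask() interface, the stub class, and the exact response keys are assumptions for illustration, not the actual app.py code:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

class QueryAssistant:
    """Stand-in for the real class in backend/llm_query.py, which prompts the
    local Llama model, validates the SQL, and runs it read-only."""
    def ask(self, question: str) -> dict:
        return {"explanation": "...", "sql": "SELECT ...", "rows": [], "followups": []}

assistant = QueryAssistant()

@app.route("/api/llm/query", methods=["POST"])
def llm_query():
    question = (request.get_json(silent=True) or {}).get("question", "").strip()
    if not question:
        return jsonify({"error": "empty question"}), 400

    result = assistant.ask(question)          # assumed interface, see stub above
    return jsonify({
        "answer": result["explanation"],      # natural-language summary shown at top
        "sql": result["sql"],                 # collapsible SQL panel
        "rows": result["rows"],               # tables / voter cards
        "suggestions": result["followups"],   # follow-up question chips
    })
```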
# 1. Copy setup script to server
scp -i WhoVoted/deploy/whovoted-key.pem WhoVoted/deploy/setup_llm_assistant.sh ubuntu@politiquera.com:/tmp/
# 2. Run setup script
ssh -i WhoVoted/deploy/whovoted-key.pem ubuntu@politiquera.com
sudo bash /tmp/setup_llm_assistant.sh
The script will:
- Install Ollama (LLM runtime)
- Download Llama 3.2 3B model (~2GB, takes 2-5 minutes)
- Install Python ollama package
- Restart gunicorn with new code
- Verify everything is working
ssh -i WhoVoted/deploy/whovoted-key.pem ubuntu@politiquera.com
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# Start service
sudo systemctl enable ollama
sudo systemctl start ollama
# Pull model (this takes a few minutes)
ollama pull llama3.2:3b-instruct
# Install Python package
/opt/whovoted/venv/bin/pip install ollama
# Restart gunicorn
pkill gunicorn
cd /opt/whovoted
PYTHONPATH=/opt/whovoted/backend /opt/whovoted/venv/bin/gunicorn -w 5 -b 127.0.0.1:5000 'app:app' --daemon
curl http://localhost/api/llm/status
Should return:
{
"available": true,
"models": ["llama3.2:3b-instruct"],
"recommended": "llama3.2:3b-instruct"
}
- Go to https://politiquera.com/admin
- Click 🧠 AI Assistant tab
- Should show:
- Service status (Running/Not Installed)
- Update checker
- Installed models list
- Performance stats
- If not installed, click Install Ollama button
- Once installed, click ⬆️ Update Now to check for updates
- Test a model by clicking 🧪 Test button
- Click brain icon (🧠) at bottom-left
- Type: "John Smith"
- Press Enter
- Should show voter cards (fast, <200ms)
- Click brain icon (🧠)
- Type: "Show me voters in TX-15 who voted in 2024 but not 2026"
- Press Enter
- Should show:
- AI Response section with explanation
- SQL query (collapsible)
- Follow-up suggestions
- Results table below
- Open on mobile device or resize browser to <768px
- Check bottom icons are properly spaced
- Brain icon should be at left, other icons at right
- No overlapping
- Brain icon (🧠) at bottom-left corner
- Click to open search modal
- Type question or name
- Get instant results
- Brain icon (🧠) at bottom-left
- Properly spaced from other icons
- Full-screen modal on tap
- Touch-friendly buttons
- "Maria Garcia"
- "123 Main Street"
- "McAllen"
- "Show me voters in TX-15 who voted in 2024 but not 2026"
- "How many voters switched from Republican to Democratic?"
- "Find new voters in Hidalgo County"
- "What's the turnout rate by age group?"
- "Show me voters who voted early in 2026"
- "Find voters in precinct 123"
- Name: Llama 3.2 3B Instruct
- Size: 2GB RAM
- Speed: ~50 tokens/sec on CPU (2-5 sec per query)
- Cost: $0/month (runs locally)
- Traditional search: 50-200ms
- AI search: 2-5 seconds
  - LLM generation: 1-3 sec
  - SQL execution: 50-500ms
  - Explanation: 500ms-1s
- Requires authentication
- SQL injection prevention
- Read-only database access
- Query validation
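The query-validation layer comes down to refusing anything the LLM generates that is not a single read-only SELECT over known tables. A minimal sketch of that kind of guard (the table whitelist and checks below are illustrative assumptions, not the app's actual validation code):

```python
import re

ALLOWED_TABLES = {"voters", "vote_history"}   # illustrative whitelist, not the real schema

def validate_generated_sql(sql: str) -> str:
    """Reject anything that is not a single read-only SELECT over known tables."""
    cleaned = sql.strip().rstrip(";").strip()
    if ";" in cleaned:
        raise ValueError("multiple statements are not allowed")
    if not cleaned.lower().startswith("select"):
        raise ValueError("only SELECT statements are allowed")
    if re.search(r"\b(insert|update|delete|drop|alter|create|grant|truncate)\b", cleaned, re.I):
        raise ValueError("write/DDL keywords are not allowed")
    referenced = {t.lower() for t in re.findall(r"\b(?:from|join)\s+([A-Za-z_][A-Za-z0-9_]*)", cleaned, re.I)}
    if not referenced <= ALLOWED_TABLES:
        raise ValueError(f"unexpected tables: {referenced - ALLOWED_TABLES}")
    return cleaned
```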
With a hosted LLM API (for comparison):
- $0.045 per query
- 1,000 queries/month = $45/month
- Annual: $540
With the local LLM (Ollama):
- $0 per query
- Unlimited queries
- Annual: $0
- Savings: $540/year
# Check Ollama is running
sudo systemctl status ollama
# Check model is downloaded
ollama list
# Restart if needed
sudo systemctl restart ollama
- Normal: the model loads into memory on first use
- Subsequent queries are faster
- Model stays in memory
- LLM may occasionally generate invalid SQL
- User sees error with SQL query shown
- Can refine question and retry
- Report persistent issues
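Surfacing the failure rather than hiding it is the key design choice here: the generated SQL is returned to the user even when it fails, so the question can be rephrased. A hedged sketch of that error path (the helper and executor names are illustrative assumptions):

```python
def run_ai_query(assistant, db, question: str) -> dict:
    """Return rows on success; on failure, return the generated SQL and the
    error so the UI can display both and invite a rephrased question."""
    sql = assistant.generate_sql(question)    # assumed helper on QueryAssistant
    try:
        rows = db.execute_readonly(sql)       # assumed read-only executor
    except Exception as exc:                  # invalid SQL, timeouts, etc.
        return {"ok": False, "sql": sql, "error": str(exc),
                "hint": "Try rephrasing the question and retrying."}
    return {"ok": True, "sql": sql, "rows": rows}
```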
# Check Ollama status
sudo systemctl status ollama
# View Ollama logs
sudo journalctl -u ollama -f
# Check available models
ollama list
# Test LLM directly
curl http://localhost:11434/api/generate -d '{
"model": "llama3.2:3b-instruct",
"prompt": "Convert to SQL: Show all voters in TX-15"
}'
After deployment:
- Test with various queries
- Monitor performance and errors
- Collect user feedback
- Consider enhancements:
- Query history
- Saved queries
- Export to CSV
- Chart generation
If issues arise:
- Check logs: sudo journalctl -u ollama -f
- Verify model: ollama list
- Test API: curl http://localhost/api/llm/status
- Restart services:
  sudo systemctl restart ollama
  pkill gunicorn
  cd /opt/whovoted
  PYTHONPATH=/opt/whovoted/backend /opt/whovoted/venv/bin/gunicorn -w 5 -b 127.0.0.1:5000 'app:app' --daemon
✅ Search icon changed to brain (🧠)
✅ Hybrid search detects traditional vs AI queries
✅ Google-style AI response section
✅ Mobile icon layout fixed
✅ Admin dashboard for Ollama management
✅ One-click updates from admin panel
✅ Model management (pull, delete, test)
✅ Zero API costs (local LLM)
✅ Ready to deploy with automated script
Ready to go live!
Access at: https://politiquera.com/admin → 🧠 AI Assistant
Service Status Section:
- Ollama service status (Running/Stopped)
- Version information
- API availability
- Number of installed models
- One-click refresh
Updates Section:
- Automatic update checker
- Shows current vs latest version
- ⬆️ Update Now button (one-click update)
- Requires superadmin role for updates
Models Section:
- List of all installed models
- Model size and last modified date
- ➕ Pull New Model button
- Per-model actions:
- 🧪 Test - Test model with sample query
- 🗑️ Delete - Remove model (superadmin only)
Performance Stats:
- Total queries processed
- Average response time
- Success rate
- Memory usage
Action Log:
- Real-time log of all operations
- Timestamps for each action
- Success/failure indicators
- All Ollama management requires authentication
- Install/Update/Delete operations require superadmin role
- Pull and Test operations available to all authenticated users
- Action log tracks all operations
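The role split described above maps naturally onto a decorator applied to the management endpoints. A minimal sketch, assuming a Flask backend and a session helper with a role attribute (both are assumptions for illustration, not the actual app.py code):

```python
from functools import wraps
from flask import jsonify

def get_current_user():
    """Stand-in for the app's real session/auth lookup."""
    return None

def require_role(role: str):
    """Gate an endpoint on the logged-in user's role (e.g. 'superadmin')."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            user = get_current_user()
            if user is None:
                return jsonify({"error": "authentication required"}), 401
            if role == "superadmin" and getattr(user, "role", None) != "superadmin":
                return jsonify({"error": "superadmin role required"}), 403
            return fn(*args, **kwargs)
        return wrapper
    return decorator

# Install/Update/Delete endpoints would carry @require_role("superadmin");
# Pull and Test endpoints only need the basic authentication check.
```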