Common issues and solutions for the ProxyMe plugin.
**Symptom:** Error when installing the plugin from disk

Solutions:

- Check your Rider version:

  Help → About

  Required: Rider 2024.3 or later

- Verify the ZIP file:

  ```shell
  unzip -t ProxyMe-2.1.0.zip
  ```

  Should show no errors.

- Try a clean install:
  - Uninstall the old version
  - Restart Rider
  - Install the new version
  - Restart again
**Symptom:** Plugin installed but not showing up or not working

Solutions:

- Verify the installation:

  Settings → Plugins → Installed

  ProxyMe should be listed and enabled.

- Check the plugin is enabled:
  - Find ProxyMe in the plugin list
  - Ensure its checkbox is checked

- Restart Rider:

  File → Exit → Reopen Rider
**Symptom:** "Failed to start proxy" error

Check Node.js:

```shell
node --version
# Required: v18 or later
```

Install Node.js if missing:

- macOS: `brew install node`
- Windows: download from nodejs.org
- Linux: `sudo apt install nodejs npm`
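The version check above can be scripted; a minimal Python sketch, assuming `node --version` prints `vMAJOR.MINOR.PATCH` (the v18 minimum comes from this guide):

```python
import re
import shutil
import subprocess

def node_major_version():
    """Return the installed Node.js major version, or None if node is missing."""
    if shutil.which("node") is None:
        return None
    out = subprocess.run(["node", "--version"], capture_output=True, text=True).stdout
    match = re.match(r"v(\d+)", out.strip())  # e.g. "v18.17.0" -> 18
    return int(match.group(1)) if match else None

def node_ok(major, required=18):
    """True if the detected major version meets the proxy's requirement."""
    return major is not None and major >= required
```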
Check port availability:

```shell
# macOS/Linux
lsof -i :3000

# Windows
netstat -ano | findstr :3000
```

If the port is in use:

```shell
# Kill the process using port 3000
# macOS/Linux
kill -9 <PID>

# Windows
taskkill /PID <PID> /F
```

Check the proxy logs:

```shell
tail -50 ~/.proxyme/logs/proxyme.log
```

Check configuration files:
```shell
# Verify models.json exists
cat ~/.proxyme/proxy/models.json

# Check the .env file has API keys
cat ~/.proxyme/proxy/.env
```

Verify API keys are valid:

- Test keys directly with the provider's API
- Check for typos or expired keys
- Ensure keys have proper permissions
Test the proxy endpoint:

```shell
# Check health
curl http://localhost:3000/health

# List available models
curl http://localhost:3000/v1/models
```

**Symptom:** Rider crashes when clicking "Restart Proxy"
Solution: This is a known issue in older versions. Update to v2.1.0 Build 2 or later.
Workaround for older versions:
- Stop proxy manually
- Wait 5 seconds
- Start proxy again
- Don't use the "Restart Proxy" button
**Symptom:** Rider shows wrong models or no models
Solution 1: Restart the proxy
Tools → ProxyMe → Restart Proxy Server
Models are loaded only on startup.
Solution 2: Refresh the Rider AI Assistant

- Go to Settings → Tools → AI Assistant → Models
- Click "Test Connection"
- Close and reopen the AI Assistant
- Check the model dropdown
Solution 3: Full Rider restart
- Save all work
- Exit Rider completely
- Reopen Rider
- Check AI Assistant again
Verify models.json is generated:

```shell
cat ~/.proxyme/proxy/models.json | jq '.models[].id'
```

Should show your enabled models.
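If `jq` isn't available, the same check can be done with a short Python sketch; the `.models[].id` structure is taken from the `jq` filter above, everything else about the file's schema is an assumption:

```python
import json
from pathlib import Path

def list_model_ids(path):
    """Return the ids of all models declared in a models.json file."""
    config = json.loads(Path(path).read_text())
    return [model["id"] for model in config.get("models", [])]
```

Usage: `list_model_ids(Path.home() / ".proxyme/proxy/models.json")`.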
**Symptom:** Seeing default models instead of your configured models
Cause: Proxy hasn't restarted since you made changes
Solution:
- Save settings in ProxyMe
- Restart proxy
- Wait 5 seconds
- Refresh Rider AI Assistant
**Symptom:** "Add Model" button doesn't work or dialog doesn't open

Solutions:

- Check for modal dialogs:
  - Look for hidden error dialogs
  - Press Escape to close any modals

- Restart Rider:
  - Close Rider completely
  - Reopen and try again

- Check the logs for errors:

  ```shell
  tail -50 ~/.proxyme/logs/proxyme.log
  ```
Check the key format:

Different providers use different key formats:

- DeepSeek: `sk-...`
- Perplexity: `pplx-...`
- Anthropic: `sk-ant-...`
- OpenAI: `sk-...`

Verify the key in the .env file:

```shell
cat ~/.proxyme/proxy/.env
```

Should show:

```shell
DEEPSEEK_API_KEY=sk-your-key-here
PERPLEXITY_API_KEY=pplx-your-key-here
ANTHROPIC_API_KEY=sk-ant-your-key-here
```
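A quick sanity check on key prefixes, as a minimal Python sketch based only on the formats listed above (note that DeepSeek and OpenAI share the `sk-` prefix, so this catches typos, not mix-ups between those two providers):

```python
# Expected key prefixes per provider, taken from the list above
KEY_PREFIXES = {
    "deepseek": "sk-",
    "perplexity": "pplx-",
    "anthropic": "sk-ant-",
    "openai": "sk-",
}

def key_looks_valid(provider, key):
    """Check that an API key starts with the provider's expected prefix."""
    prefix = KEY_PREFIXES.get(provider.lower())
    return prefix is not None and key.startswith(prefix)
```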
Test the key directly:

```shell
# Test a DeepSeek key
curl https://api.deepseek.com/v1/models \
  -H "Authorization: Bearer sk-your-key-here"
# Should return a list of models
```

Regenerate .env:

- Edit the model in ProxyMe
- Re-enter the API key
- Click OK and Save
- Restart the proxy
Check the environment file exists:

```shell
ls -la ~/.proxyme/proxy/.env
```

Check file permissions:

```shell
# Should be readable only by the owner
chmod 600 ~/.proxyme/proxy/.env
```

Force regeneration:

- Delete the .env file:

  ```shell
  rm ~/.proxyme/proxy/.env
  ```

- Open ProxyMe settings
- Click Save
- Restart the proxy
**Symptom:** Model behavior doesn't change when adjusting temperature

Solution: Temperature is applied only when the proxy starts.
Steps:
- Edit model temperature in ProxyMe
- Click OK and Save
- Restart proxy ← Important!
- Test model again
Verify the temperature in models.json:

```shell
cat ~/.proxyme/proxy/models.json | jq '.models[] | {id, temperature}'
```

Check the stream setting:

- Edit the model in ProxyMe
- Ensure "Stream" is checked
- Save settings
- Restart the proxy
Test streaming:

```shell
curl -X POST http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-chat",
    "messages": [{"role": "user", "content": "Hi"}],
    "stream": true
  }'
```

Should show incremental responses.
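Streaming responses from an OpenAI-compatible `/v1/chat/completions` endpoint arrive as server-sent events, one `data: {...}` line per chunk, terminated by `data: [DONE]`. A minimal Python sketch of the parsing; the chunk shape follows the standard OpenAI streaming format, so the field names are assumptions if the proxy deviates from it:

```python
import json

def extract_stream_text(raw_sse):
    """Concatenate the content deltas from an SSE chat-completion stream."""
    parts = []
    for line in raw_sse.splitlines():
        if not line.startswith("data: "):
            continue  # skip blank lines and non-data fields
        payload = line[len("data: "):]
        if payload.strip() == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        parts.append(delta.get("content") or "")
    return "".join(parts)
```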
Check the template exists:

```shell
ls -la ~/.proxyme/templates/presets/
ls -la ~/.proxyme/templates/user/
```

Check the template format:

```shell
cat ~/.proxyme/templates/user/your-template.json | jq .
```

Should be valid JSON.
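Without `jq`, validity can be checked with a short Python sketch:

```python
import json
from pathlib import Path

def template_is_valid_json(path):
    """Return True if the template file exists and parses as JSON."""
    try:
        json.loads(Path(path).read_text())
        return True
    except (OSError, json.JSONDecodeError):
        return False
```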
Recreate the template directories:

```shell
mkdir -p ~/.proxyme/templates/presets
mkdir -p ~/.proxyme/templates/user
```

Check directory permissions:

```shell
ls -ld ~/.proxyme/templates/user/
# Should be writable
```

Fix permissions:

```shell
chmod 755 ~/.proxyme/templates/user/
```

Check disk space:

```shell
df -h ~
```

Check the network connection:
- Test provider API directly
- Check for network issues
- Try different provider
Check temperature:
- Lower temperature = faster, more focused
- Higher temperature = slower, more creative
Check model size:
- Smaller models respond faster
- Consider using faster models for quick tasks
Check the proxy logs for errors:

```shell
tail -f ~/.proxyme/logs/proxyme.log
```

Restart the proxy:

Tools → ProxyMe → Restart Proxy Server

Check for runaway processes:

```shell
# macOS/Linux
ps aux | grep node

# Kill if needed
kill -9 <PID>
```

Verify the proxy is running:

- Check the status indicator (should be green)
- Test the endpoint:

  ```shell
  curl http://localhost:3000/health
  ```
Check the Rider AI Assistant configuration:

Settings → Tools → AI Assistant → Models

Should be:

- Provider: OpenAI API
- URL: http://localhost:3000/v1
- API Key: (empty)

Test the connection:

- Click the "Test Connection" button
- Should show: ✅ Connected
- If it fails, check the proxy logs
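The same connectivity check can be scripted; a minimal Python sketch using only the `/health` endpoint and port documented in this guide:

```python
from urllib.error import URLError
from urllib.request import urlopen

def proxy_is_up(url="http://localhost:3000/health", timeout=2.0):
    """Return True if the proxy's health endpoint answers with HTTP 200."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False  # connection refused, timeout, DNS failure, ...
```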
**Symptom:** Quick Edit doesn't use the expected model

Solution: Assign the model in the Rider AI Assistant:

Settings → Tools → AI Assistant → Models
→ Instant helpers → Select your model

Recommended for Quick Edit:

- Model: `deepseek-chat`
- Temperature: 0.1-0.3 (focused)
- Avoid: high temperature or search models
Common errors:
"Model not found"
- Restart proxy to reload models
- Check model is enabled in ProxyMe
- Verify models.json contains the model
"Connection refused"
- Proxy isn't running - start it
- Wrong URL in Rider settings
- Firewall blocking localhost:3000
"Rate limit exceeded"
- Provider API rate limit hit
- Wait and try again
- Check your API plan limits
Default locations:

```
~/.proxyme/
├── proxy/
│   ├── .env            # API keys
│   ├── models.json     # Enabled models
│   ├── proxy.js        # Proxy server
│   └── package.json    # Node dependencies
├── logs/
│   └── proxyme.log     # Log files
└── templates/
    ├── presets/        # Built-in templates
    └── user/           # Your templates
```

Windows equivalent:

```
C:\Users\YourUsername\.proxyme\
```
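Both locations resolve from the user's home directory, so the paths can be computed portably; a minimal Python sketch:

```python
from pathlib import Path

# Resolves to ~/.proxyme on macOS/Linux and C:\Users\<name>\.proxyme on Windows
PROXYME_HOME = Path.home() / ".proxyme"
ENV_FILE = PROXYME_HOME / "proxy" / ".env"
MODELS_FILE = PROXYME_HOME / "proxy" / "models.json"
LOG_FILE = PROXYME_HOME / "logs" / "proxyme.log"
```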
Create directories if missing:

```shell
mkdir -p ~/.proxyme/proxy
mkdir -p ~/.proxyme/logs
mkdir -p ~/.proxyme/templates/presets
mkdir -p ~/.proxyme/templates/user
```

Fix directory permissions:

```shell
chmod 755 ~/.proxyme
chmod 755 ~/.proxyme/proxy
chmod 755 ~/.proxyme/logs
chmod 755 ~/.proxyme/templates
```

Fix file permissions:

```shell
chmod 600 ~/.proxyme/proxy/.env
chmod 644 ~/.proxyme/proxy/models.json
```

Check current logs:

```shell
tail -f ~/.proxyme/logs/proxyme.log
```

View the full log:

```shell
cat ~/.proxyme/logs/proxyme.log
```

Clear old logs:

```shell
rm ~/.proxyme/logs/*.log
```

Check the Rider IDE logs:
Help → Show Log in Finder/Explorer
Look for ProxyMe-related errors.
For bug reports, collect:

```shell
# System info
node --version
java -version
echo "OS: $(uname -a)"

# ProxyMe version
cat ProxyMe/gradle.properties | grep version

# Configuration
cat ~/.proxyme/proxy/models.json | jq .
cat ~/.proxyme/proxy/.env | sed 's/=.*/=***REDACTED***/g'

# Logs (last 100 lines)
tail -100 ~/.proxyme/logs/proxyme.log

# Proxy status
curl http://localhost:3000/health
curl http://localhost:3000/v1/models
```

Status: Fixed in v2.1.0 Build 2
Workaround:
- Use Stop → Start instead of Restart
- Update to latest version
Warning: This plugin has only been tested with JetBrains Rider.
Other JetBrains IDEs:
- IntelliJ IDEA: Not tested
- WebStorm: Not tested
- PyCharm: Not tested
Use at your own risk with other IDEs.
Notice: This project contains AI-generated code that may need:
- Refactoring
- Security review
- Bug fixes
- Performance optimization
Contributions welcome! See CONTRIBUTING.md
- Check existing issues:
  - GitHub Issues
  - Search for similar problems

- Ask the community:
  - GitHub Discussions
  - Post your question with details

- Report a bug:
  - Use the bug report template
  - Include logs and system info
  - Describe the steps to reproduce
- Read this troubleshooting guide
- Check closed issues
- Try latest version
- Collect diagnostic information
- Test with minimal configuration
Need more help? Visit our documentation or open an issue on GitHub.