We welcome contributions! Rhumb is built to serve agents and the developers who build with them.
If you work at or with a service we've scored and the data looks wrong, please:
- Open an issue with the service slug and the dimension you believe is incorrect
- Include evidence (API docs, changelog links, error response samples)
- We'll review and update within 48 hours
Want to see a service scored? Open an issue with:
- Service name and primary API docs URL
- Which category it belongs to (payments, auth, ai, etc.)
- Why it's relevant for AI agent workflows
For bugs in the web UI, API, or MCP server:
- Open a GitHub issue
- Include: what you expected, what happened, steps to reproduce
- For API/MCP issues, include the request and response
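For MCP issues in particular, the raw JSON-RPC request/response pair makes triage much faster. A report might include something like the following (the tool name and arguments here are illustrative placeholders, not necessarily Rhumb's actual tool surface):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_service_score",
    "arguments": { "service": "stripe" }
  }
}
```

Paste the response you got back alongside it, including any error object, so we can reproduce the exchange exactly.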
- Fork the repo
- Create a feature branch (`git checkout -b feature/your-feature`)
- Make your changes
- Run tests: `cd packages/api && python -m pytest` or `cd packages/web && npm test`
- Open a PR with a clear description
Every service page on rhumb.dev has a "Dispute this score" option. If you believe a score is inaccurate:
- Click "Dispute this score" on the service page, or
- Email team@supertrained.ai with the service slug and your evidence
- We review all disputes within 48 hours
- The AN Score methodology is published and auditable — we'll explain the scoring basis
The scoring methodology is open and documented in `docs/AN-SCORE-V2-SPEC.md`. We believe transparency builds trust. If you have suggestions for improving the methodology itself, open a discussion.
Be respectful, be constructive, be honest. We're building infrastructure that agents depend on — accuracy and integrity matter more than speed.
By contributing, you agree that your contributions will be licensed under the MIT License.