# Contributing to Context CLI

Thanks for your interest in contributing to Context CLI! This guide will help you get set up and submit your first pull request.
## Design Principles

These principles guide all code in the project:

- Async-first — all I/O uses `async`/`await`. The CLI bridges to sync with `asyncio.run()`.
- Pydantic-first — every data contract is a Pydantic model with `Field(description=...)` on every field. These propagate to MCP tool schemas automatically.
- Errors don't crash — exceptions during audits are captured in `AuditReport.errors`, not raised to the caller.
- CLI is a thin wrapper — core logic lives in `core/`. Both the Typer CLI (`main.py`) and the MCP server (`server.py`) are thin wrappers that delegate to the same core functions.
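As a minimal sketch of the Pydantic-first rule (the model and field names here are illustrative, not the project's actual data contracts):

```python
from pydantic import BaseModel, Field

class PageScore(BaseModel):
    # Every field carries a description; these surface in MCP tool schemas.
    url: str = Field(description="The audited page URL")
    score: float = Field(description="Overall score from 0 to 100")

# Field descriptions are visible in the generated JSON schema.
schema = PageScore.model_json_schema()
print(schema["properties"]["score"]["description"])
```

Because the schema is generated from the model, keeping descriptions on every field means MCP clients see documented parameters for free.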
## Getting Started

Prerequisites:

- Python 3.10 or higher
- Git

Clone the repository and install it in editable mode with dev dependencies:

```
git clone https://github.com/hanselhansel/context-cli.git
cd context-cli
pip install -e ".[dev]"
```

Context CLI uses a headless browser (via crawl4ai) for content extraction. After installing, set it up:

```
crawl4ai-setup
```

Verify the installation with a quick single-page lint:

```
context-cli lint example.com --single
```

## Testing

Run the test suite:

```
pytest tests/ -v
```

Run tests with coverage:

```
make coverage
```

Run a single test by name:

```
pytest tests/test_auditor.py -k "test_name" -v
```

Tests use pytest-asyncio with `asyncio_mode = "auto"`, so any `async def test_*` function is detected automatically — no `@pytest.mark.asyncio` decorator needed.
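For example, under `asyncio_mode = "auto"` a plain `async def` test is collected as-is (the helper and test below are hypothetical, not part of the real suite):

```python
import asyncio

async def fetch_title(url: str) -> str:
    # Stand-in for a real crawler call; the sleep simulates async I/O.
    await asyncio.sleep(0)
    return f"Title of {url}"

async def test_fetch_title():
    # No @pytest.mark.asyncio needed with asyncio_mode = "auto".
    title = await fetch_title("https://example.com")
    assert title.startswith("Title of")
```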
## Linting

We use Ruff for linting (line-length 100, Python 3.10 target):

```
ruff check src/ tests/
```

Fix auto-fixable issues:

```
ruff check --fix src/ tests/
```

## Project Structure

```
src/context_cli/
├── main.py          # Typer CLI (thin wrapper)
├── server.py        # FastMCP server (thin wrapper)
└── core/
    ├── models.py    # Pydantic v2 data contracts
    ├── auditor.py   # Audit orchestration + scoring
    ├── crawler.py   # crawl4ai headless browser wrapper
    └── discovery.py # Sitemap/spider page discovery
```
```
CLI input (url)     MCP request (url)
        \              /
         v            v
      main.py     server.py
           \         /
            v       v
      auditor.py (orchestration)
              |
              v
      discovery.py (find pages: sitemap, spider)
              |
              v
      crawler.py (fetch & extract via headless browser)
              |
              v
      auditor.py (scoring & pillar checks)
              |
              v
      models.py (AuditReport Pydantic model)
              |
              v
  CLI text output     MCP JSON response
```
Key design principles:

- `auditor.py` is the core entry point -- both CLI and MCP server call `audit_url()` / `audit_site()`
- `models.py` defines all data contracts as Pydantic models with `Field(description=...)` on every field
- Async-first -- core logic is async; the CLI bridges with `asyncio.run()`
- Errors don't crash -- all errors are captured in `AuditReport.errors`
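The bridge-and-capture pattern above can be sketched as follows (all names here are simplified stand-ins, and `Report` is a plain dataclass rather than the real `AuditReport` Pydantic model):

```python
import asyncio
from dataclasses import dataclass, field

@dataclass
class Report:  # simplified stand-in for the real AuditReport model
    url: str
    errors: list[str] = field(default_factory=list)

async def audit_url(url: str) -> Report:
    report = Report(url=url)
    try:
        if not url.startswith("http"):
            raise ValueError(f"not a URL: {url}")
        # ... discovery -> crawl -> scoring would happen here ...
    except Exception as exc:
        # Errors are captured on the report, never raised to the caller.
        report.errors.append(str(exc))
    return report

def lint_command(url: str) -> Report:
    # The sync CLI entry point bridges into the async core.
    return asyncio.run(audit_url(url))
```

Both entry points stay thin: they only translate input/output, while `audit_url()` owns the logic and the error handling.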
## Code Style

- Ruff for linting and formatting (config in `pyproject.toml`)
- Line length: 100 characters
- Async-first: use `async`/`await` for I/O operations
- Pydantic models: add `Field(description=...)` on all model fields -- these propagate to MCP tool schemas
- Type hints: use modern Python syntax (`list[str]`, `str | None`, not `List[str]`, `Optional[str]`)
- src-layout: all source code lives under `src/context_cli/`
## Contribution Workflow

1. Fork the repository and clone your fork
2. Create a branch for your change: `git checkout -b your-feature-name`
3. Make your changes -- keep PRs focused on a single concern
4. Run tests and lint before committing:

   ```
   pytest tests/ -v
   ruff check src/ tests/
   ```

5. Commit with a clear message describing what and why
6. Push to your fork and open a Pull Request against `main`
## Pull Request Guidelines

- Keep PRs small and focused -- one feature or fix per PR
- Add tests for new functionality
- Update documentation if your change affects user-facing behavior
- Ensure all tests pass and linting is clean
- Describe what the PR does and why in the PR description
## Commit Messages

Use the format `type: description` (lowercase, imperative mood).

Allowed types:

| Type | Use for |
|------|---------|
| `feat` | New feature |
| `fix` | Bug fix |
| `docs` | Documentation only |
| `ci` | CI/CD configuration |
| `chore` | Maintenance, deps, tooling |
| `test` | Adding or updating tests |
| `refactor` | Code change that neither fixes a bug nor adds a feature |
| `style` | Formatting, whitespace, linting (no logic change) |
| `perf` | Performance improvement |
Keep commits atomic — one logical change per commit. Write the subject line in imperative mood ("add feature", not "added feature").
## Adding a Scoring Pillar

If you're adding a new scoring pillar:

1. Add the Pydantic model to `core/models.py`
2. Add the check function to `core/auditor.py`
3. Wire it into `compute_scores()` and update `audit_url()`
4. Add it to the CLI display in `main.py`
5. Add it to the MCP tool response in `server.py`
6. Add tests in `tests/`
7. Update `docs/scoring.md` with the new pillar's methodology
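As a hedged sketch of steps 1-2 (the pillar, model, and scoring formula below are invented for illustration and are not part of the project):

```python
from pydantic import BaseModel, Field

class ReadabilityPillar(BaseModel):
    # Per the Pydantic-first rule, every field gets a description.
    score: float = Field(description="Readability score from 0 to 100")
    sentence_count: int = Field(description="Number of sentences found on the page")

def check_readability(text: str) -> ReadabilityPillar:
    # Toy heuristic: shorter average sentence length scores higher.
    sentences = [s for s in text.split(".") if s.strip()]
    words = text.split()
    avg_len = len(words) / max(len(sentences), 1)
    score = max(0.0, 100.0 - 4.0 * avg_len)
    return ReadabilityPillar(score=score, sentence_count=len(sentences))
```

The check returns a fully-typed model, so wiring it into `compute_scores()` and the CLI/MCP outputs is just a matter of passing the object through.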
## Versioning and Releases

Context CLI follows semantic versioning (major.minor.patch).

- The canonical version lives in `pyproject.toml` under `[project] version`.
- To cut a release:

  ```
  make release VERSION=x.y.z
  ```

  This bumps the version, creates a git tag, and pushes to origin.

- Publishing to PyPI is automated via GitHub Actions — a workflow triggers on new version tags and builds + publishes the package.
When choosing a version number:
- Patch (0.1.0 -> 0.1.1) — bug fixes, docs, internal refactors
- Minor (0.1.1 -> 0.2.0) — new features, new lint pillars, new CLI flags
- Major (0.2.0 -> 1.0.0) — breaking changes to CLI interface or MCP tool schemas
## Testing MCP Tools

Context CLI exposes its audit functionality as MCP (Model Context Protocol) tools. To test them locally:

1. Start the MCP server:

   ```
   context-cli mcp
   ```

2. Connect with any MCP-compatible client — for example, add it to Claude Desktop's config or use a standalone MCP client.
3. MCP tool definitions live in `server.py`. When you change tool signatures or descriptions, verify that the updated schema appears correctly in the client.

Automated tests for the MCP layer run as part of the normal test suite (`pytest tests/ -v`), but manual testing with a real MCP client is recommended for schema and end-to-end validation.
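To see why signatures and descriptions matter, here is an illustrative stdlib-only sketch of how a tool schema can be derived from a function signature (this is not FastMCP's actual implementation; the function names are hypothetical):

```python
import inspect

def audit_page(url: str, max_pages: int = 10) -> dict:
    """Audit a single page and return pillar scores."""
    return {"url": url, "max_pages": max_pages}

def tool_schema(fn) -> dict:
    # Parameter names and annotations become the client-visible schema,
    # and the docstring becomes the tool description.
    sig = inspect.signature(fn)
    params = {name: {"type": p.annotation.__name__}
              for name, p in sig.parameters.items()}
    return {"name": fn.__name__, "description": fn.__doc__, "parameters": params}
```

Anything you change in the signature or docstring changes what MCP clients see, which is why a quick check in a real client is worthwhile after edits to `server.py`.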
## Good First Issues

New to the project? Look for issues labeled `good first issue` on GitHub. Great starting points include:
- Adding or improving tests for existing functionality
- Fixing typos or improving documentation
- Small bug fixes with a clear reproduction path
Pick one, comment that you're working on it, and open a PR when ready.
## Reporting Issues

Found a bug or have a feature request? Open an issue with:
- A clear description of the problem or suggestion
- Steps to reproduce (for bugs)
- Expected vs. actual behavior
- Your Python version and OS
## License

By contributing to Context CLI, you agree that your contributions will be licensed under the MIT License.