This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
Claude's training data may lag behind current releases. When reviewing docs or code, don't flag unfamiliar names as speculative or non-existent. Assume the authors are referencing newer, valid resources (e.g., model names like GPT-5, GitHub runner types like ubuntu-slim, library versions, etc.).
- Use top-level imports (only use lazy imports when necessary)
- Only add docstrings in tests when they provide additional context
- Only add comments that explain non-obvious logic or provide additional context
- When touching the SQLAlchemy tracking store, keep all workspace-aware paths and validations intact; never drop workspace plumbing even if the change focuses on single-tenant behavior
- New functionality in the tracking layer should be mirrored by workspace-aware tests (e.g., add workspace variants in `tests/store/tracking/test_sqlalchemy_store_workspace.py` when applicable)
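To make the import guideline concrete, here is a minimal Python sketch. It is illustrative only: `summarize` is a hypothetical function, and the lazily imported `decimal` (stdlib) stands in for a genuinely heavy or optional dependency.

```python
import json  # cheap dependency used throughout the module: top-level import

def summarize(values):
    """Sum values exactly and return a JSON summary."""
    # Lazy import: defer the cost until this code path actually runs.
    # (decimal is stdlib; imagine a large ML framework here instead.)
    from decimal import Decimal

    total = sum(Decimal(str(v)) for v in values)
    return json.dumps({"total": str(total)})
```

Note that the comments explain the non-obvious choice (why the import is deferred) rather than restating what the code does.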
MLflow is an open-source platform for managing the end-to-end machine learning lifecycle. It provides tools for:
- Experiment tracking
- Model versioning and deployment
- LLM observability and tracing
- Model evaluation
- Prompt management
```bash
# Start both MLflow backend and React frontend dev servers
# (The script will automatically clean up any existing servers)
nohup uv run bash dev/run-dev-server.sh > /tmp/mlflow-dev-server.log 2>&1 &

# Monitor the logs
tail -f /tmp/mlflow-dev-server.log

# Servers will be available at:
# - MLflow backend: http://localhost:5000
# - React frontend: http://localhost:3000
```

This uses uv (a fast Python package manager) to automatically manage dependencies and run the development environment.
For debugging errors, enable debug logging (must be set before importing mlflow):

```bash
export MLFLOW_LOGGING_LEVEL=DEBUG
```

To run the MLflow dev server that proxies requests to a Databricks workspace:
```bash
# IMPORTANT: All four environment variables below are REQUIRED for proper Databricks backend operation
# (export order does not matter, but all four must be set before starting the server)
export DATABRICKS_HOST="https://your-workspace.databricks.com"  # Your Databricks workspace URL
export DATABRICKS_TOKEN="your-databricks-token"                 # Your Databricks personal access token
export MLFLOW_TRACKING_URI="databricks"                         # Must be set to "databricks"
export MLFLOW_REGISTRY_URI="databricks-uc"                      # "databricks-uc" for Unity Catalog, or "databricks" for the workspace model registry

# Start the dev server with these environment variables
# (The script will automatically clean up any existing servers)
nohup uv run bash dev/run-dev-server.sh > /tmp/mlflow-dev-server.log 2>&1 &

# Monitor the logs
tail -f /tmp/mlflow-dev-server.log

# The MLflow server will now proxy tracking and model registry requests to Databricks
# Access the UI at http://localhost:3000 to see your Databricks experiments and models
```

Note: The MLflow server acts as a proxy, forwarding API requests to your Databricks workspace while serving the local React frontend. This allows you to develop and test UI changes against real Databricks data.
If PyPI is unreachable, add `--frozen` to `uv run` commands that should use the existing `uv.lock` as-is without modifying the environment. This works when the required dependencies are already installed or available in the local cache:

```bash
uv run --frozen pytest tests/
```

```bash
# First-time setup: Install test dependencies
uv sync
uv pip install -r requirements/test-requirements.txt

# Run Python tests
uv run pytest tests/

# Run specific test file
uv run pytest tests/test_version.py

# Run tests with specific package versions
uv run --with abc==1.2.3 --with xyz==4.5.6 pytest tests/test_version.py

# Run tests with optional dependencies/extras
uv run --with transformers pytest tests/transformers
uv run --extra gateway pytest tests/gateway
```

```bash
# Python linting and formatting with Ruff
uv run ruff check . --fix  # Lint with auto-fix
uv run ruff format .       # Format code

# Custom MLflow linting with Clint
uv run clint .  # Run MLflow custom linter

# Check for MLflow spelling typos
uv run bash dev/mlflow-typo.sh .
```

```bash
# Run tests with minimal dependencies (skinny client)
uv run bash dev/run-python-skinny-tests.sh
```

```bash
# Build documentation site (needs gateway extras for API doc generation)
uv run --all-extras bash dev/build-docs.sh --build-api-docs

# Build with R docs included
uv run --all-extras bash dev/build-docs.sh --build-api-docs --with-r-docs

# Serve documentation locally (after building)
cd docs && npm run serve --port 8080
```

- `pyproject.toml`: Package configuration and tool settings
- `.python-version`: Minimum Python version (3.10)
- `requirements/`: Dependency specifications
- `mlflow/ml-package-versions.yml`: Supported ML framework versions
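The `.python-version` minimum mentioned above can be checked mechanically. A sketch of such a check (a hypothetical helper, not part of the repo):

```python
import sys

def meets_minimum(minimum: str, current=sys.version_info) -> bool:
    """Check the running interpreter against a '.python-version'-style minimum, e.g. '3.10'."""
    major, minor = (int(part) for part in minimum.split(".")[:2])
    # version_info compares like a tuple, so (major, minor) comparison suffices
    return current[:2] >= (major, minor)
```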
For frontend development (React, TypeScript, UI components), see `mlflow/server/js/CLAUDE.md`, which covers:
- Development server setup with hot reload
- Available yarn scripts (testing, linting, formatting, type checking)
- UI components and design system usage
- Project structure and best practices
When committing changes:
- DCO sign-off: All commits MUST use the `-s` flag (otherwise CI will reject them)
- Co-Authored-By trailer: Include when Claude Code authors or co-authors changes
- Pre-commit hooks: Run before committing (see Pre-commit Hooks)

```bash
# Commit with required DCO sign-off
git commit -s -m "Your commit message

Co-Authored-By: Claude <noreply@anthropic.com>"

# Push your changes
git push origin <your-branch>
```

When creating pull requests, read the instructions at the top of the PR template and follow them carefully.
Use GitHub CLI to check for failing CI:

```bash
# Check workflow runs for current branch
gh run list --branch $(git branch --show-current)

# View details of a specific run
gh run view <run-id>

# Watch a run in progress
gh run watch
```

The repository uses pre-commit for code quality. Install hooks with:
```bash
uv run pre-commit install --install-hooks
uv run pre-commit run install-bin -a -v
```

Run pre-commit manually:
```bash
# Run on all files
uv run pre-commit run --all-files

# Run on specific files
uv run pre-commit run --files path/to/file.py

# Run a specific hook
uv run pre-commit run ruff --all-files
```

This runs Ruff, the typos checker, and other tools automatically before commits.