Welcome to the Conductor distributed testing framework! This guide will help you get started with contributing to the project.
- Development Setup
- Testing Guide
- Code Quality
- Contribution Workflow
- Architecture Overview
- Common Tasks
- Python 3.8 or higher
- Git
- Virtual environment support
# Clone the repository
git clone https://github.com/benroeder/conductor.git
cd conductor
# Create and activate virtual environment
python3 -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install in development mode with all dependencies
pip install -e ".[dev]"
# Verify installation
conduct --help
player --help

The project includes these development tools:
- pytest: Test runner with coverage reporting
- pytest-cov: Coverage plugin for pytest
- pytest-mock: Mocking support for tests
- hypothesis: Property-based testing framework
- ruff: Fast Python linter and formatter
The test suite is organized into several categories:
tests/
├── test_*.py # Unit tests for core modules
├── test_*_edge_cases.py # Edge case tests using Hypothesis
├── test_*_integration.py # Integration tests
├── localhost/ # Local integration test configs
├── timeout/ # Timeout test configs
└── multi_player/ # Multi-player end-to-end tests
# Run all tests
python -m pytest
# Run tests with coverage
python -m pytest --cov=conductor --cov-report=term-missing
# Run specific test categories
python -m pytest -m unit # Unit tests only
python -m pytest -m integration # Integration tests only
python -m pytest -m slow # Slow-running tests
# Run specific test files
python -m pytest tests/test_json_protocol.py
python -m pytest tests/test_client.py
# Run with verbose output
python -m pytest -v
# Run tests matching a pattern
python -m pytest -k "test_max_message_size"

# Run the full integration test script
./test_conductor.sh
# Manual integration testing
cd tests/localhost
# Terminal 1: Start player
../../venv/bin/player dut.cfg
# Terminal 2: Run conductor
../../venv/bin/conduct conductor.cfg
# Test with different options
../../venv/bin/conduct --format json conductor.cfg
../../venv/bin/conduct --dry-run conductor.cfg
../../venv/bin/conduct --max-message-size 20 conductor.cfg

# Run comprehensive multi-player tests
cd tests/multi_player
python test_multi_player.py
# This tests 2-10 concurrent players automatically

The project uses Hypothesis for property-based testing:
# Run hypothesis tests specifically
python -m pytest tests/test_*_hypothesis.py
# Run with more examples for thorough testing
python -m pytest tests/test_json_protocol_hypothesis.py --hypothesis-show-statistics

- Minimum Coverage: 80% overall
- Core Modules: 90%+ coverage required for:
  - json_protocol.py
  - client.py
  - phase.py
  - step.py
  - retval.py
Check current coverage:
python -m pytest --cov=conductor --cov-report=html
# Open htmlcov/index.html to view detailed coverage report

import pytest
from conductor.step import Step

class TestStep:
    def test_step_creation(self):
        """Test basic step creation."""
        step = Step("echo hello")
        assert step.cmd == "echo hello"
        assert step.spawn is False
        assert step.timeout is None

    def test_step_execution(self):
        """Test step execution."""
        step = Step("echo hello")
        result = step.run()
        assert result.code == 0
        assert "hello" in result.message

import tempfile
import configparser
from conductor.client import Client

def test_client_with_config():
    """Test client creation with real config."""
    with tempfile.NamedTemporaryFile(mode='w', suffix='.cfg', delete=False) as f:
        f.write("""
[Coordinator]
player = 127.0.0.1
conductor = 127.0.0.1
cmdport = 6970
resultsport = 6971

[Startup]
step1 = echo "test"
""")
        f.flush()

    config = configparser.ConfigParser()
    config.read(f.name)
    client = Client(config)
    assert client.host == "127.0.0.1"
    assert client.cmd_port == 6970

from hypothesis import given, strategies as st
from conductor.json_protocol import encode_message

@given(st.text(), st.dictionaries(st.text(), st.text()))
def test_encode_message_properties(msg_type, data):
    """Test that encode_message always produces valid output."""
    result = encode_message(msg_type, data)

    # Should always be bytes
    assert isinstance(result, bytes)

    # Should have a length header
    assert len(result) >= 4

    # Length header should match payload
    expected_length = len(result) - 4
    actual_length = int.from_bytes(result[:4], byteorder='big')
    assert actual_length == expected_length

The project uses ruff for linting and formatting:
# Check code style
ruff check .
# Fix auto-fixable issues
ruff check --fix .
# Format code
ruff format .

- Follow PEP 8 with these exceptions:
  - Line length: 88 characters (Black compatible)
  - Use double quotes for strings
- Documentation:
  - All public functions need docstrings
  - Use Google-style docstrings
  - Include type hints where helpful
- Error Handling:
  - Always handle expected exceptions
  - Use specific exception types
  - Include helpful error messages
- Testing:
  - Write tests for all new features
  - Include edge cases
  - Test error conditions
def send_message(sock: socket.socket, msg_type: str, data: Dict[str, Any],
                 max_message_size: Optional[int] = None) -> None:
    """Send a JSON message over a socket.

    Args:
        sock: The socket to send the message on
        msg_type: Type of message (e.g., 'config', 'phase', 'retval')
        data: Message payload data
        max_message_size: Maximum allowed message size in bytes

    Raises:
        ProtocolError: If message exceeds size limit
        OSError: If socket operation fails

    Example:
        >>> send_message(sock, 'config', {'host': '127.0.0.1'})
    """

- Check existing issues for similar work
- Create an issue to discuss major changes
- Fork the repository if you're an external contributor
# Create a feature branch
git checkout -b feature/your-feature-name
# Make your changes
# ... edit code ...
# Run tests frequently
python -m pytest tests/test_relevant_module.py
# Run full test suite before committing
python -m pytest
# Check code quality
ruff check .
ruff format .

# Run comprehensive testing
make test # If Makefile exists, otherwise:
# Unit tests
python -m pytest -m unit
# Integration tests
python -m pytest -m integration
./test_conductor.sh
# Multi-player tests
cd tests/multi_player && python test_multi_player.py
# Edge case testing
python -m pytest tests/test_*_edge_cases.py
# Manual testing
cd tests/localhost
../../venv/bin/player dut.cfg &
../../venv/bin/conduct conductor.cfg

- Use clear, descriptive commit messages
- Follow conventional commit format when possible:
  - feat: add max message size configuration
  - fix: handle binary output in step execution
  - docs: update contributing guide
  - test: add edge cases for JSON protocol
- Keep commits focused and atomic
- Include tests with feature commits
- Ensure all tests pass
- Update documentation if needed
- Add changelog entry if appropriate
- Create pull request with:
- Clear description of changes
- Link to related issues
- Test results summary
Understanding the architecture helps with contributions:
- Conductor: Orchestrates tests across multiple nodes
- Player: Executes commands on individual nodes
- JSON Protocol: Secure communication between conductor/players
- Reporter: Handles output formatting (text/JSON)
- Client: Manages player connections from conductor
- Phase: Container for test steps (startup/run/collect/reset)
- Step: Individual command execution unit
- RetVal: Standardized return values
- Config: Configuration file parser
Conductor → JSON Protocol → Player
↓ ↓ ↓
Client → TCP Socket → Command Executor
↓ ↓ ↓
Reporter ← JSON Results ← Step Results
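The property-based test earlier in this guide implies the wire format: a 4-byte big-endian length header followed by a JSON payload. As a rough illustration of that framing, here is a simplified stand-in for `encode_message` (the real `conductor/json_protocol.py` may carry extra fields such as a protocol version, so treat this as a sketch, not the actual implementation):

```python
import json

def encode_message(msg_type, data):
    """Frame a message as a 4-byte big-endian length header plus JSON payload.

    Simplified stand-in for conductor.json_protocol.encode_message; the
    payload shape ({"type": ..., "data": ...}) is an assumption.
    """
    payload = json.dumps({"type": msg_type, "data": data}).encode("utf-8")
    return len(payload).to_bytes(4, byteorder="big") + payload

def decode_message(raw):
    """Inverse of encode_message: verify the header, then parse the payload."""
    length = int.from_bytes(raw[:4], byteorder="big")
    if length != len(raw) - 4:
        raise ValueError("length header does not match payload size")
    msg = json.loads(raw[4:].decode("utf-8"))
    return msg["type"], msg["data"]
```

This matches the invariant the Hypothesis test checks: the header always equals the payload length, so a receiver can read exactly 4 bytes, then read exactly `length` more.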
- Update argument parser in scripts/conduct.py or scripts/player.py
- Add validation function if needed
- Update main() function to handle the option
- Write tests in tests/test_conduct_cli.py or tests/test_player_cli.py
- Update documentation in docs/CLI_REFERENCE.md
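The first two steps might look like the following argparse sketch. The option name mirrors the existing `--max-message-size` flag, but `positive_int` and the exact parser layout in `scripts/conduct.py` are illustrative assumptions, not the project's actual code:

```python
import argparse

def positive_int(value):
    """Hypothetical validation function: accept only positive integers."""
    number = int(value)
    if number <= 0:
        raise argparse.ArgumentTypeError(
            f"expected a positive integer, got {value!r}"
        )
    return number

def build_parser():
    """Sketch of a conduct-style CLI with a validated option added."""
    parser = argparse.ArgumentParser(prog="conduct")
    parser.add_argument("config", help="conductor configuration file")
    # New option wired to the validation function via type=
    parser.add_argument("--max-message-size", type=positive_int, default=None,
                        help="maximum allowed message size")
    return parser
```

Attaching the validator through `type=` means argparse reports bad values as usage errors automatically, which keeps `main()` free of ad-hoc checks.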
- Define message structure in json_protocol.py
- Add encoding/decoding functions
- Update protocol version if breaking change
- Write comprehensive tests including edge cases
- Update documentation in docs/ARCHITECTURE.md
- Extend Reporter class in reporter.py
- Implement format-specific methods
- Add CLI option for the new format
- Write tests for the new reporter
- Update documentation and examples
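As a purely illustrative sketch of a format-specific reporter: the class name `CsvReporter`, the `report()` method, and the result-dict shape below are all hypothetical, since the actual Reporter interface in `reporter.py` may differ:

```python
import csv
import io

class CsvReporter:
    """Hypothetical reporter rendering step results as CSV.

    The real Reporter base class in reporter.py may define a different
    interface; this only shows the shape of a format-specific method.
    """

    def report(self, results):
        """Render a list of {'host', 'step', 'code'} dicts as CSV text."""
        buffer = io.StringIO()
        writer = csv.DictWriter(buffer, fieldnames=["host", "step", "code"])
        writer.writeheader()
        writer.writerows(results)
        return buffer.getvalue()
```

Whatever the real interface turns out to be, keeping the formatting logic in one method like this makes it easy to add the matching CLI option (step 3) and to unit-test the output (step 4).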
- Use verbose logging:
  conduct -v test.cfg
  player -v config.cfg
- Check network connectivity:
  netstat -an | grep 6970   # Check if ports are open
  telnet conductor_ip 6970  # Test connectivity
- Use dry-run mode:
  conduct --dry-run test.cfg
- Examine test configurations:
  # Use simple configs from tests/localhost/
  cp tests/localhost/conductor.cfg my_test.cfg
- Documentation: Check the docs/ directory
- Examples: Look in tests/localhost/ and examples/
- Issues: Create a GitHub issue for bugs or questions
- Architecture: See docs/ARCHITECTURE.md for detailed internals
By contributing, you agree that your contributions will be licensed under the same license as the project (BSD 3-Clause).