This document describes the testing strategy and how to run tests for ShellDock.
ShellDock has two types of tests:
- Unit Tests - Go unit tests that test individual functions and packages
- Integration Tests - Shell scripts that test the full CLI end-to-end
```sh
make test
```
This runs both unit tests and integration tests.
```sh
make test-unit
# or
go test ./...
```
With coverage:
```sh
go test -v -race -coverprofile=coverage.out ./...
go tool cover -func=coverage.out
```
With HTML coverage report:
```sh
go test -coverprofile=coverage.out ./...
go tool cover -html=coverage.out -o coverage.html
```

```sh
make test-integration
# or
./test/test-suite.sh
```
Run a specific integration test:
```sh
./test/test-all-features.sh
```
Unit tests are located alongside the source code in `*_test.go` files:
- `internal/repo/repository_test.go` - Repository operations
- `internal/repo/manager_test.go` - Repository manager
- `internal/config/config_test.go` - Configuration management
- `internal/cli/run_test.go` - CLI command execution logic
- `internal/cli`: 23.4% coverage
- `internal/config`: 42.5% coverage
- `internal/repo`: 56.2% coverage
```sh
# Test a specific package
go test ./internal/repo -v

# Test a specific function
go test ./internal/repo -v -run TestGetCommandSet

# Run with race detector
go test -race ./...

# Run tests multiple times (to catch flaky tests)
go test -count=10 ./...
```
Integration tests are shell scripts in the `test/` directory that test the full CLI:
- `test/test-suite.sh` - Comprehensive test suite
- `test/test-all-features.sh` - Feature-specific tests
- `test/test-*.yaml` - Test command set files
- ✅ Basic functionality (list, show, run)
- ✅ Versioning (v1, v2, v3, latest detection)
- ✅ Platform support (detection, configuration, platform-specific commands)
- ✅ Step filtering (--skip, --only, ranges)
- ✅ Flag combinations (version + skip, version + only, etc.)
- ✅ Error handling (non-existent sets, invalid formats, conflicts)
- ✅ Command execution (with -a flag)
- ✅ Dynamic arguments (--args flag, interactive prompting)
- ✅ Edge cases (empty sets, platform-only commands, etc.)
```sh
# Full test suite
./test/test-suite.sh

# Feature tests
./test/test-all-features.sh

# With verbose output
bash -x ./test/test-suite.sh
```
Tests run automatically in GitHub Actions:
Runs on:
- Push to main/master/develop branches
- Pull requests
- Manual trigger
Tests:
- Unit tests on multiple OS (Ubuntu, macOS, Windows)
- Multiple Go versions (1.21, 1.22)
- Linting with golangci-lint
- Integration tests
- Build verification
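The CI matrix described above might look roughly like the following workflow sketch. This is an illustrative assumption, not the repository's actual workflow: the file name, job id, and action versions are guesses.

```yaml
# Hypothetical .github/workflows/test.yml sketch; the real workflow may differ
name: test
on:
  push:
    branches: [main, master, develop]
  pull_request:
  workflow_dispatch:
jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
        go: ['1.21', '1.22']
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: ${{ matrix.go }}
      - run: go test -race ./...
```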
Tests must pass before any release can happen.
The release workflow:
- Runs unit tests first (the `test` job)
- Only proceeds with builds if tests pass
- All package jobs depend on the `test` job
Create a `*_test.go` file in the same package:

```go
package repo

import "testing"

func TestMyFunction(t *testing.T) {
	// Arrange
	input := "test"

	// Act
	result := MyFunction(input)

	// Assert
	if result != "expected" {
		t.Errorf("Expected 'expected', got %q", result)
	}
}
```
Add test cases to `test/test-suite.sh`:
```sh
test_start "My new feature test"
if sd my-command 2>&1 | grep -q "expected output"; then
    test_pass
else
    test_fail "Expected output not found"
fi
```

Unit test files:
- `internal/repo/repository_test.go` - 11 tests
- `internal/repo/manager_test.go` - 3 tests
- `internal/config/config_test.go` - 7 tests
- `internal/cli/run_test.go` - 6 tests

Integration test files:
- `test/test-suite.sh` - Main integration test suite
- `test/test-all-features.sh` - Feature-specific tests
- `test/test-clean.yaml` - Clean test command set
- `test/test-commands.yaml` - Command test set
- `test/test-multi-version.yaml` - Multi-version test set
Current coverage:
- Overall: ~33.6%
- Target: 70%+ for critical packages
Areas needing more coverage:
- TUI package (0% coverage) - Terminal UI components
- CLI package (23.4% coverage) - More command execution scenarios
- Config package (42.5% coverage) - Edge cases in platform detection
- Repo package (56.2% coverage) - Error handling and edge cases
```sh
# Verbose output
go test -v ./...

# Run a single test
go test -v -run TestGetCommandSet ./internal/repo

# Debug a package's tests with Delve
dlv test ./internal/repo

# Run with race detector
go test -race ./...

# Fail slow tests with a timeout
go test -timeout 30s ./...
```

- Write tests before fixing bugs - Reproduce the bug in a test first
- Test edge cases - Empty inputs, nil values, invalid data
- Use table-driven tests - For multiple test cases
- Keep tests fast - Unit tests should run in milliseconds
- Test error paths - Don't just test happy paths
- Use meaningful test names - `TestGetCommandSet_NotFound` is better than `Test1`
- Clean up resources - Use `t.TempDir()` for temporary files
- Test in isolation - Tests shouldn't depend on each other
- Check for hardcoded paths
- Verify environment variables
- Check for race conditions (use the `-race` flag)
- Ensure test data is included in the repository
- Ensure the `shelldock` binary is built: `go build -o shelldock .`
- Check test directory permissions
- Verify test YAML files are valid
- Check for platform-specific issues
- Run with `-coverprofile=coverage.out`
- Ensure you're testing the right packages
- Check that test files are in the same package
Ways to contribute to testing:
- Add tests for new features
- Increase coverage for existing code
- Refactor to make code more testable
- Add integration tests for new CLI features
- Document test scenarios in this file