MSST-S3 is a vendor-neutral S3 interoperability test suite. It combines best practices from Ceph s3-tests, MinIO mint, and boto3-based test frameworks, and follows the organizational patterns of xfstests/blktests.
- Vendor Neutrality: Test any S3-compatible storage system
- Itemized Tests: Each test is independently numbered and runnable (like xfstests)
- Configuration-Driven: Use kconfig for all test configuration
- Automation-First: Ansible integration for deployment and orchestration
- Multi-SDK Testing: Test the same operations across different S3 client libraries
- Comprehensive Coverage: Cover basic operations to advanced S3 features
```
msst-s3/
├── Kconfig                  # Main configuration menu
├── Makefile                 # Primary build and test targets
├── Makefile.subtrees        # Git subtree management
├── scripts/
│   ├── kconfig/             # Kconfig implementation (git subtree)
│   ├── test-runner.py       # Main test execution framework
│   └── result-analyzer.py   # Test result analysis
├── tests/
│   ├── common/              # Shared test utilities
│   │   ├── __init__.py
│   │   ├── s3_client.py     # S3 client wrapper
│   │   ├── fixtures.py      # Test fixtures
│   │   └── validators.py    # Result validators
│   ├── basic/               # Basic S3 operations (001-099)
│   ├── multipart/           # Multipart upload tests (100-199)
│   ├── versioning/          # Versioning tests (200-299)
│   ├── acl/                 # ACL tests (300-399)
│   ├── encryption/          # Encryption tests (400-499)
│   ├── lifecycle/           # Lifecycle tests (500-599)
│   ├── performance/         # Performance tests (600-699)
│   ├── stress/              # Stress tests (700-799)
│   └── compatibility/       # Vendor-specific tests (800-899)
├── workflows/
│   ├── s3-tests/
│   │   ├── Makefile         # S3 test workflow targets
│   │   └── Kconfig          # S3 test configuration
│   └── performance/
│       ├── Makefile         # Performance workflow targets
│       └── Kconfig          # Performance configuration
├── playbooks/
│   ├── s3-tests.yml         # Main S3 test playbook
│   ├── inventory/           # Ansible inventory
│   └── roles/
│       ├── s3-setup/        # S3 endpoint setup
│       ├── s3-tests/        # Test execution role
│       └── s3-results/      # Result collection
├── configs/                 # Pre-defined configurations
│   ├── aws.config           # AWS S3 configuration
│   ├── minio.config         # MinIO configuration
│   ├── ceph.config          # Ceph RGW configuration
│   └── gcs.config           # Google Cloud Storage
└── results/                 # Test results and reports
```
Test numbering follows the xfstests/blktests pattern:
- 001-099: Basic operations (bucket/object CRUD)
- 100-199: Multipart uploads
- 200-299: Versioning
- 300-399: Access control (ACL, policies)
- 400-499: Encryption (SSE-S3, SSE-C, SSE-KMS)
- 500-599: Lifecycle management
- 600-699: Performance tests
- 700-799: Stress tests
- 800-899: Vendor-specific compatibility
- 900-999: Reserved for future use
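For illustration, a hypothetical helper (not part of the suite) that maps a numeric test ID to its group under this scheme:

```python
# Map a numeric test ID to its group, following the ranges above.
GROUP_RANGES = [
    (1, 99, "basic"),
    (100, 199, "multipart"),
    (200, 299, "versioning"),
    (300, 399, "acl"),
    (400, 499, "encryption"),
    (500, 599, "lifecycle"),
    (600, 699, "performance"),
    (700, 799, "stress"),
    (800, 899, "compatibility"),
]

def group_for(test_id: int) -> str:
    """Return the test group for a numeric test ID, or 'reserved'."""
    for low, high, name in GROUP_RANGES:
        if low <= test_id <= high:
            return name
    return "reserved"  # 900-999 and anything out of range
```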
- Target Configuration
  - S3 endpoint URL
  - Authentication credentials
  - Region settings
  - TLS/SSL options
- Test Selection
  - Test groups to run
  - Individual test selection
  - Skip lists for known failures
- Test Parameters
  - Object sizes
  - Concurrency levels
  - Duration for stress tests
  - Performance thresholds
- Output Configuration
  - Result format (JSON, YAML, JUnit)
  - Logging verbosity
  - Report generation
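A hypothetical Kconfig fragment for the target-configuration options; the symbol names here are invented for illustration and may not match the suite's actual Kconfig:

```kconfig
# Illustrative only: actual symbol names in msst-s3's Kconfig may differ.
menu "Target Configuration"

config S3_ENDPOINT_URL
	string "S3 endpoint URL"
	default "https://s3.amazonaws.com"

config S3_REGION
	string "Region"
	default "us-east-1"

config S3_USE_TLS
	bool "Use TLS/SSL"
	default y

config S3_VERIFY_CERTS
	bool "Verify TLS certificates"
	depends on S3_USE_TLS
	default y

endmenu
```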
- Test Runner (`scripts/test-runner.py`)
  - Parse configuration from kconfig
  - Execute selected tests
  - Collect and format results
  - Handle test dependencies
- S3 Client Wrapper (`tests/common/s3_client.py`)
  - Abstraction over boto3/other SDKs
  - Vendor-specific workarounds
  - Connection pooling
  - Retry logic
- Test Structure

```python
# tests/basic/001
def test_001_bucket_create():
    """Create a simple bucket"""
    # Test implementation

# tests/basic/002
def test_002_bucket_list():
    """List buckets"""
    # Test implementation
```
- Setup Phase
  - Configure S3 endpoints
  - Install dependencies
  - Validate connectivity
- Execution Phase
  - Run selected test groups
  - Monitor progress
  - Handle failures
- Collection Phase
  - Gather results
  - Generate reports
  - Archive artifacts
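The setup phase's connectivity validation can be as simple as a TCP reachability probe; this is a hypothetical stdlib helper, not the playbook's actual check:

```python
import socket
from urllib.parse import urlparse

def check_endpoint(endpoint_url: str, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to the endpoint can be opened."""
    parsed = urlparse(endpoint_url)
    port = parsed.port or (443 if parsed.scheme == "https" else 80)
    try:
        with socket.create_connection((parsed.hostname, port), timeout=timeout):
            return True
    except OSError:
        return False
```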
```shell
# Configuration
make menuconfig        # Interactive configuration
make defconfig         # Default configuration

# Testing
make test              # Run all enabled tests
make test-basic        # Run basic tests only
make test GROUP=acl    # Run specific test group
make test TEST=001     # Run specific test

# Ansible automation
make s3-deploy         # Deploy test infrastructure
make s3-run            # Run tests via ansible
make s3-results        # Collect and analyze results

# Maintenance
make refresh-kconfig   # Update kconfig subtree
make clean             # Clean build artifacts
```
- Configuration Loading
  - Read .config from kconfig
  - Generate s3_config.yaml
  - Validate configuration
- Test Discovery
  - Scan test directories
  - Filter by configuration
  - Build execution plan
- Test Execution
  - Initialize S3 client
  - Run tests in order
  - Handle dependencies
  - Collect results
- Result Processing
  - Format output (JSON/YAML/JUnit)
  - Generate summary report
  - Archive for analysis
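As an illustration of the configuration-loading step, a hypothetical parser for kconfig's `.config` output (the real `test-runner.py` may do this differently):

```python
import re

def parse_dotconfig(text: str) -> dict:
    """Parse kconfig .config output into a flat dict.

    CONFIG_FOO=y              -> {"FOO": True}
    CONFIG_BAR="value"        -> {"BAR": "value"}
    # CONFIG_BAZ is not set   -> {"BAZ": False}
    """
    out = {}
    for line in text.splitlines():
        m = re.match(r"# CONFIG_(\w+) is not set", line)
        if m:
            out[m.group(1)] = False
            continue
        m = re.match(r"CONFIG_(\w+)=(.*)", line)
        if m:
            key, val = m.group(1), m.group(2)
            if val == "y":
                out[key] = True
            elif val == "n":
                out[key] = False
            else:
                out[key] = val.strip('"')
    return out
```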
- Configuration profiles for each vendor
- Vendor-specific test markers
- Feature capability detection
- Workaround management
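Feature capability detection can be sketched as a generic probe classifier. The error-code set and the botocore-style `.response["Error"]["Code"]` exception shape are assumptions; real endpoints vary:

```python
# S3 error codes that typically indicate an unimplemented feature.
UNSUPPORTED_CODES = {"NotImplemented", "MethodNotAllowed", "NotSupported"}

def capability(probe):
    """Run probe() and classify the endpoint's support for the operation.

    probe is a zero-argument callable performing one S3 call; failures
    are expected to raise an exception carrying a botocore-style
    .response["Error"]["Code"].
    """
    try:
        probe()
        return "supported"
    except Exception as exc:
        code = getattr(exc, "response", {}).get("Error", {}).get("Code", "")
        if code in UNSUPPORTED_CODES:
            return "unsupported"
        return "error"  # endpoint knows the op but the call itself failed
```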
- AWS S3 - Reference implementation
- MinIO - Open source S3 compatible
- Ceph RGW - Ceph RADOS Gateway
- Google Cloud Storage - S3 compatibility mode
- Azure Blob Storage - S3 API layer
- Wasabi - S3 compatible cloud storage
- DigitalOcean Spaces - S3 compatible object storage
- Operation latency (p50, p95, p99)
- Throughput (MB/s, ops/s)
- Concurrency scaling
- Error rates
- Single vs multipart upload performance
- Concurrent operation scaling
- Large object handling
- Small file performance
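The latency percentiles above can be computed with a nearest-rank rule; this helper is illustrative, not the suite's actual analyzer:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest sample >= p percent of the data."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered)) - 1
    return ordered[max(0, rank)]

def latency_summary(samples):
    """Report the p50/p95/p99 latencies used in the performance tests."""
    return {q: percentile(samples, q) for q in (50, 95, 99)}
```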
- Run against multiple S3 implementations
- Test matrix for different configurations
- Result aggregation and reporting
- Regression detection

To add a new test:
- Create a test file in the appropriate category
- Follow naming convention (e.g., tests/basic/042)
- Update Kconfig if new options needed
- Document in test catalog

To add support for a new vendor:
- Create a configuration profile
- Add to vendor detection logic
- Document known limitations
- Update CI matrix
- Python 3.8+
- boto3
- pytest
- ansible
- make
- kconfig tools
- Multi-SDK Testing: Beyond boto3 (aws-sdk-go, minio-py, etc.)
- Compliance Testing: S3 API specification validation
- Chaos Testing: Failure injection and recovery
- Cost Analysis: Operation cost estimation
- Security Testing: Permission and encryption validation