This guide covers installing and setting up skill-split on your system.
- System Requirements
- Installation Methods
- Verification
- Configuration
- Optional Features
- Troubleshooting
## System Requirements

- Python: 3.8 or higher
- Operating System: Linux, macOS, or Windows
- Disk Space: ~50 MB for the core installation
- Supabase Account: For remote storage and semantic search (optional)
- OpenAI API Key: For vector embeddings and semantic search (optional)
- Git: For cloning the repository
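The Python floor above can be checked programmatically. A minimal sketch (only the 3.8 minimum comes from this guide; the helper itself is illustrative):

```python
import sys

MIN_VERSION = (3, 8)  # minimum Python version skill-split supports

def meets_requirement(version_info=None):
    """Return True when the interpreter satisfies the minimum version."""
    if version_info is None:
        version_info = sys.version_info
    return tuple(version_info[:2]) >= MIN_VERSION

if __name__ == "__main__":
    print("Python OK" if meets_requirement() else "Python 3.8+ required")
```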
## Installation Methods

### From Source

This method gives you the latest code and allows for easy development.

```bash
# Clone the repository
git clone https://github.com/JoeyBe1/skill-split.git
cd skill-split

# Install in editable mode
pip install -e .

# Or install dependencies manually
pip install click pytest
```

Advantages:
- Latest features and bug fixes
- Easy to modify and contribute
- Automatic PATH configuration
### From PyPI

When published to PyPI:

```bash
pip install skill-split
```

### Using Docker

Create a Dockerfile:
```dockerfile
FROM python:3.11-slim

WORKDIR /app

# Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application
COPY . .

# Set entrypoint
ENTRYPOINT ["python", "skill_split.py"]
```

Build and run:

```bash
docker build -t skill-split .
docker run -v $(pwd):/data skill-split parse /data/SKILL.md
```

### Using pipenv

```bash
# Install pipenv if needed
pip install pipenv

# Clone repository
git clone https://github.com/JoeyBe1/skill-split.git
cd skill-split

# Install with pipenv
pipenv install --dev

# Run commands
pipenv run python skill_split.py --help
```

## Verification

After installation, verify that skill-split is working correctly:
```bash
# Check version/help
python skill_split.py --help

# Run test suite
python -m pytest test/ -v

# Parse a test file
python skill_split.py parse test/fixtures/simple_skill.md
```

Expected output from --help:
```text
Usage: skill_split.py [OPTIONS] COMMAND [ARGS]...

  Intelligently split YAML and Markdown files into sections for progressive
  disclosure.

Options:
  --version  Show the version and exit.
  --help     Show this message and exit.

Commands:
  backup           Create a backup of the database
  checkin          Check in a deployed file
  checkout         Deploy a file from the library
  get              Get file metadata and frontmatter
  get-section      Get a specific section by ID
  ingest           Ingest files into Supabase
  list             List all sections in a file
  list-library     List all files in the library
  next             Get next section (progressive disclosure)
  parse            Parse a file and display its structure
  restore          Restore database from backup
  search           Search sections by content (BM25)
  search-library   Search the library
  search-semantic  Search sections using semantic/hybrid search
  status           Show active checkouts
  store            Store a file in the database
  tree             Display section hierarchy as a tree
  validate         Validate a file structure
  verify           Verify round-trip integrity
```
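Commands like `parse`, `list`, and `tree` all revolve around splitting a document into heading-delimited sections. As an illustrative sketch of that idea for Markdown (this is not skill-split's actual parser; `split_sections` is a hypothetical helper, and text before the first heading is ignored here):

```python
import re

HEADING = re.compile(r"^(#{1,6})\s+(.*)$")  # ATX headings: "# Title" ... "###### Title"

def split_sections(markdown_text):
    """Split Markdown into {"level", "title", "content"} dicts, one per heading."""
    sections, current = [], None
    for line in markdown_text.splitlines():
        match = HEADING.match(line)
        if match:
            if current is not None:
                sections.append(current)
            current = {"level": len(match.group(1)),
                       "title": match.group(2),
                       "content": ""}
        elif current is not None:
            current["content"] += line + "\n"
    if current is not None:
        sections.append(current)
    return sections
```

A real parser also has to track byte offsets, parent/child links, and YAML frontmatter, as the `sections` table later in this guide suggests.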
## Configuration

Create a .env file in your project directory:

```bash
# Supabase Configuration (optional, for remote storage)
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_PUBLISHABLE_KEY=your-anon-key

# OpenAI Configuration (optional, for semantic search)
OPENAI_API_KEY=sk-your-key-here

# Enable Embeddings (required for semantic search)
ENABLE_EMBEDDINGS=true
```

skill-split uses SQLite by default. Database locations:

- Default: `./skill_split.db` (current directory)
- Production: `~/.claude/databases/skill-split.db`
- Custom: specify with the `--db` flag
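That path logic can be sketched in a few lines of Python (illustrative only; `open_db` is a hypothetical helper, not part of skill-split):

```python
import sqlite3
from pathlib import Path

DEFAULT_DB = Path("skill_split.db")  # default: current directory
PRODUCTION_DB = Path.home() / ".claude" / "databases" / "skill-split.db"

def open_db(custom=None):
    """Open the SQLite database the --db flag implies: custom path if given, else default."""
    path = Path(custom) if custom is not None else DEFAULT_DB
    path.parent.mkdir(parents=True, exist_ok=True)  # ensure e.g. ~/.claude/databases/ exists
    return sqlite3.connect(path)
```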
```bash
# Use default database
./skill_split.py store SKILL.md

# Use custom database
./skill_split.py store SKILL.md --db ~/.claude/databases/skill-split.db

# Use production database
./skill_split.py search "python" --db ~/.claude/databases/skill-split.db
```

For remote storage and semantic search:
1. Create a Supabase Project

   Visit https://supabase.com and create a new project.

2. Get Your Credentials

   - Go to Project Settings > API
   - Copy your project URL and anon/public key

3. Configure Database Tables

   Run the SQL setup script in your Supabase SQL Editor:

   ```sql
   -- Create files table
   CREATE TABLE files (
     id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
     path TEXT UNIQUE NOT NULL,
     original_hash TEXT,
     type TEXT,
     format TEXT,
     frontmatter TEXT,
     created_at TIMESTAMP DEFAULT NOW()
   );

   -- Create sections table
   CREATE TABLE sections (
     id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
     file_id UUID NOT NULL REFERENCES files(id) ON DELETE CASCADE,
     title TEXT,
     level INTEGER,
     content TEXT,
     start_byte INTEGER,
     end_byte INTEGER,
     line_start INTEGER,
     line_end INTEGER,
     parent_id UUID REFERENCES sections(id) ON DELETE CASCADE,
     order_index INTEGER,
     embedding VECTOR(1536),
     created_at TIMESTAMP DEFAULT NOW()
   );

   -- Create indexes
   CREATE INDEX idx_sections_file_id ON sections(file_id);
   CREATE INDEX idx_sections_parent_id ON sections(parent_id);
   ```

4. Set Environment Variables

   ```bash
   export SUPABASE_URL="https://your-project.supabase.co"
   export SUPABASE_PUBLISHABLE_KEY="your-anon-key"
   ```
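Remote commands fail with confusing errors when these variables are unset, so it can help to fail fast. A small illustrative check (`supabase_config` is a hypothetical helper, not part of skill-split):

```python
import os

REQUIRED_VARS = ("SUPABASE_URL", "SUPABASE_PUBLISHABLE_KEY")

def supabase_config():
    """Return (url, key) from the environment, or raise naming every missing variable."""
    missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
    if missing:
        raise RuntimeError("Missing environment variables: " + ", ".join(missing))
    return tuple(os.environ[name] for name in REQUIRED_VARS)
```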
## Optional Features

### Semantic Search

Requires the OpenAI API and Supabase:

```bash
# Set OpenAI API key
export OPENAI_API_KEY="sk-your-key-here"

# Enable embeddings
export ENABLE_EMBEDDINGS=true

# Run semantic search
./skill_split.py search-semantic "code execution" --vector-weight 0.7
```

When storing files with semantic search enabled:

```bash
ENABLE_EMBEDDINGS=true ./skill_split.py store SKILL.md
```

This generates embeddings for each section using OpenAI's text-embedding-3-small model.
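The `--vector-weight` flag suggests a weighted blend of semantic (vector) and keyword (BM25) relevance. A sketch of such a blend, assuming both scores are normalized to [0, 1] (illustrative only; skill-split's actual ranking may differ):

```python
def hybrid_score(vector_score, bm25_score, vector_weight=0.7):
    """Blend a semantic similarity score with a BM25 keyword score."""
    return vector_weight * vector_score + (1.0 - vector_weight) * bm25_score

def rank(candidates, vector_weight=0.7):
    """Order (section, vector_score, bm25_score) tuples by blended score, best first."""
    return sorted(
        candidates,
        key=lambda c: hybrid_score(c[1], c[2], vector_weight),
        reverse=True,
    )
```

With `--vector-weight 0.7`, semantic similarity dominates but strong keyword matches can still surface.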
Backups are stored in ~/.claude/backups/ by default:

```bash
# Create backup
./skill_split.py backup --output my-backup

# Restore from backup
./skill_split.py restore my-backup --overwrite
```

## Troubleshooting

Problem: `python: command not found`
Solution: Make sure Python 3.8+ is installed and in your PATH:

```bash
# Check Python version
python --version
# or
python3 --version

# On macOS with Homebrew
brew install python@3.11

# On Ubuntu/Debian
sudo apt-get install python3.11
```

Problem: `Permission denied` when running skill_split.py
Solution: Make the script executable:

```bash
chmod +x skill_split.py
```

Problem: `ModuleNotFoundError: No module named 'click'`
Solution: Install dependencies:

```bash
pip install -r requirements.txt
# or
pip install click pytest
```

Problem: `sqlite3.OperationalError: database is locked`
Solution: Close other processes using the database, or use a different database:

```bash
./skill_split.py store SKILL.md --db skill-split-new.db
```

Problem: `supabase.ClientError: Invalid API key`
Solution: Verify your environment variables:

```bash
echo $SUPABASE_URL
echo $SUPABASE_PUBLISHABLE_KEY

# Test connection
curl -I $SUPABASE_URL
```

Problem: `openai.error.AuthenticationError: No API key provided`
Solution: Set your OpenAI API key:

```bash
export OPENAI_API_KEY="sk-your-key-here"

# Verify it's set
echo $OPENAI_API_KEY
```

Problem: Tests fail with import errors
Solution: Install test dependencies:

```bash
pip install pytest pytest-mock pytest-cov
```

To uninstall skill-split:

```bash
# Remove the installation
pip uninstall skill-split

# Or if installed in editable mode
rm -rf skill-split

# Remove default database
rm skill_split.db

# Remove production database
rm ~/.claude/databases/skill-split.db

# Remove backups
rm -rf ~/.claude/backups/
```

After installation:
- Read the Quick Start: README.md
- Try Examples: EXAMPLES.md
- Explore the API: API.md
- Contribute: CONTRIBUTING.md
For installation issues:
- Check the Troubleshooting section
- Review existing GitHub issues
- Open a new issue with your error message and environment details
Last Updated: 2026-02-10