Crawl every skill from ClawHub, migrate metadata.openclaw to metadata.strawpot, and publish to StrawHub.
See DESIGN.md for detailed architecture.
Requires Python 3.10+.

```bash
pip install -e .
```
Dry run (crawl and transform without publishing):

```bash
clawhub-importer --dry-run
```

Dump the transformed skills to a directory for inspection:

```bash
clawhub-importer --dump-dir ./output
```

Publish to StrawHub:

```bash
export STRAWHUB_URL="https://your-strawhub-instance.dev"
export STRAWHUB_TOKEN="your-api-token"
clawhub-importer --publish
```

Or pass the target URL explicitly:

```bash
clawhub-importer --publish --target https://your-strawhub-instance.dev --token your-api-token
```

Process only specific skills:

```bash
clawhub-importer --publish --slugs gridtrx 12306
```

Force a full re-import, ignoring previous state:

```bash
clawhub-importer --publish --force
```

All options:

```
clawhub-importer [OPTIONS]

  --target URL        StrawHub base URL (or set the STRAWHUB_URL env var)
  --token TOKEN       StrawHub API bearer token (or set the STRAWHUB_TOKEN env var)
  --dry-run           Crawl and transform but don't actually publish
  --publish           Actually publish to StrawHub (requires --token and --target)
  --dump-dir DIR      Directory to dump transformed skills for inspection
  --slugs SLUG [...]  Only process the given skill slugs
  --force             Re-import all skills, ignoring previous state
  --state-file PATH   Path to the import state file (default: .clawhub_importer_state.json)
  -v, --verbose       Enable debug logging
```
The importer tracks which skills have been imported and at which version in .clawhub_importer_state.json. On subsequent runs, it skips unchanged skills and only downloads new or updated ones. Use --force to re-import everything.
Skills already claimed by another user on StrawHub (a 400 "You do not own this skill" response) are permanently added to the `skipped_slugs` list in the state file, so they are never re-downloaded or re-attempted.
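The state-file logic described above can be sketched roughly as follows. The field names (`versions`, `skipped_slugs`) and function names are assumptions for illustration, not the importer's actual schema:

```python
import json
from pathlib import Path

# Assumed state-file shape: {"versions": {slug: version}, "skipped_slugs": [slug]}

def load_state(path: str = ".clawhub_importer_state.json") -> dict:
    """Load import state, defaulting to an empty state on first run."""
    p = Path(path)
    if p.exists():
        return json.loads(p.read_text())
    return {"versions": {}, "skipped_slugs": []}

def should_download(state: dict, slug: str, version: str, force: bool = False) -> bool:
    """Skip unchanged skills and permanently skipped ones."""
    if slug in state["skipped_slugs"]:
        return False  # claimed by another user; never retry
    if force:
        return True
    return state["versions"].get(slug) != version

def mark_imported(state: dict, slug: str, version: str) -> None:
    state["versions"][slug] = version

def mark_skipped(state: dict, slug: str) -> None:
    """Record a 400 'You do not own this skill' response."""
    if slug not in state["skipped_slugs"]:
        state["skipped_slugs"].append(slug)
```

Note that the `skipped_slugs` check wins over `--force`, matching the "never re-attempted" behavior above.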
A daily import workflow runs via .github/workflows/import.yml:
- 06:00 UTC — imports to the preview environment
- 08:00 UTC — imports to production
The workflow uses actions/cache to persist the state file between runs. Manual dispatch is also supported via workflow_dispatch with a target selector (preview or production).
Required repository secrets: STRAWHUB_PREVIEW_URL, STRAWHUB_PREVIEW_TOKEN, STRAWHUB_PROD_URL, STRAWHUB_PROD_TOKEN.
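The workflow wiring can be sketched like this; the job layout, cache key, and step names are illustrative, not the repository's actual `import.yml`:

```yaml
# Sketch of .github/workflows/import.yml (illustrative, assumptions throughout)
on:
  schedule:
    - cron: "0 6 * * *"   # 06:00 UTC -> preview
    - cron: "0 8 * * *"   # 08:00 UTC -> production
  workflow_dispatch:
    inputs:
      target:
        type: choice
        options: [preview, production]

jobs:
  import:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/cache@v4
        with:
          path: .clawhub_importer_state.json
          key: importer-state
      - run: pip install -e .
      - run: clawhub-importer --publish
        env:
          STRAWHUB_URL: ${{ secrets.STRAWHUB_PREVIEW_URL }}
          STRAWHUB_TOKEN: ${{ secrets.STRAWHUB_PREVIEW_TOKEN }}
```

A real version would select the preview or production secrets based on the cron schedule or the `target` input.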
The importer preserves original metadata.openclaw / metadata.clawdbot and adds metadata.strawpot alongside it:
Before (ClawHub):

```
metadata: {"openclaw":{"emoji":"...","requires":{"bins":["node"],"env":["API_KEY"]}}}
```

After (StrawHub):
```yaml
metadata:
  openclaw:
    emoji: "..."
    requires:
      bins: [node]
      env: [API_KEY]
  strawpot:
    dependencies: []
    tools:
      node:
        description: "Required binary: node"
        install:
          macos: brew install node
          linux: apt install nodejs
          windows: winget install OpenJS.NodeJS
```

The importer respects ClawHub's rate limits (120 requests per 60 s for the list API, 20 requests per 60 s for downloads) by reading response headers and throttling automatically. Filtering against the state file before downloading avoids spending the expensive download quota on unchanged skills.
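The metadata transformation above can be sketched as a small function. The per-binary install commands and the `strawpot` field names are taken from the example; the helper name itself and the lookup-table approach are assumptions:

```python
def add_strawpot_metadata(metadata: dict) -> dict:
    """Derive a metadata.strawpot block from metadata.openclaw,
    leaving the original openclaw block untouched."""
    openclaw = metadata.get("openclaw", {})
    bins = openclaw.get("requires", {}).get("bins", [])

    # Assumed table of per-platform install commands for known binaries.
    install_commands = {
        "node": {
            "macos": "brew install node",
            "linux": "apt install nodejs",
            "windows": "winget install OpenJS.NodeJS",
        },
    }

    tools = {}
    for name in bins:
        tools[name] = {
            "description": f"Required binary: {name}",
            "install": install_commands.get(name, {}),
        }

    # Original metadata is preserved; strawpot is added alongside it.
    return {**metadata, "strawpot": {"dependencies": [], "tools": tools}}
```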
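Header-driven throttling can be approximated with a helper like the one below. The `X-RateLimit-*` header names are a common convention, not confirmed ClawHub headers:

```python
def throttle_delay(headers: dict, now: float) -> float:
    """Return seconds to sleep before the next request, based on
    rate-limit response headers (assumed X-RateLimit-* convention)."""
    remaining = int(headers.get("X-RateLimit-Remaining", 1))
    reset_at = float(headers.get("X-RateLimit-Reset", now))
    if remaining > 0:
        return 0.0  # quota left in this window; no need to wait
    # Quota exhausted: wait until the window resets.
    return max(0.0, reset_at - now)
```

A caller would invoke this after each response and `time.sleep()` for the returned duration before the next request.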