Nemesis is an open-source, centralized data processing platform that ingests, enriches, and enables collaborative analysis (by humans and AI) of files collected during offensive security assessments.
Nemesis 2.0 is built on Docker with heavy Dapr integration. Our goal with Nemesis was to create a centralized file processing platform that functions as an "offensive VirusTotal".
Note: the previous Nemesis 1.0.1 code base has been preserved as a branch.
Follow the quickstart guide.
See the Nemesis Usage Guide.
Nemesis uses GitHub Actions as the source of truth for build health:
- **CI Fast Gate** (`.github/workflows/ci-fast.yml`): runs on pull requests to `main` (excluding docs-only changes) and on manual dispatch. Installs dependencies with `uv`, runs tests via `./tools/test.sh`, and runs non-mutating type checks via `./tools/typecheck.sh`.
- **Docker Validate Nightly** (`.github/workflows/docker-validate-nightly.yml`): runs daily at 10:00 UTC and supports manual dispatch. Performs build-only validation on both `amd64` and `arm64` runners for base images, production service images, and the CLI production image.

Image publishing workflows remain separate and are intentionally unchanged:

- `.github/workflows/docker-build.yml`
- `.github/workflows/docker-build-base.yml`
- `.github/workflows/docker-build-noseyparker.yml`
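As an illustrative sketch only (the workflow files above are the source of truth), the nightly schedule and manual-dispatch triggers described for the validation workflow map to GitHub Actions syntax like:

```yaml
# Hypothetical trigger block for a nightly validation workflow; see
# .github/workflows/docker-validate-nightly.yml for the real definition.
on:
  schedule:
    - cron: "0 10 * * *"   # daily at 10:00 UTC
  workflow_dispatch: {}     # allow manual runs
```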
When the `--llm` profile is enabled, Nemesis routes model calls through LiteLLM (`infra/litellm/config.yml`).
For both Codex and Gemini, you can choose either interactive OAuth credentials or non-interactive credentials for automation.
| Provider | Auth option | Best for | Notes |
|---|---|---|---|
| Codex (OpenAI via LiteLLM) | OAuth user/session token | Local interactive testing | Fast to get started for personal/dev usage; token refresh can be required. |
| Codex (OpenAI via LiteLLM) | API key or service credential | CI, servers, unattended runs | Recommended for stability in automation and scheduled jobs. |
| Gemini (Google via LiteLLM) | OAuth user/session token | Local interactive testing | Useful for developer workflows; avoid for unattended runtime. |
| Gemini (Google via LiteLLM) | API key or service credential | CI, servers, unattended runs | Recommended for fork CI and long-running deployments. |
Credential handling guidance:
- Store secrets in `.env` (gitignored) or your runtime secret manager, never in git-tracked files.
- Reference those secrets from `infra/litellm/config.yml`.
- Keep at least one configured model named `default` for agents.
Choose one mode at a time, then restart with `./tools/nemesis-ctl.sh start prod --llm`.
Codex OAuth mode (.env):
LLM_AUTH_MODE=codex_oauth
CODEX_AUTH_EXPERIMENTAL=true
CODEX_AUTH_PROFILE_MOUNT_SOURCE=$HOME/.codex/auth.json
CODEX_AUTH_PROFILE_PATH=/run/secrets/codex-auth-profile.json
CODEX_AUTH_PROFILE_NAME=openai-codex:defaultGemini via Google AI Studio (.env):
LLM_AUTH_MODE=official_key
GEMINI_API_KEY=your-gemini-api-keyGemini via Google AI Studio (compose.yaml, litellm.environment add):
- GEMINI_API_KEY=${GEMINI_API_KEY:-}Gemini via Google AI Studio (infra/litellm/config.yml, set default):
model_list:
- model_name: default
litellm_params:
model: gemini/gemini-2.5-flash
api_key: os.environ/GEMINI_API_KEYGemini via Vertex AI (.env):
LLM_AUTH_MODE=official_key
VERTEX_PROJECT=your-gcp-project-id
VERTEX_LOCATION=us-central1
VERTEX_CREDENTIALS_FILE=/absolute/path/to/service-account.jsonGemini via Vertex AI (compose.yaml, litellm.environment add):
- VERTEX_PROJECT=${VERTEX_PROJECT:-}
- VERTEX_LOCATION=${VERTEX_LOCATION:-}Gemini via Vertex AI (compose.yaml, litellm.volumes add):
- ${VERTEX_CREDENTIALS_FILE}:/run/secrets/vertex-service-account.json:roGemini via Vertex AI (infra/litellm/config.yml, set default):
model_list:
- model_name: default
litellm_params:
model: vertex_ai/gemini-2.5-pro
vertex_project: your-gcp-project-id
vertex_location: us-central1
vertex_credentials: /run/secrets/vertex-service-account.jsonWhen the --llm profile is enabled, the agents service starts the chatbot MCP server (genai-toolbox) at startup.
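Once a `default` model is configured, clients reach LiteLLM through its OpenAI-compatible API; a minimal request-building sketch follows. The base URL is an assumption (a common LiteLLM proxy default), not taken from this repository's compose configuration.

```python
# Sketch of a request body for LiteLLM's OpenAI-compatible chat endpoint.
import json

LITELLM_BASE_URL = "http://localhost:4000"  # assumed proxy address


def build_chat_request(prompt: str, model: str = "default") -> dict:
    """Build the JSON body for POST {LITELLM_BASE_URL}/v1/chat/completions."""
    return {
        # "default" must match a model_name entry in infra/litellm/config.yml
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


print(json.dumps(build_chat_request("Summarize the extracted credentials.")))
```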
Nemesis now runs a startup preflight that validates the `chatbot_readonly` database login before launching MCP. If `CHATBOT_DB_PASSWORD` in `.env` drifts from the actual Postgres role password, startup logs include an explicit preflight failure with remediation guidance.
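The preflight pattern is straightforward to reason about; a minimal sketch follows, where the function name and message text are hypothetical illustrations, not Nemesis internals.

```python
# Illustrative sketch of a startup preflight: verify a database login before
# launching the dependent service, and emit actionable guidance on failure.
def preflight_db_login(connect) -> tuple[bool, str]:
    """connect: zero-arg callable that raises if credentials are bad
    (in practice, a DB driver's connect() bound to the chatbot_readonly DSN)."""
    try:
        connect()
    except Exception as exc:
        return False, (
            f"preflight failed: chatbot_readonly login rejected ({exc}); "
            "sync CHATBOT_DB_PASSWORD in .env with the Postgres role password, "
            "then restart the chatbot path services"
        )
    return True, "preflight ok: chatbot_readonly login verified"
```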
If you rotate `CHATBOT_DB_PASSWORD` after the initial database bootstrap, sync the role password and restart the chatbot path services:
```shell
cd /path/to/Nemesis
POSTGRES_USER=$(awk -F= '/^POSTGRES_USER=/{gsub(/"/,"",$2);print $2}' .env)
POSTGRES_PASSWORD=$(awk -F= '/^POSTGRES_PASSWORD=/{gsub(/"/,"",$2);print $2}' .env)
POSTGRES_DB=$(awk -F= '/^POSTGRES_DB=/{gsub(/"/,"",$2);print $2}' .env)
CHATBOT_DB_PASSWORD=$(awk -F= '/^CHATBOT_DB_PASSWORD=/{gsub(/"/,"",$2);print $2}' .env)
docker compose --profile llm -f compose.yaml exec -T postgres sh -lc \
  "PGPASSWORD='${POSTGRES_PASSWORD}' psql -h postgres -U '${POSTGRES_USER}' -d '${POSTGRES_DB}' -v ON_ERROR_STOP=1 -c \"ALTER ROLE chatbot_readonly WITH PASSWORD '${CHATBOT_DB_PASSWORD}';\""
docker compose --profile llm -f compose.yaml up -d agents agents-dapr
```

Blog Posts:
| Title | Nemesis Version | Date |
|---|---|---|
| Nemesis 2.2 | v2.2 | Feb 25, 2025 |
| Nemesis 2.0 | v2.0 | Aug 5, 2025 |
| Nemesis 1.0.0 | v1.0 | Apr 25, 2024 |
| Summoning RAGnarok With Your Nemesis | v1.0 | Mar 13, 2024 |
| Shadow Wizard Registry Gang: Structured Registry Querying | v1.0 | Sep 5, 2023 |
| Hacking With Your Nemesis | v1.0 | Aug 9, 2023 |
| Challenges In Post-Exploitation Workflows | v1.0 | Aug 2, 2023 |
| On (Structured) Data | v1.0 | Jul 26, 2023 |
Presentations:
| Title | Date |
|---|---|
| OffensiveX 2025 | Jun 19, 2025 |
| x33fcon 2025 | Jun 13, 2025 |
| SAINTCON 2023 | Oct 24, 2023 |
| BSidesAugusta 2023 | Oct 7, 2023 |
| 44CON 2023 | Sep 15, 2023 |
| BlackHat Arsenal USA 2023 | Sep 15, 2023 |
Nemesis is built on a large chunk of other people's work. Throughout the codebase we've provided citations, references, and applicable licenses for anything used or adapted from public sources. If we've forgotten proper credit anywhere, please let us know or submit a pull request!
We also want to acknowledge Evan McBroom, Hope Walker, and Carlo Alcantara from SpecterOps for their help with the initial Nemesis concept and amazing feedback throughout the development process. Also thanks to Matt Ehrnschwender for tons of k3s and GitHub workflow help in Nemesis 1.0!
And finally, shout out to OpenAI and Claude for helping with this rewrite.
