This sample shows a multi-service AI app built with Mastra, Next.js, PostgreSQL (with pgvector), Redis, and BullMQ, deployed with Defang from a single Docker Compose file.
A background worker generates sample tasks and events with an LLM and pushes them onto a queue; each item is then classified and stored with vector embeddings in PostgreSQL for semantic search. A Mastra-powered chat agent uses tools to inspect the current state before answering questions.
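To make the agent side concrete, here is a minimal sketch of how a Mastra agent with one state-inspection tool could be wired up. It is an illustration only: the tool id, the `items` table and its columns, and the hard-coded model choice are assumptions, not necessarily what this repo's code does.

```ts
// Illustrative sketch only -- table/column names and the model are assumptions.
import { Agent } from "@mastra/core/agent";
import { createTool } from "@mastra/core/tools";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// A tool the agent can call to look at the current state before answering.
const listRecentItems = createTool({
  id: "list-recent-items",
  description: "List the most recently stored tasks and events",
  inputSchema: z.object({ limit: z.number().int().positive().default(10) }),
  execute: async ({ context }) => {
    const { rows } = await pool.query(
      "SELECT kind, title, category FROM items ORDER BY created_at DESC LIMIT $1",
      [context.limit],
    );
    return { items: rows };
  },
});

export const copilot = new Agent({
  name: "copilot",
  instructions: "Inspect the stored tasks and events with your tools before answering.",
  model: openai("gpt-4o-mini"),
  tools: { listRecentItems },
});
```

In the deployed sample the model is wired through injected chat-service variables (described below) rather than a hard-coded OpenAI model.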
- Download Defang CLI
- (Optional) If you are using Defang BYOC, authenticate with your cloud provider account
- (Optional for local development) Docker CLI
To run the application locally for development, use the development compose file:
```bash
docker compose -f compose.dev.yaml up --build
```

This will:
- Start the Next.js app on http://localhost:3000
- Start PostgreSQL on port 5432
- Start Redis on port 6379
- Start a background worker for item classification
- Start Docker model-provider services for chat + embeddings
Local development uses:
- `ai/qwen2.5:3B-Q4_K_M` for chat/tool-calling
- `mxbai-embed-large` for embeddings
This matches the CrewAI sample's local model setup and relies on Docker Model Runner / model-provider support being available in your local Docker installation. The first run will download both models, so startup can take a few minutes.
If `docker compose -f compose.dev.yaml up` fails with `exec: "model": executable file not found in $PATH`, your local Docker installation does not have Docker Model Runner enabled yet.
To keep iteration practical on CPU-only setups, `compose.dev.yaml` enables `LOCAL_FAST_DATA=true`, which uses deterministic sample generation and classification locally while still exercising the real chat and embedding services.
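In the worker, such a flag can simply gate the generation path. A sketch of what that branch might look like, with hypothetical helper names:

```ts
// Illustrative sketch -- generateWithLlm() stands in for the real chat-model call.
type Item = { kind: "task" | "event"; title: string };

const useFastData = process.env.LOCAL_FAST_DATA === "true";

async function generateWithLlm(count: number): Promise<Item[]> {
  // Real path: ask the chat model to invent plausible tasks and events.
  throw new Error("not implemented in this sketch");
}

export async function generateSampleItems(count: number): Promise<Item[]> {
  if (useFastData) {
    // Deterministic fixtures keep CPU-only local runs fast and repeatable.
    return Array.from({ length: count }, (_, i) => ({
      kind: i % 2 === 0 ? "task" : "event",
      title: `Sample ${i % 2 === 0 ? "task" : "event"} ${i + 1}`,
    }));
  }
  return generateWithLlm(count);
}
```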
In deployed environments, the app uses dedicated chat and embedding model services defined in `compose.yaml`. Defang injects OpenAI-compatible `CHAT_URL`/`CHAT_MODEL` and `EMBEDDING_URL`/`EMBEDDING_MODEL` environment variables automatically, so the application code stays platform-independent.
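As a sketch of how application code can stay platform-independent, those injected variables can be consumed with the OpenAI Node SDK. Assumptions here: the URLs are OpenAI-compatible base URLs, and the local providers accept a placeholder API key; the `classify`/`embed` helper names are illustrative.

```ts
// Sketch: calling the OpenAI-compatible chat and embedding services via env vars.
import OpenAI from "openai";

const chat = new OpenAI({
  baseURL: process.env.CHAT_URL,
  apiKey: process.env.OPENAI_API_KEY ?? "not-needed", // local providers usually ignore this
});

const embeddings = new OpenAI({
  baseURL: process.env.EMBEDDING_URL,
  apiKey: process.env.OPENAI_API_KEY ?? "not-needed",
});

export async function classify(text: string): Promise<string> {
  const res = await chat.chat.completions.create({
    model: process.env.CHAT_MODEL!,
    messages: [
      { role: "system", content: "Classify the item into one short category." },
      { role: "user", content: text },
    ],
  });
  return res.choices[0].message.content ?? "uncategorized";
}

export async function embed(text: string): Promise<number[]> {
  const res = await embeddings.embeddings.create({
    model: process.env.EMBEDDING_MODEL!,
    input: text,
  });
  return res.data[0].embedding;
}
```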
For this sample, you will need to provide the following configuration. Note that if you are using the 1-click deploy option, you can set these values as secrets in your GitHub repository and the action will automatically deploy them for you.
`POSTGRES_PASSWORD`: the password for your PostgreSQL database. You need to set this before deploying for the first time.
You can easily set this to a random string using:

```bash
defang config set POSTGRES_PASSWORD --random
```
- Open the app.
- Click Generate sample items.
- Watch the worker create 10 tasks and 10 events, then fan out per-item classify/embed jobs (progress updates stream in real time via SSE); a sketch of this fan-out appears after this list.
- Ask questions like:
  - "What should I look at first?"
  - "Summarize the tasks and events."
  - "Which items seem related?"
  - "Find events similar to the deploy rollback."
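The fan-out mentioned above is a standard BullMQ pattern: enqueue one job per item, and have a worker classify, embed, and store each one. A sketch under the same assumptions as the earlier snippets (queue name, table, Redis env vars, and the imported `classify`/`embed` helpers are illustrative):

```ts
// Sketch of the fan-out: one BullMQ job per generated item, processed by a worker
// that classifies the item and stores its embedding in pgvector.
import { Queue, Worker } from "bullmq";
import { Pool } from "pg";
import { classify, embed } from "./models"; // hypothetical module from the sketch above

const connection = { host: process.env.REDIS_HOST ?? "redis", port: 6379 };
const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const classifyQueue = new Queue("classify-item", { connection });

// Fan out: one job per item, so progress can be reported per item over SSE.
export async function enqueueItems(items: { id: string; title: string }[]) {
  await classifyQueue.addBulk(
    items.map((item) => ({ name: "classify-item", data: item })),
  );
}

// Worker: classify, embed, and store; pgvector accepts a '[x,y,...]' literal cast to vector.
new Worker(
  "classify-item",
  async (job) => {
    const category = await classify(job.data.title);
    const vector = await embed(job.data.title);
    await pool.query(
      "UPDATE items SET category = $1, embedding = $2::vector WHERE id = $3",
      [category, `[${vector.join(",")}]`, job.data.id],
    );
  },
  { connection },
);
```

Semantic search over the stored items then reduces to a pgvector distance query, for example `ORDER BY embedding <-> $1 LIMIT 5` against the same table.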
> Note: Download the Defang CLI before deploying.
Deploy your application to the Defang Playground by opening up your terminal and typing:
```bash
defang compose up
```

If you want to deploy to your own cloud account, you can use Defang BYOC.
The default sample uses Defang's managed model provider services:
- `chat` uses `chat-default`
- `embedding` uses `embedding-default`
If you want to pin different models, edit the `provider.options.model` values in `compose.yaml`.
> Warning: Extended deployment time. This sample creates a managed PostgreSQL database, which may take upwards of 20 minutes to provision on first deployment. Subsequent deployments are much faster (2-5 minutes).
Title: Mastra Extended
Short Description: A small Defang sample where background jobs classify and embed tasks and events, and a Mastra copilot answers questions with tools.
Tags: Mastra, Next.js, PostgreSQL, Redis, BullMQ, AI, Agents
Languages: TypeScript, JavaScript, Docker