A visual drag-and-drop interface for creating, configuring, and executing AI agent workflows. Build complex agent interactions through an intuitive node-based editor that generates Python code using the Strands Agent SDK.
- Visual Flow Editor: Drag-and-drop interface for building agent workflows
- Multi-Agent Support: Create complex hierarchical agent workflows with orchestrator agents that coordinate sub-agents
- Interactive Chat Interface: Chat directly with your agents using full conversation history, streaming responses, and contextual memory across conversations
- MCP Server Integration: Connect to Model Context Protocol servers for extended tool capabilities
- Custom Tool Nodes: Define your own Python functions as reusable tools with @tool decorator
- Multiple Model Providers: Support for AWS Bedrock and OpenAI-compatible API endpoints
- Code Generation: Automatically generates Python code from visual flows
- Real-time Execution: Execute agents with streaming support and live updates
- Project Management: Save, load, and manage multiple agent projects with persistent local storage
- Execution History: Track and replay previous agent runs
- One-Click Deployment: Deploy agents to AWS Bedrock AgentCore, AWS Lambda, or ECS Fargate with a single click
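Custom tool nodes map to plain Python functions marked with a `@tool` decorator. As an illustration of what such a node boils down to, here is a hypothetical minimal decorator and tool; the real decorator ships with the Strands Agent SDK, and the names and spec fields below are illustrative only:

```python
# Hypothetical minimal @tool decorator, sketching what a custom tool
# node generates. The real decorator comes from the Strands Agent SDK.
import inspect

def tool(func):
    """Attach a simple tool spec to a plain Python function (illustrative)."""
    func.tool_spec = {
        "name": func.__name__,
        "description": inspect.getdoc(func) or "",
        "parameters": list(inspect.signature(func).parameters),
    }
    return func

@tool
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())
```

The agent can then discover the function's name, description, and parameters from its docstring and signature, which is the general pattern decorator-based tool registration follows.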
- YouTube: Build AI Agent Teams Visually - No Code Required! 🤖 | Open Studio for Strands Agent
- WeChat: A no-code visual orchestration tool built for Strands Agent
- Install Node.js 22
- Install uv
- Install all frontend dependencies in the project folder:

```bash
npm install
```

- Install all backend dependencies in the project folder:

```bash
cd backend
uv sync
```

- Install the AWS CLI: https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
- (Optional) Install `aws-sam-cli` and `docker` for Lambda deployment:

```bash
uv pip install aws-sam-cli
```

Install Docker:

```bash
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo systemctl enable docker && sudo systemctl start docker && sudo usermod -aG docker ubuntu
sudo chmod 666 /var/run/docker.sock
```

```bash
# Start frontend development server
npm run dev

# Start backend server
npm run backend:dev

# Run both frontend and backend
npm run dev:full
```

- Build any agent flow with input/output nodes
- Click "Chat with Agent" in the execution panel
- Have a natural conversation with your agent using streaming responses and full conversation history
Download this sample Flow, import it into the UI, and try it out.

```bash
# Start all services in production mode
./start_all.sh

# Stop all services
./stop_all.sh
```

```bash
# Build for production
npm run build

# Build and preview frontend
npm run preview

# Start backend in production mode
cd backend && uv run uvicorn main:app --host 0.0.0.0 --port 8000

# Or use the npm script
npm run backend:prod
```

- Automated Setup: `start_all.sh` handles dependency installation, building, and service startup
- Background Execution: Services run in the background with proper logging
- Health Checks: Automatic verification that services started successfully
- Port Management: Checks for port conflicts and provides warnings
- Graceful Shutdown: `stop_all.sh` properly stops all services and cleans up processes
- Log Management: Centralized logging in the `logs/` directory
- Secure Proxy Architecture: Backend only accessible internally via the Vite proxy
- Single Port Exposure: Only the frontend port (5173) needs to be exposed
- Cloud Deployment: Auto-detects the public IP for EC2 deployment
- ALB Support: Compatible with AWS Application Load Balancer
The application uses a secure proxy architecture where:
- Frontend (port 5173): Publicly accessible, serves the React application
- Backend (port 8000): Internal only, proxied through frontend
- All API requests are automatically routed through the frontend to backend
- Only port 5173 needs to be exposed in firewalls/security groups
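In Vite terms, the proxy routing described above might be configured roughly like this; this is a sketch, and the project's actual `vite.config.ts` (including which paths are proxied) may differ:

```typescript
// vite.config.ts (sketch): relay API paths to the internal backend.
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    port: 5173,
    host: true, // listen on all interfaces so EC2/ALB can reach it
    proxy: {
      // /docs, /health, and API routes are forwarded to the backend
      // on port 8000, which is never exposed externally.
      "/docs": "http://localhost:8000",
      "/health": "http://localhost:8000",
      "/api": "http://localhost:8000",
    },
  },
});
```

With this shape, the browser only ever talks to port 5173; the dev server forwards matching requests to the backend over localhost.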
```bash
./start_all.sh
# Access: http://localhost:5173
# API Docs: http://localhost:5173/docs
```

```bash
./start_all.sh
# Auto-detects public IP (e.g., http://35.88.128.160:5173)
# API Docs: http://35.88.128.160:5173/docs
```

```bash
export ALB_HOSTNAME=your-alb-hostname.us-west-2.elb.amazonaws.com
./start_all.sh
# Access: http://your-alb-hostname.us-west-2.elb.amazonaws.com:5173
# API Docs: http://your-alb-hostname.us-west-2.elb.amazonaws.com:5173/docs
```

Security Groups / Firewall Rules:
- Inbound: Only allow port 5173 (frontend)
- Port 8000: Not exposed externally (backend is internal-only)
Access URLs:
- Application: `http://YOUR_HOST:5173`
- API Documentation: `http://YOUR_HOST:5173/docs` (proxied to backend)
- Health Check: `http://YOUR_HOST:5173/health` (proxied to backend)

- Frontend logs: `logs/frontend.log`
- Backend logs: `logs/backend.log`
The Open Studio now supports one-click deployment of your agent workflows to AWS infrastructure, making it easy to move from development to production.
Deploy your agent as a Bedrock AgentCore agent for serverless, managed AI agent execution.
Features:
- Fully managed agent runtime by AWS Bedrock
- Automatic scaling and high availability
- Integrated with AWS services (S3, DynamoDB, Lambda)
- Pay-per-use pricing model
- Built-in monitoring and logging via CloudWatch
How to Deploy:
- Build your agent workflow in the visual editor
- Click the "Deploy to AgentCore" button in the execution panel
- Configure deployment settings (agent name, IAM role, etc.)
- The system will automatically:
- Generate the agent code
- Package dependencies
- Create CloudFormation stack
- Deploy to Bedrock AgentCore
- Provide the agent ARN for invocation
Requirements:
- AWS credentials configured (via AWS CLI or environment variables)
- Appropriate IAM permissions for Bedrock and CloudFormation
- Bedrock AgentCore enabled in your AWS region
Deploy your agent as an AWS Lambda Function for serverless execution with HTTP API access.
Features:
- Serverless compute with automatic scaling
- HTTP API endpoint for agent invocation
- Support for synchronous and asynchronous execution
- Integration with API Gateway, EventBridge, and other AWS services
- Cost-effective pay-per-request pricing
- Built-in monitoring via CloudWatch Logs
How to Deploy:
- Build your agent workflow in the visual editor
- Click the "Deploy to Lambda" button in the execution panel
- Configure deployment settings (function name, memory, timeout, etc.)
- The system will automatically:
- Generate the agent code with Lambda handler
- Package dependencies into deployment package
- Create CloudFormation stack with Lambda function and IAM role
- Deploy to AWS Lambda
- Provide the function ARN and invocation URL
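The generated Lambda handler can be pictured as follows. This is only a sketch: the actual Strands agent invocation is stubbed out as `run_agent`, and the event schema and field names (`prompt`, `response`) are assumptions, not the deployer's real contract:

```python
# Sketch of the shape of a generated Lambda handler; run_agent stands in
# for the real Strands agent call.
import json

def run_agent(prompt: str) -> str:
    """Stand-in for the generated Strands agent invocation."""
    return f"echo: {prompt}"

def lambda_handler(event, context):
    # API Gateway delivers the HTTP body as a JSON string.
    body = json.loads(event.get("body") or "{}")
    prompt = body.get("prompt", "")
    result = run_agent(prompt)
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"response": result}),
    }
```

The handler is synchronous here; asynchronous invocation would instead be driven by how the function is invoked (e.g., via EventBridge) rather than by the handler's code.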
Requirements:
- AWS credentials configured (via AWS CLI or environment variables)
- Appropriate IAM permissions for Lambda, IAM, and CloudFormation
- Sufficient Lambda quotas in your AWS account
Both deployment options use AWS CloudFormation for infrastructure as code, ensuring:
- Reproducible deployments
- Version control for infrastructure
- Easy rollback capabilities
- Automated resource cleanup
The deployment process:
- Code Generation: Converts visual flow to production-ready Python code
- Dependency Packaging: Bundles all required packages (Strands SDK, tools, etc.)
- CloudFormation Stack Creation: Provisions AWS resources (Lambda/AgentCore, IAM roles, etc.)
- Deployment: Uploads code and creates the agent/function
- Validation: Verifies successful deployment and provides invocation details
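For the Lambda path, the provisioned stack might look, in heavily trimmed form, like the following CloudFormation sketch; the resource names, runtime, handler, and code location are assumptions, not the deployer's actual template:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  AgentFunctionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal: { Service: lambda.amazonaws.com }
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
  AgentFunction:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.12
      Handler: main.lambda_handler
      Role: !GetAtt AgentFunctionRole.Arn
      Timeout: 300
      Code:
        S3Bucket: my-deployment-bucket   # assumption
        S3Key: agent-package.zip         # assumption
Outputs:
  FunctionArn:
    Value: !GetAtt AgentFunction.Arn
```

Because everything lives in one stack, updating means redeploying the same stack name and deleting means a single `delete-stack`, which is what gives the rollback and cleanup properties listed above.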
View Deployment Status:
- Use the AWS Console to monitor CloudFormation stacks
- Check CloudWatch Logs for execution logs
- View Lambda/AgentCore metrics in CloudWatch
Update Deployment:
- Make changes to your agent workflow
- Click deploy again with the same stack name to update
Delete Deployment:
- Delete the CloudFormation stack via AWS Console or CLI
- All associated resources will be cleaned up automatically
- Use Separate AWS Accounts/Regions: Deploy dev/staging/prod environments separately
- Configure Timeouts: Set appropriate Lambda timeout values based on agent complexity
- Monitor Costs: Use AWS Cost Explorer to track deployment costs
- Enable Logging: CloudWatch Logs are enabled by default for debugging
- Secure Credentials: Use IAM roles instead of hardcoded credentials in agent code
- Test Locally First: Validate your agent workflow in the Studio before deploying
- Frontend: React 19, TypeScript, Vite, Tailwind CSS, XYFlow
- Backend: FastAPI, Python, Uvicorn
- AI Agents: Strands Agent SDK with support for AWS Bedrock and OpenAI-compatible models
The application consists of a React frontend for the visual editor and a FastAPI backend for code execution and conversation management. Projects are stored locally in the browser, while execution artifacts and conversation sessions are managed by the backend's file-based storage system. The chat interface provides real-time interaction with agents using full conversation history and streaming responses.
- Input Node
- Output Node
- Single Agent Node
- Orchestrator Agent Node
- MCP Server Node
- Built-in Tool Node
- Custom Tool Node
- Swarm Agent Node
- Structured Output Node - to do
- Condition Node - to do
- Single agent mode
- Agents-as-tools mode
- Graph mode
- Single-turn execution run
- Multi-turn interactive chat mode
- One-click deploy to Bedrock AgentCore
- One-click deploy to Lambda
- One-click deploy to ECS Fargate