This document provides instructions for running the Tasky Backend Service using Docker.

## Prerequisites
- Docker Engine 20.10+
- Docker Compose 2.0+
- Make (optional, for using Makefile commands)
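To confirm the version requirements above, a small comparison helper can be handy. This is an illustrative sketch, not part of the project; it relies on GNU `sort -V`:

```shell
# Hypothetical helper: succeeds if version $1 >= version $2,
# comparing dotted numeric versions with GNU `sort -V`.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}
```

For example, `version_ge "$(docker version --format '{{.Server.Version}}')" 20.10 || echo "Docker Engine too old"`.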
## Development Setup

1. Clone the repository and navigate to the project directory.

2. Copy the environment file:

   ```bash
   cp env.example .env
   ```

3. Update the environment variables in `.env` with your actual values.

4. Start the development environment:

   ```bash
   # Using docker-compose directly
   docker-compose up --build

   # Or using Make (if available)
   make dev
   ```

5. The application will be available at `http://localhost:3000`.
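After starting the stack, you may want to block until the app actually answers before running migrations or tests. A minimal sketch (the helper name, retry defaults, and the health endpoint usage below are illustrative, not project tooling):

```shell
# Hypothetical helper: retry a command until it succeeds or give up.
wait_for() {
  cmd="$1"; retries="${2:-30}"; delay="${3:-2}"
  i=0
  while [ "$i" -lt "$retries" ]; do
    eval "$cmd" >/dev/null 2>&1 && return 0
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# Example: wait_for "curl -fsS http://localhost:3000/api/v1/health" 30 2
```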
## Production Setup

1. Create a production environment file:

   ```bash
   cp env.example .env.production
   ```

2. Update the production environment variables.

3. Start the production environment:

   ```bash
   # Using docker-compose
   docker-compose -f docker-compose.prod.yml up --build -d

   # Or using Make
   make prod
   ```
## Makefile Commands

If you have Make installed, you can use these convenient commands:

```bash
make help          # Show all available commands
make dev           # Start development environment
make dev-detached  # Start development environment in detached mode
make dev-down      # Stop development environment
make dev-logs      # View logs from development environment
make dev-shell     # Get shell access to the app container
make prod          # Start production environment
make prod-down     # Stop production environment
make prod-logs     # View logs from production environment
make db-migrate    # Run database migrations
make db-seed       # Run database seeders
make db-reset      # Reset database (drop, create, migrate, seed)
make build         # Build the Docker image
make clean         # Clean up Docker resources
make clean-all     # Clean up all Docker resources including images
```

## Database Management

Run migrations:

```bash
# Using Make
make db-migrate

# Using docker-compose directly
docker-compose exec app npm run migrate
```

Run seeders:

```bash
# Using Make
make db-seed

# Using docker-compose directly
docker-compose exec app npm run seed
```

Reset the database (drop, create, migrate, seed):

```bash
# Using Make
make db-reset

# Using docker-compose directly
docker-compose exec app npm run db:reset
```

## Environment Variables

The application requires the following environment variables:
| Variable | Description | Required |
|---|---|---|
| `NODE_ENV` | Application environment (development/production) | Yes |
| `PORT` | Port for the application to run on | Yes |
| `DB_HOST` | Database host | Yes |
| `DB_PORT` | Database port | Yes |
| `DB_NAME` | Database name | Yes |
| `DB_USER` | Database username | Yes |
| `DB_PASSWORD` | Database password | Yes |
| `SENTRY_API_KEY` | Sentry DSN for error tracking | No |
| `GOOGLE_SERVICE_KEY` | Firebase service account JSON | No |
| `SENDGRID_API_KEY` | SendGrid API key for emails | No |
| `CLOUDINARY_API_KEY` | Cloudinary API key | No |
| `CLOUDINARY_SECRET_KEY` | Cloudinary secret key | No |
| `CLOUD_NAME` | Cloudinary cloud name | No |
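Before starting the stack, something like this sketch can confirm that an env file defines every required variable from the table (the helper name is illustrative and not part of the project):

```shell
# Hypothetical helper: report required variables missing from an env file.
check_required_env() {
  env_file="$1"
  missing=""
  for var in NODE_ENV PORT DB_HOST DB_PORT DB_NAME DB_USER DB_PASSWORD; do
    grep -q "^${var}=" "$env_file" || missing="$missing $var"
  done
  if [ -n "$missing" ]; then
    echo "missing required variables:$missing"
    return 1
  fi
  echo "all required variables present"
}
```

For example: `check_required_env .env`.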
## Volumes and Networking

- `postgres_data`: Persists PostgreSQL data
- Development volume mounts: source code is mounted for hot reloading in development
- `tasky-network`: Internal bridge network for service communication
## Health Checks

The application includes health checks:

- App container: HTTP check on the `/api/v1/health` endpoint
- PostgreSQL container: database connectivity check using `pg_isready`
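In compose form, checks like these might be declared as in the following sketch. The service names, intervals, and the availability of `wget` in the app image are assumptions for illustration, not taken from the project's actual compose files:

```yaml
services:
  app:
    healthcheck:
      test: ["CMD", "wget", "-qO-", "http://localhost:3000/api/v1/health"]
      interval: 30s
      timeout: 5s
      retries: 3
  postgres:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
```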
## Troubleshooting

### Port already in use

```bash
# Check what's using the port
lsof -i :3000

# Kill the process or change the port in .env
```

### Database connection issues

```bash
# Check if the PostgreSQL container is running
docker-compose ps

# Check PostgreSQL logs
docker-compose logs postgres
```

### Permission issues

```bash
# Ensure proper ownership
sudo chown -R $USER:$USER .
```
### Viewing logs

```bash
# All services
docker-compose logs -f

# Specific service
docker-compose logs -f app
docker-compose logs -f postgres
```

### Shell access

```bash
# App container
docker-compose exec app sh

# PostgreSQL container
docker-compose exec postgres psql -U postgres -d tasky_development
```

## Production Considerations

- Use a reverse proxy (Nginx, Traefik) for SSL termination and load balancing
- Set up proper logging with log rotation
- Configure monitoring and alerting
- Use secrets management for sensitive environment variables
- Take regular backups of the PostgreSQL data volume
- Set resource limits based on your infrastructure
## Security Notes

- The application runs as a non-root user inside the container
- Sensitive files are excluded via `.dockerignore`
- Use environment variables for all configuration
- Consider using Docker secrets for production deployments
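The volume-backup recommendation above could be scripted roughly as follows. This is a sketch, not project tooling: the real volume name is often prefixed with the compose project name (check `docker volume ls` first), and the `DOCKER` variable exists only so the command can be dry-run:

```shell
# Hypothetical backup sketch: tar the contents of a named volume
# from a throwaway Alpine container.
DOCKER="${DOCKER:-docker}"

backup_volume() {
  volume="$1"; dest="$2"
  # Mount the volume read-only and write a dated archive into $dest
  $DOCKER run --rm -v "${volume}:/data:ro" -v "${dest}:/backup" \
    alpine tar czf "/backup/${volume}-$(date +%Y%m%d).tar.gz" -C /data .
}
```

For example: `backup_volume postgres_data "$PWD/backups"`. Setting `DOCKER="echo docker"` prints the command instead of running it.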