- `depends_on` revisited: understanding its limitations (it only guarantees start order, not service readiness).
- Healthchecks: defining commands that Docker runs periodically to check whether a service is healthy, allowing Compose to wait until services are truly ready.
- Environment variables: passing configuration to services (using `environment` and `.env` files).
- Building images with Compose: the `build` key for building images on the fly.
- Scaling services: using `docker-compose up --scale` to run multiple instances of a service.
- `profiles`: running different subsets of services for development vs. testing vs. production.
- `extends`: reusing common configuration across multiple Compose files.
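A minimal sketch of `extends` (the file and service names here are illustrative, not from the examples below):

```yaml
# common.yml — shared base definition
services:
  base_app:
    image: python:3.11-slim
    environment:
      LOG_LEVEL: info
```

```yaml
# docker-compose.yml — pull in the shared definition
services:
  web:
    extends:
      file: common.yml
      service: base_app # Inherits image and environment from base_app
    ports:
      - "5000:5000"
```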
- Healthchecks (improving `depends_on`):
```yaml
version: '3.8'
services:
  web:
    build: ./backend # Build from the backend directory
    # No ports mapping here; Nginx will handle external exposure
    volumes:
      - ./backend:/app # Mount the host's backend directory to the container's /app
    environment:
      REDIS_HOST: redis
    networks: # Connect 'web' service to 'app_network'
      - app_network
    depends_on:
      redis:
        condition: service_healthy # Wait for redis to report healthy, not merely started
  redis:
    image: redis:alpine
    networks:
      - app_network
    healthcheck:
      test: ["CMD", "redis-cli", "ping"] # Healthy once Redis answers PONG
      interval: 5s
      timeout: 3s
      retries: 5
networks:
  app_network:
```
- Using `.env` files for environment variables: create a file named `.env` in the same directory as `docker-compose.yml`:
```
DB_USER=myuser
DB_PASSWORD=mypassword
```
Then, in docker-compose.yml:
```yaml
version: '3.8'
services:
  db:
    image: postgres:13
    environment:
      POSTGRES_USER: ${DB_USER}         # Substituted from .env
      POSTGRES_PASSWORD: ${DB_PASSWORD} # Substituted from .env
```
Docker Compose automatically loads variables from a `.env` file in the project directory.
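The substitution Compose performs can be illustrated with a toy parser. This is a simplified sketch for intuition only — real Compose interpolation also supports defaults like `${VAR:-fallback}`, `$$` escaping, and more:

```python
import re

def load_env(text):
    """Parse KEY=VALUE lines like a minimal .env file (blank lines and # comments skipped)."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

def interpolate(template, env):
    """Replace ${VAR} references with values from env (missing variables become empty strings)."""
    return re.sub(r"\$\{(\w+)\}", lambda m: env.get(m.group(1), ""), template)

env = load_env("DB_USER=myuser\nDB_PASSWORD=mypassword")
print(interpolate("POSTGRES_USER: ${DB_USER}", env))  # POSTGRES_USER: myuser
```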
- Scaling a service:
First, add an Nginx service to `docker-compose.yml` to act as a reverse proxy in front of the scaled instances:
```yaml
services:
  nginx: # New Nginx service acting as a reverse proxy/load balancer
    image: nginx:alpine # Lightweight Nginx image
    volumes:
      - ./nginx:/etc/nginx/conf.d # Mount custom Nginx configuration
    ports:
      - "8080:80" # Expose Nginx's internal port 80 on host port 8080
    networks:
      - app_network
    depends_on:
      web:
        condition: service_started # Nginx can start as soon as web containers have started
```
Create the configuration directory:
```
mkdir nginx
cd nginx
```
Then create `default.conf` inside it:
```nginx
upstream web_app {
    # Load balance requests across all instances of the 'web' service.
    # Docker Compose's internal DNS handles name resolution.
    server web:5000;
}

server {
    listen 80; # Nginx listens on port 80 internally

    location / {
        # Proxy all requests to the 'web_app' upstream group
        proxy_pass http://web_app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
```
docker-compose up -d --scale web=3 # Run 3 instances of the 'web' service
docker-compose ps                  # Observe the multiple web containers
```
Then scale back down:
```
docker-compose up -d --scale web=1
```
- `profiles` for development vs. production services:
```yaml
version: '3.8'
services:
  app:
    build: .
    profiles: ["app"] # Only starts with the 'app' profile
  db:
    image: postgres
    profiles: ["app"]
  test_runner:
    image: my-test-image
    profiles: ["test"] # Only starts with the 'test' profile
```
Run with a specific profile:
```
docker-compose --profile app up -d # Starts app and db
docker-compose --profile test up   # Starts test_runner
```
- Take your multi-tiered application from Challenge 8.
- Add a healthcheck to your PostgreSQL service to ensure it's truly ready before your Flask app attempts to connect.
- Use a `.env` file to manage sensitive information like database credentials, rather than hardcoding them in `docker-compose.yml`.
- Experiment with scaling your web service using `docker-compose up --scale`, and observe how Docker handles the multiple instances.
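As a starting point for the healthcheck and `.env` items, here is a sketch — the service names (`db`, `web`), image tag, and variable names are assumptions carried over from the examples above, not the Challenge 8 solution:

```yaml
services:
  db:
    image: postgres:13
    environment:
      POSTGRES_USER: ${DB_USER}         # From .env
      POSTGRES_PASSWORD: ${DB_PASSWORD} # From .env
    healthcheck:
      # $$ escapes the dollar sign so POSTGRES_USER is read inside the container,
      # not interpolated by Compose; pg_isready succeeds once Postgres accepts connections.
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER"]
      interval: 5s
      timeout: 5s
      retries: 5
  web:
    build: ./backend
    depends_on:
      db:
        condition: service_healthy # Flask app waits for a healthy database
```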