Development Environment & Tools - 1/2
From Development Environment to Deployment Reality
You’ve mastered professional development environments with automated setup scripts, implemented comprehensive configuration management that prevents environment drift, built debugging workflows that make complex problems tractable, and created productivity tooling that transforms your IDE into a development command center. Your local development experience is now polished, consistent, and efficient. But here’s the deployment reality check that separates hobby projects from production systems: perfect local development environments mean nothing if your application deployment is inconsistent, unreliable, or depends on “it works on my machine” magic.
The deployment nightmare that destroys production systems:
# Your team's deployment horror story
# Developer 1: "It worked perfectly in development!"
$ ssh production-server
$ git pull origin main
$ npm install
npm ERR! code EBADENGINE
npm ERR! engine Unsupported engine: wanted node >=18.0.0, current node v16.14.2
# Developer 2: "Let me fix the Node.js version"
$ nvm install 18
$ npm install
# 30 minutes later...
$ npm start
Error: Cannot find module '/usr/local/lib/node_modules/some-global-package'
# Developer 3: "The database connection isn't working"
$ cat /etc/hosts
127.0.0.1 localhost
# No database host configuration, postgres installed differently
# Developer 4: "Why are the file permissions wrong?"
$ ls -la uploads/
drwxr-xr-x 2 www-data www-data 4096 Dec 1 10:30 uploads
# Local development used your user, production uses www-data
# Operations team: "The server is down again"
$ systemctl status myapp
● myapp.service - My Application
Active: failed (Result: exit-code) since Wed 2023-12-01 15:45:12 UTC; 2m ago
Process: 15432 ExecStart=/usr/bin/node server.js (code=exited, status=1)
# The cascading deployment disasters:
# - Different Node.js versions cause dependency conflicts
# - Missing system packages break native modules
# - Environment variables configured differently across servers
# - File permissions and ownership inconsistencies
# - Port conflicts and service discovery failures
# - Database connection strings hardcoded for local development
# - SSL certificates and security configurations missing
# - Log file locations and rotation policies inconsistent
# - Process management and auto-restart mechanisms absent
The uncomfortable deployment truth: Sophisticated development workflows and comprehensive testing can’t save you from production disasters when your deployment strategy is “copy files and pray.” Professional deployment requires environment consistency, dependency isolation, and infrastructure reproducibility.
Real-world deployment failure consequences:
// What happens when containerization is ignored:
const deploymentFailureImpact = {
productionOutages: {
problem: "Application works in dev but crashes in production",
cause: "Different OS versions, missing system dependencies",
impact: "6-hour outage during peak traffic, $200K revenue loss",
prevention: "Container-based deployment would have caught this in staging",
},
scalingNightmares: {
problem: "Cannot scale horizontally due to server-specific configurations",
cause: "Applications coupled to specific server environments",
impact:
"Manual server provisioning takes 3 days, missed growth opportunity",
reality: "Competitors using containers scale in minutes, not days",
},
securityIncidents: {
problem: "Production server compromised through outdated system packages",
cause: "Inconsistent patching across hand-configured servers",
impact: "Data breach, compliance violations, customer trust destroyed",
cost: "Legal fees and regulatory fines exceed $2M",
},
// Perfect code architecture is worthless when your deployment
// infrastructure introduces inconsistency and failure points
};
Containerization mastery requires understanding:
- What containerization is and how it solves the “works on my machine” problem through environment isolation
- Docker fundamentals that package your application with all dependencies into portable, reproducible units
- Dockerfile best practices that create secure, optimized, and maintainable container images
- Container networking and volumes that handle data persistence and service communication professionally
- Docker Compose orchestration that manages multi-service applications with production-ready configurations
This article transforms your deployment from unreliable manual processes into predictable, automated containerization that ensures your applications run consistently across development, staging, and production environments.
What Is Containerization: Solving the Deployment Consistency Problem
The Containerization Revolution
Containerization transforms deployment from chaos to consistency:
// ❌ Traditional deployment: The configuration nightmare
const traditionalDeployment = {
localDevelopment: {
os: "macOS 13.1",
nodeVersion: "18.12.1",
python: "3.9.16",
database: "PostgreSQL 15.1 (Homebrew)",
redis: "7.0.5 (local install)",
environment: "All services running natively",
dependencies: "Globally installed packages with version conflicts",
},
stagingServer: {
os: "Ubuntu 20.04",
nodeVersion: "16.14.2", // Different version!
python: "3.8.10", // Different version!
database: "PostgreSQL 13.8 (apt package)",
redis: "6.0.16 (different configuration)",
environment: "Services managed by systemd",
dependencies: "System packages with different versions",
},
productionServer: {
os: "CentOS 7",
nodeVersion: "14.21.1", // Even older!
python: "2.7.5", // Ancient!
database: "PostgreSQL 12.9 (yum package)",
redis: "3.2.12 (seriously outdated)",
environment: "Services manually configured",
dependencies: "Hand-compiled packages with unknown versions",
},
// The inevitable result: Deployment lottery
deploymentSuccess: "Maybe 60% chance it works",
debuggingTime: "Hours to days for environment differences",
scalingCapability: "Impossible without repeating server setup hell",
};
Containerization eliminates deployment inconsistencies through environment standardization:
# ✅ Container-based deployment: Guaranteed consistency
# The container is the same everywhere it runs
# Dockerfile - Your application's environment specification
FROM node:18-alpine
# Set working directory
WORKDIR /app
# Copy dependency manifests
COPY package*.json ./
# Install dependencies in container
RUN npm ci --only=production && npm cache clean --force
# Copy application code
COPY src/ ./src/
COPY config/ ./config/
# Create non-root user for security
RUN addgroup -g 1001 -S nodejs && \
adduser -S nextjs -u 1001
# Set ownership and switch to non-root user
RUN chown -R nextjs:nodejs /app
USER nextjs
# Expose application port
EXPOSE 3000
# Define health check (BusyBox wget ships with Alpine; curl is NOT in node:18-alpine)
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD wget -q --spider http://localhost:3000/health || exit 1
# Start application
CMD ["node", "src/server.js"]
# This EXACT environment runs everywhere:
# - Developer laptop: ✅ Same Node.js 18
# - CI/CD pipeline: ✅ Same Node.js 18
# - Staging server: ✅ Same Node.js 18
# - Production cluster: ✅ Same Node.js 18
# - New team member's machine: ✅ Same Node.js 18
# No more "works on my machine" - it works everywhere or nowhere
The containerization paradigm shift:
// Container thinking: Applications as portable units
class ContainerizedApplication {
constructor() {
this.runtime = "Isolated from host system";
this.dependencies = "Bundled with application";
this.configuration = "Environment-specific via env vars";
this.data = "Externalized to volumes";
this.networking = "Service discovery and load balancing";
this.scalability = "Horizontal scaling ready";
}
deployTo(environment) {
// Same container image runs everywhere
const container = {
image: "myapp:v1.2.3", // Immutable artifact
environment: environment.getConfig(),
volumes: environment.getDataMounts(),
networks: environment.getNetworkConfig(),
resources: environment.getResourceLimits(),
};
return container.run();
}
// The magic: Write once, run anywhere
// (currentEnvironment stands in for whichever environment is currently active)
scale(instances) {
return Array(instances)
.fill(null)
.map(() => this.deployTo(currentEnvironment));
}
}
Docker Fundamentals: Your Application Packaging System
Docker Architecture That Actually Makes Sense
Understanding Docker’s core components for professional usage:
# Docker ecosystem components explained
#
# 1. Docker Image: Read-only template for creating containers
# - Like a blueprint for a house
# - Contains OS, runtime, dependencies, application code
# - Immutable - never changes once built
#
# 2. Docker Container: Running instance of an image
# - Like a house built from the blueprint
# - Has its own filesystem, network, process space
# - Can be started, stopped, moved, deleted
#
# 3. Dockerfile: Text file with instructions to build an image
# - Like architectural plans for the blueprint
# - Version controlled with your application code
#
# 4. Docker Registry: Storage for Docker images
# - Like a library of blueprints
# - Docker Hub is public, you can run private registries
# Essential Docker commands for backend development
# Image management
docker build -t myapp:v1.0.0 . # Build image from Dockerfile
docker images # List local images
docker rmi myapp:v1.0.0 # Remove image
docker pull node:18-alpine # Download image from registry
docker push myapp:v1.0.0 # Upload image to registry
# Container lifecycle
docker run -d --name myapp-container myapp:v1.0.0 # Run container in background
docker ps # List running containers
docker ps -a # List all containers
docker stop myapp-container # Stop container
docker start myapp-container # Start stopped container
docker rm myapp-container # Remove container
# Container interaction
docker exec -it myapp-container /bin/sh # Open shell in container
docker logs myapp-container # View container logs
docker logs -f myapp-container # Follow container logs
docker inspect myapp-container # Detailed container info
# Data and networking
docker volume create myapp-data # Create named volume
docker network create myapp-network # Create custom network
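One gap in the command list above: pushing anywhere other than Docker Hub requires the registry hostname baked into the tag. A quick sketch, with registry.example.com as a placeholder:
# Push to a private registry - the hostname must be part of the tag
docker tag myapp:v1.0.0 registry.example.com/myapp:v1.0.0
docker login registry.example.com
docker push registry.example.com/myapp:v1.0.0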
Professional Docker image building strategy:
# Dockerfile.production - Multi-stage build for optimized images
# Stage 1: Build environment
FROM node:18-alpine AS builder
WORKDIR /app
# Copy package files
COPY package*.json ./
COPY tsconfig.json ./
# Install all dependencies including dev dependencies
RUN npm ci
# Copy source code and static assets (the production stage copies public/ from here)
COPY src/ ./src/
COPY public/ ./public/
# Build application (TypeScript compilation, etc.)
RUN npm run build
# Run tests in build stage
RUN npm run test:ci
# Stage 2: Production environment
FROM node:18-alpine AS production
# Install dumb-init for proper signal handling
RUN apk add --no-cache dumb-init
# Create application directory
WORKDIR /app
# Create non-root user
RUN addgroup -g 1001 -S nodejs && \
adduser -S nextjs -u 1001 -G nodejs
# Copy package files
COPY package*.json ./
# Install only production dependencies
RUN npm ci --only=production && npm cache clean --force
# Copy built application from builder stage
COPY --from=builder --chown=nextjs:nodejs /app/dist ./dist
COPY --from=builder --chown=nextjs:nodejs /app/public ./public
# Switch to non-root user
USER nextjs
# Expose port
EXPOSE 3000
# Health check (BusyBox wget supports -q and --spider, but not GNU-only flags)
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD wget -q --spider http://localhost:3000/health || exit 1
# Use dumb-init to handle signals properly
ENTRYPOINT ["dumb-init", "--"]
CMD ["node", "dist/server.js"]
# Image optimization results:
# - Multi-stage build: Only production files in final image
# - Alpine Linux: Smaller base image (~180MB for node:18-alpine vs ~1GB for node:18)
# - Non-root user: Security best practice
# - Health check: Container orchestration compatibility
# - Signal handling: Graceful shutdowns
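Two assumptions hide in the Dockerfile above: the HEALTHCHECK expects a /health route to exist, and dumb-init only forwards signals — the Node process must still act on them. A minimal sketch of both, assuming Express (the article's app framework is never named):
// src/server.js - health endpoint and graceful shutdown sketch
const express = require("express");

const app = express();

// Route targeted by the Dockerfile HEALTHCHECK
app.get("/health", (req, res) => {
  res.status(200).json({ status: "ok", uptime: process.uptime() });
});

const server = app.listen(process.env.PORT || 3000);

// dumb-init forwards the SIGTERM from `docker stop` to this process;
// stop accepting connections and let in-flight requests finish
process.on("SIGTERM", () => {
  server.close(() => process.exit(0));
  // Force-exit if connections refuse to drain in time
  setTimeout(() => process.exit(1), 10000).unref();
});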
Docker development workflow integration:
#!/bin/bash
# docker-dev-workflow.sh - Professional Docker development integration
set -euo pipefail
# Docker development commands
build_dev() {
echo "🏗️ Building development Docker image..."
# Build with development target
docker build \
--target development \
--tag myapp:dev \
--file Dockerfile.dev \
.
echo "✅ Development image built successfully"
}
build_prod() {
echo "🏗️ Building production Docker image..."
# Get current git commit for tagging
local git_sha=$(git rev-parse --short HEAD)
local version=$(node -p "require('./package.json').version")
# Build production image with multiple tags
docker build \
--target production \
--tag "myapp:${version}" \
--tag "myapp:${git_sha}" \
--tag "myapp:latest" \
--file Dockerfile \
.
echo "✅ Production image built: myapp:${version}"
}
run_dev() {
echo "🚀 Starting development container..."
# Stop existing dev container if running
docker stop myapp-dev 2>/dev/null || true
docker rm myapp-dev 2>/dev/null || true
# Run development container with:
# - Volume mounts for live code reloading
# - Environment variables from .env.local
# - Network access to other services
# - Debug port exposed
docker run \
--name myapp-dev \
--detach \
--publish 3000:3000 \
--publish 9229:9229 \
--volume "$(pwd)/src:/app/src:ro" \
--volume "$(pwd)/public:/app/public:ro" \
--env-file .env.local \
--network myapp-network \
myapp:dev
echo "✅ Development container running at http://localhost:3000"
echo "🔍 Debug port available at localhost:9229"
}
run_prod() {
echo "🚀 Starting production container..."
docker run \
--name myapp-prod \
--detach \
--publish 8080:3000 \
--env NODE_ENV=production \
--env-file .env.production \
--restart unless-stopped \
--memory=512m \
--cpus=1.0 \
myapp:latest
echo "✅ Production container running at http://localhost:8080"
}
test_container() {
echo "🧪 Running container tests..."
# Build test image
docker build \
--target test \
--tag myapp:test \
--file Dockerfile.test \
.
# Run tests in container
docker run \
--rm \
--env NODE_ENV=test \
myapp:test npm run test:ci
echo "✅ Container tests passed"
}
clean_docker() {
echo "🧹 Cleaning up Docker resources..."
# Remove containers
docker rm -f myapp-dev myapp-prod 2>/dev/null || true
# Remove unused images
docker image prune -f
# Remove unused volumes
docker volume prune -f
# Remove unused networks
docker network prune -f
echo "✅ Docker cleanup complete"
}
# Command routing
case "${1:-help}" in
build:dev)
build_dev
;;
build:prod)
build_prod
;;
run:dev)
run_dev
;;
run:prod)
run_prod
;;
test)
test_container
;;
clean)
clean_docker
;;
help|*)
cat << EOF
Docker Development Workflow
Usage: $0 <command>
Commands:
build:dev Build development Docker image
build:prod Build production Docker image
run:dev Run development container with hot reload
run:prod Run production container
test Run tests in container environment
clean Clean up Docker resources
Examples:
$0 build:dev # Build and run development environment
$0 run:dev
$0 build:prod # Build and test production image
$0 test
$0 run:prod
EOF
;;
esac
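The script above builds with --file Dockerfile.dev and a development target that the article never shows. A minimal sketch of such a file, assuming source is bind-mounted at runtime exactly as run_dev does:
# Dockerfile.dev - development image sketch (illustrative)
FROM node:18-alpine AS development

WORKDIR /app

# Install all dependencies, including dev tooling such as nodemon
COPY package*.json ./
RUN npm ci --include=dev

# Source code arrives via the run_dev bind mounts, so nothing else is copied
EXPOSE 3000 9229

# Restart on file changes and expose the inspector for debugging
CMD ["npx", "nodemon", "--inspect=0.0.0.0:9229", "src/server.js"]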
Dockerfile Best Practices: Security and Optimization
Production-Ready Dockerfile Architecture
The Dockerfile that doesn’t embarrass you in production:
# Dockerfile - Professional backend application containerization
# Multi-stage build optimized for security, size, and performance
# ========================================
# Stage 1: Dependencies and Build Tools
# ========================================
FROM node:18-alpine AS base
# Install security updates and required system packages
RUN apk update && apk upgrade && \
apk add --no-cache \
dumb-init \
curl \
ca-certificates && \
rm -rf /var/cache/apk/*
# Create app directory with correct permissions
WORKDIR /app
# Create non-root user early
RUN addgroup -g 1001 -S nodejs && \
adduser -S nextjs -u 1001 -G nodejs
# ========================================
# Stage 2: Development Dependencies
# ========================================
FROM base AS dependencies
# Copy dependency manifests
COPY package*.json ./
COPY tsconfig.json ./
# Install ALL dependencies (including dev dependencies)
RUN npm ci --include=dev && \
npm cache clean --force
# ========================================
# Stage 3: Build Application
# ========================================
FROM dependencies AS build
# Copy source code
COPY src/ ./src/
COPY public/ ./public/
COPY config/ ./config/
# Build application
RUN npm run build
# Run static analysis
RUN npm run lint
RUN npm run type-check
# Run unit tests
RUN npm run test:ci
# Generate production manifest
RUN npm run build:manifest
# ========================================
# Stage 4: Production Dependencies
# ========================================
FROM base AS production-deps
# Copy package files
COPY package*.json ./
# Install ONLY production dependencies
RUN npm ci --only=production && \
npm cache clean --force
# ========================================
# Stage 5: Production Image
# ========================================
FROM base AS production
# Copy production dependencies
COPY --from=production-deps --chown=nextjs:nodejs /app/node_modules ./node_modules
# Copy built application
COPY --from=build --chown=nextjs:nodejs /app/dist ./dist
COPY --from=build --chown=nextjs:nodejs /app/public ./public
# Copy configuration files
COPY --chown=nextjs:nodejs config/production.js ./config/
# Set environment
ENV NODE_ENV=production
ENV PORT=3000
# Switch to non-root user
USER nextjs
# Expose port
EXPOSE 3000
# Configure health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD curl -f http://localhost:3000/health || exit 1
# Use dumb-init for proper signal handling
ENTRYPOINT ["dumb-init", "--"]
CMD ["node", "dist/server.js"]
# ========================================
# Image Optimization Results:
# ========================================
# Base image size: ~180MB (node:18-alpine)
# Final image size: ~200MB (with app)
# Build cache hits: 90%+ on code-only changes
# Security: Non-root user, minimal attack surface
# Performance: Multi-stage build, optimized layers
Advanced Dockerfile patterns for backend applications:
# Dockerfile.advanced - Advanced patterns for complex backend apps
# ========================================
# Build Arguments and Configuration
# ========================================
ARG NODE_VERSION=18
ARG ALPINE_VERSION=3.17
ARG BUILD_DATE
ARG VCS_REF
ARG VERSION
# ========================================
# Base Image with Build Args
# ========================================
FROM node:${NODE_VERSION}-alpine${ALPINE_VERSION} AS base
# Labels for image metadata
LABEL maintainer="team@company.com" \
version="${VERSION}" \
description="Backend API service" \
build-date="${BUILD_DATE}" \
vcs-ref="${VCS_REF}"
# Install system dependencies with pinned versions (pins must be refreshed as Alpine rotates old packages out of its repos)
RUN apk update && apk upgrade && \
apk add --no-cache \
dumb-init=1.2.5-r2 \
curl=8.0.1-r0 \
ca-certificates \
&& rm -rf /var/cache/apk/*
# ========================================
# Security Hardening
# ========================================
FROM base AS security
# Remove unnecessary packages and files
RUN rm -rf /tmp/* /var/tmp/* /usr/share/man /usr/share/doc
# Create secure app directory
WORKDIR /app
# Create minimal user and group
RUN addgroup -g 10001 -S appgroup && \
adduser -S appuser -u 10001 -G appgroup -h /app -s /sbin/nologin
# Set secure directory permissions
RUN chown -R appuser:appgroup /app && \
chmod -R 755 /app
# ========================================
# Development Stage
# ========================================
FROM security AS development
# Install development tools
RUN apk add --no-cache git openssh
# Copy package files
COPY --chown=appuser:appgroup package*.json ./
# Install dependencies with development tools
RUN npm ci --include=dev
# Set development environment
ENV NODE_ENV=development
ENV DEBUG=*
# Switch to app user
USER appuser
# Start with nodemon for development (source code is expected to be bind-mounted at runtime)
CMD ["npx", "nodemon", "--inspect=0.0.0.0:9229", "src/server.js"]
# ========================================
# Testing Stage
# ========================================
FROM development AS testing
# Copy source code
COPY --chown=appuser:appgroup . .
# Run security audit
RUN npm audit --audit-level moderate
# Run linting
RUN npm run lint
# Run type checking
RUN npm run type-check
# Run unit tests
RUN npm run test:unit
# Run integration tests
RUN npm run test:integration
# Generate coverage report
RUN npm run test:coverage
# ========================================
# Build Stage
# ========================================
FROM testing AS builder
# Build application
RUN npm run build
# Optimize built files
RUN npm run optimize
# Generate build manifest
RUN npm run build:manifest
# ========================================
# Production Dependencies
# ========================================
FROM security AS prod-deps
COPY package*.json ./
# Install production dependencies with optimizations
RUN npm ci --only=production --no-audit --no-fund && \
npm cache clean --force
# ========================================
# Production Stage
# ========================================
FROM security AS production
# Copy production dependencies
COPY --from=prod-deps --chown=appuser:appgroup /app/node_modules ./node_modules
# Copy built application
COPY --from=builder --chown=appuser:appgroup /app/dist ./dist
COPY --from=builder --chown=appuser:appgroup /app/public ./public
COPY --from=builder --chown=appuser:appgroup /app/config/production.js ./config/
# Set production environment
ENV NODE_ENV=production \
PORT=3000 \
LOG_LEVEL=info \
METRICS_ENABLED=true
# Switch to app user
USER appuser
# Expose application port
EXPOSE 3000
# Add comprehensive health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=15s --retries=3 \
CMD curl -f -H "Accept: application/json" \
-H "User-Agent: Docker-Healthcheck" \
http://localhost:3000/health/detailed || exit 1
# Use dumb-init for signal handling
ENTRYPOINT ["dumb-init", "--"]
# Start application with production optimizations
CMD ["node", "--enable-source-maps", "--max-old-space-size=512", "dist/server.js"]
# ========================================
# Image Verification
# ========================================
FROM production AS verify
# Verify image integrity
RUN node --version && \
npm --version && \
curl --version && \
id appuser && \
ls -la /app
# Smoke-test application startup (assumes the app exits cleanly when passed --test-startup)
RUN timeout 30s node dist/server.js --test-startup
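The ARG values above do not fill themselves in. A build invocation sketch that supplies them, following this article's naming conventions:
docker build \
  --build-arg NODE_VERSION=18 \
  --build-arg BUILD_DATE="$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --build-arg VCS_REF="$(git rev-parse --short HEAD)" \
  --build-arg VERSION="$(node -p "require('./package.json').version")" \
  --target production \
  --tag myapp:latest \
  --file Dockerfile.advanced \
  .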
Docker security and optimization checklist:
#!/bin/bash
# docker-security-check.sh - Comprehensive Docker security validation
validate_dockerfile() {
echo "🔍 Validating Dockerfile security..."
local dockerfile="${1:-Dockerfile}"
local issues=0
# Check for root user
if ! grep -q "USER " "$dockerfile"; then
echo "❌ No USER directive found - container will run as root"
((issues++))
fi
# Check for specific versions
if grep -q "FROM.*:latest" "$dockerfile"; then
echo "⚠️ Using 'latest' tag - specify exact versions"
((issues++))
fi
# Check for package updates
if ! grep -q "apk update.*apk upgrade" "$dockerfile"; then
echo "⚠️ No system package updates found"
((issues++))
fi
# Check for cleanup
if ! grep -q "rm -rf.*cache\|clean.*cache" "$dockerfile"; then
echo "⚠️ No package cache cleanup found"
((issues++))
fi
# Check for health check
if ! grep -q "HEALTHCHECK" "$dockerfile"; then
echo "⚠️ No health check defined"
((issues++))
fi
if [ $issues -eq 0 ]; then
echo "✅ Dockerfile security validation passed"
else
echo "❌ Found $issues security issues"
return 1
fi
}
scan_image_vulnerabilities() {
local image="${1:-myapp:latest}"
echo "🔍 Scanning image for vulnerabilities..."
# Use trivy for vulnerability scanning
if command -v trivy &> /dev/null; then
trivy image --severity HIGH,CRITICAL "$image"
else
echo "💡 Install trivy for comprehensive vulnerability scanning"
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
aquasec/trivy image --severity HIGH,CRITICAL "$image"
fi
}
optimize_image_size() {
local image="${1:-myapp:latest}"
echo "📊 Analyzing image size optimization..."
# Show image layers and sizes
docker history "$image" --human --no-trunc
# Use dive for detailed layer analysis
if command -v dive &> /dev/null; then
dive "$image"
else
echo "💡 Install dive for detailed layer analysis:"
echo " docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock wagoodman/dive:latest $image"
fi
}
validate_runtime_security() {
local container="${1:-myapp-test}"
echo "🔍 Validating runtime security..."
# Run container for testing
docker run -d --name "$container" myapp:latest
# Check if running as non-root
local user=$(docker exec "$container" whoami)
if [ "$user" = "root" ]; then
echo "❌ Container running as root user"
else
echo "✅ Container running as non-root user: $user"
fi
# Check file permissions
docker exec "$container" ls -la /app
# Check for world-writable directories (BusyBox find has no GNU -writable flag)
local writable=$(docker exec "$container" find /app -type d -perm -0002 2>/dev/null)
if [ -n "$writable" ]; then
echo "⚠️ Writable directories found: $writable"
fi
# Cleanup
docker stop "$container" && docker rm "$container"
}
# Run security checks
validate_dockerfile "$@"
scan_image_vulnerabilities myapp:latest
optimize_image_size myapp:latest
validate_runtime_security myapp-security-test
Container Networking and Volumes: Data and Communication
Professional Container Networking
Container networking that doesn’t break in production:
# Docker networking fundamentals for backend applications
# ========================================
# Network Types and Use Cases
# ========================================
# 1. Bridge Network (default) - Single host communication
docker network create --driver bridge myapp-bridge
# 2. Host Network - Direct host networking (use sparingly)
docker run --network host myapp:latest
# 3. None Network - No networking (for security)
docker run --network none batch-processor:latest
# 4. Custom Networks - Professional service communication
docker network create \
--driver bridge \
--subnet 172.20.0.0/16 \
--gateway 172.20.0.1 \
--opt com.docker.network.bridge.name=myapp-br0 \
myapp-network
# ========================================
# Service Discovery and Communication
# ========================================
# Create network for service discovery
docker network create myapp-services
# Start database with network
docker run -d \
--name postgres-db \
--network myapp-services \
--env POSTGRES_DB=myapp \
--env POSTGRES_USER=postgres \
--env POSTGRES_PASSWORD=secure_password \
postgres:15-alpine
# Start Redis with network
docker run -d \
--name redis-cache \
--network myapp-services \
--env REDIS_PASSWORD=secure_redis_password \
redis:7-alpine
# Start application with service discovery
docker run -d \
--name myapp-api \
--network myapp-services \
--env DATABASE_URL=postgresql://postgres:secure_password@postgres-db:5432/myapp \
--env REDIS_URL=redis://:secure_redis_password@redis-cache:6379 \
--publish 3000:3000 \
myapp:latest
# Services can communicate using container names as hostnames
# postgres-db:5432 - Database connection
# redis-cache:6379 - Cache connection
# myapp-api:3000 - API service
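On the application side nothing special is required to benefit from this service discovery: container names resolve like ordinary hostnames, so the URLs passed in above just work. A sketch assuming the pg and ioredis client libraries:
// db.js - connecting through Docker's DNS-based service discovery
const { Pool } = require("pg");
const Redis = require("ioredis");

// "postgres-db" and "redis-cache" in these URLs resolve to container
// IPs on the shared myapp-services network
const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const redis = new Redis(process.env.REDIS_URL);

async function healthProbe() {
  // One round-trip to each backing service
  await pool.query("SELECT 1");
  await redis.ping();
  return true;
}

module.exports = { pool, redis, healthProbe };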
Advanced networking configuration:
// network-config.js - Professional container networking setup
const { execSync } = require("child_process");
class DockerNetworkManager {
constructor() {
this.networks = new Map();
this.services = new Map();
}
createNetwork(name, config = {}) {
const networkConfig = {
driver: "bridge",
subnet: "172.20.0.0/16",
gateway: "172.20.0.1",
enableIPv6: false,
internal: false,
...config,
};
console.log(`🌐 Creating network: ${name}`);
let createCmd = `docker network create --driver ${networkConfig.driver}`;
if (networkConfig.subnet) {
createCmd += ` --subnet ${networkConfig.subnet}`;
}
if (networkConfig.gateway) {
createCmd += ` --gateway ${networkConfig.gateway}`;
}
if (networkConfig.internal) {
createCmd += ` --internal`;
}
createCmd += ` ${name}`;
// Pre-check existence: with stdio "inherit", Docker's stderr never
// lands in error.message, so "already exists" can't be matched there
try {
execSync(`docker network inspect ${name}`, { stdio: "ignore" });
console.log(`ℹ️ Network ${name} already exists`);
return;
} catch {
// Network does not exist yet - fall through and create it
}
execSync(createCmd, { stdio: "inherit" });
this.networks.set(name, networkConfig);
console.log(`✅ Network ${name} created successfully`);
}
createServiceNetwork() {
console.log("🏗️ Setting up service networking...");
// Frontend network - public facing
this.createNetwork("frontend-network", {
subnet: "172.21.0.0/16",
gateway: "172.21.0.1",
});
// Backend network - internal services
this.createNetwork("backend-network", {
subnet: "172.22.0.0/16",
gateway: "172.22.0.1",
internal: false, // Allow outbound internet
});
// Database network - restricted access
this.createNetwork("database-network", {
subnet: "172.23.0.0/16",
gateway: "172.23.0.1",
internal: true, // No internet access
});
}
configureLoadBalancer() {
console.log("⚖️ Configuring load balancer...");
// Create Nginx load balancer configuration
const nginxConfig = `
events {
worker_connections 1024;
}
http {
upstream backend {
server myapp-api-1:3000;
server myapp-api-2:3000;
server myapp-api-3:3000;
}
server {
listen 80;
server_name localhost;
location / {
proxy_pass http://backend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Health check
proxy_connect_timeout 5s;
proxy_read_timeout 30s;
proxy_send_timeout 30s;
}
location /health {
access_log off;
return 200 "healthy\\n";
}
}
}`;
// Write Nginx config
require("fs").writeFileSync("nginx.conf", nginxConfig);
// Start load balancer (docker run accepts a single --network flag;
// additional networks must be attached after the container exists)
execSync(`
docker run -d \\
--name nginx-lb \\
--network frontend-network \\
--publish 80:80 \\
--volume "$(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro" \\
nginx:alpine
`);
execSync("docker network connect backend-network nginx-lb");
console.log("✅ Load balancer configured");
}
monitorNetwork() {
console.log("📊 Network monitoring commands:");
console.log("\n# List all networks:");
console.log("docker network ls");
console.log("\n# Inspect network details:");
console.log("docker network inspect backend-network");
console.log("\n# View network connectivity:");
console.log("docker exec myapp-api-1 netstat -tlnp");
console.log("\n# Test service discovery:");
console.log("docker exec myapp-api-1 nslookup postgres-db");
console.log("docker exec myapp-api-1 ping -c 3 redis-cache");
console.log("\n# Monitor network traffic:");
console.log("docker stats --format 'table {{.Container}}\\t{{.NetIO}}'");
}
cleanup() {
console.log("🧹 Cleaning up networks...");
// Remove services first
const containers = [
"nginx-lb",
"myapp-api-1",
"myapp-api-2",
"myapp-api-3",
];
containers.forEach((container) => {
try {
execSync(`docker stop ${container} && docker rm ${container}`, {
stdio: "inherit",
});
} catch (error) {
// Container may not exist
}
});
// Remove networks
this.networks.forEach((config, name) => {
try {
execSync(`docker network rm ${name}`, { stdio: "inherit" });
} catch (error) {
console.log(`⚠️ Could not remove network ${name}`);
}
});
console.log("✅ Network cleanup complete");
}
}
// Usage
const networkManager = new DockerNetworkManager();
networkManager.createServiceNetwork();
networkManager.configureLoadBalancer();
networkManager.monitorNetwork();
Data persistence with Docker volumes:
#!/bin/bash
# docker-volumes.sh - Professional data persistence management
setup_data_persistence() {
echo "💾 Setting up data persistence..."
# ========================================
# Volume Types and Use Cases
# ========================================
# 1. Named Volumes - Docker managed, best for production
docker volume create postgres-data
docker volume create redis-data
docker volume create app-logs
# 2. Bind Mounts - Host directory, good for development
mkdir -p ./data/postgres ./data/redis ./logs
# 3. tmpfs Mounts - In-memory, for temporary data
# (configured in docker run command)
}
configure_database_persistence() {
echo "🗄️ Configuring database persistence..."
# PostgreSQL with persistent data
docker run -d \
--name postgres-persistent \
--restart unless-stopped \
--network backend-network \
--volume postgres-data:/var/lib/postgresql/data \
--volume "$(pwd)/backups:/backups" \
--env POSTGRES_DB=myapp \
--env POSTGRES_USER=postgres \
--env POSTGRES_PASSWORD=secure_password \
--env PGDATA=/var/lib/postgresql/data/pgdata \
postgres:15-alpine
# Redis with persistence configuration
docker run -d \
--name redis-persistent \
--restart unless-stopped \
--network backend-network \
--volume redis-data:/data \
--volume "$(pwd)/redis.conf:/usr/local/etc/redis/redis.conf" \
redis:7-alpine redis-server /usr/local/etc/redis/redis.conf
echo "✅ Database persistence configured"
}
setup_application_volumes() {
echo "📁 Setting up application volumes..."
# Application with multiple volume types
docker run -d \
--name myapp-persistent \
--restart unless-stopped \
--network backend-network \
--publish 3000:3000 \
`# Named volume for uploaded files` \
--volume app-uploads:/app/uploads \
`# Named volume for logs` \
--volume app-logs:/app/logs \
`# Bind mount for configuration (development)` \
--volume "$(pwd)/config:/app/config:ro" \
`# tmpfs for temporary files` \
--tmpfs /tmp:rw,noexec,nosuid,size=100m \
--env NODE_ENV=production \
myapp:latest
# docker run accepts a single --network flag; attach the second network afterwards
docker network connect frontend-network myapp-persistent
echo "✅ Application volumes configured"
}
backup_volumes() {
echo "💾 Creating volume backups..."
# Backup PostgreSQL data
docker exec postgres-persistent pg_dump \
-U postgres \
-d myapp \
-f /backups/myapp-$(date +%Y%m%d-%H%M%S).sql
# Backup application uploads
docker run --rm \
--volume app-uploads:/source \
--volume "$(pwd)/backups:/backup" \
alpine tar czf /backup/uploads-$(date +%Y%m%d-%H%M%S).tar.gz -C /source .
# Backup Redis data
docker exec redis-persistent redis-cli BGSAVE
docker cp redis-persistent:/data/dump.rdb "./backups/redis-$(date +%Y%m%d-%H%M%S).rdb"
echo "✅ Backups created in ./backups/"
}
restore_volumes() {
local backup_date=${1:-latest}
echo "📥 Restoring volumes from backup..."
# Stop services for consistent restore
docker stop myapp-persistent postgres-persistent redis-persistent
# Restore PostgreSQL
if [ -f "./backups/myapp-${backup_date}.sql" ]; then
docker start postgres-persistent
sleep 10 # Wait for PostgreSQL to start
docker exec -i postgres-persistent psql -U postgres -d myapp < "./backups/myapp-${backup_date}.sql"
fi
# Restore uploads
if [ -f "./backups/uploads-${backup_date}.tar.gz" ]; then
docker run --rm \
--volume app-uploads:/target \
--volume "$(pwd)/backups:/backup" \
alpine sh -c "cd /target && tar xzf /backup/uploads-${backup_date}.tar.gz"
fi
# Restart services (docker start is a no-op for anything already running)
docker start postgres-persistent redis-persistent myapp-persistent
echo "✅ Volume restore complete"
}
monitor_volume_usage() {
echo "📊 Volume usage monitoring:"
# List all volumes
echo -e "\n📋 All volumes:"
docker volume ls
# Show volume sizes
echo -e "\n💾 Volume sizes:"
docker system df -v
# Inspect specific volumes
echo -e "\n🔍 Volume details:"
for volume in postgres-data redis-data app-uploads app-logs; do
echo "--- $volume ---"
docker volume inspect "$volume" | jq '.[0] | {Name, Mountpoint, CreatedAt}'
done
# Show container volume usage
echo -e "\n📈 Container volume usage:"
docker stats --format "table {{.Container}}\t{{.BlockIO}}\t{{.MemUsage}}" --no-stream
}
cleanup_volumes() {
echo "🧹 Cleaning up volumes..."
# Stop and remove containers
docker stop myapp-persistent postgres-persistent redis-persistent
docker rm myapp-persistent postgres-persistent redis-persistent
# Remove volumes (WARNING: This deletes data!)
read -p "Are you sure you want to delete all data volumes? (y/N): " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
docker volume rm postgres-data redis-data app-uploads app-logs
echo "✅ Volumes removed"
else
echo "Volume cleanup cancelled"
fi
# Clean up unused volumes
docker volume prune -f
echo "✅ Volume cleanup complete"
}
# Command routing
case "${1:-help}" in
setup)
setup_data_persistence
configure_database_persistence
setup_application_volumes
;;
backup)
backup_volumes
;;
restore)
restore_volumes "${2:-latest}"
;;
monitor)
monitor_volume_usage
;;
cleanup)
cleanup_volumes
;;
help|*)
cat << EOF
Docker Volume Management
Usage: $0 <command> [args]
Commands:
setup Set up persistent volumes for all services
backup Create backups of all volumes
restore [date] Restore volumes from backup
monitor Show volume usage and statistics
cleanup Remove all volumes and data
Examples:
$0 setup # Initial volume setup
$0 backup # Create backup
$0 restore 20231201-143022 # Restore specific backup
$0 monitor # Check volume usage
EOF
;;
esac
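Backups only help if they actually run. A crontab sketch that schedules the script above nightly (the install path is illustrative):
# Run nightly volume backups at 02:30 via the script above
30 2 * * * /opt/myapp/docker-volumes.sh backup >> /var/log/myapp-backup.log 2>&1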
Docker Compose: Multi-Service Orchestration
Production-Ready Docker Compose Architecture
Docker Compose that actually scales and doesn’t break:
# docker-compose.yml - Professional multi-service backend orchestration
version: "3.8"
# ========================================
# Networks
# ========================================
networks:
frontend:
driver: bridge
ipam:
driver: default
config:
- subnet: 172.20.0.0/16
backend:
driver: bridge
internal: false
ipam:
driver: default
config:
- subnet: 172.21.0.0/16
database:
driver: bridge
internal: true
ipam:
driver: default
config:
- subnet: 172.22.0.0/16
# ========================================
# Volumes
# ========================================
volumes:
postgres_data:
driver: local
redis_data:
driver: local
elasticsearch_data:
driver: local
app_uploads:
driver: local
app_logs:
driver: local
prometheus_data:
driver: local
grafana_data:
driver: local
# ========================================
# Services
# ========================================
services:
# ========================================
# Reverse Proxy / Load Balancer
# ========================================
nginx:
image: nginx:alpine
container_name: nginx-proxy
restart: unless-stopped
ports:
- "80:80"
- "443:443"
networks:
- frontend
- backend
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
- ./nginx/ssl:/etc/nginx/ssl:ro
- ./nginx/logs:/var/log/nginx
depends_on:
- app
healthcheck:
# nginx:alpine ships BusyBox wget but not curl
test: ["CMD", "wget", "-qO-", "http://localhost/health"]
interval: 30s
timeout: 10s
retries: 3
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
# ========================================
# Application Service
# ========================================
app:
build:
context: .
dockerfile: Dockerfile
target: production
container_name: myapp-api
restart: unless-stopped
networks:
- backend
- database
volumes:
- app_uploads:/app/uploads
- app_logs:/app/logs
environment:
- NODE_ENV=production
- DATABASE_URL=postgresql://postgres:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}
- REDIS_URL=redis://:${REDIS_PASSWORD}@redis:6379
- ELASTICSEARCH_URL=http://elasticsearch:9200
- JWT_SECRET=${JWT_SECRET}
- SESSION_SECRET=${SESSION_SECRET}
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
elasticsearch:
condition: service_healthy
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
deploy:
resources:
limits:
memory: 512M
cpus: "1.0"
reservations:
memory: 256M
cpus: "0.5"
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
# ========================================
# Database Services
# ========================================
postgres:
image: postgres:15-alpine
container_name: postgres-db
restart: unless-stopped
networks:
- database
volumes:
- postgres_data:/var/lib/postgresql/data
- ./database/init:/docker-entrypoint-initdb.d
- ./database/backups:/backups
environment:
- POSTGRES_DB=${POSTGRES_DB}
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- PGDATA=/var/lib/postgresql/data/pgdata
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
interval: 10s
timeout: 5s
retries: 5
deploy:
resources:
limits:
memory: 1G
cpus: "2.0"
reservations:
memory: 512M
cpus: "1.0"
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
redis:
image: redis:7-alpine
container_name: redis-cache
restart: unless-stopped
networks:
- database
volumes:
- redis_data:/data
- ./redis/redis.conf:/usr/local/etc/redis/redis.conf
environment:
- REDIS_PASSWORD=${REDIS_PASSWORD}
command: redis-server /usr/local/etc/redis/redis.conf --requirepass ${REDIS_PASSWORD}
healthcheck:
test:
[
"CMD",
"redis-cli",
"--no-auth-warning",
"-a",
"${REDIS_PASSWORD}",
"ping",
]
interval: 10s
timeout: 3s
retries: 3
deploy:
resources:
limits:
memory: 256M
cpus: "0.5"
reservations:
memory: 128M
cpus: "0.25"
# ========================================
# Search and Analytics
# ========================================
elasticsearch:
image: elasticsearch:8.10.0
container_name: elasticsearch-search
restart: unless-stopped
networks:
- database
volumes:
- elasticsearch_data:/usr/share/elasticsearch/data
environment:
- discovery.type=single-node
- xpack.security.enabled=false
- "ES_JAVA_OPTS=-Xms512m -Xmx1g"
healthcheck:
test:
["CMD-SHELL", "curl -f http://localhost:9200/_cluster/health || exit 1"]
interval: 30s
timeout: 10s
retries: 5
deploy:
resources:
limits:
memory: 1.5G
cpus: "2.0"
reservations:
memory: 1G
cpus: "1.0"
# ========================================
# Message Queue
# ========================================
rabbitmq:
image: rabbitmq:3-management-alpine
container_name: rabbitmq-queue
restart: unless-stopped
networks:
- database
volumes:
- ./rabbitmq/data:/var/lib/rabbitmq
environment:
- RABBITMQ_DEFAULT_USER=${RABBITMQ_USER}
- RABBITMQ_DEFAULT_PASS=${RABBITMQ_PASSWORD}
- RABBITMQ_DEFAULT_VHOST=${RABBITMQ_VHOST}
ports:
- "15672:15672" # Management UI
healthcheck:
test: ["CMD", "rabbitmq-diagnostics", "ping"]
interval: 30s
timeout: 10s
retries: 3
# ========================================
# Background Job Processor
# ========================================
worker:
build:
context: .
dockerfile: Dockerfile
target: production
container_name: myapp-worker
restart: unless-stopped
networks:
- database
environment:
- NODE_ENV=production
- WORKER_MODE=true
- DATABASE_URL=postgresql://postgres:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}
- REDIS_URL=redis://:${REDIS_PASSWORD}@redis:6379
- RABBITMQ_URL=amqp://${RABBITMQ_USER}:${RABBITMQ_PASSWORD}@rabbitmq:5672/${RABBITMQ_VHOST}
command: ["node", "dist/worker.js"]
depends_on:
postgres:
condition: service_healthy
redis:
condition: service_healthy
rabbitmq:
condition: service_healthy
deploy:
resources:
limits:
memory: 256M
cpus: "0.5"
reservations:
memory: 128M
cpus: "0.25"
# ========================================
# Monitoring and Observability
# ========================================
prometheus:
image: prom/prometheus:latest
container_name: prometheus-metrics
restart: unless-stopped
networks:
- backend
ports:
- "9090:9090"
volumes:
- ./monitoring/prometheus.yml:/etc/prometheus/prometheus.yml
- prometheus_data:/prometheus
command:
- "--config.file=/etc/prometheus/prometheus.yml"
- "--storage.tsdb.path=/prometheus"
- "--web.console.libraries=/usr/share/prometheus/console_libraries"
- "--web.console.templates=/usr/share/prometheus/consoles"
- "--web.enable-lifecycle"
grafana:
image: grafana/grafana:latest
container_name: grafana-dashboard
restart: unless-stopped
networks:
- backend
ports:
- "3001:3000"
volumes:
- grafana_data:/var/lib/grafana
- ./monitoring/grafana/provisioning:/etc/grafana/provisioning
environment:
- GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PASSWORD}
- GF_INSTALL_PLUGINS=grafana-piechart-panel
depends_on:
- prometheus
# ========================================
# Development and Testing Services
# ========================================
mailhog:
image: mailhog/mailhog:latest
container_name: mailhog-smtp
restart: unless-stopped
networks:
- backend
ports:
- "1025:1025" # SMTP
- "8025:8025" # Web UI
profiles:
- dev
- testing
# ========================================
# Backup Service
# ========================================
backup:
image: postgres:15-alpine
container_name: postgres-backup
networks:
- database
volumes:
- ./database/backups:/backups
- ./scripts/backup.sh:/backup.sh
environment:
- POSTGRES_DB=${POSTGRES_DB}
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
command: ["sh", "/backup.sh"]
depends_on:
postgres:
condition: service_healthy
profiles:
- backup
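All the ${...} references above are resolved from a .env file sitting next to docker-compose.yml. A sketch of the variables this particular file expects — values are placeholders, so generate real secrets:
# .env - Compose variable definitions (never commit real values)
POSTGRES_DB=myapp
POSTGRES_USER=postgres
POSTGRES_PASSWORD=change-me
REDIS_PASSWORD=change-me
RABBITMQ_USER=rabbit
RABBITMQ_PASSWORD=change-me
RABBITMQ_VHOST=myapp
JWT_SECRET=generate-a-long-random-string
SESSION_SECRET=generate-a-long-random-string
GRAFANA_PASSWORD=change-me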
Advanced Docker Compose management scripts:
#!/bin/bash
# docker-compose-manager.sh - Professional Docker Compose orchestration
set -euo pipefail
# Load environment variables
if [ -f ".env" ]; then
set -a
source .env
set +a
fi
# Configuration
COMPOSE_FILE="${COMPOSE_FILE:-docker-compose.yml}"
PROJECT_NAME="${PROJECT_NAME:-myapp}"
ENVIRONMENT="${ENVIRONMENT:-production}"
deploy_stack() {
echo "🚀 Deploying full application stack..."
# Validate environment
validate_environment
# Create necessary directories
mkdir -p nginx/logs database/backups rabbitmq/data redis monitoring/grafana/provisioning
# Generate configurations if needed
generate_configs
# Start services with dependency order
echo "📦 Starting core services..."
docker-compose up -d postgres redis elasticsearch rabbitmq
# Wait for core services to be healthy
wait_for_services postgres redis elasticsearch rabbitmq
echo "🖥️ Starting application services..."
docker-compose up -d app worker
# Wait for application to be ready
wait_for_services app
echo "⚖️ Starting load balancer..."
docker-compose up -d nginx
# Start monitoring stack
if [ "$ENVIRONMENT" = "production" ]; then
echo "📊 Starting monitoring..."
docker-compose up -d prometheus grafana
fi
# Show deployment status
show_deployment_status
echo "✅ Stack deployment complete!"
}
validate_environment() {
echo "🔍 Validating environment configuration..."
local required_vars=(
"POSTGRES_DB"
"POSTGRES_USER"
"POSTGRES_PASSWORD"
"REDIS_PASSWORD"
"JWT_SECRET"
"SESSION_SECRET"
)
local missing_vars=()
for var in "${required_vars[@]}"; do
if [ -z "${!var:-}" ]; then
missing_vars+=("$var")
fi
done
if [ ${#missing_vars[@]} -ne 0 ]; then
echo "❌ Missing required environment variables: ${missing_vars[*]}"
echo "Please create a .env file with all required variables"
exit 1
fi
echo "✅ Environment validation passed"
}
generate_configs() {
echo "⚙️ Generating configuration files..."
# Generate Nginx configuration
cat > nginx/nginx.conf << 'EOF'
events {
worker_connections 1024;
}
http {
upstream backend {
server app:3000;
keepalive 32;
}
server {
listen 80;
server_name _;
location / {
proxy_pass http://backend;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
}
location /health {
access_log off;
return 200 "healthy\n";
}
}
}
EOF
# Generate Redis configuration
cat > redis/redis.conf << EOF
# Redis configuration for production
maxmemory 256mb
maxmemory-policy allkeys-lru
save 900 1
save 300 10
save 60 10000
appendonly yes
appendfsync everysec
EOF
# Generate Prometheus configuration
mkdir -p monitoring
cat > monitoring/prometheus.yml << 'EOF'
global:
scrape_interval: 15s
evaluation_interval: 15s
scrape_configs:
- job_name: 'myapp'
static_configs:
- targets: ['app:3000']
metrics_path: '/metrics'
scrape_interval: 10s
# The targets below assume postgres_exporter and redis_exporter sidecars;
# the raw services do not expose Prometheus metrics themselves
- job_name: 'postgres'
static_configs:
- targets: ['postgres:5432']
- job_name: 'redis'
static_configs:
- targets: ['redis:6379']
EOF
echo "✅ Configuration files generated"
}
wait_for_services() {
local services=("$@")
for service in "${services[@]}"; do
echo "⏳ Waiting for $service to be healthy..."
local timeout=60
local counter=0
while [ $counter -lt $timeout ]; do
# Check the container's actual health status rather than just exec-ing into it
local status
status=$(docker inspect --format '{{if .State.Health}}{{.State.Health.Status}}{{else}}{{.State.Status}}{{end}}' \
"$(docker-compose ps -q "$service")" 2>/dev/null || echo "starting")
if [ "$status" = "healthy" ] || [ "$status" = "running" ]; then
echo "✅ $service is ready"
break
fi
sleep 2
((counter += 2))
done
if [ $counter -ge $timeout ]; then
echo "❌ $service failed to become healthy"
show_service_logs "$service"
exit 1
fi
done
}
show_deployment_status() {
echo ""
echo "📊 Deployment Status:"
echo "━━━━━━━━━━━━━━━━━━━━━━━━"
docker-compose ps --format "table {{.Service}}\t{{.Status}}\t{{.Ports}}"
echo ""
echo "🌐 Service URLs:"
echo " Application: http://localhost"
echo " Grafana: http://localhost:3001"
echo " Prometheus: http://localhost:9090"
echo " RabbitMQ: http://localhost:15672"
if docker-compose ps mailhog &>/dev/null; then
echo " MailHog: http://localhost:8025"
fi
echo ""
echo "📈 Resource Usage:"
docker stats --format "table {{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}" --no-stream
}
show_service_logs() {
local service=$1
echo "📜 Recent logs for $service:"
docker-compose logs --tail=20 "$service"
}
backup_data() {
echo "💾 Creating data backup..."
local backup_dir="./backups/$(date +%Y%m%d-%H%M%S)"
mkdir -p "$backup_dir"
# Database backup
docker-compose exec -T postgres pg_dump \
-U "$POSTGRES_USER" \
-d "$POSTGRES_DB" > "$backup_dir/database.sql"
# Application uploads backup (Compose prefixes volume names with the
# project name, so this assumes COMPOSE_PROJECT_NAME=$PROJECT_NAME)
docker run --rm \
--volume "${PROJECT_NAME}_app_uploads:/source" \
--volume "$(pwd)/$backup_dir:/backup" \
alpine tar czf /backup/uploads.tar.gz -C /source .
echo "✅ Backup created in $backup_dir"
}
scale_services() {
local service="${1:-app}"
local replicas="${2:-3}"
echo "📈 Scaling $service to $replicas replicas..."
docker-compose up -d --scale "$service=$replicas" "$service"
# Update load balancer if scaling app
if [ "$service" = "app" ]; then
docker-compose restart nginx
fi
echo "✅ $service scaled to $replicas replicas"
}
health_check() {
echo "🏥 Running health checks..."
local failed_services=()
# Check each service
for service in $(docker-compose config --services); do
if docker-compose ps "$service" | grep -q "healthy"; then
echo "✅ $service: healthy"
elif docker-compose ps "$service" | grep -Eq "Up|running"; then
echo "⚠️ $service: running (no health check)"
else
echo "❌ $service: not running"
failed_services+=("$service")
fi
done
if [ ${#failed_services[@]} -ne 0 ]; then
echo "❌ Failed services: ${failed_services[*]}"
return 1
fi
echo "✅ All services are healthy"
}
teardown_stack() {
echo "🛑 Tearing down application stack..."
# Stop services gracefully
docker-compose stop
# Remove containers
docker-compose down --remove-orphans
# Remove volumes if requested
if [ "${1:-}" = "--volumes" ]; then
echo "⚠️ Removing all data volumes..."
docker-compose down --volumes
fi
# Clean up unused resources
docker system prune -f
echo "✅ Stack teardown complete"
}
# Command routing
case "${1:-help}" in
deploy)
deploy_stack
;;
validate)
validate_environment
;;
status)
show_deployment_status
;;
logs)
show_service_logs "${2:-app}"
;;
backup)
backup_data
;;
scale)
scale_services "${2:-app}" "${3:-3}"
;;
health)
health_check
;;
teardown)
teardown_stack "${2:-}"
;;
help|*)
cat << EOF
Docker Compose Stack Manager
Usage: $0 <command> [args]
Commands:
deploy Deploy full application stack
validate Validate environment configuration
status Show deployment status and URLs
logs [service] Show logs for specific service
backup Create data backup
scale [service] [replicas] Scale service to N replicas
health Run health checks on all services
teardown [--volumes] Stop and remove stack
Examples:
$0 deploy # Deploy full stack
$0 scale app 5 # Scale app to 5 instances
$0 logs postgres # Show database logs
$0 teardown --volumes # Remove everything including data
EOF
;;
esac
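The Prometheus job generated by the script scrapes app:3000/metrics, an endpoint the application must expose itself. A minimal sketch using the prom-client library (Express and prom-client are assumptions — neither appears elsewhere in this article):
// metrics.js - expose Prometheus metrics for the scrape job above
const express = require("express");
const client = require("prom-client");

// Collect default Node.js process metrics (heap, event loop lag, GC)
client.collectDefaultMetrics();

const app = express();

app.get("/metrics", async (req, res) => {
  res.set("Content-Type", client.register.contentType);
  res.end(await client.register.metrics());
});

app.listen(3000);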
Key Takeaways
Professional containerization transforms deployment from error-prone manual processes into predictable, automated systems that ensure consistency across all environments. Docker provides the foundation for modern application packaging, while Docker Compose orchestrates complex multi-service applications with production-ready configurations.
The containerization mastery mindset:
- Consistency eliminates surprises: Containerized applications run identically across development, staging, and production
- Isolation prevents conflicts: Each service runs in its own environment without dependency conflicts
- Scalability becomes simple: Container orchestration makes horizontal scaling straightforward
- Security improves through best practices: Non-root users, minimal attack surfaces, and proper network isolation
What distinguishes professional containerization:
- Multi-stage Docker builds that create secure, optimized images
- Comprehensive networking that handles service discovery and load balancing
- Persistent data management with backup and recovery strategies
- Docker Compose orchestration that manages complex service dependencies
- Monitoring and health checks that ensure system reliability
What’s Next
This article covered containerization fundamentals, Docker best practices, networking, volumes, and Docker Compose orchestration. The next article advances to container orchestration at scale with Kubernetes concepts, production deployment strategies, container security hardening, image optimization techniques, and registry management for enterprise environments.
You’re no longer deploying applications with crossed fingers—you’re shipping containerized systems that run consistently everywhere. The foundation is solid. Now we scale to production orchestration.