Understanding Docker and Docker Compose: A Comprehensive Guide
This article covers Docker, one of the most popular containerization technologies and one that every developer should know and master.
1. What is Docker?
Docker is a platform that enables developers to package applications and their dependencies into lightweight, portable containers. Think of a container as a standardized unit that includes everything needed to run a piece of software: code, runtime, system tools, libraries, and settings.
The Core Problem Docker Solves: Have you ever heard developers say “it works on my machine”? Docker eliminates this problem by ensuring that applications run identically across different environments—whether on your laptop, a colleague’s workstation, or a production server.
Key Concepts in Docker
- Images: Read-only templates that contain the application and its dependencies
- Containers: Running instances of images—lightweight, isolated processes
- Dockerfile: A text file with instructions to build a Docker image
- Docker Engine: The runtime that executes and manages containers
 
2. What is Docker Compose?
Docker Compose is a tool for defining and running multi-container Docker applications. While Docker handles individual containers, Docker Compose orchestrates multiple containers that work together.
2.1 Why Docker Compose?
Let me ask you this: What happens when your application needs multiple services? For example, a web application might need:
- A web server (Node.js, Python, etc.)
- A database (PostgreSQL, MongoDB)
- A cache layer (Redis)
- A reverse proxy (Nginx)
 
Running each service individually with separate docker run commands becomes tedious and error-prone. Docker Compose solves this by letting you define all services in a single YAML file.
3. Key Differences
| Aspect | Docker | Docker Compose | 
|---|---|---|
| Purpose | Manages individual containers | Orchestrates multiple containers | 
| Configuration | Command-line or Dockerfile | YAML configuration file | 
| Use Case | Single-container applications | Multi-container applications | 
| Networking | Manual network setup | Automatic network creation | 
| Startup | One container at a time | All services with one command | 
4. Benefits of Docker Technology
4.1. Consistency Across Environments
Eliminates the “works on my machine” problem by ensuring identical environments everywhere.
4.2. Isolation
Each container runs independently, preventing conflicts between different applications or versions.
4.3. Efficiency
Containers share the host OS kernel, making them more lightweight than traditional virtual machines.
4.4. Rapid Deployment
Start, stop, and rebuild applications in seconds rather than minutes.
4.5. Scalability
Easily replicate containers to handle increased load.
4.6. Version Control
Docker images can be versioned, making rollbacks simple and reliable.
5. Practical Example: Dockerizing a Full-Stack Application
Let me walk you through dockerizing a real-world application: a Node.js/Express API with a PostgreSQL database and Redis for caching.
5.1 Project Structure
```
my-app/
```
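Based on the files created in the steps that follow, the layout looks roughly like this (the tree below is an illustrative sketch, not an exhaustive listing):

```
my-app/
├── backend/
│   ├── src/
│   │   └── server.js
│   ├── package.json
│   └── Dockerfile
├── docker-compose.yml
├── init.sql
└── .env
```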
5.2 Steps
Step 1: Create the Backend Application
backend/src/server.js
```js
const express = require('express');
```
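A minimal version of the server, sketched here for illustration (the /health route and port are assumptions, chosen to match the test commands later in this section):

```js
const express = require('express');

const app = express();
app.use(express.json());

// Health-check endpoint used later to verify the stack is running
app.get('/health', (req, res) => {
  res.json({ status: 'ok' });
});

// Connection details for PostgreSQL and Redis would be read from
// environment variables (DB_HOST, REDIS_HOST, ...) set in docker-compose.yml
const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
  console.log(`API listening on port ${PORT}`);
});
```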
backend/package.json
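A minimal package.json for the service might look like this (version numbers are illustrative; client libraries such as pg and redis would be added if the API uses them):

```json
{
  "name": "backend",
  "version": "1.0.0",
  "main": "src/server.js",
  "scripts": {
    "start": "node src/server.js"
  },
  "dependencies": {
    "express": "^4.18.2"
  }
}
```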
Step 2: Create the Dockerfile
backend/Dockerfile
```dockerfile
# Use official Node.js runtime as base image
```
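A Dockerfile in the spirit of that comment, following the layer-caching practice explained next, could look like this sketch (the base image, port, and start command are assumptions):

```dockerfile
# Use official Node.js runtime as base image
FROM node:18-alpine

# Set the working directory inside the container
WORKDIR /app

# Copy dependency manifests first so this layer stays cached
# as long as package*.json does not change
COPY package*.json ./

# Install production dependencies
RUN npm install --production

# Copy the application source code
COPY . .

# Document the port the API listens on
EXPOSE 3000

# Start the server
CMD ["node", "src/server.js"]
```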
**Note:** The comments explain what each instruction does. Understanding the layered nature of Docker images is crucial: each instruction creates a new layer.
Why copy package.json separately? This is a Docker best practice called “layer caching.” Docker caches each step. If your code changes but dependencies don’t, Docker skips reinstalling everything. This makes rebuilds MUCH faster.
Step 3: Create the Docker Compose Configuration
docker-compose.yml
```yaml
version: '3.8'
```
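The rest of the file is not shown here, but a sketch matching the services, networking, volumes, environment variables, and dependencies described in section 5.3 might look like this (image tags, ports, and credentials are illustrative):

```yaml
version: '3.8'

services:
  backend:
    build: ./backend
    ports:
      - "3000:3000"                 # expose the API to the host
    environment:
      DB_HOST: postgres             # service names double as hostnames
      DB_PASSWORD: ${DB_PASSWORD}   # read from .env
      REDIS_HOST: redis
    volumes:
      - ./backend/src:/app/src      # hot-reload during development
    depends_on:
      - postgres
      - redis

  postgres:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - pgdata:/var/lib/postgresql/data                    # persist database files
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql    # run on first start

  redis:
    image: redis:7

volumes:
  pgdata:
```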
Step 4: Create Database Initialization Script
init.sql
```sql
-- Create users table
```
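Beyond the comment shown above, a minimal script could simply create the table the API expects (the columns are assumptions):

```sql
-- Create users table
CREATE TABLE IF NOT EXISTS users (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100) NOT NULL,
    email VARCHAR(255) UNIQUE NOT NULL,
    created_at TIMESTAMP DEFAULT NOW()
);
```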
Step 5: Running the Application
Here’s where Docker Compose shows its power. Instead of running multiple docker run commands, you execute just one:
```bash
# Start all services
```
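Typical commands look like this (shown with the docker compose plugin syntax; older installations use the hyphenated docker-compose binary):

```bash
# Start all services in the background, building images as needed
docker compose up -d --build

# Follow the logs of every service
docker compose logs -f

# Stop and remove the containers and the default network
docker compose down
```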
5.3 Understanding the Docker Compose Configuration
Let me break down what’s happening here—can you identify why we use depends_on? It ensures services start in the correct order, though it doesn’t wait for a service to be “ready,” only “started.”
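If a service must wait until a dependency is actually ready, newer versions of Docker Compose support the long form of depends_on combined with a healthcheck; a sketch:

```yaml
services:
  backend:
    depends_on:
      postgres:
        condition: service_healthy   # wait until the healthcheck below passes

  postgres:
    image: postgres:15
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
```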
Key Features Demonstrated:
- Service Definition: Three services (backend, postgres, redis) working together
- Networking: Automatic network creation allows services to communicate using service names
- Volume Mounting: Data persistence for PostgreSQL and hot-reload for development
- Environment Variables: Configuration without hardcoding values
- Port Mapping: Exposing services to the host machine
- Dependencies: Ensuring proper startup order
 
5.4 Testing the Application
```bash
# Check health
```
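Assuming the API exposes a /health endpoint on port 3000, as in the server sketch above, a quick check looks like this:

```bash
# Check health of the API
curl http://localhost:3000/health

# List the running services and their status
docker compose ps

# Inspect the logs of a single service
docker compose logs backend
```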
6. Advanced Docker Compose Features
6.1 Environment Files
Create a .env file for sensitive data:
```
DB_PASSWORD=secretpassword
```
Reference in docker-compose.yml:
```yaml
environment:
```
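For example, using the variable defined in the .env file above:

```yaml
environment:
  DB_PASSWORD: ${DB_PASSWORD}   # substituted from the .env file at startup
```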
6.2 Multiple Compose Files
```bash
# Development
```
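A common pattern is a base file plus an override per environment (the file names below are the conventional ones):

```bash
# Development: docker-compose.yml and docker-compose.override.yml are merged automatically
docker compose up -d

# Production: explicitly combine the base file with a production override
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
```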
7. Docker and Docker Compose Best Practices
Use .dockerignore: Exclude unnecessary files from the build context
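A typical .dockerignore for a Node.js project might contain entries like these (adjust to your repository):

```
node_modules
npm-debug.log
.git
.env
Dockerfile
docker-compose.yml
```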
Multi-stage builds: Reduce image size by separating build and runtime dependencies
Single-Stage Build (Inefficient)

```dockerfile
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install       # Installs ALL dependencies (including dev tools)
COPY . .
RUN npm run build     # Builds your app
CMD ["npm", "start"]

# Problem: Final image includes build tools, dev dependencies, source code
# Image size: ~500MB
```

Multi-Stage Build (Efficient)

```dockerfile
# Stage 1: BUILD - Use full Node.js with build tools
FROM node:18 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm install       # All dependencies
COPY . .
RUN npm run build     # Creates /app/dist

# Stage 2: RUN - Use a lightweight Node.js base image, copy only what's needed
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install --production            # Only production dependencies
COPY --from=builder /app/dist ./dist    # Copy ONLY the built files
CMD ["node", "dist/server.js"]

# Result: Final image only has production code and dependencies
# Image size: ~150MB
```

Key insight: The final image only contains what's in the LAST stage. Everything from earlier stages is discarded.
Don’t run as root: Create non-root users in containers for security
Use specific image tags: Avoid latest for reproducibility
Health checks: Implement health checks for better orchestration
Named volumes: Use named volumes for important data
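To illustrate the non-root and pinned-tag points in one place, a Dockerfile might end like this sketch (the tag and user name are examples):

```dockerfile
# Pin a specific tag instead of relying on latest
FROM node:18.19-alpine

WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY . .

# Create an unprivileged user and switch to it
RUN addgroup -S app && adduser -S app -G app
USER app

CMD ["node", "src/server.js"]
```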