Setting up Docker on EC2 instances seems straightforward until you realize each AMI (Amazon Machine Image) has its quirks, package managers, and permission models. I've spent countless hours troubleshooting "docker: command not found" errors and permission issues across different AMI types, and this guide consolidates everything I've learned.
Whether you're deploying microservices, running CI/CD pipelines, or containerizing applications, this guide will help you get Docker running smoothly on any EC2 AMI.
Why Docker on EC2?
Before diving into setup instructions, let's understand why this combination is powerful:
Cost Efficiency: Run multiple containerized applications on a single EC2 instance instead of provisioning separate instances for each service.
Portability: Build once, deploy anywhere. Your Docker images work identically across development, staging, and production environments.
Resource Isolation: Containers provide process and filesystem isolation without the overhead of full virtual machines.
Rapid Deployment: Deploy new versions in seconds by pulling updated images, not minutes by provisioning new instances.
Overview: EC2 AMI Types
Amazon offers several AMI types, each with different package managers and default configurations:
- Amazon Linux 2: Older, stable, uses yum package manager
- Amazon Linux 2023: Modern, optimized for AWS, uses dnf package manager
- Ubuntu Server: Popular, uses apt package manager, extensive community support
- Red Hat Enterprise Linux (RHEL): Enterprise-grade, uses yum/dnf, commercial support
- CentOS/Rocky Linux: Community alternatives to RHEL
Each requires slightly different setup procedures, which I'll cover in detail.
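The differences mostly come down to which package manager to call. For provisioning scripts that must work across AMIs, a small helper (my own sketch, not part of any AWS tooling) can pick the right one by reading /etc/os-release:

```shell
#!/bin/sh
# Sketch: map an os-release file to the package manager used in this guide.
# The file path is a parameter so the function is easy to test.
detect_pkg_mgr() {
  . "${1:-/etc/os-release}"
  case "$ID" in
    amzn)
      # Amazon Linux 2 reports VERSION_ID=2; AL2023 reports 2023
      if [ "$VERSION_ID" = "2" ]; then echo yum; else echo dnf; fi ;;
    ubuntu|debian) echo apt ;;
    rhel|centos|rocky) echo dnf ;;
    *) echo unknown ;;
  esac
}

# Example: detect_pkg_mgr   # reads /etc/os-release on the instance
```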
Prerequisites
Before starting, ensure you have:
- An active AWS account
- An EC2 instance launched with your chosen AMI
- SSH access to your instance (PEM key or Session Manager)
- Security group allowing inbound SSH (port 22)
- (Optional) Security group rules for your containerized applications
Amazon Linux 2: Docker Setup
Amazon Linux 2 is widely used but approaching end-of-life. Here's the battle-tested setup:
Step 1: Update System Packages
Always start with a clean system update:
```
sudo yum update -y
```
This ensures you have the latest security patches and package definitions.
Step 2: Install Docker
Amazon Linux 2 includes Docker in its extras repository:
```
sudo amazon-linux-extras install docker -y
```
Why not `yum install docker`? The amazon-linux-extras repository provides newer, AWS-optimized packages compared to the base repos.
Step 3: Start and Enable Docker Service
```
sudo systemctl start docker
sudo systemctl enable docker
```
The enable command ensures Docker starts automatically after reboots.
Step 4: Add User to Docker Group
By default, Docker requires root privileges. Add your user to the docker group:
```
sudo usermod -a -G docker ec2-user
```
Critical: Log out and log back in for group changes to take effect:
```
exit
# SSH back into your instance
```
Step 5: Verify Installation
```
docker --version
docker ps
```
If you see "permission denied," you forgot to re-login after the usermod command.
Common Amazon Linux 2 Issues
Issue 1: Docker daemon not starting
Check the logs:
```
sudo journalctl -u docker.service -n 50
```
Common fix: insufficient disk space. Check with `df -h` and clean up /var/lib/docker/ if needed.
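If you script this check (say, for a cron alert), the use% of the filesystem holding /var/lib/docker can be pulled out of df's portable output. A small sketch — `disk_pct_used` is my own helper name, not a standard tool:

```shell
# Print the filesystem use% (number only) for a given path.
disk_pct_used() {
  df -P "${1:-/var/lib/docker}" | awk 'NR==2 { gsub(/%/, ""); print $5 }'
}

# Example: warn when the Docker data dir's filesystem is over 90% full
# [ "$(disk_pct_used /var/lib/docker)" -gt 90 ] && echo "low disk!"
```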
Issue 2: Containers can't reach internet
Solution: Enable IP forwarding:
```
sudo sysctl -w net.ipv4.ip_forward=1
echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf
```
Amazon Linux 2023: Docker Setup
Amazon Linux 2023 (AL2023) is the modern successor with improved performance and security. Setup differs slightly:
Step 1: Update System
```
sudo dnf update -y
```
Note: AL2023 uses dnf instead of yum (though yum is aliased to dnf).
Step 2: Install Docker
AL2023 doesn't use amazon-linux-extras. Install from the default repository:
```
sudo dnf install docker -y
```
Step 3: Start and Enable Docker
```
sudo systemctl start docker
sudo systemctl enable docker
```
Step 4: User Permissions
```
sudo usermod -a -G docker ec2-user
newgrp docker  # Applies the group without re-login
```
Pro tip: The newgrp docker command applies the group membership immediately in your current session.
Step 5: Configure Docker Daemon
Create a daemon configuration for better defaults:
```
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json > /dev/null <<EOF
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "storage-driver": "overlay2"
}
EOF
```
This prevents logs from filling your disk and uses the efficient overlay2 storage driver.
Restart Docker to apply:
```
sudo systemctl restart docker
```
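One gotcha: a malformed daemon.json stops the Docker daemon from starting at all, so it's worth validating the file before the restart. A sketch using Python's stdlib JSON parser — `validate_daemon_json` is my own helper name:

```shell
# Return 0 if the given file parses as JSON, non-zero otherwise.
validate_daemon_json() {
  python3 -m json.tool "${1:-/etc/docker/daemon.json}" > /dev/null 2>&1
}

# Example: only restart if the config is valid
# validate_daemon_json && sudo systemctl restart docker
```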
AL2023-Specific Considerations
SELinux is enabled by default (in permissive mode; some hardened setups switch it to enforcing). If containers fail with permission errors:
Check SELinux status:
```
getenforce
```
Temporarily disable it (not recommended for production): `sudo setenforce 0`
Better solution: use proper volume mount options:
```
docker run -v /host/path:/container/path:z myimage
```
The :z flag tells Docker to apply appropriate SELinux labels.
Ubuntu Server: Docker Setup
Ubuntu is the most popular choice for Docker deployments due to excellent documentation and community support.
Step 1: Update Package Index
```
sudo apt update
sudo apt upgrade -y
```
Step 2: Install Prerequisites
```
sudo apt install -y ca-certificates curl gnupg lsb-release
```
Step 3: Add Docker's Official GPG Key
```
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
```
Step 4: Add Docker Repository
```
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
```
Step 5: Install Docker Engine
```
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```
This installs:
- docker-ce: Docker Engine
- docker-ce-cli: Command-line interface
- containerd.io: Container runtime
- docker-buildx-plugin: Extended build capabilities
- docker-compose-plugin: Multi-container orchestration
Step 6: Start Docker
```
sudo systemctl start docker
sudo systemctl enable docker
```
Step 7: User Permissions
```
sudo usermod -aG docker ubuntu
newgrp docker
```
Note: The default user is ubuntu, not ec2-user.
Step 8: Verify Installation
```
docker run hello-world
```
If successful, you'll see the "Hello from Docker!" message.
Ubuntu-Specific Tips
Issue: Old Docker from snap
Ubuntu might have Docker installed via snap. Remove it first:
```
sudo snap remove docker
```
Systemd integration: Ubuntu uses systemd, so you can check Docker status with:
```
sudo systemctl status docker
```
Red Hat Enterprise Linux (RHEL): Docker Setup
RHEL users should actually use Podman (Red Hat's Docker alternative) as Docker is no longer officially supported. However, if you need Docker:
Option 1: Install Podman (Recommended)
```
sudo dnf install -y podman podman-docker
```
The podman-docker package provides a docker command alias that points to Podman.
Option 2: Install Docker CE (Unsupported)
```
sudo dnf install -y dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install -y docker-ce docker-ce-cli containerd.io
sudo systemctl start docker
sudo systemctl enable docker
sudo usermod -aG docker ec2-user
```
Warning: This uses CentOS repos on RHEL, which may cause compatibility issues.
Post-Installation: Essential Configuration
Regardless of your AMI, apply these best practices:
1. Configure Docker Logging
Prevent disk space exhaustion from container logs:
```
sudo tee /etc/docker/daemon.json > /dev/null <<EOF
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "5"
  }
}
EOF
sudo systemctl restart docker
```
2. Set Up Docker Content Trust (Security)
Enable image signature verification:
```
export DOCKER_CONTENT_TRUST=1
echo 'export DOCKER_CONTENT_TRUST=1' >> ~/.bashrc
```
This ensures you only pull signed, verified images.
3. Configure Resource Limits
For production workloads, set container resource limits in /etc/docker/daemon.json:
```
{
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 64000,
      "Soft": 64000
    }
  }
}
```
4. Enable Docker Metrics
For monitoring with Prometheus or CloudWatch:
```
{
  "metrics-addr": "127.0.0.1:9323",
  "experimental": true
}
```
Restart Docker after changes:
```
sudo systemctl restart docker
```
Working with Docker Images on EC2
Now that Docker is installed, here are practical patterns for working with images:
Pulling Images from Docker Hub
```
docker pull nginx:latest
docker pull postgres:15-alpine
docker pull node:20-slim
```
Tip: Use specific tags (not latest) in production for reproducibility.
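That tip is easy to enforce in a deploy script: reject any image reference that resolves to an implicit or explicit latest. A rough sketch — `assert_pinned` is my own name, and it doesn't handle registry hostnames with ports:

```shell
# Return 0 for digest- or tag-pinned references, 1 for :latest or untagged.
assert_pinned() {
  case "$1" in
    *@sha256:*) return 0 ;;  # digest-pinned: strongest guarantee
    *:latest)   return 1 ;;  # explicit latest
    *:*)        return 0 ;;  # some other tag
    *)          return 1 ;;  # no tag at all -> implicit latest
  esac
}

# Example: assert_pinned "postgres:15-alpine" succeeds;
#          assert_pinned "nginx:latest" fails.
```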
Building Custom Images
Create a simple Dockerfile:
```
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --production
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```
Build and tag:
```
docker build -t my-app:1.0.0 .
```
Using Amazon ECR (Elastic Container Registry)
ECR is AWS's managed Docker registry. To push images:
Step 1: Authenticate Docker to ECR:
```
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789.dkr.ecr.us-east-1.amazonaws.com
```
Step 2: Tag your image:
```
docker tag my-app:1.0.0 123456789.dkr.ecr.us-east-1.amazonaws.com/my-app:1.0.0
```
Step 3: Push to ECR:
```
docker push 123456789.dkr.ecr.us-east-1.amazonaws.com/my-app:1.0.0
```
Pro tip: Attach an IAM role to your EC2 instance with AmazonEC2ContainerRegistryReadOnly policy to pull images without credentials.
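Since the registry hostname repeats in every tag and push command, a tiny formatter cuts down on typos. A sketch — `ecr_uri` is my own helper; the hostname follows the standard `<account>.dkr.ecr.<region>.amazonaws.com` pattern:

```shell
# Compose an ECR image URI from account ID, region, and image reference.
ecr_uri() {
  printf '%s.dkr.ecr.%s.amazonaws.com/%s\n' "$1" "$2" "$3"
}

# Example:
#   docker tag my-app:1.0.0 "$(ecr_uri 123456789 us-east-1 my-app:1.0.0)"
```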
Multi-Stage Builds for Smaller Images
Reduce image size by using multi-stage builds:
```
# Build stage
FROM node:20 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .

# Production stage
FROM node:20-slim
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/server.js"]
```
This pattern keeps build dependencies out of your final image.
Running Containers in Production
Basic Container Deployment
```
docker run -d \
  --name my-app \
  --restart unless-stopped \
  -p 80:3000 \
  -e NODE_ENV=production \
  -v /data/app:/app/data \
  my-app:1.0.0
```
Flags explained:
- -d: Detached mode (runs in the background)
- --restart unless-stopped: Auto-restart on failure or reboot
- -p 80:3000: Map host port 80 to container port 3000
- -e: Set an environment variable
- -v: Volume mount for persistent data
Health Checks
Add health checks to your containers:
```
docker run -d \
  --name my-app \
  --health-cmd="curl -f http://localhost:3000/health || exit 1" \
  --health-interval=30s \
  --health-timeout=10s \
  --health-retries=3 \
  my-app:1.0.0
```
Check container health:
```
docker ps --format "table {{.Names}}\t{{.Status}}"
```
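In deploy scripts it's also useful to block until the health check actually reports healthy. A generic polling sketch — `wait_for_status` is my own helper; the status command is a parameter so the loop itself can be tested without Docker:

```shell
# Poll a status command until it prints the wanted value or tries run out.
# $1: command that prints the current status
# $2: status to wait for
# $3: max attempts (default 10, one second apart)
wait_for_status() {
  tries=0
  while [ "$tries" -lt "${3:-10}" ]; do
    [ "$($1)" = "$2" ] && return 0
    tries=$((tries + 1))
    sleep 1
  done
  return 1
}

# Example with the container above (assumes the my-app name from this guide):
#   wait_for_status "docker inspect -f {{.State.Health.Status}} my-app" healthy 30
```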
Docker Compose for Multi-Container Apps
Install Docker Compose (if not already included):
For Amazon Linux 2:
```
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
```
For AL2023/Ubuntu (already included as the compose plugin):
```
docker compose version
```
Example docker-compose.yml:
```
services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - ./html:/usr/share/nginx/html
    restart: unless-stopped

  app:
    build: .
    environment:
      - DATABASE_URL=postgres://db:5432/myapp
    depends_on:
      - db
    restart: unless-stopped

  db:
    image: postgres:15-alpine
    environment:
      - POSTGRES_PASSWORD=secretpassword
    volumes:
      - db-data:/var/lib/postgresql/data
    restart: unless-stopped

volumes:
  db-data:
```
Deploy the stack:
```
docker compose up -d
```
Security Best Practices
1. Use Non-Root Users in Containers
In your Dockerfile:
```
FROM node:20-slim
RUN useradd -m appuser
USER appuser
WORKDIR /home/appuser/app
```
2. Scan Images for Vulnerabilities
Use Docker Scout (built-in):
```
docker scout cves my-app:1.0.0
```
Or Trivy (open-source):
```
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
  aquasec/trivy image my-app:1.0.0
```
3. Limit Container Capabilities
Run with minimal privileges:
```
docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE my-app
```
4. Use Read-Only Root Filesystems
```
docker run --read-only --tmpfs /tmp my-app
```
5. Set Security Options
```
docker run --security-opt=no-new-privileges:true my-app
```
Performance Optimization
1. Use Overlay2 Storage Driver
Verify your storage driver:
```
docker info | grep "Storage Driver"
```
If it's not overlay2, configure it in /etc/docker/daemon.json:
```
{
  "storage-driver": "overlay2"
}
```
2. Optimize Layer Caching
Order Dockerfile commands from least to most frequently changed:
```
# Dependency layers (change rarely) - cached
FROM node:20-slim
COPY package*.json ./

# Application code (changes often) - rebuilt
COPY . .
```
3. Use BuildKit for Faster Builds
Enable BuildKit:
```
export DOCKER_BUILDKIT=1
docker build -t my-app .
```
Or set it permanently in /etc/docker/daemon.json:
```
{
  "features": {
    "buildkit": true
  }
}
```
4. Limit Container Resources
Prevent resource exhaustion:
```
docker run -d \
  --memory="512m" \
  --memory-swap="1g" \
  --cpus="1.5" \
  my-app
```
Troubleshooting Common Issues
Issue 1: "Cannot connect to Docker daemon"
Symptom: `Cannot connect to the Docker daemon at unix:///var/run/docker.sock`
Solutions:
1. Check if Docker is running: `sudo systemctl status docker`
2. Start Docker: `sudo systemctl start docker`
3. Verify permissions: `groups` should show `docker`
4. Re-login after usermod: `exit` and SSH back in
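Step 3 can be scripted for provisioning checks. A sketch — `in_group` is my own helper; the group list is a parameter so it can be tested with canned input:

```shell
# Return 0 if a group name appears in a space-separated group list.
# $1: group to look for; $2: group list (defaults to the current user's)
in_group() {
  list="${2:-$(id -nG)}"
  echo "$list" | tr ' ' '\n' | grep -qx "$1"
}

# Example: in_group docker || echo "re-login needed after usermod"
```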
Issue 2: Containers Can't Access Internet
Symptom: `curl: (6) Could not resolve host` inside containers
Solution: Configure DNS in /etc/docker/daemon.json:
```
{
  "dns": ["8.8.8.8", "8.8.4.4"]
}
```
Issue 3: Port Already in Use
Symptom: `Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use`
Find the process:
```
sudo ss -tlnp | grep :80
```
(Use `netstat -tlnp` on older systems that still ship net-tools.) Kill it or use a different port:
```
docker run -p 8080:80 my-app
```
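The same check can be scripted before starting a container. A sketch — `port_in_use` is my own helper; it takes the listener table as an optional parameter so it's testable with canned ss output:

```shell
# Return 0 if a TCP port shows up in an ss-style listener table.
# $1: port number; $2: listener table (defaults to live `ss -tln` output)
port_in_use() {
  table="${2:-$(ss -tln 2>/dev/null)}"
  echo "$table" | grep -Eq "[:.]${1}([[:space:]]|\$)"
}

# Example: port_in_use 80 && echo "pick another host port"
```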
Issue 4: Disk Space Issues
Check Docker disk usage:
```
docker system df
```
Clean up unused resources:
```
docker system prune -a --volumes
```
Warning: This removes all stopped containers, unused networks, images, and volumes.
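For unattended instances, a gentler scheduled prune avoids both disk exhaustion and the surprises of the aggressive command above. A sketch of a cron entry — the schedule and the 24h filter are my own choices; note it deliberately omits -a and --volumes:

```
# /etc/cron.d/docker-prune (sketch)
# Nightly at 03:00: remove stopped containers, dangling images, and unused
# networks older than 24 hours; volumes and tagged images are left alone.
0 3 * * * root docker system prune -f --filter "until=24h" > /dev/null 2>&1
```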
Issue 5: Permission Denied on Volume Mounts
Symptom: Container logs show permission errors accessing mounted volumes
Solution: Set proper ownership:
```
sudo chown -R 1000:1000 /path/to/volume
```
Or use the :z flag on SELinux systems:
```
docker run -v /host/path:/container/path:z my-app
```
Monitoring and Logging
View Container Logs
```
docker logs my-app
docker logs -f my-app           # Follow logs in real-time
docker logs --tail 100 my-app   # Last 100 lines
```
Monitor Resource Usage
```
docker stats
docker stats my-app  # Specific container
```
Integrate with CloudWatch
Install CloudWatch agent on EC2 and configure Docker logging driver:
```
{
  "log-driver": "awslogs",
  "log-opts": {
    "awslogs-region": "us-east-1",
    "awslogs-group": "docker-logs",
    "awslogs-stream": "my-app"
  }
}
```
Real-World Deployment Example
Here's a complete example deploying a Node.js application with NGINX reverse proxy:
1. Project Structure
```
/home/ec2-user/my-app/
├── docker-compose.yml
├── nginx/
│   └── nginx.conf
└── app/
    ├── Dockerfile
    ├── package.json
    └── server.js
```
2. Application Dockerfile
```
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --production
COPY . .
EXPOSE 3000
USER node
CMD ["node", "server.js"]
```
3. NGINX Configuration
```
upstream app {
    server app:3000;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
4. Docker Compose File
```
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - app
    restart: unless-stopped

  app:
    build: ./app
    environment:
      - NODE_ENV=production
      - PORT=3000
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
```
5. Deployment Script
```
#!/bin/bash
echo "Pulling latest changes..."
git pull origin main

echo "Building images..."
docker compose build

echo "Stopping old containers..."
docker compose down

echo "Starting new containers..."
docker compose up -d

echo "Cleaning up..."
docker image prune -f

echo "Deployment complete!"
docker compose ps
```
Key Takeaways
- Choose the right AMI: Amazon Linux 2023 for AWS-optimized performance, Ubuntu for community support
- Always configure logging limits: prevent disk space issues with max-size and max-file settings
- Use usermod correctly: remember to re-login after adding users to the docker group
- Enable auto-restart: use `--restart unless-stopped` for production containers
- Secure your containers: non-root users, read-only filesystems, and vulnerability scanning
- Monitor resource usage: set memory and CPU limits to prevent resource exhaustion
- Use ECR for production: better integration with AWS IAM and ECS/EKS
- Implement health checks: catch issues before they impact users
Next Steps
- Set up automated deployments with AWS CodeDeploy or GitHub Actions
- Explore AWS ECS (Elastic Container Service) for managed container orchestration
- Implement blue-green deployments with Docker tags and load balancers
- Set up centralized logging with CloudWatch or ELK stack
- Configure auto-scaling based on container metrics
Docker on EC2 is powerful once you understand the nuances of each AMI. Whether you're running a single application or orchestrating dozens of microservices, these foundations will serve you well.
What challenges have you faced deploying Docker on EC2? The comment section is open for questions and shared experiences.