
Complete Guide: Setting Up Docker on Amazon EC2 AMIs

A comprehensive guide to installing and configuring Docker across different Amazon EC2 AMI types including Amazon Linux 2, Amazon Linux 2023, Ubuntu, and RHEL with best practices and troubleshooting tips.

Tags: Docker, AWS, EC2, DevOps, Cloud Infrastructure

Setting up Docker on EC2 instances seems straightforward until you realize each AMI (Amazon Machine Image) has its quirks, package managers, and permission models. I've spent countless hours troubleshooting "docker: command not found" errors and permission issues across different AMI types, and this guide consolidates everything I've learned.

Whether you're deploying microservices, running CI/CD pipelines, or containerizing applications, this guide will help you get Docker running smoothly on any EC2 AMI.

Why Docker on EC2?

Before diving into setup instructions, let's understand why this combination is powerful:

Cost Efficiency: Run multiple containerized applications on a single EC2 instance instead of provisioning separate instances for each service.

Portability: Build once, deploy anywhere. Your Docker images work identically across development, staging, and production environments.

Resource Isolation: Containers provide process and filesystem isolation without the overhead of full virtual machines.

Rapid Deployment: Deploy new versions in seconds by pulling updated images, not minutes by provisioning new instances.

Overview: EC2 AMI Types

Amazon offers several AMI types, each with different package managers and default configurations:

  • Amazon Linux 2: Older, stable, uses yum package manager
  • Amazon Linux 2023: Modern, optimized for AWS, uses dnf package manager
  • Ubuntu Server: Popular, uses apt package manager, extensive community support
  • Red Hat Enterprise Linux (RHEL): Enterprise-grade, uses yum/dnf, commercial support
  • CentOS/Rocky Linux: Community alternatives to RHEL

Each requires slightly different setup procedures, which I'll cover in detail.
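The five setup paths below differ mainly in package manager, and you can tell which path applies from /etc/os-release. Here's a small helper sketch (my own convenience, not an AWS tool) that maps the ID and VERSION_ID fields to the package manager used in each section:

```shell
#!/bin/sh
# Map /etc/os-release identifiers to the package manager each AMI uses.
pkg_manager_for() {  # args: ID VERSION_ID
  case "$1:$2" in
    amzn:2)                  echo "yum" ;;      # Amazon Linux 2
    amzn:2023)               echo "dnf" ;;      # Amazon Linux 2023
    ubuntu:*)                echo "apt" ;;      # Ubuntu Server
    rhel:*|centos:*|rocky:*) echo "dnf" ;;      # RHEL and derivatives
    *)                       echo "unknown" ;;
  esac
}

pkg_manager_for ubuntu 22.04   # → apt
```

On a live instance you would feed it real values: `. /etc/os-release && pkg_manager_for "$ID" "$VERSION_ID"`.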

Prerequisites

Before starting, ensure you have:

  • An active AWS account
  • An EC2 instance launched with your chosen AMI
  • SSH access to your instance (PEM key or Session Manager)
  • Security group allowing inbound SSH (port 22)
  • (Optional) Security group rules for your containerized applications

Amazon Linux 2: Docker Setup

Amazon Linux 2 is widely used but approaching end-of-life. Here's the battle-tested setup:

Step 1: Update System Packages

Always start with a clean system update:

sudo yum update -y

This ensures you have the latest security patches and package definitions.

Step 2: Install Docker

Amazon Linux 2 includes Docker in its extras repository:

sudo amazon-linux-extras install docker -y

Why not `yum install docker`? The amazon-linux-extras repository provides newer, AWS-optimized packages compared to base repos.

Step 3: Start and Enable Docker Service

sudo systemctl start docker
sudo systemctl enable docker

The enable command ensures Docker starts automatically after reboots.

Step 4: Add User to Docker Group

By default, Docker requires root privileges. Add your user to the docker group:

sudo usermod -a -G docker ec2-user

Critical: Log out and log back in for group changes to take effect:

exit
# SSH back into your instance

Step 5: Verify Installation

docker --version
docker ps

If you see "permission denied," you forgot to re-login after the usermod command.

Common Amazon Linux 2 Issues

Issue 1: Docker daemon not starting

Check logs: sudo journalctl -u docker.service -n 50

Common fix: Insufficient disk space. Check with df -h and reclaim space with docker system prune rather than deleting files under /var/lib/docker/ by hand, which can corrupt Docker's state.

Issue 2: Containers can't reach internet

Solution: Enable IP forwarding:

sudo sysctl -w net.ipv4.ip_forward=1
echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf

Amazon Linux 2023: Docker Setup

Amazon Linux 2023 (AL2023) is the modern successor with improved performance and security. Setup differs slightly:

Step 1: Update System

sudo dnf update -y

Note: AL2023 uses dnf instead of yum (though yum is aliased to dnf).

Step 2: Install Docker

AL2023 doesn't use amazon-linux-extras. Install from the default repository:

sudo dnf install docker -y

Step 3: Start and Enable Docker

sudo systemctl start docker
sudo systemctl enable docker

Step 4: User Permissions

sudo usermod -a -G docker ec2-user
newgrp docker  # Applies group without re-login

Pro tip: The newgrp docker command starts a new shell with the docker group active, so you can skip logging out and back in.

Step 5: Configure Docker Daemon

Create a daemon configuration for better defaults:

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json > /dev/null <<EOF
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "storage-driver": "overlay2"
}
EOF

This prevents logs from filling your disk and uses the efficient overlay2 storage driver.

Restart Docker to apply: sudo systemctl restart docker
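A syntax error in daemon.json will stop dockerd from starting at the restart step, so it's worth validating the file first. A quick sketch using python3's json module (assuming python3 is installed, as it is on AL2023 by default; jq works just as well):

```shell
# A malformed daemon.json prevents dockerd from starting, so validate
# the JSON before running `systemctl restart docker`.
validate_daemon_json() {
  python3 -m json.tool "$1" > /dev/null 2>&1 && echo "valid" || echo "invalid"
}

# Check against a copy of the config from this section:
cat > /tmp/daemon.json <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" },
  "storage-driver": "overlay2"
}
EOF
validate_daemon_json /tmp/daemon.json   # → valid
```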

AL2023-Specific Considerations

SELinux is enabled by default: AL2023 ships with SELinux in permissive mode, and hardened images often switch it to enforcing. If containers fail with permission errors:

Check SELinux status: getenforce

Temporary disable (not recommended for production): sudo setenforce 0

Better solution: Use proper volume mount options: docker run -v /host/path:/container/path:z myimage

The :z flag tells Docker to apply appropriate SELinux labels.

Ubuntu Server: Docker Setup

Ubuntu is the most popular choice for Docker deployments due to excellent documentation and community support.

Step 1: Update Package Index

sudo apt update
sudo apt upgrade -y

Step 2: Install Prerequisites

sudo apt install -y ca-certificates curl gnupg lsb-release

Step 3: Add Docker's Official GPG Key

sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

Step 4: Add Docker Repository

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
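Step 4's one-liner is dense, so here is what it builds, with the two dynamic parts pulled out as explicit arguments (amd64 and jammy are example values for a 64-bit Ubuntu 22.04 instance):

```shell
# Build the apt sources line Step 4 writes, with the CPU architecture
# (from dpkg --print-architecture) and the Ubuntu codename
# (from lsb_release -cs) made explicit.
build_docker_repo_line() {
  arch="$1"
  codename="$2"
  echo "deb [arch=${arch} signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu ${codename} stable"
}

build_docker_repo_line amd64 jammy
# → deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu jammy stable
```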

Step 5: Install Docker Engine

sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

This installs:

  • docker-ce: Docker Engine
  • docker-ce-cli: Command-line interface
  • containerd.io: Container runtime
  • docker-buildx-plugin: Extended build capabilities
  • docker-compose-plugin: Multi-container orchestration

Step 6: Start Docker

sudo systemctl start docker
sudo systemctl enable docker

Step 7: User Permissions

sudo usermod -aG docker ubuntu
newgrp docker

Note: Default user is ubuntu, not ec2-user.

Step 8: Verify Installation

docker run hello-world

If successful, you'll see a "Hello from Docker!" message.

Ubuntu-Specific Tips

Issue: Old Docker from snap

Ubuntu might have Docker installed via snap. Remove it first: sudo snap remove docker

Systemd integration: Ubuntu uses systemd, so you can check Docker status with: sudo systemctl status docker

Red Hat Enterprise Linux (RHEL): Docker Setup

On RHEL, Red Hat's supported container engine is Podman, a largely Docker-compatible alternative; Docker itself is no longer officially supported there. However, if you need Docker:

Option 1: Install Podman (Recommended)

sudo dnf install -y podman podman-docker

The podman-docker package provides a docker command alias pointing to Podman.

Option 2: Install Docker CE (Unsupported)

sudo dnf install -y dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install -y docker-ce docker-ce-cli containerd.io
sudo systemctl start docker
sudo systemctl enable docker
sudo usermod -aG docker ec2-user

Warning: This uses CentOS repos on RHEL, which may cause compatibility issues.

Post-Installation: Essential Configuration

Regardless of your AMI, apply these best practices:

1. Configure Docker Logging

Prevent disk space exhaustion from container logs:

sudo tee /etc/docker/daemon.json > /dev/null <<EOF
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "5"
  }
}
EOF
sudo systemctl restart docker

2. Set Up Docker Content Trust (Security)

Enable image signature verification:

export DOCKER_CONTENT_TRUST=1
echo 'export DOCKER_CONTENT_TRUST=1' >> ~/.bashrc

This ensures you only pull signed, verified images.

3. Configure Resource Limits

For production workloads, set container resource limits in /etc/docker/daemon.json:

{
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 64000,
      "Soft": 64000
    }
  }
}

4. Enable Docker Metrics

For monitoring with Prometheus or CloudWatch:

{
  "metrics-addr": "127.0.0.1:9323",
  "experimental": true
}

Restart Docker after changes: sudo systemctl restart docker

Working with Docker Images on EC2

Now that Docker is installed, here are practical patterns for working with images:

Pulling Images from Docker Hub

docker pull nginx:latest
docker pull postgres:15-alpine
docker pull node:20-slim

Tip: Use specific tags (not latest) in production for reproducibility.

Building Custom Images

Create a simple Dockerfile:

FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --production
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]

Build and tag: docker build -t my-app:1.0.0 .
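Because docker build sends the entire directory to the daemon as build context, a .dockerignore file next to the Dockerfile keeps node_modules and local clutter out of the COPY . . step. These entries are typical suggestions for a Node.js project, not a fixed list:

```
node_modules
npm-debug.log
.git
.env
```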

Using Amazon ECR (Elastic Container Registry)

ECR is AWS's managed Docker registry. To push images:

Step 1: Authenticate Docker to ECR: aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789.dkr.ecr.us-east-1.amazonaws.com

Step 2: Tag your image: docker tag my-app:1.0.0 123456789.dkr.ecr.us-east-1.amazonaws.com/my-app:1.0.0

Step 3: Push to ECR: docker push 123456789.dkr.ecr.us-east-1.amazonaws.com/my-app:1.0.0

Pro tip: Attach an IAM role to your EC2 instance with AmazonEC2ContainerRegistryReadOnly policy to pull images without credentials.
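The registry hostname in those three steps follows a fixed pattern: <account-id>.dkr.ecr.<region>.amazonaws.com. A small sketch that assembles the full image URI from its parts (the account ID and repository name are placeholders, matching the examples above):

```shell
# Assemble an ECR image URI from its parts; the hostname format is
# <account-id>.dkr.ecr.<region>.amazonaws.com.
ecr_image_uri() {
  account="$1"; region="$2"; repo="$3"; tag="$4"
  echo "${account}.dkr.ecr.${region}.amazonaws.com/${repo}:${tag}"
}

ecr_image_uri 123456789 us-east-1 my-app 1.0.0
# → 123456789.dkr.ecr.us-east-1.amazonaws.com/my-app:1.0.0
```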

Multi-Stage Builds for Smaller Images

Reduce image size by using multi-stage builds:

# Build stage
FROM node:20 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build  # produces /app/dist (assumes a "build" script in package.json)

# Production stage
FROM node:20-slim
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/server.js"]

This pattern keeps build dependencies out of your final image.

Running Containers in Production

Basic Container Deployment

docker run -d \
  --name my-app \
  --restart unless-stopped \
  -p 80:3000 \
  -e NODE_ENV=production \
  -v /data/app:/app/data \
  my-app:1.0.0

Flags explained:

  • -d: Detached mode (runs in background)
  • --restart unless-stopped: Auto-restart on failure or reboot
  • -p 80:3000: Map host port 80 to container port 3000
  • -e: Environment variable
  • -v: Volume mount for persistent data

Health Checks

Add health checks to your containers:

docker run -d \
  --name my-app \
  --health-cmd="curl -f http://localhost:3000/health || exit 1" \
  --health-interval=30s \
  --health-timeout=10s \
  --health-retries=3 \
  my-app:1.0.0

Check container health: docker ps --format "table {{.Names}}\t{{.Status}}"

Docker Compose for Multi-Container Apps

Install Docker Compose (if not already included):

For Amazon Linux 2:

sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

For Ubuntu, the docker-compose-plugin installed earlier provides it; verify with: docker compose version. On AL2023 the plugin is not always bundled with the docker package, so run the same check and fall back to the curl install above if the command is missing.

Example docker-compose.yml:

services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - ./html:/usr/share/nginx/html
    restart: unless-stopped

  app:
    build: .
    environment:
      - DATABASE_URL=postgres://db:5432/myapp
    depends_on:
      - db
    restart: unless-stopped

  db:
    image: postgres:15-alpine
    environment:
      - POSTGRES_PASSWORD=secretpassword
    volumes:
      - db-data:/var/lib/postgresql/data
    restart: unless-stopped

volumes:
  db-data:

Deploy stack: docker compose up -d

Security Best Practices

1. Use Non-Root Users in Containers

In your Dockerfile:

FROM node:20-slim
RUN useradd -m appuser
USER appuser
WORKDIR /home/appuser/app

2. Scan Images for Vulnerabilities

Use Docker Scout (built-in): docker scout cves my-app:1.0.0

Or Trivy (open-source):

docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
  aquasec/trivy image my-app:1.0.0

3. Limit Container Capabilities

Run with minimal privileges:

docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE my-app

4. Use Read-Only Root Filesystems

docker run --read-only --tmpfs /tmp my-app

5. Set Security Options

docker run --security-opt=no-new-privileges:true my-app

Performance Optimization

1. Use Overlay2 Storage Driver

Verify your storage driver: docker info | grep "Storage Driver"

If not overlay2, configure in /etc/docker/daemon.json:

{
  "storage-driver": "overlay2"
}

2. Optimize Layer Caching

Order Dockerfile commands from least to most frequently changed:

# Dependency layers (change rarely) - cached
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci

# Application code (changes often) - rebuilt
COPY . .

3. Use BuildKit for Faster Builds

Enable BuildKit:

export DOCKER_BUILDKIT=1
docker build -t my-app .

Or set permanently in /etc/docker/daemon.json:

{
  "features": {
    "buildkit": true
  }
}

4. Limit Container Resources

Prevent resource exhaustion:

docker run -d \
  --memory="512m" \
  --memory-swap="1g" \
  --cpus="1.5" \
  my-app

Troubleshooting Common Issues

Issue 1: "Cannot connect to Docker daemon"

Symptom: Cannot connect to the Docker daemon at unix:///var/run/docker.sock

Solutions:

  1. Check if Docker is running: sudo systemctl status docker
  2. Start Docker: sudo systemctl start docker
  3. Verify permissions: groups should show docker
  4. Re-login after usermod: exit and SSH back in

Issue 2: Containers Can't Access Internet

Symptom: curl: (6) Could not resolve host inside containers

Solution: Configure DNS in /etc/docker/daemon.json:

{
  "dns": ["8.8.8.8", "8.8.4.4"]
}

Issue 3: Port Already in Use

Symptom: Error starting userland proxy: listen tcp 0.0.0.0:80: bind: address already in use

Find the process: sudo netstat -tlnp | grep :80

Kill it or use a different port: docker run -p 8080:80 my-app
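If you want the owning PID programmatically, the last column of netstat -tlnp has the form PID/program and can be parsed with awk. A sketch against a captured sample line (PID 1234 and nginx are made-up values):

```shell
# Extract the PID listening on a given port from netstat -tlnp output.
# Column 4 is the local address; the last column is "PID/program".
pid_on_port() {
  port="$1"
  awk -v p=":${port}$" '$4 ~ p { split($NF, a, "/"); print a[1] }'
}

# Sample line in the format netstat -tlnp prints:
echo "tcp  0  0 0.0.0.0:80  0.0.0.0:*  LISTEN  1234/nginx" | pid_on_port 80
# → 1234
```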

Issue 4: Disk Space Issues

Check Docker disk usage: docker system df

Clean up unused resources: docker system prune -a --volumes

Warning: This removes all stopped containers, unused networks, images, and volumes.

Issue 5: Permission Denied on Volume Mounts

Symptom: Container logs show permission errors accessing mounted volumes

Solution: Set proper ownership: sudo chown -R 1000:1000 /path/to/volume

Or use :z flag for SELinux systems: docker run -v /host/path:/container/path:z my-app

Monitoring and Logging

View Container Logs

docker logs my-app
docker logs -f my-app  # Follow logs in real-time
docker logs --tail 100 my-app  # Last 100 lines

Monitor Resource Usage

docker stats
docker stats my-app  # Specific container

Integrate with CloudWatch

Install CloudWatch agent on EC2 and configure Docker logging driver:

{
  "log-driver": "awslogs",
  "log-opts": {
    "awslogs-region": "us-east-1",
    "awslogs-group": "docker-logs",
    "awslogs-stream": "my-app"
  }
}

Real-World Deployment Example

Here's a complete example deploying a Node.js application with NGINX reverse proxy:

1. Project Structure

/home/ec2-user/my-app/
├── docker-compose.yml
├── nginx/
│   └── nginx.conf
└── app/
    ├── Dockerfile
    ├── package.json
    └── server.js

2. Application Dockerfile

FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --production
COPY . .
EXPOSE 3000
USER node
CMD ["node", "server.js"]

3. NGINX Configuration

upstream app {
    server app:3000;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://app;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

4. Docker Compose File

services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - app
    restart: unless-stopped

  app:
    build: ./app
    environment:
      - NODE_ENV=production
      - PORT=3000
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3

5. Deployment Script

#!/bin/bash

echo "Pulling latest changes..."
git pull origin main

echo "Building images..."
docker compose build

echo "Stopping old containers..."
docker compose down

echo "Starting new containers..."
docker compose up -d

echo "Cleaning up..."
docker image prune -f

echo "Deployment complete!"
docker compose ps
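One weakness of a plain script like this is that it keeps going when a step fails. Here's a hardened sketch with set -euo pipefail and a dry-run switch; the run/DRY_RUN convention is my own, not a Docker feature:

```shell
#!/bin/bash
# Stop at the first failed step instead of deploying a half-built stack.
set -euo pipefail

# With DRY_RUN=1, commands are printed instead of executed.
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

deploy() {
  run git pull origin main
  run docker compose build
  run docker compose down
  run docker compose up -d
  run docker image prune -f
  echo "Deployment complete!"
}

# Preview the plan without touching the instance:
DRY_RUN=1 deploy
```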

Key Takeaways

  1. Choose the right AMI: Amazon Linux 2023 for AWS-optimized performance, Ubuntu for community support
  2. Always configure logging limits: Prevent disk space issues with max-size and max-file settings
  3. Use usermod correctly: Remember to re-login after adding users to the docker group
  4. Enable auto-restart: Use --restart unless-stopped for production containers
  5. Secure your containers: Non-root users, read-only filesystems, and vulnerability scanning
  6. Monitor resource usage: Set memory and CPU limits to prevent resource exhaustion
  7. Use ECR for production: Better integration with AWS IAM and ECS/EKS
  8. Implement health checks: Catch issues before they impact users

Next Steps

  • Set up automated deployments with AWS CodeDeploy or GitHub Actions
  • Explore AWS ECS (Elastic Container Service) for managed container orchestration
  • Implement blue-green deployments with Docker tags and load balancers
  • Set up centralized logging with CloudWatch or ELK stack
  • Configure auto-scaling based on container metrics

Docker on EC2 is powerful once you understand the nuances of each AMI. Whether you're running a single application or orchestrating dozens of microservices, these foundations will serve you well.

What challenges have you faced deploying Docker on EC2? The comment section is open for questions and shared experiences.