At TechLabs, our AWS bill had crept up to $1,900/month. For a small team, that was eating significantly into our margins. Here's how I systematically reduced it to under $300.
## The Problem
Like many startups, we had accumulated cloud debt:

- Development environments running 24/7
- Oversized instances "just in case"
- Orphaned EBS volumes and snapshots
- No reserved instance strategy
## Step 1: Visibility First
Before optimizing, I needed to understand where money was going. I set up:
1. **AWS Cost Explorer** with daily granularity
2. **Cost allocation tags** on every resource
3. **Budgets and alerts** for each team/project
This alone revealed that 40% of our spend was on resources no one was using.
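To make the tag data actionable, it helps to roll Cost Explorer output up by tag value. A minimal sketch, assuming the standard `get_cost_and_usage` response shape when grouping by an `Environment` cost-allocation tag; the sample amounts below are illustrative, not our real numbers:

```python
from collections import defaultdict

def cost_by_tag(results_by_time):
    """Sum UnblendedCost per tag value across a Cost Explorer response.

    `results_by_time` is the 'ResultsByTime' list returned by
    ce.get_cost_and_usage(..., GroupBy=[{'Type': 'TAG', 'Key': 'Environment'}]).
    Untagged spend appears with an empty value after the '$' separator.
    """
    totals = defaultdict(float)
    for period in results_by_time:
        for group in period.get('Groups', []):
            # Keys look like 'Environment$dev'; empty after '$' means untagged
            tag_value = group['Keys'][0].split('$', 1)[1] or 'UNTAGGED'
            totals[tag_value] += float(group['Metrics']['UnblendedCost']['Amount'])
    return dict(totals)

# Illustrative response fragment
sample = [{'Groups': [
    {'Keys': ['Environment$dev'],
     'Metrics': {'UnblendedCost': {'Amount': '410.00', 'Unit': 'USD'}}},
    {'Keys': ['Environment$'],
     'Metrics': {'UnblendedCost': {'Amount': '95.50', 'Unit': 'USD'}}},
]}]
print(cost_by_tag(sample))
```

Anything landing in the `UNTAGGED` bucket is the first place to look for orphaned resources.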
## Step 2: Right-Sizing
The biggest wins came from right-sizing:
- Moved from m5.xlarge to t3.medium for most workloads (70% savings)
- Identified instances running at <10% CPU utilization
- Used AWS Compute Optimizer recommendations
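The selection logic for low-utilization instances is simple once the metrics are in hand. A sketch, assuming you have already pulled a multi-day average `CPUUtilization` per instance (for example via CloudWatch's `get_metric_statistics`); the instance IDs and percentages here are hypothetical:

```python
def underutilized(avg_cpu_by_instance, threshold=10.0):
    """Return instance IDs whose average CPU sits below `threshold` percent."""
    return sorted(
        iid for iid, cpu in avg_cpu_by_instance.items() if cpu < threshold
    )

# Hypothetical 14-day CPU averages (percent)
metrics = {'i-0abc': 3.2, 'i-0def': 41.7, 'i-0123': 8.9}
print(underutilized(metrics))  # downsizing candidates
```

Cross-checking this list against Compute Optimizer recommendations catches cases where CPU is low but memory or network is the real constraint.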
## Step 3: Scheduled Scaling
Development and staging environments don't need to run at night or on weekends:
```python
import boto3

# Lambda function to stop dev/staging instances at 7 PM
def stop_dev_instances(event, context):
    ec2 = boto3.client('ec2')
    instances = ec2.describe_instances(
        Filters=[{'Name': 'tag:Environment', 'Values': ['dev', 'staging']},
                 {'Name': 'instance-state-name', 'Values': ['running']}]
    )
    ids = [i['InstanceId'] for r in instances['Reservations']
           for i in r['Instances']]
    if ids:
        ec2.stop_instances(InstanceIds=ids)
```

This saved ~60% on non-production environments.
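The "~60%" figure falls out of the schedule itself. A quick back-of-the-envelope check, assuming dev runs only on weekdays from 7 AM to 7 PM (the exact window is whatever your schedule enforces):

```python
HOURS_PER_WEEK = 24 * 7            # 168 always-on hours
on_hours = 12 * 5                  # weekdays, 7 AM to 7 PM
savings = 1 - on_hours / HOURS_PER_WEEK
print(f"{savings:.0%}")            # fraction of instance-hours eliminated
```

On-demand EC2 bills by the hour (or second), so instance-hours eliminated translate almost directly into dollars saved.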
## Step 4: Reserved Instances & Savings Plans
For production workloads with predictable usage, I purchased:

- 1-year reserved instances for databases
- Compute Savings Plans for EC2
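Before committing to a reservation, it's worth checking the break-even point. A sketch of the arithmetic with illustrative rates; real RI discounts vary by instance family, region, term, and payment option:

```python
def breakeven_months(on_demand_monthly, reserved_monthly, upfront=0.0):
    """Months until an RI's upfront fee is recovered by the lower rate."""
    monthly_saving = on_demand_monthly - reserved_monthly
    if monthly_saving <= 0:
        return None  # the reservation never pays off
    return upfront / monthly_saving

# Hypothetical numbers: $50/mo on-demand vs $30/mo reserved + $120 upfront
print(breakeven_months(50.0, 30.0, upfront=120.0))
```

If the break-even lands well inside the term and the workload is genuinely predictable, the commitment is low-risk; if it's near the end of the term, on-demand flexibility may be worth more.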
## Results
| Category | Before (per month) | After (per month) | Savings |
|----------|--------------------|-------------------|---------|
| EC2 | $800 | $120 | 85% |
| RDS | $600 | $100 | 83% |
| Other | $500 | $80 | 84% |
| **Total** | **$1,900** | **$300** | **84%** |
The key lesson: cloud cost optimization isn't a one-time task. It requires ongoing visibility and regular reviews.