Last quarter, I opened my AWS bill and nearly choked on my coffee. $4,200/month — for what I thought was a modest workload. A couple of web apps, some background processing, a few terabytes of data. Nothing crazy.
So I spent a weekend auditing everything. Five changes later, my bill dropped to $1,680/month. That’s a 60% reduction — no downtime, no architectural overhaul, no migration to another cloud. Just cleaning up waste that had been silently draining money for months.
If you’re running anything on AWS and haven’t done a cost audit recently, I guarantee you’re overpaying. Here are the exact five changes I made, with real commands and real numbers.
Prerequisites
- AWS CLI v2 installed and configured with appropriate IAM permissions
- Access to AWS Cost Explorer and Compute Optimizer (enable Compute Optimizer if you haven’t — it’s free)
- Basic familiarity with the AWS Management Console
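Before touching anything, it helps to see where the money actually goes. Cost Explorer's CLI can group a month's spend by service; a sketch is below (the dates are placeholders, so substitute your own billing period):

```shell
# Monthly unblended cost grouped by service, via Cost Explorer.
# Wrapped in a function so the billing period is easy to swap out.
top_services() {
  aws ce get-cost-and-usage \
    --time-period Start=2024-01-01,End=2024-02-01 \
    --granularity MONTHLY \
    --metrics UnblendedCost \
    --group-by Type=DIMENSION,Key=SERVICE \
    --output table
}
```

The top three or four services usually account for most of the bill; that's where to start auditing.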
1. Right-Sizing EC2 Instances Using AWS Compute Optimizer
The problem: I had six m5.xlarge instances running 24/7. Average CPU utilization? 8-12%. I was paying for 4 vCPUs and 16 GB RAM per instance while barely using a fraction of it.
Why most people miss it: You pick an instance size during initial setup, and you never revisit it. The app works, so you move on. Meanwhile, AWS happily charges you for idle compute every second.
The fix: Enable AWS Compute Optimizer (it’s free) and let it analyze your usage for at least 14 days. Then pull recommendations:
aws compute-optimizer get-ec2-instance-recommendations \
  --region us-east-1 \
  --query 'instanceRecommendations[*].{InstanceId:instanceArn,Current:currentInstanceType,Recommended:recommendationOptions[0].instanceType,SavingsPct:recommendationOptions[0].savingsOpportunity.savingsOpportunityPercentage}' \
  --output table
Compute Optimizer told me to switch from m5.xlarge to t3.medium for four of my six instances. The other two went to m5.large.
Dollar impact: Six m5.xlarge On-Demand in us-east-1 = 6 × $0.192/hr = $829/month. After right-sizing (4× t3.medium at $0.0416/hr + 2× m5.large at $0.096/hr) = $259/month. Saved: $570/month.
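All the monthly figures in this post assume a 720-hour month. A tiny awk helper reproduces the arithmetic if you want to sanity-check any rate from the EC2 pricing page:

```shell
# Convert an hourly On-Demand rate to an approximate monthly cost
# (720-hour month, the convention used throughout this post).
hourly_to_monthly() {
  awk -v rate="$1" -v count="${2:-1}" 'BEGIN { printf "%.0f\n", rate * count * 720 }'
}
```

For example, `hourly_to_monthly 0.192 6` gives 829, the six-instance m5.xlarge figure above, and `hourly_to_monthly 0.0416 4` gives 120 for the four t3.mediums.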
2. Switching to Savings Plans Instead of On-Demand Pricing
The problem: After right-sizing, I was still paying On-Demand rates for instances that had been running continuously for over a year. That’s the most expensive way to use AWS compute.
Why most people miss it: Commitment-based pricing sounds scary. People think they’ll get locked into something inflexible. But Compute Savings Plans apply automatically to any EC2 instance family, size, OS, or region — and even to Fargate and Lambda.
The fix: Go to the AWS Cost Explorer → Savings Plans → Recommendations page. AWS will analyze your last 30-60 days of usage and recommend a specific hourly commitment. You can also use the CLI:
aws ce get-savings-plans-purchase-recommendation \
--savings-plans-type "COMPUTE_SP" \
--term-in-years "ONE_YEAR" \
--payment-option "NO_UPFRONT" \
--lookback-period-in-days "SIXTY_DAYS"
I chose a 1-year Compute Savings Plan with No Upfront payment, so there was no cash outlay on day one. One caveat: the hourly commitment is binding for the full term (Savings Plans can't be cancelled), so commit only to your steady-state baseline usage, not your peak.
Dollar impact: My right-sized compute was costing $259/month On-Demand. The Savings Plan gave me roughly a 30% discount, bringing it to $181/month. Saved: $78/month. Small, but it compounds across larger fleets.
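One way to size the commitment conservatively is to commit to the lowest hourly spend you saw over the lookback window, so the plan stays fully utilized even in quiet weeks. A sketch with made-up hourly figures (not from my bill):

```shell
# Pick the minimum hourly spend from a list of samples; that is a safe
# Savings Plan commitment. The numbers piped in are illustrative only.
printf '0.36\n0.29\n0.31\n0.28\n0.33\n' \
  | awk 'NR == 1 || $1 < min { min = $1 } END { printf "commit %.2f/hr\n", min }'
# prints "commit 0.28/hr"
```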
3. S3 Lifecycle Policies to Move Data to Cheaper Storage Tiers
The problem: I had 4 TB of application logs and old backups sitting in S3 Standard. Most of this data hadn’t been accessed in over 90 days. S3 Standard costs $0.023/GB/month. That’s $92/month just to store data nobody was looking at.
Why most people miss it: S3 feels cheap, so nobody thinks about it. But storage costs are silent killers — they only grow over time and never shrink unless you take action.
The fix: Create a lifecycle policy that transitions objects to cheaper tiers automatically. Here’s the JSON configuration I used:
cat > lifecycle.json << 'EOF'
{
  "Rules": [
    {
      "ID": "TransitionToInfrequentAndGlacier",
      "Status": "Enabled",
      "Filter": {
        "Prefix": ""
      },
      "Transitions": [
        {
          "Days": 30,
          "StorageClass": "STANDARD_IA"
        },
        {
          "Days": 90,
          "StorageClass": "GLACIER_IR"
        },
        {
          "Days": 365,
          "StorageClass": "DEEP_ARCHIVE"
        }
      ]
    }
  ]
}
EOF
aws s3api put-bucket-lifecycle-configuration \
--bucket my-app-logs-bucket \
--lifecycle-configuration file://lifecycle.json
S3 Standard-IA costs $0.0125/GB/month, Glacier Instant Retrieval $0.004/GB/month, and Deep Archive $0.00099/GB/month. The savings are dramatic for cold data.
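Those per-GB rates make the impact easy to estimate for any bucket; a one-line helper does the multiplication:

```shell
# Approximate monthly S3 storage cost: size in GB times the tier's $/GB rate.
s3_monthly_cost() {
  awk -v gb="$1" -v rate="$2" 'BEGIN { printf "%.0f\n", gb * rate }'
}
```

For example, `s3_monthly_cost 4000 0.023` gives 92 (4 TB in Standard), while `s3_monthly_cost 4000 0.004` gives 16 if everything lands in Glacier IR.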
Dollar impact: 4 TB in S3 Standard = $92/month. After lifecycle policies settled (most data moved to Glacier IR and Deep Archive): $22/month. Saved: $70/month.
4. Deleting Unattached EBS Volumes and Unused Snapshots
The problem: Over the past year, I'd launched and terminated dozens of EC2 instances during testing. But non-root EBS volumes don't automatically delete when you terminate an instance (DeleteOnTermination defaults to true only for the root volume). I had 15 unattached EBS volumes totaling 1.5 TB, plus 200+ orphaned snapshots.
Why most people miss it: These resources are invisible in day-to-day operations. You never see them unless you go looking. They just quietly bill you every month.
The fix: First, find all unattached volumes:
# Find all unattached EBS volumes
aws ec2 describe-volumes \
--filters Name=status,Values=available \
--query 'Volumes[*].{ID:VolumeId,Size:Size,Created:CreateTime}' \
--output table
Review the list, then delete what you don't need:
# Delete a specific unattached volume (after confirming it's not needed)
aws ec2 delete-volume --volume-id vol-0abc123def456789a
# Find and calculate total snapshot storage
aws ec2 describe-snapshots \
--owner-ids self \
--query 'Snapshots[*].{ID:SnapshotId,Size:VolumeSize,Date:StartTime,Description:Description}' \
--output table
I wrote a quick script to delete snapshots older than 6 months:
# Delete snapshots older than 180 days
CUTOFF=$(date -d '180 days ago' +%Y-%m-%d 2>/dev/null || date -v-180d +%Y-%m-%d)
aws ec2 describe-snapshots --owner-ids self \
--query "Snapshots[?StartTime<'${CUTOFF}'].SnapshotId" \
--output text | tr '\t' '\n' | while read snap_id; do
echo "Deleting $snap_id"
aws ec2 delete-snapshot --snapshot-id "$snap_id"
done
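If you're nervous about a loop that deletes things, here's the same logic wrapped so it's dry-run by default and only deletes when DELETE=1 is set (the wrapper function is my own; the AWS CLI calls are unchanged):

```shell
# Dry-run-by-default version of the snapshot cleanup. Run it plain to see
# what would go; run it with DELETE=1 to actually delete.
purge_old_snapshots() {
  local cutoff
  cutoff=$(date -d '180 days ago' +%Y-%m-%d 2>/dev/null || date -v-180d +%Y-%m-%d)
  aws ec2 describe-snapshots --owner-ids self \
    --query "Snapshots[?StartTime<'${cutoff}'].SnapshotId" \
    --output text | tr '\t' '\n' | while read -r snap_id; do
    if [ -z "$snap_id" ]; then
      continue
    fi
    if [ "${DELETE:-0}" = "1" ]; then
      echo "Deleting $snap_id"
      aws ec2 delete-snapshot --snapshot-id "$snap_id"
    else
      echo "Would delete $snap_id (set DELETE=1 to delete)"
    fi
  done
}
```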
Dollar impact: 1.5 TB of gp3 EBS volumes = $120/month. 200 snapshots (~2 TB incremental) = $100/month ($0.05/GB). Total saved: $220/month.
5. Using VPC Endpoints to Eliminate NAT Gateway Data Charges
The problem: My EC2 instances in private subnets were accessing S3 and DynamoDB through a NAT Gateway. NAT Gateways charge $0.045/GB for data processing on top of the $0.045/hr base cost. I was pushing ~2 TB/month to S3 alone through the NAT Gateway. That's $90/month in data processing fees plus $32/month for the gateway itself — just for S3 traffic.
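For reference, the NAT math behind those numbers (2 TB is treated as 2,000 GB here, matching the rounding above):

```shell
# NAT Gateway monthly cost for the S3 traffic: per-GB processing plus
# the hourly base cost over a 720-hour month.
awk 'BEGIN {
  gb = 2000; per_gb = 0.045; hourly = 0.045; hours = 720
  printf "processing: $%.0f/mo, base: $%.0f/mo\n", gb * per_gb, hourly * hours
}'
# prints "processing: $90/mo, base: $32/mo"
```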
Why most people miss it: NAT Gateways are set-and-forget infrastructure. The per-GB charge doesn't show up as a separate line item that screams "S3 traffic" — it's all lumped together. Most people don't realize S3 and DynamoDB traffic can bypass the NAT entirely.
The fix: Create Gateway VPC Endpoints for S3 and DynamoDB. These are completely free — no hourly charge, no data processing fee.
# Get your VPC ID and route table IDs
VPC_ID=$(aws ec2 describe-vpcs --filters Name=is-default,Values=false \
--query 'Vpcs[0].VpcId' --output text)
ROUTE_TABLE_IDS=$(aws ec2 describe-route-tables \
--filters Name=vpc-id,Values=$VPC_ID \
--query 'RouteTables[*].RouteTableId' --output text | tr '\t' ',')
# Create Gateway VPC Endpoint for S3
aws ec2 create-vpc-endpoint \
--vpc-id $VPC_ID \
--service-name com.amazonaws.us-east-1.s3 \
--route-table-ids $ROUTE_TABLE_IDS \
--vpc-endpoint-type Gateway
# Create Gateway VPC Endpoint for DynamoDB
aws ec2 create-vpc-endpoint \
--vpc-id $VPC_ID \
--service-name com.amazonaws.us-east-1.dynamodb \
--route-table-ids $ROUTE_TABLE_IDS \
--vpc-endpoint-type Gateway
Once created, AWS automatically routes S3 and DynamoDB traffic through the endpoint instead of the NAT Gateway. No application changes needed — it's entirely transparent.
Dollar impact: NAT Gateway processing for S3/DynamoDB traffic: $90/month. After VPC endpoints: $0/month. I still needed the NAT Gateway for other outbound internet traffic, but the base cost remained. Net saved on data processing: $90/month.
Common Mistakes to Avoid
- Right-sizing without monitoring first: Don't downsize instances based on a gut feeling. Let Compute Optimizer analyze at least 14 days of CloudWatch metrics. I once downsized a batch processing server based on average CPU and it fell over during peak — look at maximum utilization, not just average.
- Deleting EBS snapshots tied to AMIs: If a snapshot is associated with a registered AMI, you must deregister the AMI first; otherwise the `delete-snapshot` call will fail. Always check with `aws ec2 describe-images --owners self` before bulk deleting.
- Forgetting S3 minimum storage durations: S3 Standard-IA has a 30-day minimum charge, Glacier IR has a 90-day minimum, and Deep Archive a 180-day minimum. If you transition objects that get deleted before the minimum duration, you pay for the full period anyway. Don't lifecycle short-lived objects.
- Assuming VPC Gateway Endpoints work for all services: Gateway endpoints only support S3 and DynamoDB. For other services like SQS, Secrets Manager, or ECR, you need Interface VPC Endpoints (powered by AWS PrivateLink), which do have an hourly cost of ~$0.01/hr per AZ.
- Buying Savings Plans before right-sizing: Always right-size first, then commit. If you commit to a Savings Plan based on oversized usage, you'll still be locked into paying for compute you later eliminate.
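For the AMI pitfall above, a small helper makes the cleanup loop safer: collect every snapshot ID still referenced by a registered AMI and skip those (the query path follows the `describe-images` output shape):

```shell
# Snapshot IDs referenced by my registered AMIs; the cleanup loop should
# skip any ID this prints.
ami_snapshot_ids() {
  aws ec2 describe-images --owners self \
    --query 'Images[].BlockDeviceMappings[].Ebs.SnapshotId' \
    --output text | tr '\t' '\n'
}
```

Inside the deletion loop, guard with `ami_snapshot_ids | grep -qx "$snap_id" && continue` before calling `delete-snapshot`.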
The Full Savings Breakdown
| Change | Before | After | Monthly Savings |
|---|---|---|---|
| Right-sizing EC2 | $829 | $259 | $570 |
| Savings Plans | $259 | $181 | $78 |
| S3 Lifecycle Policies | $92 | $22 | $70 |
| Deleting EBS/Snapshots | $220 | $0 | $220 |
| VPC Gateway Endpoints | $90 | $0 | $90 |
| Total | $1,028/month | ||
Combined with other minor optimizations across the rest of my bill, the total reduction came to roughly 60%. Your numbers will vary, but I'd bet real money that at least two of these five apply to your account right now.
Conclusion
AWS cost optimization isn't about switching providers or rewriting your architecture. It's about eliminating waste that accumulates silently over time. These five changes took me a single weekend and saved over $1,000/month — and none of them required any downtime or application changes.
- Right-size first, commit second — use Compute Optimizer before buying Savings Plans.
- Automate storage lifecycle — S3 data only gets colder over time; your policies should reflect that.
- Audit orphaned resources monthly — unattached EBS volumes and old snapshots are pure waste.
- Deploy Gateway VPC Endpoints immediately — they're free and eliminate NAT data processing charges for S3 and DynamoDB.
- Schedule a monthly cost review — AWS Cost Explorer is free; ignorance is expensive.
Found this helpful? Share it with your team. For more practical AWS and DevOps guides, visit riseofcloud.com.
Let's keep learning consistently at a medium pace.