Understanding AWS Billing Through Restaurant Menu Surprises
You built a simple app that should cost almost nothing to run. Maybe it's a personal blog, a small API, or a side project. You expected to pay $5-10 per month. Then your first AWS bill arrives: $487.53. Sound familiar?
This isn't a rare occurrence. It's so common there's a whole genre of "AWS bill shock" stories across developer forums. Today, we're going to break down why cloud bills can be surprisingly high and how to avoid the most expensive traps.
Imagine going to a restaurant where the menu shows:
Burger: $5.00
Fries: $3.00
Drink: $2.00
You order all three, expecting a $10 meal. Then your bill arrives:
Burger: $5.00
Fries: $3.00
Drink: $2.00
Table service: $15.00
Kitchen utilities: $8.00
Plate washing: $12.00
Napkin usage: $4.00
Sitting fee: $6.00/hour × 2 hours = $12.00
Walking to bathroom: $3.00
Breathing air: $2.00/hour × 2 hours = $4.00
TOTAL: $68.00
This is essentially what AWS bills look like - the core services seem cheap, but the "supporting services" and usage-based charges add up fast.
1. Compute (The Main Course)
{
"EC2Instance": {
"Base": "t3.medium: $30.37/month",
"ActualCosts": {
"Instance": "$30.37",
"EBSStorage": "$8.00 (80GB gp3)",
"DataTransfer": "$15.20 (169GB out)",
"ElasticIP": "$3.60 (when not attached 24 hours)",
"LoadBalancer": "$16.20/month",
"Total": "$73.37/month"
}
}
}
2. Storage (The Side Dishes That Add Up)
{
"StorageCosts": {
"S3Standard": "$0.023/GB/month",
"YourUsage": "500 GB",
"MonthlyCost": "$11.50",
"SurpriseCharges": {
"Requests": "$2.15 (430,000 GET requests)",
"DataTransfer": "$8.50 (100GB downloads)",
"CrossRegionReplication": "$12.00 (automatic backup you forgot about)"
},
"TotalS3": "$34.15/month"
}
}
3. Data Transfer (The Hidden Delivery Fees)
{
"DataTransferCharges": {
"InternetOut": "$0.09/GB (first 1GB free)",
"CrossAZ": "$0.01/GB",
"CrossRegion": "$0.02/GB",
"CommonShock": {
"Scenario": "Backup script copies 500GB across regions weekly",
"MonthlyCost": "$433.60 (500GB × 4 weeks × $0.02 × 108.4%)",
"Prevention": "Use same-region backups or S3 Cross-Region Replication"
}
}
}
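These per-GB rates make transfer bills easy to estimate before they happen. Here is a minimal estimator using the illustrative rates above (real prices vary by region and volume tier):

```python
# Rough AWS data-transfer cost estimator (illustrative rates from this section)
RATES = {
    "internet_out": 0.09,   # $/GB to the internet
    "cross_az": 0.01,       # $/GB between availability zones
    "cross_region": 0.02,   # $/GB between regions
}

def transfer_cost(gb, kind, free_gb=0):
    """Estimate monthly transfer cost, subtracting any free allowance."""
    billable = max(gb - free_gb, 0)
    return round(billable * RATES[kind], 2)

# The backup scenario above: 500GB copied cross-region, four times a month
print(transfer_cost(500 * 4, "cross_region"))  # → 40.0
```

Running numbers like this before deploying a backup script is how you catch a recurring charge while it is still hypothetical.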
4. Database (The Premium Ingredients)
{
"RDSCosts": {
"BaseInstance": "db.t3.medium: $58.40/month",
"Storage": "100GB gp2: $11.50/month",
"Backups": "7 days retention: $2.30/month",
"MultiAZ": "Doubles instance cost: +$58.40/month",
"ReadReplicas": "Additional instances: +$58.40 each",
"UnexpectedTotal": "$189.00/month for 'simple' database"
}
}
5. Networking (The Table Service Charges)
{
"NetworkingCosts": {
"ApplicationLoadBalancer": "$16.20/month (always running)",
"NATGateway": "$32.40/month + $0.045/GB processed",
"VPCEndpoints": "$7.20/month (interface endpoints)",
"ElasticIPs": "$3.60/month when not attached to running instance"
}
}
What happened: A developer launched a $12.24/hour GPU instance for a two-hour experiment on a Friday and forgot to terminate it until Monday, 72 hours later.
The bill:
{
"ExpectedCost": "2 hours × $12.24/hour = $24.48",
"ActualCost": "72 hours × $12.24/hour = $881.28",
"Difference": "$856.80 surprise charge"
}
Prevention: Set billing alarms, and schedule automatic shutdown for any short-lived instance before you launch it.
What happened: A 2TB test database was copied to another region, then forgotten and left in storage for a month.
The bill:
{
"DataTransfer": "2000GB × $0.02/GB = $40.00",
"StorageInExpensiveRegion": "2000GB × $0.023/GB × 30 days = $1,380.00",
"Total": "$1,420.00 for forgotten test database"
}
Prevention: Tag test resources with an owner and an expiry date, and periodically audit regions you don't normally use.
What happened: A team "shut down" a staging environment by terminating its EC2 instances, but left the supporting infrastructure in place.
Monthly zombie costs:
{
"LoadBalancer": "$16.20/month (serving no traffic)",
"NATGateway": "$32.40/month (no instances to serve)",
"EBSVolumes": "$40.00/month (unattached storage)",
"ElasticIPs": "$10.80/month (3 unused IPs)",
"Total": "$99.40/month for 'shut down' environment"
}
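Zombie spend like this is pure arithmetic over the always-on rates. A minimal calculator using the figures from this article (actual prices vary by region):

```python
# Monthly cost of "shut down" infrastructure left behind (rates from this article)
ZOMBIE_RATES = {
    "load_balancer": 16.20,  # ALB, per month
    "nat_gateway": 32.40,    # per month, excluding per-GB processing
    "ebs_gb": 0.10,          # unattached gp2 storage, per GB-month
    "elastic_ip": 3.60,      # per idle IP, per month
}

def zombie_cost(load_balancers=0, nat_gateways=0, unattached_ebs_gb=0, idle_eips=0):
    """Total monthly cost of resources serving no traffic."""
    return round(
        load_balancers * ZOMBIE_RATES["load_balancer"]
        + nat_gateways * ZOMBIE_RATES["nat_gateway"]
        + unattached_ebs_gb * ZOMBIE_RATES["ebs_gb"]
        + idle_eips * ZOMBIE_RATES["elastic_ip"],
        2,
    )

# The environment above: 1 ALB, 1 NAT gateway, 400GB of volumes, 3 idle IPs
print(zombie_cost(1, 1, 400, 3))  # → 99.4
```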
Let's trace how a simple $5 app grows into a $400+ monthly expense:
{
"ExpectedCosts": {
"EC2": "t3.micro free tier: $0/month",
"RDS": "db.t3.micro free tier: $0/month",
"S3": "Few GB storage: $1/month",
"Total": "~$5/month after free tier"
}
}
{
"Month1": {
"ActualCosts": {
"EC2": "$8.50 (exceeded free tier hours)",
"RDS": "$13.50 (backup storage not free)",
"LoadBalancer": "$16.20 (needed for HTTPS)",
"S3": "$3.50 (more uploads than expected)"
},
"Total": "$41.70"
},
"Month2": {
"AddedFeatures": {
"FileUploads": "Need larger EBS volume: +$8.00",
"EmailService": "SES charges: +$2.50",
"CDN": "CloudFront: +$5.00",
"Monitoring": "CloudWatch logs: +$8.00"
},
"Total": "$65.20"
},
"Month3": {
"ProductionReadiness": {
"MultiAZ": "RDS Multi-AZ: +$58.40",
"NATGateway": "Private subnets: +$32.40",
"Backups": "Extended retention: +$15.00",
"SSL": "ACM + Route53: +$0.50"
},
"Total": "$171.50"
}
}
{
"Month6": {
"TrafficGrowth": {
"DataTransfer": "$85.00 (1000GB outbound)",
"LargerInstance": "$60.74 (t3.large for performance)",
"ReadReplica": "$58.40 (for read scaling)",
"IncreasedStorage": "$25.00 (500GB total)",
"RequestCharges": "$12.50 (millions of API calls)"
},
"Total": "$513.14"
}
}
{
"ExpensivePatterns": [
{
"Mistake": "Downloading large files from S3 frequently",
"Cost": "$90/TB transferred to internet",
"Solution": "Use CloudFront CDN: $85/TB (plus caching benefits)"
},
{
"Mistake": "Cross-region database replication",
"Cost": "$20/TB between regions",
"Solution": "Same-region read replicas: $1/TB within AZ"
}
]
}
{
"AlwaysRunningCosts": {
"LoadBalancer": "$16.20/month (even with no traffic)",
"NATGateway": "$32.40/month (even with no usage)",
"UnusedEIPs": "$3.60/month per unused IP",
"StoppedInstancesWithStorage": "$8/month per 80GB EBS volume"
}
}
{
"InstanceMismatch": {
"Overkill": {
"Instance": "m5.4xlarge for simple API",
"Cost": "$560/month",
"Utilization": "5% CPU, 10% memory"
},
"RightSized": {
"Instance": "t3.small with auto scaling",
"Cost": "$15/month average",
"Utilization": "70% CPU, 60% memory",
"Savings": "$545/month"
}
}
}
# Check current utilization
aws cloudwatch get-metric-statistics \
--namespace AWS/EC2 \
--metric-name CPUUtilization \
--dimensions Name=InstanceId,Value=i-1234567890abcdef0 \
--start-time 2025-01-01T00:00:00Z \
--end-time 2025-01-31T23:59:59Z \
--period 3600 \
--statistics Average
# If average CPU < 40%, consider smaller instance
# Find unattached EBS volumes
import boto3

# Approximate monthly $/GB by volume type (us-east-1; adjust for your region)
EBS_PRICE_PER_GB = {'gp2': 0.10, 'gp3': 0.08, 'io1': 0.125, 'st1': 0.045, 'standard': 0.05}

def calculate_ebs_cost(size_gb, volume_type):
    return size_gb * EBS_PRICE_PER_GB.get(volume_type, 0.10)

def find_unused_resources():
    ec2 = boto3.client('ec2')
    # Volumes in 'available' state are not attached to any instance
    volumes = ec2.describe_volumes(Filters=[{'Name': 'status', 'Values': ['available']}])
    total_wasted = 0
    for volume in volumes['Volumes']:
        size = volume['Size']
        volume_type = volume['VolumeType']
        monthly_cost = calculate_ebs_cost(size, volume_type)
        total_wasted += monthly_cost
        print(f"Unused {volume_type} volume: {size}GB = ${monthly_cost:.2f}/month")
    print(f"Total wasted on unused volumes: ${total_wasted:.2f}/month")

# Run monthly to find waste
find_unused_resources()
{
"ScheduledShutdowns": {
"Development": {
"Schedule": "Shutdown 7 PM - 8 AM weekdays, all weekend",
"Uptime": "45 hours/week vs 168 hours/week",
"Savings": "73% cost reduction"
},
"Staging": {
"Schedule": "Shutdown nights and weekends",
"Uptime": "50 hours/week",
"Savings": "70% cost reduction"
}
}
}
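The savings figures above fall straight out of the uptime ratio, since hourly-billed instances stop costing money when stopped. A quick sanity check:

```python
# Savings from running dev/staging only during working hours
HOURS_PER_WEEK = 168

def schedule_savings(uptime_hours_per_week):
    """Percent saved vs running 24/7, for instances billed per hour."""
    return round((1 - uptime_hours_per_week / HOURS_PER_WEEK) * 100)

print(schedule_savings(45))  # development: → 73
print(schedule_savings(50))  # staging: → 70
```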
Top cost drivers to check:
Data transfer (internet egress and cross-region traffic)
Always-on networking (load balancers, NAT gateways)
Unattached EBS volumes and idle Elastic IPs
Oversized or under-utilized instances
# Create billing alert for $100 threshold
aws cloudwatch put-metric-alarm \
--alarm-name "Billing-Alert-100" \
--alarm-description "Alert when bill exceeds $100" \
--metric-name EstimatedCharges \
--namespace AWS/Billing \
--statistic Maximum \
--period 86400 \
--threshold 100 \
--comparison-operator GreaterThanThreshold \
--dimensions Name=Currency,Value=USD \
--alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alerts
Tag everything to understand where money goes:
{
"TaggingStrategy": {
"Environment": ["Production", "Staging", "Development"],
"Project": ["WebApp", "MobileAPI", "DataPipeline"],
"Owner": ["TeamFrontend", "TeamBackend", "TeamData"],
"CostCenter": ["Engineering", "Marketing", "Operations"]
}
}
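Applying a scheme like this is mechanical. As a sketch, the helper below converts a plain dict into the Key/Value list that EC2's `create_tags` API expects (the instance ID in the comment is a placeholder):

```python
def to_aws_tags(tags):
    """Convert {'Environment': 'Production', ...} into EC2's tag list format."""
    return [{"Key": k, "Value": v} for k, v in tags.items()]

tags = to_aws_tags({"Environment": "Production", "Project": "WebApp", "Owner": "TeamBackend"})
# Apply with boto3: ec2.create_tags(Resources=['i-1234567890abcdef0'], Tags=tags)
print(tags[0])  # → {'Key': 'Environment', 'Value': 'Production'}
```

Remember to activate these as cost allocation tags in the Billing console, or they won't show up in Cost Explorer.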
{
"DevOptimization": {
"Compute": {
"Strategy": "t3.micro instances with auto-shutdown",
"Schedule": "8 AM - 6 PM weekdays only",
"Savings": "75% vs 24/7 operation"
},
"Database": {
"Strategy": "Aurora Serverless with auto-pause",
"IdleTime": "Pause after 5 minutes inactivity",
"Savings": "80% vs always-on RDS"
},
"Storage": {
"Strategy": "Lifecycle policies for automatic cleanup",
"Rule": "Delete dev data after 30 days",
"Savings": "Prevents endless storage accumulation"
}
}
}
{
"StagingOptimization": {
"Compute": "Smaller instances than production",
"Database": "Single AZ, shorter backup retention",
"Networking": "Single NAT Gateway instead of multi-AZ",
"MonthlySavings": "60% vs production equivalent"
}
}
{
"ProductionOptimization": {
"ReservedInstances": "Save 40-60% on predictable workloads",
"SpotInstances": "Save 70% on fault-tolerant workloads",
"RightSizing": "Monitor and adjust based on actual usage",
"ScheduledScaling": "Scale down during low-traffic hours"
}
}
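Before committing to a reservation, it's worth doing the break-even math explicitly. A sketch using the real t3.medium on-demand rate and a hypothetical effective RI rate (check current reserved-instance pricing for your instance type and term):

```python
# Break-even check for a 1-year reserved instance
def annual_cost_on_demand(hourly_rate, hours=8760):
    """Cost of running one instance 24/7 for a year at an hourly rate."""
    return hourly_rate * hours

def ri_savings_pct(on_demand_hourly, ri_effective_hourly):
    """Percent saved by a reservation vs running on-demand 24/7."""
    return round((1 - ri_effective_hourly / on_demand_hourly) * 100)

# t3.medium: $0.0416/hr on-demand vs a hypothetical $0.025/hr effective RI rate
print(ri_savings_pct(0.0416, 0.025))  # → 40
```

Reservations only pay off for instances that actually run most of the time; for bursty workloads, the scheduling strategies above usually save more.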
# Lambda function for cost anomaly alerts
import boto3
from datetime import date, timedelta

def lambda_handler(event, context):
    ce = boto3.client('ce')
    # Get cost for the last 7 days
    end = date.today()
    start = end - timedelta(days=7)
    response = ce.get_cost_and_usage(
        TimePeriod={'Start': start.isoformat(), 'End': end.isoformat()},
        Granularity='DAILY',
        Metrics=['BlendedCost']
    )
    # Calculate daily average
    daily_costs = [float(day['Total']['BlendedCost']['Amount'])
                   for day in response['ResultsByTime']]
    avg_daily_cost = sum(daily_costs) / len(daily_costs)
    # Check for anomalies (>50% above average)
    for cost in daily_costs:
        if cost > avg_daily_cost * 1.5:
            send_cost_alert(f"Unusual spending detected: ${cost:.2f} vs ${avg_daily_cost:.2f} average")
    return {'statusCode': 200}

def send_cost_alert(message):
    sns = boto3.client('sns')
    sns.publish(
        TopicArn='arn:aws:sns:us-west-2:123456789012:cost-alerts',
        Message=message,
        Subject='AWS Cost Anomaly Detected'
    )
{
"S3LifecyclePolicies": [
{
"Rule": "Move to IA after 30 days",
"Savings": "40% storage cost reduction"
},
{
"Rule": "Move to Glacier after 90 days",
"Savings": "80% storage cost reduction"
},
{
"Rule": "Delete after 1 year",
"Savings": "100% elimination of old data costs"
}
]
}
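As a sketch, the three rules above collapse into a single S3 lifecycle configuration, which could be applied with boto3's `put_bucket_lifecycle_configuration` (the rule ID and bucket in the comment are placeholders):

```python
# S3 lifecycle configuration implementing the three-tier policy above
lifecycle = {
    "Rules": [
        {
            "ID": "tier-and-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # empty prefix = apply to every object
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},  # infrequent access after 30 days
                {"Days": 90, "StorageClass": "GLACIER"},      # archive after 90 days
            ],
            "Expiration": {"Days": 365},  # delete after 1 year
        }
    ]
}
# Apply with boto3:
# s3.put_bucket_lifecycle_configuration(Bucket='my-bucket', LifecycleConfiguration=lifecycle)
```

Objects moved to Glacier take minutes to hours to retrieve, so reserve that tier for data you genuinely rarely need.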
{
"DatabaseOptimization": [
{
"Strategy": "Aurora Serverless for variable workloads",
"Savings": "40-70% vs always-on RDS"
},
{
"Strategy": "Read replicas in same AZ",
"Savings": "Eliminate cross-AZ data transfer charges"
},
{
"Strategy": "Shorter backup retention (7 days vs 35 days)",
"Savings": "75% backup storage cost reduction"
}
]
}
{
"ComputeOptimization": [
{
"Strategy": "Spot instances for batch processing",
"Savings": "70% vs On-Demand pricing"
},
{
"Strategy": "Reserved instances for steady workloads",
"Savings": "40-60% vs On-Demand pricing"
},
{
"Strategy": "Lambda for event-driven tasks",
"Savings": "Pay only for execution time, no idle costs"
}
]
}
Understanding AWS costs is like learning to read restaurant bills - once you know where to look, you can avoid expensive surprises.
The companies that master cloud cost optimization gain a huge competitive advantage - they can experiment more, scale faster, and operate more efficiently than competitors who let cloud costs spiral out of control.
Tired of AWS bill surprises? Huskar automatically schedules your non-critical resources to run only when needed, reducing costs by 40-70% without affecting performance. Our intelligent scheduling works with your existing architecture to eliminate waste while maintaining reliability. Try our free tier to see how much you could save.
AWS, Cloud Costs, Cost Optimization, Billing, EC2, RDS, S3, ELI5, Startups