The Talent500 Blog

9 Levels of AWS Cost Savings

AWS offers great hosting functionality for engineers — at a cost. Understanding how these costs work can help you save serious money, and there are many levels of spending optimization you can put into practice.

Level 0 — Budgets

Whether you have an existing account or are just getting started, you’ll want to set up a budget alarm. If you’re setting up a small personal project or bootstrapping a product, setting a budget in AWS is a must to avoid burning through your… budget. In an established environment, budget alerts can notify you when your bill is trending higher than usual in a given month; you’ll likely need to investigate from there.
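For the scripting-inclined, the alert is easy to codify. Here’s a minimal sketch of the parameters for a monthly cost budget with an 80% actual-spend alert, usable with boto3’s `budgets.create_budget` (or `aws budgets create-budget`); the budget name, $50 limit, and email address are placeholders:

```python
# Sketch: build create-budget parameters for a monthly cost budget that
# alerts by email at 80% of actual spend. Name, limit, and email are
# placeholders -- adjust for your account.

def monthly_cost_budget(name: str, limit_usd: float, email: str) -> dict:
    return {
        "Budget": {
            "BudgetName": name,
            "BudgetLimit": {"Amount": str(limit_usd), "Unit": "USD"},
            "TimeUnit": "MONTHLY",
            "BudgetType": "COST",
        },
        "NotificationsWithSubscribers": [{
            "Notification": {                     # fire at 80% of the limit
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [{"SubscriptionType": "EMAIL", "Address": email}],
        }],
    }

params = monthly_cost_budget("personal-project", 50.0, "me@example.com")
# boto3.client("budgets").create_budget(AccountId="123456789012", **params)
```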

Level 1 — Turn it Off

The best way to save money in AWS is to make sure you’ve “turned the lights off” when you’re done using them. Make sure any running services are still necessary. A monthly review of your AWS bill to spot services that no longer serve business needs is a good way to find “launched and forgotten” services costing you money.

Level 2 — Savings Plans

AWS Savings Plans come in three flavors with various trade-offs. To fully optimize AWS spend, I’d recommend a mix of Compute Savings Plans, EC2 Instance Savings Plans, and Reserved Instances (covered in Level 6). Getting the right mix will take time. If you only have a little time to spare, “Level 2” of cost savings is to identify a good level of Compute Savings Plan coverage.

With Savings Plans and Reserved Instances, the best savings compared to on-demand go to those who can make 3 year commitments paid all upfront. Slightly smaller savings are available when paying partially upfront, and no-upfront 3 year commitments still offer savings, but at a lower rate. 1 year commitments are also available in all-upfront, partial-upfront, and no-upfront variants.

The following compares the Compute Savings Plan rates in us-east-1 for an m6g.large Linux instance, retrieved from the AWS Pricing Page:


| Term length | Payment Options | On-Demand Rate | Savings Plan Rate | Relative Savings |
|-------------|-----------------|----------------|-------------------|------------------|
| 1 year      | No Upfront      | $0.077         | $0.0567           | 26%              |
| 1 year      | Partial Upfront | $0.077         | $0.054            | 30%              |
| 1 year      | All Upfront     | $0.077         | $0.0529           | 31%              |
| 3 year      | No Upfront      | $0.077         | $0.04             | 48%              |
| 3 year      | Partial Upfront | $0.077         | $0.037            | 52%              |
| 3 year      | All Upfront     | $0.077         | $0.0363           | 53%              |
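The “Relative Savings” column is easy to sanity-check, or to recompute for a different instance type: it’s just the percentage discount off the on-demand rate (the partial- and all-upfront rows are effective hourly rates with the upfront payment amortized over the term).

```python
# Recompute the "Relative Savings" column from the two hourly rates.

def relative_savings(on_demand: float, plan_rate: float) -> int:
    """Percent saved vs. on-demand, rounded to the nearest whole percent."""
    return round((1 - plan_rate / on_demand) * 100)

# m6g.large in us-east-1, rates from the table above
print(relative_savings(0.077, 0.0567))  # 26 (1 year, no upfront)
print(relative_savings(0.077, 0.0363))  # 53 (3 year, all upfront)
```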


How much Savings Plans coverage will you need?

In the web console, AWS provides Savings Plans Recommendations, which can be a good start based on historical trends. Be aware that if you’ve recently been following “Level 1: Turn it Off” and shutting down unused systems, you may still get recommendations based on that historic usage. I’d recommend starting at less than what you think you need and working your way up. If you’re taking a hands-on approach to managing AWS spend, having multiple Savings Plans expire throughout the year gives you flexibility to adjust mid-year that a single 1-year reservation otherwise wouldn’t.

Level 3 — Reserved Instances (non EC2)

Assuming you have workloads running other than EC2 compute, you can likely benefit from reserving capacity in them. A full list of the services that allow reservations can be found in the AWS Docs. We’ll use Aurora with MySQL compatibility as an example, since odds are pretty good your application has a database of some sort.

Once again, your savings are best if you’re willing to commit to longer terms or pay upfront. Note that for some of these services, the option to reserve for 3 years with no upfront cost is absent.

At this point, I’ll mention a necessary budgeting tool: the AWS Pricing Calculator. It’s important to note that RDS/Aurora reservations only cover instance costs; storage, IOPS, backups, and other charges are not covered. The Pricing Calculator can help project the total cost of running an AWS service and compare multiple reservation options.
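As a back-of-the-envelope version of what the calculator does, the sketch below projects an Aurora instance’s monthly bill and shows that a reservation only shrinks the instance line. The 500 GB of storage, 100 million I/Os, and the $0.10/GB-month and $0.20-per-million-I/O rates are illustrative assumptions, not current prices:

```python
# Project an Aurora instance's monthly bill, separating what a reservation
# covers (instance hours) from what it does not (storage and I/O).
# Storage/IO volumes and rates below are illustrative assumptions.

HOURS_PER_MONTH = 730

def aurora_monthly(instance_rate: float, storage_gb: float, io_millions: float,
                   storage_rate: float = 0.10, io_rate: float = 0.20) -> dict:
    instance = instance_rate * HOURS_PER_MONTH
    other = storage_gb * storage_rate + io_millions * io_rate
    return {"instance": round(instance, 2), "other": round(other, 2),
            "total": round(instance + other, 2)}

on_demand = aurora_monthly(0.26, storage_gb=500, io_millions=100)
reserved  = aurora_monthly(0.090, storage_gb=500, io_millions=100)  # 3yr all-upfront effective rate
print(on_demand)  # {'instance': 189.8, 'other': 70.0, 'total': 259.8}
print(reserved)   # {'instance': 65.7, 'other': 70.0, 'total': 135.7}
```

Note that only the “instance” line shrinks; the storage and I/O portion of the bill is untouched by the reservation.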

Once again using the us-east-1 region and a db.r6g.large MySQL-Compatible Aurora instance yields a familiar menu — remember that these savings only consider instance costs:


| Term length | Payment Options | On-Demand Rate | Reserved Rate | Relative Savings |
|-------------|-----------------|----------------|---------------|------------------|
| 1 year      | No Upfront      | $0.26          | $0.170        | 35%              |
| 1 year      | Partial Upfront | $0.26          | $0.144        | 44%              |
| 1 year      | All Upfront     | $0.26          | $0.142        | 46%              |
| 3 year      | No Upfront      | $0.26          | n/a           | n/a              |
| 3 year      | Partial Upfront | $0.26          | $0.096        | 63%              |
| 3 year      | All Upfront     | $0.26          | $0.090        | 65%              |


When it comes to Database costs, there are more involved cost savings measures to consider as well:

  • Do you have non-performant queries increasing your IO costs?
  • Are you storing data that is still useful?
  • Is some of your data more appropriately stored in an object storage solution like S3?

Similar to turning services off after you’re done using them, these sorts of questions can help pare down your RDS/Aurora service usage.

Level 4 — S3

AWS S3 object storage is incredibly useful for many applications. The defaults in S3 provide incredible reliability and data durability SLAs. If you’re looking to save money on your AWS bill, you’ll need to ask whether all of the data you’ve stored in S3 needs to be in the default, and most costly, “S3 Standard” storage class, or whether other storage classes are more appropriate.

On its face, S3 Intelligent-Tiering promises to optimize storage costs without any manual intervention. In my experience, though, many workloads have a huge number of objects with a small amount of data per object. S3 Intelligent-Tiering has a monitoring overhead of $0.0025 per 1,000 objects and will not monitor objects 128 KB or smaller. At 100 million objects, the monitoring cost reaches $250/mo. If your average object size were 128 KB, the same data in the S3 Standard storage class would cost $294.40/mo in us-east-1, so Intelligent-Tiering would initially nearly double your costs, and would likely increase your overall spend unless your access patterns are exceptionally rare.
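That small-object math is easy to reproduce; the $0.023/GB-month S3 Standard rate for us-east-1 is assumed:

```python
# Intelligent-Tiering's monitoring fee vs. the S3 Standard storage bill
# for 100 million 128 KB objects ($0.023/GB-month assumed for us-east-1).

objects = 100_000_000
monitoring = objects / 1_000 * 0.0025      # $0.0025 per 1,000 objects
storage_gb = objects * 128 / 1_000_000     # 128 KB objects, in (decimal) GB
standard_cost = storage_gb * 0.023

print(round(monitoring, 2))      # 250.0
print(round(standard_cost, 2))   # 294.4 -> monitoring alone is ~85% of the storage bill
```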

Cost comparison by number of objects for 10 TiB of data in S3 Standard vs. S3 Intelligent-Tiering fully utilizing the Archive Instant Access tier

S3 Intelligent-Tiering can greatly reduce costs for workloads with large per-object file sizes. In the graph above, if 10 TiB were composed of 50 million objects that could be 100% allocated to the Archive Instant Access tier, Intelligent-Tiering could reduce storage costs from $230/mo. in S3 Standard to $165/mo.

If S3 Intelligent-Tiering isn’t a good fit, you can run a Storage Class Analysis to determine whether a Lifecycle Policy that moves some of your objects to S3-IA would be cost-effective. S3-IA cuts the cost of storing objects roughly 50% compared to S3 Standard, but comes with increased retrieval costs. Storage Class Analysis itself incurs some cost, but the insights it provides can greatly help you understand access patterns in your buckets. In one of my buckets, I saw a very obvious drop-off in access of S3 objects 30+ days old, down to just a 1% chance of retrieval! So a strategy of transitioning objects to S3-IA 30 days after creation made sense. If you get an obvious insight from Storage Class Analysis as I did, you can turn it off until the next time you want to measure your usage patterns — just be aware that it can take weeks to accurately observe your patterns.
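If you land on a similar conclusion, the matching Lifecycle rule is short. This is a sketch: the empty prefix (whole bucket) is a placeholder, and you’d apply it with boto3’s `put_bucket_lifecycle_configuration`:

```python
# Lifecycle rule: transition objects to S3-IA 30 days after creation.
# The empty prefix (whole bucket) and bucket name are placeholders.

lifecycle = {
    "Rules": [
        {
            "ID": "to-infrequent-access-after-30d",
            "Filter": {"Prefix": ""},   # whole bucket; narrow as needed
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"}
            ],
        }
    ]
}

# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-bucket", LifecycleConfiguration=lifecycle)
```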

Consider a simplified example, graphed below, where a constant 10 TiB of data is stored in either S3 Standard or S3-IA and the number of GET requests against it varies (x-axis). The break-even in this case is 175 million GET requests: if your data is not actually “infrequently accessed”, then despite the favorable storage costs you can incur additional charges, because retrieval is more expensive in S3-IA. If your 10 TiB is rarely retrieved, however, at just 10 million GET requests the savings of S3-IA would be roughly $99/month, or 42%.

Break-even cost of S3 Standard vs. S3-IA
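The break-even arithmetic behind that graph can be sketched directly; the storage and per-1,000-GET rates below are assumed us-east-1 prices:

```python
# Break-even for the 10 TB example: storage savings vs. the extra GET cost
# in S3-IA. Assumed us-east-1 rates: $0.023 vs. $0.0125 per GB-month for
# storage, $0.0004 vs. $0.001 per 1,000 GET requests.

std_storage = 10_000 * 0.023    # ~$230/mo in S3 Standard
ia_storage  = 10_000 * 0.0125   # ~$125/mo in S3-IA
std_get = 0.0004 / 1_000        # cost per GET request
ia_get  = 0.001 / 1_000

break_even_gets = (std_storage - ia_storage) / (ia_get - std_get)
print(round(break_even_gets))   # 175000000

gets = 10_000_000
savings = (std_storage - ia_storage) - gets * (ia_get - std_get)
print(round(savings))           # 99 -> about $99/mo saved at 10M GETs
```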

Level 5 — Right-sizing and Instance Families

Not every workload is built the same. You likely know the CPU, memory, and other requirements of your workloads between EC2, RDS, and other services. You also have the best idea of how many instances you need to operate.

Right-sizing instances could be a lengthy topic of its own, but the short version can be found in the CloudWatch dashboard. If you observe any of the following, you might have cost optimizations available to you by moving to a less expensive instance or running fewer instances in a scaling group:

  • CPU Utilization rarely reaching 30–40%
  • Memory Utilization rarely reaching 30–40% on memory-optimized instance classes (“but I swear <Language X> is a memory hog” — your engineers, probably)
  • In the case of EC2, running more replicas than your high-availability goals require, resulting in low CPU/memory utilization
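A first screening pass over utilization numbers (however you export them from CloudWatch) might look like the sketch below; the 40% cutoff mirrors the rule of thumb above and is an assumption to tune, not an AWS recommendation:

```python
# Flag instances whose peak CPU and memory both stay below a threshold,
# as downsizing candidates. The 40% cutoff is a rule-of-thumb assumption.

def downsize_candidates(instances: list[dict], threshold: float = 40.0) -> list[str]:
    """IDs of instances whose peak CPU% and memory% are both under threshold."""
    return [
        inst["id"] for inst in instances
        if inst["peak_cpu"] < threshold and inst["peak_mem"] < threshold
    ]

fleet = [
    {"id": "i-aaa", "peak_cpu": 22.0, "peak_mem": 31.0},  # downsize candidate
    {"id": "i-bbb", "peak_cpu": 78.0, "peak_mem": 35.0},  # CPU-bound, keep
]
print(downsize_candidates(fleet))  # ['i-aaa']
```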

I’ve been working in AWS long enough that I remember m3 instances being the de facto standard in my first AWS account — hint: they aren’t anymore. AWS regularly launches new instance families that promise better performance, often at a lower price. However, there are two things to consider before assuming the most recent instance family is your best option:

  1. Don’t assume the newest is the cheapest — the Graviton3 m7g is roughly 10% more expensive than the Graviton2 m6g, but promises significant performance improvements.
  2. Don’t assume your workload will run on any architecture — the difference between M7g, M7i, and M7a instances goes beyond their Scrabble value: the chipset. M7g uses ARM-based AWS Graviton3 processors; M7i uses x86 4th Gen Intel Xeon processors; M7a uses x86 4th Gen AMD EPYC processors. If you’re upgrading from the M5/M4/M3 instance families, those are x86. Make sure to recompile and test your workload if you’re eyeing ARM-based instances!

Level 6 — EC2 Instance Savings Plans and Reserved Instances

Did you think you were done with EC2? Well, you could have been, but if you weren’t satisfied just throwing a Compute Savings Plan in your account and moving on, there’s more!

Compared to Compute Savings Plans, EC2 Instance Savings Plans offer an increase in potential savings, but with a decrease in flexibility: you must commit to a particular instance family in a particular region.

Keeping with the m6g.large Linux example above, the following compares the EC2 Instance Savings Plan rates in us-east-1, retrieved from the AWS Pricing Page:


| Term length | Payment Options | On-Demand Rate | Savings Plan Rate | Relative Savings |
|-------------|-----------------|----------------|-------------------|------------------|
| 1 year      | No Upfront      | $0.077         | $0.0483           | 37%              |
| 1 year      | Partial Upfront | $0.077         | $0.046            | 40%              |
| 1 year      | All Upfront     | $0.077         | $0.0451           | 41%              |
| 3 year      | No Upfront      | $0.077         | $0.0335           | 56%              |
| 3 year      | Partial Upfront | $0.077         | $0.031            | 60%              |
| 3 year      | All Upfront     | $0.077         | $0.0291           | 62%              |


For this particular region and instance family, locking in to the m6g class makes roughly 9–10 additional percentage points of savings (relative to on-demand rates) available.

Another option is the more traditional purchase of Reserved Instances. For us-east-1 and the m6g.large Linux instance class, the rates are exactly the same as the EC2 Instance Savings Plan. The main difference is that Reserved Instances give you access to the Reserved Instance Marketplace. I have very little experience with the Marketplace, but in theory it could be an advantage to offload highly sought-after instance class reservations before the reservation term is over.

Level 7 — Combining SPs and RIs

At this point you’re probably wondering whether you need a Ph.D. in AWS cost optimization to best protect your hard-earned cash. Answer: you might. However, don’t let perfect be the enemy of good: if you’ve reaped solid savings to this point, double-check the opportunity-cost trade-off, because there are far more spreadsheets and tabulations from here on out.

Consider your EC2 workload and answer the following questions:

  1. Do you expect your compute usage to increase, decrease, or fluctuate? If your usage might decrease, choose a 1 year term and try to project a minimal baseline.
  2. Do you have the time to figure all of this out? If not, go with a simple Compute Savings Plan.
  3. Do you have a stable baseline of usage for at least some of your workloads? Consider reserved instances for these and add Compute Savings Plan for any more variable workloads.
  4. Have you observed good savings with 1 year terms so far? Consider mixing in 3 year terms for workloads you’re certain will continue.

As 1 year RIs and SPs expire, strategic 3 year commitments can replace them before you fill in the rest with smaller 1 year renewals.
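The split described above can be sketched numerically: reserve the always-on floor, commit a Savings Plan to part of the variable remainder, and leave the rest on-demand. The hourly spend samples and the 50% coverage fraction below are hypothetical:

```python
# Split hourly compute spend into an RI baseline and an SP commitment.
# Sample data and the 50% coverage fraction are hypothetical.

def plan_commitments(hourly_spend: list[float], sp_fraction: float = 0.5) -> dict:
    """Reserve the always-on floor; cover part of the variable load with an SP."""
    baseline = min(hourly_spend)                     # always-on floor -> RIs
    variable = sum(h - baseline for h in hourly_spend) / len(hourly_spend)
    return {"ri_baseline": baseline,
            "sp_commit": round(variable * sp_fraction, 2)}

# e.g. sampled hourly compute spend ($/hr) across a week
print(plan_commitments([4.0, 4.0, 6.0, 9.0, 5.0]))  # {'ri_baseline': 4.0, 'sp_commit': 0.8}
```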

Level 8 — Spot Instances

AWS Spot Instances offer the highest potential savings, but carry the considerable overhead of managing spot requests and handling workloads that may be disrupted. This offering can be a good fit for temporary workloads such as testing environments, CI/CD executors, bespoke scripts, etc. You can even use Spot Instances in combination with Auto Scaling Groups to handle spikes in traffic.
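As a sketch of that combination, here’s a hypothetical `MixedInstancesPolicy` for an EC2 Auto Scaling group that keeps a small on-demand floor and fills everything above it with Spot; the launch template name, instance types, and group settings are placeholders:

```python
# Hypothetical MixedInstancesPolicy: two on-demand instances as a stable
# floor, everything above filled with Spot. Names/types are placeholders.

mixed_policy = {
    "LaunchTemplate": {
        "LaunchTemplateSpecification": {
            "LaunchTemplateName": "web-tier",   # placeholder launch template
            "Version": "$Latest",
        },
        # Offering several sizes gives Spot more capacity pools to draw
        # from, which reduces interruptions.
        "Overrides": [
            {"InstanceType": "m6g.large"},
            {"InstanceType": "m6g.xlarge"},
        ],
    },
    "InstancesDistribution": {
        "OnDemandBaseCapacity": 2,                 # always-on floor
        "OnDemandPercentageAboveBaseCapacity": 0,  # everything above is Spot
        "SpotAllocationStrategy": "price-capacity-optimized",
    },
}

# boto3.client("autoscaling").create_auto_scaling_group(
#     AutoScalingGroupName="web-asg", MinSize=2, MaxSize=10,
#     MixedInstancesPolicy=mixed_policy, VPCZoneIdentifier="subnet-a,subnet-b")
```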

Level 9 — NAT Gateways and Service Endpoints

Applications running on a private subnet require a NAT Gateway to send data out to the public Internet. By default, this includes traffic to AWS services like S3, CloudWatch Logs, etc. NAT Gateways cost $0.045 per GB of data “processed” through them (in us-east-1).

Many AWS services allow VPC endpoints to be created. Two services, S3 and DynamoDB, support Gateway endpoints, which are free: there is no hourly charge and no per-GB data processing charge, since traffic to the service is routed locally within your VPC. The remaining services use Interface endpoints, which are charged $0.01 per AZ per hour plus $0.01 per GB of data processed. For any meaningful volume of traffic, either endpoint type is cost-advantageous compared to the $0.045 per GB a NAT Gateway charges.

Let’s put an example together to illustrate:

  • You guessed it, us-east-1 region
  • An application running on 3 subnets in 3 AZs
  • Application sends 1 TiB of data to CloudWatch Logs per month
  • Application sends 2 TiB of data to S3 per month

Option A — Only NAT Gateway


| Service                     | Running Cost | Data Processing GBs | Cost Per GB | Monthly Cost |
|-----------------------------|--------------|---------------------|-------------|--------------|
| NAT Gateway Hourly          | $0.045/hr    | –                   | –           | $32.85       |
| NAT Gateway Data Processing | –            | 3,000               | $0.045      | $135.00      |

Total: $167.85/mo

Option B — Use S3 Gateway Endpoint and Logs VPC Endpoint


| Service                     | Running Cost | Data Processing GBs | Cost Per GB | Monthly Cost |
|-----------------------------|--------------|---------------------|-------------|--------------|
| NAT Gateway Hourly          | $0.045/hr    | –                   | –           | $32.85       |
| Logs VPC Endpoint (×3 AZ)   | $0.030/hr    | –                   | –           | $21.90       |
| VPC Endpoint Processing     | –            | 1,000               | $0.01       | $10.00       |
| S3 Gateway Endpoint         | $0           | 2,000               | $0          | $0           |
| NAT Gateway Data Processing | –            | 0                   | $0.045      | $0           |

Total: $64.75/mo (the S3 Gateway endpoint carries no hourly or data processing charge)

To best understand your own network, enabling VPC Flow Logs and identifying traffic sent to other AWS services is highly recommended.
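For Interface endpoints, a quick way to gauge whether one pays for itself is to compare its hourly fee plus $0.01/GB against the NAT Gateway’s $0.045/GB; us-east-1 rates, 3 AZs, and 730 hours/month are assumed:

```python
# Compare NAT Gateway data processing ($0.045/GB) to an Interface
# endpoint ($0.01 per AZ-hour plus $0.01/GB) for one service's traffic.

HOURS_PER_MONTH = 730

def nat_processing(gb: float) -> float:
    return gb * 0.045

def interface_endpoint(gb: float, azs: int = 3) -> float:
    return azs * 0.01 * HOURS_PER_MONTH + gb * 0.01

for gb in (100, 1_000):
    print(gb, round(nat_processing(gb), 2), round(interface_endpoint(gb), 2))
# At 100 GB/mo the endpoint's hourly fee dominates (NAT is cheaper);
# by 1 TB/mo the endpoint wins.
```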


If nothing else, I hope that I have given you the curiosity to look at your AWS monthly bill with a new lens of scrutiny. Optimizing your AWS spend definitely comes at the opportunity cost of whatever else you would spend your hours pursuing; I highly recommend looking at categories of your AWS spend and asking yourself “is this worth my time to look for cost savings?”

Through basic EC2 savings plans and some reserved instances, chances are you can look at your AWS bill, look at yourself in the mirror, and say “my time is better spent building things.”


Priyam Vaidya

A certified cloud architect (Azure and AWS) with over 15 years of experience in IT, currently working as a Sr. Cloud Infrastructure Engineer. Loves to explore new technology and train others on it.
