People starting out with AWS quickly discover the free tier, which includes running a t2.micro virtual machine. A t2.micro provides 1GiB RAM and 1 vCPU, and new users can run one of these at no cost for 12 months. Even after the free tier expires, the t2.micro instance is cheap, typically costing around US$10 a month to run constantly. Don’t forget that you also have to pay for the EBS volume (virtual disk) and bandwidth out of AWS, although some of that is covered by the free tier too.
The problem with T instances is that you don’t get a whole thread of a CPU core all the time. T2, T3 and T3a instances earn “CPU credits” at a rate that depends on their size, and they burn credits whenever the CPU does any work. If you run out of credits, you don’t get the whole CPU. Let me explain:
A “CPU Credit” provides:
100% of a core for 1 minute, or
25% of a core for 4 minutes, or
10% of a core for 10 minutes, etc.
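The definition above boils down to a simple product: credits burned = CPU fraction × minutes, so one credit is 100% of a core for one minute. A minimal sketch:

```python
def credits_burned(cpu_fraction, minutes):
    # One CPU credit = 100% of one core for one minute
    return cpu_fraction * minutes

# The three equivalent examples from the list above:
print(credits_burned(1.00, 1))    # 100% of a core for 1 minute  -> 1 credit
print(credits_burned(0.25, 4))    # 25% of a core for 4 minutes  -> 1 credit
print(credits_burned(0.10, 10))   # 10% of a core for 10 minutes -> 1 credit
```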
A t2.micro has 1 vCPU. According to the documentation, it earns 6 CPU credits per hour. That’s one CPU credit every 10 minutes, conveniently making the maths easy.
Hence a t2.micro has a “baseline” of 6 credits / 60 minutes = 10%.
On the left of the chart below, the instance is burning through its existing CPU credits (in blue) faster than it is earning them, until they run out. When they run out, the CPU utilization (in orange) gets throttled down. After they’ve run out, the instance will have its CPU throttled so that it burns through CPU credits at the same rate that they are earned. For a t2.micro, this “baseline” is 10%.
A t2.small also has 1 vCPU. It earns 12 CPU Credits per hour.
Hence a t2.small has a baseline of 12 / 60 = 20%.
So a t2.small will burn through CPU credits at the same rate that they are earned if the CPU is running at 20%. This is its “baseline”.
A t3.large (which has 2x vCPU) earns 36 CPU Credits per hour.
Hence a t3.large has a baseline of 36 / 60 / 2 = 30% per vCPU.
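Putting the three baseline calculations together: the baseline is simply credits earned per hour, spread over 60 minutes, divided across the vCPUs. A quick check in Python (earn rates are the documented values quoted above):

```python
def baseline_pct(credits_per_hour, vcpus):
    # Baseline CPU% per vCPU: hourly earn rate spread over 60 minutes,
    # shared across the instance's vCPUs
    return credits_per_hour * 100 / 60 / vcpus

print(baseline_pct(6, 1))    # t2.micro -> 10.0
print(baseline_pct(12, 1))   # t2.small -> 20.0
print(baseline_pct(36, 2))   # t3.large -> 30.0
```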
Let’s look at this in more detail:
On a t2.micro, your applications can use 10% of the CPU constantly forever, because the earn rate is the same as the burn rate. If your application uses less than 10% CPU load, then the t2.micro instance will “earn” credits faster than they are burned, allowing you to run at more than 10% CPU later until you have burned all your credits.
CPU credits are processed on a millisecond basis.
The number of credits that an instance can accrue is equivalent to the number of credits that can be earned in a 24-hour period. For example, a t2.micro instance has a maximum CPU credits balance of 144 credits.
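The accrual cap follows directly from the earn rate, since it is just 24 hours’ worth of credits:

```python
def max_credit_balance(credits_per_hour):
    # The accruable balance is capped at 24 hours' worth of earned credits
    return credits_per_hour * 24

print(max_credit_balance(6))    # t2.micro -> 144
print(max_credit_balance(12))   # t2.small -> 288
```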
The image below shows an instance with varying CPU load. Until midnight, the rate of earn is the same as the rate of burn, and the credits balance is zero. From midnight, CPU% drops to near zero and credit balance steadily rises. At about 04:30, there is a spike of CPU use for a few minutes, and the credit balance drops a bit.
What if I need more than the baseline of CPU load?
This “burstable” CPU capacity is adequate for many use cases, particularly for instances that are idle a lot of the time and have short spikes of activity. If credits are available, the virtual machine can instantly use up to 100% CPU capacity for a while.
But what happens if we have run out of CPU credits? Switching to a larger instance type will require several minutes of downtime. But it is possible to buy extra credits! Let me introduce “T2/T3 Unlimited”:
- In STANDARD MODE (T2/T3 Unlimited is disabled – which is the default with T2), as CPU credits run low, the instances are gently throttled down to the baseline.
- In UNLIMITED MODE (T2/T3 Unlimited is enabled – which is the default with T3), when CPU credits run out, surplus credits are added to the instance. They cost $0.05 per vCPU hour (Linux) or $0.096 (Windows).
- T2, T3 and T3a instances support both Standard and Unlimited mode. You can enable and disable T2/T3 Unlimited Mode at any time, without any disruption to your application or the operating system.
Let’s look at a cost comparison. This assumes a Linux instance, running in the London (eu-west-2) region, running at 100% CPU permanently:
- m5.large (2 vCPU, 8GiB RAM) $0.111/hr (regardless of CPU load)
- t3.large (2 vCPU, 8GiB RAM) $0.0944/hr (30% CPU baseline – see docs)
- plus 70% * 2 vCPU * $0.05 per vCPU hour (= $0.07)
- total $0.1644/hr for running a t3.large at 100%
- HENCE a t3.large costs about 48% more ($0.0534/hr extra) than the m5.large.
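The comparison above can be reproduced with a few lines of arithmetic. The rates are the ones quoted in this article (eu-west-2, Linux, on-demand); check current AWS pricing before relying on them:

```python
M5_LARGE_HOURLY = 0.111        # $/hr, fixed regardless of CPU load
T3_LARGE_HOURLY = 0.0944       # $/hr base price
SURPLUS_PER_VCPU_HOUR = 0.05   # $ per vCPU-hour of surplus credits (Linux)
VCPUS = 2
BASELINE = 0.30                # t3.large baseline per vCPU

# Running at 100% means paying surplus charges for everything above baseline
surplus_cost = (1.0 - BASELINE) * VCPUS * SURPLUS_PER_VCPU_HOUR
t3_total = T3_LARGE_HOURLY + surplus_cost
extra = t3_total - M5_LARGE_HOURLY

print(f"t3.large at 100% CPU: ${t3_total:.4f}/hr")
print(f"premium over m5.large: ${extra:.4f}/hr ({extra / M5_LARGE_HOURLY:.0%})")
```

The lesson: burstable instances are priced for bursty workloads; at sustained 100% CPU, the fixed-performance m5.large is the cheaper choice.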
CPU Load is usually higher at launch time
When an instance is first started, CPU load is usually higher than average. For this reason, T2 instances launched in Standard mode receive launch credits (30 extra credits per vCPU, at no extra cost), allowing 100% CPU at start-up for just over 30 minutes before the CPU is throttled down to the baseline.
- T2 instances launched in Unlimited Mode do not receive launch credits.
- T3 and T3a instances do not get launch credits at all. If you want full power at the start, use Unlimited mode.
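The “just over 30 minutes” figure can be sanity-checked: at 100% load a t2.micro burns 1 credit per minute but still earns at its baseline rate, so the 30 launch credits drain at 0.9 credits per minute:

```python
LAUNCH_CREDITS = 30      # per vCPU, T2 Standard mode only
EARN_PER_MIN = 6 / 60    # t2.micro earns 6 credits per hour
BURN_PER_MIN = 1.0       # 100% of one core burns 1 credit per minute

# Net drain rate while flat-out is burn minus earn
minutes_at_full = LAUNCH_CREDITS / (BURN_PER_MIN - EARN_PER_MIN)
print(f"{minutes_at_full:.1f} minutes at 100% CPU")   # just over 30 minutes
```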
It is possible that the surplus credits (in Unlimited mode) will cost nothing, if the CPU credits earned in a 24-hour window are greater than the surplus credits burned when above the baseline. For example:
- A t2.micro instance runs at 100% CPU for the first hour, then has an average of 5% load for the rest of the day.
- It earns 144 CPU credits in a 24-hour period.
- It burns 60 CPU credits in the first hour, then 3 CPU credits for each of the remaining 23 hours. Total 129 credits burned.
- At the end of the 24-hour window, a positive balance of credits remains, so no charge is made for the burst at the start.
However, if in the same example the CPU load was 100% for 2 hours:
- 120 CPU credits are burned in the first 2 hours.
- 5% load takes 8 hours to burn through the remaining 24 CPU credits.
- At the end of hour 11, we are charged for 3 CPU credits.
- At the end of hour 12, we are charged for 3 more CPU credits, and so on hourly for the rest of the 24-hour window.
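Both examples can be checked with the simplified 24-hour accounting used above: total up the credits burned over the day, compare with the 144 earned, and only the shortfall is billed as surplus. (AWS actually reconciles surplus credits hourly, but the daily totals come out the same.)

```python
def surplus_charged(hourly_loads_pct, earn_per_hour=6):
    # Simplified t2.micro day: one credit = 100% of one core for one minute,
    # so an hour at load% burns load * 60 / 100 credits
    burned = sum(pct * 60 / 100 for pct in hourly_loads_pct)
    earned = earn_per_hour * 24
    return max(0, burned - earned)   # only the shortfall is billed

day1 = [100] + [5] * 23        # 100% for 1 hour, then 5% load -> 129 burned
day2 = [100, 100] + [5] * 22   # 100% for 2 hours, then 5% load -> 186 burned
print(surplus_charged(day1))   # no surplus charge (129 < 144)
print(surplus_charged(day2))   # 42 credits billed (186 - 144)
```

The 42 billed credits match the hourly view above: 3 credits per hour from the end of hour 11 through hour 24, i.e. 14 × 3 = 42.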
Pricing examples are for eu-west-2 (London). Prices vary by region.
By the way, T instances aren’t the only thing in AWS with a burst-mode system. Look at EBS volumes too (TL;DR: gp2 volumes have burst credits as well; consider io1 if you need sustained IOPS.)