Lead Image © stillfix, 123RF.com

New storage classes for Amazon S3

Class Society

Article from ADMIN 55/2020
Each Amazon storage class addresses a different usage profile; we examine the new classes to help you make the right choice.

AWS introduced several new storage services and databases at re:Invent 2018, including new storage classes for Amazon Simple Storage Service (S3). Since then, S3 Intelligent-Tiering and S3 Glacier Deep Archive have become generally available, quickly boosting the number of storage classes in the oldest and most popular of all AWS services from three to six. In this article, I present the newcomers and their characteristics.

Amazon's Internet storage has always supported storage classes, which users choose when uploading an object and can also switch automatically later with lifecycle rules. The individual storage classes have different pricing models and availability levels, each of which optimally addresses a different usage profile. If you know the most common access patterns for your data stored in S3, you can therefore optimize costs by intelligently choosing the right storage class.
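As a sketch of such a lifecycle rule, the following dictionary uses the form that boto3's put_bucket_lifecycle_configuration() accepts; the rule ID and key prefix are hypothetical. It transitions objects under a prefix to Standard-IA after 30 days, archives them to Glacier after 90 days, and deletes them after a year:

```python
# Minimal sketch of an S3 lifecycle configuration in boto3 dictionary form.
# Rule ID and prefix are hypothetical examples.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-old-logs",              # hypothetical rule name
            "Filter": {"Prefix": "logs/"},         # hypothetical key prefix
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},  # after 30 days
                {"Days": 90, "StorageClass": "GLACIER"},      # after 90 days
            ],
            "Expiration": {"Days": 365},           # delete after one year
        }
    ]
}
```

Applied to a bucket (e.g., via `s3.put_bucket_lifecycle_configuration(Bucket="my-bucket", LifecycleConfiguration=lifecycle_config)`), S3 then performs the class transitions automatically.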

High-Availability SLAs

The individual storage classes differ in terms of availability and durability. Amazon S3 is basically a simple, key-based object store, and AWS generally replicates the data within a region across all availability zones (with the exception of the S3 One Zone-IA class). In the standard storage class, for example, Amazon S3 offers 99.99 percent availability and 99.999999999 percent durability, which means that of 10,000 stored objects, one is lost every 10 million years, on average. AWS even guarantees this under its Amazon S3 Service Level Agreement [1]. By the way, such a guarantee is by no means available for all AWS services.
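The arithmetic behind that durability figure is straightforward: eleven nines imply an expected annual loss rate of 10^-11 per object, which for 10,000 objects works out to roughly one lost object per 10 million years:

```python
# Back-of-the-envelope check of the eleven-nines durability claim.
annual_loss_rate = 1e-11          # 1 - 0.99999999999, per object per year
objects = 10_000
expected_losses_per_year = objects * annual_loss_rate   # 1e-07
years_per_lost_object = 1 / expected_losses_per_year    # ~10,000,000 years
```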

The new S3 Intelligent-Tiering storage class also offers a durability of 99.999999999 percent with an availability of 99.9 percent, just as in the S3 Standard-IA class. In the S3 One Zone-IA storage class, however, data is replicated only within a single availability zone, resulting in a reduced availability of 99.5 percent. AWS does not replicate data across regions automatically to further improve availability or consistency, because that would contradict its corporate philosophy with regard to data protection. However, users can configure automatic replication to another region in S3 if so desired.
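Such cross-region replication can be sketched as follows, in the dictionary form that boto3's put_bucket_replication() accepts; the IAM role ARN and destination bucket are hypothetical, and versioning must be enabled on both source and destination buckets:

```python
# Minimal sketch of an S3 cross-region replication configuration.
# Role ARN and destination bucket are hypothetical examples.
replication_config = {
    "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
    "Rules": [
        {
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},                                  # replicate all objects
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::my-backup-bucket"},
        }
    ],
}
```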

Comparison of Storage Classes

Although S3 made do with three storage classes – Standard, Standard-IA, and Glacier – for many years, three additional storage classes are now available: Intelligent-Tiering, One Zone-IA, and Glacier Deep Archive, all with a durability of 99.999999999 percent. The documentation also still lists the Standard with Reduced Redundancy (RRS) storage class, with a durability of 99.99 percent. AWS currently does not recommend the use of RRS – originally intended for non-critical, reproducible data such as thumbnails – because the Standard storage class is now cheaper anyway. As Table 1 shows, including RRS would mean that there are seven storage classes.

Table 1

Current S3 Storage Classes

Storage Class | Suitable for | Durability (%) | Availability (%) | Availability Zones
Standard | Data with frequent access | 99.999999999 | 99.99 | ≥3
Standard-IA | Long-term data with irregular access | 99.999999999 | 99.9 | ≥3
Intelligent-Tiering | Long-term data with changing or unknown access patterns | 99.999999999 | 99.9 | ≥3
One Zone-IA | Long-term, non-critical data with fairly infrequent access | 99.999999999 | 99.5 | 1
Glacier | Long-term archiving with recovery times between minutes and hours | 99.999999999 | 99.99 (after restore) | ≥3
Glacier Deep Archive | Data archiving for barely used data with a recovery time of 12 hours | 99.999999999 | 99.99 (after restore) | ≥3
RRS (no longer recommended) | Frequently retrieved, but non-critical data | 99.99 | 99.99 | ≥3

Amazon S3 Costs

Apart from the fact that prices for all AWS services generally vary between regions, S3 storage has four cost drivers: storage (storage prices), retrieval (request and retrieval prices), management (S3 storage management features), and data transfer, where moving data into the cloud costs nothing. In the US East regions, for example, S3 Standard storage class pricing (in early 2020) looks like this:

  • Storage price is $0.023/GB for the first 50TB.
  • Request price is $0.005/1,000 PUT, COPY, POST, or LIST requests and $0.0004/1,000 GET, SELECT, and all other requests. Data returned by S3 Select is charged at $0.0007/GB and data scanned at $0.002/GB. Lifecycle transition and retrieval requests are free, as are DELETE and CANCEL requests.
  • The price of S3 storage management depends on the functions included. For example, S3 object tagging costs $0.01/10,000 tags per month.
  • For outbound data transfer, AWS allows up to 1GB/month free of charge. The next 9.999TB/month is charged at $0.09/GB, the next 40TB/month at $0.085/GB, the next 100TB/month at $0.07/GB, and the next 150TB/month at $0.05/GB.
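To get a feel for how these drivers combine, the following sketch estimates a monthly S3 Standard bill from the storage and request prices quoted above (US East, early 2020, first 50TB tier); the example workload figures are hypothetical, and management and transfer charges are left out for simplicity:

```python
def s3_standard_monthly_cost(stored_gb, put_requests, get_requests):
    """Rough monthly cost in US East at early-2020 prices (first 50TB tier)."""
    storage = stored_gb * 0.023             # $/GB-month
    writes = put_requests / 1000 * 0.005    # PUT, COPY, POST, LIST
    reads = get_requests / 1000 * 0.0004    # GET, SELECT, and others
    return storage + writes + reads

# Hypothetical workload: 100GB stored, 50,000 writes, 1,000,000 reads per month
cost = s3_standard_monthly_cost(100, 50_000, 1_000_000)  # roughly $2.95
```

Note how, at this scale, the storage price dominates: request charges only become significant for workloads with very high request rates on small objects.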

A complete price overview can be found on the S3 product page [2].
