| Storage/Database service | Replication factor | Justification | Source | Status |
| --- | --- | --- | --- | --- |
| S3 | 3 | "Designed to sustain the concurrent loss of data in two facilities" | https://docs.aws.amazon.com/whitepapers/latest/aws-storage-services-overview/durability-and-availability.html | Implemented |
| S3 Standard (STANDARD), S3 Standard-IA (SIA), S3 Intelligent-Tiering (INT), S3 Glacier (GLACIER), S3 Glacier Deep Archive (GDA) | At least 3 | Amazon S3 Standard, S3 Standard-IA, and S3 Glacier storage classes redundantly store your objects on multiple devices across a minimum of three Availability Zones (AZs) in an Amazon S3 Region before returning SUCCESS. | https://aws.amazon.com/s3/faqs/ | Implemented |
| S3 One Zone (ZIA) | 2 | The S3 One Zone-IA storage class stores data redundantly across multiple devices within a single AZ. | | Implemented |
| S3 Reduced Redundancy Storage (RRS) | 2 | "Designed to sustain the loss of data in a single facility." | https://aws.amazon.com/s3/reduced-redundancy/ | Implemented |
| Amazon EC2 EBS | 2 | "When you create an EBS volume, it is automatically replicated within its Availability Zone" | https://docs.aws.amazon.com/whitepapers/latest/aws-storage-services-overview/durability-and-availability-3.html | Implemented |
| Amazon EFS | 2 | "By default, every Amazon EFS file system object (i.e. directory, file, and link) is redundantly stored across multiple AZs for file systems using Standard storage classes". Amazon EFS file system backup data created and managed by AWS Backup is replicated to 3 AZs and is designed for 99.999999999% (11 9’s) durability. | https://aws.amazon.com/efs/faq/ | Implemented |
| EC2 Instance Storage | 1 | Unlike Amazon EBS volume data, data on instance store volumes persists only during the lifetime of its associated instance. If an instance reboots (intentionally or unintentionally), data in the instance store persists. However, data in the instance store is lost if the underlying disk drive fails, or if the instance stops, hibernates, or terminates. | https://docs.aws.amazon.com/whitepapers/latest/aws-storage-services-overview/durability-and-availability-4.html | Not implemented - replication factor of 1 |
| Amazon RDS Aurora | 6 | "When data is written to the primary DB instance, Aurora synchronously replicates the data across Availability Zones to six storage nodes associated with your cluster volume." | https://www.youtube.com/watch?v=FKz72GugM2Q, https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.AuroraHighAvailability.html | Implemented |
| Amazon RDS | 2 | "In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone." | https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html | Implemented |
| Amazon DocumentDB | At least 2 | "When you create an Amazon DocumentDB cluster, depending upon the number of Availability Zones in the subnet group (there must be at least two), Amazon DocumentDB provisions instances across the Availability Zones." | https://docs.aws.amazon.com/documentdb/latest/developerguide/replication.html | Implemented |
| Amazon DynamoDB | At least 2 | "All of your data is stored on solid-state disks (SSDs) and is automatically replicated across multiple Availability Zones in an AWS Region, providing built-in high availability and data durability." | https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html | Implemented |
| Amazon SimpleDB | At least 2 | "High Availability (automatic geo-redundant replication and failover): Amazon SimpleDB achieves high availability by automatically creating multiple copies of your data and managing failover to an available copy in the event one copy becomes unavailable." | https://aws.amazon.com/simpledb/ | Implemented |
| Amazon ECR | 3 | "Amazon ECR stores your container images and artifacts in Amazon S3. Amazon S3 is designed for 99.999999999% (11 9’s) of data durability because it automatically creates and stores copies of all S3 objects across multiple systems. This means your data is available when needed and protected against failures, errors, and threats. ECR can also automatically replicate your data to multiple AWS Regions for your high availability applications." | | Implemented |
| AWS Storage Gateway | 3 | AWS Storage Gateway durably stores your on-premises application data by uploading it to Amazon S3 or Amazon S3 Glacier. Both of these AWS services store data in multiple facilities and on multiple devices within each facility, and are designed to provide an average annual durability of 99.999999999 percent (11 nines). | https://docs.aws.amazon.com/whitepapers/latest/aws-storage-services-overview/durability-and-availability-5.html | Not implemented yet - need example data from Cost and Usage Reports |
| AWS Snowball | 1 | "Once the data is imported to AWS, the durability and availability characteristics of the target storage applies." So while the data is on the Snowball device, the replication factor is just 1. | https://docs.aws.amazon.com/whitepapers/latest/aws-storage-services-overview/durability-and-availability-6.html | Not implemented - replication factor of 1 |
| CloudFront (with origin failover set up) | 2 | To set up origin failover, you must have a distribution with at least two origins. Next, you create an origin group for your distribution that includes two origins, setting one as the primary. Finally, you create or update a cache behavior to use the origin group. | CloudFront Origin Failover | Not implemented - we ignore CloudFront rows from the Cost and Usage Report because it's either: a) networking to/from the internet, or b) unknown usage types (e.g. Lambda at the edge) |
| Amazon Keyspaces | 3 | "Amazon Keyspaces (for Apache Cassandra) stores three copies of your data in multiple Availability Zones for durability and high availability." | https://docs.aws.amazon.com/keyspaces/latest/devguide/how-it-works.html | Not implemented yet - need example data from Cost and Usage Reports |
| Amazon Neptune | 6 | "Neptune uses quorum writes that make six copies of your data across disk-volumes in three Availability Zones" | https://docs.aws.amazon.com/neptune/latest/userguide/feature-overview-storage.html | Not implemented yet - need example data from Cost and Usage Reports |
| Amazon Timestream | At least 2 | "Timestream ensures durability of your data by automatically replicating your memory and magnetic store data across different Availability Zones within a single AWS Region." | https://docs.aws.amazon.com/timestream/latest/developerguide/storage.html | Not implemented yet - need example data from Cost and Usage Reports |
| Amazon Redshift | 1 | "Amazon Redshift will automatically detect and replace a failed node in your data warehouse cluster." "Currently, Amazon Redshift only supports Single-AZ deployments. You can run data warehouse clusters in multiple AZ's by loading data into two Amazon Redshift data warehouse clusters in separate AZs from the same set of Amazon S3 input files." | https://aws.amazon.com/redshift/faqs/ | Not implemented - replication factor of 1 |
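As a reading aid, the sketch below shows one way the factors above could be applied when estimating the physical storage behind a usage line item: look up the service's replication factor and multiply the billed storage volume by it. This is a minimal, hypothetical TypeScript sketch, not the project's actual implementation; the map name `REPLICATION_FACTORS`, the function `estimateReplicatedStorageGb`, and the service keys are assumptions made for illustration, while the factor values are taken from the table.

```typescript
// Hypothetical lookup of replication factors per service (values from the table above).
const REPLICATION_FACTORS: Record<string, number> = {
  AmazonS3: 3,               // S3 Standard / Standard-IA / Intelligent-Tiering / Glacier: at least 3 AZs
  AmazonS3OneZone: 2,        // single AZ, multiple devices
  AmazonEBS: 2,              // replicated within its Availability Zone
  AmazonEFS: 2,              // redundantly stored across multiple AZs
  AmazonRDSAurora: 6,        // six storage nodes across three AZs
  AmazonRDS: 2,              // Multi-AZ synchronous standby replica
  AmazonDocumentDB: 2,       // provisioned across at least two AZs
  AmazonDynamoDB: 2,         // replicated across multiple AZs
  AmazonECR: 3,              // backed by Amazon S3
  AmazonRedshift: 1,         // Single-AZ deployments only
  AmazonEC2InstanceStore: 1, // data lives only on the instance's local disks
}

// Multiply the billed storage volume by the service's replication factor,
// falling back to 1 when the service is not in the lookup.
function estimateReplicatedStorageGb(service: string, usageGb: number): number {
  const factor = REPLICATION_FACTORS[service] ?? 1
  return usageGb * factor
}

// Example: 500 GB of Aurora cluster storage is physically stored six times.
console.log(estimateReplicatedStorageGb('AmazonRDSAurora', 500)) // 3000
```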