AWS S3 FAQ
What is Amazon S3?
Amazon S3 is storage for the internet. It's a simple storage service that offers software developers a highly scalable, reliable, and low-latency data storage infrastructure at very low cost.
What are the technical benefits of S3?
Scalability, reliability, speed, low-cost, and simplicity
What kind of data can I store?
Virtually any kind of data in any format
How much data can I store?
The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 TB. The largest object that can be uploaded in a single PUT is 5 GB. For objects larger than 100 MB, customers should consider using the Multipart Upload capability.
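As a minimal sketch of the multipart guidance above, using the boto3 Python SDK (the bucket and file names are hypothetical), boto3's managed transfer splits files above a configurable threshold into parts automatically:

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Split any upload over 100 MB into 64 MB parts, per the guidance above
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=64 * 1024 * 1024,
)
s3.upload_file("backup.tar", "my-example-bucket", "backups/backup.tar", Config=config)
```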
What storage classes does Amazon S3 offer?
S3 Standard: for general-purpose storage of frequently accessed data
S3 Infrequent Access: infrequent access for long-lived but less frequently accessed data
Glacier: long-term archive
S3 Reduced Redundancy Storage: enables customers to reduce their costs by storing noncritical, reproducible data at lower levels of redundancy than Amazon S3 Standard storage
How can I delete large numbers of objects?
Multi-Object Delete deletes large numbers of objects from S3; a single request can delete up to 1,000 keys. There is no charge for Multi-Object Delete.
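A minimal sketch with boto3, assuming a hypothetical bucket and keys:

```python
import boto3

s3 = boto3.client("s3")
s3.delete_objects(
    Bucket="my-example-bucket",
    Delete={
        "Objects": [
            {"Key": "logs/2017-01-01.log"},
            {"Key": "logs/2017-01-02.log"},
        ],
        "Quiet": True,  # omit per-key results from the response
    },
)
```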
What does Amazon do with my data in Amazon S3?
Amazon will store your data and track its associated usage for billing purposes. AWS will not otherwise access your data for any purpose outside of the S3 offering, except when required to do so by law.
How is Amazon S3 data organized?
Amazon S3 is a simple key-based object store. When you store data, you assign a unique object key that can later be used to retrieve the data. Keys can be any string, and can be constructed to mimic hierarchical attributes.
How do I interface with Amazon S3?
Amazon S3 provides a simple, standards-based REST web services interface that is designed to work with any Internet-development toolkit.
How reliable is Amazon S3?
S3 Standard is designed for 99.99% availability and S3-IA is designed for 99.9% availability. Both are backed by the S3 SLA.
What data consistency model does Amazon S3 employ?
Amazon S3 buckets in all Regions provide read-after-write consistency for PUTs of new objects and eventual consistency for overwrite PUTs and DELETEs.
What happens if traffic from my application suddenly spikes?
Amazon S3 was designed from the ground up to handle traffic for any Internet application. S3's massive scale enables us to spread load evenly, so that no individual application is affected by traffic spikes.
How can I increase the number of Amazon S3 buckets that I can provision?
By default, customers can provision up to 100 buckets per AWS account. However, you can increase your S3 bucket limit by visiting AWS Service Limits.
Where is my data stored?
You specify a region when you create your S3 bucket. Within that region, your objects are redundantly stored on multiple devices across multiple facilities.
How am I charged for using Versioning?
Normal S3 rates apply for every version of an object stored or requested. You are charged per GB stored, number of GET requests, number of PUT requests, data transfer in and data transfer out, and prices can vary based on region.
How secure is my data in S3?
By default, only the bucket and object owners originally have access to Amazon S3 resources they create. S3 provides access control mechanisms such as bucket policies and Access Control Lists (ACLs) to selectively grant permissions to users and groups of users. You can securely upload and download data to S3 via SSL endpoints using the HTTPS protocol. Server Side Encryption (SSE) and SSE with customer-provided keys (SSE-C) can be used to encrypt data at rest. You can also use your own encryption libraries to encrypt data before storing it in S3.
How can I control access to my data stored in S3?
IAM policies, bucket policies, ACLs, and query string authentication. IAM policies can be used to control access to S3 buckets or objects across an account. Bucket policies can define rules for specific S3 buckets. ACLs can grant specific permissions to buckets and objects. Query String Authentication allows customers to create a URL to an S3 object that is only valid for a limited time.
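Query string authentication is what boto3 exposes as a presigned URL; a minimal sketch, assuming a hypothetical bucket and key:

```python
import boto3

s3 = boto3.client("s3")

# Anyone holding this URL can GET the object until it expires
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-example-bucket", "Key": "reports/q1.pdf"},
    ExpiresIn=3600,  # valid for one hour
)
print(url)
```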
Does S3 support data access auditing?
Yes, customers can configure an S3 bucket to create access log records for all requests made against it.
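A minimal sketch of enabling access logging with boto3, assuming hypothetical source and target buckets (the target bucket must grant S3's log delivery group permission to write):

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_logging(
    Bucket="my-example-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "my-log-bucket",
            "TargetPrefix": "access-logs/",  # prefix for delivered log objects
        }
    },
)
```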
What options do I have for encrypting data stored on S3?
SSE-S3, SSE-C, SSE-KMS, or a client library.
SSE-S3: integrated solution with S3 where AWS handles management of keys
SSE-C: AWS will perform encryption/decryption using customer provided keys
SSE-KMS: AWS KMS manages encryption keys
Client Library: data is encrypted prior to being placed in S3, customer handles all encryption
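With boto3, the server-side options map to parameters on a PUT; a minimal sketch of SSE-S3 and SSE-KMS (the bucket, keys, and KMS alias are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# SSE-S3: S3 generates and manages the encryption keys
s3.put_object(
    Bucket="my-example-bucket", Key="doc.txt", Body=b"hello",
    ServerSideEncryption="AES256",
)

# SSE-KMS: encrypt under a KMS key you control (alias is hypothetical)
s3.put_object(
    Bucket="my-example-bucket", Key="doc-kms.txt", Body=b"hello",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/my-app-key",
)
```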
How does Amazon protect SSE encryption keys?
Every object is encrypted with a unique key. The object key itself is then encrypted by a separate master key. A new master key is issued at least monthly. Encrypted data, encryption keys and master keys are stored and secured on separate hosts for multiple layers of protection.
What is an Amazon VPC endpoint for Amazon S3?
An Amazon VPC Endpoint for Amazon S3 is a logical entity within a VPC that allows connectivity only to S3. The VPC Endpoint routes requests to S3 and routes responses back to the VPC. A VPC endpoint enables you to create a private connection between your VPC and S3 without requiring access over the Internet.
How durable is data in S3?
S3-Standard and S3-IA are designed to provide 99.999999999% durability of objects in a given year. If you store 10,000 objects with S3 you can (on average) expect to lose 1 object every 10,000,000 years. S3 is designed to sustain the concurrent loss of data in two facilities. S3 best practices for backup include secure access permissions, cross-region replication, versioning and a functioning, regularly tested backup.
What checksums does S3 employ to detect data corruption?
S3 uses a combination of Content-MD5 checksums and cyclic redundancy checks (CRCs) to detect data corruption. S3 repairs any corruption using redundant data.
What is versioning in S3?
Versioning allows you to preserve, retrieve, and restore every version of every object stored in an S3 bucket. Once you enable versioning for a bucket, S3 preserves existing objects anytime you perform a PUT, POST, COPY, or DELETE operation on them. By default, GET requests will retrieve the most recently written version. Older versions of an overwritten or deleted object can be retrieved by specifying a version in the request.
Why should I use versioning in S3?
S3 provides customers with a highly durable storage infrastructure. Versioning offers an additional level of protection by providing a means of recovery when customers accidentally overwrite or delete objects. This allows you to easily recover from unintended user actions and application failures. You can also use versioning for data retention and archiving.
How do I start using versioning?
Enable the versioning setting on your S3 bucket.
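A minimal sketch with boto3, assuming a hypothetical bucket:

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_versioning(
    Bucket="my-example-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)
```

Note that once versioning is enabled on a bucket, it can later be suspended but not fully disabled.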
Does versioning protect me from accidental deletion of my objects?
When a user performs a DELETE operation on an object, subsequent simple (un-versioned) requests will no longer retrieve the object. However, all versions of that object will continue to be preserved in your Amazon S3 bucket and can be retrieved or restored. Only the owner of an S3 bucket can permanently delete a version.
Can I setup a trash, recycle bin, or rollback window on my S3 objects to recover from deletes and overwrites?
You can use lifecycle rules along with versioning to implement a rollback window for objects. For example, with a versioning-enabled bucket you can set up a rule that archives all of your previous versions to the lower-cost Glacier storage class and deletes them after 100 days, giving you a 100-day rollback window while lowering your storage costs.
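A sketch of such a rule with boto3; the bucket name and day counts are illustrative (previous versions move to Glacier after 30 days and are permanently deleted after 100, giving roughly a 100-day window):

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "rollback-window",
            "Filter": {"Prefix": ""},  # apply to the whole bucket
            "Status": "Enabled",
            "NoncurrentVersionTransitions": [
                {"NoncurrentDays": 30, "StorageClass": "GLACIER"}
            ],
            "NoncurrentVersionExpiration": {"NoncurrentDays": 100},
        }]
    },
)
```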
Why would I choose to use Standard - IA?
Standard-IA is ideal for data that is accessed less frequently, but requires rapid access when needed. It is ideally suited for long-term file storage, older data from sync and share, backup data, and disaster recovery files.
What performance does Standard-IA offer?
The same level of performance as S3 Standard, with 99.999999999% durability and 99.9% availability.
How do I get my data into Standard-IA?
There are two ways to get data into Standard-IA from within S3. You can directly PUT into Standard-IA by specifying STANDARD_IA in the x-amz-storage-class header (of the HTTP request). You can also set lifecycle policies to transition objects from Standard to Standard-IA.
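In boto3 the header is set via the StorageClass parameter; a minimal sketch (the bucket and file names are hypothetical):

```python
import boto3

s3 = boto3.client("s3")

# boto3 sends the x-amz-storage-class: STANDARD_IA header for you
with open("2016-sync.zip", "rb") as f:
    s3.put_object(
        Bucket="my-example-bucket",
        Key="archive/2016-sync.zip",
        Body=f,
        StorageClass="STANDARD_IA",
    )
```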
How will my latency and throughput performance be impacted as a result of using Standard-IA?
You should expect the same latency and throughput performance as Amazon S3 Standard when using Standard-IA
Is there a minimum duration for Standard-IA?
Standard-IA is designed for long-lived, but infrequently accessed data that is retained for months or years. Data that is deleted from Standard-IA within 30 days will be charged for a full 30 days.
Is there a minimum object size for Standard-IA?
Standard-IA has a minimum object size of 128 KB. Objects smaller than 128 KB are charged as if they were 128 KB.
How can I store my data using the Amazon Glacier option?
You can use lifecycle rules to automatically archive sets of S3 objects to Glacier based on the lifetime of the data. Data can also be uploaded directly to Glacier using the Glacier REST APIs, the AWS SDKs, or AWS Import/Export.
Can I use Amazon Glacier APIs to access S3 objects that I've archived to Amazon Glacier?
No. Because Amazon S3 maintains the mapping between your user-defined object name and Amazon Glacier's system-defined identifier, Amazon S3 objects that are stored using the Amazon Glacier option are only accessible through the Amazon S3 APIs or the Amazon S3 Management Console.
How long will it take to retrieve my objects in Amazon Glacier?
When a retrieval job is requested, a temporary copy of the data is restored from Glacier into S3 RRS. Access time depends on the retrieval option you choose: Expedited (1-5 minutes), Standard (3-5 hours), or Bulk (5-12 hours).
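A minimal restore request with boto3, assuming a hypothetical archived object:

```python
import boto3

s3 = boto3.client("s3")
s3.restore_object(
    Bucket="my-example-bucket",
    Key="archive/old-report.pdf",
    RestoreRequest={
        "Days": 7,  # keep the temporary copy available for 7 days
        "GlacierJobParameters": {"Tier": "Expedited"},  # 1-5 minute retrieval
    },
)
```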
How am I charged for Glacier?
You are charged per GB stored and per lifecycle transition request. Objects stored in Glacier have a minimum storage duration of 90 days; if an object is deleted before 90 days, a pro-rated charge equal to the storage charge for the remaining days is incurred.
What are Amazon S3 event notifications?
Amazon S3 event notifications can be sent in response to actions in S3 such as PUTs, POSTs, COPYs, or DELETEs. Notification messages can be sent through Amazon SNS, Amazon SQS, or AWS Lambda.
What can I do with Amazon S3 event notifications?
Amazon S3 event notifications enable you to run workflows, send alerts, or perform other actions in response to changes in your objects stored in S3. You can use event notifications to set up triggers to perform actions including transcoding media files when they are uploaded, processing data files when they become available, and synchronizing Amazon S3 objects with other data stores.
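A sketch of wiring a bucket to an SQS queue with boto3; the queue ARN is hypothetical, and the queue's policy must allow S3 to send messages:

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_notification_configuration(
    Bucket="my-example-bucket",
    NotificationConfiguration={
        "QueueConfigurations": [{
            "QueueArn": "arn:aws:sqs:us-east-1:123456789012:my-queue",
            "Events": ["s3:ObjectCreated:*"],  # fire on every PUT, POST, and COPY
        }]
    },
)
```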
Can I host my static website on S3?
Yes, you can host your entire static website on S3 as an inexpensive, highly available hosting solution that scales automatically to meet traffic demands.
What kinds of websites should I host using S3 static website hosting?
S3 is ideal for websites that contain only static content, such as HTML files, images, videos, and client-side scripts like JavaScript. Websites that require server-side processing are not supported.
Can I use my own host name with my Amazon S3 hosted website?
Yes, you can map your domain name to your S3 bucket.
Does Amazon S3 support redirects?
Yes, S3 provides multiple ways to enable redirection of web content for your static websites. You can set rules on your bucket to enable automatic redirection. You can also configure a redirect on an individual S3 object.
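A sketch of both styles with boto3 (the bucket and keys are hypothetical): a bucket-level routing rule, then an object-level redirect.

```python
import boto3

s3 = boto3.client("s3")

# Bucket-level rule: rewrite the docs/ prefix to documents/
s3.put_bucket_website(
    Bucket="my-example-bucket",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "RoutingRules": [{
            "Condition": {"KeyPrefixEquals": "docs/"},
            "Redirect": {"ReplaceKeyPrefixWith": "documents/"},
        }],
    },
)

# Object-level redirect: website requests for this key are redirected
s3.put_object(
    Bucket="my-example-bucket",
    Key="old-page.html",
    WebsiteRedirectLocation="/new-page.html",
)
```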
What are S3 object tags?
S3 Object Tags are key-value pairs applied to S3 objects that can be created, updated, or deleted at any time during the lifetime of the object. With tags, you can create IAM policies, set up S3 lifecycle policies, and customize storage metrics. These object-level tags can then manage transitions between storage classes and expire objects in the background.
Why should I use Object Tags?
Object Tags allow you to control access to objects tagged with specific key-value pairs. They can also be used to label objects that belong to a specific project or business unit, which could be used in conjunction with lifecycle policies to manage transitions to the S3 Standard-IA and Glacier storage tiers.
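A minimal tagging sketch with boto3; the bucket, key, and tag values are hypothetical:

```python
import boto3

s3 = boto3.client("s3")
s3.put_object_tagging(
    Bucket="my-example-bucket",
    Key="reports/q1.pdf",
    Tagging={"TagSet": [
        {"Key": "project", "Value": "alpha"},
        {"Key": "classification", "Value": "internal"},
    ]},
)
```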
Will my Object Tags be replicated if I use Cross-Region Replication?
Object tags can be replicated across regions using Cross-Region Replication. If cross-region replication is already enabled, new permissions are required in order for tags to replicate.
What is S3 Analytics - Storage Class Analysis?
Storage Class Analysis allows you to analyze storage access patterns and transition the right data to the right storage class. This feature automatically identifies infrequent access patterns to help you transition storage to Standard-IA. You can configure a storage class analysis policy to monitor an entire bucket, a prefix, or an object tag. Storage class analysis also provides daily visualizations of your storage usage in the AWS Management Console that you can export to an S3 bucket to analyze using business intelligence tools, such as Amazon QuickSight.
What is S3 Inventory?
S3 Inventory provides a scheduled alternative to Amazon S3's synchronous List API. S3 Inventory provides a CSV flat-file output of your objects and their corresponding metadata on a daily or weekly basis for an S3 bucket or a shared prefix.
How do I get started with S3 CloudWatch Metrics?
You can use the AWS Management Console to enable the generation of 1-minute CloudWatch metrics for your S3 bucket, or configure filters for the metrics using a prefix or object tag. Alternatively, you can call the S3 PUT Bucket Metrics API to enable and configure publication of S3 storage metrics.
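A sketch of the API route with boto3, assuming a hypothetical bucket and a filter on a prefix:

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_metrics_configuration(
    Bucket="my-example-bucket",
    Id="photos-metrics",
    MetricsConfiguration={
        "Id": "photos-metrics",
        "Filter": {"Prefix": "photos/"},  # only report requests under this prefix
    },
)
```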
What alarms can I set on my storage metrics?
You can use CloudWatch to set thresholds on any of the storage metric counts, timers, or rates and fire an action when the threshold is breached. For example, you can set a threshold on the percentage of 4xx Error Responses.
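For instance, a sketch of an alarm on the 4xxErrors request metric with boto3; the SNS topic ARN and threshold are hypothetical, and the metrics configuration from the previous answer is assumed:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_alarm(
    AlarmName="s3-4xx-errors",
    Namespace="AWS/S3",
    MetricName="4xxErrors",
    Dimensions=[
        {"Name": "BucketName", "Value": "my-example-bucket"},
        {"Name": "FilterId", "Value": "photos-metrics"},
    ],
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=100,  # alarm if more than 100 4xx responses in a minute
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```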
What is Lifecycle Management?
S3 Lifecycle management provides the ability to define the lifecycle of your objects with a predefined policy and reduce your cost of storage. You can set a lifecycle transition policy to automatically migrate Amazon S3 objects to Standard-IA and/or Glacier based on the age of the data. You can also set lifecycle expiration policies to automatically remove objects based on the age of the object, and a policy for multipart upload expiration, which expires incomplete multipart uploads based on the age of the upload.
Why would I use a lifecycle policy to expire incomplete multipart uploads?
The lifecycle policy that expires incomplete multipart uploads allows you to save on costs by limiting the time incomplete multipart uploads are stored. For example, if your application uploads several multipart object parts but never commits them, you will still be charged for that storage. This policy can lower your S3 storage bill by automatically removing incomplete multipart uploads and the associated storage after a predefined number of days.
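A sketch of such a rule with boto3; the bucket name and the seven-day window are illustrative:

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "abort-stale-multipart-uploads",
            "Filter": {"Prefix": ""},  # apply to the whole bucket
            "Status": "Enabled",
            # Abort and clean up any upload not completed within 7 days
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        }]
    },
)
```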
What is Amazon S3 Cross-Region Replication (CRR)?
CRR is an Amazon S3 feature that automatically replicates data across AWS regions. With CRR, every object uploaded to an S3 bucket is automatically replicated to a destination bucket in a different AWS region that you choose. You can use CRR to provide lower-latency data access in different geographic regions. CRR can also help if you have a compliance requirement to store copies of data hundreds of miles apart.
How do I enable CRR?
CRR is a bucket-level configuration. You enable a CRR configuration on your source bucket by specifying a destination bucket in a different region for replication. Versioning must be turned on for both the source and destination buckets to enable CRR.
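A sketch of a replication configuration with boto3; the bucket names and IAM role are hypothetical, and both buckets must already have versioning enabled:

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_replication(
    Bucket="my-source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-crr-role",
        "Rules": [{
            "Prefix": "",  # an empty prefix replicates every new object
            "Status": "Enabled",
            "Destination": {"Bucket": "arn:aws:s3:::my-destination-bucket"},
        }],
    },
)
```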
What does CRR replicate to the target bucket?
CRR replicates every object-level upload that you make directly to your source bucket. The metadata and ACLs associated with the object are also part of the replication. Any change to the underlying data, metadata, or ACLs on the object triggers a new replication to the destination bucket. You can choose to replicate all objects uploaded to a source bucket or just a subset of objects by specifying prefixes. Data that existed in the bucket before CRR was enabled is not replicated; you must use COPY to copy existing data to the destination bucket.
Can I use CRR with lifecycle rules?
Yes, you can configure separate lifecycle rules on the source and destination buckets.
What is transfer acceleration?
Amazon S3 transfer acceleration enables fast, easy, and secure transfers of files over long distances between your client and your Amazon S3 bucket. Transfer Acceleration leverages Amazon CloudFront's globally distributed AWS Edge Locations. As data arrives at an AWS Edge Location, data is routed to your Amazon S3 bucket over an optimized network path.
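A sketch of enabling and using acceleration with boto3; the bucket and file names are hypothetical:

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# One-time: turn acceleration on for the bucket
s3.put_bucket_accelerate_configuration(
    Bucket="my-example-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Transfers through this client use the accelerate endpoint
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("big-file.bin", "my-example-bucket", "uploads/big-file.bin")
```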
Who should use transfer acceleration?
Transfer Acceleration is designed to optimize transfer speeds from across the world into S3 buckets. If you are uploading to a centralized bucket from geographically dispersed locations, or if you regularly transfer GBs or TBs of data across the continents, you may save hours or days of data transfer time.
How should I choose between Transfer Acceleration and Amazon CloudFront's PUT/POST?
Transfer Acceleration optimizes the TCP protocol and adds additional intelligence between the client and the S3 bucket, making Transfer Acceleration a better choice if higher throughput is desired. If you have objects that are smaller than 1 GB or if the data set is less than 1 GB in size, you should consider using Amazon CloudFront's PUT/POST commands for optimal performance.
Can Transfer Acceleration complement 3rd party integrated software?
Yes. Software packages that connect directly into Amazon S3 can take advantage of Transfer Acceleration when they send their jobs to Amazon S3.