AWS Certified Solutions Architect - Associate Practice Questions

Terms in this set (851)

I definitely like B and C the best...

A. Using as an endpoint to collect thousands of data points per hour from a distributed fleet of sensors

This is a far more applicable scenario for a Kinesis stream. Have the sensors send data into the stream, then process it out of the stream (e.g. with a Lambda function to upload to DynamoDB for further analysis, or into CloudWatch if you just want to plot the sensor data as a time series).
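To make that concrete, here is a minimal boto3 sketch of a sensor pushing readings into a stream; the stream name and payload shape are made up for illustration:

```python
import json
import time

import boto3

kinesis = boto3.client("kinesis")

def publish_reading(sensor_id: str, value: float) -> None:
    """Push one sensor data point into a Kinesis stream.

    The stream name is hypothetical; using the sensor id as the
    partition key spreads sensors across shards so ingestion scales.
    """
    kinesis.put_record(
        StreamName="sensor-telemetry",  # assumed stream name
        Data=json.dumps({
            "sensor_id": sensor_id,
            "value": value,
            "ts": int(time.time()),
        }).encode("utf-8"),
        PartitionKey=sensor_id,
    )
```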

B. Managing a multi-step and multi-decision checkout process of an e-commerce website

Ideal scenario for SWF. Track the progress of the checkout process as it proceeds through the multiple steps.

C. Orchestrating the execution of distributed and auditable business processes

Also good for SWF. The key words in the question are "process" and "distributed". If you've got multiple components involved in the process, and you need to keep them all apprised of the current state/stage of the process, SWF can help.

D. Using as an SNS (Simple Notification Service) endpoint to trigger execution of video transcoding jobs

This is a potential scenario for Lambda, which can take an SNS notification as a triggering event. Lambda kicks off the transcoding job (or drops the piece of work into an SQS queue that workers pull from to kick off the transcoding job).
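A minimal sketch of that pattern, assuming a hypothetical queue URL and job payload (SNS delivers the message under event["Records"][n]["Sns"]["Message"]):

```python
import json

import boto3

sqs = boto3.client("sqs")
# Hypothetical queue that transcoding workers poll.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/transcode-jobs"

def handler(event, context):
    """Lambda entry point invoked by an SNS notification.

    Each SNS record carries the job details; we drop a work item
    into an SQS queue so workers can pick it up.
    """
    for record in event["Records"]:
        job = json.loads(record["Sns"]["Message"])
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(job))
```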

E. Using as a distributed session store for your web application

Not applicable for SWF at all. As for how you might want to do this, the key word here is "distributed". If you wanted to store session state data for a web session on a single web server, just throw it into scratch space on the instance (e.g. an ephemeral/instance-store drive mounted to the instance). But this is "distributed", meaning multiple web instances are in play. If one instance fails, you want session state to still be maintained when the user's traffic traverses a different web server. (It wouldn't be acceptable for them to have two items in their shopping cart, be ready to check out, have the instance they were on fail, their traffic go to another web instance, and their shopping cart suddenly show up as empty.) So you save their session state off to an external session store. If the session state only needs to be maintained for, say, 24 hours, ElastiCache is a good solution. If the session state needs to be maintained for a long period of time, store it in DynamoDB. A sketch of the ElastiCache approach follows.
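A minimal sketch using redis-py against a hypothetical ElastiCache Redis endpoint, with the 24-hour lifetime handled by a TTL:

```python
import json

import redis  # redis-py, talking to an ElastiCache Redis endpoint

# The endpoint is hypothetical; ElastiCache gives you a hostname like this.
r = redis.Redis(host="sessions.abc123.use1.cache.amazonaws.com", port=6379)

SESSION_TTL_SECONDS = 24 * 60 * 60  # expire state after 24 hours

def save_session(session_id: str, state: dict) -> None:
    # Any web server in the fleet can read this back, so a failed
    # instance no longer loses the user's cart.
    r.set(f"session:{session_id}", json.dumps(state), ex=SESSION_TTL_SECONDS)

def load_session(session_id: str) -> dict:
    raw = r.get(f"session:{session_id}")
    return json.loads(raw) if raw else {}
```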
D is the only possible answer

A. SQS guarantees the order of the messages.

Not true; SQS does not guarantee the order of the messages at all. If your app requires messages to be processed in a certain order, make sure the messages in your SQS queue carry a sequence number, as sketched below.
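A minimal sketch of the sequence-number idea, assuming a hypothetical queue URL (consumers would sort on, or gap-check against, the attribute):

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # hypothetical

def send_in_sequence(bodies):
    """Stamp each message with a sequence number so consumers can
    re-order (or detect gaps in) out-of-order deliveries."""
    for seq, body in enumerate(bodies):
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=body,
            MessageAttributes={
                "SequenceNumber": {"DataType": "Number", "StringValue": str(seq)}
            },
        )
```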

B. SQS synchronously provides transcoding output.

Transcoding output would mean a piece of media (e.g. audio/video) that needs to be stored somewhere. Since media files are usually large binary data, this would probably go into S3 (and possibly metadata about the media file into DynamoDB, such as the S3 location, user/job that generated it, date/time it was transcoded, etc.). While SQS messages can accept binary data, you probably wouldn't want to store an output media file as an SQS message because the maximum message size is 256KB, which would severely limit how large your transcoding output file could be. Also, the maximum retention time in an SQS queue is 14 days. In the unlikely case that you were willing to accept those limitations, you'd still be limited to a maximum of 120,000 in-flight messages, which would severely limit the amount of transcoding outputs you could store across those 14 days. This scenario just isn't a good fit for an SQS queue. Drop your transcoding output files into S3 instead, as in the sketch below.
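A sketch of that S3-plus-metadata pattern; the bucket, table, and key layout are all hypothetical:

```python
import time

import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("transcode-jobs")  # hypothetical table

def store_output(job_id: str, local_path: str) -> None:
    """Put the (potentially large) media file in S3 and record
    where it lives, plus job metadata, in DynamoDB."""
    key = f"outputs/{job_id}.mp4"
    s3.upload_file(local_path, "transcode-output-bucket", key)  # assumed bucket
    table.put_item(Item={
        "job_id": job_id,
        "s3_location": f"s3://transcode-output-bucket/{key}",
        "transcoded_at": int(time.time()),
    })
```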

C. SQS checks the health of the worker instances.

SQS does not check the health of anything. If you've got a fleet of worker instances whose health you want to monitor, you'd probably want them in an Auto Scaling group with a health check on the ASG to replace failed worker instances (see the sketch below).
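For illustration, a hedged sketch of creating such a group with boto3; the group name, launch template, and subnets are all hypothetical:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# With HealthCheckType="EC2", instances that fail EC2 status checks
# are terminated and replaced automatically.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="encoding-workers",
    LaunchTemplate={"LaunchTemplateName": "encoding-worker", "Version": "$Latest"},
    MinSize=2,
    MaxSize=10,
    VPCZoneIdentifier="subnet-0abc1234,subnet-0def5678",  # assumed subnets
    HealthCheckType="EC2",
    HealthCheckGracePeriod=300,  # give new workers time to boot
)
```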

D. SQS helps to facilitate horizontal scaling of encoding tasks.

Yes, this is a great scenario for SQS. "Horizontal scaling" means you have multiple instances involved in the workload (encoding tasks in this case). You can drop messages indicating an encoding job needs to be performed into an SQS queue, immediately making the job notification message accessible to any number of encoding worker instances.
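A minimal sketch of such a worker, assuming a hypothetical queue URL and job runner; long polling plus delete-after-success is the usual shape:

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/encode-jobs"  # hypothetical

def worker_loop():
    """Run on every encoding instance; adding instances scales
    throughput because each message is delivered to one worker."""
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=1,
            WaitTimeSeconds=20,  # long polling avoids busy-waiting
        )
        for msg in resp.get("Messages", []):
            run_encoding_job(msg["Body"])  # assumed job runner
            sqs.delete_message(            # ack only after success
                QueueUrl=QUEUE_URL,
                ReceiptHandle=msg["ReceiptHandle"],
            )

def run_encoding_job(body: str) -> None:
    ...  # placeholder for the actual transcode step
```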
D is right.
From https://acloud.guru, http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-creating-snapshot.html
A. Plausible; not fast, but not the slowest.
B. The slowest, but highest confidence factor.
C. Fast, but 'Create Image' has special meaning and may not be what you are looking for.
D. Fast, and 'start EBS snapshot' sounds right.
E. Starts fast, but waiting for the snapshot to finish could take hours, so not correct in my opinion.

Answer is B

Keywords are 'consistent backup' and 'relational database'

Making a consistent backup of a relational database (MSSQL, Oracle, MySQL),
-> with the database engine still running <-
can only be done using a "managed process" such as a backup agent, database tools or (SQL) statements executed by the database engine.

A, C, D and E make a backup of the database file (system) with a running database engine.
Even if you can suspend disk I/O, this does not force the database engine to flush its (mostly HUGE) caches.
Even if you can suspend (or halt) disk I/O, the database engine will complain (and probably crash).
Even if you can suspend disk I/O and the database engine accepts this, the database files themselves will still be marked "open", which will result in (automatic or not) database repairs when mounted/attached from the backup.

Stopping the database instance will flush (respective) caches and properly close database files.

As no database tooling is mentioned, the only answer which stops the database engine is B, which stops the EC2 instance, and thereby the database engine.

B is the only answer which results in a consistent backup of a relational database (or at least for MSSQL, Oracle, MySQL). A sketch of the stop-snapshot-restart sequence follows.
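For illustration, a boto3 sketch of that sequence; the instance and volume IDs are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical DB instance
VOLUME_ID = "vol-0123456789abcdef0"  # EBS volume holding the database files

# Stop the instance so the engine flushes caches and closes its files...
ec2.stop_instances(InstanceIds=[INSTANCE_ID])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])

# ...then snapshot the quiesced volume and restart. The snapshot keeps
# running in the background, but the data is consistent as of the moment
# create_snapshot is issued, so there's no need to wait for it to finish.
ec2.create_snapshot(VolumeId=VOLUME_ID, Description="consistent DB backup")
ec2.start_instances(InstanceIds=[INSTANCE_ID])
```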
A 3-tier e-commerce web application is currently deployed on-premises and will be migrated to AWS for greater scalability and elasticity. The web server currently shares read-only data using a network distributed file system. The app server tier uses a clustering mechanism for discovery and shared session state that depends on IP multicast. The database tier uses shared-storage clustering to provide database failover capability, and uses several read slaves for scaling. Data on all servers and the distributed file system directory is backed up weekly to off-site tapes.
Which AWS storage and database architecture meets the requirements of the application?

A.Web servers: store read-only data in S3, and copy from S3 to the root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with Multi-AZ deployment and one or more read replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.

B.Web servers: store read-only data in S3, and copy from S3 to the root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with Multi-AZ deployment and one or more read replicas. Backup: web servers, app servers, and database backed up weekly to Glacier using snapshots.

C.Web servers: store read-only data in S3, and copy from S3 to the root volume at boot time. App servers: share state using a combination of DynamoDB and IP unicast. Database: use RDS with Multi-AZ deployment. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.

D.Web servers: store read-only data in an EC2 NFS server, mount to each web server at boot time. App servers: share state using a combination of DynamoDB and IP multicast. Database: use RDS with Multi-AZ deployment and one or more read replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.
Your company has HQ in Tokyo and branch offices all over the world, and is using logistics software with a multi-regional deployment on AWS in Japan, Europe and US. The logistics software has a 3-tier architecture and currently uses MySQL 5.6 for data persistence. Each region has deployed its own database. In the HQ region you run an hourly batch process reading data from every region to compute cross-regional reports that are sent by email to all offices. This batch process must be completed as fast as possible to quickly optimize logistics. How do you build the database architecture in order to meet the requirements?

A.For each regional deployment, use RDS MySQL with a master in the region and a read replica in the HQ region.

B.For each regional deployment, use MySQL on EC2 with a master in the region and send hourly EBS
snapshots to the HQ region

C.For each regional deployment, use RDS MySQL with a master in the region and send hourly RDS snapshots
to the HQ region

D.For each regional deployment, use MySQL on EC2 with a master in the region and use S3 to copy data files
hourly to the HQ region

E.Use Direct Connect to connect all regional MySQL deployments to the HQ region and reduce network
latency for the batch process
You have launched an EC2 instance with four (4) 500 GB EBS Provisioned IOPS volumes attached. The EC2 instance is EBS-Optimized and supports 500 Mbps throughput between EC2 and EBS. The four EBS volumes are configured as a single RAID 0 device, and each Provisioned IOPS volume is provisioned with 4,000 IOPS (4,000 16KB reads or writes) for a total of 16,000 random IOPS on the instance. The EC2 instance initially delivers the expected 16,000 IOPS random read and write performance. Sometime later, in order to increase the total random I/O performance of the instance, you add an additional two 500 GB EBS Provisioned IOPS volumes to the RAID. Each volume is provisioned to 4,000 IOPS like the original four, for a total of 24,000 IOPS on the EC2 instance. Monitoring shows that the EC2 instance CPU utilization increased from 50% to 70%, but the total random IOPS measured at the instance level does not increase at all.
What is the problem and a valid solution?

A.Larger storage volumes support higher Provisioned IOPS rates: increase the provisioned volume storage of
each of the 6 EBS volumes to 1TB.

B.The EBS-Optimized throughput limits the total IOPS that can be utilized; use an EBS-Optimized instance that provides larger throughput.

C.Small block sizes cause performance degradation, limiting the I/O throughput; configure the instance device driver and file system to use 64KB blocks to increase throughput.

D.RAID 0 only scales linearly to about 4 devices; use RAID 0 with 4 EBS Provisioned IOPS volumes, but increase each Provisioned IOPS EBS volume to 6,000 IOPS.

E.The standard EBS instance root volume limits the total IOPS rate; change the instance root volume to also be a 500GB 4,000 Provisioned IOPS volume.
You have been asked to design the storage layer for an application. The application requires disk performance of at least 100,000 IOPS. In addition, the storage layer must be able to survive the loss of an individual disk, EC2 instance, or Availability Zone without any data loss. The volume you provide must have a capacity of at least 3 TB. Which of the following designs will meet these objectives?

A.Instantiate an i2.8xlarge instance in us-east-1a. Create a RAID 0 volume using the four 800GB SSD ephemeral disks provided with the instance. Provision 3x1TB EBS volumes, attach them to the instance, and configure them as a second RAID 0 volume. Configure synchronous, block-level replication from the ephemeral-backed volume to the EBS-backed volume.

B.Instantiate an i2.8xlarge instance in us-east-1a. Create a RAID 0 volume using the four 800GB SSD ephemeral disks provided with the instance. Configure synchronous block-level replication to an identically configured instance in us-east-1b.

C.Instantiate a c3.8xlarge instance in us-east-1. Provision an AWS Storage Gateway and configure it for 3 TB of storage and 100,000 IOPS. Attach the volume to the instance.

D.Instantiate a c3.8xlarge instance in us-east-1. Provision 4x1TB EBS volumes, attach them to the instance, and configure them as a single RAID 5 volume. Ensure that EBS snapshots are performed every 15 minutes.

E.Instantiate a c3.8xlarge instance in us-east-1. Provision 3x1TB EBS volumes, attach them to the instance, and configure them as a single RAID 0 volume. Ensure that EBS snapshots are performed every 15 minutes.
Your company runs a customer-facing event registration site. This site is built with a 3-tier architecture with web and application tier servers and a MySQL database. The application requires 6 web tier servers and 6 application tier servers for normal operation, but can run on a minimum of 65% server capacity and a single MySQL database. When deploying this application in a region with three availability zones (AZs), which architecture provides high availability?

A.A web tier deployed across 2 AZs with 3 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), and an application tier deployed across 2 AZs with 3 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and one RDS (Relational Database Service) instance deployed with read replicas in the other AZ.

B.A web tier deployed across 3 AZs with 2 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), and an application tier deployed across 3 AZs with 2 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and one RDS (Relational Database Service) instance deployed with read replicas in the two other AZs.

C.A web tier deployed across 2 AZs with 3 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), and an application tier deployed across 2 AZs with 3 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and a Multi-AZ RDS (Relational Database Service) deployment.

D.A web tier deployed across 3 AZs with 2 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling Group behind an ELB (elastic load balancer), and an application tier deployed across 3 AZs with 2 EC2 instances in each AZ inside an Auto Scaling Group behind an ELB, and a Multi-AZ RDS (Relational Database Service) deployment.
Your company currently has a 2-tier web application running in an on-premises data center. You have experienced several infrastructure failures in the past two months resulting in significant financial losses. Your CIO is strongly in favor of moving the application to AWS. While working on achieving buy-in from the other company executives, he asks you to develop a disaster recovery plan to help improve business continuity in the short term. He specifies a target Recovery Time Objective (RTO) of 4 hours and a Recovery Point Objective (RPO) of 1 hour or less. He also asks you to implement the solution within 2 weeks. Your database is 200GB in size and you have a 20Mbps Internet connection. How would you do this while minimizing costs?

A.Create an EBS-backed private AMI which includes a fresh install of your application. Set up a script in your data center to back up the local database every 1 hour and to encrypt and copy the resulting file to an S3 bucket using multi-part upload.

B.Install your application on a compute-optimized EC2 instance capable of supporting the application's average load. Synchronously replicate transactions from your on-premises database to a database instance in AWS across a secure Direct Connect connection.

C.Deploy your application on EC2 instances within an Auto Scaling group across multiple availability zones. Asynchronously replicate transactions from your on-premises database to a database instance in AWS across a secure VPN connection.

D.Create an EBS-backed private AMI that includes a fresh install of your application. Develop a CloudFormation template which includes your AMI and the required EC2, Auto Scaling, and ELB resources to support deploying the application across multiple Availability Zones. Asynchronously replicate transactions from your on-premises database to a database instance in AWS across a secure VPN connection.
Your startup wants to implement an order fulfillment process for selling a personalized gadget that needs an average of 3-4 days to produce, with some orders taking up to 6 months. You expect 10 orders per day on your first day, 1,000 orders per day after 6 months, and 10,000 orders after 12 months.
Orders coming in are checked for consistency, then dispatched to your manufacturing plant for production, quality control, packaging, shipment, and payment processing. If the product does not meet the quality standards at any stage of the process, employees may force the process to repeat a step. Customers are notified via email about order status and any critical issues with their orders, such as payment failure.
Your base architecture includes AWS Elastic Beanstalk for your website with an RDS MySQL instance for customer data and orders.
How can you implement the order fulfillment process while making sure that the emails are delivered reliably?

A.Add a business process management application to your Elastic Beanstalk app servers and re-use the RDS database for tracking order status. Use one of the Elastic Beanstalk instances to send emails to customers.

B.Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use the decider instance to send emails to customers.

C.Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group with min/max=1. Use SES to send emails to customers.

D.Use an SQS queue to manage all process tasks. Use an Auto Scaling group of EC2 instances that poll the tasks and execute them. Use SES to send emails to customers.
Your company hosts a social media site supporting users in multiple countries. You have been asked to provide a highly available design for the application that leverages multiple regions for the most recently accessed content and latency-sensitive portions of the web site. The most latency-sensitive component of the application involves reading user preferences to support web site personalization and ad selection.
In addition to running your application in multiple regions, which option will support this application's requirements?

A.Serve user content from S3 and CloudFront, and use Route53 latency-based routing between ELBs in each region. Retrieve user preferences from a local DynamoDB table in each region and leverage SQS to capture changes to user preferences, with SQS workers propagating updates to each table.

B.Use the S3 Copy API to copy recently accessed content to multiple regions and serve user content from S3 and CloudFront with dynamic content, with an ELB in each region. Retrieve user preferences from an ElastiCache cluster in each region and leverage SNS notifications to propagate user preference changes to a worker node in each region.

C.Use the S3 Copy API to copy recently accessed content to multiple regions and serve user content from S3, CloudFront, and Route53 latency-based routing between ELBs in each region. Retrieve user preferences from a DynamoDB table and leverage SQS to capture changes to user preferences, with SQS workers propagating DynamoDB updates.

D.Serve user content from S3 and CloudFront with dynamic content, and an ELB in each region. Retrieve user preferences from an ElastiCache cluster in each region and leverage Simple Workflow (SWF) to manage the propagation of user preferences from a centralized DB to each ElastiCache cluster.
You have deployed a three-tier web application in a VPC with a CIDR block of 10.0.0.0/28. You initially deploy two web servers, two application servers, two database servers and one NAT instance for a total of seven EC2 instances. The web, application and database servers are deployed across two availability zones (AZs). You also deploy an ELB in front of the two web servers, and use Route53 for DNS. Web traffic gradually increases in the first few days following the deployment, so you attempt to double the number of instances in each tier of the application to handle the new load. Unfortunately some of these new instances fail to launch.
Which of the following could be the root cause? (Choose 2 answers)

A.The Internet Gateway (IGW) of your VPC has scaled up, adding more instances to handle the traffic spike, reducing the number of available private IP addresses for new instance launches.

B.AWS reserves one IP address In each subnet's CIDR block for Route53 so you do not have enough addresses
left to launch all of the new EC2 instances.

C.AWS reserves the first and the last private IP address in each subnet's CIDR block so you do not have enough
addresses left to launch all of the new EC2 instances.

D.The ELB has scaled up, adding more instances to handle the traffic, reducing the number of available private IP addresses for new instance launches.

E.AWS reserves the first four and the last IP address in each subnet's CIDR block so you do not have enough addresses left to launch all of the new EC2 instances.
A newspaper organization has an on-premises application which allows the public to search its back catalogue and retrieve individual newspaper pages via a website written in Java. They have scanned the old newspapers into JPEGs (approx 17TB) and used Optical Character Recognition (OCR) to populate a commercial search product. The hosting platform and software are now end of life, and the organization wants to migrate its archive to AWS and produce a cost-efficient architecture that is still designed for availability and durability.
Which is the most appropriate?

A.Use S3 with reduced redundancy to store and serve the scanned files; install the commercial search application on EC2 instances and configure with auto-scaling and an Elastic Load Balancer.

B.Model the environment using CloudFormation; use an EC2 instance running an Apache webserver and an open source search application; stripe multiple standard EBS volumes together to store the JPEGs and search index.

C.Use S3 with standard redundancy to store and serve the scanned files, use CloudSearch for query
processing, and use Elastic Beanstalk to host the website across multiple availability zones.

D.Use a single-AZ RDS MySQL instance to store the search index and the JPEG images; use an EC2 instance to serve the website and translate user queries into SQL.

E.Use a CloudFront download distribution to serve the JPEGs to the end users and install the current commercial search product, along with a Java container for the website, on EC2 instances and use Route53 with DNS round-robin.
A corporate web application is deployed within an Amazon Virtual Private Cloud (VPC) and is connected to the corporate data center via an IPsec VPN. The application must authenticate against the on-premises LDAP server. After authentication, each logged-in user can only access an Amazon Simple Storage Service (S3) keyspace specific to that user.
Which two approaches can satisfy these objectives? (Choose 2 answers)

A.Develop an identity broker that authenticates against IAM Security Token Service to assume an IAM role in order to get temporary AWS security credentials. The application calls the identity broker to get AWS temporary security credentials with access to the appropriate S3 bucket.

B.The application authenticates against LDAP and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to assume that IAM role. The application can use the temporary credentials to access the appropriate S3 bucket.

C.Develop an identity broker that authenticates against LDAP and then calls IAM Security Token Service to get IAM federated user credentials. The application calls the identity broker to get IAM federated user credentials with access to the appropriate S3 bucket.

D.The application authenticates against LDAP. The application then calls the AWS Identity and Access Management (IAM) Security service to log in to IAM using the LDAP credentials. The application can use the IAM temporary credentials to access the appropriate S3 bucket.

E.The application authenticates against IAM Security Token Service using the LDAP credentials. The application uses those temporary AWS security credentials to access the appropriate S3 bucket.
Your company has an on-premises multi-tier PHP web application, which recently experienced downtime due to a large burst in web traffic following a company announcement. Over the coming days, you are expecting similar announcements to drive similar unpredictable bursts, and are looking for ways to quickly improve your infrastructure's ability to handle unexpected increases in traffic.
The application currently consists of 2 tiers: a web tier, which consists of a load balancer and several Linux Apache web servers, and a database tier, which hosts a Linux server running a MySQL database.
Which scenario below will provide full site functionality, while helping to improve the ability of your application in the short timeframe required?

A.Offload traffic from the on-premises environment: set up a CloudFront distribution and configure CloudFront to cache objects from a custom origin. Choose to customize your object cache behavior, and select a TTL that objects should exist in cache.

B.Migrate to AWS: use VM Import/Export to quickly convert an on-premises web server to an AMI; create an Auto Scaling group which uses the imported AMI to scale the web tier based on incoming traffic. Create an RDS read replica and set up replication between the RDS instance and the on-premises MySQL server to migrate the database.

C.Failover environment: create an S3 bucket and configure it for website hosting. Migrate your DNS to Route53 using zone file import, and leverage Route53 DNS failover to fail over to the S3-hosted website.

D.Hybrid environment: create an AMI which can be used to launch web servers in EC2. Create an Auto Scaling group which uses the AMI to scale the web tier based on incoming traffic. Leverage Elastic Load Balancing to balance traffic between on-premises web servers and those hosted in AWS.
A. correct... You can have CloudFront sit in front of your on-prem web environment, via a custom origin (the origin doesn't have to be in AWS). This would protect against unexpected bursts in traffic by letting CloudFront handle the traffic that it can out of cache, thus hopefully removing some of the load from your on-prem web servers.

B. incorrect for two reasons... First, there is nothing in the question to say that the existing Apache web servers are VMs. They might be physical servers, for all we can tell from the question, so VM import/export may not be usable at all. Second, you wouldn't want just a read replica out in AWS. If your website instances in AWS are taking the brunt of the incoming burst of traffic, they may have to do both reads and writes, and you don't want to force them to talk all the way back to your on-prem DB to do writes. That's just going to add a lot of latency. And even after the writes are made to your on-prem DB, they still have to replicate back out to the read replica in AWS, which is asynchronous, and could lead to inconsistencies. (User has just clicked to add an item to their shopping cart, and master DB is aware, but read replica DB hasn't been informed by the master DB yet, so the user doesn't see it in their cart.)

C. incorrect for two reasons... First, because it doesn't provide any ability to absorb unexpected bursts in traffic, it merely provides you a failover refuge if your on-prem environment falls over dead from the load. Second, nothing in the question indicates 100% of the content is static content. If you have any dynamic content at all (which you probably do have, since there's a back-end database there for some reason), S3 wouldn't get it done.

D. incorrect because you cannot (currently) use an ELB to share load with an on-prem web server. (You have to specially configure an ELB to even be able to share load across AZs. On-prem is right out.) In theory, you could configure a weighted load-sharing entry in Route53, with a portion of the traffic going to your on-prem load-balancer, and the remainder of the traffic going to your ELB. But that's not what D is stating.
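For reference, a weighted Route53 setup of the kind just described (not what option D states) might be created like this with boto3; the zone ID, hostnames, and weights are all hypothetical:

```python
import boto3

route53 = boto3.client("route53")

def weighted_record(set_id, target, weight):
    """One weighted CNAME variant for www.example.com (illustrative only)."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com",
            "Type": "CNAME",
            "SetIdentifier": set_id,
            "Weight": weight,
            "TTL": 60,
            "ResourceRecords": [{"Value": target}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z0HYPOTHETICAL",
    ChangeBatch={"Changes": [
        # 70% of queries to the on-prem load balancer...
        weighted_record("on-prem", "lb.datacenter.example.com", 70),
        # ...and the remaining 30% to the ELB in AWS.
        weighted_record("aws", "my-elb-123.us-east-1.elb.amazonaws.com", 30),
    ]},
)
```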
Your company produces customer-commissioned, one-of-a-kind skiing helmets combining high fashion with custom technical enhancements. Customers can show off their individuality on the ski slopes and have access to heads-up displays, GPS, rear-view cams and any other technical innovation they wish to embed in the helmet.
The current manufacturing process is data rich and complex, including assessments to ensure that the custom electronics and materials used to assemble the helmets are to the highest standards. Assessments are a mixture of human and automated assessments. You need to add a new set of assessments to model the failure modes of the custom electronics using GPUs with CUDA across a cluster of servers with low-latency networking.
What architecture would allow you to automate the existing process using a hybrid approach and ensure that the architecture can support the evolution of processes over time?

A.Use AWS Data Pipeline to manage movement of data & meta-data and assessments. Use an Auto Scaling group of G2 instances in a placement group.

B.Use Amazon Simple Workflow (SWF) to manage assessments and movement of data & meta-data. Use an Auto Scaling group of G2 instances in a placement group.

C.Use Amazon Simple Workflow (SWF) to manage assessments and movement of data & meta-data. Use an Auto Scaling group of C3 instances with SR-IOV (Single Root I/O Virtualization).

D.Use AWS Data Pipeline to manage movement of data & meta-data and assessments. Use an Auto Scaling group of C3 instances with SR-IOV (Single Root I/O Virtualization).
Your website is serving on-demand training videos to your workforce. Videos are uploaded monthly in high-resolution MP4 format. Your workforce is distributed globally, often on the move, and using company-provided tablets that require the HTTP Live Streaming (HLS) protocol to watch a video. Your company has no video transcoding expertise and, if required, you may need to pay for a consultant.
How do you implement the most cost-efficient architecture without compromising high availability and quality of video delivery?

A.Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. S3 to host videos, with Lifecycle Management to archive original files to Glacier after a few days. CloudFront to serve HLS transcoded videos from S3.

B.A video transcoding pipeline running on EC2, using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. S3 to host videos, with Lifecycle Management to archive all files to Glacier after a few days. CloudFront to serve HLS transcoded videos from Glacier.

C.Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. EBS volumes to host videos and EBS snapshots to incrementally back up original files after a few days. CloudFront to serve HLS transcoded videos from EC2.

D.A video transcoding pipeline running on EC2, using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. EBS volumes to host videos and EBS snapshots to incrementally back up original files after a few days. CloudFront to serve HLS transcoded videos from EC2.
You've been hired to enhance the overall security posture for a very large e-commerce site. They have a well-architected multi-tier application running in a VPC that uses ELBs in front of both the web and the app tier, with static assets served directly from S3. They are using a combination of RDS and DynamoDB for their dynamic data, and then archiving nightly into S3 for further processing with EMR. They are concerned because they found questionable log entries and suspect someone is attempting to gain unauthorized access.
Which approach provides a cost-effective, scalable mitigation to this kind of attack?

A.Recommend that they lease space at a Direct Connect partner location and establish a 1G Direct Connect connection to their VPC. They would then establish Internet connectivity into their space, filter the traffic in a hardware Web Application Firewall (WAF), and then pass the traffic through the Direct Connect connection into their application running in their VPC.

B.Add previously identified hostile source IPs as an explicit INBOUND DENY NACL to the web tier subnet.

C.Add a WAF tier by creating a new ELB and an Auto Scaling group of EC2 instances running a host-based WAF. They would redirect Route 53 to resolve to the new WAF tier ELB. The WAF tier would then pass the traffic to the current web tier. The web tier Security Groups would be updated to only allow traffic from the WAF tier Security Group.

D.Remove all but TLS 1.2 from the web tier ELB and enable Advanced Protocol Filtering. This will enable the ELB itself to perform WAF functionality.
An AWS customer is deploying an application that is composed of an Auto Scaling group of EC2 instances.
The customer's security policy requires that every outbound connection from these instances to any other service within the customer's Virtual Private Cloud must be authenticated using a unique X.509 certificate that contains the specific instance-id.
In addition, the X.509 certificates must be signed by the customer's key management service in order to be trusted for authentication.
Which of the following configurations will support these requirements?

A.Configure an IAM Role that grants access to an Amazon S3 object containing a signed certificate and configure the Auto Scaling group to launch instances with this role. Have the instances' bootstrap fetch the certificate from Amazon S3 upon first boot.

B.Embed a certificate into the Amazon Machine Image that is used by the Auto Scaling group. Have the launched instances generate a certificate signing request with the instance's assigned instance-id and send it to the key management service for signature.

C.Configure the Auto Scaling group to send an SNS notification of the launch of a new instance to the trusted key management service. Have the key management service generate a signed certificate and send it directly to the newly launched instance.

D.Configure the launched instances to generate a new certificate upon first boot. Have the key management service poll the Auto Scaling group for associated instances and send new instances a certificate signature that contains the specific instance-id.
You are designing a photo sharing mobile app. The application will store all pictures in a single Amazon S3 bucket.
Users will upload pictures from their mobile device directly to Amazon S3 and will be able to view and download their own pictures directly from Amazon S3.
You want to configure security to handle potentially millions of users in the most secure manner possible.
What should your server-side application do when a new user registers on the photo-sharing mobile application?

A.Create a set of long-term credentials using AWS Security Token Service with appropriate permissions. Store these credentials in the mobile app and use them to access Amazon S3.

B.Record the user's information in Amazon RDS and create a role in IAM with appropriate permissions. When the user uses their mobile app, create temporary credentials using the AWS Security Token Service 'AssumeRole' function. Store these credentials in the mobile app's memory and use them to access Amazon S3. Generate new credentials the next time the user runs the mobile app.

C.Record the user's information in Amazon DynamoDB. When the user uses their mobile app, create temporary credentials using AWS Security Token Service with appropriate permissions. Store these credentials in the mobile app's memory and use them to access Amazon S3. Generate new credentials the next time the user runs the mobile app.

D.Create an IAM user. Assign appropriate permissions to the IAM user. Generate an access key and secret key for the IAM user, store them in the mobile app, and use these credentials to access Amazon S3.

E.Create an IAM user. Update the bucket policy with appropriate permissions for the IAM user. Generate an access key and secret key for the IAM user, store them in the mobile app, and use these credentials to access Amazon S3.
CDE.
C - CloudFront can absorb attacks to some extent, and you may add WAF to ward off such attacks
D - you can use both external and internal facing ELBs
E - is obvious

A. incorrect... While it is definitely an AWS published suggestion to consider enhanced networking or even 10Gbps interfaces on an instance to assist in mitigating against high traffic floods, two ENIs cannot be used together to help balance network load. ELB always sends traffic to the primary address on the primary ENI of the instance.

B. Not an AWS recommended approach for dealing with DDoS mitigation.

C. Absolutely correct... CloudFront is probably the single best DDoS mitigation you can implement if you had to pick only one.

D. This one is close, and would absolutely work as a recommended best practice for the web and app tiers. This answer might even be workable on the DB tier for DB read requests (writes would be problematic), by load-balancing across a number of read replicas using a non-ELB load-balancing mechanism, such as DNS load balancing, HAproxy, F5 instance, etc. But D states you are using ELBs to perform the load-balancing, and it is not currently possible to attach an RDS instance to an ELB, only EC2 instances. Also, D states that you are using auto scaling groups for all three tiers, and it is not currently possible to use RDS instances in an auto scaling group.

E. Absolutely correct, and very helpful for auto scaling.

F. Seems unlikely. Question is asking about DDoS attacks, which could come from millions of source IP addresses. Even if you could identify incoming requests as malicious (you would not always be able to separate legitimate from malicious), there could be so many malicious source IP addresses in a DDoS that this would not scale very well.
You are implementing a URL whitelisting system for a company that wants to restrict outbound HTTPS connections to specific domains from their EC2-hosted applications. You deploy a single EC2 instance running proxy software and configure it to accept traffic from all subnets and EC2 instances in the VPC. You configure the proxy to only pass through traffic to domains that you define in its whitelist configuration. You have a nightly maintenance window of 10 minutes where all instances fetch new software updates. Each update is about 200MB in size, and there are 500 instances in the VPC that routinely fetch updates. After a few days you notice that some machines are failing to successfully download some, but not all, of their updates within the maintenance window. The download URLs used for these updates are correctly listed in the proxy's whitelist configuration, and you are able to access them manually using a web browser on the instances. What might be happening? (Choose 2 answers)

A.You are running the proxy on an undersized EC2 instance type so network throughput is not sufficient for all
instances to download their updates in time.

B.You have not allocated enough storage to the EC2 instance running the proxy, so the network buffer is filling up, causing some requests to fail.

C.You are running the proxy in a public subnet but have not allocated enough EIPs to support the needed network throughput through the Internet Gateway (IGW).

D.You are running the proxy on a sufficiently-sized EC2 instance in a private subnet, and its network throughput is being throttled by a NAT running on an undersized EC2 instance.

E.The route table for the subnets containing the affected EC2 instances is not configured to direct network
traffic for the software update locations to the proxy.
A large real-estate brokerage is exploring the option of adding a cost-effective, location-based alert to their existing mobile application. The application backend infrastructure currently runs on AWS. Users who opt in to this service will receive alerts on their mobile device regarding real-estate offers in proximity to their location. For the alerts to be relevant, delivery time needs to be in the low minute count. The existing mobile app has 5 million users across the US. Which one of the following architectural suggestions would you make to the customer?

A.The mobile application will submit its location to a web service endpoint utilizing Elastic Load Balancing and EC2 instances; DynamoDB will be used to store and retrieve relevant offers; EC2 instances will communicate with mobile carriers/device providers to push alerts back to the mobile application.


B.Use AWS Direct Connect or VPN to establish connectivity with mobile carriers; EC2 instances will receive the mobile application's location through the carrier connection; RDS will be used to store and retrieve the relevant offers; EC2 instances will communicate with mobile carriers to push alerts back to the mobile application.

C.The mobile application will send device location using SQS; EC2 instances will retrieve the relevant offers from DynamoDB; AWS Mobile Push will be used to send offers to the mobile application.

D.The mobile application will send device location using AWS Mobile Push; EC2 instances will retrieve the relevant offers from DynamoDB; EC2 instances will communicate with mobile carriers/device providers to push alerts back to the mobile application.
A company is building a voting system for a popular TV show; viewers will watch the performances, then visit the show's website to vote for their favorite performer. It is expected that in a short period of time after the show has finished, the site will receive millions of visitors. The visitors will first log in to the site using their Amazon.com credentials and then submit their vote. After the voting is completed, the page will display the vote totals. The company needs to build the site such that it can handle the rapid influx of traffic while maintaining good performance, but also wants to keep costs to a minimum. Which of the design patterns below should they use?

A.Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login with Amazon service to authenticate the user, then process the user's vote and store the result in a Multi-AZ Relational Database Service instance.

B.Use CloudFront and the static website hosting feature of S3 with the JavaScript SDK to call the Login with Amazon service to authenticate the user; use IAM Roles to gain permissions to a DynamoDB table to store the user's vote.

C.Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login with Amazon service to authenticate the user; the web servers will process the user's vote and store the result in a DynamoDB table, using IAM Roles for EC2 instances to gain permissions to the DynamoDB table.

D.Use CloudFront and an Elastic Load Balancer in front of an auto-scaled set of web servers; the web servers will first call the Login with Amazon service to authenticate the user; the web servers will process the user's vote and store the result in an SQS queue, using IAM Roles for EC2 instances to gain permissions to the SQS queue. A set of application servers will then retrieve the items from the queue and store the result in a DynamoDB table.
You are developing a new mobile application and are considering storing user preferences in AWS. This would provide a more uniform cross-device experience to users using multiple mobile devices to access the application. The preference data for each user is estimated to be 50KB in size. Additionally, 5 million customers are expected to use the application on a regular basis. The solution needs to be cost-effective, highly available, scalable and secure. How would you design a solution to meet the above requirements?

A.Set up an RDS MySQL instance in 2 availability zones to store the user preference data. Deploy a public-facing application on a server in front of the database to manage security and access credentials.

B.Set up a DynamoDB table with an item for each user having the necessary attributes to hold the user preferences. The mobile application will query the user preferences directly from the DynamoDB table. Utilize STS, Web Identity Federation, and DynamoDB Fine-Grained Access Control to authenticate and authorize access.

C.Set up an RDS MySQL instance with multiple read replicas in 2 availability zones to store the user preference data. The mobile application will query the user preferences from the read replicas. Leverage the MySQL user management and access privilege system to manage security and access credentials.

D.Store the user preference data in S3. Set up a DynamoDB table with an item for each user and an item attribute pointing to the user's S3 object. The mobile application will retrieve the S3 URL from DynamoDB and then access the S3 object directly. Utilize STS, Web Identity Federation, and S3 ACLs to authenticate and authorize access.
I vote C:

A. This one is incorrect because you cannot "inherit permissions" from one AWS account to another. You could get the other account holder to send you the policy document on their side and cut and paste it into your policy document if you wanted to do so, but you cannot automatically "inherit" that policy.

B. Incorrect because the role for cross-account access is not created in the "Master" account. It would be created in the "Dev" and "Test" accounts that are trusting the "Master" account. (It would be a very bad scenario if the account wishing to have access to another account could create the role and everything all in their own account, without the truster account having to do anything to allow it. Any one account could take over any other account at any time. Clearly wrong.)

C. This is correct. You would have users log into IAM in the "Master" account, and have the role created by the admins in the "Dev" and "Test" accounts. Those IAM users in the "Master" account would switch role either using the pre-populated URL containing the Account ID and Role Name that the Dev and Test admins sent them, or would have to know those values and input them manually. Permissions would be needed in both accounts in order for this to work (see the sketch after this list):
- The Master account IAM users would need either full Administrator or Power User access, or if not, would need to explicitly have access granted via policy to perform "Action": "sts:AssumeRole" with the resource for that action being the ARN of the other accounts' roles (which the other accounts' administrators would have to send to the Master account admin)
- The "Test" and "Dev" account admins would need to grant the role in their accounts access to do pretty much anything, because the scenario is asking for the Master account to have full access.

Note: Srinivasu's comment is very observant. He notes that option A mentions that the Master account IAM users will have full Administrator access, thus allowing them to perform action "sts:AssumeRole", which they might not necessarily have unless they are Administrators or Power Users. And option C does not explicitly mention this, somewhat implying that option C may be missing this piece in the Master account. If you want to be that detailed/picky, then he is right: none of these answers are correct. But C is closest, and A is definitely wrong due to not being able to "inherit" permissions between accounts. So I'm voting C.

D. This is incorrect because linking accounts for consolidated billing purposes does not give the payer account access to do anything in the linked accounts. The payer account can only see billing info for those linked accounts.
I would choose C:

A. This choice is incorrect because, while RDS may be storing SQL transaction logs on the back end for its use in point-in-time recovery, you do not have access to do anything with them. It's just one of those parts of the RDS service that AWS manages for you.

http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.html

(In that doc, search for: "Viewing, downloading, or watching transaction logs is not supported.")

B. Incorrect because Multi-AZ standby DBs are not query-able for read purposes. Use a read replica for that.

C. Correct, this is the easiest way to get an "out of band" copy of the database for your analysis tools to play with, while only very minimally impacting performance on the front end. (The master DB instance would have to perform asynchronous replication operations to the read replica, which has a very small performance impact, unless you're replicating to many read replicas, in which case the performance hit could become noticeable. If you need many read replicas, use Aurora, which has workarounds for this performance hit and can do up to 15 read replicas. Or if your app is married to MySQL, do read replicas of read replicas if you need more than the limit of 5 and can deal with the asynchronous replication delay.) A sketch of creating the replica follows.
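A boto3 sketch of creating the replica; the instance identifiers are hypothetical:

```python
import boto3

rds = boto3.client("rds")

# The replica receives the master's data via asynchronous replication,
# so reporting queries hit the replica instead of the production endpoint.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-reporting",   # new replica (assumed name)
    SourceDBInstanceIdentifier="orders-db",       # existing master (assumed name)
)
```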

D. This one is plausible at first glance, but in practice could impact performance more than the read replica option due to potentially very large queries that the reporting tier could be running. Also, if your cache is only maintaining the most recently-used records, you could miss pieces of the data you would want to have for reporting. Reporting would probably need access to everything across the board in your DB.