
AWS Certified Solutions Architect Professional


A multinational investment bank has a hybrid cloud architecture which uses a single 1 Gbps AWS Direct Connect connection to integrate their on-premises network to AWS Cloud. The bank has a total of 10 VPCs which are all connected to their on-premises data center via the same Direct Connect connection that you manage. Based on the recent IT audit, the existing network setup has a single point of failure which needs to be addressed immediately.
Which of the following is the MOST cost-effective solution that you should implement in order to improve the connection redundancy of your hybrid network?

Establish a new point-to-point Multiprotocol Label Switching (MPLS) connection to all of your 10 VPCs. Configure BGP to use this new connection with an active/passive routing.

Establish another 1 Gbps AWS Direct Connect connection using a public Virtual Interface (VIF). Prepare a VPN tunnel which will terminate on the virtual private gateway (VGW) of the respective VPC using the public VIF. Handle the failover to the VPN connection through the use of BGP.

Establish VPN tunnels from your on-premises data center to each of the 10 VPCs. Terminate each VPN tunnel connection at the virtual private gateway (VGW) of the respective VPC. Configure BGP for route management.
(Correct)

Establish another 1 Gbps AWS Direct Connect connection with corresponding private Virtual Interfaces (VIFs) to connect all of the 10 VPCs individually. Set up a Border Gateway Protocol (BGP) peering session for all of the VIFs.

Explanation
With AWS Direct Connect plus VPN, you can combine one or more AWS Direct Connect dedicated network connections with the Amazon VPC VPN. This combination provides an IPsec-encrypted private connection that also reduces network costs, increases bandwidth throughput, and provides a more consistent network experience than internet-based VPN connections.
You can use AWS Direct Connect to establish a dedicated network connection between your network and AWS, and create a logical connection to public AWS resources, such as an Amazon virtual private gateway IPsec endpoint. This solution combines the AWS-managed benefits of the VPN solution with the low latency, increased bandwidth, and more consistent network experience of AWS Direct Connect, plus an end-to-end, secure IPsec connection.



It costs a lot of money to establish a Direct Connect connection which you rarely use. For a more cost-effective solution, you can configure a backup VPN connection for failover with your AWS Direct Connect connection.
If you want a short-term or lower-cost solution, you might consider configuring a hardware VPN as a failover option for a Direct Connect connection. VPN connections are not designed to provide the same level of bandwidth available to most Direct Connect connections. Ensure that your use case or application can tolerate a lower bandwidth if you are configuring a VPN as a backup to a Direct Connect connection.
Hence, the correct answer is the option that says: Establish VPN tunnels from your on-premises data center to each of the 10 VPCs. Terminate each VPN tunnel connection at the virtual private gateway (VGW) of the respective VPC. Configure BGP for route management.
The following options are incorrect:
-Establish another 1 Gbps AWS Direct Connect connection using a public Virtual Interface (VIF). Prepare a VPN tunnel which will terminate on the virtual private gateway (VGW) of the respective VPC using the public VIF. Handle the failover to the VPN connection through the use of BGP.
-Establish another 1 Gbps AWS Direct Connect connection with corresponding private Virtual Interfaces (VIFs) to connect all of the 10 VPCs individually. Set up a Border Gateway Protocol (BGP) peering session for all of the VIFs.
Establishing yet another 1 Gbps AWS Direct Connect connection is not a cost-effective solution. It is better to establish a VPN connection instead as a backup.
Establishing a new point-to-point Multiprotocol Label Switching (MPLS) connection to all of your 10 VPCs and configuring BGP to use this new connection with active/passive routing is incorrect because you can't directly connect your MPLS network to AWS. To integrate your MPLS infrastructure, you need to set up a colocation with Direct Connect by placing the customer gateway (CGW) in the same physical facility as the Direct Connect location, which facilitates a local cross-connect between the CGW and AWS devices.
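The active/passive failover behavior that BGP provides in the correct answer can be illustrated with a small sketch. This is not actual router configuration; the path names and local-preference values are hypothetical, and real BGP path selection involves many more attributes:

```python
# Sketch of BGP-style active/passive path selection: among paths that are
# currently up, the path with the highest local preference wins, so the
# Direct Connect link carries traffic until it fails, at which point the
# VPN tunnel takes over automatically.

def select_path(paths):
    """Pick the best available path; higher local_pref wins. Returns None
    if no path is up."""
    available = [p for p in paths if p["up"]]
    if not available:
        return None
    return max(available, key=lambda p: p["local_pref"])

paths = [
    {"name": "direct-connect", "local_pref": 200, "up": True},  # primary
    {"name": "vpn-backup",     "local_pref": 100, "up": True},  # standby
]

assert select_path(paths)["name"] == "direct-connect"

# Simulate a Direct Connect outage: traffic fails over to the VPN tunnel.
paths[0]["up"] = False
assert select_path(paths)["name"] == "vpn-backup"
```

This is why the VPN backup is "passive": as long as the Direct Connect path is advertised with the higher preference, the VPN route sits idle and only carries traffic during an outage.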

References:
https://aws.amazon.com/premiumsupport/knowledge-center/configure-vpn-backup-dx/
https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/aws-direct-connect-plus-vpn-network-to-amazon.html
https://aws.amazon.com/answers/networking/aws-network-connectivity-over-mpls/
You are working as a Cloud Engineer for an IoT start-up company which is developing a health monitoring pet collar for dogs and cats. The company has hired an electrical engineer to build a smart pet collar that collects biometric information of the pet every second and then sends it to a web portal through a POST API request. Your task is to architect the API services and the web portal which will accept and process the biometric data as well as provide complete trends and health reports to the pet owners. The portal should be highly durable, available, and scalable with an additional feature for showing real-time biometric data analytics. Which of the following is the best architecture to meet the above requirement?

1. Create an SQS queue to collect the incoming biometric data.
2. Analyze the data from SQS with Amazon Kinesis.
3. Store the results to RDS.

1. Create an S3 bucket to collect the incoming biometric data from the smart pet collar.
2. Use Data Pipeline to run a data analysis task in the S3 bucket every day.
3. Use Redshift as the online analytic processing (OLAP) database for the web portal.

1. Launch an Elastic MapReduce instance to collect the incoming biometric data.
2. Use Amazon Kinesis to analyze the data.
3. Save the results to DynamoDB.

1. Use Amazon Kinesis Data Streams to collect the incoming biometric data.
2. Analyze the data using Kinesis and show the results in a real-time dashboard.
3. Set up a simple data aggregation process and pass the results to Amazon S3.
4. Store the data in Redshift, configured with automated backups, to handle complex analytics.
D.

Explanation
Amazon Kinesis Data Streams enable you to build custom applications that process or analyze streaming data for specialized needs. Kinesis Data Streams can continuously capture and store terabytes of data per hour from hundreds of thousands of sources such as website clickstreams, financial transactions, social media feeds, IT logs, and location-tracking events. With the Kinesis Client Library (KCL), you can build Kinesis Applications and use streaming data to power real-time dashboards, generate alerts, implement dynamic pricing and advertising, and more. You can also emit data from Kinesis Data Streams to other AWS services such as Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon EMR, and AWS Lambda.
Hence, the following option is the correct answer, as it uses Amazon Kinesis to collect the streaming data:
1. Use Amazon Kinesis Data Streams to collect the incoming biometric data.
2. Analyze the data using Kinesis and show the results in a real-time dashboard.
3. Set up a simple data aggregation process and pass the results to Amazon S3.
4. Store the data in Redshift, configured with automated backups, to handle complex analytics.
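The ingestion step in this option can be sketched with a minimal producer. The stream name and record fields below are assumptions for illustration, and the actual `put_record` call is shown commented out because it requires AWS credentials:

```python
import json

def build_biometric_record(collar_id: str, heart_rate: int, timestamp: int) -> dict:
    """Build a Kinesis Data Streams record. Using the collar ID as the
    partition key spreads devices across shards while keeping each
    device's readings ordered within its shard."""
    payload = {"collar_id": collar_id, "heart_rate": heart_rate, "ts": timestamp}
    return {
        "Data": json.dumps(payload).encode("utf-8"),
        "PartitionKey": collar_id,
    }

record = build_biometric_record("collar-042", 96, 1700000000)

# In a real producer (hypothetical stream name):
# import boto3
# boto3.client("kinesis").put_record(StreamName="pet-biometrics", **record)
```

The per-second POST cadence from each collar is exactly the kind of high-frequency, small-record stream Kinesis shards are designed to absorb.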
The following option is incorrect because S3, Data Pipeline, and Redshift do not provide real-time data analytics:
1. Create an S3 bucket to collect the incoming biometric data from the smart pet collar.
2. Use Data Pipeline to run a data analysis task in the S3 bucket every day.
3. Use Redshift as the online analytic processing (OLAP) database for the web portal.
The following option is incorrect because an SQS queue is not appropriate to accept all of the incoming biometric data. You should use Amazon Kinesis instead:
1. Create an SQS queue to collect the incoming biometric data.
2. Analyze the data from SQS with Amazon Kinesis.
3. Store the results to RDS.
The following option is incorrect because, like the one above, it does not use Amazon Kinesis to accept the incoming data:
1. Launch an Elastic MapReduce instance to collect the incoming biometric data.
2. Use Amazon Kinesis to analyze the data.
3. Save the results to DynamoDB.

Reference:
https://aws.amazon.com/kinesis/data-streams/details/

Check out this Amazon Kinesis Cheat Sheet:
https://tutorialsdojo.com/aws-cheat-sheet-amazon-kinesis/
Answer = D

Explanation
In this scenario, the main priority is ensuring the availability of the online portal at all times. The application server as well as the database server should be configured to be highly available through the use of Auto Healing capabilities of OpsWorks stacks and RDS Multi-AZ deployments. If the Auto Healing capability in OpsWorks is enabled, any failed EC2 instances will be automatically replaced to avoid any downtime for the portal. Hence, the correct answer here is the option that says: Deploy the online portal to two auto-scaled EC2 instances in two different Availability Zones with a load balancer in front using OpsWorks and then enable Auto Healing. Launch MySQL RDS in Multi-AZ deployments configuration.
Deploying the online portal to two EC2 instances in two different Availability Zones with a load balancer in front using OpsWorks and CloudWatch for monitoring, then launching MySQL RDS in Multi-AZ deployments configuration is incorrect because even though the setup provides both high availability and scalability, it doesn't provide a mechanism to heal or replace broken EC2 instances, unlike the correct option. Although CloudWatch is mentioned as a monitoring service, the option doesn't specify that it is configured for automatic recovery of failed EC2 instances.
Deploying the online portal to auto-scaled EC2 instances in one Availability Zone using OpsWorks, then launching a MySQL RDS with Read Replicas in two separate Availability Zones is incorrect because if that Availability Zone goes down, the entire portal goes down with it, since the portal is only deployed to one AZ. Although Read Replicas are good for scaling read-intensive applications, the online portal will still be unavailable if the master MySQL RDS instance goes down. Remember that only read operations are permitted on Read Replicas.
Deploying the online portal to two auto-scaled EC2 instances in two different Availability Zones with a load balancer in front using OpsWorks, then launching a MySQL RDS with Read Replica to two separate Availability Zones is incorrect because Read Replicas only provide scalability for read-intensive applications. It is better to use a MySQL RDS with Multi-AZ deployment configuration to ensure that there is always a working MySQL instance in the event that one instance in a different Availability Zone failed. If your database is down and it only has a Read Replica, the portal will only be able to read the data but not insert new data.

References:
https://aws.amazon.com/rds/details/multi-az/
https://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-autohealing.html

Check out this Amazon RDS Cheat Sheet:
https://tutorialsdojo.com/aws-cheat-sheet-amazon-relational-database-service-amazon-rds/
C


Explanation
For some AWS services, you can grant cross-account access to your resources. To do this, you attach a policy directly to the resource that you want to share, instead of using a role as a proxy. The resource that you want to share must support resource-based policies. Unlike a user-based policy, a resource-based policy specifies who (in the form of a list of AWS account ID numbers) can access that resource.
Cross-account access with a resource-based policy has some advantages over a role. With a resource that is accessed through a resource-based policy, the user still works in the trusted account and does not have to give up his or her user permissions in place of the role permissions. In other words, the user continues to have access to resources in the trusted account at the same time as he or she has access to the resource in the trusting account. This is useful for tasks such as copying information to or from the shared resource in the other account.
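For example, a resource-based policy attached to an S3 bucket can grant a second account direct access. The account ID and bucket name below are placeholders, and this is only a minimal sketch of such a policy:

```python
import json

TRUSTED_ACCOUNT = "111122223333"  # placeholder AWS account ID
BUCKET = "shared-reports"         # placeholder bucket name

# A minimal resource-based (bucket) policy: identities in the trusted
# account can read objects without assuming a role, so they keep their
# own permissions while also accessing the shared resource.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{TRUSTED_ACCOUNT}:root"},
        "Action": ["s3:GetObject"],
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}

policy_json = json.dumps(bucket_policy)
```

Contrast this with a role: the `Principal` element names the other account on the resource itself, which is exactly the "who can access this resource" list described above.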

AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations.
Hence, the option that says: Set up cross-account access with a resource-based Policy. Use AWS Config rules to periodically audit changes to the IAM policy and monitor the compliance of the configuration is correct.
The option that says: Set up cross-account access with a user-based policy. Use AWS Config rules to periodically audit changes to the IAM policy and monitor the compliance of the configuration is incorrect because a user-based policy maps the access to a certain IAM user and not to a certain AWS resource.
The option that says: Set up a service-linked role with an identity-based policy. Use AWS Systems Manager rules to periodically audit changes to the IAM policy and monitor the compliance of the configuration is incorrect because a service-linked role is just a unique type of IAM role that is linked directly to an AWS service. In addition, it is the AWS Config service, and not the AWS Systems Manager, that enables you to assess, audit, and evaluate the configurations of your AWS resources.
The option that says: Set up a service-linked role with a service control policy. Use AWS Systems Manager rules to periodically audit changes to the IAM policy and monitor the compliance of the configuration is incorrect because a service control policy is primarily used in AWS Organizations and not for cross-account access. Service-linked roles are predefined by the service and include all the permissions that the service requires to call other AWS services on your behalf. This is not suitable for providing access to your resources to other AWS accounts, unlike cross-account access. You should also use AWS Config, and not AWS Systems Manager, to periodically audit changes to the IAM policy.

References:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_compare-resource-policies.html
https://aws.amazon.com/config/

Check out this AWS Config Cheat Sheet:
https://tutorialsdojo.com/aws-cheat-sheet-aws-config/
A print media company has a popular web application hosted on their on-premises network which allows anyone around the globe to search its back catalog and retrieve individual newspaper pages on their web portal. They have scanned the old newspapers into PNG image format and used Optical Character Recognition (OCR) software to automatically convert images to a text file. The license of their OCR software will expire soon and the news organization decided to move to AWS and produce a scalable, durable, and highly available architecture. Which is the best option to achieve this requirement?

Use S3 Intelligent-Tiering storage class to store and serve the scanned files. Migrate the commercial search application on an Auto Scaling group of Spot EC2 Instances across multiple Availability Zones with an Application Load Balancer to balance the incoming load. Use Amazon Rekognition to detect and recognize text from the scanned old newspapers.

Create a new S3 bucket to store and serve the scanned image files using a CloudFront web distribution. Launch a new Elastic Beanstalk environment to host the website across multiple Availability Zones and set up a CloudSearch for query processing, which the website can use. Use Amazon Rekognition to detect and recognize text from the scanned old newspapers.

Create a new CloudFormation template which has EBS-backed EC2 instances with an Application Load Balancer in front. Install and run an NGINX web server and an open source search application. Store the images to EBS volumes with Amazon Data Lifecycle Manager configured, and which automatically attach new volumes to the EC2 instances as required.

Store the images in an S3 bucket and prepare a separate bucket to host the static website. Utilize S3 Select for searching the images stored in S3. Set up a lifecycle policy to move the images to Glacier after 3 months and if needed, use Glacier Select to query the archives.
B

Explanation
Amazon CloudSearch is a managed service in the AWS Cloud that makes it simple and cost-effective to set up, manage, and scale a search solution for your website or application.
With Amazon CloudSearch, you can quickly add rich search capabilities to your website or application. You don't need to become a search expert or worry about hardware provisioning, setup, and maintenance. With a few clicks in the AWS Management Console, you can create a search domain and upload the data that you want to make searchable, and Amazon CloudSearch will automatically provision the required resources and deploy a highly tuned search index.
You can easily change your search parameters, fine tune search relevance, and apply new settings at any time. As your volume of data and traffic fluctuates, Amazon CloudSearch seamlessly scales to meet your needs.
Hence, the option that says: Create a new S3 bucket to store and serve the scanned image files using a CloudFront web distribution. Launch a new Elastic Beanstalk environment to host the website across multiple Availability Zones and set up a CloudSearch for query processing, which the website can use. Use Amazon Rekognition to detect and recognize text from the scanned old newspapers is correct because it satisfies the requirement: it uses S3 to store the images, instead of the commercial product which will be decommissioned soon. More importantly, it uses CloudSearch for query processing and in addition, it uses Multi-AZ implementation which provides high availability. It is also correct to use Amazon Rekognition to detect and recognize text from the scanned old newspapers.
Amazon Rekognition makes it easy to add image and video analysis to your applications. You just provide an image or video to the Rekognition API, and the service can identify the objects, people, text, scenes, and activities, as well as detect any inappropriate content. Amazon Rekognition also provides highly accurate facial analysis and facial recognition on images and video that you provide. You can detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases.



The option that says: Create a new CloudFormation template which has EBS-backed EC2 instances with an Application Load Balancer in front. Install and run an NGINX web server and an open source search application. Store the images to EBS volumes with Amazon Data Lifecycle Manager configured, and which automatically attach new volumes to the EC2 instances as required is incorrect because an EBS volume is not a scalable nor a durable solution compared with S3. In addition, it is not as cost-effective compared to S3 since it entails maintenance overhead unlike the fully managed storage service provided by S3.
The option that says: Store the images in an S3 bucket and prepare a separate bucket to host the static website. Utilize S3 Select for searching the images stored in S3. Set up a lifecycle policy to move the images to Glacier after 3 months and if needed, use Glacier Select to query the archives is incorrect because although using S3 Select is a feasible option, it is not as scalable as CloudSearch. Amazon S3 Select can only retrieve a subset of data from an object using SQL statements. Storing your data in Amazon Glacier will also increase the retrieval time of your data.
The option that says: Use S3 Intelligent-Tiering storage class to store and serve the scanned files. Migrate the commercial search application on an Auto Scaling group of Spot EC2 Instances across multiple Availability Zones with an Application Load Balancer to balance the incoming load. Use Amazon Rekognition to detect and recognize text from the scanned old newspapers is incorrect because even though it properly uses S3 for durable and scalable storage, it still uses the commercial search software which will be decommissioned soon. It is better to use CloudSearch instead.
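The OCR-replacement step in the correct answer can be sketched as follows. The bucket and key names are hypothetical, and the Rekognition call itself is commented out because it needs AWS credentials:

```python
def build_detect_text_request(bucket: str, key: str) -> dict:
    """Build the Image parameter for Amazon Rekognition's DetectText API,
    pointing at a scanned newspaper page already uploaded to S3."""
    return {"Image": {"S3Object": {"Bucket": bucket, "Name": key}}}

request = build_detect_text_request("newspaper-scans", "1923/front-page.png")

# In a real pipeline:
# import boto3
# response = boto3.client("rekognition").detect_text(**request)
# lines = [d["DetectedText"] for d in response["TextDetections"]
#          if d["Type"] == "LINE"]
```

The extracted lines could then be fed into a CloudSearch domain so the web portal can query the back catalog without the expiring commercial OCR license.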
You are working for a San Francisco-based company that provides digital transaction management services for facilitating electronic exchanges of contracts and signed documents. Using their online system, businesspeople and contractors can digitally sign contracts anywhere and anytime, removing the hassle of signing them on paper and in person. They use AWS to host their multi-tier online portal: the application tier runs an NGINX server on an extra-large EC2 instance; the database tier uses an Oracle database that is regularly backed up to an S3 bucket using a custom backup utility; and the static content is kept on a 512 GB stored volume in AWS Storage Gateway, attached to the application server via the iSCSI interface. In this scenario, which AWS-based disaster recovery strategy will give you the best RTO?

1. Deploy the Oracle database and the NGINX app server to an EC2 instance.
2. Restore the Recovery Manager (RMAN) Oracle backups from an S3 bucket.
3. Restore the static content by attaching an AWS Storage Gateway running on Amazon EC2 as an iSCSI volume to the NGINX EC2 server.

1. Deploy the Oracle database on RDS.
2. Deploy the NGINX app server on an EC2 instance.
3. Restore the Recovery Manager (RMAN) Oracle backups from Amazon Glacier.
4. Generate an EBS volume of static content from the Storage Gateway and attach it to the NGINX EC2 server.

1. Deploy the Oracle database and the NGINX app server on EC2.
2. Restore the Recovery Manager (RMAN) Oracle backups from an S3 bucket.
3. Restore the static content from an AWS Storage Gateway-VTL running on Amazon EC2.

1. Deploy the Oracle database and the NGINX app server on an EC2 instance.
2. Restore the Recovery Manager (RMAN) Oracle backups from an Amazon S3 bucket.
3. Generate an EBS volume of static content from the Storage Gateway and attach it to the NGINX EC2 server.
D


Explanation
Since this uses a Volume Storage Gateway, you have to generate an EBS Volume to restore the data. Hence, the following option is correct:
1. Deploy the Oracle database and the NGINX app server on an EC2 instance.
2. Restore the Recovery Manager (RMAN) Oracle backups from an Amazon S3 bucket.
3. Generate an EBS volume of static content from the Storage Gateway and attach it to the NGINX EC2 server.
The following options are incorrect because you cannot restore the data directly to an EC2 instance:
1. Deploy the Oracle database and the NGINX app server on EC2.
2. Restore the Recovery Manager (RMAN) Oracle backups from an S3 bucket.
3. Restore the static content from an AWS Storage Gateway-VTL running on Amazon EC2.
--
1. Deploy the Oracle database and the NGINX app server to an EC2 instance.
2. Restore the Recovery Manager (RMAN) Oracle backups from an S3 bucket.
3. Restore the static content by attaching an AWS Storage Gateway running on Amazon EC2 as an iSCSI volume to the NGINX EC2 server.
Remember that RDS does not support the RMAN backup utility, so the following option is incorrect as you cannot use RDS:
1. Deploy the Oracle database on RDS.
2. Deploy the NGINX app server on an EC2 instance.
3. Restore the Recovery Manager (RMAN) Oracle backups from Amazon Glacier.
4. Generate an EBS volume of static content from the Storage Gateway and attach it to the NGINX EC2 server.
While Amazon RDS does not use RMAN for backups, you can use the Amazon RDS-provided package to execute RMAN validation commands against the database, control file, SPFILE, tablespaces, or data files.
The volume gateway provides block storage to your applications using the iSCSI protocol. Data on the volumes is stored in Amazon S3. To access your iSCSI volumes in AWS, you can take EBS snapshots, which can then be used to create EBS volumes.
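The two-step restore path for the static content can be sketched as follows. The ARN, snapshot ID, and Availability Zone are placeholders, and the boto3 calls are commented out since they require AWS credentials:

```python
def build_restore_steps(volume_arn: str, snapshot_id: str, az: str) -> dict:
    """Parameters for the two-step restore: snapshot the Storage Gateway
    volume, then create an EBS volume from that snapshot to attach to
    the NGINX EC2 server."""
    return {
        "create_snapshot": {
            "VolumeARN": volume_arn,
            "SnapshotDescription": "DR restore of static content",
        },
        "create_volume": {"SnapshotId": snapshot_id, "AvailabilityZone": az},
    }

steps = build_restore_steps(
    "arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12A3456B/volume/vol-example",
    "snap-0abc1234example",
    "us-east-1a",
)

# In practice:
# import boto3
# boto3.client("storagegateway").create_snapshot(**steps["create_snapshot"])
# boto3.client("ec2").create_volume(**steps["create_volume"])
```

Because the snapshot already lives in S3, the restore avoids copying the full 512 GB over the network, which is what keeps the RTO low in the correct option.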

Reference:
https://aws.amazon.com/storagegateway/faqs/

Check out this AWS Storage Gateway Cheat Sheet:
https://tutorialsdojo.com/aws-cheat-sheet-aws-storage-gateway/

Tutorials Dojo's AWS Certified Solutions Architect Professional Exam Study Guide:
https://tutorialsdojo.com/aws-cheat-sheet-aws-certified-solutions-architect-professional/
B

Explanation
AWS Server Migration Service (SMS) is an agentless service which makes it easier and faster for you to migrate thousands of on-premises workloads to AWS. AWS SMS allows you to automate, schedule, and track incremental replications of live server volumes, making it easier for you to coordinate large-scale server migrations.
AWS Server Migration Service is a significant enhancement of the EC2 VM Import/Export service. The AWS Server Migration Service provides automated, live incremental server replication and AWS Console support, unlike the VM Import/Export service. Hence, deploying the AWS Server Migration Service Connector virtual appliance on your on-premises VMware vCenter environment and using the AWS Server Migration Service for the migration process is the best answer.
VM Import/Export enables you to easily import virtual machine images from your existing environment to Amazon EC2 instances and export them back to your on-premises environment. This offering allows you to leverage your existing investments in the virtual machines that you have built to meet your IT security, configuration management, and compliance requirements by bringing those virtual machines into Amazon EC2 as ready-to-use instances. You can also export imported instances back to your on-premises virtualization infrastructure, allowing you to deploy workloads across your IT infrastructure.



Using the AWS Management Portal for vCenter to simplify the migration of your VMs from the on-premises VMware vCenter to AWS is incorrect because the AWS Management Portal for vCenter is primarily used as an easy-to-use interface for creating and managing AWS resources from VMware vCenter. Although it can be used for migration, it is not the most suitable migration service for on-premises Virtual Machines (VMs) to AWS since it does not generate AMIs nor support incremental updates unlike AWS SMS.
Using AWS VM Import/Export for the migration process is incorrect because although the VM Import/Export service enables you to easily import VM images from the on-premises data center to AWS Cloud in the form of EC2 instances, it does not support incremental updates of changing VMs unlike the AWS Server Migration Service (SMS).
Using the Amazon Mechanical Turk for the migration process is incorrect because the Amazon Mechanical Turk is a web service that provides an on-demand, scalable, human workforce to complete jobs that humans can do better than computers, such as recognizing objects in photographs. This is not a suitable service to use for migration.

Reference:
https://aws.amazon.com/server-migration-service
https://aws.amazon.com/ec2/vm-import/

Check out this AWS Server Migration Service Cheat Sheet:
https://tutorialsdojo.com/aws-cheat-sheet-aws-server-migration-service-sms/

Here is a deep dive on AWS Server Migration Service:
https://youtu.be/11IHvxjy4hw
D and E


Explanation
For this scenario, the optimal services to use are Amazon ElastiCache and RDS Read Replicas. Amazon ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads (such as social networking, gaming, media sharing and Q&A portals) or compute-intensive workloads (such as a recommendation engine) by allowing you to store the objects that are often read in cache. Moreover, with Redis' support for advanced data structures, you can augment the database tier to provide features (such as leaderboard, counting, session and tracking) that are not easily achievable via databases in a cost-effective way.
Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This feature makes it easy to elastically scale out beyond the capacity constraints of a single DB Instance for read-heavy database workloads.
You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone DB instances. Read replicas are available in Amazon RDS for MySQL, MariaDB, Oracle, and PostgreSQL as well as Amazon Aurora.



You can reduce the load on your source DB instance by routing read queries from your applications to the read replica. Read replicas allow you to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.
To further maximize read performance, Amazon RDS for MySQL allows you to add table indexes directly to Read Replicas, without those indexes being present on the master.
Because read replicas can be promoted to master status, they are useful as part of a sharding implementation. To shard your database, add a read replica and promote it to master status, then, from each of the resulting DB Instances, delete the data that belongs to the other shard.
Setting up Read Replicas in each Availability Zone is correct because Read Replicas are used to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads hence, improving the read performance.
Implementing an in-memory cache using Amazon ElastiCache is correct because ElastiCache is an in-memory caching solution which reduces the load on the database and improves the read performance.
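The read-path improvement ElastiCache provides follows the classic cache-aside pattern, sketched here with a plain dict standing in for Redis and another for the database (keys and values are illustrative only):

```python
cache = {}                               # stands in for an ElastiCache Redis node
database = {"user:1": {"name": "Ada"}}   # stands in for the RDS master/replica

db_reads = 0  # counts how often we actually hit the database

def get_user(key: str):
    """Cache-aside read: check the cache first, fall back to the database
    on a miss, then populate the cache so repeated reads skip the
    database entirely."""
    global db_reads
    if key in cache:
        return cache[key]
    db_reads += 1
    value = database.get(key)
    cache[key] = value
    return value

assert get_user("user:1")["name"] == "Ada"  # miss: goes to the database
assert get_user("user:1")["name"] == "Ada"  # hit: served from cache
assert db_reads == 1                        # only one database read occurred
```

In production the dict lookups would be Redis GET/SET calls with a TTL, but the load-shedding effect on the database tier is the same.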
Migrating the database to Amazon Redshift and using its massively parallel query execution capability to improve the read performance of the application is incorrect because Amazon Redshift is more suitable for OLAP-type applications and not for online transaction processing (OLTP). Redshift is also not suitable to host your MySQL database.
Modifying the Amazon RDS Multi-AZ deployments configuration to launch multiple standby database instances and distributing the incoming traffic to the standby instances to improve the database performance is incorrect because you cannot distribute the incoming traffic to the standby instances since these are not readable at all. These database instances are primarily used to improve the availability of your database and your application.
Vertically scaling your RDS MySQL Instance by upgrading its instance size with provisioned IOPS is incorrect because although upgrading the instance size may improve the read performance to a certain extent, it is not as scalable compared with Read Replicas or ElastiCache.

References:
https://aws.amazon.com/elasticache/
https://aws.amazon.com/rds/details/read-replicas/

Check out this Amazon ElastiCache Cheat Sheet:
https://tutorialsdojo.com/aws-cheat-sheet-amazon-elasticache/

Check out this Amazon RDS Cheat Sheet:
https://tutorialsdojo.com/aws-cheat-sheet-amazon-relational-database-service-amazon-rds/
C


Explanation
Organizations usually begin to think about how they will migrate an application during Phase 2 (Portfolio Discovery and Planning) of the migration process. This is when you determine what is in your environment and the migration strategy for each application. The six approaches detailed below are common migration strategies that build upon "The 5 R's" that Gartner Inc., a global research and advisory firm, outlined in 2011.
You should gain a thorough understanding of which migration strategy will be best suited for certain portions of your portfolio. It is also important to consider that while one of the six strategies may be best for migrating certain applications in a given portfolio, another strategy might work better for moving different applications in the same portfolio.



1. Rehost ("lift and shift") - In a large legacy migration scenario where the organization is looking to quickly implement its migration and scale to meet a business case, we find that the majority of applications are rehosted. Most rehosting can be automated with tools such as AWS SMS, although you may prefer to do this manually as you learn how to apply your legacy systems to the cloud.
You may also find that applications are easier to re-architect once they are already running in the cloud. This happens partly because your organization will have developed better skills to do so and partly because the hard part - migrating the application, data, and traffic - has already been accomplished.
2. Replatform ("lift, tinker and shift") - This entails making a few cloud optimizations in order to achieve some tangible benefit without changing the core architecture of the application. For example, you may be looking to reduce the amount of time you spend managing database instances by migrating to a managed relational database service such as Amazon Relational Database Service (RDS), or migrating your application to a fully managed platform like AWS Elastic Beanstalk.
3. Repurchase ("drop and shop") - This is a decision to move to a different product and likely means your organization is willing to change the existing licensing model you have been using. For workloads that can easily be upgraded to newer versions, this strategy might allow a feature set upgrade and smoother implementation.
4. Refactor / Re-architect - Typically, this is driven by a strong business need to add features, scale, or performance that would otherwise be difficult to achieve in the application's existing environment. If your organization is looking to boost agility or improve business continuity by moving to a service-oriented architecture (SOA) this strategy may be worth pursuing - even though it is often the most expensive solution.
5. Retire - Identifying IT assets that are no longer useful and can be turned off will help boost your business case and direct your attention towards maintaining the resources that are widely used.
6. Retain - You may want to retain portions of your IT portfolio because there are some applications that you are not ready to migrate and feel more comfortable keeping them on-premises, or you are not ready to prioritize an application that was recently upgraded and then make changes to it again.
Rehost is correct because this ("lift and shift") strategy is suitable for quickly migrating the systems to AWS to meet a certain business case, which is exactly the requirement in this scenario.
Replatform is incorrect because this strategy entails making a few cloud optimizations on your existing systems before migrating them to AWS, which is quite contrary to the scenario. This strategy is more suitable when you want to reduce the amount of time you spend managing database instances by migrating to a managed relational database service such as Amazon Relational Database Service (RDS), or migrating your application to a fully managed platform like AWS Elastic Beanstalk.
Repurchase is incorrect because this strategy entails a decision to move to a different product and likely means your organization is willing to change the existing licensing model you have been using. Hence, this is not a suitable migration strategy for this scenario.
Refactor/Re-architect is incorrect because the scenario clearly states that the organization doesn't have the budget or the time to re-architect or refactor its systems.

Reference:
https://aws.amazon.com/cloud-migration/

Check out this cheat sheet on the 6 R's of migration as well as other AWS migration services:
https://tutorialsdojo.com/aws-cheat-sheet-aws-migration-strategies-the-6-rs/
A

Explanation
You can use geo restriction - also known as geoblocking - to prevent users in specific geographic locations from accessing content that you're distributing through a CloudFront web distribution. To use geo restriction, you have two options:
1. Use the CloudFront geo restriction feature. Use this option to restrict access to all of the files that are associated with a distribution and to restrict access at the country level.
2. Use a third-party geolocation service. Use this option to restrict access to a subset of the files that are associated with a distribution or to restrict access at a finer granularity than the country level.
Hence, the correct answer for this scenario is to create a CloudFront distribution with geo restriction enabled to block all of the blacklisted countries from accessing the trading platform. CloudFront provides users with low-latency access to the files and can also block the countries on the FTAF list.
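One way to apply the geo restriction is via the AWS CLI; this is a hedged sketch in which the distribution ID, ETag, and country codes are all placeholders:

```shell
# Fetch the current distribution config (the distribution ID is a placeholder).
# Note: the output wraps DistributionConfig together with an ETag, so the
# file needs minor editing before it can be passed back to update-distribution.
aws cloudfront get-distribution-config --id EDFDVBD6EXAMPLE > dist-config.json

# Edit dist-config.json so the DistributionConfig's Restrictions section reads:
#   "Restrictions": {
#     "GeoRestriction": {
#       "RestrictionType": "blacklist",
#       "Quantity": 2,
#       "Items": ["CU", "KP"]
#     }
#   }
# (ISO 3166-1 alpha-2 country codes; the two codes here are examples only.)

# Push the updated config back, passing the ETag returned by the get call.
aws cloudfront update-distribution --id EDFDVBD6EXAMPLE \
    --if-match ETAG_FROM_GET_CALL \
    --distribution-config file://dist-config.json
```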
The option that says: Deploy the trading platform using Elastic Beanstalk and deny all incoming traffic from the IP addresses of the blacklisted countries in the Network Access Control List (ACL) of the VPC is incorrect because blocking all of the IP addresses of each blacklisted country in the Network Access Control List entails a lot of work and is not a recommended way to accomplish the task. Using the CloudFront geo restriction feature is a better solution for this.
The options that say: Use Route 53 with a Geolocation routing policy that blocks all traffic from the blacklisted countries and Use Route 53 with a Geoproximity routing policy that blocks all traffic from the blacklisted countries are incorrect because Route 53 does not provide low-latency access to users around the globe, unlike CloudFront. Geolocation routing policy is used when you want to route traffic based on the location of your users, while Geoproximity routing policy is for scenarios where you want to route traffic based on the location of your resources and, optionally, shift traffic from resources in one location to resources in another.

Reference:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/georestrictions.html

Check out this Amazon CloudFront Cheat Sheet:
https://tutorialsdojo.com/aws-cheat-sheet-amazon-cloudfront/

Latency Routing vs Geoproximity Routing vs Geolocation Routing:
https://tutorialsdojo.com/aws-cheat-sheet-latency-routing-vs-geoproximity-routing-vs-geolocation-routing/

Comparison of AWS Services Cheat Sheets:
https://tutorialsdojo.com/comparison-of-aws-services-for-udemy-students/
You are migrating an interactive car registration web system hosted on your on-premises network to AWS Cloud. The current architecture of the system consists of a single NGINX web server and a MySQL database running on a Fedora server, which both reside in their on-premises data center.
In this scenario, what would be the most efficient way to transfer the web application to AWS?

1. Use the AWS Application Discovery Service to migrate the NGINX web server.

2. Configure Auto Scaling to launch two web servers in two Availability Zones.

3. Launch a Multi-AZ MySQL Amazon Relational Database Service (RDS) instance in one Availability Zone only.

4. Import the data into Amazon RDS from the latest MySQL backup.

5. Use Amazon Route 53 to create a private hosted zone and point a non-alias A record to the ELB.

1. Export web files to an Amazon S3 bucket in one Availability Zone using AWS Migration Hub.

2. Run the website directly out of Amazon S3.

3. Migrate the database using the AWS Database Migration Service and AWS Schema Conversion Tool (AWS SCT).

4. Use Route 53 and create an alias record pointing to the ELB.

1. Use the AWS Server Migration Service (SMS) to create an EC2 AMI of the NGINX web server.

2. Configure auto-scaling to launch in two Availability Zones.

3. Launch a multi-AZ MySQL Amazon RDS instance in one availability zone only.

4. Import the data into Amazon RDS from the latest MySQL backup.

5. Create an ELB to front your web servers.

6. Use Amazon Route 53 and create an A record pointing to the elastic load balancer.

1. Launch two NGINX EC2 instances in two Availability Zones.

2. Copy the web files from the on-premises web server to each Amazon EC2 web server, using Amazon S3 as the repository.

3. Migrate the database using the AWS Database Migration Service.

4. Create an ELB to front your web servers.

5. Use Route 53 and create an alias A record pointing to the ELB.
D

Explanation
This is a trick question which contains a lot of information to confuse you, especially if you don't know the fundamental concepts in AWS. All options seem to be correct except for their last steps in setting up Route 53.
To route domain traffic to an ELB load balancer, use Amazon Route 53 to create an alias record that points to your load balancer. An alias record is a Route 53 extension to DNS. It's similar to a CNAME record, but you can create an alias record both for the root domain, such as example.com, and for subdomains, such as www.example.com. (You can create CNAME records only for subdomains.) For EC2 instances, always use a Type A record without an alias. For ELB, CloudFront, and S3, always use a Type A record with an alias. Finally, for RDS, always use a CNAME record with no alias. Hence, the following option is the correct answer:
1. Launch two NGINX EC2 instances in two Availability Zones.
2. Copy the web files from the on-premises web server to each Amazon EC2 web server, using Amazon S3 as the repository.
3. Migrate the database using the AWS Database Migration Service.
4. Create an ELB to front your web servers.
5. Use Route 53 and create an alias A record pointing to the ELB.
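Step 5 can be sketched with the AWS CLI; in this hedged example, the hosted zone ID, the ELB DNS name, and the ELB's canonical hosted zone ID are all placeholders:

```shell
# Create (or update) an alias A record that points the domain at the ELB.
# All IDs and names here are hypothetical placeholders.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1EXAMPLEZONE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z35SXDOTRQ7X7K",
          "DNSName": "my-load-balancer-1234567890.us-east-1.elb.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'
```

Note that the AliasTarget's HostedZoneId is the load balancer's own canonical hosted zone ID (which is region-specific), not the ID of your Route 53 hosted zone.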



AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from the most widely used commercial and open-source databases.
The following sets of options are incorrect because they just use an A record without an alias:
1. Use the AWS Server Migration Service (SMS) to create an EC2 AMI of the NGINX web server.
2. Configure auto-scaling to launch in two Availability Zones.
3. Launch a multi-AZ MySQL Amazon RDS instance in one availability zone only.
4. Import the data into Amazon RDS from the latest MySQL backup.
5. Create an ELB to front your web servers.
6. Use Amazon Route 53 and create an A record pointing to the elastic load balancer.
--
1. Use the AWS Application Discovery Service to migrate the NGINX web server.
2. Configure Auto Scaling to launch two web servers in two Availability Zones.
3. Launch a Multi-AZ MySQL Amazon Relational Database Service (RDS) instance in one Availability Zone only.
4. Import the data into Amazon RDS from the latest MySQL backup.
5. Use Amazon Route 53 to create a private hosted zone and point a non-alias A record to the ELB.
Take note as well that the AWS Server Migration Service (SMS) is primarily used to migrate virtual machines, such as from VMware vSphere and Windows Hyper-V, to the AWS Cloud. In addition, the AWS Application Discovery Service simply helps you plan migration projects by gathering information about your on-premises data centers, but it is not a migration service itself.
The following option is also incorrect because the web system that is being migrated is a non-static (dynamic) website, which cannot be hosted in S3:
1. Export web files to an Amazon S3 bucket in one Availability Zone using AWS Migration Hub.
2. Run the website directly out of Amazon S3.
3. Migrate the database using the AWS Database Migration Service and AWS Schema Conversion Tool (AWS SCT).
4. Use Route 53 and create an alias record pointing to the ELB.

References:
https://aws.amazon.com/cloud-migration/
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer.html

Check out this Amazon Route 53 Cheat Sheet:
https://tutorialsdojo.com/aws-cheat-sheet-amazon-route-53/

Check out this cheat sheet on AWS Database Migration Service:
https://tutorialsdojo.com/aws-cheat-sheet-aws-database-migration-service/
You are working as a Solutions Architect for a multinational tech company which has multiple VPCs for each of its IT departments. You are instructed to launch a new central database server which can be accessed by the other VPCs of the company using the database.tutorialsdojo.com domain name. This server should only be accessible within the associated VPCs since only internal applications will be using the database.
Which of the following should you do to meet the above requirement?

Set up a public hosted zone with a domain name of tutorialsdojo.com and specify the VPCs that you want to associate with the hosted zone. Create an A record with a value of database.tutorialsdojo.com which maps to the IP address of the EC2 instance of your database server. Modify the enableDnsHostNames attribute of your VPC to true and the enableDnsSupport attribute to true

Set up a private hosted zone with a domain name of tutorialsdojo.com and specify the VPCs that you want to associate with the hosted zone. Create an A record with a value of database.tutorialsdojo.com which maps to the Elastic IP address of the EC2 instance of your database server. Modify the enableDnsHostNames attribute of your VPC to true and the enableDnsSupport attribute to false

Set up a public hosted zone with a domain name of tutorialsdojo.com and specify the VPCs that you want to associate with the hosted zone. Create a CNAME record with a value of database.tutorialsdojo.com which maps to the IP address of the EC2 instance of your database server. Modify the enableDnsHostNames attribute of your VPC to false and the enableDnsSupport attribute to false

Set up a private hosted zone with a domain name of tutorialsdojo.com and specify the VPCs that you want to associate with the hosted zone. Create an A record with a value of database.tutorialsdojo.com which maps to the IP address of the EC2 instance of your database server. Modify the enableDnsHostNames attribute of your VPC to true and the enableDnsSupport attribute to true
D


Explanation
In AWS, a hosted zone is a container for records, and records contain information about how you want to route traffic for a specific domain, such as tutorialsdojo.com, and its subdomains (portal.tutorialsdojo.com, database.tutorialsdojo.com). A hosted zone and the corresponding domain have the same name. There are two types of hosted zones:
Public hosted zones contain records that specify how you want to route traffic on the internet.
Private hosted zones contain records that specify how you want to route traffic in an Amazon VPC



A private hosted zone is a container that holds information about how you want Amazon Route 53 to respond to DNS queries for a domain and its subdomains within one or more VPCs that you create with the Amazon VPC service. Your VPC has attributes that determine whether your EC2 instance receives public DNS hostnames, and whether DNS resolution through the Amazon DNS server is supported.
enableDnsHostnames - Indicates whether the instances launched in the VPC get public DNS hostnames. If this attribute is true, instances in the VPC get public DNS hostnames, but only if the enableDnsSupport attribute is also set to true.
enableDnsSupport - Indicates whether the DNS resolution is supported for the VPC. If this attribute is false, the Amazon-provided DNS server in the VPC that resolves public DNS hostnames to IP addresses is not enabled. If this attribute is true, queries to the Amazon-provided DNS server at the 169.254.169.253 IP address, or the reserved IP address at the base of the VPC IPv4 network range plus two (for example, 10.0.0.2 in a 10.0.0.0/16 VPC), will succeed.
Hence, the option that says: Set up a private hosted zone with a domain name of tutorialsdojo.com and specify the VPCs that you want to associate with the hosted zone. Create an A record with a value of database.tutorialsdojo.com which maps to the IP address of the EC2 instance of your database server. Modify the enableDnsHostNames attribute of your VPC to true and the enableDnsSupport attribute to true is the correct answer.
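The correct option above can be sketched with the AWS CLI; the VPC ID, region, and caller reference below are hypothetical placeholders:

```shell
# Enable both VPC DNS attributes (one attribute per call; the VPC ID
# is a placeholder).
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 \
    --enable-dns-support "{\"Value\":true}"
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 \
    --enable-dns-hostnames "{\"Value\":true}"

# Create the hosted zone; supplying --vpc makes the zone private and
# associates it with that VPC.
aws route53 create-hosted-zone \
    --name tutorialsdojo.com \
    --vpc VPCRegion=us-east-1,VPCId=vpc-0123456789abcdef0 \
    --caller-reference private-zone-2024-01-01
```

The A record for database.tutorialsdojo.com would then be added to the new zone with `change-resource-record-sets`, pointing at the instance's private IP.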
The options that say:
1. Set up a public hosted zone with a domain name of tutorialsdojo.com and specify the VPCs that you want to associate with the hosted zone. Create an A record with a value of database.tutorialsdojo.com which maps to the IP address of the EC2 instance of your database server. Modify the enableDnsHostNames attribute of your VPC to true and the enableDnsSupport attribute to true
2. Set up a public hosted zone with a domain name of tutorialsdojo.com and specify the VPCs that you want to associate with the hosted zone. Create a CNAME record with a value of database.tutorialsdojo.com which maps to the IP address of the EC2 instance of your database server. Modify the enableDnsHostNames attribute of your VPC to false and the enableDnsSupport attribute to false
are incorrect because you have to create a private hosted zone and not a public one, since the database server will only be accessed by the associated VPCs and not publicly over the Internet. In addition, you have to create an A record for your database server and then set both the enableDnsHostNames and enableDnsSupport attributes to true.
The option that says: Set up a private hosted zone with a domain name of tutorialsdojo.com and specify the VPCs that you want to associate with the hosted zone. Create an A record with a value of database.tutorialsdojo.com which maps to the Elastic IP address of the EC2 instance of your database server. Modify the enableDnsHostNames attribute of your VPC to true and the enableDnsSupport attribute to false is incorrect because even though it mentions the use of a private hosted zone, the configuration is incorrect since it is required to set both the enableDnsHostNames and enableDnsSupport attributes of your VPC to true. In addition, an Elastic IP address is a public IPv4 address, which is reachable from the Internet and hence, it violates the requirement that the database server should only be accessible within your associated VPCs.

References:
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html#vpc-dns-support
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-private.html

Check out these Amazon VPC and Route 53 Cheat Sheets:
https://tutorialsdojo.com/aws-cheat-sheet-amazon-vpc/
https://tutorialsdojo.com/aws-cheat-sheet-amazon-route-53/
An online immigration system is currently hosted on one large EC2 instance with EBS volumes to store all of the applicants' data. The registration system accepts the information from the user including documents and photos and then performs automated verification and processing to check if the applicant is eligible for immigration. The immigration system becomes unavailable at times when there is a surge of applicants using the system. The existing architecture needs improvement as it takes a long time for the system to complete the processing and the attached EBS volumes are not enough to store the ever-growing data being uploaded by the users. Which of the following is the best option to achieve high availability and a more scalable data storage?

Use SNS to distribute the tasks to a group of EC2 instances. Use Auto Scaling to dynamically increase or decrease the group of EC2 instances depending on the length of the SQS queue.

Upgrade your architecture to use an S3 bucket with cross-region replication (CRR) enabled, as the storage service. Set up an SQS queue to distribute the tasks to a group of EC2 instances with Auto Scaling to dynamically increase or decrease the group of EC2 instances depending on the length of the SQS queue. Use CloudFormation to replicate your architecture to another region.

Use EBS with Provisioned IOPS to store files, SNS to distribute tasks to a group of EC2 instances working in parallel, and Auto Scaling to dynamically size the group of EC2 instances depending on the number of SNS notifications. Use CloudFormation to replicate your architecture to another region.

Upgrade to EBS with Provisioned IOPS as your main storage service and change your architecture to use an SQS queue to distribute the tasks to a group of EC2 instances. Use Auto Scaling to dynamically increase or decrease the group of EC2 instances depending on the length of the SQS queue.
B


Explanation
In this scenario, you need to overhaul the existing immigration service to upgrade its storage and computing capacity. Since EBS volumes can only provide limited storage capacity and are not scalable, you should use S3 instead. The system goes down at times when there is a surge of requests, which indicates that the existing large EC2 instance can no longer handle the load. In this case, you should implement a highly available architecture and a queueing system with SQS and Auto Scaling.
The option that says: Upgrade your architecture to use an S3 bucket with cross-region replication (CRR) enabled, as the storage service. Set up an SQS queue to distribute the tasks to a group of EC2 instances with Auto Scaling to dynamically increase or decrease the group of EC2 instances depending on the length of the SQS queue. Use CloudFormation to replicate your architecture to another region is correct because it provides high availability and a scalable data storage with S3. Auto-scaling of EC2 instances reduces the overall processing time and SQS helps in distributing the tasks to a group of EC2 instances.
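Scaling the worker fleet on SQS queue length is commonly done with a target tracking policy on a custom "backlog per instance" metric, per AWS's scaling-on-SQS guidance. A hedged CLI sketch, in which the ASG name, policy name, queue name, and metric details are all placeholders:

```shell
# Attach a target tracking policy to the worker Auto Scaling group.
# config.json would contain a CustomizedMetricSpecification, e.g. a
# custom backlog-per-instance metric that you publish yourself from
# the queue's ApproximateNumberOfMessagesVisible:
#   {
#     "TargetValue": 100,
#     "CustomizedMetricSpecification": {
#       "MetricName": "MyBacklogPerInstance",
#       "Namespace": "MyApp",
#       "Dimensions": [{"Name": "QueueName", "Value": "immigration-queue"}],
#       "Statistic": "Average"
#     }
#   }
aws autoscaling put-scaling-policy \
    --auto-scaling-group-name immigration-workers \
    --policy-name scale-on-queue-backlog \
    --policy-type TargetTrackingScaling \
    --target-tracking-configuration file://config.json
```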
The option that says: Use EBS with Provisioned IOPS to store files, SNS to distribute tasks to a group of EC2 instances working in parallel, and Auto Scaling to dynamically size the group of EC2 instances depending on the number of SNS notifications. Use CloudFormation to replicate your architecture to another region is incorrect because neither EBS nor SNS are valid choices in this scenario. Using SQS is more suitable in distributing the tasks to an Auto Scaling group of EC2 instances and not SNS.
The option that says: Use SNS to distribute the tasks to a group of EC2 instances. Use Auto Scaling to dynamically increase or decrease the group of EC2 instances depending on the length of the SQS queue is incorrect because SNS is not a valid choice in this scenario.
The option that says: Upgrade to EBS with Provisioned IOPS as your main storage service and change your architecture to use an SQS queue to distribute the tasks to a group of EC2 instances. Use Auto Scaling to dynamically increase or decrease the group of EC2 instances depending on the length of the SQS queue is incorrect because EBS is not a scalable storage solution. You should use S3 instead.
The option that says: Upgrade your architecture to use EC2 instance store as a cost-effective storage solution. Set up an SQS queue to distribute the tasks to a group of EC2 instances with Auto Scaling to dynamically increase or decrease the group of EC2 instances depending on the length of the SQS queue is incorrect because an EC2 instance store is an ephemeral storage and should never be your choice in storing user data. You should use S3 instead.

References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-using-sqs-queue.html
https://aws.amazon.com/s3/

Check out this Amazon S3 Cheat Sheet:
https://tutorialsdojo.com/aws-cheat-sheet-amazon-s3/
A

Explanation
AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from the most widely used commercial and open-source databases.
AWS Database Migration Service can migrate your data to and from most of the widely used commercial and open source databases. It supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora. Migrations can be from on-premises databases to Amazon RDS or Amazon EC2, databases running on EC2 to RDS, or vice versa, as well as from one RDS database to another RDS database. It can also move data between SQL, NoSQL, and text-based targets.



In heterogeneous database migrations, the source and target database engines are different, as in the case of Oracle to Amazon Aurora, Oracle to PostgreSQL, or Microsoft SQL Server to MySQL migrations. In this case, the schema structure, data types, and database code of the source and target databases can be quite different, requiring a schema and code transformation before the data migration starts. That makes heterogeneous migrations a two-step process.
First use the AWS Schema Conversion Tool to convert the source schema and code to match that of the target database, and then use the AWS Database Migration Service to migrate data from the source database to the target database. All the required data type conversions will automatically be done by the AWS Database Migration Service during the migration. The source database can be located in your own premises outside of AWS, running on an Amazon EC2 instance, or it can be an Amazon RDS database. The target can be a database in Amazon EC2 or Amazon RDS.
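The DMS half of that two-step process (after AWS SCT has converted the schema) can be sketched with the AWS CLI. All identifiers, hostnames, and ARNs below are hypothetical placeholders:

```shell
# Source endpoint: the on-premises Oracle database (all values are placeholders).
aws dms create-endpoint --endpoint-identifier src-oracle \
    --endpoint-type source --engine-name oracle \
    --server-name onprem-db.example.com --port 1521 \
    --username dmsuser --password '********' --database-name ORCL

# Target endpoint: the PostgreSQL database in Amazon RDS.
aws dms create-endpoint --endpoint-identifier tgt-postgres \
    --endpoint-type target --engine-name postgres \
    --server-name mydb.abcdefgh.us-east-1.rds.amazonaws.com --port 5432 \
    --username dmsuser --password '********' --database-name appdb

# Replication task: full load plus ongoing change data capture (CDC),
# which keeps the source fully operational during the migration.
aws dms create-replication-task \
    --replication-task-identifier oracle-to-postgres \
    --source-endpoint-arn arn:aws:dms:us-east-1:111122223333:endpoint:SRCEXAMPLE \
    --target-endpoint-arn arn:aws:dms:us-east-1:111122223333:endpoint:TGTEXAMPLE \
    --replication-instance-arn arn:aws:dms:us-east-1:111122223333:rep:REPEXAMPLE \
    --migration-type full-load-and-cdc \
    --table-mappings file://mappings.json
```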
The option that says: Migrate the database from your on-premises data center using the AWS Server Migration Service (SMS). Afterwards, use the AWS Database Migration Service to convert and migrate your data to Amazon RDS for PostgreSQL database is incorrect because the AWS Server Migration Service (SMS) is primarily used to migrate virtual machines such as VMware vSphere and Windows Hyper-V. Although it is correct to use AWS Database Migration Service (DMS) to migrate the database, this option is still wrong because you should use the AWS Schema Conversion Tool to convert the source schema.
The option that says: Use a combination of AWS Data Pipeline service and CodeCommit to convert the source schema and code to match that of the target PostgreSQL database in RDS. Use AWS Batch with Spot EC2 instances to cost-effectively migrate the data from the source database to the target database in a batch process is incorrect because AWS Data Pipeline is primarily used to quickly and easily provision pipelines that remove the development and maintenance effort required to manage your daily data operations which lets you focus on generating insights from that data. Although you can use this to connect your data on your on-premises data center, it is not the most suitable service to use, compared with AWS DMS.
The option that says: Use the AWS Serverless Application Model (SAM) service to transform your database to PostgreSQL using AWS Lambda functions. Migrate the database to RDS using the AWS Database Migration Service (DMS) is incorrect because the Serverless Application Model (SAM) is an open-source framework that is primarily used to build serverless applications on AWS, and not for database migration.

References:
https://aws.amazon.com/dms/
https://aws.amazon.com/cloud-migration/

Check out these cheat sheets on AWS Database Migration Service and other common strategies for cloud migration:
https://tutorialsdojo.com/aws-cheat-sheet-aws-database-migration-service/
https://tutorialsdojo.com/aws-cheat-sheet-aws-migration-strategies-the-6-rs/
D


Explanation
Amazon RDS Read Replicas provide enhanced performance and durability for database (DB) instances. This feature makes it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads. You can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of your data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone DB instances.

Launching an RDS Read Replica linked to your Multi-AZ master database and generating reports from the Read Replica, then using QuickSight to visualize the reports is correct because it uses the Read Replicas of the database for the querying of reports.
Querying the synchronously replicated standby RDS MySQL instance maintained through Multi-AZ, and generating the report from the results then using Kibana to visualize the reports is incorrect because you cannot access the standby instance.
Continuously sending transaction logs from your master database to an S3 bucket and using S3 byte range requests to generate the reports off the S3 bucket then using QuickSight to visualize the reports is incorrect because sending the logs to S3 would add to the overhead on the database instance.
Generating the reports by querying the ElastiCache database caching tier then using Kibana to visualize the reports is incorrect because querying ElastiCache may not always give you the latest and entire data, as the cache may not always be up-to-date.
Reference:
https://aws.amazon.com/rds/details/read-replicas

Check out these Amazon RDS and Amazon QuickSight Cheat Sheets:
https://tutorialsdojo.com/aws-cheat-sheet-amazon-relational-database-service-amazon-rds/
https://tutorialsdojo.com/aws-cheat-sheet-amazon-quicksight/
You are working as an IT Consultant for one of the Big 4 accounting firms with multiple VPCs in various regions. As part of their security compliance, you need to set up a logging solution to track all of the changes made to their AWS resources in all regions, which host their enterprise accounting system such as EC2, S3, CloudFront and IAM. The logging solution must ensure the security, integrity, and durability of your log data in order to pass the compliance requirements. In addition, it should provide an event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and API calls. In this scenario, which of the following options is the best solution to use?

Create a new CloudTrail trail in a new S3 bucket using the AWS CLI and also pass the --is-multi-region-trail parameter then encrypt log files using KMS encryption. Enable Multi Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies.

Create a new CloudTrail trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi-region-trail and --include-global-service-events parameters then encrypt log files using KMS encryption. Enable Multi Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies.

Create a new CloudWatch trail in a new S3 bucket using the CloudTrail console and also pass the --is-multi-region-trail parameter then encrypt log files using KMS encryption. Enable Multi Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies.

Create a new CloudWatch trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi-region-trail and --include-global-service-events parameters then encrypt log files using KMS encryption. Enable Multi Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies.
B


Explanation
The accounting firm requires a secure and durable logging solution that will track all of the activities of all AWS resources (such as EC2, S3, CloudFront and IAM) in all regions. CloudTrail can be used for this case with a multi-region trail enabled. However, a multi-region trail by itself only covers the activities of regional services (EC2, S3, RDS, etc.) and not global services such as IAM, CloudFront, AWS WAF, and Route 53, which is why global service events must also be included.
The option that says: Create a new CloudTrail trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi-region-trail and --include-global-service-events parameters then encrypt log files using KMS encryption. Enable Multi Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies is correct because it provides security, integrity, and durability to your log data. In addition, it has the --include-global-service-events parameter enabled, which will also include activity from global services such as IAM, Route 53, AWS WAF, and CloudFront.
The option that says: Create a new CloudWatch trail in a new S3 bucket using the AWS CLI and also pass both the --is-multi-region-trail and --include-global-service-events parameters then encrypt log files using KMS encryption. Enable Multi Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies is incorrect because you need to use CloudTrail instead of CloudWatch.
The option that says: Create a new CloudWatch trail in a new S3 bucket using the CloudTrail console and also pass the --is-multi-region-trail parameter then encrypt log files using KMS encryption. Enable Multi Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies is incorrect because you need to use CloudTrail instead of CloudWatch. In addition, the --include-global-service-events parameter is also missing in this setup.
The option that says: Create a new CloudTrail trail in a new S3 bucket using the AWS CLI and also pass the --is-multi-region-trail parameter then encrypt log files using KMS encryption. Enable Multi Factor Authentication (MFA) Delete on the S3 bucket and ensure that only authorized users can access the logs by configuring the bucket policies is incorrect because the --is-multi-region-trail parameter is not enough; you also need to add the --include-global-service-events parameter. In addition, you cannot enable Global Service Events using the CloudTrail console, only by using the AWS CLI.
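As an illustration of the correct option, a multi-region trail with global service events could be created from the AWS CLI roughly as follows; the trail name, bucket name, and KMS key alias are hypothetical placeholders, not values from the scenario:

```shell
# Create a trail that records events from all regions and from
# global services (IAM, CloudFront, Route 53, AWS WAF, etc.).
# "my-audit-trail", "my-audit-logs-bucket" and the key alias
# are hypothetical names.
aws cloudtrail create-trail \
    --name my-audit-trail \
    --s3-bucket-name my-audit-logs-bucket \
    --is-multi-region-trail \
    --include-global-service-events \
    --kms-key-id alias/my-audit-key

# Start delivering log files for the new trail.
aws cloudtrail start-logging --name my-audit-trail
```

MFA Delete and the restrictive bucket policy from the option would then be applied to the log bucket separately through the S3 APIs.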

References:
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-concepts.html#cloudtrail-concepts-global-service-events
http://docs.aws.amazon.com/IAM/latest/UserGuide/cloudtrail-integration.html
https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-and-update-a-trail-by-using-the-aws-cli.html

Check out this AWS CloudTrail Cheat Sheet:
https://tutorialsdojo.com/aws-cheat-sheet-aws-cloudtrail/
A company is using AWS Organizations to manage their multi-account and multi-region AWS infrastructure. They are currently doing large-scale automation for their key daily processes to save costs. One of these key processes is sharing specified AWS resources, which an organizational account owns, with other AWS accounts of the company using AWS RAM. There is already an existing service which was previously managed by a separate organization account moderator, who also maintained the specific configuration details.
In this scenario, what could be a simple and effective solution that would allow the service to perform its tasks on the organization accounts on the moderator's behalf?

Configure a service-linked role for AWS RAM and modify the permissions policy to specify what the role can and cannot do. Lastly, modify the trust policy of the role so that other processes can utilize AWS RAM.

Attach an IAM role on the service detailing all the allowed actions that it will be able to perform. Install an SSM agent in each of the worker VMs. Use AWS Systems Manager to build automation workflows that involve the daily key processes.

Use trusted access by running the enable-sharing-with-aws-organization command in the AWS RAM CLI. Mirror the configuration changes that were performed by the account that previously managed this service.

Enable cross-account access with AWS Organizations in the Resource Access Manager Console. Mirror the configuration changes that were performed by the account that previously managed this service.
C


Explanation
You can use trusted access to enable an AWS service that you specify, called the trusted service, to perform tasks in your organization and its accounts on your behalf. This involves granting permissions to the trusted service but does not otherwise affect the permissions for IAM users or roles. When you enable access, the trusted service can create an IAM role called a service-linked role in every account in your organization. That role has a permissions policy that allows the trusted service to do the tasks that are described in that service's documentation. This enables you to specify settings and configuration details that you would like the trusted service to maintain in your organization's accounts on your behalf.



AWS Resource Access Manager (AWS RAM) enables you to share specified AWS resources that you own with other AWS accounts. To enable trusted access with AWS Organizations:
From the AWS RAM CLI, use the enable-sharing-with-aws-organizations command.
When trusted access is enabled, the IAM service-linked role AWSServiceRoleForResourceAccessManager can be created in accounts; it uses the AWSResourceAccessManagerServiceRolePolicy managed policy.
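Enabling trusted access for AWS RAM is then a single CLI call, run with credentials from the organization's management account (a sketch of the command named above; no additional parameters are required):

```shell
# Enable AWS RAM to share resources with all accounts in the
# organization without exchanging invitations. Must be run from
# the organization's management account.
aws ram enable-sharing-with-aws-organizations
```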

The option that says: Attach an IAM role on the service detailing all the allowed actions that it will be able to perform. Install an SSM agent in each of the worker VMs. Use AWS Systems Manager to build automation workflows that involve the daily key processes is incorrect because this is not the simplest way to automate interaction of AWS RAM with AWS Organizations. AWS Systems Manager is a tool that helps with automation of EC2 instances, on-premises servers, and other virtual machines. It might not support all the services being used by the key processes.
The option that says: Configure a service-linked role for AWS RAM and modify the permissions policy to specify what the role can and cannot do. Lastly, modify the trust policy of the role so that other processes can utilize AWS RAM is incorrect. This is not the simplest solution for integrating AWS RAM and AWS Organizations since using AWS Organization's trusted access will create the service-linked role for you. Also, the trust policy of a service-linked role cannot be modified. Only the linked AWS service can assume a service-linked role, which is why you cannot modify the trust policy of a service-linked role.
The option that says: Enable cross-account access with AWS Organizations in the Resource Access Manager Console. Mirror the configuration changes that were performed by the account that previously managed this service is incorrect because you should enable trusted access to AWS RAM, not cross-account access.

References:
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_integrate_services.html
https://docs.aws.amazon.com/organizations/latest/userguide/services-that-can-integrate-ram.html
https://aws.amazon.com/blogs/security/introducing-an-easier-way-to-delegate-permissions-to-aws-services-service-linked-roles/
A Solutions Architect has been assigned to develop a workflow to ensure that the required patches of all of their Windows EC2 instances are properly identified and applied automatically. To maintain their system uptime requirements, it is of utmost importance to ensure that the EC2 instance reboots do not occur at the same time on all of their Windows instances. This is to avoid any loss of revenue that could be caused by any unavailability issues of their systems.
Which of the following will meet the above requirements?

Create a Patch Group with unique tags that you will assign to all of your EC2 Windows Instances. Associate the predefined AWS-DefaultPatchBaseline baseline on both patch groups. Create a CloudWatch Events rule configured to use a cron expression to automate the execution of patching in a given schedule using the AWS Systems Manager Run command. Set up an AWS Systems Manager State Manager document to define custom commands which will be executed during patch execution.

Create two Patch Groups with unique tags that you will assign to all of your EC2 Windows Instances. Associate the predefined AWS-DefaultPatchBaseline baseline on both patch groups. Set up two non-overlapping maintenance windows and associate each with a different patch group. Using Patch Group tags, register targets with specific maintenance windows and lastly, assign the AWS-RunPatchBaseline document as a task within each maintenance window which has a different processing start time.

Create a Patch Group with unique tags that you will assign to all of your EC2 Windows Instances. Associate the predefined AWS-DefaultPatchBaseline baseline on your patch group. Set up a maintenance window and associate it with your patch group. Assign the AWS-RunPatchBaseline document as a task within your maintenance window.

Create two Patch Groups with unique tags that you will assign to all of your EC2 Windows Instances. Associate the predefined AWS-DefaultPatchBaseline baseline on both patch groups. Create two CloudWatch Events rules which are configured to use a cron expression to automate the execution of patching for the two Patch Groups using the AWS Systems Manager Run command. Set up an AWS Systems Manager State Manager document to define custom commands which will be executed during patch execution.
B


Explanation
AWS Systems Manager Patch Manager automates the process of patching managed instances with both security-related and other types of updates. You can use Patch Manager to apply patches for both operating systems and applications.
You can patch fleets of Amazon EC2 instances or your on-premises servers and virtual machines (VMs) by operating system type. This includes supported versions of Windows Server, Ubuntu Server, Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES), CentOS, Amazon Linux, and Amazon Linux 2. You can scan instances to see only a report of missing patches, or you can scan and automatically install all missing patches.



Patch Manager uses patch baselines, which include rules for auto-approving patches within days of their release, as well as a list of approved and rejected patches. You can install patches on a regular basis by scheduling patching to run as a Systems Manager maintenance window task. You can also install patches individually or to large groups of instances by using Amazon EC2 tags. You can add tags to your patch baselines themselves when you create or update them.
You can use a patch group to associate instances with a specific patch baseline. Patch groups help ensure that you are deploying the appropriate patches, based on the associated patch baseline rules, to the correct set of instances. Patch groups can also help you avoid deploying patches before they have been adequately tested. For example, you can create patch groups for different environments (such as Development, Test, and Production) and register each patch group to an appropriate patch baseline.



When you run AWS-RunPatchBaseline, you can target managed instances using their instance ID or tags. SSM Agent and Patch Manager will then evaluate which patch baseline to use based on the patch group value that you added to the instance.
You create a patch group by using Amazon EC2 tags. Unlike other tagging scenarios across Systems Manager, a patch group must be defined with the tag key: Patch Group. Note that the key is case-sensitive. You can specify any value, for example, "web servers," but the key must be Patch Group.
The AWS-DefaultPatchBaseline baseline is primarily used to approve all Windows Server operating system patches that are classified as "CriticalUpdates" or "SecurityUpdates" and that have an MSRC severity of "Critical" or "Important". Patches are auto-approved seven days after release.
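A sketch of how the correct option could be wired up with the AWS CLI follows; the window names, cron schedules, window ID, and target ID are hypothetical placeholders, and the two schedules must be chosen so the windows never overlap:

```shell
# First maintenance window (for patch group "GroupA", 02:00 UTC Sundays).
aws ssm create-maintenance-window \
    --name "PatchWindowA" \
    --schedule "cron(0 2 ? * SUN *)" \
    --duration 3 --cutoff 1 \
    --allow-unassociated-targets

# Second, non-overlapping window (for patch group "GroupB", 06:00 UTC Sundays).
aws ssm create-maintenance-window \
    --name "PatchWindowB" \
    --schedule "cron(0 6 ? * SUN *)" \
    --duration 3 --cutoff 1 \
    --allow-unassociated-targets

# Register instances tagged "Patch Group: GroupA" with the first window
# (mw-0123456789abcdef0 stands in for the window ID returned above).
aws ssm register-target-with-maintenance-window \
    --window-id mw-0123456789abcdef0 \
    --resource-type INSTANCE \
    --targets "Key=tag:Patch Group,Values=GroupA"

# Run the AWS-RunPatchBaseline document against those targets.
aws ssm register-task-with-maintenance-window \
    --window-id mw-0123456789abcdef0 \
    --task-arn "AWS-RunPatchBaseline" \
    --task-type RUN_COMMAND \
    --targets "Key=WindowTargetIds,Values=<window-target-id>" \
    --max-concurrency "10%" --max-errors 1
```

The same target and task registration would be repeated for the second window so that each patch group reboots at a different time.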
Hence, the option that says: Create two Patch Groups with unique tags that you will assign to all of your EC2 Windows Instances. Associate the predefined AWS-DefaultPatchBaseline baseline on both patch groups. Set up two non-overlapping maintenance windows and associate each with a different patch group. Using Patch Group tags, register targets with specific maintenance windows and lastly, assign the AWS-RunPatchBaseline document as a task within each maintenance window which has a different processing start time is the correct answer as it properly uses two Patch Groups, non-overlapping maintenance windows and the AWS-DefaultPatchBaseline baseline to ensure that the EC2 instance reboots do not occur at the same time.
The option that says: Create a Patch Group with unique tags that you will assign to all of your EC2 Windows Instances. Associate the predefined AWS-DefaultPatchBaseline baseline on your patch group. Set up a maintenance window and associate it with your patch group. Assign the AWS-RunPatchBaseline document as a task within your maintenance window is incorrect because although it is correct to use a Patch Group, you must create another Patch Group to avoid any unavailability issues. Having two non-overlapping maintenance windows will ensure that there will be another set of running Windows EC2 instances while the other set is being patched.
The option that says: Create two Patch Groups with unique tags that you will assign to all of your EC2 Windows Instances. Associate the predefined AWS-DefaultPatchBaseline baseline on both patch groups. Create two CloudWatch Events rules which are configured to use a cron expression to automate the execution of patching for the two Patch Groups using the AWS Systems Manager Run command. Set up an AWS Systems Manager State Manager document to define custom commands which will be executed during patch execution is incorrect because the AWS Systems Manager Run Command is primarily used to remotely manage the configuration of your managed instances while AWS Systems Manager State Manager is just a configuration management service that automates the process of keeping your Amazon EC2 and hybrid infrastructure in a state that you define. These two services, including CloudWatch Events, are not suitable to be used in this scenario. The better solution would be to use AWS Systems Manager Maintenance Windows which lets you define a schedule for when to perform potentially disruptive actions on your instances such as patching an operating system, updating drivers, or installing software or patches.
The option that says: Create a Patch Group with unique tags that you will assign to all of your EC2 Windows Instances. Associate the predefined AWS-DefaultPatchBaseline baseline on both patch groups. Create a CloudWatch Events rule configured to use a cron expression to automate the execution of patching in a given schedule using the AWS Systems Manager Run command. Set up an AWS Systems Manager State Manager document to define custom commands which will be executed during patch execution is incorrect because, for the same reasons mentioned above, you have to use Maintenance Windows for scheduling the patches, and you also need to set up two Patch Groups in this scenario instead of one.

References:
https://aws.amazon.com/blogs/mt/patching-your-windows-ec2-instances-using-aws-systems-manager-patch-manager/
https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-ssm-documents.html
https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-patch-scheduletasks.html

Check out this AWS Systems Manager Cheat Sheet:
https://tutorialsdojo.com/aws-cheat-sheet-aws-systems-manager/
A data analytics startup has been chosen to develop a data analytics system that will track all statistics in the Fédération Internationale de Football Association (FIFA) World Cup, which will also be used by other 3rd-party analytics sites. The system will record, store and provide statistical data reports about the top scorers, goal scores for each team, average goals, average passes and average yellow/red cards per match and many other details. FIFA fans all over the world will frequently access the statistics reports every day and thus, the data should be durably stored, highly available and highly scalable. In addition, the data analytics system will allow the users to vote for the best male and female FIFA player as well as the best male and female coach. Due to the popularity of the FIFA World Cup event, it is projected that there will be over 10 million queries on game day, which could spike to 30 million queries over the course of time.
Which of the following is the most cost-effective solution that will meet these requirements?

1. Launch a Multi-AZ MySQL RDS instance.
2. Query the RDS instance and store the results in a DynamoDB table.
3. Generate reports from DynamoDB table.
4. Delete the old DynamoDB tables every day.

1. Launch a MySQL database in Multi-AZ RDS deployments configuration.
2. Configure the application to generate reports from ElastiCache to improve the read performance of the system.
3. Utilize the default expire parameter for items in ElastiCache.

1. Generate the FIFA reports from MySQL database in Multi-AZ RDS deployments configuration with Read Replicas.
2. Set up a batch job that puts reports in an S3 bucket.
3. Launch a CloudFront distribution to cache the content with a TTL set to expire objects daily.

1. Launch a MySQL database in Multi-AZ RDS deployments configuration with Read Replicas.
2. Generate the FIFA reports by querying the Read Replica.
3. Configure a daily job that performs a daily table cleanup.
C


Explanation
In this scenario, you are required to have the following:
A durable storage for the generated reports.
A database that is highly available and can scale to handle millions of queries.
A Content Delivery Network that can distribute the report files to the users all over the world.
Hence, the following option is the best solution that satisfies all of these requirements. S3 provides the durable storage; Multi-AZ RDS with Read Replicas provide a scalable and highly available database and CloudFront provides the CDN:
1. Generate the FIFA reports from MySQL database in Multi-AZ RDS deployments configuration with Read Replicas.
2. Set up a batch job that puts reports in an S3 bucket.
3. Launch a CloudFront distribution to cache the content with a TTL set to expire objects daily.
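Step 2 of the correct option could look like the following in a scheduled batch job; the bucket and file names are hypothetical, and the Cache-Control header complements the CloudFront TTL so cached reports expire after a day:

```shell
# Upload the day's generated report to S3 with a 24-hour cache
# lifetime so CloudFront edge caches expire the object daily.
aws s3 cp ./reports/daily-stats.json \
    s3://fifa-reports-bucket/reports/daily-stats.json \
    --cache-control "max-age=86400"
```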
The following option is incorrect because although the database is scalable and highly available, it neither has any durable data storage nor a CDN:
1. Launch a MySQL database in Multi-AZ RDS deployments configuration with Read Replicas.
2. Generate the FIFA reports by querying the Read Replica.
3. Configure a daily job that performs a daily table cleanup.
The following option is incorrect because although this option handles and provides a better read capability for the system, it is still lacking a durable storage and a CDN:
1. Launch a MySQL database in Multi-AZ RDS deployments configuration.
2. Configure the application to generate reports from ElastiCache to improve the read performance of the system.
3. Utilize the default expire parameter for items in ElastiCache.
The following option is incorrect because it is not a cost-effective solution to maintain both RDS and a DynamoDB instance:
1. Launch a Multi-AZ MySQL RDS instance.
2. Query the RDS instance and store the results in a DynamoDB table.
3. Generate reports from DynamoDB table.
4. Delete the old DynamoDB tables every day.

References:
https://aws.amazon.com/rds/details/multi-az/
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_BestPractices.html
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html

Check out this Amazon RDS Cheat Sheet:
https://tutorialsdojo.com/aws-cheat-sheet-amazon-relational-database-service-amazon-rds/
An online medical record system is using a fleet of Windows EC2 instances with several EBS volumes attached to them. Since the records that they are storing are confidential health files of their patients, there is a need to ensure that the latest security patches are installed on the EC2 instances. In addition, you also have to implement a system in your cloud architecture which checks whether all of your EC2 instances are using an approved Amazon Machine Image (AMI). The system that you will implement should not impede developers from launching instances using an unapproved AMI, but you still have to be notified if there are non-compliant EC2 instances in your VPC.
Which of the following should you implement to protect and monitor all of your instances as required above? (Choose 2)

Use AWS Shield Advanced to automatically patch all of your EC2 instances and detect non-compliant EC2 instances which do not use approved AMIs.

Use the AWS Config Managed Rule which automatically checks whether your running EC2 instances are using approved AMIs. Set up CloudWatch Alarms to notify you if there are any non-compliant instances running in your VPC.

Create an IAM policy that will restrict the developers from launching EC2 instances with an unapproved AMI.

Set up Amazon GuardDuty that continuously monitors your instances if the latest security patches are installed and if there is an instance that is using an unapproved AMI. Use CloudWatch Alarms to notify you if there are any non-compliant instances running in your VPC.

Set up a patch baseline which defines which patches are approved for installation on your instances using AWS Systems Manager Patch Manager.
C E

Explanation
In this scenario, you can use a combination of AWS Config Managed Rules and AWS Systems Manager Patch Manager to meet the requirements. Hence, the options that say: Set up a patch baseline which defines which patches are approved for installation on your instances using AWS Systems Manager Patch Manager and Use the AWS Config Managed Rule which automatically checks whether your running EC2 instances are using approved AMIs. Set up CloudWatch Alarms to notify you if there are any non-compliant instances running in your VPC are the correct answers.
AWS Systems Manager Patch Manager automates the process of patching managed instances with security-related updates. For Linux-based instances, you can also install patches for non-security updates. You can patch fleets of Amazon EC2 instances or your on-premises servers and virtual machines (VMs) by operating system type. This includes supported versions of Windows Server, Ubuntu Server, Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES), CentOS, Amazon Linux, and Amazon Linux 2. You can scan instances to see only a report of missing patches, or you can scan and automatically install all missing patches.



Patch Manager uses patch baselines, which include rules for auto-approving patches within days of their release, as well as a list of approved and rejected patches. You can install patches on a regular basis by scheduling patching to run as a Systems Manager Maintenance Window task. You can also install patches individually or to large groups of instances by using Amazon EC2 tags. You can add tags to your patch baselines themselves when you create or update them.
AWS Config provides AWS managed rules, which are predefined, customizable rules that AWS Config uses to evaluate whether your AWS resources comply with common best practices. For example, you could use a managed rule to quickly start assessing whether your Amazon Elastic Block Store (Amazon EBS) volumes are encrypted or whether specific tags are applied to your resources. You can set up and activate these rules without writing the code to create an AWS Lambda function, which is required if you want to create custom rules. The AWS Config console guides you through the process of configuring and activating a managed rule. You can also use the AWS Command Line Interface or AWS Config API to pass the JSON code that defines your configuration of a managed rule.
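For this scenario, the relevant managed rule is APPROVED_AMIS_BY_ID, which could be activated from the AWS CLI along these lines; the rule name and AMI ID below are hypothetical placeholders:

```shell
# Activate the APPROVED_AMIS_BY_ID managed rule so AWS Config flags
# (but does not block) instances launched from unapproved AMIs.
aws configservice put-config-rule --config-rule '{
  "ConfigRuleName": "approved-amis-check",
  "Source": {
    "Owner": "AWS",
    "SourceIdentifier": "APPROVED_AMIS_BY_ID"
  },
  "InputParameters": "{\"amiIds\": \"ami-0123456789abcdef0\"}",
  "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Instance"]}
}'
```

A CloudWatch alarm or event rule on the rule's compliance state would then deliver the notification the scenario asks for.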
Creating an IAM policy that will restrict the developers from launching EC2 instances with an unapproved AMI is incorrect because although you can use an IAM policy to prohibit your developers from launching unapproved AMIs, this will impede their work, which violates what the scenario requires. Remember that the scenario says that the system you will implement should not impede developers from launching instances using an unapproved AMI.
The option that says: Set up Amazon GuardDuty that continuously monitors your instances if the latest security patches are installed and if there is an instance that is using an unapproved AMI. Use CloudWatch Alarms to notify you if there are any non-compliant instances running in your VPC is incorrect because Amazon GuardDuty is primarily used as a threat detection service that continuously monitors for malicious or unauthorized behavior to help you protect your AWS accounts and workloads. It monitors for activity such as unusual API calls or potentially unauthorized deployments that indicate a possible account compromise. However, it does not check whether your EC2 instances are using an approved AMI or not.
Using AWS Shield Advanced to automatically patch all of your EC2 instances and detecting non-compliant EC2 instances which do not use approved AMIs is incorrect because the AWS Shield Advanced service is most suitable for preventing DDoS attacks on your AWS resources. It cannot check the specific AMIs that your EC2 instances are using.

References:
https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-patch.html
https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_use-managed-rules.html
https://docs.aws.amazon.com/config/latest/developerguide/approved-amis-by-id.html

Check out these cheat sheets on AWS Config and AWS Systems Manager:
https://tutorialsdojo.com/aws-cheat-sheet-aws-config/
https://tutorialsdojo.com/aws-cheat-sheet-aws-systems-manager/
A


Explanation
The default health checks for an Auto Scaling group are EC2 status checks only. If an instance fails these status checks, the Auto Scaling group considers the instance unhealthy and replaces it. The health status of an Auto Scaling instance is either healthy or unhealthy. All instances in your Auto Scaling group start in the healthy state. Instances are assumed to be healthy unless Amazon EC2 Auto Scaling receives notification that they are unhealthy. This notification can come from one or more of the following sources:
- Amazon EC2
- Elastic Load Balancing (ELB)
- Custom health check
There are certain benefits of using ELB health checks as opposed to the default EC2 status checks. They can monitor whether your application is running on a certain port (e.g. 3000), which you cannot do with a regular EC2 status check. In addition, you can tune many other health check settings to suit your requirements, such as HealthyThresholdCount, UnhealthyThresholdCount, and HealthCheckPath.
After Amazon EC2 Auto Scaling marks an instance as unhealthy, it is scheduled for replacement. If you do not want instances to be replaced, you can suspend the health check process for any individual Auto Scaling group.



If you configure the Auto Scaling group to use Elastic Load Balancing health checks, it considers the instance unhealthy if it fails either the EC2 status checks or the load balancer health checks. If you attach multiple load balancers to an Auto Scaling group, all of them must report that the instance is healthy in order for it to consider the instance healthy. If one load balancer reports an instance as unhealthy, the Auto Scaling group replaces the instance, even if other load balancers report it as healthy.
That is why changing the health check type of your Auto Scaling group to ELB is the correct answer.
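Switching an existing group to ELB health checks is a one-line change with the AWS CLI; the group name and grace period below are hypothetical placeholders:

```shell
# Change the group's health checks from the default EC2 status checks
# to ELB health checks, giving new instances 300 seconds to warm up
# before health checks begin.
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name my-asg \
    --health-check-type ELB \
    --health-check-grace-period 300
```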
Enabling the Proxy Protocol in the ELB is incorrect because Proxy Protocol is just an Internet protocol used to carry connection information from the source requesting the connection to the destination for which the connection was requested. It does not solve the health checks issue in the ELB.
Increasing the value for the Health check interval set on the Elastic Load Balancer is incorrect because increasing the health check interval set will not solve the problem.
Using CloudWatch to monitor the EC2 instances is incorrect because simply adding a CloudWatch monitor will not solve the issue on the ELB health checks. Remember that the issue here is determining the source that sends the Auto Scaling group a notification that the instances are unhealthy. In this scenario, it is apparent that the Auto Scaling group is using the default EC2 health check type, instead of an ELB type.

References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/healthcheck.html
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-health-checks.html
https://aws.amazon.com/elasticloadbalancing/

Check out this AWS Elastic Load Balancing (ELB) Cheat Sheet:
https://tutorialsdojo.com/aws-cheat-sheet-aws-elastic-load-balancing-elb/

EC2 Instance Health Check vs ELB Health Check vs Auto Scaling and Custom Health Check:
https://tutorialsdojo.com/aws-cheat-sheet-ec2-instance-health-check-vs-elb-health-check-vs-auto-scaling-and-custom-health-check-2/

Here is an additional tutorial on why an Amazon EC2 Auto Scaling group terminates a healthy instance:
https://youtu.be/_ew-J3DQKZg
B

Explanation
The correct answer is Synchronous replication, as RDS Multi-AZ deployments use synchronous replication.
Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
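As a sketch, provisioning a Multi-AZ instance only requires adding the --multi-az flag at creation time; every other value below is a hypothetical placeholder:

```shell
# Create a MySQL instance with a synchronously replicated standby
# in a different Availability Zone.
aws rds create-db-instance \
    --db-instance-identifier my-mysql-db \
    --engine mysql \
    --db-instance-class db.m5.large \
    --allocated-storage 100 \
    --master-username admin \
    --master-user-password 'REPLACE_ME' \
    --multi-az
```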
Snapshot replication is incorrect because a snapshot replication refers to a replication method between databases where data is infrequently updated at specified times by copying data changes from the original database (publisher) to a receiving database (subscriber).
Asynchronous replication is incorrect because asynchronous replication is not implemented in RDS Multi-AZ deployments but in Read Replicas.
Cross-Region Replication (CRR) is incorrect because cross-region replication is a bucket-level configuration in S3 that enables automatic, asynchronous copying of objects across buckets in different AWS Regions.

Reference:
https://aws.amazon.com/rds/details/multi-az/

Check out this Amazon RDS Cheat Sheet:
https://tutorialsdojo.com/aws-cheat-sheet-amazon-relational-database-service-amazon-rds/

Tutorials Dojo's AWS Certified Solutions Architect Professional Exam Study Guide:
https://tutorialsdojo.com/aws-cheat-sheet-aws-certified-solutions-architect-professional/
A


Explanation
An ElastiCache Redis cluster provides varying levels of data durability, performance, and cost for implementing disaster recovery or fault tolerance of your cached data. You can choose the following options to improve the data durability of your ElastiCache cluster:
- Daily automatic backups
- Manual backups using Redis append-only file (AOF)
- Setting up Multi-AZ with Automatic Failover



By default, the data in a Redis node on ElastiCache resides only in memory and is not persistent. If a node is rebooted, or if the underlying physical server experiences a hardware failure, the data in the cache is lost.
If you require data durability, you can enable the Redis append-only file feature (AOF). When this feature is enabled, the node writes all of the commands that change cache data to an append-only file. When a node is rebooted and the cache engine starts, the AOF is "replayed"; the result is a warm Redis cache with all of the data intact.
AOF is disabled by default. To enable AOF for a cluster running Redis, you must create a parameter group with the appendonly parameter set to yes, and then assign that parameter group to your cluster. You can also modify the appendfsync parameter to control how often Redis writes to the AOF file.
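The parameter-group change described above can be sketched as two boto3-style requests (group name and engine family are hypothetical examples; the dicts are shown without making live API calls):

```python
# Create a parameter group, then enable the Redis append-only file in it.
# Group name and family are hypothetical examples.
create_group = {
    "CacheParameterGroupName": "redis-aof-enabled",
    "CacheParameterGroupFamily": "redis5.0",
    "Description": "Redis parameter group with AOF enabled",
}

aof_parameters = {
    "CacheParameterGroupName": "redis-aof-enabled",
    "ParameterNameValues": [
        # Turn on the append-only file
        {"ParameterName": "appendonly", "ParameterValue": "yes"},
        # How often Redis syncs the AOF to disk: "always", "everysec", or "no"
        {"ParameterName": "appendfsync", "ParameterValue": "everysec"},
    ],
}

# With boto3:
#   ec = boto3.client("elasticache")
#   ec.create_cache_parameter_group(**create_group)
#   ec.modify_cache_parameter_group(**aof_parameters)
# The parameter group is then assigned to the cluster at launch or via modify.
```

With `appendfsync` set to `everysec`, at most about one second of writes can be lost on a node reboot, which is the usual durability/performance trade-off for AOF.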
Hence, the correct answer is: Use the Redis append-only file feature (AOF) to record all of the commands that change cache data to an append-only file. When a node is rebooted and the cache engine starts, the AOF is "replayed" and the result is a warm Redis cache with all of the data intact.
Purchasing extra large reserved nodes for your new ElastiCache Redis cluster and enabling the Multi-AZ with Auto-Failover option upon launch is incorrect because although setting up Multi-AZ with automatic failover can improve the data durability of your ElastiCache layer, this is the most expensive option: purchasing reserved extra large nodes will significantly increase your overall cost. The top scorers data can easily be regenerated by the Lambda function, so simply using Redis AOF is enough to meet the requirement.
Enabling automatic backups and setting the backup retention period to maximum is incorrect because the data loss potential for daily scheduled backups is high (up to a day's worth of data). Since you will only backup your ElastiCache once a day, there will be hours of gap in between the scheduled backup. If you scheduled the backup every 1 AM and the outage happened in the middle of the day (1 PM), then the data loss will be up to 12 hours.
Integrating your ElastiCache with Amazon ES then enabling Kibana and LogStash to re-generate all of the data to the Amazon ElastiCache cluster in the event of an outage is incorrect because integrating ElastiCache and Amazon ES (ElasticSearch) will not improve the data durability of your cluster. Logstash simply provides a convenient way to use the bulk API to upload data into your Amazon ES domain with the S3 plugin while Kibana is just an open-source visualization tool designed to work with Elasticsearch.

References:
https://aws.amazon.com/premiumsupport/knowledge-center/fault-tolerance-elasticache/
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/AutoFailover.html
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/RedisAOF.html

Check out this Amazon Elasticache Cheat Sheet:
https://tutorialsdojo.com/aws-cheat-sheet-amazon-elasticache/

Redis Append-Only Files vs Redis Replication:
https://tutorialsdojo.com/aws-cheat-sheet-redis-append-only-files-vs-redis-replication/

Comparison of AWS Services Cheat Sheets:
https://tutorialsdojo.com/comparison-of-aws-services-for-udemy-students/
D


Explanation
A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC.



Setting up deny rules on your inbound Network Access control list associated with the web application tier subnet to block access to the group of attacking IP addresses is correct because you can add deny rules in Network ACL and block access to certain IP addresses.
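Network ACL rules are evaluated in ascending order by rule number, so the deny rule must carry a lower number than the subnet's existing allow rules. A boto3-style sketch of such an entry (the NACL ID is hypothetical and the CIDR is from the documentation range; the dict is shown without making a live API call):

```python
# An inbound NACL entry that denies all traffic from an attacking CIDR.
# The rule number must be lower than any allow rule so it wins evaluation.
# NetworkAclId is hypothetical; the CIDR is a documentation-range example.
deny_entry = {
    "NetworkAclId": "acl-0123456789abcdef0",
    "RuleNumber": 50,            # evaluated before a typical allow rule (100)
    "Protocol": "-1",            # -1 = all protocols
    "RuleAction": "deny",
    "Egress": False,             # inbound rule
    "CidrBlock": "203.0.113.0/24",
}

# With boto3:
#   boto3.client("ec2").create_network_acl_entry(**deny_entry)
```

Note that this explicit-deny capability is exactly what security groups lack, which is why the security group option in this question is wrong.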
Using AWS WAF to protect your VPC against web attacks and placing the online banking portal behind a CloudFront RTMP distribution is incorrect because although you can use AWS WAF to protect your application from common web attacks, an RTMP distribution is primarily used for real-time media streaming and not for web applications. You should place the portal behind a CloudFront Web distribution instead.
Using AWS Shield Advanced to protect your VPC from common, most frequently occurring network and transport layer DDoS attacks is incorrect because although you can definitely use AWS Shield Advanced in protecting your cloud infrastructure, it is not cost-effective as compared to blocking the IP addresses using Network ACL. Take note that there are two types of AWS Shield: the Standard one which is free and the Advanced type which has an additional cost of around $3,000 per month.
Creating inbound rules in the Security Group of your EC2 instances to block the attacking IP addresses is incorrect because you cannot deny port access as well as specific IP addresses using security groups. By default, all requests are denied in your security groups so you are required to explicitly allow access from a particular IP address, port or range to access your EC2 instances using your security groups.

Reference:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html

Check out this Amazon VPC Cheat Sheet:
https://tutorialsdojo.com/aws-cheat-sheet-amazon-vpc/
Company XYZ has hired you as a Solutions Architect for their Flight Deals web application which is currently hosted on their on-premises data center. The website hosts high-resolution photos of top tourist destinations in the world and uses a third-party payment platform to accept payments. Recently, they have invested heavily in their global marketing campaign and there is a high probability that the incoming traffic to their Flight Deals website will increase in the coming days. Due to a tight deadline, the company does not have the time to fully migrate the website to AWS. A set of security rules that block common attack patterns, such as SQL injection and cross-site scripting, should also be implemented to improve the website security.
Which of the following options will maintain the website's functionality despite the massive amount of incoming traffic?

Use the AWS Server Migration Service to easily migrate the website from your on-premises data center to your VPC. Create an Auto Scaling group to automatically scale the web tier based on the incoming traffic. Deploy AWS WAF on the Amazon CloudFront distribution to protect the website from common web attacks.

Generate an AMI based on the existing Flight Deals website. Launch the AMI to a fleet of EC2 instances with Auto Scaling group enabled, for it to automatically scale up or scale down based on the incoming traffic. Place these EC2 instances behind an ALB which can balance traffic between the web servers in the on-premises data center and the web servers hosted in AWS.

Use CloudFront to cache and distribute the high resolution images and other static assets of the website. Deploy AWS WAF on the Amazon CloudFront distribution to protect the website from common web attacks.

Create and configure an S3 bucket as a static website hosting. Move the web domain of the website from your on-premises data center to Route 53 then route the newly created S3 bucket as the origin. Enable Amazon S3 server-side encryption with AWS Key Management Service managed keys.
C


Explanation
Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the user is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance.
AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. AWS WAF gives you control over how traffic reaches your applications by enabling you to create security rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that filter out specific traffic patterns you define. You can get started quickly using Managed Rules for AWS WAF, a pre-configured set of rules managed by AWS or AWS Marketplace Sellers. The Managed Rules for WAF address issues like the OWASP Top 10 security risks. These rules are regularly updated as new issues emerge. AWS WAF includes a full-featured API that you can use to automate the creation, deployment, and maintenance of security rules.



AWS WAF is easy to deploy and protects applications deployed on Amazon CloudFront as part of your CDN solution, on an Application Load Balancer that fronts your origin servers, or on Amazon API Gateway for your APIs. There is no additional software to deploy, DNS configuration, SSL/TLS certificate to manage, or need for a reverse proxy setup. With AWS Firewall Manager integration, you can centrally define and manage your rules, and reuse them across all the web applications that you need to protect.
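The managed-rule deployment described above can be sketched as a WAFv2-style web ACL request attached to a CloudFront distribution. The ACL name and metric names are hypothetical examples; the dict is shown without making a live API call:

```python
# Request shape for a WAFv2 web ACL scoped to CloudFront, using an
# AWS Managed Rules group that covers common attacks (SQLi, XSS).
# ACL name and metric names are hypothetical examples.
web_acl = {
    "Name": "flight-deals-acl",
    "Scope": "CLOUDFRONT",           # CloudFront-scoped ACLs use us-east-1
    "DefaultAction": {"Allow": {}},  # allow unless a rule blocks
    "Rules": [
        {
            "Name": "common-rule-set",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesCommonRuleSet",
                }
            },
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "commonRules",
            },
        }
    ],
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "flightDealsAcl",
    },
}

# With boto3 (CLOUDFRONT scope requires the us-east-1 endpoint):
#   boto3.client("wafv2", region_name="us-east-1").create_web_acl(**web_acl)
```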
Hence, the option that says: Use CloudFront to cache and distribute the high resolution images and other static assets of the website. Deploy AWS WAF on the Amazon CloudFront distribution to protect the website from common web attacks is correct because CloudFront will provide the scalability the website needs without doing major infrastructure changes. Take note that the website has a lot of high-resolution images which can easily be cached using CloudFront to alleviate the massive incoming traffic going to the on-premises web server and also provide a faster page load time for the web visitors.
The option that says: Use the AWS Server Migration Service to easily migrate the website from your on-premises data center to your VPC. Create an Auto Scaling group to automatically scale the web tier based on the incoming traffic. Deploy AWS WAF on the Amazon CloudFront distribution to protect the website from common web attacks is incorrect as migrating to AWS would be time-consuming compared with simply using CloudFront. Although this option can provide a more scalable solution, the scenario says that the company does not have ample time to do the migration.
The option that says: Create and configure an S3 bucket as a static website hosting. Move the web domain of the website from your on-premises data center to Route 53 then route the newly created S3 bucket as the origin. Enable Amazon S3 server-side encryption with AWS Key Management Service managed keys is incorrect because the website is a dynamic website that accepts payments and bookings. Migrating your web domain to Route 53 may also take some time.
The option that says: Generate an AMI based on the existing Flight Deals website. Launch the AMI to a fleet of EC2 instances with Auto Scaling group enabled, for it to automatically scale up or scale down based on the incoming traffic. Place these EC2 instances behind an ALB which can balance traffic between the web servers in the on-premises data center and the web servers hosted in AWS is incorrect because it didn't mention any existing AWS Direct Connect or VPN connection. Although an Application Load Balancer can load balance the traffic between the EC2 instances in AWS Cloud and web servers located in the on-premises data center, your systems should be connected via Direct Connect or VPN connection first. In addition, the application seems to be used around the world because the company launched a global marketing campaign. Hence, CloudFront is a more suitable option for this scenario.

References:
https://aws.amazon.com/cloudfront/
https://aws.amazon.com/waf/

Check out this Amazon CloudFront Cheat Sheet:
https://tutorialsdojo.com/aws-cheat-sheet-amazon-cloudfront/
CD

Explanation
In this scenario, it is best to use AWS Shield Advanced to protect your cloud infrastructure against DDoS attacks, together with AWS Config, which continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. Hence, AWS Shield Advanced and AWS Config are the correct answers.
For higher levels of protection against attacks targeting your applications running on Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, and Amazon Route 53 resources, you can subscribe to AWS Shield Advanced. In addition to the network and transport layer protections that come with Standard, AWS Shield Advanced provides additional detection and mitigation against large and sophisticated DDoS attacks, near real-time visibility into attacks, and integration with AWS WAF, a web application firewall. AWS Shield Advanced also gives you 24x7 access to the AWS DDoS Response Team (DRT) and protection against DDoS related spikes in your Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, and Amazon Route 53 charges.



AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. With Config, you can review changes in configurations and relationships between AWS resources, dive into detailed resource configuration histories, and determine your overall compliance against the configurations specified in your internal guidelines. This enables you to simplify compliance auditing, security analysis, change management, and operational troubleshooting.
AWS Firewall Manager is incorrect because it is mainly used to simplify your AWS WAF administration and maintenance tasks across multiple accounts and resources. It does not protect your VPC against DDoS attacks.
AWS WAF is incorrect because even though it can help you block common attack patterns to your VPC such as SQL injection or cross-site scripting, this is still not enough to withstand DDoS attacks. It is better to use AWS Shield in this scenario.
AWS Systems Manager is incorrect because it is just a management service that helps you automatically collect software inventory, apply OS patches, and create system images, but this does not protect your VPC against DDoS attacks.

References:
https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html
https://aws.amazon.com/config/
https://aws.amazon.com/shield/

Check out these AWS Shield and AWS Config Cheat Sheets:
https://tutorialsdojo.com/aws-cheat-sheet-aws-shield/
https://tutorialsdojo.com/aws-cheat-sheet-aws-config/
A telecommunications company is planning to host a WordPress website on an Amazon ECS Cluster which uses the Fargate launch type. For security purposes, the database credentials should be provided to the WordPress image by using environment variables. Your manager instructed you to ensure that the credentials are secure when passed to the image and that they cannot be viewed on the cluster itself. The credentials must be kept in a dedicated storage with lifecycle management and key rotation.
Which of the following is the most suitable solution in this scenario that you can implement with the least effort?

In the ECS task definition file of the ECS Cluster, store the database credentials using Docker Secrets to centrally manage this sensitive data and securely transmit it to only those containers that need access to it. Secrets are encrypted during transit and at rest. A given secret is only accessible to those services which have been granted explicit access to it via IAM Role, and only while those service tasks are running.

Store the database credentials using the AWS Secrets Manager and then encrypt them using AWS KMS. Create an IAM Role for your Amazon ECS task execution role and reference it with your task definition which allows access to both KMS and AWS Secrets Manager. Within your container definition, specify secrets with the name of the environment variable to set in the container and the full ARN of the Secrets Manager secret which contains the sensitive data, to present to the container.

In the ECS task definition file of the ECS Cluster, store the database credentials and encrypt with KMS. Store the task definition JSON file in a private S3 bucket and ensure that HTTPS is enabled on the bucket to encrypt the data in-flight. Create an IAM role to the ECS task definition script that allows access to the specific S3 bucket and then pass the --cli-input-json parameter when calling the ECS register-task-definition. Reference the task definition JSON file in the S3 bucket which contains the database credentials.

Store the database credentials using the AWS Systems Manager Parameter Store and then encrypt them using AWS KMS. Create an IAM Role for your Amazon ECS task execution role and reference it with your task definition, which allows access to both KMS and the Parameter Store. Within your container definition, specify secrets with the name of the environment variable to set in the container and the full ARN of the Systems Manager Parameter Store parameter containing the sensitive data to present to the container.
B


Explanation
Amazon ECS enables you to inject sensitive data into your containers by storing your sensitive data in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters and then referencing them in your container definition. This feature is supported by tasks using both the EC2 and Fargate launch types.


Within your container definition, specify secrets with the name of the environment variable to set in the container and the full ARN of either the Secrets Manager secret or Systems Manager Parameter Store parameter containing the sensitive data to present to the container. The parameter that you reference can be from a different Region than the container using it, but must be from within the same account.
AWS Secrets Manager is a secrets management service that helps you protect access to your applications, services, and IT resources. This service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Using Secrets Manager, you can secure and manage secrets used to access resources in the AWS Cloud, on third-party services, and on-premises.
If you want a single store for configuration and secrets, you can use Parameter Store. If you want a dedicated secrets store with lifecycle management, use Secrets Manager. Hence, the correct answer is the option that says: Store the database credentials using the AWS Secrets Manager and then encrypt them using AWS KMS. Create an IAM Role for your Amazon ECS task execution role and reference it with your task definition which allows access to both KMS and AWS Secrets Manager. Within your container definition, specify secrets with the name of the environment variable to set in the container and the full ARN of the Secrets Manager secret which contains the sensitive data, to present to the container.
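The container definition described in the correct answer can be sketched as a task-definition fragment. All ARNs, names, and sizes here are hypothetical examples; the dict is shown without making a live API call:

```python
# Fragment of an ECS task definition (Fargate) that injects a database
# password from Secrets Manager as an environment variable.
# All ARNs, names, and sizes are hypothetical examples.
task_definition = {
    "family": "wordpress",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",
    "cpu": "512",
    "memory": "1024",
    # Execution role granting ECS permission to read the secret (and KMS key)
    "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    "containerDefinitions": [
        {
            "name": "wordpress",
            "image": "wordpress:latest",
            "secrets": [
                {
                    # Environment variable name seen inside the container
                    "name": "WORDPRESS_DB_PASSWORD",
                    # Full ARN of the Secrets Manager secret
                    "valueFrom": "arn:aws:secretsmanager:us-east-1:"
                                 "123456789012:secret:prod/db-password",
                }
            ],
        }
    ],
}

# With boto3:
#   boto3.client("ecs").register_task_definition(**task_definition)
```

The plaintext value never appears in the task definition or on the cluster; ECS resolves the ARN at task launch using the execution role.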
The option that says: Store the database credentials using the AWS Systems Manager Parameter Store and then encrypt them using AWS KMS. Create an IAM Role for your Amazon ECS task execution role and reference it with your task definition, which allows access to both KMS and the Parameter Store. Within your container definition, specify secrets with the name of the environment variable to set in the container and the full ARN of the Systems Manager Parameter Store parameter containing the sensitive data to present to the container is incorrect because although the use of Systems Manager Parameter Store in securing sensitive data in ECS is valid, this service doesn't provide dedicated storage with lifecycle management and key rotation, unlike Secrets Manager.
The option that says: In the ECS task definition file of the ECS Cluster, store the database credentials and encrypt with KMS. Store the task definition JSON file in a private S3 bucket and ensure that HTTPS is enabled on the bucket to encrypt the data in-flight. Create an IAM role to the ECS task definition script that allows access to the specific S3 bucket and then pass the --cli-input-json parameter when calling the ECS register-task-definition. Reference the task definition JSON file in the S3 bucket which contains the database credentials is incorrect because although the solution may work, it is not recommended to store sensitive credentials in S3. This entails a lot of overhead and manual configuration steps which can be simplified by simply using the Secrets Manager or Systems Manager Parameter Store.
The option that says: In the ECS task definition file of the ECS Cluster, store the database credentials using Docker Secrets to centrally manage this sensitive data and securely transmit it to only those containers that need access to it. Secrets are encrypted during transit and at rest. A given secret is only accessible to those services which have been granted explicit access to it via IAM Role, and only while those service tasks are running is incorrect because although you can use Docker Secrets to secure the sensitive database credentials, this feature is only applicable in Docker Swarm. In AWS, the recommended way to secure sensitive data is either through the use of Secrets Manager or Systems Manager Parameter Store.

References:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html
https://aws.amazon.com/blogs/mt/the-right-way-to-store-secrets-using-parameter-store/

Check out this Amazon ECS Cheat Sheet:
https://tutorialsdojo.com/aws-cheat-sheet-amazon-elastic-container-service-amazon-ecs/

Check out this AWS Secrets Manager Cheat Sheet:
https://tutorialsdojo.com/aws-cheat-sheet-aws-secrets-manager/
A fintech startup has developed a cloud-based payment processing system which accepts credit card payments as well as cryptocurrencies such as Bitcoin, Ripple and the like. The system is deployed in AWS which uses EC2, DynamoDB, S3, and CloudFront to process the payments. Since they are accepting credit card information from the users, they are required to be compliant with the Payment Card Industry Data Security Standard (PCI DSS). On the recent 3rd-party audit, it was found that the credit card numbers are not properly encrypted and hence, their system failed the PCI DSS compliance test. You were hired by the fintech startup to solve this issue so they can release the product in the market as soon as possible. In addition, you also have to improve performance by increasing the proportion of your viewer requests that are served from CloudFront edge caches instead of going to your origin servers for content.
In this scenario, what is the best option to protect and encrypt the sensitive credit card information of the users and to improve the cache hit ratio of your CloudFront distribution?

Add a custom SSL in the CloudFront distribution. Configure your origin to add User-Agent and Host headers to your objects to increase your cache hit ratio.

Create an origin access identity (OAI) and add it to the CloudFront distribution. Configure your origin to add User-Agent and Host headers to your objects to increase your cache hit ratio.

Configure the CloudFront distribution to use Signed URLs. Configure your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age to increase your cache hit ratio.

Configure the CloudFront distribution to enforce secure end-to-end connections to origin servers by using HTTPS and field-level encryption. Configure your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age to increase your cache hit ratio.
D

Explanation
Field-level encryption adds an additional layer of security along with HTTPS that lets you protect specific data throughout system processing so that only certain applications can see it. Field-level encryption allows you to securely upload user-submitted sensitive information to your web servers. The sensitive information provided by your clients is encrypted at the edge closer to the user and remains encrypted throughout your entire application stack, ensuring that only applications that need the data—and have the credentials to decrypt it—are able to do so.



To use field-level encryption, you configure your CloudFront distribution to specify the set of fields in POST requests that you want to be encrypted, and the public key to use to encrypt them. You can encrypt up to 10 data fields in a request. Hence, the correct answer for this scenario is the option that says: Configure the CloudFront distribution to enforce secure end-to-end connections to origin servers by using HTTPS and field-level encryption. Configure your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age to increase your cache hit ratio.
You can improve performance by increasing the proportion of your viewer requests that are served from CloudFront edge caches instead of going to your origin servers for content; that is, by improving the cache hit ratio for your distribution. To increase your cache hit ratio, you can configure your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age. The shorter the cache duration, the more frequently CloudFront forwards another request to your origin to determine whether the object has changed and, if so, to get the latest version.
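The Cache-Control directive described above is set on the origin objects themselves. A boto3-style sketch of uploading an asset to an S3 origin with a long max-age (bucket, key, and duration are hypothetical examples; the dict is shown without making a live API call):

```python
# Uploading an object to an S3 origin with a long Cache-Control max-age,
# so CloudFront can keep serving it from edge caches instead of refetching.
# Bucket, key, and duration are hypothetical examples.
put_request = {
    "Bucket": "payment-portal-assets",
    "Key": "images/logo.png",
    "Body": b"...image bytes...",
    "ContentType": "image/png",
    # One week; a longer max-age means fewer conditional requests to origin
    "CacheControl": "max-age=604800",
}

# With boto3:
#   boto3.client("s3").put_object(**put_request)

max_age = int(put_request["CacheControl"].split("=")[1])
```

The practical upper bound for `max-age` depends on how often the object changes; versioned object keys let you use very long values safely.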
The option that says: Add a custom SSL in the CloudFront distribution. Configure your origin to add User-Agent and Host headers to your objects to increase your cache hit ratio is incorrect as although it provides secure end-to-end connections to origin servers, it is better to add field-level encryption to protect the credit card information.
The option that says: Configure the CloudFront distribution to use Signed URLs. Configure your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age to increase your cache hit ratio is incorrect because a Signed URL provides a way to distribute private content but it doesn't encrypt the sensitive credit card information.
The option that says: Create an Origin Access Identity (OAI) and add it to the CloudFront distribution. Configure your origin to add User-Agent and Host headers to your objects to increase your cache hit ratio is incorrect because an OAI is mainly used to restrict access to objects in an S3 bucket; it does not provide encryption for specific fields.

References:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/field-level-encryption.html#field-level-encryption-setting-up
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cache-hit-ratio.html

Check out this Amazon CloudFront Cheat Sheet:
https://tutorialsdojo.com/aws-cheat-sheet-amazon-cloudfront/

Tutorials Dojo's AWS Certified Solutions Architect Professional Exam Study Guide:
https://tutorialsdojo.com/aws-cheat-sheet-aws-certified-solutions-architect-professional/
You are working as a Senior Solutions Architect for a leading accounting firm which conducts monthly performance checks of their Windows and Linux EC2 instances. They have more than 200 On-Demand EC2 instances running in their production environment and you were instructed to ensure that each instance has a logging feature that collects various system details such as memory usage, disk space, and other metrics. The system logs will be analyzed using AWS Analytics tools and the results will be stored to an S3 bucket.
Which of the following is the most efficient way to collect and analyze logs from the instances with minimal effort?

Install AWS SDK on each On-Demand EC2 instance and create a custom daemon script that would collect and push data to CloudWatch Logs periodically. Enable CloudWatch detailed monitoring and use CloudWatch Logs Insights to analyze the log data of all instances.

Set up and install the AWS Systems Manager Agent (SSM Agent) on each On-Demand EC2 instance which will automatically collect and push data to CloudWatch Logs. Analyze the log data with CloudWatch Logs Insights.

Set up and install AWS Inspector Agent on each On-Demand EC2 instance which will collect and push data to CloudWatch Logs periodically. Set up a CloudWatch dashboard to properly analyze the log data of all instances.

Set up and configure a unified CloudWatch Logs agent in each On-Demand EC2 instance which will automatically collect and push data to CloudWatch Logs. Analyze the log data with CloudWatch Logs Insights.
You are working as a Solutions Architect for a leading media company that stores their video animations in several S3 buckets. To protect the digital assets of the company, you were instructed to set up a monitoring system that will notify the IT Operations team if there are any buckets that allow public read or public write access. The team will then liaise with the compliance team if the bucket is indeed intended to be accessed publicly.
Which of the following is the MOST suitable solution that you can do in order to meet this requirement?

Enable Amazon S3 Block Public Access with BlockPublicAcls, IgnorePublicAcls, BlockPublicPolicy, and RestrictPublicBuckets settings.

Enable AWS Organizations and set up a Service Control Policy that will restrict all users to upload public objects to the S3 bucket. Set up a Lambda function and an SNS topic that will notify the IT Operations team if there is an attempt made by a user to upload a public object.

Enable AWS Config to monitor the S3 bucket ACLs and policies for compliance violations. Create an IAM Role and Policy that grants a Lambda function permission to read S3 bucket policies and send alerts through SNS. Create and configure a CloudWatch Events rule that triggers Lambda when AWS Config detects a policy violation. Create a Lambda function that uses the IAM role to review S3 bucket ACLs and policies, and notify the IT Operations team of out-of-compliance policies.

Use the s3-bucket-public-read-prohibited and s3-bucket-public-write-prohibited managed rules in AWS Config that will automatically notify the IT Operations team if someone uploaded publicly accessible and writable files to the S3 bucket.
An international foreign exchange company has a serverless forex trading application which was built using AWS SAM and is hosted on AWS Serverless Application Repository. They have millions of users worldwide who use their online portal 24/7 to trade currencies. However, they have lately been receiving a lot of complaints that it takes a few minutes for their users to log in to their portal, including occasional HTTP 504 errors. As the Solutions Architect, you are tasked to optimize the system and significantly reduce the login time to improve customer satisfaction.
Which of the following should you implement in order to improve the performance of the application with minimal cost? (Choose 2)

Set up multiple, geographically dispersed VPCs in various AWS regions, then create a transit VPC to connect all of your resources. Deploy the Lambda function in each region using AWS SAM in order to handle the requests faster.

Set up an origin failover by creating an origin group with two origins. Specify one as the primary origin and the other as the second origin which CloudFront automatically switches to when the primary origin returns specific HTTP status code failure responses.

Deploy your application to multiple AWS regions to accommodate your users around the world. Set up a Route 53 record with latency routing policy to route incoming traffic to the region that provides the best latency to the user.

Increase the cache hit ratio of your CloudFront distribution by configuring your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age.

Use Lambda@Edge to allow your Lambda functions to customize content that CloudFront delivers and to execute the authentication process in AWS locations closer to the users.
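To illustrate the Lambda@Edge option, here is a minimal viewer-request handler sketch in Python. The event shape follows CloudFront's Lambda@Edge record format; the real credential check against a user store is omitted, and only the presence of an `Authorization` header is tested:

```python
def handler(event, context):
    """Lambda@Edge viewer-request sketch: reject unauthenticated requests
    at the edge location, before they ever reach the origin."""
    request = event["Records"][0]["cf"]["request"]
    headers = request.get("headers", {})
    if "authorization" not in headers:
        # Short-circuit with a 401 generated at the edge.
        return {
            "status": "401",
            "statusDescription": "Unauthorized",
            "headers": {
                "www-authenticate": [{"key": "WWW-Authenticate", "value": "Bearer"}]
            },
        }
    # Returning the request lets CloudFront continue to the origin.
    return request
```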
A government agency has multiple VPCs in various AWS regions across the United States that need to be linked up to an on-premises central office network in Washington, D.C. The central office requires inter-region VPC access over a private network that is dedicated to each region for enhanced security and more predictable data transfer performance. Your team is tasked to quickly build this network mesh and to minimize the management overhead to maintain these connections.
Which of the following options is the most secure, highly available, and durable solution that you should use to set up this kind of interconnectivity?

Implement a hub-and-spoke network topology in each region that routes all traffic through a network transit center using AWS Transit Gateway. Route traffic between VPCs and the on-premises network over AWS Site-to-Site VPN.

Create a link aggregation group (LAG) in the central office network to aggregate multiple connections at a single AWS Direct Connect endpoint in order to treat them as a single, managed connection. Use AWS Direct Connect Gateway to achieve inter-region VPC access to all of your AWS resources. Create a virtual private gateway in each VPC and then create a public virtual interface for each AWS Direct Connect connection to the Direct Connect Gateway.

Utilize AWS Direct Connect Gateway for inter-region VPC access. Create a virtual private gateway in each VPC, then create a private virtual interface for each AWS Direct Connect connection to the Direct Connect gateway.

Enable inter-region VPC peering which allows peering relationships to be established between VPCs across different AWS regions. This will ensure that the traffic will always stay on the global AWS backbone and will never traverse the public Internet.
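As a sketch of the Direct Connect gateway option, these helpers build the request parameters for `create_direct_connect_gateway` and `create_direct_connect_gateway_association` (gateway name, ASN, and IDs are placeholders; the boto3 calls themselves require credentials and are not executed here):

```python
def build_dx_gateway_request(name, amazon_side_asn=64512):
    """Request parameters for directconnect.create_direct_connect_gateway.
    The ASN is the Amazon-side private ASN, a placeholder value here."""
    return {"directConnectGatewayName": name, "amazonSideAsn": amazon_side_asn}

def build_association_request(gateway_id, vgw_id):
    """Associate one VPC's virtual private gateway with the Direct Connect
    gateway; repeat per VPC to reach VPCs in multiple Regions."""
    return {"directConnectGatewayId": gateway_id, "virtualGatewayId": vgw_id}
```

A private virtual interface on each Direct Connect connection then attaches to the gateway, giving the central office private inter-Region reachability.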
Your startup company is building a web app that lets users post photos of good deeds in their neighborhood with a 143-character caption/article. You decided to write the application in ReactJS, a popular JavaScript framework, so that it would run on the broadest range of browsers, mobile phones, and tablets. Your app should provide access to Amazon DynamoDB to store the caption. The initial prototype shows that there aren't large spikes in usage. Which option provides the most cost-effective and scalable architecture for this application?

Register the web application with a Web Identity Provider such as Google, Facebook, Amazon or from any other popular social site. Create an IAM role for that web provider and set up permissions for the IAM role to allow PUT operations in DynamoDB. Serve your web application from an NGINX server hosted on a fleet of EC2 instances, with a load balancer and auto scaling. Add an IAM role to the EC2 instance to allow PUT operations to DynamoDB tables.

Configure the ReactJS client with temporary credentials from the Security Token Service using a Token Vending Machine (TVM) to provide signed credentials to an IAM user. This will allow PUT operations to DynamoDB. Serve your web application from an NGINX server hosted in a fleet of EC2 instances that are load-balanced and auto scaled. Your EC2 instances are configured with an IAM role that allows PUT operations in DynamoDB.

Configure the ReactJS client with temporary credentials from the Security Token Service using a Token Vending Machine (TVM) on an EC2 instance. This will provide signed credentials to an IAM user allowing PUT operations in DynamoDB table and GET operations in the S3 bucket. You serve your mobile application out of an S3 bucket enabled as a web site.

Register the web application with a Web Identity Provider such as Google, Facebook, Amazon or from any other popular social sites and use the AssumeRoleWithWebIdentity API of STS to generate temporary credentials. Create an IAM role for that web provider and set up permissions for the IAM role to allow GET operations in S3 and PUT operations in DynamoDB. Serve your web app out of an S3 bucket enabled as a website.
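The AssumeRoleWithWebIdentity option hinges on the role's trust policy. A hedged sketch of such a policy for a Google-federated web identity (the `app_id` audience value is a placeholder):

```python
def build_web_identity_trust_policy(provider="accounts.google.com",
                                    app_id="HYPOTHETICAL_APP_ID"):
    """Trust policy letting users authenticated by the web identity
    provider call sts:AssumeRoleWithWebIdentity for this role."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Federated": provider},
            "Action": "sts:AssumeRoleWithWebIdentity",
            # Restrict the role to tokens issued for this specific app.
            "Condition": {"StringEquals": {f"{provider}:aud": app_id}},
        }],
    }
```

The ReactJS client exchanges the provider's token for temporary credentials via STS, then calls DynamoDB directly; no EC2 fleet or TVM is needed.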
You are working as an AWS Developer for a mobile development company. They are currently developing new android and iOS mobile apps and are considering storing the customization data in AWS. This would provide a more uniform cross-platform experience to their users using multiple mobile devices to access the apps. The preference data for each user is estimated to be 50KB in size. Additionally, 3 million customers are expected to use the application on a regular basis, using their social login accounts for easier user authentication. How would you design a highly available, cost-effective, scalable, and secure solution to meet the above requirements?

Set up a table in DynamoDB containing an item for each user having the necessary attributes to hold the user preferences. The mobile app will query the user preferences directly from the table. Use STS, Web Identity Federation, and DynamoDB's Fine Grained Access Control for authentication and authorization.

Have the user preference data stored in S3, and set up a DynamoDB table with an item for each user and an item attribute referencing the user's S3 object. The mobile app will retrieve the S3 URL from DynamoDB and then access the S3 object directly utilizing STS, Web Identity Federation, and S3 ACLs.

Launch an RDS MySQL instance in 2 availability zones to contain the user preference data. Deploy a public facing application on a server in front of the database which will manage authentication and access controls.

Create an RDS MySQL instance with multiple read replicas in 2 availability zones to store the user preference data. The mobile application will then query the user preferences from the read replicas. Finally, utilize MySQL's user management and access privilege system to handle security and access credentials of your users.
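DynamoDB's fine-grained access control mentioned in the first option is expressed as an IAM policy condition on `dynamodb:LeadingKeys`. A sketch, assuming Cognito-federated identities and a placeholder table ARN:

```python
def build_fine_grained_policy(table_arn):
    """IAM policy restricting each authenticated user to items whose
    partition key equals their federated identity id."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
            "Resource": table_arn,
            "Condition": {
                "ForAllValues:StringEquals": {
                    # Substituted at request time with the caller's identity.
                    "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
                }
            },
        }],
    }
```

Attached to the federated role, this keeps each of the 3 million users confined to their own preference item without any server-side access-control code.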
A leading financial company is planning to launch its MERN (MongoDB, Express, React, Node.js) application with an Amazon RDS MariaDB database to serve its clients worldwide. The application will run on both on-premises servers as well as Reserved EC2 instances. To comply with the company's strict security policy, the database credentials must be encrypted both at rest and in transit. These credentials will be used by the application servers to connect to the database. The Solutions Architect is tasked to manage all of the aspects of the application architecture and production deployment.
How should the Architect automate the deployment process of the application in the MOST secure manner?

Upload the database credentials with key rotation in AWS Secrets Manager. Set up a new IAM role that enables access and decryption of the database credentials then attach this role to all on-premises servers and EC2 instances. Use AWS Elastic Beanstalk to host and manage the application on both on-premises servers and EC2 instances. Deploy the succeeding application revisions to AWS and on-premises servers using AWS Elastic Beanstalk.

Upload the database credentials with a Secure String data type in AWS Systems Manager Parameter Store. Set up a new IAM role with an attached policy that enables access and decryption of the database credentials then attach this role to all on-premises servers and EC2 instances. Deploy the application packages to the EC2 instances and on-premises servers using AWS CodeDeploy.

Upload the database credentials with a Secure String data type in AWS Systems Manager Parameter Store. Set up a new IAM role that enables access and decryption of the database credentials then attach this role to all on-premises servers and EC2 instances. Use AWS Elastic Beanstalk to host and manage the application on both on-premises servers and EC2 instances. Deploy the succeeding application revisions to AWS and on-premises servers using AWS Elastic Beanstalk.

Upload the database credentials with a Secure String data type in AWS Systems Manager Parameter Store. Set up a new IAM policy that enables access and decryption of the database credentials then attach this IAM policy to the instance profile for CodeDeploy-managed instances. Attach the same policy as well to the on-premises instances. Using AWS CodeDeploy, launch the application packages to the Amazon EC2 instances and on-premises servers.
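The Parameter Store options above rely on an IAM policy that permits both reading the SecureString parameter and decrypting it with KMS. A minimal sketch (both ARNs are placeholders; `fetch_credentials` requires AWS credentials and is not executed here):

```python
def build_read_credentials_policy(parameter_arn, kms_key_arn):
    """Minimal IAM policy for reading and decrypting one SecureString
    parameter holding the database credentials."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": "ssm:GetParameter", "Resource": parameter_arn},
            {"Effect": "Allow", "Action": "kms:Decrypt", "Resource": kms_key_arn},
        ],
    }

def fetch_credentials(name):
    # Requires AWS credentials; shown for illustration only.
    import boto3
    ssm = boto3.client("ssm")
    return ssm.get_parameter(Name=name, WithDecryption=True)["Parameter"]["Value"]
```

Because on-premises servers cannot have an instance profile attached, the policy goes on the IAM role registered for CodeDeploy on-premises instances instead.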
You currently operate a sports web portal that covers the latest cricket news in Australia. You manage the main AWS account which has multiple AWS regions. The online application is hosted on a fleet of on-demand EC2 instances and an RDS database which are also deployed to other AWS regions. Your IT Security Compliance Officer has given you the task of developing a reliable and durable logging solution to track changes made to all of your EC2, IAM, and RDS resources in all of the AWS regions. The solution must ensure the integrity and confidentiality of your log data. Which of the following solutions would be the best option to choose?

Create a new trail in AWS CloudTrail with the global services option selected, and assign it an existing S3 bucket to store the logs. Create S3 ACLs and enable Multi Factor Authentication (MFA) delete on the S3 bucket storing your logs.

Create a new trail in CloudTrail and assign it a new S3 bucket to store the logs. Configure AWS SNS to send delivery notifications to your management system. Secure the S3 bucket that stores your logs using IAM roles and S3 bucket policies.

Create three new CloudTrail trails, each with its own S3 bucket to store the logs: one for the AWS Management console, one for AWS SDKs, and one for command line tools. Then create IAM roles and S3 bucket policies for the S3 buckets storing your logs.

Create a new trail in AWS CloudTrail with the global services option selected, and create one new Amazon S3 bucket to store the logs. Create IAM roles, S3 bucket policies, and enable Multi Factor Authentication (MFA) Delete on the S3 bucket storing your logs.
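The trail configuration in the last option can be expressed as `create_trail` parameters. A sketch with placeholder names (global service events capture IAM activity, the multi-Region flag covers all Regions, and log file validation supports integrity checks):

```python
def build_trail_request(trail_name, bucket):
    """Parameters for cloudtrail.create_trail: one multi-Region trail
    delivering validated logs to a single S3 bucket."""
    return {
        "Name": trail_name,
        "S3BucketName": bucket,
        "IsMultiRegionTrail": True,           # capture EC2/RDS calls in every Region
        "IncludeGlobalServiceEvents": True,   # capture IAM (a global service)
        "EnableLogFileValidation": True,      # digest files prove log integrity
    }
```

Confidentiality then comes from the bucket policy, IAM roles, and MFA Delete on the log bucket.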
You are working as the Lead Systems Architect for a government bank, where you handle a web application that retrieves and displays highly sensitive information about the clients. The amount of traffic the site will receive is known and not expected to fluctuate. SSL will be used as part of the application's data security. Your chief information security officer (CISO) is concerned about the security of your SSL private key and she wants to ensure that the key cannot be accidentally or intentionally moved outside the corporate environment. You are also concerned that the application logs might contain some sensitive information, although you have already configured an encrypted EBS volume to store the data. In this scenario, the application logs must be stored securely and durably so that they can only be decrypted by the authorized government employees.
Which of the following is the most suitable and highly available architecture that can meet all of the requirements?

Distribute traffic to a set of web servers using an Elastic Load Balancer. To secure the SSL private key, upload the key to the load balancer and configure the load balancer to offload the SSL traffic. Lastly, write your application logs to an instance store volume that has been encrypted using a randomly generated AES key.

Distribute traffic to a set of web servers using an Elastic Load Balancer. Use TCP load balancing for the load balancer and configure your web servers to retrieve the SSL private key from a private Amazon S3 bucket on boot. Use another private Amazon S3 bucket to store your web server logs using Amazon S3 server-side encryption.

Distribute traffic to a set of web servers using an Elastic Load Balancer that performs TCP load balancing. Use CloudHSM deployed to two Availability Zones to perform the SSL transactions and deliver your application logs to a private Amazon S3 bucket using server-side encryption.

Distribute traffic to a set of web servers using an Elastic Load Balancer that performs TCP load balancing. Use an AWS CloudHSM to perform the SSL transactions and deliver your application logs to a private Amazon S3 bucket using server-side encryption.
An online delivery system, hosted in a fleet of EC2 instances, is deployed in multiple Availability Zones in the ap-southeast-1 region with an Application Load Balancer that evenly distributes the load. The system is using a MySQL RDS instance to store the deliveries and transactions of the system. To ensure business continuity, you are instructed to set up a disaster recovery system in which the RTO must be less than 3 hours and the RPO is 15 minutes when a system outage occurs. A system should also be implemented that can automatically discover, classify, and protect any personally identifiable information (PII) or intellectual property in your data store.
As the Solutions Architect, which disaster recovery strategy should you use to achieve the required RTO and RPO targets in the most cost-effective manner?

Set up asynchronous replication in the database using a Multi-AZ deployments configuration. Use AWS Shield to automatically discover, classify, and protect any personally identifiable information (PII) or intellectual property from your RDS database.

Schedule a database backup to AWS Storage Gateway every hour and store transaction logs to a separate S3 bucket every 5 minutes. Use AWS Shield to automatically discover, classify, and protect any personally identifiable information (PII) or intellectual property on your Storage Gateway.

Schedule a database backup to an S3 bucket every hour and store transaction logs to a separate S3 bucket every 5 minutes. Use Amazon Macie to automatically discover, classify, and protect your sensitive data.

Schedule 15-minute DB backups to Amazon Glacier. Store the transaction logs to an S3 bucket every 5 minutes. Use Amazon Macie to automatically discover, classify, and protect your sensitive data.
You are the Lead Solutions Architect for an IT consulting firm which has various teams and departments that have been grouped into several organizational units (OUs) using AWS Organizations. You received a report from the security team that there was a suspected breach in your environment where a third-party AWS account was suddenly added to your organization without any prior approval. The external account has high level access privileges to the accounts that you own but luckily, no detrimental action was performed.
What should you do to properly set up a monitoring system that notifies you of any changes to your AWS accounts? (Choose 2)

Create a trail in Amazon CloudTrail to capture all API calls to your AWS Organizations, including calls from the AWS Organizations console and from code calls to the AWS Organizations APIs. Use CloudWatch Events and SNS to raise events when administrator-specified actions occur in an organization and send a notification to you.

Use AWS Config to monitor the compliance of your AWS Organizations. Set up an SNS Topic or CloudWatch Events that will send alerts to you for any changes.

Monitor all changes to your organization using Systems Manager and use CloudWatch Events to notify you for any new activity to your account.

Set up a CloudWatch Dashboard to monitor any changes to your organizations and create an SNS topic that would send you a notification.

Provision an AWS-approved third-party monitoring tool from the AWS Marketplace that would send alerts if a breach was detected. Use Amazon GuardDuty to analyze any possible breach and notify the administrators using AWS SNS.
A tech startup is planning to launch a new global mobile marketplace using AWS Amplify and AWS Mobile Hub. To lower the latency, the backend APIs will be launched to multiple AWS regions to process the sales and financial transactions on the region closest to the users. You are instructed to design the system architecture to ensure that the transactions made in one region are automatically replicated to other regions. In the coming months ahead, it is expected that the marketplace will have millions of users across North America, South America, Europe, and Asia.
Which of the following is the most scalable, cost-effective and highly available architecture that you should implement?

Create an Amazon Aurora Multi-Master database on all required regions. Store the individual transactions to the Amazon Aurora instance in the local region. Replicate the transactions table between regions using Aurora replication. In this set up, any changes made in one of the tables will be automatically replicated across all other tables.

Create a Global DynamoDB table in your preferred region which will automatically create new replica tables on all AWS regions. In each local region, store the individual transactions to a DynamoDB replica table in the same region. Any changes made in one of the replica tables will be automatically replicated across all other tables.

Create a Global DynamoDB table by choosing your preferred AWS region, enabling the DynamoDB Streams option and creating replica tables in the other AWS regions where you want to replicate your data. In each local region, store the individual transactions to a DynamoDB replica table in the same region.

In each local region, store the individual transactions to a DynamoDB table. Set up an AWS Lambda function to read recent writes from the table, and replay the data to DynamoDB tables in all other regions.
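A Global DynamoDB table (2017.11.29 version of global tables) is created from per-Region replica tables that already exist with streams enabled; replicas are not created automatically for every AWS Region. A sketch of the `create_global_table` parameters, with placeholder table and Region names:

```python
def build_global_table_request(table_name, regions):
    """Parameters for dynamodb.create_global_table. An identically named
    table with DynamoDB Streams enabled must already exist in each
    listed Region before this call is made."""
    return {
        "GlobalTableName": table_name,
        "ReplicationGroup": [{"RegionName": r} for r in regions],
    }
```

Writes to any replica are then propagated to the other Regions automatically, which is what the marketplace needs for its regional backend APIs.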
A top university has launched its serverless online portal using Lambda and API Gateway in AWS that enables its students to enroll, manage their class schedule, and see their grades online. After a few weeks, the portal abruptly stopped working and lost all of its data. The university hired an external cyber security consultant and based on the investigation, the outage was due to an SQL injection vulnerability on the portal's login page in which the attacker simply injected the malicious SQL code. You also need to track historical changes to the rules and metrics associated with your firewall.
Which of the following is the most suitable and cost-effective solution to avoid another SQL Injection attack against their infrastructure in AWS?

Block the IP address of the attacker in the Network Access Control List of your VPC and then set up a CloudFront distribution. Set up AWS WAF to add a web access control list (web ACL) in front of the CloudFront distribution to block requests that contain malicious SQL code. Use AWS Config to track changes to your web access control lists (web ACLs) such as the creation and deletion of rules including the updates to the WAF rule configurations.

Use AWS WAF to add a web access control list (web ACL) in front of the Lambda functions to block requests that contain malicious SQL code. Use AWS Firewall Manager, to track changes to your web access control lists (web ACLs) such as the creation and deletion of rules including the updates to the WAF rule configurations.

Create a new Application Load Balancer (ALB) and set up AWS WAF in the load balancer. Place the API Gateway behind the ALB and configure a web access control list (web ACL) in front of the ALB to block requests that contain malicious SQL code. Use AWS Firewall Manager to track changes to your web access control lists (web ACLs) such as the creation and deletion of rules including the updates to the WAF rule configurations.

Use AWS WAF to add a web access control list (web ACL) in front of the API Gateway to block requests that contain malicious SQL code. Use AWS Config to track changes to your web access control lists (web ACLs) such as the creation and deletion of rules including the updates to the WAF rule configurations.
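The WAF option above maps to a WAFv2 rule built around a `SqliMatchStatement`. A hedged sketch of one such rule definition (rule name, metric name, and priority are placeholders; field names follow the WAFv2 rule schema):

```python
def build_sqli_rule(priority=0):
    """One AWS WAF (WAFv2) rule that blocks requests whose body matches
    a SQL-injection signature, suitable for a web ACL on API Gateway."""
    return {
        "Name": "block-sql-injection",
        "Priority": priority,
        "Statement": {
            "SqliMatchStatement": {
                "FieldToMatch": {"Body": {}},
                # Decode URL-encoded payloads before inspecting them.
                "TextTransformations": [{"Priority": 0, "Type": "URL_DECODE"}],
            }
        },
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "SqliBlocked",
        },
    }
```

AWS Config then records every change to the web ACL and its rules, satisfying the audit-trail requirement.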
A leading media company has a hybrid architecture where its on-premises data center is connected to AWS via a Direct Connect connection. They also have a repository of over 50 TB of digital videos and media files. These files are stored on their on-premises tape library and are used by their Media Asset Management (MAM) system. Due to the sheer size of their data, they want to implement an automated catalog system that will enable them to search their files using facial recognition. A catalog will store the faces of the people who are present in these videos including a still image of each person. Eventually, the media company would like to migrate these media files to AWS including the MAM video contents.
Which of the following options provides a solution which uses the LEAST amount of ongoing management overhead and will cause MINIMAL disruption to the existing system?

Set up a tape gateway appliance on-premises and connect it to your AWS Storage Gateway. Configure the MAM solution to fetch the media files from the current archive and push them into the tape gateway in the AWS Cloud. Using Amazon Rekognition, build a collection from the catalog of faces. Utilize a Lambda function which invokes the Rekognition Javascript SDK to have Amazon Rekognition process the video directly from the tape gateway in real-time, retrieve the required metadata, and push the metadata into the MAM solution.

Use Amazon Kinesis Video Streams to set up a video ingestion stream and with Amazon Rekognition, build a collection of faces. Stream the media files from the MAM solution into Kinesis Video Streams and configure the Amazon Rekognition to process the streamed files. Launch a stream consumer to retrieve the required metadata, and push the metadata into the MAM solution. Finally, configure the stream to store the files in an S3 bucket.

Migrate all of the media files from the on-premises library into an EBS volume mounted on a large EC2 instance. Install an open-source facial recognition tool in the instance like OpenFace or OpenCV. Process the media files to retrieve the metadata and push this information into the MAM solution. Lastly, copy the media files to an S3 bucket.

Integrate the file system of your local data center to AWS Storage Gateway by setting up a file gateway appliance on-premises. Utilize the MAM solution to extract the media files from the current data store and send them into the file gateway. Build a collection using Amazon Rekognition by populating a catalog of faces from the processed media files. Use an AWS Lambda function to invoke Amazon Rekognition Javascript SDK to have it fetch the media file from the S3 bucket which is backing the file gateway, retrieve the needed metadata, and finally, persist the information into the MAM solution.
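Building the face catalog with Amazon Rekognition involves creating a collection and indexing the still images. A sketch of the `index_faces` request parameters (collection, bucket, and key names are placeholders; `ExternalImageId` cannot contain slashes, hence the substitution):

```python
def build_index_faces_request(collection_id, bucket, key):
    """Parameters for rekognition.index_faces: add the faces found in one
    S3-hosted still image to a face collection."""
    return {
        "CollectionId": collection_id,
        "Image": {"S3Object": {"Bucket": bucket, "Name": key}},
        # ExternalImageId ties detected faces back to the source object.
        "ExternalImageId": key.replace("/", "_"),
    }
```

With a file gateway, the media files land in S3 automatically, so a Lambda function can feed each object's bucket/key into requests like this and push the returned metadata into the MAM solution.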
A well-funded startup company is building a mobile app that showcases their latest fashion accessories and gadgets. The marketing manager hired a famous model with millions of Instagram followers to promote their new products and hence, it is expected that the app will be a huge hit once it is launched in the market. It must have the ability to automatically scale to handle millions of views of its static contents and to allow users to store their own photos of themselves wearing fashionable accessories with a maximum of 100-character caption. In this scenario, which of the following would fulfill this requirement?

1. Use Cognito to handle the user authentication and management.
2. Use an RDS database to store user data.
3. Create an S3 bucket to store all of the user photos and other static files.
4. Distribute the static contents using CloudFront to improve scalability.

1. Configure an on-premises Active Directory (AD) server utilizing SAML 2.0 to manage the application users inside the on-premises AD server.
2. Develop a custom code that authenticates against the LDAP server.
3. Use DynamoDB as the main database of the app and S3 as the scalable object storage.
4. Grant an IAM role assigned to the STS token to allow the end-user to access the required data in the DynamoDB table.
5. Distribute the static contents using CloudFront to improve scalability.

1. Set up a SAML 2.0-based Federation that let the users sign into the app using a third party identity provider such as Amazon, Google or Facebook.
2. Set up an RDS database and an S3 bucket to store the photos.
3. Use the AssumeRoleWithWebIdentity API call to assume the IAM role containing the proper permissions to communicate with the RDS database.
4. Distribute the static contents using S3 to improve scalability.

1. Use Cognito to handle the user authentication and management.
2. Launch a DynamoDB table to store user data.
3. Create an S3 bucket to store all of the user photos and other static files.
4. Distribute the static contents using CloudFront to improve scalability.
In an effort to ensure and strengthen data security, your company has launched a company-wide bug bounty program to find and patch up security vulnerabilities in your web applications as well as the underlying cloud resources. You are working as a Cloud Engineer and decided to focus on checking system vulnerabilities of your AWS resources.
Which of the following are the best techniques to avoid Distributed Denial of Service (DDoS) attacks for your cloud infrastructure hosted in AWS? (Choose 2)

Use an Application Load Balancer (ALB) to reduce the risk of overloading your application by distributing traffic across many backend instances. Integrate AWS WAF and the ALB to protect your web applications from common web exploits that could affect application availability.

Add multiple Elastic Network Interfaces to each EC2 instance and use Enhanced Networking to increase the network bandwidth.

Use an Amazon CloudFront distribution for both static and dynamic content of your web applications. Add CloudWatch alerts to automatically look and notify the Operations team for high CPUUtilization and NetworkIn metrics, as well as to trigger Auto Scaling of your EC2 instances.

Use S3 instead of EBS Volumes for storing data. Install the SSM agent to all of your instances and use AWS Systems Manager Patch Manager to automatically patch your instances.

Use Reserved EC2 instances to ensure that each instance has the maximum performance possible. Use AWS WAF to protect your web applications from common web exploits that could affect application availability.
An innovative Business Process Outsourcing (BPO) startup is planning to launch a scalable and cost-effective call center system using AWS. The system should be able to receive inbound calls from thousands of customers and generate user contact flows. Callers must have the capability to perform basic tasks such as changing their password or checking their balance without them having to speak to a call center agent. It should also have advanced deep learning functionalities such as automatic speech recognition (ASR) to achieve highly engaging user experiences and lifelike conversational interactions. A feature that allows the solution to query other business applications and send relevant data back to callers must also be implemented.
Which of the following is the MOST suitable solution that the Solutions Architect should implement?

Set up a cloud-based contact center using the Amazon Connect service. Create a conversational chatbot using Amazon Lex with automatic speech recognition and natural language understanding to recognize the intent of the caller then integrate it with Amazon Connect. Connect the solution to various business applications and other internal systems using AWS Lambda functions.

Set up a cloud-based contact center using the AWS Ground Station service. Create a conversational chatbot using Amazon Alexa for Business with automatic speech recognition and natural language understanding to recognize the intent of the caller then integrate it with AWS Ground Station. Connect the solution to various business applications and other internal systems using AWS Lambda functions.

Set up a cloud-based contact center using the Amazon Direct Connect service. Create a conversational chatbot using Amazon Rekognition with automatic speech recognition and natural language understanding to recognize the intent of the caller then integrate it with Amazon Direct Connect. Connect the solution to various business applications and other internal systems using AWS Lambda functions.

Set up a cloud-based contact center using the AWS Elemental MediaConnect service. Create a conversational chatbot using Amazon Polly with automatic speech recognition and natural language understanding to recognize the intent of the caller then integrate it with AWS Elemental MediaConnect. Connect the solution to various business applications and other internal systems using AWS Lambda functions.
The department of education recently decided to leverage AWS cloud infrastructure to supplement their current on-premises network. They are building a new learning portal that teaches kids basic computer science concepts and provides innovative gamified courses for teenagers where they can gain higher rankings, power-ups and badges. A Solutions Architect is instructed to build a highly available cloud infrastructure in AWS with multiple Availability Zones. The department wants to increase the application's reliability and gain actionable insights using application logs. A Solutions Architect needs to aggregate logs, automate log analysis for errors and immediately notify the IT Operations team when errors breach a certain threshold.
Which of the following is the MOST suitable solution that the Architect should implement?

Download and install the AWS X-Ray agent in the on-premises servers and send the logs to AWS Lambda to turn log data into numerical metrics that identify and measure application errors. Store the metrics data in Systems Manager Parameter Store. Create a CloudWatch Alarm that monitors the metric and immediately notify the IT Operations team for any issues.

Download and install the Amazon CloudWatch agent in the on-premises servers and send the logs to Amazon CloudWatch Events. Create a metric filter in CloudWatch to turn log data into numerical metrics to identify and measure application errors. Use Amazon Athena to monitor the metric filter and immediately notify the IT Operations team for any issues.

Download and install the Amazon Kinesis agent in the on-premises servers and send the logs to Amazon CloudWatch Logs. Create a metric filter in CloudWatch to turn log data into numerical metrics to identify and measure application errors. Use Amazon QuickSight to monitor the metric filter in CloudWatch and immediately notify the IT Operations team for any issues.

Download and install the Amazon CloudWatch agent in the on-premises servers and send the logs to Amazon CloudWatch Logs. Create a metric filter in CloudWatch to turn log data into numerical metrics to identify and measure application errors. Create a CloudWatch Alarm that monitors the metric filter and immediately notify the IT Operations team for any issues.
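The metric-filter-plus-alarm pattern in the last option can be sketched as the two parameter sets involved. This is a minimal illustration, assuming a hypothetical log group name and SNS topic ARN; with boto3 you would pass these dicts to `logs.put_metric_filter(**metric_filter)` and `cloudwatch.put_metric_alarm(**alarm)`.

```python
# Sketch: turn error log lines into a metric, then alarm when the metric
# breaches a threshold. Log group, namespace, and SNS ARN are hypothetical.

def build_error_monitoring(log_group, sns_topic_arn, threshold=10):
    """Return (metric_filter, alarm) parameter dicts for error monitoring."""
    metric_filter = {
        "logGroupName": log_group,
        "filterName": "AppErrorCount",
        "filterPattern": "?ERROR ?Exception",   # match events containing ERROR or Exception
        "metricTransformations": [{
            "metricName": "ApplicationErrors",
            "metricNamespace": "LearningPortal",
            "metricValue": "1",                 # count one per matching log event
        }],
    }
    alarm = {
        "AlarmName": "AppErrorsBreached",
        "Namespace": "LearningPortal",
        "MetricName": "ApplicationErrors",
        "Statistic": "Sum",
        "Period": 300,                          # evaluate 5-minute windows
        "EvaluationPeriods": 1,
        "Threshold": threshold,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [sns_topic_arn],        # notify the IT Operations team via SNS
    }
    return metric_filter, alarm

mf, alarm = build_error_monitoring(
    "learning-portal-app", "arn:aws:sns:us-east-1:123456789012:it-ops")
```

The alarm action points at an SNS topic that the IT Operations team subscribes to, which is what delivers the "immediately notify" part of the requirement.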
An electronics company has an on-premises network as well as a cloud infrastructure in AWS. The on-site data storage used by their enterprise document management system is heavily utilized, and they are looking at the storage services in AWS for cost-effective backup and rapid disaster recovery. You are tasked to set up a storage solution that will provide low-latency access to the enterprise document management system. Most of the documents uploaded to their system are printed circuit board (PCB) designs and schematic diagrams which are frequently accessed by their engineers, QA analysts, and their Research and Design department. Hence, you also have to ensure that these employees can access the entire dataset quickly, without sacrificing durability.
How can you satisfy the requirement for this scenario?

Create an S3 bucket and use the sync command to synchronize the data to and from your on-premises file server.

Use a Stored Volume Gateway to provide cloud-backed storage volumes that you can mount as Internet Small Computer System Interface (iSCSI) devices from your on-premises application servers.

In AWS Storage Gateway, create a File gateway that enables you to store and retrieve objects in Amazon S3 using industry-standard file protocols such as Network File System (NFS) and Server Message Block (SMB).

Use a Cached volume gateway to retain low-latency access to your entire data set as well as your frequently accessed data.
You are working as a Solutions Architect for a leading IT consultancy company which has offices in San Francisco, Frankfurt, Tokyo, and Manila. They are using AWS Organizations to easily manage the multiple AWS accounts being used by their regional offices and subsidiaries. A new AWS account was recently added to a specific organizational unit (OU) which is responsible for the overall systems administration. The administrator noticed that the account is using a root-created AWS ECS Cluster with an attached service-linked role. For regulatory purposes, you created a custom SCP that would deny the new account from performing certain actions in relation to using ECS. However, after applying the policy, the new account could still perform the actions that it was supposed to be restricted from doing.
What could be the most likely reason for this problem?

SCPs do not affect any service-linked role. Service-linked roles enable other AWS services to integrate with AWS Organizations and can't be restricted by SCPs.

The ECS service is being run outside the jurisdiction of the organization. SCPs affect only the principals that are managed by accounts that are part of the organization.

There is an SCP attached to a higher-level OU that permits the actions of the service-linked role. This permission would therefore be inherited by the current OU, and override the SCP placed by the administrator.

The default SCP grants all permissions attached to every root, OU, and account. To apply stricter permissions, this policy is required to be modified.
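The key fact being tested is that SCPs never restrict service-linked roles. A deny policy like the administrator's can be sketched as a Python dict (the account setup and statement ID are hypothetical; the ECS action names are real IAM actions).

```python
# Sketch: an SCP that denies selected ECS actions for an account or OU.
# Even with this attached, actions performed through a service-linked role
# are NOT restricted -- SCPs do not affect service-linked roles.

deny_ecs_scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyEcsMutations",          # hypothetical statement ID
        "Effect": "Deny",
        "Action": [
            "ecs:CreateCluster",
            "ecs:CreateService",
            "ecs:RunTask",
        ],
        "Resource": "*",
    }],
}
```

This is why the policy "didn't work" in the scenario: the restricted actions were being carried out via the cluster's attached service-linked role, which sits outside the reach of any SCP.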
A media company has a suite of internet-facing web applications hosted in US West (N. California) region in AWS. The architecture is composed of several On-Demand Amazon EC2 instances behind an Application Load Balancer, which is configured to use public SSL/TLS certificates. The Application Load Balancer also enables incoming HTTPS traffic through the fully qualified domain names (FQDNs) of the applications for SSL termination. A Solutions Architect has been instructed to upgrade the corporate web applications to a multi-region architecture that uses various AWS Regions such as ap-southeast-2, ca-central-1, eu-west-3, and so forth.
Which of the following approaches should the Architect implement to ensure that all HTTPS services will continue to work without interruption?

In each new AWS Region, request for SSL/TLS certificates using the AWS Certificate Manager for each FQDN. Associate the new certificates to the corresponding Application Load Balancer of the same AWS Region.

Use the AWS KMS in the US West (N. California) region to request for SSL/TLS certificates for each FQDN which will be used to all regions. Associate the new certificates to the new Application Load Balancer on each new AWS Region that the Architect will add.

Use the AWS Certificate Manager service in the US West (N. California) region to request for SSL/TLS certificates for each FQDN which will be used to all regions. Associate the new certificates to the new Application Load Balancer on each new AWS Region that the Architect will add.

In each new AWS Region, request for SSL/TLS certificates using AWS KMS for each FQDN. Associate the new certificates to the corresponding Application Load Balancer of the same AWS Region.
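The per-Region certificate approach reflects the fact that ACM certificates are regional: an Application Load Balancer can only use a certificate issued in its own Region. A minimal sketch of the request fan-out, with hypothetical domains and Regions; with boto3 each dict (minus the `region` key) would go to `boto3.client("acm", region_name=region).request_certificate(...)`.

```python
# Sketch: one ACM certificate request per (Region, FQDN) pair.
# Domains and Region list are hypothetical.

def cert_requests(fqdns, regions):
    """Build one request_certificate parameter set per Region and FQDN."""
    requests = []
    for region in regions:
        for fqdn in fqdns:
            requests.append({
                "region": region,            # ACM certs are regional; one per ALB Region
                "DomainName": fqdn,
                "ValidationMethod": "DNS",   # DNS validation allows automatic renewal
            })
    return requests

reqs = cert_requests(["app.example.com"],
                     ["ap-southeast-2", "ca-central-1", "eu-west-3"])
```

Requesting in each Region, rather than reusing the N. California certificate, is the point of the correct option: certificates cannot be exported from ACM or attached across Regions.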
You have production, development, and test environments in your software development department, and each environment contains tens to hundreds of EC2 instances, along with other AWS services. Recently, Ubuntu released a series of security patches for a critical flaw that was detected in their OS. Although this is an urgent matter, there is no guarantee yet that these patches will be bug-free and production-ready; hence, you have to immediately patch all of your affected EC2 instances in all the environments, except for the production environment. The EC2 instances in the production environment will only be patched after you have verified that the patches work effectively. Each environment also has different baseline patch requirements that you will need to satisfy.
Using the AWS Systems Manager service, how should you perform this task with the least amount of effort?

Schedule a maintenance period in AWS Systems Manager Maintenance Windows for each environment, where the period is after business hours so as not to affect daily operations. During the maintenance period, Systems Manager will execute a cron job that will install the required patches for each EC2 instance in each environment. After that, verify in Systems Manager Managed Instances that your environments are fully patched and compliant.

Tag each instance based on its OS. Create a patch baseline in AWS Systems Manager Patch Manager for each environment. Categorize EC2 instances based on their tags using Patch Groups and then apply the patches specified in the corresponding patch baseline to each Patch Group. Afterwards, verify that the patches have been installed correctly using Patch Compliance. Record the changes to patch and association compliance statuses using AWS Config.

Tag each instance based on its environment and OS. Create a patch baseline in AWS Systems Manager Patch Manager for each environment. Categorize EC2 instances based on their tags using Patch Groups and apply the patches specified in the corresponding patch baseline to each Patch Group.

Tag each instance based on its environment and OS. Create various shell scripts for each environment that specifies which patch will serve as its baseline. Using AWS Systems Manager Run Command, place the EC2 instances into Target Groups and execute the script corresponding to each Target Group.
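The baseline-per-environment idea can be sketched as a small planning step. Environment names and patch-group names are hypothetical; with boto3 (the `ssm` client) each plan would feed `create_patch_baseline(**plan["baseline"])` followed by `register_patch_baseline_for_patch_group(BaselineId=..., PatchGroup=plan["patch_group"])`.

```python
# Sketch: one patch baseline and one Patch Group per environment.
# Instances carry a "Patch Group" tag matching plan["patch_group"].

def build_baselines(environments, operating_system="UBUNTU"):
    """Return one baseline/patch-group plan per environment."""
    plans = []
    for env in environments:
        plans.append({
            "baseline": {
                "Name": f"{env}-{operating_system.lower()}-baseline",
                "OperatingSystem": operating_system,
                "Description": f"Baseline patch requirements for {env}",
            },
            # Instances tagged "Patch Group: <env>-servers" pick up this baseline
            "patch_group": f"{env}-servers",
        })
    return plans

# Production is deliberately left out of the initial rollout; it is patched
# only after the updates are verified in development and test.
plans = build_baselines(["development", "test"])
```

Because Patch Manager resolves the baseline through the instance's "Patch Group" tag, excluding production is simply a matter of not running the patching operation against the production group until the patches are verified.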
A company is planning to launch a global e-commerce marketplace that will be accessible to multiple countries and regions. The Solutions Architect must ensure that the clients are protected from common web vulnerabilities as well as "man-in-the-middle" attacks to secure their sensitive financial information.
Which of the following is the MOST secure setup that the Architect should implement in this scenario?

For the web domain registration, use Amazon Route 53 and then register a 2048-bit RSASHA256 encryption key from a third-party certificate service. Enable Domain Name System Security Extensions (DNSSEC) by using a 3rd party DNS provider that uses customer managed keys. Register the SSL certificates in ACM and attach them to the Application Load Balancer of the global e-commerce marketplace. Configure the Server Name Identification extension in all user requests to the website.

For the web domain registration, use another DNS registry other than Amazon Route 53. Register a 2048-bit RSASHA256 encryption key from a third-party certificate service. Enable Domain Name System Security Extensions (DNSSEC) by using a separate 3rd party DNS provider that uses customer managed keys. Use Amazon Route 53 to manage all DNS services. Register TLS/SSL certificates for the e-commerce marketplace using AWS Certificate Manager (ACM) then attach them to each Amazon EC2 instance. Configure the Server Name Identification extension in all user requests to the website.

For the web domain registration, use Amazon Route 53 and enable Domain Name System Security Extensions (DNSSEC). Also use Amazon Route 53 for all DNS services. Use the AWS Certificate Manager (ACM) to register TLS/SSL certificates for the e-commerce marketplace then attach them on the Application Load Balancer. Configure the Server Name Identification extension in all user requests to the website.

For the web domain registration, use Amazon Route 53 and enable Domain Name System Security Extensions (DNSSEC). Set up a BIND DNS server hosted in a Reserved EC2 instance for all DNS services. Use AWS Certificate Manager (ACM) to register TLS/SSL certificates for the e-commerce marketplace then attach them on the Application Load Balancer. Configure the Server Name Identification extension in all user requests to the website.
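Enabling DNSSEC on a Route 53 hosted zone involves creating a key-signing key backed by a customer managed KMS key and then enabling signing. A sketch of the parameters, with a hypothetical zone ID and KMS key ARN; with boto3 you would pass the dict to `route53.create_key_signing_key(**ksk)` and then call `route53.enable_hosted_zone_dnssec(HostedZoneId=zone_id)`.

```python
# Sketch: DNSSEC signing parameters for a Route 53 hosted zone.
# Zone ID, KSK name, and KMS key ARN are hypothetical placeholders.

zone_id = "Z0123456789EXAMPLE"

ksk = {
    "CallerReference": "marketplace-ksk-001",  # idempotency token
    "HostedZoneId": zone_id,
    # The KSK must be backed by an asymmetric ECC_NIST_P256 customer
    # managed KMS key in us-east-1
    "KeyManagementServiceArn":
        "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    "Name": "marketplace_ksk",
    "Status": "ACTIVE",
}
```

DNSSEC protects the DNS lookup itself from spoofing, while the ACM certificates on the Application Load Balancer protect the connection, which together address the "man-in-the-middle" concern in the scenario.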
Last year, the big four banks in your country collaborated to create a simple-to-use mobile payment app that enables users to easily transfer money and pay bills without the hassle of logging in to their online banking, entering the account details of the other party, and going through other security verification processes. With their mobile payment app, anyone can easily pay another person, split the bill with their friends, or pay for their coffee in an instant with just a few taps in the app. The payment app is available on both Android and iOS devices, including a web portal that is deployed in AWS using OpsWorks Stacks and EC2 instances. It was a big success, with over 5 million users nationwide and over 100 transactions every hour. After one year, a new feature that will enable users to store their credit card information in the app is ready to be added to the existing web portal. However, due to PCI-DSS compliance, the new version of the APIs and web portal cannot be deployed to the existing application stack.
As the Solutions Architect of this project, how would you deploy the new web portal for the mobile app without having any impact on your 5 million users?

Deploy a new OpsWorks stack that contains a new layer with the latest web portal version. Shift traffic between the existing stack and the new stack, running different versions of the web portal, using a Blue/Green deployment strategy with Route 53. Route only a small portion of incoming production traffic to the new application stack while maintaining the old application stack. Check the features of the new portal; once it is 100% validated, slowly increase the incoming production traffic to the new stack. If there are issues on the new stack, change Route 53 to revert to the old stack.

Create a new stack that contains the latest version of the web portal. Using Route 53 service, direct all the incoming traffic to the new stack at once so that all the customers get to access new features.

Deploy the new web portal using a Blue/Green deployment strategy with AWS CodeDeploy and Lambda in which the green environment represents the current web portal version serving production traffic while the blue environment is staged in running a different version of the web portal.

Forcibly upgrade the existing application stack in Production to be PCI-DSS compliant. Once done, deploy the new version of the web portal on the existing application stack.
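The gradual traffic shift in the Blue/Green option maps to Route 53 weighted records. A sketch of the change batch, with hypothetical record and load-balancer names; with boto3 you would pass the result to `route53.change_resource_record_sets(HostedZoneId=..., ChangeBatch=batch)` and re-run it with a larger `green_weight` as validation progresses.

```python
# Sketch: weighted Route 53 records shifting a percentage of traffic from the
# existing (blue) OpsWorks stack to the new PCI-DSS-compliant (green) stack.

def weighted_shift(record_name, blue_dns, green_dns, green_weight):
    """Return a ChangeBatch sending green_weight% of traffic to the new stack."""
    assert 0 <= green_weight <= 100

    def record(set_id, target, weight):
        return {"Action": "UPSERT", "ResourceRecordSet": {
            "Name": record_name,
            "Type": "CNAME",
            "TTL": 60,                    # short TTL so a rollback takes effect quickly
            "SetIdentifier": set_id,
            "Weight": weight,
            "ResourceRecords": [{"Value": target}],
        }}

    return {"Changes": [
        record("blue", blue_dns, 100 - green_weight),   # existing stack
        record("green", green_dns, green_weight),       # new stack
    ]}

# Start by routing 10% of production traffic to the new stack.
batch = weighted_shift("portal.example.com",
                       "blue-elb.example.com", "green-elb.example.com",
                       green_weight=10)
```

Reverting is the same call with `green_weight=0`, which is what makes the Route 53-based Blue/Green approach low-risk for the 5 million existing users.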