SAP-C02 Dumps

Customer Rating & Feedback: 5 Star

Questions that came exactly from these dumps: 98%

Amazon SAP-C02 Question Answers

AWS Certified Solutions Architect - Professional Dumps June 2024

Are you tired of looking for a source that will keep you updated on the AWS Certified Solutions Architect - Professional exam and that offers affordable, high-quality, and incredibly easy Amazon SAP-C02 Practice Questions? Well then, you are in luck, because Salesforcexamdumps.com just updated them! Get ready to become AWS Certified Solutions Architect - Professional certified.

Discount pricing:

PDF: $40 (was $100)
Test Engine: $56 (was $140)
PDF + Test Engine: $72 (was $180)

Here are the available Amazon SAP-C02 PDF features:

435 questions with answers (updated 13 Jun, 2024)
1 day of study required to pass the exam
100% passing assurance
100% money-back guarantee
Free updates for 3 months
Last 24 hours results:

Students passed: 91
Average marks: 92%
Questions from dumps: 95%
Total happy clients: 4,910

What is Amazon SAP-C02?

Amazon SAP-C02 is the exam you must pass to earn the AWS Certified Solutions Architect - Professional certification. The certification rewards deserving candidates who achieve excellent results, and it validates a candidate's expertise in working with Amazon Web Services. In this fast-paced world, a certification is the quickest way to gain your employer's approval. Try your luck at passing the AWS Certified Solutions Architect - Professional exam and become a certified professional today. Salesforcexamdumps.com is always eager to extend a helping hand by providing approved and accepted Amazon SAP-C02 Practice Questions. Passing AWS Certified Solutions Architect - Professional will be your ticket to a better future!

Pass with Amazon SAP-C02 Braindumps!

Contrary to the belief that certification exams are generally hard to get through, passing AWS Certified Solutions Architect - Professional is incredibly easy, provided you have access to a reliable resource such as the Salesforcexamdumps.com Amazon SAP-C02 PDF. We have been in this business long enough to understand where most resources go wrong. Passing the Amazon AWS Certified Professional certification is all about having the right information. Hence, we filled our Amazon SAP-C02 Dumps with all the data you need to pass. These carefully curated sets of AWS Certified Solutions Architect - Professional Practice Questions target the most frequently repeated exam questions, so you know they are essential and can ensure passing results. Stop wasting your time waiting around and order your set of Amazon SAP-C02 Braindumps now!

We aim to provide all AWS Certified Professional certification exam candidates with the best resources at minimal rates. You can check out our free demo before pressing the download button to make sure the Amazon SAP-C02 Practice Questions are what you wanted. And do not forget about the discount: we always provide our customers with a little extra.

Why Choose Amazon SAP-C02 PDF?

Unlike other websites, Salesforcexamdumps.com prioritizes the needs of AWS Certified Solutions Architect - Professional candidates. Not every Amazon exam candidate has full-time access to the internet, and it is hard to sit in front of a computer screen for too many hours. Are you one of them? We understand, and that is why we offer the Amazon SAP-C02 Question Answers in two different formats: PDF and Online Test Engine. One is for customers who like online platforms with realistic exam simulation. The other is for those who prefer keeping their material close at hand. Moreover, you can download or print the Amazon SAP-C02 Dumps with ease.

If you still have queries, our team of experts is in service 24/7 to answer your questions. Just leave us a quick message in the chat box below or email us at [email protected].

Amazon SAP-C02 Sample Questions

Question # 1

A company wants to migrate an Amazon Aurora MySQL DB cluster from an existing AWS
account to a new AWS account in the same AWS Region. Both accounts are members of
the same organization in AWS Organizations.
The company must minimize database service interruption before the company performs
DNS cutover to the new database.
Which migration strategy will meet this requirement?

A. Take a snapshot of the existing Aurora database. Share the snapshot with the new AWS
account. Create an Aurora DB cluster in the new account from the snapshot.

B. Create an Aurora DB cluster in the new AWS account. Use AWS Database Migration
Service (AWS DMS) to migrate data between the two Aurora DB clusters.

C. Use AWS Backup to share an Aurora database backup from the existing AWS account
to the new AWS account. Create an Aurora DB cluster in the new AWS account from the
snapshot.

D. Create an Aurora DB cluster in the new AWS account. Use AWS Application Migration
Service to migrate data between the two Aurora DB clusters.


Question # 2

A company is planning a migration from an on-premises data center to the AWS Cloud. The
company plans to use multiple AWS accounts that are managed in an organization in AWS
Organizations. The company will create a small number of accounts initially and will add
accounts as needed. A solutions architect must design a solution that turns on AWS
CloudTrail in all AWS accounts.
What is the MOST operationally efficient solution that meets these requirements?

A. Create an AWS Lambda function that creates a new CloudTrail trail in all AWS accounts
in the organization. Invoke the Lambda function daily by using a scheduled action in
Amazon EventBridge.

B. Create a new CloudTrail trail in the organization's management account. Configure the trail to log all events for all AWS accounts in the organization.

C. Create a new CloudTrail trail in all AWS accounts in the organization. Create new trails
whenever a new account is created.

D. Create an AWS Systems Manager Automation runbook that creates a CloudTrail trail in all
AWS accounts in the organization. Invoke the automation by using Systems Manager State
Manager.
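For context on the mechanism behind option B, here is a minimal boto3 sketch of creating an organization trail. The trail and bucket names are placeholders, and it assumes the call runs in the organization's management account with CloudTrail trusted access enabled and a suitable bucket policy already in place.

```python
import boto3

# Assumes this runs in the Organizations management account and that the
# S3 bucket already carries the required CloudTrail bucket policy.
cloudtrail = boto3.client("cloudtrail")

cloudtrail.create_trail(
    Name="org-trail",                       # placeholder trail name
    S3BucketName="example-org-trail-logs",  # placeholder bucket
    IsOrganizationTrail=True,               # one trail logs every member account
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="org-trail")
```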


Question # 3

A solutions architect is preparing to deploy a new security tool into several previously
unused AWS Regions. The solutions architect will deploy the tool by using an AWS
CloudFormation stack set. The stack set's template contains an IAM role that has a
custom name. Upon creation of the stack set, no stack instances are created successfully.
What should the solutions architect do to deploy the stacks successfully?

A. Enable the new Regions in all relevant accounts. Specify the
CAPABILITY_NAMED_IAM capability during the creation of the stack set.

B. Use the Service Quotas console to request a quota increase for the number of
CloudFormation stacks in each new Region in all relevant accounts. Specify the
CAPABILITY_IAM capability during the creation of the stack set.

C. Specify the CAPABILITY_NAMED_IAM capability and the SELF_MANAGED
permissions model during the creation of the stack set.

D. Specify an administration role ARN and the CAPABILITY_IAM capability during the
creation of the stack set.
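The CAPABILITY_NAMED_IAM capability that several options mention is passed at stack set creation. A minimal boto3 sketch, assuming a hypothetical template file security-tool.yaml that defines an IAM role with a custom RoleName:

```python
import boto3

cfn = boto3.client("cloudformation")

# Hypothetical template containing an IAM role with a custom RoleName.
with open("security-tool.yaml") as f:
    template_body = f.read()

cfn.create_stack_set(
    StackSetName="security-tool",
    TemplateBody=template_body,
    # Required whenever the template creates IAM resources with custom names.
    Capabilities=["CAPABILITY_NAMED_IAM"],
)
```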


Question # 4

A company has an IoT platform that runs in an on-premises environment. The platform
consists of a server that connects to IoT devices by using the MQTT protocol. The platform
collects telemetry data from the devices at least once every 5 minutes. The platform also
stores device metadata in a MongoDB cluster.
An application that is installed on an on-premises machine runs periodic jobs to aggregate
and transform the telemetry and device metadata. The application creates reports that
users view by using another web application that runs on the same on-premises machine.
The periodic jobs take 120-600 seconds to run. However, the web application is always
running.
The company is moving the platform to AWS and must reduce the operational overhead of
the stack.
Which combination of steps will meet these requirements with the LEAST operational
overhead? (Select THREE.)

A. Use AWS Lambda functions to connect to the IoT devices.

B. Configure the IoT devices to publish to AWS IoT Core.
C. Write the metadata to a self-managed MongoDB database on an Amazon EC2 instance

D. Write the metadata to Amazon DocumentDB (with MongoDB compatibility)

E. Use AWS Step Functions state machines with AWS Lambda tasks to prepare the
reports and to write the reports to Amazon S3. Use Amazon CloudFront with an S3 origin to
serve the reports.

F. Use an Amazon Elastic Kubernetes Service (Amazon EKS) cluster with Amazon EC2
instances to prepare the reports. Use an ingress controller in the EKS cluster to serve the
reports.


Question # 5

A company is designing an AWS environment for a manufacturing application. The
application has been successful with customers, and the application's user base has
increased. The company has connected the AWS environment to the company's on-premises
data center through a 1 Gbps AWS Direct Connect connection. The company has
configured BGP for the connection.
The company must update the existing network connectivity solution to ensure that the
solution is highly available, fault tolerant, and secure.
Which solution will meet these requirements MOST cost-effectively?

A. Add a dynamic private IP AWS Site-to-Site VPN as a secondary path to secure data in
transit and provide resilience for the Direct Connect connection. Configure MACsec to
encrypt traffic inside the Direct Connect connection.

B. Provision another Direct Connect connection between the company's on-premises data
center and AWS to increase the transfer speed and provide resilience. Configure MACsec
to encrypt traffic inside the Direct Connect connection.

C. Configure multiple private VIFs. Load balance data across the VIFs between the on-premises
data center and AWS to provide resilience.

D. Add a static AWS Site-to-Site VPN as a secondary path to secure data in transit and to
provide resilience for the Direct Connect connection.


Question # 6

A company deploys workloads in multiple AWS accounts. Each account has a VPC with
VPC flow logs published in text log format to a centralized Amazon S3 bucket. Each log file
is compressed with gzip compression. The company must retain the log files indefinitely.
A security engineer occasionally analyzes the logs by using Amazon Athena to query the
VPC flow logs. The query performance is degrading over time as the number of ingested
logs is growing. A solutions architect must improve the performance of the log analysis and reduce the storage space that the VPC flow logs use.
Which solution will meet these requirements with the LARGEST performance
improvement?

A. Create an AWS Lambda function to decompress the gzip files and to compress the files
with bzip2 compression. Subscribe the Lambda function to an s3:ObjectCreated:Put S3
event notification for the S3 bucket.

B. Enable S3 Transfer Acceleration for the S3 bucket. Create an S3 Lifecycle configuration
to move files to the S3 Intelligent-Tiering storage class as soon as the files are uploaded.

C. Update the VPC flow log configuration to store the files in Apache Parquet format.
Specify Hourly partitions for the log files.

D. Create a new Athena workgroup without data usage control limits. Use Athena engine
version 2.
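Option C's Parquet-plus-hourly-partitions configuration maps to the DestinationOptions parameter of the EC2 CreateFlowLogs API. A minimal boto3 sketch, with a placeholder VPC ID and bucket name:

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],            # placeholder VPC ID
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="s3",
    LogDestination="arn:aws:s3:::example-flow-logs",  # placeholder bucket
    DestinationOptions={
        "FileFormat": "parquet",         # columnar format that Athena scans efficiently
        "PerHourPartition": True,        # hourly partitions narrow each query's scan
        "HiveCompatiblePartitions": True,
    },
)
```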


Question # 7

An e-commerce company is revamping its IT infrastructure and is planning to use AWS
services. The company's CIO has asked a solutions architect to design a simple, highly
available, and loosely coupled order processing application. The application is responsible
for receiving and processing orders before storing them in an Amazon DynamoDB table.
The application has a sporadic traffic pattern and should be able to scale during marketing
campaigns to process the orders with minimal delays.
Which of the following is the MOST reliable approach to meet the requirements?

A. Receive the orders in an Amazon EC2-hosted database and use EC2 instances to
process them.

B. Receive the orders in an Amazon SQS queue and invoke an AWS Lambda function to
process them.

C. Receive the orders using the AWS Step Functions program and launch an Amazon ECS
container to process them.

D. Receive the orders in Amazon Kinesis Data Streams and use Amazon EC2 instances to
process them.
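To make option B concrete, the sketch below wires an SQS queue to a Lambda function with an event source mapping so that Lambda concurrency scales with the order backlog. The queue and function names are placeholders, and the function is assumed to exist and write to the DynamoDB table.

```python
import boto3

sqs = boto3.client("sqs")
lambda_client = boto3.client("lambda")

# Placeholder queue name; orders land here before processing.
queue = sqs.create_queue(QueueName="orders")
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue["QueueUrl"], AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Lambda polls the queue and scales out automatically with the backlog.
lambda_client.create_event_source_mapping(
    EventSourceArn=queue_arn,
    FunctionName="process-order",  # assumed to exist and write to DynamoDB
    BatchSize=10,
)
```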


Question # 8

A company that is developing a mobile game is making game assets available in two AWS
Regions. Game assets are served from a set of Amazon EC2 instances behind an
Application Load Balancer (ALB) in each Region. The company requires game assets to be
fetched from the closest Region. If game assets become unavailable in the closest Region,
they should be fetched from the other Region. What should a solutions architect do to meet these requirements?

A. Create an Amazon CloudFront distribution. Create an origin group with one origin for
each ALB. Set one of the origins as primary.

B. Create an Amazon Route 53 health check for each ALB. Create a Route 53 failover
routing record pointing to the two ALBs. Set the Evaluate Target Health value to Yes.

C. Create two Amazon CloudFront distributions, each with one ALB as the origin. Create
an Amazon Route 53 failover routing record pointing to the two CloudFront distributions.
Set the Evaluate Target Health value to Yes.

D. Create an Amazon Route 53 health check for each ALB. Create a Route 53 latency alias
record pointing to the two ALBs. Set the Evaluate Target Health value to Yes.
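Options B and D hinge on Route 53 alias records that evaluate target health. Here is a sketch of the latency-based variant with boto3; the hosted zone ID, record name, ALB DNS names, and ALB hosted zone IDs are placeholders.

```python
import boto3

route53 = boto3.client("route53")

# Placeholder hosted zone, ALB DNS names, and per-Region ALB hosted zone IDs.
for region, alb_dns, alb_zone in [
    ("us-east-1", "assets-use1.elb.amazonaws.com", "Z35SXDOTRQ7X7K"),
    ("eu-west-1", "assets-euw1.elb.amazonaws.com", "Z32O12XQLNTSW2"),
]:
    route53.change_resource_record_sets(
        HostedZoneId="Z0000000EXAMPLE",
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "assets.example.com",
                "Type": "A",
                "SetIdentifier": region,
                "Region": region,  # latency-based routing to the closest Region
                "AliasTarget": {
                    "HostedZoneId": alb_zone,
                    "DNSName": alb_dns,
                    "EvaluateTargetHealth": True,  # fail over if the ALB is unhealthy
                },
            },
        }]},
    )
```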


Question # 9

A flood monitoring agency has deployed more than 10,000 water-level monitoring sensors.
Sensors send continuous data updates, and each update is less than 1 MB in size. The
agency has a fleet of on-premises application servers. These servers receive updates from
the sensors, convert the raw data into a human-readable format, and write the results to an
on-premises relational database server. Data analysts then use simple SQL queries to
monitor the data.
The agency wants to increase overall application availability and reduce the effort that is
required to perform maintenance tasks. These maintenance tasks, which include updates
and patches to the application servers, cause downtime. While an application server is
down, data is lost from sensors because the remaining servers cannot handle the entire
workload.
The agency wants a solution that optimizes operational overhead and costs. A solutions
architect recommends the use of AWS IoT Core to collect the sensor data. What else should the solutions architect recommend to meet these requirements?

A. Send the sensor data to Amazon Kinesis Data Firehose. Use an AWS Lambda function
to read the Kinesis Data Firehose data, convert it to .csv format, and insert it into an
Amazon Aurora MySQL DB instance. Instruct the data analysts to query the data directly
from the DB instance.

B. Send the sensor data to Amazon Kinesis Data Firehose. Use an AWS Lambda function
to read the Kinesis Data Firehose data, convert it to Apache Parquet format and save it to
an Amazon S3 bucket. Instruct the data analysts to query the data by using Amazon
Athena.

C. Send the sensor data to an Amazon Managed Service for Apache Flink (previously
known as Amazon Kinesis Data Analytics) application to convert the data to .csv format
and store it in an Amazon S3 bucket. Import the data into an Amazon Aurora MySQL DB
instance. Instruct the data analysts to query the data directly from the DB instance.

D. Send the sensor data to an Amazon Managed Service for Apache Flink (previously
known as Amazon Kinesis Data Analytics) application to convert the data to Apache
Parquet format and store it in an Amazon S3 bucket. Instruct the data analysts to query the
data by using Amazon Athena.


Question # 10

A company has many services running in its on-premises data center. The data center is
connected to AWS using AWS Direct Connect (DX) and an IPsec VPN. The service data is
sensitive, and connectivity cannot traverse the internet. The company wants to expand into a new market segment and begin offering its services to other companies that are using
AWS.
Which solution will meet these requirements?

A. Create a VPC Endpoint Service that accepts TCP traffic, host it behind a Network Load
Balancer, and make the service available over DX.

B. Create a VPC Endpoint Service that accepts HTTP or HTTPS traffic, host it behind an
Application Load Balancer, and make the service available over DX.

C. Attach an internet gateway to the VPC, and ensure that network access control and
security group rules allow the relevant inbound and outbound traffic.

D. Attach a NAT gateway to the VPC, and ensure that network access control and security
group rules allow the relevant inbound and outbound traffic.


Question # 11

A company wants to establish a dedicated connection between its on-premises
infrastructure and AWS. The company is setting up a 1 Gbps AWS Direct Connect
connection to its account VPC. The architecture includes a transit gateway and a Direct
Connect gateway to connect multiple VPCs and the on-premises infrastructure.
The company must connect to VPC resources over a transit VIF by using the Direct
Connect connection.
Which combination of steps will meet these requirements? (Select TWO.)

A. Update the 1 Gbps Direct Connect connection to 10 Gbps.

B. Advertise the on-premises network prefixes over the transit VIF.
C. Advertise the VPC prefixes from the Direct Connect gateway to the on-premises network
over the transit VIF.

D. Update the Direct Connect connection's MACsec encryption mode attribute to must
encrypt.

E. Associate a MACsec Connection Key Name-Connectivity Association Key (CKN/CAK)
pair with the Direct Connect connection.


Question # 12

A company hosts an intranet web application on Amazon EC2 instances behind an
Application Load Balancer (ALB). Currently, users authenticate to the application against
an internal user database.
The company needs to authenticate users to the application by using an existing AWS
Directory Service for Microsoft Active Directory directory. All users with accounts in the
directory must have access to the application.
Which solution will meet these requirements?

A. Create a new app client in the directory. Create a listener rule for the ALB. Specify the
authenticate-oidc action for the listener rule. Configure the listener rule with the appropriate
issuer, client ID and secret, and endpoint details for the Active Directory service. Configure
the new app client with the callback URL that the ALB provides.

B. Configure an Amazon Cognito user pool. Configure the user pool with a federated
identity provider (IdP) that has metadata from the directory. Create an app client. Associate
the app client with the user pool. Create a listener rule for the ALB. Specify the
authenticate-cognito action for the listener rule. Configure the listener rule to use the user
pool and app client.

C. Add the directory as a new IAM identity provider (IdP). Create a new IAM role that has
an entity type of SAML 2.0 federation. Configure a role policy that allows access to the
ALB. Configure the new role as the default authenticated user role for the IdP. Create a
listener rule for the ALB. Specify the authenticate-oidc action for the listener rule.

D. Enable AWS IAM Identity Center (AWS Single Sign-On). Configure the directory as an
external identity provider (IdP) that uses SAML. Use the automatic provisioning method.
Create a new IAM role that has an entity type of SAML 2.0 federation. Configure a role
policy that allows access to the ALB. Attach the new role to all groups. Create a listener
rule for the ALB. Specify the authenticate-cognito action for the listener rule.
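The authenticate-cognito listener action referenced in option B is configured on an ALB listener rule. A minimal boto3 sketch with placeholder ARNs, assuming the user pool, app client, and user pool domain already exist:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARNs for the listener, Cognito user pool, and target group.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/example/abc/def",
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/*"]}],
    Actions=[
        {
            "Type": "authenticate-cognito",  # authenticate before forwarding
            "Order": 1,
            "AuthenticateCognitoConfig": {
                "UserPoolArn": "arn:aws:cognito-idp:us-east-1:111122223333:userpool/example",
                "UserPoolClientId": "exampleclientid",
                "UserPoolDomain": "example-domain",
            },
        },
        {
            "Type": "forward",
            "Order": 2,
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/example/123",
        },
    ],
)
```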


Question # 13

A public retail web application uses an Application Load Balancer (ALB) in front of Amazon
EC2 instances running across multiple Availability Zones (AZs) in a Region backed by an
Amazon RDS MySQL Multi-AZ deployment. Target group health checks are configured to
use HTTP and pointed at the product catalog page. Auto Scaling is configured to maintain
the web fleet size based on the ALB health check.
Recently, the application experienced an outage. Auto Scaling continuously replaced the
instances during the outage. A subsequent investigation determined that the web server
metrics were within the normal range, but the database tier was experiencing high load,
resulting in severely elevated query response times.
Which of the following changes together would remediate these issues while improving
monitoring capabilities for the availability and functionality of the entire application stack for
future growth? (Select TWO.)

A. Configure read replicas for Amazon RDS MySQL and use the single reader endpoint in
the web application to reduce the load on the backend database tier.

B. Configure the target group health check to point at a simple HTML page instead of the
product catalog page and the Amazon Route 53 health check against the product page to
evaluate full application functionality. Configure Amazon CloudWatch alarms to notify
administrators when the site fails.

C. Configure the target group health check to use a TCP check of the Amazon EC2 web
server and the Amazon Route 53 health check against the product page to evaluate full
application functionality. Configure Amazon CloudWatch alarms to notify administrators
when the site fails.

D. Configure an Amazon CloudWatch alarm for Amazon RDS with an action to recover a
high-load, impaired RDS instance in the database tier.

E. Configure an Amazon ElastiCache cluster and place it between the web application and
RDS MySQL instances to reduce the load on the backend database tier.


Question # 14

A company needs to implement disaster recovery for a critical application that runs in a
single AWS Region. The application's users interact with a web frontend that is hosted on
Amazon EC2 instances behind an Application Load Balancer (ALB). The application writes
to an Amazon RDS for MySQL DB instance. The application also outputs processed
documents that are stored in an Amazon S3 bucket.
The company's finance team directly queries the database to run reports. During busy
periods, these queries consume resources and negatively affect application performance.
A solutions architect must design a solution that will provide resiliency during a disaster.
The solution must minimize data loss and must resolve the performance problems that
result from the finance team's queries.
Which solution will meet these requirements?

A. Migrate the database to Amazon DynamoDB and use DynamoDB global tables. Instruct
the finance team to query a global table in a separate Region. Create an AWS Lambda
function to periodically synchronize the contents of the original S3 bucket to a new S3
bucket in the separate Region. Launch EC2 instances and create an ALB in the separate
Region. Configure the application to point to the new S3 bucket.
B. Launch additional EC2 instances that host the application in a separate Region. Add the
additional instances to the existing ALB. In the separate Region, create a read replica of
the RDS DB instance. Instruct the finance team to run queries against the read replica. Use
S3 Cross-Region Replication (CRR) from the original S3 bucket to a new S3 bucket in the
separate Region. During a disaster, promote the read replica to a standalone DB instance.
Configure the application to point to the new S3 bucket and to the newly promoted read
replica.

C. Create a read replica of the RDS DB instance in a separate Region. Instruct the finance
team to run queries against the read replica. Create AMIs of the EC2 instances that host
the application frontend. Copy the AMIs to the separate Region. Use S3 Cross-Region
Replication (CRR) from the original S3 bucket to a new S3 bucket in the separate Region.
During a disaster, promote the read replica to a standalone DB instance. Launch EC2
instances from the AMIs and create an ALB to present the application to end users.
Configure the application to point to the new S3 bucket.

D. Create hourly snapshots of the RDS DB instance. Copy the snapshots to a separate
Region. Add an Amazon ElastiCache cluster in front of the existing RDS database. Create
AMIs of the EC2 instances that host the application frontend. Copy the AMIs to the separate
Region. Use S3 Cross-Region Replication (CRR) from the original S3 bucket to a new S3
bucket in the separate Region. During a disaster, restore the database from the latest
RDS snapshot. Launch EC2 instances from the AMIs and create an ALB to present the
application to end users. Configure the application to point to the new S3 bucket.


Question # 15

A company wants to use Amazon WorkSpaces in combination with thin client devices to
replace aging desktops. Employees use the desktops to access applications that work with
clinical trial data. Corporate security policy states that access to the applications must be restricted to company branch office locations only. The company is considering adding an
additional branch office in the next 6 months.
Which solution meets these requirements with the MOST operational efficiency?

A. Create an IP access control group rule with the list of public addresses from the branch
offices. Associate the IP access control group with the WorkSpaces directory.

B. Use AWS Firewall Manager to create a web ACL rule with an IPSet containing the list of public
addresses from the branch office locations. Associate the web ACL with the WorkSpaces
directory.

C. Use AWS Certificate Manager (ACM) to issue trusted device certificates to the machines
deployed in the branch office locations. Enable restricted access on the Workspaces
directory.

D. Create a custom WorkSpace image with Windows Firewall configured to restrict access
to the public addresses of the branch offices. Use the image to deploy the WorkSpaces.
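Option A's IP access control group is a native WorkSpaces feature. A minimal boto3 sketch with placeholder CIDR ranges and directory ID; accommodating the future branch office would be a single rule update rather than a redeployment:

```python
import boto3

workspaces = boto3.client("workspaces")

# Placeholder CIDR ranges for the branch offices' public egress addresses.
group = workspaces.create_ip_group(
    GroupName="branch-offices",
    GroupDesc="Public egress ranges for branch offices",
    UserRules=[
        {"ipRule": "203.0.113.0/24", "ruleDesc": "Branch office 1"},
        {"ipRule": "198.51.100.0/24", "ruleDesc": "Branch office 2"},
    ],
)

workspaces.associate_ip_groups(
    DirectoryId="d-0123456789",  # placeholder WorkSpaces directory ID
    GroupIds=[group["GroupId"]],
)
```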


Question # 16

A software development company has multiple engineers who are working remotely. The
company is running Active Directory Domain Services (AD DS) on an Amazon EC2
instance. The company's security policy states that all internal, nonpublic services that are
deployed in a VPC must be accessible through a VPN. Multi-factor authentication (MFA)
must be used for access to the VPN.
What should a solutions architect do to meet these requirements?

A. Create an AWS Site-to-Site VPN connection. Configure integration between the VPN and
AD DS. Use an Amazon WorkSpaces client with MFA support enabled to establish a VPN
connection.

B. Create an AWS Client VPN endpoint. Create an AD Connector directory for integration
with AD DS. Enable MFA for AD Connector. Use AWS Client VPN to establish a VPN
connection.

C. Create multiple AWS Site-to-Site VPN connections by using AWS VPN CloudHub.
Configure integration between AWS VPN CloudHub and AD DS. Use AWS Copilot to
establish a VPN connection.
D. Create an Amazon WorkLink endpoint. Configure integration between Amazon
WorkLink and AD DS. Enable MFA in Amazon WorkLink. Use AWS Client VPN to establish
a VPN connection.


Question # 17

A company needs to improve the reliability of its ticketing application. The application runs on an
Amazon Elastic Container Service (Amazon ECS) cluster. The company uses Amazon
CloudFront to serve the application. A single ECS service of the ECS cluster is the
CloudFront distribution's origin.
The application allows only a specific number of active users to enter a ticket purchasing
flow. These users are identified by an encrypted attribute in their JSON Web Token (JWT).
All other users are redirected to a waiting room module until there is available capacity for
purchasing.
The application is experiencing high loads. The waiting room module is working as
designed, but load on the waiting room is disrupting the application's availability. This
disruption is negatively affecting the application's ticket sale transactions.
Which solution will provide the MOST reliability for ticket sale transactions during periods of
high load?

A. Create a separate service in the ECS cluster for the waiting room. Use a separate
scaling configuration. Ensure that the ticketing service uses the JWT information and
appropriately forwards requests to the waiting room service.
B. Move the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster.
Split the waiting room module into a pod that is separate from the ticketing pod. Make the
ticketing pod part of a StatefulSet. Ensure that the ticketing pod uses the JWT information
and appropriately forwards requests to the waiting room pod.

C. Create a separate service in the ECS cluster for the waiting room. Use a separate
scaling configuration. Create a CloudFront function that inspects the JWT information and
appropriately forwards requests to the ticketing service or the waiting room service.

D. Move the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster.
Split the waiting room module into a pod that is separate from the ticketing pod. Use AWS
App Mesh by provisioning the App Mesh controller for Kubernetes. Enable mTLS
authentication and service-to-service authentication for communication between the
ticketing pod and the waiting room pod. Ensure that the ticketing pod uses the JWT
information and appropriately forwards requests to the waiting room pod.


Question # 18

A company is currently in the design phase of an application that will need an RPO of less
than 5 minutes and an RTO of less than 10 minutes. The solutions architecture team is
forecasting that the database will store approximately 10 TB of data. As part of the design, they are looking for a database solution that will provide the company with the ability to fail
over to a secondary Region.
Which solution will meet these business requirements at the LOWEST cost?

A. Deploy an Amazon Aurora DB cluster and take snapshots of the cluster every 5
minutes. Once a snapshot is complete, copy the snapshot to a secondary Region to serve
as a backup in the event of a failure.

B. Deploy an Amazon RDS instance with a cross-Region read replica in a secondary
Region. In the event of a failure, promote the read replica to become the primary.

C. Deploy an Amazon Aurora DB cluster in the primary Region and another in a secondary
Region. Use AWS DMS to keep the secondary Region in sync.

D. Deploy an Amazon RDS instance with a read replica in the same Region. In the event of
a failure, promote the read replica to become the primary.


Question # 19

A company is using an organization in AWS Organizations to manage AWS accounts. For
each new project, the company creates a new linked account. After the creation of a new
account, the root user signs in to the new account and creates a service request to increase the service quota for Amazon EC2 instances. A solutions architect needs to
automate this process.
Which solution will meet these requirements with the LEAST operational overhead?

A. Create an Amazon EventBridge rule to detect creation of a new account. Send the event
to an Amazon Simple Notification Service (Amazon SNS) topic that invokes an AWS
Lambda function. Configure the Lambda function to run the request-service-quota-increase
command to request a service quota increase for EC2 instances.

B. Create a Service Quotas request template in the management account. Configure the
desired service quota increases for EC2 instances.

C. Create an AWS Config rule in the management account to set the service quota for EC2
instances.

D. Create an Amazon EventBridge rule to detect creation of a new account. Send the event
to an Amazon Simple Notification Service (Amazon SNS) topic that invokes an AWS
Lambda function. Configure the Lambda function to run the create-case command to
request a service quota increase for EC2 instances.
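The request-service-quota-increase command named in option A corresponds to the Service Quotas RequestServiceQuotaIncrease API. A minimal boto3 sketch; the quota code shown is the one for running On-Demand Standard instances, but treat the code and value as placeholders for your own quota:

```python
import boto3

quotas = boto3.client("service-quotas")

# L-1216C47A: Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances.
quotas.request_service_quota_increase(
    ServiceCode="ec2",
    QuotaCode="L-1216C47A",   # placeholder quota code
    DesiredValue=256.0,       # placeholder target value (vCPUs)
)
```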


Question # 20

A company needs to gather data from an experiment in a remote location that does not
have internet connectivity. During the experiment, sensors that are connected to a local
network will generate 6 TB of data in a proprietary format over the course of 1 week. The
sensors can be configured to upload their data files to an FTP server periodically, but the
sensors do not have their own FTP server. The sensors also do not support other
protocols. The company needs to collect the data centrally and move the data to object
storage in the AWS Cloud as soon as possible after the experiment.
Which solution will meet these requirements?

A. Order an AWS Snowball Edge Compute Optimized device. Connect the device to the
local network. Configure AWS DataSync with a target bucket name, and load the data
over NFS to the device. After the experiment, return the device to AWS so that the data can
be loaded into Amazon S3.

B. Order an AWS Snowcone device, including an Amazon Linux 2 AMI. Connect the device
to the local network. Launch an Amazon EC2 instance on the device. Create a shell script that periodically downloads data from each sensor. After the experiment, return the device
to AWS so that the data can be loaded as an Amazon Elastic Block Store (Amazon EBS)
volume.

C. Order an AWS Snowcone device, including an Amazon Linux 2 AMI. Connect the device
to the local network. Launch an Amazon EC2 instance on the device. Install and configure
an FTP server on the EC2 instance. Configure the sensors to upload data to the EC2
instance. After the experiment, return the device to AWS so that the data can be loaded
into Amazon S3.

D. Order an AWS Snowcone device. Connect the device to the local network. Configure
the device to use Amazon FSx. Configure the sensors to upload data to the device.
Configure AWS DataSync on the device to synchronize the uploaded data with an Amazon
S3 bucket. Return the device to AWS so that the data can be loaded as an Amazon Elastic
Block Store (Amazon EBS) volume.


Question # 21

A company has Linux-based Amazon EC2 instances. Users must access the instances by
using SSH with EC2 SSH key pairs. Each machine requires a unique EC2 key pair.
The company wants to implement a key rotation policy that will, upon request,
automatically rotate all the EC2 key pairs and keep the keys in a securely encrypted place.
The company will accept less than 1 minute of downtime during key rotation.
Which solution will meet these requirements?

A. Store all the keys in AWS Secrets Manager. Define a Secrets Manager rotation
schedule to invoke an AWS Lambda function to generate new key pairs. Replace the public
keys on the EC2 instances. Update the private keys in Secrets Manager.

B. Store all the keys as strings in Parameter Store, a capability of AWS Systems Manager.
Define a Systems Manager maintenance window to invoke an AWS Lambda
function to generate new key pairs. Replace the public keys on the EC2 instances. Update the
private keys in Parameter Store.

C. Import the EC2 key pairs into AWS Key Management Service (AWS KMS). Configure
automatic key rotation for these key pairs. Create an Amazon EventBridge scheduled rule
to invoke an AWS Lambda function to initiate the key rotation in AWS KMS.

D. Add all the EC2 instances to Fleet Manager, a capability of AWS Systems Manager.
Define a Systems Manager maintenance window to issue a Systems Manager Run
Command document to generate new key pairs and to rotate the public keys on all the
instances in Fleet Manager.
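Option A's rotation schedule attaches a rotation Lambda to each secret. A minimal boto3 sketch with placeholder names; the rotation function itself, which would generate the new key pair, push the public key to the instance, and store the private key, is assumed to exist separately.

```python
import boto3

secrets = boto3.client("secretsmanager")

# Placeholder secret name and rotation Lambda ARN; the Lambda is assumed
# to implement the standard four-step Secrets Manager rotation handler.
secrets.rotate_secret(
    SecretId="ec2/ssh/web-01",
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:rotate-ssh-keys",
    RotationRules={"AutomaticallyAfterDays": 30},  # rotation cadence
)
```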


Question # 22

A company has a Windows-based desktop application that is packaged and deployed to the users' Windows machines. The company recently acquired another company that has
employees who primarily use machines with a Linux operating system. The acquiring
company has decided to migrate and rehost the Windows-based desktop application to
AWS.
All employees must be authenticated before they use the application. The acquiring
company uses Active Directory on premises but wants a simplified way to manage access
to the application on AWS for all the employees.
Which solution will rehost the application on AWS with the LEAST development effort?

A. Set up and provision an Amazon WorkSpaces virtual desktop for every employee.
Implement authentication by using Amazon Cognito identity pools. Instruct employees to
run the application from their provisioned WorkSpaces virtual desktops.

B. Create an Auto Scaling group of Windows-based Amazon EC2 instances. Join each
EC2 instance to the company's Active Directory domain. Implement authentication by using
the Active Directory that is running on premises. Instruct employees to run the application
by using a Windows remote desktop.

C. Use an Amazon AppStream 2.0 image builder to create an image that includes the
application and the required configurations. Provision an AppStream 2.0 On-Demand fleet
with a dynamic Fleet Auto Scaling policy for running the image. Implement authentication
by using AppStream 2.0 user pools. Instruct the employees to access the application by
starting browser-based AppStream 2.0 streaming sessions.

D. Refactor and containerize the application to run as a web-based application. Run the
application in Amazon Elastic Container Service (Amazon ECS) on AWS Fargate with step
scaling policies. Implement authentication by using Amazon Cognito user pools. Instruct the
employees to run the application from their browsers.


Question # 23

A company is developing an application that will display financial reports. The company
needs a solution that can store financial information that comes from multiple systems. The
solution must provide the reports through a web interface and must serve the data with less
than 500 milliseconds of latency to end users. The solution also must be highly available
and must have an RTO of 30 seconds.
Which solution will meet these requirements?

A. Use an Amazon Redshift cluster to store the data. Use a static website that is hosted on
Amazon S3 with backend APIs that are served by an Amazon Elastic Kubernetes Service
(Amazon EKS) cluster to provide the reports to the application.

B. Use Amazon S3 to store the data. Use Amazon Athena to provide the reports to the
application. Use AWS App Runner to serve the application to view the reports.

C. Use Amazon DynamoDB to store the data. Use an embedded Amazon QuickSight
dashboard with direct query datasets to provide the reports to the application.

D. Use Amazon Keyspaces (for Apache Cassandra) to store the data. Use AWS Elastic
Beanstalk to provide the reports to the application.


Question # 24

A company is planning to migrate an on-premises data center to AWS. The company
currently hosts the data center on Linux-based VMware VMs. A solutions architect must
collect information about network dependencies between the VMs. The information must
be in the form of a diagram that details host IP addresses, hostnames, and network
connection information.
Which solution will meet these requirements?

A. Use AWS Application Discovery Service. Select an AWS Migration Hub home AWS
Region. Install the AWS Application Discovery Agent on the on-premises servers for data
collection. Grant permissions to Application Discovery Service to use the Migration Hub
network diagrams.

B. Use the AWS Application Discovery Service Agentless Collector for server data
collection. Export the network diagrams from the AWS Migration Hub in .png format.

C. Install the AWS Application Migration Service agent on the on-premises servers for data
collection. Use AWS Migration Hub data in Workload Discovery on AWS to generate
network diagrams.

D. Install the AWS Application Migration Service agent on the on-premises servers for data
collection. Export data from AWS Migration Hub in .csv format into an Amazon CloudWatch
dashboard to generate network diagrams.


Question # 25

A company maintains information on premises in approximately 1 million .csv files that are
hosted on a VM. The data initially is 10 TB in size and grows at a rate of 1 TB each week.
The company needs to automate backups of the data to the AWS Cloud.
Backups of the data must occur daily. The company needs a solution that applies custom
filters to back up only a subset of the data that is located in designated source directories.
The company has set up an AWS Direct Connect connection.
Which solution will meet the backup requirements with the LEAST operational overhead?

A. Use the Amazon S3 CopyObject API operation with multipart upload to copy the existing
data to Amazon S3. Use the CopyObject API operation to replicate new data to Amazon S3
daily.

B. Create a backup plan in AWS Backup to back up the data to Amazon S3. Schedule the
backup plan to run daily.

C. Install the AWS DataSync agent as a VM that runs on the on-premises hypervisor.
Configure a DataSync task to replicate the data to Amazon S3 daily.

D. Use an AWS Snowball Edge device for the initial backup. Use AWS DataSync for
incremental backups to Amazon S3 daily.
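Option C's custom directory filters map to the Includes parameter of the DataSync CreateTask API, and the daily cadence to its Schedule parameter. A minimal boto3 sketch with placeholder location ARNs and directory patterns; the source location would be created against the on-premises share through the DataSync agent VM.

```python
import boto3

datasync = boto3.client("datasync")

# Placeholder location ARNs, created beforehand for the on-premises share
# (via the DataSync agent VM) and for the destination S3 bucket.
datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-source",
    DestinationLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-dest",
    Name="daily-csv-backup",
    # Custom filter: back up only the designated source directories.
    Includes=[{"FilterType": "SIMPLE_PATTERN", "Value": "/reports|/exports"}],
    Schedule={"ScheduleExpression": "rate(1 day)"},  # daily backups
)
```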


Question # 26

A company needs to migrate an on-premises SFTP site to AWS. The SFTP site currently
runs on a Linux VM. Uploaded files are made available to downstream applications through
an NFS share.
As part of the migration to AWS, a solutions architect must implement high availability. The
solution must provide external vendors with a set of static public IP addresses that the
vendors can allow. The company has set up an AWS Direct Connect connection between
its on-premises data center and its VPC.
Which solution will meet these requirements with the least operational overhead?

A. Create an AWS Transfer Family server. Configure an internet-facing VPC endpoint for
the Transfer Family server. Specify an Elastic IP address for each subnet. Configure the
Transfer Family server to place files into an Amazon Elastic File System (Amazon EFS)
file system that is deployed across multiple Availability Zones. Modify the configuration on
the downstream applications that access the existing NFS share to mount the EFS
endpoint instead.

B. Create an AWS Transfer Family server. Configure a publicly accessible endpoint for the
Transfer Family server. Configure the Transfer Family server to place files into an Amazon
Elastic File System (Amazon EFS) file system that is deployed across multiple Availability
Zones. Modify the configuration on the downstream applications that access the existing
NFS share to mount the EFS endpoint instead.

C. Use AWS Application Migration Service to migrate the existing Linux VM to an Amazon
EC2 instance. Assign an Elastic IP address to the EC2 instance. Mount an Amazon Elastic
File System (Amazon EFS) file system to the EC2 instance. Configure the SFTP server to
place files in the EFS file system. Modify the configuration on the downstream applications
that access the existing NFS share to mount the EFS endpoint instead.

D. Use AWS Application Migration Service to migrate the existing Linux VM to an AWS
Transfer Family server. Configure a publicly accessible endpoint for the Transfer Family
server. Configure the Transfer Family server to place files into an Amazon FSx for Lustre
file system that is deployed across multiple Availability Zones. Modify the configuration on
the downstream applications that access the existing NFS share to mount the FSx for
Lustre endpoint instead.


Question # 27

A company's factory and automation applications are running in a single VPC. More than 23
applications run on a combination of Amazon EC2, Amazon Elastic Container Service
(Amazon ECS), and Amazon RDS.
The company has software engineers spread across three teams. One of the three teams
owns each application, and each team is responsible for the cost and performance of all of
its applications. Team resources have tags that represent their application and team. The
teams use IAM access for daily activities.
The company needs to determine which costs on the monthly AWS bill are attributable to
each application or team. The company also must be able to create reports to compare
costs from the last 12 months and to help forecast costs for the next 12 months. A solutions
architect must recommend an AWS Billing and Cost Management solution that provides these cost reports.
Which combination of actions will meet these requirements? (Select THREE.)

A. Activate the user-defined cost allocation tags that represent the application and the
team.

B. Activate the AWS-generated cost allocation tags that represent the application and the
team.

C. Create a cost category for each application in Billing and Cost Management.

D. Activate IAM access to Billing and Cost Management.

E. Create a cost budget.

F. Enable Cost Explorer.
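Activating user-defined cost allocation tags (option A) can also be done programmatically through the Cost Explorer API, assuming the call runs in the management account. A minimal boto3 sketch, treating the tag keys as placeholders:

```python
import boto3

# The Cost Explorer client also manages cost allocation tag status.
ce = boto3.client("ce")

# Placeholder tag keys; these are the user-defined tags on team resources.
ce.update_cost_allocation_tags_status(
    CostAllocationTagsStatus=[
        {"TagKey": "application", "Status": "Active"},
        {"TagKey": "team", "Status": "Active"},
    ]
)
```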


Question # 28

A company's compliance audit reveals that some Amazon Elastic Block Store (Amazon
EBS) volumes that were created in an AWS account were not encrypted. A solutions
architect must implement a solution to encrypt all new EBS volumes at rest.
Which solution will meet this requirement with the LEAST effort?

A. Create an Amazon EventBridge rule to detect the creation of unencrypted EBS volumes.
Invoke an AWS Lambda function to delete noncompliant volumes.

B. Use AWS Audit Manager with data encryption.

C. Create an AWS Config rule to detect the creation of a new EBS volume. Encrypt the
volume by using AWS Systems Manager Automation.

D. Turn on EBS encryption by default in all AWS Regions.
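Option D's setting is a one-call, per-Region toggle. A minimal boto3 sketch that enables and verifies it across a placeholder list of Regions:

```python
import boto3

# Encryption-by-default is a per-Region setting, so repeat for each Region.
for region in ["us-east-1", "eu-west-1"]:  # placeholder Region list
    ec2 = boto3.client("ec2", region_name=region)
    ec2.enable_ebs_encryption_by_default()
    status = ec2.get_ebs_encryption_by_default()
    print(region, status["EbsEncryptionByDefault"])  # expect True
```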


Question # 29

A company is preparing to deploy an Amazon Elastic Kubernetes Service (Amazon EKS)
cluster for a workload. The company expects the cluster to support an
unpredictable number of stateless pods. Many of the pods will be created during a short
time period as the workload automatically scales the number of replicas that the workload
uses.
Which solution will MAXIMIZE node resilience?

A. Use a separate launch template to deploy the EKS control plane into a second cluster
that is separate from the workload node groups.

B. Update the workload node groups. Use a smaller number of node groups and larger
instances in the node groups.

C. Configure the Kubernetes Cluster Autoscaler to ensure that the compute capacity of the
workload node groups stays under provisioned.

D. Configure the workload to use topology spread constraints that are based on Availability
Zone.


Question # 30

A company wants to design a disaster recovery (DR) solution for an application that runs in
the company's data center. The application writes to an SMB file share and creates a copy
on a second file share. Both file shares are in the data center. The application uses two
types of files: metadata files and image files.
The company wants to store the copy on AWS. The company needs the ability to use SMB
to access the data from either the data center or AWS if a disaster occurs. The copy of the
data is rarely accessed but must be available within 5 minutes.
Which solution will meet these requirements MOST cost-effectively?

A. Deploy AWS Outposts with Amazon S3 storage. Configure a Windows Amazon EC2
instance on Outposts as a file server.

B. Deploy an Amazon FSx File Gateway. Configure an Amazon FSx for Windows File
Server Multi-AZ file system that uses SSD storage.

C. Deploy an Amazon S3 File Gateway. Configure the S3 File Gateway to use Amazon S3
Standard-Infrequent Access (S3 Standard-IA) for the metadata files and to use S3 Glacier
Deep Archive for the image files.

D. Deploy an Amazon S3 File Gateway. Configure the S3 File Gateway to use Amazon S3
Standard-Infrequent Access (S3 Standard-IA) for the metadata files and image files.


Question # 31

A solutions architect needs to improve an application that is hosted in the AWS Cloud. The
application uses an Amazon Aurora MySQL DB instance that is experiencing overloaded
connections. Most of the application's operations insert records into the database. The
application currently stores credentials in a text-based configuration file.
The solutions architect needs to implement a solution so that the application can handle the
current connection load. The solution must keep the credentials secure and must provide
the ability to rotate the credentials automatically on a regular basis.
Which solution will meet these requirements?

A. Deploy an Amazon RDS Proxy layer in front of the DB instance. Store the connection
credentials as a secret in AWS Secrets Manager.

B. Deploy an Amazon RDS Proxy layer in front of the DB instance. Store the connection
credentials in AWS Systems Manager Parameter Store.

C. Create an Aurora Replica. Store the connection credentials as a secret in AWS Secrets
Manager.

D. Create an Aurora Replica. Store the connection credentials in AWS Systems Manager
Parameter Store.
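Option A combines RDS Proxy (for pooling the overloaded connections) with a Secrets Manager secret (for secure, automatically rotatable credentials). A minimal boto3 sketch of the proxy side, with placeholder ARNs and subnet IDs; the IAM role is assumed to grant the proxy read access to the secret.

```python
import boto3

rds = boto3.client("rds")

# Placeholder ARNs: the secret holds the DB credentials, and the role
# lets the proxy read that secret.
rds.create_db_proxy(
    DBProxyName="aurora-writer-proxy",
    EngineFamily="MYSQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:app/db",
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-secret-access",
    VpcSubnetIds=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
)
```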


Question # 32

A company is migrating an on-premises application and a MySQL database to AWS. The
application processes highly sensitive data, and new data is constantly updated in the
database. The data must not be transferred over the internet. The company also must
encrypt the data in transit and at rest.
The database is 5 TB in size. The company already has created the database schema in
an Amazon RDS for MySQL DB instance. The company has set up a 1 Gbps AWS Direct Connect connection to AWS. The company also has set up a public VIF and a private VIF.
A solutions architect needs to design a solution that will migrate the data to AWS with the
least possible downtime.
Which solution will meet these requirements?

A. Perform a database backup. Copy the backup files to an AWS Snowball Edge Storage
Optimized device. Import the backup to Amazon S3. Use server-side encryption with
Amazon S3 managed encryption keys (SSE-S3) for encryption at rest. Use TLS for
encryption in transit. Import the data from Amazon S3 to the DB instance.

B. Use AWS Database Migration Service (AWS DMS) to migrate the data to AWS. Create
a DMS replication instance in a private subnet. Create VPC endpoints for AWS DMS.
Configure a DMS task to copy data from the on-premises database to the DB instance by
using full load plus change data capture (CDC). Use the AWS Key Management Service
(AWS KMS) default key for encryption at rest. Use TLS for encryption in transit.

C. Perform a database backup. Use AWS DataSync to transfer the backup files to Amazon
S3. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3) for
encryption at rest. Use TLS for encryption in transit. Import the data from Amazon S3 to the
DB instance.

D. Use Amazon S3 File Gateway. Set up a private connection to Amazon S3 by using AWS
PrivateLink. Perform a database backup. Copy the backup files to Amazon S3. Use server-side
encryption with Amazon S3 managed encryption keys (SSE-S3) for encryption at rest.
Use TLS for encryption in transit. Import the data from Amazon S3 to the DB instance.


Question # 33

A company is serving files to its customers through an SFTP server that is accessible over
the internet. The SFTP server is running on a single Amazon EC2 instance with an Elastic
IP address attached. Customers connect to the SFTP server through its Elastic IP address
and use SSH for authentication. The EC2 instance also has an attached security group that
allows access from all customer IP addresses.
A solutions architect must implement a solution to improve availability, minimize the
complexity of infrastructure management, and minimize the disruption to customers who
access files. The solution must not change the way customers connect.
Which solution will meet these requirements?

A. Disassociate the Elastic IP address from the EC2 instance. Create an Amazon S3 bucket
to be used for SFTP file hosting. Create an AWS Transfer Family server. Configure the
Transfer Family server with a publicly accessible endpoint. Associate the SFTP Elastic IP
address with the new endpoint. Point the Transfer Family server to the S3 bucket. Sync all
files from the SFTP server to the S3 bucket.

B. Disassociate the Elastic IP address from the EC2 instance. Create an Amazon S3 bucket
to be used for SFTP file hosting. Create an AWS Transfer Family server. Configure the
Transfer Family server with a VPC-hosted, internet-facing endpoint. Associate the SFTP
Elastic IP address with the new endpoint. Attach the security group with customer IP
addresses to the new endpoint. Point the Transfer Family server to the S3 bucket. Sync all
files from the SFTP server to the S3 bucket.

C. Disassociate the Elastic IP address from the EC2 instance. Create a new Amazon
Elastic File System (Amazon EFS) file system to be used for SFTP file hosting. Create an
AWS Fargate task definition to run an SFTP server. Specify the EFS file system as a mount
in the task definition. Create a Fargate service by using the task definition, and place a
Network Load Balancer (NLB) in front of the service. When configuring the service, attach
the security group with customer IP addresses to the tasks that run the SFTP server.
Associate the Elastic IP address with the NLB. Sync all files from the SFTP server to the
EFS file system.

D. Disassociate the Elastic IP address from the EC2 instance. Create a multi-attach
Amazon Elastic Block Store (Amazon EBS) volume to be used for SFTP file hosting.
Create a Network Load Balancer (NLB) with the Elastic IP address attached. Create an
Auto Scaling group with EC2 instances that run an SFTP server. Define in the Auto Scaling
group that launched instances should attach the new multi-attach EBS volume.
Configure the Auto Scaling group to automatically add instances behind the NLB. Configure
the Auto Scaling group to use the security group that allows customer IP addresses for the
EC2 instances that the Auto Scaling group launches. Sync all files from the SFTP server to
the new multi-attach EBS volume.


Question # 34

An online retail company hosts its stateful web-based application and MySQL database in
an on-premises data center on a single server. The company wants to increase its
customer base by conducting more marketing campaigns and promotions. In preparation,
the company wants to migrate its application and database to AWS to increase the
reliability of its architecture.
Which solution should provide the HIGHEST level of reliability?

A. Migrate the database to an Amazon RDS MySQL Multi-AZ DB instance. Deploy the
application in an Auto Scaling group on Amazon EC2 instances behind an Application Load
Balancer. Store sessions in Amazon Neptune.

B. Migrate the database to Amazon Aurora MySQL. Deploy the application in an Auto
Scaling group on Amazon EC2 instances behind an Application Load Balancer. Store
sessions in an Amazon ElastiCache for Redis replication group.

C. Migrate the database to Amazon DocumentDB (with MongoDB compatibility). Deploy
the application in an Auto Scaling group on Amazon EC2 instances behind a Network Load
Balancer. Store sessions in Amazon Kinesis Data Firehose.

D. Migrate the database to an Amazon RDS MariaDB Multi-AZ DB instance. Deploy the
application in an Auto Scaling group on Amazon EC2 instances behind an Application Load
Balancer. Store sessions in Amazon ElastiCache for Memcached.


Question # 35

A car rental company has built a serverless REST API to provide data to its mobile app.
The app consists of an Amazon API Gateway API with a Regional endpoint, AWS Lambda
functions, and an Amazon Aurora MySQL Serverless DB cluster. The company recently
opened the API to mobile apps of partners. A significant increase in the number of requests
resulted, causing sporadic database memory errors. Analysis of the API traffic indicates
that clients are making multiple HTTP GET requests for the same queries in a short period
of time. Traffic is concentrated during business hours, with spikes around holidays and
other events.
The company needs to improve its ability to support the additional usage while minimizing
the increase in costs associated with the solution.
Which strategy meets these requirements?

A. Convert the API Gateway Regional endpoint to an edge-optimized endpoint. Enable
caching in the production stage.

B. Implement an Amazon ElastiCache for Redis cache to store the results of the database
calls. Modify the Lambda functions to use the cache.

C. Modify the Aurora Serverless DB cluster configuration to increase the maximum amount
of available memory.

D. Enable throttling in the API Gateway production stage. Set the rate and burst values to
limit the incoming calls.


Question # 36

A company has a web application that securely uploads pictures and videos to an Amazon
S3 bucket. The company requires that only authenticated users are allowed to post
content. The application generates a presigned URL that is used to upload objects through
a browser interface. Most users are reporting slow upload times for objects larger than 100
MB.
What can a Solutions Architect do to improve the performance of these uploads while
ensuring only authenticated users are allowed to post content?

A. Set up an Amazon API Gateway with an edge-optimized API endpoint that has a
resource as an S3 service proxy. Configure the PUT method for this resource to expose
the S3 PutObject operation. Secure the API Gateway using a COGNITO_USER_POOLS
authorizer. Have the browser interface use API Gateway instead of the presigned URL to
upload objects.

B. Set up an Amazon API Gateway with a regional API endpoint that has a resource as an
S3 service proxy. Configure the PUT method for this resource to expose the S3 PutObject
operation. Secure the API Gateway using an AWS Lambda authorizer. Have the browser
interface use API Gateway instead of the presigned URL to upload objects.

C. Enable an S3 Transfer Acceleration endpoint on the S3 bucket. Use the endpoint when
generating the presigned URL. Have the browser interface upload the objects to this URL
using the S3 multipart upload API.

D. Configure an Amazon CloudFront distribution for the destination S3 bucket. Enable PUT
and POST methods for the CloudFront cache behavior. Update the CloudFront origin to
use an origin access identity (OAI). Give the OAI user s3:PutObject permissions in the bucket policy. Have the browser interface upload objects using the CloudFront distribution.
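
For reference, enabling S3 Transfer Acceleration on a bucket and generating a presigned URL that targets the accelerate endpoint can be sketched as follows with Boto3; the bucket and key names are placeholders:

    import boto3
    from botocore.config import Config

    # Clients created with this config sign URLs against the s3-accelerate endpoint.
    s3 = boto3.client('s3', config=Config(s3={'use_accelerate_endpoint': True}))

    # One-time bucket setting; the bucket name is a placeholder.
    s3.put_bucket_accelerate_configuration(
        Bucket='example-upload-bucket',
        AccelerateConfiguration={'Status': 'Enabled'},
    )

    # The presigned URL is still tied to the signer's credentials, so only
    # authenticated users receive one.
    url = s3.generate_presigned_url(
        'put_object',
        Params={'Bucket': 'example-upload-bucket', 'Key': 'uploads/video.mp4'},
        ExpiresIn=3600,
    )
    print(url)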


Question # 37

A company has a website that runs on four Amazon EC2 instances that are behind an
Application Load Balancer (ALB). When the ALB detects that an EC2 instance is no longer
available, an Amazon CloudWatch alarm enters the ALARM state. A member of the
company's operations team then manually adds a new EC2 instance behind the ALB.
A solutions architect needs to design a highly available solution that automatically handles
the replacement of EC2 instances. The company needs to minimize downtime during the
switch to the new solution.
Which set of steps should the solutions architect take to meet these requirements?

A. Delete the existing ALB. Create an Auto Scaling group that is configured to handle the
web application traffic. Attach a new launch template to the Auto Scaling group. Create a
new ALB. Attach the Auto Scaling group to the new ALB. Attach the existing EC2 instances
to the Auto Scaling group.

B. Create an Auto Scaling group that is configured to handle the web application traffic.
Attach a new launch template to the Auto Scaling group. Attach the Auto Scaling group to
the existing ALB. Attach the existing EC2 instances to the Auto Scaling group.

C. Delete the existing ALB and the EC2 instances. Create an Auto Scaling group that is
configured to handle the web application traffic. Attach a new launch template to the Auto
Scaling group. Create a new ALB. Attach the Auto Scaling group to the new ALB. Wait for
the Auto Scaling group to launch the minimum number of EC2 instances.

D. Create an Auto Scaling group that is configured to handle the web application traffic. Attach a new launch template to the Auto Scaling group. Attach the Auto Scaling group to
the existing ALB. Wait for the existing ALB to register the existing EC2 instances with the
Auto Scaling group.
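
The low-downtime pattern here is to create the Auto Scaling group against the existing ALB's target group and then adopt the running instances, rather than deleting anything. A rough Boto3 sketch under that assumption; every name, ARN, and ID below is a placeholder:

    import boto3

    autoscaling = boto3.client('autoscaling')
    # Registering the group with the existing target group keeps traffic
    # flowing through the current ALB with no cutover.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName='web-asg',
        LaunchTemplate={'LaunchTemplateName': 'web-lt', 'Version': '$Latest'},
        MinSize=4, MaxSize=8, DesiredCapacity=4,
        VPCZoneIdentifier='subnet-aaaa1111,subnet-bbbb2222',
        TargetGroupARNs=['arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/abc123'],
    )
    # Bring the four running instances under Auto Scaling management so they
    # are replaced automatically if they fail.
    autoscaling.attach_instances(
        AutoScalingGroupName='web-asg',
        InstanceIds=['i-1111', 'i-2222', 'i-3333', 'i-4444'],
    )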


Question # 38

A company is deploying a third-party firewall appliance solution from AWS Marketplace to
monitor and protect traffic that leaves the company's AWS environments. The company
wants to deploy this appliance into a shared services VPC and route all outbound
internet-bound traffic through the appliances.
A solutions architect needs to recommend a deployment method that prioritizes reliability
and minimizes failover time between firewall appliances within a single AWS Region. The
company has set up routing from the shared services VPC to other VPCs.
Which steps should the solutions architect recommend to meet these requirements?
(Select THREE.)

A. Deploy two firewall appliances into the shared services VPC, each in a separate
Availability Zone.

B. Create a new Network Load Balancer in the shared services VPC. Create a new target
group, and attach it to the new Network Load Balancer. Add each of the firewall appliance
instances to the target group.

C. Create a new Gateway Load Balancer in the shared services VPC. Create a new target
group, and attach it to the new Gateway Load Balancer. Add each of the firewall appliance
instances to the target group.

D. Create a VPC interface endpoint. Add a route to the route table in the shared services
VPC. Designate the new endpoint as the next hop for traffic that enters the shared services
VPC from other VPCs.

E. Deploy two firewall appliances into the shared services VPC, each in the same
Availability Zone.

F. Create a VPC Gateway Load Balancer endpoint. Add a route to the route table in the
shared services VPC. Designate the new endpoint as the next hop for traffic that enters the
shared services VPC from other VPCs.
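
A Gateway Load Balancer endpoint can serve as the next hop in a route table in the same way a NAT gateway would, which is how the appliance fleet is placed in the traffic path. A minimal Boto3 sketch, assuming placeholder resource IDs and a placeholder GWLB endpoint service name:

    import boto3

    ec2 = boto3.client('ec2')
    endpoint = ec2.create_vpc_endpoint(
        VpcEndpointType='GatewayLoadBalancer',
        VpcId='vpc-0123456789abcdef0',  # placeholder shared services VPC
        ServiceName='com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0',
        SubnetIds=['subnet-0123456789abcdef0'],
    )
    # Send internet-bound traffic entering this route table through the
    # Gateway Load Balancer endpoint, which fronts the firewall appliances.
    ec2.create_route(
        RouteTableId='rtb-0123456789abcdef0',
        DestinationCidrBlock='0.0.0.0/0',
        VpcEndpointId=endpoint['VpcEndpoint']['VpcEndpointId'],
    )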


Question # 39

An ecommerce company runs an application on AWS. The application has an Amazon API
Gateway API that invokes an AWS Lambda function. The data is stored in an Amazon RDS
for PostgreSQL DB instance.
During the company's most recent flash sale, a sudden increase in API calls negatively
affected the application's performance. A solutions architect reviewed the Amazon
CloudWatch metrics during that time and noticed a significant increase in Lambda
invocations and database connections. The CPU utilization also was high on the DB
instance.
What should the solutions architect recommend to optimize the application's performance?

A. Increase the memory of the Lambda function. Modify the Lambda function to close the
database connections when the data is retrieved.

B. Add an Amazon ElastiCache for Redis cluster to store the frequently accessed data
from the RDS database.

C. Create an RDS proxy by using the Lambda console. Modify the Lambda function to use
the proxy endpoint.

D. Modify the Lambda function to connect to the database outside of the function's handler.
Check for an existing database connection before creating a new connection.
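
Pooling connections through RDS Proxy and reusing a connection across invocations are complementary patterns for this symptom. A rough sketch of a Lambda handler written against a proxy endpoint, assuming the psycopg2 driver is packaged with the function and that the environment variables shown are defined; none of these names come from the question itself:

    import os

    import psycopg2  # assumes the driver is bundled in the deployment package

    # Created once per execution environment and reused across invocations;
    # the proxy multiplexes these onto a pooled set of database connections.
    connection = psycopg2.connect(
        host=os.environ['PROXY_ENDPOINT'],  # hypothetical RDS Proxy endpoint
        dbname=os.environ['DB_NAME'],
        user=os.environ['DB_USER'],
        password=os.environ['DB_PASSWORD'],
        connect_timeout=5,
    )

    def handler(event, context):
        with connection.cursor() as cur:
            cur.execute('SELECT 1')  # placeholder query
            return cur.fetchone()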


Question # 40

A company hosts a software as a service (SaaS) solution on AWS. The solution has an
Amazon API Gateway API that serves an HTTPS endpoint. The API uses AWS Lambda
functions for compute. The Lambda functions store data in an Amazon Aurora Serverless
v1 database.
The company used the AWS Serverless Application Model (AWS SAM) to deploy the
solution. The solution extends across multiple Availability Zones and has no disaster
recovery (DR) plan.
A solutions architect must design a DR strategy that can recover the solution in another
AWS Region. The solution has an RTO of 5 minutes and an RPO of 1 minute.
What should the solutions architect do to meet these requirements?

A. Create a read replica of the Aurora Serverless v1 database in the target Region. Use
AWS SAM to create a runbook to deploy the solution to the target Region. Promote the
read replica to primary in case of disaster.

B. Change the Aurora Serverless v1 database to a standard Aurora MySQL global
database that extends across the source Region and the target Region. Use AWS SAM to
create a runbook to deploy the solution to the target Region.

C. Create an Aurora Serverless v1 DB cluster that has multiple writer instances in the target
Region. Launch the solution in the target Region. Configure the two Regional solutions to
work in an active-passive configuration.

D. Change the Aurora Serverless v1 database to a standard Aurora MySQL global
database that extends across the source Region and the target Region. Launch the
solution in the target Region. Configure the two Regional solutions to work in an
active-passive configuration.


Question # 41

A company is deploying a new cluster for big data analytics on AWS. The cluster will run
across many Linux Amazon EC2 instances that are spread across multiple Availability
Zones.
All of the nodes in the cluster must have read and write access to common underlying file
storage. The file storage must be highly available, must be resilient, must be compatible
with the Portable Operating System Interface (POSIX), and must accommodate high levels
of throughput.
Which storage solution will meet these requirements?

A. Provision an AWS Storage Gateway file gateway NFS file share that is attached to an
Amazon S3 bucket. Mount the NFS file share on each EC2 instance in the cluster.

B. Provision a new Amazon Elastic File System (Amazon EFS) file system that uses
General Purpose performance mode. Mount the EFS file system on each EC2 instance in
the cluster.

C. Provision a new Amazon Elastic Block Store (Amazon EBS) volume that uses the io2
volume type. Attach the EBS volume to all of the EC2 instances in the cluster.

D. Provision a new Amazon Elastic File System (Amazon EFS) file system that uses Max
I/O performance mode. Mount the EFS file system on each EC2 instance in the cluster.


Question # 42

A company deploys a new web application. As part of the setup, the company configures
AWS WAF to log to Amazon S3 through Amazon Kinesis Data Firehose. The company
develops an Amazon Athena query that runs once daily to return AWS WAF log data from
the previous 24 hours. The volume of daily logs is constant. However, over time, the same
query is taking more time to run.
A solutions architect needs to design a solution to prevent the query time from continuing to
increase. The solution must minimize operational overhead.
Which solution will meet these requirements?

A. Create an AWS Lambda function that consolidates each day's AWS WAF logs into one
log file.

B. Reduce the amount of data scanned by configuring AWS WAF to send logs to a
different S3 bucket each day.

C. Update the Kinesis Data Firehose configuration to partition the data in Amazon S3 by
date and time. Create external tables for Amazon Redshift. Configure Amazon Redshift
Spectrum to query the data source.

D. Modify the Kinesis Data Firehose configuration and Athena table definition to partition
the data by date and time. Change the Athena query to view the relevant partitions.


Question # 43

A solutions architect has an operational workload deployed on Amazon EC2 instances in
an Auto Scaling group. The VPC architecture spans two Availability Zones (AZs) with a
subnet in each that the Auto Scaling group is targeting. The VPC is connected to an
on-premises environment, and connectivity cannot be interrupted. The maximum size of the
Auto Scaling group is 20 instances in service. The VPC IPv4 addressing is as follows:
VPC CIDR: 10.0.0.0/23
AZ1 subnet CIDR: 10.0.0.0/24
AZ2 subnet CIDR: 10.0.1.0/24
Since deployment, a third AZ has become available in the Region. The solutions architect
wants to adopt the new AZ without adding additional IPv4 address space and without
service downtime. Which solution will meet these requirements?

A. Update the Auto Scaling group to use the AZ2 subnet only. Delete and re-create the AZ1
subnet using half the previous address space. Adjust the Auto Scaling group to also use the
new AZ1 subnet. When the instances are healthy, adjust the Auto Scaling group to use the
AZ1 subnet only. Remove the current AZ2 subnet. Create a new AZ2 subnet using the
second half of the address space from the original AZ1 subnet. Create a new AZ3 subnet
using half the original AZ2 subnet address space, then update the Auto Scaling group to
target all three new subnets.

B. Terminate the EC2 instances in the AZ1 subnet. Delete and re-create the AZ1 subnet
using half the address space. Update the Auto Scaling group to use this new subnet.
Repeat this for the second AZ. Define a new subnet in AZ3, then update the Auto Scaling
group to target all three new subnets.

C. Create a new VPC with the same IPv4 address space and define three subnets, with
one for each AZ. Update the existing Auto Scaling group to target the new subnets in the
new VPC.

D. Update the Auto Scaling group to use the AZ2 subnet only. Update the AZ1 subnet to
have half the previous address space. Adjust the Auto Scaling group to also use the AZ1
subnet again. When the instances are healthy, adjust the Auto Scaling group to use the
AZ1 subnet only. Update the current AZ2 subnet and assign the second half of the address
space from the original AZ1 subnet. Create a new AZ3 subnet using half the original AZ2
subnet address space, then update the Auto Scaling group to target all three new subnets.
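
The subnetting arithmetic behind these options can be checked with Python's ipaddress module: each /24 splits cleanly into two /25 subnets, and a /25 still leaves 123 usable addresses after the 5 that AWS reserves per subnet, comfortably above the 20-instance maximum:

    import ipaddress

    az1 = ipaddress.ip_network('10.0.0.0/24')
    az2 = ipaddress.ip_network('10.0.1.0/24')

    new_az1, new_az2 = az1.subnets(new_prefix=25)  # 10.0.0.0/25 and 10.0.0.128/25
    new_az3, _spare = az2.subnets(new_prefix=25)   # 10.0.1.0/25 (plus a spare /25)

    # AWS reserves 5 addresses in every subnet, so each /25 offers 123 hosts.
    for subnet in (new_az1, new_az2, new_az3):
        print(subnet, 'usable addresses:', subnet.num_addresses - 5)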


Question # 44

A data analytics company has an Amazon Redshift cluster that consists of several reserved
nodes. The cluster is experiencing unexpected bursts of usage because a team of
employees is compiling a deep audit analysis report. The queries to generate the report are
complex read queries and are CPU intensive.
Business requirements dictate that the cluster must be able to service read and write
queries at all times. A solutions architect must devise a solution that accommodates the
bursts of usage.
Which solution meets these requirements MOST cost-effectively?

A. Provision an Amazon EMR cluster. Offload the complex data processing tasks.

B. Deploy an AWS Lambda function to add capacity to the Amazon Redshift cluster by
using a classic resize operation when the cluster's CPU metrics in Amazon CloudWatch
reach 80%.

C. Deploy an AWS Lambda function to add capacity to the Amazon Redshift cluster by
using an elastic resize operation when the cluster's CPU metrics in Amazon CloudWatch
reach 80%.

D. Turn on the Concurrency Scaling feature for the Amazon Redshift cluster.


Question # 45

An online survey company runs its application in the AWS Cloud. The application is
distributed and consists of microservices that run in an automatically scaled Amazon
Elastic Container Service (Amazon ECS) cluster. The ECS cluster is a target for an
Application Load Balancer (ALB). The ALB is a custom origin for an Amazon CloudFront
distribution.
The company has a survey that contains sensitive data. The sensitive data must be
encrypted when it moves through the application. The application's data-handling
microservice is the only microservice that should be able to decrypt the data.
Which solution will meet these requirements?

A. Create a symmetric AWS Key Management Service (AWS KMS) key that is dedicated to
the data-handling microservice. Create a field-level encryption profile and a configuration.
Associate the KMS key and the configuration with the CloudFront cache behavior.

B. Create an RSA key pair that is dedicated to the data-handling microservice. Upload the
public key to the CloudFront distribution. Create a field-level encryption profile and a
configuration. Add the configuration to the CloudFront cache behavior.

C. Create a symmetric AWS Key Management Service (AWS KMS) key that is dedicated to
the data-handling microservice. Create a Lambda@Edge function. Program the function to
use the KMS key to encrypt the sensitive data.

D. Create an RSA key pair that is dedicated to the data-handling microservice. Create a
Lambda@Edge function. Program the function to use the private key of the RSA key pair to
encrypt the sensitive data.


Question # 46

A company uses an organization in AWS Organizations to manage the company's AWS
accounts. The company uses AWS CloudFormation to deploy all infrastructure. A finance
team wants to build a chargeback model. The finance team asked each business unit to tag
resources by using a predefined list of project values.
When the finance team used the AWS Cost and Usage Report in AWS Cost Explorer and
filtered based on project, the team noticed noncompliant project values. The company
wants to enforce the use of project tags for new resources.
Which solution will meet these requirements with the LEAST effort?

A. Create a tag policy that contains the allowed project tag values in the organization's
management account. Create an SCP that denies the cloudformation:CreateStack API
operation unless a project tag is added. Attach the SCP to each OU.

B. Create a tag policy that contains the allowed project tag values in each OU. Create an
SCP that denies the cloudformation:CreateStack API operation unless a project tag is
added. Attach the SCP to each OU.

C. Create a tag policy that contains the allowed project tag values in the AWS management
account. Create an IAM policy that denies the cloudformation:CreateStack API operation
unless a project tag is added. Assign the policy to each user.

D. Use AWS Service Catalog to manage the CloudFormation stacks as products. Use a
TagOptions library to control project tag values. Share the portfolio with all OUs that are in
the organization.
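
The division of labor in these options is that a tag policy standardizes the allowed tag values while an SCP enforces that the tag is present at all. As a sketch, an SCP of that shape created with Boto3 might look like the following; the policy name is hypothetical, and the Null condition matches requests that carry no project tag:

    import json

    import boto3

    # Denies CloudFormation stack creation unless a "project" tag accompanies
    # the request; "Null": "true" means the tag key is absent.
    scp = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyCreateStackWithoutProjectTag",
            "Effect": "Deny",
            "Action": "cloudformation:CreateStack",
            "Resource": "*",
            "Condition": {"Null": {"aws:RequestTag/project": "true"}},
        }],
    }

    organizations = boto3.client('organizations')
    organizations.create_policy(
        Name='require-project-tag',  # hypothetical policy name
        Description='Deny untagged CloudFormation stacks',
        Type='SERVICE_CONTROL_POLICY',
        Content=json.dumps(scp),
    )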


Question # 47

A company is running a serverless application that consists of several AWS Lambda
functions and Amazon DynamoDB tables. The company has created new functionality that
requires the Lambda functions to access an Amazon Neptune DB cluster. The Neptune DB
cluster is located in three subnets in a VPC.
Which of the possible solutions will allow the Lambda functions to access the Neptune DB
cluster and DynamoDB tables? (Select TWO.)

A. Create three public subnets in the Neptune VPC, and route traffic through an internet
gateway. Host the Lambda functions in the three new public subnets.

B. Create three private subnets in the Neptune VPC, and route internet traffic through a
NAT gateway. Host the Lambda functions in the three new private subnets.

C. Host the Lambda functions outside the VPC. Update the Neptune security group to allow
access from the IP ranges of the Lambda functions.

D. Host the Lambda functions outside the VPC. Create a VPC endpoint for the Neptune
database, and have the Lambda functions access Neptune over the VPC endpoint.

E. Create three private subnets in the Neptune VPC. Host the Lambda functions in the
three new isolated subnets. Create a VPC endpoint for DynamoDB, and route DynamoDB
traffic to the VPC endpoint.


Question # 48

A company is running multiple workloads in the AWS Cloud. The company has separate
units for software development. The company uses AWS Organizations and federation with
SAML to give permissions to developers to manage resources in their AWS accounts. The
development units each deploy their production workloads into a common production
account.
Recently, an incident occurred in the production account in which members of a
development unit terminated an EC2 instance that belonged to a different development
unit. A solutions architect must create a solution that prevents a similar incident from
happening in the future. The solution also must allow developers the possibility to manage
the instances used for their workloads.
Which strategy will meet these requirements?

A. Create separate OUs in AWS Organizations for each development unit. Assign the
created OUs to the company AWS accounts. Create separate SCPs with a deny action and
a StringNotEquals condition for the DevelopmentUnit resource tag that matches the
development unit name. Assign the SCP to the corresponding OU.

B. Pass an attribute for DevelopmentUnit as an AWS Security Token Service (AWS STS)
session tag during SAML federation. Update the IAM policy for the developers' assumed
IAM role with a deny action and a StringNotEquals condition for the DevelopmentUnit
resource tag and aws:PrincipalTag/DevelopmentUnit.

C. Pass an attribute for DevelopmentUnit as an AWS Security Token Service (AWS STS)
session tag during SAML federation. Create an SCP with an allow action and a
StringEquals condition for the DevelopmentUnit resource tag and
aws:PrincipalTag/DevelopmentUnit. Assign the SCP to the root OU.

D. Create separate IAM policies for each development unit. For every IAM policy, add an
allow action and a StringEquals condition for the DevelopmentUnit resource tag and the
development unit name. During SAML federation, use AWS Security Token Service (AWS
STS) to assign the IAM policy and match the development unit name to the assumed IAM
role.
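
The session-tag pattern in options B and C compares a tag on the resource with a tag on the calling principal. As a sketch, an identity-based deny statement of that shape might look like the following; the EC2 actions listed are illustrative, not taken from the question:

    # Deny destructive instance actions whenever the instance's DevelopmentUnit
    # tag does not match the DevelopmentUnit session tag passed during SAML
    # federation.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": ["ec2:TerminateInstances", "ec2:StopInstances"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "ec2:ResourceTag/DevelopmentUnit": "${aws:PrincipalTag/DevelopmentUnit}"
                }
            },
        }],
    }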


Question # 49

A company has an organization in AWS Organizations that includes a separate AWS
account for each of the company's departments. Application teams from different
departments develop and deploy solutions independently.
The company wants to reduce compute costs and manage costs appropriately across
departments. The company also wants to improve visibility into billing for individual departments. The company does not want to lose operational flexibility when the company
selects compute resources.
Which solution will meet these requirements?

A. Use AWS Budgets for each department. Use Tag Editor to apply tags to appropriate
resources. Purchase EC2 Instance Savings Plans.

B. Configure AWS Organizations to use consolidated billing. Implement a tagging strategy
that identifies departments. Use SCPs to apply tags to appropriate resources. Purchase
EC2 Instance Savings Plans.

C. Configure AWS Organizations to use consolidated billing. Implement a tagging strategy
that identifies departments. Use Tag Editor to apply tags to appropriate resources.
Purchase Compute Savings Plans.

D. Use AWS Budgets for each department. Use SCPs to apply tags to appropriate
resources. Purchase Compute Savings Plans.


Question # 50

A company is developing a web application that runs on Amazon EC2 instances in an Auto
Scaling group behind a public-facing Application Load Balancer (ALB). Only users from a
specific country are allowed to access the application. The company needs the ability to log
the access requests that have been blocked. The solution should require the least possible
maintenance.
Which solution meets these requirements?

A. Create an IPSet containing a list of IP ranges that belong to the specified country.
Create an AWS WAF web ACL. Configure a rule to block any requests that do not originate
from an IP range in the IPSet. Associate the rule with the web ACL. Associate the web ACL
with the ALB.

B. Create an AWS WAF web ACL. Configure a rule to block any requests that do not
originate from the specified country. Associate the rule with the web ACL. Associate the
web ACL with the ALB.

C. Configure AWS Shield to block any requests that do not originate from the specified
country. Associate AWS Shield with the ALB.

D. Create a security group rule that allows ports 80 and 443 from IP ranges that belong to
the specified country. Associate the security group with the ALB.


Question # 51

A company is migrating to the cloud. It wants to evaluate the configurations of virtual
machines in its existing data center environment to ensure that it can size new Amazon
EC2 instances accurately. The company wants to collect metrics, such as CPU. memory,
and disk utilization, and it needs an inventory of what processes are running on each
instance. The company would also like to monitor network connections to map
communications between servers.
Which would enable the collection of this data MOST cost-effectively?

A. Use AWS Application Discovery Service and deploy the data collection agent to each
virtual machine in the data center.

B. Configure the Amazon CloudWatch agent on all servers within the local environment
and publish metrics to Amazon CloudWatch Logs.

C. Use AWS Application Discovery Service and enable agentless discovery in the existing
virtualization environment.

D. Enable AWS Application Discovery Service in the AWS Management Console and
configure the corporate firewall to allow scans over a VPN.


Question # 52

A company uses AWS Organizations to manage a multi-account structure. The company
has hundreds of AWS accounts and expects the number of accounts to increase. The
company is building a new application that uses Docker images. The company will push
the Docker images to Amazon Elastic Container Registry (Amazon ECR). Only accounts
that are within the company's organization should have
access to the images.
The company has a CI/CD process that runs frequently. The company wants to retain all
the tagged images. However, the company wants to retain only the five most recent untagged images.
Which solution will meet these requirements with the LEAST operational overhead?

A. Create a private repository in Amazon ECR. Create a permissions policy for the
repository that allows only required ECR operations. Include a condition to allow the ECR
operations if the value of the aws:PrincipalOrgID condition key is equal to the ID of the
company's organization. Add a lifecycle rule to the ECR repository that deletes all
untagged images over the count of five.

B. Create a public repository in Amazon ECR. Create an IAM role in the ECR account. Set
permissions so that any account can assume the role if the value of the aws:PrincipalOrgID
condition key is equal to the ID of the company's organization. Add a lifecycle rule to the
ECR repository that deletes all untagged images over the count of five.

C. Create a private repository in Amazon ECR. Create a permissions policy for the
repository that includes only required ECR operations. Include a condition to allow the ECR
operations for all account IDs in the organization. Schedule a daily Amazon EventBridge
rule to invoke an AWS Lambda function that deletes all untagged images over the count of
five.

D. Create a public repository in Amazon ECR. Configure Amazon ECR to use an interface
VPC endpoint with an endpoint policy that includes the required permissions for images
that the company needs to pull. Include a condition to allow the ECR operations for all
account IDs in the company's organization. Schedule a daily Amazon EventBridge rule to
invoke an AWS Lambda function that deletes all untagged images over the count of five.
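
The untagged-image retention requirement maps directly onto an ECR lifecycle policy, which runs inside the service with no scheduled Lambda function. A minimal Boto3 sketch with a placeholder repository name:

    import json

    import boto3

    ecr = boto3.client('ecr')

    # Expire untagged images beyond the five most recent; tagged images are
    # never selected by this rule, so they are all retained.
    lifecycle_policy = {
        "rules": [{
            "rulePriority": 1,
            "description": "Keep only the five most recent untagged images",
            "selection": {
                "tagStatus": "untagged",
                "countType": "imageCountMoreThan",
                "countNumber": 5,
            },
            "action": {"type": "expire"},
        }]
    }
    ecr.put_lifecycle_policy(
        repositoryName='app-images',  # placeholder repository name
        lifecyclePolicyText=json.dumps(lifecycle_policy),
    )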


Question # 53

A company wants to send data from its on-premises systems to Amazon S3 buckets. The
company created the S3 buckets in three different accounts. The company must send the
data privately without the data traveling across the internet. The company has no existing
dedicated connectivity to AWS.
Which combination of steps should a solutions architect take to meet these requirements?
(Select TWO.)

A. Establish a networking account in the AWS Cloud. Create a private VPC in the
networking account. Set up an AWS Direct Connect connection with a private VIF between
the on-premises environment and the private VPC.

B. Establish a networking account in the AWS Cloud. Create a private VPC in the
networking account. Set up an AWS Direct Connect connection with a public VIF between
the on-premises environment and the private VPC.

C. Create an Amazon S3 interface endpoint in the networking account.

D. Create an Amazon S3 gateway endpoint in the networking account.

E. Establish a networking account in the AWS Cloud. Create a private VPC in the
networking account. Peer VPCs from the accounts that host the S3 buckets with the VPC
in the networking account.


Question # 54

A company runs an unauthenticated static website (www.example.com) that includes a
registration form for users. The website uses Amazon S3 for hosting and uses Amazon
CloudFront as the content delivery network with AWS WAF configured. When the
registration form is submitted, the website calls an Amazon API Gateway API endpoint that
invokes an AWS Lambda function to process the payload and forward the payload to an
external API call.
During testing, a solutions architect encounters a cross-origin resource sharing (CORS)
error. The solutions architect confirms that the CloudFront distribution origin has the
Access-Control-Allow-Origin header set to www.example.com.
What should the solutions architect do to resolve the error?

A. Change the CORS configuration on the S3 bucket. Add rules for CORS to the Allowed
Origin element for www.example.com.

B. Enable the CORS setting in AWS WAF. Create a web ACL rule in which the
Access-Control-Allow-Origin header is set to www.example.com.

C. Enable the CORS setting on the API Gateway API endpoint. Ensure that the API
endpoint is configured to return all responses that have the Access-Control-Allow-Origin
header set to www.example.com.

D. Enable the CORS setting on the Lambda function. Ensure that the return code of the
function has the Access-Control-Allow-Origin header set to www.example.com.


Question # 55

A company migrated an application to the AWS Cloud. The application runs on two
Amazon EC2 instances behind an Application Load Balancer (ALB). Application data is
stored in a MySQL database that runs on an additional EC2 instance. The application's use
of the database is read-heavy.
The application loads static content from Amazon Elastic Block Store (Amazon EBS) volumes that are attached to each EC2 instance. The static content is updated frequently and must be
copied to each EBS volume.
The load on the application changes throughout the day. During peak hours, the application
cannot handle all the incoming requests. Trace data shows that the database cannot
handle the read load during peak hours.
Which solution will improve the reliability of the application?

A. Migrate the application to a set of AWS Lambda functions. Set the Lambda functions as
targets for the ALB. Create a new single EBS volume for the static content. Configure the
Lambda functions to read from the new EBS volume. Migrate the database to an Amazon
RDS for MySQL Multi-AZ DB cluster.

B. Migrate the application to a set of AWS Step Functions state machines. Set the state
machines as targets for the ALB. Create an Amazon Elastic File System (Amazon EFS) file
system for the static content. Configure the state machines to read from the EFS file
system. Migrate the database to Amazon Aurora MySQL Serverless v2 with a reader DB
instance.

C. Containerize the application. Migrate the application to an Amazon Elastic Container
Service (Amazon ECS) cluster. Use the AWS Fargate launch type for the tasks that host
the application. Create a new single EBS volume for the static content. Mount the new EBS
volume on the ECS cluster. Configure AWS Application Auto Scaling on the ECS cluster.
Set the ECS service as a target for the ALB. Migrate the database to an Amazon RDS for
MySQL Multi-AZ DB cluster.

D. Containerize the application. Migrate the application to an Amazon Elastic Container
Service (Amazon ECS) cluster. Use the AWS Fargate launch type for the tasks that host
the application. Create an Amazon Elastic File System (Amazon EFS) file system for the
static content. Mount the EFS file system to each container. Configure AWS Application
Auto Scaling on the ECS cluster. Set the ECS service as a target for the ALB. Migrate the
database to Amazon Aurora MySQL Serverless v2 with a reader DB instance.


Question # 56

A company is using Amazon API Gateway to deploy a private REST API that will provide
access to sensitive data. The API must be accessible only from an application that is deployed in a VPC. The company deploys the API successfully. However, the API is not
accessible from an Amazon EC2 instance that is deployed in the VPC.
Which solution will provide connectivity between the EC2 instance and the API?

A. Create an interface VPC endpoint for API Gateway. Attach an endpoint policy that
allows apigateway:* actions. Disable private DNS naming for the VPC endpoint. Configure
an API resource policy that allows access from the VPC. Use the VPC endpoint's DNS
name to access the API.

B. Create an interface VPC endpoint for API Gateway. Attach an endpoint policy that
allows the execute-api:Invoke action. Enable private DNS naming for the VPC endpoint.
Configure an API resource policy that allows access from the VPC endpoint. Use the API
endpoint's DNS names to access the API.

C. Create a Network Load Balancer (NLB) and a VPC link. Configure private integration
between API Gateway and the NLB. Use the API endpoint's DNS names to access the
API.

D. Create an Application Load Balancer (ALB) and a VPC Link. Configure private
integration between API Gateway and the ALB. Use the ALB endpoint's DNS name to
access the API.
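
For a private REST API, the interface endpoint and the API's resource policy work together: the endpoint provides the network path, and the resource policy admits only traffic arriving through it. A sketch of such a resource policy, which would be attached to the API in its resource policy settings; the endpoint ID is a placeholder:

    import json

    # Allows invocation only through the named interface VPC endpoint; the
    # execute-api:/* shorthand covers every stage, method, and path of the API.
    resource_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
            "Condition": {
                "StringEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}  # placeholder ID
            },
        }],
    }
    print(json.dumps(resource_policy, indent=2))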


Question # 57


A. Use AWS Lambda functions to connect to the IoT devices.

B. Configure the IoT devices to publish to AWS IoT Core.

C. Write the metadata to a self-managed MongoDB database on an Amazon EC2 instance.

D. Write the metadata to Amazon DocumentDB (with MongoDB compatibility).

E. Use AWS Step Functions state machines with AWS Lambda tasks to prepare the
reports and to write the reports to Amazon S3. Use Amazon CloudFront with an S3 origin to
serve the reports.

F. Use an Amazon Elastic Kubernetes Service (Amazon EKS) cluster with Amazon EC2
instances to prepare the reports. Use an ingress controller in the EKS cluster to serve the reports.


Question # 58

A solutions architect is creating an application that stores objects in an Amazon S3 bucket
The solutions architect must deploy the application in two AWS Regions that will be used
simultaneously The objects in the two S3 buckets must remain synchronized with each
other.
Which combination of steps will meet these requirements with the LEAST operational
overhead? (Select THREE)

A. Create an S3 Multi-Region Access Point. Change the application to refer to the
Multi-Region Access Point.

B. Configure two-way S3 Cross-Region Replication (CRR) between the two S3 buckets.

C. Modify the application to store objects in each S3 bucket.

D. Create an S3 Lifecycle rule for each S3 bucket to copy objects from one S3 bucket to
the other S3 bucket.

E. Enable S3 Versioning for each S3 bucket.

F. Configure an event notification for each S3 bucket to invoke an AWS Lambda function
to copy objects from one S3 bucket to the other S3 bucket.


Question # 59

A North American company with headquarters on the East Coast is deploying a new web
application running on Amazon EC2 in the us-east-1 Region. The application should
dynamically scale to meet user demand and maintain resiliency. Additionally, the
application must have disaster recovery capabilities in an active-passive configuration with
the us-west-1 Region.
Which steps should a solutions architect take after creating a VPC in the us-east-1 Region?

A. Create a VPC in the us-west-1 Region. Use inter-Region VPC peering to connect both
VPCs. Deploy an Application Load Balancer (ALB) spanning multiple Availability Zones
(AZs) to the VPC in the us-east-1 Region. Deploy EC2 instances across multiple AZs in
each Region as part of an Auto Scaling group spanning both VPCs and served by the ALB.

B. Deploy an Application Load Balancer (ALB) spanning multiple Availability Zones (AZs)
to the VPC in the us-east-1 Region. Deploy EC2 instances across multiple AZs as part of
an Auto Scaling group served by the ALB. Deploy the same solution to the us-west-1
Region. Create an Amazon Route 53 record set with a failover routing policy and health
checks enabled to provide high availability across both Regions.

C. Create a VPC in the us-west-1 Region. Use inter-Region VPC peering to connect both
VPCs. Deploy an Application Load Balancer (ALB) that spans both VPCs. Deploy EC2
instances across multiple Availability Zones as part of an Auto Scaling group in each VPC
served by the ALB. Create an Amazon Route 53 record that points to the ALB.

D. Deploy an Application Load Balancer (ALB) spanning multiple Availability Zones (AZs)
to the VPC in the us-east-1 Region. Deploy EC2 instances across multiple AZs as part of
an Auto Scaling group served by the ALB. Deploy the same solution to the us-west-1
Region. Create separate Amazon Route 53 records in each Region that point to the ALB in
the Region. Use Route 53 health checks to provide high availability across both Regions.


Question # 60

A company needs to monitor a growing number of Amazon S3 buckets across two AWS
Regions. The company also needs to track the percentage of objects that are
encrypted in Amazon S3. The company needs a dashboard to display this information for
internal compliance teams.
Which solution will meet these requirements with the LEAST operational overhead?

A. Create a new S3 Storage Lens dashboard in each Region to track bucket and
encryption metrics. Aggregate data from both Region dashboards into a single dashboard
in Amazon QuickSight for the compliance teams.

B. Deploy an AWS Lambda function in each Region to list the number of buckets and the
encryption status of objects. Store this data in Amazon S3. Use Amazon Athena queries to
display the data on a custom dashboard in Amazon QuickSight for the compliance teams.

C. Use the S3 Storage Lens default dashboard to track bucket and encryption metrics.
Give the compliance teams access to the dashboard directly in the S3 console.

D. Create an Amazon EventBridge rule to detect AWS CloudTrail events for S3 object
creation. Configure the rule to invoke an AWS Lambda function to record encryption
metrics in Amazon DynamoDB. Use Amazon QuickSight to display the metrics in a
dashboard for the compliance teams.


Question # 61

A financial services company runs a complex, multi-tier application on Amazon EC2
instances and AWS Lambda functions. The application stores temporary data in Amazon
S3. The S3 objects are valid for only 45 minutes and are deleted after 24 hours.
The company deploys each version of the application by launching an AWS
CloudFormation stack. The stack creates all resources that are required to run the
application. When the company deploys and validates a new application version, the
company deletes the CloudFormation stack of the old version.
The company recently tried to delete the CloudFormation stack of an old application
version, but the operation failed. An analysis shows that CloudFormation failed to delete an
existing S3 bucket. A solutions architect needs to resolve this issue without making major
changes to the application's architecture.
Which solution meets these requirements?

A. Implement a Lambda function that deletes all files from a given S3 bucket. Integrate this
Lambda function as a custom resource into the CloudFormation stack. Ensure that the
custom resource has a DependsOn attribute that points to the S3 bucket's resource.

B. Modify the CloudFormation template to provision an Amazon Elastic File System
(Amazon EFS) file system to store the temporary files there instead of in Amazon S3.
Configure the Lambda functions to run in the same VPC as the file system. Mount the file
system to the EC2 instances and Lambda functions.

C. Modify the CloudFormation stack to create an S3 Lifecycle rule that expires all objects
45 minutes after creation. Add a DependsOn attribute that points to the S3 bucket's
resource.

D. Modify the CloudFormation stack to attach a DeletionPolicy attribute with a value of
Delete to the S3 bucket.
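
CloudFormation cannot delete an S3 bucket that still contains objects, which is why a cleanup custom resource (the approach in option A) is a common workaround. A rough sketch of such a handler, assuming the function is defined inline in the template so the cfnresponse helper module is available, and assuming a hypothetical BucketName property passed to the custom resource:

    import boto3
    import cfnresponse  # helper AWS provides for inline custom resource code

    s3 = boto3.resource('s3')

    def handler(event, context):
        try:
            # Empty the bucket only on stack deletion so CloudFormation can
            # subsequently remove the bucket resource itself.
            if event['RequestType'] == 'Delete':
                bucket_name = event['ResourceProperties']['BucketName']  # hypothetical property
                s3.Bucket(bucket_name).objects.all().delete()
            cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
        except Exception:
            cfnresponse.send(event, context, cfnresponse.FAILED, {})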


Question # 62

A company is currently in the design phase of an application that will need an RPO of less
than 5 minutes and an RTO of less than 10 minutes. The solutions architecture team is
forecasting that the database will store approximately 10 TB of data. As part of the design,
they are looking for a database solution that will provide the company with the ability to fail
over to a secondary Region.
Which solution will meet these business requirements at the LOWEST cost?

A. Deploy an Amazon Aurora DB cluster and take snapshots of the cluster every 5
minutes. Once a snapshot is complete, copy the snapshot to a secondary Region to serve
as a backup in the event of a failure.

B. Deploy an Amazon RDS instance with a cross-Region read replica in a secondary
Region. In the event of a failure, promote the read replica to become the primary.

C. Deploy an Amazon Aurora DB cluster in the primary Region and another in a secondary
Region. Use AWS DMS to keep the secondary Region in sync.

D. Deploy an Amazon RDS instance with a read replica in the same Region. In the event of
a failure, promote the read replica to become the primary.


Question # 63

A financial company needs to create a separate AWS account for a new digital wallet
application. The company uses AWS Organizations to manage its accounts. A solutions
architect uses the IAM user Support1 from the management account to create a new
member account with [email protected] as the email address.
What should the solutions architect do to create IAM users in the new member account?

A. Sign in to the AWS Management Console with AWS account root user credentials by
using the 64-character password from the initial AWS Organizations email
[email protected]. Set up the IAM users as required.

B. From the management account, switch roles to assume the
OrganizationAccountAccessRole role with the account ID of the new member account. Set
up the IAM users as required.

C. Go to the AWS Management Console sign-in page. Choose "Sign in using root account
credentials." Sign in in by using the email address [email protected] and the
management account's root password. Set up the IAM users as required.

D. Go to the AWS Management Console sign-in page. Sign in by using the account ID of
the new member account and the Support1 IAM credentials. Set up the IAM users as required.
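
Switching into a member account relies on the OrganizationAccountAccessRole that AWS Organizations creates automatically in accounts it provisions. A minimal Boto3 sketch of the same flow performed programmatically; the account ID and user name are placeholders:

    import boto3

    sts = boto3.client('sts')
    creds = sts.assume_role(
        RoleArn='arn:aws:iam::111122223333:role/OrganizationAccountAccessRole',
        RoleSessionName='bootstrap-member-account',
    )['Credentials']

    # An IAM client scoped to the member account via the temporary credentials.
    iam = boto3.client(
        'iam',
        aws_access_key_id=creds['AccessKeyId'],
        aws_secret_access_key=creds['SecretAccessKey'],
        aws_session_token=creds['SessionToken'],
    )
    iam.create_user(UserName='wallet-app-admin')  # placeholder user name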


Question # 64

A company has a solution that analyzes weather data from thousands of weather stations.
The weather stations send the data over an Amazon API Gateway REST API that has an
AWS Lambda function integration. The Lambda function calls a third-party service for data
pre-processing. The third-party service gets overloaded and fails the pre-processing,
causing a loss of data.
A solutions architect must improve the resiliency of the solution. The solutions architect
must ensure that no data is lost and that data can be processed later if failures occur.
What should the solutions architect do to meet these requirements?

A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Configure the queue
as the dead-letter queue for the API.

B. Create two Amazon Simple Queue Service (Amazon SQS) queues: a primary queue
and a secondary queue. Configure the secondary queue as the dead-letter queue for the
primary queue. Update the API to use a new integration to the primary queue. Configure
the Lambda function as the invocation target for the primary queue.

C. Create two Amazon EventBridge event buses: a primary event bus and a secondary
event bus. Update the API to use a new integration to the primary event bus. Configure an
EventBridge rule to react to all events on the primary event bus. Specify the Lambda
function as the target of the rule. Configure the secondary event bus as the failure
destination for the Lambda function.

D. Create a custom Amazon EventBridge event bus. Configure the event bus as the failure
destination for the Lambda function.
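
The primary-queue-plus-dead-letter-queue pattern buffers incoming payloads so nothing is lost when downstream processing fails. A minimal Boto3 sketch of wiring the two queues together; the queue names and maxReceiveCount are placeholders:

    import json

    import boto3

    sqs = boto3.client('sqs')

    dlq_url = sqs.create_queue(QueueName='weather-data-dlq')['QueueUrl']
    dlq_arn = sqs.get_queue_attributes(
        QueueUrl=dlq_url, AttributeNames=['QueueArn']
    )['Attributes']['QueueArn']

    # Messages that fail processing five times move to the dead-letter queue
    # instead of being lost, so they can be reprocessed later.
    sqs.create_queue(
        QueueName='weather-data-primary',
        Attributes={'RedrivePolicy': json.dumps({
            'deadLetterTargetArn': dlq_arn,
            'maxReceiveCount': '5',
        })},
    )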


Question # 65

A research center is migrating to the AWS Cloud and has moved its on-premises 1 PB
object storage to an Amazon S3 bucket. One hundred scientists are using this object
storage to store their work-related documents. Each scientist has a personal folder on the
object store. All the scientists are members of a single IAM user group.
The research center's compliance officer is worried that scientists will be able to access
each other's work. The research center has a strict obligation to report on which scientist
accesses which documents. The team that is responsible for these reports has little AWS experience and wants a
ready-to-use solution that minimizes operational overhead.
Which combination of actions should a solutions architect take to meet these
requirements? (Select TWO.)

A. Create an identity policy that grants the user read and write access. Add a condition that
specifies that the S3 paths must be prefixed with ${aws:username}. Apply the policy on the
scientists' IAM user group.

B. Configure a trail with AWS CloudTrail to capture all object-level events in the S3 bucket.
Store the trail output in another S3 bucket. Use Amazon Athena to query the logs and
generate reports.

C. Enable S3 server access logging. Configure another S3 bucket as the target for log
delivery. Use Amazon Athena to query the logs and generate reports.

D. Create an S3 bucket policy that grants read and write access to users in the scientists'
IAM user group.

E. Configure a trail with AWS CloudTrail to capture all object-level events in the S3 bucket
and write the events to Amazon CloudWatch. Use the Amazon Athena CloudWatch
connector to query the logs and generate reports.
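
The ${aws:username} policy variable in option A is what scopes each scientist to a personal prefix without writing one policy per user. As a sketch, the identity policy might look like the following; the bucket name is a placeholder:

    # ${aws:username} resolves to the calling IAM user's name at evaluation
    # time, so each scientist can only reach objects under a personal prefix.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PersonalFolderOnly",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::research-data-bucket/${aws:username}/*",
        }],
    }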


Question # 66

A company is using AWS Organizations with a multi-account architecture. The company's
current security configuration for the account architecture includes SCPs, resource-based
policies, identity-based policies, trust policies, and session policies.
A solutions architect needs to allow an IAM user in Account A to assume a role in Account
B.
Which combination of steps must the solutions architect take to meet this requirement?
(Select THREE.)

A. Configure the SCP for Account A to allow the action.

B. Configure the resource-based policies to allow the action.

C. Configure the identity-based policy on the user in Account A to allow the action.

D. Configure the identity-based policy on the user in Account B to allow the action.

E. Configure the trust policy on the target role in Account B to allow the action.

F. Configure the session policy to allow the action and to be passed programmatically by
the GetSessionToken API operation.


Question # 67

A company is migrating its infrastructure to the AWS Cloud. The company must comply
with a variety of regulatory standards for different projects. The company needs a
multi-account environment.
A solutions architect needs to prepare the baseline infrastructure. The solution must
provide a consistent baseline of management and security, but it must allow flexibility for
different compliance requirements within various AWS accounts. The solution also needs
to integrate with the existing on-premises Active Directory Federation Services (AD FS)
server.
Which solution meets these requirements with the LEAST amount of operational
overhead?

A. Create an organization in AWS Organizations. Create a single SCP for least privilege
access across all accounts. Create a single OU for all accounts. Configure an IAM identity
provider for federation with the on-premises AD FS server. Configure a central logging
account with a defined process for log generating services to send log events to the central
account. Enable AWS Config in the central account with conformance packs for all
accounts.

B. Create an organization in AWS Organizations. Enable AWS Control Tower on the
organization. Review included controls (guardrails) for SCPs. Check AWS Config for areas that require additions. Add OUS as necessary. Connect AWS IAM Identity Center (AWS
Single Sign-On) to the on-premises AD FS server.

C. Create an organization in AWS Organizations. Create SCPs for least privilege access.
Create an OU structure, and use it to group AWS accounts. Connect AWS IAM Identity
Center (AWS Single Sign-On) to the on-premises AD FS server. Configure a central
logging account with a defined process for log generating services to send log events to the
central account. Enable AWS Config in the central account with aggregators and
conformance packs.

D. Create an organization in AWS Organizations. Enable AWS Control Tower on the
organization. Review included controls (guardrails) for SCPs. Check AWS Config for areas
that require additions. Configure an IAM identity provider for federation with the
on-premises AD FS server.


Question # 68

A company needs to store and process image data that will be uploaded from mobile
devices using a custom mobile app. Usage peaks between 8 AM and 5 PM on weekdays,
with thousands of uploads per minute. The app is rarely used at any other time. A user is
notified when image processing is complete.
Which combination of actions should a solutions architect take to ensure image processing
can scale to handle the load? (Select THREE.)

A. Upload files from the mobile software directly to Amazon S3. Use S3 event notifications
to create a message in an Amazon MQ queue.

B. Upload files from the mobile software directly to Amazon S3. Use S3 event notifications
to create a message in an Amazon Simple Queue Service (Amazon SQS) standard queue.

C. Invoke an AWS Lambda function to perform image processing when a message is
available in the queue.

D. Invoke an S3 Batch Operations job to perform image processing when a message is
available in the queue.

E. Send a push notification to the mobile app by using Amazon Simple Notification Service
(Amazon SNS) when processing is complete.

F. Send a push notification to the mobile app by using Amazon Simple Email Service
(Amazon SES) when processing is complete.


Question # 69

A company has mounted sensors to collect information about environmental parameters
such as humidity and light throughout all the company's factories. The company needs to
stream and analyze the data in the AWS Cloud in real time. If any of the parameters fall out
of acceptable ranges, the factory operations team must receive a notification immediately.
Which solution will meet these requirements?

A. Stream the data to an Amazon Kinesis Data Firehose delivery stream. Use AWS Step
Functions to consume and analyze the data in the Kinesis Data Firehose delivery stream.
Use Amazon Simple Notification Service (Amazon SNS) to notify the operations team.

B. Stream the data to an Amazon Managed Streaming for Apache Kafka (Amazon MSK)
cluster. Set up a trigger in Amazon MSK to invoke an AWS Fargate task to analyze the
data. Use Amazon Simple Email Service (Amazon SES) to notify the operations team.

C. Stream the data to an Amazon Kinesis data stream. Create an AWS Lambda function to
consume the Kinesis data stream and to analyze the data. Use Amazon Simple Notification
Service (Amazon SNS) to notify the operations team.

D. Stream the data to an Amazon Kinesis Data Analytics application. Use an automatically
scaled and containerized service in Amazon Elastic Container Service (Amazon ECS) to
consume and analyze the data. Use Amazon Simple Email Service (Amazon SES) to notify
the operations team.


Question # 70

A software company needs to create short-lived test environments to test pull requests as
part of its development process. Each test environment consists of a single Amazon EC2 instance that is in an Auto Scaling group.
The test environments must be able to communicate with a central server to report test
results. The central server is located in an on-premises data center. A solutions architect
must implement a solution so that the company can create and delete test environments
without any manual intervention. The company has created a transit gateway with a VPN
attachment to the on-premises network.
Which solution will meet these requirements with the LEAST operational overhead?

A. Create an AWS CloudFormation template that contains a transit gateway attachment
and related routing configurations. Create a CloudFormation stack set that includes this
template. Use CloudFormation StackSets to deploy a new stack for each VPC in the
account. Deploy a new VPC for each test environment.

B. Create a single VPC for the test environments. Include a transit gateway attachment and
related routing configurations. Use AWS CloudFormation to deploy all test environments
into the VPC.

C. Create a new OU in AWS Organizations for testing. Create an AWS CloudFormation
template that contains a VPC, necessary networking resources, a transit gateway
attachment, and related routing configurations. Create a CloudFormation stack set that
includes this template. Use CloudFormation StackSets for deployments into each account
under the testing OU. Create a new account for each test environment.

D. Convert the test environment EC2 instances into Docker images. Use AWS
CloudFormation to configure an Amazon Elastic Kubernetes Service (Amazon EKS) cluster
in a new VPC, create a transit gateway attachment, and create related routing
configurations. Use Kubernetes to manage the deployment and lifecycle of the test
environments.


Question # 71

A company is deploying AWS Lambda functions that access an Amazon RDS for
PostgreSQL database. The company needs to launch the Lambda functions in a QA
environment and in a production environment.
The company must not expose credentials within application code and must rotate
passwords automatically.
Which solution will meet these requirements?

A. Store the database credentials for both environments in AWS Systems Manager
Parameter Store. Encrypt the credentials by using an AWS Key Management Service
(AWS KMS) key. Within the application code of the Lambda functions, pull the credentials
from the Parameter Store parameter by using the AWS SDK for Python (Boto3). Add a role
to the Lambda functions to provide access to the Parameter Store parameter.

B. Store the database credentials for both environments in AWS Secrets Manager with
distinct entries for the QA environment and the production environment. Turn on rotation.
Provide a reference to the Secrets Manager key as an environment variable for the
Lambda functions.

C. Store the database credentials for both environments in AWS Key Management Service
(AWS KMS). Turn on rotation. Provide a reference to the credentials that are stored in
AWS KMS as an environment variable for the Lambda functions.

D. Create separate S3 buckets for the QA environment and the production environment.
Turn on server-side encryption with AWS KMS keys (SSE-KMS) for the S3 buckets. Use
an object naming pattern that gives each Lambda function's application code the ability to
pull the correct credentials for the function's corresponding environment. Grant each
Lambda function's execution role access to Amazon S3.


Question # 72

A company has a legacy application that runs on multiple .NET Framework components.
The components share the same Microsoft SQL Server database and
communicate with each other asynchronously by using Microsoft Message Queueing
(MSMQ).
The company is starting a migration to containerized .NET Core components and wants to
refactor the application to run on AWS. The .NET Core components require complex
orchestration. The company must have full control over networking and host configuration.
The application's database model is strongly relational.
Which solution will meet these requirements?

A. Host the .NET Core components on AWS App Runner. Host the database on Amazon
RDS for SQL Server. Use Amazon EventBridge for asynchronous messaging.

B. Host the .NET Core components on Amazon Elastic Container Service (Amazon ECS)
with the AWS Fargate launch type. Host the database on Amazon DynamoDB. Use
Amazon Simple Notification Service (Amazon SNS) for asynchronous messaging.

C. Host the .NET Core components on AWS Elastic Beanstalk. Host the database on Amazon Aurora PostgreSQL Serverless v2. Use Amazon Managed Streaming for Apache
Kafka (Amazon MSK) for asynchronous messaging.

D. Host the .NET Core components on Amazon Elastic Container Service (Amazon ECS)
with the Amazon EC2 launch type. Host the database on Amazon Aurora MySQL
Serverless v2. Use Amazon Simple Queue Service (Amazon SQS) for asynchronous
messaging.


Question # 73

A research company is running daily simulations in the AWS Cloud to meet high demand.
The simulations run on several hundred Amazon EC2 instances that are based on
Amazon Linux 2. Occasionally, a simulation gets stuck and requires a cloud operations
engineer to solve the problem by connecting to an EC2 instance through SSH.
Company policy states that no EC2 instance can use the same SSH key and that all
connections must be logged in AWS CloudTrail.
How can a solutions architect meet these requirements?

A. Launch new EC2 instances, and generate an individual SSH key for each instance.
Store the SSH key in AWS Secrets Manager. Create a new IAM policy, and attach it to the
engineers' IAM role with an Allow statement for the GetSecretValue action. Instruct the engineers to fetch the SSH key from Secrets Manager when they connect through any
SSH client.

B. Create an AWS Systems Manager document to run commands on EC2 instances to set
a new unique SSH key. Create a new IAM policy, and attach it to the engineers' IAM role
with an Allow statement to run Systems Manager documents. Instruct the engineers to run
the document to set an SSH key and to connect through any SSH client.

C. Launch new EC2 instances without setting up any SSH key for the instances. Set up
EC2 Instance Connect on each instance. Create a new IAM policy, and attach it to the
engineers' IAM role with an Allow statement for the SendSSHPublicKey action. Instruct the
engineers to connect to the instance by using a browser-based SSH client from the EC2
console.

D. Set up AWS Secrets Manager to store the EC2 SSH key. Create a new AWS Lambda
function to create a new SSH key and to call AWS Systems Manager Session Manager to
set the SSH key on the EC2 instance. Configure Secrets Manager to use the Lambda
function for automatic rotation once daily. Instruct the engineers to fetch the SSH key from
Secrets Manager when they connect through any SSH client.
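
EC2 Instance Connect pushes a short-lived public key to the instance, and each push is itself an API call recorded by CloudTrail, which is how the logging requirement is satisfied without shared keys. A minimal Boto3 sketch, with the instance ID and key file as placeholders:

    import boto3

    eic = boto3.client('ec2-instance-connect')

    # A one-time key pair generated locally; only the public half is pushed.
    with open('temp_key.pub') as f:  # placeholder key file
        public_key = f.read()

    # The pushed key is accepted for roughly 60 seconds, and the call is
    # logged in CloudTrail.
    eic.send_ssh_public_key(
        InstanceId='i-0123456789abcdef0',  # placeholder instance ID
        InstanceOSUser='ec2-user',
        SSHPublicKey=public_key,
    )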


Question # 74

A company wants to migrate its on-premises data center to the AWS Cloud. This includes
thousands of virtualized Linux and Microsoft Windows servers, SAN storage, Java and
PHP applications with MySQL and Oracle databases. There are many dependent services
hosted either in the same data center or externally.
The technical documentation is incomplete and outdated. A solutions architect needs to
understand the current environment and estimate the cloud resource costs after the
migration.
Which tools or services should the solutions architect use to plan the cloud migration? (Choose
three.)

A. AWS Application Discovery Service

B. AWS SMS

C. AWS X-Ray

D. AWS Cloud Adoption Readiness Tool (CART)

E. Amazon Inspector

F. AWS Migration Hub


Question # 75

A company runs many workloads on AWS and uses AWS Organizations to manage its
accounts. The workloads are hosted on Amazon EC2, AWS Fargate, and AWS Lambda.
Some of the workloads have unpredictable demand. Accounts record high usage in some
months and low usage in other months.
The company wants to optimize its compute costs over the next 3 years. A solutions
architect obtains a 6-month average for each of the accounts across the organization to
calculate usage.
Which solution will provide the MOST cost savings for all the organization's compute
usage?

A. Purchase Reserved Instances for the organization to match the size and number of the
most common EC2 instances from the member accounts.

B. Purchase a Compute Savings Plan for the organization from the management account
by using the recommendation at the management account level.

C. Purchase Reserved Instances for each member account that had high EC2 usage
according to the data from the last 6 months.

D. Purchase an EC2 Instance Savings Plan for each member account from the management account based on EC2 usage data from the last 6 months.
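
For context, organization-level Savings Plans recommendations come from the Cost Explorer API when it is called from the management account. A hedged boto3 sketch of requesting a 3-year Compute Savings Plan recommendation; the payment option and lookback window shown are illustrative choices, not values from the question:

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer; call from the management account

rec = ce.get_savings_plans_purchase_recommendation(
    SavingsPlansType="COMPUTE_SP",        # covers EC2, Fargate, and Lambda
    TermInYears="THREE_YEARS",
    PaymentOption="NO_UPFRONT",           # illustrative choice
    LookbackPeriodInDays="SIXTY_DAYS",
    AccountScope="PAYER",                 # organization-wide recommendation
)

for detail in rec["SavingsPlansPurchaseRecommendation"][
        "SavingsPlansPurchaseRecommendationDetails"]:
    print(detail["HourlyCommitmentToPurchase"], detail["EstimatedSavingsAmount"])
```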


Question # 76

A solutions architect is determining the DNS strategy for an existing VPC. The VPC is
provisioned to use the 10.24.34.0/24 CIDR block. The VPC also uses Amazon Route 53
Resolver for DNS. New requirements mandate that DNS queries must use private hosted
zones. Additionally, instances that have public IP addresses must receive corresponding
public hostnames.
Which solution will meet these requirements to ensure that the domain names are correctly
resolved within the VPC?

A. Create a private hosted zone. Activate the enableDnsSupport attribute and the
enableDnsHostnames attribute for the VPC. Update the VPC DHCP options set to include
domain-name-servers=10.24.34.2.

B. Create a private hosted zone. Associate the private hosted zone with the VPC. Activate
the enableDnsSupport attribute and the enableDnsHostnames attribute for the VPC.
Create a new VPC DHCP options set, and configure
domain-name-servers=AmazonProvidedDNS. Associate the new DHCP options set with the VPC.

C. Deactivate the enableDnsSupport attribute for the VPC. Activate the
enableDnsHostnames attribute for the VPC. Create a new VPC DHCP options set, and
configure domain-name-servers=10.24.34.2. Associate the new DHCP options set with the
VPC.

D. Create a private hosted zone. Associate the private hosted zone with the VPC. Activate
the enableDnsSupport attribute for the VPC. Deactivate the enableDnsHostnames attribute
for the VPC. Update the VPC DHCP options set to include
domain-name-servers=AmazonProvidedDNS.
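
For context, the attribute changes and zone association described in option B can be scripted. A minimal boto3 sketch, assuming a hypothetical VPC ID, Region, and zone name:

```python
import boto3

VPC_ID = "vpc-0123456789abcdef0"   # hypothetical

ec2 = boto3.client("ec2")
r53 = boto3.client("route53")

# The two DNS attributes must be modified one call at a time.
ec2.modify_vpc_attribute(VpcId=VPC_ID, EnableDnsSupport={"Value": True})
ec2.modify_vpc_attribute(VpcId=VPC_ID, EnableDnsHostnames={"Value": True})

# Creating the zone with a VPC block makes it private and associates it.
r53.create_hosted_zone(
    Name="internal.example.com",                 # hypothetical zone name
    CallerReference="private-zone-001",
    VPC={"VPCRegion": "us-east-1", "VPCId": VPC_ID},
)
```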


Question # 77

A large company is migrating its entire IT portfolio to AWS. Each business unit in the
company has a standalone AWS account that supports both development and test
environments. New accounts to support production workloads will be needed soon.
The finance department requires a centralized method for payment but must maintain
visibility into each group's spending to allocate costs.
The security team requires a centralized mechanism to control IAM usage in all the
company's accounts.
What combination of the following options meet the company's needs with the LEAST
effort? (Select TWO.)

A. Use a collection of parameterized AWS CloudFormation templates defining common
IAM permissions that are launched into each account. Require all new and existing
accounts to launch the appropriate stacks to enforce the least privilege model.

B. Use AWS Organizations to create a new organization from a chosen payer account and
define an organizational unit hierarchy. Invite the existing accounts to join the organization
and create new accounts using Organizations.

C. Require each business unit to use its own AWS accounts. Tag each AWS account appropriately and enable Cost Explorer to administer chargebacks.

D. Enable all features of AWS Organizations and establish appropriate service control
policies that filter IAM permissions for sub-accounts.

E. Consolidate all of the company's AWS accounts into a single AWS account. Use tags for
billing purposes and the IAM Access Advisor feature to enforce the least privilege model.
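
For context, option B (and the all-features requirement behind option D) starts with two Organizations API calls from the chosen payer account. A hedged boto3 sketch; the member account ID is hypothetical:

```python
import boto3

org = boto3.client("organizations")

# "ALL" enables consolidated billing plus SCPs; run once from the payer account.
org.create_organization(FeatureSet="ALL")

# Invite an existing business-unit account (hypothetical account ID).
org.invite_account_to_organization(
    Target={"Id": "222222222222", "Type": "ACCOUNT"},
    Notes="Join the corporate organization for billing and SCPs.",
)
```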


Question # 78

An enterprise company is building an infrastructure services platform for its users. The
company has the following requirements:
Provide least privilege access to users when launching AWS infrastructure so
users cannot provision unapproved services.
Use a central account to manage the creation of infrastructure services.
Provide the ability to distribute infrastructure services to multiple accounts in AWS
Organizations.
Provide the ability to enforce tags on any infrastructure that is started by users.
Which combination of actions using AWS services will meet these requirements? (Choose
three.)

A. Develop infrastructure services using AWS CloudFormation templates. Add the
templates to a central Amazon S3 bucket, and add the IAM roles or users that require
access to the S3 bucket policy.

B. Develop infrastructure services using AWS CloudFormation templates. Upload each
template as an AWS Service Catalog product to portfolios created in a central AWS
account. Share these portfolios with the Organizations structure created for the company.

C. Allow user IAM roles to have AWSCloudFormationFullAccess and
AmazonS3ReadOnlyAccess permissions. Add an Organizations SCP at the AWS account
root user level to deny all services except AWS CloudFormation and Amazon S3.

D. Allow user IAM roles to have ServiceCatalogEndUserAccess permissions only. Use an
automation script to import the central portfolios to local AWS accounts, copy the
TagOptions, assign users access, and apply launch constraints.

E. Use the AWS Service Catalog TagOption Library to maintain a list of tags required by
the company. Apply the TagOption to AWS Service Catalog products or portfolios.

F. Use the AWS CloudFormation Resource Tags property to enforce the application of tags
to any CloudFormation templates that will be created for users.
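
For context, option E's TagOption Library is driven by a couple of Service Catalog API calls. A minimal boto3 sketch, assuming a hypothetical tag and portfolio ID:

```python
import boto3

sc = boto3.client("servicecatalog")

# Define a required tag once in the TagOption library.
tag_option = sc.create_tag_option(Key="CostCenter", Value="1234")

# Attach it to a portfolio so every product launched from the portfolio
# carries the tag (portfolio ID is hypothetical).
sc.associate_tag_option_with_resource(
    ResourceId="port-abcd1234efgh5",
    TagOptionId=tag_option["TagOptionDetail"]["Id"],
)
```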


Question # 79

A company is migrating a legacy application from an on-premises data center to AWS. The
application consists of a single application server and a Microsoft SQL Server database server. Each server is deployed on a VMware VM that consumes 500 TB
of data across multiple attached volumes.
The company has established a 10 Gbps AWS Direct Connect connection from the closest
AWS Region to its on-premises data center. The Direct Connect connection is not currently
in use by other services.
Which combination of steps should a solutions architect take to migrate the application with
the LEAST amount of downtime? (Choose two.)

A. Use an AWS Server Migration Service (AWS SMS) replication job to migrate the
database server VM to AWS.

B. Use VM Import/Export to import the application server VM.

C. Export the VM images to an AWS Snowball Edge Storage Optimized device.

D. Use an AWS Server Migration Service (AWS SMS) replication job to migrate the
application server VM to AWS.

E. Use an AWS Database Migration Service (AWS DMS) replication instance to migrate
the database to an Amazon RDS DB instance.


Question # 80

A company has an application that uses an Amazon Aurora PostgreSQL DB cluster for the
application's database. The DB cluster contains one small primary instance and three
larger replica instances. The application runs on an AWS Lambda function. The application
makes many short-lived connections to the database's replica instances to perform
read-only operations.
During periods of high traffic, the application becomes unreliable and the database reports
that too many connections are being established. The frequency of high-traffic periods is
unpredictable.
Which solution will improve the reliability of the application?

A. Use Amazon RDS Proxy to create a proxy for the DB cluster. Configure a read-only
endpoint for the proxy. Update the Lambda function to connect to the proxy endpoint.

B. Increase the max_connections setting on the DB cluster's parameter group. Reboot all
the instances in the DB cluster. Update the Lambda function to connect to the DB cluster
endpoint.

C. Configure instance scaling for the DB cluster to occur when the DatabaseConnections
metric is close to the max _ connections setting. Update the Lambda function to connect to
the Aurora reader endpoint.

D. Use Amazon RDS Proxy to create a proxy for the DB cluster. Configure a read-only
endpoint for the Aurora Data API on the proxy. Update the Lambda function to connect to
the proxy endpoint.
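
For context, option A's proxy with a read-only endpoint can be provisioned as follows. A hedged boto3 sketch; the proxy name, secret ARN, IAM role, subnets, and cluster identifier are all hypothetical:

```python
import boto3

rds = boto3.client("rds")

SECRET_ARN = "arn:aws:secretsmanager:us-east-1:111111111111:secret:db-creds"  # hypothetical
ROLE_ARN = "arn:aws:iam::111111111111:role/rds-proxy-role"                    # hypothetical
SUBNETS = ["subnet-aaa111", "subnet-bbb222"]                                  # hypothetical

rds.create_db_proxy(
    DBProxyName="app-proxy",
    EngineFamily="POSTGRESQL",
    Auth=[{"AuthScheme": "SECRETS", "SecretArn": SECRET_ARN, "IAMAuth": "DISABLED"}],
    RoleArn=ROLE_ARN,
    VpcSubnetIds=SUBNETS,
)

rds.register_db_proxy_targets(
    DBProxyName="app-proxy",
    DBClusterIdentifiers=["aurora-pg-cluster"],   # hypothetical cluster
)

# The read-only endpoint pools connections across the reader instances.
rds.create_db_proxy_endpoint(
    DBProxyName="app-proxy",
    DBProxyEndpointName="app-proxy-ro",
    VpcSubnetIds=SUBNETS,
    TargetRole="READ_ONLY",
)
```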


Question # 81

A company is planning to migrate its on-premises transaction-processing application to
AWS. The application runs inside Docker containers that are hosted on VMs in the
company's data center. The Docker containers have shared storage where the application
records transaction data.
The transactions are time sensitive. The volume of transactions inside the application is
unpredictable. The company must implement a low-latency storage solution that will
automatically scale throughput to meet increased demand. The company cannot develop
the application further and cannot continue to administer the Docker hosting environment.
How should the company migrate the application to AWS to meet these requirements?

A. Migrate the containers that run the application to Amazon Elastic Kubernetes Service
(Amazon EKS). Use Amazon S3 to store the transaction data that the containers share.

B. Migrate the containers that run the application to AWS Fargate for Amazon Elastic
Container Service (Amazon ECS). Create an Amazon Elastic File System (Amazon EFS)
file system. Create a Fargate task definition. Add a volume to the task definition to point to
the EFS file system

C. Migrate the containers that run the application to AWS Fargate for Amazon Elastic
Container Service (Amazon ECS). Create an Amazon Elastic Block Store (Amazon EBS)
volume. Create a Fargate task definition. Attach the EBS volume to each running task.

D. Launch Amazon EC2 instances. Install Docker on the EC2 instances. Migrate the
containers to the EC2 instances. Create an Amazon Elastic File System (Amazon EFS) file
system. Add a mount point to the EC2 instances for the EFS file system.
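
For context, option B expresses the shared storage as an EFS volume inside the Fargate task definition. A minimal boto3 sketch; the image URI, role, file system ID, and sizing are hypothetical:

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="transaction-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    executionRoleArn="arn:aws:iam::111111111111:role/ecsTaskExecutionRole",  # hypothetical
    containerDefinitions=[{
        "name": "app",
        "image": "111111111111.dkr.ecr.us-east-1.amazonaws.com/app:latest",  # hypothetical
        "essential": True,
        # Mount the shared EFS volume where transaction data is recorded.
        "mountPoints": [{"sourceVolume": "shared-data", "containerPath": "/data"}],
    }],
    volumes=[{
        "name": "shared-data",
        "efsVolumeConfiguration": {
            "fileSystemId": "fs-0123456789abcdef0",   # hypothetical
            "transitEncryption": "ENABLED",
        },
    }],
)
```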


Question # 82

An online retail company is migrating its legacy on-premises .NET application to AWS. The
application runs on load-balanced frontend web servers, load-balanced application servers,
and a Microsoft SQL Server database.
The company wants to use AWS managed services where possible and does not want to
rewrite the application. A solutions architect needs to implement a solution to resolve scaling issues and minimize licensing costs as the application scales.
Which solution will meet these requirements MOST cost-effectively?

A. Deploy Amazon EC2 instances in an Auto Scaling group behind an Application Load
Balancer for the web tier and for the application tier. Use Amazon Aurora PostgreSQL with
Babelfish turned on to replatform the SQL Server database.

B. Create images of all the servers by using AWS Database Migration Service (AWS
DMS). Deploy Amazon EC2 instances that are based on the on-premises imports. Deploy
the instances in an Auto Scaling group behind a Network Load Balancer for the web tier
and for the application tier. Use Amazon DynamoDB as the database tier.

C. Containerize the web frontend tier and the application tier. Provision an Amazon Elastic
Kubernetes Service (Amazon EKS) cluster. Create an Auto Scaling group behind a
Network Load Balancer for the web tier and for the application tier. Use Amazon RDS for
SQL Server to host the database.

D. Separate the application functions into AWS Lambda functions. Use Amazon API
Gateway for the web frontend tier and the application tier. Migrate the data to Amazon S3.
Use Amazon Athena to query the data.


Question # 83

A company is deploying a third-party web application on AWS. The application is packaged
as a Docker image. The company has deployed the Docker image as an AWS
Fargate service in Amazon Elastic Container Service (Amazon ECS). An Application Load
Balancer (ALB) directs traffic to the application.
The company needs to give only a specific list of users the ability to access the application
from the internet. The company cannot change the application and cannot integrate the
application with an identity provider. All users must be authenticated through multi-factor
authentication (MFA).
Which solution will meet these requirements?

A. Create a user pool in Amazon Cognito. Configure the pool for the application. Populate
the pool with the required users. Configure the pool to require MFA. Configure a listener
rule on the ALB to require authentication through the Amazon Cognito hosted UI.

B. Configure the users in AWS Identity and Access Management (IAM). Attach a resource
policy to the Fargate service to require users to use MFA. Configure a listener rule on the
ALB to require authentication through IAM.

C. Configure the users in AWS Identity and Access Management (IAM). Enable AWS IAM
Identity Center (AWS Single Sign-On). Configure resource protection for the ALB. Create a resource protection rule to require users to use MFA.

D. Create a user pool in AWS Amplify. Configure the pool for the application. Populate the
pool with the required users. Configure the pool to require MFA. Configure a listener rule
on the ALB to require authentication through the Amplify hosted UI.
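
For context, option A's authentication happens on the ALB listener itself, before traffic reaches the Fargate service. A hedged boto3 sketch; every ARN and the Cognito domain prefix are hypothetical, and the listener must be HTTPS:

```python
import boto3

elbv2 = boto3.client("elbv2")

elbv2.modify_listener(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111111111111:listener/app/web/abc/def",
    DefaultActions=[
        {
            # Runs first: unauthenticated users are sent to the Cognito hosted UI.
            "Type": "authenticate-cognito",
            "Order": 1,
            "AuthenticateCognitoConfig": {
                "UserPoolArn": "arn:aws:cognito-idp:us-east-1:111111111111:userpool/us-east-1_Example",
                "UserPoolClientId": "abcdefghijklmnopqrstuvwxy",
                "UserPoolDomain": "my-app-auth",
            },
        },
        # Runs second: authenticated requests reach the Fargate service.
        {
            "Type": "forward",
            "Order": 2,
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/app/123",
        },
    ],
)
```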


Question # 84

A company built an ecommerce website on AWS using a three-tier web architecture. The
application is Java-based and composed of an Amazon CloudFront distribution, an Apache
web server layer of Amazon EC2 instances in an Auto Scaling group, and a backend
Amazon Aurora MySQL database.
Last month, during a promotional sales event, users reported errors and timeouts while
adding items to their shopping carts. The operations team recovered the logs created by
the web servers and reviewed Aurora DB cluster performance metrics. Some of the web
servers were terminated before logs could be collected and the Aurora metrics were not
sufficient for query performance analysis.
Which combination of steps must the solutions architect take to improve application
performance visibility during peak traffic events? (Choose three.)

A. Configure the Aurora MySQL DB cluster to publish slow query and error logs to Amazon
CloudWatch Logs.

B. Implement the AWS X-Ray SDK to trace incoming HTTP requests on the EC2 instances
and implement tracing of SQL queries with the X-Ray SDK for Java.

C. Configure the Aurora MySQL DB cluster to stream slow query and error logs to Amazon
Kinesis.

D. Install and configure an Amazon CloudWatch Logs agent on the EC2 instances to send
the Apache logs to CloudWatch Logs.

E. Enable and configure AWS CloudTrail to collect and analyze application activity from
Amazon EC2 and Aurora.

F. Enable Aurora MySQL DB cluster performance benchmarking and publish the stream to
AWS X-Ray.
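
For context, option A's log publishing is a single modify call on the DB cluster. A minimal boto3 sketch with a hypothetical cluster identifier:

```python
import boto3

rds = boto3.client("rds")

# Publish the Aurora MySQL slow query and error logs to CloudWatch Logs.
rds.modify_db_cluster(
    DBClusterIdentifier="ecommerce-aurora",   # hypothetical
    CloudwatchLogsExportConfiguration={"EnableLogTypes": ["slowquery", "error"]},
    ApplyImmediately=True,
)
```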


Question # 85

A company provides a software as a service (SaaS) application that runs in the AWS
Cloud. The application runs on Amazon EC2 instances behind a Network Load Balancer
(NLB). The instances are in an Auto Scaling group and are distributed across three
Availability Zones in a single AWS Region.
The company is deploying the application into additional Regions. The company must
provide static IP addresses for the application to customers so that the customers can add
the IP addresses to allow lists.
The solution must automatically route customers to the Region that is geographically
closest to them.
Which solution will meet these requirements?

A. Create an Amazon CloudFront distribution. Create a CloudFront origin group. Add the
NLB for each additional Region to the origin group. Provide customers with the IP address
ranges of the distribution's edge locations.

B. Create an AWS Global Accelerator standard accelerator. Create a standard accelerator
endpoint for the NLB in each additional Region. Provide customers with the Global
Accelerator IP address.

C. Create an Amazon CloudFront distribution. Create a custom origin for the NLB in each
additional Region. Provide customers with the IP address ranges of the distribution's edge
locations.

D. Create an AWS Global Accelerator custom routing accelerator. Create a listener for the
custom routing accelerator. Add the IP address and ports for the NLB in each additional
Region. Provide customers with the Global Accelerator IP address.
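
For context, option B's standard accelerator hands out two static anycast IP addresses and steers each client to the nearest healthy Region. A hedged boto3 sketch; the names and NLB ARN are hypothetical, and the Global Accelerator API is served from us-west-2:

```python
import boto3

# The Global Accelerator control plane lives in us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

acc = ga.create_accelerator(Name="saas-app", IpAddressType="IPV4", Enabled=True)
print(acc["Accelerator"]["IpSets"])   # the two static IPs to give to customers

listener = ga.create_listener(
    AcceleratorArn=acc["Accelerator"]["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

# One endpoint group per Region; clients are routed to the closest
# healthy Region automatically. The NLB ARN is hypothetical.
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="eu-west-1",
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:eu-west-1:111111111111:loadbalancer/net/app/abc",
        "Weight": 128,
    }],
)
```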


Question # 86

A company has a project that is launching Amazon EC2 instances that are larger than
required. The project's account cannot be part of the company's organization in AWS
Organizations due to policy restrictions to keep this activity outside of corporate IT. The
company wants to allow only the launch of t3.small
EC2 instances by developers in the project's account. These EC2 instances must be
restricted to the us-east-2 Region.
What should a solutions architect do to meet these requirements?

A. Create a new developer account. Move all EC2 instances, users, and assets into
us-east-2. Add the account to the company's organization in AWS Organizations. Enforce a
tagging policy that denotes Region affinity.

B. Create an SCP that denies the launch of all EC2 instances except t3.small EC2
instances in us-east-2. Attach the SCP to the project's account.

C. Create and purchase a t3.small EC2 Reserved Instance for each developer in us-east-2.
Assign each developer a specific EC2 instance with their name as the tag.

D. Create an IAM policy that allows the launch of only t3.small EC2 instances in us-east-2.
Attach the policy to the roles and groups that the developers use in the project's account.
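
For context, option D's policy combines the ec2:InstanceType and aws:RequestedRegion condition keys. A simplified sketch; the policy name is hypothetical, and a production policy would also need to cover the related resources that RunInstances touches (volumes, network interfaces, and so on):

```python
import json

import boto3

iam = boto3.client("iam")

policy_doc = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "ec2:RunInstances",
        "Resource": "*",
        "Condition": {
            "StringEquals": {
                "ec2:InstanceType": "t3.small",
                "aws:RequestedRegion": "us-east-2",
            }
        },
    }],
}

iam.create_policy(
    PolicyName="AllowT3SmallUsEast2Only",   # hypothetical name
    PolicyDocument=json.dumps(policy_doc),
)
```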


Question # 87

A large company recently experienced an unexpected increase in Amazon RDS and
Amazon DynamoDB costs. The company needs to increase visibility into details of AWS
Billing and Cost Management. There are various accounts associated with AWS
Organizations, including many development and production accounts. There is no
consistent tagging strategy across the organization, but there are guidelines in place that
require all infrastructure to be deployed using AWS CloudFormation with consistent
tagging. Management requires cost center numbers and project ID numbers for all existing
and future DynamoDB tables and RDS instances.
Which strategy should the solutions architect provide to meet these requirements?

A. Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost
center and project ID, and allow 24 hours for tags to propagate to existing resources.

B. Use an AWS Config rule to alert the finance team of untagged resources. Create a
centralized AWS Lambda-based solution to tag untagged RDS databases and DynamoDB
resources every hour using a cross-account role.

C. Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost
center and project ID. Use SCPs to restrict the creation of resources that do not have the
cost center and project ID on the resource.

D. Create cost allocation tags to define the cost center and project ID, and allow 24 hours
for tags to propagate to existing resources. Update existing federated roles to restrict
privileges to provision resources that do not include the cost center and project ID on the
resource.


Question # 88

A company wants to migrate its website from an on-premises data center onto AWS. At the
same time, it wants to migrate the website to a containerized microservice-based
architecture to improve the availability and cost efficiency. The company's security policy
states that privileges and network permissions must be configured according to best
practice, using least privilege.
A Solutions Architect must create a containerized architecture that meets the security
requirements and has already deployed the application to an Amazon ECS cluster.
What steps are required after the deployment to meet the requirements? (Choose two.)

A. Create tasks using the bridge network mode.

B. Create tasks using the awsvpc network mode.

C. Apply security groups to Amazon EC2 instances, and use IAM roles for EC2 instances
to access other resources.

D. Apply security groups to the tasks, and pass IAM credentials into the container at launch
time to access other resources.

E. Apply security groups to the tasks, and use IAM roles for tasks to access other
resources.


Question # 89

A company is migrating an application from on-premises infrastructure to the AWS Cloud.
During migration design meetings, the company expressed concerns about the availability
and recovery options for its legacy Windows file server. The file server contains sensitive
business-critical data that cannot be recreated in the event of data corruption or data loss.
According to compliance requirements, the data must not travel across the public internet.
The company wants to move to AWS managed services where possible.
The company decides to store the data in an Amazon FSx for Windows File Server file
system. A solutions architect must design a solution that copies the data to another AWS
Region for disaster recovery (DR) purposes.
Which solution will meet these requirements?

A. Create a destination Amazon S3 bucket in the DR Region. Establish connectivity
between the FSx for Windows File Server file system in the primary Region and the S3
bucket in the DR Region by using Amazon FSx File Gateway. Configure the S3 bucket as a
continuous backup source in FSx File Gateway.

B. Create an FSx for Windows File Server file system in the DR Region. Establish
connectivity between the VPC in the primary Region and the VPC in the DR Region by
using AWS Site-to-Site VPN. Configure AWS DataSync to communicate by using VPN
endpoints.

C. Create an FSx for Windows File Server file system in the DR Region. Establish
connectivity between the VPC in the primary Region and the VPC in the DR Region by using VPC peering. Configure AWS DataSync to communicate by using interface VPC
endpoints with AWS PrivateLink.

D. Create an FSx for Windows File Server file system in the DR Region. Establish
connectivity between the VPC in the primary Region and the VPC in the DR Region by
using AWS Transit Gateway in each Region. Use AWS Transfer Family to copy files
between the FSx for Windows File Server file system in the primary Region and the FSx for
Windows File Server file system in the DR Region over the private AWS backbone
network.


Question # 90

A company is building an application on AWS. The application sends logs to an Amazon
Elasticsearch Service (Amazon ES) cluster for analysis. All data must be stored within a
VPC.
Some of the company's developers work from home. Other developers work from three
different company office locations. The developers need to access
Amazon ES to analyze and visualize logs directly from their local development machines.
Which solution will meet these requirements?

A. Configure and set up an AWS Client VPN endpoint. Associate the Client VPN endpoint
with a subnet in the VPC. Configure a Client VPN self-service portal. Instruct the
developers to connect by using the client for Client VPN.

B. Create a transit gateway, and connect it to the VPC. Create an AWS Site-to-Site VPN.
Create an attachment to the transit gateway. Instruct the developers to connect by using an
OpenVPN client.

C. Create a transit gateway, and connect it to the VPC. Order an AWS Direct Connect
connection. Set up a public VIF on the Direct Connect connection. Associate the public VIF
with the transit gateway. Instruct the developers to connect to the Direct Connect
connection

D. Create and configure a bastion host in a public subnet of the VPC. Configure the bastion
host security group to allow SSH access from the company CIDR ranges. Instruct the
developers to connect by using SSH.


Question # 91

A company owns a chain of travel agencies and is running an application in the AWS
Cloud. Company employees use the application to search for information about travel
destinations. Destination content is updated four times each year.
Two fixed Amazon EC2 instances serve the application. The company uses an Amazon
Route 53 public hosted zone with a multivalue record of travel.example.com that returns the Elastic IP addresses for the EC2 instances. The application uses Amazon DynamoDB
as its primary data store. The company uses a self-hosted Redis instance as a caching
solution.
During content updates, the load on the EC2 instances and the caching solution increases
drastically. This increased load has led to downtime on several occasions. A solutions
architect must update the application so that the application is highly available and can
handle the load that is generated by the content updates.
Which solution will meet these requirements?

A. Set up DynamoDB Accelerator (DAX) as in-memory cache. Update the application to
use DAX. Create an Auto Scaling group for the EC2 instances. Create an Application Load
Balancer (ALB). Set the Auto Scaling group as a target for the ALB. Update the Route 53
record to use a simple routing policy that targets the ALB's DNS alias. Configure scheduled
scaling for the EC2 instances before the content updates.

B. Set up Amazon ElastiCache for Redis. Update the application to use ElastiCache.
Create an Auto Scaling group for the EC2 instances. Create an Amazon CloudFront
distribution, and set the Auto Scaling group as an origin for the distribution. Update the
Route 53 record to use a simple routing policy that targets the CloudFront distribution's
DNS alias. Manually scale up EC2 instances before the content updates.

C. Set up Amazon ElastiCache for Memcached. Update the application to use ElastiCache
Create an Auto Scaling group for the EC2 instances. Create an Application Load Balancer
(ALB). Set the Auto Scaling group as a target for the ALB. Update the Route 53 record to
use a simple routing policy that targets the ALB's DNS alias. Configure scheduled scaling
for the application before the content updates.

D. Set up DynamoDB Accelerator (DAX) as in-memory cache. Update the application to
use DAX. Create an Auto Scaling group for the EC2 instances. Create an Amazon
CloudFront distribution, and set the Auto Scaling group as an origin for the distribution.
Update the Route 53 record to use a simple routing policy that targets the CloudFront
distribution's DNS alias. Manually scale up EC2 instances before the content updates.


Question # 92

A company that provisions job boards for a seasonal workforce is seeing an increase in
traffic and usage. The backend services run on a pair of Amazon EC2 instances behind an
Application Load Balancer with Amazon DynamoDB as the datastore. Application read and
write traffic is slow during peak seasons.
Which option provides a scalable application architecture to handle peak seasons with the
LEAST development effort?

A. Migrate the backend services to AWS Lambda. Increase the read and write capacity of
DynamoDB.

B. Migrate the backend services to AWS Lambda. Configure DynamoDB to use global
tables.

C. Use Auto Scaling groups for the backend services. Use DynamoDB auto scaling.

D. Use Auto Scaling groups for the backend services. Use Amazon Simple Queue Service
(Amazon SQS) and an AWS Lambda function to write to DynamoDB.
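
For context, option C's DynamoDB auto scaling is configured through Application Auto Scaling. A minimal boto3 sketch for the write dimension of a hypothetical table; reads and indexes are registered the same way:

```python
import boto3

aas = boto3.client("application-autoscaling")

aas.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/JobBoard",                       # hypothetical table
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)

aas.put_scaling_policy(
    PolicyName="JobBoardWriteScaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/JobBoard",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,   # keep consumed capacity near 70% of provisioned
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
)
```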


Question # 93

A company has a new application that needs to run on five Amazon EC2 instances in a
single AWS Region. The application requires high-throughput, low-latency network
connections between all of the EC2 instances where the application will run. There is no
requirement for the application to be fault tolerant.
Which solution will meet these requirements?

A. Launch five new EC2 instances into a cluster placement group. Ensure that the EC2
instance type supports enhanced networking.

B. Launch five new EC2 instances into an Auto Scaling group in the same Availability
Zone. Attach an extra elastic network interface to each EC2 instance.

C. Launch five new EC2 instances into a partition placement group. Ensure that the EC2
instance type supports enhanced networking.

D. Launch five new EC2 instances into a spread placement group Attach an extra elastic
network interface to each EC2 instance.
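
For context, option A's cluster placement group packs the instances onto closely connected hardware in one Availability Zone. A minimal boto3 sketch; the AMI and instance type are hypothetical stand-ins for a type that supports enhanced networking:

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

# Launch all five instances into the placement group in one call so they
# land on closely connected hardware.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI
    InstanceType="c5n.9xlarge",        # supports enhanced networking
    MinCount=5,
    MaxCount=5,
    Placement={"GroupName": "hpc-cluster"},
)
```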


Question # 94

A company wants to migrate to AWS. The company is running thousands of VMs in a
VMware ESXi environment. The company has no configuration management database and
has little knowledge about the utilization of the VMware portfolio.
A solutions architect must provide the company with an accurate inventory so that the
company can plan for a cost-effective migration.
Which solution will meet these requirements with the LEAST operational overhead?

A. Use AWS Systems Manager Patch Manager to deploy Migration Evaluator to each VM.
Review the collected data in Amazon QuickSight. Identify servers that have high utilization.
Remove the servers that have high utilization from the migration list. Import the data to
AWS Migration Hub.

B. Export the VMware portfolio to a csv file. Check the disk utilization for each server.
Remove servers that have high utilization. Export the data to AWS Application Migration
Service. Use AWS Server Migration Service (AWS SMS) to migrate the remaining servers.

C. Deploy the Migration Evaluator agentless collector to the ESXi hypervisor. Review the
collected data in Migration Evaluator. Identify inactive servers. Remove the inactive servers
from the migration list. Import the data to AWS Migration Hub.

D. Deploy the AWS Application Migration Service Agent to each VM. When the data is
collected, use Amazon Redshift to import and analyze the data. Use Amazon QuickSight
for data visualization.


Question # 95

A company has migrated a legacy application to the AWS Cloud. The application runs on
three Amazon EC2 instances that are spread across three Availability Zones. One EC2
instance is in each Availability Zone. The EC2 instances are running in three private
subnets of the VPC and are set up as targets for an Application Load Balancer (ALB) that
is associated with three public subnets.
The application needs to communicate with on-premises systems. Only traffic from IP
addresses in the company's IP address range is allowed to access the on-premises
systems. The company's security team is bringing only one IP address from its internal IP
address range to the cloud. The company has added this IP address to the allow list for the
company firewall. The company also has created an Elastic IP address for this IP address.
A solutions architect needs to create a solution that gives the application the ability to
communicate with the on-premises systems. The solution also must be able to mitigate
failures automatically.
Which solution will meet these requirements?

A. Deploy three NAT gateways, one in each public subnet. Assign the Elastic IP address to
the NAT gateways. Turn on health checks for the NAT gateways. If a NAT gateway fails a
health check, recreate the NAT gateway and assign the Elastic IP address to the new NAT
gateway.

B. Replace the ALB with a Network Load Balancer (NLB). Assign the Elastic IP address to
the NLB. Turn on health checks for the NLB. In the case of a failed health check, redeploy
the NLB in different subnets.

C. Deploy a single NAT gateway in a public subnet. Assign the Elastic IP address to the
NAT gateway. Use Amazon CloudWatch with a custom metric to
monitor the NAT gateway. If the NAT gateway is unhealthy, invoke an AWS Lambda
function to create a new NAT gateway in a different subnet. Assign the Elastic IP address
to the new NAT gateway.

D. Assign the Elastic IP address to the ALB. Create an Amazon Route 53 simple record
with the Elastic IP address as the value. Create a Route 53 health check. In the case of a
failed health check, recreate the ALB in different subnets.


Question # 96

A company has created an OU in AWS Organizations for each of its engineering teams.
Each OU owns multiple AWS accounts. The organization has hundreds of AWS accounts.
A solutions architect must design a solution so that each OU can view a breakdown of
usage costs across its AWS accounts. Which solution meets these requirements?

A. Create an AWS Cost and Usage Report (CUR) for each OU by using AWS Resource
Access Manager. Allow each team to visualize the CUR through an Amazon QuickSight
dashboard.

B. Create an AWS Cost and Usage Report (CUR) from the AWS Organizations
management account. Allow each team to visualize the CUR through an Amazon
QuickSight dashboard.

C. Create an AWS Cost and Usage Report (CUR) in each AWS Organizations member
account. Allow each team to visualize the CUR through an Amazon QuickSight dashboard.

D. Create an AWS Cost and Usage Report (CUR) by using AWS Systems Manager. Allow
each team to visualize the CUR through Systems Manager OpsCenter dashboards.


Question # 97

A company built an application based on AWS Lambda deployed in an AWS
CloudFormation stack. The last production release of the web application introduced an
issue that resulted in an outage lasting several minutes. A solutions architect must adjust
the deployment process to support a canary release.
Which solution will meet these requirements?

A. Create an alias for every new deployed version of the Lambda function. Use the AWS
CLI update-alias command with the routing-config parameter to distribute the load.

B. Deploy the application into a new CloudFormation stack. Use an Amazon Route 53
weighted routing policy to distribute the load.

C. Create a version for every new deployed Lambda function. Use the AWS CLI
update-function-configuration command with the routing-config parameter to distribute the load.

D. Configure AWS CodeDeploy and use CodeDeployDefault.OneAtATime in the
Deployment configuration to distribute the load.
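
For context, option A's weighted alias is Lambda's native canary mechanism. A hedged boto3 sketch that shifts 10% of traffic to a newly published version; the function and alias names are hypothetical:

```python
import boto3

lam = boto3.client("lambda")

# Publish the new code as an immutable version.
new_version = lam.publish_version(FunctionName="web-app")["Version"]

# Send 10% of invocations to the new version; the alias keeps pointing
# at the stable version for the other 90%.
lam.update_alias(
    FunctionName="web-app",            # hypothetical function name
    Name="live",
    RoutingConfig={"AdditionalVersionWeights": {new_version: 0.10}},
)
```

Rolling back is one more update_alias call that clears the RoutingConfig.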


Question # 98

A company is running a critical application that uses an Amazon RDS for MySQL database
to store data. The RDS DB instance is deployed in Multi-AZ mode.
A recent RDS database failover test caused a 40-second outage to the application. A
solutions architect needs to design a solution to reduce the outage time to less than 20
seconds.
Which combination of steps should the solutions architect take to meet these
requirements? (Select THREE.)

A. Use Amazon ElastiCache for Memcached in front of the database.

B. Use Amazon ElastiCache for Redis in front of the database.

C. Use RDS Proxy in front of the database.

D. Migrate the database to Amazon Aurora MySQL.

E. Create an Amazon Aurora Replica.

F. Create an RDS for MySQL read replica.


Question # 99

A company has multiple AWS accounts. The company recently had a security audit that
revealed many unencrypted Amazon Elastic Block Store (Amazon EBS) volumes attached to Amazon EC2 instances.
A solutions architect must encrypt the unencrypted volumes and ensure that unencrypted
volumes will be detected automatically in the future. Additionally, the company wants a
solution that can centrally manage multiple AWS accounts with a focus on compliance and
security.
Which combination of steps should the solutions architect take to meet these
requirements? (Choose two.)

A. Create an organization in AWS Organizations. Set up AWS Control Tower, and turn on
the strongly recommended guardrails. Join all accounts to the organization. Categorize the
AWS accounts into OUs.

B. Use the AWS CLI to list all the unencrypted volumes in all the AWS accounts. Run a
script to encrypt all the unencrypted volumes in place.

C. Create a snapshot of each unencrypted volume. Create a new encrypted volume from
the unencrypted snapshot. Detach the existing volume, and replace it with the encrypted
volume.

D. Create an organization in AWS Organizations. Set up AWS Control Tower, and turn on
the mandatory guardrails. Join all accounts to the organization. Categorize the AWS
accounts into OUs.

E. Turn on AWS CloudTrail. Configure an Amazon EventBridge (Amazon CloudWatch
Events) rule to detect and automatically encrypt unencrypted volumes.
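
For context, option C's snapshot-and-replace flow looks roughly like this per volume. A hedged boto3 sketch; all IDs, the Availability Zone, and the device name are hypothetical, and the instance should be stopped before the swap:

```python
import boto3

ec2 = boto3.client("ec2")

OLD_VOLUME = "vol-0123456789abcdef0"   # hypothetical unencrypted volume
INSTANCE = "i-0123456789abcdef0"       # hypothetical instance (stopped)

# 1. Snapshot the unencrypted volume and wait for it to complete.
snap_id = ec2.create_snapshot(VolumeId=OLD_VOLUME)["SnapshotId"]
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap_id])

# 2. Create an encrypted volume directly from the unencrypted snapshot.
new_vol = ec2.create_volume(
    SnapshotId=snap_id,
    AvailabilityZone="us-east-1a",     # must match the instance's AZ
    Encrypted=True,                    # uses the default aws/ebs KMS key
)["VolumeId"]
ec2.get_waiter("volume_available").wait(VolumeIds=[new_vol])

# 3. Swap the volumes (device name is hypothetical).
ec2.detach_volume(VolumeId=OLD_VOLUME)
ec2.attach_volume(VolumeId=new_vol, InstanceId=INSTANCE, Device="/dev/xvdf")
```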


Question # 100

An online gaming company needs to optimize the cost of its workloads on AWS. The
company uses a dedicated account to host the production environment for its online
gaming application and an analytics application.
Amazon EC2 instances host the gaming application and must always be available. The EC2
instances run all year. The analytics application uses data that is stored in Amazon S3. The
analytics application can be interrupted and resumed without issue.
Which solution will meet these requirements MOST cost-effectively?

A. Purchase an EC2 Instance Savings Plan for the online gaming application instances.
Use On-Demand Instances for the analytics application.

B. Purchase an EC2 Instance Savings Plan for the online gaming application instances.
Use Spot Instances for the analytics application.

C. Use Spot Instances for the online gaming application and the analytics application. Set
up a catalog in AWS Service Catalog to provision services at a discount.

D. Use On-Demand Instances for the online gaming application. Use Spot Instances for the
analytics application. Set up a catalog in AWS Service Catalog to provision services at a
discount.


Question # 101

A company uses a load balancer to distribute traffic to Amazon EC2 instances in a single
Availability Zone. The company is concerned about security and wants a solutions architect
to re-architect the solution to meet the following requirements:
• Inbound requests must be filtered for common vulnerability attacks.
• Rejected requests must be sent to a third-party auditing application.
• All resources should be highly available.
Which solution meets these requirements?

A. Configure a Multi-AZ Auto Scaling group using the application's AMI. Create an
Application Load Balancer (ALB) and select the previously created Auto Scaling group as
the target. Use Amazon Inspector to monitor traffic to the ALB and EC2 instances. Create a
web ACL in WAF. Create an AWS WAF using the web ACL and ALB. Use an AWS
Lambda function to frequently push the Amazon Inspector report to the third-party auditing
application.

B. Configure an Application Load Balancer (ALB) and add the EC2 instances as targets
Create a web ACL in WAF. Create an AWS WAF using the web ACL and ALB name and
enable logging with Amazon CloudWatch Logs. Use an AWS Lambda function to frequently
push the logs to the third-party auditing application.

C. Configure an Application Load Balancer (ALB) along with a target group adding the EC2
instances as targets. Create an Amazon Kinesis Data Firehose with the destination of the
third-party auditing application. Create a web ACL in WAF. Create an AWS WAF using the
web ACL and ALB then enable logging by selecting the Kinesis Data Firehose as the
destination. Subscribe to AWS Managed Rules in AWS Marketplace, choosing the WAF as
the subscriber.

D. Configure a Multi-AZ Auto Scaling group using the application's AMI. Create an
Application Load Balancer (ALB) and select the previously created Auto Scaling group as
the target. Create an Amazon Kinesis Data Firehose with a destination of the third-party
auditing application. Create a web ACL in WAF. Create an AWS WAF using the web ACL
and ALB, then enable logging by selecting the Kinesis Data Firehose as the destination.
Subscribe to AWS Managed Rules in AWS Marketplace, choosing the WAF as the
subscriber.


Question # 102

A company needs to aggregate Amazon CloudWatch logs from its AWS accounts into one
central logging account. The collected logs must remain in the AWS Region of
creation. The central logging account will then process the logs, normalize the logs into
standard output format, and stream the output logs to a security tool for more processing.
A solutions architect must design a solution that can handle a large volume of logging data
that needs to be ingested. Less logging will occur outside normal business hours than
during normal business hours. The logging solution must scale with the anticipated load.
The solutions architect has decided to use an AWS Control Tower design to handle the
multi-account logging process.
Which combination of steps should the solutions architect take to meet the requirements?
(Select THREE.)

A. Create a destination Amazon Kinesis data stream in the central logging account.

B. Create a destination Amazon Simple Queue Service (Amazon SQS) queue in the
central logging account.

C. Create an IAM role that grants Amazon CloudWatch Logs the permission to add data to
the Amazon Kinesis data stream. Create a trust policy. Specify the trust policy in the IAM
role. In each member account, create a subscription filter for each log group to send data to
the Kinesis data stream.

D. Create an IAM role that grants Amazon CloudWatch Logs the permission to add data to
the Amazon Simple Queue Service (Amazon SQS) queue. Create a trust policy. Specify the trust policy in the IAM role. In each member account, create a single subscription filter
for all log groups to send data to the SQS queue.

E. Create an AWS Lambda function. Program the Lambda function to normalize the logs in
the central logging account and to write the logs to the security tool.

F. Create an AWS Lambda function. Program the Lambda function to normalize the logs in
the member accounts and to write the logs to the security tool.
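
For context, options A and C together mean a central Kinesis destination fed by per-log-group subscription filters in the member accounts. A minimal boto3 sketch of the member-account side; the log group name, destination ARN, and account ID are hypothetical:

```python
import boto3

logs = boto3.client("logs")

# Run in each member account: stream a log group to the central account's
# Kinesis data stream through a CloudWatch Logs destination.
logs.put_subscription_filter(
    logGroupName="/app/production",
    filterName="to-central-logging",
    filterPattern="",   # an empty pattern forwards every event
    destinationArn="arn:aws:logs:eu-west-1:111111111111:destination:central-logs",
)
```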


Question # 103

A large payroll company recently merged with a small staffing company. The unified
company now has multiple business units, each with its own existing AWS account.
A solutions architect must ensure that the company can centrally manage the billing and
access policies for all the AWS accounts. The solutions architect configures AWS
Organizations by sending an invitation to all member accounts of the company from a
centralized management account. What should the solutions architect do next to meet these requirements?

A. Create the OrganizationAccountAccess IAM group in each member account. Include the
necessary IAM roles for each administrator.

B. Create the OrganizationAccountAccessPolicy IAM policy in each member account.
Connect the member accounts to the management account by using cross-account
access.

C. Create the OrganizationAccountAccessRole IAM role in each member account. Grant
permission to the management account to assume the IAM role.

D. Create the OrganizationAccountAccessRole IAM role in the management account.
Attach the AdministratorAccess AWS managed policy to the IAM role. Assign the IAM role
to the administrators in each member account.


Question # 104

A company runs a web application on AWS. The web application delivers static content
from an Amazon S3 bucket that is behind an Amazon CloudFront distribution. The
application serves dynamic content by using an Application Load Balancer (ALB) that
distributes requests to a fleet of Amazon EC2 instances in Auto Scaling groups. The
application uses a domain name setup in Amazon Route 53.
Some users reported occasional issues when the users attempted to access the website
during peak hours. An operations team found that the ALB sometimes returned HTTP 503
Service Unavailable errors. The company wants to display a custom error message page
when these errors occur. The page should be displayed immediately for this error code.
Which solution will meet these requirements with the LEAST operational overhead?

A. Set up a Route 53 failover routing policy. Configure a health check to determine the
status of the ALB endpoint and to fail over to the failover S3 bucket endpoint.

B. Create a second CloudFront distribution and an S3 static website to host the custom
error page. Set up a Route 53 failover routing policy. Use an active-passive configuration
between the two distributions.

C. Create a CloudFront origin group that has two origins. Set the ALB endpoint as the
primary origin. For the secondary origin, set an S3 bucket that is configured to host a static
website. Set up origin failover for the CloudFront distribution. Update the S3 static website
to incorporate the custom error page.

D. Create a CloudFront function that validates each HTTP response code that the ALB
returns. Create an S3 static website in an S3 bucket. Upload the custom error page to the
S3 bucket as a failover. Update the function to read the S3 bucket and to serve the error
page to the end users.


Question # 105

A company's solutions architect needs to provide secure Remote Desktop connectivity to
users for Amazon EC2 Windows instances that are hosted in a VPC. The solution must
integrate centralized user management with the company's on-premises Active Directory.
Connectivity to the VPC is through the internet. The company has hardware that can be
used to establish an AWS Site-to-Site VPN connection.
Which solution will meet these requirements MOST cost-effectively?

A. Deploy a managed Active Directory by using AWS Directory Service for Microsoft Active
Directory. Establish a trust with the on-premises Active Directory. Deploy an EC2 instance
as a bastion host in the VPC. Ensure that the EC2 instance is joined to the domain. Use
the bastion host to access the target instances through RDP.

B. Configure AWS IAM Identity Center (AWS Single Sign-On) to integrate with the
on-premises Active Directory by using the AWS Directory Service for Microsoft Active
Directory AD Connector. Configure permission sets against user groups for access to AWS
Systems Manager. Use Systems Manager Fleet Manager to access the target instances
through RDP.

C. Implement a VPN between the on-premises environment and the target VPC. Ensure
that the target instances are joined to the on-premises Active Directory domain over the
VPN connection. Configure RDP access through the VPN. Connect from the company's
network to the target instances.

D. Deploy a managed Active Directory by using AWS Directory Service for Microsoft Active
Directory. Establish a trust with the on-premises Active Directory. Deploy a Remote
Desktop Gateway on AWS by using an AWS Quick Start. Ensure that the Remote Desktop
Gateway is joined to the domain. Use the Remote Desktop Gateway to access the target
instances through RDP.


Question # 106

A team of data scientists is using Amazon SageMaker instances and SageMaker APIs to
train machine learning (ML) models. The SageMaker instances are deployed in a
VPC that does not have access to or from the internet. Datasets for ML model training are
stored in an Amazon S3 bucket. Interface VPC endpoints provide access to Amazon S3
and the SageMaker APIs.
Occasionally, the data scientists require access to the Python Package Index (PyPI)
repository to update Python packages that they use as part of their workflow. A solutions
architect must provide access to the PyPI repository while ensuring that the SageMaker instances remain isolated from the internet.
Which solution will meet these requirements?

A. Create an AWS CodeCommit repository for each package that the data scientists need
to access. Configure code synchronization between the PyPI repository and the
CodeCommit repository. Create a VPC endpoint for CodeCommit.

B. Create a NAT gateway in the VPC. Configure VPC routes to allow access to the internet
with a network ACL that allows access to only the PyPI repository endpoint.

C. Create a NAT instance in the VPC. Configure VPC routes to allow access to the
internet. Configure SageMaker notebook instance firewall rules that allow access to only
the PyPI repository endpoint.

D. Create an AWS CodeArtifact domain and repository. Add an external connection for
public:pypi to the CodeArtifact repository. Configure the Python client to use the
CodeArtifact repository. Create a VPC endpoint for CodeArtifact.
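
For context, option D's CodeArtifact setup is three API calls plus a client configuration step. A hedged boto3 sketch with hypothetical domain and repository names:

```python
import boto3

ca = boto3.client("codeartifact")

ca.create_domain(domain="ml-team")
ca.create_repository(domain="ml-team", repository="python-packages")

# The external connection lets CodeArtifact fetch packages from PyPI on
# demand; the SageMaker instances only ever talk to the CodeArtifact
# interface VPC endpoint.
ca.associate_external_connection(
    domain="ml-team",
    repository="python-packages",
    externalConnection="public:pypi",
)
```

Clients are then pointed at the repository, for example with aws codeartifact login --tool pip --domain ml-team --repository python-packages.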


Question # 107

A company plans to deploy a new private intranet service on Amazon EC2 instances inside
a VPC. An AWS Site-to-Site VPN connects the VPC to the company's on-premises
network. The new service must communicate with existing on-premises services. The
on-premises services are accessible through the use of hostnames that reside in the
company.example DNS zone. This DNS zone is wholly hosted on premises and is available
only on the company's private network.
A solutions architect must ensure that the new service can resolve hostnames on the
company.example domain to integrate with existing services.
Which solution meets these requirements?

A. Create an empty private zone in Amazon Route 53 for company.example. Add an
additional NS record to the company's on-premises company.example zone that points to
the authoritative name servers for the new private zone in Route 53.

B. Turn on DNS hostnames for the VPC. Configure a new outbound endpoint with Amazon
Route 53 Resolver. Create a Resolver rule to forward requests for company.example to the
on-premises name servers.

C. Turn on DNS hostnames for the VPC. Configure a new inbound resolver endpoint with
Amazon Route 53 Resolver. Configure the on-premises DNS server to forward requests for
company.example to the new resolver.

D. Use AWS Systems Manager to configure a run document that will install a hosts file that
contains any required hostnames. Use an Amazon EventBridge rule to run the document
when an instance is entering the running state.
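
For context, option B's outbound endpoint and forwarding rule can be created as follows. A hedged boto3 sketch; the security group, subnets, VPC ID, and on-premises DNS IP are hypothetical:

```python
import boto3

r53r = boto3.client("route53resolver")

endpoint = r53r.create_resolver_endpoint(
    CreatorRequestId="outbound-001",
    Direction="OUTBOUND",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    IpAddresses=[{"SubnetId": "subnet-aaa111"}, {"SubnetId": "subnet-bbb222"}],
)

rule = r53r.create_resolver_rule(
    CreatorRequestId="fwd-company-example",
    RuleType="FORWARD",
    DomainName="company.example",
    ResolverEndpointId=endpoint["ResolverEndpoint"]["Id"],
    TargetIps=[{"Ip": "10.10.0.10", "Port": 53}],   # on-premises DNS server
)

r53r.associate_resolver_rule(
    ResolverRuleId=rule["ResolverRule"]["Id"],
    VPCId="vpc-0123456789abcdef0",
)
```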


Question # 108

A company runs its application in the eu-west-1 Region and has one account for each of its
environments: development, testing, and production. All the environments are running 24
hours a day, 7 days a week by using stateful Amazon EC2 instances and Amazon RDS for
MySQL databases. The databases are between 500 GB and 800 GB in size.
The development team and testing team work on business days during business hours, but
the production environment operates 24 hours a day, 7 days a week. The company wants
to reduce costs. All resources are tagged with an environment tag with either development,
testing, or production as the key.
What should a solutions architect do to reduce costs with the LEAST operational effort?

A. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that runs once every
day. Configure the rule to invoke one AWS Lambda function that starts or stops instances
based on the tag, day, and time.

B. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that runs every
business day in the evening. Configure the rule to invoke an AWS Lambda function that
stops instances based on the tag. Create a second EventBridge (CloudWatch Events) rule
that runs every business day in the morning. Configure the second rule to invoke another
Lambda function that starts instances based on the tag.

C. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that runs every
business day in the evening. Configure the rule to invoke an AWS Lambda function that
terminates instances based on the tag. Create a second EventBridge (CloudWatch Events)
rule that runs every business day in the morning. Configure the second rule to invoke
another Lambda function that restores the instances from their last backup based on the
tag.

D. Create an Amazon EventBridge rule that runs every hour. Configure the rule to invoke
one AWS Lambda function that terminates or restores instances from their last backup
based on the tag, day, and time.


Question # 109

A company has used infrastructure as code (IaC) to provision a set of two Amazon EC2
instances. The instances have remained the same for several years.
The company's business has grown rapidly in the past few months. In response the
company's operations team has implemented an Auto Scaling group to manage the
sudden increases in traffic. Company policy requires a monthly installation of security
updates on all operating systems that are running. The most recent security update required a reboot. As a result, the Auto Scaling group
terminated the instances and replaced them with new, unpatched instances.
Which combination of steps should a solutions architect recommend to avoid a recurrence
of this issue? (Choose two.)

A. Modify the Auto Scaling group by setting the Update policy to target the oldest launch
configuration for replacement.

B. Create a new Auto Scaling group before the next patch maintenance. During the
maintenance window, patch both groups and reboot the instances.

C. Create an Elastic Load Balancer in front of the Auto Scaling group. Configure monitoring
to ensure that target group health checks return healthy after the Auto Scaling group
replaces the terminated instances.

D. Create automation scripts to patch an AMI, update the launch configuration, and invoke
an Auto Scaling instance refresh.

E. Create an Elastic Load Balancer in front of the Auto Scaling group. Configure
termination protection on the instances.
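
For context, option D ends with an instance refresh that rolls the group onto the patched AMI. A minimal boto3 sketch; the group name and preferences are illustrative:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# After the patched AMI is baked and the launch template or configuration
# is updated, roll the group onto the new configuration instance by instance.
autoscaling.start_instance_refresh(
    AutoScalingGroupName="web-asg",          # hypothetical group name
    Preferences={
        "MinHealthyPercentage": 90,          # keep most capacity in service
        "InstanceWarmup": 300,
    },
)
```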


Question # 110

A company has application services that have been containerized and deployed on multiple
Amazon EC2 instances with public IPs. An Apache Kafka cluster has been deployed to the EC2 instances. A PostgreSQL database has been migrated to Amazon RDS for
PostgreSQL. The company expects a significant increase of orders on its platform when a
new version of its flagship product is released.
What changes to the current architecture will reduce operational overhead and support the
product release?

A. Create an EC2 Auto Scaling group behind an Application Load Balancer. Create
additional read replicas for the DB instance. Create Amazon Kinesis data streams and
configure the application services to use the data streams. Store and serve static content
directly from Amazon S3.

B. Create an EC2 Auto Scaling group behind an Application Load Balancer. Deploy the DB
instance in Multi-AZ mode and enable storage auto scaling. Create Amazon Kinesis data
streams and configure the application services to use the data streams. Store and serve
static content directly from Amazon S3.

C. Deploy the application on a Kubernetes cluster created on the EC2 instances behind an
Application Load Balancer. Deploy the DB instance in Multi-AZ mode and enable storage
auto scaling. Create an Amazon Managed Streaming for Apache Kafka cluster and
configure the application services to use the cluster. Store static content in Amazon S3
behind an Amazon CloudFront distribution.

D. Deploy the application on Amazon Elastic Kubernetes Service (Amazon EKS) with AWS
Fargate and enable auto scaling behind an Application Load Balancer. Create additional
read replicas for the DB instance. Create an Amazon Managed Streaming for Apache
Kafka cluster and configure the application services to use the cluster. Store static content
in Amazon S3 behind an Amazon CloudFront distribution.


Question # 111

A company manages hundreds of AWS accounts centrally in an organization in AWS
Organizations. The company recently started to allow product teams to create and manage
their own S3 access points in their accounts. The S3 access points can be accessed only
within VPCs, not on the internet.
What is the MOST operationally efficient way to enforce this requirement?

A. Set the S3 access point resource policy to deny the s3:CreateAccessPoint action unless
the s3:AccessPointNetworkOrigin condition key evaluates to VPC.

B. Create an SCP at the root level in the organization to deny the s3:CreateAccessPoint
action unless the s3:AccessPointNetworkOrigin condition key evaluates to VPC.

C. Use AWS CloudFormation StackSets to create a new IAM policy in each AWS account
that allows the s3:CreateAccessPoint action only if the s3:AccessPointNetworkOrigin
condition key evaluates to VPC.

D. Set the S3 bucket policy to deny the s3:CreateAccessPoint action unless the
s3:AccessPointNetworkOrigin condition key evaluates to VPC.
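
For context, option B's SCP takes two Organizations calls to create and attach. A hedged boto3 sketch; the policy name and root ID are hypothetical:

```python
import json

import boto3

org = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyNonVpcAccessPoints",
        "Effect": "Deny",
        "Action": "s3:CreateAccessPoint",
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {"s3:AccessPointNetworkOrigin": "VPC"}
        },
    }],
}

policy = org.create_policy(
    Name="RequireVpcOnlyAccessPoints",   # hypothetical name
    Description="S3 access points may only be created with a VPC network origin",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attaching at the root covers every current and future account (root ID hypothetical).
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-abcd",
)
```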


Question # 112

A company's CISO has asked a Solutions Architect to re-engineer the company's current
CI/CD practices to make sure patch deployments to its applications can happen as quickly
as possible with minimal downtime if vulnerabilities are discovered. The company must
also be able to quickly roll back a change in case of errors.
The web application is deployed in a fleet of Amazon EC2 instances behind an Application
Load Balancer. The company is currently using GitHub to host the application source code,
and has configured an AWS CodeBuild project to build the application. The company also
intends to use AWS CodePipeline to trigger builds from GitHub commits using the existing
CodeBuild project.
What CI/CD configuration meets all of the requirements?

A. Configure CodePipeline with a deploy stage using AWS CodeDeploy configured for
in-place deployment. Monitor the newly deployed code, and, if there are any issues, push
another code update.
B. Configure CodePipeline with a deploy stage using AWS CodeDeploy configured for
blue/green deployments. Monitor the newly deployed code, and, if there are any issues,
trigger a manual rollback using CodeDeploy.

C. Configure CodePipeline with a deploy stage using AWS CloudFormation to create a
pipeline for test and production stacks. Monitor the newly deployed code, and, if there are
any issues, push another code update.

D. Configure the CodePipeline with a deploy stage using AWS OpsWorks and in-place
deployments. Monitor the newly deployed code, and, if there are any issues, push another
code update.
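For illustration, the blue/green deployment group described in option B could be sketched with boto3 roughly as follows. The application, role, Auto Scaling group, and target group names are hypothetical placeholders.

import boto3

codedeploy = boto3.client("codedeploy")

# Minimal sketch: a CodeDeploy deployment group that performs blue/green
# deployments behind a load balancer and rolls back automatically on failure.
codedeploy.create_deployment_group(
    applicationName="web-app",
    deploymentGroupName="web-app-blue-green",
    serviceRoleArn="arn:aws:iam::111122223333:role/CodeDeployServiceRole",
    deploymentStyle={
        "deploymentType": "BLUE_GREEN",
        "deploymentOption": "WITH_TRAFFIC_CONTROL",
    },
    autoRollbackConfiguration={
        "enabled": True,
        "events": ["DEPLOYMENT_FAILURE"],
    },
    blueGreenDeploymentConfiguration={
        "terminateBlueInstancesOnDeploymentSuccess": {
            "action": "TERMINATE",
            "terminationWaitTimeInMinutes": 60,
        },
        "deploymentReadyOption": {"actionOnTimeout": "CONTINUE_DEPLOYMENT"},
        "greenFleetProvisioningOption": {"action": "COPY_AUTO_SCALING_GROUP"},
    },
    autoScalingGroups=["web-app-asg"],
    loadBalancerInfo={"targetGroupInfoList": [{"name": "web-app-tg"}]},
)

Keeping the blue fleet alive for a wait period is what makes a fast manual rollback possible if monitoring reveals a problem.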


Question # 113

A company is planning to migrate its on-premises VMware cluster of 120 VMs to AWS.
The VMs have many different operating systems and many custom software
packages installed. The company also has an on-premises NFS server that is 10 TB in
size. The company has set up a 10 Gbps AWS Direct Connect connection to AWS for the
migration.
Which solution will complete the migration to AWS in the LEAST amount of time?

A. Export the on-premises VMs and copy them to an Amazon S3 bucket. Use VM
Import/Export to create AMIs from the VM images that are stored in Amazon S3. Order an
AWS Snowball Edge device. Copy the NFS server data to the device. Restore the NFS
server data to an Amazon EC2 instance that has NFS configured.

B. Configure AWS Application Migration Service with a connection to the VMware cluster.
Create a replication job for the VMs. Create an Amazon Elastic File System (Amazon EFS)
file system. Configure AWS DataSync to copy the NFS server data to the EFS file system
over the Direct Connect connection.

C. Recreate the VMS on AWS as Amazon EC2 instances. Install all the required software
packages. Create an Amazon FSx for Lustre file system. Configure AWS DataSync to copy
the NFS server data to the FSx for Lustre file system over the Direct Connect connection.

D. Order two AWS Snowball Edge devices. Copy the VMs and the NFS server data to the
devices. Run VM Import/Export after the data from the devices is loaded to an Amazon S3
bucket. Create an Amazon Elastic File System (Amazon EFS) file system. Copy the NFS server data from Amazon S3 to the EFS file system.


Question # 114

A Solutions Architect wants to make sure that only AWS users or roles with suitable
permissions can access a new Amazon API Gateway endpoint. The Solutions
Architect wants an end-to-end view of each request to analyze the latency of the request
and create service maps.
How can the Solutions Architect design the API Gateway access control and perform
request inspections?

A. For the API Gateway method, set the authorization to AWS_IAM. Then, give the IAM user or role execute-api:Invoke permission on the REST API resource. Enable the API

caller to sign requests with AWS Signature when accessing the endpoint. Use AWS X-Ray
to trace and analyze user requests to API Gateway.

B. For the API Gateway resource, set CORS to enabled and only return the company's
domain in Access-Control-Allow-Origin headers. Then, give the IAM user or role
execute-api:Invoke permission on the REST API resource. Use Amazon CloudWatch to trace and
analyze user requests to API Gateway.

C. Create an AWS Lambda function as the custom authorizer, ask the API client to pass
the key and secret when making the call, and then use Lambda to validate the key/secret
pair against the IAM system. Use AWS X-Ray to trace and analyze user requests to API
Gateway.

D. Create a client certificate for API Gateway. Distribute the certificate to the AWS users
and roles that need to access the endpoint. Enable the API caller to pass the client
certificate when accessing the endpoint. Use Amazon CloudWatch to trace and analyze
user requests to API Gateway.
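For illustration, the IAM side of option A could be sketched with boto3 roughly as follows. The account ID, API ID, stage, and role name are hypothetical placeholders; execute-api:Invoke and the tracingEnabled stage setting are real.

import json
import boto3

# Minimal sketch: an identity-based policy that lets a role invoke a
# specific API Gateway REST API stage.
invoke_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "execute-api:Invoke",
            "Resource": "arn:aws:execute-api:us-east-1:111122223333:a1b2c3d4e5/prod/*/*",
        }
    ],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="ApiCallerRole",  # placeholder role name
    PolicyName="AllowInvokeApi",
    PolicyDocument=json.dumps(invoke_policy),
)

# Turn on X-Ray tracing for the stage so each request is traced end to end.
apigateway = boto3.client("apigateway")
apigateway.update_stage(
    restApiId="a1b2c3d4e5",  # placeholder API ID
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/tracingEnabled", "value": "true"}
    ],
)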


Question # 115

A live-events company is designing a scaling solution for its ticket application on AWS. The
application has high peaks of utilization during sale events. Each sale event is a one-time
event that is scheduled. The application runs on Amazon EC2 instances that are in an Auto
Scaling group.
The application uses PostgreSQL for the database layer.
The company needs a scaling solution to maximize availability during the sale events.
Which solution will meet these requirements?

A. Use a predictive scaling policy for the EC2 instances. Host the database on an Amazon
Aurora PostgreSQL Serverless v2 Multi-AZ DB instance with automatically scaling read
replicas. Create an AWS Step Functions state machine to run parallel AWS Lambda
functions to pre-warm the database before a sale event. Create an Amazon EventBridge
rule to invoke the state machine.

B. Use a scheduled scaling policy for the EC2 instances. Host the database on an Amazon
RDS for PostgreSQL Multi-AZ DB instance with automatically scaling read replicas. Create
an Amazon EventBridge rule that invokes an AWS Lambda function to create a larger read
replica before a sale event. Fail over to the larger read replica. Create another EventBridge
rule that invokes another Lambda function to scale down the read replica after the sale
event.

C. Use a predictive scaling policy for the EC2 instances. Host the database on an Amazon
RDS for PostgreSQL Multi-AZ DB instance with automatically scaling read replicas. Create
an AWS Step Functions state machine to run parallel AWS Lambda functions to pre-warm
the database before a sale event. Create an Amazon EventBridge rule to invoke the state
machine.

D. Use a scheduled scaling policy for the EC2 instances. Host the database on an Amazon
Aurora PostgreSQL Multi-AZ DB cluster. Create an Amazon EventBridge rule that invokes
an AWS Lambda function to create a larger Aurora Replica before a sale event. Fail over to
the larger Aurora Replica. Create another EventBridge rule that invokes another Lambda
function to scale down the Aurora Replica after the sale event.
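For illustration, because each sale is a one-time scheduled event, the scheduled scaling part of these options could be sketched with boto3 roughly as follows. The group name, capacities, and times are hypothetical placeholders.

import datetime
import boto3

autoscaling = boto3.client("autoscaling")

# Minimal sketch: pre-scale the fleet ahead of a known sale event and
# scale back down afterwards.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="ticket-app-asg",
    ScheduledActionName="pre-sale-scale-out",
    StartTime=datetime.datetime(2024, 7, 1, 11, 0),  # one hour before the sale
    MinSize=10,
    MaxSize=50,
    DesiredCapacity=30,
)
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="ticket-app-asg",
    ScheduledActionName="post-sale-scale-in",
    StartTime=datetime.datetime(2024, 7, 1, 18, 0),  # after the sale ends
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
)

Scheduled scaling suits one-time, known-in-advance peaks, whereas predictive scaling relies on recurring historical patterns.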


Question # 116

A company is building an image service on the web that will allow users to upload and
search random photos. At peak usage, up to 10,000 users worldwide will upload their
images. The service will then overlay text on the uploaded images, which will then be
published on the company website.
Which design should a solutions architect implement?

A. Store the uploaded images in Amazon Elastic File System (Amazon EFS). Send
application log information about each image to Amazon CloudWatch Logs. Create a fleet
of Amazon EC2 instances that use CloudWatch Logs to determine which images need to
be processed. Place processed images in another directory in Amazon EFS. Enable
Amazon CloudFront and configure the origin to be one of the EC2 instances in the fleet.

B. Store the uploaded images in an Amazon S3 bucket and configure an S3 bucket event
notification to send a message to Amazon Simple Notification Service (Amazon SNS).
Create a fleet of Amazon EC2 instances behind an Application Load Balancer (ALB) to pull
messages from Amazon SNS to process the images and place them in Amazon Elastic File
System (Amazon EFS). Use Amazon CloudWatch metrics for the SNS message volume to
scale out EC2 instances. Enable Amazon CloudFront and configure the origin to be the
ALB in front of the EC2 instances.

C. Store the uploaded images in an Amazon S3 bucket and configure an S3 bucket event
notification to send a message to an Amazon Simple Queue Service (Amazon SQS)
queue. Create a fleet of Amazon EC2 instances to pull messages from the SQS queue to
process the images and place them in another S3 bucket. Use Amazon CloudWatch
metrics for queue depth to scale out EC2 instances. Enable Amazon CloudFront and
configure the origin to be the S3 bucket that contains the processed images.

D. Store the uploaded images on a shared Amazon Elastic Block Store (Amazon EBS)
volume mounted to a fleet of Amazon EC2 Spot Instances. Create an Amazon
DynamoDB table that contains information about each uploaded image and whether it has
been processed. Use an Amazon EventBridge rule to scale out EC2 instances. Enable
Amazon CloudFront and configure the origin to reference an Elastic Load Balancer in front
of the fleet of EC2 instances.
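For illustration, the queue-depth scaling described in option C could be sketched with boto3 roughly as follows. The queue name, group name, and threshold are hypothetical placeholders; the ApproximateNumberOfMessagesVisible metric is the standard SQS backlog metric.

import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Minimal sketch: a simple scaling policy that adds workers, triggered by a
# CloudWatch alarm on SQS queue depth.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="image-workers-asg",
    PolicyName="scale-out-on-backlog",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=2,  # add two instances per alarm breach
)
cloudwatch.put_metric_alarm(
    AlarmName="image-queue-backlog",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "image-upload-queue"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=2,
    Threshold=100,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],  # invoke the scaling policy
)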


Question # 117

A company is building an application that will run on an AWS Lambda function. Hundreds
of customers will use the application. The company wants to give each customer a quota of
requests for a specific time period. The quotas must match customer usage patterns. Some
customers must receive a higher quota for a shorter time period.
Which solution will meet these requirements?

A. Create an Amazon API Gateway REST API with a proxy integration to invoke the
Lambda function. For each customer, configure an API Gateway usage plan that includes
an appropriate request quota. Create an API key from the usage plan for each user that the
customer needs.

B. Create an Amazon API Gateway HTTP API with a proxy integration to invoke the
Lambda function. For each customer, configure an API Gateway usage plan that includes
an appropriate request quota. Configure route-level throttling for each usage plan. Create
an API key from the usage plan for each user that the customer needs.

C. Create a Lambda function alias for each customer. Include a concurrency limit with an
appropriate request quota. Create a Lambda function URL for each function alias. Share
the Lambda function URL for each alias with the relevant customer.

D. Create an Application Load Balancer (ALB) in a VPC. Configure the Lambda function as
a target for the ALB. Configure an AWS WAF web ACL for the ALB. For each customer,
configure a rate-based rule that includes an appropriate request quota.
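For illustration, the per-customer usage plan described in option A could be sketched with boto3 roughly as follows. The API ID, stage, names, and limits are hypothetical placeholders.

import boto3

apigateway = boto3.client("apigateway")

# Minimal sketch: a usage plan with a daily request quota, plus an API key
# bound to the plan for one of the customer's users.
plan = apigateway.create_usage_plan(
    name="customer-a-plan",
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],
    quota={"limit": 10000, "period": "DAY"},
    throttle={"rateLimit": 100.0, "burstLimit": 200},
)
key = apigateway.create_api_key(name="customer-a-user-1", enabled=True)
apigateway.create_usage_plan_key(
    usagePlanId=plan["id"],
    keyId=key["id"],
    keyType="API_KEY",
)

Usage plans with quotas are a REST API feature, which is why the quota requirement matters when choosing between REST and HTTP APIs.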


Question # 118

A company runs applications in hundreds of production AWS accounts. The company uses
AWS Organizations with all features enabled and has a centralized backup
operation that uses AWS Backup.
The company is concerned about ransomware attacks. To address this concern, the
company has created a new policy that all backups must be resilient to breaches of
privileged-user credentials in any production account.
Which combination of steps will meet this new requirement? (Select THREE.)

A. Implement cross-account backup with AWS Backup vaults in designated non-production accounts.


B. Add an SCP that restricts the modification of AWS Backup vaults.

C. Implement AWS Backup Vault Lock in compliance mode.

D. Configure the backup frequency, lifecycle, and retention period to ensure that at least
one backup always exists in the cold tier.

E. Configure AWS Backup to write all backups to an Amazon S3 bucket in a designated
non-production account. Ensure that the S3 bucket has S3 Object Lock enabled.

F. Implement least privilege access for the IAM service role that is assigned to AWS
Backup.
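For illustration, the Vault Lock step described in option C could be sketched with boto3 roughly as follows. The vault name and retention values are hypothetical placeholders.

import boto3

backup = boto3.client("backup")

# Minimal sketch: lock a backup vault. Once the ChangeableForDays grace
# period expires, the lock enters compliance mode and cannot be removed or
# shortened, even by privileged users in the account.
backup.put_backup_vault_lock_configuration(
    BackupVaultName="central-backup-vault",
    MinRetentionDays=35,
    MaxRetentionDays=365,
    ChangeableForDays=3,  # lock becomes immutable after this window
)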


Question # 119

A financial services company has an asset management product that thousands of
customers use around the world. The customers provide feedback about the product
through surveys. The company is building a new analytical solution that runs on Amazon
EMR to analyze the data from these surveys. The following user personas need to access
the analytical solution to perform different actions:
• Administrator: Provisions the EMR cluster for the analytics team based on the team's
requirements
• Data engineer: Runs ETL scripts to process, transform, and enrich the datasets
• Data analyst: Runs SQL and Hive queries on the data
A solutions architect must ensure that all the user personas have least privilege access to
only the resources that they need. The user personas must be able to launch only
applications that are approved and authorized. The solution also must ensure tagging for
all resources that the user personas create.
Which solution will meet these requirements?

A. Create IAM roles for each user persona. Attach identity-based policies to define which
actions the user who assumes the role can perform. Create an AWS Config rule to check
for noncompliant resources. Configure the rule to notify the administrator to remediate the
noncompliant resources.

B. Set up Kerberos-based authentication for EMR clusters upon launch. Specify a
Kerberos security configuration along with cluster-specific Kerberos options.

C. Use AWS Service Catalog to control the Amazon EMR versions available for
deployment, the cluster configuration, and the permissions for each user persona.

D. Launch the EMR cluster by using AWS CloudFormation. Attach resource-based policies
to the EMR cluster during cluster creation. Create an AWS Config rule to check for
noncompliant clusters and noncompliant Amazon S3 buckets. Configure the rule to notify
the administrator to remediate the noncompliant resources.


Question # 120

A company implements a containerized application by using Amazon Elastic Container
Service (Amazon ECS) and Amazon API Gateway. The application data is stored in
Amazon Aurora databases and Amazon DynamoDB databases. The company automates
infrastructure provisioning by using AWS CloudFormation. The company automates
application deployment by using AWS CodePipeline.
A solutions architect needs to implement a disaster recovery (DR) strategy that meets an
RPO of 2 hours and an RTO of 4 hours.
Which solution will meet these requirements MOST cost-effectively?

A. Set up an Aurora global database and DynamoDB global tables to replicate the
databases to a secondary AWS Region. In the primary Region and in the secondary
Region, configure an API Gateway API with a Regional endpoint. Implement Amazon
CloudFront with origin failover to route traffic to the secondary Region during a DR scenario.

B. Use AWS Database Migration Service (AWS DMS), Amazon EventBridge, and AWS
Lambda to replicate the Aurora databases to a secondary AWS Region. Use DynamoDB
Streams, EventBridge, and Lambda to replicate the DynamoDB databases to the secondary
Region. In the primary Region and in the secondary Region, configure an API Gateway API
with a Regional endpoint. Implement Amazon Route 53 failover routing to switch traffic from
the primary Region to the secondary Region.

C. Use AWS Backup to create backups of the Aurora databases and the DynamoDB
databases in a secondary AWS Region. In the primary Region and in the secondary
Region, configure an API Gateway API with a Regional endpoint. Implement Amazon
Route 53 failover routing to switch traffic from the primary Region to the secondary Region.

D. Set up an Aurora global database and DynamoDB global tables to replicate the
databases to a secondary AWS Region. In the primary Region and in the secondary
Region, configure an API Gateway API with a Regional endpoint. Implement Amazon Route
53 failover routing to switch traffic from the primary Region to the secondary Region.
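For illustration, the Route 53 failover routing mentioned in several of these options could be sketched with boto3 roughly as follows. The hosted zone ID, record name, health check ID, and API Gateway domain names are hypothetical placeholders.

import boto3

route53 = boto3.client("route53")

# Minimal sketch: primary/secondary failover records pointing at the
# Regional API Gateway endpoints in each Region.
route53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE",
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "api.example.com",
                    "Type": "CNAME",
                    "TTL": 60,
                    "SetIdentifier": "primary",
                    "Failover": "PRIMARY",
                    "HealthCheckId": "hc-primary-example",  # placeholder
                    "ResourceRecords": [
                        {"Value": "abc123.execute-api.us-east-1.amazonaws.com"}
                    ],
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "api.example.com",
                    "Type": "CNAME",
                    "TTL": 60,
                    "SetIdentifier": "secondary",
                    "Failover": "SECONDARY",
                    "ResourceRecords": [
                        {"Value": "def456.execute-api.us-west-2.amazonaws.com"}
                    ],
                },
            },
        ]
    },
)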


Question # 121

A company is planning to migrate an Amazon RDS for Oracle database to an RDS for
PostgreSQL DB instance in another AWS account. A solutions architect needs to design a
migration strategy that will require no downtime and that will minimize the amount of time
necessary to complete the migration. The migration strategy must replicate all existing data
and any new data that is created during the migration. The target database must be
identical to the source database at completion of the migration process.
All applications currently use an Amazon Route 53 CNAME record as their endpoint for
communication with the RDS for Oracle DB instance. The RDS for Oracle DB instance is in
a private subnet.
Which combination of steps should the solutions architect take to meet these
requirements? (Select THREE)

A. Create a new RDS for PostgreSQL DB instance in the target account. Use the AWS
Schema Conversion Tool (AWS SCT) to migrate the database schema from the source
database to the target database.

B. Use the AWS Schema Conversion Tool (AWS SCT) to create a new RDS for
PostgreSQL DB instance in the target account with the schema and initial data from the
source database.

C. Configure VPC peering between the VPCs in the two AWS accounts to provide
connectivity to both DB instances from the target account. Configure the security groups
that are attached to each DB instance to allow traffic on the database port from the VPC in
the target account.

D. Temporarily allow the source DB instance to be publicly accessible to provide
connectivity from the VPC in the target account. Configure the security groups that are
attached to each DB instance to allow traffic on the database port from the VPC in the
target account.

E. Use AWS Database Migration Service (AWS DMS) in the target account to perform a full
load plus change data capture (CDC) migration from the source database to the target
database. When the migration is complete, change the CNAME record to point to the target
DB instance endpoint.

F. Use AWS Database Migration Service (AWS DMS) in the target account to perform a
change data capture (CDC) migration from the source database to the target database.
When the migration is complete, change the CNAME record to point to the target DB
instance endpoint.
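For illustration, the full load plus CDC task described in option E could be sketched with boto3 roughly as follows. The endpoint and instance ARNs are hypothetical placeholders; the "full-load-and-cdc" migration type is the real DMS setting for replicating existing data plus ongoing changes.

import json
import boto3

dms = boto3.client("dms")

# Minimal sketch: a DMS task that performs a full load and then keeps
# applying ongoing changes (CDC) until cutover.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-postgres",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SRC",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:INST",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)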


Question # 122

A company hosts a web application on AWS in the us-east-1 Region. The application
servers are distributed across three Availability Zones behind an Application Load
Balancer. The database is hosted in a MySQL database on an Amazon EC2 instance. A solutions architect needs to design a cross-Region data recovery solution using AWS
services with an RTO of less than 5 minutes and an RPO of less than 1 minute. The
solutions architect is deploying application servers in us-west-2 and has configured
Amazon Route 53 health checks and DNS failover to us-west-2.
Which additional step should the solutions architect take?

A. Migrate the database to an Amazon RDS for MySQL instance with a cross-Region read
replica in us-west-2.

B. Migrate the database to an Amazon Aurora global database with the primary in
us-east-1 and the secondary in us-west-2.

C. Migrate the database to an Amazon RDS for MySQL instance with a Multi-AZ
deployment.

D. Create a MySQL standby database on an Amazon EC2 instance in us-west-2.


Question # 123

A company is deploying a new API to AWS. The API uses Amazon API Gateway with a
Regional API endpoint and an AWS Lambda function for hosting. The API retrieves data
from an external vendor API, stores data in an Amazon DynamoDB global table, and
retrieves data from the DynamoDB global table. The API key for the vendor's API is stored
in AWS Secrets Manager and is encrypted with a customer managed key in AWS Key
Management Service (AWS KMS). The company has deployed its own API into a single
AWS Region.
A solutions architect needs to change the API components of the company's API to ensure
that the components can run across multiple Regions in an active-active configuration.
Which combination of changes will meet this requirement with the LEAST operational
overhead? (Choose three.)

A. Deploy the API to multiple Regions. Configure Amazon Route 53 with custom domain
names that route traffic to each Regional API endpoint. Implement a Route 53 multivalue
answer routing policy.

B. Create a new KMS multi-Region customer managed key. Create a new KMS customer
managed replica key in each in-scope Region.

C. Replicate the existing Secrets Manager secret to other Regions. For each in-scope
Region's replicated secret, select the appropriate KMS key.

D. Create a new AWS managed KMS key in each in-scope Region. Convert an existing
key to a multi-Region key. Use the multi-Region key in other Regions.

E. Create a new Secrets Manager secret in each in-scope Region. Copy the secret value
from the existing Region to the new secret in each in-scope Region.

F. Modify the deployment process for the Lambda function to repeat the deployment across in-scope Regions. Turn on the multi-Region option for the existing API. Select the Lambda
function that is deployed in each Region as the backend for the multi-Region API.
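For illustration, the multi-Region key and secret replication described in options B and C could be sketched with boto3 roughly as follows. The Region names and secret ID are hypothetical placeholders; multi-Region KMS keys and Secrets Manager secret replication are real features.

import boto3

# Minimal sketch: create a multi-Region KMS key, replicate it to another
# Region, then replicate the secret using the replica key.
kms = boto3.client("kms", region_name="us-east-1")
key = kms.create_key(MultiRegion=True, Description="Vendor API key encryption")
replica = kms.replicate_key(
    KeyId=key["KeyMetadata"]["KeyId"],
    ReplicaRegion="eu-west-1",
)

secrets = boto3.client("secretsmanager", region_name="us-east-1")
secrets.replicate_secret_to_regions(
    SecretId="prod/vendor-api-key",  # placeholder secret name
    AddReplicaRegions=[
        {
            "Region": "eu-west-1",
            "KmsKeyId": replica["ReplicaKeyMetadata"]["Arn"],
        }
    ],
)

Replica keys share key material with the primary, so ciphertext encrypted in one Region can be decrypted in another, which is what makes the replicated secret usable locally.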


Question # 124

A company is running a workload that consists of thousands of Amazon EC2 instances. The workload is running in a VPC that contains several public subnets and private subnets.

The public subnets have a route for 0.0.0.0/0 to an existing internet gateway. The private
subnets have a route for 0.0.0.0/0 to an existing NAT gateway.
A solutions architect needs to migrate the entire fleet of EC2 instances to use IPv6. The
EC2 instances that are in private subnets must not be accessible from the public internet.
What should the solutions architect do to meet these requirements?

A. Update the existing VPC, and associate a custom IPv6 CIDR block with the VPC and all
subnets. Update all the VPC route tables, and add a route for ::/0 to the internet gateway.

B. Update the existing VPC, and associate an Amazon-provided IPv6 CIDR block with the
VPC and all subnets. Update the VPC route tables for all private subnets, and add a route
for ::/0 to the NAT gateway.

C. Update the existing VPC, and associate an Amazon-provided IPv6 CIDR block with the
VPC and all subnets. Create an egress-only internet gateway. Update the VPC route tables
for all private subnets, and add a route for ::/0 to the egress-only internet gateway.

D. Update the existing VPC, and associate a custom IPv6 CIDR block with the VPC and all
subnets. Create a new NAT gateway, and enable IPv6 support. Update the VPC route
tables for all private subnets, and add a route for ::/0 to the IPv6-enabled NAT gateway.
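For illustration, the egress-only internet gateway approach in option C could be sketched with boto3 roughly as follows. The VPC and route table IDs are hypothetical placeholders; each subnet would also need its own IPv6 CIDR association (ec2.associate_subnet_cidr_block), which this sketch omits.

import boto3

ec2 = boto3.client("ec2")

# Minimal sketch: add an Amazon-provided IPv6 block to the VPC, create an
# egress-only internet gateway, and route private-subnet IPv6 traffic to it.
ec2.associate_vpc_cidr_block(
    VpcId="vpc-0123456789abcdef0",
    AmazonProvidedIpv6CidrBlock=True,
)
eigw = ec2.create_egress_only_internet_gateway(VpcId="vpc-0123456789abcdef0")
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",  # a private subnet's route table
    DestinationIpv6CidrBlock="::/0",
    EgressOnlyInternetGatewayId=eigw["EgressOnlyInternetGateway"][
        "EgressOnlyInternetGatewayId"
    ],
)

An egress-only internet gateway is the IPv6 counterpart of a NAT gateway: it allows outbound connections but blocks connections initiated from the internet.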


Question # 125

A solutions architect is reviewing an application's resilience before launch. The application
runs on an Amazon EC2 instance that is deployed in a private subnet of a VPC.
The EC2 instance is provisioned by an Auto Scaling group that has a minimum capacity of 1
and a maximum capacity of 1. The application stores data on an Amazon RDS for MySQL
DB instance. The VPC has subnets configured in three Availability Zones and is configured
with a single NAT gateway.
The solutions architect needs to recommend a solution to ensure that the application will
operate across multiple Availability Zones.
Which solution will meet this requirement?

A. Deploy an additional NAT gateway in the other Availability Zones. Update the route
tables with appropriate routes. Modify the RDS for MySQL DB instance to a Multi-AZ
configuration. Configure the Auto Scaling group to launch instances across Availability
Zones. Set the minimum capacity and maximum capacity of the Auto Scaling group to 3.

B. Replace the NAT gateway with a virtual private gateway. Replace the RDS for MySQL
DB instance with an Amazon Aurora MySQL DB cluster. Configure the Auto Scaling group
to launch instances across all subnets in the VPC. Set the minimum capacity and
maximum capacity of the Auto Scaling group to 3.

C. Replace the NAT gateway with a NAT instance. Migrate the RDS for MySQL DB
instance to an RDS for PostgreSQL DB instance. Launch a new EC2 instance in the other
Availability Zones.

D. Deploy an additional NAT gateway in the other Availability Zones. Update the route tables with appropriate routes. Modify the RDS for MySQL DB instance to turn on
automatic backups and retain the backups for 7 days. Configure the Auto Scaling group to
launch instances across all subnets in the VPC. Keep the minimum capacity and the
maximum capacity of the Auto Scaling group at 1.


Question # 126

A company is running an application on premises. The application uses a set of web
servers that host a static React-based single-page application (SPA), a Node.js API, and a
MySQL database server. The database is read intensive. The company will need to
expand the database's storage at an unpredictable rate.
The company must migrate the application to AWS. The company also must modernize the
architecture to reduce infrastructure management and increase scalability.
Which solution will meet these requirements with the LEAST operational overhead?

A. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon
RDS for MySQL. Use AWS Application Migration Service to migrate the web application to
a fleet of Amazon EC2 instances behind an Elastic Load Balancing (ELB) load balancer.
Use a Spot Fleet with a request type of "request" to host the API.

B. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon
Aurora MySQL. Copy the web files to an Amazon S3 bucket and set up web hosting. Copy
the API code to AWS Lambda functions. Configure Amazon API Gateway to point to the
Lambda functions.

C. Use AWS Database Migration Service (AWS DMS) to migrate the database to a MySQL
database that runs on Amazon EC2 instances. Use AWS DataSync to migrate the web files and API files to an Amazon FSx for Windows File Server file system. Set up a fleet of EC2
instances in an Auto Scaling group as web servers. Mount the FSx for Windows File Server
file system.

D. Use AWS Application Migration Service to migrate the database to Amazon EC2
instances. Copy the web files to containers that run on Amazon Elastic Kubernetes Service
(Amazon EKS). Set up an Elastic Load Balancing (ELB) load balancer for the EC2
instances and EKS containers. Copy the API code to AWS Lambda functions. Configure
Amazon API Gateway to point to the Lambda functions.


Question # 127

A company has AWS accounts that are in an organization in AWS Organizations. The
company wants to track Amazon EC2 usage as a metric.
The company's architecture team must receive a daily alert if the EC2 usage is more than
10% higher than the average EC2 usage from the last 30 days.
Which solution will meet these requirements?

A. Configure AWS Budgets in the organization's management account. Specify a usage
type of EC2 running hours. Specify a daily period. Set the budget amount to be 10% more
than the reported average usage for the last 30 days from AWS Cost Explorer. Configure
an alert to notify the architecture team if the usage threshold is met.

B. Configure AWS Cost Anomaly Detection in the organization's management account.
Configure a monitor type of AWS Service. Apply a filter of Amazon EC2. Configure an alert
subscription to notify the architecture team if the usage is 10% more than the average
usage for the last 30 days.

C. Enable AWS Trusted Advisor in the organization's management account. Configure a
cost optimization advisory alert to notify the architecture team if the EC2 usage is 10%
more than the reported average usage for the last 30 days.

D. Configure Amazon Detective in the organization's management account. Configure an
EC2 usage anomaly alert to notify the architecture team if Detective identifies a usage
anomaly of more than 10%.


Question # 128

A solutions architect must update an application environment within AWS Elastic Beanstalk
using a blue/green deployment methodology. The solutions architect creates an
environment that is identical to the existing application environment and deploys the
application to the new environment.
What should be done next to complete the update?

A. Redirect to the new environment using Amazon Route 53.

B. Select the Swap Environment URLs option.

C. Replace the Auto Scaling launch configuration.

D. Update the DNS records to point to the green environment.
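For illustration, the URL swap described in option B could be sketched with boto3 roughly as follows. The environment names are hypothetical placeholders.

import boto3

eb = boto3.client("elasticbeanstalk")

# Minimal sketch: swap the CNAMEs of the blue and green environments so
# traffic shifts to the newly deployed version. Swapping back reverses it.
eb.swap_environment_cnames(
    SourceEnvironmentName="my-app-blue",
    DestinationEnvironmentName="my-app-green",
)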


Question # 129

A solutions architect works for a government agency that has strict disaster recovery
requirements. All Amazon Elastic Block Store (Amazon EBS) snapshots are required to be saved in at least two additional AWS Regions. The agency also is required to maintain the
lowest possible operational overhead.
Which solution meets these requirements?

A. Configure a policy in Amazon Data Lifecycle Manager (Amazon DLM) to run once daily
to copy the EBS snapshots to the additional Regions.

B. Use Amazon EventBridge (Amazon CloudWatch Events) to schedule an AWS Lambda
function to copy the EBS snapshots to the additional Regions.

C. Set up AWS Backup to create the EBS snapshots. Configure Amazon S3 cross-Region
replication to copy the EBS snapshots to the additional Regions.

D. Schedule Amazon EC2 Image Builder to run once daily to create an AMI and copy the
AMI to the additional Regions.


Question # 130

A software as a service (SaaS) company uses AWS to host a service that is powered by
AWS PrivateLink. The service consists of proprietary software that runs on three Amazon
EC2 instances behind a Network Load Balancer (NLB). The instances are in private
subnets in multiple Availability Zones in the eu-west-2 Region. All the company's
customers are in eu-west-2.
However, the company now acquires a new customer in the us-east-1 Region. The
company creates a new VPC and new subnets in us-east-1. The company establishes
inter-Region VPC peering between the VPCs in the two Regions.
The company wants to give the new customer access to the SaaS service, but the
company does not want to immediately deploy new EC2 resources in us-east-1.
Which solution will meet these requirements?

A. Configure a PrivateLink endpoint service in us-east-1 to use the existing NLB that is in
eu-west-2. Grant specific AWS accounts access to connect to the SaaS service.

B. Create an NLB in us-east-1. Create an IP target group that uses the IP addresses of the
company's instances in eu-west-2 that host the SaaS service. Configure a PrivateLink
endpoint service that uses the NLB that is in us-east-1. Grant specific AWS accounts
access to connect to the SaaS service.

C. Create an Application Load Balancer (ALB) in front of the EC2 instances in eu-west-2.
Create an NLB in us-east-1. Associate the NLB that is in us-east-1 with an ALB target group
that uses the ALB that is in eu-west-2. Configure a PrivateLink endpoint service that uses
the NLB that is in us-east-1. Grant specific AWS accounts access to connect to the SaaS
service.

D. Use AWS Resource Access Manager (AWS RAM) to share the EC2 instances that are
in eu-west-2. In us-east-1, create an NLB and an instance target group that includes the
shared EC2 instances from eu-west-2. Configure a PrivateLink endpoint service that uses
the NLB that is in us-east-1. Grant specific AWS accounts access to connect to the SaaS
service.
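For illustration, the cross-Region IP target group described in option B could be sketched with boto3 roughly as follows. The VPC ID, port, and target addresses are hypothetical placeholders; marking the Availability Zone as "all" is how targets outside the load balancer's VPC, such as peered-VPC addresses, are registered.

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Minimal sketch: an NLB IP target group in us-east-1 whose targets are the
# eu-west-2 instance IPs reached over inter-Region VPC peering.
tg = elbv2.create_target_group(
    Name="saas-eu-west-2-targets",
    Protocol="TCP",
    Port=443,
    VpcId="vpc-0123456789abcdef0",
    TargetType="ip",
)
elbv2.register_targets(
    TargetGroupArn=tg["TargetGroups"][0]["TargetGroupArn"],
    Targets=[
        {"Id": "10.10.1.10", "Port": 443, "AvailabilityZone": "all"},
        {"Id": "10.10.2.10", "Port": 443, "AvailabilityZone": "all"},
        {"Id": "10.10.3.10", "Port": 443, "AvailabilityZone": "all"},
    ],
)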


Question # 131

A company operates a fleet of servers on premises and operates a fleet of Amazon EC2
instances in its organization in AWS Organizations. The company's AWS accounts contain
hundreds of VPCs. The company wants to connect its AWS accounts to its on-premises
network. AWS Site-to-Site VPN connections are already established to a single AWS
account. The company wants to control which VPCs can communicate with other VPCs.
Which combination of steps will achieve this level of control with the LEAST operational
effort? (Choose three.)

A. Create a transit gateway in an AWS account. Share the transit gateway across accounts
by using AWS Resource Access Manager (AWS RAM).

B. Configure attachments to all VPCs and VPNs.

C. Set up transit gateway route tables. Associate the VPCs and VPNs with the route tables.

D. Configure VPC peering between the VPCs.

E. Configure attachments between the VPCs and VPNs.

F. Set up route tables on the VPCs and VPNs.
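For illustration, the transit gateway creation and sharing in options A through C could be sketched with boto3 roughly as follows. The organization ARN is a hypothetical placeholder; disabling the default route table association and propagation is what lets custom transit gateway route tables control which VPCs can communicate.

import boto3

ec2 = boto3.client("ec2")
ram = boto3.client("ram")

# Minimal sketch: create a transit gateway and share it with the whole
# organization through AWS RAM.
tgw = ec2.create_transit_gateway(
    Description="Hub for VPC-to-VPC and VPN connectivity",
    Options={
        # Disable defaults so custom TGW route tables decide which VPC and
        # VPN attachments can reach each other.
        "DefaultRouteTableAssociation": "disable",
        "DefaultRouteTablePropagation": "disable",
    },
)
ram.create_resource_share(
    name="org-transit-gateway",
    resourceArns=[tgw["TransitGateway"]["TransitGatewayArn"]],
    principals=["arn:aws:organizations::111122223333:organization/o-example"],
)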




Customers Feedback

What our clients say about SAP-C02 Learning Materials

    Quentin     Jun 14, 2024
In comparison to other websites, this platform offers more affordable exam resources that contain the exact same questions and answers. I was able to achieve an outstanding score of 90%, and I am grateful for the Dumps provided by Salesforcexamdumps.com.
    Xavier     Jun 13, 2024
If you're looking for a reliable source of SAP-C02 dumps, look no further than Salesforcexamdumps.com. The dumps are up-to-date and accurate, and the explanations are clear and easy to understand. I would highly recommend these dumps to anyone preparing for the SAP-C02 exam.
    Patrick     Jun 13, 2024
I wanted to say thanks. These SAP-C02 dumps are up to date, accurate, and authentic. I passed my exam. Highly recommended!
    Vanessa     Jun 12, 2024
As soon as I started using these dumps, I knew I was in good hands, because I had used them some time ago. Thanks to these dumps, I passed the exam with ease and can confidently say that Salesforcexamdumps.com is fantastic.
    Oscar     Jun 12, 2024
I cannot express how impressed I am with the SAP-C02 PDF Guide from Salesforcexamdumps.com. To share my experience: all the questions came from the dumps, except for two new ones. Thanks!
    Ryan Ali     Jun 11, 2024
These SAP-C02 Practice Tests exceeded my expectations in every way possible. The material is comprehensive, well-organized, and updated regularly to ensure it covers the latest exam topics. I was able to pass the exam on my first attempt
    Jack     Jun 11, 2024
I am immensely grateful for the invaluable resource provided by this platform. Without it, passing my SAP-C02 exam would have been an insurmountable challenge. Thank you for your assistance and support throughout the exam preparation process.
    Zachary     Jun 10, 2024
I purchased SAP-C02 Dumps from Salesforcexamdumps.com and I have to say, it was a great study material. The dumps were comprehensive and covered all the topics I needed to know for the SAP-C02 exam. I was able to pass the exam on my first try with flying colors thanks to these SAP-C02 dumps.
    Uma     Jun 10, 2024
After reading for only 5 days, I cleared my AWS Certified Solutions Architect - Professional exam with 880/1000 marks.
    Frederick     Jun 09, 2024
I am thoroughly impressed with the accuracy and quality of the SAP-C02 dumps. The exam material provided by this platform has exceeded my expectations, and I am more than satisfied with the results.
    Alex Turner     Jun 09, 2024
I am excited to announce that I passed the exam, and I couldn't have done it without the invaluable assistance provided by Salesforcexamdumps.com exam dumps. The questions were remarkably similar to those in the actual exam, and I am extremely grateful for this amazing resource.
    Umar     Jun 08, 2024
I got a 97% score. Thanks!!! Awesome!!
    Hina Khan     Jun 08, 2024
The SAP-C02 dumps from Salesforcexamdumps.com were just what I needed to prepare for my exam. The dumps were well-organized and covered all the important topics in a concise and clear manner. I passed the exam without any difficulty and am grateful for these helpful dumps.
    Katherine     Jun 07, 2024
Highly recommended! I Passed my SAP-C02 Exam easily.
    Samuel     Jun 07, 2024
Hello everyone, I am delighted to share with you that I passed my SAP-C02 exam on my first attempt, all thanks to the Dumps that I came across. I couldn't be more thrilled with the results, and I owe it all to these wonderful dumps!
