AWS Certified Solutions Architect - Professional Dumps September 2024
Are you tired of looking for a source that'll keep you updated on the AWS Certified Solutions Architect - Professional Exam? One that also has a collection of affordable, high-quality, and incredibly easy Amazon SAP-C02 Practice Questions? Well then, you are in luck, because Salesforcexamdumps.com just updated them! Get ready to become AWS Certified Professional certified.
Amazon SAP-C02 is the exam you must pass to earn this certification, and the certification rewards deserving candidates who achieve strong results. The AWS Certified Professional certification validates a candidate's expertise in working with AWS. In this fast-paced world, a certification is the quickest way to gain your employer's approval. Try your luck at passing the AWS Certified Solutions Architect - Professional Exam and becoming a certified professional today. Salesforcexamdumps.com is always eager to extend a helping hand by providing approved and accepted Amazon SAP-C02 Practice Questions. Passing AWS Certified Solutions Architect - Professional will be your ticket to a better future!
Pass with Amazon SAP-C02 Braindumps!
Contrary to the belief that certification exams are generally hard to get through, passing AWS Certified Solutions Architect - Professional is incredibly easy, provided you have access to a reliable resource such as the Salesforcexamdumps.com Amazon SAP-C02 PDF. We have been in this business long enough to understand where most resources go wrong. Passing the Amazon AWS Certified Professional certification is all about having the right information. Hence, we filled our Amazon SAP-C02 Dumps with all the necessary material you need to pass. These carefully curated sets of AWS Certified Solutions Architect - Professional Practice Questions target the most frequently repeated exam questions, so you know they are essential and can help ensure passing results. Stop wasting your time waiting around and order your set of Amazon SAP-C02 Braindumps now!
We aim to provide all AWS Certified Professional certification exam candidates with the best resources at minimum rates. You can check out our free demo before pressing the download button to ensure the Amazon SAP-C02 Practice Questions are what you wanted. And do not forget about the discount. We always provide our customers with a little extra.
Why Choose Amazon SAP-C02 PDF?
Unlike other websites, Salesforcexamdumps.com prioritizes the benefit of AWS Certified Solutions Architect - Professional candidates. Not every Amazon exam candidate has full-time access to the internet, and it's hard to sit in front of a computer screen for too many hours. Are you one of them? We understand, which is why our AWS Certified Professional solutions come in two formats: the Amazon SAP-C02 Question Answers are offered as a PDF and as an Online Test Engine. One is for customers who like online platforms with realistic exam simulation; the other is for those who prefer keeping their material close at hand. Moreover, you can download or print the Amazon SAP-C02 Dumps with ease.
If you still have some queries, our team of experts is in service 24/7 to answer your questions. Just leave us a quick message in the chat-box below or email us at [email protected].
Amazon SAP-C02 Sample Questions
Question # 1
A startup company recently migrated a large ecommerce website to AWS. The website has experienced a 70% increase in sales. Software engineers are using a private GitHub repository to manage code. The DevOps team is using Jenkins for builds and unit testing. The engineers need to receive notifications for bad builds and zero downtime during deployments. The engineers also need to ensure any changes to production are seamless for users and can be rolled back in the event of a major issue.
The software engineers have decided to use AWS CodePipeline to manage their build and deployment process.
Which solution will meet these requirements?
A. Use GitHub websockets to trigger the CodePipeline pipeline. Use the Jenkins plugin for AWS CodeBuild to conduct unit testing. Send alerts to an Amazon SNS topic for any bad builds. Deploy in an in-place, all-at-once deployment configuration using AWS CodeDeploy.
B. Use GitHub webhooks to trigger the CodePipeline pipeline. Use the Jenkins plugin for AWS CodeBuild to conduct unit testing. Send alerts to an Amazon SNS topic for any bad builds. Deploy in a blue/green deployment using AWS CodeDeploy.
C. Use GitHub websockets to trigger the CodePipeline pipeline. Use AWS X-Ray for unit testing and static code analysis. Send alerts to an Amazon SNS topic for any bad builds. Deploy in a blue/green deployment using AWS CodeDeploy.
D. Use GitHub webhooks to trigger the CodePipeline pipeline. Use AWS X-Ray for unit testing and static code analysis. Send alerts to an Amazon SNS topic for any bad builds. Deploy in an in-place, all-at-once deployment configuration using AWS CodeDeploy.
Answer: B
Explanation:
GitHub Webhooks to Trigger CodePipeline:
Unit Testing with Jenkins and AWS CodeBuild:
Notifications for Bad Builds:
Blue/Green Deployment with AWS CodeDeploy:
This solution provides seamless, zero-downtime deployments, and the ability to quickly roll
back changes if necessary, fulfilling the requirements of the startup company.
References
AWS DevOps Blog on Integrating Jenkins with AWS CodeBuild and CodeDeploy
Plain English Guide to AWS CodePipeline with GitHub
Jenkins Plugin for AWS CodePipeline
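For illustration, the alerting step in the chosen answer can be as simple as publishing to an SNS topic from a Jenkins post-build script or a pipeline Lambda function. This is a minimal boto3 sketch; the topic ARN, function name, and message fields are placeholders, not values from the question.

import boto3

sns = boto3.client("sns")

def notify_bad_build(pipeline_name, build_id, reason):
    # Publish a failed-build alert to the SNS topic that the engineers subscribe to.
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:111122223333:build-alerts",  # placeholder topic ARN
        Subject=f"Build failure in {pipeline_name}",
        Message=f"Build {build_id} failed: {reason}",
    )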
Question # 2
To abide by industry regulations, a solutions architect must design a solution that will store a company's critical data in multiple public AWS Regions, including in the United States, where the company's headquarters is located. The solutions architect is required to provide access to the data stored in AWS to the company's global WAN network. The security team mandates that no traffic accessing this data should traverse the public internet.
How should the solutions architect design a highly available solution that meets the requirements and is cost-effective?
A. Establish AWS Direct Connect connections from the company headquarters to all AWS Regions in use. Use the company WAN to send traffic over to the headquarters and then to the respective DX connection to access the data.
B. Establish two AWS Direct Connect connections from the company headquarters to an AWS Region. Use the company WAN to send traffic over a DX connection. Use inter-region VPC peering to access the data in other AWS Regions.
C. Establish two AWS Direct Connect connections from the company headquarters to an AWS Region. Use the company WAN to send traffic over a DX connection. Use an AWS transit VPC solution to access data in other AWS Regions.
D. Establish two AWS Direct Connect connections from the company headquarters to an AWS Region. Use the company WAN to send traffic over a DX connection. Use Direct Connect Gateway to access data in other AWS Regions.
Answer: D
Explanation:
Establish AWS Direct Connect Connections:
Use Company WAN:
Set Up Direct Connect Gateway:
By using Direct Connect and Direct Connect Gateway, the company can achieve secure,
reliable, and cost-effective access to data stored across multiple AWS Regions without
using the public internet, ensuring compliance with industry regulations.
References
AWS Direct Connect Documentation
Building a Scalable and Secure Multi-VPC AWS Network Infrastructure (AWS Documentation)
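As a rough sketch of answer D, a Direct Connect gateway can be created once and then associated with a virtual private gateway in each additional Region. The gateway name, ASN, and virtual private gateway ID below are placeholders, and the exact association target (virtual private gateway versus transit gateway) depends on the company's setup.

import boto3

dx = boto3.client("directconnect")

# Create a Direct Connect gateway (a global resource) once.
gateway = dx.create_direct_connect_gateway(
    directConnectGatewayName="corp-dx-gateway",
    amazonSideAsn=64512,  # placeholder private ASN
)
dx_gw_id = gateway["directConnectGateway"]["directConnectGatewayId"]

# Associate the gateway with a virtual private gateway in another Region's VPC.
dx.create_direct_connect_gateway_association(
    directConnectGatewayId=dx_gw_id,
    virtualGatewayId="vgw-0123456789abcdef0",  # placeholder VGW in the remote Region
)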
Question # 3
A company has developed a new release of a popular video game and wants to make it available for public download. The new release package is approximately 5 GB in size. The company provides downloads for existing releases from a Linux-based publicly facing FTP site hosted in an on-premises data center. The company expects the new release will be downloaded by users worldwide. The company wants a solution that provides improved download performance and low transfer costs regardless of a user's location.
Which solution will meet these requirements?
A. Store the game files on Amazon EBS volumes mounted on Amazon EC2 instances within an Auto Scaling group. Configure an FTP service on the EC2 instances. Use an Application Load Balancer in front of the Auto Scaling group. Publish the game download URL for users to download the package.
B. Store the game files on Amazon EFS volumes that are attached to Amazon EC2 instances within an Auto Scaling group. Configure an FTP service on each of the EC2 instances. Use an Application Load Balancer in front of the Auto Scaling group. Publish the game download URL for users to download the package.
C. Configure Amazon Route 53 and an Amazon S3 bucket for website hosting. Upload the game files to the S3 bucket. Use Amazon CloudFront for the website. Publish the game download URL for users to download the package.
D. Configure Amazon Route 53 and an Amazon S3 bucket for website hosting. Upload the game files to the S3 bucket. Set Requester Pays for the S3 bucket. Publish the game download URL for users to download the package.
Answer: C
Explanation:
Create an S3 Bucket:
Upload Game Files:
Configure Amazon Route 53:
Use Amazon CloudFront:
Publish the Download URL:
This solution leverages the scalability of Amazon S3 and the performance benefits of
CloudFront to provide an optimal download experience for users globally while minimizing
costs.
References
Amazon CloudFront Documentation
Amazon S3 Static Website Hosting
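A minimal sketch of the upload step in answer C, assuming the S3 bucket and CloudFront distribution already exist; the bucket name and distribution domain are placeholders.

import boto3

s3 = boto3.client("s3")

BUCKET = "game-downloads-example"                       # placeholder bucket behind CloudFront
DISTRIBUTION_DOMAIN = "d111111abcdef8.cloudfront.net"   # placeholder CloudFront domain

# Upload the 5 GB release package; boto3 automatically uses multipart upload for large files.
s3.upload_file("game-release-v2.zip", BUCKET, "releases/game-release-v2.zip")

# Publish the CloudFront URL so users download through edge locations, not the origin.
download_url = f"https://{DISTRIBUTION_DOMAIN}/releases/game-release-v2.zip"
print(download_url)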
Question # 4
A company runs an application in the cloud that consists of a database and a website. Users can post data to the website, have the data processed, and have the data sent back to them in an email. Data is stored in a MySQL database running on an Amazon EC2 instance. The database is running in a VPC with two private subnets. The website is running on Apache Tomcat in a single EC2 instance in a different VPC with one public subnet. There is a single VPC peering connection between the database and website VPCs. The website has suffered several outages during the last month due to high traffic.
Which actions should a solutions architect take to increase the reliability of the application? (Select THREE.)
A. Place the Tomcat server in an Auto Scaling group with multiple EC2 instances behind an Application Load Balancer.
B. Provision an additional VPC peering connection.
C. Migrate the MySQL database to Amazon Aurora with one Aurora Replica.
D. Provision two NAT gateways in the database VPC.
E. Move the Tomcat server to the database VPC.
F. Create an additional public subnet in a different Availability Zone in the website VPC.
Answer: A,C,F
Explanation:
Auto Scaling Group with Application Load Balancer:
Migrate to Amazon Aurora with Replica:
Additional Public Subnet:
References
AWS Well-Architected Framework
Amazon Aurora Documentation
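For the Aurora portion of the answer, adding a reader to an existing Aurora MySQL cluster is a single create_db_instance call against the cluster. A sketch only; the identifiers and instance class are placeholders.

import boto3

rds = boto3.client("rds")

# Add an Aurora Replica (reader instance) to an existing Aurora MySQL cluster.
rds.create_db_instance(
    DBInstanceIdentifier="app-db-reader-1",      # placeholder instance name
    DBClusterIdentifier="app-db-cluster",        # placeholder existing cluster
    DBInstanceClass="db.r6g.large",              # placeholder instance class
    Engine="aurora-mysql",
)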
Question # 5
A company provides a centralized Amazon EC2 application hosted in a single shared VPC. The centralized application must be accessible from client applications running in the VPCs of other business units. The centralized application front end is configured with a Network Load Balancer (NLB) for scalability.
Up to 10 business unit VPCs will need to be connected to the shared VPC. Some of the business unit VPC CIDR blocks overlap with the shared VPC, and some overlap with each other. Network connectivity to the centralized application in the shared VPC should be allowed from authorized business unit VPCs only.
Which network configuration should a solutions architect use to provide connectivity from the client applications in the business unit VPCs to the centralized application in the shared VPC?
A. Create an AWS Transit Gateway. Attach the shared VPC and the authorized business unit VPCs to the transit gateway. Create a single transit gateway route table and associate it with all of the attached VPCs. Allow automatic propagation of routes from the attachments into the route table. Configure VPC routing tables to send traffic to the transit gateway.
B. Create a VPC endpoint service using the centralized application NLB and enable the option to require endpoint acceptance. Create a VPC endpoint in each of the business unit VPCs using the service name of the endpoint service. Accept authorized endpoint requests from the endpoint service console.
C. Create a VPC peering connection from each business unit VPC to the shared VPC. Accept the VPC peering connections from the shared VPC console. Configure VPC routing tables to send traffic to the VPC peering connection.
D. Configure a virtual private gateway for the shared VPC and create customer gateways for each of the authorized business unit VPCs. Establish a Site-to-Site VPN connection from the business unit VPCs to the shared VPC. Configure VPC routing tables to send traffic to the VPN connection.
Answer: B
Explanation:
Create VPC Endpoint Service:
Set Up VPC Endpoints in Business Unit VPCs:
Accept Endpoint Requests:
Configure Routing:
This solution ensures secure, private connectivity between the business unit VPCs and the shared VPC, even if there are overlapping CIDR blocks. It leverages AWS PrivateLink and VPC endpoints to provide scalable and controlled access.
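A hedged sketch of answer B with boto3: publish the NLB as an endpoint service that requires acceptance, then create an interface endpoint in a business unit VPC. All ARNs and IDs are placeholders, and the second call would run with the business unit account's credentials.

import boto3

ec2 = boto3.client("ec2")

# In the shared services account: expose the NLB through an endpoint service
# and require manual acceptance of connection requests.
service = ec2.create_vpc_endpoint_service_configuration(
    AcceptanceRequired=True,
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/central-app/abc123"
    ],
)
service_name = service["ServiceConfiguration"]["ServiceName"]

# In an authorized business unit VPC (different account credentials in practice):
# create an interface endpoint that points at the endpoint service.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0bu1example",                    # placeholder business unit VPC
    ServiceName=service_name,
    SubnetIds=["subnet-0aaa", "subnet-0bbb"],   # placeholder subnets
    SecurityGroupIds=["sg-0ccc"],               # placeholder security group
)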
Question # 6
An events company runs a ticketing platform on AWS. The company's customers configure and schedule their events on the platform. The events result in large increases of traffic to the platform. The company knows the date and time of each customer's events.
The company runs the platform on an Amazon Elastic Container Service (Amazon ECS) cluster. The ECS cluster consists of Amazon EC2 On-Demand Instances that are in an Auto Scaling group. The Auto Scaling group uses a predictive scaling policy.
The ECS cluster makes frequent requests to an Amazon S3 bucket to download ticket assets. The ECS cluster and the S3 bucket are in the same AWS Region and the same AWS account. Traffic between the ECS cluster and the S3 bucket flows across a NAT gateway.
The company needs to optimize the cost of the platform without decreasing the platform's availability.
Which combination of steps will meet these requirements? (Select TWO.)
A. Create a gateway VPC endpoint for the S3 bucket.
B. Add another ECS capacity provider that uses an Auto Scaling group of Spot Instances. Configure the new capacity provider strategy to have the same weight as the existing capacity provider strategy.
C. Create On-Demand Capacity Reservations for the applicable instance type for the time period of the scheduled scaling policies.
D. Enable S3 Transfer Acceleration on the S3 bucket.
E. Replace the predictive scaling policy with scheduled scaling policies for the scheduled events.
Answer: A,B
Explanation:
Gateway VPC Endpoint for S3:
Add Spot Instances to ECS Cluster:
Configure Capacity Provider Strategy:
By implementing a gateway VPC endpoint for S3 and incorporating Spot Instances into the
ECS cluster, the company can significantly reduce operational costs without compromising
on the availability or performance of the platform.
References
AWS Cost Optimization Blog on VPC Endpoints
AWS ECS Documentation on Capacity Providers
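The S3 cost-saving step (answer A) is a one-call change, sketched below with placeholder VPC and route table IDs; traffic from the ECS instances to S3 then bypasses the NAT gateway.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a gateway VPC endpoint so ECS-to-S3 traffic stays on the AWS network
# instead of flowing through (and being billed by) the NAT gateway.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0ecsclusterexample",                    # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0private1", "rtb-0private2"],  # placeholder private route tables
)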
Question # 7
A company uses AWS Organizations to manage its development environment. Each development team at the company has its own AWS account. Each account has a single VPC and CIDR blocks that do not overlap.
The company has an Amazon Aurora DB cluster in a shared services account. All the development teams need to work with live data from the DB cluster.
Which solution will provide the required connectivity to the DB cluster with the LEAST operational overhead?
A. Create an AWS Resource Access Manager (AWS RAM) resource share for the DB cluster. Share the DB cluster with all the development accounts.
B. Create a transit gateway in the shared services account. Create an AWS Resource Access Manager (AWS RAM) resource share for the transit gateway. Share the transit gateway with all the development accounts. Instruct the developers to accept the resource share. Configure networking.
C. Create an Application Load Balancer (ALB) that points to the IP address of the DB cluster. Create an AWS PrivateLink endpoint service that uses the ALB. Add permissions to allow each development account to connect to the endpoint service.
D. Create an AWS Site-to-Site VPN connection in the shared services account. Configure networking. Use AWS Marketplace VPN software in each development account to connect to the Site-to-Site VPN connection.
Answer: B
Explanation:
Create a Transit Gateway:
Configure Transit Gateway Attachments:
Create Resource Share with AWS RAM:
Accept Resource Shares in Development Accounts:
Configure VPC Attachments in Development Accounts:
Update Route Tables:
Using a transit gateway simplifies the network management and reduces operational
overhead by providing a scalable and efficient way to interconnect multiple VPCs across
different AWS accounts.
References
AWS Database Blog on RDS Proxy for Cross-Account Access
AWS Architecture Blog on Cross-Account and Cross-Region Aurora Setup
DEV Community on Managing Multiple AWS Accounts with Organizations
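A minimal sketch of the core of answer B, assuming it runs with the shared services account's credentials; the gateway description, resource share name, and account IDs are placeholders.

import boto3

ec2 = boto3.client("ec2")
ram = boto3.client("ram")

# Create the transit gateway in the shared services account.
tgw = ec2.create_transit_gateway(Description="Shared services transit gateway")
tgw_arn = tgw["TransitGateway"]["TransitGatewayArn"]

# Share the transit gateway with the development accounts through AWS RAM.
ram.create_resource_share(
    name="shared-tgw",
    resourceArns=[tgw_arn],
    principals=["111122223333", "444455556666"],  # placeholder development account IDs
    allowExternalPrincipals=False,                 # accounts belong to the same organization
)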
Question # 8
A company wants to migrate virtual Microsoft workloads from an on-premises data center to AWS. The company has successfully tested a few sample workloads on AWS. The company also has created an AWS Site-to-Site VPN connection to a VPC. A solutions architect needs to generate a total cost of ownership (TCO) report for the migration of all the workloads from the data center.
Simple Network Management Protocol (SNMP) has been enabled on each VM in the data center. The company cannot add more VMs in the data center and cannot install additional software on the VMs. The discovery data must be automatically imported into AWS Migration Hub.
Which solution will meet these requirements?
A. Use the AWS Application Migration Service agentless service and the AWS Migration Hub Strategy Recommendations to generate the TCO report.
B. Launch a Windows Amazon EC2 instance. Install the Migration Evaluator agentless collector on the EC2 instance. Configure Migration Evaluator to generate the TCO report.
C. Launch a Windows Amazon EC2 instance. Install the Migration Evaluator agentless collector on the EC2 instance. Configure Migration Hub to generate the TCO report.
D. Use the AWS Migration Readiness Assessment tool inside the VPC. Configure Migration Evaluator to generate the TCO report.
Question # 9
A software as a service (SaaS) company provides a media software solution to customers. The solution is hosted on 50 VPCs across various AWS Regions and AWS accounts. One of the VPCs is designated as a management VPC. The compute resources in the VPCs work independently.
The company has developed a new feature that requires all 50 VPCs to be able to communicate with each other. The new feature also requires one-way access from each customer's VPC to the company's management VPC. The management VPC hosts a compute resource that validates licenses for the media software solution.
The number of VPCs that the company will use to host the solution will continue to increase as the solution grows.
Which combination of steps will provide the required VPC connectivity with the LEAST operational overhead? (Select TWO.)
A. Create a transit gateway. Attach all the company's VPCs and relevant subnets to the transit gateway.
B. Create VPC peering connections between all the company's VPCs.
C. Create a Network Load Balancer (NLB) that points to the compute resource for license validation. Create an AWS PrivateLink endpoint service that is available to each customer's VPC. Associate the endpoint service with the NLB.
D. Create a VPN appliance in each customer's VPC. Connect the company's management VPC to each customer's VPC by using AWS Site-to-Site VPN.
E. Create a VPC peering connection between the company's management VPC and each customer's VPC.
Answer: A,C
Explanation:
Create a Transit Gateway:
Set Up AWS PrivateLink:
This combination leverages the benefits of AWS Transit Gateway for scalable and
centralized routing, and AWS PrivateLink for secure and private service access, meeting
the requirement with minimal operational overhead.
References
Amazon VPC-to-Amazon VPC Connectivity Options
AWS PrivateLink - Building a Scalable and Secure Multi-VPC AWS Network
Infrastructure
Connecting Your VPC to Other VPCs and Networks Using a Transit Gateway
Question # 10
A company creates an AWS Control Tower landing zone to manage and govern a multi-account AWS environment. The company's security team will deploy preventive controls and detective controls to monitor AWS services across all the accounts. The security team needs a centralized view of the security state of all the accounts.
Which solution will meet these requirements?
A. From the AWS Control Tower management account, use AWS CloudFormation StackSets to deploy an AWS Config conformance pack to all accounts in the organization.
B. Enable Amazon Detective for the organization in AWS Organizations. Designate one AWS account as the delegated administrator for Detective.
C. From the AWS Control Tower management account, deploy an AWS CloudFormation stack set that uses the automatic deployment option to enable Amazon Detective for the organization.
D. Enable AWS Security Hub for the organization in AWS Organizations. Designate one AWS account as the delegated administrator for Security Hub.
Answer: D
Explanation:
Enable AWS Security Hub:
Designate a Delegated Administrator:
Deploy Controls Across Accounts:
Utilize AWS Security Hub Features:
By integrating AWS Security Hub with AWS Control Tower and using a delegated
administrator account, you can achieve a centralized and comprehensive view of your
organization’s security posture, facilitating effective management and remediation of
security issues.
References
AWS Security Hub now integrates with AWS Control Tower
AWS Control Tower and Security Hub Integration
AWS Security Hub Features
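A sketch of the delegated administrator setup for answer D. The first call runs in the AWS Organizations management account and the second in the delegated administrator account; the account ID is a placeholder.

import boto3

# From the AWS Organizations management account: delegate Security Hub administration.
securityhub_mgmt = boto3.client("securityhub")
securityhub_mgmt.enable_organization_admin_account(AdminAccountId="111122223333")  # placeholder account ID

# From the delegated administrator account: auto-enable Security Hub in member accounts.
securityhub_admin = boto3.client("securityhub")
securityhub_admin.update_organization_configuration(AutoEnable=True)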
Question # 11
A medical company is running a REST API on a set of Amazon EC2 instances. The EC2 instances run in an Auto Scaling group behind an Application Load Balancer (ALB). The ALB runs in three public subnets, and the EC2 instances run in three private subnets. The company has deployed an Amazon CloudFront distribution that has the ALB as the only origin.
Which solution should a solutions architect recommend to enhance the origin security?
A. Store a random string in AWS Secrets Manager. Create an AWS Lambda function for automatic secret rotation. Configure CloudFront to inject the random string as a custom HTTP header for the origin request. Create an AWS WAF web ACL rule with a string match rule for the custom header. Associate the web ACL with the ALB.
B. Create an AWS WAF web ACL rule with an IP match condition of the CloudFront service IP address ranges. Associate the web ACL with the ALB. Move the ALB into the three private subnets.
C. Store a random string in AWS Systems Manager Parameter Store. Configure Parameter Store automatic rotation for the string. Configure CloudFront to inject the random string as a custom HTTP header for the origin request. Inspect the value of the custom HTTP header, and block access in the ALB.
D. Configure AWS Shield Advanced. Create a security group policy to allow connections from CloudFront service IP address ranges. Add the policy to AWS Shield Advanced, and attach the policy to the ALB.
Answer: A
Explanation:
Store Secret in AWS Secrets Manager:
Set Up Automatic Rotation:
Configure CloudFront Custom Header:
Create AWS WAF Web ACL:
By using this method, you can ensure that only requests coming through CloudFront (which injects the custom header) can reach the ALB, enhancing the origin security.
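One way to express the string-match rule from answer A is a wafv2 rule like the sketch below: it allows only requests that carry the secret value in a custom header (here a hypothetical x-origin-verify header), with the web ACL's default action set to block. The secret name, header name, and rule name are placeholders, not values from the question.

import boto3

secret_value = boto3.client("secretsmanager").get_secret_value(
    SecretId="cloudfront/origin-verify"          # placeholder secret name
)["SecretString"]

# Rule for a wafv2 web ACL (REGIONAL scope, associated with the ALB): allow the request
# only when the custom header injected by CloudFront matches the secret.
allow_cloudfront_rule = {
    "Name": "AllowVerifiedCloudFrontRequests",
    "Priority": 0,
    "Statement": {
        "ByteMatchStatement": {
            "SearchString": secret_value.encode(),
            "FieldToMatch": {"SingleHeader": {"Name": "x-origin-verify"}},  # hypothetical header name
            "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
            "PositionalConstraint": "EXACTLY",
        }
    },
    "Action": {"Allow": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "AllowVerifiedCloudFrontRequests",
    },
}
# Pass this rule in Rules=[...] to wafv2 create_web_acl with DefaultAction={"Block": {}}.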
Question # 12
A company is running its solution on AWS in a manually created VPC. The company is using AWS CloudFormation to provision other parts of the infrastructure. According to a new requirement, the company must manage all infrastructure in an automatic way.
What should the company do to meet this new requirement with the LEAST effort?
A. Create a new AWS Cloud Development Kit (AWS CDK) stack that strictly provisions the existing VPC resources and configuration. Use AWS CDK to import the VPC into the stack and to manage the VPC.
B. Create a CloudFormation stack set that creates the VPC. Use the stack set to import the VPC into the stack.
C. Create a new CloudFormation template that strictly provisions the existing VPC resources and configuration. From the CloudFormation console, create a new stack by importing the existing resources.
D. Create a new CloudFormation template that creates the VPC. Use the AWS Serverless Application Model (AWS SAM) CLI to import the VPC.
Answer: C
Explanation:
Creating the Template:
Using the CloudFormation Console:
Specifying the Template:
Identifying the Resources:
Creating the Stack:
Executing the Change Set:
Verification and Drift Detection:
This approach allows the company to manage their VPC and other resources via
CloudFormation without the need to recreate resources, ensuring a smooth transition to automated infrastructure management.
References
Creating a stack from existing resources - AWS CloudFormation
Generating templates for existing resources - AWS CloudFormation
Bringing existing resources into CloudFormation management
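The console flow in answer C can also be scripted. A hedged sketch of an import change set for an existing VPC follows; the template file, stack name, logical ID, and VPC ID are placeholders.

import boto3

cfn = boto3.client("cloudformation")

# Template that declares the existing VPC exactly as it is configured today.
template_body = open("vpc-template.yaml").read()    # placeholder template file

# Create an IMPORT change set that brings the manually created VPC under stack management.
cfn.create_change_set(
    StackName="network-stack",                      # placeholder stack name
    ChangeSetName="import-existing-vpc",
    ChangeSetType="IMPORT",
    TemplateBody=template_body,
    ResourcesToImport=[
        {
            "ResourceType": "AWS::EC2::VPC",
            "LogicalResourceId": "AppVpc",          # must match the logical ID in the template
            "ResourceIdentifier": {"VpcId": "vpc-0123456789abcdef0"},  # placeholder VPC ID
        }
    ],
)
# After the change set is created, run execute_change_set to complete the import.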
Question # 13
A company is launching a new online game on Amazon EC2 instances. The game must be
available globally. The company plans to run the game in three AWS Regions: us-east-1,
eu-west-1, and ap-southeast-1. The game's leaderboards, player inventory, and event
status must be available across Regions.
A solutions architect must design a solution that will give any Region the ability to scale to
handle the load of all Regions. Additionally, users must automatically connect to the Region
that provides the least latency.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an EC2 Spot Fleet. Attach the Spot Fleet to a Network Load Balancer (NLB) in each Region. Create an AWS Global Accelerator IP address that points to the NLB. Create an Amazon Route 53 latency-based routing entry for the Global Accelerator IP address. Save the game metadata to an Amazon RDS for MySQL DB instance in each Region. Set up a read replica in the other Regions.
B. Create an Auto Scaling group for the EC2 instances. Attach the Auto Scaling group to a Network Load Balancer (NLB) in each Region. For each Region, create an Amazon Route 53 entry that uses geoproximity routing and points to the NLB in that Region. Save the game metadata to MySQL databases on EC2 instances in each Region. Set up replication between the database EC2 instances in each Region.
C. Create an Auto Scaling group for the EC2 instances. Attach the Auto Scaling group to a Network Load Balancer (NLB) in each Region. For each Region, create an Amazon Route 53 entry that uses latency-based routing and points to the NLB in that Region. Save the game metadata to an Amazon DynamoDB global table.
D. Use EC2 Global View. Deploy the EC2 instances to each Region. Attach the instances to a Network Load Balancer (NLB). Deploy a DNS server on an EC2 instance in each Region. Set up custom logic on each DNS server to redirect the user to the Region that provides the lowest latency. Save the game metadata to an Amazon Aurora global database.
Answer: C
Explanation:
The best option is to use an Auto Scaling group, a Network Load Balancer, Amazon Route
53, and Amazon DynamoDB to create a scalable, highly available, and low-latency online
game application. An Auto Scaling group can automatically adjust the number of EC2
instances based on the demand and traffic in each Region. A Network Load Balancer can
distribute the incoming traffic across the EC2 instances and handle millions of requests per
second. Amazon Route 53 can use latency-based routing to direct the users to the Region
that provides the best performance. Amazon DynamoDB global tables can replicate the
game metadata across multiple Regions, ensuring consistency and availability of the data.
This approach minimizes the operational overhead and cost, as it leverages fully managed
services and avoids custom logic or replication.
Option A is not optimal because using an EC2 Spot Fleet can introduce the risk of losing
the EC2 instances if the Spot price exceeds the bid price, which can affect the availability
and performance of the game. Using AWS Global Accelerator can improve the network
performance, but it is not necessary if Amazon Route 53 can already route the users to the
closest Region. Using Amazon RDS for MySQL can store the game metadata, but it
requires setting up read replicas and managing the replication lag across Regions, which
can increase the complexity and cost.
Option B is not optimal because using geoproximity routing can direct the users to the
Region that is geographically closer, but it does not account for the network latency or
performance. Using MySQL databases on EC2 instances can store the game metadata,
but it requires managing the EC2 instances, the database software, the backups, the
patches, and the replication across Regions, which can increase the operational overhead
and cost.
Option D is not optimal because using EC2 Global View is not a valid service. Using a
custom DNS server on an EC2 instance can redirect the user to the Region that provides
the lowest latency, but it requires developing and maintaining the custom logic, as well as
managing the EC2 instance, which can increase the operational overhead and cost. Using
Amazon Aurora global database can store the game metadata, but it is more expensive
and complex than using Amazon DynamoDB global tables.
References:
Auto Scaling groups
Network Load Balancer
Amazon Route 53
Amazon DynamoDB global tables
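For the data layer in answer C, an existing DynamoDB table (on the current global tables version, with streams enabled) can be extended into a global table by adding replica Regions. A sketch with a placeholder table name; replicas are added one per update_table call.

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Add replicas in the other two Regions to turn the table into a global table.
for region in ["eu-west-1", "ap-southeast-1"]:
    dynamodb.update_table(
        TableName="GameMetadata",                  # placeholder table name
        ReplicaUpdates=[{"Create": {"RegionName": region}}],
    )
    # In practice, wait for the previous replica to become ACTIVE before adding the next.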
Question # 14
A company is planning to migrate an application from on premises to the AWS Cloud. The company will begin the migration by moving the application's underlying data storage to AWS. The application data is stored on a shared file system on premises, and the application servers connect to the shared file system through SMB.
A solutions architect must implement a solution that uses an Amazon S3 bucket for shared storage. Until the application is fully migrated and code is rewritten to use native Amazon S3 APIs, the application must continue to have access to the data through SMB. The solutions architect must migrate the application data to AWS to its new location while still allowing the on-premises application to access the data.
Which solution will meet these requirements?
A. Create a new Amazon FSx for Windows File Server file system. Configure AWS DataSync with one location for the on-premises file share and one location for the new Amazon FSx file system. Create a new DataSync task to copy the data from the on-premises file share location to the Amazon FSx file system.
B. Create an S3 bucket for the application. Copy the data from the on-premises storage to the S3 bucket.
C. Deploy an AWS Server Migration Service (AWS SMS) VM to the on-premises environment. Use AWS SMS to migrate the file storage server from on premises to an Amazon EC2 instance.
D. Create an S3 bucket for the application. Deploy a new AWS Storage Gateway file gateway on an on-premises VM. Create a new file share that stores data in the S3 bucket and is associated with the file gateway. Copy the data from the on-premises storage to the new file gateway endpoint.
Answer: D
Explanation:
Create an S3 Bucket:
Deploy AWS Storage Gateway:
Configure the File Gateway:
Create a New File Share:
Copy Data to the File Gateway:
Ensure Secure and Efficient Data Transfer:
This approach allows your existing on-premises applications to continue accessing data via
SMB while leveraging the scalability and durability of Amazon S3.
References
AWS Storage Gateway Overview
AWS DataSync and Storage Gateway Hybrid Architecture
AWS S3 File Gateway Details
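Once the file gateway appliance from answer D has been deployed and activated, the SMB file share backed by the S3 bucket can be created with a call along these lines. The gateway ARN, IAM role, and bucket are placeholders; this is a sketch, not the full activation workflow.

import boto3
import uuid

sgw = boto3.client("storagegateway")

# Create an SMB file share on the activated file gateway that stores objects in the S3 bucket.
sgw.create_smb_file_share(
    ClientToken=str(uuid.uuid4()),
    GatewayARN="arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12345678",  # placeholder gateway
    Role="arn:aws:iam::111122223333:role/StorageGatewayS3AccessRole",                 # placeholder role
    LocationARN="arn:aws:s3:::app-shared-storage",                                    # placeholder bucket
)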
Question # 15
A company has an application that analyzes and stores image data on premises. The application receives millions of new image files every day. Files are an average of 1 MB in size. The files are analyzed in batches of 1 GB. When the application analyzes a batch, the application zips the images together. The application then archives the images as a single file in an on-premises NFS server for long-term storage.
The company has a Microsoft Hyper-V environment on premises and has compute capacity available. The company does not have storage capacity and wants to archive the images on AWS. The company needs the ability to retrieve archived data within 1 week of a request.
The company has a 10 Gbps AWS Direct Connect connection between its on-premises data center and AWS. The company needs to set bandwidth limits and schedule archived images to be copied to AWS during non-business hours.
Which solution will meet these requirements MOST cost-effectively?
A. Deploy an AWS DataSync agent on a new GPU-based Amazon EC2 instance. Configure the DataSync agent to copy the batch of files from the NFS on-premises server to Amazon S3 Glacier Instant Retrieval. After the successful copy, delete the data from the on-premises storage.
B. Deploy an AWS DataSync agent as a Hyper-V VM on premises. Configure the DataSync agent to copy the batch of files from the NFS on-premises server to Amazon S3 Glacier Deep Archive. After the successful copy, delete the data from the on-premises storage.
C. Deploy an AWS DataSync agent on a new general purpose Amazon EC2 instance. Configure the DataSync agent to copy the batch of files from the NFS on-premises server to Amazon S3 Standard. After the successful copy, delete the data from the on-premises storage. Create an S3 Lifecycle rule to transition objects from S3 Standard to S3 Glacier Deep Archive after 1 day.
D. Deploy an AWS Storage Gateway Tape Gateway on premises in the Hyper-V environment. Connect the Tape Gateway to AWS. Use automatic tape creation. Specify an Amazon S3 Glacier Deep Archive pool. Eject the tape after the batch of images is copied.
Answer: B
Explanation:
Deploy DataSync Agent:
Configure Source and Destination:
Create DataSync Tasks:
Set Bandwidth Limits:
Delete On-Premises Data:
This approach leverages AWS DataSync for efficient, secure, and automated data transfer,
and S3 Glacier Deep Archive for cost-effective long-term storage.
References
AWS DataSync Overview
AWS Storage Blog on DataSync Migration
Amazon S3 Transfer Acceleration Documentation
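A sketch of the DataSync pieces behind answer B: an NFS source location, an S3 destination that writes directly to Glacier Deep Archive, and a task with a bandwidth cap and an off-hours schedule. Every ARN, hostname, and cron expression below is a placeholder.

import boto3

datasync = boto3.client("datasync")

source = datasync.create_location_nfs(
    ServerHostname="nfs.onprem.example.com",        # placeholder NFS server
    Subdirectory="/archives",
    OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-1234"]},  # placeholder agent
)

destination = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::image-archive-example",  # placeholder bucket
    S3StorageClass="DEEP_ARCHIVE",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/DataSyncS3Role"},  # placeholder role
)

datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=destination["LocationArn"],
    Options={"BytesPerSecond": 125000000},                 # cap transfers at roughly 1 Gbps
    Schedule={"ScheduleExpression": "cron(0 1 * * ? *)"},  # run during non-business hours
)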
Question # 16
A solutions architect is creating an AWS CloudFormation template from an existing manually created non-production AWS environment. The CloudFormation template can be destroyed and recreated as needed. The environment contains an Amazon EC2 instance. The EC2 instance has an instance profile that the EC2 instance uses to assume a role in a parent account.
The solutions architect recreates the role in a CloudFormation template and uses the same role name. When the CloudFormation template is launched in the child account, the EC2 instance can no longer assume the role in the parent account because of insufficient permissions.
What should the solutions architect do to resolve this issue?
A. In the parent account, edit the trust policy for the role that the EC2 instance needs to assume. Ensure that the target role ARN in the existing statement that allows the sts:AssumeRole action is correct. Save the trust policy.
B. In the parent account, edit the trust policy for the role that the EC2 instance needs to assume. Add a statement that allows the sts:AssumeRole action for the root principal of the child account. Save the trust policy.
C. Update the CloudFormation stack again. Specify only the CAPABILITY_NAMED_IAM capability.
D. Update the CloudFormation stack again. Specify the CAPABILITY_IAM capability and the CAPABILITY_NAMED_IAM capability.
Answer: A
Explanation:
Edit the Trust Policy: Ensure that the statement allowing the sts:AssumeRole action references the correct role ARN in the child account.
Update the Role ARN:
Save and Test:
This ensures that the EC2 instance in the child account can assume the role in the parent
account, resolving the permission issue.
References
AWS IAM Documentation on Trust Policies
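In practical terms, the fix in answer A means the parent-account role's trust policy must again reference the child-account role by its ARN, because recreating a role invalidates the previously stored principal even when the name is reused. A hedged sketch, with account IDs and role names as placeholders:

import boto3
import json

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # Re-reference the recreated role in the child account by its ARN.
            "Principal": {"AWS": "arn:aws:iam::222233334444:role/AppInstanceRole"},  # placeholder child role
            "Action": "sts:AssumeRole",
        }
    ],
}

# Run in the parent account against the role that the EC2 instance needs to assume.
iam.update_assume_role_policy(
    RoleName="CrossAccountAppRole",                   # placeholder parent-account role
    PolicyDocument=json.dumps(trust_policy),
)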
Question # 17
A company runs a software-as-a-service (SaaS) application on AWS. The application consists of AWS Lambda functions and an Amazon RDS for MySQL Multi-AZ database. During market events, the application has a much higher workload than normal. Users notice slow response times during the peak periods because of many database connections. The company needs to improve the scalable performance and availability of the database.
Which solution meets these requirements?
A. Create an Amazon CloudWatch alarm action that triggers a Lambda function to add an Amazon RDS for MySQL read replica when resource utilization hits a threshold.
B. Migrate the database to Amazon Aurora, and add a read replica. Add a database connection pool outside of the Lambda handler function.
C. Migrate the database to Amazon Aurora and add a read replica. Use Amazon Route 53 weighted records.
D. Migrate the database to Amazon Aurora and add an Aurora Replica. Configure Amazon RDS Proxy to manage database connection pools.
Answer: D
Explanation:
Migrate to Amazon Aurora:
Add Aurora Replica:
Configure Amazon RDS Proxy:
References
AWS Database Blog on RDS Proxy
AWS Compute Blog on Using RDS Proxy with Lambda
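The RDS Proxy part of answer D can be provisioned roughly as follows. The secret, IAM role, subnets, and cluster identifier are placeholders; the Lambda functions would then connect to the proxy endpoint instead of the cluster endpoint.

import boto3

rds = boto3.client("rds")

# Create the proxy that pools and shares connections to the Aurora MySQL cluster.
rds.create_db_proxy(
    DBProxyName="app-db-proxy",
    EngineFamily="MYSQL",
    Auth=[{"AuthScheme": "SECRETS",
           "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:db-creds"}],  # placeholder secret
    RoleArn="arn:aws:iam::111122223333:role/RdsProxySecretsRole",                           # placeholder role
    VpcSubnetIds=["subnet-0aaa", "subnet-0bbb"],                                            # placeholder subnets
)

# Register the Aurora cluster as the proxy target.
rds.register_db_proxy_targets(
    DBProxyName="app-db-proxy",
    DBClusterIdentifiers=["app-aurora-cluster"],      # placeholder cluster identifier
)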
Question # 18
A company has multiple lines of business (LOBs) that roll up to the parent company. The company has asked its solutions architect to develop a solution with the following requirements:
• Produce a single AWS invoice for all of the AWS accounts used by its LOBs.
• The costs for each LOB account should be broken out on the invoice.
• Provide the ability to restrict services and features in the LOB accounts, as defined by the company's governance policy.
• Each LOB account should be delegated full administrator permissions, regardless of the governance policy.
Which combination of steps should the solutions architect take to meet these requirements? (Select TWO.)
A. Use AWS Organizations to create an organization in the parent account for each LOB. Then invite each LOB account to the appropriate organization.
B. Use AWS Organizations to create a single organization in the parent account. Then, invite each LOB's AWS account to join the organization.
C. Implement service quotas to define the services and features that are permitted and apply the quotas to each LOB, as appropriate.
D. Create an SCP that allows only approved services and features, then apply the policy to the LOB accounts.
E. Enable consolidated billing in the parent account's billing console and link the LOB accounts.
Answer: B,E
Explanation:
Create AWS Organization:
Invite LOB Accounts:
Enable Consolidated Billing:
Apply Service Control Policies (SCPs):
By consolidating billing and using AWS Organizations, the company can achieve
centralized billing and governance while maintaining independent administrative control for
each LOB account.
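A minimal sketch of answers B and E with boto3: creating the organization with all features (which includes consolidated billing) and inviting a LOB account. The account ID is a placeholder.

import boto3

org = boto3.client("organizations")

# Create a single organization in the parent account. FeatureSet "ALL" includes
# consolidated billing plus governance features such as SCPs.
org.create_organization(FeatureSet="ALL")

# Invite each LOB account to join the organization.
org.invite_account_to_organization(
    Target={"Id": "111122223333", "Type": "ACCOUNT"},  # placeholder LOB account ID
    Notes="Please join the parent organization for consolidated billing.",
)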
Question # 19
A company needs to improve the security of its web-based application on AWS. The application uses Amazon CloudFront with two custom origins. The first custom origin routes requests to an Amazon API Gateway HTTP API. The second custom origin routes traffic to an Application Load Balancer (ALB). The application integrates with an OpenID Connect (OIDC) identity provider (IdP) for user management.
A security audit shows that a JSON Web Token (JWT) authorizer provides access to the API. The security audit also shows that the ALB accepts requests from unauthenticated users.
A solutions architect must design a solution to ensure that all backend services respond to only authenticated users.
Which solution will meet this requirement?
A. Configure the ALB to enforce authentication and authorization by integrating the ALB with the IdP. Allow only authenticated users to access the backend services.
B. Modify the CloudFront configuration to use signed URLs. Implement a permissive signing policy that allows any request to access the backend services.
C. Create an AWS WAF web ACL that filters out unauthenticated requests at the ALB level. Allow only authenticated traffic to reach the backend services.
D. Enable AWS CloudTrail to log all requests that come to the ALB. Create an AWS Lambda function to analyze the logs and block any requests that come from unauthenticated users.
Answer: A
Explanation:
Integrate ALB with OIDC IdP:
Set Up Authentication Rules:
Restrict Unauthenticated Access:
Update CloudFront Configuration:
By enforcing authentication at the ALB level, you ensure that all backend services are accessed only by authenticated users, enhancing the overall security of the web application.
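Answer A maps to an authenticate-oidc action placed ahead of the forward action on the ALB's HTTPS listener. The IdP endpoints, client credentials, and ARNs below are placeholders; this is a sketch only.

import boto3

elbv2 = boto3.client("elbv2")

elbv2.modify_listener(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/web/abc/def",  # placeholder
    DefaultActions=[
        {
            "Type": "authenticate-oidc",
            "Order": 1,
            "AuthenticateOidcConfig": {
                "Issuer": "https://idp.example.com",                        # placeholder IdP values
                "AuthorizationEndpoint": "https://idp.example.com/authorize",
                "TokenEndpoint": "https://idp.example.com/token",
                "UserInfoEndpoint": "https://idp.example.com/userinfo",
                "ClientId": "example-client-id",
                "ClientSecret": "example-client-secret",
            },
        },
        {
            "Type": "forward",
            "Order": 2,
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/app/123",  # placeholder
        },
    ],
)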
Question # 20
A delivery company is running a serverless solution in the AWS Cloud. The solution manages user data, delivery information, and past purchase details. The solution consists of several microservices. The central user service stores sensitive data in an Amazon DynamoDB table. Several of the other microservices store a copy of parts of the sensitive data in different storage services.
The company needs the ability to delete user information upon request. As soon as the central user service deletes a user, every other microservice must also delete its copy of the data immediately.
Which solution will meet these requirements?
A. Activate DynamoDB Streams on the DynamoDB table. Create an AWS Lambda trigger for the DynamoDB stream that will post events about user deletion in an Amazon Simple Queue Service (Amazon SQS) queue. Configure each microservice to poll the queue and delete the user from the DynamoDB table.
B. Set up DynamoDB event notifications on the DynamoDB table. Create an Amazon Simple Notification Service (Amazon SNS) topic as a target for the DynamoDB event notification. Configure each microservice to subscribe to the SNS topic and to delete the user from the DynamoDB table.
C. Configure the central user service to post an event on a custom Amazon EventBridge event bus when the company deletes a user. Create an EventBridge rule for each microservice to match the user deletion event pattern and invoke logic in the microservice to delete the user from the DynamoDB table.
D. Configure the central user service to post a message on an Amazon Simple Queue Service (Amazon SQS) queue when the company deletes a user. Configure each microservice to create an event filter on the SQS queue and to delete the user from the DynamoDB table.
Answer: C
Explanation:
Set Up EventBridge Event Bus:
Post Events on User Deletion:
Create EventBridge Rules for Microservices:
Invoke Microservice Logic:
Using Amazon EventBridge ensures a scalable, reliable, and decoupled approach to
handle the deletion of user data across multiple microservices. This setup allows each
microservice to independently process user deletion events without direct dependencies on
other services.
References
AWS EventBridge Documentation
DynamoDB Streams and AWS Lambda Triggers
Implementing the Transactional Outbox Pattern with EventBridge Pipes (AWS Documentation)
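A sketch of the event flow in answer C: the central user service publishes a user-deletion event to a custom bus, and each microservice owns a rule that matches the event pattern. The bus name, source, rule name, and detail fields are placeholders.

import boto3
import json

events = boto3.client("events")

BUS_NAME = "user-lifecycle"                 # placeholder custom event bus

# One-time setup: create the custom event bus.
events.create_event_bus(Name=BUS_NAME)

# Each microservice registers a rule that matches user deletion events.
events.put_rule(
    Name="orders-service-user-deleted",
    EventBusName=BUS_NAME,
    EventPattern=json.dumps({
        "source": ["user-service"],
        "detail-type": ["UserDeleted"],
    }),
)
# A target (for example the microservice's Lambda function) is then attached with put_targets.

# The central user service publishes the event when it deletes a user.
events.put_events(Entries=[{
    "EventBusName": BUS_NAME,
    "Source": "user-service",
    "DetailType": "UserDeleted",
    "Detail": json.dumps({"userId": "12345"}),
}])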
Question # 21
A company has developed an application that is running Windows Server on VMware vSphere VMs that the company hosts on premises. The application data is stored in a proprietary format that must be read through the application. The company manually provisioned the servers and the application.
As part of its disaster recovery plan, the company wants the ability to host its application on AWS temporarily if the company's on-premises environment becomes unavailable. The company wants the application to return to on-premises hosting after a disaster recovery event is complete. The RPO is 5 minutes.
Which solution meets these requirements with the LEAST amount of operational overhead?
A. Configure AWS DataSync. Replicate the data to Amazon Elastic Block Store (Amazon EBS) volumes. When the on-premises environment is unavailable, use AWS CloudFormation templates to provision Amazon EC2 instances and attach the EBS volumes.
B. Configure AWS Elastic Disaster Recovery. Replicate the data to replication Amazon EC2 instances that are attached to Amazon Elastic Block Store (Amazon EBS) volumes. When the on-premises environment is unavailable, use Elastic Disaster Recovery to launch EC2 instances that use the replicated volumes.
C. Provision an AWS Storage Gateway file gateway. Replicate the data to an Amazon S3 bucket. When the on-premises environment is unavailable, use AWS Backup to restore the data to Amazon Elastic Block Store (Amazon EBS) volumes and launch Amazon EC2 instances from these EBS volumes.
D. Provision an Amazon FSx for Windows File Server file system on AWS. Replicate the data to the file system. When the on-premises environment is unavailable, use AWS CloudFormation templates to provision Amazon EC2 instances and use AWS CloudFormation Init commands to mount the Amazon FSx file shares.
Answer: B
Explanation:
Set Up AWS Elastic Disaster Recovery:
Configure Replication Settings:
Monitor Data Replication:
Disaster Recovery (Failover):
Failback Process:
Using AWS Elastic Disaster Recovery provides a low-overhead, automated solution for disaster recovery that ensures minimal data loss and meets the RPO requirement of 5 minutes.
Question # 22
A company that develops consumer electronics with offices in Europe and Asia has 60 TB of software images stored on premises in Europe. The company wants to transfer the images to an Amazon S3 bucket in the ap-northeast-1 Region. New software images are created daily and must be encrypted in transit. The company needs a solution that does not require custom development to automatically transfer all existing and new software images to Amazon S3.
What is the next step in the transfer process?
A. Deploy an AWS DataSync agent and configure a task to transfer the images to the S3 bucket.
B. Configure Amazon Kinesis Data Firehose to transfer the images using S3 Transfer Acceleration.
C. Use an AWS Snowball device to transfer the images with the S3 bucket as the target.
D. Transfer the images over a Site-to-Site VPN connection using the S3 API with multipart upload.
Answer: A
Explanation:
Deploy AWS DataSync Agent:
Configure Source and Destination Locations:
Create and Schedule DataSync Tasks:
Encryption in Transit:
Monitoring and Management:
AWS DataSync is an efficient solution that automates and accelerates the process of
transferring large amounts of data to AWS, handling encryption, data integrity checks, and
optimizing network usage without requiring custom development.
References
AWS Storage Blog on DataSync
AWS DataSync Documentation
Question # 23
A company runs an unauthenticated static website (www.example.com) that includes a registration form for users. The website uses Amazon S3 for hosting and uses Amazon CloudFront as the content delivery network with AWS WAF configured. When the registration form is submitted, the website calls an Amazon API Gateway API endpoint that invokes an AWS Lambda function to process the payload and forward the payload to an external API call.
During testing, a solutions architect encounters a cross-origin resource sharing (CORS) error. The solutions architect confirms that the CloudFront distribution origin has the Access-Control-Allow-Origin header set to www.example.com.
What should the solutions architect do to resolve the error?
A. Change the CORS configuration on the S3 bucket. Add rules for CORS to the AllowedOrigin element for www.example.com.
B. Enable the CORS setting in AWS WAF. Create a web ACL rule in which the Access-Control-Allow-Origin header is set to www.example.com.
C. Enable the CORS setting on the API Gateway API endpoint. Ensure that the API endpoint is configured to return all responses that have the Access-Control-Allow-Origin header set to www.example.com.
D. Enable the CORS setting on the Lambda function. Ensure that the return code of the function has the Access-Control-Allow-Origin header set to www.example.com.
Answer: C
Explanation:
CORS errors occur when a web page hosted on one domain tries to make a request to a
server hosted on another domain. In this scenario, the registration form hosted on the static
website is trying to make a request to the API Gateway API endpoint hosted on a different
domain, which is causing the error. To resolve this error, the Access-Control-Allow-Origin
header needs to be set to the domain from which the request is being made. In this case,
the header is already set to www.example.com on the CloudFront distribution origin.
Therefore, the solutions architect should enable the CORS setting on the API Gateway API
endpoint and ensure that the API endpoint is configured to return all responses that have
the Access-Control-Allow-Origin header set to www.example.com. This will allow the API
endpoint to respond to requests from the static website without a CORS error.
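If the endpoint in answer C is an API Gateway HTTP API, enabling CORS can be done directly on the API; a hedged sketch with a placeholder API ID. (For a REST API with Lambda proxy integration, the function's responses must also return the same header.)

import boto3

apigw = boto3.client("apigatewayv2")

# Allow the static website's origin to call the API from the browser.
apigw.update_api(
    ApiId="a1b2c3d4e5",                         # placeholder HTTP API ID
    CorsConfiguration={
        "AllowOrigins": ["https://www.example.com"],
        "AllowMethods": ["POST", "OPTIONS"],
        "AllowHeaders": ["content-type"],
    },
)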
A company uses AWS Organizations to manage multiple AWS accounts. A solutions architect must design a solution in which only administrator roles are allowed to use IAM actions. However, the solutions architect does not have access to all the AWS accounts throughout the company.
Which solution meets these requirements with the LEAST operational overhead?
A. Create an SCP that applies to all the AWS accounts to allow IAM actions only for administrator roles. Apply the SCP to the root OU.
B. Configure AWS CloudTrail to invoke an AWS Lambda function for each event that is related to IAM actions. Configure the function to deny the action if the user who invoked the action is not an administrator.
C. Create an SCP that applies to all the AWS accounts to deny IAM actions for all users except for those with administrator roles. Apply the SCP to the root OU.
D. Set an IAM permissions boundary that allows IAM actions. Attach the permissions boundary to every administrator role across all the AWS accounts.
Answer: A
Explanation:
To restrict IAM actions to only administrator roles across all AWS accounts in an
organization, the most operationally efficient solution is to create a Service Control Policy
(SCP) that allows IAM actions exclusively for administrator roles and apply this SCP to the
root Organizational Unit (OU) of AWS Organizations. This method ensures a centralized
governance mechanism that uniformly applies the policy across all accounts, thereby
minimizing the need for individual account-level configurations and reducing operational
complexity.
References: AWS Documentation on AWS Organizations and Service Control Policies
offers comprehensive information on creating and managing SCPs for organizational-wide
policy enforcement. This approach aligns with AWS best practices for managing
permissions and ensuring secure and compliant account configurations within an AWS
Organization.
Question # 26
A company uses an organization in AWS Organizations to manage multiple AWS accounts. The company hosts some applications in a VPC in the company's shared services account. The company has attached a transit gateway to the VPC in the shared services account.
The company is developing a new capability and has created a development environment that requires access to the applications that are in the shared services account. The company intends to delete and recreate resources frequently in the development account. The company also wants to give a development team the ability to recreate the team's connection to the shared services account as required.
Which solution will meet these requirements?
A. Create a transit gateway in the development account. Create a transit gateway peering request to the shared services account. Configure the shared services transit gateway to automatically accept peering connections.
B. Turn on automatic acceptance for the transit gateway in the shared services account. Use AWS Resource Access Manager (AWS RAM) to share the transit gateway resource in the shared services account with the development account. Accept the resource in the development account. Create a transit gateway attachment in the development account.
C. Turn on automatic acceptance for the transit gateway in the shared services account. Create a VPC endpoint. Use the endpoint policy to grant permissions on the VPC endpoint for the development account. Configure the endpoint service to automatically accept connection requests. Provide the endpoint details to the development team.
D. Create an Amazon EventBridge rule to invoke an AWS Lambda function that accepts the transit gateway attachment when the development account makes an attachment request. Use AWS Network Manager to share the transit gateway in the shared services account with the development account. Accept the transit gateway in the development account.
Answer: B
Explanation: For a development environment that requires frequent resource recreation
and connectivity to applications hosted in a shared services account, the most efficient
solution involves using AWS Resource Access Manager (RAM) and the transit gateway in
the shared services account. By turning on automatic acceptance for the transit gateway in
the shared services account and sharing it with the development account through AWS
RAM, the development team can easily recreate their connection as needed without
manual intervention. This setup allows for scalable, flexible connectivity between accounts
while minimizing operational overhead and ensuring consistent access to shared services.
References: AWS Documentation on AWS Resource Access Manager and Transit
Gateway provides guidance on sharing network resources across AWS accounts and
enabling automatic acceptance for transit gateway attachments. This approach is also
supported by AWS best practices for multi-account strategies using AWS Organizations
and network architecture.
Question # 27
A company has a web application that uses Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. A recent marketing campaign has increased demand. Monitoring software reports that many requests have significantly longer response times than before the marketing campaign.
A solutions architect enabled Amazon CloudWatch Logs for API Gateway and noticed that errors are occurring on 20% of the requests. In CloudWatch, the Lambda function's Throttles metric represents 1% of the requests and the Errors metric represents 10% of the requests. Application logs indicate that, when errors occur, there is a call to DynamoDB.
What change should the solutions architect make to improve the current response times as the web application becomes more popular?
A. Increase the concurrency limit of the Lambda function.
B. Implement DynamoDB auto scaling on the table.
C. Increase the API Gateway throttle limit.
D. Re-create the DynamoDB table with a better-partitioned primary index.
Answer: B
Explanation:
Enable DynamoDB Auto Scaling:
Configure Auto Scaling Policies:
Monitor and Adjust:
By enabling DynamoDB auto scaling, you ensure that the database can handle the
fluctuating traffic volumes without manual intervention, improving response times and
reducing errors.
References
AWS Compute Blog on Using API Gateway as a Proxy for DynamoDB
AWS Database Blog on DynamoDB Accelerator (DAX)
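Answer B corresponds to Application Auto Scaling target tracking on the table's capacity. A sketch for write capacity with a placeholder table name; the same pair of calls is repeated for read capacity.

import boto3

autoscaling = boto3.client("application-autoscaling")

TABLE_RESOURCE = "table/Requests"               # placeholder table name

# Register the table's write capacity as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId=TABLE_RESOURCE,
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)

# Track roughly 70% consumed write capacity.
autoscaling.put_scaling_policy(
    PolicyName="requests-table-write-scaling",
    ServiceNamespace="dynamodb",
    ResourceId=TABLE_RESOURCE,
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
)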
Question # 28
A company wants to migrate an Amazon Aurora MySQL DB cluster from an existing AWS
account to a new AWS account in the same AWS Region. Both accounts are members of
the same organization in AWS Organizations.
The company must minimize database service interruption before the company performs
DNS cutover to the new database.
Which migration strategy will meet this requirement?
A. Take a snapshot of the existing Aurora database. Share the snapshot with the new AWS account. Create an Aurora DB cluster in the new account from the snapshot.
B. Create an Aurora DB cluster in the new AWS account. Use AWS Database Migration Service (AWS DMS) to migrate data between the two Aurora DB clusters.
C. Use AWS Backup to share an Aurora database backup from the existing AWS account to the new AWS account. Create an Aurora DB cluster in the new AWS account from the snapshot.
D. Create an Aurora DB cluster in the new AWS account. Use AWS Application Migration Service to migrate data between the two Aurora DB clusters.
Answer: B
Explanation:
The best migration strategy to meet the requirement of minimizing database service
interruption before the DNS cutover is to use AWS DMS to migrate data between the two
Aurora DB clusters. AWS DMS can perform continuous replication of data with high
availability and consolidate databases into a petabyte-scale data warehouse by streaming
data to Amazon Redshift and Amazon S3 [1]. AWS DMS supports homogeneous migrations
such as migrating from one Aurora MySQL DB cluster to another, as well as
heterogeneous migrations between different database platforms [2]. AWS DMS also supports cross-account migrations, as long as the source and target databases are in the same
AWS Region [3].
The other options are not optimal for the following reasons:
Option A: Taking a snapshot of the existing Aurora database and restoring it in the new
account would require a downtime during the snapshot and restore process, which could
be significant for large databases. Moreover, any changes made to the source database
after the snapshot would not be replicated to the target database, resulting in data
inconsistency [4].
Option C: Using AWS Backup to share an Aurora database backup from the existing AWS
account to the new AWS account would have the same drawbacks as option A, as AWS
Backup uses snapshots to create backups of Aurora databases.
Option D: Using AWS Application Migration Service to migrate data between the two
Aurora DB clusters is not a valid option, as AWS Application Migration Service is designed
to migrate applications, not databases, to AWS. AWS Application Migration Service can
migrate applications from on-premises or other cloud environments to AWS, using
agentless or agent-based methods.
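For illustration, a minimal boto3 sketch of a full-load-plus-CDC replication task. The endpoint and replication instance ARNs are placeholders and assume the DMS source endpoint, target endpoint, and replication instance already exist.

# Illustrative sketch: continuous replication between the two Aurora MySQL clusters.
import json
import boto3

dms = boto3.client("dms")

dms.create_replication_task(
    ReplicationTaskIdentifier="aurora-cross-account-migration",
    SourceEndpointArn="arn:aws:dms:us-east-1:111111111111:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:222222222222:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111111111111:rep:INSTANCE",
    MigrationType="full-load-and-cdc",  # initial copy plus ongoing change capture
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)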
References:
1: What Is AWS Database Migration Service? - AWS Database Migration Service
2: Sources for Data Migration - AWS Database Migration Service
3: AWS Database Migration Service FAQs
4: Working with DB Cluster Snapshots - Amazon Aurora
[Backing Up and Restoring an Amazon Aurora DB Cluster - Amazon Aurora]
Question # 29
A company is planning a migration from an on-premises data center to the AWS Cloud. The
company plans to use multiple AWS accounts that are managed in an organization in AWS
Organizations. The company will create a small number of accounts initially and will add
accounts as needed. A solutions architect must design a solution that turns on AWS
CloudTrail in all AWS accounts.
What is the MOST operationally efficient solution that meets these requirements?
A. Create an AWS Lambda function that creates a new CloudTrail trail in all AWS accounts in the organization. Invoke the Lambda function daily by using a scheduled action in Amazon EventBridge.
B. Create a new CloudTrail trail in the organization's management account. Configure the trail to log all events for all AWS accounts in the organization.
C. Create a new CloudTrail trail in all AWS accounts in the organization. Create new trails whenever a new account is created.
D. Create an AWS Systems Manager Automation runbook that creates a CloudTrail trail in all AWS accounts in the organization. Invoke the automation by using Systems Manager State Manager.
Answer: B
Explanation:
The most operationally efficient solution for turning on AWS CloudTrail across multiple
AWS accounts managed within an AWS Organization is to create a single CloudTrail trail in
the organization's management account and configure it to log events for all accounts
within the organization. This approach leverages CloudTrail's ability to consolidate logs
from all accounts in an organization, thereby simplifying management, reducing overhead,
and ensuring consistent logging across accounts. This method eliminates the need for
manual intervention in each account, making it an operationally efficient choice for
organizations planning to scale their AWS usage.
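A minimal boto3 sketch of this setup, assuming a placeholder trail name and an S3 bucket that already has the required CloudTrail bucket policy; it would be run in the organization's management account.

# Illustrative sketch: one organization-wide, multi-Region CloudTrail trail.
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.create_trail(
    Name="org-trail",
    S3BucketName="example-org-cloudtrail-logs",
    IsOrganizationTrail=True,   # log events for every account in the organization
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="org-trail")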
References:
AWS CloudTrail Documentation: Provides detailed instructions on setting up
CloudTrail, including how to configure it for an organization.
AWS Organizations Documentation: Offers insights into best practices for
managing multiple AWS accounts and how services like CloudTrail integrate with
AWS Organizations.
AWS Best Practices for Security and Governance: Guides on how to effectively
use AWS services to maintain a secure and well-governed AWS environment, with
a focus on centralized logging and monitoring.
Question # 30
A solutions architect is preparing to deploy a new security tool into several previously
unused AWS Regions. The solutions architect will deploy the tool by using an AWS
CloudFormation stack set. The stack set's template contains an IAM role that has a
custom name. Upon creation of the stack set, no stack instances are created successfully.
What should the solutions architect do to deploy the stacks successfully?
A. Enable the new Regions in all relevant accounts. Specify the CAPABILITY_NAMED_IAM capability during the creation of the stack set.
B. Use the Service Quotas console to request a quota increase for the number of CloudFormation stacks in each new Region in all relevant accounts. Specify the CAPABILITY_IAM capability during the creation of the stack set.
C. Specify the CAPABILITY_NAMED_IAM capability and the SELF_MANAGED permissions model during the creation of the stack set.
D. Specify an administration role ARN and the CAPABILITY_IAM capability during the creation of the stack set.
Answer: A
Explanation: The CAPABILITY_NAMED_IAM capability is required when creating or
updating CloudFormation stacks that contain IAM resources with custom names. This
capability acknowledges that the template might create IAM resources that have broad
permissions or affect other resources in the AWS account. The stack set’s template
contains an IAM role that has a custom name, so this capability is needed. Enabling the new Regions in all relevant accounts is also necessary to deploy the stack set across
multiple Regions and accounts.
Option B is incorrect because the Service Quotas console is used to view and manage the
quotas for AWS services, not for CloudFormation stacks. The number of stacks per Region
per account is not a service quota that can be increased.
Option C is incorrect because the SELF_MANAGED permissions model is used when the
administrator wants to retain full permissions to manage stack sets and stack instances.
This model does not affect the creation of the stack set or the requirement for the
CAPABILITY_NAMED_IAM capability.
Option D is incorrect because an administration role ARN is optional when creating a stack
set. It is used to specify a role that CloudFormation assumes to create stack instances in
the target accounts. It does not affect the creation of the stack set or the requirement for
the CAPABILITY_NAMED_IAM capability.
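For illustration, a boto3 sketch of creating a stack set that acknowledges named IAM resources. The template URL, organizational unit ID, Regions, and the SERVICE_MANAGED permission model are placeholder assumptions, not details taken from the question.

# Illustrative sketch: stack set containing a named IAM role, deployed to new Regions.
import boto3

cfn = boto3.client("cloudformation")

cfn.create_stack_set(
    StackSetName="security-tool",
    TemplateURL="https://example-bucket.s3.amazonaws.com/security-tool.yaml",
    Capabilities=["CAPABILITY_NAMED_IAM"],  # required because the template names an IAM role
    PermissionModel="SERVICE_MANAGED",
    AutoDeployment={"Enabled": True, "RetainStacksOnAccountRemoval": False},
)

cfn.create_stack_instances(
    StackSetName="security-tool",
    DeploymentTargets={"OrganizationalUnitIds": ["ou-abcd-11111111"]},
    Regions=["eu-south-1", "ap-east-1"],  # Regions assumed to have been enabled first
)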
References:
1: AWS CloudFormation stack sets
2: Acknowledging IAM resources in AWS CloudFormation templates
3: AWS CloudFormation stack set permissions
Question # 31
A company has an IoT platform that runs in an on-premises environment. The platform
consists of a server that connects to IoT devices by using the MQTT protocol. The platform
collects telemetry data from the devices at least once every 5 minutes. The platform also
stores device metadata in a MongoDB cluster.
An application that is installed on an on-premises machine runs periodic jobs to aggregate
and transform the telemetry and device metadata. The application creates reports that
users view by using another web application that runs on the same on-premises machine.
The periodic jobs take 120-600 seconds to run. However, the web application is always
running.
The company is moving the platform to AWS and must reduce the operational overhead of
the stack.
Which combination of steps will meet these requirements with the LEAST operational
overhead? (Select THREE.)
A. Use AWS Lambda functions to connect to the IoT devices
B. Configure the IoT devices to publish to AWS IoT Core
C. Write the metadata to a self-managed MongoDB database on an Amazon EC2 instance
D. Write the metadata to Amazon DocumentDB (with MongoDB compatibility)
E. Use AWS Step Functions state machines with AWS Lambda tasks to prepare the reports and to write the reports to Amazon S3. Use Amazon CloudFront with an S3 origin to serve the reports
F. Use an Amazon Elastic Kubernetes Service (Amazon EKS) cluster with Amazon EC2 instances to prepare the reports. Use an ingress controller in the EKS cluster to serve the reports
Question # 32
A company is designing an AWS environment for a manufacturing application. The
application has been successful with customers, and the application's user base has
increased. The company has connected the AWS environment to the company's on-premises
data center through a 1 Gbps AWS Direct Connect connection. The company has
configured BGP for the connection.
The company must update the existing network connectivity solution to ensure that the
solution is highly available, fault tolerant, and secure.
Which solution will meet these requirements MOST cost-effectively?
A. Add a dynamic private IP AWS Site-to-Site VPN as a secondary path to secure data in transit and provide resilience for the Direct Connect connection. Configure MACsec to encrypt traffic inside the Direct Connect connection.
B. Provision another Direct Connect connection between the company's on-premises data center and AWS to increase the transfer speed and provide resilience. Configure MACsec to encrypt traffic inside the Direct Connect connection.
C. Configure multiple private VIFs. Load balance data across the VIFs between the on-premises data center and AWS to provide resilience.
D. Add a static AWS Site-to-Site VPN as a secondary path to secure data in transit and to provide resilience for the Direct Connect connection.
Answer: A
Explanation:
To enhance the network connectivity solution's availability, fault tolerance, and security in a
cost-effective manner, adding a dynamic private IP AWS Site-to-Site VPN as a secondary
path is a viable option. This VPN serves as a resilient backup for the Direct Connect
connection, ensuring continuous data flow even if the primary path fails. Implementing
MACsec (Media Access Control Security) on the Direct Connect connection further secures
the data in transit by providing encryption, thus addressing the security requirement. This solution strikes a balance between cost and operational efficiency, avoiding the higher
expenses associated with provisioning an additional Direct Connect connection.
References: AWS Documentation on AWS Direct Connect and AWS Site-to-Site VPN
provides insights into setting up resilient and secure network connections. Additionally,
information on MACsec offers guidance on how to implement encryption for Direct Connect
connections, aligning with best practices for secure and highly available network
architectures.
Question # 33
A company deploys workloads in multiple AWS accounts. Each account has a VPC with
VPC flow logs published in text log format to a centralized Amazon S3 bucket. Each log file
is compressed with gzip compression. The company must retain the log files indefinitely.
A security engineer occasionally analyzes the logs by using Amazon Athena to query the
VPC flow logs. The query performance is degrading over time as the number of ingested
logs is growing. A solutions architect must improve the performance of the log analysis and reduce the storage space that the VPC flow logs use.
Which solution will meet these requirements with the LARGEST performance
improvement?
A. Create an AWS Lambda function to decompress the gzip files and to compress the files with bzip2 compression. Subscribe the Lambda function to an s3:ObjectCreated:Put S3 event notification for the S3 bucket.
B. Enable S3 Transfer Acceleration for the S3 bucket. Create an S3 Lifecycle configuration to move files to the S3 Intelligent-Tiering storage class as soon as the files are uploaded.
C. Update the VPC flow log configuration to store the files in Apache Parquet format. Specify hourly partitions for the log files.
D. Create a new Athena workgroup without data usage control limits. Use Athena engine version 2.
Answer: C
Explanation:
Converting VPC flow logs to store in Apache Parquet format and specifying hourly
partitions significantly improves query performance and reduces storage space usage.
Apache Parquet is a columnar storage file format optimized for analytical queries, allowing
Athena to scan less data and improve query performance. Partitioning logs by hour further
enhances query efficiency by limiting the amount of data scanned during queries,
addressing the issue of degrading performance over time due to the growing volume of
ingested logs.
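A minimal boto3 sketch of the recommended flow log configuration, with a placeholder VPC ID and bucket ARN:

# Illustrative sketch: publish VPC flow logs to S3 in Parquet with hourly partitions.
import boto3

ec2 = boto3.client("ec2")

ec2.create_flow_logs(
    ResourceType="VPC",
    ResourceIds=["vpc-0abc1234def567890"],
    TrafficType="ALL",
    LogDestinationType="s3",
    LogDestination="arn:aws:s3:::example-central-flow-logs/vpc-flow/",
    DestinationOptions={
        "FileFormat": "parquet",
        "HiveCompatiblePartitions": True,
        "PerHourPartition": True,   # hourly partitions reduce the data Athena must scan
    },
)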
References: AWS Documentation on VPC Flow Logs and Amazon Athena provides
insights into configuring VPC flow logs in Apache Parquet format and using Athena for
querying log data. This approach is recommended for efficient log analysis and storage
optimization.
Question # 34
An e-commerce company is revamping its IT infrastructure and is planning to use AWS
services. The company's CIO has asked a solutions architect to design a simple, highly
available, and loosely coupled order processing application. The application is responsible
for receiving and processing orders before storing them in an Amazon DynamoDB table.
The application has a sporadic traffic pattern and should be able to scale during marketing
campaigns to process the orders with minimal delays.
Which of the following is the MOST reliable approach to meet the requirements?
A. Receive the orders in an Amazon EC2-hosted database and use EC2 instances to process them.
B. Receive the orders in an Amazon SQS queue and invoke an AWS Lambda function to process them.
C. Receive the orders using the AWS Step Functions program and launch an Amazon ECS container to process them.
D. Receive the orders in Amazon Kinesis Data Streams and use Amazon EC2 instances to process them.
Answer: B
Explanation:
The best option is to use Amazon SQS and AWS Lambda to create a serverless order
processing application. Amazon SQS is a fully managed message queue service that can
decouple the order receiving and processing components, making the application more
scalable and fault-tolerant. AWS Lambda is a serverless compute service that can
automatically scale to handle the incoming messages from the SQS queue and process
them according to the business logic. AWS Lambda can also integrate with Amazon
DynamoDB to store the processed orders in a fast and flexible NoSQL database. This
approach eliminates the need to provision, manage, or scale any servers or containers,
and reduces the operational overhead and cost.
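As a rough sketch of the processing side (not prescribed by the question), an SQS-triggered Lambda handler that writes each order message to a placeholder DynamoDB table might look like this; the table and attribute names are assumptions.

# Illustrative Lambda handler: process SQS order messages and persist them to DynamoDB.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Orders")

def handler(event, context):
    # Each record is one SQS message; the body is assumed to be an order JSON document.
    for record in event["Records"]:
        order = json.loads(record["body"])
        table.put_item(Item={
            "orderId": order["orderId"],
            "customerId": order["customerId"],
            "total": str(order["total"]),  # stored as a string to avoid float precision issues
        })
    return {"processed": len(event["Records"])}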
Option A is not reliable because using an EC2-hosted database to receive the orders
introduces a single point of failure and a scalability bottleneck. EC2 instances also require
more management and configuration than serverless services.
Option C is not reliable because using AWS Step Functions to receive the orders adds
unnecessary complexity and cost to the application. AWS Step Functions is a service that
coordinates multiple AWS services into a serverless workflow, but it is not designed to
handle high-volume, sporadic, or unpredictable traffic patterns. AWS Step Functions also
charges per state transition, which can be expensive for a large number of orders.
Launching an ECS container to process each order also requires more resources and
management than invoking a Lambda function.
Option D is not reliable because using Amazon Kinesis Data Streams to receive the orders
is not suitable for this use case. Amazon Kinesis Data Streams is a service that enables
real-time processing of streaming data at scale, but it is not meant for asynchronous
message queuing. Amazon Kinesis Data Streams requires consumers to poll the data from
the stream, which can introduce latency and complexity. Amazon Kinesis Data Streams
also charges per shard hour, which can be expensive for a sporadic traffic pattern.
References:
Amazon SQS
AWS Lambda
Amazon DynamoDB
AWS Step Functions
Amazon ECS
Question # 35
A company that is developing a mobile game is making game assets available in two AWS
Regions. Game assets are served from a set of Amazon EC2 instances behind an
Application Load Balancer (ALB) in each Region. The company requires game assets to be
fetched from the closest Region. If game assets become unavailable in the closest Region,
they should be fetched from the other Region.
What should a solutions architect do to meet these requirements?
A. Create an Amazon CloudFront distribution. Create an origin group with one origin for each ALB. Set one of the origins as primary.
B. Create an Amazon Route 53 health check for each ALB. Create a Route 53 failover routing record pointing to the two ALBs. Set the Evaluate Target Health value to Yes.
C. Create two Amazon CloudFront distributions, each with one ALB as the origin. Create an Amazon Route 53 failover routing record pointing to the two CloudFront distributions. Set the Evaluate Target Health value to Yes.
D. Create an Amazon Route 53 health check for each ALB. Create a Route 53 latency alias record pointing to the two ALBs. Set the Evaluate Target Health value to Yes.
Answer: A
Explanation:
To ensure that game assets are fetched from the closest region and have a fallback option
in case the assets become unavailable in the closest region, a solution architect should
leverage Amazon CloudFront, a global content delivery network (CDN) service. By creating
an Amazon CloudFront distribution and setting up origin groups, the architect can specify
multiple origins (in this case, the Application Load Balancers in each region). The primary
origin will serve content under normal circumstances, and if the content becomes
unavailable, CloudFront will automatically switch to the secondary origin. This approach not
only meets the requirement of regional proximity and redundancy but also optimizes
latency and enhances the gaming experience by serving assets from the nearest
geographical location to the end-user.
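For illustration, the OriginGroups fragment of a CloudFront DistributionConfig that implements this failover; the origin IDs are placeholders and assume both ALB origins are defined elsewhere in the same configuration.

# Illustrative fragment: fail over from the primary ALB origin to the secondary on 5xx responses.
origin_groups = {
    "Quantity": 1,
    "Items": [{
        "Id": "alb-origin-group",
        "FailoverCriteria": {
            "StatusCodes": {"Quantity": 3, "Items": [500, 502, 503]}
        },
        "Members": {
            "Quantity": 2,
            "Items": [
                {"OriginId": "alb-primary"},    # serves traffic under normal conditions
                {"OriginId": "alb-secondary"},  # used when the primary origin fails
            ],
        },
    }],
}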
References: AWS Documentation on Amazon CloudFront and origin groups provides
detailed instructions on setting up distributions with multiple origins for high availability and
performance optimization. Additionally, AWS whitepapers and best practices on content
delivery and global applications offer insights into effectively utilizing CloudFront and other
AWS services to achieve low latency and high availability.
Question # 36
A flood monitoring agency has deployed more than 10,000 water-level monitoring sensors.
Sensors send continuous data updates, and each update is less than 1 MB in size. The
agency has a fleet of on-premises application servers. These servers receive updates from
the sensors, convert the raw data into a human readable format, and write the results to an
on-premises relational database server. Data analysts then use simple SQL queries to
monitor the data.
The agency wants to increase overall application availability and reduce the effort that is
required to perform maintenance tasks. These maintenance tasks, which include updates
and patches to the application servers, cause downtime. While an application server is
down, data is lost from sensors because the remaining servers cannot handle the entire
workload.
The agency wants a solution that optimizes operational overhead and costs. A solutions
architect recommends the use of AWS IoT Core to collect the sensor data.
What else should the solutions architect recommend to meet these requirements?
A. Send the sensor data to Amazon Kinesis Data Firehose. Use an AWS Lambda function to read the Kinesis Data Firehose data, convert it to .csv format, and insert it into an Amazon Aurora MySQL DB instance. Instruct the data analysts to query the data directly from the DB instance.
B. Send the sensor data to Amazon Kinesis Data Firehose. Use an AWS Lambda function to read the Kinesis Data Firehose data, convert it to Apache Parquet format, and save it to an Amazon S3 bucket. Instruct the data analysts to query the data by using Amazon Athena.
C. Send the sensor data to an Amazon Managed Service for Apache Flink (previously known as Amazon Kinesis Data Analytics) application to convert the data to .csv format and store it in an Amazon S3 bucket. Import the data into an Amazon Aurora MySQL DB instance. Instruct the data analysts to query the data directly from the DB instance.
D. Send the sensor data to an Amazon Managed Service for Apache Flink (previously known as Amazon Kinesis Data Analytics) application to convert the data to Apache Parquet format and store it in an Amazon S3 bucket. Instruct the data analysts to query the data by using Amazon Athena.
Answer: B
Explanation:
To enhance application availability and reduce maintenance-induced downtime, sending
sensor data to Amazon Kinesis Data Firehose, processing it with an AWS Lambda
function, converting it to Apache Parquet format, and storing it in Amazon S3 is an effective
strategy. This approach leverages serverless architectures for scalability and reliability.
Data analysts can then query the optimized data using Amazon Athena, a serverless
interactive query service, which supports complex queries on data stored in S3 without the
need for traditional database servers, optimizing operational overhead and costs.
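As an illustrative example of the analyst side, a boto3 call that runs an Athena query against the Parquet data; the database, table, column names, and output location are placeholder assumptions.

# Illustrative sketch: query the Parquet sensor data in S3 with Athena.
import boto3

athena = boto3.client("athena")

athena.start_query_execution(
    QueryString=(
        "SELECT sensor_id, max(water_level) AS peak_level "
        "FROM sensor_readings "
        "WHERE reading_date = date '2024-09-01' "
        "GROUP BY sensor_id"
    ),
    QueryExecutionContext={"Database": "flood_monitoring"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)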
References: AWS Documentation on AWS IoT Core, Amazon Kinesis Data Firehose,
AWS Lambda, Amazon S3, and Amazon Athena provides a comprehensive framework for
building a scalable, serverless data processing pipeline. This solution aligns with AWS best
practices for processing and analyzing large-scale data streams efficiently.
Question # 37
A company has many services running in its on-premises data center. The data center is
connected to AWS using AWS Direct Connect (DX) and an IPsec VPN. The service data is
sensitive, and connectivity cannot traverse the internet. The company wants to expand to a new market segment and begin offering its services to other companies that are using
AWS.
Which solution will meet these requirements?
A. Create a VPC Endpoint Service that accepts TCP traffic, host it behind a Network Load Balancer, and make the service available over DX.
B. Create a VPC Endpoint Service that accepts HTTP or HTTPS traffic, host it behind an Application Load Balancer, and make the service available over DX.
C. Attach an internet gateway to the VPC, and ensure that network access control and security group rules allow the relevant inbound and outbound traffic.
D. Attach a NAT gateway to the VPC, and ensure that network access control and security group rules allow the relevant inbound and outbound traffic.
Answer: B
Explanation:
To offer services to other companies using AWS without traversing the internet, creating a
VPC Endpoint Service hosted behind an Application Load Balancer (ALB) and making it
available over AWS Direct Connect (DX) is the most suitable solution. This approach
ensures that the service traffic remains within the AWS network, adhering to the
requirement that connectivity must not traverse the internet. An ALB is capable of handling
HTTP/HTTPS traffic, making it appropriate for web-based services. Utilizing DX for
connectivity between the on-premises data center and AWS further secures and optimizes
the network path.
References:
AWS Direct Connect Documentation: Explains how to set up DX for private
connectivity between AWS and an on-premises network.
AWS PrivateLink Documentation: Provides details on creating and configuring endpoint
services for private, secure access to services hosted in AWS.
AWS Application Load Balancer Documentation: Offers guidance on configuring
ALBs to distribute HTTP/HTTPS traffic efficiently.
Question # 38
A company wants to establish a dedicated connection between its on-premises
infrastructure and AWS. The company is setting up a 1 Gbps AWS Direct Connect
connection to its account VPC. The architecture includes a transit gateway and a Direct
Connect gateway to connect multiple VPCs and the on-premises infrastructure.
The company must connect to VPC resources over a transit VIF by using the Direct
Connect connection.
Which combination of steps will meet these requirements? (Select TWO.)
A. Update the 1 Gbps Direct Connect connection to 10 Gbps.
B. Advertise the on-premises network prefixes over the transit VIF.
C. Advertise the VPC prefixes from the Direct Connect gateway to the on-premises network over the transit VIF.
D. Update the Direct Connect connection's MACsec encryption mode attribute to must_encrypt.
E. Associate a MACsec Connection Key Name-Connectivity Association Key (CKN/CAK) pair with the Direct Connect connection.
Answer: B,C
Explanation:
To connect VPC resources over a transit Virtual Interface (VIF) using a Direct Connect
connection, the company should advertise the on-premises network prefixes over the
transit VIF and advertise the VPC prefixes from the Direct Connect gateway to the on-premises
network over the same VIF. This configuration ensures seamless connectivity
between the on-premises infrastructure and the AWS VPCs through the transit gateway,
facilitating efficient and secure communication across the network.
References: AWS Documentation on AWS Direct Connect and transit gateways provides
detailed instructions on configuring transit VIFs and routing for Direct Connect connections.
This setup is recommended in AWS best practices for establishing dedicated network
connections between on-premises environments and AWS to achieve low-latency,
high-throughput, and secure connectivity.
Question # 39
A company hosts an intranet web application on Amazon EC2 instances behind an
Application Load Balancer (ALB). Currently, users authenticate to the application against
an internal user database.
The company needs to authenticate users to the application by using an existing AWS
Directory Service for Microsoft Active Directory directory. All users with accounts in the
directory must have access to the application.
Which solution will meet these requirements?
A. Create a new app client in the directory. Create a listener rule for the ALB. Specify the authenticate-oidc action for the listener rule. Configure the listener rule with the appropriate issuer, client ID and secret, and endpoint details for the Active Directory service. Configure the new app client with the callback URL that the ALB provides.
B. Configure an Amazon Cognito user pool. Configure the user pool with a federated identity provider (IdP) that has metadata from the directory. Create an app client. Associate the app client with the user pool. Create a listener rule for the ALB. Specify the authenticate-cognito action for the listener rule. Configure the listener rule to use the user pool and app client.
C. Add the directory as a new IAM identity provider (IdP). Create a new IAM role that has an entity type of SAML 2.0 federation. Configure a role policy that allows access to the ALB. Configure the new role as the default authenticated user role for the IdP. Create a listener rule for the ALB. Specify the authenticate-oidc action for the listener rule.
D. Enable AWS IAM Identity Center (AWS Single Sign-On). Configure the directory as an external identity provider (IdP) that uses SAML. Use the automatic provisioning method. Create a new IAM role that has an entity type of SAML 2.0 federation. Configure a role policy that allows access to the ALB. Attach the new role to all groups. Create a listener rule for the ALB. Specify the authenticate-cognito action for the listener rule.
Answer: A
Explanation:
The correct solution is to use the authenticate-oidc action for the ALB listener rule and
configure it with the details of the AWS Directory Service for Microsoft Active Directory
directory. This way, the ALB can use OpenID Connect (OIDC) to authenticate users
against the directory and grant them access to the intranet web application. The app client
in the directory is used to register the ALB as an OIDC client and provide the necessary
credentials and endpoints. The callback URL is the URL that the ALB redirects the user to
after a successful authentication. This solution does not require any additional services or
roles, and it leverages the existing directory accounts for all users.
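For illustration only, a boto3 sketch of adding an authenticate-oidc action ahead of the forward action on the ALB listener; all ARNs, endpoint URLs, and client credentials are placeholders.

# Illustrative sketch: ALB listener rule that authenticates with OIDC before forwarding.
import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111111111111:listener/app/intranet/abc/def",
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/*"]}],
    Actions=[
        {
            "Type": "authenticate-oidc",
            "Order": 1,
            "AuthenticateOidcConfig": {
                "Issuer": "https://idp.example.com",
                "AuthorizationEndpoint": "https://idp.example.com/authorize",
                "TokenEndpoint": "https://idp.example.com/token",
                "UserInfoEndpoint": "https://idp.example.com/userinfo",
                "ClientId": "example-app-client-id",
                "ClientSecret": "example-app-client-secret",
            },
        },
        {
            "Type": "forward",
            "Order": 2,
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111111111111:targetgroup/web/0123456789abcdef",
        },
    ],
)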
The other solutions are incorrect because they either use the wrong action for the ALB
listener rule, or they involve unnecessary or incompatible services or roles. For example:
Solution B is incorrect because it uses Amazon Cognito user pool, which is a
separate user directory service that does not integrate with AWS Directory Service
for Microsoft Active Directory. To use this solution, the company would have to
migrate or synchronize their users from the directory to the user pool, which is not
required by the question. Moreover, the authenticate-cognito action for the ALB
listener rule only works with Amazon Cognito user pools, not with federated
identity providers (IdPs) that have metadata from the directory.
Solution C is incorrect because it uses IAM as an identity provider (IdP), which is
not compatible with AWS Directory Service for Microsoft Active Directory. IAM can
only be used as an IdP for web identity federation, which allows users to sign in
with social media or other third-party IdPs, not with Active Directory. Moreover, the
authenticate-oidc action for the ALB listener rule requires an OIDC IdP, not a
SAML 2.0 federation IdP, which is what IAM provides.
Solution D is incorrect because it uses AWS IAM Identity Center (AWS Single Sign-On), which is a service that simplifies the management of SSO access to
multiple AWS accounts and business applications. This service is not needed for
the scenario in the question, which only involves a single intranet web application.
Moreover, the authenticate-cognito action for the ALB listener rule does not work
with external IdPs that use SAML, such as AWS IAM Identity Center.
References:
Authenticate users using an Application Load Balancer
What is AWS Directory Service for Microsoft Active Directory?
Using OpenID Connect for user authentication
Question # 40
A public retail web application uses an Application Load Balancer (ALB) in front of Amazon
EC2 instances running across multiple Availability Zones (AZs) in a Region backed by an
Amazon RDS MySQL Multi-AZ deployment. Target group health checks are configured to
use HTTP and pointed at the product catalog page. Auto Scaling is configured to maintain
the web fleet size based on the ALB health check.
Recently, the application experienced an outage. Auto Scaling continuously replaced the
instances during the outage. A subsequent investigation determined that the web server
metrics were within the normal range, but the database tier was experiencing high load,
resulting in severely elevated query response times.
Which of the following changes together would remediate these issues while improving
monitoring capabilities for the availability and functionality of the entire application stack for
future growth? (Select TWO.)
A. Configure read replicas for Amazon RDS MySQL and use the single reader endpoint in the web application to reduce the load on the backend database tier.
B. Configure the target group health check to point at a simple HTML page instead of a product catalog page and the Amazon Route 53 health check against the product page to evaluate full application functionality. Configure Amazon CloudWatch alarms to notify administrators when the site fails.
C. Configure the target group health check to use a TCP check of the Amazon EC2 web server and the Amazon Route 53 health check against the product page to evaluate full application functionality. Configure Amazon CloudWatch alarms to notify administrators when the site fails.
D. Configure an Amazon CloudWatch alarm for Amazon RDS with an action to recover a high-load, impaired RDS instance in the database tier.
E. Configure an Amazon ElastiCache cluster and place it between the web application and RDS MySQL instances to reduce the load on the backend database tier.
Answer: A,E
Explanation:
Configuring read replicas for Amazon RDS MySQL and using the single reader endpoint in
the web application can significantly reduce the load on the backend database tier,
improving overall application performance. Additionally, implementing an Amazon
ElastiCache cluster between the web application and RDS MySQL instances can further
reduce database load by caching frequently accessed data, thereby enhancing the
application's resilience and scalability. These changes address the root cause of the
outage by alleviating the database tier's high load and preventing similar issues in the
future.
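A minimal boto3 sketch of adding a read replica that reporting and read-heavy queries can be pointed at; the instance identifiers and instance class are placeholders.

# Illustrative sketch: create a read replica of the primary RDS MySQL instance.
import boto3

rds = boto3.client("rds")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="webapp-mysql-replica-1",
    SourceDBInstanceIdentifier="webapp-mysql-primary",
    DBInstanceClass="db.r6g.large",
)

The web application would then send read traffic to the replica's reader endpoint, while an ElastiCache layer caches the most frequently requested product catalog data.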
References: AWS Documentation on Amazon RDS Read Replicas and Amazon
ElastiCache provides comprehensive guidance on improving application performance and
scalability by offloading read traffic from the primary database and caching common
queries. These solutions are in line with AWS best practices for building resilient and
scalable web applications.
Question # 41
A company needs to implement disaster recovery for a critical application that runs in a
single AWS Region. The application's users interact with a web frontend that is hosted on
Amazon EC2 Instances behind an Application Load Balancer (ALB). The application writes
to an Amazon RDS for MySQL DB instance. The application also outputs processed
documents that are stored in an Amazon S3 bucket
The company's finance team directly queries the database to run reports. During busy
periods, these queries consume resources and negatively affect application performance.
A solutions architect must design a solution that will provide resiliency during a disaster.
The solution must minimize data loss and must resolve the performance problems that
result from the finance team's queries.
Which solution will meet these requirements?
A. Migrate the database to Amazon DynamoDB and use DynamoDB global tables. Instruct the finance team to query a global table in a separate Region. Create an AWS Lambda function to periodically synchronize the contents of the original S3 bucket to a new S3 bucket in the separate Region. Launch EC2 instances and create an ALB in the separate Region. Configure the application to point to the new S3 bucket.
B. Launch additional EC2 instances that host the application in a separate Region. Add the additional instances to the existing ALB. In the separate Region, create a read replica of the RDS DB instance. Instruct the finance team to run queries against the read replica. Use S3 Cross-Region Replication (CRR) from the original S3 bucket to a new S3 bucket in the separate Region. During a disaster, promote the read replica to a standalone DB instance. Configure the application to point to the new S3 bucket and to the newly promoted read replica.
C. Create a read replica of the RDS DB instance in a separate Region. Instruct the finance team to run queries against the read replica. Create AMIs of the EC2 instances that host the application frontend. Copy the AMIs to the separate Region. Use S3 Cross-Region Replication (CRR) from the original S3 bucket to a new S3 bucket in the separate Region. During a disaster, promote the read replica to a standalone DB instance. Launch EC2 instances from the AMIs and create an ALB to present the application to end users. Configure the application to point to the new S3 bucket.
D. Create hourly snapshots of the RDS DB instance. Copy the snapshots to a separate Region. Add an Amazon ElastiCache cluster in front of the existing RDS database. Create AMIs of the EC2 instances that host the application frontend. Copy the AMIs to the separate Region. Use S3 Cross-Region Replication (CRR) from the original S3 bucket to a new S3 bucket in the separate Region. During a disaster, restore the database from the latest RDS snapshot. Launch EC2 instances from the AMIs and create an ALB to present the application to end users. Configure the application to point to the new S3 bucket.
Answer: C
Explanation:
Implementing a disaster recovery strategy that minimizes data loss and addresses
performance issues involves creating a read replica of the RDS DB instance in a separate
region and directing the finance team's queries to this replica. This solution alleviates the
performance impact on the primary database. Using Amazon S3 Cross-Region Replication
(CRR) ensures that processed documents are available in the disaster recovery region. In
the event of a disaster, the read replica can be promoted to a standalone DB instance, and
EC2 instances can be launched from pre-created AMIs to serve the web frontend, thereby
restoring the application with minimal downtime and data loss.
References: AWS Documentation on Amazon RDS Read Replicas, S3 Cross-Region
Replication, and Amazon EC2 AMIs provides comprehensive guidance on
implementing a robust disaster recovery solution. This approach is in line with AWS best
practices for high availability and disaster recovery planning.
Question # 42
A company wants to use Amazon Workspaces in combination with thin client devices to
replace aging desktops. Employees use the desktops to access applications that work with
clinical trial data. Corporate security policy states that access to the applications must be restricted to only company branch office locations. The company is considering adding an
additional branch office in the next 6 months.
Which solution meets these requirements with the MOST operational efficiency?
A. Create an IP access control group rule with the list of public addresses from the branch offices. Associate the IP access control group with the WorkSpaces directory.
B. Use AWS Firewall Manager to create a web ACL rule with an IPSet with the list of public addresses from the branch office locations. Associate the web ACL with the WorkSpaces directory.
C. Use AWS Certificate Manager (ACM) to issue trusted device certificates to the machines deployed in the branch office locations. Enable restricted access on the WorkSpaces directory.
D. Create a custom Workspace image with Windows Firewall configured to restrict access to the public addresses of the branch offices. Use the image to deploy the WorkSpaces.
Answer: A
Explanation: Utilizing an IP access control group rule with the list of public addresses from
branch offices and associating it with the Amazon WorkSpaces directory is the most
operationally efficient solution. This method ensures that access to WorkSpaces is
restricted to specified locations, aligning with the corporate security policy. This approach
offers simplicity and flexibility, especially with the potential addition of a new branch office,
as updating the IP access control group is straightforward.
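For illustration, a boto3 sketch of creating and associating an IP access control group; the CIDR blocks and directory ID are placeholders, and a future branch office can later be added with a single authorize_ip_rules call.

# Illustrative sketch: restrict WorkSpaces access to branch office public address ranges.
import boto3

workspaces = boto3.client("workspaces")

group = workspaces.create_ip_group(
    GroupName="branch-offices",
    GroupDesc="Public egress ranges of company branch offices",
    UserRules=[
        {"ipRule": "203.0.113.0/24", "ruleDesc": "Branch office 1"},
        {"ipRule": "198.51.100.0/24", "ruleDesc": "Branch office 2"},
    ],
)

workspaces.associate_ip_groups(
    DirectoryId="d-9067xxxxxx",
    GroupIds=[group["GroupId"]],
)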
References: AWS Documentation on Amazon WorkSpaces and IP Access Control Groups
provides detailed instructions on how to implement access restrictions based on IP
addresses. This solution aligns with best practices for securing virtual desktops while
maintaining operational efficiency.
Question # 43
A software development company has multiple engineers who are working remotely. The
company is running Active Directory Domain Services (AD DS) on an Amazon EC2
instance. The company's security policy states that all internal, nonpublic services that are
deployed in a VPC must be accessible through a VPN. Multi-factor authentication (MFA)
must be used for access to a VPN.
What should a solutions architect do to meet these requirements?
A. Create an AWS Site-to-Site VPN connection. Configure integration between the VPN and AD DS. Use an Amazon WorkSpaces client with MFA support enabled to establish a VPN connection.
B. Create an AWS Client VPN endpoint. Create an AD Connector directory for integration with AD DS. Enable MFA for AD Connector. Use AWS Client VPN to establish a VPN connection.
C. Create multiple AWS Site-to-Site VPN connections by using AWS VPN CloudHub. Configure integration between AWS VPN CloudHub and AD DS. Use AWS Copilot to establish a VPN connection.
D. Create an Amazon WorkLink endpoint. Configure integration between Amazon WorkLink and AD DS. Enable MFA in Amazon WorkLink. Use AWS Client VPN to establish a VPN connection.
Answer: B
Explanation:
Setting up an AWS Client VPN endpoint and integrating it with Active Directory Domain
Services (AD DS) using an AD Connector directory enables secure remote access to
internal services deployed in a VPC. Enabling multi-factor authentication (MFA) for AD
Connector enhances security by adding an additional layer of authentication. This solution
meets the company's requirements for secure remote access through a VPN with MFA,
ensuring that the security policy is adhered to while providing a seamless experience for
the remote engineers.
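A minimal boto3 sketch of the Client VPN endpoint creation with directory-service authentication; the client CIDR, server certificate ARN, and AD Connector directory ID are placeholders, and MFA is assumed to be enabled on the AD Connector itself.

# Illustrative sketch: Client VPN endpoint that authenticates against the AD Connector directory.
import boto3

ec2 = boto3.client("ec2")

ec2.create_client_vpn_endpoint(
    ClientCidrBlock="10.100.0.0/22",
    ServerCertificateArn="arn:aws:acm:us-east-1:111111111111:certificate/example",
    AuthenticationOptions=[{
        "Type": "directory-service-authentication",
        "ActiveDirectory": {"DirectoryId": "d-9067xxxxxx"},
    }],
    ConnectionLogOptions={"Enabled": False},
    Description="Remote engineer VPN",
)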
References: AWS Documentation on AWS Client VPN and AD Connector provides
detailed instructions on setting up a Client VPN endpoint and integrating it with existing
Active Directory for authentication. This solution aligns with AWS best practices for secure
remote access to AWS resources.
Question # 44
A company needs to improve the reliability of its ticketing application. The application runs on an
Amazon Elastic Container Service (Amazon ECS) cluster. The company uses Amazon
CloudFront to serve the application. A single ECS service of the ECS cluster is the
CloudFront distribution's origin.
The application allows only a specific number of active users to enter a ticket purchasing
flow. These users are identified by an encrypted attribute in their JSON Web Token (JWT).
All other users are redirected to a waiting room module until there is available capacity for
purchasing.
The application is experiencing high loads. The waiting room module is working as
designed, but load on the waiting room is disrupting the application's availability. This
disruption is negatively affecting the application's ticket sale transactions.
Which solution will provide the MOST reliability for ticket sale transactions during periods of
high load?
A. Create a separate service in the ECS cluster for the waiting room. Use a separate scaling configuration. Ensure that the ticketing service uses the JWT information and appropriately forwards requests to the waiting room service.
B. Move the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Split the waiting room module into a pod that is separate from the ticketing pod. Make the ticketing pod part of a StatefulSet. Ensure that the ticketing pod uses the JWT information and appropriately forwards requests to the waiting room pod.
C. Create a separate service in the ECS cluster for the waiting room. Use a separate scaling configuration. Create a CloudFront function that inspects the JWT information and appropriately forwards requests to the ticketing service or the waiting room service.
D. Move the application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Split the waiting room module into a pod that is separate from the ticketing pod. Use AWS App Mesh by provisioning the App Mesh controller for Kubernetes. Enable mTLS authentication and service-to-service authentication for communication between the ticketing pod and the waiting room pod. Ensure that the ticketing pod uses the JWT information and appropriately forwards requests to the waiting room pod.
Answer: C
Explanation:
Implementing a CloudFront function that inspects the JWT information and appropriately
forwards requests either to the ticketing service or the waiting room service within the
Amazon ECS cluster enhances reliability during high load periods. This solution segregates
the load between the main application and the waiting room, ensuring that the ticketing
service remains unaffected by the high load on the waiting room. Using CloudFront
functions for request routing based on JWT attributes allows for efficient distribution of user
traffic, thereby maintaining the application's availability and performance during peak times.
References: AWS Documentation on Amazon CloudFront Functions provides guidance on
creating and deploying functions that can inspect and manipulate HTTP(S) requests at the
edge, close to the users. This approach is in line with best practices for scaling and
managing high-traffic web applications.
Question # 45
A company is currently in the design phase of an application that will need an RPO of less
than 5 minutes and an RTO of less than 10 minutes. The solutions architecture team is
forecasting that the database will store approximately 10 TB of data. As part of the design, they are looking for a database solution that will provide the company with the ability to fail
over to a secondary Region.
Which solution will meet these business requirements at the LOWEST cost?
A. Deploy an Amazon Aurora DB cluster and take snapshots of the cluster every 5 minutes. Once a snapshot is complete, copy the snapshot to a secondary Region to serve as a backup in the event of a failure.
B. Deploy an Amazon RDS instance with a cross-Region read replica in a secondary Region. In the event of a failure, promote the read replica to become the primary.
C. Deploy an Amazon Aurora DB cluster in the primary Region and another in a secondary Region. Use AWS DMS to keep the secondary Region in sync.
D. Deploy an Amazon RDS instance with a read replica in the same Region. In the event of a failure, promote the read replica to become the primary.
Answer: B
Explanation: The best solution is to deploy an Amazon RDS instance with a cross-Region
read replica in a secondary Region. This will provide the company with a database solution
that can fail over to the secondary Region in case of a disaster. The read replica will have
minimal replication lag and can be promoted to become the primary in less than 10
minutes, meeting the RTO requirement. The RPO requirement of less than 5 minutes can
also be met because cross-Region read replica replication is asynchronous with typically
low lag. This solution will also have the lowest cost compared to the
other options, as it does not involve additional services or resources. References: [Amazon
RDS User Guide], [Amazon Aurora User Guide]
Question # 46
A company is using an organization in AWS Organizations to manage AWS accounts. For
each new project, the company creates a new linked account. After the creation of a new
account, the root user signs in to the new account and creates a service request to increase the service quota for Amazon EC2 instances. A solutions architect needs to
automate this process.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an Amazon EventBridge rule to detect creation of a new account. Send the event to an Amazon Simple Notification Service (Amazon SNS) topic that invokes an AWS Lambda function. Configure the Lambda function to run the request-service-quota-increase command to request a service quota increase for EC2 instances.
B. Create a Service Quotas request template in the management account. Configure the desired service quota increases for EC2 instances.
C. Create an AWS Config rule in the management account to set the service quota for EC2 instances.
D. Create an Amazon EventBridge rule to detect creation of a new account. Send the event to an Amazon Simple Notification Service (Amazon SNS) topic that invokes an AWS Lambda function. Configure the Lambda function to run the create-case command to request a service quota increase for EC2 instances.
Answer: A
Explanation:
Automating the process of increasing service quotas for Amazon EC2 instances in new
AWS accounts with minimal operational overhead can be effectively achieved by using
Amazon EventBridge, Amazon SNS, and AWS Lambda. An EventBridge rule can detect
the creation of a new account and trigger an SNS topic, which in turn invokes a Lambda
function. This function can then programmatically request a service quota increase for EC2
instances using the AWS Service Quotas API. This approach streamlines the process,
reduces manual intervention, and ensures that new accounts are automatically configured
with the desired service quotas.
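As a hedged sketch of the Lambda function's core logic, the handler below calls the Service Quotas API; the quota code and desired value are placeholder assumptions, and the function is assumed to run with credentials that are valid in the newly created account.

# Illustrative Lambda handler: request an EC2 quota increase after account creation.
import boto3

def handler(event, context):
    quotas = boto3.client("service-quotas")
    response = quotas.request_service_quota_increase(
        ServiceCode="ec2",
        QuotaCode="L-1216C47A",   # assumed quota code for Running On-Demand Standard instances
        DesiredValue=256.0,
    )
    return response["RequestedQuota"]["Status"]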
References:
Amazon EventBridge Documentation: Provides guidance on setting up event rules
for detecting AWS account creation.
AWS Lambda Documentation: Details how to create and configure Lambda
functions to perform automated tasks, such as requesting service quota increases.
AWS Service Quotas Documentation: Offers information on managing and
requesting increases for AWS service quotas programmatically.
Question # 47
A company needs to gather data from an experiment in a remote location that does not
have internet connectivity. During the experiment, sensors that are connected to a local
network will generate 6 TB of data in a proprietary format over the course of 1 week. The
sensors can be configured to upload their data files to an FTP server periodically, but the
sensors do not have their own FTP server. The sensors also do not support other
protocols. The company needs to collect the data centrally and move the data to object
storage in the AWS Cloud as soon as possible after the experiment.
Which solution will meet these requirements?
A. Order an AWS Snowball Edge Compute Optimized device. Connect the device to the local network. Configure AWS DataSync with a target bucket name, and upload the data over NFS to the device. After the experiment, return the device to AWS so that the data can be loaded into Amazon S3.
B. Order an AWS Snowcone device, including an Amazon Linux 2 AMI. Connect the device to the local network. Launch an Amazon EC2 instance on the device. Create a shell script that periodically downloads data from each sensor. After the experiment, return the device to AWS so that the data can be loaded as an Amazon Elastic Block Store (Amazon EBS) volume.
C. Order an AWS Snowcone device, including an Amazon Linux 2 AMI. Connect the device to the local network. Launch an Amazon EC2 instance on the device. Install and configure an FTP server on the EC2 instance. Configure the sensors to upload data to the EC2 instance. After the experiment, return the device to AWS so that the data can be loaded into Amazon S3.
D. Order an AWS Snowcone device. Connect the device to the local network. Configure the device to use Amazon FSx. Configure the sensors to upload data to the device. Configure AWS DataSync on the device to synchronize the uploaded data with an Amazon S3 bucket. Return the device to AWS so that the data can be loaded as an Amazon Elastic Block Store (Amazon EBS) volume.
Answer: C
Explanation: For collecting data from remote sensors without internet connectivity, using
an AWS Snowcone device with an Amazon EC2 instance running an FTP server presents
a practical solution. This setup allows the sensors to upload data to the EC2 instance via
FTP, and after the experiment, the Snowcone device can be returned to AWS for data
ingestion into Amazon S3. This approach minimizes operational complexity and ensures
efficient data transfer to AWS for further processing or storage.
References: AWS Documentation on AWS Snowcone and Amazon EC2 provides detailed
guidance on deploying compute and storage capabilities in edge locations. This solution
leverages AWS's edge computing devices to address challenges associated with data
collection in remote or disconnected environments.
Question # 48
A company has Linux-based Amazon EC2 instances. Users must access the instances by
using SSH with EC2 SSH Key pairs. Each machine requires a unique EC2 Key pair.
The company wants to implement a key rotation policy that will, upon request,
automatically rotate all the EC2 key pairs and keep the keys in a securely encrypted place.
The company will accept less than 1 minute of downtime during key rotation.
Which solution will meet these requirements?
A. Store all the keys in AWS Secrets Manager. Define a Secrets Manager rotation schedule to invoke an AWS Lambda function to generate new key pairs. Replace public keys on EC2 instances. Update the private keys in Secrets Manager.
B. Store all the keys in Parameter Store, a capability of AWS Systems Manager, as a string. Define a Systems Manager maintenance window to invoke an AWS Lambda function to generate new key pairs. Replace public keys on EC2 instances. Update the private keys in Parameter Store.
C. Import the EC2 key pairs into AWS Key Management Service (AWS KMS). Configure automatic key rotation for these key pairs. Create an Amazon EventBridge scheduled rule to invoke an AWS Lambda function to initiate the key rotation in AWS KMS.
D. Add all the EC2 instances to Fleet Manager, a capability of AWS Systems Manager. Define a Systems Manager maintenance window to issue a Systems Manager Run Command document to generate new key pairs and to rotate public keys to all the instances in Fleet Manager.
Answer: A
Explanation:
To meet the requirements for automatic key rotation of EC2 SSH key pairs with minimal
downtime, storing the keys in AWS Secrets Manager and defining a rotation schedule is
the most suitable solution. AWS Secrets Manager supports automatic rotation of secrets,
including SSH keys, by invoking a Lambda function that can handle the creation of new key
pairs and the replacement of public keys on EC2 instances. Updating the corresponding
private keys in Secrets Manager ensures secure and centralized management of SSH
keys, complying with the key rotation policy and minimizing operational overhead.
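For illustration, a skeleton of the four-step Secrets Manager rotation contract; the key-generation, distribution, and test steps are only outlined as comments, and all names and values are placeholders.

# Illustrative skeleton of a Secrets Manager rotation Lambda for SSH key pairs.
import boto3

secrets = boto3.client("secretsmanager")

def handler(event, context):
    arn, token, step = event["SecretId"], event["ClientRequestToken"], event["Step"]

    if step == "createSecret":
        # Generate a new SSH key pair and stage the private key as AWSPENDING.
        new_private_key = "-----BEGIN OPENSSH PRIVATE KEY-----..."  # placeholder material
        secrets.put_secret_value(
            SecretId=arn,
            ClientRequestToken=token,
            SecretString=new_private_key,
            VersionStages=["AWSPENDING"],
        )
    elif step == "setSecret":
        pass  # push the new public key to the instance's authorized_keys (e.g. via SSM Run Command)
    elif step == "testSecret":
        pass  # verify an SSH connection using the AWSPENDING private key
    elif step == "finishSecret":
        # Promote AWSPENDING to AWSCURRENT so the new key becomes active.
        metadata = secrets.describe_secret(SecretId=arn)
        current = [v for v, s in metadata["VersionIdsToStages"].items() if "AWSCURRENT" in s][0]
        secrets.update_secret_version_stage(
            SecretId=arn,
            VersionStage="AWSCURRENT",
            MoveToVersionId=token,
            RemoveFromVersionId=current,
        )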
References:
AWS Secrets Manager Documentation: Describes how to store and rotate secrets,
including SSH keys, using Secrets Manager and Lambda functions.
AWS Lambda Documentation: Provides information on creating Lambda functions
for custom secret rotation logic.
AWS Best Practices for Security: Highlights the importance of key rotation and
how AWS services like Secrets Manager can facilitate secure and automated key
management.
Question # 49
A company has a Windows-based desktop application that is packaged and deployed to the users' Windows machines. The company recently acquired another company that has
employees who primarily use machines with a Linux operating system. The acquiring
company has decided to migrate and rehost the Windows-based desktop application to
AWS.
All employees must be authenticated before they use the application. The acquiring
company uses Active Directory on premises but wants a simplified way to manage access
to the application on AWS for all the employees.
Which solution will rehost the application on AWS with the LEAST development effort?
A. Set up and provision an Amazon WorkSpaces virtual desktop for every employee. Implement authentication by using Amazon Cognito identity pools. Instruct employees to run the application from their provisioned WorkSpaces virtual desktops.
B. Create an Auto Scaling group of Windows-based Amazon EC2 instances. Join each EC2 instance to the company's Active Directory domain. Implement authentication by using the Active Directory that is running on premises. Instruct employees to run the application by using a Windows remote desktop.
C. Use an Amazon AppStream 2.0 image builder to create an image that includes the application and the required configurations. Provision an AppStream 2.0 On-Demand fleet with a dynamic Fleet Auto Scaling process for running the image. Implement authentication by using AppStream 2.0 user pools. Instruct the employees to access the application by starting browser-based AppStream 2.0 streaming sessions.
D. Refactor and containerize the application to run as a web-based application. Run the application in Amazon Elastic Container Service (Amazon ECS) on AWS Fargate with step scaling policies. Implement authentication by using Amazon Cognito user pools. Instruct the employees to run the application from their browsers.
Answer: C
Explanation: Amazon AppStream 2.0 offers a streamlined solution for rehosting a
Windows-based desktop application on AWS with minimal development effort. By creating
an AppStream 2.0 image that includes the application and using an On-Demand fleet for
streaming, the application becomes accessible from any device, including Linux machines.
AppStream 2.0 user pools can be used for authentication, simplifying access management
without the need for extensive changes to the application or infrastructure.
References: AWS Documentation on Amazon AppStream 2.0 provides insights into setting
up application streaming solutions. This approach is recommended for delivering desktop
applications to diverse operating systems without the complexity of managing virtual
desktops or extensive application refactoring.
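As a rough, hedged sketch of the AppStream 2.0 side of this answer (all names are hypothetical), an On-Demand fleet could be provisioned from a prebuilt image with boto3 along these lines:

import boto3

appstream = boto3.client("appstream")

# Create an On-Demand fleet from an image that already contains the desktop application.
appstream.create_fleet(
    Name="desktop-app-fleet",                # hypothetical fleet name
    ImageName="desktop-app-image-v1",        # image produced by the image builder
    InstanceType="stream.standard.medium",
    FleetType="ON_DEMAND",
    ComputeCapacity={"DesiredInstances": 2},
)
appstream.start_fleet(Name="desktop-app-fleet")
# A stack and user pool assignments are still needed before employees can start
# browser-based streaming sessions.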
Question # 50
A company is developing an application that will display financial reports. The company
needs a solution that can store financial information that comes from multiple systems. The
solution must provide the reports through a web interface and must serve the data with less
than 500 milliseconds of latency to end users. The solution also must be highly available
and must have an RTO of 30 seconds.
Which solution will meet these requirements?
A. Use an Amazon Redshift cluster to store the data. Use a static website that is hosted on Amazon S3 with backend APIs that are served by an Amazon Elastic Kubernetes Service (Amazon EKS) cluster to provide the reports to the application.
B. Use Amazon S3 to store the data. Use Amazon Athena to provide the reports to the application. Use AWS App Runner to serve the application to view the reports.
C. Use Amazon DynamoDB to store the data. Use an embedded Amazon QuickSight dashboard with Direct Query datasets to provide the reports to the application.
D. Use Amazon Keyspaces (for Apache Cassandra) to store the data. Use AWS Elastic Beanstalk to provide the reports to the application.
Answer: C
Explanation: For an application requiring low-latency access to financial information and
high availability with a Recovery Time Objective (RTO) of 30 seconds, using Amazon
DynamoDB for data storage and Amazon QuickSight for reporting is the most suitable
solution. DynamoDB offers fast, consistent, and single-digit millisecond latency for data
retrieval, meeting the latency requirements. QuickSight's ability to directly query
DynamoDB datasets and provide embedded dashboards for reporting enables real-time
financial report generation. This combination ensures high availability and meets the RTO
requirement, providing a robust solution for the application's needs.
References:
Amazon DynamoDB Documentation: Describes the features and benefits of
DynamoDB, emphasizing its performance and scalability for applications requiring
low-latency access to data.
Amazon QuickSight Documentation: Provides information on using QuickSight for
creating and embedding interactive dashboards, including direct querying of
DynamoDB datasets for real-time data visualization.
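To make the latency argument concrete, here is a minimal boto3 sketch of a single-item read from a hypothetical reports table (the table and key names are assumptions):

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("financial-reports")  # hypothetical table name

# Single-item reads like this typically return in single-digit milliseconds,
# which is well inside the 500 ms latency budget described above.
response = table.get_item(Key={"report_id": "2024-Q3-revenue"})
report = response.get("Item")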
Question # 51
A company is planning to migrate an on-premises data center to AWS. The company
currently hosts the data center on Linux-based VMware VMs. A solutions architect must
collect information about network dependencies between the VMs. The information must
be in the form of a diagram that details host IP addresses, hostnames, and network
connection information.
Which solution will meet these requirements?
A. Use AWS Application Discovery Service. Select an AWS Migration Hub home AWS Region. Install the AWS Application Discovery Agent on the on-premises servers for data collection. Grant permissions to Application Discovery Service to use the Migration Hub network diagrams.
B. Use the AWS Application Discovery Service Agentless Collector for server data collection. Export the network diagrams from the AWS Migration Hub in .png format.
C. Install the AWS Application Migration Service agent on the on-premises servers for data collection. Use AWS Migration Hub data in Workload Discovery on AWS to generate network diagrams.
D. Install the AWS Application Migration Service agent on the on-premises servers for data collection. Export data from AWS Migration Hub in .csv format into an Amazon CloudWatch dashboard to generate network diagrams.
Answer: B
Explanation: To effectively gather information about network dependencies between VMs
in an on-premises data center for migration to AWS, it's crucial to use tools that can
capture detailed application and server dependencies. The AWS Application Discovery
Service is designed for this purpose, particularly when migrating from environments like
Linux-based VMware VMs. By installing the AWS Application Discovery Agent on the on-premises
servers, the service can collect necessary data such as host IP addresses,
hostnames, and network connection information. This data is crucial for creating a
comprehensive network diagram that outlines the interactions and dependencies between various components of the on-premises infrastructure. The integration with AWS Migration
Hub enhances this process by allowing the visualization of these dependencies in a
network diagram format, aiding in the planning and execution of the migration process. This
approach ensures a thorough understanding of the on-premises environment, which is
essential for a successful migration to AWS.
References:
AWS Documentation on Application Discovery Service: This provides detailed guidance on
how to use the Application Discovery Service, including the installation and configuration of
the Discovery Agent.
AWS Migration Hub User Guide: Offers insights on how to integrate Application Discovery
Service data with Migration Hub for comprehensive migration planning and tracking.
AWS Solutions Architect Professional Learning Path: Contains advanced topics and best
practices for migrating complex on-premises environments to AWS, emphasizing the use of
AWS services and tools for effective migration planning and execution.
Question # 52
A company maintains information on premises in approximately 1 million .csv files that are
hosted on a VM. The data initially is 10 TB in size and grows at a rate of 1 TB each week.
The company needs to automate backups of the data to the AWS Cloud.
Backups of the data must occur daily. The company needs a solution that applies custom
filters to back up only a subset of the data that is located in designated source directories.
The company has set up an AWS Direct Connect connection.
Which solution will meet the backup requirements with the LEAST operational overhead?
A. Use the Amazon S3 CopyObject API operation with multipart upload to copy the existing data to Amazon S3. Use the CopyObject API operation to replicate new data to Amazon S3 daily.
B. Create a backup plan in AWS Backup to back up the data to Amazon S3. Schedule the backup plan to run daily.
C. Install the AWS DataSync agent as a VM that runs on the on-premises hypervisor. Configure a DataSync task to replicate the data to Amazon S3 daily.
D. Use an AWS Snowball Edge device for the initial backup. Use AWS DataSync for incremental backups to Amazon S3 daily.
Answer: C
Explanation:
AWS DataSync is an online data transfer service that is designed to help customers get their data to and from AWS quickly, easily, and securely. Using DataSync, you can copy
data from your on-premises NFS or SMB shares directly to Amazon S3, Amazon EFS, or
Amazon FSx for Windows File Server. DataSync uses a purpose-built, parallel transfer
protocol for speeds up to 10x faster than open source tools. DataSync also has built-in
verification of data both in flight and at rest, so you can be confident that your data was
transferred successfully. DataSync allows you to apply filters to select which files or folders
to transfer, based on file name, size, or modification time. You can also schedule your
DataSync tasks to run daily, weekly, or monthly, or on demand. DataSync is integrated with
AWS Direct Connect, so you can take advantage of your existing private connection to
AWS. DataSync is also a fully managed service, so you do not need to provision,
configure, or maintain any infrastructure for data transfer.
Option A is incorrect because the Amazon S3 CopyObject API operation does not support
filtering or scheduling, and it would require you to write and maintain custom scripts to
automate the backup process.
Option B is incorrect because AWS Backup does not support filtering or transferring data
from on-premises sources to Amazon S3. AWS Backup is a fully managed backup service
that makes it easy to centralize and automate the backup of data across AWS services.
Option D is incorrect because AWS Snowball Edge is a physical device that is used for
offline data transfer when network bandwidth is limited or unavailable. It is not suitable for
daily backups or incremental transfers. AWS Snowball Edge also does not support filtering
or scheduling.
References:
1: Considering four different replication options for data in Amazon S3
2: Protect your file and backup archives using AWS DataSync and Amazon S3
Glacier
3: AWS DataSync FAQs
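A hedged boto3 sketch of the filtering and daily scheduling described above; the location ARNs and the directory pattern are placeholders:

import boto3

datasync = boto3.client("datasync")

# Create a task that copies only the designated source directories and runs once a day.
datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-onprem",   # placeholder
    DestinationLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-s3",  # placeholder
    Name="daily-csv-backup",
    Includes=[{"FilterType": "SIMPLE_PATTERN", "Value": "/designated-dir/*"}],
    Schedule={"ScheduleExpression": "cron(0 2 * * ? *)"},  # every day at 02:00 UTC
)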
Question # 53
A company needs to migrate an on-premises SFTP site to AWS. The SFTP site currently
runs on a Linux VM. Uploaded files are made available to downstream applications through
an NFS share.
As part of the migration to AWS, a solutions architect must implement high availability. The
solution must provide external vendors with a set of static public IP addresses that the
vendors can allow. The company has set up an AWS Direct Connect connection between
its on-premises data center and its VPC.
Which solution will meet these requirements with the least operational overhead?
A. Create an AWS Transfer Family server. Configure an internet-facing VPC endpoint for the Transfer Family server, and specify an Elastic IP address for each subnet. Configure the Transfer Family server to place files into an Amazon Elastic File System (Amazon EFS) file system that is deployed across multiple Availability Zones. Modify the configuration on the downstream applications that access the existing NFS share to mount the EFS endpoint instead.
B. Create an AWS Transfer Family server. Configure a publicly accessible endpoint for the Transfer Family server. Configure the Transfer Family server to place files into an Amazon Elastic File System (Amazon EFS) file system that is deployed across multiple Availability Zones. Modify the configuration on the downstream applications that access the existing NFS share to mount the EFS endpoint instead.
C. Use AWS Application Migration Service to migrate the existing Linux VM to an Amazon EC2 instance. Assign an Elastic IP address to the EC2 instance. Mount an Amazon Elastic File System (Amazon EFS) file system to the EC2 instance. Configure the SFTP server to place files in the EFS file system. Modify the configuration on the downstream applications that access the existing NFS share to mount the EFS endpoint instead.
D. Use AWS Application Migration Service to migrate the existing Linux VM to an AWS Transfer Family server. Configure a publicly accessible endpoint for the Transfer Family server. Configure the Transfer Family server to place files into an Amazon FSx for Lustre file system that is deployed across multiple Availability Zones. Modify the configuration on the downstream applications that access the existing NFS share to mount the FSx for Lustre endpoint instead.
Answer: A
Explanation:
To migrate an on-premises SFTP site to AWS with high availability and a set of static public
IP addresses for external vendors, the best solution is to create an AWS Transfer Family
server with an internet-facing VPC endpoint. Assigning Elastic IP addresses to each subnet
and configuring the server to store files in an Amazon Elastic File System (EFS) that spans
multiple Availability Zones ensures high availability and consistent access. This approach
minimizes operational overhead by leveraging AWS managed services and eliminates the
need to manage underlying infrastructure.
References: AWS Documentation on AWS Transfer Family and Amazon Elastic File
System provides detailed instructions on setting up a highly available SFTP environment
on AWS. This solution is in line with AWS best practices for migrating and modernizing
applications with minimal disruption and ensuring high availability and security.
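A minimal boto3 sketch of the Transfer Family setup described above (the VPC ID, subnet IDs, and Elastic IP allocation IDs are placeholders):

import boto3

transfer = boto3.client("transfer")

# Internet-facing VPC endpoint with one Elastic IP per subnet, storing files in Amazon EFS.
transfer.create_server(
    Protocols=["SFTP"],
    Domain="EFS",
    EndpointType="VPC",
    EndpointDetails={
        "VpcId": "vpc-0123456789abcdef0",                        # placeholder
        "SubnetIds": ["subnet-aaa111", "subnet-bbb222"],          # placeholders
        "AddressAllocationIds": ["eipalloc-111", "eipalloc-222"]  # static public IPs for vendors
    },
    IdentityProviderType="SERVICE_MANAGED",
)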
Question # 54
A company's factory and automation applications are running in a single VPC. More than 23
applications run on a combination of Amazon EC2, Amazon Elastic Container Service
(Amazon ECS), and Amazon RDS.
The company has software engineers spread across three teams. One of the three teams
owns each application, and each team is responsible for the cost and performance of all of
its applications. Team resources have tags that represent their application and team. The
teams use IAM access for daily activities.
The company needs to determine which costs on the monthly AWS bill are attributable to
each application or team. The company also must be able to create reports to compare
costs from the last 12 months and to help forecast costs for the next 12 months. A solutions
architect must recommend an AWS Billing and Cost Management solution that provides these cost reports.
Which combination of actions will meet these requirements? (Select THREE.)
A. Activate the user-defined cost allocation tags that represent the application and the team.
B. Activate the AWS-generated cost allocation tags that represent the application and the team.
C. Create a cost category for each application in Billing and Cost Management.
D. Activate IAM access to Billing and Cost Management.
E. Create a cost budget.
F. Enable Cost Explorer.
Answer: A,C,F
Explanation:
To attribute AWS costs to specific applications or teams and enable detailed cost analysis
and forecasting, the solution architect should recommend the following actions: A.
Activating user-defined cost allocation tags for resources associated with each application
and team allows for detailed tracking of costs by these identifiers. C. Creating a cost
category for each application within AWS Billing and Cost Management enables the
organization to group costs according to application, facilitating detailed reporting and
analysis. F. Enabling Cost Explorer is essential for analyzing and visualizing AWS
spending over time. It provides the capability to view historical costs and forecast future
expenses, supporting the company's requirement for cost comparison and forecasting.
References:
AWS Billing and Cost Management Documentation: Covers the activation of cost
allocation tags, creation of cost categories, and the use of Cost Explorer for cost
management.
AWS Tagging Strategies: Provides best practices for implementing tagging
strategies that support cost allocation and reporting.
AWS Cost Explorer Documentation: Details how to use Cost Explorer to analyze
and forecast AWS costs.
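As an illustrative boto3 sketch (the tag key and date ranges are assumptions), costs can be grouped by an activated cost allocation tag and forecast with Cost Explorer:

import boto3

ce = boto3.client("ce")

# Last 12 months of cost, grouped by the user-defined "application" tag.
history = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-09-01", "End": "2024-09-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "application"}],
)

# 12-month forecast of total cost.
forecast = ce.get_cost_forecast(
    TimePeriod={"Start": "2024-09-01", "End": "2025-09-01"},
    Metric="UNBLENDED_COST",
    Granularity="MONTHLY",
)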
Question # 55
A company's compliance audit reveals that some Amazon Elastic Block Store (Amazon
EBS) volumes that were created in an AWS account were not encrypted. A solutions
architect must implement a solution to encrypt all new EBS volumes at rest.
Which solution will meet this requirement with the LEAST effort?
A. Create an Amazon EventBridge rule to detect the creation of unencrypted EBS volumes. Invoke an AWS Lambda function to delete noncompliant volumes.
B. Use AWS Audit Manager with data encryption.
C. Create an AWS Config rule to detect the creation of a new EBS volume. Encrypt the volume by using AWS Systems Manager Automation.
D. Turn on EBS encryption by default in all AWS Regions.
Answer: D
Explanation:
The most effortless way to ensure that all new Amazon Elastic Block Store (EBS) volumes
are encrypted at rest is to enable EBS encryption by default in all AWS Regions. This
setting automatically encrypts all new EBS volumes and snapshots created in the account,
thereby ensuring compliance with encryption policies without the need for manual
intervention or additional monitoring.
References: AWS Documentation on Amazon EBS encryption provides guidance on
enabling EBS encryption by default. This approach aligns with AWS best practices for data
protection and compliance, ensuring that all new EBS volumes adhere to encryption
requirements with minimal operational effort.
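A small boto3 sketch of what option D amounts to; note that the setting is per Region, so the call must be repeated in every Region the account uses (the Region list below is an example):

import boto3

# EBS encryption by default is a Regional setting, so loop over the Regions in use.
for region in ["us-east-1", "eu-west-1"]:  # example Regions
    ec2 = boto3.client("ec2", region_name=region)
    ec2.enable_ebs_encryption_by_default()
    status = ec2.get_ebs_encryption_by_default()["EbsEncryptionByDefault"]
    print(region, "EBS encryption by default:", status)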
Question # 56
A company is preparing to deploy an Amazon Elastic Kubernetes Service (Amazon EKS)
cluster for a workload. The company expects the cluster to support an
unpredictable number of stateless pods. Many of the pods will be created during a short
time period as the workload automatically scales the number of replicas that the workload
uses.
Which solution will MAXIMIZE node resilience?
A. Use a separate launch template to deploy the EKS control plane into a second cluster that is separate from the workload node groups.
B. Update the workload node groups. Use a smaller number of node groups and larger instances in the node groups.
C. Configure the Kubernetes Cluster Autoscaler to ensure that the compute capacity of the workload node groups stays under provisioned.
D. Configure the workload to use topology spread constraints that are based on Availability Zone.
Answer: D
Explanation:
Configuring the workload to use topology spread constraints that are based on Availability
Zone will maximize the node resilience of the workload node groups. This will ensure that
the pods are evenly distributed across different Availability Zones, reducing the impact of
failures or disruptions in one Availability Zone2. This will also improve the availability and
scalability of the workload node groups, as they can leverage the low-latency, high-throughput,
and highly redundant networking between Availability Zones1.
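Sketched as a Python dictionary that mirrors the relevant pod-spec fields (the label values are hypothetical), a zone-based topology spread constraint looks roughly like this:

# Fragment of a pod template spec; only the fields relevant to zone spreading are shown.
pod_spec_fragment = {
    "topologySpreadConstraints": [
        {
            "maxSkew": 1,
            "topologyKey": "topology.kubernetes.io/zone",  # spread pods across Availability Zones
            "whenUnsatisfiable": "ScheduleAnyway",
            "labelSelector": {"matchLabels": {"app": "stateless-workload"}},  # hypothetical label
        }
    ]
}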
Question # 57
A company wants to design a disaster recovery (DR) solution for an application that runs in
the company's data center. The application writes to an SMB file share and creates a copy
on a second file share. Both file shares are in the data center. The application uses two
types of files: metadata files and image files.
The company wants to store the copy on AWS. The company needs the ability to use SMB
to access the data from either the data center or AWS if a disaster occurs. The copy of the
data is rarely accessed but must be available within 5 minutes.
Which solution will meet these requirements MOST cost-effectively?
A. Deploy AWS Outposts with Amazon S3 storage. Configure a Windows Amazon EC2 instance on Outposts as a file server.
B. Deploy an Amazon FSx File Gateway. Configure an Amazon FSx for Windows File Server Multi-AZ file system that uses SSD storage.
C. Deploy an Amazon S3 File Gateway. Configure the S3 File Gateway to use Amazon S3 Standard-Infrequent Access (S3 Standard-IA) for the metadata files and to use S3 Glacier Deep Archive for the image files.
D. Deploy an Amazon S3 File Gateway. Configure the S3 File Gateway to use Amazon S3 Standard-Infrequent Access (S3 Standard-IA) for the metadata files and image files.
Answer: C
Explanation:
The correct solution is to use an Amazon S3 File Gateway to store the copy of the SMB file
share on AWS. An S3 File Gateway enables on-premises applications to store and access
objects in Amazon S3 using the SMB protocol. The S3 File Gateway can also be accessed from AWS using the SMB protocol, which provides the ability to use the data from either
the data center or AWS if a disaster occurs. The S3 File Gateway supports tiering of data
to different S3 storage classes based on the file type. This allows the company to optimize
the storage costs by using S3 Standard-Infrequent Access (S3 Standard-IA) for the
metadata files, which are rarely accessed but must be available within 5 minutes, and S3
Glacier Deep Archive for the image files, which are the lowest-cost storage class and
suitable for long-term retention of data that is rarely accessed. This solution is the most
cost-effective because it does not require any additional hardware, software, or replication
services.
The other solutions are incorrect because they either use more expensive or unnecessary
services or components, or they do not meet the requirements. For example:
Solution A is incorrect because it uses AWS Outposts with Amazon S3 storage,
which is a very expensive and complex solution for the scenario in the question.
AWS Outposts is a service that extends AWS infrastructure, services, APIs, and
tools to virtually any data center, co-location space, or on-premises facility. It is
designed for customers who need low latency and local data processing. Amazon
S3 storage on Outposts provides a subset of S3 features and APIs to store and
retrieve data on Outposts. However, this solution does not provide SMB access to
the data on Outposts, which requires a Windows EC2 instance on Outposts as a
file server. This adds more cost and complexity to the solution, and it does not
provide the ability to access the data from AWS if a disaster occurs.
Solution B is incorrect because it uses Amazon FSx File Gateway and Amazon
FSx for Windows File Server Multi-AZ file system that uses SSD storage, which
are both more expensive and unnecessary services for the scenario in the
question. Amazon FSx File Gateway is a service that enables on-premises
applications to store and access data in Amazon FSx for Windows File Server
using the SMB protocol. Amazon FSx for Windows File Server is a fully managed
service that provides native Windows file shares with the compatibility, features,
and performance that Windows-based applications rely on. However, this solution
does not meet the requirements because it does not provide the ability to use
different storage classes for the metadata files and image files, and it does not
provide the ability to access the data from AWS if a disaster occurs. Moreover,
using a Multi-AZ file system that uses SSD storage is overprovisioned and costly
for the scenario in the question, which involves rarely accessed data that must be
available within 5 minutes.
Solution D is incorrect because it uses an S3 File Gateway that uses S3 Standard-
IA for both the metadata files and image files, which is not the most cost-effective
solution for the scenario in the question. S3 Standard-IA is a storage class that
offers high durability, availability, and performance for infrequently accessed data.
However, it is more expensive than S3 Glacier Deep Archive, which is the lowest-cost
storage class and suitable for long-term retention of data that is rarely
accessed. Therefore, using S3 Standard-IA for the image files, which are likely to
be larger and more numerous than the metadata files, is not optimal for the
storage costs.
References:
What is S3 File Gateway?
Using Amazon S3 storage classes with S3 File Gateway
Accessing your file shares from AWS
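One hedged way to approximate the tiering in option C is an S3 lifecycle configuration on the bucket behind the File Gateway share; the bucket name and prefixes below are assumptions:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="dr-smb-copy-bucket",  # hypothetical bucket backing the S3 File Gateway share
    LifecycleConfiguration={
        "Rules": [
            {   # Move image files to the cheapest archive tier.
                "ID": "images-to-deep-archive",
                "Filter": {"Prefix": "images/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "DEEP_ARCHIVE"}],
            },
            {   # Keep metadata files in Standard-IA so they stay quickly retrievable.
                "ID": "metadata-to-standard-ia",
                "Filter": {"Prefix": "metadata/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            },
        ]
    },
)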
Question # 58
A solutions architect needs to improve an application that is hosted in the AWS Cloud. The
application uses an Amazon Aurora MySQL DB instance that is experiencing overloaded
connections. Most of the application's operations insert records into the database. The
application currently stores credentials in a text-based configuration file.
The solutions architect needs to implement a solution so that the application can handle the
current connection load. The solution must keep the credentials secure and must provide
the ability to rotate the credentials automatically on a regular basis.
Which solution will meet these requirements?
A. Deploy an Amazon RDS Proxy layer in front of the DB instance. Store the connection credentials as a secret in AWS Secrets Manager.
B. Deploy an Amazon RDS Proxy layer in front of the DB instance. Store the connection credentials in AWS Systems Manager Parameter Store.
C. Create an Aurora Replica. Store the connection credentials as a secret in AWS Secrets Manager.
D. Create an Aurora Replica. Store the connection credentials in AWS Systems Manager Parameter Store.
Question # 59
A company is migrating an on-premises application and a MySQL database to AWS. The
application processes highly sensitive data, and new data is constantly updated in the
database. The data must not be transferred over the internet. The company also must
encrypt the data in transit and at rest.
The database is 5 TB in size. The company already has created the database schema in
an Amazon RDS for MySQL DB instance. The company has set up a 1 Gbps AWS Direct Connect connection to AWS. The company also has set up a public VIF and a private VIF.
A solutions architect needs to design a solution that will migrate the data to AWS with the
least possible downtime.
Which solution will meet these requirements?
A. Perform a database backup. Copy the backup files to an AWS Snowball Edge Storage Optimized device. Import the backup to Amazon S3. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3) for encryption at rest. Use TLS for encryption in transit. Import the data from Amazon S3 to the DB instance.
B. Use AWS Database Migration Service (AWS DMS) to migrate the data to AWS. Create a DMS replication instance in a private subnet. Create VPC endpoints for AWS DMS. Configure a DMS task to copy data from the on-premises database to the DB instance by using full load plus change data capture (CDC). Use the AWS Key Management Service (AWS KMS) default key for encryption at rest. Use TLS for encryption in transit.
C. Perform a database backup. Use AWS DataSync to transfer the backup files to Amazon S3. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3) for encryption at rest. Use TLS for encryption in transit. Import the data from Amazon S3 to the DB instance.
D. Use Amazon S3 File Gateway. Set up a private connection to Amazon S3 by using AWS PrivateLink. Perform a database backup. Copy the backup files to Amazon S3. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3) for encryption at rest. Use TLS for encryption in transit. Import the data from Amazon S3 to the DB instance.
Answer: B
Explanation: The best solution is to use AWS Database Migration Service (AWS DMS) to
migrate the data to AWS. AWS DMS is a web service that can migrate data from various
sources to various targets, including MySQL databases. AWS DMS can perform full load
and change data capture (CDC) migrations, which means that it can copy the existing data
and also capture the ongoing changes to keep the source and target databases in sync.
This minimizes the downtime during the migration process. AWS DMS also supports
encryption at rest and in transit by using AWS Key Management Service (AWS KMS) and
TLS, respectively. This ensures that the data is protected during the migration. AWS DMS
can also leverage AWS Direct Connect to transfer the data over a private connection,
avoiding the internet. This solution meets all the requirements of the
company. References: AWS Database Migration Service Documentation, Migrating Data to
Amazon RDS for MySQL or MariaDB, Using SSL to Encrypt a Connection to a DB Instance
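A hedged boto3 sketch of the DMS task in option B (all ARNs are placeholders); the endpoints would be configured for TLS and the target storage encrypted with the default KMS key:

import boto3
import json

dms = boto3.client("dms")

dms.create_replication_task(
    ReplicationTaskIdentifier="mysql-migration-task",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:onprem-mysql",    # placeholder
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:rds-mysql",       # placeholder
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:private-subnet",  # placeholder
    MigrationType="full-load-and-cdc",  # full load plus ongoing change data capture
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)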
Question # 60
A company is serving files to its customers through an SFTP server that is accessible over
the internet. The SFTP server is running on a single Amazon EC2 instance with an Elastic
IP address attached. Customers connect to the SFTP server through its Elastic IP address
and use SSH for authentication. The EC2 instance also has an attached security group that
allows access from all customer IP addresses.
A solutions architect must implement a solution to improve availability, minimize the
complexity of infrastructure management, and minimize the disruption to customers who
access files. The solution must not change the way customers connect.
Which solution will meet these requirements?
A. Disassociate the Elastic IP address from the EC2 instance. Create an Amazon S3 bucket to be used for SFTP file hosting. Create an AWS Transfer Family server. Configure the Transfer Family server with a publicly accessible endpoint. Associate the SFTP Elastic IP address with the new endpoint. Point the Transfer Family server to the S3 bucket. Sync all files from the SFTP server to the S3 bucket.
B. Disassociate the Elastic IP address from the EC2 instance. Create an Amazon S3 bucket to be used for SFTP file hosting. Create an AWS Transfer Family server. Configure the Transfer Family server with a VPC-hosted, internet-facing endpoint. Associate the SFTP Elastic IP address with the new endpoint. Attach the security group with customer IP addresses to the new endpoint. Point the Transfer Family server to the S3 bucket. Sync all files from the SFTP server to the S3 bucket.
C. Disassociate the Elastic IP address from the EC2 instance. Create a new Amazon Elastic File System (Amazon EFS) file system to be used for SFTP file hosting. Create an AWS Fargate task definition to run an SFTP server. Specify the EFS file system as a mount in the task definition. Create a Fargate service by using the task definition, and place a Network Load Balancer (NLB) in front of the service. When configuring the service, attach the security group with customer IP addresses to the tasks that run the SFTP server. Associate the Elastic IP address with the NLB. Sync all files from the SFTP server to the EFS file system.
D. Disassociate the Elastic IP address from the EC2 instance. Create a multi-attach Amazon Elastic Block Store (Amazon EBS) volume to be used for SFTP file hosting. Create a Network Load Balancer (NLB) with the Elastic IP address attached. Create an Auto Scaling group with EC2 instances that run an SFTP server. Define in the Auto Scaling group that instances that are launched should attach the new multi-attach EBS volume. Configure the Auto Scaling group to automatically add instances behind the NLB. Configure the Auto Scaling group to use the security group that allows customer IP addresses for the EC2 instances that the Auto Scaling group launches. Sync all files from the SFTP server to the new multi-attach EBS volume.
Question # 61
An online retail company hosts its stateful web-based application and MySQL database in
an on-premises data center on a single server. The company wants to increase its
customer base by conducting more marketing campaigns and promotions. In preparation,
the company wants to migrate its application and database to AWS to increase the
reliability of its architecture.
Which solution should provide the HIGHEST level of reliability?
A. Migrate the database to an Amazon RDS MySQL Multi-AZ DB instance. Deploy the application in an Auto Scaling group on Amazon EC2 instances behind an Application Load Balancer. Store sessions in Amazon Neptune.
B. Migrate the database to Amazon Aurora MySQL. Deploy the application in an Auto Scaling group on Amazon EC2 instances behind an Application Load Balancer. Store sessions in an Amazon ElastiCache for Redis replication group.
C. Migrate the database to Amazon DocumentDB (with MongoDB compatibility). Deploy the application in an Auto Scaling group on Amazon EC2 instances behind a Network Load Balancer. Store sessions in Amazon Kinesis Data Firehose.
D. Migrate the database to an Amazon RDS MariaDB Multi-AZ DB instance. Deploy the application in an Auto Scaling group on Amazon EC2 instances behind an Application Load Balancer. Store sessions in Amazon ElastiCache for Memcached.
Answer: B
Explanation:
This option allows the company to use Amazon Aurora MySQL, which is a
fully managed relational database service that is compatible with MySQL and offers up to
five times better performance than standard MySQL1. By migrating the database to Aurora
MySQL, the company can benefit from its high availability, durability, scalability, and
security features1. By deploying the application in an Auto Scaling group on Amazon EC2
instances behind an Application Load Balancer, the company can ensure that the
application can handle varying levels of traffic and distribute the requests across multiple
instances2. By storing sessions in an Amazon ElastiCache for Redis replication group, the
company can improve the performance and reliability of the session data by using a fast,
in-memory data store that supports replication and failover3.
References:
What is Amazon Aurora?
What is Auto Scaling?
What is Amazon ElastiCache?
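To illustrate the session-store piece of option B, a minimal sketch using the redis-py client against a hypothetical ElastiCache for Redis endpoint:

import json
import redis

# Hypothetical ElastiCache for Redis primary endpoint (TLS enabled).
r = redis.Redis(host="sessions.example.cache.amazonaws.com", port=6379, ssl=True)

def save_session(session_id, data, ttl_seconds=3600):
    # Sessions expire automatically, so stale entries do not accumulate.
    r.setex(f"session:{session_id}", ttl_seconds, json.dumps(data))

def load_session(session_id):
    raw = r.get(f"session:{session_id}")
    return json.loads(raw) if raw else None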
Question # 62
A car rental company has built a serverless REST API to provide data to its mobile app.
The app consists of an Amazon API Gateway API with a Regional endpoint, AWS Lambda
functions, and an Amazon Aurora MySQL Serverless DB cluster. The company recently
opened the API to mobile apps of partners. A significant increase in the number of requests
resulted, causing sporadic database memory errors. Analysis of the API traffic indicates
that clients are making multiple HTTP GET requests for the same queries in a short period
of time. Traffic is concentrated during business hours, with spikes around holidays and
other events.
The company needs to improve its ability to support the additional usage while minimizing
the increase in costs associated with the solution.
Which strategy meets these requirements?
A. Convert the API Gateway Regional endpoint to an edge-optimized endpoint. Enable caching in the production stage.
B. Implement an Amazon ElastiCache for Redis cache to store the results of the database calls. Modify the Lambda functions to use the cache.
C. Modify the Aurora Serverless DB cluster configuration to increase the maximum amount of available memory.
D. Enable throttling in the API Gateway production stage. Set the rate and burst values to limit the incoming calls.
Answer: A
Explanation:
This option allows the company to use Amazon CloudFront to improve the
latency and availability of the API requests by caching the responses at the edge locations
closest to the clients1. By enabling caching in the production stage, the company can
reduce the number of calls made to the backend services, such as Lambda functions and
Aurora Serverless DB cluster, and save on costs and resources2. This option also helps to
handle traffic spikes and reduce database memory errors by serving cached responses
instead of querying the database repeatedly.
References:
Choosing an API endpoint type
Enabling API caching to enhance responsiveness
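A hedged boto3 sketch of turning on stage caching for a REST API (the API ID and cache size are placeholders):

import boto3

apigateway = boto3.client("apigateway")

apigateway.update_stage(
    restApiId="a1b2c3d4e5",  # placeholder REST API ID
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},      # GB, smallest cache size
        {"op": "replace", "path": "/*/*/caching/enabled", "value": "true"},  # enable caching for all methods
    ],
)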
Question # 63
A company has a web application that securely uploads pictures and videos to an Amazon
S3 bucket. The company requires that only authenticated users are allowed to post
content. The application generates a presigned URL that is used to upload objects through
a browser interface. Most users are reporting slow upload times for objects larger than 100
MB.
What can a Solutions Architect do to improve the performance of these uploads while
ensuring only authenticated users are allowed to post content?
A. Set up an Amazon API Gateway with an edge-optimized API endpoint that has a resource as an S3 service proxy. Configure the PUT method for this resource to expose the S3 PutObject operation. Secure the API Gateway using a COGNITO_USER_POOLS authorizer. Have the browser interface use API Gateway instead of the presigned URL to upload objects.
B. Set up an Amazon API Gateway with a regional API endpoint that has a resource as an S3 service proxy. Configure the PUT method for this resource to expose the S3 PutObject operation. Secure the API Gateway using an AWS Lambda authorizer. Have the browser interface use API Gateway instead of the presigned URL to upload objects.
C. Enable an S3 Transfer Acceleration endpoint on the S3 bucket. Use the endpoint when generating the presigned URL. Have the browser interface upload the objects to this URL using the S3 multipart upload API.
D. Configure an Amazon CloudFront distribution for the destination S3 bucket. Enable PUT and POST methods for the CloudFront cache behavior. Update the CloudFront origin to use an origin access identity (OAI). Give the OAI user s3:PutObject permissions in the bucket policy. Have the browser interface upload objects using the CloudFront distribution.
Answer: C
Explanation:
S3 Transfer Acceleration is a feature that enables fast, easy, and secure
transfers of files over long distances between your client and an S3 bucket1. It works by
leveraging the CloudFront edge network to route your requests to S3 over an optimized
network path1. By using a Transfer Acceleration endpoint when generating a presigned
URL, you can allow authenticated users to upload objects faster and more
reliably2. Additionally, using the S3 multipart upload API can improve the performance of
large object uploads by breaking them into smaller parts and uploading them in parallel3.
References:
S3 Transfer Acceleration
Using Transfer Acceleration with presigned URLs
Uploading objects using multipart upload API
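A minimal sketch of enabling acceleration and generating a presigned URL against the Transfer Acceleration endpoint (the bucket and key names are placeholders):

import boto3
from botocore.config import Config

s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))

# Transfer Acceleration must be enabled on the bucket first.
s3.put_bucket_accelerate_configuration(
    Bucket="media-uploads-bucket",  # placeholder bucket name
    AccelerateConfiguration={"Status": "Enabled"},
)

# Authenticated users receive a short-lived URL that routes through the accelerated endpoint.
url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "media-uploads-bucket", "Key": "videos/clip.mp4"},  # placeholders
    ExpiresIn=3600,
)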
Question # 64
A company has a website that runs on four Amazon EC2 instances that are behind an
Application Load Balancer (ALB). When the ALB detects that an EC2 instance is no longer
available, an Amazon CloudWatch alarm enters the ALARM state. A member of the
company's operations team then manually adds a new EC2 instance behind the ALB.
A solutions architect needs to design a highly available solution that automatically handles
the replacement of EC2 instances. The company needs to minimize downtime during the
switch to the new solution.
Which set of steps should the solutions architect take to meet these requirements?
A. Delete the existing ALB. Create an Auto Scaling group that is configured to handle the web application traffic. Attach a new launch template to the Auto Scaling group. Create a new ALB. Attach the Auto Scaling group to the new ALB. Attach the existing EC2 instances to the Auto Scaling group.
B. Create an Auto Scaling group that is configured to handle the web application traffic. Attach a new launch template to the Auto Scaling group. Attach the Auto Scaling group to the existing ALB. Attach the existing EC2 instances to the Auto Scaling group.
C. Delete the existing ALB and the EC2 instances. Create an Auto Scaling group that is configured to handle the web application traffic. Attach a new launch template to the Auto Scaling group. Create a new ALB. Attach the Auto Scaling group to the new ALB. Wait for the Auto Scaling group to launch the minimum number of EC2 instances.
D. Create an Auto Scaling group that is configured to handle the web application traffic. Attach a new launch template to the Auto Scaling group. Attach the Auto Scaling group to the existing ALB. Wait for the existing ALB to register the existing EC2 instances with the Auto Scaling group.
Answer: B
Explanation: The Auto Scaling group can automatically launch and terminate EC2
instances based on the demand and health of the web application. The launch template
can specify the configuration of the EC2 instances, such as the AMI, instance type, security
group, and user data. The existing ALB can distribute the traffic to the EC2 instances in the
Auto Scaling group. The existing EC2 instances can be attached to the Auto Scaling group
without deleting them or the ALB. This option minimizes downtime and preserves the
current setup of the web application. References: [What is Amazon EC2 Auto Scaling?],
[Launch templates], [Attach a load balancer to your Auto Scaling group], [Attach EC2
instances to your Auto Scaling group]
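A boto3 sketch of option B's key steps; the Auto Scaling group name, target group ARN, and instance IDs shown are placeholders:

import boto3

autoscaling = boto3.client("autoscaling")

# Point the Auto Scaling group at the existing ALB's target group.
autoscaling.attach_load_balancer_target_groups(
    AutoScalingGroupName="web-asg",  # hypothetical ASG created from the launch template
    TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/abc123"],
)

# Bring the four existing instances under Auto Scaling management without downtime.
autoscaling.attach_instances(
    AutoScalingGroupName="web-asg",
    InstanceIds=["i-0aaa", "i-0bbb", "i-0ccc", "i-0ddd"],  # placeholders
)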
Question # 65
A company is deploying a third-party firewall appliance solution from AWS Marketplace to
monitor and protect traffic that leaves the company's AWS environments. The company
wants to deploy this appliance into a shared services VPC and route all outbound internet-bound
traffic through the appliances.
A solutions architect needs to recommend a deployment method that prioritizes reliability
and minimizes failover time between firewall appliances within a single AWS Region. The
company has set up routing from the shared services VPC to other VPCs.
Which steps should the solutions architect recommend to meet these requirements?
(Select THREE.)
A. Deploy two firewall appliances into the shared services VPC, each in a separate Availability Zone.
B. Create a new Network Load Balancer in the shared services VPC. Create a new target group, and attach it to the new Network Load Balancer. Add each of the firewall appliance instances to the target group.
C. Create a new Gateway Load Balancer in the shared services VPC. Create a new target group, and attach it to the new Gateway Load Balancer. Add each of the firewall appliance instances to the target group.
D. Create a VPC interface endpoint. Add a route to the route table in the shared services VPC. Designate the new endpoint as the next hop for traffic that enters the shared services VPC from other VPCs.
E. Deploy two firewall appliances into the shared services VPC, each in the same Availability Zone.
F. Create a VPC Gateway Load Balancer endpoint. Add a route to the route table in the shared services VPC. Designate the new endpoint as the next hop for traffic that enters the shared services VPC from other VPCs.
Answer: A,C,F
Explanation:
The best solution is to deploy two firewall appliances into the shared services VPC, each in
a separate Availability Zone, and create a new Gateway Load Balancer to distribute traffic
to them. A Gateway Load Balancer is designed for high performance and high availability
scenarios with third-party network virtual appliances, such as firewalls. It operates at the
network layer and maintains flow stickiness and symmetry to a specific appliance instance.
It also uses the GENEVE protocol to encapsulate traffic between the load balancer and the
appliances. To route traffic from other VPCs to the Gateway Load Balancer, a VPC
Gateway Load Balancer endpoint is needed. This is a VPC endpoint that provides private
connectivity between the appliances in the shared services VPC and the application
servers in other VPCs. The endpoint must be added as the next hop in the route table for
the shared services VPC. This solution ensures reliability and minimizes failover time
between firewall appliances within a single AWS Region. References: What is a Gateway Load Balancer?
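To make step F concrete, a hedged boto3 sketch of creating the Gateway Load Balancer endpoint and pointing a route at it (the endpoint service name, VPC, subnet, and route table IDs are placeholders):

import boto3

ec2 = boto3.client("ec2")

endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="GatewayLoadBalancer",
    VpcId="vpc-0123456789abcdef0",                                # shared services VPC (placeholder)
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-0abc123",  # GWLB endpoint service (placeholder)
    SubnetIds=["subnet-aaa111"],
)["VpcEndpoint"]

# Send traffic entering the shared services VPC to the firewall appliances.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",  # placeholder
    DestinationCidrBlock="0.0.0.0/0",
    VpcEndpointId=endpoint["VpcEndpointId"],
)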
Question # 66
An ecommerce company runs an application on AWS. The application has an Amazon API
Gateway API that invokes an AWS Lambda function. The data is stored in an Amazon RDS
for PostgreSQL DB instance.
During the company's most recent flash sale, a sudden increase in API calls negatively
affected the application's performance. A solutions architect reviewed the Amazon
CloudWatch metrics during that time and noticed a significant increase in Lambda
invocations and database connections. The CPU utilization also was high on the DB
instance.
What should the solutions architect recommend to optimize the application's performance?
A. Increase the memory of the Lambda function. Modify the Lambda function to close the database connections when the data is retrieved.
B. Add an Amazon ElastiCache for Redis cluster to store the frequently accessed data from the RDS database.
C. Create an RDS proxy by using the Lambda console. Modify the Lambda function to use the proxy endpoint.
D. Modify the Lambda function to connect to the database outside of the function's handler. Check for an existing database connection before creating a new connection.
Answer: C
Explanation: This option will optimize the application’s performance by reducing the overhead of opening
and closing database connections for each Lambda invocation. An RDS proxy is a fully
managed database proxy for Amazon RDS that makes applications more scalable, more
resilient to database failures, and more secure1. It allows applications to pool and share
connections established with the database, improving database efficiency and application
scalability1. By creating an RDS proxy by using the Lambda console, you can easily
configure your Lambda function to use the proxy endpoint instead of the direct database
endpoint2. This will enable your Lambda function to reuse existing connections from the
proxy’s connection pool, reducing the latency and CPU utilization caused by establishing
new connections for each invocation. It will also prevent connection saturation or
exhaustion on the database, which can degrade performance or cause errors3.
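A hedged boto3 sketch of provisioning the RDS proxy in option C (the ARNs, subnets, and identifiers are placeholders); the Lambda functions would then connect to the proxy endpoint instead of the DB instance endpoint:

import boto3

rds = boto3.client("rds")

rds.create_db_proxy(
    DBProxyName="orders-proxy",
    EngineFamily="POSTGRESQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:orders-db-creds",  # placeholder
    }],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-secrets-role",  # placeholder
    VpcSubnetIds=["subnet-aaa111", "subnet-bbb222"],
    RequireTLS=True,
)

# Register the RDS for PostgreSQL DB instance as the proxy's target.
rds.register_db_proxy_targets(
    DBProxyName="orders-proxy",
    DBInstanceIdentifiers=["orders-db"],  # placeholder instance identifier
)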
Question # 67
A company hosts a software as a service (SaaS) solution on AWS. The solution has an
Amazon API Gateway API that serves an HTTPS endpoint. The API uses AWS Lambda
functions for compute. The Lambda functions store data in an Amazon Aurora Serverless
v1 database.
The company used the AWS Serverless Application Model (AWS SAM) to deploy the
solution. The solution extends across multiple Availability Zones and has no disaster
recovery (DR) plan.
A solutions architect must design a DR strategy that can recover the solution in another
AWS Region. The solution has an RTO of 5 minutes and an RPO of 1 minute.
What should the solutions architect do to meet these requirements?
A. Create a read replica of the Aurora Serverless v1 database in the target Region. Use AWS SAM to create a runbook to deploy the solution to the target Region. Promote the read replica to primary in case of disaster.
B. Change the Aurora Serverless v1 database to a standard Aurora MySQL global database that extends across the source Region and the target Region. Use AWS SAM to create a runbook to deploy the solution to the target Region.
C. Create an Aurora Serverless v1 DB cluster that has multiple writer instances in the target Region. Launch the solution in the target Region. Configure the two Regional solutions to work in an active-passive configuration.
D. Change the Aurora Serverless v1 database to a standard Aurora MySQL global database that extends across the source Region and the target Region. Launch the solution in the target Region. Configure the two Regional solutions to work in an active-passive configuration.
Answer: D
Explanation:
This option allows the solutions architect to use an Aurora global database, which replicates data to the target Region with low-latency asynchronous replication and supports fast cross-Region failover. Launching the solution in the target Region in an active-passive configuration keeps the full stack ready to serve traffic, which helps meet the RTO of 5 minutes and the RPO of 1 minute.
Question # 68
A company is deploying a new cluster for big data analytics on AWS. The cluster will run
across many Linux Amazon EC2 instances that are spread across multiple Availability
Zones.
All of the nodes in the cluster must have read and write access to common underlying file
storage. The file storage must be highly available, must be resilient, must be compatible
with the Portable Operating System Interface (POSIX), and must accommodate high levels
of throughput.
Which storage solution will meet these requirements?
A. Provision an AWS Storage Gateway file gateway NFS file share that is attached to an Amazon S3 bucket. Mount the NFS file share on each EC2 instance in the cluster.
B. Provision a new Amazon Elastic File System (Amazon EFS) file system that uses General Purpose performance mode. Mount the EFS file system on each EC2 instance in the cluster.
C. Provision a new Amazon Elastic Block Store (Amazon EBS) volume that uses the io2 volume type. Attach the EBS volume to all of the EC2 instances in the cluster.
D. Provision a new Amazon Elastic File System (Amazon EFS) file system that uses Max I/O performance mode. Mount the EFS file system on each EC2 instance in the cluster.
Answer: D
Explanation:
The best solution is to provision a new Amazon Elastic File System (Amazon EFS) file
system that uses Max I/O performance mode and mount the EFS file system on each EC2
instance in the cluster. Amazon EFS is a fully managed, scalable, and elastic file storage
service that supports the POSIX standard and can be accessed by multiple EC2 instances
concurrently. Amazon EFS offers two performance modes: General Purpose and Max I/O.
Max I/O mode is designed for highly parallelized workloads that can tolerate higher
latencies than the General Purpose mode. Max I/O mode provides higher levels of
aggregate throughput and operations per second, which are suitable for big data analytics
applications. This solution meets all the requirements of the company.
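A small boto3 sketch of provisioning the Max I/O file system in option D (the creation token and subnet ID are placeholders); each instance would then mount it over NFS:

import boto3

efs = boto3.client("efs")

fs = efs.create_file_system(
    CreationToken="analytics-shared-fs",  # placeholder idempotency token
    PerformanceMode="maxIO",              # higher aggregate throughput/IOPS for parallel workloads
    Encrypted=True,
)

# One mount target per Availability Zone so every node mounts the same POSIX file system.
efs.create_mount_target(
    FileSystemId=fs["FileSystemId"],
    SubnetId="subnet-aaa111",  # placeholder; repeat for the subnets in the other AZs
)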
Question # 69
A company deploys a new web application. As part of the setup, the company configures
AWS WAF to log to Amazon S3 through Amazon Kinesis Data Firehose. The company
develops an Amazon Athena query that runs once daily to return AWS WAF log data from
the previous 24 hours. The volume of daily logs is constant. However, over time, the same
query is taking more time to run.
A solutions architect needs to design a solution to prevent the query time from continuing to
increase. The solution must minimize operational overhead.
Which solution will meet these requirements?
A. Create an AWS Lambda function that consolidates each day's AWS WAF logs into one log file.
B. Reduce the amount of data scanned by configuring AWS WAF to send logs to a different S3 bucket each day.
C. Update the Kinesis Data Firehose configuration to partition the data in Amazon S3 by date and time. Create external tables for Amazon Redshift. Configure Amazon Redshift Spectrum to query the data source.
D. Modify the Kinesis Data Firehose configuration and Athena table definition to partition the data by date and time. Change the Athena query to view the relevant partitions.
Answer: D
Explanation: The best solution is to modify the Kinesis Data Firehose configuration and
Athena table definition to partition the data by date and time. This will reduce the amount of
data scanned by Athena and improve the query performance. Changing the Athena query
to view the relevant partitions will also help to filter out unnecessary data. This solution
requires minimal operational overhead as it does not involve creating additional resources
or changing the log format. References: [AWS WAF Developer Guide], [Amazon Kinesis
Data Firehose User Guide], [Amazon Athena User Guide]
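To illustrate option D's query change, a sketch that runs the daily report against only the previous day's partition (the database name, table name, and output location are placeholders):

import boto3
from datetime import datetime, timedelta, timezone

athena = boto3.client("athena")

day = datetime.now(timezone.utc) - timedelta(days=1)
query = f"""
SELECT *
FROM waf_logs
WHERE year = '{day:%Y}' AND month = '{day:%m}' AND day = '{day:%d}'
"""  # partition columns derived from the Firehose date/time prefix

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "security_logs"},                    # placeholder
    ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/"},  # placeholder
)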
Question # 70
A solutions architect has an operational workload deployed on Amazon EC2 instances in
an Auto Scaling group. The VPC architecture spans two Availability Zones (AZ) with a
subnet in each that the Auto Scaling group is targeting. The VPC is connected to an on-premises
environment, and connectivity cannot be interrupted. The maximum size of the
Auto Scaling group is 20 instances in service. The VPC IPv4 addressing is as follows:
VPC CIDR: 10.0.0.0/23
AZ1 subnet CIDR: 10.0.0.0/24
AZ2 subnet CIDR: 10.0.1.0/24
Since deployment, a third AZ has become available in the Region. The solutions architect
wants to adopt the new AZ without adding additional IPv4 address space and without
service downtime. Which solution will meet these requirements?
A. Update the Auto Scaling group to use the AZ2 subnet only. Delete and re-create the AZ1 subnet using half the previous address space. Adjust the Auto Scaling group to also use the new AZ1 subnet. When the instances are healthy, adjust the Auto Scaling group to use the AZ1 subnet only. Remove the current AZ2 subnet. Create a new AZ2 subnet using the second half of the address space from the original AZ1 subnet. Create a new AZ3 subnet using half the original AZ2 subnet address space, then update the Auto Scaling group to target all three new subnets.
B. Terminate the EC2 instances in the AZ1 subnet. Delete and re-create the AZ1 subnet using half the address space. Update the Auto Scaling group to use this new subnet. Repeat this for the second AZ. Define a new subnet in AZ3, then update the Auto Scaling group to target all three new subnets.
C. Create a new VPC with the same IPv4 address space and define three subnets, with one for each AZ. Update the existing Auto Scaling group to target the new subnets in the new VPC.
D. Update the Auto Scaling group to use the AZ2 subnet only. Update the AZ1 subnet to have half the previous address space. Adjust the Auto Scaling group to also use the AZ1 subnet again. When the instances are healthy, adjust the Auto Scaling group to use the AZ1 subnet only. Update the current AZ2 subnet and assign the second half of the address space from the original AZ1 subnet. Create a new AZ3 subnet using half the original AZ2 subnet address space, then update the Auto Scaling group to target all three new subnets.
Question # 71
A data analytics company has an Amazon Redshift cluster that consists of several reserved
nodes. The cluster is experiencing unexpected bursts of usage because a team of
employees is compiling a deep audit analysis report. The queries to generate the report are
complex read queries and are CPU intensive.
Business requirements dictate that the cluster must be able to service read and write
queries at all times. A solutions architect must devise a solution that accommodates the
bursts of usage.
Which solution meets these requirements MOST cost-effectively?
A. Provision an Amazon EMR cluster. Offload the complex data processing tasks.
B. Deploy an AWS Lambda function to add capacity to the Amazon Redshift cluster by using a classic resize operation when the cluster's CPU metrics in Amazon CloudWatch reach 80%.
C. Deploy an AWS Lambda function to add capacity to the Amazon Redshift cluster by using an elastic resize operation when the cluster's CPU metrics in Amazon CloudWatch reach 80%.
D. Turn on the Concurrency Scaling feature for the Amazon Redshift cluster.
Answer: C
Explanation:
The best solution is to deploy an AWS Lambda function to add capacity to the Amazon
Redshift cluster by using an elastic resize operation when the cluster’s CPU metrics in
Amazon CloudWatch reach 80%. This solution will enable the cluster to scale up or down
quickly by adding or removing nodes within minutes. This will improve the performance of
the complex read queries and also reduce the cost by scaling down when the demand
decreases. This solution is more cost-effective than using a classic resize operation, which
takes longer and requires more downtime. It is also more suitable than using Amazon
EMR, which is designed for big data processing rather than data
warehousing. References: Amazon Redshift Documentation, Resizing clusters in Amazon
Redshift, [Amazon EMR Documentation]
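A hedged boto3 sketch of the elastic resize call such a Lambda function might make when the CloudWatch CPU alarm fires (the cluster identifier and node count are placeholders):

import boto3

redshift = boto3.client("redshift")

# Elastic resize adds nodes within minutes, unlike a classic resize.
redshift.resize_cluster(
    ClusterIdentifier="analytics-cluster",  # placeholder
    NumberOfNodes=6,                        # scale out while the audit report runs
    Classic=False,                          # request an elastic resize
)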
Question # 72
An online survey company runs its application in the AWS Cloud. The application is
distributed and consists of microservices that run in an automatically scaled Amazon
Elastic Container Service (Amazon ECS) cluster. The ECS cluster is a target for an
Application Load Balancer (ALB). The ALB is a custom origin for an Amazon CloudFront
distribution.
The company has a survey that contains sensitive data. The sensitive data must be
encrypted when it moves through the application. The application's data-handling
microservice is the only microservice that should be able to decrypt the data.
Which solution will meet these requirements?
A. Create a symmetric AWS Key Management Service (AWS KMS) key that is dedicated to the data-handling microservice. Create a field-level encryption profile and a configuration. Associate the KMS key and the configuration with the CloudFront cache behavior.
B. Create an RSA key pair that is dedicated to the data-handling microservice. Upload the public key to the CloudFront distribution. Create a field-level encryption profile and a configuration. Add the configuration to the CloudFront cache behavior.
C. Create a symmetric AWS Key Management Service (AWS KMS) key that is dedicated to the data-handling microservice. Create a Lambda@Edge function. Program the function to use the KMS key to encrypt the sensitive data.
D. Create an RSA key pair that is dedicated to the data-handling microservice. Create a Lambda@Edge function. Program the function to use the private key of the RSA key pair to encrypt the sensitive data.
Answer: B
Explanation: The best solution is to create an RSA key pair that is dedicated to the data-handling
microservice and upload the public key to the CloudFront distribution. Then, create a field-level
encryption profile and a configuration, and add the configuration to the CloudFront
cache behavior. This solution will ensure that the sensitive data is encrypted at the edge
locations of CloudFront, close to the end users, and remains encrypted throughout the
application stack. Only the data-handling microservice, which has access to the private key
of the RSA key pair, can decrypt the data. This solution does not require any additional
resources or code changes, and leverages the built-in feature of CloudFront field-level
encryption. For more information about CloudFront field-level encryption, see Using field-level
encryption to help protect sensitive data.
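As a small, hedged illustration of the key-pair side of option B, the RSA pair could be generated with the cryptography library; the public key PEM is what gets uploaded to CloudFront, and the private key stays with the data-handling microservice:

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# CloudFront field-level encryption expects a 2048-bit RSA public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

public_pem = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
print(public_pem.decode())  # upload this PEM to the CloudFront public key configuration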
Question # 73
A company uses an organization in AWS Organizations to manage the company's AWS
accounts. The company uses AWS CloudFormation to deploy all infrastructure. A finance
team wants to build a chargeback model. The finance team asked each business unit to tag
resources by using a predefined list of project values.
When the finance team used the AWS Cost and Usage Report in AWS Cost Explorer and
filtered based on project, the team noticed noncompliant project values. The company
wants to enforce the use of project tags for new resources.
Which solution will meet these requirements with the LEAST effort?
A. Create a tag policy that contains the allowed project tag values in the organization's management account. Create an SCP that denies the cloudformation:CreateStack API operation unless a project tag is added. Attach the SCP to each OU.
B. Create a tag policy that contains the allowed project tag values in each OU. Create an SCP that denies the cloudformation:CreateStack API operation unless a project tag is added. Attach the SCP to each OU.
C. Create a tag policy that contains the allowed project tag values in the AWS management account. Create an IAM policy that denies the cloudformation:CreateStack API operation unless a project tag is added. Assign the policy to each user.
D. Use AWS Service Catalog to manage the CloudFormation stacks as products. Use a TagOptions library to control project tag values. Share the portfolio with all OUs that are in the organization.
Answer: A
Explanation:
The best solution is to create a tag policy that contains the allowed project tag values in the
organization’s management account and create an SCP that denies the
cloudformation:CreateStack API operation unless a project tag is added. A tag policy is a
type of policy that can help standardize tags across resources in the organization’s
accounts. A tag policy can specify the allowed tag keys, values, and case treatment for compliance. A service control policy (SCP) is a type of policy that can restrict the actions
that users and roles can perform in the organization’s accounts. An SCP can deny access
to specific API operations unless certain conditions are met, such as having a specific tag.
By creating a tag policy in the management account and attaching it to each OU, the
organization can enforce consistent tagging across all accounts. By creating an SCP that
denies the cloudformation:CreateStack API operation unless a project tag is added, the
organization can prevent users from creating new resources without proper tagging. This
solution will meet the requirements with the least effort, as it does not involve creating
additional resources or modifying existing ones. References: Tag policies - AWS
Organizations, Service control policies - AWS Organizations, AWS CloudFormation User
Guide
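For illustration only, a minimal sketch of what such an SCP could look like and how it could be created and attached with Python (Boto3); the policy name and OU ID are hypothetical placeholders.

import json
import boto3

# Deny stack creation whenever the request does not carry a "project" tag.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyCreateStackWithoutProjectTag",
            "Effect": "Deny",
            "Action": "cloudformation:CreateStack",
            "Resource": "*",
            "Condition": {"Null": {"aws:RequestTag/project": "true"}},
        }
    ],
}

org = boto3.client("organizations")
policy = org.create_policy(
    Name="require-project-tag-on-stacks",                    # hypothetical name
    Description="Deny cloudformation:CreateStack without a project tag",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-11111111",                              # hypothetical OU ID
)

The tag policy with the allowed project values would be created and attached in the same way, using Type="TAG_POLICY".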
Question # 74
A company is running a serverless application that consists of several AWS Lambda
functions and Amazon DynamoDB tables. The company has created new functionality that
requires the Lambda functions to access an Amazon Neptune DB cluster. The Neptune DB
cluster is located in three subnets in a VPC.
Which of the possible solutions will allow the Lambda functions to access the Neptune DB
cluster and DynamoDB tables? (Select TWO.)
A. Create three public subnets in the Neptune VPC, and route traffic through an internetgateway. Host the Lambda functions in the three new public subnets. B. Create three private subnets in the Neptune VPC, and route internet traffic through aNAT gateway. Host the Lambda functions in the three new private subnets. C. Host the Lambda functions outside the VPC. Update the Neptune security group to allowaccess from the IP ranges of the Lambda functions. D. Host the Lambda functions outside the VPC. Create a VPC endpoint for the Neptunedatabase, and have the Lambda functions access Neptune over the VPC endpoint. E. Create three private subnets in the Neptune VPC. Host the Lambda functions in thethree new isolated subnets. Create a VPC endpoint for DynamoDB, and route DynamoDBtraffic to the VPC endpoint.
Answer: B,E
Explanation:
This option allows the company to use private subnets and VPC endpoints to
connect the Lambda functions to the Neptune DB cluster and DynamoDB tables securely
and efficiently1. By creating three private subnets in the Neptune VPC, the company can
isolate the Lambda functions from the public internet and reduce the attack surface2. By
routing internet traffic through a NAT gateway, the company can enable the Lambda
functions to access AWS services that are outside the VPC, such as Amazon S3 or Amazon CloudWatch3. By hosting the Lambda functions in the three new private subnets,
the company can ensure that the Lambda functions can access the Neptune DB cluster
within the same VPC4. By creating a VPC endpoint for DynamoDB, the company can
enable the Lambda functions to access DynamoDB tables without going through the
internet or a NAT gateway5. By routing DynamoDB traffic to the VPC endpoint, the
company can improve the performance and availability of the DynamoDB access5.
References:
Configuring a Lambda function to access resources in a VPC
Working with VPCs and subnets
NAT gateways
Accessing Amazon Neptune from AWS Lambda
VPC endpoints for DynamoDB
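A minimal sketch of creating the DynamoDB gateway endpoint with Python (Boto3); the Region, VPC ID, and route table ID are hypothetical placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint so the Lambda functions in the private subnets can reach
# DynamoDB without traversing the internet or a NAT gateway.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",                     # hypothetical VPC ID
    VpcEndpointType="Gateway",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],           # hypothetical route table
)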
Question # 75
A company is running multiple workloads in the AWS Cloud. The company has separate
units for software development. The company uses AWS Organizations and federation with
SAML to give permissions to developers to manage resources in their AWS accounts. The
development units each deploy their production workloads into a common production
account.
Recently, an incident occurred in the production account in which members of a
development unit terminated an EC2 instance that belonged to a different development
unit. A solutions architect must create a solution that prevents a similar incident from
happening in the future. The solution also must allow developers the possibility to manage
the instances used for their workloads.
Which strategy will meet these requirements?
A. Create separate OUs in AWS Organizations for each development unit. Assign thecreated OUs to the company AWS accounts. Create separate SCPs with a deny action anda StringNotEquals condition for the DevelopmentUnit resource tag that matches thedevelopment unit name. Assign the SCP to the corresponding OU. B. Pass an attribute for DevelopmentUnit as an AWS Security Token Service (AWS STS)session tag during SAML federation. Update the IAM policy for the developers' assumedIAM role with a deny action and a StringNotEquals condition for the DevelopmentUnitresource tag and aws:PrincipalTag/ DevelopmentUnit. C. Pass an attribute for DevelopmentUnit as an AWS Security Token Service (AWS STS)session tag during SAML federation. Create an SCP with an allow action and aStringEquals condition for the DevelopmentUnit resource tag andaws:PrincipalTag/DevelopmentUnit. Assign the SCP to the root OU. D. Create separate IAM policies for each development unit. For every IAM policy, add anallow action and a StringEquals condition for the DevelopmentUnit resource tag and thedevelopment unit name. During SAML federation, use AWS Security Token Service (AWSSTS) to assign the IAM policy and match the development unit name to the assumed IAMrole.
Answer: B
Explanation:
This option allows the solutions architect to use session tags to pass
additional information about the federated user, such as the development unit name, to
AWS1. Session tags are key-value pairs that you can define in your identity provider (IdP)
and pass in your SAML assertion1. By using a deny action and a StringNotEquals condition
in the IAM policy, you can prevent developers from accessing or modifying EC2 instances that belong to a different development unit2. This way, you can enforce fine-grained access
control and prevent accidental or malicious incidents.
References:
Passing session tags in SAML assertions
Using tags for attribute-based access control
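A minimal sketch of the deny statement described in option B, expressed as a policy document in Python; the specific EC2 actions listed are illustrative assumptions.

import json

# Deny EC2 instance management unless the instance's DevelopmentUnit tag
# matches the DevelopmentUnit session tag passed during SAML federation.
abac_guard_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyCrossUnitInstanceActions",
            "Effect": "Deny",
            "Action": ["ec2:TerminateInstances", "ec2:StopInstances", "ec2:RebootInstances"],
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringNotEquals": {
                    "ec2:ResourceTag/DevelopmentUnit": "${aws:PrincipalTag/DevelopmentUnit}"
                }
            },
        }
    ],
}

print(json.dumps(abac_guard_policy, indent=2))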
Question # 76
A company has an organization in AWS Organizations that includes a separate AWS
account for each of the company's departments. Application teams from different
departments develop and deploy solutions independently.
The company wants to reduce compute costs and manage costs appropriately across
departments. The company also wants to improve visibility into billing for individual departments. The company does not want to lose operational flexibility when the company
selects compute resources.
Which solution will meet these requirements?
A. Use AWS Budgets for each department. Use Tag Editor to apply tags to appropriateresources. Purchase EC2 Instance Savings Plans. B. Configure AWS Organizations to use consolidated billing. Implement a tagging strategythat identifies departments. Use SCPs to apply tags to appropriate resources. PurchaseEC2 Instance Savings Plans. C. Configure AWS Organizations to use consolidated billing. Implement a tagging strategythat identifies departments. Use Tag Editor to apply tags to appropriate resources.Purchase Compute Savings Plans. D. Use AWS Budgets for each department. Use SCPs to apply tags to appropriateresources. Purchase Compute Savings Plans.
Answer: C
Question # 77
A company is developing a web application that runs on Amazon EC2 instances in an Auto
Scaling group behind a public-facing Application Load Balancer (ALB). Only users from a
specific country are allowed to access the application. The company needs the ability to log
the access requests that have been blocked. The solution should require the least possible
maintenance.
Which solution meets these requirements?
A. Create an IPSet containing a list of IP ranges that belong to the specified country.Create an AWS WAF web ACL. Configure a rule to block any requests that do not originatefrom an IP range in the IPSet. Associate the rule with the web ACL. Associate the web ACLwith the ALB. B. Create an AWS WAF web ACL. Configure a rule to block any requests that do notoriginate from the specified country. Associate the rule with the web ACL. Associate theweb ACL with the ALB. C. Configure AWS Shield to block any requests that do not originate from the specifiedcountry. Associate AWS Shield with the ALB. D. Create a security group rule that allows ports 80 and 443 from IP ranges that belong tothe specified country. Associate the security group with the ALB.
Answer: B
Explanation:
The best solution is to create an AWS WAF web ACL and configure a rule to block any
requests that do not originate from the specified country. This will ensure that only users
from the allowed country can access the application. AWS WAF also provides logging
capabilities that can capture the access requests that have been blocked. This solution
requires the least possible maintenance because it does not involve maintaining and updating lists of IP ranges or security group rules.
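A minimal sketch of the geo-match web ACL rule with Python (Boto3); the allowed country code, ACL and metric names, and the ALB ARN are hypothetical placeholders.

import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Block every request that does not originate from the allowed country.
acl = wafv2.create_web_acl(
    Name="geo-restriction-acl",
    Scope="REGIONAL",                      # REGIONAL scope is used for ALBs
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "block-outside-allowed-country",
            "Priority": 0,
            "Statement": {
                "NotStatement": {
                    "Statement": {"GeoMatchStatement": {"CountryCodes": ["US"]}}
                }
            },
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "BlockedOutsideAllowedCountry",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "GeoRestrictionAcl",
    },
)

# Attach the web ACL to the ALB (ARN is a hypothetical placeholder).
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/abc123",
)

AWS WAF logging can then be enabled on the web ACL to capture the blocked requests.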
Question # 78
A company is migrating to the cloud. It wants to evaluate the configurations of virtual
machines in its existing data center environment to ensure that it can size new Amazon
EC2 instances accurately. The company wants to collect metrics, such as CPU, memory,
and disk utilization, and it needs an inventory of what processes are running on each
instance. The company would also like to monitor network connections to map
communications between servers.
Which would enable the collection of this data MOST cost effectively?
A. Use AWS Application Discovery Service and deploy the data collection agent to each virtual machine in the data center. B. Configure the Amazon CloudWatch agent on all servers within the local environment and publish metrics to Amazon CloudWatch Logs. C. Use AWS Application Discovery Service and enable agentless discovery in the existing virtualization environment. D. Enable AWS Application Discovery Service in the AWS Management Console and configure the corporate firewall to allow scans over a VPN.
Answer: A
Explanation: The AWS Application Discovery Service can help plan migration projects by
collecting data about on-premises servers, such as configuration, performance, and
network connections. The data collection agent is a lightweight software that can be
installed on each server to gather this information. This option is more cost-effective than
agentless discovery, which requires deploying a virtual appliance in the VMware
environment, or using CloudWatch agent, which incurs additional charges for CloudWatch
Logs. Scanning the servers over a VPN is not a valid option for AWS Application Discovery
Service. References: What is AWS Application Discovery Service?, Data collection
methods
Question # 79
A company uses AWS Organizations to manage a multi-account structure. The company
has hundreds of AWS accounts and expects the number of accounts to increase. The
company is building a new application that uses Docker images. The company will push
the Docker images to Amazon Elastic Container Registry (Amazon ECR). Only accounts
that are within the company's organization should have
access to the images.
The company has a CI/CD process that runs frequently. The company wants to retain all
the tagged images. However, the company wants to retain only the five most recent untagged images.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create a private repository in Amazon ECR. Create a permissions policy for the repository that allows only required ECR operations. Include a condition to allow the ECR operations if the value of the aws:PrincipalOrgID condition key is equal to the ID of the company's organization. Add a lifecycle rule to the ECR repository that deletes all untagged images over the count of five. B. Create a public repository in Amazon ECR. Create an IAM role in the ECR account. Set permissions so that any account can assume the role if the value of the aws:PrincipalOrgID condition key is equal to the ID of the company's organization. Add a lifecycle rule to the ECR repository that deletes all untagged images over the count of five. C. Create a private repository in Amazon ECR. Create a permissions policy for the repository that includes only required ECR operations. Include a condition to allow the ECR operations for all account IDs in the organization. Schedule a daily Amazon EventBridge rule to invoke an AWS Lambda function that deletes all untagged images over the count of five. D. Create a public repository in Amazon ECR. Configure Amazon ECR to use an interface VPC endpoint with an endpoint policy that includes the required permissions for images that the company needs to pull. Include a condition to allow the ECR operations for all account IDs in the company's organization. Schedule a daily Amazon EventBridge rule to invoke an AWS Lambda function that deletes all untagged images over the count of five.
Answer: A
Explanation:
This option allows the company to use a private repository in Amazon ECR to
store and manage its Docker images securely and efficiently1. By creating a permissions
policy for the repository that allows only required ECR operations, such as
ecr:PutImage and ecr:InitiateLayerUpload, the company can restrict access to the
repository and prevent unauthorized actions. By including a condition to allow the ECR
operations if the value of the aws:PrincipalOrgID condition key is equal to the ID of the
company’s organization, the company can ensure that only accounts that are within its
organization can access the images3. By adding a lifecycle rule to the ECR repository that
deletes all untagged images over the count of five, the company can reduce storage costs
and retain only the most recent untagged images4.
References:
Amazon ECR private repositories
Amazon ECR repository policies
Restricting access to AWS Organizations members
Amazon ECR lifecycle policies
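A minimal sketch with Python (Boto3) of the repository policy condition and the lifecycle rule described above; the repository name, organization ID, and the specific ECR actions shown are illustrative assumptions.

import json
import boto3

ecr = boto3.client("ecr")
repo = "digital-wallet-app"   # hypothetical repository name

# Allow image pulls only from principals that belong to the organization.
repo_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPullFromOrg",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage",
                "ecr:BatchCheckLayerAvailability",
            ],
            "Condition": {"StringEquals": {"aws:PrincipalOrgID": "o-exampleorgid"}},
        }
    ],
}
ecr.set_repository_policy(repositoryName=repo, policyText=json.dumps(repo_policy))

# Keep only the five most recent untagged images.
lifecycle_policy = {
    "rules": [
        {
            "rulePriority": 1,
            "description": "Expire untagged images beyond the five most recent",
            "selection": {
                "tagStatus": "untagged",
                "countType": "imageCountMoreThan",
                "countNumber": 5,
            },
            "action": {"type": "expire"},
        }
    ]
}
ecr.put_lifecycle_policy(repositoryName=repo, lifecyclePolicyText=json.dumps(lifecycle_policy))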
Question # 80
A company wants to send data from its on-premises systems to Amazon S3 buckets. The
company created the S3 buckets in three different accounts. The company must send the
data privately without the data traveling across the internet The company has no existing
dedicated connectivity to AWS
Which combination of steps should a solutions architect take to meet these requirements?
(Select TWO.)
A. Establish a networking account in the AWS Cloud. Create a private VPC in the networking account. Set up an AWS Direct Connect connection with a private VIF between the on-premises environment and the private VPC. B. Establish a networking account in the AWS Cloud. Create a private VPC in the networking account. Set up an AWS Direct Connect connection with a public VIF between the on-premises environment and the private VPC. C. Create an Amazon S3 interface endpoint in the networking account. D. Create an Amazon S3 gateway endpoint in the networking account. E. Establish a networking account in the AWS Cloud. Create a private VPC in the networking account. Peer VPCs from the accounts that host the S3 buckets with the VPC in the networking account.
What should the solutions architect do to resolve the error?
A. Change the CORS configuration on the S3 bucket. Add rules for CORS to the AllowedOrigin element for www.example.com. B. Enable the CORS setting in AWS WAF. Create a web ACL rule in which the Access-Control-Allow-Origin header is set to www.example.com. C. Enable the CORS setting on the API Gateway API endpoint. Ensure that the API endpoint is configured to return all responses that have the Access-Control-Allow-Origin header set to www.example.com. D. Enable the CORS setting on the Lambda function. Ensure that the return code of the function has the Access-Control-Allow-Origin header set to www.example.com.
Answer: C
Explanation:
CORS errors occur when a web page hosted on one domain tries to make a request to a
server hosted on another domain. In this scenario, the registration form hosted on the static
website is trying to make a request to the API Gateway API endpoint hosted on a different
domain, which is causing the error. To resolve this error, the Access-Control-Allow-Origin
header needs to be set to the domain from which the request is being made. In this case,
the header is already set to www.example.com on the CloudFront distribution origin.
Therefore, the solutions architect should enable the CORS setting on the API Gateway API
endpoint and ensure that the API endpoint is configured to return all responses that have
the Access-Control-Allow-Origin header set to www.example.com. This will allow the API
endpoint to respond to requests from the static website without a CORS error.
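When the API uses a Lambda proxy integration, the backend response must also carry the CORS header that API Gateway returns to the browser. A minimal sketch of such a handler in Python, assuming a proxy integration; the methods, headers, and body shown are illustrative assumptions.

import json

def handler(event, context):
    # Return the CORS header so browsers loading www.example.com accept the response.
    return {
        "statusCode": 200,
        "headers": {
            "Access-Control-Allow-Origin": "https://www.example.com",
            "Access-Control-Allow-Methods": "OPTIONS,POST",
            "Access-Control-Allow-Headers": "Content-Type",
        },
        "body": json.dumps({"message": "registration received"}),
    }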
Question # 82
A company migrated an application to the AWS Cloud. The application runs on two
Amazon EC2 instances behind an Application Load Balancer (ALB). Application data is
stored in a MySQL database that runs on an additional EC2 instance. The application's use
of the database is read-heavy.
The application loads static content from Amazon Elastic Block Store (Amazon EBS) volumes that are attached to each EC2 instance. The static content is updated frequently and must be
copied to each EBS volume.
The load on the application changes throughout the day. During peak hours, the application
cannot handle all the incoming requests. Trace data shows that the database cannot
handle the read load during peak hours.
Which solution will improve the reliability of the application?
A. Migrate the application to a set of AWS Lambda functions. Set the Lambda functions as targets for the ALB. Create a new single EBS volume for the static content. Configure the Lambda functions to read from the new EBS volume. Migrate the database to an Amazon RDS for MySQL Multi-AZ DB cluster. B. Migrate the application to a set of AWS Step Functions state machines. Set the state machines as targets for the ALB. Create an Amazon Elastic File System (Amazon EFS) file system for the static content. Configure the state machines to read from the EFS file system. Migrate the database to Amazon Aurora MySQL Serverless v2 with a reader DB instance. C. Containerize the application. Migrate the application to an Amazon Elastic Container Service (Amazon ECS) cluster. Use the AWS Fargate launch type for the tasks that host the application. Create a new single EBS volume for the static content. Mount the new EBS volume on the ECS cluster. Configure AWS Application Auto Scaling on the ECS cluster. Set the ECS service as a target for the ALB. Migrate the database to an Amazon RDS for MySQL Multi-AZ DB cluster. D. Containerize the application. Migrate the application to an Amazon Elastic Container Service (Amazon ECS) cluster. Use the AWS Fargate launch type for the tasks that host the application. Create an Amazon Elastic File System (Amazon EFS) file system for the static content. Mount the EFS file system to each container. Configure AWS Application Auto Scaling on the ECS cluster. Set the ECS service as a target for the ALB. Migrate the database to Amazon Aurora MySQL Serverless v2 with a reader DB instance.
Answer: D
Explanation:
This solution will improve the reliability of the application by addressing the issues of
scalability, availability, and performance. Containerizing the application will make it easier
to deploy and manage on AWS. Migrating the application to an Amazon ECS cluster will
allow the application to run on a fully managed container orchestration service. Using the
AWS Fargate launch type for the tasks that host the application will enable the application
to run on serverless compute engines that are automatically provisioned and scaled by
AWS. Creating an Amazon EFS file system for the static content will provide a scalable and
shared storage solution that can be accessed by multiple containers. Mounting the EFS file
system to each container will eliminate the need to copy the static content to each EBS volume and ensure that the content is always up to date. Configuring AWS Application
Auto Scaling on the ECS cluster will enable the application to scale up and down based on
demand or a predefined schedule. Setting the ECS service as a target for the ALB will
distribute the incoming requests across multiple tasks in the ECS cluster and improve the
availability and fault tolerance of the application. Migrating the database to Amazon Aurora
MySQL Serverless v2 with a reader DB instance will provide a fully managed, compatible,
and scalable relational database service that can handle high throughput and concurrent
connections. Using a reader DB instance will offload some of the read load from the
primary DB instance and improve the performance of the database.
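A minimal sketch of registering a Fargate task definition that mounts the shared EFS file system, using Python (Boto3); the image, file system ID, role ARN, and container paths are hypothetical placeholders.

import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="web-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    executionRoleArn="arn:aws:iam::111122223333:role/ecsTaskExecutionRole",  # hypothetical
    containerDefinitions=[
        {
            "name": "web",
            "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/web-app:latest",  # hypothetical
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            # The shared static content is mounted from EFS instead of being
            # copied onto per-instance EBS volumes.
            "mountPoints": [{"sourceVolume": "static-content", "containerPath": "/var/www/static"}],
        }
    ],
    volumes=[
        {
            "name": "static-content",
            "efsVolumeConfiguration": {
                "fileSystemId": "fs-0123456789abcdef0",   # hypothetical file system
                "transitEncryption": "ENABLED",
            },
        }
    ],
)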
Question # 83
A company is using Amazon API Gateway to deploy a private REST API that will provide
access to sensitive data. The API must be accessible only from an application that is deployed in a VPC. The company deploys the API successfully. However, the API is not
accessible from an Amazon EC2 instance that is deployed in the VPC.
Which solution will provide connectivity between the EC2 instance and the API?
A. Create an interface VPC endpoint for API Gateway. Attach an endpoint policy that allows apigateway:* actions. Disable private DNS naming for the VPC endpoint. Configure an API resource policy that allows access from the VPC. Use the VPC endpoint's DNS name to access the API. B. Create an interface VPC endpoint for API Gateway. Attach an endpoint policy that allows the execute-api:Invoke action. Enable private DNS naming for the VPC endpoint. Configure an API resource policy that allows access from the VPC endpoint. Use the API endpoint's DNS names to access the API. C. Create a Network Load Balancer (NLB) and a VPC link. Configure private integration between API Gateway and the NLB. Use the API endpoint's DNS names to access the API. D. Create an Application Load Balancer (ALB) and a VPC Link. Configure private integration between API Gateway and the ALB. Use the ALB endpoint's DNS name to access the API.
Answer: B
Explanation: According to the AWS documentation, to access a private API from a VPC,
you need to do the following:
Create an interface VPC endpoint for API Gateway in your VPC. This creates a
private connection between your VPC and API Gateway.
Attach an endpoint policy to the VPC endpoint that allows the execute-api:Invoke
action for your private API. This grants permission to invoke your API from the
VPC.
Enable private DNS naming for the VPC endpoint. This allows you to use the
same DNS names for your private APIs as you would for public APIs.
Configure a resource policy for your private API that allows access from the VPC
endpoint. This controls who can access your API and under what conditions.
Use the API endpoint's DNS names to access the API from your VPC.
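A minimal sketch of a resource policy that restricts invocation to the interface VPC endpoint; the endpoint ID is a hypothetical placeholder, and the Resource value is resolved to the API's ARN when the policy is attached.

import json

private_api_resource_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
            "Condition": {
                # Only requests arriving through this VPC endpoint are allowed.
                "StringEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}  # hypothetical
            },
        }
    ],
}

print(json.dumps(private_api_resource_policy, indent=2))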
A solutions architect is creating an application that stores objects in an Amazon S3 bucket
The solutions architect must deploy the application in two AWS Regions that will be used
simultaneously The objects in the two S3 buckets must remain synchronized with each
other.
Which combination of steps will meet these requirements with the LEAST operational
overhead? (Select THREE)
A. Create an S3 Multi-Region Access Point. Change the application to refer to the Multi-Region Access Point. B. Configure two-way S3 Cross-Region Replication (CRR) between the two S3 buckets. C. Modify the application to store objects in each S3 bucket. D. Create an S3 Lifecycle rule for each S3 bucket to copy objects from one S3 bucket to the other S3 bucket. E. Enable S3 Versioning for each S3 bucket. F. Configure an event notification for each S3 bucket to invoke an AWS Lambda function to copy objects from one S3 bucket to the other S3 bucket.
A North American company with headquarters on the East Coast is deploying a new web application running on Amazon EC2 in the us-east-1 Region. The application should
dynamically scale to meet user demand and maintain resiliency. Additionally, the
application must have disaster recovery capabilities in an active-passive configuration with
the us-west-1 Region.
Which steps should a solutions architect take after creating a VPC in the us-east-1 Region?
A. Create a VPC in the us-west-1 Region. Use inter-Region VPC peering to connect bothVPCs. Deploy an Application Load Balancer (ALB) spanning multiple Availability Zones(AZs) to the VPC in the us-east-1 Region. Deploy EC2 instances across multiple AZs ineach Region as part of an Auto Scaling group spanning both VPCs and served by the ALB. B. Deploy an Application Load Balancer (ALB) spanning multiple Availability Zones (AZs)to the VPC in the us-east-1 Region. Deploy EC2 instances across multiple AZs as part ofan Auto Scaling group served by the ALB. Deploy the same solution to the us-west-1Region. Create an Amazon Route 53 record set with a failover routing policy and healthchecks enabled to provide high availability across both Regions. C. Create a VPC in the us-west-1 Region. Use inter-Region VPC peering to connect bothVPCs. Deploy an Application Load Balancer (ALB) that spans both VPCs. Deploy EC2instances across multiple Availability Zones as part of an Auto Scaling group in each VPCserved by the ALB. Create an Amazon Route 53 record that points to the ALB. D. Deploy an Application Load Balancer (ALB) spanning multiple Availability Zones (AZs)to the VPC in the us-east-1 Region. Deploy EC2 instances across multiple AZs as part ofan Auto Scaling group served by the ALB. Deploy the same solution to the us-west-1Region. Create separate Amazon Route 53 records in each Region that point to the ALB inthe Region. Use Route 53 health checks to provide high availability across both Regions.
Answer: B
Question # 87
A company needs to monitor a growing number of Amazon S3 buckets across two AWS
Regions. The company also needs to track the percentage of objects that are
encrypted in Amazon S3. The company needs a dashboard to display this information for
internal compliance teams.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create a new S3 Storage Lens dashboard in each Region to track bucket andencryption metrics. Aggregate data from both Region dashboards into a single dashboardin Amazon QuickSight for the compliance teams. B. Deploy an AWS Lambda function in each Region to list the number of buckets and theencryption status of objects. Store this data in Amazon S3. Use Amazon Athena queries todisplay the data on a custom dashboard in Amazon QuickSight for the compliance teams. C. Use the S3 Storage Lens default dashboard to track bucket and encryption metrics.Give the compliance teams access to the dashboard directly in the S3 console. D. Create an Amazon EventBridge rule to detect AWS Cloud Trail events for S3 objectcreation. Configure the rule to invoke an AWS Lambda function to record encryptionmetrics in Amazon DynamoDB. Use Amazon QuickSight to display the metrics in adashboard for the compliance teams.
Answer: C
Explanation:
This option uses the S3 Storage Lens default dashboard to track bucket and encryption
metrics across two AWS Regions. S3 Storage Lens is a feature that provides organization-wide
visibility into object storage usage and activity trends, and delivers actionable
recommendations to improve cost-efficiency and apply data protection best practices. S3
Storage Lens delivers more than 30 storage metrics, including metrics on encryption,
replication, and data protection. The default dashboard provides a summary of the entire
S3 usage and activity across all Regions and accounts in an organization. The company
can give the compliance teams access to the dashboard directly in the S3 console, which
requires the least operational overhead.
Question # 88
A financial services company runs a complex, multi-tier application on Amazon EC2
instances and AWS Lambda functions. The application stores temporary data in Amazon
S3. The S3 objects are valid for only 45 minutes and are deleted after 24 hours.
The company deploys each version of the application by launching an AWS
CloudFormation stack. The stack creates all resources that are required to run the
application. When the company deploys and validates a new application version, the
company deletes the CloudFormation stack of the old version.
The company recently tried to delete the CloudFormation stack of an old application
version, but the operation failed. An analysis shows that CloudFormation failed to delete an
existing S3 bucket. A solutions architect needs to resolve this issue without making major
changes to the application's architecture.
Which solution meets these requirements?
A. Implement a Lambda function that deletes all files from a given S3 bucket. Integrate thisLambda function as a custom resource into the CloudFormation stack. Ensure that thecustom resource has a DependsOn attribute that points to the S3 bucket's resource. B. Modify the CloudFormation template to provision an Amazon Elastic File System(Amazon EFS) file system to store the temporary files there instead of in Amazon S3.Configure the Lambda functions to run in the same VPC as the file system. Mount the filesystem to the EC2 instances and Lambda functions. C. Modify the CloudFormation stack to create an S3 Lifecycle rule that expires all objects45 minutes after creation. Add a DependsOn attribute that points to the S3 bucket'sresource. D. Modify the CloudFormation stack to attach a DeletionPolicy attribute with a value ofDelete to the S3 bucket.
Answer: D
Explanation: This option allows the solutions architect to use a DeletionPolicy
attribute to specify how AWS CloudFormation handles the deletion of an S3 bucket when
the stack is deleted1. By setting the value of Delete, the solutions architect can instruct
CloudFormation to delete the bucket and all of its contents1. This option does not require
any major changes to the application’s architecture or any additional resources.
References:
Deletion policies
Question # 89
A company is currently in the design phase of an application that will need an RPO of less
than 5 minutes and an RTO of less than 10 minutes. The solutions architecture team is
forecasting that the database will store approximately 10 TB of data. As part of the design,
they are looking for a database solution that will provide the company with the ability to fail
over to a secondary Region.
Which solution will meet these business requirements at the LOWEST cost?
A. Deploy an Amazon Aurora DB cluster and take snapshots of the cluster every 5minutes. Once a snapshot is complete, copy the snapshot to a secondary Region to serveas a backup in the event of a failure. B. Deploy an Amazon RDS instance with a cross-Region read replica in a secondaryRegion. In the event of a failure, promote the read replica to become the primary. C. Deploy an Amazon Aurora DB cluster in the primary Region and another in a secondaryRegion. Use AWS DMS to keep the secondary Region in sync. D. Deploy an Amazon RDS instance with a read replica in the same Region. In the event ofa failure, promote the read replica to become the primary.
Answer: B
Explanation: The best solution is to deploy an Amazon RDS instance with a cross-Region
read replica in a secondary Region. This will provide the company with a database solution
that can fail over to the secondary Region in case of a disaster. The read replica will have
minimal replication lag and can be promoted to become the primary in less than 10
minutes, meeting the RTO requirement. The RPO requirement of less than 5 minutes can
also be met by using synchronous replication within the primary Region and asynchronous replication across Regions. This solution will also have the lowest cost compared to the
other options, as it does not involve additional services or resources. References: [Amazon
RDS User Guide], [Amazon Aurora User Guide]
Question # 90
A financial company needs to create a separate AWS account for a new digital wallet
application. The company uses AWS Organizations to manage its accounts. A solutions
architect uses the IAM user Support1 from the management account to create a new member account.
What should the solutions architect do to create IAM users in the new member account?
A. Sign in to the AWS Management Console with AWS account root user credentials by using the 64-character password from the initial AWS Organizations [email protected]. Set up the IAM users as required. B. From the management account, switch roles to assume the OrganizationAccountAccessRole role with the account ID of the new member account. Set up the IAM users as required. C. Go to the AWS Management Console sign-in page. Choose "Sign in using root account credentials." Sign in by using the email address [email protected] and the management account's root password. Set up the IAM users as required. D. Go to the AWS Management Console sign-in page. Sign in by using the account ID of the new member account and the Support1 IAM credentials. Set up the IAM users as required.
Answer: B
Explanation:
When AWS Organizations creates a member account, it automatically creates an IAM role named
OrganizationAccountAccessRole in that account. The role grants administrative permissions and trusts
the management account. The solutions architect can therefore switch roles from the management
account, assume OrganizationAccountAccessRole by specifying the account ID of the new member
account, and then create the required IAM users. Signing in directly with the member account's root
user or with IAM credentials that exist only in the management account is either not possible or not a
recommended practice. References: Accessing a member account that has a management account
access role - AWS Organizations
Question # 91
A company has a solution that analyzes weather data from thousands of weather stations.
The weather stations send the data over an Amazon API Gateway REST API that has an
AWS Lambda function integration. The Lambda function calls a third-party service for data
pre-processing. The third-party service gets overloaded and fails the pre-processing,
causing a loss of data.
A solutions architect must improve the resiliency of the solution. The solutions architect
must ensure that no data is lost and that data can be processed later if failures occur.
What should the solutions architect do to meet these requirements?
A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Configure the queueas the dead-letter queue for the API. B. Create two Amazon Simple Queue Service (Amazon SQS) queues: a primary queueand a secondary queue. Configure the secondary queue as the dead-letter queue for theprimary queue. Update the API to use a new integration to the primary queue. Configurethe Lambda function as the invocation target for the primary queue. C. Create two Amazon EventBridge event buses: a primary event bus and a secondaryevent bus. Update the API to use a new integration to the primary event bus. Configure anEventBridge rule to react to all events on the primary event bus. Specify the Lambdafunction as the target of the rule. Configure the secondary event bus as the failuredestination for the Lambda function. D. Create a custom Amazon EventBridge event bus. Configure the event bus as the failuredestination for the Lambda function.
Answer: C
Explanation:
This option allows the solution to decouple the API from the Lambda function
and use EventBridge as an event-driven service that can handle failures gracefully1. By
using two event buses, one for normal events and one for failed events, the solution can
ensure that no data is lost and that data can be processed later if failures occur2. The
primary event bus receives the data from the weather stations through the API integration
and triggers the Lambda function through a rule. The Lambda function can then call the third-party service for data pre-processing. If the third-party service fails, the Lambda
function can send an error response to EventBridge, which will route it to the secondary
event bus as a failure destination3. The secondary event bus can then store the failed
events in another service, such as Amazon S3 or Amazon SQS, for troubleshooting or
reprocessing.
References:
Using Amazon EventBridge with AWS Lambda
Using multiple event buses
Using failure destinations
[Using dead-letter queues]
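A minimal sketch of configuring a failure destination for the asynchronously invoked Lambda function with Python (Boto3); the function name and event bus ARN are hypothetical placeholders.

import boto3

lambda_client = boto3.client("lambda")

# Route failed asynchronous invocations to the secondary event bus so the
# weather data is preserved and can be reprocessed later.
lambda_client.put_function_event_invoke_config(
    FunctionName="weather-preprocessing",               # hypothetical function name
    MaximumRetryAttempts=2,
    DestinationConfig={
        "OnFailure": {
            "Destination": "arn:aws:events:us-east-1:111122223333:event-bus/failed-weather-events"  # hypothetical
        }
    },
)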
Question # 92
A research center is migrating to the AWS Cloud and has moved its on-premises 1 PB
object storage to an Amazon S3 bucket. One hundred scientists are using this object
storage to store their work-related documents. Each scientist has a personal folder on the
object store. All the scientists are members of a single IAM user group.
The research center's compliance officer is worried that scientists will be able to access
each other's work. The research center has a strict obligation to report on which scientist
accesses which documents. The team that is responsible for these reports has little AWS experience and wants a
ready-to-use solution that minimizes operational overhead.
Which combination of actions should a solutions architect take to meet these
requirements? (Select TWO.)
A. Create an identity policy that grants the user read and write access. Add a condition thatspecifies that the S3 paths must be prefixed with ${aws:username}. Apply the policy on thescientists' IAM user group. B. Configure a trail with AWS CloudTrail to capture all object-level events in the S3 bucket.Store the trail output in another S3 bucket. Use Amazon Athena to query the logs andgenerate reports. C. Enable S3 server access logging. Configure another S3 bucket as the target for logdelivery. Use Amazon Athena to query the logs and generate reports. D. Create an S3 bucket policy that grants read and write access to users in the scientists'IAM user group. E. Configure a trail with AWS CloudTrail to capture all object-level events in the S3 bucketand write the events to Amazon CloudWatch. Use the Amazon Athena CloudWatchconnector to query the logs and generate reports.
Answer: A,B
Explanation: This option allows the solutions architect to use an identity
policy that grants the user read and write access to their own personal folder on the S3
bucket1. By adding a condition that specifies that the S3 paths must be prefixed with
${aws:username}, the solutions architect can ensure that each scientist can only access
their own folder2. By applying the policy on the scientists’ IAM user group, the solutions
architect can simplify the management of permissions for all the scientists3. By configuring
a trail with AWS CloudTrail to capture all object-level events in the S3 bucket, the solutions
architect can record and store information about every action performed on the S3
objects4. By storing the trail output in another S3 bucket, the solutions architect can secure
and archive the log files5. By using Amazon Athena to query the logs and generate reports,
the solutions architect can use a serverless interactive query service that can analyze data
in S3 using standard SQL.
References:
Identity-based policies
Policy variables
IAM groups
Object-level logging
Creating a trail that applies to all regions
[What is Amazon Athena?]
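A minimal sketch of the identity policy from option A; the bucket name is a hypothetical placeholder, and ${aws:username} is resolved by IAM at request time to each scientist's user name.

import json

per_scientist_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListOwnFolderOnly",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::research-documents",        # hypothetical bucket
            "Condition": {"StringLike": {"s3:prefix": "${aws:username}/*"}},
        },
        {
            "Sid": "ReadWriteOwnFolderOnly",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::research-documents/${aws:username}/*",
        },
    ],
}

print(json.dumps(per_scientist_policy, indent=2))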
Question # 93
A company is using AWS Organizations with a multi-account architecture. The company's
current security configuration for the account architecture includes SCPs, resource-based
policies, identity-based policies, trust policies, and session policies.
A solutions architect needs to allow an IAM user in Account A to assume a role in Account
B.
Which combination of steps must the solutions architect take to meet this requirement?
(Select THREE.)
A. Configure the SCP for Account A to allow the action. B. Configure the resource-based policies to allow the action. C. Configure the identity-based policy on the user in Account A to allow the action. D. Configure the identity-based policy on the user in Account B to allow the action. E. Configure the trust policy on the target role in Account B to allow the action. F. Configure the session policy to allow the action and to be passed programmatically bythe GetSessionToken API operation.
Answer: B,C,E
Explanation:
Resource-based policies are policies that you attach to a resource, such as an
IAM role, to specify who can access the resource and what actions they can perform on
it1. Identity-based policies are policies that you attach to an IAM user, group, or role to
specify what actions they can perform on which resources2. Trust policies are special types
of resource-based policies that define which principals (such as IAM users or roles) can
assume a role3.
To allow an IAM user in Account A to assume a role in Account B, the solutions architect
needs to do the following:
Configure the resource-based policy on the target role in Account B to allow the
action sts:AssumeRole for the IAM user in Account A. This policy grants
permission to the IAM user to assume the role4.
Configure the identity-based policy on the user in Account A to allow the action
sts:AssumeRole for the target role in Account B. This policy grants permission to
the user to perform the action of assuming the role5.
Configure the trust policy on the target role in Account B to allow the principal of
the IAM user in Account A. This policy defines who can assume the role.
References:
Resource-based policies
Identity-based policies
Trust policies
Granting a user permissions to switch roles
Switching roles
[Modifying a role trust policy]
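To make these pieces concrete, here is a minimal sketch of the trust policy on the role in Account B and the identity-based policy on the user in Account A; the account IDs, user name, and role name are hypothetical placeholders.

import json

# Trust policy attached to the role in Account B (222222222222).
trust_policy_account_b = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111111111111:user/developer-a"},  # user in Account A
            "Action": "sts:AssumeRole",
        }
    ],
}

# Identity-based policy attached to the user in Account A (111111111111).
identity_policy_account_a = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::222222222222:role/cross-account-role",
        }
    ],
}

print(json.dumps(trust_policy_account_b, indent=2))
print(json.dumps(identity_policy_account_a, indent=2))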
Question # 94
A company is migrating its infrastructure to the AWS Cloud. The company must comply
with a variety of regulatory standards for different projects. The company needs a multi-account
environment.
A solutions architect needs to prepare the baseline infrastructure. The solution must
provide a consistent baseline of management and security, but it must allow flexibility for
different compliance requirements within various AWS accounts. The solution also needs
to integrate with the existing on-premises Active Directory Federation Services (AD FS)
server.
Which solution meets these requirements with the LEAST amount of operational
overhead?
A. Create an organization in AWS Organizations. Create a single SCP for least privilege access across all accounts. Create a single OU for all accounts. Configure an IAM identity provider for federation with the on-premises AD FS server. Configure a central logging account with a defined process for log generating services to send log events to the central account. Enable AWS Config in the central account with conformance packs for all accounts. B. Create an organization in AWS Organizations. Enable AWS Control Tower on the organization. Review included controls (guardrails) for SCPs. Check AWS Config for areas that require additions. Add OUs as necessary. Connect AWS IAM Identity Center (AWS Single Sign-On) to the on-premises AD FS server. C. Create an organization in AWS Organizations. Create SCPs for least privilege access. Create an OU structure, and use it to group AWS accounts. Connect AWS IAM Identity Center (AWS Single Sign-On) to the on-premises AD FS server. Configure a central logging account with a defined process for log generating services to send log events to the central account. Enable AWS Config in the central account with aggregators and conformance packs. D. Create an organization in AWS Organizations. Enable AWS Control Tower on the organization. Review included controls (guardrails) for SCPs. Check AWS Config for areas that require additions. Configure an IAM identity provider for federation with the on-premises AD FS server.
Answer: B
Question # 95
A company needs to store and process image data that will be uploaded from mobile
devices using a custom mobile app. Usage peaks between 8 AM and 5 PM on weekdays,
with thousands of uploads per minute. The app is rarely used at any other time. A user is
notified when image processing is complete.
Which combination of actions should a solutions architect take to ensure image processing
can scale to handle the load? (Select THREE.)
A. Upload files from the mobile software directly to Amazon S3. Use S3 event notifications to create a message in an Amazon MQ queue. B. Upload files from the mobile software directly to Amazon S3. Use S3 event notifications to create a message in an Amazon Simple Queue Service (Amazon SQS) standard queue. C. Invoke an AWS Lambda function to perform image processing when a message is available in the queue. D. Invoke an S3 Batch Operations job to perform image processing when a message is available in the queue. E. Send a push notification to the mobile app by using Amazon Simple Notification Service (Amazon SNS) when processing is complete. F. Send a push notification to the mobile app by using Amazon Simple Email Service (Amazon SES) when processing is complete.
Answer: B,C,E
Explanation:
The best solution is to upload files from the mobile software directly to Amazon S3, use S3
event notifications to create a message in an Amazon Simple Queue Service (Amazon
SQS) standard queue, and invoke an AWS Lambda function to perform image processing
when a message is available in the queue. This solution will ensure that image processing
can scale to handle the load, as Amazon S3 can store any amount of data and handle
concurrent uploads, Amazon SQS can buffer the messages and deliver them reliably, and
AWS Lambda can run code without provisioning or managing servers and scale
automatically based on the demand. This solution will also notify the user when processing
is complete by sending a push notification to the mobile app using Amazon Simple
Notification Service (Amazon SNS), which is a web service that enables applications to
send and receive notifications from the cloud. This solution is more cost-effective than
using Amazon MQ, which is a managed message broker service for Apache ActiveMQ that
requires a dedicated broker instance, or S3 Batch Operations, which is a feature that
allows users to perform bulk actions on S3 objects, such as copying or tagging, but does
not support custom code execution. This solution is also more suitable than using Amazon
Simple Email Service (Amazon SES), which is a web service that enables applications to
send and receive email messages but does not support push notifications for mobile apps.
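A minimal sketch of wiring the S3 bucket to the SQS queue and the queue to the Lambda function with Python (Boto3); the bucket, queue, and function names are hypothetical placeholders, and the queue policy that allows S3 to send messages is assumed to already be in place.

import boto3

s3 = boto3.client("s3")
lambda_client = boto3.client("lambda")

queue_arn = "arn:aws:sqs:us-east-1:111122223333:image-processing-queue"   # hypothetical

# Send a message to the queue for every uploaded object.
s3.put_bucket_notification_configuration(
    Bucket="mobile-image-uploads",                                         # hypothetical bucket
    NotificationConfiguration={
        "QueueConfigurations": [
            {"QueueArn": queue_arn, "Events": ["s3:ObjectCreated:*"]}
        ]
    },
)

# Let Lambda poll the queue and process uploaded images in batches.
lambda_client.create_event_source_mapping(
    EventSourceArn=queue_arn,
    FunctionName="process-image",                                          # hypothetical function
    BatchSize=10,
)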
Question # 96
A company has mounted sensors to collect information about environmental parameters
such as humidity and light throughout all the company's factories. The company needs to
stream and analyze the data in the AWS Cloud in real time. If any of the parameters fall out
of acceptable ranges, the factory operations team must receive a notification immediately.
Which solution will meet these requirements?
A. Stream the data to an Amazon Kinesis Data Firehose delivery stream. Use AWS Step Functions to consume and analyze the data in the Kinesis Data Firehose delivery stream. Use Amazon Simple Notification Service (Amazon SNS) to notify the operations team. B. Stream the data to an Amazon Managed Streaming for Apache Kafka (Amazon MSK) cluster. Set up a trigger in Amazon MSK to invoke an AWS Fargate task to analyze the data. Use Amazon Simple Email Service (Amazon SES) to notify the operations team. C. Stream the data to an Amazon Kinesis data stream. Create an AWS Lambda function to consume the Kinesis data stream and to analyze the data. Use Amazon Simple Notification Service (Amazon SNS) to notify the operations team. D. Stream the data to an Amazon Kinesis Data Analytics application. Use an automatically scaled and containerized service in Amazon Elastic Container Service (Amazon ECS) to consume and analyze the data. Use Amazon Simple Email Service (Amazon SES) to notify the operations team.
Answer: C
Question # 97
A software company needs to create short-lived test environments to test pull requests as
part of its development process. Each test environment consists of a single Amazon EC2 instance that is in an Auto Scaling group.
The test environments must be able to communicate with a central server to report test
results. The central server is located in an on-premises data center. A solutions architect
must implement a solution so that the company can create and delete test environments
without any manual intervention. The company has created a transit gateway with a VPN
attachment to the on-premises network.
Which solution will meet these requirements with the LEAST operational overhead?
A. Create an AWS CloudFormation template that contains a transit gateway attachment and related routing configurations. Create a CloudFormation stack set that includes this template. Use CloudFormation StackSets to deploy a new stack for each VPC in the account. Deploy a new VPC for each test environment. B. Create a single VPC for the test environments. Include a transit gateway attachment and related routing configurations. Use AWS CloudFormation to deploy all test environments into the VPC. C. Create a new OU in AWS Organizations for testing. Create an AWS CloudFormation template that contains a VPC, necessary networking resources, a transit gateway attachment, and related routing configurations. Create a CloudFormation stack set that includes this template. Use CloudFormation StackSets for deployments into each account under the testing OU. Create a new account for each test environment. D. Convert the test environment EC2 instances into Docker images. Use AWS CloudFormation to configure an Amazon Elastic Kubernetes Service (Amazon EKS) cluster in a new VPC, create a transit gateway attachment, and create related routing configurations. Use Kubernetes to manage the deployment and lifecycle of the test environments.
Answer: B
Explanation:
This option allows the company to use a single VPC to host multiple test
environments that are isolated from each other by using different subnets and security
groups1. By including a transit gateway attachment and related routing configurations, the
company can enable the test environments to communicate with the central server in the
on-premises data center through a VPN connection2. By using AWS CloudFormation to
deploy all test environments into the VPC, the company can automate the creation and
deletion of test environments without any manual intervention3. This option also minimizes
the operational overhead by reducing the number of VPCs, accounts, and resources that
need to be managed.
References:
Working with VPCs and subnets
Working with transit gateways
Working with AWS CloudFormation stacks
Question # 98
A company is deploying AWS Lambda functions that access an Amazon RDS for
PostgreSQL database. The company needs to launch the Lambda functions in a QA
environment and in a production environment.
The company must not expose credentials within application code and must rotate
passwords automatically.
Which solution will meet these requirements?
A. Store the database credentials for both environments in AWS Systems Manager Parameter Store. Encrypt the credentials by using an AWS Key Management Service (AWS KMS) key. Within the application code of the Lambda functions, pull the credentials from the Parameter Store parameter by using the AWS SDK for Python (Boto3). Add a role to the Lambda functions to provide access to the Parameter Store parameter. B. Store the database credentials for both environments in AWS Secrets Manager with distinct key entry for the QA environment and the production environment. Turn on rotation. Provide a reference to the Secrets Manager key as an environment variable for the Lambda functions. C. Store the database credentials for both environments in AWS Key Management Service (AWS KMS). Turn on rotation. Provide a reference to the credentials that are stored in AWS KMS as an environment variable for the Lambda functions. D. Create separate S3 buckets for the QA environment and the production environment. Turn on server-side encryption with AWS KMS keys (SSE-KMS) for the S3 buckets. Use an object naming pattern that gives each Lambda function's application code the ability to pull the correct credentials for the function's corresponding environment. Grant each Lambda function's execution role access to Amazon S3.
Answer: B
Explanation: The best solution is to store the database credentials for both environments
in AWS Secrets Manager with distinct key entry for the QA environment and the production
environment. AWS Secrets Manager is a web service that can securely store, manage, and
retrieve secrets, such as database credentials. AWS Secrets Manager also supports
automatic rotation of secrets by using Lambda functions or built-in rotation templates. By
storing the database credentials for both environments in AWS Secrets Manager, the
company can avoid exposing credentials within application code and rotate passwords
automatically. By providing a reference to the Secrets Manager key as an environment
variable for the Lambda functions, the company can easily access the credentials from the
code by using the AWS SDK. This solution meets all the requirements of the company.
References: AWS Secrets Manager Documentation, Using AWS Lambda with AWS
Secrets Manager, Using environment variables - AWS Lambda
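A minimal sketch of how one of the Lambda functions could read the credentials at runtime with Python (Boto3); the environment variable name and the fields stored in the secret are illustrative assumptions.

import json
import os

import boto3

secrets_client = boto3.client("secretsmanager")

def handler(event, context):
    # The secret ARN for the current environment (QA or production) is supplied
    # as an environment variable, so no credentials appear in the code.
    secret = secrets_client.get_secret_value(SecretId=os.environ["DB_SECRET_ARN"])
    credentials = json.loads(secret["SecretString"])

    # credentials["username"], credentials["password"], and credentials["host"]
    # would be passed to the PostgreSQL client library here.
    return {"statusCode": 200, "body": "connected as " + credentials["username"]}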
Question # 99
A company has a legacy application that runs on multiple .NET Framework components.
The components share the same Microsoft SQL Server database and
communicate with each other asynchronously by using Microsoft Message Queueing
(MSMQ).
The company is starting a migration to containerized .NET Core components and wants to
refactor the application to run on AWS. The .NET Core components require complex
orchestration. The company must have full control over networking and host configuration.
The application's database model is strongly relational.
Which solution will meet these requirements?
A. Host the .NET Core components on AWS App Runner. Host the database on AmazonRDS for SQL Server. Use Amazon EventBridge for asynchronous messaging. B. Host the .NET Core components on Amazon Elastic Container Service (Amazon ECS)with the AWS Fargate launch type. Host the database on Amazon DynamoDB. UseAmazon Simple Notification Service (Amazon SNS) for asynchronous messaging. C. Host the .NET Core components on AWS Elastic Beanstalk. Host the database on Amazon Aurora PostgreSQL Serverless v2. Use Amazon Managed Streaming for ApacheKafka (Amazon MSK) for asynchronous messaging. D. Host the .NET Core components on Amazon Elastic Container Service (Amazon ECS)with the Amazon EC2 launch type. Host the database on Amazon Aurora MySQLServerless v2. Use Amazon Simple Queue Service (Amazon SQS) for asynchronousmessaging.
Answer: D
Explanation:
Hosting the .NET Core components on Amazon ECS with the Amazon EC2 launch type will
meet the requirements of having complex orchestration and full control over networking
and host configuration. Amazon ECS is a fully managed container orchestration service
that supports both AWS Fargate and Amazon EC2 as launch types. The Amazon EC2
launch type allows users to choose their own EC2 instances, configure their own
networking settings, and access their own host operating systems. Hosting the database
on Amazon Aurora MySQL Serverless v2 will meet the requirements of having a strongly
relational database model and using the same database engine as SQL Server. MySQL is
a compatible relational database engine with SQL Server, and it can support most of the
legacy application’s database model. Amazon Aurora MySQL Serverless v2 is a serverless
version of Amazon Aurora MySQL that can scale up and down automatically based on
demand. Using Amazon SQS for asynchronous messaging will meet the requirements of
providing a compatible replacement for MSMQ, which is a queue-based messaging
system3. Amazon SQS is a fully managed message queuing service that enables
decoupled and scalable microservices, distributed systems, and serverless applications.
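As a rough illustration of how the MSMQ-style messaging maps onto Amazon SQS, the sketch below enqueues and consumes a message with Boto3. The queue name and message body are assumptions made for the example only.

    import boto3

    sqs = boto3.client("sqs")
    queue_url = sqs.create_queue(QueueName="legacy-component-queue")["QueueUrl"]

    # Producer component: enqueue a message instead of writing to an MSMQ queue.
    sqs.send_message(QueueUrl=queue_url, MessageBody='{"orderId": "12345"}')

    # Consumer component: long-poll the queue and delete messages once processed.
    messages = sqs.receive_message(
        QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    for message in messages.get("Messages", []):
        print(message["Body"])  # process the transaction payload here
        sqs.delete_message(
            QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"]
        )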
Question # 100
A research company is running daily simulations in the AWS Cloud to meet high demand.
The simulations run on several hundred Amazon EC2 instances that are based on
Amazon Linux 2. Occasionally, a simulation gets stuck and requires a cloud operations
engineer to solve the problem by connecting to an EC2 instance through SSH.
Company policy states that no EC2 instance can use the same SSH key and that all
connections must be logged in AWS CloudTrail.
How can a solutions architect meet these requirements?
A. Launch new EC2 instances, and generate an individual SSH key for each instance. Store the SSH key in AWS Secrets Manager. Create a new IAM policy, and attach it to the engineers' IAM role with an Allow statement for the GetSecretValue action. Instruct the engineers to fetch the SSH key from Secrets Manager when they connect through any SSH client.
B. Create an AWS Systems Manager document to run commands on EC2 instances to set a new unique SSH key. Create a new IAM policy, and attach it to the engineers' IAM role with an Allow statement to run Systems Manager documents. Instruct the engineers to run the document to set an SSH key and to connect through any SSH client.
C. Launch new EC2 instances without setting up any SSH key for the instances. Set up EC2 Instance Connect on each instance. Create a new IAM policy, and attach it to the engineers' IAM role with an Allow statement for the SendSSHPublicKey action. Instruct the engineers to connect to the instance by using a browser-based SSH client from the EC2 console.
D. Set up AWS Secrets Manager to store the EC2 SSH key. Create a new AWS Lambda function to create a new SSH key and to call AWS Systems Manager Session Manager to set the SSH key on the EC2 instance. Configure Secrets Manager to use the Lambda function for automatic rotation once daily. Instruct the engineers to fetch the SSH key from Secrets Manager when they connect through any SSH client.
Answer: C
Question # 101
A company wants to migrate its on-premises data center to the AWS Cloud. This includes
thousands of virtualized Linux and Microsoft Windows servers, SAN storage, Java and
PHP applications with MySQL and Oracle databases. There are many dependent services
hosted either in the same data center or externally.
The technical documentation is incomplete and outdated. A solutions architect needs to
understand the current environment and estimate the cloud resource costs after the
migration.
Which tools or services should solutions architect use to plan the cloud migration? (Choose
three.)
A. AWS Application Discovery Service
B. AWS SMS
C. AWS X-Ray
D. AWS Cloud Adoption Readiness Tool (CART)
E. Amazon Inspector
F. AWS Migration Hub
Answer: A,D,F
Question # 102
A company runs many workloads on AWS and uses AWS Organizations to manage its
accounts. The workloads are hosted on Amazon EC2, AWS Fargate, and AWS Lambda.
Some of the workloads have unpredictable demand. Accounts record high usage in some
months and low usage in other months.
The company wants to optimize its compute costs over the next 3 years. A solutions
architect obtains a 6-month average for each of the accounts across the organization to
calculate usage.
Which solution will provide the MOST cost savings for all the organization's compute
usage?
A. Purchase Reserved Instances for the organization to match the size and number of the most common EC2 instances from the member accounts.
B. Purchase a Compute Savings Plan for the organization from the management account by using the recommendation at the management account level.
C. Purchase Reserved Instances for each member account that had high EC2 usage according to the data from the last 6 months.
D. Purchase an EC2 Instance Savings Plan for each member account from the management account based on EC2 usage data from the last 6 months.
Answer: B
Question # 103
A solutions architect is determining the DNS strategy for an existing VPC. The VPC is
provisioned to use the 10.24.34.0/24 CIDR block. The VPC also uses Amazon Route 53
Resolver for DNS. New requirements mandate that DNS queries must use private hosted
zones. Additionally, instances that have public IP addresses must receive corresponding
public hostnames.
Which solution will meet these requirements to ensure that the domain names are correctly
resolved within the VPC?
A. Create a private hosted zone. Activate the enableDnsSupport attribute and the enableDnsHostnames attribute for the VPC. Update the VPC DHCP options set to include domain-name-servers=10.24.34.2.
B. Create a private hosted zone. Associate the private hosted zone with the VPC. Activate the enableDnsSupport attribute and the enableDnsHostnames attribute for the VPC. Create a new VPC DHCP options set, and configure domain-name-servers=AmazonProvidedDNS. Associate the new DHCP options set with the VPC.
C. Deactivate the enableDnsSupport attribute for the VPC. Activate the enableDnsHostnames attribute for the VPC. Create a new VPC DHCP options set, and configure domain-name-servers=10.24.34.2. Associate the new DHCP options set with the VPC.
D. Create a private hosted zone. Associate the private hosted zone with the VPC. Activate the enableDnsSupport attribute for the VPC. Deactivate the enableDnsHostnames attribute for the VPC. Update the VPC DHCP options set to include domain-name-servers=AmazonProvidedDNS.
Answer: B
Explanation:
This option allows the solutions architect to use a private hosted zone to host DNS records that are only accessible within the VPC1. By associating the private hosted
zone with the VPC, the solutions architect can ensure that DNS queries from the VPC are
routed to the private hosted zone2. By activating the enableDnsSupport attribute and the
enableDnsHostnames attribute for the VPC, the solutions architect can enable DNS
resolution and hostname assignment for instances in the VPC3. By creating a new VPC
DHCP options set, and configuring domain-name-servers=AmazonProvidedDNS, the
solutions architect can use Amazon-provided DNS servers to resolve DNS queries from
instances in the VPC4. By associating the new DHCP options set with the VPC, the
solutions architect can apply the DNS settings to all instances in the VPC5.
References:
What is Amazon Route 53 Resolver?
Associating a private hosted zone with your VPC
Using DNS with your VPC
DHCP options sets
Modifying your DHCP options
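The steps the explanation walks through can also be expressed with Boto3. The sketch below is only an illustration: the VPC ID, hosted zone ID, and Region are assumed placeholders.

    import boto3

    ec2 = boto3.client("ec2")
    route53 = boto3.client("route53")

    vpc_id = "vpc-0123456789abcdef0"  # assumed existing VPC (10.24.34.0/24)

    # Turn on DNS resolution and DNS hostnames for the VPC (one attribute per call).
    ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsSupport={"Value": True})
    ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsHostnames={"Value": True})

    # Associate an existing private hosted zone with the VPC.
    route53.associate_vpc_with_hosted_zone(
        HostedZoneId="Z0EXAMPLE",  # assumed private hosted zone ID
        VPC={"VPCRegion": "us-east-1", "VPCId": vpc_id},
    )

    # Create a DHCP options set that points at the Amazon-provided DNS resolver
    # and attach it to the VPC.
    dhcp = ec2.create_dhcp_options(
        DhcpConfigurations=[
            {"Key": "domain-name-servers", "Values": ["AmazonProvidedDNS"]}
        ]
    )
    ec2.associate_dhcp_options(
        DhcpOptionsId=dhcp["DhcpOptions"]["DhcpOptionsId"], VpcId=vpc_id
    )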
Question # 104
A large company is migrating its entire IT portfolio to AWS. Each business unit in the
company has a standalone AWS account that supports both development and test
environments. New accounts to support production workloads will be needed soon.
The finance department requires a centralized method for payment but must maintain
visibility into each group's spending to allocate costs.
The security team requires a centralized mechanism to control IAM usage in all the
company's accounts.
Which combination of the following options meets the company's needs with the LEAST
effort? (Select TWO.)
A. Use a collection of parameterized AWS CloudFormation templates defining common IAM permissions that are launched into each account. Require all new and existing accounts to launch the appropriate stacks to enforce the least privilege model.
B. Use AWS Organizations to create a new organization from a chosen payer account and define an organizational unit hierarchy. Invite the existing accounts to join the organization and create new accounts using Organizations.
C. Require each business unit to use its own AWS accounts. Tag each AWS account appropriately and enable Cost Explorer to administer chargebacks.
D. Enable all features of AWS Organizations and establish appropriate service control policies that filter IAM permissions for sub-accounts.
E. Consolidate all of the company's AWS accounts into a single AWS account. Use tags for billing purposes and the IAM Access Advisor feature to enforce the least privilege model.
Answer: B,D
Explanation:
Option B is correct because AWS Organizations allows a company to create a new
organization from a chosen payer account and define an organizational unit
hierarchy. This way, the finance department can have a centralized method for
payment but also maintain visibility into each group’s spending to allocate costs.
The company can also invite the existing accounts to join the organization and
create new accounts using Organizations, which simplifies the account
management process.
Option D is correct because enabling all features of AWS Organizations and
establishing appropriate service control policies (SCPs) that filter IAM permissions
for sub-accounts allows the security team to have a centralized mechanism to
control IAM usage in all the company’s accounts. SCPs are policies that specify
the maximum permissions for an organization or organizational unit (OU), and they
can be used to restrict access to certain services or actions across all accounts in
an organization.
Option A is incorrect because using a collection of parameterized AWS
CloudFormation templates defining common IAM permissions that are launched
into each account requires more effort than using SCPs. Moreover, it does not
provide a centralized mechanism to control IAM usage, as each account would
have to launch the appropriate stacks to enforce the least privilege model.
Option C is incorrect because requiring each business unit to use its own AWS
accounts does not provide a centralized method for payment or a centralized
mechanism to control IAM usage. Tagging each AWS account appropriately and
enabling Cost Explorer to administer chargebacks may help with cost allocation,
but it is not as efficient as using AWS Organizations.
Option E is incorrect because consolidating all of the company’s AWS accounts
into a single AWS account does not provide visibility into each group’s spending or
a way to control IAM usage for different business units. Using tags for billing
purposes and the IAM’s Access Advisor feature to enforce the least privilege
model may help with cost optimization and security, but it is not as scalable or
flexible as using AWS Organizations.
References:
AWS Organizations
Service Control Policies
AWS CloudFormation
Cost Explorer
IAM Access Advisor
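To make the SCP mechanism in option D more concrete, here is a hedged Boto3 sketch that creates and attaches a service control policy. The statement contents and the OU ID are illustrative assumptions; a real guardrail would reflect the security team's actual IAM restrictions.

    import json

    import boto3

    org = boto3.client("organizations")

    scp_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                # Example guardrail: block IAM user and access key creation
                # outside approved tooling.
                "Effect": "Deny",
                "Action": ["iam:CreateUser", "iam:CreateAccessKey"],
                "Resource": "*",
            }
        ],
    }

    policy = org.create_policy(
        Name="restrict-iam-usage",
        Description="Centralized IAM guardrail for member accounts",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(scp_document),
    )

    # Attach the SCP to an organizational unit so it applies to every account in it.
    org.attach_policy(
        PolicyId=policy["Policy"]["PolicySummary"]["Id"],
        TargetId="ou-abcd-11111111",  # assumed OU ID
    )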
Question # 105
An enterprise company is building an infrastructure services platform for its users. The
company has the following requirements:
Provide least privilege access to users when launching AWS infrastructure so
users cannot provision unapproved services.
Use a central account to manage the creation of infrastructure services.
Provide the ability to distribute infrastructure services to multiple accounts in AWS
Organizations.
Provide the ability to enforce tags on any infrastructure that is started by users.
Which combination of actions using AWS services will meet these requirements? (Choose
three.)
A. Develop infrastructure services using AWS CloudFormation templates. Add the templates to a central Amazon S3 bucket and add the IAM roles or users that require access to the S3 bucket policy.
B. Develop infrastructure services using AWS CloudFormation templates. Upload each template as an AWS Service Catalog product to portfolios created in a central AWS account. Share these portfolios with the Organizations structure created for the company.
C. Allow user IAM roles to have AWSCloudFormationFullAccess and AmazonS3ReadOnlyAccess permissions. Add an Organizations SCP at the AWS account root user level to deny all services except AWS CloudFormation and Amazon S3.
D. Allow user IAM roles to have ServiceCatalogEndUserAccess permissions only. Use an automation script to import the central portfolios to local AWS accounts, copy the TagOption, assign users access, and apply launch constraints.
E. Use the AWS Service Catalog TagOption Library to maintain a list of tags required by the company. Apply the TagOption to AWS Service Catalog products or portfolios.
F. Use the AWS CloudFormation Resource Tags property to enforce the application of tags to any CloudFormation templates that will be created for users.
Answer: B,D,E
Explanation:
Developing infrastructure services using AWS CloudFormation templates and
uploading them as AWS Service Catalog products to portfolios created in a central
AWS account will enable the company to centrally manage the creation of
infrastructure services and control who can use them1. AWS Service Catalog
allows you to create and manage catalogs of IT services that are approved for use
on AWS2. You can organize products into portfolios, which are collections of
products along with configuration information3. You can share portfolios with other
accounts in your organization using AWS Organizations4.
Allowing user IAM roles to have ServiceCatalogEndUserAccess permissions only
and using an automation script to import the central portfolios to local AWS
accounts, copy the TagOption, assign users access, and apply launch constraints
will enable the company to provide least privilege access to users when launching
AWS infrastructure services. ServiceCatalogEndUserAccess is a managed IAM
policy that grants users permission to list and view products and launch product
instances. An automation script can help import the shared portfolios from the
central account to the local accounts, copy the TagOption from the central
account, assign users access to the portfolios, and apply launch constraints that
specify which IAM role or user can provision a product.
Using the AWS Service Catalog TagOption Library to maintain a list of tags
required by the company and applying the TagOption to AWS Service Catalog
products or portfolios will enable the company to enforce tags on any infrastructure
that is started by users. TagOptions are key-value pairs that you can use to
classify your AWS Service Catalog resources. You can create a TagOption Library
that contains all the tags that you want to use across your organization. You can
apply TagOptions to products or portfolios, and they will be automatically applied
to any provisioned product instances.
References:
Creating a product from an existing CloudFormation template
What is AWS Service Catalog?
Working with portfolios
Sharing a portfolio with AWS Organizations
[Providing least privilege access for users]
[AWS managed policies for job functions]
[Importing shared portfolios]
[Enforcing tag policies]
[Working with TagOptions]
[Creating a TagOption Library]
[Applying TagOptions]
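For illustration, the tag-enforcement piece of this design could be set up with Boto3 roughly as follows. The tag key, tag value, and portfolio ID are assumptions for the sketch, not values from the question.

    import boto3

    sc = boto3.client("servicecatalog")

    # Create a TagOption in the library (for example, a required cost-center tag).
    tag_option = sc.create_tag_option(Key="CostCenter", Value="12345")

    # Associate the TagOption with a portfolio; products provisioned from the
    # portfolio then receive the tag automatically.
    sc.associate_tag_option_with_resource(
        ResourceId="port-abc123example",  # assumed portfolio ID
        TagOptionId=tag_option["TagOptionDetail"]["Id"],
    )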
Question # 106
A company is migrating a legacy application from an on-premises data center to AWS. The
application consists of a single application server and a Microsoft SQL Server database server. Each server is deployed on a VMware VM that consumes 500 TB
of data across multiple attached volumes.
The company has established a 10 Gbps AWS Direct Connect connection from the closest
AWS Region to its on-premises data center. The Direct Connect connection is not currently
in use by other services.
Which combination of steps should a solutions architect take to migrate the application with
the LEAST amount of downtime? (Choose two.)
A. Use an AWS Server Migration Service (AWS SMS) replication job to migrate the database server VM to AWS.
B. Use VM Import/Export to import the application server VM.
C. Export the VM images to an AWS Snowball Edge Storage Optimized device.
D. Use an AWS Server Migration Service (AWS SMS) replication job to migrate the application server VM to AWS.
E. Use an AWS Database Migration Service (AWS DMS) replication instance to migrate the database to an Amazon RDS DB instance.
Answer: A,D
Question # 107
A company has an application that uses an Amazon Aurora PostgreSQL DB cluster for the
application's database. The DB cluster contains one small primary instance and three
larger replica instances. The application runs on an AWS Lambda function. The application
makes many short-lived connections to the database's replica instances to perform read-only
operations.
During periods of high traffic, the application becomes unreliable and the database reports
that too many connections are being established. The frequency of high-traffic periods is
unpredictable.
Which solution will improve the reliability of the application?
A. Use Amazon RDS Proxy to create a proxy for the DB cluster. Configure a read-only endpoint for the proxy. Update the Lambda function to connect to the proxy endpoint.
B. Increase the max_connections setting on the DB cluster's parameter group. Reboot all the instances in the DB cluster. Update the Lambda function to connect to the DB cluster endpoint.
C. Configure instance scaling for the DB cluster to occur when the DatabaseConnections metric is close to the max_connections setting. Update the Lambda function to connect to the Aurora reader endpoint.
D. Use Amazon RDS Proxy to create a proxy for the DB cluster. Configure a read-only endpoint for the Aurora Data API on the proxy. Update the Lambda function to connect to the proxy endpoint.
Answer: A
Question # 108
A company is planning to migrate its on-premises transaction-processing application to
AWS. The application runs inside Docker containers that are hosted on VMs in the
company's data center. The Docker containers have shared storage where the application
records transaction data.
The transactions are time sensitive. The volume of transactions inside the application is
unpredictable. The company must implement a low-latency storage solution that will
automatically scale throughput to meet increased demand. The company cannot develop
the application further and cannot continue to administer the Docker hosting environment.
How should the company migrate the application to AWS to meet these requirements?
A. Migrate the containers that run the application to Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon S3 to store the transaction data that the containers share.
B. Migrate the containers that run the application to AWS Fargate for Amazon Elastic Container Service (Amazon ECS). Create an Amazon Elastic File System (Amazon EFS) file system. Create a Fargate task definition. Add a volume to the task definition to point to the EFS file system.
C. Migrate the containers that run the application to AWS Fargate for Amazon Elastic Container Service (Amazon ECS). Create an Amazon Elastic Block Store (Amazon EBS) volume. Create a Fargate task definition. Attach the EBS volume to each running task.
D. Launch Amazon EC2 instances. Install Docker on the EC2 instances. Migrate the containers to the EC2 instances. Create an Amazon Elastic File System (Amazon EFS) file system. Add a mount point to the EC2 instances for the EFS file system.
Answer: B
Explanation:
Migrating the containers that run the application to AWS Fargate for Amazon Elastic
Container Service (Amazon ECS) will meet the requirement of not administering the
Docker hosting environment. AWS Fargate is a serverless compute engine that runs
containers without requiring any infrastructure management3. Creating an Amazon Elastic
File System (Amazon EFS) file system and adding a volume to the Fargate task definition
to point to the EFS file system will meet the requirement of low-latency storage that will
automatically scale throughput to meet increased demand. Amazon EFS is a fully managed
file system service that provides shared access to data from multiple containers, supports
NFSv4 protocol, and offers consistent performance and high availability4. Amazon EFS
also supports automatic scaling of throughput based on the amount of data stored in the
file system5.
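A minimal sketch of the Fargate task definition with an EFS volume, registered through Boto3, is shown below. The family name, container image, paths, and file system ID are illustrative assumptions.

    import boto3

    ecs = boto3.client("ecs")

    ecs.register_task_definition(
        family="transaction-app",
        requiresCompatibilities=["FARGATE"],
        networkMode="awsvpc",
        cpu="512",
        memory="1024",
        containerDefinitions=[
            {
                "name": "app",
                "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/transaction-app:latest",
                "essential": True,
                # Mount the shared EFS volume where transaction data is recorded.
                "mountPoints": [
                    {"sourceVolume": "shared-data", "containerPath": "/var/app/data"}
                ],
            }
        ],
        volumes=[
            {
                "name": "shared-data",
                "efsVolumeConfiguration": {"fileSystemId": "fs-0123456789abcdef0"},
            }
        ],
    )

EFS throughput scales with the amount of stored data (or can use provisioned/elastic throughput), which is why it fits the unpredictable transaction volume better than a single EBS volume.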
Question # 109
An online retail company is migrating its legacy on-premises .NET application to AWS. The
application runs on load-balanced frontend web servers, load-balanced application servers,
and a Microsoft SQL Server database.
The company wants to use AWS managed services where possible and does not want to
rewrite the application. A solutions architect needs to implement a solution to resolve scaling issues and minimize licensing costs as the application scales.
Which solution will meet these requirements MOST cost-effectively?
A. Deploy Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer for the web tier and for the application tier. Use Amazon Aurora PostgreSQL with Babelfish turned on to replatform the SQL Server database.
B. Create images of all the servers by using AWS Database Migration Service (AWS DMS). Deploy Amazon EC2 instances that are based on the on-premises imports. Deploy the instances in an Auto Scaling group behind a Network Load Balancer for the web tier and for the application tier. Use Amazon DynamoDB as the database tier.
C. Containerize the web frontend tier and the application tier. Provision an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Create an Auto Scaling group behind a Network Load Balancer for the web tier and for the application tier. Use Amazon RDS for SQL Server to host the database.
D. Separate the application functions into AWS Lambda functions. Use Amazon API Gateway for the web frontend tier and the application tier. Migrate the data to Amazon S3. Use Amazon Athena to query the data.
Answer: A
Explanation:
Deploying both the web tier and the application tier on Amazon EC2 instances in Auto
Scaling groups behind an Application Load Balancer resolves the scaling issues without
rewriting the .NET application. Using Amazon Aurora PostgreSQL with Babelfish turned on
addresses the licensing cost requirement: Babelfish for Aurora PostgreSQL understands the
SQL Server wire protocol (TDS) and T-SQL, so the application can continue to use its
existing SQL Server queries with minimal changes while the company stops paying for SQL
Server licenses. The other options either require significant refactoring (containers on
Amazon EKS, or Lambda functions with API Gateway), use a data store that does not fit
the relational application (Amazon DynamoDB, or Amazon S3 with Athena), or continue to
incur SQL Server licensing costs (Amazon RDS for SQL Server).
Question # 110
A company is deploying a third-party web application on AWS. The application is packaged
as a Docker image. The company has deployed the Docker image as an AWS
Fargate service in Amazon Elastic Container Service (Amazon ECS). An Application Load
Balancer (ALB) directs traffic to the application.
The company needs to give only a specific list of users the ability to access the application
from the internet. The company cannot change the application and cannot integrate the
application with an identity provider. All users must be authenticated through multi-factor
authentication (MFA).
Which solution will meet these requirements?
A. Create a user pool in Amazon Cognito. Configure the pool for the application. Populate the pool with the required users. Configure the pool to require MFA. Configure a listener rule on the ALB to require authentication through the Amazon Cognito hosted UI.
B. Configure the users in AWS Identity and Access Management (IAM). Attach a resource policy to the Fargate service to require users to use MFA. Configure a listener rule on the ALB to require authentication through IAM.
C. Configure the users in AWS Identity and Access Management (IAM). Enable AWS IAM Identity Center (AWS Single Sign-On). Configure resource protection for the ALB. Create a resource protection rule to require users to use MFA.
D. Create a user pool in AWS Amplify. Configure the pool for the application. Populate the pool with the required users. Configure the pool to require MFA. Configure a listener rule on the ALB to require authentication through the Amplify hosted UI.
Answer: A
Explanation:
Creating a user pool in Amazon Cognito and configuring it for the application will meet the
requirement of giving only a specific list of users the ability to access the application from
the internet. A user pool is a directory of users that can sign in to an application with a
username and password1. The company can populate the user pool with the required
users and configure the pool to require MFA for additional security2. Configuring a listener
rule on the ALB to require authentication through the Amazon Cognito hosted UI will meet
the requirement of not changing the application and not integrating it with an identity
provider. The ALB can use Amazon Cognito as an authentication action to authenticate
users before forwarding requests to the Fargate service3. The Amazon Cognito hosted UI
is a customizable web page that provides sign-in and sign-up functionality for users4.
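As a hedged illustration of the ALB side of this design, the sketch below configures an HTTPS listener to authenticate through Amazon Cognito before forwarding to the Fargate service's target group. Every ARN, client ID, and domain prefix shown is an assumed placeholder.

    import boto3

    elbv2 = boto3.client("elbv2")

    elbv2.modify_listener(
        ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/example/abc/def",
        DefaultActions=[
            {
                # First action: send unauthenticated users to the Cognito hosted UI.
                "Type": "authenticate-cognito",
                "Order": 1,
                "AuthenticateCognitoConfig": {
                    "UserPoolArn": "arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_EXAMPLE",
                    "UserPoolClientId": "1example23456789",
                    "UserPoolDomain": "example-app-auth",
                },
            },
            {
                # Second action: forward authenticated requests to the Fargate service.
                "Type": "forward",
                "Order": 2,
                "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/example/123abc",
            },
        ],
    )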
Question # 111
A company built an ecommerce website on AWS using a three-tier web architecture. The
application is Java-based and composed of an Amazon CloudFront distribution, an Apache
web server layer of Amazon EC2 instances in an Auto Scaling group, and a backend
Amazon Aurora MySQL database.
Last month, during a promotional sales event, users reported errors and timeouts while
adding items to their shopping carts. The operations team recovered the logs created by
the web servers and reviewed Aurora DB cluster performance metrics. Some of the web
servers were terminated before logs could be collected and the Aurora metrics were not
sufficient for query performance analysis.
Which combination of steps must the solutions architect take to improve application
performance visibility during peak traffic events? (Choose three.)
A. Configure the Aurora MySQL DB cluster to publish slow query and error logs to Amazon CloudWatch Logs.
B. Implement the AWS X-Ray SDK to trace incoming HTTP requests on the EC2 instances and implement tracing of SQL queries with the X-Ray SDK for Java.
C. Configure the Aurora MySQL DB cluster to stream slow query and error logs to Amazon Kinesis.
D. Install and configure an Amazon CloudWatch Logs agent on the EC2 instances to send the Apache logs to CloudWatch Logs.
E. Enable and configure AWS CloudTrail to collect and analyze application activity from Amazon EC2 and Aurora.
F. Enable Aurora MySQL DB cluster performance benchmarking and publish the stream to AWS X-Ray.
Answer: A,B,D
Explanation:
Configuring the Aurora MySQL DB cluster to publish slow query and error logs to
Amazon CloudWatch Logs will allow the solutions architect to monitor and
troubleshoot the database performance by identifying slow or problematic
queries1. CloudWatch Logs also provides features such as metric filters, alarms,
and dashboards to analyze and visualize the log data2.
Implementing the AWS X-Ray SDK to trace incoming HTTP requests on the EC2
instances and implement tracing of SQL queries with the X-Ray SDK for Java will allow the solutions architect to measure and map the end-to-end latency and
performance of the web application3. X-Ray traces show how requests travel
through the application components, such as web servers, load balancers,
microservices, and databases4. X-Ray also provides features such as service
maps, annotations, histograms, and error rates to analyze and optimize the
application performance.
Installing and configuring an Amazon CloudWatch Logs agent on the EC2
instances to send the Apache logs to CloudWatch Logs will allow the solutions
architect to monitor and troubleshoot the web server performance by collecting
and storing the Apache access and error logs. CloudWatch Logs also provides
features such as metric filters, alarms, and dashboards to analyze and visualize
the log data2.
References:
Publishing Aurora MySQL logs to Amazon CloudWatch Logs
Working with log data in CloudWatch Logs
Instrumenting your application with the X-Ray SDK for Java
Tracing requests with AWS X-Ray
[Analyzing application performance with AWS X-Ray]
[Using CloudWatch Logs with your Apache web server]
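The scenario calls for the X-Ray SDK for Java; purely as an illustration of the same idea, the Python sketch below uses the aws_xray_sdk package to trace a request and its downstream calls. The segment name and the downstream call are assumptions, and the X-Ray daemon is assumed to be running on the instance.

    import boto3
    from aws_xray_sdk.core import patch_all, xray_recorder

    # Patch supported libraries (boto3, common SQL drivers, requests, ...) so their
    # calls are recorded as subsegments of the trace.
    patch_all()

    def handle_request():
        # Open a segment for the incoming HTTP request (web frameworks usually do
        # this automatically through X-Ray middleware).
        xray_recorder.begin_segment("shopping-cart-request")
        try:
            # Downstream calls made here, such as database queries through a
            # patched driver or AWS SDK calls, show up as subsegments.
            boto3.client("dynamodb").list_tables()
        finally:
            xray_recorder.end_segment()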
Question # 112
A company provides a software as a service (SaaS) application that runs in the AWS
Cloud. The application runs on Amazon EC2 instances behind a Network Load Balancer
(NLB). The instances are in an Auto Scaling group and are distributed across three
Availability Zones in a single AWS Region.
The company is deploying the application into additional Regions. The company must
provide static IP addresses for the application to customers so that the customers can add
the IP addresses to allow lists.
The solution must automatically route customers to the Region that is geographically
closest to them.
Which solution will meet these requirements?
A. Create an Amazon CloudFront distribution. Create a CloudFront origin group. Add the NLB for each additional Region to the origin group. Provide customers with the IP address ranges of the distribution's edge locations.
B. Create an AWS Global Accelerator standard accelerator. Create a standard accelerator endpoint for the NLB in each additional Region. Provide customers with the Global Accelerator IP address.
C. Create an Amazon CloudFront distribution. Create a custom origin for the NLB in each additional Region. Provide customers with the IP address ranges of the distribution's edge locations.
D. Create an AWS Global Accelerator custom routing accelerator. Create a listener for the custom routing accelerator. Add the IP address and ports for the NLB in each additional Region. Provide customers with the Global Accelerator IP address.
Answer: B
Explanation: AWS Global Accelerator is a networking service that helps you improve the
availability and performance of the applications that you offer to your global users1. It
provides static IP addresses that act as a fixed entry point to your applications and route
user traffic to the optimal endpoint based on performance, health, and policies that you
configure1. By creating a standard accelerator endpoint for the NLB in each additional
Region, you can ensure that customers are automatically directed to the Region that is
geographically closest to them2. You can also provide customers with the Global
Accelerator IP address, which is anycast from AWS edge locations and does not change
when you add or remove endpoints3.
References:
What is AWS Global Accelerator?
Standard accelerator endpoints
AWS Global Accelerator IP addresses
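A hedged Boto3 sketch of this setup is shown below: one standard accelerator, one TCP listener, and one endpoint group per Region that points at the NLB. The NLB ARNs, Regions, and ports are assumed placeholders; the Global Accelerator API is called in us-west-2.

    import boto3

    ga = boto3.client("globalaccelerator", region_name="us-west-2")

    accelerator = ga.create_accelerator(Name="saas-app", IpAddressType="IPV4")
    listener = ga.create_listener(
        AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
        Protocol="TCP",
        PortRanges=[{"FromPort": 443, "ToPort": 443}],
    )

    # One endpoint group per Region; traffic is routed to the closest healthy Region.
    nlbs_by_region = {
        "us-east-1": "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/app/abc",
        "eu-west-1": "arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/net/app/def",
    }
    for region, nlb_arn in nlbs_by_region.items():
        ga.create_endpoint_group(
            ListenerArn=listener["Listener"]["ListenerArn"],
            EndpointGroupRegion=region,
            EndpointConfigurations=[{"EndpointId": nlb_arn, "Weight": 128}],
        )

    # The static anycast IP addresses to share with customers for their allow lists.
    print(accelerator["Accelerator"]["IpSets"][0]["IpAddresses"])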
Question # 113
A company has a project that is launching Amazon EC2 instances that are larger than
required. The project's account cannot be part of the company's organization in AWS
Organizations due to policy restrictions to keep this activity outside of corporate IT. The
company wants to allow only the launch of t3.small
EC2 instances by developers in the project's account. These EC2 instances must be
restricted to the us-east-2 Region.
What should a solutions architect do to meet these requirements?
A. Create a new developer account. Move all EC2 instances, users, and assets into us-east-2. Add the account to the company's organization in AWS Organizations. Enforce a tagging policy that denotes Region affinity.
B. Create an SCP that denies the launch of all EC2 instances except t3.small EC2 instances in us-east-2. Attach the SCP to the project's account.
C. Create and purchase a t3.small EC2 Reserved Instance for each developer in us-east-2. Assign each developer a specific EC2 instance with their name as the tag.
D. Create an IAM policy that allows the launch of only t3.small EC2 instances in us-east-2. Attach the policy to the roles and groups that the developers use in the project's account.
Answer: D
Question # 114
A large company recently experienced an unexpected increase in Amazon RDS and
Amazon DynamoDB costs. The company needs to increase visibility into details of AWS
Billing and Cost Management. There are various accounts associated with AWS
Organizations, including many development and production accounts. There is no
consistent tagging strategy across the organization, but there are guidelines in place that
require all infrastructure to be deployed using AWS CloudFormation with consistent
tagging. Management requires cost center numbers and project ID numbers for all existing
and future DynamoDB tables and RDS instances.
Which strategy should the solutions architect provide to meet these requirements?
A. Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID, and allow 24 hours for tags to propagate to existing resources.
B. Use an AWS Config rule to alert the finance team of untagged resources. Create a centralized AWS Lambda based solution to tag untagged RDS databases and DynamoDB resources every hour using a cross-account role.
C. Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID. Use SCPs to restrict resource creation that does not have the cost center and project ID on the resource.
D. Create cost allocation tags to define the cost center and project ID and allow 24 hours for tags to propagate to existing resources. Update existing federated roles to restrict privileges to provision resources that do not include the cost center and project ID on the resource.
Answer: C
Explanation:
Using Tag Editor to remediate untagged resources is a best practice (page 14 of the AWS
Tagging Best Practices whitepaper). However, that is where answer A stops. It does not
address the requirement that management needs cost center numbers and project ID
numbers for all existing and future DynamoDB tables and RDS instances. Answer C
addresses that requirement by using SCPs in the company's AWS Organizations
organization to block the creation of resources that are missing the cost center and project
ID tags. AWS Tagging Best Practices - https://d1.awsstatic.com/whitepapers/aws-tagging-best-practices.pdf
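A hedged sketch of the SCP side of answer C follows: it denies creating new DynamoDB tables or RDS instances whenever one of the required tag keys is missing from the request. The tag keys (CostCenter, ProjectID) and the attachment target are assumptions, and the condition keys rely on those services supporting tag-on-create with aws:RequestTag.

    import json

    import boto3

    org = boto3.client("organizations")

    required_tags = ["CostCenter", "ProjectID"]
    scp_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": f"Require{tag}",
                "Effect": "Deny",
                "Action": ["dynamodb:CreateTable", "rds:CreateDBInstance"],
                "Resource": "*",
                # Deny the request when this tag key is absent from the request.
                "Condition": {"Null": {f"aws:RequestTag/{tag}": "true"}},
            }
            for tag in required_tags
        ],
    }

    policy = org.create_policy(
        Name="require-cost-allocation-tags",
        Description="Deny untagged DynamoDB tables and RDS instances",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(scp_document),
    )
    org.attach_policy(
        PolicyId=policy["Policy"]["PolicySummary"]["Id"],
        TargetId="r-exampleroot",  # assumed organization root or OU ID
    )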
Question # 115
A company wants to migrate its website from an on-premises data center onto AWS. At the
same time, it wants to migrate the website to a containerized microservice-based
architecture to improve the availability and cost efficiency. The company's security policy
states that privileges and network permissions must be configured according to best
practice, using least privilege.
A Solutions Architect must create a containerized architecture that meets the security
requirements and has deployed the application to an Amazon ECS cluster.
What steps are required after the deployment to meet the requirements? (Choose two.)
A. Create tasks using the bridge network mode.
B. Create tasks using the awsvpc network mode.
C. Apply security groups to Amazon EC2 instances, and use IAM roles for EC2 instances to access other resources.
D. Apply security groups to the tasks, and pass IAM credentials into the container at launch time to access other resources.
E. Apply security groups to the tasks, and use IAM roles for tasks to access other resources.
Answer: B,E
Explanation:
The awsvpc network mode provides each task with its own elastic network
interface (ENI) and a primary private IP address1. By using this network mode, the
solutions architect can isolate the tasks from each other and apply security groups to the
tasks directly2. This way, the solutions architect can control the inbound and outbound
traffic at the task level and enforce the least privilege principle3. IAM roles for tasks allow
the solutions architect to assign permissions to each task separately, so that they can
access other AWS resources that they need4. By using IAM roles for tasks, the solutions
architect can avoid passing IAM credentials into the container at launch time, which is less
secure and more prone to errors5.
References:
awsvpc network mode
Task networking with the awsvpc network mode
Security groups for your VPC
IAM roles for tasks
Best practices for managing AWS access keys
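To illustrate how these two choices come together, the sketch below registers a task definition that uses the awsvpc network mode and an IAM role for the task. The family name, role ARN, and image are assumed placeholders.

    import boto3

    ecs = boto3.client("ecs")

    ecs.register_task_definition(
        family="microservice-a",
        networkMode="awsvpc",  # each task gets its own ENI and its own security groups
        taskRoleArn="arn:aws:iam::123456789012:role/microservice-a-task-role",
        requiresCompatibilities=["EC2"],
        containerDefinitions=[
            {
                "name": "app",
                "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/microservice-a:latest",
                "memory": 512,
                "essential": True,
            }
        ],
    )

Because the permissions live on the task role rather than in injected credentials, each microservice only gets the least-privilege access it needs.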
Question # 116
A company is migrating an application from on-premises infrastructure to the AWS Cloud.
During migration design meetings, the company expressed concerns about the availability
and recovery options for its legacy Windows file server. The file server contains sensitive
business-critical data that cannot be recreated in the event of data corruption or data loss.
According to compliance requirements, the data must not travel across the public internet.
The company wants to move to AWS managed services where possible.
The company decides to store the data in an Amazon FSx for Windows File Server file
system. A solutions architect must design a solution that copies the data to another AWS
Region for disaster recovery (DR) purposes.
Which solution will meet these requirements?
A. Create a destination Amazon S3 bucket in the DR Region. Establish connectivity between the FSx for Windows File Server file system in the primary Region and the S3 bucket in the DR Region by using Amazon FSx File Gateway. Configure the S3 bucket as a continuous backup source in FSx File Gateway.
B. Create an FSx for Windows File Server file system in the DR Region. Establish connectivity between the VPC in the primary Region and the VPC in the DR Region by using AWS Site-to-Site VPN. Configure AWS DataSync to communicate by using VPN endpoints.
C. Create an FSx for Windows File Server file system in the DR Region. Establish connectivity between the VPC in the primary Region and the VPC in the DR Region by using VPC peering. Configure AWS DataSync to communicate by using interface VPC endpoints with AWS PrivateLink.
D. Create an FSx for Windows File Server file system in the DR Region. Establish connectivity between the VPC in the primary Region and the VPC in the DR Region by using AWS Transit Gateway in each Region. Use AWS Transfer Family to copy files between the FSx for Windows File Server file system in the primary Region and the FSx for Windows File Server file system in the DR Region over the private AWS backbone network.
Answer: C
Explanation: The best solution is to create an FSx for Windows File Server file system in
the DR Region and establish connectivity between the VPCs in both Regions by using VPC
peering. This will ensure that the data does not travel across the public internet and meets
the compliance requirements. By using AWS DataSync with interface VPC endpoints and
AWS PrivateLink, the data can be copied securely and efficiently between the FSx for
Windows File Server file systems in both Regions. This solution also provides the ability to
fail over to the DR Region in case of a disaster. References: [Amazon FSx for Windows
File Server User Guide], [AWS DataSync User Guide], [Amazon VPC User Guide]
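For the inter-Region VPC peering piece of this design, a minimal Boto3 sketch is shown below. The VPC IDs and Regions are assumed placeholders, and route table updates in both VPCs are still needed afterwards.

    import boto3

    ec2_primary = boto3.client("ec2", region_name="us-east-1")
    ec2_dr = boto3.client("ec2", region_name="us-west-2")

    # Request the peering connection from the primary Region to the DR Region.
    peering = ec2_primary.create_vpc_peering_connection(
        VpcId="vpc-0a1b2c3d4e5f67890",      # assumed primary-Region VPC
        PeerVpcId="vpc-0f9e8d7c6b5a43210",  # assumed DR-Region VPC
        PeerRegion="us-west-2",
    )
    peering_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

    # Accept the request in the DR Region; then add routes for the peer CIDR in
    # both VPCs so DataSync traffic stays on the private AWS network.
    ec2_dr.accept_vpc_peering_connection(VpcPeeringConnectionId=peering_id)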
Question # 117
A company is building an application on AWS. The application sends logs to an Amazon
Elasticsearch Service (Amazon ES) cluster for analysis. All data must be stored within a
VPC.
Some of the company's developers work from home. Other developers work from three
different company office locations. The developers need to access
Amazon ES to analyze and visualize logs directly from their local development machines.
Which solution will meet these requirements?
A. Configure and set up an AWS Client VPN endpoint. Associate the Client VPN endpoint with a subnet in the VPC. Configure a Client VPN self-service portal. Instruct the developers to connect by using the client for Client VPN.
B. Create a transit gateway, and connect it to the VPC. Create an AWS Site-to-Site VPN. Create an attachment to the transit gateway. Instruct the developers to connect by using an OpenVPN client.
C. Create a transit gateway, and connect it to the VPC. Order an AWS Direct Connect connection. Set up a public VIF on the Direct Connect connection. Associate the public VIF with the transit gateway. Instruct the developers to connect to the Direct Connect connection.
D. Create and configure a bastion host in a public subnet of the VPC. Configure the bastion host security group to allow SSH access from the company CIDR ranges. Instruct the developers to connect by using SSH.
Answer: A
Explanation:
This option allows the company to use AWS Client VPN to enable secure and
private access to the Amazon ES cluster from any location1. By configuring and setting up an AWS Client VPN endpoint, the company can create a secure tunnel between the
developers’ devices and the VPC2. By associating the Client VPN endpoint with a subnet
in the VPC, the company can ensure that the traffic from the developers’ devices is routed
to the Amazon ES cluster within the VPC3. By configuring a Client VPN self-service portal,
the company can enable the developers to download and install the client for Client VPN,
which is based on OpenVPN4. By instructing the developers to connect by using the client
for Client VPN, the company can allow them to access Amazon ES to analyze and
visualize logs directly from their local development machines.
References:
What is AWS Client VPN?
Creating a Client VPN endpoint
Associating a target network with a Client VPN endpoint
Configuring a self-service portal
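A hedged sketch of the Client VPN setup follows. The client CIDR block, ACM certificate ARNs, subnet ID, and target network CIDR are all assumed placeholders.

    import boto3

    ec2 = boto3.client("ec2")

    endpoint = ec2.create_client_vpn_endpoint(
        ClientCidrBlock="10.100.0.0/22",  # must not overlap the VPC CIDR
        ServerCertificateArn="arn:aws:acm:us-east-1:123456789012:certificate/server-cert",
        AuthenticationOptions=[
            {
                "Type": "certificate-authentication",
                "MutualAuthentication": {
                    "ClientRootCertificateChainArn": "arn:aws:acm:us-east-1:123456789012:certificate/client-ca"
                },
            }
        ],
        ConnectionLogOptions={"Enabled": False},
        Description="Developer access to the log analytics VPC",
    )
    endpoint_id = endpoint["ClientVpnEndpointId"]

    # Associate the endpoint with a subnet in the VPC that hosts the ES cluster,
    # then authorize ingress to the VPC CIDR for connected clients.
    ec2.associate_client_vpn_target_network(
        ClientVpnEndpointId=endpoint_id, SubnetId="subnet-0123456789abcdef0"
    )
    ec2.authorize_client_vpn_ingress(
        ClientVpnEndpointId=endpoint_id,
        TargetNetworkCidr="10.0.0.0/16",
        AuthorizeAllGroups=True,
    )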
Question # 118
A company owns a chain of travel agencies and is running an application in the AWS
Cloud. Company employees use the application to search for information about travel
destinations. Destination content is updated four times each year.
Two fixed Amazon EC2 instances serve the application. The company uses an Amazon
Route 53 public hosted zone with a multivalue record of travel.example.com that returns the Elastic IP addresses for the EC2 instances. The application uses Amazon DynamoDB
as its primary data store. The company uses a self-hosted Redis instance as a caching
solution.
During content updates, the load on the EC2 instances and the caching solution increases
drastically. This increased load has led to downtime on several occasions. A solutions
architect must update the application so that the application is highly available and can
handle the load that is generated by the content updates.
Which solution will meet these requirements?
A. Set up DynamoDB Accelerator (DAX) as in-memory cache. Update the application to use DAX. Create an Auto Scaling group for the EC2 instances. Create an Application Load Balancer (ALB). Set the Auto Scaling group as a target for the ALB. Update the Route 53 record to use a simple routing policy that targets the ALB's DNS alias. Configure scheduled scaling for the EC2 instances before the content updates.
B. Set up Amazon ElastiCache for Redis. Update the application to use ElastiCache. Create an Auto Scaling group for the EC2 instances. Create an Amazon CloudFront distribution, and set the Auto Scaling group as an origin for the distribution. Update the Route 53 record to use a simple routing policy that targets the CloudFront distribution's DNS alias. Manually scale up EC2 instances before the content updates.
C. Set up Amazon ElastiCache for Memcached. Update the application to use ElastiCache. Create an Auto Scaling group for the EC2 instances. Create an Application Load Balancer (ALB). Set the Auto Scaling group as a target for the ALB. Update the Route 53 record to use a simple routing policy that targets the ALB's DNS alias. Configure scheduled scaling for the application before the content updates.
D. Set up DynamoDB Accelerator (DAX) as in-memory cache. Update the application to use DAX. Create an Auto Scaling group for the EC2 instances. Create an Amazon CloudFront distribution, and set the Auto Scaling group as an origin for the distribution. Update the Route 53 record to use a simple routing policy that targets the CloudFront distribution's DNS alias. Manually scale up EC2 instances before the content updates.
Answer: A
Explanation:
This option allows the company to use DAX to improve the performance and
reduce the latency of the DynamoDB queries by caching the results in memory1. By
updating the application to use DAX, the company can reduce the load on the DynamoDB
tables and avoid throttling errors1. By creating an Auto Scaling group for the EC2
instances, the company can adjust the number of instances based on the demand and
ensure high availability2. By creating an ALB, the company can distribute the incoming
traffic across multiple EC2 instances and improve fault tolerance3. By updating the Route
53 record to use a simple routing policy that targets the ALB’s DNS alias, the company can
route users to the ALB endpoint and leverage its health checks and load balancing
features4. By configuring scheduled scaling for the EC2 instances before the content
updates, the company can anticipate and handle traffic spikes during peak periods5.
References:
What is Amazon DynamoDB Accelerator (DAX)?
What is Amazon EC2 Auto Scaling?
What is an Application Load Balancer?
Choosing a routing policy
Scheduled scaling for Amazon EC2 Auto Scaling
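The scheduled-scaling part of option A could look roughly like the sketch below, which scales the group out before a quarterly update and back in afterwards. The group name, capacities, and cron expressions are assumptions for illustration.

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Scale out ahead of the content update window.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="travel-app-asg",
        ScheduledActionName="scale-out-before-content-update",
        Recurrence="0 5 1 1,4,7,10 *",  # 05:00 UTC on the first day of each quarter
        MinSize=4,
        MaxSize=12,
        DesiredCapacity=8,
    )

    # Scale back in after the update traffic subsides.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="travel-app-asg",
        ScheduledActionName="scale-in-after-content-update",
        Recurrence="0 17 1 1,4,7,10 *",
        MinSize=2,
        MaxSize=4,
        DesiredCapacity=2,
    )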
Question # 119
A company that provisions job boards for a seasonal workforce is seeing an increase in
traffic and usage. The backend services run on a pair of Amazon EC2 instances behind an
Application Load Balancer with Amazon DynamoDB as the datastore. Application read and
write traffic is slow during peak seasons.
Which option provides a scalable application architecture to handle peak seasons with the
LEAST development effort?
A. Migrate the backend services to AWS Lambda. Increase the read and write capacity of DynamoDB.
B. Migrate the backend services to AWS Lambda. Configure DynamoDB to use global tables.
C. Use Auto Scaling groups for the backend services. Use DynamoDB auto scaling.
D. Use Auto Scaling groups for the backend services. Use Amazon Simple Queue Service (Amazon SQS) and an AWS Lambda function to write to DynamoDB.
Answer: C
Explanation:
Option C is correct because using Auto Scaling groups for the backend services
allows the company to scale up or down the number of EC2 instances based on
the demand and traffic. This way, the backend services can handle more requests
during peak seasons without compromising performance or availability. Using
DynamoDB auto scaling allows the company to adjust the provisioned read and
write capacity of the table or index automatically based on the actual traffic
patterns. This way, the table or index can handle sudden increases or decreases
in workload without throttling or overprovisioning1.
Option A is incorrect because migrating the backend services to AWS Lambda
may require significant development effort to rewrite the code and test the
functionality. Moreover, increasing the read and write capacity of DynamoDB
manually may not be efficient or cost-effective, as it does not account for the
variability of the workload. The company may end up paying for unused capacity
or experiencing throttling if the workload exceeds the provisioned capacity1.
Option B is incorrect because migrating the backend services to AWS Lambda
may require significant development effort to rewrite the code and test the
functionality. Moreover, configuring DynamoDB to use global tables may not be
necessary or beneficial for the company, as global tables are mainly used for
replicating data across multiple AWS Regions for fast local access and disaster
recovery. Global tables do not automatically scale the provisioned capacity of each
replica table; they still require manual or auto scaling settings2.
Option D is incorrect because using Amazon Simple Queue Service (Amazon
SQS) and an AWS Lambda function to write to DynamoDB may introduce
additional complexity and latency to the application architecture. Amazon SQS is a
message queue service that decouples and coordinates the components of a
distributed system. AWS Lambda is a serverless compute service that runs code
in response to events. Using these services may require significant development
effort to integrate them with the backend services and DynamoDB. Moreover, they
may not improve the read performance of DynamoDB, which may also be affected
by high traffic3.
References:
Auto Scaling groups
DynamoDB auto scaling
AWS Lambda
DynamoDB global tables
AWS Lambda vs EC2: Comparison of AWS Compute Resources - Simform
Managing throughput capacity automatically with DynamoDB auto scaling -
Amazon DynamoDB
AWS Aurora Global Database vs. DynamoDB Global Tables
Amazon Simple Queue Service (SQS)
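For reference, DynamoDB auto scaling is configured through Application Auto Scaling; the sketch below enables it for a table's read capacity with a target-tracking policy. The table name and capacity limits are assumed values, and write capacity is configured the same way with the corresponding dimension and metric.

    import boto3

    aas = boto3.client("application-autoscaling")

    aas.register_scalable_target(
        ServiceNamespace="dynamodb",
        ResourceId="table/JobBoard",
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        MinCapacity=5,
        MaxCapacity=500,
    )

    aas.put_scaling_policy(
        PolicyName="JobBoardReadScaling",
        ServiceNamespace="dynamodb",
        ResourceId="table/JobBoard",
        ScalableDimension="dynamodb:table:ReadCapacityUnits",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 70.0,  # keep consumed capacity near 70% of provisioned
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
            },
        },
    )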
Question # 120
A company has a new application that needs to run on five Amazon EC2 instances in a
single AWS Region. The application requires high-throughput, low-latency network
connections between all of the EC2 instances where the application will run. There is no
requirement for the application to be fault tolerant.
Which solution will meet these requirements?
A. Launch five new EC2 instances into a cluster placement group. Ensure that the EC2 instance type supports enhanced networking.
B. Launch five new EC2 instances into an Auto Scaling group in the same Availability Zone. Attach an extra elastic network interface to each EC2 instance.
C. Launch five new EC2 instances into a partition placement group. Ensure that the EC2 instance type supports enhanced networking.
D. Launch five new EC2 instances into a spread placement group. Attach an extra elastic network interface to each EC2 instance.
Question # 121
A company wants to migrate to AWS. The company is running thousands of VMs in a
VMware ESXi environment. The company has no configuration management database and
has little knowledge about the utilization of the VMware portfolio.
A solutions architect must provide the company with an accurate inventory so that the
company can plan for a cost-effective migration.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Systems Manager Patch Manager to deploy Migration Evaluator to each VM. Review the collected data in Amazon QuickSight. Identify servers that have high utilization. Remove the servers that have high utilization from the migration list. Import the data to AWS Migration Hub.
B. Export the VMware portfolio to a csv file. Check the disk utilization for each server. Remove servers that have high utilization. Export the data to AWS Application Migration Service. Use AWS Server Migration Service (AWS SMS) to migrate the remaining servers.
C. Deploy the Migration Evaluator agentless collector to the ESXi hypervisor. Review the collected data in Migration Evaluator. Identify inactive servers. Remove the inactive servers from the migration list. Import the data to AWS Migration Hub.
D. Deploy the AWS Application Migration Service Agent to each VM. When the data is collected, use Amazon Redshift to import and analyze the data. Use Amazon QuickSight for data visualization.
Question # 122
A company has migrated a legacy application to the AWS Cloud. The application runs on
three Amazon EC2 instances that are spread across three Availability Zones. One EC2
instance is in each Availability Zone. The EC2 instances are running in three private
subnets of the VPC and are set up as targets for an Application Load Balancer (ALB) that
is associated with three public subnets.
The application needs to communicate with on-premises systems. Only traffic from IP
addresses in the company's IP address range is allowed to access the on-premises
systems. The company's security team is bringing only one IP address from its internal IP
address range to the cloud. The company has added this IP address to the allow list for the
company firewall. The company also has created an Elastic IP address for this IP address.
A solutions architect needs to create a solution that gives the application the ability to
communicate with the on-premises systems. The solution also must be able to mitigate
failures automatically.
Which solution will meet these requirements?
A. Deploy three NAT gateways, one in each public subnet. Assign the Elastic IP address to the NAT gateways. Turn on health checks for the NAT gateways. If a NAT gateway fails a health check, recreate the NAT gateway and assign the Elastic IP address to the new NAT gateway.
B. Replace the ALB with a Network Load Balancer (NLB). Assign the Elastic IP address to the NLB. Turn on health checks for the NLB. In the case of a failed health check, redeploy the NLB in different subnets.
C. Deploy a single NAT gateway in a public subnet. Assign the Elastic IP address to the NAT gateway. Use Amazon CloudWatch with a custom metric to monitor the NAT gateway. If the NAT gateway is unhealthy, invoke an AWS Lambda function to create a new NAT gateway in a different subnet. Assign the Elastic IP address to the new NAT gateway.
D. Assign the Elastic IP address to the ALB. Create an Amazon Route 53 simple record with the Elastic IP address as the value. Create a Route 53 health check. In the case of a failed health check, recreate the ALB in different subnets.
Answer: C
Explanation: To connect out from the private subnets, the application needs a NAT
gateway. Because only one Elastic IP address is allow-listed on the on-premises firewall,
only one NAT gateway can use that address at a time. If the Availability Zone that hosts the
NAT gateway fails, the Lambda function creates a new NAT gateway in a different subnet
and assigns the same Elastic IP address to it. Do not be tempted to select option D: the
application runs in private subnets, so its outbound connections use the NAT gateway's
Elastic IP address, not an address assigned to the ALB.
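A hedged sketch of the remediation Lambda function follows. The allocation ID and standby subnet ID are assumed placeholders, it assumes the failed NAT gateway has already been deleted so the Elastic IP is free to reuse, and the private route table updates are omitted for brevity.

    import boto3

    ec2 = boto3.client("ec2")

    ELASTIC_IP_ALLOCATION_ID = "eipalloc-0123456789abcdef0"  # the allow-listed EIP
    STANDBY_PUBLIC_SUBNET_ID = "subnet-0fedcba9876543210"    # public subnet in another AZ

    def lambda_handler(event, context):
        # Create the replacement NAT gateway with the same Elastic IP so the
        # on-premises firewall allow list does not need to change.
        response = ec2.create_nat_gateway(
            SubnetId=STANDBY_PUBLIC_SUBNET_ID,
            AllocationId=ELASTIC_IP_ALLOCATION_ID,
        )
        nat_gateway_id = response["NatGateway"]["NatGatewayId"]

        # Wait until the new NAT gateway is available before repointing the
        # private subnets' route tables (not shown) at it.
        ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_gateway_id])
        return {"natGatewayId": nat_gateway_id}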
Question # 123
A company has created an OU in AWS Organizations for each of its engineering teams.
Each OU owns multiple AWS accounts. The organization has hundreds of AWS accounts.
A solutions architect must design a solution so that each OU can view a breakdown of
usage costs across its AWS accounts. Which solution meets these requirements?
A. Create an AWS Cost and Usage Report (CUR) for each OU by using AWS Resource Access Manager. Allow each team to visualize the CUR through an Amazon QuickSight dashboard.
B. Create an AWS Cost and Usage Report (CUR) from the AWS Organizations management account. Allow each team to visualize the CUR through an Amazon QuickSight dashboard.
C. Create an AWS Cost and Usage Report (CUR) in each AWS Organizations member account. Allow each team to visualize the CUR through an Amazon QuickSight dashboard.
D. Create an AWS Cost and Usage Report (CUR) by using AWS Systems Manager. Allow each team to visualize the CUR through Systems Manager OpsCenter dashboards.
Question # 124
A company built an application based on AWS Lambda deployed in an AWS
CloudFormation stack. The last production release of the web application introduced an
issue that resulted in an outage lasting several minutes. A solutions architect must adjust
the deployment process to support a canary release.
Which solution will meet these requirements?
A. Create an alias for every new deployed version of the Lambda function. Use the AWS CLI update-alias command with the routing-config parameter to distribute the load.
B. Deploy the application into a new CloudFormation stack. Use an Amazon Route 53 weighted routing policy to distribute the load.
C. Create a version for every new deployed Lambda function. Use the AWS CLI update-function-configuration command with the routing-config parameter to distribute the load.
D. Configure AWS CodeDeploy and use CodeDeployDefault.OneAtATime in the Deployment configuration to distribute the load.
Question # 125
A company is running a critical application that uses an Amazon RDS for MySQL database
to store data. The RDS DB instance is deployed in Multi-AZ mode.
A recent RDS database failover test caused a 40-second outage to the application. A
solutions architect needs to design a solution to reduce the outage time to less than 20
seconds.
Which combination of steps should the solutions architect take to meet these
requirements? (Select THREE.)
A. Use Amazon ElastiCache for Memcached in front of the database.
B. Use Amazon ElastiCache for Redis in front of the database.
C. Use RDS Proxy in front of the database.
D. Migrate the database to Amazon Aurora MySQL.
E. Create an Amazon Aurora Replica.
F. Create an RDS for MySQL read replica.
Answer: C,D,E
Explanation: The combination that reduces the failover time to less than 20 seconds is to
migrate the database to Amazon Aurora MySQL, create an Amazon Aurora Replica, and
put RDS Proxy in front of the database. Aurora uses a fault-tolerant, self-healing storage
layer that is shared by the writer and its replicas, so when the writer fails, an existing
Aurora Replica can be promoted quickly, typically in well under 30 seconds and often in
just a few seconds. RDS Proxy maintains a pool of established connections and routes
application traffic to the healthy instance during a failover, hiding most of the DNS
propagation delay from the application. ElastiCache for Memcached or Redis only speeds
up reads and does not shorten failover, and an RDS for MySQL read replica relies on
asynchronous replication and is not promoted automatically, so options A, B, and F do not
meet the requirement.
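For the application to benefit from RDS Proxy, it only needs to connect to the proxy endpoint instead of the DB instance endpoint. A hypothetical sketch (the endpoint, credentials, and database name are placeholders, pymysql is assumed to be installed, and in practice the password would come from Secrets Manager):

```python
import pymysql

# Placeholder proxy endpoint; the application no longer points at the DB instance.
PROXY_ENDPOINT = "my-app-proxy.proxy-abc123.eu-west-1.rds.amazonaws.com"

connection = pymysql.connect(
    host=PROXY_ENDPOINT,          # proxy endpoint, not the DB instance endpoint
    user="app_user",              # placeholder credentials
    password="example-password",
    database="orders",
    connect_timeout=5,
)

with connection.cursor() as cursor:
    cursor.execute("SELECT 1")
    print(cursor.fetchone())
```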
Question # 126
A company has multiple AWS accounts. The company recently had a security audit that
revealed many unencrypted Amazon Elastic Block Store (Amazon EBS) volumes attached to Amazon EC2 instances.
A solutions architect must encrypt the unencrypted volumes and ensure that unencrypted
volumes will be detected automatically in the future. Additionally, the company wants a
solution that can centrally manage multiple AWS accounts with a focus on compliance and
security.
Which combination of steps should the solutions architect take to meet these
requirements? (Choose two.)
A. Create an organization in AWS Organizations. Set up AWS Control Tower, and turn on the strongly recommended guardrails. Join all accounts to the organization. Categorize the AWS accounts into OUs.
B. Use the AWS CLI to list all the unencrypted volumes in all the AWS accounts. Run a script to encrypt all the unencrypted volumes in place.
C. Create a snapshot of each unencrypted volume. Create a new encrypted volume from the unencrypted snapshot. Detach the existing volume, and replace it with the encrypted volume.
D. Create an organization in AWS Organizations. Set up AWS Control Tower, and turn on the mandatory guardrails. Join all accounts to the organization. Categorize the AWS accounts into OUs.
E. Turn on AWS CloudTrail. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule to detect and automatically encrypt unencrypted volumes.
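Option C describes the standard snapshot-and-replace pattern, since an existing EBS volume cannot be encrypted in place. A minimal boto3 sketch of that pattern for a single volume (the volume ID, instance ID, device name, and Availability Zone are placeholders, and error handling is omitted):

```python
import boto3

ec2 = boto3.client("ec2")

VOLUME_ID = "vol-0123456789abcdef0"   # unencrypted volume (placeholder)
INSTANCE_ID = "i-0123456789abcdef0"   # placeholder
DEVICE = "/dev/xvdf"                  # placeholder device name
AZ = "eu-west-1a"                     # must match the instance's Availability Zone

# 1. Snapshot the unencrypted volume.
snapshot_id = ec2.create_snapshot(VolumeId=VOLUME_ID)["SnapshotId"]
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot_id])

# 2. Create an encrypted volume from the snapshot.
new_volume_id = ec2.create_volume(
    SnapshotId=snapshot_id,
    AvailabilityZone=AZ,
    Encrypted=True,   # uses the default EBS KMS key unless KmsKeyId is specified
)["VolumeId"]
ec2.get_waiter("volume_available").wait(VolumeIds=[new_volume_id])

# 3. Swap the volumes (stop the instance or quiesce the filesystem first).
ec2.detach_volume(VolumeId=VOLUME_ID, InstanceId=INSTANCE_ID)
ec2.get_waiter("volume_available").wait(VolumeIds=[VOLUME_ID])
ec2.attach_volume(VolumeId=new_volume_id, InstanceId=INSTANCE_ID, Device=DEVICE)
```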
Question # 127
An online gaming company needs to optimize the cost of its workloads on AWS. The
company uses a dedicated account to host the production environment for its online
gaming application and an analytics application.
Amazon EC2 instances host the gaming application and must always be available. The EC2
instances run all year. The analytics application uses data that is stored in Amazon S3. The
analytics application can be interrupted and resumed without issue.
Which solution will meet these requirements MOST cost-effectively?
A. Purchase an EC2 Instance Savings Plan for the online gaming application instances. Use On-Demand Instances for the analytics application.
B. Purchase an EC2 Instance Savings Plan for the online gaming application instances. Use Spot Instances for the analytics application.
C. Use Spot Instances for the online gaming application and the analytics application. Set up a catalog in AWS Service Catalog to provision services at a discount.
D. Use On-Demand Instances for the online gaming application. Use Spot Instances for the analytics application. Set up a catalog in AWS Service Catalog to provision services at a discount.
Answer: B
Explanation:
The correct answer is B.
B. This solution is the most cost-effective because it uses an EC2 Instance Savings Plan
for the online gaming application instances, which provides the lowest prices and savings
up to 72% compared to On-Demand prices. The EC2 Instance Savings Plan applies to any
instance size within the same family and region, regardless of availability zone, operating system, or tenancy. The online gaming application instances run all year and must always
be available, so they are not suitable for Spot Instances, which can be interrupted with a
two-minute notice. This solution also uses Spot Instances for the analytics application,
which can reduce the cost by up to 90% compared to On-Demand prices. The analytics
application can be interrupted and resumed without issue, so it is a good fit for Spot
Instances, which use spare EC2 capacity. This solution does not require AWS Service
Catalog, which is a service that helps to create and manage catalogs of IT services that are
approved for use on AWS, but does not provide any discounts.
A. This solution is incorrect because it uses On-Demand Instances for the analytics
application, which are more expensive than Spot Instances. The analytics application can
be interrupted and resumed without issue, so it can benefit from the lower cost of Spot
Instances, which use spare EC2 capacity.
C. This solution is incorrect because it uses Spot Instances for the online gaming
application, which can be interrupted with a two-minute notice. The online gaming
application instances must always be available, so they are not suitable for Spot Instances,
which use spare EC2 capacity. This solution also uses AWS Service Catalog, which is a
service that helps to create and manage catalogs of IT services that are approved for use
on AWS, but does not provide any discounts.
D. This solution is incorrect because it uses On-Demand Instances for the online gaming
application, which are more expensive than an EC2 Instance Savings Plan. The online
gaming application instances run all year and must always be available, so they are
suitable for an EC2 Instance Savings Plan, which provides the lowest prices and savings
up to 72% compared to On-Demand prices. This solution also uses AWS Service Catalog,
which is a service that helps to create and manage catalogs of IT services that are
approved for use on AWS, but does not provide any discounts.
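As a side note, the way an interruption-tolerant analytics fleet typically requests Spot capacity is by setting the market type on the launch request. A hypothetical boto3 sketch (the AMI ID and instance type are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Launch an interruption-tolerant analytics worker as a Spot Instance.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="m5.xlarge",          # placeholder instance type
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)
print(response["Instances"][0]["InstanceId"])
```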
Question # 128
A company uses a load balancer to distribute traffic to Amazon EC2 instances in a single
Availability Zone. The company is concerned about security and wants a solutions architect
to re-architect the solution to meet the following requirements:
• Inbound requests must be filtered for common vulnerability attacks.
• Rejected requests must be sent to a third-party auditing application.
• All resources should be highly available.
Which solution meets these requirements?
A. Configure a Multi-AZ Auto Scaling group using the application's AMI. Create an Application Load Balancer (ALB) and select the previously created Auto Scaling group as the target. Use Amazon Inspector to monitor traffic to the ALB and EC2 instances. Create a web ACL in WAF. Create an AWS WAF using the web ACL and ALB. Use an AWS Lambda function to frequently push the Amazon Inspector report to the third-party auditing application.
B. Configure an Application Load Balancer (ALB) and add the EC2 instances as targets. Create a web ACL in WAF. Create an AWS WAF using the web ACL and ALB name and enable logging with Amazon CloudWatch Logs. Use an AWS Lambda function to frequently push the logs to the third-party auditing application.
C. Configure an Application Load Balancer (ALB) along with a target group, adding the EC2 instances as targets. Create an Amazon Kinesis Data Firehose with the destination of the third-party auditing application. Create a web ACL in WAF. Create an AWS WAF using the web ACL and ALB, then enable logging by selecting the Kinesis Data Firehose as the destination. Subscribe to AWS Managed Rules in AWS Marketplace, choosing the WAF as the subscriber.
D. Configure a Multi-AZ Auto Scaling group using the application's AMI. Create an Application Load Balancer (ALB) and select the previously created Auto Scaling group as the target. Create an Amazon Kinesis Data Firehose with a destination of the third-party auditing application. Create a web ACL in WAF. Create an AWS WAF using the web ACL and ALB, then enable logging by selecting the Kinesis Data Firehose as the destination. Subscribe to AWS Managed Rules in AWS Marketplace, choosing the WAF as the subscriber.
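Options C and D both rely on the same WAF logging mechanism: a web ACL can stream its logs, including blocked requests, to a Kinesis Data Firehose delivery stream. A hedged boto3 sketch (both ARNs are placeholders, and the Firehose delivery stream name must begin with aws-waf-logs-):

```python
import boto3

wafv2 = boto3.client("wafv2", region_name="eu-west-1")

# Placeholders: an existing regional web ACL and a Firehose delivery stream
# whose name starts with "aws-waf-logs-".
WEB_ACL_ARN = "arn:aws:wafv2:eu-west-1:111122223333:regional/webacl/app-acl/EXAMPLE"
FIREHOSE_ARN = "arn:aws:firehose:eu-west-1:111122223333:deliverystream/aws-waf-logs-audit"

# Stream all web ACL logs (including rejected requests) to the Firehose stream,
# which in turn delivers them to the third-party auditing application.
wafv2.put_logging_configuration(
    LoggingConfiguration={
        "ResourceArn": WEB_ACL_ARN,
        "LogDestinationConfigs": [FIREHOSE_ARN],
    }
)
```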
Question # 129
A company needs to aggregate Amazon CloudWatch logs from its AWS accounts into one
central logging account. The collected logs must remain in the AWS Region of
creation. The central logging account will then process the logs, normalize the logs into
standard output format, and stream the output logs to a security tool for more processing.
A solutions architect must design a solution that can handle a large volume of logging data
that needs to be ingested. Less logging will occur outside normal business hours than
during normal business hours. The logging solution must scale with the anticipated load.
The solutions architect has decided to use an AWS Control Tower design to handle the
multi-account logging process.
Which combination of steps should the solutions architect take to meet the requirements?
(Select THREE.)
A. Create a destination Amazon Kinesis data stream in the central logging account.
B. Create a destination Amazon Simple Queue Service (Amazon SQS) queue in the central logging account.
C. Create an IAM role that grants Amazon CloudWatch Logs the permission to add data to the Amazon Kinesis data stream. Create a trust policy. Specify the trust policy in the IAM role. In each member account, create a subscription filter for each log group to send data to the Kinesis data stream.
D. Create an IAM role that grants Amazon CloudWatch Logs the permission to add data to the Amazon Simple Queue Service (Amazon SQS) queue. Create a trust policy. Specify the trust policy in the IAM role. In each member account, create a single subscription filter for all log groups to send data to the SQS queue.
E. Create an AWS Lambda function. Program the Lambda function to normalize the logs in the central logging account and to write the logs to the security tool.
F. Create an AWS Lambda function. Program the Lambda function to normalize the logs in the member accounts and to write the logs to the security tool.
Answer: A,C,E
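The core of options A and C is a CloudWatch Logs subscription filter per log group that streams log events toward the Kinesis data stream; for cross-account delivery, the filter usually targets a CloudWatch Logs destination created in the central account in front of the stream. A minimal same-account boto3 sketch for one log group (the log group name, stream ARN, and role ARN are placeholders):

```python
import boto3

logs = boto3.client("logs")

# Placeholders: a log group, a Kinesis data stream, and an IAM role that
# CloudWatch Logs can assume to write into the stream.
LOG_GROUP = "/aws/app/orders"
STREAM_ARN = "arn:aws:kinesis:eu-west-1:111122223333:stream/central-logs"
ROLE_ARN = "arn:aws:iam::111122223333:role/CWLtoKinesisRole"

logs.put_subscription_filter(
    logGroupName=LOG_GROUP,
    filterName="to-central-logging",
    filterPattern="",          # empty pattern forwards every log event
    destinationArn=STREAM_ARN,
    roleArn=ROLE_ARN,
)
```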
Question # 130
A large payroll company recently merged with a small staffing company. The unified
company now has multiple business units, each with its own existing AWS account.
A solutions architect must ensure that the company can centrally manage the billing and
access policies for all the AWS accounts. The solutions architect configures AWS
Organizations by sending an invitation to all member accounts of the company from a
centralized management account. What should the solutions architect do next to meet these requirements?
A. Create the OrganizationAccountAccess IAM group in each member account. Include the necessary IAM roles for each administrator.
B. Create the OrganizationAccountAccessPolicy IAM policy in each member account. Connect the member accounts to the management account by using cross-account access.
C. Create the OrganizationAccountAccessRole IAM role in each member account. Grant permission to the management account to assume the IAM role.
D. Create the OrganizationAccountAccessRole IAM role in the management account. Attach the AdministratorAccess AWS managed policy to the IAM role. Assign the IAM role to the administrators in each member account.
Answer: C
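Once OrganizationAccountAccessRole exists in an invited member account and trusts the management account, administrators work in the member account by assuming that role. A minimal boto3 sketch (the member account ID is a placeholder):

```python
import boto3

sts = boto3.client("sts")

# Placeholder member account ID; the role name matches the convention used
# for accounts managed through AWS Organizations.
member_account_id = "111122223333"
role_arn = f"arn:aws:iam::{member_account_id}:role/OrganizationAccountAccessRole"

creds = sts.assume_role(RoleArn=role_arn, RoleSessionName="org-admin")["Credentials"]

# Use the temporary credentials to act inside the member account.
member_ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(member_ec2.describe_regions()["Regions"][0]["RegionName"])
```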
Question # 131
A company runs a web application on AWS. The web application delivers static content
from an Amazon S3 bucket that is behind an Amazon CloudFront distribution. The
application serves dynamic content by using an Application Load Balancer (ALB) that
distributes requests to a fleet of Amazon EC2 instances in Auto Scaling groups. The
application uses a domain name set up in Amazon Route 53.
Some users reported occasional issues when the users attempted to access the website
during peak hours. An operations team found that the ALB sometimes returned HTTP 503
Service Unavailable errors. The company wants to display a custom error message page
when these errors occur. The page should be displayed immediately for this error code.
Which solution will meet these requirements with the LEAST operational overhead?
A. Set up a Route 53 failover routing policy. Configure a health check to determine the status of the ALB endpoint and to fail over to the failover S3 bucket endpoint.
B. Create a second CloudFront distribution and an S3 static website to host the custom error page. Set up a Route 53 failover routing policy. Use an active-passive configuration between the two distributions.
C. Create a CloudFront origin group that has two origins. Set the ALB endpoint as the primary origin. For the secondary origin, set an S3 bucket that is configured to host a static website. Set up origin failover for the CloudFront distribution. Update the S3 static website to incorporate the custom error page.
D. Create a CloudFront function that validates each HTTP response code that the ALB returns. Create an S3 static website in an S3 bucket. Upload the custom error page to the S3 bucket as a failover. Update the function to read the S3 bucket and to serve the error page to the end users.
Answer: C
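The origin group in option C lives inside the CloudFront distribution configuration. The fragment below is a hedged sketch of the OriginGroups structure as it could appear in a boto3 update_distribution call; the origin IDs are placeholders that must match origins already defined in the distribution, and the rest of the distribution config is omitted.

```python
# Hypothetical fragment of a CloudFront DistributionConfig (not a complete call).
origin_groups = {
    "Quantity": 1,
    "Items": [
        {
            "Id": "primary-with-failover",
            "FailoverCriteria": {
                # Fail over to the secondary origin when the primary returns these codes.
                "StatusCodes": {"Quantity": 2, "Items": [502, 503]}
            },
            "Members": {
                "Quantity": 2,
                "Items": [
                    {"OriginId": "alb-origin"},        # primary: dynamic content
                    {"OriginId": "s3-error-origin"},   # secondary: static error page
                ],
            },
        }
    ],
}
```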
Question # 132
A company's solutions architect needs to provide secure Remote Desktop connectivity to
users for Amazon EC2 Windows instances that are hosted in a VPC. The solution must
integrate centralized user management with the company's on-premises Active Directory.
Connectivity to the VPC is through the internet. The company has hardware that can be
used to establish an AWS Site-to-Site VPN connection.
Which solution will meet these requirements MOST cost-effectively?
A. Deploy a managed Active Directory by using AWS Directory Service for Microsoft Active Directory. Establish a trust with the on-premises Active Directory. Deploy an EC2 instance as a bastion host in the VPC. Ensure that the EC2 instance is joined to the domain. Use the bastion host to access the target instances through RDP.
B. Configure AWS IAM Identity Center (AWS Single Sign-On) to integrate with the on-premises Active Directory by using the AWS Directory Service AD Connector. Configure permission sets against user groups for access to AWS Systems Manager. Use Systems Manager Fleet Manager to access the target instances through RDP.
C. Implement a VPN between the on-premises environment and the target VPC. Ensure that the target instances are joined to the on-premises Active Directory domain over the VPN connection. Configure RDP access through the VPN. Connect from the company's network to the target instances.
D. Deploy a managed Active Directory by using AWS Directory Service for Microsoft Active Directory. Establish a trust with the on-premises Active Directory. Deploy a Remote Desktop Gateway on AWS by using an AWS Quick Start. Ensure that the Remote Desktop Gateway is joined to the domain. Use the Remote Desktop Gateway to access the target instances through RDP.
Answer: D
Question # 133
A team of data scientists is using Amazon SageMaker instances and SageMaker APIs to
train machine learning (ML) models. The SageMaker instances are deployed in a
VPC that does not have access to or from the internet. Datasets for ML model training are
stored in an Amazon S3 bucket. Interface VPC endpoints provide access to Amazon S3
and the SageMaker APIs.
Occasionally, the data scientists require access to the Python Package Index (PyPI)
repository to update Python packages that they use as part of their workflow. A solutions
architect must provide access to the PyPI repository while ensuring that the SageMaker instances remain isolated from the internet.
Which solution will meet these requirements?
A. Create an AWS CodeCommit repository for each package that the data scientists need to access. Configure code synchronization between the PyPI repository and the CodeCommit repository. Create a VPC endpoint for CodeCommit.
B. Create a NAT gateway in the VPC. Configure VPC routes to allow access to the internet with a network ACL that allows access to only the PyPI repository endpoint.
C. Create a NAT instance in the VPC. Configure VPC routes to allow access to the internet. Configure SageMaker notebook instance firewall rules that allow access to only the PyPI repository endpoint.
D. Create an AWS CodeArtifact domain and repository. Add an external connection for public:pypi to the CodeArtifact repository. Configure the Python client to use the CodeArtifact repository. Create a VPC endpoint for CodeArtifact.
Answer: D
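A hedged boto3 sketch of the CodeArtifact setup described in option D (the domain and repository names are placeholders); the pip client is typically configured afterwards, for example with the aws codeartifact login --tool pip command, and traffic stays private through the CodeArtifact VPC endpoint.

```python
import boto3

codeartifact = boto3.client("codeartifact", region_name="eu-west-1")

# Placeholder names for the domain and repository.
codeartifact.create_domain(domain="ml-team")
codeartifact.create_repository(domain="ml-team", repository="python-packages")

# Proxy the public PyPI index through the private repository.
codeartifact.associate_external_connection(
    domain="ml-team",
    repository="python-packages",
    externalConnection="public:pypi",
)
```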
Question # 134
A company plans to deploy a new private intranet service on Amazon EC2 instances inside
a VPC. An AWS Site-to-Site VPN connects the VPC to the company's on-premises network. The new service must communicate with existing on-premises services. The on-premises
services are accessible through the use of hostnames that reside in the company.example
DNS zone. This DNS zone is wholly hosted on premises and is available only on
the company's private network.
A solutions architect must ensure that the new service can resolve hostnames on the
company.example domain to integrate with existing services.
Which solution meets these requirements?
A. Create an empty private zone in Amazon Route 53 for company.example. Add an additional NS record to the company's on-premises company.example zone that points to the authoritative name servers for the new private zone in Route 53.
B. Turn on DNS hostnames for the VPC. Configure a new outbound endpoint with Amazon Route 53 Resolver. Create a Resolver rule to forward requests for company.example to the on-premises name servers.
C. Turn on DNS hostnames for the VPC. Configure a new inbound resolver endpoint with Amazon Route 53 Resolver. Configure the on-premises DNS server to forward requests for company.example to the new resolver.
D. Use AWS Systems Manager to configure a run document that will install a hosts file that contains any required hostnames. Use an Amazon EventBridge rule to run the document when an instance is entering the running state.
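The forwarding mechanism described in option B consists of an outbound Resolver endpoint plus a forwarding rule associated with the VPC. A hedged boto3 sketch (the endpoint ID, on-premises DNS server IPs, and VPC ID are all placeholders):

```python
import uuid
import boto3

resolver = boto3.client("route53resolver")

# Placeholders: an existing outbound Resolver endpoint and the on-premises DNS servers.
OUTBOUND_ENDPOINT_ID = "rslvr-out-0123456789abcdef0"
ON_PREM_DNS = [{"Ip": "10.10.0.2", "Port": 53}, {"Ip": "10.10.0.3", "Port": 53}]

rule = resolver.create_resolver_rule(
    CreatorRequestId=str(uuid.uuid4()),
    Name="forward-company-example",
    RuleType="FORWARD",
    DomainName="company.example",
    TargetIps=ON_PREM_DNS,
    ResolverEndpointId=OUTBOUND_ENDPOINT_ID,
)["ResolverRule"]

# Associate the rule with the VPC that hosts the new intranet service.
resolver.associate_resolver_rule(
    ResolverRuleId=rule["Id"],
    VPCId="vpc-0123456789abcdef0",   # placeholder VPC ID
)
```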
Question # 135
A company runs its application in the eu-west-1 Region and has one account for each of its
environments: development, testing, and production. All the environments are running 24
hours a day, 7 days a week, by using stateful Amazon EC2 instances and Amazon RDS for
MySQL databases. The databases are between 500 GB and 800 GB in size.
The development team and testing team work on business days during business hours, but
the production environment operates 24 hours a day, 7 days a week. The company wants
to reduce costs. All resources are tagged with an environment tag with either development,
testing, or production as the key.
What should a solutions architect do to reduce costs with the LEAST operational effort?
A. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that runs once every day. Configure the rule to invoke one AWS Lambda function that starts or stops instances based on the tag, day, and time.
B. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that runs every business day in the evening. Configure the rule to invoke an AWS Lambda function that stops instances based on the tag. Create a second EventBridge (CloudWatch Events) rule that runs every business day in the morning. Configure the second rule to invoke another Lambda function that starts instances based on the tag.
C. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that runs every business day in the evening. Configure the rule to invoke an AWS Lambda function that terminates instances based on the tag. Create a second EventBridge (CloudWatch Events) rule that runs every business day in the morning. Configure the second rule to invoke another Lambda function that restores the instances from their last backup based on the tag.
D. Create an Amazon EventBridge rule that runs every hour. Configure the rule to invoke one AWS Lambda function that terminates or restores instances from their last backup based on the tag, day, and time.
Answer: B
Explanation: Creating an Amazon EventBridge rule that runs every business day in the
evening to stop instances and another rule that runs every business day in the morning to
start instances based on the tag will reduce costs with the least operational effort. This
approach allows for instances to be stopped during non-business hours when they are not
in use, reducing the costs associated with running them. It also allows for instances to be
started again in the morning when the development and testing teams need to use them.
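A minimal sketch of the Lambda handler behind option B, assuming the tag key is environment and the values are development and testing (the schedule itself would be an EventBridge cron rule such as cron(0 19 ? * MON-FRI *) for the evening stop; tag key, values, and the event payload are assumptions for illustration):

```python
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    """Stop (or start) dev/test EC2 instances selected by their environment tag."""
    action = event.get("action", "stop")   # the EventBridge rule passes "stop" or "start"

    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:environment", "Values": ["development", "testing"]},
            {"Name": "instance-state-name",
             "Values": ["running"] if action == "stop" else ["stopped"]},
        ]
    )["Reservations"]

    instance_ids = [
        instance["InstanceId"]
        for reservation in reservations
        for instance in reservation["Instances"]
    ]

    if instance_ids:
        if action == "stop":
            ec2.stop_instances(InstanceIds=instance_ids)
        else:
            ec2.start_instances(InstanceIds=instance_ids)
    return {"action": action, "instances": instance_ids}
```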
Question # 136
A company has used infrastructure as code (IaC) to provision a set of two Amazon EC2
instances. The instances have remained the same for several years.
The company's business has grown rapidly in the past few months. In response, the
company's operations team has implemented an Auto Scaling group to manage the
sudden increases in traffic. Company policy requires a monthly installation of security
updates on all operating systems that are running. The most recent security update required a reboot. As a result, the Auto Scaling group
terminated the instances and replaced them with new, unpatched instances.
Which combination of steps should a solutions architect recommend to avoid a recurrence
of this issue? (Choose two.)
A. Modify the Auto Scaling group by setting the Update policy to target the oldest launch configuration for replacement.
B. Create a new Auto Scaling group before the next patch maintenance. During the maintenance window, patch both groups and reboot the instances.
C. Create an Elastic Load Balancer in front of the Auto Scaling group. Configure monitoring to ensure that target group health checks return healthy after the Auto Scaling group replaces the terminated instances.
D. Create automation scripts to patch an AMI, update the launch configuration, and invoke an Auto Scaling instance refresh.
E. Create an Elastic Load Balancer in front of the Auto Scaling group. Configure termination protection on the instances.
Answer: C,D
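The final step of option D, rolling the patched AMI out through the Auto Scaling group, maps to a single API call. A hedged boto3 sketch (the group name is a placeholder, and it assumes the group's launch template or launch configuration already references the freshly patched AMI):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Placeholder Auto Scaling group name; assumes its launch template/configuration
# was already updated to point at the patched AMI.
response = autoscaling.start_instance_refresh(
    AutoScalingGroupName="web-asg",
    Preferences={
        "MinHealthyPercentage": 90,   # keep most of the fleet in service during the roll
        "InstanceWarmup": 300,        # seconds before a new instance counts as healthy
    },
)
print("Instance refresh started:", response["InstanceRefreshId"])
```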
Question # 137
A company has application services that have been containerized and deployed on multiple
Amazon EC2 instances with public IPs. An Apache Kafka cluster has been deployed to the EC2 instances. A PostgreSQL database has been migrated to Amazon RDS for
PostgreSQL. The company expects a significant increase of orders on its platform when a
new version of its flagship product is released.
What changes to the current architecture will reduce operational overhead and support the
product release?
A. Create an EC2 Auto Scaling group behind an Application Load Balancer. Create additional read replicas for the DB instance. Create Amazon Kinesis data streams and configure the application services to use the data streams. Store and serve static content directly from Amazon S3.
B. Create an EC2 Auto Scaling group behind an Application Load Balancer. Deploy the DB instance in Multi-AZ mode and enable storage auto scaling. Create Amazon Kinesis data streams and configure the application services to use the data streams. Store and serve static content directly from Amazon S3.
C. Deploy the application on a Kubernetes cluster created on the EC2 instances behind an Application Load Balancer. Deploy the DB instance in Multi-AZ mode and enable storage auto scaling. Create an Amazon Managed Streaming for Apache Kafka cluster and configure the application services to use the cluster. Store static content in Amazon S3 behind an Amazon CloudFront distribution.
D. Deploy the application on Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate and enable auto scaling behind an Application Load Balancer. Create additional read replicas for the DB instance. Create an Amazon Managed Streaming for Apache Kafka cluster and configure the application services to use the cluster. Store static content in Amazon S3 behind an Amazon CloudFront distribution.
Answer: D
Explanation:
The correct answer is D. Deploy the application on Amazon Elastic Kubernetes Service
(Amazon EKS) with AWS Fargate and enable auto scaling behind an Application Load
Balancer. Create additional read replicas for the DB instance. Create an Amazon Managed
Streaming for Apache Kafka cluster and configure the application services to use the
cluster. Store static content in Amazon S3 behind an Amazon CloudFront distribution.
Option D meets the requirements of the scenario because it allows you to reduce
operational overhead and support the product release by using the following AWS services
and features:
Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed service that
allows you to run Kubernetes applications on AWS without needing to install,
operate, or maintain your own Kubernetes control plane. You can use Amazon
EKS to deploy your containerized application services on a Kubernetes cluster that
is compatible with your existing tools and processes.
AWS Fargate is a serverless compute engine that eliminates the need to provision
and manage servers for your containers. You can use AWS Fargate as the launch
type for your Amazon EKS pods, which are the smallest deployable units of
computing in Kubernetes. You can also enable auto scaling for your pods, which
allows you to automatically adjust the number of pods based on the demand or custom metrics.
An Application Load Balancer (ALB) is a load balancer that distributes traffic
across multiple targets in multiple Availability Zones using HTTP or HTTPS
protocols. You can use an ALB to balance the load across your Amazon EKS pods
and provide high availability and fault tolerance for your application.
Amazon RDS for PostgreSQL is a fully managed relational database service that
supports the PostgreSQL open source database engine. You can create additional
read replicas for your DB instance, which are copies of your primary DB instance
that can handle read-only queries and improve performance. You can also use
read replicas to scale out beyond the capacity of a single DB instance for read-heavy
workloads.
Amazon Managed Streaming for Apache Kafka (Amazon MSK) is a fully managed
service that makes it easy to build and run applications that use Apache Kafka to
process streaming data. Apache Kafka is an open source platform for building
real-time data pipelines and streaming applications. You can use Amazon MSK to
create and manage a Kafka cluster that is highly available, secure, and compatible
with your existing Kafka applications. You can also configure your application
services to use the Amazon MSK cluster as a source or destination of streaming
data.
Amazon S3 is an object storage service that offers high durability, availability, and
scalability. You can store static content such as images, videos, or documents in
Amazon S3 buckets, which are containers for objects. You can also serve static
content directly from Amazon S3 using public URLs or presigned URLs.
Amazon CloudFront is a fast content delivery network (CDN) service that securely
delivers data, videos, applications, and APIs to customers globally with low latency
and high transfer speeds. You can use Amazon CloudFront to create a distribution
that caches static content from your Amazon S3 bucket at edge locations closer to
your users. This can improve the performance and user experience of your
application.
Option A is incorrect because creating an EC2 Auto Scaling group behind an ALB would
not reduce operational overhead as much as using AWS Fargate with Amazon EKS, as
you would still need to manage EC2 instances for your containers. Creating additional read
replicas for the DB instance would not provide high availability or fault tolerance in case of
a failure of the primary DB instance, unlike deploying the DB instance in Multi-AZ mode.
Creating Amazon Kinesis data streams would not be compatible with your existing Apache
Kafka applications, unlike using Amazon MSK.
Option B is incorrect because creating an EC2 Auto Scaling group behind an ALB would
not reduce operational overhead as much as using AWS Fargate with Amazon EKS, as
you would still need to manage EC2 instances for your containers. Creating Amazon
Kinesis data streams would not be compatible with your existing Apache Kafka
applications, unlike using Amazon MSK. Storing and serving static content directly from
Amazon S3 would not provide optimal performance and user experience, unlike using
Amazon CloudFront.
Option C is incorrect because deploying the application on a Kubernetes cluster created on
the EC2 instances behind an ALB would not reduce operational overhead as much as
using AWS Fargate with Amazon EKS, as you would still need to manage EC2 instances
and the Kubernetes control plane for your containers. Patching, scaling, and upgrading a
self-managed control plane and its worker nodes is exactly the operational overhead that
Amazon EKS with AWS Fargate removes, which is why option C falls short even though its
use of Amazon MSK, Multi-AZ, and CloudFront is otherwise reasonable.
Question # 138
A company manages hundreds of AWS accounts centrally in an organization in AWS
Organizations. The company recently started to allow product teams to create and manage
their own S3 access points in their accounts. The S3 access points can be accessed only
within VPCs, not on the internet.
What is the MOST operationally efficient way to enforce this requirement?
A. Set the S3 access point resource policy to deny the s3:CreateAccessPoint action unless the s3:AccessPointNetworkOrigin condition key evaluates to VPC.
B. Create an SCP at the root level in the organization to deny the s3:CreateAccessPoint action unless the s3:AccessPointNetworkOrigin condition key evaluates to VPC.
C. Use AWS CloudFormation StackSets to create a new IAM policy in each AWS account that allows the s3:CreateAccessPoint action only if the s3:AccessPointNetworkOrigin condition key evaluates to VPC.
D. Set the S3 bucket policy to deny the s3:CreateAccessPoint action unless the s3:AccessPointNetworkOrigin condition key evaluates to VPC.
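The enforcement mechanism referenced in option B is a service control policy that denies s3:CreateAccessPoint unless the s3:AccessPointNetworkOrigin condition key evaluates to VPC. A hedged sketch of that policy document and of attaching it at the organization root with boto3 (the policy name and root ID are placeholders):

```python
import json
import boto3

organizations = boto3.client("organizations")

# Deny creation of any S3 access point whose network origin is not "VPC".
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyNonVpcAccessPoints",
            "Effect": "Deny",
            "Action": "s3:CreateAccessPoint",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"s3:AccessPointNetworkOrigin": "VPC"}
            },
        }
    ],
}

policy = organizations.create_policy(
    Content=json.dumps(scp_document),
    Description="Allow S3 access points only with a VPC network origin",
    Name="vpc-only-s3-access-points",          # placeholder policy name
    Type="SERVICE_CONTROL_POLICY",
)["Policy"]["PolicySummary"]

organizations.attach_policy(
    PolicyId=policy["Id"],
    TargetId="r-examplerootid",                # placeholder organization root ID
)
```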
What our clients say about SAP-C02 Study Resources
Oscar
Sep 13, 2024
I cannot express how impressed I am with the SAP-C02 PDF Guide from Salesforcexamdumps.com. I just want to share my experience: all the questions came from the dumps, except for two new ones. Thanks!
Umar
Sep 12, 2024
I got a 97% score. Thanks!!! Awesome!!
Quentin
Sep 12, 2024
In comparison to other websites, this platform offers more affordable exam resources that contain the exact same questions and answers. I was able to achieve an outstanding score of 90%, and I am grateful for the Dumps provided by Salesforcexamdumps.com.
Vanessa
Sep 11, 2024
As soon as I started using these dumps, I knew I was in good hands, because I had used them once before. Thanks to these dumps, I passed the exam with ease and can confidently say that Salesforcexamdumps.com is fantastic.
Xavier
Sep 11, 2024
If you're looking for a reliable source of SAP-C02 dumps, look no further than Salesforcexamdumps.com. The dumps are up-to-date and accurate, and the explanations are clear and easy to understand. I would highly recommend these dumps to anyone preparing for the SAP-C02 exam.
Alex Turner
Sep 10, 2024
I am excited to announce that I passed the exam, and I couldn't have done it without the invaluable assistance provided by Salesforcexamdumps.com exam dumps. The questions were remarkably similar to those in the actual exam, and I am extremely grateful for this amazing resource.
Hina Khan
Sep 10, 2024
The SAP-C02 dumps from Salesforcexamdumps.com were just what I needed to prepare for my exam. The dumps were well-organized and covered all the important topics in a concise and clear manner. I passed the exam without any difficulty and am grateful for these helpful dumps.
Uma
Sep 09, 2024
After studying for only 5 days, I cleared my AWS Certified Solutions Architect - Professional exam with 880/1000 marks.
Ryan Ali
Sep 09, 2024
These SAP-C02 Practice Tests exceeded my expectations in every way possible. The material is comprehensive, well-organized, and updated regularly to ensure it covers the latest exam topics. I was able to pass the exam on my first attempt
Frederick
Sep 08, 2024
I am thoroughly impressed with the accuracy and quality of the SAP-C02 dumps. The exam material provided by this platform has exceeded my expectations, and I'm more than satisfied with the results.
Zachary
Sep 08, 2024
I purchased SAP-C02 Dumps from Salesforcexamdumps.com and I have to say, it was a great study material. The dumps were comprehensive and covered all the topics I needed to know for the SAP-C02 exam. I was able to pass the exam on my first try with flying colors thanks to these SAP-C02 dumps.
Jack
Sep 07, 2024
I am immensely grateful for the invaluable resource provided by this platform. Without it, passing my SAP-C02 exam would have been an insurmountable challenge. Thank you for your assistance and support throughout the exam preparation process.
Patrick
Sep 07, 2024
I wanted to say thanks. These SAP-C02 dumps are up to date, accurate, and authentic; I passed my exam. Highly recommended!
Samuel
Sep 06, 2024
Hello everyone, I am delighted to share with you that I passed my SAP-C02 exam on my first attempt, all thanks to the Dumps that I came across. I couldn't be more thrilled with the results, and I owe it all to these wonderful dumps!
Katherine
Sep 06, 2024
Highly recommended! I passed my SAP-C02 exam easily.