AWS Certified Solutions Architect - Associate (SAA-C03) Dumps April 2024
Are you tired of looking for a source that will keep you updated on the AWS Certified Solutions Architect - Associate (SAA-C03) Exam and that also offers a collection of affordable, high-quality, and incredibly easy Amazon SAA-C03 Practice Questions? Well then, you are in luck, because Salesforcexamdumps.com has just updated them! Get ready to become AWS Certified Associate certified.
Amazon SAA-C03 is the exam you need to pass to get certified, and the certification rewards deserving candidates who earn strong results. The AWS Certified Associate certification validates a candidate's expertise in working with Amazon Web Services. In this fast-paced world, a certification is the quickest way to gain your employer's approval. Try your luck at passing the AWS Certified Solutions Architect - Associate (SAA-C03) Exam and becoming a certified professional today. Salesforcexamdumps.com is always eager to extend a helping hand by providing approved and accepted Amazon SAA-C03 Practice Questions. Passing AWS Certified Solutions Architect - Associate (SAA-C03) will be your ticket to a better future!
Pass with Amazon SAA-C03 Braindumps!
Contrary to the belief that certification exams are generally hard to get through, passing AWS Certified Solutions Architect - Associate (SAA-C03) is incredibly easy, provided you have access to a reliable resource such as the Salesforcexamdumps.com Amazon SAA-C03 PDF. We have been in this business long enough to understand where most resources go wrong. Passing the Amazon AWS Certified Associate certification is all about having the right information, so we filled our Amazon SAA-C03 Dumps with all the data you need to pass. These carefully curated sets of AWS Certified Solutions Architect - Associate (SAA-C03) Practice Questions target the most frequently repeated exam questions, so you know they are essential and can help ensure a passing result. Stop wasting your time waiting around and order your set of Amazon SAA-C03 Braindumps now!
We aim to provide all AWS Certified Associate certification exam candidates with the best resources at minimum rates. You can check out our free demo before hitting the download button to ensure the Amazon SAA-C03 Practice Questions are what you wanted. And do not forget about the discount; we always give our customers a little extra.
Why Choose Amazon SAA-C03 PDF?
Unlike other websites, Salesforcexamdumps.com prioritizes the needs of AWS Certified Solutions Architect - Associate (SAA-C03) candidates. Not every Amazon exam candidate has full-time access to the internet, and it is hard to sit in front of a computer screen for too many hours. Are you one of them? We understand, which is why our AWS Certified Associate solution, the Amazon SAA-C03 Question Answers, comes in two formats: a PDF and an Online Test Engine. One is for customers who like online platforms with realistic exam simulation; the other is for those who prefer keeping their material close at hand. Moreover, you can download or print the Amazon SAA-C03 Dumps with ease.
If you still have any queries, our team of experts is at your service 24/7 to answer your questions. Just leave us a quick message in the chat box below or email us at [email protected].
Amazon SAA-C03 Sample Questions
Question # 1
A company is developing a mobile game that streams score updates to a backend processor and then posts results on a leaderboard. A solutions architect needs to design a solution that can handle large traffic spikes, process the mobile game updates in order of receipt, and store the processed updates in a highly available database. The company also wants to minimize the management overhead required to maintain the solution.
What should the solutions architect do to meet these requirements?
A. Push score updates to Amazon Kinesis Data Streams. Process the updates in Kinesis Data Streams with AWS Lambda. Store the processed updates in Amazon DynamoDB.
B. Push score updates to Amazon Kinesis Data Streams. Process the updates with a fleet of Amazon EC2 instances set up for Auto Scaling. Store the processed updates in Amazon Redshift.
C. Push score updates to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe an AWS Lambda function to the SNS topic to process the updates. Store the processed updates in a SQL database running on Amazon EC2.
D. Push score updates to an Amazon Simple Queue Service (Amazon SQS) queue. Use a fleet of Amazon EC2 instances with Auto Scaling to process the updates in the SQS queue. Store the processed updates in an Amazon RDS Multi-AZ DB instance.
Answer: A
Explanation: Amazon Kinesis Data Streams is a scalable and reliable service that can
ingest, buffer, and process streaming data in real-time. It can handle large traffic spikes
and preserve the order of the incoming data records. AWS Lambda is a serverless
compute service that can process the data streams from Kinesis Data Streams without
requiring any infrastructure management. It can also scale automatically to match the
throughput of the data stream. Amazon DynamoDB is a fully managed, highly available,
and fast NoSQL database that can store the processed updates from Lambda. It can also
handle high write throughput and provide consistent performance. By using these services,
the solutions architect can design a solution that meets the requirements of the company
with the least operational overhead.
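As an illustration of option A, here is a minimal sketch of a Lambda handler that consumes records from a Kinesis Data Streams event source mapping and writes each processed score to DynamoDB. The table name, partition key, and record fields are hypothetical, not part of the question.

```python
import base64
import json
from decimal import Decimal

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Leaderboard")  # hypothetical table name


def handler(event, context):
    """Process score updates delivered by a Kinesis Data Streams event source mapping."""
    for record in event["Records"]:
        # Kinesis record payloads arrive base64-encoded.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # Assumes the table uses player_id as its partition key (hypothetical schema).
        table.put_item(
            Item={
                "player_id": payload["player_id"],
                "score": Decimal(str(payload["score"])),
                "arrived_at": Decimal(str(record["kinesis"]["approximateArrivalTimestamp"])),
            }
        )
    return {"records_processed": len(event["Records"])}
```

Because the event source mapping feeds records to Lambda in shard order, the handler preserves the order of receipt per shard without any extra coordination.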
Question # 2
A company runs an SMB file server in its data center. The file server stores large files that the company frequently accesses for up to 7 days after the file creation date. After 7 days, the company needs to be able to access the files with a maximum retrieval time of 24 hours.
Which solution will meet these requirements?
A. Use AWS DataSync to copy data that is older than 7 days from the SMB file server to AWS.
B. Create an Amazon S3 File Gateway to increase the company's storage space. Create an S3 Lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.
C. Create an Amazon FSx File Gateway to increase the company's storage space. Create an Amazon S3 Lifecycle policy to transition the data after 7 days.
D. Configure access to Amazon S3 for each user. Create an S3 Lifecycle policy to transition the data to S3 Glacier Flexible Retrieval after 7 days.
Answer: B
Explanation:
Amazon S3 File Gateway is a service that provides a file-based interface to Amazon S3,
which appears as a network file share. It enables you to store and retrieve Amazon S3
objects through standard file storage protocols such as SMB. S3 File Gateway can also
cache frequently accessed data locally for low-latency access. S3 Lifecycle policy is a
feature that allows you to define rules that automate the management of your objects
throughout their lifecycle. You can use S3 Lifecycle policy to transition objects to different
storage classes based on their age and access patterns. S3 Glacier Deep Archive is a
storage class that offers the lowest cost for long-term data archiving, with a retrieval time of
12 hours or 48 hours. This solution will meet the requirements, as it allows the company to
store large files in S3 with SMB file access, and to move the files to S3 Glacier Deep
Archive after 7 days for cost savings and compliance.
References: 1 provides an overview of Amazon S3 File Gateway and its benefits.
2 explains how to use S3 Lifecycle policy to manage object storage lifecycle.
3 describes the features and use cases of S3 Glacier Deep Archive storage class.
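A minimal sketch of the lifecycle rule from option B, applied with boto3. The bucket name is a placeholder, and the S3 File Gateway itself would be created separately (console, CLI, or the Storage Gateway API).

```python
import boto3

s3 = boto3.client("s3")

# Transition every object to S3 Glacier Deep Archive 7 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-file-gateway-bucket",  # placeholder bucket backing the file share
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-7-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Transitions": [{"Days": 7, "StorageClass": "DEEP_ARCHIVE"}],
            }
        ]
    },
)
```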
Question # 3
A company has an organization in AWS Organizations that has all features enabled. The company requires that all API calls and logins in any existing or new AWS account must be audited. The company needs a managed solution to prevent additional work and to minimize costs. The company also needs to know when any AWS account is not compliant with the AWS Foundational Security Best Practices (FSBP) standard.
Which solution will meet these requirements with the LEAST operational overhead?
A. Deploy an AWS Control Tower environment in the Organizations management account. Enable AWS Security Hub and AWS Control Tower Account Factory in the environment.
B. Deploy an AWS Control Tower environment in a dedicated Organizations member account. Enable AWS Security Hub and AWS Control Tower Account Factory in the environment.
C. Use AWS Managed Services (AMS) Accelerate to build a multi-account landing zone (MALZ). Submit an RFC to self-service provision Amazon GuardDuty in the MALZ.
D. Use AWS Managed Services (AMS) Accelerate to build a multi-account landing zone (MALZ). Submit an RFC to self-service provision AWS Security Hub in the MALZ.
Answer: A
Explanation: AWS Control Tower is a fully managed service that simplifies the setup and
governance of a secure, compliant, multi-account AWS environment. It establishes a
landing zone that is based on best-practices blueprints, and it enables governance using
controls you can choose from a pre-packaged list. The landing zone is a well-architected,
multi-account baseline that follows AWS best practices. Controls implement governance
rules for security, compliance, and operations. AWS Security Hub is a service that provides
a comprehensive view of your security posture across your AWS accounts. It aggregates,
organizes, and prioritizes security alerts and findings from multiple AWS services, such as
IAM Access Analyzer, as well as from AWS Partner solutions. AWS Security Hub
continuously monitors your environment using automated compliance checks based on the
AWS best practices and industry standards, such as the AWS Foundational Security Best
Practices (FSBP) standard. AWS Control Tower Account Factory is a feature that
automates the provisioning of new AWS accounts that are preconfigured to meet your
business, security, and compliance requirements. By deploying an AWS Control Tower
environment in the Organizations management account, you can leverage the existing
organization structure and policies, and enable AWS Security Hub and AWS Control Tower
Account Factory in the environment. This way, you can audit all API calls and logins in any
existing or new AWS account, monitor the compliance status of each account with the FSBP standard, and provision new accounts with ease and consistency. This solution
meets the requirements with the least operational overhead, as you do not need to manage
any infrastructure, perform any data migration, or submit any requests for changes.
References:
AWS Control Tower
[AWS Security Hub]
[AWS Control Tower Account Factory]
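A minimal sketch of the Security Hub piece of option A with boto3, run from the management account after Control Tower has been set up. The default standards enabled here include the AWS Foundational Security Best Practices (FSBP) standard; Account Factory enrollment itself is done through the Control Tower console or Service Catalog.

```python
import boto3

securityhub = boto3.client("securityhub")

# Enable Security Hub in the current account; the default standards
# include the AWS Foundational Security Best Practices (FSBP) standard.
securityhub.enable_security_hub(EnableDefaultStandards=True)

# Confirm which standards are active and their status.
for subscription in securityhub.get_enabled_standards()["StandardsSubscriptions"]:
    print(subscription["StandardsArn"], subscription["StandardsStatus"])
```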
Question # 4
A solutions architect is designing a user authentication solution for a company. The solution must invoke two-factor authentication for users who log in from inconsistent geographic locations, IP addresses, or devices. The solution must also be able to scale up to accommodate millions of users.
Which solution will meet these requirements?
A. Configure Amazon Cognito user pools for user authentication. Enable the risk-based adaptive authentication feature with multi-factor authentication (MFA).
B. Configure Amazon Cognito identity pools for user authentication. Enable multi-factor authentication (MFA).
C. Configure AWS Identity and Access Management (IAM) users for user authentication. Attach an IAM policy that allows the AllowManageOwnUserMFA action.
D. Configure AWS IAM Identity Center (AWS Single Sign-On) authentication for user authentication. Configure the permission sets to require multi-factor authentication (MFA).
Answer: A
Explanation: Amazon Cognito user pools provide a secure and scalable user directory for
user authentication and management. User pools support various authentication methods,
such as username and password, email and password, phone number and password, and
social identity providers. User pools also support multi-factor authentication (MFA), which
adds an extra layer of security by requiring users to provide a verification code or a
biometric factor in addition to their credentials. User pools can also enable risk-based
adaptive authentication, which dynamically adjusts the authentication challenge based on
the risk level of the sign-in attempt. For example, if a user tries to sign in from an unfamiliar
device or location, the user pool can require a stronger authentication factor, such as SMS
or email verification code. This feature helps to protect user accounts from unauthorized
access and reduce the friction for legitimate users. User pools can scale up to millions of
users and integrate with other AWS services, such as Amazon SNS, Amazon SES, AWS
Lambda, and AWS KMS.
Amazon Cognito identity pools provide a way to federate identities from multiple identity
providers, such as user pools, social identity providers, and corporate identity providers.
Identity pools allow users to access AWS resources with temporary, limited-privilege
credentials. Identity pools do not provide user authentication or management features,
such as MFA or adaptive authentication. Therefore, option B is not correct.
AWS Identity and Access Management (IAM) is a service that helps to manage access to
AWS resources. IAM users are entities that represent people or applications that need to
interact with AWS. IAM users can be authenticated with a password or an access key. IAM
users can also enable MFA for their own accounts, by using the
AllowManageOwnUserMFA action in an IAM policy. However, IAM users are not suitable
for user authentication for web or mobile applications, as they are intended for
administrative purposes. IAM users also do not support adaptive authentication based on
risk factors. Therefore, option C is not correct.
AWS IAM Identity Center (AWS Single Sign-On) is a service that enables users to sign in
to multiple AWS accounts and applications with a single set of credentials. AWS SSO
supports various identity sources, such as AWS SSO directory, AWS Managed Microsoft
AD, and external identity providers. AWS SSO also supports MFA for user authentication,
which can be configured in the permission sets that define the level of access for each
user. However, AWS SSO does not support adaptive authentication based on risk factors.
Therefore, option D is not correct.
References:
Amazon Cognito User Pools
Adding Multi-Factor Authentication (MFA) to a User Pool
Risk-Based Adaptive Authentication
Amazon Cognito Identity Pools
IAM Users
Enabling MFA Devices
AWS Single Sign-On
How AWS SSO Works
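A minimal sketch of option A using boto3: turning on MFA for a user pool and configuring risk-based adaptive authentication through the risk configuration API. The user pool ID and app client ID are placeholders, and this assumes the pool already has advanced security features set to ENFORCED so the risk configuration takes effect.

```python
import boto3

cognito = boto3.client("cognito-idp")

USER_POOL_ID = "us-east-1_examplePool"   # placeholder user pool ID
CLIENT_ID = "exampleclientid1234567890"  # placeholder app client ID

# Require MFA for the user pool (software token / authenticator apps).
cognito.set_user_pool_mfa_config(
    UserPoolId=USER_POOL_ID,
    SoftwareTokenMfaConfiguration={"Enabled": True},
    MfaConfiguration="ON",
)

# Risk-based adaptive authentication: challenge risky sign-ins with MFA.
# Notifications are disabled here to keep the sketch minimal; enabling them
# also requires a NotifyConfiguration with a verified sender address.
cognito.set_risk_configuration(
    UserPoolId=USER_POOL_ID,
    ClientId=CLIENT_ID,
    AccountTakeoverRiskConfiguration={
        "Actions": {
            "LowAction": {"Notify": False, "EventAction": "NO_ACTION"},
            "MediumAction": {"Notify": False, "EventAction": "MFA_IF_CONFIGURED"},
            "HighAction": {"Notify": False, "EventAction": "MFA_REQUIRED"},
        }
    },
)
```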
Question # 5
A solutions architect needs to design the architecture for an application that a vendor provides as a Docker container image. The container needs 50 GB of storage available for temporary files. The infrastructure must be serverless.
Which solution meets these requirements with the LEAST operational overhead?
A. Create an AWS Lambda function that uses the Docker container image with an Amazon S3 mounted volume that has more than 50 GB of space.
B. Create an AWS Lambda function that uses the Docker container image with an Amazon Elastic Block Store (Amazon EBS) volume that has more than 50 GB of space.
C. Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the AWS Fargate launch type. Create a task definition for the container image with an Amazon Elastic File System (Amazon EFS) volume. Create a service with that task definition.
D. Create an Amazon Elastic Container Service (Amazon ECS) cluster that uses the Amazon EC2 launch type with an Amazon Elastic Block Store (Amazon EBS) volume that has more than 50 GB of space. Create a task definition for the container image. Create a service with that task definition.
Answer: C
Explanation:
The AWS Fargate launch type is a serverless way to run containers on Amazon ECS,
without having to manage any underlying infrastructure. You only pay for the resources
required to run your containers, and AWS handles the provisioning, scaling, and security of
the cluster. Amazon EFS is a fully managed, elastic, and scalable file system that can be
mounted to multiple containers, and provides high availability and durability. By using AWS
Fargate and Amazon EFS, you can run your Docker container image with 50 GB of storage available for temporary files, with the least operational overhead. This solution meets the
requirements of the question.
References:
AWS Fargate
Amazon Elastic File System
Using Amazon EFS file systems with Amazon ECS
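A minimal sketch of the task definition from option C, registered with boto3. The file system ID, image URI, and execution role ARN are placeholders; the Fargate cluster and service would be created separately.

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="vendor-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="1024",      # 1 vCPU
    memory="2048",   # 2 GB
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder
    containerDefinitions=[
        {
            "name": "vendor-app",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/vendor-app:latest",  # placeholder
            "essential": True,
            # Mount the EFS file system to hold the 50 GB of temporary files.
            "mountPoints": [{"sourceVolume": "scratch", "containerPath": "/tmp/scratch"}],
        }
    ],
    volumes=[
        {
            "name": "scratch",
            "efsVolumeConfiguration": {"fileSystemId": "fs-0123456789abcdef0"},  # placeholder
        }
    ],
)
```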
Question # 6
A company uses AWS Organizations to run workloads within multiple AWS accounts. A tagging policy adds department tags to AWS resources when the company creates tags. An accounting team needs to determine spending on Amazon EC2 consumption. The accounting team must determine which departments are responsible for the costs regardless of AWS account. The accounting team has access to AWS Cost Explorer for all AWS accounts within the organization and needs to access all reports from Cost Explorer.
Which solution meets these requirements in the MOST operationally efficient way?
A. From the Organizations management account billing console, activate a user-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by tag name, and filter by EC2.
B. From the Organizations management account billing console, activate an AWS-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by tag name, and filter by EC2.
C. From the Organizations member account billing console, activate a user-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by the tag name, and filter by EC2.
D. From the Organizations member account billing console, activate an AWS-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by tag name, and filter by EC2.
Answer: B
Explanation: This solution meets the following requirements:
It is operationally efficient, as it only requires one activation of the cost allocation
tag and one creation of the cost report from the management account, which has
access to all the member accounts’ data and billing preferences.
It is consistent, as it uses the AWS-defined cost allocation tag named department,
which is automatically applied to resources when the company creates tags using
the tagging policy enforced by AWS Organizations. This ensures that the tag name
and value are the same across all the resources and accounts, and avoids any
discrepancies or errors that might arise from user-defined tags.
It is informative, as it creates one cost report in Cost Explorer grouping by the tag
name, and filters by EC2. This allows the accounting team to see the breakdown
of EC2 consumption and costs by department, regardless of the AWS account.
The team can also use other features of Cost Explorer, such as charts, filters, and
forecasts, to analyze and optimize the spending.
References:
Using AWS cost allocation tags - AWS Billing
User-defined cost allocation tags - AWS Billing
Cost Tagging and Reporting with AWS Organizations
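A minimal sketch of the Cost Explorer query behind the report described above, run from the management account with boto3. The time period is an arbitrary example, and it assumes the department cost allocation tag has already been activated in the billing console.

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-03-01", "End": "2024-04-01"},  # example month
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    # Group costs by the activated department cost allocation tag.
    GroupBy=[{"Type": "TAG", "Key": "department"}],
    # Limit the report to EC2 compute consumption.
    Filter={
        "Dimensions": {
            "Key": "SERVICE",
            "Values": ["Amazon Elastic Compute Cloud - Compute"],
        }
    },
)

for group in response["ResultsByTime"][0]["Groups"]:
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])
```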
Question # 7
A company is building an Amazon Elastic Kubernetes Service (Amazon EKS) cluster for its workloads. All secrets that are stored in Amazon EKS must be encrypted in the Kubernetes etcd key-value store.
Which solution will meet these requirements?
A. Create a new AWS Key Management Service (AWS KMS) key. Use AWS Secrets Manager to manage, rotate, and store all secrets in Amazon EKS.
B. Create a new AWS Key Management Service (AWS KMS) key. Enable Amazon EKS KMS secrets encryption on the Amazon EKS cluster.
C. Create the Amazon EKS cluster with default options. Use the Amazon Elastic Block Store (Amazon EBS) Container Storage Interface (CSI) driver as an add-on.
D. Create a new AWS Key Management Service (AWS KMS) key with the alias aws/ebs. Enable default Amazon Elastic Block Store (Amazon EBS) volume encryption for the account.
Answer: B
Explanation: This option is the most secure and simple way to encrypt the secrets that are
stored in Amazon EKS. AWS Key Management Service (AWS KMS) is a service that
allows you to create and manage encryption keys that can be used to encrypt your data.
Amazon EKS KMS secrets encryption is a feature that enables you to use a KMS key to
encrypt the secrets that are stored in the Kubernetes etcd key-value store. This provides an
additional layer of protection for your sensitive data, such as passwords, tokens, and keys.
You can create a new KMS key or use an existing one, and then enable the Amazon EKS
KMS secrets encryption on the Amazon EKS cluster. You can also use IAM policies to
control who can access or use the KMS key.
Option A is not correct because using AWS Secrets Manager to manage, rotate, and store
all secrets in Amazon EKS is not necessary or efficient. AWS Secrets Manager is a service
that helps you securely store, retrieve, and rotate your secrets, such as database
credentials, API keys, and passwords. You can use it to manage secrets that are used by
your applications or services outside of Amazon EKS, but it is not designed to encrypt the
secrets that are stored in the Kubernetes etcd key-value store. Moreover, using AWS
Secrets Manager would incur additional costs and complexity, and it would not leverage the native Amazon EKS KMS secrets encryption feature.
Option C is not correct because using the Amazon EBS Container Storage Interface (CSI)
driver as an add-on does not encrypt the secrets that are stored in Amazon EKS. The
Amazon EBS CSI driver is a plugin that allows you to use Amazon EBS volumes as
persistent storage for your Kubernetes pods. It is useful for providing durable and scalable
storage for your applications, but it does not affect the encryption of the secrets that are
stored in the Kubernetes etcd key-value store. Moreover, using the Amazon EBS CSI
driver would require additional configuration and resources, and it would not provide the
same level of security as using a KMS key.
Option D is not correct because creating a new AWS KMS key with the alias aws/ebs and
enabling default Amazon EBS volume encryption for the account does not encrypt the
secrets that are stored in Amazon EKS. The alias aws/ebs is a reserved alias that is used
by AWS to create a default KMS key for your account. This key is used to encrypt the
Amazon EBS volumes that are created in your account, unless you specify a different KMS
key. Enabling default Amazon EBS volume encryption for the account is a setting that ensures that all new Amazon EBS volumes are encrypted by default. However, these
features do not affect the encryption of the secrets that are stored in the Kubernetes etcd
key-value store. Moreover, using the default KMS key or the default encryption setting
would not provide the same level of control and security as using a custom KMS key and
enabling the Amazon EKS KMS secrets encryption feature. References:
Encrypting secrets used in Amazon EKS
What Is AWS Key Management Service?
What Is AWS Secrets Manager?
Amazon EBS CSI driver
Encryption at rest
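A minimal sketch of option B with boto3: creating a KMS key and an EKS cluster with envelope encryption of Kubernetes secrets enabled. The role ARN, subnet IDs, and Kubernetes version are placeholders; for an existing cluster, associate_encryption_config applies the same setting.

```python
import boto3

kms = boto3.client("kms")
eks = boto3.client("eks")

# Create a KMS key dedicated to encrypting Kubernetes secrets in etcd.
key_arn = kms.create_key(Description="EKS secrets encryption key")["KeyMetadata"]["Arn"]

eks.create_cluster(
    name="example-cluster",
    version="1.29",  # example version
    roleArn="arn:aws:iam::123456789012:role/eksClusterRole",  # placeholder
    resourcesVpcConfig={
        "subnetIds": ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"]  # placeholders
    },
    # Envelope-encrypt Kubernetes secrets stored in etcd with the KMS key.
    encryptionConfig=[{"resources": ["secrets"], "provider": {"keyArn": key_arn}}],
)
```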
Question # 8
A retail company has several businesses. The IT team for each business manages its own AWS account. Each team account is part of an organization in AWS Organizations. Each team monitors its product inventory levels in an Amazon DynamoDB table in the team's own AWS account.
The company is deploying a central inventory reporting application into a shared AWS account. The application must be able to read items from all the teams' DynamoDB tables.
Which authentication option will meet these requirements MOST securely?
A. Integrate DynamoDB with AWS Secrets Manager in the inventory application account. Configure the application to use the correct secret from Secrets Manager to authenticate and read the DynamoDB table. Schedule secret rotation for every 30 days.
B. In every business account, create an IAM user that has programmatic access. Configure the application to use the correct IAM user access key ID and secret access key to authenticate and read the DynamoDB table. Manually rotate IAM access keys every 30 days.
C. In every business account, create an IAM role named BU_ROLE with a policy that gives the role access to the DynamoDB table and a trust policy to trust a specific role in the inventory application account. In the inventory account, create a role named APP_ROLE that allows access to the STS AssumeRole API operation. Configure the application to use APP_ROLE and assume the cross-account role BU_ROLE to read the DynamoDB table.
D. Integrate DynamoDB with AWS Certificate Manager (ACM). Generate identity certificates to authenticate DynamoDB. Configure the application to use the correct certificate to authenticate and read the DynamoDB table.
Answer: C
Explanation: This solution meets the requirements most securely because it uses IAM
roles and the STS AssumeRole API operation to authenticate and authorize the inventory
application to access the DynamoDB tables in different accounts. IAM roles are more
secure than IAM users or certificates because they do not require long-term credentials or
passwords. Instead, IAM roles provide temporary security credentials that are automatically
rotated and can be configured with a limited duration. The STS AssumeRole API operation
enables you to request temporary credentials for a role that you are allowed to assume. By
using this operation, you can delegate access to resources that are in different AWS
accounts that you own or that are owned by third parties. The trust policy of the role defines
which entities can assume the role, and the permissions policy of the role defines which
actions can be performed on the resources. By using this solution, you can avoid hardcoding
credentials or certificates in the inventory application, and you can also avoid
storing them in Secrets Manager or ACM. You can also leverage the built-in security
features of IAM and STS, such as MFA, access logging, and policy conditions.
References: IAM Roles
STS AssumeRole
Tutorial: Delegate Access Across AWS Accounts Using IAM Roles
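A minimal sketch of option C from the inventory application's side: assume BU_ROLE in a business account with STS, then read that team's table with the temporary credentials. The account ID and table name are placeholders.

```python
import boto3

sts = boto3.client("sts")

# Assume the cross-account role that each business account trusts APP_ROLE to use.
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/BU_ROLE",  # placeholder business account
    RoleSessionName="inventory-report",
)["Credentials"]

# Use the temporary credentials to read that team's DynamoDB table.
dynamodb = boto3.resource(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
items = dynamodb.Table("ProductInventory").scan()["Items"]  # placeholder table name
print(f"Read {len(items)} inventory items")
```

The temporary credentials expire automatically, so the application never stores long-term secrets for the business accounts.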
Question # 9
A company built an application with Docker containers and needs to run the application in the AWS Cloud. The company wants to use a managed service to host the application. The solution must scale in and out appropriately according to demand on the individual container services. The solution also must not result in additional operational overhead or infrastructure to manage.
Which solutions will meet these requirements? (Select TWO.)
A. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate.
B. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate.
C. Provision an Amazon API Gateway API. Connect the API to AWS Lambda to run the containers.
D. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes.
E. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker nodes.
Answer: A,B
Explanation: These options are the best solutions because they allow the company to run
the application with Docker containers in the AWS Cloud using a managed service that
scales automatically and does not require any infrastructure to manage. By using AWS
Fargate, the company can launch and run containers without having to provision, configure,
or scale clusters of EC2 instances. Fargate allocates the right amount of compute
resources for each container and scales them up or down as needed. By using Amazon
ECS or Amazon EKS, the company can choose the container orchestration platform that
suits its needs. Amazon ECS is a fully managed service that integrates with other AWS
services and simplifies the deployment and management of containers. Amazon EKS is a
managed service that runs Kubernetes on AWS and provides compatibility with existing
Kubernetes tools and plugins.
C. Provision an Amazon API Gateway API Connect the API to AWS Lambda to run the
containers. This option is not feasible because AWS Lambda does not support running
Docker containers directly. Lambda functions are executed in a sandboxed environment
that is isolated from other functions and resources. To run Docker containers on Lambda,
the company would need to use a custom runtime or a wrapper library that emulates the
Docker API, which can introduce additional complexity and overhead.
D. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes.
This option is not optimal because it requires the company to manage the EC2 instances
that host the containers. The company would need to provision, configure, scale, patch,
and monitor the EC2 instances, which can increase the operational overhead and
infrastructure costs.
E. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker
nodes. This option is not ideal because it requires the company to manage the EC2
instances that host the containers. The company would need to provision, configure, scale,
patch, and monitor the EC2 instances, which can increase the operational overhead and
infrastructure costs.
References:
1 AWS Fargate - Amazon Web Services
2 Amazon Elastic Container Service - Amazon Web Services
3 Amazon Elastic Kubernetes Service - Amazon Web Services
4 AWS Lambda FAQs - Amazon Web Services
Question # 10
A company uses Amazon S3 as its data lake. The company has a new partner that must use SFTP to upload data files. A solutions architect needs to implement a highly available SFTP solution that minimizes operational overhead.
Which solution will meet these requirements?
A. Use AWS Transfer Family to configure an SFTP-enabled server with a publicly accessible endpoint. Choose the S3 data lake as the destination.
B. Use Amazon S3 File Gateway as an SFTP server. Expose the S3 File Gateway endpoint URL to the new partner. Share the S3 File Gateway endpoint with the new partner.
C. Launch an Amazon EC2 instance in a private subnet in a VPC. Instruct the new partner to upload files to the EC2 instance by using a VPN. Run a cron job script on the EC2 instance to upload files to the S3 data lake.
D. Launch Amazon EC2 instances in a private subnet in a VPC. Place a Network Load Balancer (NLB) in front of the EC2 instances. Create an SFTP listener port for the NLB. Share the NLB hostname with the new partner. Run a cron job script on the EC2 instances to upload files to the S3 data lake.
Answer: A
Explanation: This option is the most cost-effective and simple way to enable SFTP access
to the S3 data lake. AWS Transfer Family is a fully managed service that supports secure
file transfers over SFTP, FTPS, and FTP protocols. You can create an SFTP-enabled
server with a public endpoint and associate it with your S3 bucket. You can also use AWS
Identity and Access Management (IAM) roles and policies to control access to your S3 data
lake. The service scales automatically to handle any volume of file transfers and provides
high availability and durability. You do not need to provision, manage, or patch any servers
or load balancers.
Option B is not correct because Amazon S3 File Gateway is not an SFTP server. It is a
hybrid cloud storage service that provides a local file system interface to S3. You can use it
to store and retrieve files as objects in S3 using standard file protocols such as NFS and
SMB. However, it does not support SFTP protocol, and it requires deploying a file gateway
appliance on-premises or on EC2.
Option C is not cost-effective or scalable because it requires launching and managing an
EC2 instance in a private subnet and setting up a VPN connection for the new partner. This
would incur additional costs for the EC2 instance, the VPN connection, and the data
transfer. It would also introduce complexity and security risks to the solution. Moreover, it
would require running a cron job script on the EC2 instance to upload files to the S3 data
lake, which is not efficient or reliable.
Option D is not cost-effective or scalable because it requires launching and managing
multiple EC2 instances in a private subnet and placing a NLB in front of them. This would
incur additional costs for the EC2 instances, the NLB, and the data transfer. It would also
introduce complexity and security risks to the solution. Moreover, it would require running a
cron job script on the EC2 instances to upload files to the S3 data lake, which is not
efficient or reliable. References:
What Is AWS Transfer Family?
What Is Amazon S3 File Gateway?
What Is Amazon EC2?
[What Is Amazon Virtual Private Cloud?]
[What Is a Network Load Balancer?]
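A minimal sketch of option A with boto3: an SFTP-enabled, publicly accessible AWS Transfer Family server backed by Amazon S3. The access role ARN, bucket path, and SSH key are placeholders.

```python
import boto3

transfer = boto3.client("transfer")

server = transfer.create_server(
    Domain="S3",                  # store transferred files as S3 objects
    Protocols=["SFTP"],
    EndpointType="PUBLIC",        # publicly accessible endpoint for the partner
    IdentityProviderType="SERVICE_MANAGED",
)
print("SFTP server:", server["ServerId"])

# A service-managed user scoped to the partner's prefix in the data lake bucket.
transfer.create_user(
    ServerId=server["ServerId"],
    UserName="partner-upload",
    Role="arn:aws:iam::123456789012:role/TransferS3AccessRole",  # placeholder
    HomeDirectory="/example-data-lake-bucket/partner",           # placeholder
    SshPublicKeyBody="ssh-rsa AAAAB3... partner-key",            # placeholder public key
)
```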
Question # 11
A company hosts an application used to upload files to an Amazon S3 bucket. Once uploaded, the files are processed to extract metadata, which takes less than 5 seconds. The volume and frequency of the uploads varies from a few files each hour to hundreds of concurrent uploads. The company has asked a solutions architect to design a cost-effective architecture that will meet these requirements.
What should the solutions architect recommend?
A. Configure AWS CloudTrail trails to log S3 API calls. Use AWS AppSync to process the files.
B. Configure an object-created event notification within the S3 bucket to invoke an AWS Lambda function to process the files.
C. Configure Amazon Kinesis Data Streams to process and send data to Amazon S3. Invoke an AWS Lambda function to process the files.
D. Configure an Amazon Simple Notification Service (Amazon SNS) topic to process the files uploaded to Amazon S3. Invoke an AWS Lambda function to process the files.
Answer: B
Explanation: This option is the most cost-effective and scalable way to process the files
uploaded to S3. AWS CloudTrail is used to log API calls, not to trigger actions based on
them. AWS AppSync is a service for building GraphQL APIs, not for processing files.
Amazon Kinesis Data Streams is used to ingest and process streaming data, not to send
data to S3. Amazon SNS is a pub/sub service that can be used to notify subscribers of
events, not to process files. References:
Using AWS Lambda with Amazon S3
AWS CloudTrail FAQs
What Is AWS AppSync?
[What Is Amazon Kinesis Data Streams?]
[What Is Amazon Simple Notification Service?]
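A minimal sketch of option B with boto3: wiring an object-created event notification on the bucket to a Lambda function. The bucket name and function ARN are placeholders, and the function must already grant s3.amazonaws.com permission to invoke it (via add_permission).

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="example-upload-bucket",  # placeholder
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "Id": "extract-metadata-on-upload",
                # Invoke the metadata-extraction function for every new object.
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:extract-metadata",  # placeholder
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)
```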
Question # 12
A company runs analytics software on Amazon EC2 instances. The software accepts job requests from users to process data that has been uploaded to Amazon S3. Users report that some submitted data is not being processed. Amazon CloudWatch reveals that the EC2 instances have a consistent CPU utilization at or near 100%. The company wants to improve system performance and scale the system based on user load.
What should a solutions architect do to meet these requirements?
A. Create a copy of the instance. Place all instances behind an Application Load Balancer.
B. Create an S3 VPC endpoint for Amazon S3. Update the software to reference the endpoint.
C. Stop the EC2 instances. Modify the instance type to one with a more powerful CPU and more memory. Restart the instances.
D. Route incoming requests to Amazon Simple Queue Service (Amazon SQS). Configure an EC2 Auto Scaling group based on queue size. Update the software to read from the queue.
Answer: D
Explanation: This option is the best solution because it allows the company to decouple
the analytics software from the user requests and scale the EC2 instances dynamically
based on the demand. By using Amazon SQS, the company can create a queue that
stores the user requests and acts as a buffer between the users and the analytics software.
This way, the software can process the requests at its own pace without losing any data or
overloading the EC2 instances. By using EC2 Auto Scaling, the company can create an
Auto Scaling group that launches or terminates EC2 instances automatically based on the
size of the queue. This way, the company can ensure that there are enough instances to
handle the load and optimize the cost and performance of the system. By updating the
software to read from the queue, the company can enable the analytics software to
consume the requests from the queue and process the data from Amazon S3.
A. Create a copy of the instance Place all instances behind an Application Load Balancer.
This option is not optimal because it does not address the root cause of the problem, which
is the high CPU utilization of the EC2 instances. An Application Load Balancer can
distribute the incoming traffic across multiple instances, but it cannot scale the instances
based on the load or reduce the processing time of the analytics software. Moreover, this
option can incur additional costs for the load balancer and the extra instances.
B. Create an S3 VPC endpoint for Amazon S3 Update the software to reference the
endpoint. This option is not effective because it does not solve the issue of the high CPU
utilization of the EC2 instances. An S3 VPC endpoint can enable the EC2 instances to
access Amazon S3 without going through the internet, which can improve the network
performance and security. However, it cannot reduce the processing time of the analytics
software or scale the instances based on the load.
C. Stop the EC2 instances. Modify the instance type to one with a more powerful CPU and
more memory. Restart the instances. This option is not scalable because it does not
account for the variability of the user load. Changing the instance type to a more powerful
one can improve the performance of the analytics software, but it cannot adjust the number
of instances based on the demand. Moreover, this option can increase the cost of the
system and cause downtime during the instance modification.
References:
1 Using Amazon SQS queues with Amazon EC2 Auto Scaling - Amazon EC2 Auto
Scaling
2 Tutorial: Set up a scaled and load-balanced application - Amazon EC2 Auto
Scaling
3 Amazon EC2 Auto Scaling FAQs
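A minimal sketch of the scaling piece of option D with boto3: a simple scaling policy on the Auto Scaling group, triggered by a CloudWatch alarm on the SQS queue depth. The group name, queue name, and threshold are placeholders; production setups often use a target-tracking policy on a backlog-per-instance metric instead.

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Scale out by 2 instances whenever the alarm fires.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="analytics-workers",  # placeholder
    PolicyName="scale-out-on-queue-depth",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=2,
    Cooldown=300,
)

# Alarm on the number of visible messages in the job queue.
cloudwatch.put_metric_alarm(
    AlarmName="analytics-queue-backlog",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "analytics-jobs"}],  # placeholder
    Statistic="Average",
    Period=60,
    EvaluationPeriods=2,
    Threshold=100,  # placeholder backlog threshold
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```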
Question # 13
A company is deploying an application that processes streaming data in near-real time. The company plans to use Amazon EC2 instances for the workload. The network architecture must be configurable to provide the lowest possible latency between nodes.
Which combination of network solutions will meet these requirements? (Select TWO.)
A. Enable and configure enhanced networking on each EC2 instance.
B. Group the EC2 instances in separate accounts.
C. Run the EC2 instances in a cluster placement group.
D. Attach multiple elastic network interfaces to each EC2 instance.
E. Use Amazon Elastic Block Store (Amazon EBS) optimized instance types.
Answer: A,C
Explanation: These options are the most suitable ways to configure the network
architecture to provide the lowest possible latency between nodes. Option A enables and
configures enhanced networking on each EC2 instance, which is a feature that improves
the network performance of the instance by providing higher bandwidth, lower latency, and
lower jitter. Enhanced networking uses single root I/O virtualization (SR-IOV) or Elastic
Fabric Adapter (EFA) to provide direct access to the network hardware. You can enable
and configure enhanced networking by choosing a supported instance type and a
compatible operating system, and installing the required drivers. Option C runs the EC2
instances in a cluster placement group, which is a logical grouping of instances within a
single Availability Zone that are placed close together on the same underlying hardware.
Cluster placement groups provide the lowest network latency and the highest network
throughput among the placement group options. You can run the EC2 instances in a
cluster placement group by creating a placement group and launching the instances into it.
Option B is not suitable because grouping the EC2 instances in separate accounts does
not provide the lowest possible latency between nodes. Separate accounts are used to
isolate and organize resources for different purposes, such as security, billing, or
compliance. However, they do not affect the network performance or proximity of the
instances. Moreover, grouping the EC2 instances in separate accounts would incur
additional costs and complexity, and it would require setting up cross-account networking
and permissions.
Option D is not suitable because attaching multiple elastic network interfaces to each EC2
instance does not provide the lowest possible latency between nodes. Elastic network
interfaces are virtual network interfaces that can be attached to EC2 instances to provide
additional network capabilities, such as multiple IP addresses, multiple subnets, or
enhanced security. However, they do not affect the network performance or proximity of the
instances. Moreover, attaching multiple elastic network interfaces to each EC2 instance
would consume additional resources and limit the instance type choices. Option E is not suitable because using Amazon EBS optimized instance types does not
provide the lowest possible latency between nodes. Amazon EBS optimized instance types
are instances that provide dedicated bandwidth for Amazon EBS volumes, which are block
storage volumes that can be attached to EC2 instances. EBS optimized instance types
improve the performance and consistency of the EBS volumes, but they do not affect the
network performance or proximity of the instances. Moreover, using EBS optimized
instance types would incur additional costs and may not be necessary for the streaming
data workload. References:
Enhanced networking on Linux
Placement groups
Elastic network interfaces
Amazon EBS-optimized instances
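A minimal sketch of options A and C with boto3: create a cluster placement group and launch instances into it. The AMI ID is a placeholder; an ENA-capable instance type such as c5n.large on a current Amazon Linux AMI already ships the enhanced networking drivers, so no driver installation is shown.

```python
import boto3

ec2 = boto3.client("ec2")

# Cluster placement groups pack instances close together for the lowest latency.
ec2.create_placement_group(GroupName="streaming-cluster", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI with ENA drivers built in
    InstanceType="c5n.large",         # ENA-capable instance type
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "streaming-cluster"},
)

# Verify that enhanced networking (ENA) is enabled on one of the instances.
reservations = ec2.describe_instances(
    Filters=[{"Name": "placement-group-name", "Values": ["streaming-cluster"]}]
)["Reservations"]
instance_id = reservations[0]["Instances"][0]["InstanceId"]
attr = ec2.describe_instance_attribute(InstanceId=instance_id, Attribute="enaSupport")
print("ENA enabled:", attr["EnaSupport"]["Value"])
```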
Question # 14
A company runs a container application on a Kubernetes cluster in the company's data center. The application uses Advanced Message Queuing Protocol (AMQP) to communicate with a message queue. The data center cannot scale fast enough to meet the company's expanding business needs. The company wants to migrate the workloads to AWS.
Which solution will meet these requirements with the LEAST operational overhead?
A. Migrate the container application to Amazon Elastic Container Service (Amazon ECS). Use Amazon Simple Queue Service (Amazon SQS) to retrieve the messages.
B. Migrate the container application to Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon MQ to retrieve the messages.
C. Use highly available Amazon EC2 instances to run the application. Use Amazon MQ to retrieve the messages.
D. Use AWS Lambda functions to run the application. Use Amazon Simple Queue Service (Amazon SQS) to retrieve the messages.
Answer: B
Explanation: This option is the best solution because it allows the company to migrate the
container application to AWS with minimal changes and leverage a managed service to run
the Kubernetes cluster and the message queue. By using Amazon EKS, the company can
run the container application on a fully managed Kubernetes control plane that is
compatible with the existing Kubernetes tools and plugins. Amazon EKS handles the
provisioning, scaling, patching, and security of the Kubernetes cluster, reducing the
operational overhead and complexity. By using Amazon MQ, the company can use a fully
managed message broker service that supports AMQP and other popular messaging
protocols. Amazon MQ handles the administration, maintenance, and scaling of the
message broker, ensuring high availability, durability, and security of the messages.
A. Migrate the container application to Amazon Elastic Container Service (Amazon ECS)
Use Amazon Simple Queue Service (Amazon SQS) to retrieve the messages. This option
is not optimal because it requires the company to change the container orchestration
platform from Kubernetes to ECS, which can introduce additional complexity and risk.
Moreover, it requires the company to change the messaging protocol from AMQP to SQS,
which can also affect the application logic and performance. Amazon ECS and Amazon
SQS are both fully managed services that simplify the deployment and management of
containers and messages, but they may not be compatible with the existing application
architecture and requirements.
C. Use highly available Amazon EC2 instances to run the application Use Amazon MQ to
retrieve the messages. This option is not ideal because it requires the company to manage
the EC2 instances that host the container application. The company would need to
provision, configure, scale, patch, and monitor the EC2 instances, which can increase the
operational overhead and infrastructure costs. Moreover, the company would need to
install and maintain the Kubernetes software on the EC2 instances, which can also add
complexity and risk. Amazon MQ is a fully managed message broker service that supports
AMQP and other popular messaging protocols, but it cannot compensate for the lack of a
managed Kubernetes service.
D. Use AWS Lambda functions to run the application Use Amazon Simple Queue Service
(Amazon SQS) to retrieve the messages. This option is not feasible because AWS Lambda
does not support running container applications directly. Lambda functions are executed in
a sandboxed environment that is isolated from other functions and resources. To run container applications on Lambda, the company would need to use a custom runtime or a
wrapper library that emulates the container API, which can introduce additional complexity
and overhead. Moreover, Lambda functions have limitations in terms of available CPU,
memory, and runtime, which may not suit the application needs. Amazon SQS is a fully
managed message queue service that supports asynchronous communication, but it does
not support AMQP or other messaging protocols.
References:
1 Amazon Elastic Kubernetes Service - Amazon Web Services
2 Amazon MQ - Amazon Web Services
3 Amazon Elastic Container Service - Amazon Web Services
4 AWS Lambda FAQs - Amazon Web Services
Question # 15
A company runs a real-time data ingestion solution on AWS. The solution consists of the most recent version of Amazon Managed Streaming for Apache Kafka (Amazon MSK). The solution is deployed in a VPC in private subnets across three Availability Zones.
A solutions architect needs to redesign the data ingestion solution to be publicly available over the internet. The data in transit must also be encrypted.
Which solution will meet these requirements with the MOST operational efficiency?
A. Configure public subnets in the existing VPC. Deploy an MSK cluster in the public subnets. Update the MSK cluster security settings to enable mutual TLS authentication.
B. Create a new VPC that has public subnets. Deploy an MSK cluster in the public subnets. Update the MSK cluster security settings to enable mutual TLS authentication.
C. Deploy an Application Load Balancer (ALB) that uses private subnets. Configure an ALB security group inbound rule to allow inbound traffic from the VPC CIDR block for HTTPS protocol.
D. Deploy a Network Load Balancer (NLB) that uses private subnets. Configure an NLB listener for HTTPS communication over the internet.
Answer: A
Explanation: The solution that meets the requirements with the most operational efficiency
is to configure public subnets in the existing VPC and deploy an MSK cluster in the public
subnets. This solution allows the data ingestion solution to be publicly available over the
internet without creating a new VPC or deploying a load balancer. The solution also
ensures that the data in transit is encrypted by enabling mutual TLS authentication, which
requires both the client and the server to present certificates for verification. This solution
leverages the public access feature of Amazon MSK, which is available for clusters running
Apache Kafka 2.6.0 or later versions1.
The other solutions are not as efficient as the first one because they either create
unnecessary resources or do not encrypt the data in transit. Creating a new VPC with
public subnets would incur additional costs and complexity for managing network resources
and routing. Deploying an ALB or an NLB would also add more costs and latency for the
data ingestion solution. Moreover, an ALB or an NLB would not encrypt the data in transit
by itself, unless they are configured with HTTPS listeners and certificates, which would
require additional steps and maintenance. Therefore, these solutions are not optimal for the
given requirements.
References:
Public access - Amazon Managed Streaming for Apache Kafka
Question # 16
A company runs a Java-based job on an Amazon EC2 instance. The job runs every hour and takes 10 seconds to run. The job runs on a scheduled interval and consumes 1 GB of memory. The CPU utilization of the instance is low except for short surges during which the job uses the maximum CPU available. The company wants to optimize the costs to run the job.
Which solution will meet these requirements?
A. Use AWS App2Container (A2C) to containerize the job. Run the job as an Amazon Elastic Container Service (Amazon ECS) task on AWS Fargate with 0.5 virtual CPU (vCPU) and 1 GB of memory.
B. Copy the code into an AWS Lambda function that has 1 GB of memory. Create an Amazon EventBridge scheduled rule to run the code each hour.
C. Use AWS App2Container (A2C) to containerize the job. Install the container in the existing Amazon Machine Image (AMI). Ensure that the schedule stops the container when the task finishes.
D. Configure the existing schedule to stop the EC2 instance at the completion of the job and restart the EC2 instance when the next job starts.
Answer: B
Explanation: AWS Lambda is a serverless compute service that allows you to run code
without provisioning or managing servers. You can create Lambda functions using various
languages, including Java, and specify the amount of memory and CPU allocated to your function. Lambda charges you only for the compute time you consume, which is calculated
based on the number of requests and the duration of your code execution. You can use
Amazon EventBridge to trigger your Lambda function on a schedule, such as every hour,
using cron or rate expressions. This solution will optimize the costs to run the job, as you
will not pay for any idle time or unused resources, unlike running the job on an EC2
instance.
References:
AWS Lambda - FAQs (General Information section)
Tutorial: Schedule AWS Lambda functions using EventBridge (Introduction section)
Schedule expressions using rate or cron - AWS Lambda (Introduction section)
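A minimal sketch of the scheduling side of option B with boto3: an hourly EventBridge rule that invokes the Lambda function. The function ARN is a placeholder.

```python
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

FUNCTION_ARN = "arn:aws:lambda:us-east-1:123456789012:function:hourly-java-job"  # placeholder

# Run the job every hour.
rule_arn = events.put_rule(Name="hourly-job", ScheduleExpression="rate(1 hour)")["RuleArn"]

# Allow EventBridge to invoke the function, then attach it as the rule's target.
lambda_client.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId="allow-eventbridge-hourly-job",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule_arn,
)
events.put_targets(Rule="hourly-job", Targets=[{"Id": "job", "Arn": FUNCTION_ARN}])
```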
Question # 17
An ecommerce company runs applications in AWS accounts that are part of an organization in AWS Organizations. The applications run on Amazon Aurora PostgreSQL databases across all the accounts. The company needs to prevent malicious activity and must identify abnormal, failed, and incomplete login attempts to the databases.
Which solution will meet these requirements in the MOST operationally efficient way?
A. Attach service control policies (SCPs) to the root of the organization to identify the failed login attempts.
B. Enable the Amazon RDS Protection feature in Amazon GuardDuty for the member accounts of the organization.
C. Publish the Aurora general logs to a log group in Amazon CloudWatch Logs. Export the log data to a central Amazon S3 bucket.
D. Publish all the Aurora PostgreSQL database events in AWS CloudTrail to a central Amazon S3 bucket.
Answer: C
Explanation: This option is the most operationally efficient way to meet the requirements
because it allows the company to monitor and analyze the database login activity across all
the accounts in the organization. By publishing the Aurora general logs to a log group in
Amazon CloudWatch Logs, the company can enable the logging of the database
connections, disconnections, and failed authentication attempts. By exporting the log data
to a central Amazon S3 bucket, the company can store the log data in a durable and cost-effective
way and use other AWS services or tools to perform further analysis or alerting on
the log data. For example, the company can use Amazon Athena to query the log data in
Amazon S3, or use Amazon SNS to send notifications based on the log data.
A. Attach service control policies (SCPs) to the root of the organization to identify the failed
login attempts. This option is not effective because SCPs are not designed to identify the
failed login attempts, but to restrict the actions that the users and roles can perform in the
member accounts of the organization. SCPs are applied to the AWS API calls, not to the
database login attempts. Moreover, SCPs do not provide any logging or analysis
capabilities for the database activity.
B. Enable the Amazon RDS Protection feature in Amazon GuardDuty for the member
accounts of the organization. This option is not optimal because the Amazon RDS
Protection feature in Amazon GuardDuty is not available for Aurora PostgreSQL
databases, but only for Amazon RDS for MySQL and Amazon RDS for MariaDB databases. Moreover, the Amazon RDS Protection feature does not monitor the database
login attempts, but the network and API activity related to the RDS instances.
D. Publish all the Aurora PostgreSQL database events in AWS CloudTrail to a central
Amazon S3 bucket. This option is not sufficient because AWS CloudTrail does not capture
the database login attempts, but only the AWS API calls made by or on behalf of the
Aurora PostgreSQL database. For example, AWS CloudTrail can record the events such
as creating, modifying, or deleting the database instances, clusters, or snapshots, but not
the events such as connecting, disconnecting, or failing to authenticate to the database.
References:
1 Working with Amazon Aurora PostgreSQL - Amazon Aurora
2 Working with log groups and log streams - Amazon CloudWatch Logs
3 Exporting Log Data to Amazon S3 - Amazon CloudWatch Logs
[4] Amazon GuardDuty FAQs
[5] Logging Amazon RDS API Calls with AWS CloudTrail - Amazon Relational
Database Service
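A minimal sketch of the export step in option C with boto3, run after the Aurora PostgreSQL logs have been published to CloudWatch Logs. The log group name, bucket, and time window are placeholders, and the destination bucket needs a policy that allows logs.amazonaws.com to write to it.

```python
import time

import boto3

logs = boto3.client("logs")

now_ms = int(time.time() * 1000)

# Export the last 24 hours of Aurora PostgreSQL logs to the central audit bucket.
task = logs.create_export_task(
    taskName="aurora-login-audit-export",
    logGroupName="/aws/rds/cluster/example-aurora/postgresql",  # placeholder log group
    fromTime=now_ms - 24 * 60 * 60 * 1000,
    to=now_ms,
    destination="example-central-audit-bucket",                 # placeholder S3 bucket
    destinationPrefix="aurora-logs",
)
print("Export task:", task["taskId"])
```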
Question # 18
A company needs to provide customers with secure access to its data. The company processes customer data and stores the results in an Amazon S3 bucket.
All the data is subject to strong regulations and security requirements. The data must be encrypted at rest. Each customer must be able to access only their data from their AWS account. Company employees must not be able to access the data.
Which solution will meet these requirements?
A. Provision an AWS Certificate Manager (ACM) certificate for each customer. Encrypt the data client-side. In the private certificate policy, deny access to the certificate for all principals except an IAM role that the customer provides.
B. Provision a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt the data server-side. In the S3 bucket policy, deny decryption of data for all principals except an IAM role that the customer provides.
C. Provision a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt the data server-side. In each KMS key policy, deny decryption of data for all principals except an IAM role that the customer provides.
D. Provision an AWS Certificate Manager (ACM) certificate for each customer. Encrypt the data client-side. In the public certificate policy, deny access to the certificate for all principals except an IAM role that the customer provides.
Answer: C
Explanation: The correct solution is to provision a separate AWS KMS key for each
customer and encrypt the data server-side. This way, the company can use the S3
encryption feature to protect the data at rest and delegate the control of the encryption keys
to the customers. The customers can then use their own IAM roles to access and decrypt
their data. The company employees will not be able to access the data because they are
not authorized by the KMS key policies. The other options are incorrect because:
Option A and D are using ACM certificates to encrypt the data client-side. This is
not a recommended practice for S3 encryption because it adds complexity and
overhead to the encryption process. Moreover, the company will have to manage
the certificates and their policies for each customer, which is not scalable and
secure.
Option B is using a separate KMS key for each customer, but it is using the S3
bucket policy to control the decryption access. This is not a secure solution
because the bucket policy applies to the entire bucket, not to individual objects.
Therefore, the customers will be able to access and decrypt each other’s data if
they have the permission to list the bucket contents. The bucket policy also
overrides the KMS key policy, which means the company employees can access
the data if they have the permission to use the KMS key.
References:
S3 encryption
KMS key policies
ACM certificates
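A minimal sketch of option C with boto3: a per-customer KMS key whose key policy lets only the customer-provided IAM role use the key for data operations, while key administration stays with the data-owning account. Account IDs and the role ARN are placeholders; S3 objects would then be uploaded with SSE-KMS pointing at this key.

```python
import json

import boto3

kms = boto3.client("kms")

CUSTOMER_ROLE_ARN = "arn:aws:iam::111122223333:role/CustomerDataAccess"  # placeholder

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Key administration stays with the data-owning account.
            "Sid": "AllowKeyAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:root"},  # placeholder company account
            "Action": "kms:*",
            "Resource": "*",
        },
        {   # Only the customer's role may use the key to decrypt their objects.
            "Sid": "AllowCustomerUse",
            "Effect": "Allow",
            "Principal": {"AWS": CUSTOMER_ROLE_ARN},
            "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
            "Resource": "*",
        },
    ],
}

key = kms.create_key(
    Description="Per-customer key for S3 server-side encryption",
    Policy=json.dumps(key_policy),
)
print("Customer key ARN:", key["KeyMetadata"]["Arn"])
```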
Question # 19
A company has a nightly batch processing routine that analyzes report files that an on-premises file system receives daily through SFTP. The company wants to move the solution to the AWS Cloud. The solution must be highly available and resilient. The solution also must minimize operational effort.
Which solution meets these requirements?
A. Deploy AWS Transfer for SFTP and an Amazon Elastic File System (Amazon EFS) file system for storage. Use an Amazon EC2 instance in an Auto Scaling group with a scheduled scaling policy to run the batch operation.
B. Deploy an Amazon EC2 instance that runs Linux and an SFTP service. Use an Amazon Elastic Block Store (Amazon EBS) volume for storage. Use an Auto Scaling group with the minimum number of instances and desired number of instances set to 1.
C. Deploy an Amazon EC2 instance that runs Linux and an SFTP service. Use an Amazon Elastic File System (Amazon EFS) file system for storage. Use an Auto Scaling group with the minimum number of instances and desired number of instances set to 1.
D. Deploy AWS Transfer for SFTP and an Amazon S3 bucket for storage. Modify the application to pull the batch files from Amazon S3 to an Amazon EC2 instance for processing. Use an EC2 instance in an Auto Scaling group with a scheduled scaling policy to run the batch operation.
Answer: D
Explanation: AWS Transfer for SFTP (part of the AWS Transfer Family) provides a fully
managed SFTP endpoint that stores the received report files directly in Amazon S3, so the
company no longer has to run or patch an SFTP server. The managed endpoint is highly
available, and Amazon S3 offers durable, highly available storage for the files. Running the
nightly batch job on an EC2 instance in an Auto Scaling group with a scheduled scaling
policy launches compute only during the processing window, which keeps operational
effort low. Options B and C require the company to operate its own SFTP server on a
single EC2 instance, which is neither highly available nor low-maintenance, and the EBS
volume in option B is tied to a single Availability Zone. Option A keeps the files on Amazon
EFS, which requires managing file system mounts on the processing instances, whereas
option D uses Amazon S3 as a simpler, more resilient storage target for batch files.
References:
AWS Transfer Family
Amazon S3
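As a rough illustration of option D, the sketch below creates an AWS Transfer Family SFTP endpoint that stores uploaded files directly in Amazon S3 and adds a service-managed user. The role ARN, bucket name, and user name are placeholders; the scheduled Auto Scaling policy for the batch instances is omitted.

import boto3

transfer = boto3.client("transfer")

# Create a fully managed SFTP endpoint backed by Amazon S3
server = transfer.create_server(
    Domain="S3",                              # store files as S3 objects
    Protocols=["SFTP"],
    IdentityProviderType="SERVICE_MANAGED",   # users managed by the Transfer Family service
)
print("Server ID:", server["ServerId"])

# Add an SFTP user whose home directory maps to the reports bucket (placeholder values)
transfer.create_user(
    ServerId=server["ServerId"],
    UserName="report-uploader",
    Role="arn:aws:iam::111122223333:role/TransferS3AccessRole",  # role that grants access to the bucket
    HomeDirectory="/example-report-bucket/incoming",
)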
Question # 20
A company uses high concurrency AWS Lambda functions to process a constantly increasing number of messages in a message queue during marketing events. The Lambda functions use CPU intensive code to process the messages. The company wants to reduce the compute costs and to maintain service latency for its customers. Which solution will meet these requirements?
A. Configure reserved concurrency for the Lambda functions. Decrease the memory allocated to the Lambda functions.
B. Configure reserved concurrency for the Lambda functions. Increase the memory according to AWS Compute Optimizer recommendations.
C. Configure provisioned concurrency for the Lambda functions. Decrease the memory allocated to the Lambda functions.
D. Configure provisioned concurrency for the Lambda functions. Increase the memory according to AWS Compute Optimizer recommendations.
Answer: D
Explanation: The company wants to reduce the compute costs and maintain service
latency for its Lambda functions that process a constantly increasing number of messages
in a message queue. The Lambda functions use CPU intensive code to process the
messages. To meet these requirements, a solutions architect should recommend the
following solution:
Configure provisioned concurrency for the Lambda functions. Provisioned
concurrency is the number of pre-initialized execution environments that are
allocated to the Lambda functions. These execution environments are prepared to
respond immediately to incoming function requests, reducing the cold start latency.
Configuring provisioned concurrency also helps to avoid throttling errors due to
reaching the concurrency limit of the Lambda service.
Increase the memory according to AWS Compute Optimizer recommendations.
AWS Compute Optimizer is a service that provides recommendations for optimal
AWS resource configurations based on your utilization data. By increasing the
memory allocated to the Lambda functions, you can also increase the CPU power
and improve the performance of your CPU intensive code. AWS Compute
Optimizer can help you find the optimal memory size for your Lambda functions
based on your workload characteristics and performance goals.
This solution will reduce the compute costs by avoiding unnecessary over-provisioning of
memory and CPU resources, and maintain service latency by using provisioned
concurrency and optimal memory size for the Lambda functions.
References:
Provisioned Concurrency
AWS Compute Optimizer
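For illustration, the sketch below applies both parts of option D with boto3: it raises the function's memory (CPU scales with memory) and configures provisioned concurrency on a published alias. The function name, alias, and numeric values are placeholders and would normally come from AWS Compute Optimizer recommendations and traffic forecasts.

import boto3

lambda_client = boto3.client("lambda")

FUNCTION_NAME = "process-queue-messages"   # placeholder function name
ALIAS = "live"                             # provisioned concurrency applies to a version or alias

# Increase memory (and therefore CPU) per the Compute Optimizer recommendation
lambda_client.update_function_configuration(
    FunctionName=FUNCTION_NAME,
    MemorySize=2048,   # example recommended value in MB
)

# Keep execution environments pre-initialized to avoid cold starts during marketing events
lambda_client.put_provisioned_concurrency_config(
    FunctionName=FUNCTION_NAME,
    Qualifier=ALIAS,
    ProvisionedConcurrentExecutions=100,   # example value sized for peak traffic
)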
Question # 21
A company runs applications on AWS that connect to the company's Amazon RDS database. The applications scale on weekends and at peak times of the year. The company wants to scale the database more effectively for its applications that connect to the database. Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon DynamoDB with connection pooling with a target group configuration for the database. Change the applications to use the DynamoDB endpoint.
B. Use Amazon RDS Proxy with a target group for the database. Change the applications to use the RDS Proxy endpoint.
C. Use a custom proxy that runs on Amazon EC2 as an intermediary to the database. Change the applications to use the custom proxy endpoint.
D. Use an AWS Lambda function to provide connection pooling with a target group configuration for the database. Change the applications to use the Lambda function.
Answer: B
Explanation:
Amazon RDS Proxy is a fully managed, highly available database proxy for Amazon
Relational Database Service (RDS) that makes applications more scalable, more resilient
to database failures, and more secure1. RDS Proxy allows applications to pool and share
connections established with the database, improving database efficiency and application
scalability2. RDS Proxy also reduces failover times for Aurora and RDS databases by up to
66% and enables IAM authentication and Secrets Manager integration for database
access1. RDS Proxy can be enabled for most applications with no code changes2.
Question # 22
A company wants to run its payment application on AWS. The application receives payment notifications from mobile devices. Payment notifications require a basic validation before they are sent for further processing. The backend processing application is long running and requires compute and memory to be adjusted. The company does not want to manage the infrastructure. Which solution will meet these requirements with the LEAST operational overhead?
A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Integrate the queue with an Amazon EventBridge rule to receive payment notifications from mobile devices. Configure the rule to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon Elastic Kubernetes Service (Amazon EKS) Anywhere. Create a standalone cluster.
B. Create an Amazon API Gateway API. Integrate the API with an AWS Step Functions state machine to receive payment notifications from mobile devices. Invoke the state machine to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon Elastic Kubernetes Service (Amazon EKS). Configure an EKS cluster with self-managed nodes.
C. Create an Amazon Simple Queue Service (Amazon SQS) queue. Integrate the queue with an Amazon EventBridge rule to receive payment notifications from mobile devices. Configure the rule to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon EC2 Spot Instances. Configure a Spot Fleet with a default allocation strategy.
D. Create an Amazon API Gateway API. Integrate the API with AWS Lambda to receive payment notifications from mobile devices. Invoke a Lambda function to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon Elastic Container Service (Amazon ECS). Configure Amazon ECS with an AWS Fargate launch type.
Answer: D
Explanation:
This option is the best solution because it allows the company to run its payment
application on AWS with minimal operational overhead and infrastructure management. By
using Amazon API Gateway, the company can create a secure and scalable API to receive
payment notifications from mobile devices. By using AWS Lambda, the company can run a
serverless function to validate the payment notifications and send them to the backend
application. Lambda handles the provisioning, scaling, and security of the function,
reducing the operational complexity and cost. By using Amazon ECS with AWS Fargate,
the company can run the backend application on a fully managed container service that
scales the compute resources automatically and does not require any EC2 instances to
manage. Fargate allocates the right amount of CPU and memory for each container and
adjusts them as needed.
A. Create an Amazon Simple Queue Service (Amazon SQS) queue Integrate the queue
with an Amazon EventBridge rule to receive payment notifications from mobile devices
Configure the rule to validate payment notifications and send the notifications to the
backend application Deploy the backend application on Amazon Elastic Kubernetes
Service (Amazon EKS) Anywhere Create a standalone cluster. This option is not optimal
because it requires the company to manage the Kubernetes cluster that runs the backend
application. Amazon EKS Anywhere is a deployment option that allows the company to
create and operate Kubernetes clusters on-premises or in other environments outside
AWS. The company would need to provision, configure, scale, patch, and monitor the
cluster nodes, which can increase the operational overhead and complexity. Moreover, the
company would need to ensure the connectivity and security between the AWS services
and the EKS Anywhere cluster, which can also add challenges and risks.
B. Create an Amazon API Gateway API Integrate the API with an AWS Step Functions
state machine to receive payment notifications from mobile devices Invoke the state
machine to validate payment notifications and send the notifications to the backend
application Deploy the backend application on Amazon Elastic Kubernetes Service
(Amazon EKS). Configure an EKS cluster with self-managed nodes. This option is not ideal
because it requires the company to manage the EC2 instances that host the Kubernetes
cluster that runs the backend application. Amazon EKS is a fully managed service that runs
Kubernetes on AWS, but it still requires the company to manage the worker nodes that run
the containers. The company would need to provision, configure, scale, patch, and monitor
the EC2 instances, which can increase the operational overhead and infrastructure costs.
Moreover, using AWS Step Functions to validate the payment notifications may be
unnecessary and complex, as the validation logic can be implemented in a simpler way
with Lambda or other services.
C. Create an Amazon Simple Queue Service (Amazon SQS) queue Integrate the queue
with an Amazon EventBridge rule to receive payment notifications from mobile devices
Configure the rule to validate payment notifications and send the notifications to the
backend application Deploy the backend application on Amazon EC2 Spot Instances
Configure a Spot Fleet with a default allocation strategy. This option is not cost-effective
because it requires the company to manage the EC2 instances that run the backend
application. The company would need to provision, configure, scale, patch, and monitor the
EC2 instances, which can increase the operational overhead and infrastructure costs.
Moreover, using Spot Instances can introduce the risk of interruptions, as Spot Instances
are reclaimed by AWS when the demand for On-Demand Instances increases. The
company would need to handle the interruptions gracefully and ensure the availability and
reliability of the backend application.
References:
1 Amazon API Gateway - Amazon Web Services
2 AWS Lambda - Amazon Web Services
3 Amazon Elastic Container Service - Amazon Web Services
4 AWS Fargate - Amazon Web Services
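A minimal sketch of the validation step in option D is shown below: a Lambda function invoked by API Gateway (proxy integration) performs a basic check on the payment notification and, if valid, forwards it to the backend running on ECS with Fargate. The backend URL and the required fields are assumptions made only for illustration.

import json
import urllib.request

BACKEND_URL = "https://backend.example.internal/payments"    # placeholder backend endpoint
REQUIRED_FIELDS = {"payment_id", "amount", "currency"}        # assumed minimal schema

def handler(event, context):
    """Validate a payment notification sent through API Gateway and forward it."""
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": "Malformed JSON"}

    if not REQUIRED_FIELDS.issubset(body):
        return {"statusCode": 422, "body": "Missing required payment fields"}

    # Forward the validated notification to the long-running backend on ECS/Fargate
    request = urllib.request.Request(
        BACKEND_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return {"statusCode": 202, "body": json.dumps({"forwarded": response.status})}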
Question # 23
A company has multiple AWS accounts with applications deployed in the us-west-2 Region. Application logs are stored within Amazon S3 buckets in each account. The company wants to build a centralized log analysis solution that uses a single S3 bucket. Logs must not leave us-west-2, and the company wants to incur minimal operational overhead. Which solution meets these requirements and is MOST cost-effective?
A. Create an S3 Lifecycle policy that copies the objects from one of the application S3 buckets to the centralized S3 bucket.
B. Use S3 Same-Region Replication to replicate logs from the S3 buckets to another S3 bucket in us-west-2. Use this S3 bucket for log analysis.
C. Write a script that uses the PutObject API operation every day to copy the entire contents of the buckets to another S3 bucket in us-west-2. Use this S3 bucket for log analysis.
D. Write AWS Lambda functions in these accounts that are triggered every time logs are delivered to the S3 buckets (s3:ObjectCreated:* event). Copy the logs to another S3 bucket in us-west-2. Use this S3 bucket for log analysis.
Answer: B
Explanation: This solution meets the following requirements:
It is cost-effective, as it only charges for the storage and data transfer of the
replicated objects, and does not require any additional AWS services or custom
scripts. S3 Same-Region Replication (SRR) is a feature that automatically
replicates objects across S3 buckets within the same AWS Region. SRR can help
you aggregate logs from multiple sources to a single destination for analysis and
auditing. SRR also preserves the metadata, encryption, and access control of the
source objects.
It is operationally efficient, as it does not require any manual intervention or
scheduling. SRR replicates objects as soon as they are uploaded to the source
bucket, ensuring that the destination bucket always has the latest log data. SRR
also handles any updates or deletions of the source objects, keeping the
destination bucket in sync. SRR can be enabled with a few clicks in the S3 console
or with a simple API call.
It is secure, as it does not allow the logs to leave the us-west-2 Region. SRR only
replicates objects within the same AWS Region, ensuring that the data sovereignty
and compliance requirements are met. SRR also supports encryption of the source
and destination objects, using either server-side encryption with AWS KMS or S3-
managed keys, or client-side encryption.
References:
Same-Region Replication - Amazon Simple Storage Service
How do I replicate objects across S3 buckets in the same AWS Region?
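For reference, a replication rule like the one described in option B can be configured on each source bucket with a call similar to the sketch below. The bucket names and replication role ARN are placeholders, versioning must already be enabled on both buckets, and the IAM role must allow S3 to read from the source and write to the destination.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="app-account-logs-source",   # placeholder source bucket in us-west-2
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",  # placeholder role
        "Rules": [
            {
                "ID": "replicate-logs-to-central-bucket",
                "Priority": 1,
                "Filter": {"Prefix": ""},                    # replicate every object
                "Status": "Enabled",
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::central-log-analysis-bucket"  # same-Region destination
                },
            }
        ],
    },
)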
Question # 24
A company runs a highly available web application on Amazon EC2 instances behind an Application Load Balancer. The company uses Amazon CloudWatch metrics. As the traffic to the web application increases, some EC2 instances become overloaded with many outstanding requests. The CloudWatch metrics show that the number of requests processed and the time to receive the responses from some EC2 instances are both higher compared to other EC2 instances. The company does not want new requests to be forwarded to the EC2 instances that are already overloaded. Which solution will meet these requirements?
A. Use the round robin routing algorithm based on the RequestCountPerTarget and ActiveConnectionCount CloudWatch metrics.
B. Use the least outstanding requests algorithm based on the RequestCountPerTarget and ActiveConnectionCount CloudWatch metrics.
C. Use the round robin routing algorithm based on the RequestCount and TargetResponseTime CloudWatch metrics.
D. Use the least outstanding requests algorithm based on the RequestCount and TargetResponseTime CloudWatch metrics.
Answer: D
Explanation: The least outstanding requests (LOR) algorithm is a load balancing algorithm
that distributes incoming requests to the target with the fewest outstanding requests. This
helps to avoid overloading any single target and improves the overall performance and
availability of the web application. The LOR algorithm can use the RequestCount and
TargetResponseTime CloudWatch metrics to determine the number of outstanding
requests and the response time of each target. These metrics measure the number of
requests processed by each target and the time elapsed after the request leaves the load
balancer until a response from the target is received by the load balancer, respectively. By
using these metrics, the LOR algorithm can route new requests to the targets that are less
busy and more responsive, and avoid sending requests to the targets that are already
overloaded or slow. This solution meets the requirements of the company.
References:
Application Load Balancer now supports Least Outstanding Requests algorithm for
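The routing algorithm is a target group attribute, so switching an existing ALB target group from round robin to least outstanding requests can look roughly like the sketch below; the target group ARN is a placeholder.

import boto3

elbv2 = boto3.client("elbv2")

elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/abc123",  # placeholder
    Attributes=[
        {
            # Route each new request to the registered target with the fewest in-flight requests
            "Key": "load_balancing.algorithm.type",
            "Value": "least_outstanding_requests",
        }
    ],
)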
Question # 25
An analytics company uses Amazon VPC to run its multi-tier services. The company wants to use RESTful APIs to offer a web analytics service to millions of users. Users must be verified by using an authentication service to access the APIs. Which solution will meet these requirements with the MOST operational efficiency?
A. Configure an Amazon Cognito user pool for user authentication. Implement Amazon API Gateway REST APIs with a Cognito authorizer.
B. Configure an Amazon Cognito identity pool for user authentication. Implement Amazon API Gateway HTTP APIs with a Cognito authorizer.
C. Configure an AWS Lambda function to handle user authentication. Implement Amazon API Gateway REST APIs with a Lambda authorizer.
D. Configure an IAM user to handle user authentication. Implement Amazon API Gateway HTTP APIs with an IAM authorizer.
Answer: A
Explanation: This solution will meet the requirements with the most operational efficiency
because:
Amazon Cognito user pools provide a secure and scalable user directory that can
store and manage user profiles, and handle user sign-up, sign-in, and access
control. User pools can also integrate with social identity providers and enterprise
identity providers via SAML or OIDC. User pools can issue JSON Web Tokens
(JWTs) that can be used to authenticate users and authorize API requests.
Amazon API Gateway REST APIs enable you to create and deploy APIs that
expose your backend services to your clients. REST APIs support multiple
authorization mechanisms, including Cognito user pools, IAM, and Lambda (custom)
authorizers. A Cognito authorizer is an authorizer type that uses
a Cognito user pool as the identity source. When a client makes a request to a
REST API method that is configured with a Cognito authorizer, API Gateway
verifies the JWTs that are issued by the user pool and grants access based on the
token’s claims and the authorizer’s configuration.
By using Cognito user pools and API Gateway REST APIs with a Cognito
authorizer, you can achieve a high level of security, scalability, and performance
for your web analytics service. You can also leverage the built-in features of
Cognito and API Gateway, such as user management, token validation, caching,
throttling, and monitoring, without having to implement them yourself. This reduces
the operational overhead and complexity of your solution.
References:
Amazon Cognito User Pools
Amazon API Gateway REST APIs
Use API Gateway Lambda authorizers
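To show how the pieces of option A fit together, the sketch below attaches a Cognito user pool authorizer to an existing API Gateway REST API and then requires it on a method. The REST API ID, resource ID, and user pool ARN are placeholder values.

import boto3

apigw = boto3.client("apigateway")

REST_API_ID = "a1b2c3d4e5"          # placeholder REST API ID
RESOURCE_ID = "resour1"             # placeholder resource ID for the analytics resource
USER_POOL_ARN = "arn:aws:cognito-idp:us-east-1:111122223333:userpool/us-east-1_EXAMPLE"  # placeholder

# Create an authorizer backed by the Cognito user pool
authorizer = apigw.create_authorizer(
    restApiId=REST_API_ID,
    name="analytics-user-pool-authorizer",
    type="COGNITO_USER_POOLS",
    providerARNs=[USER_POOL_ARN],
    identitySource="method.request.header.Authorization",  # where the JWT is expected
)

# Require the Cognito authorizer on the GET method of the analytics resource
apigw.put_method(
    restApiId=REST_API_ID,
    resourceId=RESOURCE_ID,
    httpMethod="GET",
    authorizationType="COGNITO_USER_POOLS",
    authorizerId=authorizer["id"],
)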
Question # 26
A company has an AWS Direct Connect connection from its on-premises location to an AWS account. The AWS account has 30 different VPCs in the same AWS Region. The VPCs use private virtual interfaces (VIFs). Each VPC has a CIDR block that does not overlap with other networks under the company's control. The company wants to centrally manage the networking architecture while still allowing each VPC to communicate with all other VPCs and on-premises networks. Which solution will meet these requirements with the LEAST amount of operational overhead?
A. Create a transit gateway and associate the Direct Connect connection with a new transit VIF. Turn on the transit gateway's route propagation feature.
B. Create a Direct Connect gateway. Recreate the private VIFs to use the new gateway. Associate each VPC by creating new virtual private gateways.
C. Create a transit VPC. Connect the Direct Connect connection to the transit VPC. Create a peering connection between all other VPCs in the Region. Update the route tables.
D. Create AWS Site-to-Site VPN connections from on premises to each VPC. Ensure that both VPN tunnels are UP for each connection. Turn on the route propagation feature.
Answer: A
Explanation: This solution meets the following requirements:
It is operationally efficient, as it only requires one transit gateway and one transit
VIF to connect the Direct Connect connection to all the VPCs in the same AWS
Region. The transit gateway acts as a regional network hub that simplifies the
network management and reduces the number of VIFs and gateways needed.
It is scalable, as it can support up to 5000 attachments per transit gateway, which
can include VPCs, VPNs, Direct Connect gateways, and peering connections. The
transit gateway can also be connected to other transit gateways in different
Regions or accounts using peering connections, enabling cross-Region and cross-account connectivity.
It is flexible, as it allows each VPC to communicate with all other VPCs and on-premises
networks using dynamic routing protocols such as Border Gateway
Protocol (BGP). The transit gateway’s route propagation feature automatically
propagates the routes from the attached VPCs and VPNs to the transit gateway
route table, eliminating the need to manually update the route tables.
References:
Transit Gateways - Amazon Virtual Private Cloud
Working with transit gateways - AWS Direct Connect
Question # 27
A solutions architect is designing a shared storage solution for a web application that is deployed across multiple Availability Zones. The web application runs on Amazon EC2 instances that are in an Auto Scaling group. The company plans to make frequent changes to the content. The solution must have strong consistency in returning the new content as soon as the changes occur. Which solutions meet these requirements? (Select TWO)
A. Use AWS Storage Gateway Volume Gateway Internet Small Computer Systems Interface (iSCSI) block storage that is mounted to the individual EC2 instances.
B. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system on the individual EC2 instances.
C. Create a shared Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on the individual EC2 instances.
D. Use AWS DataSync to perform continuous synchronization of data between EC2 hosts in the Auto Scaling group.
E. Create an Amazon S3 bucket to store the web content. Set the metadata for the Cache-Control header to no-cache. Use Amazon CloudFront to deliver the content.
Answer: B,E
Explanation: These options are the most suitable ways to design a shared storage
solution for a web application that is deployed across multiple Availability Zones and
requires strong consistency. Option B uses Amazon Elastic File System (Amazon EFS) as
a shared file system that can be mounted on multiple EC2 instances in different Availability
Zones. Amazon EFS provides high availability, durability, scalability, and performance for
file-based workloads. It also supports strong consistency, which means that any changes
made to the file system are immediately visible to all clients. Option E uses Amazon S3 as
a shared object store that can store the web content and serve it through Amazon
CloudFront, a content delivery network (CDN). Amazon S3 provides high availability,
durability, scalability, and performance for object-based workloads. It also supports strong
consistency for read-after-write and list operations, which means that any changes made to
the objects are immediately visible to all clients. By setting the metadata for the Cache-
Control header to no-cache, the web content can be prevented from being cached by the
browsers or the CDN edge locations, ensuring that the latest content is always delivered to
the users.
Option A is not suitable because using AWS Storage Gateway Volume Gateway as a
shared storage solution for a web application is not efficient or scalable. AWS Storage
Gateway Volume Gateway is a hybrid cloud storage service that provides block storage
volumes that can be mounted on-premises or on EC2 instances as iSCSI devices. It is
useful for migrating or backing up data to AWS, but it is not designed for serving web
content or providing strong consistency. Moreover, using Volume Gateway would incur
additional costs and complexity, and it would not leverage the native AWS storage
services.
Option C is not suitable because creating a shared Amazon EBS volume and mounting it
on multiple EC2 instances is not possible or reliable. Amazon EBS is a block storage
service that provides persistent and high-performance volumes for EC2 instances.
However, EBS volumes can only be attached to one EC2 instance at a time, and they are
constrained to a single Availability Zone. Therefore, creating a shared EBS volume for a
web application that is deployed across multiple Availability Zones is not feasible.
Moreover, EBS volumes do not support strong consistency, which means that any changes
made to the volume may not be immediately visible to other clients.
Option D is not suitable because using AWS DataSync to perform continuous
synchronization of data between EC2 hosts in the Auto Scaling group is not efficient or
scalable. AWS DataSync is a data transfer service that helps you move large amounts of
data to and from AWS storage services. It is useful for migrating or archiving data, but it is
not designed for serving web content or providing strong consistency. Moreover, using
DataSync would incur additional costs and complexity, and it would not leverage the native AWS storage services. References:
What Is Amazon Elastic File System?
What Is Amazon Simple Storage Service?
What Is Amazon CloudFront?
What Is AWS Storage Gateway?
What Is Amazon Elastic Block Store?
What Is AWS DataSync?
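As an example of the Amazon S3 part of option E, uploading an object with a Cache-Control header of no-cache can be done as in the sketch below, so CloudFront and browsers revalidate the content on every request. The bucket name and object key are placeholders.

import boto3

s3 = boto3.client("s3")

with open("index.html", "rb") as body:
    s3.put_object(
        Bucket="example-web-content-bucket",   # placeholder bucket served through CloudFront
        Key="index.html",                      # placeholder object key
        Body=body,
        ContentType="text/html",
        CacheControl="no-cache",               # force revalidation so changes are visible immediately
    )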
Question # 28
A company needs to extract the names of ingredients from recipe records that are stored as text files in an Amazon S3 bucket. A web application will use the ingredient names to query an Amazon DynamoDB table and determine a nutrition score. The application can handle non-food records and errors. The company does not have any employees who have machine learning knowledge to develop this solution. Which solution will meet these requirements MOST cost-effectively?
A. Use S3 Event Notifications to invoke an AWS Lambda function when PutObject requests occur. Program the Lambda function to analyze the object and extract the ingredient names by using Amazon Comprehend. Store the Amazon Comprehend output in the DynamoDB table.
B. Use an Amazon EventBridge rule to invoke an AWS Lambda function when PutObject requests occur. Program the Lambda function to analyze the object by using Amazon Forecast to extract the ingredient names. Store the Forecast output in the DynamoDB table.
C. Use S3 Event Notifications to invoke an AWS Lambda function when PutObject requests occur. Use Amazon Polly to create audio recordings of the recipe records. Save the audio files in the S3 bucket. Use Amazon Simple Notification Service (Amazon SNS) to send a URL as a message to employees. Instruct the employees to listen to the audio files and calculate the nutrition score. Store the ingredient names in the DynamoDB table.
D. Use an Amazon EventBridge rule to invoke an AWS Lambda function when a PutObject request occurs. Program the Lambda function to analyze the object and extract the ingredient names by using Amazon SageMaker. Store the inference output from the SageMaker endpoint in the DynamoDB table.
Answer: A
Explanation: This solution meets the following requirements:
It is cost-effective, as it only uses serverless components that are charged based
on usage and do not require any upfront provisioning or maintenance.
It is scalable, as it can handle any number of recipe records that are uploaded to
the S3 bucket without any performance degradation or manual intervention.
It is easy to implement, as it does not require any machine learning knowledge or
complex data processing logic. Amazon Comprehend is a natural language
processing service that can automatically extract entities such as ingredients from
text files. The Lambda function can simply invoke the Comprehend API and store
the results in the DynamoDB table.
It is reliable, as it can handle non-food records and errors gracefully. Amazon
Comprehend can detect the language and domain of the text files and return an
appropriate response. The Lambda function can also implement error handling
and logging mechanisms to ensure the data quality and integrity.
References:
Using AWS Lambda with Amazon S3 - AWS Lambda
What Is Amazon Comprehend? - Amazon Comprehend
Working with Tables - Amazon DynamoDB
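A minimal version of the Lambda function in option A could look like the sketch below: it reads the uploaded recipe file from S3, asks Amazon Comprehend for entities, and writes the extracted text to DynamoDB. The table name is a placeholder, and keeping every detected entity is a simplification, since precise ingredient extraction may call for a custom Comprehend entity recognizer in practice.

import boto3

s3 = boto3.client("s3")
comprehend = boto3.client("comprehend")
table = boto3.resource("dynamodb").Table("RecipeIngredients")   # placeholder table name

def handler(event, context):
    """Triggered by S3 Event Notifications when a new recipe file is uploaded."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Truncate very long recipes; the synchronous Comprehend API has a document size limit
        text = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")[:4500]

        entities = comprehend.detect_entities(Text=text, LanguageCode="en")["Entities"]
        ingredients = [entity["Text"] for entity in entities]   # simplification: keep all detected entities

        if ingredients:
            table.put_item(Item={"recipe_key": key, "ingredients": ingredients})
    return {"processed": len(event["Records"])}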
Question # 29
A company has a new mobile app. Anywhere in the world, users can see local news on topics they choose. Users also can post photos and videos from inside the app. Users access content often in the first minutes after the content is posted. New content quickly replaces older content, and then the older content disappears. The local nature of the news means that users consume 90% of the content within the AWS Region where it is uploaded. Which solution will optimize the user experience by providing the LOWEST latency for content uploads?
A. Upload and store content in Amazon S3. Use Amazon CloudFront for the uploads.
B. Upload and store content in Amazon S3. Use S3 Transfer Acceleration for the uploads.
C. Upload content to Amazon EC2 instances in the Region that is closest to the user. Copy the data to Amazon S3.
D. Upload and store content in Amazon S3 in the Region that is closest to the user. Use multiple distributions of Amazon CloudFront.
Answer: B
Explanation: The most suitable solution for optimizing the user experience by providing
the lowest latency for content uploads is to upload and store content in Amazon S3 and
use S3 Transfer Acceleration for the uploads. This solution will enable the company to
leverage the AWS global network and edge locations to speed up the data transfer
between the users and the S3 buckets.
Amazon S3 is a storage service that provides scalable, durable, and highly available object
storage for any type of data. Amazon S3 allows users to store and retrieve data from
anywhere on the web, and offers various features such as encryption, versioning, lifecycle
management, and replication1.
S3 Transfer Acceleration is a feature of Amazon S3 that helps users transfer data to and
from S3 buckets more quickly. S3 Transfer Acceleration works by using optimized network
paths and Amazon’s backbone network to accelerate data transfer speeds. Users can
enable S3 Transfer Acceleration for their buckets and use a distinct URL to access them,
such as <bucket>.s3-accelerate.amazonaws.com2.
The other options are not correct because they either do not provide the lowest latency or
are not suitable for the use case. Uploading and storing content in Amazon S3 and using Amazon CloudFront for the uploads is not correct because this solution is not designed for
optimizing uploads, but rather for optimizing downloads. Amazon CloudFront is a content
delivery network (CDN) that helps users distribute their content globally with low latency
and high transfer speeds. CloudFront works by caching the content at edge locations
around the world, so that users can access it quickly and easily from anywhere3. Uploading
content to Amazon EC2 instances in the Region that is closest to the user and copying the
data to Amazon S3 is not correct because this solution adds unnecessary complexity and
cost to the process. Amazon EC2 is a computing service that provides scalable and secure
virtual servers in the cloud. Users can launch, stop, or terminate EC2 instances as needed,
and choose from various instance types, operating systems, and configurations4.
Uploading and storing content in Amazon S3 in the Region that is closest to the user and
using multiple distributions of Amazon CloudFront is not correct because this solution is not
cost-effective or efficient for the use case. As mentioned above, Amazon CloudFront is a
CDN that helps users distribute their content globally with low latency and high transfer
speeds. However, creating multiple CloudFront distributions for each Region would incur
additional charges and management overhead, and would not be necessary since 90% of
the content is consumed within the same Region where it is uploaded3.
References:
What Is Amazon Simple Storage Service? - Amazon Simple Storage Service
Amazon S3 Transfer Acceleration - Amazon Simple Storage Service
What Is Amazon CloudFront? - Amazon CloudFront
What Is Amazon EC2? - Amazon Elastic Compute Cloud
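The sketch below shows the two steps behind option B: enabling Transfer Acceleration on the bucket and then uploading through the accelerate endpoint. The bucket name and file names are placeholders.

import boto3
from botocore.config import Config

BUCKET = "example-user-content-bucket"   # placeholder bucket name

# One-time setup: turn on Transfer Acceleration for the bucket
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket=BUCKET,
    AccelerateConfiguration={"Status": "Enabled"},
)

# Clients then upload through the accelerate endpoint (bucket.s3-accelerate.amazonaws.com)
s3_accelerated = boto3.client(
    "s3",
    config=Config(s3={"use_accelerate_endpoint": True}),
)
s3_accelerated.upload_file("video.mp4", BUCKET, "uploads/video.mp4")   # placeholder file and key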
Question # 30
An ecommerce application uses a PostgreSQL database that runs on an Amazon EC2 instance. During a monthly sales event, database usage increases and causes database connection issues for the application. The traffic is unpredictable for subsequent monthly sales events, which impacts the sales forecast. The company needs to maintain performance when there is an unpredictable increase in traffic. Which solution resolves this issue in the MOST cost-effective way?
A. Migrate the PostgreSQL database to Amazon Aurora Serverless v2.
B. Enable auto scaling for the PostgreSQL database on the EC2 instance to accommodate increased usage.
C. Migrate the PostgreSQL database to Amazon RDS for PostgreSQL with a larger instance type.
D. Migrate the PostgreSQL database to Amazon Redshift to accommodate increased usage.
Answer: A
Explanation: Amazon Aurora Serverless v2 is a cost-effective solution that can
automatically scale the database capacity up and down based on the application’s needs. It
can handle unpredictable traffic spikes without requiring any provisioning or management
of database instances. It is compatible with PostgreSQL and offers high performance, scalability, and availability.
Question # 31
A company's marketing data is uploaded from multiple sources to an Amazon S3 bucket. A series of data preparation jobs aggregate the data for reporting. The data preparation jobs need to run at regular intervals in parallel. A few jobs need to run in a specific order later. The company wants to remove the operational overhead of job error handling, retry logic, and state management. Which solution will meet these requirements?
A. Use an AWS Lambda function to process the data as soon as the data is uploaded to the S3 bucket. Invoke other Lambda functions at regularly scheduled intervals.
B. Use Amazon Athena to process the data. Use Amazon EventBridge Scheduler to invoke Athena on a regular interval.
C. Use AWS Glue DataBrew to process the data. Use an AWS Step Functions state machine to run the DataBrew data preparation jobs.
D. Use AWS Data Pipeline to process the data. Schedule Data Pipeline to process the data once at midnight.
Answer: C
Explanation: AWS Glue DataBrew is a visual data preparation tool that allows you to
easily clean, normalize, and transform your data without writing any code. You can create
and run data preparation jobs on your data stored in Amazon S3, Amazon Redshift, or
other data sources. AWS Step Functions is a service that lets you coordinate multiple AWS
services into serverless workflows. You can use Step Functions to orchestrate your
DataBrew jobs, define the order and parallelism of execution, handle errors and retries, and
monitor the state of your workflow. By using AWS Glue DataBrew and AWS Step
Functions, you can meet the requirements of the company with minimal operational
overhead, as you do not need to write any code, manage any servers, or deal with complex
dependencies.
References:
AWS Glue DataBrew
AWS Step Functions
Orchestrate AWS Glue DataBrew jobs using AWS Step Functions
Question # 32
A research company uses on-premises devices to generate data for analysis. The company wants to use the AWS Cloud to analyze the data. The devices generate .csv files and support writing the data to an SMB file share. Company analysts must be able to use SQL commands to query the data. The analysts will run queries periodically throughout the day. Which combination of steps will meet these requirements MOST cost-effectively? (Select THREE.)
A. Deploy an AWS Storage Gateway on premises in Amazon S3 File Gateway mode.
B. Deploy an AWS Storage Gateway on premises in Amazon FSx File Gateway mode.
C. Set up an AWS Glue crawler to create a table based on the data that is in Amazon S3.
D. Set up an Amazon EMR cluster with EMR File System (EMRFS) to query the data that is in Amazon S3. Provide access to analysts.
E. Set up an Amazon Redshift cluster to query the data that is in Amazon S3. Provide access to analysts.
F. Set up Amazon Athena to query the data that is in Amazon S3. Provide access to analysts.
Answer: A,C,F
Explanation: To meet the requirements of the use case in a cost-effective way, the
following steps are recommended:
Deploy an AWS Storage Gateway on premises in Amazon S3 File Gateway mode.
This will allow the company to write the .csv files generated by the devices to an
SMB file share, which will be stored as objects in Amazon S3 buckets. AWS
Storage Gateway is a hybrid cloud storage service that integrates on-premises
environments with AWS storage. Amazon S3 File Gateway mode provides a
seamless way to connect to Amazon S3 and access a virtually unlimited amount of
cloud storage1.
Set up an AWS Glue crawler to create a table based on the data that is in Amazon
S3. This will enable the company to use standard SQL to query the data stored in
Amazon S3 buckets. AWS Glue is a serverless data integration service that
simplifies data preparation and analysis. AWS Glue crawlers can automatically
discover and classify data from various sources, and create metadata tables in the
AWS Glue Data Catalog2. The Data Catalog is a central repository that stores
information about data sources and how to access them3.
Set up Amazon Athena to query the data that is in Amazon S3. This will provide
the company analysts with a serverless and interactive query service that can
analyze data directly in Amazon S3 using standard SQL. Amazon Athena is
integrated with the AWS Glue Data Catalog, so users can easily point Athena at
the data source tables defined by the crawlers. Amazon Athena charges only for
the queries that are run, and offers a pay-per-query pricing model, which makes it
a cost-effective option for periodic queries4.
The other options are not correct because they are either not cost-effective or not suitable
for the use case. Deploying an AWS Storage Gateway on premises in Amazon FSx File
Gateway mode is not correct because this mode provides low-latency access to fully
managed Windows file shares in AWS, which is not required for the use case. Setting up
an Amazon EMR cluster with EMR File System (EMRFS) to query the data that is in
Amazon S3 is not correct because this option involves setting up and managing a cluster of
EC2 instances, which adds complexity and cost to the solution. Setting up an Amazon
Redshift cluster to query the data that is in Amazon S3 is not correct because this option
also involves provisioning and managing a cluster of nodes, which adds overhead and cost
to the solution.
References:
What is AWS Storage Gateway?
What is AWS Glue?
AWS Glue Data Catalog
What is Amazon Athena?
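To illustrate steps C and F, the sketch below creates a Glue crawler over the S3 prefix that the File Gateway writes to and then runs an Athena query against the resulting table. The role, database name, bucket paths, table, and columns are all placeholders.

import boto3

glue = boto3.client("glue")
athena = boto3.client("athena")

# Crawl the CSV files that the S3 File Gateway stores in the bucket
glue.create_crawler(
    Name="device-data-crawler",
    Role="arn:aws:iam::111122223333:role/GlueCrawlerRole",          # placeholder role
    DatabaseName="device_data",
    Targets={"S3Targets": [{"Path": "s3://example-device-data/reports/"}]},   # placeholder path
)
glue.start_crawler(Name="device-data-crawler")

# Analysts can then query the cataloged table with standard SQL
athena.start_query_execution(
    QueryString="SELECT device_id, AVG(reading) FROM device_data.reports GROUP BY device_id",  # placeholder query
    QueryExecutionContext={"Database": "device_data"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},   # placeholder results bucket
)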
Question # 33
A company website hosted on Amazon EC2 instances processes classified data. The application writes data to Amazon Elastic Block Store (Amazon EBS) volumes. The company needs to ensure that all data that is written to the EBS volumes is encrypted at rest. Which solution will meet this requirement?
A. Create an IAM role that specifies EBS encryption. Attach the role to the EC2 instances.
B. Create the EBS volumes as encrypted volumes. Attach the EBS volumes to the EC2 instances.
C. Create an EC2 instance tag that has a key of Encrypt and a value of True. Tag all instances that require encryption at the EBS level.
D. Create an AWS Key Management Service (AWS KMS) key policy that enforces EBS encryption in the account. Ensure that the key policy is active.
Answer: B
Explanation: The simplest and most effective way to ensure that all data that is written to
the EBS volumes is encrypted at rest is to create the EBS volumes as encrypted volumes.
You can do this by selecting the encryption option when you create a new EBS volume, or
by copying an existing unencrypted volume to a new encrypted volume. You can also
specify the AWS KMS key that you want to use for encryption, or use the default AWS managed
key. When you attach the encrypted EBS volumes to the EC2 instances, the data
will be automatically encrypted and decrypted by the EC2 host. This solution does not
require any additional IAM roles, tags, or policies. References:
Amazon EBS encryption
Creating an encrypted EBS volume
Encrypting an unencrypted EBS volume
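Option B can be applied when the volume is created, as in the sketch below; the Availability Zone, size, KMS key alias, and instance ID are placeholder values, and omitting KmsKeyId simply uses the default AWS managed key for EBS.

import boto3

ec2 = boto3.client("ec2")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",     # placeholder AZ, must match the instance's AZ
    Size=100,                          # GiB
    VolumeType="gp3",
    Encrypted=True,                    # data at rest, snapshots, and data moving to the instance are encrypted
    KmsKeyId="alias/ebs-app-data",     # placeholder customer managed key; omit to use the default key
)

# Wait until the volume is available, then attach it to the instance
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    Device="/dev/sdf",
)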
Question # 34
A company has Amazon EC2 instances that run nightly batch jobs to process data. The EC2 instances run in an Auto Scaling group that uses On-Demand billing. If a job fails on one instance, another instance will reprocess the job. The batch jobs run between 12:00 AM and 06:00 AM local time every day. Which solution will provide EC2 instances to meet these requirements MOST cost-effectively?
A. Purchase a 1-year Savings Plan for Amazon EC2 that covers the instance family of the Auto Scaling group that the batch job uses.
B. Purchase a 1-year Reserved Instance for the specific instance type and operating system of the instances in the Auto Scaling group that the batch job uses.
C. Create a new launch template for the Auto Scaling group. Set the instances to Spot Instances. Set a policy to scale out based on CPU usage.
D. Create a new launch template for the Auto Scaling group. Increase the instance size. Set a policy to scale out based on CPU usage.
Answer: C
Explanation: This option is the most cost-effective solution because it leverages the Spot
Instances, which are unused EC2 instances that are available at up to 90% discount
compared to On-Demand prices. Spot Instances can be interrupted by AWS when the
demand for On-Demand instances increases, but since the batch jobs are fault-tolerant and
can be reprocessed by another instance, this is not a major issue. By using a launch
template, the company can specify the configuration of the Spot Instances, such as the
instance type, the operating system, and the user data. By using an Auto Scaling group,
the company can automatically scale the number of Spot Instances based on the CPU
usage, which reflects the load of the batch jobs. This way, the company can optimize the
performance and the cost of the EC2 instances for the nightly batch jobs.
A. Purchase a 1-year Savings Plan for Amazon EC2 that covers the instance family of the
Auto Scaling group that the batch job uses. This option is not optimal because it requires a
commitment to a consistent amount of compute usage per hour for a one-year term,
regardless of the instance type, size, region, or operating system. This can limit the flexibility and scalability of the Auto Scaling group and result in overpaying for unused
compute capacity. Moreover, Savings Plans do not provide a capacity reservation, which
means the company still needs to reserve capacity with On-Demand Capacity
Reservations and pay lower prices with Savings Plans.
B. Purchase a 1-year Reserved Instance for the specific instance type and operating
system of the instances in the Auto Scaling group that the batch job uses. This option is not
ideal because it requires a commitment to a specific instance configuration for a one-year
term, which can reduce the flexibility and scalability of the Auto Scaling group and result in
overpaying for unused compute capacity. Moreover, Reserved Instances do not provide a
capacity reservation, which means the company still needs to reserve capacity with On-
Demand Capacity Reservations and pay lower prices with Reserved Instances.
D. Create a new launch template for the Auto Scaling group Increase the instance size Set
a policy to scale out based on CPU usage. This option is not cost-effective because it does
not take advantage of the lower prices of Spot Instances. Increasing the instance size can
improve the performance of the batch jobs, but it can also increase the cost of the On-
Demand instances. Moreover, scaling out based on CPU usage can result in launching
more instances than needed, which can also increase the cost of the system.
References:
1 Spot Instances - Amazon Elastic Compute Cloud
2 Launch templates - Amazon Elastic Compute Cloud
3 Auto Scaling groups - Amazon EC2 Auto Scaling
[4] Savings Plans - Amazon EC2 Reserved Instances and Other AWS Reservation
Models
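The Spot-based option C maps to a launch template similar to the sketch below, which the Auto Scaling group then references; the AMI ID, instance type, and name are placeholders, and the CPU-based scale-out policy would be configured separately on the group.

import boto3

ec2 = boto3.client("ec2")

ec2.create_launch_template(
    LaunchTemplateName="nightly-batch-spot",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",      # placeholder AMI with the batch job software
        "InstanceType": "c6i.xlarge",            # placeholder instance type
        "InstanceMarketOptions": {
            "MarketType": "spot",                # request Spot capacity at a steep discount
            "SpotOptions": {"SpotInstanceType": "one-time"},
        },
    },
)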
Question # 35
A company hosts a three-tier web application in the AWS Cloud. A Multi-AZ Amazon RDS for MySQL server forms the database layer. Amazon ElastiCache forms the cache layer. The company wants a caching strategy that adds or updates data in the cache when a customer adds an item to the database. The data in the cache must always match the data in the database. Which solution will meet these requirements?
A. Implement the lazy loading caching strategy.
B. Implement the write-through caching strategy.
C. Implement the adding TTL caching strategy.
D. Implement the AWS AppConfig caching strategy.
Answer: B
Explanation: A write-through caching strategy adds or updates data in the cache
whenever data is written to the database. This ensures that the data in the cache is always
consistent with the data in the database. A write-through caching strategy also reduces the
cache miss penalty, as data is always available in the cache when it is requested.
However, a write-through caching strategy can increase the write latency, as data has to be
written to both the cache and the database. A write-through caching strategy is suitable for
applications that require high data consistency and low read latency.
A lazy loading caching strategy only loads data into the cache when it is requested, and
updates the cache when there is a cache miss. This can result in stale data in the cache,
as data is not updated in the cache when it is changed in the database. A lazy loading
caching strategy is suitable for applications that can tolerate some data inconsistency and
have a low cache miss rate.
An adding TTL caching strategy assigns a time-to-live (TTL) value to each data item in the cache, and removes the data from the cache when the TTL expires. This can help prevent
stale data in the cache, as data is periodically refreshed from the database. However, an
adding TTL caching strategy can also increase the cache miss rate, as data can be evicted
from the cache before it is requested. An adding TTL caching strategy is suitable for
applications that have a high cache hit rate and can tolerate some data inconsistency.
An AWS AppConfig caching strategy is not a valid option, as AWS AppConfig is a service
that enables customers to quickly deploy validated configurations to applications of any
size and scale. AWS AppConfig does not provide a caching layer for web applications.
References: Caching strategies - Amazon ElastiCache, Caching for high-volume workloads
with Amazon ElastiCache
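In application code, the write-through pattern from option B means every database write is immediately followed by a cache update, roughly as in the sketch below. The Redis endpoint, table schema, and key format are placeholders, and the redis-py client is assumed to be installed.

import json
import redis   # assumes the redis-py client is installed

cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)  # placeholder ElastiCache endpoint

def add_item(db_connection, item_id, item):
    """Write-through: persist to the database, then update the cache as part of the same operation."""
    with db_connection.cursor() as cursor:
        cursor.execute(
            "INSERT INTO items (id, data) VALUES (%s, %s)",   # placeholder schema
            (item_id, json.dumps(item)),
        )
    db_connection.commit()

    # The cache now always reflects what was just written to the database
    cache.set(f"item:{item_id}", json.dumps(item))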
Question # 36
A company wants to analyze and troubleshoot Access Denied errors and Unauthorized errors that are related to IAM permissions. The company has AWS CloudTrail turned on. Which solution will meet these requirements with the LEAST effort?
A. Use AWS Glue and write custom scripts to query CloudTrail logs for the errors.
B. Use AWS Batch and write custom scripts to query CloudTrail logs for the errors.
C. Search CloudTrail logs with Amazon Athena queries to identify the errors.
D. Search CloudTrail logs with Amazon QuickSight. Create a dashboard to identify the errors.
Answer: C
Explanation: This solution meets the following requirements:
It is the least effort, as it does not require any additional AWS services, custom
scripts, or data processing steps. Amazon Athena is a serverless interactive query
service that allows you to analyze data in Amazon S3 using standard SQL. You
can use Athena to query CloudTrail logs directly from the S3 bucket where they
are stored, without any data loading or transformation. You can also use the AWS
Management Console, the AWS CLI, or the Athena API to run and manage your
queries.
It is effective, as it allows you to filter, aggregate, and join CloudTrail log data using
SQL syntax. You can use various SQL functions and operators to specify the
criteria for identifying Access Denied and Unauthorized errors, such as the error
code, the user identity, the event source, the event name, the event time, and the
resource ARN. You can also use subqueries, views, and common table
expressions to simplify and optimize your queries.
It is flexible, as it allows you to customize and save your queries for future use.
You can also export the query results to other formats, such as CSV or JSON, or
integrate them with other AWS services, such as Amazon QuickSight, for further
analysis and visualization.
References:
Querying AWS CloudTrail Logs - Amazon Athena
Analyzing Data in S3 using Amazon Athena | AWS Big Data Blog
Troubleshoot IAM permission access denied or unauthorized errors | AWS re:Post
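Once a CloudTrail table exists in the Glue Data Catalog, finding these errors is a single Athena query, for example the sketch below; the database, table name, and results bucket are placeholders.

import boto3

athena = boto3.client("athena")

QUERY = """
SELECT eventtime, useridentity.arn AS principal, eventsource, eventname, errorcode, errormessage
FROM cloudtrail_logs
WHERE errorcode IN ('AccessDenied', 'UnauthorizedOperation')
ORDER BY eventtime DESC
LIMIT 100
"""

athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "default"},                            # placeholder database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},   # placeholder results location
)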
Question # 37
A global company runs its applications in multiple AWS accounts in AWS Organizations. The company's applications use multipart uploads to upload data to multiple Amazon S3 buckets across AWS Regions. The company wants to report on incomplete multipart uploads for cost compliance purposes. Which solution will meet these requirements with the LEAST operational overhead?
A. Configure AWS Config with a rule to report the incomplete multipart upload object count.
B. Create a service control policy (SCP) to report the incomplete multipart upload object count.
C. Configure S3 Storage Lens to report the incomplete multipart upload object count.
D. Create an S3 Multi-Region Access Point to report the incomplete multipart upload object count.
Answer: C
Explanation: S3 Storage Lens is a cloud storage analytics feature that provides
organization-wide visibility into object storage usage and activity across multiple AWS
accounts in AWS Organizations. S3 Storage Lens can report the incomplete multipart
upload object count as one of the metrics that it collects and displays on an interactive
dashboard in the S3 console. S3 Storage Lens can also export metrics in CSV or Parquet
format to an S3 bucket for further analysis. This solution will meet the requirements with the
least operational overhead, as it does not require any code development or policy changes.
References:
1 explains how to use S3 Storage Lens to gain insights into S3 storage usage and
activity.
2 describes the concept and benefits of multipart uploads.
Question # 38
A company has stored 10 TB of log files in Apache Parquet format in an Amazon S3 bucket. The company occasionally needs to use SQL to analyze the log files. Which solution will meet these requirements MOST cost-effectively?
A. Create an Amazon Aurora MySQL database. Migrate the data from the S3 bucket into Aurora by using AWS Database Migration Service (AWS DMS). Issue SQL statements to the Aurora database.
B. Create an Amazon Redshift cluster. Use Redshift Spectrum to run SQL statements directly on the data in the S3 bucket.
C. Create an AWS Glue crawler to store and retrieve table metadata from the S3 bucket. Use Amazon Athena to run SQL statements directly on the data in the S3 bucket.
D. Create an Amazon EMR cluster. Use Apache Spark SQL to run SQL statements directly on the data in the S3 bucket.
Answer: C
Explanation: AWS Glue is a serverless data integration service that can crawl, catalog,
and prepare data for analysis. AWS Glue can automatically discover the schema and
partitioning of the data stored in Apache Parquet format in S3, and create a table in the
AWS Glue Data Catalog. Amazon Athena is a serverless interactive query service that can
run SQL queries directly on data in S3, without requiring any data loading or
transformation. Athena can use the table metadata from the AWS Glue Data Catalog to
query the data in S3. By using AWS Glue and Athena, you can analyze the log files in S3
most cost-effectively, as you only pay for the resources consumed by the crawler and the
queries, and you do not need to provision or manage any servers or clusters.
References:
AWS Glue
Amazon Athena
Analyzing Data in S3 using Amazon Athena
Question # 39
A pharmaceutical company is developing a new drug. The volume of data that the company generates has grown exponentially over the past few months. The company's researchers regularly require a subset of the entire dataset to be immediately available with minimal lag. However, the entire dataset does not need to be accessed on a daily basis. All the data currently resides in on-premises storage arrays, and the company wants to reduce ongoing capital expenses. Which storage solution should a solutions architect recommend to meet these requirements?
A. Run AWS DataSync as a scheduled cron job to migrate the data to an Amazon S3 bucket on an ongoing basis.
B. Deploy an AWS Storage Gateway file gateway with an Amazon S3 bucket as the target storage. Migrate the data to the Storage Gateway appliance.
C. Deploy an AWS Storage Gateway volume gateway with cached volumes with an Amazon S3 bucket as the target storage. Migrate the data to the Storage Gateway appliance.
D. Configure an AWS Site-to-Site VPN connection from the on-premises environment to AWS. Migrate data to an Amazon Elastic File System (Amazon EFS) file system.
Answer: C
Explanation: AWS Storage Gateway is a hybrid cloud storage service that allows you to
seamlessly integrate your on-premises applications with AWS cloud storage. Volume
Gateway is a type of Storage Gateway that presents cloud-backed iSCSI block storage
volumes to your on-premises applications. Volume Gateway operates in either cache mode
or stored mode. In cache mode, your primary data is stored in Amazon S3, while retaining
your frequently accessed data locally in the cache for low latency access. In stored mode,
your primary data is stored locally and your entire dataset is available for low latency
access on premises while also asynchronously getting backed up to Amazon S3.
For the pharmaceutical company’s use case, cache mode is the most suitable option, as it
meets the following requirements:
It reduces the need to scale the on-premises storage infrastructure, as most of the
data is stored in Amazon S3, which is scalable, durable, and cost-effective.
It provides low latency access to the subset of the data that the researchers
regularly require, as it is cached locally in the Storage Gateway appliance.
It does not require the entire dataset to be accessed on a daily basis, as it is
stored in Amazon S3 and can be retrieved on demand.
It offers flexible data protection and recovery options, as it allows taking point-in-time
copies of the volumes using AWS Backup, which are stored in AWS as
Amazon EBS snapshots.
Therefore, the solutions architect should recommend deploying an AWS Storage Gateway
volume gateway with cached volumes with an Amazon S3 bucket as the target storage and
migrating the data to the Storage Gateway appliance.
References:
Volume Gateway | Amazon Web Services
How Volume Gateway works (architecture) - AWS Storage Gateway
Question # 40
A company runs a three-tier web application in a VPC across multiple Availability Zones. Amazon EC2 instances run in an Auto Scaling group for the application tier. The company needs to make an automated scaling plan that will analyze each resource's daily and weekly historical workload trends. The configuration must scale resources appropriately according to both the forecast and live changes in utilization. Which scaling strategy should a solutions architect recommend to meet these requirements?
A. Implement dynamic scaling with step scaling based on average CPU utilization from the EC2 instances.
B. Enable predictive scaling to forecast and scale. Configure dynamic scaling with target tracking.
C. Create an automated scheduled scaling action based on the traffic patterns of the web application.
D. Set up a simple scaling policy. Increase the cooldown period based on the EC2 instance startup time.
Answer: B
Explanation:
This solution meets the requirements because it allows the company to use both predictive
scaling and dynamic scaling to optimize the capacity of its Auto Scaling group. Predictive
scaling uses machine learning to analyze historical data and forecast future traffic patterns.
It then adjusts the desired capacity of the group in advance of the predicted changes.
Dynamic scaling uses target tracking to maintain a specified metric (such as CPU
utilization) at a target value. It scales the group in or out as needed to keep the metric close to the target. By using both scaling methods, the company can benefit from faster, simpler,
and more accurate scaling that responds to both forecasted and live changes in utilization.
References:
Predictive scaling for Amazon EC2 Auto Scaling
Target tracking scaling policies for Amazon EC2 Auto Scaling
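Both policies from option B can be attached to the same Auto Scaling group, roughly as sketched below; the group name and target values are placeholders.

import boto3

autoscaling = boto3.client("autoscaling")

GROUP = "web-app-asg"   # placeholder Auto Scaling group name

# Predictive scaling: learn daily/weekly patterns and scale ahead of the forecast
autoscaling.put_scaling_policy(
    AutoScalingGroupName=GROUP,
    PolicyName="forecast-capacity",
    PolicyType="PredictiveScaling",
    PredictiveScalingConfiguration={
        "MetricSpecifications": [
            {
                "TargetValue": 50.0,
                "PredefinedMetricPairSpecification": {"PredefinedMetricType": "ASGCPUUtilization"},
            }
        ],
        "Mode": "ForecastAndScale",
    },
)

# Dynamic scaling with target tracking: react to live changes in utilization
autoscaling.put_scaling_policy(
    AutoScalingGroupName=GROUP,
    PolicyName="track-cpu-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)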
Question # 41
A company deployed a serverless application that uses Amazon DynamoDB as a database layer. The application has experienced a large increase in users. The company wants to improve database response time from milliseconds to microseconds and to cache requests to the database. Which solution will meet these requirements with the LEAST operational overhead?
A. Use DynamoDB Accelerator (DAX).
B. Migrate the database to Amazon Redshift.
C. Migrate the database to Amazon RDS.
D. Use Amazon ElastiCache for Redis.
Answer: A
Explanation: DynamoDB Accelerator (DAX) is a fully managed, highly available caching
service built for Amazon DynamoDB. DAX delivers up to a 10 times performance
improvement—from milliseconds to microseconds—even at millions of requests per
second. DAX does all the heavy lifting required to add in-memory acceleration to your
DynamoDB tables, without requiring developers to manage cache invalidation, data
population, or cluster management. Now you can focus on building great applications for
your customers without worrying about performance at scale. You do not need to modify
application logic because DAX is compatible with existing DynamoDB API calls. This
solution will meet the requirements with the least operational overhead, as it does not
require any code development or manual intervention. References:
1 provides an overview of Amazon DynamoDB Accelerator (DAX) and its benefits.
2 explains how to use DAX with DynamoDB for in-memory acceleration.
Question # 42
An online video game company must maintain ultra-low latency for its game servers. The game servers run on Amazon EC2 instances. The company needs a solution that can handle millions of UDP internet traffic requests each second. Which solution will meet these requirements MOST cost-effectively?
A. Configure an Application Load Balancer with the required protocol and ports for the internet traffic. Specify the EC2 instances as the targets. B. Configure a Gateway Load Balancer for the internet traffic. Specify the EC2 instances as the targets. C. Configure a Network Load Balancer with the required protocol and ports for the internet traffic. Specify the EC2 instances as the targets. D. Launch an identical set of game servers on EC2 instances in separate AWS Regions. Route internet traffic to both sets of EC2 instances.
Answer: C
Explanation: The most cost-effective solution for the online video game company is to
configure a Network Load Balancer with the required protocol and ports for the internet
traffic and specify the EC2 instances as the targets. This solution will enable the company
to handle millions of UDP requests per second with ultra-low latency and high performance.
A Network Load Balancer is a type of Elastic Load Balancing that operates at the
connection level (Layer 4) and routes traffic to targets (EC2 instances, microservices, or
containers) within Amazon VPC based on IP protocol data. A Network Load Balancer is
ideal for load balancing of both TCP and UDP traffic, as it is capable of handling millions of
requests per second while maintaining high throughput at ultra-low latency. A Network
Load Balancer also preserves the source IP address of the clients to the back-end
applications, which can be useful for logging or security purposes1.
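To make option C more concrete, a minimal boto3 sketch might create the Network Load Balancer, a UDP target group, and a UDP listener as shown below; the subnet IDs, VPC ID, and game port are placeholders, not values from the scenario:

import boto3

elbv2 = boto3.client("elbv2")

# Internet-facing Network Load Balancer for the UDP game traffic.
nlb = elbv2.create_load_balancer(
    Name="game-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
)
nlb_arn = nlb["LoadBalancers"][0]["LoadBalancerArn"]

# UDP target group pointing at the EC2 game servers.
tg = elbv2.create_target_group(
    Name="game-servers-udp",
    Protocol="UDP",
    Port=7777,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
    HealthCheckProtocol="TCP",   # UDP target groups health-check over TCP/HTTP
    HealthCheckPort="7777",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Listener that forwards UDP traffic on the game port to the target group.
elbv2.create_listener(
    LoadBalancerArn=nlb_arn,
    Protocol="UDP",
    Port=7777,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)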
Question # 43
A company maintains an Amazon RDS database that maps users to cost centers. The company has accounts in an organization in AWS Organizations. The company needs a solution that will tag all resources that are created in a specific AWS account in the organization. The solution must tag each resource with the cost center ID of the user who created the resource. Which solution will meet these requirements?
A. Move the specific AWS account to a new organizational unit (OU) in Organizations from the management account. Create a service control policy (SCP) that requires all existing resources to have the correct cost center tag before the resources are created. Apply the SCP to the new OU. B. Create an AWS Lambda function to tag the resources after the Lambda function looks up the appropriate cost center from the RDS database. Configure an Amazon EventBridge rule that reacts to AWS CloudTrail events to invoke the Lambda function. C. Create an AWS CloudFormation stack to deploy an AWS Lambda function. Configure the Lambda function to look up the appropriate cost center from the RDS database and to tag resources. Create an Amazon EventBridge scheduled rule to invoke the CloudFormation stack. D. Create an AWS Lambda function to tag the resources with a default value. Configure an Amazon EventBridge rule that reacts to AWS CloudTrail events to invoke the Lambda function when a resource is missing the cost center tag.
Answer: B
Explanation: AWS Lambda is a serverless compute service that lets you run code without
provisioning or managing servers. Lambda can be used to tag resources with the cost
center ID of the user who created the resource, by querying the RDS database that maps
users to cost centers. Amazon EventBridge is a serverless event bus service that enables
event-driven architectures. EventBridge can be configured to react to AWS CloudTrail
events, which are recorded API calls made by or on behalf of the AWS account.
EventBridge can invoke the Lambda function when a resource is created in the specific
AWS account, passing the user identity and resource information as parameters. This
solution will meet the requirements, as it enables automatic tagging of resources based on
the user and cost center mapping.
References:
1 provides an overview of AWS Lambda and its benefits.
2 provides an overview of Amazon EventBridge and its benefits.
3 explains the concept and benefits of AWS CloudTrail events.
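As a rough sketch of option B (the event shape shown, the cost-center lookup, and all names are illustrative assumptions, not part of the exam material), a Lambda function invoked by an EventBridge rule that matches CloudTrail API-call events such as RunInstances could tag the new resource like this:

import boto3

ec2 = boto3.client("ec2")

# Example EventBridge event pattern attached to the rule (shown for context):
# {"source": ["aws.ec2"], "detail-type": ["AWS API Call via CloudTrail"],
#  "detail": {"eventName": ["RunInstances"]}}

def lookup_cost_center(user_name):
    # Placeholder for the query against the RDS table that maps users to cost centers,
    # e.g. SELECT cost_center FROM user_cost_centers WHERE user_name = %s
    return "CC-1234"

def handler(event, context):
    # EventBridge delivers the CloudTrail record under "detail".
    detail = event["detail"]
    user = detail["userIdentity"].get("userName", "unknown")
    cost_center = lookup_cost_center(user)

    # Example for EC2 RunInstances events; other services expose resource IDs differently.
    instance_ids = [
        item["instanceId"]
        for item in detail.get("responseElements", {})
                          .get("instancesSet", {})
                          .get("items", [])
    ]
    if instance_ids:
        ec2.create_tags(
            Resources=instance_ids,
            Tags=[{"Key": "CostCenter", "Value": cost_center}],
        )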
Question # 44
A company is designing a tightly coupled high performance computing (HPC) environment in the AWS Cloud. The company needs to include features that will optimize the HPC environment for networking and storage. Which combination of solutions will meet these requirements? (Select TWO.)
A. Create an accelerator in AWS Global Accelerator. Configure custom routing for the accelerator. B. Create an Amazon FSx for Lustre file system. Configure the file system with scratch storage. C. Create an Amazon CloudFront distribution. Configure the viewer protocol policy to be HTTP and HTTPS. D. Launch Amazon EC2 instances. Attach an Elastic Fabric Adapter (EFA) to the instances. E. Create an AWS Elastic Beanstalk deployment to manage the environment.
Answer: B,D
Explanation: These two solutions will optimize the HPC environment for networking and
storage. Amazon FSx for Lustre is a fully managed service that provides cost-effective,
high-performance, scalable storage for compute workloads. It is built on the world’s most
popular high-performance file system, Lustre, which is designed for applications that
require fast storage, such as HPC and machine learning. By configuring the file system
with scratch storage, you can achieve sub-millisecond latencies, up to hundreds of GBs/s
of throughput, and millions of IOPS. Scratch file systems are ideal for temporary storage
and shorter-term processing of data. Data is not replicated and does not persist if a file
server fails. For more information, see Amazon FSx for Lustre.
Elastic Fabric Adapter (EFA) is a network interface for Amazon EC2 instances that enables
customers to run applications requiring high levels of inter-node communications at scale
on AWS. Its custom-built operating system (OS) bypass hardware interface enhances the
performance of inter-instance communications, which is critical to scaling HPC and
machine learning applications. EFA provides a low-latency, low-jitter channel for inter-instance
communications, enabling your tightly-coupled HPC or distributed machine
learning applications to scale to thousands of cores. EFA uses libfabric interface and
libfabric APIs for communications, which are supported by most HPC programming
models. For more information, see Elastic Fabric Adapter. The other solutions are not suitable for optimizing the HPC environment for networking and
storage. AWS Global Accelerator is a networking service that helps you improve the
availability, performance, and security of your public applications by using the AWS global
network. It provides two global static public IPs, deterministic routing, fast failover, and TCP
termination at the edge for your application endpoints. However, it does not support OS-bypass
capabilities or high-performance file systems that are required for HPC and
machine learning applications. For more information, see AWS Global Accelerator.
Amazon CloudFront is a content delivery network (CDN) service that securely delivers
data, videos, applications, and APIs to customers globally with low latency, high transfer
speeds, all within a developer-friendly environment. CloudFront is integrated with AWS
services such as Amazon S3, Amazon EC2, AWS Elemental Media Services, AWS Shield,
AWS WAF, and AWS Lambda@Edge. However, CloudFront is not designed for HPC and
machine learning applications that require high levels of inter-node communications and
fast storage. For more information, see [Amazon CloudFront].
AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web
applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go,
and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. You can
simply upload your code and Elastic Beanstalk automatically handles the deployment, from
capacity provisioning, load balancing, auto-scaling to application health monitoring.
However, Elastic Beanstalk is not optimized for HPC and machine learning applications
that require OS-bypass capabilities and high-performance file systems. For more
information, see [AWS Elastic Beanstalk].
References: Amazon FSx for Lustre, Elastic Fabric Adapter, AWS Global Accelerator,
[Amazon CloudFront], [AWS Elastic Beanstalk].
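As an illustration of options B and D (values such as the subnet ID, AMI, instance type, and storage capacity are placeholders, not from the exam scenario), a boto3 sketch could create the scratch file system and launch an EFA-enabled instance:

import boto3

fsx = boto3.client("fsx")
ec2 = boto3.client("ec2")

# Scratch FSx for Lustre file system for short-lived, high-throughput HPC data.
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,  # GiB; minimum size for a scratch deployment
    SubnetIds=["subnet-aaaa1111"],
    LustreConfiguration={"DeploymentType": "SCRATCH_2"},
)

# Launch an EFA-capable instance with an Elastic Fabric Adapter as its network interface.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5n.18xlarge",       # an EFA-supported instance type
    MinCount=1,
    MaxCount=1,
    NetworkInterfaces=[
        {
            "DeviceIndex": 0,
            "SubnetId": "subnet-aaaa1111",
            "InterfaceType": "efa",
            "Groups": ["sg-0123456789abcdef0"],
        }
    ],
)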
Question # 45
A company is running a photo hosting service in the us-east-1 Region. The service enables users across multiple countries to upload and view photos. Some photos are heavily viewed for months, and others are viewed for less than a week. The application allows uploads of up to 20 MB for each photo. The service uses the photo metadata to determine which photos to display to each user. Which solution provides the appropriate user access MOST cost-effectively?
A. Store the photos in Amazon DynamoDB. Turn on DynamoDB Accelerator (DAX) to cache frequently viewed items. B. Store the photos in the Amazon S3 Intelligent-Tiering storage class. Store the photo metadata and its S3 location in DynamoDB. C. Store the photos in the Amazon S3 Standard storage class. Set up an S3 Lifecycle policy to move photos older than 30 days to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Use the object tags to keep track of metadata. D. Store the photos in the Amazon S3 Glacier storage class. Set up an S3 Lifecycle policy to move photos older than 30 days to the S3 Glacier Deep Archive storage class. Store the photo metadata and its S3 location in Amazon OpenSearch Service.
Answer: B
Explanation: This solution provides the appropriate user access most cost-effectively
because it uses the Amazon S3 Intelligent-Tiering storage class, which automatically
optimizes storage costs by moving data to the most cost-effective access tier when access patterns change, without performance impact or operational overhead1. This storage class
is ideal for data with unknown, changing, or unpredictable access patterns, such as photos
that are heavily viewed for months or less than a week. By storing the photo metadata and
its S3 location in DynamoDB, the application can quickly query and retrieve the relevant
photos for each user. DynamoDB is a fast, scalable, and fully managed NoSQL database
service that supports key-value and document data models2.
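A minimal sketch of option B follows; the bucket name, table name, and item attributes are assumptions made only for illustration:

import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("PhotoMetadata")   # assumed table name

def store_photo(file_path, bucket, key, user_id):
    # Upload the photo directly into the Intelligent-Tiering storage class.
    s3.upload_file(
        file_path,
        bucket,
        key,
        ExtraArgs={"StorageClass": "INTELLIGENT_TIERING"},
    )
    # Keep queryable metadata, including the object's S3 location, in DynamoDB.
    table.put_item(
        Item={
            "photo_id": key,
            "owner": user_id,
            "s3_bucket": bucket,
            "s3_key": key,
        }
    )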
Question # 46
A company is designing a new web application that will run on Amazon EC2 instances. The application will use Amazon DynamoDB for backend data storage. The application traffic will be unpredictable. The company expects that the application read and write throughput to the database will be moderate to high. The company needs to scale in response to application traffic. Which DynamoDB table configuration will meet these requirements MOST cost-effectively?
A. Configure DynamoDB with provisioned read and write by using the DynamoDB Standard table class. Set DynamoDB auto scaling to a maximum defined capacity. B. Configure DynamoDB in on-demand mode by using the DynamoDB Standard table class. C. Configure DynamoDB with provisioned read and write by using the DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class. Set DynamoDB auto scaling to a maximum defined capacity. D. Configure DynamoDB in on-demand mode by using the DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class.
Answer: B
Explanation: The most cost-effective DynamoDB table configuration for the web
application is to configure DynamoDB in on-demand mode by using the DynamoDB
Standard table class. This configuration will allow the company to scale in response to
application traffic and pay only for the read and write requests that the application performs
on the table.
On-demand mode is a flexible billing option that can handle thousands of requests per
second without capacity planning. On-demand mode automatically adjusts the table’s
capacity based on the incoming traffic, and charges only for the read and write requests
that are actually performed. On-demand mode is suitable for applications with
unpredictable or variable workloads, or applications that prefer the ease of paying for only
what they use1.
The DynamoDB Standard table class is the default and recommended table class for most
workloads. The DynamoDB Standard table class offers lower throughput costs than the
DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA) table class, and is more
cost-effective for tables where throughput is the dominant cost. The DynamoDB Standard
table class also offers the same performance, durability, and availability as the DynamoDB
Standard-IA table class2. The other options are not correct because they are either not cost-effective or not suitable
for the use case. Configuring DynamoDB with provisioned read and write by using the
DynamoDB Standard table class, and setting DynamoDB auto scaling to a maximum
defined capacity is not correct because this configuration requires manual estimation and
management of the table’s capacity, which adds complexity and cost to the solution.
Provisioned mode is a billing option that requires users to specify the amount of read and
write capacity units for their tables, and charges for the reserved capacity regardless of
usage. Provisioned mode is suitable for applications with predictable or stable workloads,
or applications that require finer-grained control over their capacity settings1. Configuring
DynamoDB with provisioned read and write by using the DynamoDB Standard-Infrequent
Access (DynamoDB Standard-IA) table class, and setting DynamoDB auto scaling to a
maximum defined capacity is not correct because this configuration is not cost-effective for
tables with moderate to high throughput. The DynamoDB Standard-IA table class offers
lower storage costs than the DynamoDB Standard table class, but higher throughput costs.
The DynamoDB Standard-IA table class is optimized for tables where storage is the
dominant cost, such as tables that store infrequently accessed data2. Configuring
DynamoDB in on-demand mode by using the DynamoDB Standard-Infrequent Access
(DynamoDB Standard-IA) table class is not correct because this configuration is not cost-effective
for tables with moderate to high throughput. As mentioned above, the DynamoDB
Standard-IA table class has higher throughput costs than the DynamoDB Standard table
class, which can offset the savings from lower storage costs.
References:
Table classes - Amazon DynamoDB
Read/write capacity mode - Amazon DynamoDB
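For illustration only (the table name and key schema are placeholders), creating a table in on-demand mode with the default Standard table class looks like this in boto3:

import boto3

dynamodb = boto3.client("dynamodb")

# On-demand (PAY_PER_REQUEST) billing: no capacity planning, pay per request.
dynamodb.create_table(
    TableName="AppData",
    AttributeDefinitions=[
        {"AttributeName": "pk", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "pk", "KeyType": "HASH"},
    ],
    BillingMode="PAY_PER_REQUEST",
    # TableClass is omitted and therefore defaults to STANDARD, matching option B.
)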
Question # 47
A company's web application that is hosted in the AWS Cloud recently increased in popularity. The web application currently exists on a single Amazon EC2 instance in a single public subnet. The web application has not been able to meet the demand of the increased web traffic. The company needs a solution that will provide high availability and scalability to meet the increased user demand without rewriting the web application. Which combination of steps will meet these requirements? (Select TWO.)
A. Replace the EC2 instance with a larger compute optimized instance. B. Configure Amazon EC2 Auto Scaling with multiple Availability Zones in private subnets. C. Configure a NAT gateway in a public subnet to handle web requests. D. Replace the EC2 instance with a larger memory optimized instance. E. Configure an Application Load Balancer in a public subnet to distribute web traffic.
Answer: B,E
Explanation:
These two steps will meet the requirements because they will provide high availability and
scalability for the web application without rewriting it. Amazon EC2 Auto Scaling allows you
to automatically adjust the number of EC2 instances in response to changes in demand. By
configuring Auto Scaling with multiple Availability Zones in private subnets, you can ensure
that your web application is distributed across isolated and fault-tolerant locations, and that
your instances are not directly exposed to the internet. An Application Load Balancer
operates at the application layer and distributes incoming web traffic across multiple
targets, such as EC2 instances, containers, or Lambda functions. By configuring an
Application Load Balancer in a public subnet, you can enable your web application to
handle requests from the internet and route them to the appropriate targets in the private
subnets.
References:
What is Amazon EC2 Auto Scaling?
What is an Application Load Balancer?
Question # 48
A company is designing a web application on AWS. The application will use a VPN connection between the company's existing data centers and the company's VPCs. The company uses Amazon Route 53 as its DNS service. The application must use private DNS records to communicate with the on-premises services from a VPC. Which solution will meet these requirements in the MOST secure manner?
A. Create a Route 53 Resolver outbound endpoint. Create a resolver rule. Associate the resolver rule with the VPC. B. Create a Route 53 Resolver inbound endpoint. Create a resolver rule. Associate the resolver rule with the VPC. C. Create a Route 53 private hosted zone. Associate the private hosted zone with the VPC. D. Create a Route 53 public hosted zone. Create a record for each service to allow service communication.
Answer: A
Explanation: To meet the requirements of the web application in the most secure manner,
the company should create a Route 53 Resolver outbound endpoint, create a resolver rule,
and associate the resolver rule with the VPC. This solution will allow the application to use
private DNS records to communicate with the on-premises services from a VPC. Route 53
Resolver is a service that enables DNS resolution between on-premises networks and
AWS VPCs. An outbound endpoint is a set of IP addresses that Resolver uses to forward
DNS queries from a VPC to resolvers on an on-premises network. A resolver rule is a rule
that specifies the domain names for which Resolver forwards DNS queries to the IP
addresses that you specify in the rule. By creating an outbound endpoint and a resolver
rule, and associating them with the VPC, the company can securely resolve DNS queries
for the on-premises services using private DNS records12.
The other options are not correct because they do not meet the requirements or are not
secure. Creating a Route 53 Resolver inbound endpoint, creating a resolver rule, and
associating the resolver rule with the VPC is not correct because this solution will allow
DNS queries from on-premises networks to access resources in a VPC, not vice versa. An
inbound endpoint is a set of IP addresses that Resolver uses to receive DNS queries from
resolvers on an on-premises network1. Creating a Route 53 private hosted zone and
associating it with the VPC is not correct because this solution will only allow DNS
resolution for resources within the VPC or other VPCs that are associated with the same
hosted zone. A private hosted zone is a container for DNS records that are only accessible
from one or more VPCs3. Creating a Route 53 public hosted zone and creating a record for
each service to allow service communication is not correct because this solution will expose the on-premises services to the public internet, which is not secure. A public hosted
zone is a container for DNS records that are accessible from anywhere on the internet3.
References:
Resolving DNS queries between VPCs and your network - Amazon Route 53
Working with rules - Amazon Route 53
Working with private hosted zones - Amazon Route 53
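A minimal boto3 sketch of option A follows; the domain name, target IP addresses, subnet IDs, security group, and VPC ID are assumptions chosen only to show the shape of the calls:

import boto3

resolver = boto3.client("route53resolver")

# Outbound endpoint: the interfaces Resolver uses to forward queries toward on premises.
endpoint = resolver.create_resolver_endpoint(
    CreatorRequestId="outbound-2024-01",
    Name="to-onprem",
    Direction="OUTBOUND",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    IpAddresses=[
        {"SubnetId": "subnet-aaaa1111"},
        {"SubnetId": "subnet-bbbb2222"},
    ],
)

# Forwarding rule for the private on-premises domain.
rule = resolver.create_resolver_rule(
    CreatorRequestId="rule-2024-01",
    Name="corp-example-com",
    RuleType="FORWARD",
    DomainName="corp.example.com",
    ResolverEndpointId=endpoint["ResolverEndpoint"]["Id"],
    TargetIps=[{"Ip": "10.0.10.53", "Port": 53}],
)

# Associate the rule with the VPC so its DNS queries for that domain are forwarded.
resolver.associate_resolver_rule(
    ResolverRuleId=rule["ResolverRule"]["Id"],
    VPCId="vpc-0123456789abcdef0",
)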
Question # 49
A media company stores movies in Amazon S3. Each movie is stored in a single video file that ranges from 1 GB to 10 GB in size. The company must be able to provide the streaming content of a movie within 5 minutes of a user purchase. There is higher demand for movies that are less than 20 years old than for movies that are more than 20 years old. The company wants to minimize hosting service costs based on demand. Which solution will meet these requirements?
A. Store all media content in Amazon S3. Use S3 Lifecycle policies to move media data into the Infrequent Access tier when the demand for a movie decreases. B. Store newer movie video files in S3 Standard. Store older movie video files in S3 Standard-Infrequent Access (S3 Standard-IA). When a user orders an older movie, retrieve the video file by using standard retrieval. C. Store newer movie video files in S3 Intelligent-Tiering. Store older movie video files in S3 Glacier Flexible Retrieval. When a user orders an older movie, retrieve the video file by using expedited retrieval. D. Store newer movie video files in S3 Standard. Store older movie video files in S3 Glacier Flexible Retrieval. When a user orders an older movie, retrieve the video file by using bulk retrieval.
Answer: C
Explanation: This solution will meet the requirements of minimizing hosting service costs
based on demand and providing the streaming content of a movie within 5 minutes of a user purchase. S3 Intelligent-Tiering is a storage class that automatically optimizes storage
costs by moving data to the most cost-effective access tier when access patterns
change. It is suitable for data with unknown, changing, or unpredictable access patterns,
such as newer movies that may have higher demand1. S3 Glacier Flexible Retrieval is a
storage class that provides low-cost storage for archive data that is retrieved
asynchronously. It offers flexible data retrieval options from minutes to hours, and free bulk
retrievals in 5-12 hours. It is ideal for backup, disaster recovery, and offsite data storage
needs2. By using expedited retrieval, the user can access the older movie video file in 1-5
minutes, which meets the requirement of 5 minutes3.
References:
Amazon S3 Glacier Flexible Retrieval and Glacier Deep Archive Retrieval, Amazon S3 Glacier Flexible Retrieval section
Amazon S3 Glacier Flexible Retrieval and Glacier Deep Archive Retrieval, Retrieval Rates section
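To illustrate the expedited retrieval step in option C (bucket and key names are placeholders), a boto3 sketch might look like this:

import boto3

s3 = boto3.client("s3")

# Expedited restore of an archived movie file from S3 Glacier Flexible Retrieval.
s3.restore_object(
    Bucket="movie-archive",
    Key="older/matinee-classic.mp4",
    RestoreRequest={
        "Days": 1,  # how long the temporary restored copy remains available
        "GlacierJobParameters": {"Tier": "Expedited"},  # typically completes in 1-5 minutes
    },
)

# Poll until the restore completes, then the object can be streamed to the user.
head = s3.head_object(Bucket="movie-archive", Key="older/matinee-classic.mp4")
print(head.get("Restore"))  # e.g. 'ongoing-request="false", expiry-date="..."'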
Question # 50
A business application is hosted on Amazon EC2 and uses Amazon S3 for encrypted object storage. The chief information security officer has directed that no application traffic between the two services should traverse the public internet. Which capability should the solutions architect use to meet the compliance requirements?
A. AWS Key Management Service (AWS KMS) B. VPC endpoint C. Private subnet D. Virtual private gateway
Question # 51
To meet security requirements, a company needs to encrypt all of its application data in transit while communicating with an Amazon RDS MySQL DB instance. A recent security audit revealed that encryption at rest is enabled using AWS Key Management Service (AWS KMS), but data in transit is not enabled. What should a solutions architect do to satisfy the security requirements?
A. Enable IAM database authentication on the database. B. Provide self-signed certificates. Use the certificates in all connections to the RDS instance. C. Take a snapshot of the RDS instance. Restore the snapshot to a new instance with encryption enabled. D. Download AWS-provided root certificates. Provide the certificates in all connections to the RDS instance.
Answer: D
Explanation: To satisfy the security requirements, the solutions architect should download
AWS-provided root certificates and provide the certificates in all connections to the RDS
instance. This will enable SSL/TLS encryption for data in transit between the application
and the RDS instance. SSL/TLS encryption provides a layer of security by encrypting data
that moves between the client and the server. Amazon RDS creates an SSL certificate and installs the certificate on the DB instance when the instance is provisioned. The application
can use the AWS-provided root certificates to verify the identity of the DB instance and
establish a secure connection1.
The other options are not correct because they do not enable encryption for data in transit
or are not relevant for the use case. Enabling IAM database authentication on the database
is not correct because this option only provides a method of authentication, not encryption.
IAM database authentication allows users to use AWS Identity and Access Management
(IAM) users and roles to access a database, instead of using a database user name and
password2. Providing self-signed certificates is not correct because this option is not
secure or reliable. Self-signed certificates are certificates that are signed by the same entity
that issued them, instead of by a trusted certificate authority (CA). Self-signed certificates
can be easily forged or compromised, and are not recognized by most browsers and
applications3. Taking a snapshot of the RDS instance and restoring it to a new instance
with encryption enabled is not correct because this option only enables encryption at rest,
not encryption in transit. Encryption at rest protects data that is stored on disk, but does not
protect data that is moving between the client and the server4.
References:
Using SSL/TLS to encrypt a connection to a DB instance - Amazon Relational
Database Service
IAM database authentication for MySQL and PostgreSQL - Amazon Relational
Database Service
What are self-signed certificates?
Encrypting Amazon RDS resources - Amazon Relational Database Service
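A short sketch of option D, assuming the PyMySQL client library and placeholder hostnames, credentials, and certificate path (the certificate bundle is the AWS-provided global-bundle.pem downloaded to the application host):

import pymysql  # assumes the PyMySQL package is installed

# Connect to RDS for MySQL over TLS, validating the server against the
# AWS-provided root certificate bundle.
connection = pymysql.connect(
    host="mydb.abcdefghijkl.us-east-1.rds.amazonaws.com",
    user="app_user",
    password="example-password",
    database="appdb",
    ssl={"ca": "/opt/certs/global-bundle.pem"},
)

with connection.cursor() as cursor:
    # Confirm the session is actually encrypted.
    cursor.execute("SHOW STATUS LIKE 'Ssl_cipher'")
    print(cursor.fetchone())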
Question # 52
A company stores text files in Amazon S3. The text files include customer chat messages, date and time information, and customer personally identifiable information (PII). The company needs a solution to provide samples of the conversations to an external service provider for quality control. The external service provider needs to randomly pick sample conversations up to the most recent conversation. The company must not share the customer PII with the external service provider. The solution must scale when the number of customer conversations increases. Which solution will meet these requirements with the LEAST operational overhead?
A. Create an Object Lambda Access Point. Create an AWS Lambda function that redacts the PII when the function reads the file. Instruct the external service provider to access the Object Lambda Access Point. B. Create a batch process on an Amazon EC2 instance that regularly reads all new files, redacts the PII from the files, and writes the redacted files to a different S3 bucket. Instruct the external service provider to access the bucket that does not contain the PII. C. Create a web application on an Amazon EC2 instance that presents a list of the files, redacts the PII from the files, and allows the external service provider to download new versions of the files that have the PII redacted. D. Create an Amazon DynamoDB table. Create an AWS Lambda function that reads only the data in the files that does not contain PII. Configure the Lambda function to store the non-PII data in the DynamoDB table when a new file is written to Amazon S3. Grant the external service provider access to the DynamoDB table.
Answer: A
Explanation: The correct solution is to create an Object Lambda Access Point and an
AWS Lambda function that redacts the PII when the function reads the file. This way, the
company can use the S3 Object Lambda feature to modify the S3 object content on the fly,
without creating a copy or changing the original object. The external service provider can
access the Object Lambda Access Point and get the redacted version of the file. This
solution has the least operational overhead because it does not require any additional
storage, processing, or synchronization. The solution also scales automatically with the
number of customer conversations and the demand from the external service provider. The
other options are incorrect because: Option B is using a batch process on an EC2 instance to read, redact, and write
the files to a different S3 bucket. This solution has more operational overhead
because it requires managing the EC2 instance, the batch process, and the
additional S3 bucket. It also introduces latency and inconsistency between the
original and the redacted files.
Option C is using a web application on an EC2 instance to present, redact, and
download the files. This solution has more operational overhead because it
requires managing the EC2 instance, the web application, and the download
process. It also exposes the original files to the web application, which increases
the risk of leaking the PII.
Option D is using a DynamoDB table and a Lambda function to store the non-PII
data from the files. This solution has more operational overhead because it
requires managing the DynamoDB table, the Lambda function, and the data
transformation. It also changes the format and the structure of the original files,
which may affect the quality control process.
References:
S3 Object Lambda
Object Lambda Access Point
Lambda function
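A minimal sketch of the S3 Object Lambda handler from option A follows; the email-only regex is a stand-in for real PII detection (a production implementation might call a dedicated PII detection service), and all names are assumptions:

import re
import urllib.request
import boto3

s3 = boto3.client("s3")

# Very rough redaction placeholder.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def handler(event, context):
    ctx = event["getObjectContext"]
    # Fetch the original (unredacted) object through the presigned URL S3 supplies.
    original = urllib.request.urlopen(ctx["inputS3Url"]).read().decode("utf-8")

    redacted = EMAIL.sub("[REDACTED]", original)

    # Return the transformed content to the caller of the Object Lambda Access Point.
    s3.write_get_object_response(
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
        Body=redacted.encode("utf-8"),
    )
    return {"statusCode": 200}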
Question # 53
A company wants to deploy its containerized application workloads to a VPC across three Availability Zones. The company needs a solution that is highly available across Availability Zones. The solution must require minimal changes to the application. Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon Elastic Container Service (Amazon ECS). Configure Amazon ECS Service Auto Scaling to use target tracking scaling. Set the minimum capacity to 3. Set the task placement strategy type to spread with an Availability Zone attribute. B. Use Amazon Elastic Kubernetes Service (Amazon EKS) self-managed nodes. Configure Application Auto Scaling to use target tracking scaling. Set the minimum capacity to 3. C. Use Amazon EC2 Reserved Instances. Launch three EC2 instances in a spread placement group. Configure an Auto Scaling group to use target tracking scaling. Set the minimum capacity to 3. D. Use an AWS Lambda function. Configure the Lambda function to connect to a VPC. Configure Application Auto Scaling to use Lambda as a scalable target. Set the minimum capacity to 3.
Answer: A
Explanation: The company wants to deploy its containerized application workloads to a
VPC across three Availability Zones, with high availability and minimal changes to the
application. The solution that will meet these requirements with the least operational
overhead is:
Use Amazon Elastic Container Service (Amazon ECS). Amazon ECS is a fully
managed container orchestration service that allows you to run and scale
containerized applications on AWS. Amazon ECS eliminates the need for you to
install, operate, and scale your own cluster management infrastructure. Amazon
ECS also integrates with other AWS services, such as VPC, ELB,
CloudFormation, CloudWatch, IAM, and more.
Configure Amazon ECS Service Auto Scaling to use target tracking scaling.
Amazon ECS Service Auto Scaling allows you to automatically adjust the number
of tasks in your service based on the demand or custom metrics. Target tracking
scaling is a policy type that adjusts the number of tasks in your service to keep a
specified metric at a target value. For example, you can use target tracking scaling
to maintain a target CPU utilization or request count per task for your service.
Set the minimum capacity to 3. This ensures that your service always has at least
three tasks running across three Availability Zones, providing high availability and
fault tolerance for your application.
Set the task placement strategy type to spread with an Availability Zone attribute.
This ensures that your tasks are evenly distributed across the Availability Zones in
your cluster, maximizing the availability of your service.
This solution will provide high availability across Availability Zones, require minimal
changes to the application, and reduce the operational overhead of managing your own
cluster infrastructure.
References: Amazon Elastic Container Service
Amazon ECS Service Auto Scaling
Target Tracking Scaling Policies for Amazon ECS Services
Amazon ECS Task Placement Strategies
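For illustration, a boto3 sketch of option A might look like the following; the cluster name, service name, task definition, and capacity limits are assumptions, and an EC2 launch type is assumed so that placement strategies apply:

import boto3

ecs = boto3.client("ecs")
autoscaling = boto3.client("application-autoscaling")

# Service with at least three tasks spread across Availability Zones.
ecs.create_service(
    cluster="shop-cluster",
    serviceName="web",
    taskDefinition="web:1",
    desiredCount=3,
    placementStrategy=[
        {"type": "spread", "field": "attribute:ecs.availability-zone"},
    ],
)

# Register the service's desired count as a scalable target, minimum capacity 3.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/shop-cluster/web",
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=3,
    MaxCapacity=12,
)

# Target tracking policy on average service CPU utilization.
autoscaling.put_scaling_policy(
    PolicyName="web-cpu-target",
    ServiceNamespace="ecs",
    ResourceId="service/shop-cluster/web",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)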
Question # 54
A company needs to use its on-premises LDAP directory service to authenticate its users to the AWS Management Console. The directory service is not compatible with Security Assertion Markup Language (SAML). Which solution meets these requirements?
A. Enable AWS IAM Identity Center (AWS Single Sign-On) between AWS and the on-premises LDAP. B. Create an IAM policy that uses AWS credentials, and integrate the policy into LDAP. C. Set up a process that rotates the IAM credentials whenever LDAP credentials are updated. D. Develop an on-premises custom identity broker application or process that uses AWS Security Token Service (AWS STS) to get short-lived credentials.
Answer: D
Explanation: The solution that meets the requirements is to develop an on-premises
custom identity broker application or process that uses AWS Security Token Service (AWS
STS) to get short-lived credentials. This solution allows the company to use its existing LDAP directory service to authenticate its users to the AWS Management Console, without
requiring SAML compatibility. The custom identity broker application or process can act as
a proxy between the LDAP directory service and AWS STS, and can request temporary
security credentials for the users based on their LDAP attributes and roles. The users can
then use these credentials to access the AWS Management Console via a sign-in URL
generated by the identity broker. This solution also enhances security by using short-lived
credentials that expire after a specified duration.
The other solutions do not meet the requirements because they either require SAML
compatibility or do not provide access to the AWS Management Console. Enabling AWS
IAM Identity Center (AWS Single Sign-On) between AWS and the on-premises LDAP
would require the LDAP directory service to support SAML 2.0, which is not the case for
this scenario. Creating an IAM policy that uses AWS credentials and integrating the policy
into LDAP would not provide access to the AWS Management Console, but only to the
AWS APIs. Setting up a process that rotates the IAM credentials whenever LDAP
credentials are updated would also not provide access to the AWS Management Console,
but only to the AWS CLI. Therefore, these solutions are not suitable for the given
requirements.
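As a sketch of the identity broker flow in option D (the LDAP authentication step is omitted, and the role ARN, issuer, and durations are placeholders), the broker can exchange short-lived STS credentials for a console sign-in URL through the AWS federation endpoint:

import json
import urllib.parse
import urllib.request
import boto3

sts = boto3.client("sts")

def console_signin_url(role_arn, user_name):
    # 1. After validating the user against LDAP (not shown), request short-lived credentials.
    creds = sts.assume_role(
        RoleArn=role_arn,
        RoleSessionName=user_name,
        DurationSeconds=3600,
    )["Credentials"]

    # 2. Exchange the credentials for a federation sign-in token.
    session = json.dumps({
        "sessionId": creds["AccessKeyId"],
        "sessionKey": creds["SecretAccessKey"],
        "sessionToken": creds["SessionToken"],
    })
    token_url = (
        "https://signin.aws.amazon.com/federation"
        "?Action=getSigninToken&Session=" + urllib.parse.quote_plus(session)
    )
    signin_token = json.loads(urllib.request.urlopen(token_url).read())["SigninToken"]

    # 3. Build the console login URL the broker hands back to the user.
    return (
        "https://signin.aws.amazon.com/federation"
        "?Action=login&Issuer=corp-idp"
        "&Destination=" + urllib.parse.quote_plus("https://console.aws.amazon.com/")
        + "&SigninToken=" + signin_token
    )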
Question # 55
A company wants to migrate its on-premises Microsoft SQL Server Enterprise edition database to AWS. The company's online application uses the database to process transactions. The data analysis team uses the same production database to run reports for analytical processing. The company wants to reduce operational overhead by moving to managed services wherever possible. Which solution will meet these requirements with the LEAST operational overhead?
A. Migrate to Amazon RDS for Microsoft SQL Server. Use read replicas for reporting purposes. B. Migrate to Microsoft SQL Server on Amazon EC2. Use Always On read replicas for reporting purposes. C. Migrate to Amazon DynamoDB. Use DynamoDB on-demand replicas for reporting purposes. D. Migrate to Amazon Aurora MySQL. Use Aurora read replicas for reporting purposes.
Answer: A
Explanation: Amazon RDS for Microsoft SQL Server is a fully managed service that offers
SQL Server 2014, 2016, 2017, and 2019 editions while offloading database administration
tasks such as backups, patching, and scaling. Amazon RDS supports read replicas, which
are read-only copies of the primary database that can be used for reporting purposes
without affecting the performance of the online application. This solution will meet the
requirements with the least operational overhead, as it does not require any code changes
or manual intervention.
References:
1 provides an overview of Amazon RDS for Microsoft SQL Server and its benefits.
2 explains how to create and use read replicas with Amazon RDS.
Question # 56
A company's website is used to sell products to the public. The site runs on Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer (ALB). There is also an Amazon CloudFront distribution, and AWS WAF is being used to protect against SQL injection attacks. The ALB is the origin for the CloudFront distribution. A recent review of security logs revealed an external malicious IP that needs to be blocked from accessing the website. What should a solutions architect do to protect the application?
A. Modify the network ACL on the CloudFront distribution to add a deny rule for the malicious IP address. B. Modify the configuration of AWS WAF to add an IP match condition to block the malicious IP address. C. Modify the network ACL for the EC2 instances in the target groups behind the ALB to deny the malicious IP address. D. Modify the security groups for the EC2 instances in the target groups behind the ALB to deny the malicious IP address.
Answer: B
Explanation: AWS WAF is a web application firewall that helps protect web applications
from common web exploits that could affect application availability, compromise security, or
consume excessive resources. AWS WAF allows users to create rules that block, allow, or
count web requests based on customizable web security rules. One of the types of rules
that can be created is an IP match rule, which allows users to specify a list of IP addresses
or IP address ranges that they want to allow or block. By modifying the configuration of
AWS WAF to add an IP match condition to block the malicious IP address, the solution
architect can prevent the attacker from accessing the website through the CloudFront
distribution and the ALB.
The other options are not correct because they do not effectively block the malicious IP
address from accessing the website. Modifying the network ACL on the CloudFront
distribution or the EC2 instances in the target groups behind the ALB will not work because
network ACLs are stateless and do not evaluate traffic at the application layer. Modifying
the security groups for the EC2 instances in the target groups behind the ALB will not work
because security groups are stateful and only evaluate traffic at the instance level, not at
the load balancer level.
References:
AWS WAF
How AWS WAF works
Working with IP match conditions
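To illustrate option B with the current WAFv2 API (the IP address, names, and scope are assumptions; attaching the rule to the existing web ACL via update_web_acl, which also needs the lock token and the full rule list, is omitted), a sketch could look like this:

import boto3

# CLOUDFRONT-scoped resources are managed in us-east-1; use Scope="REGIONAL"
# instead if the web ACL is attached directly to the ALB.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

ip_set = wafv2.create_ip_set(
    Name="blocked-ips",
    Scope="CLOUDFRONT",
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.25/32"],   # the malicious IP from the security logs
)

# Rule statement to add to the existing web ACL.
block_rule = {
    "Name": "block-known-bad-ip",
    "Priority": 0,
    "Statement": {
        "IPSetReferenceStatement": {"ARN": ip_set["Summary"]["ARN"]}
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "blockKnownBadIp",
    },
}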
Question # 57
A company has a web application for travel ticketing. The application is based on a database that runs in a single data center in North America. The company wants to expand the application to serve a global user base. The company needs to deploy the application to multiple AWS Regions. Average latency must be less than 1 second on updates to the reservation database. The company wants to have separate deployments of its web platform across multiple Regions. However, the company must maintain a single primary reservation database that is globally consistent. Which solution should a solutions architect recommend to meet these requirements?
A. Convert the application to use Amazon DynamoDB. Use a global table for the center reservation table. Use the correct Regional endpoint in each Regional deployment. B. Migrate the database to an Amazon Aurora MySQL database. Deploy Aurora Read Replicas in each Region. Use the correct Regional endpoint in each Regional deployment for access to the database. C. Migrate the database to an Amazon RDS for MySQL database. Deploy MySQL read replicas in each Region. Use the correct Regional endpoint in each Regional deployment for access to the database. D. Migrate the application to an Amazon Aurora Serverless database. Deploy instances of the database to each Region. Use the correct Regional endpoint in each Regional deployment to access the database. Use AWS Lambda functions to process event streams in each Region to synchronize the databases.
Question # 58
A company has an application that uses an Amazon DynamoDB table for storage. A solutions architect discovers that many requests to the table are not returning the latest data. The company's users have not reported any other issues with database performance. Latency is in an acceptable range. Which design change should the solutions architect recommend?
A. Add read replicas to the table. B. Use a global secondary index (GSI). C. Request strongly consistent reads for the table. D. Request eventually consistent reads for the table.
Answer: C
Explanation: The most suitable design change for the company’s application is to request
strongly consistent reads for the table. This change will ensure that the requests to the
table return the latest data, reflecting the updates from all prior write operations.
Amazon DynamoDB is a fully managed NoSQL database service that provides fast and
predictable performance with seamless scalability. DynamoDB supports two types of read
consistency: eventually consistent reads and strongly consistent reads. By default,
DynamoDB uses eventually consistent reads, unless users specify otherwise1.
Eventually consistent reads are reads that may not reflect the results of a recently
completed write operation. The response might not include the changes because of the
latency of propagating the data to all replicas. If users repeat their read request after a
short time, the response should return the updated data. Eventually consistent reads are
suitable for applications that do not require up-to-date data or can tolerate eventual
consistency1.
Strongly consistent reads are reads that return a result that reflects all writes that received a successful response prior to the read. Users can request a strongly consistent read by
setting the ConsistentRead parameter to true in their read operations, such as GetItem,
Query, or Scan. Strongly consistent reads are suitable for applications that require up-to-date
data or cannot tolerate eventual consistency1.
The other options are not correct because they do not address the issue of read
consistency or are not relevant for the use case. Adding read replicas to the table is not
correct because this option is not supported by DynamoDB. Read replicas are copies of a
primary database instance that can serve read-only traffic and improve availability and
performance. Read replicas are available for some relational database services, such as
Amazon RDS or Amazon Aurora, but not for DynamoDB2. Using a global secondary index
(GSI) is not correct because this option is not related to read consistency. A GSI is an
index that has a partition key and an optional sort key that are different from those on the
base table. A GSI allows users to query the data in different ways, with eventual
consistency3. Requesting eventually consistent reads for the table is not correct because
this option is already the default behavior of DynamoDB and does not solve the problem of
requests not returning the latest data.
References:
Read consistency - Amazon DynamoDB
Working with read replicas - Amazon Relational Database Service
Working with global secondary indexes - Amazon DynamoDB
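For illustration only (the table name and key are placeholders), the difference between the two read modes comes down to one parameter on the read call:

import boto3

table = boto3.resource("dynamodb").Table("AppTable")

# Default read: eventually consistent, may briefly lag recent writes.
stale_ok = table.get_item(Key={"pk": "user#123"})

# Strongly consistent read: reflects all successful writes made before the read.
latest = table.get_item(Key={"pk": "user#123"}, ConsistentRead=True)
print(latest.get("Item"))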
Question # 59
A company has an AWS Direct Connect connection from its corporate data center to its VPC in the us-east-1 Region. The company recently acquired a corporation that has several VPCs and a Direct Connect connection between its on-premises data center and the eu-west-2 Region. The CIDR blocks for the VPCs of the company and the corporation do not overlap. The company requires connectivity between two Regions and the data centers. The company needs a solution that is scalable while reducing operational overhead. What should a solutions architect do to meet these requirements?
A. Set up inter-Region VPC peering between the VPC in us-east-1 and the VPCs in eu-west-2. B. Create private virtual interfaces from the Direct Connect connection in us-east-1 to the VPCs in eu-west-2. C. Establish VPN appliances in a fully meshed VPN network hosted by Amazon EC2. Use AWS VPN CloudHub to send and receive data between the data centers and each VPC. D. Connect the existing Direct Connect connection to a Direct Connect gateway. Route traffic from the virtual private gateways of the VPCs in each Region to the Direct Connect gateway.
Answer: D
Explanation: This solution meets the requirements because it allows the company to use a
single Direct Connect connection to connect to multiple VPCs in different Regions using a
Direct Connect gateway. A Direct Connect gateway is a globally available resource that
enables you to connect your on-premises network to VPCs in any AWS Region, except the
AWS China Regions. You can associate a Direct Connect gateway with a transit gateway
or a virtual private gateway in each Region. By routing traffic from the virtual private
gateways of the VPCs to the Direct Connect gateway, you can enable inter-Region and on-premises
connectivity for your VPCs. This solution is scalable because you can add more
VPCs in different Regions to the Direct Connect gateway without creating additional
connections. This solution also reduces operational overhead because you do not need to
manage multiple VPN appliances, VPN connections, or VPC peering connections.
References:
Direct Connect gateways
Inter-Region VPC peering
Question # 60
A company has five organizational units (OUs) as part of its organization in AWS Organizations. Each OU correlates to the five businesses that the company owns. The company's research and development (R&D) business is separating from the company and will need its own organization. A solutions architect creates a separate new management account for this purpose. What should the solutions architect do next in the new management account?
A. Have the R&D AWS account be part of both organizations during the transition. B. Invite the R&D AWS account to be part of the new organization after the R&D AWS account has left the prior organization. C. Create a new R&D AWS account in the new organization. Migrate resources from the prior R&D AWS account to the new R&D AWS account. D. Have the R&D AWS account join the new organization. Make the new management account a member of the prior organization.
Answer: B
Explanation: it allows the solutions architect to create a separate organization for the
research and development (R&D) business and move its AWS account to the new organization. By inviting the R&D AWS account to be part of the new organization after it
has left the prior organization, the solutions architect can ensure that there is no overlap or
conflict between the two organizations. The R&D AWS account can accept or decline the
invitation to join the new organization. Once accepted, it will be subject to any policies and
controls applied by the new organization. References:
Inviting an AWS Account to Join Your Organization
Leaving an Organization as a Member Account
Question # 61
A company runs a real-time data ingestion solution on AWS. The solution consists of the most recent version of Amazon Managed Streaming for Apache Kafka (Amazon MSK). The solution is deployed in a VPC in private subnets across three Availability Zones. A solutions architect needs to redesign the data ingestion solution to be publicly available over the internet. The data in transit must also be encrypted. Which solution will meet these requirements with the MOST operational efficiency?
A. Configure public subnets in the existing VPC. Deploy an MSK cluster in the public subnets. Update the MSK cluster security settings to enable mutual TLS authentication. B. Create a new VPC that has public subnets. Deploy an MSK cluster in the public subnets. Update the MSK cluster security settings to enable mutual TLS authentication. C. Deploy an Application Load Balancer (ALB) that uses private subnets. Configure an ALB security group inbound rule to allow inbound traffic from the VPC CIDR block for HTTPS protocol. D. Deploy a Network Load Balancer (NLB) that uses private subnets. Configure an NLB listener for HTTPS communication over the internet.
Answer: A
Explanation: The solution that meets the requirements with the most operational efficiency
is to configure public subnets in the existing VPC and deploy an MSK cluster in the public subnets. This solution allows the data ingestion solution to be publicly available over the
internet without creating a new VPC or deploying a load balancer. The solution also
ensures that the data in transit is encrypted by enabling mutual TLS authentication, which
requires both the client and the server to present certificates for verification. This solution
leverages the public access feature of Amazon MSK, which is available for clusters running
Apache Kafka 2.6.0 or later versions1.
The other solutions are not as efficient as the first one because they either create
unnecessary resources or do not encrypt the data in transit. Creating a new VPC with
public subnets would incur additional costs and complexity for managing network resources
and routing. Deploying an ALB or an NLB would also add more costs and latency for the
data ingestion solution. Moreover, an ALB or an NLB would not encrypt the data in transit
by itself, unless they are configured with HTTPS listeners and certificates, which would
require additional steps and maintenance. Therefore, these solutions are not optimal for the
given requirements.
References:
Public access - Amazon Managed Streaming for Apache Kafka
Question # 62
A company is building a shopping application on AWS. The application offers a catalog that changes once each month and needs to scale with traffic volume. The company wants the lowest possible latency from the application. Data from each user's shopping cart needs to be highly available. User session data must be available even if the user is disconnected and reconnects. What should a solutions architect do to ensure that the shopping cart data is preserved at all times?
A. Configure an Application Load Balancer to enable the sticky sessions feature (session affinity) for access to the catalog in Amazon Aurora. B. Configure Amazon ElastiCache for Redis to cache catalog data from Amazon DynamoDB and shopping cart data from the user's session. C. Configure Amazon OpenSearch Service to cache catalog data from Amazon DynamoDB and shopping cart data from the user's session. D. Configure an Amazon EC2 instance with Amazon Elastic Block Store (Amazon EBS) storage for the catalog and shopping cart. Configure automated snapshots.
Answer: B
Explanation:
To ensure that the shopping cart data is preserved at all times, a solutions architect should
configure Amazon ElastiCache for Redis to cache catalog data from Amazon DynamoDB
and shopping cart data from the user’s session. This solution has the following benefits:
It offers the lowest possible latency from the application, as ElastiCache for Redis
is a blazing fast in-memory data store that provides sub-millisecond latency to
power internet-scale real-time applications1.
It scales with traffic volume, as ElastiCache for Redis supports horizontal scaling
by adding more nodes or shards to the cluster, and vertical scaling by changing
the node type2.
It is highly available, as ElastiCache for Redis supports replication across multiple
Availability Zones and automatic failover in case of a primary node failure3.
It preserves user session data even if the user is disconnected and reconnects, as
ElastiCache for Redis can store session data, such as user login information and
shopping cart contents, in a persistent and durable manner using snapshots or append-only files (AOF).
Question # 63
A company has deployed a multiplayer game for mobile devices. The game requires live location tracking of players based on latitude and longitude. The data store for the game must support rapid updates and retrieval of locations. The game uses an Amazon RDS for PostgreSQL DB instance with read replicas to store the location data. During peak usage periods, the database is unable to maintain the performance that is needed for reading and writing updates. The game's user base is increasing rapidly. What should a solutions architect do to improve the performance of the data tier?
A. Take a snapshot of the existing DB instance. Restore the snapshot with Multi-AZ enabled. B. Migrate from Amazon RDS to Amazon OpenSearch Service with OpenSearch Dashboards. C. Deploy Amazon DynamoDB Accelerator (DAX) in front of the existing DB instance. Modify the game to use DAX. D. Deploy an Amazon ElastiCache for Redis cluster in front of the existing DB instance. Modify the game to use Redis.
Answer: D
Explanation: The solution that will improve the performance of the data tier is to deploy an
Amazon ElastiCache for Redis cluster in front of the existing DB instance and modify the
game to use Redis. This solution will enable the game to store and retrieve the location data of the players in a fast and scalable way, as Redis is an in-memory data store that
supports geospatial data types and commands. By using ElastiCache for Redis, the game
can reduce the load on the RDS for PostgreSQL DB instance, which is not optimized for
high-frequency updates and queries of location data. ElastiCache for Redis also supports
replication, sharding, and auto scaling to handle the increasing user base of the game.
The other solutions are not as effective as the first one because they either do not improve
the performance, do not support geospatial data, or do not leverage caching. Taking a
snapshot of the existing DB instance and restoring it with Multi-AZ enabled will not improve
the performance of the data tier, as it only provides high availability and durability, but not
scalability or low latency. Migrating from Amazon RDS to Amazon OpenSearch Service
with OpenSearch Dashboards will not improve the performance of the data tier, as
OpenSearch Service is mainly designed for full-text search and analytics, not for real-time
location tracking. OpenSearch Service also does not support geospatial data types and
commands natively, unlike Redis. Deploying Amazon DynamoDB Accelerator (DAX) in
front of the existing DB instance and modifying the game to use DAX will not improve the
performance of the data tier, as DAX is only compatible with DynamoDB, not with RDS for
PostgreSQL. DAX also does not support geospatial data types and commands.
References:
Amazon ElastiCache for Redis
Geospatial Data Support - Amazon ElastiCache for Redis
Amazon RDS for PostgreSQL
Amazon OpenSearch Service
Amazon DynamoDB Accelerator (DAX)
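A small sketch of the geospatial commands behind option D, assuming the redis-py client (version 4 or later for this geoadd/geosearch signature), a placeholder ElastiCache endpoint, and a Redis engine version of 6.2 or later for GEOSEARCH (older versions would use GEORADIUS instead):

import redis

r = redis.Redis(host="game-cache.xxxxxx.ng.0001.use1.cache.amazonaws.com", port=6379)

# Write a player's current position (longitude first, then latitude, then member name).
r.geoadd("player:locations", (-122.4194, 37.7749, "player:42"))

# Find every player within 5 km of a point, with distances, e.g. for a map view.
nearby = r.geosearch(
    "player:locations",
    longitude=-122.41,
    latitude=37.77,
    radius=5,
    unit="km",
    withdist=True,
)
print(nearby)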
Question # 64
A company wants to run its experimental workloads in the AWS Cloud. The company has a budget for cloud spending. The company's CFO is concerned about cloud spending accountability for each department. The CFO wants to receive notification when the spending threshold reaches 60% of the budget. Which solution will meet these requirements?
A. Use cost allocation tags on AWS resources to label owners. Create usage budgets in AWS Budgets. Add an alert threshold to receive notification when spending exceeds 60% of the budget. B. Use AWS Cost Explorer forecasts to determine resource owners. Use AWS Cost Anomaly Detection to create alert threshold notifications when spending exceeds 60% of the budget. C. Use cost allocation tags on AWS resources to label owners. Use AWS Support API on AWS Trusted Advisor to create alert threshold notifications when spending exceeds 60% of the budget. D. Use AWS Cost Explorer forecasts to determine resource owners. Create usage budgets in AWS Budgets. Add an alert threshold to receive notification when spending exceeds 60% of the budget.
Answer: A
Explanation: This solution meets the requirements because it allows the company to track
and manage its cloud spending by using cost allocation tags to assign costs to different
departments, creating usage budgets to set spending limits, and adding alert thresholds to
receive notifications when the spending reaches a certain percentage of the budget. This
way, the company can monitor its experimental workloads and avoid overspending on the
cloud.
References:
Using Cost Allocation Tags
Creating an AWS Budget
Creating an Alert for an AWS Budget
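A sketch of option A in boto3 follows; the account ID, budget amount, subscriber email, and the department cost allocation tag filter are all assumptions chosen for illustration:

import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="111122223333",
    Budget={
        "BudgetName": "experimental-workloads",
        "BudgetLimit": {"Amount": "10000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
        # Optional: scope the budget to a department's cost allocation tag
        # (tag key and value are assumptions).
        "CostFilters": {"TagKeyValue": ["user:Department$Research"]},
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 60.0,                 # alert at 60% of the budget
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "finance@example.com"},
            ],
        }
    ],
)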
Question # 65
A city has deployed a web application running on Amazon EC2 instances behind an Application Load Balancer (ALB). The application's users have reported sporadic performance, which appears to be related to DDoS attacks originating from random IP addresses. The city needs a solution that requires minimal configuration changes and provides an audit trail for the DDoS sources. Which solution meets these requirements?
A. Enable an AWS WAF web ACL on the ALB, and configure rules to block traffic from unknown sources.
B. Subscribe to Amazon Inspector. Engage the AWS DDoS Response Team (DRT) to integrate mitigating controls into the service.
C. Subscribe to AWS Shield Advanced. Engage the AWS DDoS Response Team (DRT) to integrate mitigating controls into the service.
D. Create an Amazon CloudFront distribution for the application, and set the ALB as the origin. Enable an AWS WAF web ACL on the distribution, and configure rules to block traffic from unknown sources.
Answer: C
Explanation: To protect the web application from DDoS attacks originating from random IP
addresses, a solutions architect should subscribe to AWS Shield Advanced and engage
the AWS DDoS Response Team (DRT) to integrate mitigating controls into the service.
AWS Shield Advanced is a managed service that provides protection against large and
sophisticated DDoS attacks, with access to 24/7 support and response from the DRT. The
DRT can help the city configure proactive and reactive safeguards, such as AWS WAF
rules, rate-based rules, and network ACLs, to block malicious traffic and improve the
application’s resilience. The service also provides an audit trail for the DDoS sources
through detailed attack reports and Amazon CloudWatch metrics.
Question # 66
A company runs a web application on Amazon EC2 instances in an Auto Scaling group that has a target group. The company designed the application to work with session affinity (sticky sessions) for a better user experience. The application must be available publicly over the internet as an endpoint. A WAF must be applied to the endpoint for additional security. Session affinity (sticky sessions) must be configured on the endpoint. Which combination of steps will meet these requirements? (Select TWO.)
A. Create a public Network Load Balancer. Specify the application target group.
B. Create a Gateway Load Balancer. Specify the application target group.
C. Create a public Application Load Balancer. Specify the application target group.
D. Create a second target group. Add Elastic IP addresses to the EC2 instances.
E. Create a web ACL in AWS WAF. Associate the web ACL with the endpoint.
Answer: C,E
Explanation: C and E are the correct answers because they allow the company to create a
public endpoint for its web application that supports session affinity (sticky sessions) and
has a WAF applied for additional security. By creating a public Application Load Balancer,
the company can distribute incoming traffic across multiple EC2 instances in an Auto
Scaling group and specify the application target group. By creating a web ACL in AWS
WAF and associating it with the Application Load Balancer, the company can protect its
web application from common web exploits. By enabling session stickiness on the
Application Load Balancer, the company can ensure that subsequent requests from a user
during a session are routed to the same target. References:
Application Load Balancers
AWS WAF
Target Groups for Your Application Load Balancers
How Application Load Balancer Works with Sticky Sessions
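As a hedged illustration of options C and E, the boto3 sketch below enables load-balancer-generated cookie stickiness on the target group and associates a WAFv2 web ACL with the ALB. All ARNs are placeholder assumptions.

```python
import boto3

elbv2 = boto3.client("elbv2")
wafv2 = boto3.client("wafv2")

target_group_arn = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/abc123"
alb_arn = "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web/def456"
web_acl_arn = "arn:aws:wafv2:us-east-1:111122223333:regional/webacl/web-acl/xyz789"

# Enable sticky sessions (duration-based load balancer cookie) on the target group.
elbv2.modify_target_group_attributes(
    TargetGroupArn=target_group_arn,
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "86400"},
    ],
)

# Attach the web ACL to the public Application Load Balancer.
wafv2.associate_web_acl(WebACLArn=web_acl_arn, ResourceArn=alb_arn)
```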
Question # 67
A security audit reveals that Amazon EC2 instances are not being patched regularly. A solutions architect needs to provide a solution that will run regular security scans across a large fleet of EC2 instances. The solution should also patch the EC2 instances on a regular schedule and provide a report of each instance's patch status. Which solution will meet these requirements?
A. Set up Amazon Macie to scan the EC2 instances for software vulnerabilities. Set up a cron job on each EC2 instance to patch the instance on a regular schedule.
B. Turn on Amazon GuardDuty in the account. Configure GuardDuty to scan the EC2 instances for software vulnerabilities. Set up AWS Systems Manager Session Manager to patch the EC2 instances on a regular schedule.
C. Set up Amazon Detective to scan the EC2 instances for software vulnerabilities. Set up an Amazon EventBridge scheduled rule to patch the EC2 instances on a regular schedule.
D. Turn on Amazon Inspector in the account. Configure Amazon Inspector to scan the EC2 instances for software vulnerabilities. Set up AWS Systems Manager Patch Manager to patch the EC2 instances on a regular schedule.
Answer: D
Explanation: Amazon Inspector is an automated security assessment service that helps
improve the security and compliance of applications deployed on AWS. Amazon Inspector
automatically assesses applications for exposure, vulnerabilities, and deviations from best
practices. After performing an assessment, Amazon Inspector produces a detailed list of
security findings prioritized by level of severity1. Amazon Inspector can scan the EC2
instances for software vulnerabilities and provide a report of each instance’s patch status.
AWS Systems Manager Patch Manager is a capability of AWS Systems Manager that
automates the process of patching managed nodes with both security-related updates and
other types of updates. Patch Manager uses patch baselines, which include rules for auto-approving
patches within days of their release, in addition to optional lists of approved and
rejected patches. Patch Manager can patch fleets of Amazon EC2 instances, edge
devices, on-premises servers, and virtual machines (VMs) by operating system type2.
Patch Manager can patch the EC2 instances on a regular schedule and provide a report of
each instance’s patch status. Therefore, the combination of Amazon Inspector and AWS
Systems Manager Patch Manager will meet the requirements of the question.
The other options are not valid because:
Amazon Macie is a security service that uses machine learning to automatically
discover, classify, and protect sensitive data in AWS. Amazon Macie does not
scan the EC2 instances for software vulnerabilities, but rather for data
classification and protection3. A cron job schedules commands on an individual Linux
instance; relying on per-instance cron jobs to patch a large fleet is error-prone and
provides no centralized scheduling or patch compliance reporting4.
Amazon GuardDuty is a threat detection service that continuously monitors for
malicious activity and unauthorized behavior to protect your AWS accounts and
workloads. Amazon GuardDuty does not scan the EC2 instances for software
vulnerabilities, but rather for network and API activity anomalies5. AWS Systems
Manager Session Manager is a fully managed AWS Systems Manager capability
that lets you manage your Amazon EC2 instances, edge devices, on-premises
servers, and virtual machines (VMs) through an interactive one-click browser-based
shell or the AWS Command Line Interface (AWS CLI). Session Manager
does not patch the EC2 instances on a regular schedule, but rather provides
secure and auditable node management2.
Amazon Detective is a security service that makes it easy to analyze, investigate,
and quickly identify the root cause of potential security issues or suspicious
activities. Amazon Detective does not scan the EC2 instances for software
vulnerabilities, but rather collects and analyzes data from AWS sources such as
Amazon GuardDuty, Amazon VPC Flow Logs, and AWS CloudTrail. Amazon EventBridge is a serverless event bus that makes it easy to connect applications
using data from your own applications, integrated Software-as-a-Service (SaaS)
applications, and AWS services. EventBridge delivers a stream of real-time data
from event sources, such as Zendesk, Datadog, or Pagerduty, and routes that
data to targets like AWS Lambda. EventBridge does not patch the EC2 instances
on a regular schedule, but rather triggers actions based on events.
References: Amazon Inspector, AWS Systems Manager Patch Manager, Amazon Macie, Amazon GuardDuty, Amazon Detective, Amazon EventBridge
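As an illustrative sketch of the Patch Manager half of option D, the boto3 call below creates a State Manager association that runs the AWS-RunPatchBaseline document every Sunday at 02:00 UTC against instances carrying a hypothetical PatchGroup tag. The tag value, schedule, and association name are assumptions.

```python
import boto3

ssm = boto3.client("ssm")

ssm.create_association(
    Name="AWS-RunPatchBaseline",            # AWS-managed document used by Patch Manager
    Targets=[{"Key": "tag:PatchGroup", "Values": ["production"]}],
    ScheduleExpression="cron(0 2 ? * SUN *)",
    Parameters={"Operation": ["Install"]},  # "Scan" would report compliance only
    AssociationName="weekly-os-patching",
)
```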
Question # 68
A manufacturing company runs its report generation application on AWS. The application generates each report in about 20 minutes. The application is built as a monolith that runs on a single Amazon EC2 instance. The application requires frequent updates to its tightly coupled modules. The application becomes complex to maintain as the company adds new features. Each time the company patches a software module, the application experiences downtime. Report generation must restart from the beginning after any interruptions. The company wants to redesign the application so that the application can be flexible, scalable, and gradually improved. The company wants to minimize application downtime. Which solution will meet these requirements?
A. Run the application on AWS Lambda as a single function with maximum provisioned concurrency.
B. Run the application on Amazon EC2 Spot Instances as microservices with a Spot Fleet default allocation strategy.
C. Run the application on Amazon Elastic Container Service (Amazon ECS) as microservices with service auto scaling.
D. Run the application on AWS Elastic Beanstalk as a single application environment with an all-at-once deployment strategy.
Answer: C
Explanation: The solution that will meet the requirements is to run the application on
Amazon Elastic Container Service (Amazon ECS) as microservices with service auto
scaling. This solution will allow the application to be flexible, scalable, and gradually
improved, as well as minimize application downtime. By breaking down the monolithic
application into microservices, the company can decouple the modules and update them
independently, without affecting the whole application. By running the microservices on
Amazon ECS, the company can leverage the benefits of containerization, such as
portability, efficiency, and isolation. By enabling service auto scaling, the company can
adjust the number of containers running for each microservice based on demand, ensuring optimal performance and cost. Amazon ECS also supports various deployment strategies,
such as rolling update or blue/green deployment, that can reduce or eliminate downtime
during updates.
The other solutions are not as effective as the first one because they either do not meet the
requirements or introduce new challenges. Running the application on AWS Lambda as a
single function with maximum provisioned concurrency will not meet the requirements, as it
will not break down the monolith into microservices, nor will it reduce the complexity of
maintenance. Lambda functions are also limited by execution time (15 minutes), memory
size (10 GB), and concurrency quotas, which may not be sufficient for the report generation
application. Running the application on Amazon EC2 Spot Instances as microservices with
a Spot Fleet default allocation strategy will not meet the requirements, as it will introduce
the risk of interruptions due to spot price fluctuations. Spot Instances are not guaranteed to
be available or stable, and may be reclaimed by AWS at any time with a two-minute
warning. This may cause report generation to fail or restart from scratch. Running the
application on AWS Elastic Beanstalk as a single application environment with an all-at-once
deployment strategy will not meet the requirements, as it will not break down the
monolith into microservices, nor will it minimize application downtime. The all-at-once
deployment strategy will deploy updates to all instances simultaneously, causing a brief
outage for the application.
References:
Amazon Elastic Container Service
Microservices on AWS
Service Auto Scaling - Amazon Elastic Container Service
AWS Lambda
Amazon EC2 Spot Instances
[AWS Elastic Beanstalk]
Question # 69
A company uses Amazon EC2 instances and Amazon Elastic Block Store (Amazon EBS) volumes to run an application. The company creates one snapshot of each EBS volume every day to meet compliance requirements. The company wants to implement an architecture that prevents the accidental deletion of EBS volume snapshots. The solution must not change the administrative rights of the storage administrator user. Which solution will meet these requirements with the LEAST administrative effort?
A. Create an IAM role that has permission to delete snapshots. Attach the role to a new EC2 instance. Use the AWS CLI from the new EC2 instance to delete snapshots.
B. Create an IAM policy that denies snapshot deletion. Attach the policy to the storage administrator user.
C. Add tags to the snapshots. Create retention rules in Recycle Bin for EBS snapshots that have the tags.
D. Lock the EBS snapshots to prevent deletion.
Answer: D
Explanation: EBS snapshots are point-in-time backups of EBS volumes that can be used
to restore data or create new volumes. EBS snapshots can be locked to prevent accidental
deletion using a feature called EBS Snapshot Lock. When a snapshot is locked, it cannot
be deleted by any user, including the root user, until it is unlocked. The lock policy can also
specify a retention period, after which the snapshot can be deleted. This solution will meet
the requirements with the least administrative effort, as it does not require any code
development or policy changes.
References:
1 explains how to lock and unlock EBS snapshots using EBS Snapshot Lock.
2 describes the concept and benefits of EBS snapshots.
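A minimal sketch of option D follows, assuming a recent SDK version that includes the EC2 LockSnapshot API. The snapshot ID and retention period are placeholders; governance mode still allows users with explicit permission to unlock early, while compliance mode enforces the retention period.

```python
import boto3

ec2 = boto3.client("ec2")

ec2.lock_snapshot(
    SnapshotId="snap-0123456789abcdef0",
    LockMode="governance",
    LockDuration=30,  # days the snapshot stays protected against deletion
)
```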
Question # 70
A company is deploying a new application to Amazon Elastic Kubernetes Service (Amazon EKS) with an AWS Fargate cluster. The application needs a storage solution for data persistence. The solution must be highly available and fault tolerant. The solution also must be shared between multiple application containers. Which solution will meet these requirements with the LEAST operational overhead?
A. Create Amazon Elastic Block Store (Amazon EBS) volumes in the same Availability Zones where EKS worker nodes are placed. Register the volumes in a StorageClass object on an EKS cluster. Use EBS Multi-Attach to share the data between containers.
B. Create an Amazon Elastic File System (Amazon EFS) file system. Register the file system in a StorageClass object on an EKS cluster. Use the same file system for all containers.
C. Create an Amazon Elastic Block Store (Amazon EBS) volume. Register the volume in a StorageClass object on an EKS cluster. Use the same volume for all containers.
D. Create Amazon Elastic File System (Amazon EFS) file systems in the same Availability Zones where EKS worker nodes are placed. Register the file systems in a StorageClass object on an EKS cluster. Create an AWS Lambda function to synchronize the data between file systems.
Answer: B
Explanation: Amazon EFS is a fully managed, elastic, and scalable file system that can be
shared between multiple containers. It provides high availability and fault tolerance by
replicating data across multiple Availability Zones. Amazon EFS is compatible with Amazon
EKS and AWS Fargate, and can be registered in a StorageClass object on an EKS cluster.
Amazon EBS volumes are not supported by AWS Fargate, and cannot be shared between
multiple containers without using EBS Multi-Attach, which has limitations and performance
implications. EBS Multi-Attach also requires the volumes to be in the same Availability
Zone as the worker nodes, which reduces availability and fault tolerance. Synchronizing
data between multiple EFS file systems using AWS Lambda is unnecessary, complex, and
prone to errors. References:
Amazon EFS Storage Classes
Amazon EKS Storage Classes
Amazon EBS Multi-Attach
Question # 71
A company has NFS servers in an on-premises data center that need to periodically back up small amounts of data to Amazon S3. Which solution meets these requirements and is MOST cost-effective?
A. Set up AWS Glue to copy the data from the on-premises servers to Amazon S3.
B. Set up an AWS DataSync agent on the on-premises servers, and sync the data to Amazon S3.
C. Set up an SFTP sync using AWS Transfer for SFTP to sync data from on premises to Amazon S3.
D. Set up an AWS Direct Connect connection between the on-premises data center and a VPC, and copy the data to Amazon S3.
Answer: B
Explanation: AWS DataSync is a service that makes it easy to move large amounts of
data online between on-premises storage and AWS storage services. AWS DataSync can
transfer data at speeds up to 10 times faster than open-source tools by using a purpose-built
network protocol and parallelizing data transfers. AWS DataSync also handles
encryption, data integrity verification, and bandwidth optimization. To use AWS DataSync,
users need to deploy a DataSync agent on their on-premises servers, which connects to
the NFS servers and syncs the data to Amazon S3. Users can schedule periodic or one-time
sync tasks and monitor the progress and status of the transfers.
The other options are not correct because they are either not cost-effective or not suitable
for the use case. Setting up AWS Glue to copy the data from the on-premises servers to
Amazon S3 is not cost-effective because AWS Glue is a serverless data integration service
that is mainly used for extract, transform, and load (ETL) operations, not for simple data
backup. Setting up an SFTP sync using AWS Transfer for SFTP to sync data from on
premises to Amazon S3 is not cost-effective because AWS Transfer for SFTP is a fully
managed service that provides secure file transfer using the SFTP protocol, which is more
suitable for exchanging data with third parties than for backing up data. Setting up an AWS
Direct Connect connection between the on-premises data center and a VPC, and copying
the data to Amazon S3 is not cost-effective because AWS Direct Connect is a dedicated
network connection between AWS and the on-premises location, which has high upfront
costs and requires additional configuration.
References:
AWS DataSync
How AWS DataSync works
AWS DataSync FAQs
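As an illustrative boto3 sketch of option B, the calls below register the on-premises NFS share and the S3 bucket as DataSync locations and then schedule a recurring transfer task. The agent ARN, hostname, export path, bucket, IAM role, and schedule are placeholder assumptions.

```python
import boto3

datasync = boto3.client("datasync")

nfs_location = datasync.create_location_nfs(
    ServerHostname="nfs01.corp.example.com",
    Subdirectory="/exports/backups",
    OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-0abc"]},
)["LocationArn"]

s3_location = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::example-backup-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/DataSyncS3Role"},
)["LocationArn"]

datasync.create_task(
    SourceLocationArn=nfs_location,
    DestinationLocationArn=s3_location,
    Name="nightly-nfs-backup",
    Schedule={"ScheduleExpression": "cron(0 3 * * ? *)"},  # nightly at 03:00 UTC
)
```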
Question # 72
A company has an application that uses Docker containers in its local data center. The application runs on a container host that stores persistent data in a volume on the host. The container instances use the stored persistent data. The company wants to move the application to a fully managed service because the company does not want to manage any servers or storage infrastructure. Which solution will meet these requirements?
A. Use Amazon Elastic Kubernetes Service (Amazon EKS) with self-managed nodes. Create an Amazon Elastic Block Store (Amazon EBS) volume attached to an Amazon EC2 instance. Use the EBS volume as a persistent volume mounted in the containers.
B. Use Amazon Elastic Container Service (Amazon ECS) with an AWS Fargate launch type. Create an Amazon Elastic File System (Amazon EFS) volume. Add the EFS volume as a persistent storage volume mounted in the containers.
C. Use Amazon Elastic Container Service (Amazon ECS) with an AWS Fargate launch type. Create an Amazon S3 bucket. Map the S3 bucket as a persistent storage volume mounted in the containers.
D. Use Amazon Elastic Container Service (Amazon ECS) with an Amazon EC2 launch type. Create an Amazon Elastic File System (Amazon EFS) volume. Add the EFS volume as a persistent storage volume mounted in the containers.
Answer: B
Explanation: This solution meets the requirements because it allows the company to move
the application to a fully managed service without managing any servers or storage
infrastructure. AWS Fargate is a serverless compute engine for containers that runs the
Amazon ECS tasks. With Fargate, the company does not need to provision, configure, or
scale clusters of virtual machines to run containers. Amazon EFS is a fully managed file
system that can be accessed by multiple containers concurrently. With EFS, the company
does not need to provision and manage storage capacity. EFS provides a simple interface
to create and configure file systems quickly and easily. The company can use the EFS
volume as a persistent storage volume mounted in the containers to store the persistent
data. The company can also use the EFS mount helper to simplify the mounting
process. References: Amazon ECS on AWS Fargate, Using Amazon EFS file systems with
Amazon ECS, Amazon EFS mount helper.
Question # 73
A company stores critical data in Amazon DynamoDB tables in the company's AWS account. An IT administrator accidentally deleted a DynamoDB table. The deletion caused a significant loss of data and disrupted the company's operations. The company wants to prevent this type of disruption in the future. Which solution will meet this requirement with the LEAST operational overhead?
A. Configure a trail in AWS CloudTrail. Create an Amazon EventBridge rule for delete actions. Create an AWS Lambda function to automatically restore deleted DynamoDB tables.
B. Create a backup and restore plan for the DynamoDB tables. Recover the DynamoDB tables manually.
C. Configure deletion protection on the DynamoDB tables.
D. Enable point-in-time recovery on the DynamoDB tables.
Answer: C
Explanation: Deletion protection is a feature of DynamoDB that prevents accidental
deletion of tables. When deletion protection is enabled, you cannot delete a table unless
you explicitly disable it first. This adds an extra layer of security and reduces the risk of
data loss and operational disruption. Deletion protection is easy to enable and disable
using the AWS Management Console, the AWS CLI, or the DynamoDB API. This solution
has the least operational overhead, as you do not need to create, manage, or invoke any
additional resources or services. References:
Using deletion protection to protect your table
Preventing Accidental Table Deletion in DynamoDB
Amazon DynamoDB now supports table deletion protection
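For illustration, option C amounts to a single boto3 call on each existing table; the table name below is a placeholder.

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.update_table(
    TableName="critical-orders",
    DeletionProtectionEnabled=True,  # DeleteTable now fails until this is set back to False
)
```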
Question # 74
A company hosts multiple applications on AWS for different product lines. The applications use different compute resources, including Amazon EC2 instances and Application Load Balancers. The applications run in different AWS accounts under the same organization in AWS Organizations across multiple AWS Regions. Teams for each product line have tagged each compute resource in the individual accounts. The company wants more details about the cost for each product line from the consolidated billing feature in Organizations. Which combination of steps will meet these requirements? (Select TWO.)
A. Select a specific AWS generated tag in the AWS Billing console.
B. Select a specific user-defined tag in the AWS Billing console.
C. Select a specific user-defined tag in the AWS Resource Groups console.
D. Activate the selected tag from each AWS account.
E. Activate the selected tag from the Organizations management account.
Answer: B,E
Explanation: User-defined tags are key-value pairs that can be applied to AWS resources
to categorize and track them. User-defined tags can also be used to allocate costs and
create detailed billing reports in the AWS Billing console. To use user-defined tags for cost
allocation, the tags must be activated from the Organizations management account, which
is the root account that has full control over all the member accounts in the organization.
Once activated, the user-defined tags will appear as columns in the cost allocation report,
and can be used to filter and group costs by product line. This solution will meet the
requirements with the least operational overhead, as it leverages the existing tagging
strategy and does not require any code development or manual intervention.
References:
1 explains how to use user-defined tags for cost allocation.
2 describes how to access and manage member accounts from the Organizations
management account.
3 discusses how to create and view cost allocation reports in the AWS Billing
console.
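As a hedged sketch of options B and E, the Cost Explorer API call below activates a user-defined tag for cost allocation; it must be made from the Organizations management (payer) account, and the tag key is a placeholder assumption.

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer API

ce.update_cost_allocation_tags_status(
    CostAllocationTagsStatus=[
        {"TagKey": "ProductLine", "Status": "Active"},
    ]
)
```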
Question # 75
A solutions architect needs to ensure that API calls to Amazon DynamoDB from Amazon EC2 instances in a VPC do not travel across the internet. Which combination of steps should the solutions architect take to meet this requirement? (Choose two.)
A. Create a route table entry for the endpoint.
B. Create a gateway endpoint for DynamoDB.
C. Create an interface endpoint for Amazon EC2.
D. Create an elastic network interface for the endpoint in each of the subnets of the VPC.
E. Create a security group entry in the endpoint's security group to provide access.
Answer: B,E
Explanation: B and E are the correct answers because they allow the solutions architect to
ensure that API calls to Amazon DynamoDB from Amazon EC2 instances in a VPC do not
travel across the internet. By creating a gateway endpoint for DynamoDB, the solutions
architect can enable private connectivity between the VPC and DynamoDB. By creating a
security group entry in the endpoint’s security group to provide access, the solutions
architect can control which EC2 instances can communicate with DynamoDB through the
endpoint. References:
Gateway Endpoints
Controlling Access to Services with VPC Endpoints
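For illustration, the boto3 sketch below creates the gateway VPC endpoint for DynamoDB and attaches it to a route table so that DynamoDB traffic stays on the AWS network. The VPC ID, Region, and route table ID are placeholder assumptions.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],  # adds a route to DynamoDB through the endpoint
)
```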
Question # 76
A company hosts a data lake on Amazon S3. The data lake ingests data in Apache Parquet format from various data sources. The company uses multiple transformation steps to prepare the ingested data. The steps include filtering of anomalies, normalizing of data to standard date and time values, and generation of aggregates for analyses. The company must store the transformed data in S3 buckets that data analysts access. The company needs a prebuilt solution for data transformation that does not require code. The solution must provide data lineage and data profiling. The company needs to share the data transformation steps with employees throughout the company. Which solution will meet these requirements?
A. Configure an AWS Glue Studio visual canvas to transform the data. Share the transformation steps with employees by using AWS Glue jobs.
B. Configure Amazon EMR Serverless to transform the data. Share the transformation steps with employees by using EMR Serverless jobs.
C. Configure AWS Glue DataBrew to transform the data. Share the transformation steps with employees by using DataBrew recipes.
D. Create Amazon Athena tables for the data. Write Athena SQL queries to transform the data. Share the Athena SQL queries with employees.
Answer: C
Explanation: The most suitable solution for the company’s requirements is to configure
AWS Glue DataBrew to transform the data and share the transformation steps with
employees by using DataBrew recipes. This solution will provide a prebuilt solution for data
transformation that does not require code, and will also provide data lineage and data profiling. The company can easily share the data transformation steps with employees
throughout the company by using DataBrew recipes.
AWS Glue DataBrew is a visual data preparation tool that makes it easy for data analysts
and data scientists to clean and normalize data for analytics or machine learning by up to
80% faster. Users can upload their data from various sources, such as Amazon S3,
Amazon RDS, Amazon Redshift, Amazon Aurora, or the Glue Data Catalog, and use a
point-and-click interface to apply over 250 built-in transformations. Users can also preview the
results of each transformation step and see how it affects the quality and distribution of the
data1.
A DataBrew recipe is a reusable set of transformation steps that can be applied to one or
more datasets. Users can create recipes from scratch or use existing ones from the
DataBrew recipe library. Users can also export, import, or share recipes with other users or
groups within their AWS account or organization2.
DataBrew also provides data lineage and data profiling features that help users understand
and improve their data quality. Data lineage shows the source and destination of each
dataset and how it is transformed by each recipe step. Data profiling shows various
statistics and metrics about each dataset, such as column-level data types, value distributions, and missing or duplicate values.
Question # 77
A company is using an Application Load Balancer (ALB) to present its application to the internet. The company finds abnormal traffic access patterns across the application. A solutions architect needs to improve visibility into the infrastructure to help the company understand these abnormalities better. What is the MOST operationally efficient solution that meets these requirements?
A. Create a table in Amazon Athena for AWS CloudTrail logs. Create a query for the relevant information.
B. Enable ALB access logging to Amazon S3. Create a table in Amazon Athena, and query the logs.
C. Enable ALB access logging to Amazon S3. Open each file in a text editor, and search each line for the relevant information.
D. Use Amazon EMR on a dedicated Amazon EC2 instance to directly query the ALB to acquire traffic access log information.
Answer: B
Explanation: This solution meets the requirements because it allows the company to improve visibility into the infrastructure by using ALB access logging and Amazon Athena.
ALB access logging is a feature that captures detailed information about requests sent to
the load balancer, such as the client’s IP address, request path, response code, and
latency. By enabling ALB access logging to Amazon S3, the company can store the access
logs in an S3 bucket as compressed files. Amazon Athena is an interactive query service
that makes it easy to analyze data in Amazon S3 using standard SQL. By creating a table
in Amazon Athena for the access logs, the company can query the logs and get results in
seconds. This way, the company can better understand the abnormal traffic access
patterns across the application.
References:
Access logs for your Application Load Balancer
Querying Application Load Balancer Logs
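As an illustrative sketch of the first half of option B, the boto3 call below turns on ALB access logging to an S3 bucket. The load balancer ARN, bucket, and prefix are placeholder assumptions, and the bucket policy must also allow the regional ELB log-delivery account to write objects.

```python
import boto3

elbv2 = boto3.client("elbv2")

elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web/def456",
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "example-alb-access-logs"},
        {"Key": "access_logs.s3.prefix", "Value": "web-app"},
    ],
)
```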
Question # 78
A company copies 200 TB of data from a recent ocean survey onto AWS Snowball Edge Storage Optimized devices. The company has a high performance computing (HPC) cluster that is hosted on AWS to look for oil and gas deposits. A solutions architect must provide the cluster with consistent sub-millisecond latency and high-throughput access to the data on the Snowball Edge Storage Optimized devices. The company is sending the devices back to AWS. Which solution will meet these requirements?
A. Create an Amazon S3 bucket. Import the data into the S3 bucket. Configure an AWS Storage Gateway file gateway to use the S3 bucket. Access the file gateway from the HPC cluster instances.
B. Create an Amazon S3 bucket. Import the data into the S3 bucket. Configure an Amazon FSx for Lustre file system, and integrate it with the S3 bucket. Access the FSx for Lustre file system from the HPC cluster instances.
C. Create an Amazon S3 bucket and an Amazon Elastic File System (Amazon EFS) file system. Import the data into the S3 bucket. Copy the data from the S3 bucket to the EFS file system. Access the EFS file system from the HPC cluster instances.
D. Create an Amazon FSx for Lustre file system. Import the data directly into the FSx for Lustre file system. Access the FSx for Lustre file system from the HPC cluster instances.
Answer: B
Explanation: To provide the HPC cluster with consistent sub-millisecond latency and high-throughput
access to the data on the Snowball Edge Storage Optimized devices, a
solutions architect should configure an Amazon FSx for Lustre file system, and integrate it
with an Amazon S3 bucket. This solution has the following benefits:
It allows the HPC cluster to access the data on the Snowball Edge devices using a
POSIX-compliant file system that is optimized for fast processing of large
datasets1.
It enables the data to be imported from the Snowball Edge devices into the S3
bucket using the AWS Snow Family Console or the AWS CLI2. The data can then
be accessed from the FSx for Lustre file system using the S3 integration feature3.
It supports high availability and durability of the data, as the FSx for Lustre file
system can automatically copy the data to and from the S3 bucket3. The data can
also be accessed from other AWS services or applications using the S3 API4.
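For illustration, a boto3 sketch of option B follows: an FSx for Lustre file system linked to the S3 bucket that received the Snowball Edge import, so objects appear as files and are lazy-loaded on first access. The capacity, subnet, and bucket are placeholder assumptions, and ImportPath-based integration applies to scratch and persistent-1 deployment types.

```python
import boto3

fsx = boto3.client("fsx")

fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=2400,  # GiB; placeholder -- size to the working data set
    SubnetIds=["subnet-0123456789abcdef0"],
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://example-ocean-survey-data",          # objects become files on demand
        "ExportPath": "s3://example-ocean-survey-data/results",  # results written back to S3
    },
)
```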
Question # 79
A gaming company wants to launch a new internet-facing application in multiple AWS Regions. The application will use the TCP and UDP protocols for communication. The company needs to provide high availability and minimum latency for global users. Which combination of actions should a solutions architect take to meet these requirements? (Select TWO.)
A. Create internal Network Load Balancers in front of the application in each Region.
B. Create external Application Load Balancers in front of the application in each Region.
C. Create an AWS Global Accelerator accelerator to route traffic to the load balancers in each Region.
D. Configure Amazon Route 53 to use a geolocation routing policy to distribute the traffic.
E. Configure Amazon CloudFront to handle the traffic and route requests to the application in each Region.
Answer: B,C
Explanation: This combination of actions will provide high availability and minimum latency
for global users by using AWS Global Accelerator and Application Load Balancers. AWS
Global Accelerator is a networking service that helps you improve the availability,
performance, and security of your internet-facing applications by using the AWS global
network. It provides two global static public IPs that act as a fixed entry point to your
application endpoints, such as Application Load Balancers, in multiple Regions1. Global
Accelerator uses the AWS backbone network to route traffic to the optimal regional
endpoint based on health, client location, and policies that you configure. It also offers TCP
and UDP support, traffic encryption, and DDoS protection2. Application Load Balancers are
external load balancers that distribute incoming application traffic across multiple targets,
such as EC2 instances, in multiple Availability Zones. They support both HTTP and HTTPS
(SSL/TLS) protocols, and offer advanced features such as content-based routing, health
checks, and integration with other AWS services3. By creating external Application Load
Balancers in front of the application in each Region, you can ensure that the application
can handle varying load patterns and scale on demand. By creating an AWS Global
Accelerator accelerator to route traffic to the load balancers in each Region, you can
leverage the performance, security, and availability of the AWS global network to deliver
the best possible user experience.
References:
What is AWS Global Accelerator? - AWS Global Accelerator (Overview section)
Network Acceleration Service - AWS Global Accelerator (Why AWS Global Accelerator? section)
What is an Application Load Balancer? - Elastic Load Balancing (Overview section)
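As a hedged illustration, the boto3 sketch below creates an accelerator, a listener, and an endpoint group that points at a regional load balancer. The Global Accelerator control-plane API is served from us-west-2; the ARNs, ports, and Region are placeholder assumptions, and a UDP listener would be created the same way.

```python
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

accel = ga.create_accelerator(Name="game-frontend", IpAddressType="IPV4", Enabled=True)
accel_arn = accel["Accelerator"]["AcceleratorArn"]

listener = ga.create_listener(
    AcceleratorArn=accel_arn,
    Protocol="TCP",
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
)

ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="eu-west-1",
    EndpointConfigurations=[
        {
            "EndpointId": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/app/game/abc",
            "Weight": 128,
        },
    ],
)
```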
Question # 80
A company's application runs on Amazon EC2 instances that are in multiple Availability Zones. The application needs to ingest real-time data from third-party applications. The company needs a data ingestion solution that places the ingested raw data in an Amazon S3 bucket. Which solution will meet these requirements?
A. Create Amazon Kinesis data streams for data ingestion. Create Amazon Kinesis Data Firehose delivery streams to consume the Kinesis data streams. Specify the S3 bucket as the destination of the delivery streams.
B. Create database migration tasks in AWS Database Migration Service (AWS DMS). Specify replication instances of the EC2 instances as the source endpoints. Specify the S3 bucket as the target endpoint. Set the migration type to migrate existing data and replicate ongoing changes.
C. Create and configure AWS DataSync agents on the EC2 instances. Configure DataSync tasks to transfer data from the EC2 instances to the S3 bucket.
D. Create an AWS Direct Connect connection to the application for data ingestion. Create Amazon Kinesis Data Firehose delivery streams to consume direct PUT operations from the application. Specify the S3 bucket as the destination of the delivery streams.
Answer: A
Explanation: The solution that will meet the requirements is to create Amazon Kinesis
data streams for data ingestion, create Amazon Kinesis Data Firehose delivery streams to
consume the Kinesis data streams, and specify the S3 bucket as the destination of the
delivery streams. This solution will allow the company’s application to ingest real-time data
from third-party applications and place the ingested raw data in an S3 bucket. Amazon
Kinesis data streams are scalable and durable streams that can capture and store data
from hundreds of thousands of sources. Amazon Kinesis Data Firehose is a fully managed
service that can deliver streaming data to destinations such as S3, Amazon Redshift,
Amazon OpenSearch Service, and Splunk. Amazon Kinesis Data Firehose can also
transform and compress the data before delivering it to S3.
The other solutions are not as effective as the first one because they either do not support
real-time data ingestion, do not work with third-party applications, or do not use S3 as the
destination. Creating database migration tasks in AWS Database Migration Service (AWS
DMS) will not support real-time data ingestion, as AWS DMS is mainly designed for
migrating relational databases, not streaming data. AWS DMS also requires replication
instances, source endpoints, and target endpoints to be compatible with specific database
engines and versions. Creating and configuring AWS DataSync agents on the EC2
instances will not work with third-party applications, as AWS DataSync is a service that
transfers data between on-premises storage systems and AWS storage services, not
between applications. AWS DataSync also requires installing agents on the source or
destination servers. Creating an AWS Direct Connect connection to the application for data
ingestion will not use S3 as the destination, as AWS Direct Connect is a service that
establishes a dedicated network connection between on-premises and AWS, not between
applications and storage services. AWS Direct Connect also requires a physical connection
to an AWS Direct Connect location.
References:
Amazon Kinesis
Amazon Kinesis Data Firehose
AWS Database Migration Service
AWS DataSync
AWS Direct Connect
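For illustration, the boto3 sketch below wires up option A: a Kinesis data stream as the ingestion front end and a Firehose delivery stream that reads from it and delivers to S3. Stream names, ARNs, IAM roles, and buffering values are placeholder assumptions.

```python
import boto3

kinesis = boto3.client("kinesis")
firehose = boto3.client("firehose")

kinesis.create_stream(StreamName="raw-ingest", ShardCount=2)

firehose.create_delivery_stream(
    DeliveryStreamName="raw-ingest-to-s3",
    DeliveryStreamType="KinesisStreamAsSource",
    KinesisStreamSourceConfiguration={
        "KinesisStreamARN": "arn:aws:kinesis:us-east-1:111122223333:stream/raw-ingest",
        "RoleARN": "arn:aws:iam::111122223333:role/FirehoseKinesisRole",
    },
    ExtendedS3DestinationConfiguration={
        "BucketARN": "arn:aws:s3:::example-raw-data",
        "RoleARN": "arn:aws:iam::111122223333:role/FirehoseS3Role",
        "BufferingHints": {"SizeInMBs": 64, "IntervalInSeconds": 60},
    },
)
```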
Question # 81
A financial services company wants to shut down two data centers and migrate more than 100 TB of data to AWS. The data has an intricate directory structure with millions of small files stored in deep hierarchies of subfolders. Most of the data is unstructured, and the company's file storage consists of SMB-based storage types from multiple vendors. The company does not want to change its applications to access the data after migration. What should a solutions architect do to meet these requirements with the LEAST operational overhead?
A. Use AWS Direct Connect to migrate the data to Amazon S3.
B. Use AWS DataSync to migrate the data to Amazon FSx for Lustre.
C. Use AWS DataSync to migrate the data to Amazon FSx for Windows File Server.
D. Use AWS Direct Connect to migrate the on-premises file storage data to an AWS Storage Gateway volume gateway.
Answer: C
Explanation: AWS DataSync is a data transfer service that simplifies, automates, and
accelerates moving data between on-premises storage systems and AWS storage services
over the internet or AWS Direct Connect1. AWS DataSync can transfer data to Amazon
FSx for Windows File Server, which is a fully managed file system that is accessible over
the industry-standard Server Message Block (SMB) protocol. Amazon FSx for Windows
File Server is built on Windows Server, delivering a wide range of administrative features
such as user quotas, end-user file restore, and Microsoft Active Directory (AD) integration2.
This solution meets the requirements of the question because:
It can migrate more than 100 TB of data to AWS within a reasonable time frame,
as AWS DataSync is optimized for high-speed and efficient data transfer1.
It can preserve the intricate directory structure and the millions of small files stored
in deep hierarchies of subfolders, as AWS DataSync can handle complex file
structures and metadata, such as file names, permissions, and timestamps1.
It can avoid changing the applications to access the data after migration, as
Amazon FSx for Windows File Server supports the same SMB protocol and
Windows Server features that the company’s on-premises file storage uses2.
It can reduce the operational overhead, as AWS DataSync and Amazon FSx for
Windows File Server are fully managed services that handle the tasks of setting
up, configuring, and maintaining the data transfer and the file system12.
Question # 82
An IoT company is releasing a mattress that has sensors to collect data about a user's sleep. The sensors will send data to an Amazon S3 bucket. The sensors collect approximately 2 MB of data every night for each mattress. The company must process and summarize the data for each mattress. The results need to be available as soon as possible. Data processing will require 1 GB of memory and will finish within 30 seconds. Which solution will meet these requirements MOST cost-effectively?
A. Use AWS Glue with a Scala job.
B. Use Amazon EMR with an Apache Spark script.
C. Use AWS Lambda with a Python script.
D. Use AWS Glue with a PySpark job.
Answer: C
Explanation: AWS Lambda charges you based on the number of invocations and the
execution time of your function. Since the data processing job is relatively small (2 MB of
data), Lambda is a cost-effective choice. You only pay for the actual usage without the
need to provision and maintain infrastructure.
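As a minimal sketch of option C, the Lambda handler below is triggered by the nightly S3 upload, reads the roughly 2 MB object, and writes a per-mattress summary back to S3. The record format, field names, and output prefix are assumptions for illustration only.

```python
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:                      # one record per uploaded object
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        readings = json.loads(body)                      # assumed: a JSON list of sensor samples

        summary = {
            "mattress_id": readings[0]["mattress_id"],
            "samples": len(readings),
            "avg_heart_rate": sum(r["heart_rate"] for r in readings) / len(readings),
        }
        s3.put_object(
            Bucket=bucket,
            Key=f"summaries/{summary['mattress_id']}.json",
            Body=json.dumps(summary).encode(),
        )
```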
Question # 83
A company plans to migrate to AWS and use Amazon EC2 On-Demand Instances for its application. During the migration testing phase, a technical team observes that the application takes a long time to launch and load memory to become fully productive. Which solution will reduce the launch time of the application during the next testing phase?
A. Launch two or more EC2 On-Demand Instances. Turn on auto scaling features and make the EC2 On-Demand Instances available during the next testing phase.
B. Launch EC2 Spot Instances to support the application and to scale the application so it is available during the next testing phase.
C. Launch the EC2 On-Demand Instances with hibernation turned on. Configure EC2 Auto Scaling warm pools during the next testing phase.
D. Launch EC2 On-Demand Instances with Capacity Reservations. Start additional EC2 instances during the next testing phase.
Answer: C
Explanation: The solution that will reduce the launch time of the application during the
next testing phase is to launch the EC2 On-Demand Instances with hibernation turned on
and configure EC2 Auto Scaling warm pools. This solution allows the application to resume
from a hibernated state instead of starting from scratch, which can save time and resources. Hibernation preserves the memory (RAM) state of the EC2 instances to the root
EBS volume and then stops the instances. When the instances are resumed, they restore
their memory state from the EBS volume and become productive quickly. EC2 Auto Scaling
warm pools can be used to maintain a pool of pre-initialized instances that are ready to
scale out when needed. Warm pools can also support hibernated instances, which can
further reduce the launch time and cost of scaling out.
The other solutions are not as effective as the first one because they either do not reduce
the launch time, do not guarantee availability, or do not use On-Demand Instances as
required. Launching two or more EC2 On-Demand Instances with auto scaling features
does not reduce the launch time of the application, as each instance still has to go through
the initialization process. Launching EC2 Spot Instances does not guarantee availability, as
Spot Instances can be interrupted by AWS at any time when there is a higher demand for
capacity. Launching EC2 On-Demand Instances with Capacity Reservations does not
reduce the launch time of the application, as it only ensures that there is enough capacity
available for the instances, but does not pre-initialize them.
References:
Hibernating your instance - Amazon Elastic Compute Cloud
Warm pools for Amazon EC2 Auto Scaling - Amazon EC2 Auto Scaling
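For illustration, the boto3 sketch below launches an instance with hibernation enabled (which requires an encrypted EBS root volume and a supported instance type) and then keeps pre-initialized, hibernated instances in a warm pool for an Auto Scaling group. The AMI, instance type, volume size, and group name are placeholder assumptions.

```python
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m6i.large",
    MinCount=1,
    MaxCount=1,
    HibernationOptions={"Configured": True},
    BlockDeviceMappings=[
        {"DeviceName": "/dev/xvda", "Ebs": {"VolumeSize": 50, "Encrypted": True}},
    ],
)

autoscaling.put_warm_pool(
    AutoScalingGroupName="app-asg",
    PoolState="Hibernated",   # instances resume with memory state already loaded
    MinSize=2,
)
```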
Question # 84
A company runs an application on AWS. The application receives inconsistent amounts of usage. The application uses AWS Direct Connect to connect to an on-premises MySQL-compatible database. The on-premises database consistently uses a minimum of 2 GiB of memory. The company wants to migrate the on-premises database to a managed AWS service. The company wants to use auto scaling capabilities to manage unexpected workload increases. Which solution will meet these requirements with the LEAST administrative overhead?
A. Provision an Amazon DynamoDB database with default read and write capacity settings.
B. Provision an Amazon Aurora database with a minimum capacity of 1 Aurora capacity unit (ACU).
C. Provision an Amazon Aurora Serverless v2 database with a minimum capacity of 1 Aurora capacity unit (ACU).
D. Provision an Amazon RDS for MySQL database with 2 GiB of memory.
Answer: C
Explanation: This solution allows the company to migrate the on-premises database to a managed
AWS service that supports auto scaling capabilities and has the least administrative
overhead. Amazon Aurora Serverless v2 is a configuration of Amazon Aurora that
automatically scales compute capacity based on workload demand. It can scale from
hundreds to hundreds of thousands of transactions in a fraction of a second. Amazon
Aurora Serverless v2 also supports MySQL-compatible databases and AWS Direct
Connect connectivity. References:
Amazon Aurora Serverless v2
Connecting to an Amazon Aurora DB Cluster
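As a hedged illustration of option C, the boto3 sketch below creates an Aurora MySQL-compatible cluster with Serverless v2 scaling starting at 1 ACU, plus a db.serverless writer instance. Identifiers, credentials, and the capacity ceiling are placeholder assumptions.

```python
import boto3

rds = boto3.client("rds")

rds.create_db_cluster(
    DBClusterIdentifier="app-aurora",
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
    ServerlessV2ScalingConfiguration={"MinCapacity": 1.0, "MaxCapacity": 16.0},
)

rds.create_db_instance(
    DBInstanceIdentifier="app-aurora-writer",
    DBClusterIdentifier="app-aurora",
    Engine="aurora-mysql",
    DBInstanceClass="db.serverless",   # scales within the cluster's ACU range
)
```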
Question # 85
A company uses AWS Organizations. The company wants to operate some of its AWS accounts with different budgets. The company wants to receive alerts and automatically prevent provisioning of additional resources on AWS accounts when the allocated budget threshold is met during a specific period. Which combination of solutions will meet these requirements? (Select THREE.)
A. Use AWS Budgets to create a budget. Set the budget amount under the Cost and Usage Reports section of the required AWS accounts.
B. Use AWS Budgets to create a budget. Set the budget amount under the Billing dashboards of the required AWS accounts.
C. Create an IAM user for AWS Budgets to run budget actions with the required permissions.
D. Create an IAM role for AWS Budgets to run budget actions with the required permissions.
E. Add an alert to notify the company when each account meets its budget threshold. Add a budget action that selects the IAM identity created with the appropriate config rule to prevent provisioning of additional resources.
F. Add an alert to notify the company when each account meets its budget threshold. Add a budget action that selects the IAM identity created with the appropriate service control policy (SCP) to prevent provisioning of additional resources.
Answer: B,D,F
Explanation: To use AWS Budgets to create and manage budgets for different AWS
accounts, the company needs to do the following steps:
Use AWS Budgets to create a budget for each AWS account that needs a different
budget amount. The budget can be based on cost or usage metrics, and can have
different time periods, filters, and thresholds. The company can set the budget
amount under the Billing dashboards of the required AWS accounts1.
Create an IAM role for AWS Budgets to run budget actions with the required
permissions. A budget action is a response that AWS Budgets initiates when a
budget exceeds a specified threshold. The IAM role allows AWS Budgets to
perform actions on behalf of the company, such as applying an IAM policy or a
service control policy (SCP) to restrict the provisioning of additional resources2.
Add an alert to notify the company when each account meets its budget threshold.
The alert can be sent via email or Amazon SNS. The company can also add a
budget action that selects the IAM role created and the appropriate SCP to prevent
provisioning of additional resources. An SCP is a type of policy that can be applied
to an AWS account or an organizational unit (OU) within AWS Organizations. An
SCP can limit the actions that users and roles can perform in the account or OU3.
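As a hedged boto3 sketch of the budget-action step described above: when actual spend crosses the threshold, AWS Budgets assumes the IAM role and attaches a pre-created SCP to the target account. The policy ID, target account, role ARN, threshold, and subscriber are placeholder assumptions.

```python
import boto3

budgets = boto3.client("budgets")

budgets.create_budget_action(
    AccountId="111122223333",
    BudgetName="experimental-workloads",
    NotificationType="ACTUAL",
    ActionType="APPLY_SCP_POLICY",
    ActionThreshold={"ActionThresholdValue": 100.0, "ActionThresholdType": "PERCENTAGE"},
    Definition={
        "ScpActionDefinition": {
            "PolicyId": "p-exampleid",      # SCP that denies new resource provisioning
            "TargetIds": ["444455556666"],  # member account (or OU) to attach it to
        }
    },
    ExecutionRoleArn="arn:aws:iam::111122223333:role/BudgetsActionRole",
    ApprovalModel="AUTOMATIC",
    Subscribers=[{"SubscriptionType": "EMAIL", "Address": "finops@example.com"}],
)
```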
Question # 86
A recent analysis of a company's IT expenses highlights the need to reduce backup costs. The company's chief information officer wants to simplify the on-premises backup infrastructure and reduce costs by eliminating the use of physical backup tapes. The company must preserve the existing investment in the on-premises backup applications and workflows. What should a solutions architect recommend?
A. Set up AWS Storage Gateway to connect with the backup applications using the NFS interface.
B. Set up an Amazon EFS file system that connects with the backup applications using the NFS interface.
C. Set up an Amazon EFS file system that connects with the backup applications using the iSCSI interface.
D. Set up AWS Storage Gateway to connect with the backup applications using the iSCSI virtual tape library (VTL) interface.
Answer: D
Explanation: This solution allows the company to simplify the on-premises backup infrastructure and
reduce costs by eliminating the use of physical backup tapes. By setting up AWS Storage
Gateway to connect with the backup applications using the iSCSI virtual tape library (VTL)
interface, the company can store backup data on virtual tapes in S3 or Glacier. This
preserves the existing investment in the on-premises backup applications and workflows
while leveraging AWS storage services. References:
AWS Storage Gateway
Tape Gateway
Question # 87
A company is migrating its multi-tier on-premises application to AWS. The application consists of a single-node MySQL database and a multi-node web tier. The company must minimize changes to the application during the migration. The company wants to improve application resiliency after the migration. Which combination of steps will meet these requirements? (Select TWO.)
A. Migrate the web tier to Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer.
B. Migrate the database to Amazon EC2 instances in an Auto Scaling group behind a Network Load Balancer.
C. Migrate the database to an Amazon RDS Multi-AZ deployment.
D. Migrate the web tier to an AWS Lambda function.
E. Migrate the database to an Amazon DynamoDB table.
Answer: A,C
Explanation: An Auto Scaling group is a collection of EC2 instances that share similar
characteristics and can be scaled in or out automatically based on demand. An Auto
Scaling group can be placed behind an Application Load Balancer, which is a type of
Elastic Load Balancing load balancer that distributes incoming traffic across multiple
targets in multiple Availability Zones. This solution will improve the resiliency of the web tier
by providing high availability, scalability, and fault tolerance. An Amazon RDS Multi-AZ
deployment is a configuration that automatically creates a primary database instance and
synchronously replicates the data to a standby instance in a different Availability Zone.
When a failure occurs, Amazon RDS automatically fails over to the standby instance
without manual intervention. This solution will improve the resiliency of the database tier by
providing data redundancy, backup support, and availability. This combination of steps will
meet the requirements with minimal changes to the application during the migration.
References:
1 describes the concept and benefits of an Auto Scaling group.
2 provides an overview of Application Load Balancers and their benefits.
3 explains how Amazon RDS Multi-AZ deployments work and their benefits.
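For illustration, the boto3 sketch below covers the database half (option C): an RDS for MySQL instance created as a Multi-AZ deployment, so a synchronous standby in a second Availability Zone takes over automatically on failure. The identifier, instance class, credentials, and storage size are placeholder assumptions.

```python
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="app-mysql",
    Engine="mysql",
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
    MultiAZ=True,   # synchronous standby replica in another Availability Zone
)
```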
Question # 88
A company runs its applications on Amazon EC2 instances that are backed by Amazon Elastic Block Store (Amazon EBS). The EC2 instances run the most recent Amazon Linux release. The applications are experiencing availability issues when the company's employees store and retrieve files that are 25 GB or larger. The company needs a solution that does not require the company to transfer files between EC2 instances. The files must be available across many EC2 instances and across multiple Availability Zones. Which solution will meet these requirements?
A. Migrate all the files to an Amazon S3 bucket. Instruct the employees to access the files from the S3 bucket.
B. Take a snapshot of the existing EBS volume. Mount the snapshot as an EBS volume across the EC2 instances. Instruct the employees to access the files from the EC2 instances.
C. Mount an Amazon Elastic File System (Amazon EFS) file system across all the EC2 instances. Instruct the employees to access the files from the EC2 instances.
D. Create an Amazon Machine Image (AMI) from the EC2 instances. Configure new EC2 instances from the AMI that use an instance store volume. Instruct the employees to access the files from the EC2 instances.
Answer: C
Explanation: To store and access files that are 25 GB or larger across many EC2
instances and across multiple Availability Zones, Amazon Elastic File System (Amazon
EFS) is a suitable solution. Amazon EFS provides a simple, scalable, elastic file system
that can be mounted on multiple EC2 instances concurrently. Amazon EFS supports high
availability and durability by storing data across multiple Availability Zones within a Region.
References:
What Is Amazon Elastic File System?
Using EFS with EC2
Question # 89
A company has an application that delivers on-demand training videos to students around the world. The application also allows authorized content developers to upload videos. The data is stored in an Amazon S3 bucket in the us-east-2 Region. The company has created an S3 bucket in the eu-west-2 Region and an S3 bucket in the ap-southeast-1 Region. The company wants to replicate the data to the new S3 buckets. The company needs to minimize latency for developers who upload videos and students who stream videos near eu-west-2 and ap-southeast-1. Which combination of steps will meet these requirements with the FEWEST changes to the application? (Select TWO.)
A. Configure one-way replication from the us-east-2 S3 bucket to the eu-west-2 S3 bucket. Configure one-way replication from the us-east-2 S3 bucket to the ap-southeast-1 S3 bucket.
B. Configure one-way replication from the us-east-2 S3 bucket to the eu-west-2 S3 bucket. Configure one-way replication from the eu-west-2 S3 bucket to the ap-southeast-1 S3 bucket.
C. Configure two-way (bidirectional) replication among the S3 buckets that are in all three Regions.
D. Create an S3 Multi-Region Access Point. Modify the application to use the Amazon Resource Name (ARN) of the Multi-Region Access Point for video streaming. Do not modify the application for video uploads.
E. Create an S3 Multi-Region Access Point. Modify the application to use the Amazon Resource Name (ARN) of the Multi-Region Access Point for video streaming and uploads.
Answer: A,E
Explanation: These two steps will meet the requirements with the fewest changes to the
application because they will enable the company to replicate the data to the new S3
buckets and minimize latency for both video streaming and uploads. One-way replication
from the us-east-2 S3 bucket to the other two S3 buckets will ensure that the data is
synchronized across all three regions. The company can use S3 Cross-Region Replication
(CRR) to automatically copy objects across buckets in different AWS Regions. CRR can
help the company achieve lower latency and compliance requirements by keeping copies
of their data in different regions. Creating an S3 Multi-Region Access Point and modifying
the application to use its ARN will allow the company to access the data through a single
global endpoint. An S3 Multi-Region Access Point is a globally unique name that can be
used to access objects stored in S3 buckets across multiple regions. It automatically routes
requests to the closest S3 bucket with the lowest latency. By using an S3 Multi-Region
Access Point, the company can simplify the application architecture and improve the
performance and reliability of the application.
References:
Replicating objects
Multi-Region Access Points in Amazon
Question # 90
A company has 150 TB of archived image data stored on-premises that needs to be moved to the AWS Cloud within the next month. The company's current network connection allows up to 100 Mbps uploads for this purpose during the night only. What is the MOST cost-effective mechanism to move this data and meet the migration deadline?
A. Use AWS Snowmobile to ship the data to AWS.
B. Order multiple AWS Snowball devices to ship the data to AWS.
C. Enable Amazon S3 Transfer Acceleration and securely upload the data.
D. Create an Amazon S3 VPC endpoint and establish a VPN to upload the data.
Answer: B
Explanation: AWS Snowball is a petabyte-scale data transport service that uses secure
devices to transfer large amounts of data into and out of the AWS Cloud. Snowball
addresses common challenges with large-scale data transfers including high network
costs, long transfer times, and security concerns. AWS Snowball can transfer up to 80 TB
of data per device, and multiple devices can be used in parallel to meet the migration
deadline. AWS Snowball is more cost-effective than AWS Snowmobile, which is designed
for exabyte-scale data transfers, or Amazon S3 Transfer Acceleration, which is optimized
for fast transfers over long distances. Amazon S3 VPC endpoint does not increase the
upload speed, but only provides a secure and private connection between the VPC and
S3. References: AWS Snowball, AWS Snowmobile, Amazon S3 Transfer
Acceleration, Amazon S3 VPC endpoint
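A quick back-of-the-envelope check, assuming an 8-hour nightly upload window, shows why the 100 Mbps link cannot meet the one-month deadline and why shipping devices is the practical choice:

# How long would 150 TB take over a 100 Mbps link used 8 hours per night?
data_bits = 150 * 10**12 * 8              # 150 TB expressed in bits
link_bps = 100 * 10**6                    # 100 Mbps
transfer_seconds = data_bits / link_bps   # 12,000,000 seconds of pure transfer
nightly_window = 8 * 60 * 60              # assumed 8-hour nightly window
print(transfer_seconds / nightly_window)  # roughly 417 nights, far beyond one month

At roughly 417 nights of transfer time, the network path is ruled out, whereas two 80 TB Snowball devices can hold the full 150 TB.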
Question # 91
A company has an on-premises MySQL database that handles transactional data. The company is migrating the database to the AWS Cloud. The migrated database must maintain compatibility with the company's applications that use the database. The migrated database also must scale automatically during periods of increased demand. Which migration solution will meet these requirements?
A. Use native MySQL tools to migrate the database to Amazon RDS for MySQL. Configure elastic storage scaling.
B. Migrate the database to Amazon Redshift by using the mysqldump utility. Turn on Auto Scaling for the Amazon Redshift cluster.
C. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon Aurora. Turn on Aurora Auto Scaling.
D. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon DynamoDB. Configure an Auto Scaling policy.
Answer: C
Explanation: To migrate a MySQL database to AWS with compatibility and scalability,
Amazon Aurora is a suitable option. Aurora is compatible with MySQL and can scale
automatically with Aurora Auto Scaling. AWS Database Migration Service (AWS DMS) can
be used to migrate the database from on-premises to Aurora with minimal downtime.
References:
What Is Amazon Aurora?
Using Amazon Aurora Auto Scaling with Aurora Replicas
What Is AWS Database Migration Service?
Question # 92
A solutions architect wants to use the following JSON text as an identity-based policy to grant specific permissions:
Which IAM principals can the solutions architect attach this policy to? (Select TWO.)
A. Role
B. Group
C. Organization
D. Amazon Elastic Container Service (Amazon ECS) resource
E. Amazon EC2 resource
Answer: A,B
Explanation:
This JSON text is an identity-based policy that grants specific permissions. The IAM
principals that the solutions architect can attach this policy to are Role and Group. This is
because the policy is written in JSON and is an identity-based policy, which can be
attached to IAM principals such as users, groups, and roles. Identity-based policies are
permissions policies that you attach to IAM identities (users, groups, or roles) and explicitly
state what that identity is allowed (or denied) to do1. Identity-based policies are different
from resource-based policies, which define the permissions around the specific
resource1. Resource-based policies are attached to a resource, such as an Amazon S3
bucket or an Amazon EC2 instance1. Resource-based policies can also specify a principal,
which is the entity that is allowed or denied access to the resource1. Organization is not an
IAM principal, but a feature of AWS Organizations that allows you to manage multiple AWS accounts centrally2. Amazon ECS resource and Amazon EC2 resource are not IAM
principals, but AWS resources that can have resource-based policies attached to them34.
References:
Identity-based policies and resource-based policies
AWS Organizations
Amazon ECS task role
Amazon EC2 instance profile
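To make the distinction concrete, here is a hedged boto3 sketch of attaching an identity-based policy to a group and to a role. The policy document shown is only illustrative (the JSON from the question is not reproduced here), and the group, role, and bucket names are placeholders.

import boto3
import json

iam = boto3.client("iam")

policy_doc = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/*",  # placeholder resource
        }
    ],
}

# Identity-based policies attach to IAM users, groups, and roles (not to EC2 or ECS resources).
iam.put_group_policy(
    GroupName="example-group",
    PolicyName="ExamplePolicy",
    PolicyDocument=json.dumps(policy_doc),
)
iam.put_role_policy(
    RoleName="example-role",
    PolicyName="ExamplePolicy",
    PolicyDocument=json.dumps(policy_doc),
)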
Question # 93
A company that uses AWS needs a solution to predict the resources needed for manufacturing processes each month. The solution must use historical values that are currently stored in an Amazon S3 bucket. The company has no machine learning (ML) experience and wants to use a managed service for the training and predictions. Which combination of steps will meet these requirements? (Select TWO.)
A. Deploy an Amazon SageMaker model. Create a SageMaker endpoint for inference.
B. Use Amazon SageMaker to train a model by using the historical data in the S3 bucket.
C. Configure an AWS Lambda function with a function URL that uses Amazon SageMaker endpoints to create predictions based on the inputs.
D. Configure an AWS Lambda function with a function URL that uses an Amazon Forecast predictor to create a prediction based on the inputs.
E. Train an Amazon Forecast predictor by using the historical data in the S3 bucket.
Answer: B,E
Explanation: To predict the resources needed for manufacturing processes each month
using historical values that are currently stored in an Amazon S3 bucket, a solutions
architect should use Amazon SageMaker to train a model by using the historical data in the
S3 bucket, and deploy an Amazon SageMaker model and create a SageMaker endpoint for
inference. Amazon SageMaker is a fully managed service that provides an easy way to
build, train, and deploy machine learning (ML) models. The solutions architect can use the
built-in algorithms or frameworks provided by SageMaker, or bring their own custom code,
to train a model using the historical data in the S3 bucket as input. The trained model can
then be deployed to a SageMaker endpoint, which is a scalable and secure web service
that can handle requests for predictions from the application. The solutions architect does
not need to have any ML experience or manage any infrastructure to use SageMaker.
Question # 94
A company is running a legacy system on an Amazon EC2 instance. The application code cannot be modified, and the system cannot run on more than one instance. A solutions architect must design a resilient solution that can improve the recovery time for the system. What should the solutions architect recommend to meet these requirements?
A. Enable termination protection for the EC2 instance.
B. Configure the EC2 instance for Multi-AZ deployment.
C. Create an Amazon CloudWatch alarm to recover the EC2 instance in case of failure.
D. Launch the EC2 instance with two Amazon Elastic Block Store (Amazon EBS) volumes that use RAID configurations for storage redundancy.
Answer: C
Explanation:
To design a resilient solution that can improve the recovery time for the system, a solutions
architect should recommend creating an Amazon CloudWatch alarm to recover the EC2
instance in case of failure. This solution has the following benefits: It allows the EC2 instance to be automatically recovered when a system status
check failure occurs, such as loss of network connectivity, loss of system power,
software issues on the physical host, or hardware issues on the physical host that
impact network reachability1.
It preserves the instance ID, private IP addresses, Elastic IP addresses, and all
instance metadata of the original instance. A recovered instance is identical to the
original instance, except for any data that is in-memory, which is lost during the
recovery process1.
It does not require any modification of the application code or the EC2 instance
configuration. The solutions architect can create a CloudWatch alarm using the
AWS Management Console, the AWS CLI, or the CloudWatch API2.
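A minimal boto3 sketch of such a recovery alarm is shown below; the instance ID and Region are placeholders.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="recover-legacy-instance",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder instance
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    # The recover action restarts the instance on healthy underlying hardware.
    AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],
)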
Question # 95
A company has applications that run on Amazon EC2 instances. The EC2 instances connect to Amazon RDS databases by using an IAM role that has associated policies. The company wants to use AWS Systems Manager to patch the EC2 instances without disrupting the running applications. Which solution will meet these requirements?
A. Create a new IAM role. Attach the AmazonSSMManagedInstanceCore policy to the new IAM role. Attach the new IAM role and the existing IAM role to the EC2 instances.
B. Create an IAM user. Attach the AmazonSSMManagedInstanceCore policy to the IAM user. Configure Systems Manager to use the IAM user to manage the EC2 instances.
C. Enable Default Host Configuration Management in Systems Manager to manage the EC2 instances.
D. Remove the existing policies from the existing IAM role. Add the AmazonSSMManagedInstanceCore policy to the existing IAM role.
Answer: C
Explanation: The most suitable solution for the company’s requirements is to enable
Default Host Configuration Management in Systems Manager to manage the EC2
instances. This solution will allow the company to patch the EC2 instances without
disrupting the running applications and without manually creating or modifying IAM roles or
users.
Default Host Configuration Management is a feature of AWS Systems Manager that
enables Systems Manager to manage EC2 instances automatically as managed instances.
A managed instance is an EC2 instance that is configured for use with Systems Manager.
The benefits of managing instances with Systems Manager include the following:
Connect to EC2 instances securely using Session Manager.
Perform automated patch scans using Patch Manager.
View detailed information about instances using Systems Manager Inventory.
Track and manage instances using Fleet Manager.
Keep SSM Agent up to date automatically.
Default Host Configuration Management makes it possible to manage EC2 instances
without having to manually create an IAM instance profile. Instead, Default Host
Configuration Management creates and applies a default IAM role to ensure that Systems Manager has permissions to manage all instances in the Region and account where it is
activated. If the permissions provided are not sufficient for the use case, the default IAM
role can be modified or replaced with a custom role1.
The other options are not correct because they either have more operational overhead or
do not meet the requirements. Creating a new IAM role, attaching the
AmazonSSMManagedInstanceCore policy to the new IAM role, and attaching the new IAM
role and the existing IAM role to the EC2 instances is not correct because this solution
requires manual creation and management of IAM roles, which adds complexity and cost to
the solution. The AmazonSSMManagedInstanceCore policy is a managed policy that
grants permissions for Systems Manager core functionality2. Creating an IAM user,
attaching the AmazonSSMManagedInstanceCore policy to the IAM user, and configuring
Systems Manager to use the IAM user to manage the EC2 instances is not correct
because this solution requires manual creation and management of IAM users, which adds
complexity and cost to the solution. An IAM user is an identity within an AWS account that
has specific permissions for a single person or application3. Removing the existing policies
from the existing IAM role and adding the AmazonSSMManagedInstanceCore policy to the
existing IAM role is not correct because this solution may disrupt the running applications
that rely on the existing policies for accessing RDS databases. An IAM role is an identity
within an AWS account that has specific permissions for a service or entity4.
References:
AWS managed policy: AmazonSSMManagedInstanceCore
IAM users
IAM roles
Default Host Management Configuration - AWS Systems Manager
Question # 96
A company has data collection sensors at different locations. The data collection sensors stream a high volume of data to the company. The company wants to design a platform on AWS to ingest and process high-volume streaming data. The solution must be scalable and support data collection in near real time. The company must store the data in Amazon S3 for future reporting. Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon Kinesis Data Firehose to deliver streaming data to Amazon S3.
B. Use AWS Glue to deliver streaming data to Amazon S3.
C. Use AWS Lambda to deliver streaming data and store the data to Amazon S3.
D. Use AWS Database Migration Service (AWS DMS) to deliver streaming data to Amazon S3.
Answer: A
Explanation: To ingest and process high-volume streaming data with the least operational
overhead, Amazon Kinesis Data Firehose is a suitable solution. Amazon Kinesis Data
Firehose can capture, transform, and deliver streaming data to Amazon S3 or other
destinations. Amazon Kinesis Data Firehose can scale automatically to match the throughput of the data and handle any amount of data. Amazon Kinesis Data Firehose is
also a fully managed service that does not require any servers to provision or manage.
References:
What Is Amazon Kinesis Data Firehose?
Amazon Kinesis Data Firehose Pricing
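As a rough illustration of option A, the following boto3 sketch creates a direct-put Firehose delivery stream into S3 and pushes one record. The stream name, bucket ARN, and IAM role ARN are placeholders.

import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

firehose.create_delivery_stream(
    DeliveryStreamName="sensor-data-to-s3",
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery-role",  # placeholder
        "BucketARN": "arn:aws:s3:::sensor-data-archive",                     # placeholder
        "BufferingHints": {"SizeInMBs": 64, "IntervalInSeconds": 60},
        "CompressionFormat": "GZIP",
    },
)

# Producers then push records; Firehose batches and writes them to S3.
firehose.put_record(
    DeliveryStreamName="sensor-data-to-s3",
    Record={"Data": b'{"sensor_id": "s-001", "reading": 21.7}\n'},
)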
Question # 97
A company has a web application that includes an embedded NoSQL database. The application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances run in an Amazon EC2 Auto Scaling group in a single Availability Zone. A recent increase in traffic requires the application to be highly available and for the database to be eventually consistent. Which solution will meet these requirements with the LEAST operational overhead?
A. Replace the ALB with a Network Load Balancer. Maintain the embedded NoSQL database with its replication service on the EC2 instances.
B. Replace the ALB with a Network Load Balancer. Migrate the embedded NoSQL database to Amazon DynamoDB by using AWS Database Migration Service (AWS DMS).
C. Modify the Auto Scaling group to use EC2 instances across three Availability Zones. Maintain the embedded NoSQL database with its replication service on the EC2 instances.
D. Modify the Auto Scaling group to use EC2 instances across three Availability Zones. Migrate the embedded NoSQL database to Amazon DynamoDB by using AWS Database Migration Service (AWS DMS).
Answer: D
Explanation: This solution will meet the requirements of high availability and eventual
consistency with the least operational overhead. By modifying the Auto Scaling group to
use EC2 instances across three Availability Zones, the web application can handle the
increase in traffic and tolerate the failure of one or two Availability Zones. By migrating the
embedded NoSQL database to Amazon DynamoDB, the company can benefit from a fully
managed, scalable, and reliable NoSQL database service that supports eventual
consistency. AWS Database Migration Service (AWS DMS) is a cloud service that makes it
easy to migrate relational databases, data warehouses, NoSQL databases, and other types
of data stores. AWS DMS can migrate the embedded NoSQL database to Amazon
DynamoDB with minimal downtime and zero data loss.
References: AWS Database Migration Service (AWS DMS), Amazon DynamoDB
Features, Amazon EC2 Auto Scaling
Question # 98
A company's developers want a secure way to gain SSH access on the company's Amazon EC2 instances that run the latest version of Amazon Linux. The developers work remotely and in the corporate office. The company wants to use AWS services as a part of the solution. The EC2 instances are hosted in a VPC private subnet and access the internet through a NAT gateway that is deployed in a public subnet. What should a solutions architect do to meet these requirements MOST cost-effectively?
A. Create a bastion host in the same subnet as the EC2 instances. Grant the ec2:CreateVpnConnection IAM permission to the developers. Install EC2 Instance Connect so that the developers can connect to the EC2 instances.
B. Create an AWS Site-to-Site VPN connection between the corporate network and the VPC. Instruct the developers to use the Site-to-Site VPN connection to access the EC2 instances when the developers are on the corporate network. Instruct the developers to set up another VPN connection for access when they work remotely.
C. Create a bastion host in the public subnet of the VPC. Configure the security groups and SSH keys of the bastion host to only allow connections and SSH authentication from the developers' corporate and remote networks. Instruct the developers to connect through the bastion host by using SSH to reach the EC2 instances.
D. Attach the AmazonSSMManagedInstanceCore IAM policy to an IAM role that is associated with the EC2 instances. Instruct the developers to use AWS Systems Manager Session Manager to access the EC2 instances.
Answer: D
Explanation: AWS Systems Manager Session Manager is a service that enables you to
securely connect to your EC2 instances without using SSH keys or bastion hosts. You can
use Session Manager to access your instances through the AWS Management Console,
the AWS CLI, or the AWS SDKs. Session Manager uses IAM policies and roles to control
who can access which instances. By attaching the AmazonSSMManagedInstanceCore IAM
policy to an IAM role that is associated with the EC2 instances, you grant the Session
Manager service the necessary permissions to perform actions on your instances. You also
need to attach another IAM policy to the developers’ IAM users or roles that allows them to
start sessions to the instances. Session Manager uses the AWS Systems Manager Agent
(SSM Agent) that is installed by default on Amazon Linux 2 and other supported Linux
distributions. Session Manager also encrypts all session data between your client and your
instances, and streams session logs to Amazon S3, Amazon CloudWatch Logs, or both for
auditing purposes. This solution is the most cost-effective, as it does not require any
additional resources or services, such as bastion hosts, VPN connections, or NAT
gateways. It also simplifies the security and management of SSH access, as it eliminates
the need for SSH keys, port opening, or firewall rules. References:
What is AWS Systems Manager?
Setting up Session Manager
Getting started with Session Manager
Controlling access to Session Manager
Logging Session Manager activity
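For illustration, a hedged boto3 sketch of the two pieces of option D follows: attaching the managed policy to the instance role and starting a session. The role name and instance ID are placeholders, and interactive shells normally go through the AWS CLI with the Session Manager plugin installed.

import boto3

iam = boto3.client("iam")
ssm = boto3.client("ssm")

# Grant the instance role the permissions SSM Agent needs.
iam.attach_role_policy(
    RoleName="ec2-app-instance-role",  # placeholder instance role
    PolicyArn="arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
)

# Developers with ssm:StartSession permission can then open a shell, typically via:
#   aws ssm start-session --target i-0123456789abcdef0
session = ssm.start_session(Target="i-0123456789abcdef0")  # placeholder instance ID
print(session["SessionId"])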
Question # 99
A company uses an organization in AWS Organizations to manage AWS accounts that contain applications. The company sets up a dedicated monitoring member account in the organization. The company wants to query and visualize observability data across the accounts by using Amazon CloudWatch. Which solution will meet these requirements?
A. Enable CloudWatch cross-account observability for the monitoring account. Deploy an AWS CloudFormation template provided by the monitoring account in each AWS account to share the data with the monitoring account.
B. Set up service control policies (SCPs) to provide access to CloudWatch in the monitoring account under the Organizations root organizational unit (OU).
C. Configure a new IAM user in the monitoring account. In each AWS account, configure an IAM policy to have access to query and visualize the CloudWatch data in the account. Attach the new IAM policy to the new IAM user.
D. Create a new IAM user in the monitoring account. Create cross-account IAM policies in each AWS account. Attach the IAM policies to the new IAM user.
Answer: A
Explanation: This solution meets the requirements because it allows the monitoring
account to query and visualize observability data across the accounts by using
CloudWatch. CloudWatch cross-account observability is a feature that enables a central
monitoring account to view and interact with observability data shared by other accounts.
To enable cross-account observability, the monitoring account needs to configure the types
of data to be shared (metrics, logs, and traces) and the source accounts to be linked. The
source accounts can be specified by account IDs, organization IDs, or organization paths.
To share the data with the monitoring account, the source accounts need to deploy an
AWS CloudFormation template provided by the monitoring account. This template creates
an observability link resource that represents the link between the source account and the
monitoring account. The template also creates a sink resource that represents an
attachment point in the monitoring account. The source accounts can share their
observability data with the sink in the monitoring account. The monitoring account can then
use the CloudWatch console, API, or CLI to search, analyze, and correlate the
observability data across the accounts. References: CloudWatch cross-account
observability, Setting up CloudWatch cross-account observability, [Observability Access
Manager API Reference]
Question # 100
A company hosts a database that runs on an Amazon RDS instance that is deployed to multiple Availability Zones. The company periodically runs a script against the database to report new entries that are added to the database. The script that runs against the database negatively affects the performance of a critical application. The company needs to improve application performance with minimal costs. Which solution will meet these requirements with the LEAST operational overhead?
A. Add functionality to the script to identify the instance that has the fewest active connections. Configure the script to read from that instance to report the total new entries.
B. Create a read replica of the database. Configure the script to query only the read replica to report the total new entries.
C. Instruct the development team to manually export the new entries for the day in the database at the end of each day.
D. Use Amazon ElastiCache to cache the common queries that the script runs against the database.
Answer: B
Explanation: A read replica is a copy of the primary database that supports read-only
queries. By creating a read replica, you can offload the read workload from the primary
database and improve its performance. The script can query the read replica without
affecting the critical application that uses the primary database. This solution also has the
least operational overhead, as you do not need to modify the script, export the data
manually, or manage a cache cluster. References:
Working with PostgreSQL, MySQL, and MariaDB Read Replicas
Amazon RDS Performance Insights
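A minimal boto3 sketch of option B, with placeholder instance identifiers and instance class:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="reporting-replica",     # placeholder replica name
    SourceDBInstanceIdentifier="production-db",   # placeholder primary instance
    DBInstanceClass="db.r6g.large",
)
# The reporting script then connects to the replica's own endpoint
# (returned by describe_db_instances) instead of the primary endpoint.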
Question # 101
A company wants to use an AWS CloudFormation stack for its application in a test environment. The company stores the CloudFormation template in an Amazon S3 bucket that blocks public access. The company wants to grant CloudFormation access to the template in the S3 bucket based on specific user requests to create the test environment. The solution must follow security best practices. Which solution will meet these requirements?
A. Create a gateway VPC endpoint for Amazon S3. Configure the CloudFormation stack to use the S3 object URL.
B. Create an Amazon API Gateway REST API that has the S3 bucket as the target. Configure the CloudFormation stack to use the API Gateway URL.
C. Create a presigned URL for the template object. Configure the CloudFormation stack to use the presigned URL.
D. Allow public access to the template object in the S3 bucket. Block the public access after the test environment is created.
Answer: C
Explanation: it allows CloudFormation to access the template in the S3 bucket without
granting public access or creating additional resources. A presigned URL is a URL that is
signed with the access key of an IAM user or role that has permission to access the object.
The presigned URL can be used by anyone who receives it, but it expires after a specified
time. By creating a presigned URL for the template object and configuring the
CloudFormation stack to use it, the company can grant CloudFormation access to the
template based on specific user requests and follow security best practices. References:
Using Amazon S3 Presigned URLs
Using Amazon S3 Buckets
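As a sketch of option C (with placeholder bucket, key, and stack names), the presigned URL can be generated and passed directly to CloudFormation:

import boto3

s3 = boto3.client("s3")
cloudformation = boto3.client("cloudformation")

template_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "private-template-bucket", "Key": "test-env/template.yaml"},  # placeholders
    ExpiresIn=900,  # the URL stops working after 15 minutes
)

cloudformation.create_stack(
    StackName="test-environment",  # placeholder stack name
    TemplateURL=template_url,
)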
Question # 102
A solutions architect needs to copy files from an Amazon S3 bucket to an Amazon Elastic File System (Amazon EFS) file system and another S3 bucket. The files must be copied continuously. New files are added to the original S3 bucket consistently. The copied files should be overwritten only if the source file changes. Which solution will meet these requirements with the LEAST operational overhead?
A. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task for the destination S3 bucket and the EFS file system. Set the transfer mode to transfer only data that has changed.
B. Create an AWS Lambda function. Mount the file system to the function. Set up an S3 event notification to invoke the function when files are created and changed in Amazon S3. Configure the function to copy files to the file system and the destination S3 bucket.
C. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task for the destination S3 bucket and the EFS file system. Set the transfer mode to transfer all data.
D. Launch an Amazon EC2 instance in the same VPC as the file system. Mount the file system. Create a script to routinely synchronize all objects that changed in the origin S3 bucket to the destination S3 bucket and the mounted file system.
Answer: A
Explanation: AWS DataSync is a service that makes it easy to move large amounts of
data between AWS storage services and on-premises storage systems. AWS DataSync
can copy files from an S3 bucket to an EFS file system and another S3 bucket
continuously, as well as overwrite only the files that have changed in the source. This
solution will meet the requirements with the least operational overhead, as it does not
require any code development or manual intervention.
References:
4 explains how to create AWS DataSync locations for different storage services.
5 describes how to create and configure AWS DataSync tasks for data transfer.
6 discusses the different transfer modes that AWS DataSync supports.
Question # 103
The DNS provider that hosts a company's domain name records is experiencing outages that cause service disruption for a website running on AWS. The company needs to migrate to a more resilient managed DNS service and wants the service to run on AWS. What should a solutions architect do to rapidly migrate the DNS hosting service?
A. Create an Amazon Route 53 public hosted zone for the domain name. Import the zone file containing the domain records hosted by the previous provider.
B. Create an Amazon Route 53 private hosted zone for the domain name. Import the zone file containing the domain records hosted by the previous provider.
C. Create a Simple AD directory in AWS. Enable zone transfer between the DNS provider and AWS Directory Service for Microsoft Active Directory for the domain records.
D. Create an Amazon Route 53 Resolver inbound endpoint in the VPC. Specify the IP addresses that the provider's DNS will forward DNS queries to. Configure the provider's DNS to forward DNS queries for the domain to the IP addresses that are specified in the inbound endpoint.
Answer: A
Explanation: To migrate the DNS hosting service to a more resilient managed DNS
service on AWS, the company should use Amazon Route 53, which is a highly available
and scalable cloud DNS web service. Route 53 can host public DNS records for the
company’s domain name and provide reliable and secure DNS resolution. To rapidly
migrate the DNS hosting service, the company should create a public hosted zone for the
domain name in Route 53, which is a container for the domain’s DNS records. Then, the
company should import the zone file containing the domain records hosted by the previous
provider, which is a text file that defines the DNS records for the domain. This way, the
company can quickly transfer the existing DNS records to Route 53 without manually
creating them. After importing the zone file, the company should update the domain
registrar to use the name servers that Route 53 assigns to the hosted zone. This will
ensure that DNS queries for the domain name are routed to Route 53 and resolved by the
imported records.
Question # 104
A company has an online gaming application that has TCP and UDP multiplayer gaming capabilities. The company uses Amazon Route 53 to point the application traffic to multiple Network Load Balancers (NLBs) in different AWS Regions. The company needs to improve application performance and decrease latency for the online game in preparation for user growth. Which solution will meet these requirements?
A. Add an Amazon CloudFront distribution in front of the NLBs. Increase the Cache-Control: max-age parameter.
B. Replace the NLBs with Application Load Balancers (ALBs). Configure Route 53 to use latency-based routing.
C. Add AWS Global Accelerator in front of the NLBs. Configure a Global Accelerator endpoint to use the correct listener ports.
D. Add an Amazon API Gateway endpoint behind the NLBs. Enable API caching. Override method caching for the different stages.
Answer: C
Explanation: This answer is correct because it improves the application performance and
decreases latency for the online game by using AWS Global Accelerator. AWS Global
Accelerator is a networking service that helps you improve the availability, performance,
and security of your public applications. Global Accelerator provides two global static public
IPs that act as a fixed entry point to your application endpoints, such as NLBs, in different
AWS Regions. Global Accelerator uses the AWS global network to route traffic to the
optimal regional endpoint based on health, client location, and policies that you configure.
Global Accelerator also terminates TCP and UDP traffic at the edge locations, which
reduces the number of hops and improves the network performance. By adding AWS
Global Accelerator in front of the NLBs, you can achieve up to 60% improvement in latency.
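For illustration, a hedged boto3 sketch of option C follows. The accelerator name, game port, NLB ARN, and Region are placeholders; note that the Global Accelerator control-plane API is called in us-west-2 regardless of where the endpoints live.

import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(Name="game-accelerator", IpAddressType="IPV4", Enabled=True)
accelerator_arn = accelerator["Accelerator"]["AcceleratorArn"]

# One listener per protocol; port 7777 is a placeholder game port.
for protocol in ("TCP", "UDP"):
    listener = ga.create_listener(
        AcceleratorArn=accelerator_arn,
        Protocol=protocol,
        PortRanges=[{"FromPort": 7777, "ToPort": 7777}],
    )
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion="us-east-1",
        EndpointConfigurations=[{
            # Placeholder ARN of the Regional NLB to register as an endpoint.
            "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/game-nlb/0123456789abcdef"
        }],
    )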
Question # 105
A research company runs experiments that are powered by a simulation application and a visualization application. The simulation application runs on Linux and outputs intermediate data to an NFS share every 5 minutes. The visualization application is a Windows desktop application that displays the simulation output and requires an SMB file system. The company maintains two synchronized file systems. This strategy is causing data duplication and inefficient resource usage. The company needs to migrate the applications to AWS without making code changes to either application. Which solution will meet these requirements?
A. Migrate both applications to AWS Lambda. Create an Amazon S3 bucket to exchange data between the applications.
B. Migrate both applications to Amazon Elastic Container Service (Amazon ECS). Configure Amazon FSx File Gateway for storage.
C. Migrate the simulation application to Linux Amazon EC2 instances. Migrate the visualization application to Windows EC2 instances. Configure Amazon Simple Queue Service (Amazon SQS) to exchange data between the applications.
D. Migrate the simulation application to Linux Amazon EC2 instances. Migrate the visualization application to Windows EC2 instances. Configure Amazon FSx for NetApp ONTAP for storage.
Answer: D
Explanation: This solution will meet the requirements because Amazon FSx for NetApp ONTAP is a fully
managed service that provides highly reliable, scalable, and feature-rich file storage built
on NetApp’s popular ONTAP file system. FSx for ONTAP supports both NFS and SMB
protocols, which means it can be accessed by both Linux and Windows applications
without code changes. FSx for ONTAP also eliminates data duplication and inefficient
resource usage by automatically tiering infrequently accessed data to a lower-cost storage
tier and providing storage efficiency features such as deduplication and compression. FSx for ONTAP also integrates with other AWS services such as Amazon S3, AWS Backup,
and AWS CloudFormation. By migrating the applications to Amazon EC2 instances, the
company can leverage the scalability, security, and performance of AWS compute
resources.
Question # 106
A solutions architect is designing an AWS Identity and Access Management (IAM) authorization model for a company's AWS account. The company has designated five specific employees to have full access to AWS services and resources in the AWS account. The solutions architect has created an IAM user for each of the five designated employees and has created an IAM user group. Which solution will meet these requirements?
A. Attach the AdministratorAccess resource-based policy to the IAM user group. Place each of the five designated employee IAM users in the IAM user group.
B. Attach the SystemAdministrator identity-based policy to the IAM user group. Place each of the five designated employee IAM users in the IAM user group.
C. Attach the AdministratorAccess identity-based policy to the IAM user group. Place each of the five designated employee IAM users in the IAM user group.
D. Attach the SystemAdministrator resource-based policy to the IAM user group. Place each of the five designated employee IAM users in the IAM user group.
Answer: C
Explanation: This solution meets the requirements because it uses the following
components and features:
AdministratorAccess identity-based policy: This is an AWS managed policy that
provides full access to AWS services and resources1. By attaching this policy to
the IAM user group, the solutions architect can grant the permissions needed for
the designated employees to perform any task in the AWS account.
IAM user group: This is a collection of IAM users that share common
permissions2. By creating a user group and adding the five designated employees
as members, the solutions architect can simplify the management of permissions
and reduce the risk of human errors or inconsistencies.
IAM users: These are identities that represent the designated employees in AWS2.
By creating an IAM user for each employee and requiring them to sign in with their
own credentials, the solutions architect can enhance the security and
accountability of the AWS account.
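A short boto3 sketch of option C, with placeholder group and user names:

import boto3

iam = boto3.client("iam")

iam.create_group(GroupName="account-admins")  # placeholder group name
iam.attach_group_policy(
    GroupName="account-admins",
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
)

for user_name in ["admin1", "admin2", "admin3", "admin4", "admin5"]:  # placeholder users
    iam.add_user_to_group(GroupName="account-admins", UserName=user_name)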
Question # 107
A company's ecommerce website has unpredictable traffic and uses AWS Lambda functions to directly access a private Amazon RDS for PostgreSQL DB instance. The company wants to maintain predictable database performance and ensure that the Lambda invocations do not overload the database with too many connections. What should a solutions architect do to meet these requirements?
A. Point the client driver at an RDS custom endpoint. Deploy the Lambda functions inside a VPC.
B. Point the client driver at an RDS proxy endpoint. Deploy the Lambda functions inside a VPC.
C. Point the client driver at an RDS custom endpoint. Deploy the Lambda functions outside a VPC.
D. Point the client driver at an RDS proxy endpoint. Deploy the Lambda functions outside a VPC.
Answer: B
Explanation: To maintain predictable database performance and ensure that the Lambda
invocations do not overload the database with too many connections, a solutions architect should point the client driver at an RDS proxy endpoint and deploy the Lambda functions
inside a VPC. An RDS proxy is a fully managed database proxy that allows applications to
share connections to a database, improving database availability and scalability. By using
an RDS proxy, the Lambda functions can reuse existing connections, rather than creating
new ones for every invocation, reducing the connection overhead and latency. Deploying
the Lambda functions inside a VPC allows them to access the private RDS DB instance
securely and efficiently, without exposing it to the public internet. References:
Using Amazon RDS Proxy with AWS Lambda
Configuring a Lambda function to access resources in a VPC
Question # 108
A company wants to analyze and generate reports to track the usage of its mobile app. The app is popular and has a global user base. The company uses a custom report building program to analyze application usage. The program generates multiple reports during the last week of each month. The program takes less than 10 minutes to produce each report. The company rarely uses the program to generate reports outside of the last week of each month. The company wants to generate reports in the least amount of time when the reports are requested. Which solution will meet these requirements MOST cost-effectively?
A. Run the program by using Amazon EC2 On-Demand Instances. Create an Amazon EventBridge rule to start the EC2 instances when reports are requested. Run the EC2 instances continuously during the last week of each month.
B. Run the program in AWS Lambda. Create an Amazon EventBridge rule to run a Lambda function when reports are requested.
C. Run the program in Amazon Elastic Container Service (Amazon ECS). Schedule Amazon ECS to run the program when reports are requested.
D. Run the program by using Amazon EC2 Spot Instances. Create an Amazon EventBridge rule to start the EC2 instances when reports are requested. Run the EC2 instances continuously during the last week of each month.
Answer: B
Explanation: This solution meets the requirements most cost-effectively because it
leverages the serverless and event-driven capabilities of AWS Lambda and Amazon
EventBridge. AWS Lambda allows you to run code without provisioning or managing
servers, and you pay only for the compute time you consume. Amazon EventBridge is a
serverless event bus service that lets you connect your applications with data from various
sources and routes that data to targets such as AWS Lambda. By using Amazon
EventBridge, you can create a rule that triggers a Lambda function to run the program
when reports are requested, and you can also schedule the rule to run during the last week
of each month. This way, you can generate reports in the least amount of time and pay
only for the resources you use.
References:
AWS Lambda
Amazon EventBridge
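Because each report finishes in under 10 minutes, it fits comfortably within Lambda's 15-minute limit. A hedged boto3 sketch of wiring an EventBridge rule to a report-building Lambda function follows; the rule name, event pattern, and function ARN are placeholders.

import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

events.put_rule(
    Name="report-requested",
    EventPattern='{"source": ["custom.reporting"], "detail-type": ["ReportRequested"]}',
    State="ENABLED",
)

events.put_targets(
    Rule="report-requested",
    Targets=[{
        "Id": "report-builder",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:report-builder",  # placeholder
    }],
)

# EventBridge also needs permission to invoke the function.
lambda_client.add_permission(
    FunctionName="report-builder",
    StatementId="AllowEventBridgeInvoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn="arn:aws:events:us-east-1:123456789012:rule/report-requested",  # placeholder
)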
Question # 109
A company has established a new AWS account. The account is newly provisioned and no changes have been made to the default settings. The company is concerned about the security of the AWS account root user. What should be done to secure the root user?
A. Create IAM users for daily administrative tasks. Disable the root user.
B. Create IAM users for daily administrative tasks. Enable multi-factor authentication on the root user.
C. Generate an access key for the root user. Use the access key for daily administration tasks instead of the AWS Management Console.
D. Provide the root user credentials to the most senior solutions architect. Have the solutions architect use the root user for daily administration tasks.
Answer: B
Explanation: This answer is the most secure and recommended option for securing the
root user of a new AWS account. The root user is the identity that has complete access to
all AWS services and resources in the account. It is accessed by signing in with the email
address and password that were used to create the account. To protect the root user
credentials from unauthorized use, AWS advises the following best practices:
Create IAM users for daily administrative tasks. IAM users are identities that you
create in your account that have specific permissions to access AWS resources.
You can create individual IAM users for yourself and for others who need access
to your account. You can also assign IAM users to IAM groups that have a set of
policies that grant permissions to perform common tasks. By using IAM users
instead of the root user, you can follow the principle of least privilege and reduce
the risk of compromising your account.
Enable multi-factor authentication (MFA) on the root user. MFA is a security
feature that requires users to prove their identity by providing two pieces of
information: their password and a code from a device that only they have access
to. By enabling MFA on the root user, you can add an extra layer of protection to
your account and prevent unauthorized access even if your password is
compromised.
Limit the tasks you perform with the root user account. You should use the root
user only for tasks that require root user credentials, such as changing your
account settings, closing your account, or managing consolidated billing. For a
complete list of tasks that require root user credentials, see Tasks that require root
user credentials. For all other tasks, you should use IAM users or roles that have
the appropriate permissions.
References:
AWS account root user
Root user best practices for your AWS account
Tasks that require root user credentials
Question # 110
A company has users all around the world accessing its HTTP-based application deployed on Amazon EC2 instances in multiple AWS Regions. The company wants to improve the availability and performance of the application. The company also wants to protect the application against common web exploits that may affect availability, compromise security, or consume excessive resources. Static IP addresses are required. What should a solutions architect recommend to accomplish this?
A. Put the EC2 instances behind Network Load Balancers (NLBs) in each Region. Deploy AWS WAF on the NLBs. Create an accelerator using AWS Global Accelerator and register the NLBs as endpoints.
B. Put the EC2 instances behind Application Load Balancers (ALBs) in each Region. Deploy AWS WAF on the ALBs. Create an accelerator using AWS Global Accelerator and register the ALBs as endpoints.
C. Put the EC2 instances behind Network Load Balancers (NLBs) in each Region. Deploy AWS WAF on the NLBs. Create an Amazon CloudFront distribution with an origin that uses Amazon Route 53 latency-based routing to route requests to the NLBs.
D. Put the EC2 instances behind Application Load Balancers (ALBs) in each Region. Create an Amazon CloudFront distribution with an origin that uses Amazon Route 53 latency-based routing to route requests to the ALBs. Deploy AWS WAF on the CloudFront distribution.
Answer: B
Explanation: The company wants to improve the availability and performance of the
application, as well as protect it against common web exploits. The company also needs
static IP addresses for the application. To meet these requirements, a solutions architect
should recommend the following solution:
Put the EC2 instances behind Application Load Balancers (ALBs) in each Region. ALBs route HTTP and HTTPS requests to the EC2 instances across multiple Availability Zones and perform health checks on the targets.
Deploy AWS WAF on the ALBs. AWS WAF is a web application firewall that helps protect web applications from common web exploits that could affect availability, security, or performance. AWS WAF lets you define customizable web security rules that control which traffic to allow or block. AWS WAF can be associated with ALBs, Amazon CloudFront distributions, and Amazon API Gateway, but not with Network Load Balancers, which is why the NLB-based options cannot satisfy the web exploit protection requirement.
Create an accelerator using AWS Global Accelerator and register the ALBs as endpoints. AWS Global Accelerator is a service that improves the availability and performance of your applications with local or global users. It provides static IP addresses that act as a fixed entry point to your application endpoints in any AWS Region, which meets the static IP address requirement, and it uses the AWS global network to optimize the path from your users to your applications.
This solution will provide high availability across Availability Zones and Regions, improve
performance by routing traffic over the AWS global network, protect the application from
common web attacks, and provide static IP addresses for the application.
References:
Application Load Balancer
AWS WAF
AWS Global Accelerator
Question # 111
A company runs multiple workloads in its on-premises data center. The company's data center cannot scale fast enough to meet the company's expanding business needs. The company wants to collect usage and configuration data about the on-premises servers and workloads to plan a migration to AWS. Which solution will meet these requirements?
A. Set the home AWS Region in AWS Migration Hub. Use AWS Systems Manager to collect data about the on-premises servers.
B. Set the home AWS Region in AWS Migration Hub. Use AWS Application Discovery Service to collect data about the on-premises servers.
C. Use the AWS Schema Conversion Tool (AWS SCT) to create the relevant templates. Use AWS Trusted Advisor to collect data about the on-premises servers.
D. Use the AWS Schema Conversion Tool (AWS SCT) to create the relevant templates. Use AWS Database Migration Service (AWS DMS) to collect data about the on-premises servers.
Answer: B
Explanation: The most suitable solution for the company’s requirements is to set the home
AWS Region in AWS Migration Hub and use AWS Application Discovery Service to collect
data about the on-premises servers. This solution will enable the company to gather usage
and configuration data of its on-premises servers and workloads, and plan a migration to
AWS.
AWS Migration Hub is a service that simplifies and accelerates migration tracking by
aggregating migration status information into a single console. Users can view the
discovered servers, group them into applications, and track the migration status of each
application from the Migration Hub console in their home Region. The home Region is the
AWS Region where users store their migration data, regardless of which Regions they
migrate into1.
AWS Application Discovery Service is a service that helps users plan their migration to
AWS by collecting usage and configuration data about their on-premises servers and
databases. Application Discovery Service is integrated with AWS Migration Hub and
supports two methods of performing discovery: agentless discovery and agent-based
discovery. Agentless discovery can be performed by deploying the Application Discovery Service Agentless Collector through VMware vCenter, which collects static configuration
data and utilization data for virtual machines (VMs) and databases. Agent-based discovery
can be performed by deploying the AWS Application Discovery Agent on each of the VMs
and physical servers, which collects static configuration data, detailed time-series system performance
information, inbound and outbound network connections, and processes that
are running2.
The other options are not correct because they do not meet the requirements or are not
relevant for the use case. Using the AWS Schema Conversion Tool (AWS SCT) to create
the relevant templates and using AWS Trusted Advisor to collect data about the on-premises
servers is not correct because this solution is not suitable for collecting usage
and configuration data of on-premises servers and workloads. AWS SCT is a tool that
helps users convert database schemas and code objects from one database engine to
another, such as from Oracle to PostgreSQL3. AWS Trusted Advisor is a service that
provides best practice recommendations for cost optimization, performance, security, fault
tolerance, and service limits4. Using the AWS Schema Conversion Tool (AWS SCT) to
create the relevant templates and using AWS Database Migration Service (AWS DMS) to
collect data about the on-premises servers is not correct because this solution is not
suitable for collecting usage and configuration data of on-premises servers and workloads.
As mentioned above, AWS SCT is a tool that helps users convert database schemas and
code objects from one database engine to another. AWS DMS is a service that helps users
migrate relational databases, non-relational databases, and other types of data stores to
AWS with minimal downtime5.
References:
Home Region - AWS Migration Hub
What is AWS Application Discovery Service? - AWS Application Discovery Service
AWS Schema Conversion Tool - Amazon Web Services
What Is Trusted Advisor? - Trusted Advisor
What Is AWS Database Migration Service? - AWS Database Migration Service
Question # 112
A company is designing a new web service that will run on Amazon EC2 instances behind an Elastic Load Balancing (ELB) load balancer. However, many of the web service clients can only reach IP addresses authorized on their firewalls. What should a solutions architect recommend to meet the clients' needs?
A. A Network Load Balancer with an associated Elastic IP address.
B. An Application Load Balancer with an associated Elastic IP address.
C. An A record in an Amazon Route 53 hosted zone pointing to an Elastic IP address.
D. An EC2 instance with a public IP address running as a proxy in front of the load balancer.
Answer: A
Explanation: A Network Load Balancer can be assigned one Elastic IP address for each
Availability Zone it uses1. This allows the clients to reach the load balancer using a static
IP address that can be authorized on their firewalls. An Application Load Balancer cannot
be assigned an Elastic IP address2. An A record in an Amazon Route 53 hosted zone
pointing to an Elastic IP address would not work because the load balancer would still use
its own IP address as the source of the forwarded requests to the web service. An EC2
instance with a public IP address running as a proxy in front of the load balancer would add
unnecessary complexity and cost, and would not provide the same scalability and
availability as a Network Load Balancer. References: 1: Network Load Balancers - Elastic
Load Balancing3, IP address type section2: How to assign Elastic IP to Application Load
Balancer in AWS?4, answer section.
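A minimal boto3 sketch of option A, associating one Elastic IP allocation per Availability Zone (subnet IDs and allocation IDs are placeholders):

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

elbv2.create_load_balancer(
    Name="web-service-nlb",
    Type="network",
    Scheme="internet-facing",
    SubnetMappings=[
        # One subnet and one Elastic IP allocation per Availability Zone.
        {"SubnetId": "subnet-0aaa1111bbb22222c", "AllocationId": "eipalloc-0123456789abcdef0"},
        {"SubnetId": "subnet-0ddd3333eee44444f", "AllocationId": "eipalloc-0fedcba9876543210"},
    ],
)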
Question # 113
A company is running its production and nonproduction environment workloads in multiple AWS accounts. The accounts are in an organization in AWS Organizations. The company needs to design a solution that will prevent the modification of cost usage tags. Which solution will meet these requirements?
A. Create a custom AWS Config rule to prevent tag modification except by authorized principals.
B. Create a custom trail in AWS CloudTrail to prevent tag modification.
C. Create a service control policy (SCP) to prevent tag modification except by authorized principals.
D. Create custom Amazon CloudWatch logs to prevent tag modification.
Answer: C
Explanation: This solution meets the requirements because it uses SCPs to restrict the
actions that can be performed on cost usage tags in the organization. SCPs are a type of
organization policy that you can use to manage permissions in your organization. SCPs
specify the maximum permissions for an organization, organizational unit (OU), or account.
You can use SCPs to enforce consistent tag policies across your organization and prevent
unauthorized or accidental changes to your tags. You can also create exceptions for
authorized principals, such as administrators or auditors, who need to modify tags for
legitimate purposes.
References:
Service control policies (SCPs) - AWS Organizations
Tag policies - AWS Organizations
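The SCP below is only an illustration of the idea, not an official policy: it denies changes to a hypothetical CostCenter tag key unless the caller uses a designated tag-administration role, and it is created and attached with the Organizations API. All names and the root ID are placeholders.

import boto3
import json

org = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["ec2:CreateTags", "ec2:DeleteTags"],
        "Resource": "*",
        "Condition": {
            # Applies only when the CostCenter tag key is being changed.
            "ForAnyValue:StringEquals": {"aws:TagKeys": ["CostCenter"]},
            # Exception for the designated tag-administration role (placeholder name).
            "StringNotLike": {"aws:PrincipalArn": "arn:aws:iam::*:role/TagAdmin"},
        },
    }],
}

policy = org.create_policy(
    Name="DenyCostTagChanges",
    Description="Prevent modification of cost allocation tags",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid",  # placeholder root or OU ID
)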
Question # 114
A company stores multiple Amazon Machine Images (AMIs) in an AWS account to launch its Amazon EC2 instances. The AMIs contain critical data and configurations that are necessary for the company's operations. The company wants to implement a solution that will recover accidentally deleted AMIs quickly and efficiently. Which solution will meet these requirements with the LEAST operational overhead?
A. Create Amazon Elastic Block Store (Amazon EBS) snapshots of the AMIs. Store the snapshots in a separate AWS account.
B. Copy all AMIs to another AWS account periodically.
C. Create a retention rule in Recycle Bin.
D. Upload the AMIs to an Amazon S3 bucket that has Cross-Region Replication.
Answer: C
Explanation: Recycle Bin is a data recovery feature that enables you to restore
accidentally deleted Amazon EBS snapshots and EBS-backed AMIs. When using Recycle
Bin, if your resources are deleted, they are retained in the Recycle Bin for a time period
that you specify before being permanently deleted. You can restore a resource from the
Recycle Bin at any time before its retention period expires. This solution has the least
operational overhead, as you do not need to create, copy, or upload any additional
resources. You can also manage tags and permissions for AMIs in the Recycle Bin. AMIs
in the Recycle Bin do not incur any additional charges. References:
Recover AMIs from the Recycle Bin
Recover an accidentally deleted Linux AMI
Question # 115
An ecommerce company is running a seasonal online sale. The company hosts its website on Amazon EC2 instances spanning multiple Availability Zones. The company wants its website to manage sudden traffic increases during the sale. Which solution will meet these requirements MOST cost-effectively?
A. Create an Auto Scaling group that is large enough to handle peak traffic load. Stop half of the Amazon EC2 instances. Configure the Auto Scaling group to use the stopped instances to scale out when traffic increases.
B. Create an Auto Scaling group for the website. Set the minimum size of the Auto Scaling group so that it can handle high traffic volumes without the need to scale out.
C. Use Amazon CloudFront and Amazon ElastiCache to cache dynamic content with an Auto Scaling group set as the origin. Configure the Auto Scaling group with the instances necessary to populate CloudFront and ElastiCache. Scale in after the cache is fully populated.
D. Configure an Auto Scaling group to scale out as traffic increases. Create a launch template to start new instances from a preconfigured Amazon Machine Image (AMI).
Answer: D
Explanation:
The most cost-effective way to handle sudden, short-lived traffic increases is to let the Auto Scaling group scale out automatically as demand grows and scale in when demand drops, so the company pays only for the capacity that is actually in use. A launch template that references a preconfigured AMI lets new instances start quickly with the application already installed, which matters during sudden spikes. Keeping stopped instances on standby or setting the minimum group size for peak load leaves capacity idle and increases cost, and caching dynamic content in CloudFront and ElastiCache adds complexity without guaranteeing that uncacheable requests can be served during the spike.
References:
Amazon EC2 Auto Scaling
Launch templates for Amazon EC2 Auto Scaling
Question # 116
A company needs a solution to prevent photos with unwanted content from being uploaded to the company's web application. The solution must not involve training a machine learning (ML) model. Which solution will meet these requirements?
A. Create and deploy a model by using Amazon SageMaker Autopilot. Create a real-time endpoint that the web application invokes when new photos are uploaded.
B. Create an AWS Lambda function that uses Amazon Rekognition to detect unwanted content. Create a Lambda function URL that the web application invokes when new photos are uploaded.
C. Create an Amazon CloudFront function that uses Amazon Comprehend to detect unwanted content. Associate the function with the web application.
D. Create an AWS Lambda function that uses Amazon Rekognition Video to detect unwanted content. Create a Lambda function URL that the web application invokes when new photos are uploaded.
Answer: B
Explanation:
The solution that will meet the requirements is to create an AWS Lambda function that
uses Amazon Rekognition to detect unwanted content, and create a Lambda function URL that the web application invokes when new photos are uploaded. This solution does not
involve training a machine learning model, as Amazon Rekognition is a fully managed
service that provides pre-trained computer vision models for image and video analysis.
Amazon Rekognition can detect unwanted content such as explicit or suggestive adult
content, violence, weapons, drugs, and more. By using AWS Lambda, the company can
create a serverless function that can be triggered by an HTTP request from the web
application. The Lambda function can use the Amazon Rekognition API to analyze the
uploaded photos and return a response indicating whether they contain unwanted content
or not.
The other solutions are not as effective as the first one because they either involve training
a machine learning model, do not support image analysis, or do not work with photos.
Creating and deploying a model by using Amazon SageMaker Autopilot involves training a
machine learning model, which is not required for the scenario. Amazon SageMaker
Autopilot is a service that automatically creates, trains, and tunes the best machine
learning models for classification or regression based on the data provided by the user.
Creating an Amazon CloudFront function that uses Amazon Comprehend to detect
unwanted content does not support image analysis, as Amazon Comprehend is a natural
language processing service that analyzes text, not images. Amazon Comprehend can
extract insights and relationships from text such as language, sentiment, entities, topics,
and more. Creating an AWS Lambda function that uses Amazon Rekognition Video to
detect unwanted content does not work with photos, as Amazon Rekognition Video is
designed for analyzing video streams, not static images. Amazon Rekognition Video can
detect activities, objects, faces, celebrities, text, and more in video streams.
References:
Amazon Rekognition
AWS Lambda
Detecting unsafe content - Amazon Rekognition
Amazon SageMaker Autopilot
Amazon Comprehend
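A hedged sketch of the Lambda handler behind option B follows. The event fields (bucket and key) are an assumption about how the web application would call the function URL, and the confidence threshold is arbitrary.

import boto3

rekognition = boto3.client("rekognition")

def handler(event, context):
    # The bucket/key fields are an assumed request shape for the function URL.
    response = rekognition.detect_moderation_labels(
        Image={"S3Object": {"Bucket": event["bucket"], "Name": event["key"]}},
        MinConfidence=80,
    )
    labels = [label["Name"] for label in response["ModerationLabels"]]
    # Any returned label (for example "Explicit Nudity" or "Violence") means the photo is rejected.
    return {"allowed": len(labels) == 0, "labels": labels}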
Question # 117
A solutions architect creates a VPC that includes two public subnets and two private subnets. A corporate security mandate requires the solutions architect to launch all Amazon EC2 instances in a private subnet. However, when the solutions architect launches an EC2 instance that runs a web server on ports 80 and 443 in a private subnet, no external internet traffic can connect to the server. What should the solutions architect do to resolve this issue?
A. Attach the EC2 instance to an Auto Scaling group in a private subnet. Ensure that the DNS record for the website resolves to the Auto Scaling group identifier.
B. Provision an internet-facing Application Load Balancer (ALB) in a public subnet. Add the EC2 instance to the target group that is associated with the ALB. Ensure that the DNS record for the website resolves to the ALB.
C. Launch a NAT gateway in a private subnet. Update the route table for the private subnets to add a default route to the NAT gateway. Attach a public Elastic IP address to the NAT gateway.
D. Ensure that the security group that is attached to the EC2 instance allows HTTP traffic on port 80 and HTTPS traffic on port 443. Ensure that the DNS record for the website resolves to the public IP address of the EC2 instance.
Answer: B
Explanation: An Application Load Balancer (ALB) is a type of Elastic Load Balancer (ELB)
that distributes incoming application traffic across multiple targets, such as EC2 instances,
containers, Lambda functions, and IP addresses, in multiple Availability Zones1. An ALB
can be internet-facing or internal. An internet-facing ALB has a public DNS name that
clients can use to send requests over the internet1. An internal ALB has a private DNS
name that clients can use to send requests within a VPC1. This solution meets the
requirements of the question because:
It allows external internet traffic to connect to the web server on ports 80 and 443,
as the ALB listens for requests on these ports and forwards them to the EC2
instance in the private subnet1.
It does not violate the corporate security mandate, as the EC2 instance is
launched in a private subnet and does not have a public IP address or a route to
an internet gateway2.
It reduces the operational overhead, as the ALB is a fully managed service that
handles the tasks of load balancing, health checking, scaling, and security1.
Question # 118
A company has deployed its newest product on AWS. The product runs in an Auto Scaling group behind a Network Load Balancer. The company stores the product's objects in an Amazon S3 bucket. The company recently experienced malicious attacks against its systems. The company needs a solution that continuously monitors for malicious activity in the AWS account, workloads, and access patterns to the S3 bucket. The solution must also report suspicious activity and display the information on a dashboard. Which solution will meet these requirements?
A. Configure Amazon Macie to monitor and report findings to AWS Config. B. Configure Amazon Inspector to monitor and report findings to AWS CloudTrail. C. Configure Amazon GuardDuty to monitor and report findings to AWS Security Hub. D. Configure AWS Config to monitor and report findings to Amazon EventBridge.
Answer: C
Explanation: Amazon GuardDuty is a threat detection service that continuously monitors
for malicious activity and unauthorized behavior across the AWS account and workloads.
GuardDuty analyzes data sources such as AWS CloudTrail event logs, Amazon VPC Flow
Logs, and DNS logs to identify potential threats such as compromised instances,
reconnaissance, port scanning, and data exfiltration. GuardDuty can report its findings to
AWS Security Hub, which is a service that provides a comprehensive view of the security
posture of the AWS account and workloads. Security Hub aggregates, organizes, and
prioritizes security alerts from multiple AWS services and partner solutions, and displays
them on a dashboard. This solution will meet the requirements, as it enables continuous
monitoring, reporting, and visualization of malicious activity in the AWS account, workloads,
and access patterns to the S3 bucket.
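As a rough sketch of how this could be switched on programmatically (boto3, illustrative only; GuardDuty's S3 protection and Security Hub standards can also be enabled from the console):
```python
import boto3

guardduty = boto3.client("guardduty")
securityhub = boto3.client("securityhub")

# Turn on GuardDuty in the account; findings are generated automatically
# from CloudTrail events, VPC Flow Logs, and DNS logs.
detector = guardduty.create_detector(Enable=True)
print("GuardDuty detector:", detector["DetectorId"])

# Enable Security Hub so GuardDuty findings are aggregated on its dashboard.
securityhub.enable_security_hub(EnableDefaultStandards=True)
```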
References:
1 provides an overview of Amazon GuardDuty and its benefits.
2 explains how GuardDuty generates and reports findings based on threat
detection.
3 provides an overview of AWS Security Hub and its benefits.
4 describes how Security Hub collects and displays findings from multiple sources
on a dashboard
Question # 119
A development team is collaborating with another company to create an integrated product. The other company needs to access an Amazon Simple Queue Service (Amazon SQS) queue that is contained in the development team's account. The other company wants to poll the queue without giving up its own account permissions to do so. How should a solutions architect provide access to the SQS queue?
A. Create an instance profile that provides the other company access to the SQS queue. B. Create an IAM policy that provides the other company access to the SQS queue. C. Create an SQS access policy that provides the other company access to the SQS queue. D. Create an Amazon Simple Notification Service (Amazon SNS) access policy that provides the other company access to the SQS queue.
Answer: C
Explanation: To provide access to the SQS queue to the other company without giving up
its own account permissions, a solutions architect should create an SQS access policy that
provides the other company access to the SQS queue. An SQS access policy is a
resource-based policy that defines who can access the queue and what actions they can
perform. The policy can specify the AWS account ID of the other company as a principal,
and grant permissions for actions such as sqs:ReceiveMessage, sqs:DeleteMessage,
and sqs:GetQueueAttributes. This way, the other company can poll the queue using its
own credentials, without needing to assume a role or use cross-account access
keys. References:
Using identity-based policies (IAM policies) for Amazon SQS
Using custom policies with the Amazon SQS access policy language
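A minimal sketch of such a resource-based queue policy, applied with boto3 (the queue URL, queue ARN, and partner account ID are hypothetical):
```python
import json
import boto3

sqs = boto3.client("sqs")

# Hypothetical values for illustration.
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/integration-queue"
queue_arn = "arn:aws:sqs:us-east-1:111122223333:integration-queue"
partner_account_id = "999988887777"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPartnerToPoll",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{partner_account_id}:root"},
            "Action": [
                "sqs:ReceiveMessage",
                "sqs:DeleteMessage",
                "sqs:GetQueueAttributes",
            ],
            "Resource": queue_arn,
        }
    ],
}

# Attach the access policy directly to the queue as a resource-based policy.
sqs.set_queue_attributes(QueueUrl=queue_url, Attributes={"Policy": json.dumps(policy)})
```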
Question # 120
A company has an organization in AWS Organizations. The company runs Amazon EC2 instances across four AWS accounts in the root organizational unit (OU). There are three nonproduction accounts and one production account. The company wants to prohibit users from launching EC2 instances of a certain size in the nonproduction accounts. The company has created a service control policy (SCP) to deny access to launch instances that use the prohibited types. Which solutions to deploy the SCP will meet these requirements? (Select TWO.)
A. Attach the SCP to the root OU for the organization. B. Attach the SCP to the three nonproduction Organizations member accounts. C. Attach the SCP to the Organizations management account. D. Create an OU for the production account. Attach the SCP to the OU. Move the production member account into the new OU. E. Create an OU for the required accounts. Attach the SCP to the OU. Move the nonproduction member accounts into the new OU.
Answer: B,E
Explanation: SCPs are a type of organization policy that you can use to manage
permissions in your organization. SCPs offer central control over the maximum available
permissions for all accounts in your organization. SCPs help you to ensure your accounts
stay within your organization’s access control guidelines1.
To apply an SCP to a specific set of accounts, you need to create an OU for those
accounts and attach the SCP to the OU. This way, the SCP affects only the member
accounts in that OU and not the other accounts in the organization. If you attach the SCP
to the root OU, it will apply to all accounts in the organization, including the production
account, which is not the desired outcome. If you attach the SCP to the management
account, it will have no effect, as SCPs do not affect users or roles in the management
account1.
Therefore, the best solutions to deploy the SCP are B and E. Option B attaches the SCP
directly to the three nonproduction accounts, while option E creates a separate OU for the
nonproduction accounts and attaches the SCP to the OU. Both options will achieve the
same result of restricting the EC2 instance types in the nonproduction accounts, but option E might be more scalable and manageable if there are more accounts or policies to be
applied in the future2.
References:
1: Service control policies (SCPs) - AWS Organizations
2: Best Practices for AWS Organizations Service Control Policies in a Multi-
Account Environment
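A rough sketch of option E with boto3 (the root ID, policy ID, and account IDs are hypothetical placeholders):
```python
import boto3

org = boto3.client("organizations")

# Hypothetical IDs for illustration.
root_id = "r-examp"
scp_id = "p-examplepolicy"  # the SCP that denies the prohibited instance types
nonprod_account_ids = ["111111111111", "222222222222", "333333333333"]

# Create an OU for the nonproduction accounts and attach the SCP to it.
ou = org.create_organizational_unit(ParentId=root_id, Name="NonProduction")
ou_id = ou["OrganizationalUnit"]["Id"]
org.attach_policy(PolicyId=scp_id, TargetId=ou_id)

# Move the nonproduction member accounts into the new OU.
for account_id in nonprod_account_ids:
    org.move_account(
        AccountId=account_id,
        SourceParentId=root_id,
        DestinationParentId=ou_id,
    )
```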
Question # 121
A company uses an organization in AWS Organizations to manage AWS accounts that contain applications. The company sets up a dedicated monitoring member account in the organization. The company wants to query and visualize observability data across the accounts by using Amazon CloudWatch. Which solution will meet these requirements?
A. Enable CloudWatch cross-account observability for the monitoring account. Deploy an AWS CloudFormation template provided by the monitoring account in each AWS account to share the data with the monitoring account. B. Set up service control policies (SCPs) to provide access to CloudWatch in the monitoring account under the Organizations root organizational unit (OU). C. Configure a new IAM user in the monitoring account. In each AWS account, configure an IAM policy to have access to query and visualize the CloudWatch data in the account. Attach the new IAM policy to the new IAM user. D. Create a new IAM user in the monitoring account. Create cross-account IAM policies in each AWS account. Attach the IAM policies to the new IAM user.
Answer: A
Explanation:
CloudWatch cross-account observability is a feature that allows you to monitor and
troubleshoot applications that span multiple accounts within a Region. You can seamlessly
search, visualize, and analyze your metrics, logs, traces, and Application Insights
applications in any of the linked accounts without account boundaries1. To enable
CloudWatch cross-account observability, you need to set up one or more AWS accounts as
monitoring accounts and link them with multiple source accounts. A monitoring account is a
central AWS account that can view and interact with observability data shared by other
accounts. A source account is an individual AWS account that shares observability data
and resources with one or more monitoring accounts1. To create links between monitoring
accounts and source accounts, you can use the CloudWatch console, the AWS CLI, or the
AWS API. You can also use AWS Organizations to link accounts in an organization or
organizational unit to the monitoring account1. CloudWatch provides a CloudFormation template that you can deploy in each source account to share observability data with the
monitoring account. The template creates a sink resource in the monitoring account and an
observability link resource in the source account. The template also creates the necessary
IAM roles and policies to allow cross-account access to the observability data2. Therefore,
the solution that meets the requirements of the question is to enable CloudWatch cross-account
observability for the monitoring account and deploy the CloudFormation template
provided by the monitoring account in each AWS account to share the data with the
monitoring account.
The other options are not valid because:
Service control policies (SCPs) are a type of organization policy that you can use
to manage permissions in your organization. SCPs offer central control over the
maximum available permissions for all accounts in your organization, allowing you
to ensure your accounts stay within your organization’s access control guidelines3.
SCPs do not provide access to CloudWatch in the monitoring account, but rather
restrict the actions that users and roles can perform in the source accounts. SCPs
are not required to enable CloudWatch cross-account observability, as the
CloudFormation template creates the necessary IAM roles and policies for cross-account
access2.
IAM users are entities that you create in AWS to represent the people or
applications that use them to interact with AWS. IAM users can have permissions
to access the resources in your AWS account4. Configuring a new IAM user in the
monitoring account and an IAM policy in each AWS account to have access to
query and visualize the CloudWatch data in the account is not a valid solution, as it
does not enable CloudWatch cross-account observability. This solution would
require the IAM user to switch between different accounts to view the observability
data, which is not seamless and efficient. Moreover, this solution would not allow
the IAM user to search, visualize, and analyze metrics, logs, traces, and
Application Insights applications across multiple accounts in a single place1.
Cross-account IAM policies are policies that allow you to delegate access to
resources that are in different AWS accounts that you own. You attach a cross-account
policy to a user or group in one account, and then specify which accounts
the user or group can access5. Creating a new IAM user in the monitoring account
and cross-account IAM policies in each AWS account is not a valid solution, as it
does not enable CloudWatch cross-account observability. This solution would also
require the IAM user to switch between different accounts to view the observability
data, which is not seamless and efficient. Moreover, this solution would not allow
the IAM user to search, visualize, and analyze metrics, logs, traces, and
Application Insights applications across multiple accounts in a single place1.
References: CloudWatch cross-account observability, CloudFormation template for
CloudWatch cross-account observability, Service control policies, IAM users, Cross-account
IAM policies
Question # 122
A company wants to rearchitect a large-scale web application to a serverless microservices architecture. The application uses Amazon EC2 instances and is written in Python. The company selected one component of the web application to test as a microservice. The component supports hundreds of requests each second. The company wants to create and test the microservice on an AWS solution that supports Python. The solution must also scale automatically and require minimal infrastructure and minimal operational support. Which solution will meet these requirements?
A. Use a Spot Fleet with auto scaling of EC2 instances that run the most recent Amazon Linux operating system. B. Use an AWS Elastic Beanstalk web server environment that has high availability configured. C. Use Amazon Elastic Kubernetes Service (Amazon EKS). Launch Auto Scaling groups of self-managed EC2 instances. D. Use an AWS Lambda function that runs custom developed code.
Answer: D
Explanation: AWS Lambda is a serverless compute service that lets you run code without
provisioning or managing servers. You can use Lambda to create and test microservices
that are written in Python or other supported languages. Lambda scales automatically to
handle the number of requests per second. You only pay for the compute time you
consume. Lambda also integrates with other AWS services, such as Amazon API
Gateway, Amazon S3, Amazon DynamoDB, and Amazon SQS, to enable event-driven
architectures. Lambda has minimal infrastructure and operational overhead, as you do not
need to manage servers, operating systems, patches, or scaling policies.
The other options are not serverless solutions and require more infrastructure and
operational support. They also do not scale automatically to handle the number of requests
per second. A Spot Fleet is a collection of EC2 instances that run on spare capacity at low
prices. However, Spot Instances can be interrupted by AWS at any time, which can affect
the availability and performance of your microservice. AWS Elastic Beanstalk is a service
that automates the deployment and management of web applications on EC2 instances.
However, you still need to provision, configure, and monitor the underlying EC2 instances
and load balancers. Amazon EKS is a service that runs Kubernetes on AWS. However, you
still need to create, configure, and manage the EC2 instances that form the Kubernetes
cluster and nodes. You also need to install and update the Kubernetes software and tools.
References:
What is AWS Lambda?
Building Lambda functions with Python
Create a layer for a Lambda Python function
AWS Lambda – Function in Python
How do I call my AWS Lambda function from a local python script?
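For context, a microservice component like this can be as small as a single Python handler; the sketch below is illustrative only, and the payload shape and logic are assumptions rather than part of the scenario:
```python
import json

def lambda_handler(event, context):
    # Hypothetical microservice logic: process one component's request.
    # API Gateway (or a function URL) would route hundreds of requests per
    # second here, and Lambda scales concurrent executions automatically.
    payload = json.loads(event.get("body") or "{}")
    result = {"status": "processed", "items": len(payload.get("items", []))}
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(result),
    }
```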
Question # 123
A company uses an on-premises network-attached storage (NAS) system to provide file shares to its high performance computing (HPC) workloads. The company wants to migrate its latency-sensitive HPC workloads and its storage to the AWS Cloud. The company must be able to provide NFS and SMB multi-protocol access from the file system. Which solution will meet these requirements with the LEAST latency? (Select TWO.)
A. Deploy compute optimized EC2 instances into a cluster placement group. B. Deploy compute optimized EC2 instances into a partition placement group. C. Attach the EC2 instances to an Amazon FSx for Lustre file system. D. Attach the EC2 instances to an Amazon FSx for OpenZFS file system. E. Attach the EC2 instances to an Amazon FSx for NetApp ONTAP file system.
Answer: A,E
Explanation: A cluster placement group is a logical grouping of EC2 instances within a
single Availability Zone that are placed close together to minimize network latency. This is
suitable for latency-sensitive HPC workloads that require high network performance. A
compute optimized EC2 instance is an instance type that has a high ratio of vCPUs to
memory, which is ideal for compute-intensive applications. Amazon FSx for NetApp
ONTAP is a fully managed service that provides NFS and SMB multi-protocol access from
the file system, as well as features such as data deduplication, compression, thin
provisioning, and snapshots. This solution will meet the requirements with the least latency,
as it leverages the low-latency network and storage performance of AWS.
References:
1 explains how cluster placement groups work and their benefits.
2 describes the characteristics and use cases of compute optimized EC2
instances.
3 provides an overview of Amazon FSx for NetApp ONTAP and its features.
Question # 124
A company needs to connect several VPCs in the us-east-1 Region that span hundreds of AWS accounts. The company's networking team has its own AWS account to manage the cloud network. What is the MOST operationally efficient solution to connect the VPCs?
A. Set up VPC peering connections between each VPC. Update each associated subnet's route table. B. Configure a NAT gateway and an internet gateway in each VPC to connect each VPC through the internet. C. Create an AWS Transit Gateway in the networking team's AWS account. Configure static routes from each VPC. D. Deploy VPN gateways in each VPC. Create a transit VPC in the networking team's AWS account to connect to each VPC.
Answer: C
Explanation: AWS Transit Gateway is a highly scalable and centralized hub for connecting
multiple VPCs, on-premises networks, and remote networks. It simplifies network
connectivity by providing a single entry point and reducing the number of connections
required. In this scenario, deploying an AWS Transit Gateway in the networking team's
AWS account allows for efficient management and control over the network connectivity
across multiple VPCs.
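A simplified sketch of the networking-account side with boto3 (the VPC, subnet, route table IDs, and CIDR summary are placeholders; in practice the transit gateway would be shared with the other accounts through AWS RAM before their VPCs are attached):
```python
import boto3

ec2 = boto3.client("ec2")

# Create the transit gateway in the networking team's account.
tgw = ec2.create_transit_gateway(Description="Central hub for us-east-1 VPCs")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach one spoke VPC to the transit gateway (IDs are illustrative).
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
)

# Add a static route in the VPC's route table toward the other VPCs via the TGW.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="10.0.0.0/8",  # illustrative summary of the other VPC CIDRs
    TransitGatewayId=tgw_id,
)
```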
Question # 125
A company’s infrastructure consists of Amazon EC2 instances and an Amazon RDS DB instance in a single AWS Region. The company wants to back up its data in a separate Region. Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Backup to copy EC2 backups and RDS backups to the separate Region. B. Use Amazon Data Lifecycle Manager (Amazon DLM) to copy EC2 backups and RDS backups to the separate Region. C. Create Amazon Machine Images (AMIs) of the EC2 instances. Copy the AMIs to the separate Region. Create a read replica for the RDS DB instance in the separate Region. D. Create Amazon Elastic Block Store (Amazon EBS) snapshots. Copy the EBS snapshots to the separate Region. Create RDS snapshots. Export the RDS snapshots to Amazon S3. Configure S3 Cross-Region Replication (CRR) to the separate Region.
Answer: A
Explanation: To back up EC2 instances and RDS DB instances in a separate Region with
the least operational overhead, AWS Backup is a simple and cost-effective solution. AWS
Backup can copy EC2 backups and RDS backups to another Region automatically and
securely. AWS Backup also supports backup policies, retention rules, and monitoring
features.
References:
What Is AWS Backup?
Cross-Region Backup
Question # 126
A company has a large workload that runs every Friday evening. The workload runs on Amazon EC2 instances that are in two Availability Zones in the us-east-1 Region. Normally, the company must run no more than two instances at all times. However, the company wants to scale up to six instances each Friday to handle a regularly repeating increased workload. Which solution will meet these requirements with the LEAST operational overhead?
A. Create a reminder in Amazon EventBridge to scale the instances. B. Create an Auto Scaling group that has a scheduled action. C. Create an Auto Scaling group that uses manual scaling. D. Create an Auto Scaling group that uses automatic scaling.
Answer: B
Explanation: An Auto Scaling group is a collection of EC2 instances that share similar
characteristics and can be scaled in or out automatically based on demand. An Auto
Scaling group can have a scheduled action, which is a configuration that tells the group to
scale to a specific size at a specific time. This way, the company can scale up to six
instances each Friday evening to handle the increased workload, and scale down to two
instances at other times to save costs. This solution meets the requirements with the least
operational overhead, as it does not require manual intervention or custom scripts.
References:
1 explains how to create a scheduled action for an Auto Scaling group.
2 describes the concept and benefits of an Auto Scaling group.
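A short sketch of such scheduled actions with boto3 (the group name and the UTC cron times are illustrative assumptions):
```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale up to six instances every Friday evening (cron day-of-week 5 = Friday, UTC).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="friday-workload-asg",
    ScheduledActionName="scale-up-friday-evening",
    Recurrence="0 18 * * 5",
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=6,
)

# Scale back down to two instances on Saturday morning.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="friday-workload-asg",
    ScheduledActionName="scale-down-saturday-morning",
    Recurrence="0 6 * * 6",
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
)
```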
Question # 127
A company has deployed its application on Amazon EC2 instances with an Amazon RDS database. The company used the principle of least privilege to configure the database access credentials. The company's security team wants to protect the application and the database from SQL injection and other web-based attacks. Which solution will meet these requirements with the LEAST operational overhead?
A. Use security groups and network ACLs to secure the database and application servers. B. Use AWS WAF to protect the application. Use RDS parameter groups to configure the security settings. C. Use AWS Network Firewall to protect the application and the database. D. Use different database accounts in the application code for different functions. Avoid granting excessive privileges to the database users.
Answer: B
Explanation: AWS WAF is a web application firewall that helps protect web applications
from common web exploits that could affect application availability, compromise security, or
consume excessive resources. AWS WAF allows users to create rules that block, allow, or
count web requests based on customizable web security rules. One of the types of rules
that can be created is an SQL injection rule, which allows users to specify a list of IP
addresses or IP address ranges that they want to allow or block. By using AWS WAF to
protect the application, the company can prevent SQL injection and other web-based
attacks from reaching the application and the database.
RDS parameter groups are collections of parameters that define how a database instance
operates. Users can modify the parameters in a parameter group to change the behavior
and performance of the database. By using RDS parameter groups to configure the security settings, the company can enforce best practices such as disabling remote root
login, requiring SSL connections, and limiting the maximum number of connections.
The other options are not correct because they do not effectively protect the application
and the database from SQL injection and other web-based attacks. Using security groups
and network ACLs to secure the database and application servers is not sufficient because
they only filter traffic at the network layer, not at the application layer. Using AWS Network
Firewall to protect the application and the database is not necessary because it is a stateful
firewall service that provides network protection for VPCs, not for individual applications or
databases. Using different database accounts in the application code for different functions
is a good practice, but it does not prevent SQL injection attacks from exploiting
vulnerabilities in the application code.
References:
AWS WAF
How AWS WAF works
Working with IP match conditions
Working with DB parameter groups
Amazon RDS security best practices
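One way such protection could be expressed with boto3 is a regional web ACL that applies the AWS managed SQL injection rule group and is then associated with the ALB or API in front of the application; the ACL name and metric names below are illustrative:
```python
import boto3

wafv2 = boto3.client("wafv2")

wafv2.create_web_acl(
    Name="app-protection",  # illustrative
    Scope="REGIONAL",
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "AWSManagedRulesSQLiRuleSet",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesSQLiRuleSet",
                }
            },
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "SQLiRuleSet",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "AppProtection",
    },
)
```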
Question # 128
A solutions architect is creating a new Amazon CloudFront distribution for an application. Some of the information submitted by users is sensitive. The application uses HTTPS but needs another layer of security. The sensitive information should be protected throughout the entire application stack, and access to the information should be restricted to certain applications. Which action should the solutions architect take?
A. Configure a CloudFront signed URL. B. Configure a CloudFront signed cookie. C. Configure a CloudFront field-level encryption profile. D. Configure CloudFront and set the Origin Protocol Policy setting to HTTPS Only for the Viewer Protocol Policy.
Answer: C
Explanation: it allows the company to protect sensitive information submitted by users throughout the entire application stack and restrict access to certain applications. By
configuring a CloudFront field-level encryption profile, the company can encrypt specific
fields of user data at the edge locations before sending it to the origin servers. By using
public-private key pairs, the company can ensure that only authorized applications can
decrypt and access the sensitive information. References:
Field-Level Encryption
Encrypting and Decrypting Data
Question # 129
A company manages AWS accounts in AWS Organizations. AWS IAM Identity Center (AWS Single Sign-On) and AWS Control Tower are configured for the accounts. The company wants to manage multiple user permissions across all the accounts. The permissions will be used by multiple IAM users and must be split between the developer and administrator teams. Each team requires different permissions. The company wants a solution that includes new users that are hired on both teams. Which solution will meet these requirements with the LEAST operational overhead?
A. Create individual users in IAM Identity Center for each account. Create separate developer and administrator groups in IAM Identity Center. Assign the users to the appropriate groups. Create a custom IAM policy for each group to set fine-grained permissions. B. Create individual users in IAM Identity Center for each account. Create separate developer and administrator groups in IAM Identity Center. Assign the users to the appropriate groups. Attach AWS managed IAM policies to each user as needed for fine-grained permissions. C. Create individual users in IAM Identity Center. Create new developer and administrator groups in IAM Identity Center. Create new permission sets that include the appropriate IAM policies for each group. Assign the new groups to the appropriate accounts. Assign the new permission sets to the new groups. When new users are hired, add them to the appropriate group. D. Create individual users in IAM Identity Center. Create new permission sets that include the appropriate IAM policies for each user. Assign the users to the appropriate accounts. Grant additional IAM permissions to the users from within specific accounts. When new users are hired, add them to IAM Identity Center and assign them to the accounts.
Answer: C
Explanation: This solution meets the requirements with the least operational overhead
because it leverages the features of IAM Identity Center and AWS Control Tower to
centrally manage multiple user permissions across all the accounts. By creating new
groups and permission sets, the company can assign fine-grained permissions to the
developer and administrator teams based on their roles and responsibilities. The
permission sets are applied to the groups at the organization level, so they are
automatically inherited by all the accounts in the organization. When new users are hired,
the company only needs to add them to the appropriate group in IAM Identity Center, and
they will automatically get the permissions assigned to that group. This simplifies the user
management and reduces the manual effort of assigning permissions to each user
individually.
References:
Managing access to AWS accounts and applications
Managing permissions sets
Managing groups
Question # 130
A company is deploying an application in three AWS Regions using an Application Load Balancer. Amazon Route 53 will be used to distribute traffic between these Regions. Which Route 53 configuration should a solutions architect use to provide the MOST high-performing experience?
A. Create an A record with a latency policy. B. Create an A record with a geolocation policy. C. Create a CNAME record with a failover policy. D. Create a CNAME record with a geoproximity policy.
Answer: A
Explanation:
To provide the most high-performing experience for the users of the application, a solutions
architect should use a latency routing policy for the Route 53 A record. This policy allows
Route 53 to route traffic to the AWS Region that provides the lowest possible latency for
the users1. A latency routing policy can also improve the availability of the application, as
Route 53 can automatically route traffic to another Region if the primary Region becomes unavailable.
Question # 131
A company wants to migrate its three-tier application from on premises to AWS. The web tier and the application tier are running on third-party virtual machines (VMs). The database tier is running on MySQL. The company needs to migrate the application by making the fewest possible changes to the architecture. The company also needs a database solution that can restore data to a specific point in time. Which solution will meet these requirements with the LEAST operational overhead?
A. Migrate the web tier and the application tier to Amazon EC2 instances in private subnets. Migrate the database tier to Amazon RDS for MySQL in private subnets. B. Migrate the web tier to Amazon EC2 instances in public subnets. Migrate the application tier to EC2 instances in private subnets. Migrate the database tier to Amazon Aurora MySQL in private subnets. C. Migrate the web tier to Amazon EC2 instances in public subnets. Migrate the application tier to EC2 instances in private subnets. Migrate the database tier to Amazon RDS for MySQL in private subnets. D. Migrate the web tier and the application tier to Amazon EC2 instances in public subnets. Migrate the database tier to Amazon Aurora MySQL in public subnets.
Answer: C
Explanation: The solution that meets the requirements with the least operational overhead
is to migrate the web tier to Amazon EC2 instances in public subnets, migrate the
application tier to EC2 instances in private subnets, and migrate the database tier to
Amazon RDS for MySQL in private subnets. This solution allows the company to migrate
its three-tier application to AWS by making minimal changes to the architecture, as it
preserves the same web, application, and database tiers and uses the same MySQL
database engine. The solution also provides a database solution that can restore data to a
specific point in time, as Amazon RDS for MySQL supports automated backups and point-in-
time recovery. This solution also reduces the operational overhead by using managed
services such as Amazon EC2 and Amazon RDS, which handle tasks such as
provisioning, patching, scaling, and monitoring.
The other solutions do not meet the requirements as well as the first one because they
either involve more changes to the architecture, do not provide point-in-time recovery, or do not follow best practices for security and availability. Migrating the database tier to Amazon
Aurora MySQL would require changing the database engine and potentially modifying the
application code to ensure compatibility. Migrating the web tier and the application tier to
public subnets would expose them to more security risks and reduce their availability in
case of a subnet failure. Migrating the database tier to public subnets would also
compromise its security and performance. References:
Migrate Your Application Database to Amazon RDS
Amazon RDS for MySQL
Amazon Aurora MySQL
Amazon VPC
Question # 132
A solutions architect must provide an automated solution for a company's compliance policy that states security groups cannot include a rule that allows SSH from 0.0.0.0/0. The company needs to be notified if there is any breach in the policy. A solution is needed as soon as possible. What should the solutions architect do to meet these requirements with the LEAST operational overhead?
A. Write an AWS Lambda script that monitors security groups for SSH being open to 0.0.0.0/0 addresses and creates a notification every time it finds one. B. Enable the restricted-ssh AWS Config managed rule and generate an Amazon Simple Notification Service (Amazon SNS) notification when a noncompliant rule is created. C. Create an IAM role with permissions to globally open security groups and network ACLs. Create an Amazon Simple Notification Service (Amazon SNS) topic to generate a notification every time the role is assumed by a user. D. Configure a service control policy (SCP) that prevents non-administrative users from creating or editing security groups. Create a notification in the ticketing system when a user requests a rule that needs administrator permissions.
Answer: B
Explanation: The most suitable solution for the company’s compliance policy is to enable
the restricted-ssh AWS Config managed rule and generate an Amazon Simple Notification
Service (Amazon SNS) notification when a noncompliant rule is created. This solution has
the least operational overhead because it uses a predefined rule that is already available in
AWS Config, which is a service that enables users to assess, audit, and evaluate the
configurations of their AWS resources. The restricted-ssh rule checks whether security
groups that are in use have inbound rules that allow SSH from 0.0.0.0/0 addresses, and
reports them as noncompliant1. Users can configure the rule to send notifications to an
Amazon SNS topic when a noncompliant change occurs, and subscribe to the topic to
receive alerts via email, SMS, or other methods2.
The other options are not correct because they either have more operational overhead or
do not meet the requirements. Writing an AWS Lambda script that monitors security groups
for SSH being open to 0.0.0.0/0 addresses and creates a notification every time it finds one
is not correct because it requires custom code development and maintenance, which adds
complexity and cost to the solution. Creating an IAM role with permissions to globally open
security groups and network ACLs, and creating an Amazon SNS topic to generate a notification every time the role is assumed by a user is not correct because it does not
prevent or detect the creation of noncompliant rules by other users or roles, and it does not
address the existing rules that may violate the policy. Configuring a service control policy
(SCP) that prevents non-administrative users from creating or editing security groups, and
creating a notification in the ticketing system when a user requests a rule that needs
administrator permissions is not correct because it does not provide an automated solution
for the policy enforcement and notification, and it may limit the flexibility and productivity of
the users.
References:
restricted-ssh - AWS Config
Getting Notifications When Your Resources Change - AWS Config
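A minimal sketch of enabling the managed rule with boto3; the managed-rule identifier for restricted-ssh is assumed to be INCOMING_SSH_DISABLED, and notifications would then be delivered to an SNS topic through AWS Config's delivery channel or an EventBridge rule on compliance-change events:
```python
import boto3

config = boto3.client("config")

# Enable the managed rule behind "restricted-ssh", which flags security groups
# that allow unrestricted inbound SSH from 0.0.0.0/0.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "restricted-ssh",
        "Source": {"Owner": "AWS", "SourceIdentifier": "INCOMING_SSH_DISABLED"},
    }
)
```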
Question # 133
A company is developing an application that will run on a production Amazon Elastic Kubernetes Service (Amazon EKS) cluster. The EKS cluster has managed node groups that are provisioned with On-Demand Instances. The company needs a dedicated EKS cluster for development work. The company will use the development cluster infrequently to test the resiliency of the application. The EKS cluster must manage all the nodes. Which solution will meet these requirements MOST cost-effectively?
A. Create a managed node group that contains only Spot Instances. B. Create two managed node groups. Provision one node group with On-Demand Instances. Provision the second node group with Spot Instances. C. Create an Auto Scaling group that has a launch configuration that uses Spot Instances. Configure the user data to add the nodes to the EKS cluster. D. Create a managed node group that contains only On-Demand Instances.
Answer: A
Explanation: Spot Instances are EC2 instances that are available at up to a 90% discount
compared to On-Demand prices. Spot Instances are suitable for stateless, fault-tolerant,
and flexible workloads that can tolerate interruptions. Spot Instances can be reclaimed by
EC2 when the demand for On-Demand capacity increases, but they provide a two-minute warning before termination. EKS managed node groups automate the provisioning and
lifecycle management of nodes for EKS clusters. Managed node groups can use Spot
Instances to reduce costs and scale the cluster based on demand. Managed node groups
also support features such as Capacity Rebalancing and Capacity Optimized allocation
strategy to improve the availability and resilience of Spot Instances. This solution will meet
the requirements most cost-effectively, as it leverages the lowest-priced EC2 capacity and
does not require any manual intervention.
References:
1 explains how to create and use managed node groups with EKS.
2 describes how to use Spot Instances with managed node groups.
3 provides an overview of Spot Instances and their benefits.
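A rough sketch of creating such a Spot-only managed node group with boto3 (the cluster name, node role ARN, subnets, instance types, and sizes are illustrative assumptions):
```python
import boto3

eks = boto3.client("eks")

# Managed node group for the development cluster that uses only Spot Instances.
eks.create_nodegroup(
    clusterName="dev-cluster",
    nodegroupName="dev-spot-nodes",
    capacityType="SPOT",
    instanceTypes=["m5.large", "m5a.large", "m4.large"],  # diversify Spot capacity pools
    subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    nodeRole="arn:aws:iam::111122223333:role/eksNodeRole",
    scalingConfig={"minSize": 1, "maxSize": 4, "desiredSize": 1},
)
```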
Question # 134
A company is deploying an application that processes large quantities of data in parallel. The company plans to use Amazon EC2 instances for the workload. The network architecture must be configurable to prevent groups of nodes from sharing the same underlying hardware. Which networking solution meets these requirements?
A. Run the EC2 instances in a spread placement group. B. Group the EC2 instances in separate accounts. C. Configure the EC2 instances with dedicated tenancy. D. Configure the EC2 instances with shared tenancy.
Answer: A
Explanation: it allows the company to deploy an application that processes large
quantities of data in parallel and prevent groups of nodes from sharing the same underlying
hardware. By running the EC2 instances in a spread placement group, the company can
launch a small number of instances across distinct underlying hardware to reduce
correlated failures. A spread placement group ensures that each instance is isolated from
each other at the rack level. References:
Placement Groups
Spread Placement Groups
Question # 135
A company is preparing a new data platform that will ingest real-time streaming data from multiple sources. The company needs to transform the data before writing the data to Amazon S3. The company needs the ability to use SQL to query the transformed data. Which solutions will meet these requirements? (Choose two.)
A. Use Amazon Kinesis Data Streams to stream the data. Use Amazon Kinesis Data Analytics to transform the data. Use Amazon Kinesis Data Firehose to write the data to Amazon S3. Use Amazon Athena to query the transformed data from Amazon S3. B. Use Amazon Managed Streaming for Apache Kafka (Amazon MSK) to stream the data. Use AWS Glue to transform the data and to write the data to Amazon S3. Use Amazon Athena to query the transformed data from Amazon S3. C. Use AWS Database Migration Service (AWS DMS) to ingest the data. Use Amazon EMR to transform the data and to write the data to Amazon S3. Use Amazon Athena to query the transformed data from Amazon S3. D. Use Amazon Managed Streaming for Apache Kafka (Amazon MSK) to stream the data. Use Amazon Kinesis Data Analytics to transform the data and to write the data to Amazon S3. Use the Amazon RDS query editor to query the transformed data from Amazon S3. E. Use Amazon Kinesis Data Streams to stream the data. Use AWS Glue to transform the data. Use Amazon Kinesis Data Firehose to write the data to Amazon S3. Use the Amazon RDS query editor to query the transformed data from Amazon S3.
Answer: A,B
Explanation: To ingest, transform, and query real-time streaming data from multiple
sources, Amazon Kinesis and Amazon MSK are suitable solutions. Amazon Kinesis Data
Streams can stream the data from various sources and integrate with other AWS services.
Amazon Kinesis Data Analytics can transform the data using SQL or Apache Flink.
Amazon Kinesis Data Firehose can write the data to Amazon S3 or other destinations.
Amazon Athena can query the transformed data from Amazon S3 using standard SQL.
Amazon MSK can stream the data using Apache Kafka, which is a popular open-source
platform for streaming data. AWS Glue can transform the data using Apache Spark or
Python scripts and write the data to Amazon S3 or other destinations. Amazon Athena can
also query the transformed data from Amazon S3 using standard SQL.
References:
What Is Amazon Kinesis Data Streams?
What Is Amazon Kinesis Data Analytics?
What Is Amazon Kinesis Data Firehose?
What Is Amazon Athena?
What Is Amazon MSK?
What Is AWS Glue?
Question # 136
A company runs an application on Amazon EC2 instances. The company needs to implement a disaster recovery (DR) solution for the application. The DR solution needs to have a recovery time objective (RTO) of less than 4 hours. The DR solution also needs to use the fewest possible AWS resources during normal operations. Which solution will meet these requirements in the MOST operationally efficient way?
A. Create Amazon Machine Images (AMIs) to back up the EC2 instances. Copy the AMIs to a secondary AWS Region. Automate infrastructure deployment in the secondary Region by using AWS Lambda and custom scripts. B. Create Amazon Machine Images (AMIs) to back up the EC2 instances. Copy the AMIs to a secondary AWS Region. Automate infrastructure deployment in the secondary Region by using AWS CloudFormation. C. Launch EC2 instances in a secondary AWS Region. Keep the EC2 instances in the secondary Region active at all times. D. Launch EC2 instances in a secondary Availability Zone. Keep the EC2 instances in the secondary Availability Zone active at all times.
Answer: B
Explanation: it allows the company to implement a disaster recovery (DR) solution for the
application that has a recovery time objective (RTO) of less than 4 hours and uses the
fewest possible AWS resources during normal operations. By creating Amazon Machine
Images (AMIs) to back up the EC2 instances and copying the AMIs to a secondary AWS
Region, the company can create point-in-time snapshots of the application and store them
in a different geographical location. By automating infrastructure deployment in the
secondary Region by using AWS CloudFormation, the company can quickly launch a stack
of resources from a template in case of a disaster. This is a cost-effective and operationally
efficient way to implement a DR solution for EC2 instances. References:
Amazon Machine Images (AMI)
Copying an AMI
AWS CloudFormation
Working with Stacks
Question # 137
A solutions architect needs to review a company's Amazon S3 buckets to discover personally identifiable information (PII). The company stores the PII data in the us-east-1 Region and us-west-2 Region. Which solution will meet these requirements with the LEAST operational overhead?
A. Configure Amazon Macie in each Region. Create a job to analyze the data that is in Amazon S3. B. Configure AWS Security Hub for all Regions. Create an AWS Config rule to analyze the data that is in Amazon S3. C. Configure Amazon Inspector to analyze the data that is in Amazon S3. D. Configure Amazon GuardDuty to analyze the data that is in Amazon S3.
Answer: A
Explanation: it allows the solutions architect to review the S3 buckets to discover
personally identifiable information (PII) with the least operational overhead. Amazon Macie
is a fully managed data security and data privacy service that uses machine learning and
pattern matching to discover and protect sensitive data in AWS. Amazon Macie can
analyze data in S3 buckets across multiple regions and provide insights into the type,
location, and level of sensitivity of the data. References:
Amazon Macie
Analyzing data with Amazon Macie
Question # 138
A solutions architect is designing a workload that will store hourly energy consumption by business tenants in a building. The sensors will feed a database through HTTP requests that will add up usage for each tenant. The solutions architect must use managed services when possible. The workload will receive more features in the future as the solutions architect adds independent components. Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon API Gateway with AWS Lambda functions to receive the data from the sensors, process the data, and store the data in an Amazon DynamoDB table. B. Use an Elastic Load Balancer that is supported by an Auto Scaling group of Amazon EC2 instances to receive and process the data from the sensors. Use an Amazon S3 bucket to store the processed data. C. Use Amazon API Gateway with AWS Lambda functions to receive the data from the sensors, process the data, and store the data in a Microsoft SQL Server Express database on an Amazon EC2 instance. D. Use an Elastic Load Balancer that is supported by an Auto Scaling group of Amazon EC2 instances to receive and process the data from the sensors. Use an Amazon Elastic File System (Amazon EFS) shared file system to store the processed data.
Answer: A
Explanation: To use an event-driven programming model with AWS Lambda and reduce operational overhead, Amazon API Gateway and Amazon DynamoDB are suitable
solutions. Amazon API Gateway can receive the data from the sensors and invoke AWS
Lambda functions to process the data. AWS Lambda can run code without provisioning or
managing servers, and scale automatically with the incoming requests. Amazon
DynamoDB can store the data in a fast and flexible NoSQL database that can handle any
amount of data with consistent performance.
References:
What Is Amazon API Gateway?
What Is AWS Lambda?
What Is Amazon DynamoDB?
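A minimal sketch of the Lambda side of this pattern (the table name, key schema, and request payload are hypothetical assumptions):
```python
import json
from decimal import Decimal

import boto3

table = boto3.resource("dynamodb").Table("TenantEnergyUsage")  # hypothetical table

def lambda_handler(event, context):
    # API Gateway proxies the sensor's HTTP POST body to this function.
    reading = json.loads(event["body"])
    tenant_id = reading["tenantId"]
    hour = reading["hour"]              # e.g. "2024-04-05T13"
    kwh = Decimal(str(reading["kwh"]))  # DynamoDB numeric values must be Decimal

    # Atomically add this reading to the tenant's running total for the hour.
    table.update_item(
        Key={"tenantId": tenant_id, "hour": hour},
        UpdateExpression="ADD kwhTotal :kwh",
        ExpressionAttributeValues={":kwh": kwh},
    )
    return {"statusCode": 200, "body": json.dumps({"tenantId": tenant_id, "hour": hour})}
```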
Question # 139
A company is deploying a new public web application to AWS. The application will run behind an Application Load Balancer (ALB). The application needs to be encrypted at the edge with an SSL/TLS certificate that is issued by an external certificate authority (CA). The certificate must be rotated each year before the certificate expires. What should a solutions architect do to meet these requirements?
A. Use AWS Certificate Manager (ACM) to issue an SSL/TLS certificate. Apply the certificate to the ALB. Use the managed renewal feature to automatically rotate the certificate. B. Use AWS Certificate Manager (ACM) to issue an SSL/TLS certificate. Import the key material from the certificate. Apply the certificate to the ALB. Use the managed renewal feature to automatically rotate the certificate. C. Use AWS Private Certificate Authority to issue an SSL/TLS certificate from the root CA. Apply the certificate to the ALB. Use the managed renewal feature to automatically rotate the certificate. D. Use AWS Certificate Manager (ACM) to import an SSL/TLS certificate. Apply the certificate to the ALB. Use Amazon EventBridge to send a notification when the certificate is nearing expiration. Rotate the certificate manually.
Answer: D
Explanation: To use an SSL/TLS certificate that is issued by an external CA, the certificate
must be imported to AWS Certificate Manager (ACM). ACM can send a notification when
the certificate is nearing expiration, but it cannot automatically rotate the certificate.
Therefore, the certificate must be rotated manually by importing a new certificate and
applying it to the ALB.
References:
Importing Certificates into AWS Certificate Manager
Renewing and Rotating Imported Certificates
Using an ACM Certificate with an Application Load Balancer
Question # 140
A company runs an infrastructure monitoring service. The company is building a new feature that will enable the service to monitor data in customer AWS accounts. The new feature will call AWS APIs in customer accounts to describe Amazon EC2 instances and read Amazon CloudWatch metrics. What should the company do to obtain access to customer accounts in the MOST secure way?
A. Ensure that the customers create an IAM role in their account with read-only EC2 and CloudWatch permissions and a trust policy to the company's account. B. Create a serverless API that implements a token vending machine to provide temporary AWS credentials for a role with read-only EC2 and CloudWatch permissions. C. Ensure that the customers create an IAM user in their account with read-only EC2 and CloudWatch permissions. Encrypt and store customer access and secret keys in a secrets management system. D. Ensure that the customers create an Amazon Cognito user in their account to use an IAM role with read-only EC2 and CloudWatch permissions. Encrypt and store the Amazon Cognito user and password in a secrets management system.
Answer: A
Explanation: By having customers create an IAM role with the necessary permissions in
their own accounts, the company can use AWS Identity and Access Management (IAM) to
establish cross-account access. The trust policy allows the company's AWS account to
assume the customer's IAM role temporarily, granting access to the specified resources
(EC2 instances and CloudWatch metrics) within the customer's account. This approach
follows the principle of least privilege, as the company only requests the necessary
permissions and does not require long-term access keys or user credentials from the
customers.
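A sketch of what a customer might run in their own account to create such a role (the account ID, role name, and external ID are hypothetical; the AWS managed read-only policies shown are standard ones):
```python
import json
import boto3

iam = boto3.client("iam")

monitoring_company_account_id = "111122223333"  # the monitoring service's account (illustrative)

# Trust policy that lets the monitoring company's account assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{monitoring_company_account_id}:root"},
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": "example-external-id"}},
        }
    ],
}

iam.create_role(
    RoleName="MonitoringServiceReadOnly",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Grant only the read-only permissions the feature needs.
iam.attach_role_policy(
    RoleName="MonitoringServiceReadOnly",
    PolicyArn="arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess",
)
iam.attach_role_policy(
    RoleName="MonitoringServiceReadOnly",
    PolicyArn="arn:aws:iam::aws:policy/CloudWatchReadOnlyAccess",
)
```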
Question # 141
A company has two VPCs that are located in the us-west-2 Region within the same AWS account. The company needs to allow network traffic between these VPCs. Approximately 500 GB of data transfer will occur between the VPCs each month. What is the MOST cost-effective solution to connect these VPCs?
A. Implement AWS Transit Gateway to connect the VPCs. Update the route tables of each VPC to use the transit gateway for inter-VPC communication. B. Implement an AWS Site-to-Site VPN tunnel between the VPCs. Update the route tables of each VPC to use the VPN tunnel for inter-VPC communication. C. Set up a VPC peering connection between the VPCs. Update the route tables of each VPC to use the VPC peering connection for inter-VPC communication. D. Set up a 1 GB AWS Direct Connect connection between the VPCs. Update the route tables of each VPC to use the Direct Connect connection for inter-VPC communication.
Answer: C
Explanation: To connect two VPCs in the same Region within the same AWS account,
VPC peering is the most cost-effective solution. VPC peering allows direct network traffic between the VPCs without requiring a gateway, VPN connection, or AWS Transit Gateway.
VPC peering also does not incur any additional charges for data transfer between the
VPCs.
References:
What Is VPC Peering?
VPC Peering Pricing
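A rough boto3 sketch of the peering setup (all VPC, route table, and CIDR values are illustrative placeholders):
```python
import boto3

ec2 = boto3.client("ec2")

# Request and accept a peering connection between the two VPCs (same account, same Region).
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-0aaaa1111bbbb2222c",      # requester VPC (illustrative)
    PeerVpcId="vpc-0dddd3333eeee4444f",  # accepter VPC (illustrative)
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Add a route in each VPC's route table that points at the other VPC's CIDR.
ec2.create_route(
    RouteTableId="rtb-0aaaa1111bbbb2222c",   # requester VPC route table (illustrative)
    DestinationCidrBlock="10.1.0.0/16",      # accepter VPC CIDR (illustrative)
    VpcPeeringConnectionId=pcx_id,
)
ec2.create_route(
    RouteTableId="rtb-0dddd3333eeee4444f",   # accepter VPC route table (illustrative)
    DestinationCidrBlock="10.0.0.0/16",      # requester VPC CIDR (illustrative)
    VpcPeeringConnectionId=pcx_id,
)
```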
Question # 142
A company runs a web application that is deployed on Amazon EC2 instances in the private subnet of a VPC. An Application Load Balancer (ALB) that extends across the public subnets directs web traffic to the EC2 instances. The company wants to implement new security measures to restrict inbound traffic from the ALB to the EC2 instances while preventing access from any other source inside or outside the private subnet of the EC2 instances. Which solution will meet these requirements?
A. Configure a route in a route table to direct traffic from the internet to the private IP addresses of the EC2 instances. B. Configure the security group for the EC2 instances to only allow traffic that comes from the security group for the ALB. C. Move the EC2 instances into the public subnet. Give the EC2 instances a set of Elastic IP addresses. D. Configure the security group for the ALB to allow any TCP traffic on any port.
Answer: B
Explanation: To restrict inbound traffic from the ALB to the EC2 instances, the security
group for the EC2 instances should only allow traffic that comes from the security group for
the ALB. This way, the EC2 instances can only receive requests from the ALB and not from
any other source inside or outside the private subnet.
References:
Security Groups for Your Application Load Balancers
Security Groups for Your VPC
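A minimal sketch of such a rule with boto3, referencing the ALB's security group as the source (both group IDs are hypothetical placeholders):
```python
import boto3

ec2 = boto3.client("ec2")

alb_sg_id = "sg-0aaaa1111bbbb2222c"       # security group attached to the ALB (illustrative)
instance_sg_id = "sg-0dddd3333eeee4444f"  # security group attached to the EC2 instances (illustrative)

# Allow web traffic to the instances only when it originates from the ALB's security group.
ec2.authorize_security_group_ingress(
    GroupId=instance_sg_id,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            "UserIdGroupPairs": [{"GroupId": alb_sg_id}],
        },
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "UserIdGroupPairs": [{"GroupId": alb_sg_id}],
        },
    ],
)
```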
Question # 143
A company runs applications on AWS that connect to the company's Amazon RDS database. The applications scale on weekends and at peak times of the year. The company wants to scale the database more effectively for its applications that connect to the database. Which solution will meet these requirements with the LEAST operational overhead?
A. Use Amazon DynamoDB with connection pooling with a target group configuration for the database. Change the applications to use the DynamoDB endpoint. B. Use Amazon RDS Proxy with a target group for the database. Change the applications to use the RDS Proxy endpoint. C. Use a custom proxy that runs on Amazon EC2 as an intermediary to the database. Change the applications to use the custom proxy endpoint. D. Use an AWS Lambda function to provide connection pooling with a target group configuration for the database. Change the applications to use the Lambda function.
Answer: B
Explanation:
Amazon RDS Proxy is a fully managed, highly available database proxy for Amazon
Relational Database Service (RDS) that makes applications more scalable, more resilient
to database failures, and more secure1. RDS Proxy allows applications to pool and share
connections established with the database, improving database efficiency and application
scalability2. RDS Proxy also reduces failover times for Aurora and RDS databases by up to
66% and enables IAM authentication and Secrets Manager integration for database
access1. RDS Proxy can be enabled for most applications with no code changes2.
Question # 144
A company runs an application that uses Amazon RDS for PostgreSQL. The application receives traffic only on weekdays during business hours. The company wants to optimize costs and reduce operational overhead based on this usage. Which solution will meet these requirements?
A. Use the Instance Scheduler on AWS to configure start and stop schedules. B. Turn off automatic backups. Create weekly manual snapshots of the database. C. Create a custom AWS Lambda function to start and stop the database based on minimum CPU utilization. D. Purchase All Upfront reserved DB instances.
Answer: A
Explanation:
The Instance Scheduler on AWS solution automates the starting and stopping of Amazon
Elastic Compute Cloud (Amazon EC2) and Amazon Relational Database Service (Amazon
RDS) instances. This solution helps reduce operational costs by stopping resources that
are not in use and starting them when they are needed1. The solution allows you to define custom schedules and periods using a command line interface (CLI) or an SSM
maintenance window1. You can also choose between different payment options for the
reserved DB instances, such as No Upfront, Partial Upfront, or All Upfront2.
Question # 145
A company needs to connect several VPCs in the us-east-1 Region that span hundreds of AWS accounts. The company's networking team has its own AWS account to manage the cloud network. What is the MOST operationally efficient solution to connect the VPCs?
A. Set up VPC peering connections between each VPC. Update each associated subnet's route table. B. Configure a NAT gateway and an internet gateway in each VPC to connect each VPC through the internet. C. Create an AWS Transit Gateway in the networking team's AWS account. Configure static routes from each VPC. D. Deploy VPN gateways in each VPC. Create a transit VPC in the networking team's AWS account to connect to each VPC.
Answer: C
Explanation: AWS Transit Gateway is a highly scalable and centralized hub for connecting
multiple VPCs, on-premises networks, and remote networks. It simplifies network
connectivity by providing a single entry point and reducing the number of connections
required. In this scenario, deploying an AWS Transit Gateway in the networking team's
AWS account allows for efficient management and control over the network connectivity
across multiple VPCs.
Question # 146
A company has created a multi-tier application for its ecommerce website. The website uses an Application Load Balancer that resides in the public subnets, a web tier in the public subnets, and a MySQL cluster hosted on Amazon EC2 instances in the private subnets. The MySQL database needs to retrieve product catalog and pricing information that is hosted on the internet by a third-party provider. A solutions architect must devise a strategy that maximizes security without increasing operational overhead. What should the solutions architect do to meet these requirements?
A. Deploy a NAT instance in the VPC. Route all the internet-based traffic through the NAT instance. B. Deploy a NAT gateway in the public subnets. Modify the private subnet route table to direct all internet-bound traffic to the NAT gateway. C. Configure an internet gateway and attach it to the VPC. Modify the private subnet route table to direct internet-bound traffic to the internet gateway. D. Configure a virtual private gateway and attach it to the VPC. Modify the private subnet route table to direct internet-bound traffic to the virtual private gateway.
Answer: B
Explanation: To allow the MySQL database in the private subnets to access the internet
without exposing it to the public, a NAT gateway is a suitable solution. A NAT gateway
enables instances in a private subnet to connect to the internet or other AWS services, but
prevents the internet from initiating a connection with those instances. A NAT gateway
resides in the public subnets and can handle high throughput of traffic with low latency. A
NAT gateway is also a managed service that requires minimal operational overhead.
References:
NAT Gateways
NAT Gateway Pricing
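A minimal sketch of option B with boto3 (subnet and route table IDs are placeholders): allocate an Elastic IP, create the NAT gateway in a public subnet, and point the private subnet's default route at it.

import boto3

ec2 = boto3.client("ec2")

# Elastic IP for the NAT gateway
eip = ec2.allocate_address(Domain="vpc")

# NAT gateway in a public subnet (placeholder subnet ID)
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0123456789abcdef0",
    AllocationId=eip["AllocationId"],
)

# Default route for the private subnet's route table (placeholder ID)
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)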
Question # 147
A solutions architect is designing a highly available Amazon ElastiCache for Redis based solution. The solutions architect needs to ensure that failures do not result in performance degradation or loss of data locally and within an AWS Region. The solution needs to provide high availability at the node level and at the Region level. Which solution will meet these requirements?
A. Use Multi-AZ Redis replication groups with shards that contain multiple nodes. B. Use Redis shards that contain multiple nodes with Redis append only files (AOF) turned on. C. Use a Multi-AZ Redis cluster with more than one read replica in the replication group. D. Use Redis shards that contain multiple nodes with Auto Scaling turned on.
Answer: A
Explanation: This answer is correct because it provides high availability at the node level
and at the Region level for the ElastiCache for Redis solution. A Multi-AZ Redis replication
group consists of a primary cluster and up to five read replica clusters, each in a different
Availability Zone. If the primary cluster fails, one of the read replicas is automatically
promoted to be the new primary cluster. A Redis replication group with shards enables
partitioning of the data across multiple nodes, which increases the scalability and
performance of the solution. Each shard can have one or more replicas to provide redundancy and automatic failover at the node level.
Question # 148
A company runs its applications on Amazon EC2 instances. The company performs periodic financial assessments of its AWS costs. The company recently identified unusual spending. The company needs a solution to prevent unusual spending. The solution must monitor costs and notify responsible stakeholders in the event of unusual spending. Which solution will meet these requirements?
A. Use an AWS Budgets template to create a zero spend budget. B. Create an AWS Cost Anomaly Detection monitor in the AWS Billing and Cost Management console. C. Create AWS Pricing Calculator estimates for the current running workload pricing details. D. Use Amazon CloudWatch to monitor costs and to identify unusual spending.
Answer: B
Explanation: it allows the company to monitor costs and notify responsible stakeholders in
the event of unusual spending. By creating an AWS Cost Anomaly Detection monitor in the
AWS Billing and Cost Management console, the company can use a machine learning
service that automatically detects and alerts on anomalous spend. By configuring alert
thresholds, notification preferences, and root cause analysis, the company can prevent
unusual spending and identify its source. References:
AWS Cost Anomaly Detection
Creating a Cost Anomaly Monitor
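The monitor is normally created in the Billing and Cost Management console, but the same configuration can be sketched with the Cost Explorer API. The monitor name, email address, and threshold below are assumptions, not values from the question:

import boto3

ce = boto3.client("ce")

# Service-level anomaly monitor across the account
monitor = ce.create_anomaly_monitor(
    AnomalyMonitor={
        "MonitorName": "service-spend-monitor",
        "MonitorType": "DIMENSIONAL",
        "MonitorDimension": "SERVICE",
    }
)

# Notify stakeholders as soon as an anomaly above the threshold is detected
# (Threshold is an assumed dollar amount; newer API versions also accept a
# ThresholdExpression instead)
ce.create_anomaly_subscription(
    AnomalySubscription={
        "SubscriptionName": "finance-alerts",
        "MonitorArnList": [monitor["MonitorArn"]],
        "Subscribers": [{"Type": "EMAIL", "Address": "finance@example.com"}],
        "Frequency": "IMMEDIATE",
        "Threshold": 100.0,
    }
)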
Question # 149
A company wants to use an event-driven programming model with AWS Lambda. The company wants to reduce startup latency for Lambda functions that run on Java 11. The company does not have strict latency requirements for the applications. The company wants to reduce cold starts and outlier latencies when a function scales up. Which solution will meet these requirements MOST cost-effectively?
A. Configure Lambda provisioned concurrency. B. Increase the timeout of the Lambda functions. C. Increase the memory of the Lambda functions. D. Configure Lambda SnapStart.
Answer: D
Explanation: To reduce startup latency for Lambda functions that run on Java 11, Lambda
SnapStart is a suitable solution. Lambda SnapStart is a feature that enables faster cold
starts and lower outlier latencies for Java 11 functions. Lambda SnapStart uses a pre-initialized
Java Virtual Machine (JVM) to run the functions, which reduces the initialization
time and memory footprint. Lambda SnapStart does not incur any additional charges.
References: Lambda SnapStart for Java 11 Functions
Lambda SnapStart FAQs
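SnapStart is enabled on the function configuration and applies to published versions. A minimal sketch (the function name below is a placeholder):

import boto3

lam = boto3.client("lambda")

# Enable SnapStart for published versions of the Java function
lam.update_function_configuration(
    FunctionName="java11-handler",  # placeholder function name
    SnapStart={"ApplyOn": "PublishedVersions"},
)

# Publish a new version so a snapshot is created and used for invocations
lam.publish_version(FunctionName="java11-handler")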
Question # 150
A company needs to minimize the cost of its 1 Gbps AWS Direct Connect connection. The company's average connection utilization is less than 10%. A solutions architect must recommend a solution that will reduce the cost without compromising security. Which solution will meet these requirements?
A. Set up a new 1 Gbps Direct Connect connection. Share the connection with another AWS account. B. Set up a new 200 Mbps Direct Connect connection in the AWS Management Console. C. Contact an AWS Direct Connect Partner to order a 1 Gbps connection. Share the connection with another AWS account. D. Contact an AWS Direct Connect Partner to order a 200 Mbps hosted connection for an existing AWS account.
Answer: D
Explanation: The company needs a lower-cost connection of about 200 Mbps. Option B is incorrect because
dedicated Direct Connect connections can be ordered only at port speeds of 1 Gbps, 10 Gbps, or 100 Gbps. For more flexibility, the company can order a hosted connection from an AWS Direct Connect Partner, which is available at capacities between 50 Mbps and 10 Gbps.
Question # 151
A company runs a website that stores images of historical events. Website users need the ability to search and view images based on the year that the event in the image occurred. On average, users request each image only once or twice a year. The company wants a highly available solution to store and deliver the images to users. Which solution will meet these requirements MOST cost-effectively?
A. Store images in Amazon Elastic Block Store (Amazon EBS). Use a web server that runs on Amazon EC2. B. Store images in Amazon Elastic File System (Amazon EFS). Use a web server that runs on Amazon EC2. C. Store images in Amazon S3 Standard. Use S3 Standard to directly deliver images by using a static website. D. Store images in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Use S3 Standard-IA to directly deliver images by using a static website.
Answer: C
Explanation: it allows the company to store and deliver images to users in a highly
available and cost-effective way. By storing images in Amazon S3 Standard, the company
can use a durable, scalable, and secure object storage service that offers high availability
and performance. By using S3 Standard to directly deliver images by using a static
website, the company can avoid running web servers and reduce operational overhead. Unlike S3 Standard-IA, S3 Standard has no per-GB retrieval charges or minimum storage duration, which keeps costs predictable for images that are served directly to users.
References:
Amazon S3 Storage Classes
Hosting a Static Website on Amazon S3
Question # 152
A company runs a website that uses a content management system (CMS) on Amazon EC2. The CMS runs on a single EC2 instance and uses an Amazon Aurora MySQL Multi-AZ DB instance for the data tier. Website images are stored on an Amazon Elastic Block Store (Amazon EBS) volume that is mounted inside the EC2 instance. Which combination of actions should a solutions architect take to improve the performance and resilience of the website? (Select TWO.)
A. Move the website images into an Amazon S3 bucket that is mounted on every EC2 instance. B. Share the website images by using an NFS share from the primary EC2 instance. Mount this share on the other EC2 instances. C. Move the website images onto an Amazon Elastic File System (Amazon EFS) file system that is mounted on every EC2 instance. D. Create an Amazon Machine Image (AMI) from the existing EC2 instance. Use the AMI to provision new instances behind an Application Load Balancer as part of an Auto Scaling group. Configure the Auto Scaling group to maintain a minimum of two instances. Configure an accelerator in AWS Global Accelerator for the website. E. Create an Amazon Machine Image (AMI) from the existing EC2 instance. Use the AMI to provision new instances behind an Application Load Balancer as part of an Auto Scaling group. Configure the Auto Scaling group to maintain a minimum of two instances. Configure an Amazon CloudFront distribution for the website.
Answer: C,E
Explanation: Option C moves the website images onto an Amazon EFS file system that is mounted on every EC2 instance. Amazon EFS provides a scalable and fully
managed file storage solution that can be accessed concurrently from multiple EC2
instances. This ensures that the website images can be accessed efficiently and
consistently by all instances, improving performance. In option E, the Auto Scaling group
maintains a minimum of two instances, ensuring resilience by automatically replacing any
unhealthy instances. Additionally, configuring an Amazon CloudFront distribution for the
website further improves performance by caching content at edge locations closer to the
end-users, reducing latency and improving content delivery. By combining these actions, the website's performance and resilience are improved through efficient image storage and content delivery.
Question # 153
A company has an on-premises MySQL database that handles transactional data. The company is migrating the database to the AWS Cloud. The migrated database must maintain compatibility with the company's applications that use the database. The migrated database also must scale automatically during periods of increased demand. Which migration solution will meet these requirements?
A. Use native MySQL tools to migrate the database to Amazon RDS for MySQL. Configure elastic storage scaling. B. Migrate the database to Amazon Redshift by using the mysqldump utility. Turn on Auto Scaling for the Amazon Redshift cluster. C. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon Aurora. Turn on Aurora Auto Scaling. D. Use AWS Database Migration Service (AWS DMS) to migrate the database to Amazon DynamoDB. Configure an Auto Scaling policy.
Answer: C
Explanation: To migrate a MySQL database to AWS with compatibility and scalability,
Amazon Aurora is a suitable option. Aurora is compatible with MySQL and can scale
automatically with Aurora Auto Scaling. AWS Database Migration Service (AWS DMS) can
be used to migrate the database from on-premises to Aurora with minimal downtime.
References:
What Is Amazon Aurora?
Using Amazon Aurora Auto Scaling with Aurora Replicas
What Is AWS Database Migration Service?
Question # 154
A company is using AWS Key Management Service (AWS KMS) keys to encrypt AWS Lambda environment variables. A solutions architect needs to ensure that the required permissions are in place to decrypt and use the environment variables. Which steps must the solutions architect take to implement the correct permissions? (Choose two.)
A. Add AWS KMS permissions in the Lambda resource policy. B. Add AWS KMS permissions in the Lambda execution role. C. Add AWS KMS permissions in the Lambda function policy. D. Allow the Lambda execution role in the AWS KMS key policy. E. Allow the Lambda resource policy in the AWS KMS key policy.
Answer: B,D
Explanation: B and D are the correct answers because they ensure that the Lambda
execution role has the permissions to decrypt and use the environment variables, and that
the AWS KMS key policy allows the Lambda execution role to use the key. The Lambda
execution role is an IAM role that grants the Lambda function permission to access AWS
resources, such as AWS KMS. The AWS KMS key policy is a resource-based policy that
controls access to the key. By adding AWS KMS permissions in the Lambda execution role
and allowing the Lambda execution role in the AWS KMS key policy, the solutions architect
can implement the correct permissions for encrypting and decrypting environment
variables. References:
AWS Lambda Execution Role
Using AWS KMS keys in AWS Lambda
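The two policy documents involved can be sketched as follows. The ARNs, account ID, and role name are placeholders used only for illustration:

import json

# Statement attached to the Lambda execution role (answer B): allow the role
# to decrypt with the specific KMS key (placeholder key ARN)
execution_role_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "kms:Decrypt",
        "Resource": "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
    }],
}

# Statement added to the KMS key policy (answer D): allow the execution role
# (placeholder role ARN) to use the key
key_policy_statement = {
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:role/lambda-exec-role"},
    "Action": "kms:Decrypt",
    "Resource": "*",
}

print(json.dumps(execution_role_policy, indent=2))
print(json.dumps(key_policy_statement, indent=2))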
Question # 155
A company operates an ecommerce website on Amazon EC2 instances behind an Application Load Balancer (ALB) in an Auto Scaling group. The site is experiencing performance issues related to a high request rate from illegitimate external systems with changing IP addresses. The security team is worried about potential DDoS attacks against the website. The company must block the illegitimate incoming requests in a way that has a minimal impact on legitimate users. What should a solutions architect recommend?
A. Deploy Amazon Inspector and associate it with the ALB. B. Deploy AWS WAF, associate it with the ALB, and configure a rate-limiting rule. C. Deploy rules to the network ACLs associated with the ALB to block the incoming traffic. D. Deploy Amazon GuardDuty and enable rate-limiting protection when configuring GuardDuty.
Answer: B
Explanation: This answer is correct because it meets the requirements of blocking the
illegitimate incoming requests in a way that has a minimal impact on legitimate users. AWS
WAF is a web application firewall that helps protect your web applications or APIs against
common web exploits that may affect availability, compromise security, or consume
excessive resources. AWS WAF gives you control over how traffic reaches your
applications by enabling you to create security rules that block common attack patterns,
such as SQL injection or cross-site scripting, and rules that filter out specific traffic patterns
you define. You can associate AWS WAF with an ALB to protect the web application from
malicious requests. You can configure a rate-limiting rule in AWS WAF to track the rate of
requests for each originating IP address and block requests from an IP address that
exceeds a certain limit within a five-minute period. This way, you can mitigate potential
DDoS attacks and improve the performance of your website.
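A hedged sketch of a rate-based rule with the WAFv2 API follows. The web ACL name, the 2,000-request limit, and the load balancer ARN are assumptions, not values from the question:

import boto3

waf = boto3.client("wafv2")

# Web ACL with one rate-based rule: block any source IP that exceeds the
# request limit within a 5-minute window
acl = waf.create_web_acl(
    Name="ecommerce-waf",
    Scope="REGIONAL",  # ALBs use the REGIONAL scope
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "rate-limit-per-ip",
        "Priority": 1,
        "Statement": {"RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}},
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "RateLimitPerIP",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "EcommerceWaf",
    },
)

# Attach the web ACL to the Application Load Balancer (placeholder ARN)
waf.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web/abc123",
)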
Question # 156
A company uses Amazon Elastic Kubernetes Service (Amazon EKS) to run a container application. The EKS cluster stores sensitive information in the Kubernetes secrets object. The company wants to ensure that the information is encrypted. Which solution will meet these requirements with the LEAST operational overhead?
A. Use the container application to encrypt the information by using AWS Key Management Service (AWS KMS). B. Enable secrets encryption in the EKS cluster by using AWS Key Management Service (AWS KMS). C. Implement an AWS Lambda function to encrypt the information by using AWS Key Management Service (AWS KMS). D. Use AWS Systems Manager Parameter Store to encrypt the information by using AWS Key Management Service (AWS KMS).
Answer: B
Explanation: it allows the company to encrypt the Kubernetes secrets object in the EKS
cluster with the least operational overhead. By enabling secrets encryption in the EKS
cluster, the company can use AWS Key Management Service (AWS KMS) to generate and
manage encryption keys for encrypting and decrypting secrets at rest. This is a simple and
secure way to protect sensitive information in EKS clusters. References:
Encrypting Kubernetes secrets with AWS KMS
Kubernetes Secrets
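For an existing cluster, envelope encryption of secrets can be enabled with a single API call. The cluster name and key ARN below are placeholders:

import boto3

eks = boto3.client("eks")

# Enable envelope encryption of Kubernetes secrets with a KMS key
eks.associate_encryption_config(
    clusterName="prod-cluster",  # placeholder cluster name
    encryptionConfig=[{
        "resources": ["secrets"],
        "provider": {"keyArn": "arn:aws:kms:us-east-1:111122223333:key/example-key-id"},
    }],
)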
Question # 157
A company runs a three-tier application in two AWS Regions. The web tier, the application tier, and the database tier run on Amazon EC2 instances. The company uses Amazon RDS for Microsoft SQL Server Enterprise for the database tier. The database tier is experiencing high load when weekly and monthly reports are run. The company wants to reduce the load on the database tier. Which solution will meet these requirements with the LEAST administrative effort?
A. Create read replicas. Configure the reports to use the new read replicas. B. Convert the RDS database to Amazon DynamoDB. Configure the reports to use DynamoDB. C. Modify the existing RDS DB instances by selecting a larger instance size. D. Modify the existing RDS DB instances and put the instances into an Auto Scaling group.
Answer: A
Explanation: it allows the company to create read replicas of its RDS database and
reduce the load on the database tier. By creating read replicas, the company can offload
read traffic from the primary database instance to one or more replicas. By configuring the
reports to use the new read replicas, the company can improve performance and
availability of its database tier. References:
Working with Read Replicas
Read Replicas for Amazon RDS for SQL Server
Question # 158
A company is building an ecommerce application and needs to store sensitive customer information. The company needs to give customers the ability to complete purchase transactions on the website. The company also needs to ensure that sensitive customer data is protected, even from database administrators. Which solution meets these requirements?
A. Store sensitive data in an Amazon Elastic Block Store (Amazon EBS) volume. Use EBS encryption to encrypt the data. Use an IAM instance role to restrict access. B. Store sensitive data in Amazon RDS for MySQL. Use AWS Key Management Service (AWS KMS) client-side encryption to encrypt the data. C. Store sensitive data in Amazon S3. Use AWS Key Management Service (AWS KMS) server-side encryption to encrypt the data. Use S3 bucket policies to restrict access. D. Store sensitive data in Amazon FSx for Windows Server. Mount the file share on application servers. Use Windows file permissions to restrict access.
Answer: B
Explanation: it allows the company to store sensitive customer information in a managed
AWS service and give customers the ability to complete purchase transactions on the
website. By using AWS Key Management Service (AWS KMS) client-side encryption, the
company can encrypt the data before sending it to Amazon RDS for MySQL. This ensures
that sensitive customer data is protected, even from database administrators, as only the
application has access to the encryption keys. References: Using Encryption with Amazon RDS for MySQL
Encrypting Amazon RDS Resources
Question # 159
A company is moving its data and applications to AWS during a multiyear migration project. The company wants to securely access data on Amazon S3 from the company's AWS Region and from the company's on-premises location. The data must not traverse the internet. The company has established an AWS Direct Connect connection between its Region and its on-premises location. Which solution will meet these requirements?
A. Create gateway endpoints for Amazon S3. Use the gateway endpoints to securely access the data from the Region and the on-premises location. B. Create a gateway in AWS Transit Gateway to access Amazon S3 securely from the Region and the on-premises location. C. Create interface endpoints for Amazon S3. Use the interface endpoints to securely access the data from the Region and the on-premises location. D. Use an AWS Key Management Service (AWS KMS) key to access the data securely from the Region and the on-premises location.
Answer: C
Explanation: A gateway endpoint for Amazon S3 is a target for a specified route in a VPC route table and
provides private access to S3, but it can be used only by resources inside the VPC. Traffic from the
on-premises network that arrives over AWS Direct Connect cannot use a gateway endpoint, so option A
does not meet the requirement.
An interface endpoint for Amazon S3 is an elastic network interface with private IP addresses, powered
by AWS PrivateLink. Because the endpoint has private IP addresses inside the VPC, it can be reached
both from within the Region and from the on-premises location over the existing Direct Connect
connection, without the data traversing the internet. Therefore, option C is correct.
AWS Transit Gateway connects VPCs and on-premises networks through a central hub, but it is not by
itself a mechanism for accessing Amazon S3, so option B is incorrect.
AWS Key Management Service (AWS KMS) is a service for creating and managing encryption keys; it
does not provide private network access to data on Amazon S3, so option D is incorrect.
Question # 160
A company has a financial application that produces reports. The reports average 50 KB in size and are stored in Amazon S3. The reports are frequently accessed during the first week after production and must be stored for several years. The reports must be retrievable within 6 hours. Which solution meets these requirements MOST cost-effectively?
A. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Glacier after 7 days. B. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Standard-Infrequent Access (S3 Standard-IA) after 7 days. C. Use S3 Intelligent-Tiering. Configure S3 Intelligent-Tiering to transition the reports to S3 Standard-Infrequent Access (S3 Standard-IA) and S3 Glacier. D. Use S3 Standard. Use an S3 Lifecycle rule to transition the reports to S3 Glacier Deep Archive after 7 days.
Answer: A
Explanation: To store and retrieve reports that are frequently accessed during the first
week and must be stored for several years, S3 Standard and S3 Glacier are suitable
solutions. S3 Standard offers high durability, availability, and performance for frequently
accessed data. S3 Glacier offers secure and durable storage for long-term data archiving
at a low cost. S3 Lifecycle rules can be used to transition the reports from S3 Standard to
S3 Glacier after 7 days, which can reduce storage costs. S3 Glacier also supports retrieval
within 6 hours.
References:
Storage Classes
Object Lifecycle Management
Retrieving Archived Objects from Amazon S3 Glacier
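A minimal sketch of the lifecycle rule from answer A (the bucket name and prefix are placeholders):

import boto3

s3 = boto3.client("s3")

# Transition reports to S3 Glacier 7 days after creation
s3.put_bucket_lifecycle_configuration(
    Bucket="financial-reports-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-reports-after-7-days",
            "Status": "Enabled",
            "Filter": {"Prefix": "reports/"},  # placeholder prefix
            "Transitions": [{"Days": 7, "StorageClass": "GLACIER"}],
        }]
    },
)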
Question # 161
A company needs to store contract documents. A contract lasts for 5 years. During the 5-year period, the company must ensure that the documents cannot be overwritten or deleted. The company needs to encrypt the documents at rest and rotate the encryption keys automatically every year. Which combination of steps should a solutions architect take to meet these requirements with the LEAST operational overhead? (Select TWO.)
A. Store the documents in Amazon S3. Use S3 Object Lock in governance mode. B. Store the documents in Amazon S3. Use S3 Object Lock in compliance mode. C. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Configure key rotation. D. Use server-side encryption with AWS Key Management Service (AWS KMS) customer managed keys. Configure key rotation. E. Use server-side encryption with AWS Key Management Service (AWS KMS) customer provided (imported) keys. Configure key rotation.
Answer: B,D
Explanation: Consider using the default aws/s3 KMS key if: You're uploading or accessing
S3 objects using AWS Identity and Access Management (IAM) principals that are in the
same AWS account as the AWS KMS key. You don't want to manage policies for the KMS
key. Consider using a customer managed key if: You want to create, rotate, disable, or
define access controls for the key. You want to grant cross-account access to your S3
objects. You can configure the policy of a customer managed key to allow access from other AWS accounts. S3 Object Lock in compliance mode ensures that the documents cannot be overwritten or deleted by any user, including the root user, for the duration of the retention period, and customer managed KMS keys support automatic key rotation every year.
Question # 162
A company hosts an internal serverless application on AWS by using Amazon API Gateway and AWS Lambda. The company's employees report issues with high latency when they begin using the application each day. The company wants to reduce latency. Which solution will meet these requirements?
A. Increase the API Gateway throttling limit. B. Set up scheduled scaling to increase Lambda provisioned concurrency before employees begin to use the application each day. C. Create an Amazon CloudWatch alarm to initiate a Lambda function as a target for the alarm at the beginning of each day. D. Increase the Lambda function memory.
Answer: B
Explanation: AWS Lambda is a serverless compute service that lets you run code without
provisioning or managing servers. Lambda scales automatically based on the incoming
requests, but it may take some time to initialize new instances of your function if there is a
sudden increase in demand. This may result in high latency or cold starts for your
application. To avoid this, you can use provisioned concurrency, which ensures that your
function is initialized and ready to respond at any time. You can also set up a scheduled
scaling policy that increases the provisioned concurrency before employees begin to use
the application each day, and decreases it when the demand is low. References:
Question # 163
A company offers a food delivery service that is growing rapidly. Because of the growth, the company's order processing system is experiencing scaling problems during peak traffic hours. The current architecture includes the following:
• A group of Amazon EC2 instances that run in an Amazon EC2 Auto Scaling group to collect orders from the application
• Another group of EC2 instances that run in an Amazon EC2 Auto Scaling group to fulfill orders
The order collection process occurs quickly, but the order fulfillment process can take longer. Data must not be lost because of a scaling event. A solutions architect must ensure that the order collection process and the order fulfillment process can both scale properly during peak traffic hours. The solution must optimize utilization of the company's AWS resources. Which solution meets these requirements?
A. Use Amazon CloudWatch metrics to monitor the CPU of each instance in the Auto Scaling groups. Configure each Auto Scaling group's minimum capacity according to peak workload values. B. Use Amazon CloudWatch metrics to monitor the CPU of each instance in the Auto Scaling groups. Configure a CloudWatch alarm to invoke an Amazon Simple Notification Service (Amazon SNS) topic that creates additional Auto Scaling groups on demand. C. Provision two Amazon Simple Queue Service (Amazon SQS) queues: one for order collection and another for order fulfillment. Configure the EC2 instances to poll their respective queue. Scale the Auto Scaling groups based on notifications that the queues send. D. Provision two Amazon Simple Queue Service (Amazon SQS) queues: one for order collection and another for order fulfillment. Configure the EC2 instances to poll their respective queue. Create a metric based on a backlog per instance calculation. Scale the Auto Scaling groups based on this metric.
Answer: D
Explanation: The number of instances in your Auto Scaling group can be driven by how
long it takes to process a message and the acceptable amount of latency (queue delay).
The solution is to use a backlog per instance metric with the target value being the
acceptable backlog per instance to maintain.
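A minimal sketch of calculating and publishing the backlog-per-instance value as a custom CloudWatch metric follows. The queue URL, Auto Scaling group name, and metric names are placeholders:

import boto3

sqs = boto3.client("sqs")
autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/order-fulfillment"  # placeholder
ASG_NAME = "order-fulfillment-asg"  # placeholder

def publish_backlog_per_instance():
    # Messages waiting in the fulfillment queue
    attrs = sqs.get_queue_attributes(
        QueueUrl=QUEUE_URL, AttributeNames=["ApproximateNumberOfMessages"]
    )
    backlog = int(attrs["Attributes"]["ApproximateNumberOfMessages"])

    # Number of instances currently in the Auto Scaling group (at least 1)
    group = autoscaling.describe_auto_scaling_groups(AutoScalingGroupNames=[ASG_NAME])
    instances = max(len(group["AutoScalingGroups"][0]["Instances"]), 1)

    # Custom metric used for target tracking by the Auto Scaling group
    cloudwatch.put_metric_data(
        Namespace="OrderProcessing",
        MetricData=[{
            "MetricName": "BacklogPerInstance",
            "Value": backlog / instances,
            "Unit": "Count",
        }],
    )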
Question # 164
A company sends AWS CloudTrail logs from multiple AWS accounts to an Amazon S3 bucket in a centralized account. The company must keep the CloudTrail logs. The company must also be able to query the CloudTrail logs at any time. Which solution will meet these requirements?
A. Use the CloudTrail event history in the centralized account to create an Amazon Athena table. Query the CloudTrail logs from Athena. B. Configure an Amazon Neptune instance to manage the CloudTrail logs. Query the CloudTrail logs from Neptune. C. Configure CloudTrail to send the logs to an Amazon DynamoDB table. Create a dashboard in Amazon QuickSight to query the logs in the table. D. Use Amazon Athena to create an Athena notebook. Configure CloudTrail to send the logs to the notebook. Run queries from Athena.
Answer: A
Explanation: it allows the company to keep the CloudTrail logs and query them at any
time. By using the CloudTrail event history in the centralized account, the company can
view, filter, and download recent API activity across multiple AWS accounts. By creating an
Amazon Athena table from the CloudTrail event history, the company can use a serverless
interactive query service that makes it easy to analyze data in S3 using standard SQL. By
querying the CloudTrail logs from Athena, the company can gain insights into user activity
and resource changes. References:
Viewing Events with CloudTrail Event History
Querying AWS CloudTrail Logs
Amazon Athena
Question # 165
A company stores its data on premises. The amount of data is growing beyond the company's available capacity. The company wants to migrate its data from the on-premises location to an Amazon S3 bucket. The company needs a solution that will automatically validate the integrity of the data after the transfer. Which solution will meet these requirements?
A. Order an AWS Snowball Edge device. Configure the Snowball Edge device to perform the online data transfer to an S3 bucket. B. Deploy an AWS DataSync agent on premises. Configure the DataSync agent to perform the online data transfer to an S3 bucket. C. Create an Amazon S3 File Gateway on premises. Configure the S3 File Gateway to perform the online data transfer to an S3 bucket. D. Configure an accelerator in Amazon S3 Transfer Acceleration on premises. Configure the accelerator to perform the online data transfer to an S3 bucket.
Answer: B
Explanation: it allows the company to migrate its data from the on-premises location to an
Amazon S3 bucket and automatically validate the integrity of the data after the transfer. By
deploying an AWS DataSync agent on premises, the company can use a fully managed
data transfer service that makes it easy to move large amounts of data to and from AWS.
By configuring the DataSync agent to perform the online data transfer to an S3 bucket, the
company can take advantage of DataSync’s features, such as encryption, compression,
bandwidth throttling, and data validation. DataSync automatically verifies data integrity at
both source and destination after each transfer task. References:
AWS DataSync
Deploying an Agent for AWS DataSync
How AWS DataSync Works
Question # 166
A company is building a RESTful serverless web application on AWS by using Amazon API Gateway and AWS Lambda. The users of this web application will be geographically distributed, and the company wants to reduce the latency of API requests to these users. Which type of endpoint should a solutions architect use to meet these requirements?
A. Private endpoint B. Regional endpoint C. Interface VPC endpoint D. Edge-optimized endpoint
Answer: D
Explanation: An edge-optimized API endpoint is best for geographically distributed clients,
as it routes the API requests to the nearest CloudFront Point of Presence (POP). This
reduces the latency and improves the performance of the API. Edge-optimized endpoints
are the default type for API Gateway REST APIs1.
A regional API endpoint is intended for clients in the same region as the API, and it does
not use CloudFront to route the requests. A private API endpoint is an API endpoint that
can only be accessed from a VPC using an interface VPC endpoint. A regional or private
endpoint would not meet the requirement of reducing the latency for geographically distributed users1.
Question # 167
A company runs multiple Amazon EC2 Linux instances in a VPC across two Availability Zones. The instances host applications that use a hierarchical directory structure. The applications need to read and write rapidly and concurrently to shared storage. What should a solutions architect do to meet these requirements?
A. Create an Amazon S3 bucket. Allow access from all the EC2 instances in the VPC. B. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system from each EC2 instance. C. Create a file system on a Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volume. Attach the EBS volume to all the EC2 instances. D. Create file systems on Amazon Elastic Block Store (Amazon EBS) volumes that are attached to each EC2 instance. Synchronize the EBS volumes across the different EC2 instances.
Answer: B
Explanation: it allows the EC2 instances to read and write rapidly and concurrently to
shared storage across two Availability Zones. Amazon EFS provides a scalable, elastic,
and highly available file system that can be mounted from multiple EC2 instances. Amazon
EFS supports high levels of throughput and IOPS, and consistent low latencies. Amazon
EFS also supports NFSv4 lock upgrading and downgrading, which enables high levels of
concurrency. References:
Amazon EFS Features
Using Amazon EFS with Amazon EC2
Question # 168
A solutions architect is using an AWS CloudFormation template to deploy a three-tier web application. The web application consists of a web tier and an application tier that stores and retrieves user data in Amazon DynamoDB tables. The web and application tiers are hosted on Amazon EC2 instances, and the database tier is not publicly accessible. The application EC2 instances need to access the DynamoDB tables without exposing API credentials in the template. What should the solutions architect do to meet these requirements?
A. Create an IAM role to read the DynamoDB tables. Associate the role with the application instances by referencing an instance profile. B. Create an IAM role that has the required permissions to read and write from the DynamoDB tables. Add the role to the EC2 instance profile, and associate the instance profile with the application instances. C. Use the parameter section in the AWS CloudFormation template to have the user input access and secret keys from an already-created IAM user that has the required permissions to read and write from the DynamoDB tables. D. Create an IAM user in the AWS CloudFormation template that has the required permissions to read and write from the DynamoDB tables. Use the GetAtt function to retrieve the access and secret keys, and pass them to the application instances through the user data.
Answer: B
Explanation: it allows the application EC2 instances to access the DynamoDB tables
without exposing API credentials in the template. By creating an IAM role that has the
required permissions to read and write from the DynamoDB tables and adding it to the EC2
instance profile, the application instances can use temporary security credentials that are
automatically rotated by AWS. This is a secure and best practice way to grant access to
AWS resources from EC2 instances. References:
IAM Roles for Amazon EC2
Using Instance Profiles
Question # 169
A solutions architect is designing the storage architecture for a new web application used for storing and viewing engineering drawings. All application components will be deployed on the AWS infrastructure. The application design must support caching to minimize the amount of time that users wait for the engineering drawings to load. The application must be able to store petabytes of data. Which combination of storage and caching should the solutions architect use?
A. Amazon S3 with Amazon CloudFront B. Amazon S3 Glacier with Amazon ElastiCache C. Amazon Elastic Block Store (Amazon EBS) volumes with Amazon CloudFront D. AWS Storage Gateway with Amazon ElastiCache
Answer: A
Explanation: To store and view engineering drawings with caching support, Amazon S3
and Amazon CloudFront are suitable solutions. Amazon S3 can store any amount of data
with high durability, availability, and performance. Amazon CloudFront can distribute the
engineering drawings to edge locations closer to the users, which can reduce the latency
and improve the user experience. Amazon CloudFront can also cache the engineering
drawings at the edge locations, which can minimize the amount of time that users wait for
the drawings to load.
References:
What Is Amazon S3?
What Is Amazon CloudFront?
Question # 170
A company's website handles millions of requests each day, and the number of requests continues to increase. A solutions architect needs to improve the response time of the web application. The solutions architect determines that the application needs to decrease latency when retrieving product details from the Amazon DynamoDB table. Which solution will meet these requirements with the LEAST amount of operational overhead?
A. Set up a DynamoDB Accelerator (DAX) cluster. Route all read requests through DAX. B. Set up Amazon ElastiCache for Redis between the DynamoDB table and the web application. Route all read requests through Redis. C. Set up Amazon ElastiCache for Memcached between the DynamoDB table and the web application. Route all read requests through Memcached. D. Set up Amazon DynamoDB Streams on the table, and have AWS Lambda read from the table and populate Amazon ElastiCache. Route all read requests through ElastiCache.
Answer: A
Explanation: it allows the company to improve the response time of the web application
and decrease latency when retrieving product details from the Amazon DynamoDB table.
By setting up a DynamoDB Accelerator (DAX) cluster, the company can use a fully
managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x
performance improvement. By routing all read requests through DAX, the company can
reduce the number of read operations on the DynamoDB table and improve the user
experience. References:
Amazon DynamoDB Accelerator (DAX)
Using DAX with DynamoDB
Question # 171
A social media company is building a feature for its website. The feature will give users the
ability to upload photos. The company expects significant increases in demand during large
events and must ensure that the website can handle the upload traffic from users.
Which solution meets these requirements with the MOST scalability?
A. Upload files from the user's browser to the application servers. Transfer the files to an
Amazon S3 bucket. B. Provision an AWS Storage Gateway file gateway. Upload files directly from the user's browser to the file gateway. C. Generate Amazon S3 presigned URLs in the application. Upload files directly from the user's browser into an S3 bucket. D. Provision an Amazon Elastic File System (Amazon EFS) file system. Upload files directly from the user's browser to the file system.
Answer: C
Explanation: This approach allows users to upload files directly to S3 without passing
through the application servers, reducing the load on the application and improving
scalability. It leverages the client-side capabilities to handle the file uploads and offloads
the processing to S3.
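A minimal sketch of generating a presigned POST that the browser can use to upload directly to S3 follows. The bucket name, object key, size limit, and expiry are placeholders:

import boto3

s3 = boto3.client("s3")

# Presigned POST that lets the browser upload directly to S3 for 5 minutes
post = s3.generate_presigned_post(
    Bucket="photo-uploads-bucket",          # placeholder bucket name
    Key="uploads/photo-123.jpg",            # placeholder object key
    Conditions=[["content-length-range", 0, 10 * 1024 * 1024]],  # up to 10 MB
    ExpiresIn=300,
)

# The application returns post["url"] and post["fields"] to the browser, which
# submits the file in a multipart/form-data POST straight to S3.
print(post["url"], post["fields"])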
Question # 172
A company hosts an application on Amazon EC2 instances that run in a single Availability Zone. The application is accessible by using the transport layer of the Open Systems Interconnection (OSI) model. The company needs the application architecture to have high availability. Which combination of steps will meet these requirements MOST cost-effectively? (Select TWO.)
A. Configure new EC2 instances in a different Availability Zone. Use Amazon Route 53 to route traffic to all instances. B. Configure a Network Load Balancer in front of the EC2 instances. C. Configure a Network Load Balancer for TCP traffic to the instances. Configure an Application Load Balancer for HTTP and HTTPS traffic to the instances. D. Create an Auto Scaling group for the EC2 instances. Configure the Auto Scaling group to use multiple Availability Zones. Configure the Auto Scaling group to run application health checks on the instances. E. Create an Amazon CloudWatch alarm. Configure the alarm to restart EC2 instances that transition to a stopped state.
Answer: A,D
Explanation: To achieve high availability for an application that runs on EC2 instances, the
application should be deployed across multiple Availability Zones and use a load balancer
to distribute traffic. An Auto Scaling group can be used to launch and manage EC2
instances in multiple Availability Zones and perform health checks on them. A Network
Load Balancer can be used to handle transport layer traffic to the EC2 instances.
References:
Auto Scaling Groups
What Is a Network Load Balancer?
Question # 173
A company has a service that reads and writes large amounts of data from an Amazon S3 bucket in the same AWS Region. The service is deployed on Amazon EC2 instances within the private subnet of a VPC. The service communicates with Amazon S3 over a NAT gateway in the public subnet. However, the company wants a solution that will reduce the data output costs. Which solution will meet these requirements MOST cost-effectively?
A. Provision a dedicated EC2 NAT instance in the public subnet. Configure the route table for the private subnet to use the elastic network interface of this instance as the destination for all S3 traffic. B. Provision a dedicated EC2 NAT instance in the private subnet. Configure the route table for the public subnet to use the elastic network interface of this instance as the destination for all S3 traffic. C. Provision a VPC gateway endpoint. Configure the route table for the private subnet to use the gateway endpoint as the route for all S3 traffic. D. Provision a second NAT gateway. Configure the route table for the private subnet to use this NAT gateway as the destination for all S3 traffic.
Answer: C
Explanation: A VPC gateway endpoint for Amazon S3 lets instances in the private subnet reach S3 over the AWS network without using the NAT gateway. Gateway endpoints have no hourly or data processing charges, so routing S3 traffic through the endpoint removes the NAT gateway data processing cost for that traffic while keeping the traffic off the internet.
Question # 174
A company is planning to use an Amazon DynamoDB table for data storage. The company is concerned about cost optimization. The table will not be used on most mornings. In the evenings, the read and write traffic will often be unpredictable. When traffic spikes occur, they will happen very quickly. What should a solutions architect recommend?
A. Create a DynamoDB table in on-demand capacity mode. B. Create a DynamoDB table with a global secondary index. C. Create a DynamoDB table with provisioned capacity and auto scaling. D. Create a DynamoDB table in provisioned capacity mode, and configure it as a global table.
Answer: A
Explanation: Provisioned capacity is best if you have relatively predictable application
traffic, run applications whose traffic is consistent, and ramps up or down gradually. On-demand
capacity mode is best when you have unknown workloads, unpredictable
application traffic and also if you only want to pay exactly for what you use. The on-demand
pricing model is ideal for bursty, new, or unpredictable workloads whose traffic can spike in
seconds or minutes, and when under-provisioned capacity would impact the user experience.
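A minimal sketch of creating an on-demand table (the table name and key schema are placeholders):

import boto3

dynamodb = boto3.client("dynamodb")

# On-demand (PAY_PER_REQUEST) table: no capacity planning, billed per request
dynamodb.create_table(
    TableName="orders",  # placeholder table name
    AttributeDefinitions=[{"AttributeName": "order_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)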
Question # 175
A manufacturing company has machine sensors that upload .csv files to an Amazon S3
bucket. These .csv files must be converted into images and must be made available as
soon as possible for the automatic generation of graphical reports.
The images become irrelevant after 1 month, but the .csv files must be kept to train
machine learning (ML) models twice a year. The ML trainings and audits are planned
weeks in advance.
Which combination of steps will meet these requirements MOST cost-effectively? (Select
TWO.)
A. Launch an Amazon EC2 Spot Instance that downloads the .csv files every hour,
generates the image files, and uploads the images to the S3 bucket. B. Design an AWS Lambda function that converts the .csv files into images and stores the
images in the S3 bucket. Invoke the Lambda function when a .csv file is uploaded. C. Create S3 Lifecycle rules for .csv files and image files in the S3 bucket. Transition the .csv files from S3 Standard to S3 Glacier 1 day after they are uploaded. Expire the image files after 30 days. D. Create S3 Lifecycle rules for .csv files and image files in the S3 bucket. Transition the .csv files from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) 1 day after they are uploaded. Expire the image files after 30 days. E. Create S3 Lifecycle rules for .csv files and image files in the S3 bucket. Transition the .csv files from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 1 day after they are uploaded. Keep the image files in Reduced Redundancy Storage (RRS).
Answer: B,C
Explanation: These answers are correct because they meet the requirements of
converting the .csv files into images, making them available as soon as possible, and
minimizing the storage costs. AWS Lambda is a service that lets you run code without
provisioning or managing servers. You can use AWS Lambda to design a function that
converts the .csv files into images and stores the images in the S3 bucket. You can invoke
the Lambda function when a .csv file is uploaded to the S3 bucket by using an S3 event
notification. This way, you can ensure that the images are generated and made available
as soon as possible for the graphical reports. S3 Lifecycle is a feature that enables you to
manage your objects so that they are stored cost effectively throughout their lifecycle. You
can create S3 Lifecycle rules for .csv files and image files in the S3 bucket to transition
them to different storage classes or expire them based on your business needs. You can
transition the .csv files from S3 Standard to S3 Glacier 1 day after they are uploaded, since
they are only needed twice a year for ML trainings and audits that are planned weeks in
advance. S3 Glacier is a storage class for data archiving that offers secure, durable, and
extremely low-cost storage with retrieval times ranging from minutes to hours. You can
expire the image files after 30 days, since they become irrelevant after 1 month.
References:
https://docs.aws.amazon.com/lambda/latest/dg/welcome.html
https://docs.aws.amazon.com/AmazonS3/latest/userguide/NotificationHowTo.html
https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html
https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html#sc-glacier
Question # 176
A company wants to move from many standalone AWS accounts to a consolidated, multi-account architecture. The company plans to create many new AWS accounts for different
business units. The company needs to authenticate access to these AWS accounts by
using a centralized corporate directory service.
Which combination of actions should a solutions architect recommend to meet these
requirements? (Select TWO.)
A. Create a new organization in AWS Organizations with all features turned on. Create the
new AWS accounts in the organization. B. Set up an Amazon Cognito identity pool. Configure AWS IAM Identity Center (AWS
Single Sign-On) to accept Amazon Cognito authentication. C. Configure a service control policy (SCP) to manage the AWS accounts. Add AWS IAM Identity Center (AWS Single Sign-On) to AWS Directory Service. D. Create a new organization in AWS Organizations. Configure the organization's authentication mechanism to use AWS Directory Service directly. E. Set up AWS IAM Identity Center (AWS Single Sign-On) in the organization. Configure IAM Identity Center, and integrate it with the company's corporate directory service.
Answer: A,E
Explanation: AWS Organizations is a service that helps users centrally manage and
govern multiple AWS accounts. It allows users to create organizational units (OUs) to
group accounts based on business needs or other criteria. It also allows users to define
and attach service control policies (SCPs) to OUs or accounts to restrict the actions that
can be performed by the accounts1. By creating a new organization in AWS Organizations
with all features turned on, the solution can consolidate and manage the new AWS
accounts for different business units.
AWS IAM Identity Center (formerly known as AWS Single Sign-On) is a service that
provides single sign-on access for all of your AWS accounts and cloud applications. It
connects with Microsoft Active Directory through AWS Directory Service to allow users in
that directory to sign in to a personalized AWS access portal using their existing Active
Directory user names and passwords. From the AWS access portal, users have access to
all the AWS accounts and cloud applications that they have permissions for2. By setting up
IAM Identity Center in the organization and integrating it with the company’s corporate
directory service, the solution can authenticate access to these AWS accounts using a
centralized corporate directory service.
B. Set up an Amazon Cognito identity pool. Configure AWS IAM Identity Center (AWS
Single Sign-On) to accept Amazon Cognito authentication. This solution will not meet the
requirement of authenticating access to these AWS accounts by using a centralized
corporate directory service, as Amazon Cognito is a service that provides user sign-up,
sign-in, and access control for web and mobile applications, not for corporate directory
services3.
C. Configure a service control policy (SCP) to manage the AWS accounts. Add AWS IAM
Identity Center (AWS Single Sign-On) to AWS Directory Service. This solution will not
work, as SCPs are used to restrict the actions that can be performed by the accounts in an
organization, not to manage the accounts themselves1. Also, IAM Identity Center cannot
be added to AWS Directory Service, as it is a separate service that connects with Microsoft
Active Directory through AWS Directory Service2.
D. Create a new organization in AWS Organizations. Configure the organization’s
authentication mechanism to use AWS Directory Service directly. This solution will not
work, as AWS Organizations does not have an authentication mechanism that can use
AWS Directory Service directly. AWS Organizations relies on IAM Identity Center to provide single sign-on access for the accounts in an organization.
Reference URL:
https://docs.aws.amazon.com/organizations/latest/userguide/orgs_integrate_services.html
Question # 177
A company is deploying a new application on Amazon EC2 instances. The application
writes data to Amazon Elastic Block Store (Amazon EBS) volumes. The company needs to
ensure that all data that is written to the EBS volumes is encrypted at rest.
Which solution will meet this requirement?
A. Create an IAM role that specifies EBS encryption. Attach the role to the EC2 instances. B. Create the EBS volumes as encrypted volumes. Attach the EBS volumes to the EC2 instances. C. Create an EC2 instance tag that has a key of Encrypt and a value of True. Tag all instances that require encryption at the EBS level. D. Create an AWS Key Management Service (AWS KMS) key policy that enforces EBS encryption in the account. Ensure that the key policy is active.
Answer: B
Explanation: The solution that will meet the requirement of ensuring that all data that is
written to the EBS volumes is encrypted at rest is B. Create the EBS volumes as encrypted
volumes and attach the encrypted EBS volumes to the EC2 instances. When you create an
EBS volume, you can specify whether to encrypt the volume. If you choose to encrypt the
volume, all data written to the volume is automatically encrypted at rest using AWS managed keys. You can also use customer-managed keys (CMKs) stored in AWS KMS to
encrypt and protect your EBS volumes. You can create encrypted EBS volumes and attach
them to EC2 instances to ensure that all data written to the volumes is encrypted at rest.
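A minimal sketch of answer B (the Availability Zone, size, and instance ID are placeholders; omitting KmsKeyId uses the AWS managed key aws/ebs):

import boto3

ec2 = boto3.client("ec2")

# Optional: make every new EBS volume in this Region encrypted by default
ec2.enable_ebs_encryption_by_default()

# Create an encrypted volume
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,
    VolumeType="gp3",
    Encrypted=True,
)

# Wait until the volume is available, then attach it to the application instance
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    Device="/dev/xvdf",
)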
Question # 178
A serverless application uses Amazon API Gateway. AWS Lambda, and Amazon
DynamoDB. The Lambda function needs permissions to read and write to the DynamoDB
table.
Which solution will give the Lambda function access to the DynamoDB table MOST
securely?
A. Create an IAM user with programmatic access to the Lambda function. Attach a policy
to the user that allows read and write access to the DynamoDB table. Store the
access_key_id and secret_access_key parameters as part of the Lambda environment
variables. Ensure that other AWS users do not have read and write access to the Lambda
function configuration. B. Create an IAM role that includes Lambda as a trusted service. Attach a policy to the role that allows read and write access to the DynamoDB table. Update the configuration of the Lambda function to use the new role as the execution role. C. Create an IAM user with programmatic access to the Lambda function. Attach a policy to the user that allows read and write access to the DynamoDB table. Store the access_key_id and secret_access_key parameters in AWS Systems Manager Parameter Store as secure string parameters. Update the Lambda function code to retrieve the secure string parameters before connecting to the DynamoDB table. D. Create an IAM role that includes DynamoDB as a trusted service. Attach a policy to the role that allows read and write access from the Lambda function. Update the code of the Lambda function to attach to the new role as an execution role.
Answer: B
Explanation: Option B suggests creating an IAM role that includes Lambda as a trusted
service, meaning the role is specifically designed for Lambda functions. The role should
have a policy attached to it that grants the required read and write access to the
DynamoDB table.
Question # 179
A company wants to use artificial intelligence (Al) to determine the quality of its customer
service calls. The company currently manages calls in four different languages, including
English. The company will offer new languages in the future. The company does not have
the resources to regularly maintain machine learning (ML) models.
The company needs to create written sentiment analysis reports from the customer service
call recordings. The customer service call recording text must be translated into English.
Which combination of steps will meet these requirements? (Select THREE.)
A. Use Amazon Comprehend to translate the audio recordings into English. B. Use Amazon Lex to create the written sentiment analysis reports. C. Use Amazon Polly to convert the audio recordings into text. D. Use Amazon Transcribe to convert the audio recordings in any language into text. E. Use Amazon Translate to translate text in any language to English. F. Use Amazon Comprehend to create the sentiment analysis reports.
Answer: D,E,F
Explanation: These answers are correct because they meet the requirements of creating
written sentiment analysis reports from the customer service call recordings in any
language and translating them into English. Amazon Transcribe is a service that uses
advanced machine learning technologies to recognize speech in audio files and transcribe
them into text. You can use Amazon Transcribe to convert the audio recordings in any
language into text, and specify the language code of the source audio. Amazon Translate
is a neural machine translation service that delivers fast, high-quality, and affordable
language translation. You can use Amazon Translate to translate text in any language to
English, and specify the source and target language codes. Amazon Comprehend is a
natural language processing (NLP) service that uses machine learning to find insights and relationships in text. You can use Amazon Comprehend to create the sentiment analysis
reports, which determine if the text is positive, negative, neutral, or mixed.
References:
https://docs.aws.amazon.com/transcribe/latest/dg/what-is-transcribe.html
https://docs.aws.amazon.com/translate/latest/dg/what-is.html
https://docs.aws.amazon.com/comprehend/latest/dg/how-sentiment.html
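A condensed sketch of the D, E, F flow follows. The job name, S3 URIs, and the way the transcript text is retrieved are simplified assumptions; in practice the transcription job completes asynchronously and the transcript is read from the output bucket:

import boto3

transcribe = boto3.client("transcribe")
translate = boto3.client("translate")
comprehend = boto3.client("comprehend")

# 1. Convert the call recording to text; Transcribe can identify the language
transcribe.start_transcription_job(
    TranscriptionJobName="call-0001",  # placeholder job name
    Media={"MediaFileUri": "s3://call-recordings/call-0001.mp3"},  # placeholder URI
    IdentifyLanguage=True,
    OutputBucketName="call-transcripts",  # placeholder bucket
)

# 2. After the job completes and the transcript text is read from S3 (omitted
#    here), translate it to English
transcript_text = "..."  # placeholder for the transcript fetched from S3
translated = translate.translate_text(
    Text=transcript_text, SourceLanguageCode="auto", TargetLanguageCode="en"
)

# 3. Run sentiment analysis on the English text for the written report
sentiment = comprehend.detect_sentiment(
    Text=translated["TranslatedText"], LanguageCode="en"
)
print(sentiment["Sentiment"], sentiment["SentimentScore"])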
Question # 180
A company needs to configure a real-time data ingestion architecture for its application.
The company needs an API. a process that transforms data as the data is streamed, and a
storage solution for the data.
Which solution will meet these requirements with the LEAST operational overhead?
A. Deploy an Amazon EC2 instance to host an API that sends data to an Amazon Kinesis
data stream. Create an Amazon Kinesis Data Firehose delivery stream that uses the
Kinesis data stream as a data source. Use AWS Lambda functions to transform the data.
Use the Kinesis Data Firehose delivery stream to send the data to Amazon S3. B. Deploy an Amazon EC2 instance to host an API that sends data to AWS Glue. Stop source/destination checking on the EC2 instance. Use AWS Glue to transform the data and to send the data to Amazon S3. C. Configure an Amazon API Gateway API to send data to an Amazon Kinesis data stream. Create an Amazon Kinesis Data Firehose delivery stream that uses the Kinesis data stream as a data source. Use AWS Lambda functions to transform the data. Use the Kinesis Data Firehose delivery stream to send the data to Amazon S3. D. Configure an Amazon API Gateway API to send data to AWS Glue. Use AWS Lambda functions to transform the data. Use AWS Glue to send the data to Amazon S3.
Answer: C
Explanation: It uses Amazon Kinesis Data Firehose which is a fully managed service for
delivering real-time streaming data to destinations such as Amazon S3. This service
requires less operational overhead as compared to option A, B, and D. Additionally, it also
uses Amazon API Gateway which is a fully managed service for creating, deploying, and
managing APIs. These services help in reducing the operational overhead and automating
the data ingestion process.
Question # 181
A 4-year-old media company is using the AWS Organizations all features feature set to
organize its AWS accounts. According to the company's finance team, the billing
information on the member accounts
must not be accessible to anyone, including the root user of the member accounts.
Which solution will meet these requirements?
A. Add all finance team users to an IAM group. Attach an AWS managed policy named
Billing to the group. B. Attach an identity-based policy to deny access to the billing information to all users, including the root user. C. Create a service control policy (SCP) to deny access to the billing information. Attach the SCP to the root organizational unit (OU). D. Convert from the Organizations all features feature set to the Organizations consolidated billing feature set.
Answer: C
Explanation: Service Control Policies (SCP): SCPs are an integral part of AWS
Organizations and allow you to set fine-grained permissions on the organizational units
(OUs) within your AWS Organization. SCPs provide central control over the maximum
permissions that can be granted to member accounts, including the root user. Denying
Access to Billing Information: By creating an SCP and attaching it to the root OU, you can
explicitly deny access to billing information for all accounts within the organization. SCPs
can be used to restrict access to various AWS services and actions, including billing-related services. Granular Control: SCPs enable you to define specific permissions and
restrictions at the organizational unit level. By denying access to billing information at the
root OU, you can ensure that no member accounts, including root users, have access to
the billing information.
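As a hedged illustration of this pattern (the exact actions an organization chooses to block will vary), the sketch below creates an SCP that denies billing console actions and attaches it to the organization root; the policy name and root ID are placeholders.

```python
import json
import boto3

org = boto3.client("organizations")

# Hypothetical SCP that denies billing-related actions for every principal,
# including the root user of member accounts.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyBillingAccess",
            "Effect": "Deny",
            "Action": ["aws-portal:*"],   # billing console action namespace (assumption)
            "Resource": "*",
        }
    ],
}

policy = org.create_policy(
    Name="deny-billing-access",                  # placeholder name
    Description="Deny access to billing information",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attach the SCP to the root so it applies to all member accounts.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid",                  # placeholder root ID
)
```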
Question # 182
A company uses on-premises servers to host its applications The company is running out
of storage capacity. The applications use both block storage and NFS storage. The
company needs a high-performing solution that supports local caching without rearchitecting its existing applications.
Which combination of actions should a solutions architect take to meet these
requirements? (Select TWO.)
A. Mount Amazon S3 as a file system to the on-premises servers.
B. Deploy an AWS Storage Gateway file gateway to replace NFS storage.
C. Deploy AWS Snowball Edge to provision NFS mounts to on-premises servers.
D. Deploy an AWS Storage Gateway volume gateway to replace the block storage.
E. Deploy Amazon Elastic File System (Amazon EFS) volumes and mount them to on-premises servers.
Answer: B,D
Explanation:
https://aws.amazon.com/storagegateway/file/
File Gateway provides a seamless way to connect to the cloud in order to store application
data files and backup images as durable objects in Amazon S3 cloud storage. File
Gateway offers SMB or NFS-based access to data in Amazon S3 with local caching. It can
be used for on-premises applications, and for Amazon EC2-based applications that need
file protocol access to S3 object storage.
https://aws.amazon.com/storagegateway/volume/
Volume Gateway presents cloud-backed iSCSI block storage volumes to your on-premises applications. Volume Gateway stores and manages on-premises data in Amazon S3 on
your behalf and operates in either cache mode or stored mode. In the cached Volume
Gateway mode, your primary data is stored in Amazon S3, while retaining your frequently
accessed data locally in the cache for low latency access.
Question # 183
A company operates a two-tier application for image processing. The application uses two
Availability Zones, each with one public subnet and one private subnet. An Application
Load Balancer (ALB) for the web tier uses the public subnets. Amazon EC2 instances for
the application tier use the private subnets.
Users report that the application is running more slowly than expected. A security audit of
the web server log files shows that the application is receiving millions of illegitimate
requests from a small number of IP addresses. A solutions architect needs to resolve the
immediate performance problem while the company investigates a more permanent
solution.
What should the solutions architect recommend to meet this requirement?
A. Modify the inbound security group for the web tier. Add a deny rule for the IP addresses that are consuming resources.
B. Modify the network ACL for the web tier subnets. Add an inbound deny rule for the IP addresses that are consuming resources.
C. Modify the inbound security group for the application tier. Add a deny rule for the IP addresses that are consuming resources.
D. Modify the network ACL for the application tier subnets. Add an inbound deny rule for the IP addresses that are consuming resources.
Answer: B
Explanation: Deny the requests at the first point of entry, the public subnet; don't allow them to cross into the private subnet. In this scenario, the security audit reveals that the application is receiving millions of illegitimate requests from a small number of IP addresses. To address this issue, it is
recommended to modify the network ACL (Access Control List) for the web tier subnets. By
adding an inbound deny rule specifically targeting the IP addresses that are consuming
resources, the network ACL can block the illegitimate traffic at the subnet level before it
reaches the web servers. This will help alleviate the excessive load on the web tier and
improve the application's performance.
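For illustration (the NACL ID and CIDR block are placeholders, not values from the question), an inbound deny rule can be added to the web tier subnet's network ACL with a low rule number so it is evaluated before the allow rules:

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",   # web tier subnet NACL (placeholder)
    RuleNumber=90,                          # evaluated before higher-numbered allow rules
    Protocol="-1",                          # all protocols
    RuleAction="deny",
    Egress=False,                           # inbound rule
    CidrBlock="198.51.100.0/24",            # example offending source range
)
```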
Question # 184
A gaming company uses Amazon DynamoDB to store user information such as geographic
location, player data, and leaderboards. The company needs to configure continuous
backups to an Amazon S3 bucket with a minimal amount of coding. The backups must not
affect availability of the application and must not affect the read capacity units (RCUs) that
are defined for the table
Which solution meets these requirements?
A. Use an Amazon EMR cluster. Create an Apache Hive job to back up the data to
Amazon S3. B. Export the data directly from DynamoDB to Amazon S3 with continuous backups. Turn on point-in-time recovery for the table. C. Configure Amazon DynamoDB Streams. Create an AWS Lambda function to consume the stream and export the data to an Amazon S3 bucket. D. Create an AWS Lambda function to export the data from the database tables to Amazon S3 on a regular basis. Turn on point-in-time recovery for the table.
Question # 185
A company has an on-premises server that uses an Oracle database to process and store
customer information The company wants to use an AWS database service to achieve higher availability and to improve application performance. The company also wants to
offload reporting from its primary database system.
Which solution will meet these requirements in the MOST operationally efficient way?
A. Use AWS Database Migration Service (AWS DMS) to create an Amazon RDS DB instance in multiple AWS Regions. Point the reporting functions toward a separate DB instance from the primary DB instance.
B. Use Amazon RDS in a Single-AZ deployment to create an Oracle database. Create a read replica in the same zone as the primary DB instance. Direct the reporting functions to the read replica.
C. Use Amazon RDS deployed in a Multi-AZ cluster deployment to create an Oracle database. Direct the reporting functions to use the reader instance in the cluster deployment.
D. Use Amazon RDS deployed in a Multi-AZ instance deployment to create an Amazon Aurora database. Direct the reporting functions to the reader instances.
Answer: D
Explanation: Amazon Aurora is a fully managed relational database that is compatible with
MySQL and PostgreSQL. It provides up to five times better performance than MySQL and
up to three times better performance than PostgreSQL. It also provides high availability and
durability by replicating data across multiple Availability Zones and continuously backing up
data to Amazon S3. By using Amazon RDS deployed in a Multi-AZ instance deployment
to create an Amazon Aurora database, the solution can achieve higher availability and
improve application performance.
Amazon Aurora supports read replicas, which are separate instances that share the same
underlying storage as the primary instance. Read replicas can be used to offload read-only
queries from the primary instance and improve performance. Read replicas can also be
used for reporting functions2. By directing the reporting functions to the reader instances,
the solution can offload reporting from its primary database system.
A. Use AWS Database Migration Service (AWS DMS) to create an Amazon RDS DB
instance in multiple AWS Regions Point the reporting functions toward a separate DB
instance from the primary DB instance. This solution will not meet the requirement of using
an AWS database service, as AWS DMS is a service that helps users migrate databases to
AWS, not a database service itself. It also involves creating multiple DB instances in
different Regions, which may increase complexity and cost.
B. Use Amazon RDS in a Single-AZ deployment to create an Oracle database Create a
read replica in the same zone as the primary DB instance. Direct the reporting functions to
the read replica. This solution will not meet the requirement of achieving higher availability,
as a Single-AZ deployment does not provide failover protection in case of an Availability
Zone outage. It also involves using Oracle as the database engine, which may not provide
better performance than Aurora.
C. Use Amazon RDS deployed in a Multi-AZ cluster deployment to create an Oracle
database. Direct the reporting functions to use the reader instance in the cluster deployment. This solution will not meet the requirement of improving application
performance, as Oracle may not provide better performance than Aurora. It also involves
using a cluster deployment, which is only supported for Aurora, not for Oracle.
Reference URL: https://aws.amazon.com/rds/aurora/
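As a hedged sketch (identifiers and instance class are placeholders), adding a reader instance to an existing Aurora MySQL cluster and pointing reports at the cluster's reader endpoint could look like this with boto3:

```python
import boto3

rds = boto3.client("rds")

# Add a reader instance to an existing Aurora MySQL cluster (placeholder names).
rds.create_db_instance(
    DBInstanceIdentifier="reporting-reader-1",
    DBClusterIdentifier="customer-db-cluster",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",
)

# Reporting jobs should connect to the cluster's reader endpoint, which load
# balances read-only connections across the reader instances.
cluster = rds.describe_db_clusters(DBClusterIdentifier="customer-db-cluster")
print(cluster["DBClusters"][0]["ReaderEndpoint"])
```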
Question # 186
A company has a small Python application that processes JSON documents and outputs
the results to an on-premises SQL database. The application runs thousands of times each
day. The company wants to move the application to the AWS Cloud. The company needs a
highly available solution that maximizes scalability and minimizes operational overhead.
Which solution will meet these requirements?
A. Place the JSON documents in an Amazon S3 bucket. Run the Python code on multiple
Amazon EC2 instances to process the documents. Store the results in an Amazon Aurora
DB cluster B. Place the JSON documents in an Amazon S3 bucket. Create an AWS Lambda function that runs the Python code to process the documents as they arrive in the S3 bucket. Store the results in an Amazon Aurora DB cluster. C. Place the JSON documents in an Amazon Elastic Block Store (Amazon EBS) volume. Use the EBS Multi-Attach feature to attach the volume to multiple Amazon EC2 instances. Run the Python code on the EC2 instances to process the documents. Store the results on an Amazon RDS DB instance. D. Place the JSON documents in an Amazon Simple Queue Service (Amazon SQS) queue as messages Deploy the Python code as a container on an Amazon Elastic Container Service (Amazon ECS) cluster that is configured with the Amazon EC2 launch type. Use the container to process the SQS messages. Store the results on an Amazon RDS DB instance.
Answer: B
Explanation: By placing the JSON documents in an S3 bucket, the documents will be
stored in a highly durable and scalable object storage service. The use of AWS Lambda
allows the company to run their Python code to process the documents as they arrive in the
S3 bucket without having to worry about the underlying infrastructure. This also allows for
horizontal scalability, as AWS Lambda will automatically scale the number of instances of
the function based on the incoming rate of requests. The results can be stored in an
Amazon Aurora DB cluster, which is a fully-managed, high-performance database service
that is compatible with MySQL and PostgreSQL. This will provide the necessary durability
and scalability for the results of the processing.
https://aws.amazon.com/rds/aurora/
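A minimal sketch of the S3-triggered processing path (the processing step and field names are hypothetical; writing to Aurora would additionally need a database driver and credentials, which are omitted here):

```python
import json
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    """Invoked by S3 ObjectCreated events; processes each uploaded JSON document."""
    results = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        document = json.loads(body)
        # Placeholder processing step; the real logic would come from the
        # company's existing Python application.
        results.append({"key": key, "fields": len(document)})
    # The results would then be written to the Aurora DB cluster.
    return results
```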
Question # 187
A company wants to build a web application on AWS. Client access requests to the website
are not predictable and can be idle for a long time. Only customers who have paid a
subscription fee can have the ability to sign in and use the web application.
Which combination of steps will meet these requirements MOST cost-effectively? (Select
THREE.)
A. Create an AWS Lambda function to retrieve user information from Amazon DynamoDB. Create an Amazon API Gateway endpoint to accept RESTful APIs. Send the API calls to the Lambda function.
B. Create an Amazon Elastic Container Service (Amazon ECS) service behind an Application Load Balancer to retrieve user information from Amazon RDS. Create an Amazon API Gateway endpoint to accept RESTful APIs. Send the API calls to the Lambda function.
C. Create an Amazon Cognito user pool to authenticate users.
D. Create an Amazon Cognito identity pool to authenticate users.
E. Use AWS Amplify to serve the frontend web content with HTML, CSS, and JS. Use an integrated Amazon CloudFront configuration.
F. Use Amazon S3 static web hosting with PHP, CSS, and JS. Use Amazon CloudFront to serve the frontend web content.
Question # 188
A company runs a three-tier web application in the AWS Cloud that operates across three
Availability Zones. The application architecture has an Application Load Balancer, an
Amazon EC2 web server that hosts user session states, and a MySQL database that runs
on an EC2 instance. The company expects sudden increases in application traffic. The company wants to be able to scale to meet future application capacity demands and to
ensure high availability across all three Availability Zones.
Which solution will meet these requirements?
A. Migrate the MySQL database to Amazon RDS for MySQL with a Multi-AZ DB cluster deployment. Use Amazon ElastiCache for Redis with high availability to store session data and to cache reads. Migrate the web server to an Auto Scaling group that is in three Availability Zones.
B. Migrate the MySQL database to Amazon RDS for MySQL with a Multi-AZ DB cluster deployment. Use Amazon ElastiCache for Memcached with high availability to store session data and to cache reads. Migrate the web server to an Auto Scaling group that is in three Availability Zones.
C. Migrate the MySQL database to Amazon DynamoDB. Use DynamoDB Accelerator (DAX) to cache reads. Store the session data in DynamoDB. Migrate the web server to an Auto Scaling group that is in three Availability Zones.
D. Migrate the MySQL database to Amazon RDS for MySQL in a single Availability Zone. Use Amazon ElastiCache for Redis with high availability to store session data and to cache reads. Migrate the web server to an Auto Scaling group that is in three Availability Zones.
Answer: A
Explanation: This answer is correct because it meets the requirements of scaling to meet
future application capacity demands and ensuring high availability across all three
Availability Zones. By migrating the MySQL database to Amazon RDS for MySQL with a
Multi-AZ DB cluster deployment, the company can benefit from automatic failover, backup,
and patching of the database across multiple Availability Zones. By using Amazon
ElastiCache for Redis with high availability, the company can store session data and cache
reads in a fast, in-memory data store that can also fail over across Availability Zones. By
migrating the web server to an Auto Scaling group that is in three Availability Zones, the
company can automatically scale the web server capacity based on the demand and traffic
patterns.
References:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/AutoFailover.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html
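A minimal sketch of externalizing session state to ElastiCache for Redis (the endpoint, key naming, and TTL are assumptions for illustration), using the redis-py client:

```python
import json
import redis

# Placeholder ElastiCache for Redis primary endpoint.
session_store = redis.Redis(host="my-sessions.xxxxxx.ng.0001.use1.cache.amazonaws.com",
                            port=6379, decode_responses=True)

def save_session(session_id: str, data: dict, ttl_seconds: int = 1800) -> None:
    """Store session state outside the web server so any instance can serve the user."""
    session_store.setex(f"session:{session_id}", ttl_seconds, json.dumps(data))

def load_session(session_id: str) -> dict:
    raw = session_store.get(f"session:{session_id}")
    return json.loads(raw) if raw else {}
```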
Question # 189
A company runs a website that uses a content management system (CMS) on Amazon
EC2. The CMS runs on a single EC2 instance and uses an Amazon Aurora MySQL Multi-AZ DB instance for the data tier. Website images are stored on an Amazon Elastic Block
Store (Amazon EBS) volume that is mounted inside the EC2 instance.
Which combination of actions should a solutions architect take to improve the performance
and resilience of the website? (Select TWO.)
A. Move the website images into an Amazon S3 bucket that is mounted on every EC2
instance. B. Share the website images by using an NFS share from the primary EC2 instance. Mount this share on the other EC2 instances. C. Move the website images onto an Amazon Elastic File System (Amazon EFS) file system that is mounted on every EC2 instance. D. Create an Amazon Machine Image (AMI) from the existing EC2 instance Use the AMI to provision new instances behind an Application Load Balancer as part of an Auto Scaling group. Configure the Auto Scaling group to maintain a minimum of two instances. Configure an accelerator in AWS Global Accelerator for the website. E. Create an Amazon Machine Image (AMI) from the existing EC2 instance. Use the AMI to provision new instances behind an Application Load Balancer as part of an Auto Scaling group. Configure the Auto Scaling group to maintain a minimum of two instances. Configure an Amazon CloudFront distribution for the website.
Answer: C,E
Explanation: Option C provides moving the website images onto an Amazon EFS file
system that is mounted on every EC2 instance. Amazon EFS provides a scalable and fully
managed file storage solution that can be accessed concurrently from multiple EC2
instances. This ensures that the website images can be accessed efficiently and
consistently by all instances, improving performance. In Option E, the Auto Scaling group
maintains a minimum of two instances, ensuring resilience by automatically replacing any
unhealthy instances. Additionally, configuring an Amazon CloudFront distribution for the
website further improves performance by caching content at edge locations closer to the
end-users, reducing latency and improving content delivery. Hence, combining these
actions improves the website's performance through efficient image storage and content
delivery.
Question # 190
A company moved its on-premises PostgreSQL database to an Amazon RDS for
PostgreSQL DB instance. The company successfully launched a new product. The
workload on the database has increased.
The company wants to accommodate the larger workload without adding infrastructure.
Which solution will meet these requirements MOST cost-effectively?
A. Buy reserved DB instances for the total workload. Make the Amazon RDS for
PostgreSQL DB instance larger. B. Make the Amazon RDS for PostgreSQL DB instance a Multi-AZ DB instance. C. Buy reserved DB instances for the total workload. Add another Amazon RDS for PostgreSQL DB instance. D. Make the Amazon RDS for PostgreSQL DB instance an on-demand DB instance.
Answer: A
Explanation: This answer is correct because it meets the requirements of accommodating
the larger workload without adding infrastructure and minimizing the cost. Reserved DB
instances are a billing discount applied to the use of certain on-demand DB instances in
your account. Reserved DB instances provide you with a significant discount compared to
on-demand DB instance pricing. You can buy reserved DB instances for the total workload
and choose between three payment options: No Upfront, Partial Upfront, or All Upfront.
You can make the Amazon RDS for PostgreSQL DB instance larger by modifying its
instance type to a higher performance class. This way, you can increase the CPU,
memory, and network capacity of your DB instance and handle the increased workload.
References:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithReservedDBInstances.html
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.DBInstanceClass.html
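As a hedged illustration (the identifier and target class are placeholders), scaling the existing DB instance to a larger class can be done in place with boto3; the change is applied during the next maintenance window unless ApplyImmediately is set:

```python
import boto3

rds = boto3.client("rds")

# Move the existing PostgreSQL DB instance to a larger instance class.
rds.modify_db_instance(
    DBInstanceIdentifier="app-postgres",   # placeholder identifier
    DBInstanceClass="db.r6g.2xlarge",      # larger class for the increased workload
    ApplyImmediately=False,                # apply during the next maintenance window
)
```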
Question # 191
A company's data platform uses an Amazon Aurora MySQL database. The database has
multiple read replicas and multiple DB instances across different Availability Zones. Users
have recently reported errors from the database that indicate that there are too many connections. The company wants to reduce the failover time by 20% when a read replica is
promoted to primary writer.
Which solution will meet this requirement?
A. Switch from Aurora to Amazon RDS with Multi-AZ cluster deployment. B. Use Amazon RDS Proxy in front of the Aurora database. C. Switch to Amazon DynamoDB with DynamoDB Accelerator (DAX) for read connections. D. Switch to Amazon Redshift with relocation capability.
Answer: B
Explanation: Amazon RDS Proxy is a service that provides a fully managed, highly
available database proxy for Amazon RDS and Aurora databases. It allows you to pool and
share database connections, reduce database load, and improve application scalability and
availability.
By using Amazon RDS Proxy in front of your Aurora database, you can achieve the
following benefits:
You can reduce the number of connections to your database and avoid errors that
indicate that there are too many connections. Amazon RDS Proxy handles the
connection management and multiplexing for you, so you can use fewer database
connections and resources.
You can reduce the failover time by 20% when a read replica is promoted to
primary writer. Amazon RDS Proxy automatically detects failures and routes traffic
to the new primary instance without requiring changes to your application code or
configuration. According to a benchmark test, using Amazon RDS Proxy reduced
the failover time from 66 seconds to 53 seconds, which is a 20% improvement.
You can improve the security and compliance of your database access. Amazon
RDS Proxy integrates with AWS Secrets Manager and AWS Identity and Access
Management (IAM) to enable secure and granular authentication and authorization
for your database connections.
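A hedged sketch of placing RDS Proxy in front of the Aurora cluster (proxy name, ARNs, subnet IDs, and cluster identifier are placeholders); the application then connects to the proxy endpoint instead of the cluster endpoint:

```python
import boto3

rds = boto3.client("rds")

# Create a proxy in front of the Aurora MySQL cluster.
rds.create_db_proxy(
    DBProxyName="aurora-app-proxy",
    EngineFamily="MYSQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:app-db-creds",
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::111122223333:role/rds-proxy-secrets-role",
    VpcSubnetIds=["subnet-0abc", "subnet-0def"],
)

# Register the Aurora cluster as the proxy's target.
rds.register_db_proxy_targets(
    DBProxyName="aurora-app-proxy",
    DBClusterIdentifiers=["app-aurora-cluster"],
)
```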
Question # 192
A company uses Amazon EC2 instances to host its internal systems. As part of a
deployment operation, an administrator tries to use the AWS CLI to terminate an EC2
instance. However, the administrator receives a 403 (Access Denied) error message.
The administrator is using an IAM role that has the following IAM policy attached:
What is the cause of the unsuccessful request?
A. The EC2 instance has a resource-based policy with a Deny statement.
B. The principal has not been specified in the policy statement.
C. The "Action" field does not grant the actions that are required to terminate the EC2 instance.
D. The request to terminate the EC2 instance does not originate from the CIDR blocks 192.0.2.0/24 or 203.0.113.0/24.
Answer: D
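The policy attached to the role is not reproduced in this text (it appeared as an image in the original question). Purely as a hypothetical illustration of the pattern the answer points to, an identity-based policy that allows EC2 termination only from specific CIDR blocks might look like the following; requests from any other source IP would receive an Access Denied error:

```python
# Hypothetical policy: ec2:TerminateInstances is allowed only when the request
# originates from the listed CIDR blocks (aws:SourceIp condition).
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:TerminateInstances",
            "Resource": "*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": ["192.0.2.0/24", "203.0.113.0/24"]
                }
            },
        }
    ],
}
```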
Question # 193
A company uses Amazon API Gateway to run a private gateway with two REST APIs in the
same VPC. The BuyStock RESTful web service calls the CheckFunds RESTful
web service to ensure that enough funds are available before a stock can be purchased.
The company has noticed in the VPC flow logs that the BuyStock RESTful web
service calls the CheckFunds RESTful web service over the internet instead of through the
VPC. A solutions architect must implement a solution so that the APIs
communicate through the VPC. Which solution will meet these requirements with the FEWEST changes to the code?
A. Add an X-API-Key header in the HTTP header for authorization.
B. Use an interface endpoint.
C. Use a gateway endpoint.
D. Add an Amazon Simple Queue Service (Amazon SQS) queue between the two REST APIs.
Answer: B
Explanation: Using an interface endpoint will allow the BuyStock RESTful web service and
the CheckFunds RESTful web service to communicate through the VPC without any
changes to the code. An interface endpoint creates an elastic network interface (ENI) in the
customer's VPC, and with private DNS enabled the API calls resolve to that ENI, so the
traffic between the two APIs never leaves the VPC.
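A hedged sketch of creating an interface VPC endpoint for API Gateway's execute-api service (the Region, VPC, subnet, and security group IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Interface endpoint so private REST API calls stay inside the VPC.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.execute-api",
    SubnetIds=["subnet-0aaa", "subnet-0bbb"],
    SecurityGroupIds=["sg-0ccc"],
    PrivateDnsEnabled=True,  # lets the default execute-api DNS names resolve privately
)
```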
Question # 194
A company has multiple Windows file servers on premises. The company wants to migrate
and consolidate its files into an Amazon FSx for Windows File Server file system. File
permissions must be preserved to ensure that access rights do not change.
Which solutions will meet these requirements? (Select TWO.)
A. Deploy AWS DataSync agents on premises. Schedule DataSync tasks to transfer the data to the FSx for Windows File Server file system.
B. Copy the shares on each file server into Amazon S3 buckets by using the AWS CLI. Schedule AWS DataSync tasks to transfer the data to the FSx for Windows File Server file system.
C. Remove the drives from each file server. Ship the drives to AWS for import into Amazon S3. Schedule AWS DataSync tasks to transfer the data to the FSx for Windows File Server file system.
D. Order an AWS Snowcone device. Connect the device to the on-premises network. Launch AWS DataSync agents on the device. Schedule DataSync tasks to transfer the data to the FSx for Windows File Server file system.
E. Order an AWS Snowball Edge Storage Optimized device. Connect the device to the on-premises network. Copy data to the device by using the AWS CLI. Ship the device back to AWS for import into Amazon S3. Schedule AWS DataSync tasks to transfer the data to the FSx for Windows File Server file system.
Answer: A,D
Explanation: A. This option involves deploying DataSync agents on your on-premises file
servers and using DataSync to transfer the data directly to the FSx for Windows File
Server. DataSync ensures that file permissions are preserved during the migration process.
D. This option involves using an AWS Snowcone device, a portable data transfer device.
You would connect the Snowcone device to your on-premises network, launch DataSync
agents on the device, and schedule DataSync tasks to transfer the data to FSx for
Windows File Server. DataSync handles the migration process while preserving file
permissions.
Question # 195
A company is running a microservices application on Amazon EC2 instances. The
company wants to migrate the application to an Amazon Elastic Kubernetes Service
(Amazon EKS) cluster for scalability. The company must configure the Amazon EKS
control plane with endpoint private access set to true and endpoint public access set to
false to maintain security compliance The company must also put the data plane in private
subnets. However, the company has received error notifications because the node cannot
join the cluster.
Which solution will allow the node to join the cluster?
A. Grant the required permission in AWS Identity and Access Management (IAM) to the AmazonEKSNodeRole IAM role.
B. Create interface VPC endpoints to allow nodes to access the control plane.
C. Recreate nodes in the public subnet. Restrict security groups for EC2 nodes.
D. Allow outbound traffic in the security group of the nodes.
Question # 196
A company wants to create an application to store employee data in a hierarchical
structured relationship. The company needs a minimum-latency response to high-traffic
queries for the employee data and must protect any sensitive data. The company also
needs to receive monthly email messages if any financial information is present in the
employee data.
Which combination of steps should a solutions architect take to meet these requirements?
(Select TWO.)
A. Use Amazon Redshift to store the employee data in hierarchies. Unload the data to Amazon S3 every month.
B. Use Amazon DynamoDB to store the employee data in hierarchies. Export the data to Amazon S3 every month.
C. Configure Amazon Macie for the AWS account. Integrate Macie with Amazon EventBridge to send monthly events to AWS Lambda.
D. Use Amazon Athena to analyze the employee data in Amazon S3. Integrate Athena with Amazon QuickSight to publish analysis dashboards and share the dashboards with users.
E. Configure Amazon Macie for the AWS account. Integrate Macie with Amazon EventBridge to send monthly notifications through an Amazon Simple Notification Service (Amazon SNS) subscription.
Question # 197
A company wants to use high-performance computing and artificial intelligence to improve
its fraud prevention and detection technology. The company requires distributed processing
to complete a single workload as quickly as possible.
Which solution will meet these requirements?
A. Use Amazon Elastic Kubernetes Service (Amazon EKS) and multiple containers. B. Use AWS ParallelCluster and the Message Passing Interface (MPI) libraries. C. Use an Application Load Balancer and Amazon EC2 instances. D. Use AWS Lambda functions.
Answer: B
Explanation: AWS ParallelCluster is a service that allows you to create and manage high-performance computing (HPC) clusters on AWS. It supports multiple schedulers, including
AWS Batch, which can run distributed workloads across multiple EC2 instances.
MPI is a standard for message passing between processes in parallel computing. It
provides functions for sending and receiving data, synchronizing processes, and managing
communication groups.
By using AWS ParallelCluster and MPI libraries, you can take advantage of the following
benefits:
You can easily create and configure HPC clusters that meet your specific
requirements, such as instance type, number of nodes, network configuration, and
storage options.
You can leverage the scalability and elasticity of AWS to run large-scale parallel
workloads without worrying about provisioning or managing servers.
You can use MPI libraries to optimize the performance and efficiency of your
parallel applications by enabling inter-process communication and data
exchange.
You can choose from a variety of MPI implementations that are compatible with
AWS ParallelCluster, such as Open MPI, Intel MPI, and MPICH.
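To make the MPI point concrete, here is a minimal distributed sketch using mpi4py (a common Python binding for MPI, assumed to be installed on the cluster nodes), which ParallelCluster can launch across instances with mpirun:

```python
# hello_mpi.py - run with: mpirun -n 4 python hello_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()     # this process's ID within the job
size = comm.Get_size()     # total number of processes across the cluster

# Each rank works on its slice of the problem, then rank 0 gathers the results.
partial_result = rank * rank
results = comm.gather(partial_result, root=0)

if rank == 0:
    print(f"Gathered from {size} ranks: {results}")
```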
Question # 198
A company runs container applications by using Amazon Elastic Kubernetes Service
(Amazon EKS) and the Kubernetes Horizontal Pod Autoscaler. The workload is not
consistent throughout the day. A solutions architect notices that the number of nodes does
not automatically scale out when the existing nodes have reached maximum capacity in the
cluster, which causes performance issues.
Which solution will resolve this issue with the LEAST administrative overhead?
A. Scale out the nodes by tracking the memory usage B. Use the Kubernetes Cluster Autoscaler to manage the number of nodes in the cluster. C. Use an AWS Lambda function to resize the EKS cluster automatically. D. Use an Amazon EC2 Auto Scaling group to distribute the workload.
Answer: B
Explanation: The Kubernetes Cluster Autoscaler is a component that automatically adjusts
the number of nodes in your cluster when pods fail or are rescheduled onto other nodes. It
uses Auto Scaling groups to scale up or down the nodes according to the demand and
capacity of your cluster.
By using the Kubernetes Cluster Autoscaler in your Amazon EKS cluster, you can achieve
the following benefits:
You can improve the performance and availability of your container applications by
ensuring that there are enough nodes to run your pods and that there are no idle
nodes wasting resources.
You can reduce the administrative overhead of managing your cluster size
manually or using custom scripts. The Cluster Autoscaler handles the scaling
decisions and actions for you based on the metrics and events from your cluster.
You can leverage the integration of Amazon EKS and AWS Auto Scaling to
optimize the cost and efficiency of your cluster. You can use features such as
launch templates, mixed instances policies, and spot instances to customize your
node configuration and save up to 90% on compute costs.
Question # 199
A global marketing company has applications that run in the ap-southeast-2 Region and
the eu-west-1 Region. Applications that run in a VPC in eu-west-1 need to communicate
securely with databases that run in a VPC in ap-southeast-2.
Which network design will meet these requirements?
A. Create a VPC peering connection between the eu-west-1 VPC and the ap-southeast-2 VPC. Create an inbound rule in the eu-west-1 application security group that allows traffic from the database server IP addresses in the ap-southeast-2 security group.
B. Configure a VPC peering connection between the ap-southeast-2 VPC and the eu-west-1 VPC. Update the subnet route tables. Create an inbound rule in the ap-southeast-2 database security group that references the security group ID of the application servers in eu-west-1.
C. Configure a VPC peering connection between the ap-southeast-2 VPC and the eu-west-1 VPC. Update the subnet route tables. Create an inbound rule in the ap-southeast-2 database security group that allows traffic from the eu-west-1 application server IP addresses.
D. Create a transit gateway with a peering attachment between the eu-west-1 VPC and the ap-southeast-2 VPC. After the transit gateways are properly peered and routing is configured, create an inbound rule in the database security group that references the security group ID of the application servers in eu-west-1.
Answer: C
Question # 200
A company migrated a MySQL database from the company's on-premises data center to
an Amazon RDS for MySQL DB instance. The company sized the RDS DB instance to
meet the company's average daily workload. Once a month, the database performs slowly
when the company runs queries for a report. The company wants to have the ability to run
reports and maintain the performance of the daily workloads.
Which solution will meet these requirements?
A. Create a read replica of the database. Direct the queries to the read replica. B. Create a backup of the database. Restore the backup to another DB instance. Direct the queries to the new database. C. Export the data to Amazon S3. Use Amazon Athena to query the S3 bucket. D. Resize the DB instance to accommodate the additional workload.
Answer: C
Explanation: Amazon Athena is a service that allows you to run SQL queries on data
stored in Amazon S3. It is serverless, meaning you do not need to provision or manage any
infrastructure. You only pay for the queries you run and the amount of data scanned.
By using Amazon Athena to query your data in Amazon S3, you can achieve the following
benefits:
You can run queries for your report without affecting the performance of your
Amazon RDS for MySQL DB instance. You can export your data from your DB
instance to an S3 bucket and use Athena to query the data in the bucket. This
way, you can avoid the overhead and contention of running queries on your DB
instance.
You can reduce the cost and complexity of running queries for your report. You do
not need to create a read replica or a backup of your DB instance, which would
incur additional charges and require maintenance. You also do not need to resize
your DB instance to accommodate the additional workload, which would increase
your operational overhead.
You can leverage the scalability and flexibility of Amazon S3 and Athena. You can
store large amounts of data in S3 and query them with Athena without worrying
about capacity or performance limitations. You can also use different formats,
compression methods, and partitioning schemes to optimize your data storage and
query performance.
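As a hedged sketch (the database, table, query, and bucket names are placeholders), running the monthly report against the data exported to S3 with Athena could look like this:

```python
import boto3

athena = boto3.client("athena")

# Run the reporting query against the exported data without touching the RDS instance.
response = athena.start_query_execution(
    QueryString="SELECT customer_id, SUM(amount) AS total FROM orders GROUP BY customer_id",
    QueryExecutionContext={"Database": "reporting_db"},       # placeholder Athena database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/monthly-report/"},
)
print(response["QueryExecutionId"])
```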
Question # 201
A company has migrated multiple Microsoft Windows Server workloads to Amazon EC2
instances that run in the us-west-1 Region. The company manually backs up the workloads
to create an image as needed.
In the event of a natural disaster in the us-west-1 Region, the company wants to recover
workloads quickly in the us-west-2 Region. The company wants no more than 24 hours of
data loss on the EC2 instances. The company also wants to automate any backups of the
EC2 instances.
Which solutions will meet these requirements with the LEAST administrative effort? (Select
TWO.)
A. Create an Amazon EC2-backed Amazon Machine Image (AMI) lifecycle policy to create a backup based on tags. Schedule the backup to run twice daily. Copy the image on demand.
B. Create an Amazon EC2-backed Amazon Machine Image (AMI) lifecycle policy to create a backup based on tags. Schedule the backup to run twice daily. Configure the copy to the us-west-2 Region.
C. Create backup vaults in us-west-1 and in us-west-2 by using AWS Backup. Create a backup plan for the EC2 instances based on tag values. Create an AWS Lambda function to run as a scheduled job to copy the backup data to us-west-2.
D. Create a backup vault by using AWS Backup. Use AWS Backup to create a backup plan for the EC2 instances based on tag values. Define the destination for the copy as us-west-2. Specify the backup schedule to run twice daily.
E. Create a backup vault by using AWS Backup. Use AWS Backup to create a backup plan for the EC2 instances based on tag values. Specify the backup schedule to run twice daily. Copy on demand to us-west-2.
Answer: B,D
Explanation: Option B suggests using an EC2-backed Amazon Machine Image (AMI)
lifecycle policy to automate the backup process. By configuring the policy to run twice daily
and specifying the copy to the us-west-2 Region, the company can ensure regular backups
are created and copied to the alternate region. Option D proposes using AWS Backup,
which provides a centralized backup management solution. By creating a backup vault and
backup plan based on tag values, the company can automate the backup process for the
EC2 instances. The backup schedule can be set to run twice daily, and the destination for
the copy can be defined as the us-west-2 Region.
Both options automate the backup process and include copying the backups to the us-west-2 Region, ensuring data resilience in the event of a disaster. These solutions
minimize administrative effort by leveraging automated backup and copy mechanisms
provided by AWS services.
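A minimal sketch of the AWS Backup approach in option D (the vault names, role ARN, schedule, and tag key are assumptions): a plan that runs twice daily, copies each recovery point to a vault in us-west-2, and selects instances by tag.

```python
import boto3

backup = boto3.client("backup", region_name="us-west-1")

plan = backup.create_backup_plan(BackupPlan={
    "BackupPlanName": "ec2-twice-daily",
    "Rules": [{
        "RuleName": "twice-daily-with-copy",
        "TargetBackupVaultName": "primary-vault",             # vault in us-west-1
        "ScheduleExpression": "cron(0 0,12 * * ? *)",         # 00:00 and 12:00 UTC
        "Lifecycle": {"DeleteAfterDays": 35},
        "CopyActions": [{
            "DestinationBackupVaultArn":
                "arn:aws:backup:us-west-2:111122223333:backup-vault:dr-vault",
        }],
    }],
})

# Select the EC2 instances to protect by tag value.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "tagged-ec2",
        "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
        "ListOfTags": [{"ConditionType": "STRINGEQUALS",
                        "ConditionKey": "backup", "ConditionValue": "true"}],
    },
)
```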
Question # 202
A company runs a container application by using Amazon Elastic Kubernetes Service
(Amazon EKS). The application includes microservices that manage customers and place
orders. The company needs to route incoming requests to the appropriate microservices.
Which solution will meet this requirement MOST cost-effectively?
A. Use the AWS Load Balancer Controller to provision a Network Load Balancer. B. Use the AWS Load Balancer Controller to provision an Application Load Balancer. C. Use an AWS Lambda function to connect the requests to Amazon EKS. D. Use Amazon API Gateway to connect the requests to Amazon EKS.
Answer: B
Explanation: An Application Load Balancer is a type of Elastic Load Balancer that
operates at the application layer (layer 7) of the OSI model. It can distribute incoming traffic
across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and
Lambda functions. It can also route requests based on the content of the request, such as
the host name, path, or query parameters.
The AWS Load Balancer Controller is a controller that helps you manage Elastic Load
Balancers for your Kubernetes cluster. It can provision Application Load Balancers or
Network Load Balancers when you create Kubernetes Ingress or Service resources.
By using the AWS Load Balancer Controller to provision an Application Load Balancer for
your Amazon EKS cluster, you can achieve the following benefits:
You can route incoming requests to the appropriate microservices based on the
rules you define in your Ingress resource. For example, you can route requests
with different host names or paths to different microservices that handle customers
and orders.
You can improve the performance and availability of your container applications by
distributing the load across multiple targets and enabling health checks and
automatic scaling.
You can reduce the cost and complexity of managing your load balancers by using
a single controller that integrates with Amazon EKS and Kubernetes. You do not
need to manually create or configure load balancers or update them when your
cluster changes.
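For illustration (the service names, paths, and annotations are assumptions; manifests are more commonly written in YAML and applied with kubectl), an Ingress that the AWS Load Balancer Controller would turn into an ALB with path-based routing can be built with the Kubernetes Python client like this:

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside the cluster

# Path-based routing: /customers -> customer service, /orders -> order service.
# The AWS Load Balancer Controller provisions an internet-facing ALB for this Ingress.
def path_rule(path: str, service: str, port: int) -> client.V1HTTPIngressPath:
    return client.V1HTTPIngressPath(
        path=path, path_type="Prefix",
        backend=client.V1IngressBackend(
            service=client.V1IngressServiceBackend(
                name=service, port=client.V1ServiceBackendPort(number=port))))

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(
        name="shop-ingress",
        annotations={
            "alb.ingress.kubernetes.io/scheme": "internet-facing",
            "alb.ingress.kubernetes.io/target-type": "ip",
        },
    ),
    spec=client.V1IngressSpec(
        ingress_class_name="alb",
        rules=[client.V1IngressRule(http=client.V1HTTPIngressRuleValue(paths=[
            path_rule("/customers", "customer-service", 80),
            path_rule("/orders", "order-service", 80),
        ]))],
    ),
)

client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)
```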
Question # 203
An application uses an Amazon RDS MySQL DB instance. The RDS database is becoming
low on disk space. A solutions architect wants to increase the disk space without downtime.
Which solution meets these requirements with the LEAST amount of effort?
A. Enable storage autoscaling in RDS.
B. Increase the RDS database instance size.
C. Change the RDS database instance storage type to Provisioned IOPS.
D. Back up the RDS database, increase the storage capacity, restore the database, and stop the previous instance.
Question # 204
A social media company wants to allow its users to upload images in an application that is hosted in the AWS Cloud. The company needs a solution that automatically resizes the
images so that the images can be displayed on multiple device types. The application
experiences unpredictable traffic patterns throughout the day. The company is seeking a
highly available solution that maximizes scalability.
What should a solutions architect do to meet these requirements?
A. Create a static website hosted in Amazon S3 that invokes AWS Lambda functions to
resize the images and store the images in an Amazon S3 bucket. B. Create a static website hosted in Amazon CloudFront that invokes AWS Step Functions to resize the images and store the images in an Amazon RDS database. C. Create a dynamic website hosted on a web server that runs on an Amazon EC2 instance Configure a process that runs on the EC2 instance to resize the images and store the images in an Amazon S3 bucket. D. Create a dynamic website hosted on an automatically scaling Amazon Elastic Container Service (Amazon ECS) cluster that creates a resize job in Amazon Simple Queue Service (Amazon SQS). Set up an image-resizing program that runs on an Amazon EC2 instance to process the resize jobs
Answer: A
Explanation: By using Amazon S3 and AWS Lambda together, you can create a
serverless architecture that provides highly scalable and available image resizing
capabilities. Here's how the solution would work: Set up an Amazon S3 bucket to store the
original images uploaded by users. Configure an event trigger on the S3 bucket to invoke
an AWS Lambda function whenever a new image is uploaded. The Lambda function can
be designed to retrieve the uploaded image, perform the necessary resizing operations
based on device requirements, and store the resized images back in the S3 bucket or a
different bucket designated for resized images. Configure the Amazon S3 bucket to make
the resized images publicly accessible for serving to users.
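A minimal sketch of that S3-triggered resize function (the bucket names, target widths, and the Pillow dependency packaged as a Lambda layer are assumptions):

```python
import io
import boto3
from PIL import Image

s3 = boto3.client("s3")
TARGET_WIDTHS = [320, 768, 1280]  # example device breakpoints

def lambda_handler(event, context):
    """Triggered by S3 ObjectCreated events; writes resized copies to another bucket."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        image = Image.open(io.BytesIO(original))
        for width in TARGET_WIDTHS:
            height = int(image.height * width / image.width)
            resized = image.resize((width, height))
            buffer = io.BytesIO()
            resized.save(buffer, format=image.format or "PNG")
            s3.put_object(
                Bucket="example-resized-images",   # placeholder destination bucket
                Key=f"{width}w/{key}",
                Body=buffer.getvalue(),
            )
```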
Question # 205
A retail company uses a regional Amazon API Gateway API for its public REST APIs. The
API Gateway endpoint is a custom domain name that points to an Amazon Route 53 alias
record. A solutions architect needs to create a solution that has minimal effects on
customers and minimal data loss to release the new version of APIs.
Which solution will meet these requirements?
A. Create a canary release deployment stage for API Gateway. Deploy the latest API version. Point an appropriate percentage of traffic to the canary stage. After API verification, promote the canary stage to the production stage.
B. Create a new API Gateway endpoint with a new version of the API in OpenAPI YAML file format. Use the import-to-update operation in merge mode into the API in API Gateway. Deploy the new version of the API to the production stage.
C. Create a new API Gateway endpoint with a new version of the API in OpenAPI JSON file format. Use the import-to-update operation in overwrite mode into the API in API Gateway. Deploy the new version of the API to the production stage.
D. Create a new API Gateway endpoint with new versions of the API definitions. Create a custom domain name for the new API Gateway API. Point the Route 53 alias record to the new API Gateway API custom domain name.
Answer: A
Explanation: This answer is correct because it meets the requirements of releasing the
new version of APIs with minimal effects on customers and minimal data loss. A canary
release deployment is a software development strategy in which a new version of an API is
deployed for testing purposes, and the base version remains deployed as a production
release for normal operations on the same stage. In a canary release deployment, total API
traffic is separated at random into a production release and a canary release with a preconfigured ratio. Typically, the canary release receives a small percentage of API traffic
and the production release takes up the rest. The updated API features are only visible to
API traffic through the canary. You can adjust the canary traffic percentage to optimize test
coverage or performance. By keeping canary traffic small and the selection random, most
users are not adversely affected at any time by potential bugs in the new version, and no
single user is adversely affected all the time. After the test metrics pass your requirements,
you can promote the canary release to the production release and disable the canary from
the deployment. This makes the new features available in the production stage.
References:
https://docs.aws.amazon.com/apigateway/latest/developerguide/canaryrelease.html
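A hedged sketch of starting a canary on the production stage with boto3 (the REST API ID, stage name, and traffic percentage are placeholders); after verification, the canary can be promoted by updating the stage:

```python
import boto3

apigw = boto3.client("apigateway")

# Deploy the new API version to the prod stage as a canary that receives 10% of traffic.
apigw.create_deployment(
    restApiId="a1b2c3d4e5",          # placeholder REST API ID
    stageName="prod",
    description="v2 canary",
    canarySettings={
        "percentTraffic": 10.0,      # share of requests routed to the canary
        "useStageCache": False,
    },
)
```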
Amazon SAA-C03 Frequently Asked Questions
What is the passing score for the SAA-C03 exam?
Answer: The passing score for the SAA-C03 exam is 720 out of 1000.
How many questions are on the SAA-C03 exam?
Answer: The SAA-C03 exam consists of 65 multiple choice and multiple response questions.
What is the time limit for the SAA-C03 exam?
Answer: The time limit for the SAA-C03 exam is 130 minutes.
What are the recommended study materials for the SAA-C03 exam?
Answer: The recommended study materials for the SAA-C03 exam include the
AWS Certified Solutions Architect Associate Exam Guide, AWS
documentation, white papers, and hands-on experience with AWS services.
Can the SAA-C03 exam be taken online?
Answer: Yes, the SAA-C03 exam is delivered online through the AWS certification platform.
What is the cost of the SAA-C03 exam?
Answer: The cost of the SAA-C03 exam is $150 USD.
What is the format of the SAA-C03 exam?
Answer: The SAA-C03 exam consists of multiple choice and multiple
response questions and is delivered in a computer-based format.
How long is the SAA-C03 certification valid for?
Answer: The SAA-C03 certification is valid for three years, after which
recertification is required to maintain the certification.
What are the topics covered in the SAA-C03 exam?
Answer: The SAA-C03 exam covers topics such as AWS core services, design and
deployment of scalable, highly available, and fault-tolerant systems,
implementation of security and compliance solutions, and more.
What are the eligibility criteria for taking the SAA-C03 exam?
Answer: There are no specific eligibility criteria for taking the SAA-C03 exam.
However, it is recommended to have at least one year of experience with
the AWS platform, as well as an understanding of AWS services,
architecture, security, and billing.
What is the average salary of an AWS Certified Solutions Architect -
Associate?
Answer: The average salary of an AWS Certified Solutions Architect - Associate
varies depending on several factors such as location, industry, and
experience. On average, the salary for an AWS Certified Solutions
Architect - Associate ranges from $90,000 to $150,000 per year.
What industries commonly use AWS Certified Solutions Architects -
Associate?
Answer: AWS Certified Solutions Architects - Associate are in high demand across
many industries, including technology, finance, healthcare, e-commerce,
and more. These professionals are able to design, deploy, and manage
scalable and secure cloud-based systems on the AWS platform.
What are the career paths for an AWS Certified Solutions Architect -
Associate?
Answer: The career paths for an AWS Certified Solutions Architect - Associate
can vary depending on their interests and goals. Some common career
paths include advancing to an AWS Certified Solutions Architect -
Professional, pursuing additional AWS certifications, or moving into
management or leadership roles within their organization.
What additional certifications or training can an AWS Certified
Solutions Architect - Associate pursue to advance their career?
Answer: An AWS Certified Solutions Architect - Associate can pursue additional
AWS certifications, such as the AWS Certified Solutions Architect -
Professional, AWS Certified DevOps Engineer, or AWS Certified Big Data -
Specialty. They can also pursue training in specific AWS services, such
as Amazon S3, Amazon EC2, or Amazon RDS.
How does obtaining an AWS Certified Solutions Architect - Associate
certification impact one's job prospects and earning potential?
Answer: Obtaining an AWS Certified Solutions Architect - Associate certification
can positively impact one's job prospects and earning potential.
Employers often view AWS certification as a sign of technical expertise
and experience, and certified individuals are typically offered higher
salaries and more job opportunities.
What are the job duties and responsibilities of an AWS Certified
Solutions Architect - Associate?
Answer: The job duties and responsibilities of an AWS Certified Solutions
Architect - Associate include designing, deploying, and managing
scalable, secure, and highly available systems on the AWS platform,
evaluating and recommending AWS services for specific business needs,
and working with stakeholders to ensure the proper operation and
performance of AWS-based systems.
How does the demand for AWS Certified Solutions Architects - Associate
vary by region and industry?
Answer: The demand for AWS Certified Solutions Architects - Associate varies by
region and industry, with higher demand in regions with a strong
technology presence and in industries that heavily rely on cloud-based
systems.
What are some of the most challenging and rewarding aspects of being an
AWS Certified Solutions Architect - Associate?
Answer: The most challenging aspect of being an AWS Certified Solutions
Architect - Associate is staying current with the rapidly evolving AWS
platform and new services and features. The most rewarding aspect is the
opportunity to work on exciting and innovative projects, and the
satisfaction of delivering solutions that drive business success.
How does continuous education and keeping up with the latest advancements in AWS technology impact the success and growth of an AWS Certified Solutions Architect - Associate?
Answer: Continuous education and keeping up with the latest advancements in AWS
technology is crucial for the success and growth of an AWS Certified
Solutions Architect - Associate. The AWS platform is constantly
evolving, and certified professionals must keep learning to stay effective and relevant.
Customers Feedback
What our clients say about SAA-C03 Practice Questions
Emi
Apr 19, 2024
Hi Guys I am pleased to inform you that I passed my SAA-C03 exam on the first try thanks to these great exam dumps!
Emily Smith
Apr 18, 2024
I found the SAA-C03 exam to be a comprehensive assessment of my AWS knowledge. The real-world scenarios and practical questions helped me to see how my skills could be applied in a real-world setting. Today i passed my AWS Certified Solutions Architect - Associate (SAA-C03) exam thanks to salesforcexamdumps.com if anyone want to get exam information you can get from here. https://d1.awsstatic.com/training-and-certification/docs-sa-assoc/AWS-Certified-Solutions-Architect-Associate_Exam-Guide.pdf
Sophia Kim
Apr 18, 2024
I appreciated the format of the SAA-C03 exam dumps, with a mix of multiple-choice and hands-on questions. It was a great way to test both my technical knowledge and practical skills. I got PDF + Exam Engine package and i never found such material before.
Youssef Abdelhakim
Apr 17, 2024
These questions are helpful for passing the exam, but study the material in more depth if you truly want to learn it.
Maria Lopez
Apr 17, 2024
I received my SAA-C03 exam results immediately after completing it and was pleasantly surprised with a 92% mark. Truly amazing!
Rachel Chen
Apr 16, 2024
I thought the SAA-C03 exam was well-structured and gave a good representation of the skills and knowledge necessary to be a successful AWS Solutions Architect. The questions were challenging, but not impossible, which I felt was a good balance.
Ji-hyun
Apr 16, 2024
Salesforcexamdumps.com Study Material and questions are extremely informative and were a huge help to me. I got 90% marks.
Michael Brown
Apr 15, 2024
The SAA-C03 exam was a great way to measure my growth as an AWS Solutions Architect. I was pleased to see that all the hard work I put into studying paid off, as I was able to pass the exam on my first try.
Liam O'Brien
Apr 15, 2024
Compared to other websites, this one is much more affordable and provides the same questions and answers. I received a fantastic score of 90%.
I was nervous about taking the SAA-C03 exam, but after using the practice exams and study material provided my salesforcexamdumps, I felt well-prepared. The questions were a good mix of technical and practical, and I felt confident in my ability to answer them. Overall, it was a great experience and I'm happy to have passed!
The exam consisted of 65 questions and 59 questions from this study material. I achieved a mark of 90% on the test. Good luck to those taking the exam!
Petrova
Apr 12, 2024
These SAA-C03 Practice tests feel like real exams! They are very accurate and I highly recommend them.
Henrik Bjornsen
Apr 12, 2024
I am delighted to recommend this website to my friends. I personally used it to prepare for my SAA-C03 exam, and I can attest that the questions and answers were 100% accurate.
Sarah Johnson
Apr 11, 2024
I recently took the SAA-C03 exam and I'm happy with the report that I passed my AWS Certified Solutions Architect - Associate (SAA-C03) Exam on my first attempt! The questions on the exam were similar to the ones I practiced with through my study materials provided by salesforcexamdumps, which helped me feel confident and prepared.
Muhammad Talha
Apr 11, 2024
I took the SAA-C03 exam after completing the AWS Solutions Architect Associate Dumps preparation, and I found it to be a natural progression in terms of difficulty. The questions were challenging, but they accurately reflected the skills and knowledge necessary for the role of a Solutions Architect. Today i passed my AWS Certified Solutions Architect - Associate (SAA-C03) Exam with 98% marks.
Khan
Apr 10, 2024
I am thrilled to have discovered Salesforcexamdumps! It's amazing how easy it is to read, understand, and study each exam section, taking detailed notes. Thank you so much!
Amelia Collins
Apr 10, 2024
These exam dumps are worth every penny I spent. I passed the SAA-C03 exam with flying colors thanks to these questions. Thanks Salesforcexamdumps.com.
Leave a comment
Your email address will not be published. Required fields are marked *