A company is building a new furniture inventory application. The company has deployed the application on a fleet of Amazon EC2 instances across multiple Availability Zones. The EC2 instances run behind an Application Load Balancer (ALB) in their VPC. A solutions architect has observed that incoming traffic seems to favor one EC2 instance, resulting in latency for some requests. What should the solutions architect do to resolve this issue?
A. Disable session affinity (sticky sessions) on the ALB.
B. Replace the ALB with a Network Load Balancer.
C. Increase the number of EC2 instances in each Availability Zone.
D. Adjust the frequency of the health checks on the ALB's target group.
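For context on option A, disabling stickiness is a single target-group attribute change. A minimal sketch of the request parameters (the target group ARN is a placeholder) as they would be passed to the ELBv2 `ModifyTargetGroupAttributes` API, for example via a boto3 `elbv2` client:

```python
# Request parameters for the ELBv2 ModifyTargetGroupAttributes API.
# The target group ARN below is a placeholder, not a real resource.
disable_stickiness = {
    "TargetGroupArn": (
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "targetgroup/app-tg/0123456789abcdef"
    ),
    "Attributes": [
        # With stickiness off, the ALB distributes requests across all
        # healthy targets instead of pinning each client to one instance.
        {"Key": "stickiness.enabled", "Value": "false"},
    ],
}
```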
A company is designing a new internal web application in the AWS Cloud. The new application must securely retrieve and store multiple employee usernames and passwords from an AWS managed service. Which solution will meet these requirements with the LEAST operational overhead?
A. Store the employee credentials in AWS Systems Manager Parameter Store. Use AWS CloudFormation and the BatchGetSecretValue API to retrieve the usernames and passwords from Parameter Store.
B. Store the employee credentials in AWS Secrets Manager. Use AWS CloudFormation and AWS Batch with the BatchGetSecretValue API to retrieve the usernames and passwords from Secrets Manager.
C. Store the employee credentials in AWS Systems Manager Parameter Store. Use AWS CloudFormation and AWS Batch with the BatchGetSecretValue API to retrieve the usernames and passwords from Parameter Store.
D. Store the employee credentials in AWS Secrets Manager. Use AWS CloudFormation and the BatchGetSecretValue API to retrieve the usernames and passwords from Secrets Manager.
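As background on option D, `BatchGetSecretValue` retrieves several Secrets Manager secrets in one call, by explicit name list or by filter. A sketch of the request shape and of parsing a returned secret string; the secret names and values here are hypothetical:

```python
import json

# Request parameters for the Secrets Manager BatchGetSecretValue API
# (e.g. a boto3 secretsmanager client). Secret names are hypothetical.
batch_request = {
    "SecretIdList": [
        "employees/alice/credentials",
        "employees/bob/credentials",
    ],
    "MaxResults": 20,
}

# The response's SecretValues entries each carry a SecretString, which
# conventionally holds a JSON username/password pair, e.g.:
example_secret_string = json.dumps({"username": "alice", "password": "example"})
parsed = json.loads(example_secret_string)
```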
A law firm needs to make hundreds of files readable for the general public. The law firm must prevent members of the public from modifying or deleting the files before a specified future date. Which solution will meet these requirements MOST securely?
A. Upload the files to an Amazon S3 bucket that is configured for static website hosting.
Grant read-only IAM permissions to any AWS principals that access the S3 bucket until the
specified date.
B. Create a new Amazon S3 bucket. Enable S3 Versioning. Use S3 Object Lock and set a retention period based on the specified date. Create an Amazon CloudFront distribution to serve content from the bucket. Use an S3 bucket policy to restrict access to the CloudFront origin access control (OAC).
C. Create a new Amazon S3 bucket. Enable S3 Versioning. Configure an event trigger to run an AWS Lambda function if a user modifies or deletes an object. Configure the Lambda function to replace the modified or deleted objects with the original versions of the objects from a private S3 bucket.
D. Upload the files to an Amazon S3 bucket that is configured for static website hosting. Select the folder that contains the files. Use S3 Object Lock with a retention period based on the specified date. Grant read-only IAM permissions to any AWS principals that access the S3 bucket.
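As context for option B, S3 Object Lock in compliance mode prevents any principal, including the root user, from overwriting or deleting an object version until the retention date passes. A sketch of a per-object retention setting (bucket name, key, and date are placeholders) as it would be passed to the S3 `PutObjectRetention` API:

```python
from datetime import datetime, timezone

# Parameters for the S3 PutObjectRetention API. Bucket, key, and date
# are placeholders for illustration only.
retention_request = {
    "Bucket": "public-legal-files",
    "Key": "filings/case-001.pdf",
    "Retention": {
        # COMPLIANCE mode cannot be shortened or removed by any user
        # until RetainUntilDate, matching the "specified future date".
        "Mode": "COMPLIANCE",
        "RetainUntilDate": datetime(2030, 1, 1, tzinfo=timezone.utc),
    },
}
```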
A company is designing an application on AWS that processes sensitive data. The application stores and processes financial data for multiple customers. To meet compliance requirements, the data for each customer must be encrypted separately at rest by using a secure, centralized key management solution. The company wants to use AWS Key Management Service (AWS KMS) to implement encryption. Which solution will meet these requirements with the LEAST operational overhead?
A. Generate a unique encryption key for each customer. Store the keys in an Amazon S3
bucket. Enable server-side encryption.
B. Deploy a hardware security appliance in the AWS environment that securely stores
customer-provided encryption keys. Integrate the security appliance with AWS KMS to
encrypt the sensitive data in the application.
C. Create a single AWS KMS key to encrypt all sensitive data across the application.
D. Create separate AWS KMS keys for each customer's data that have granular access
control and logging enabled.
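The granular access control in option D typically lives in each customer key's key policy, with every use logged by CloudTrail. A sketch of a key policy that restricts one customer's key to that customer's application role; the account ID and role names are placeholders:

```python
# KMS key policy for a single customer's key (would be supplied as the
# Policy parameter of the KMS CreateKey API). ARNs are placeholders.
key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Key administration, separated from data-plane use.
            "Sid": "AllowKeyAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:role/KmsAdmin"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {
            # Only customer A's application role may use this key,
            # keeping each customer's data cryptographically separated.
            "Sid": "AllowCustomerAUse",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:role/CustomerA-App"},
            "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey"],
            "Resource": "*",
        },
    ],
}
```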
A company needs to give a globally distributed development team secure access to the company's AWS resources in a way that complies with security policies. The company currently uses an on-premises Active Directory for internal authentication. The company uses AWS Organizations to manage multiple AWS accounts that support multiple projects. The company needs a solution to integrate with the existing infrastructure to provide centralized identity management and access control. Which solution will meet these requirements with the LEAST operational overhead?
A. Set up AWS Directory Service to create an AWS Managed Microsoft Active Directory on AWS. Establish a trust relationship with the on-premises Active Directory. Use IAM roles that are assigned to Active Directory groups to access AWS resources within the company's AWS accounts.
B. Create an IAM user for each developer. Manually manage permissions for each IAM user based on each user's involvement with each project. Enforce multi-factor authentication (MFA) as an additional layer of security.
C. Use AD Connector in AWS Directory Service to connect to the on-premises Active Directory. Integrate AD Connector with AWS IAM Identity Center. Configure permission sets to give each AD group access to specific AWS accounts and resources.
D. Use Amazon Cognito to deploy an identity federation solution. Integrate the identity
federation solution with the on-premises Active Directory. Use Amazon Cognito to provide
access tokens for developers to access AWS accounts and resources.
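In the option C pattern, access is granted by assigning a permission set to an AD group (synced through AD Connector) for each target account. A sketch of the parameters for the IAM Identity Center `CreateAccountAssignment` API (`sso-admin` in boto3); all ARNs, IDs, and the group GUID are placeholders:

```python
# Parameters for the IAM Identity Center CreateAccountAssignment API,
# mapping an AD group to a permission set in one member account.
# Every ARN/ID below is a placeholder.
assignment = {
    "InstanceArn": "arn:aws:sso:::instance/ssoins-0123456789abcdef",
    "TargetId": "111122223333",      # member AWS account ID
    "TargetType": "AWS_ACCOUNT",
    "PermissionSetArn": (
        "arn:aws:sso:::permissionSet/ssoins-0123456789abcdef/"
        "ps-0123456789abcdef"
    ),
    # The synced AD group's identity-store GUID (placeholder value).
    "PrincipalType": "GROUP",
    "PrincipalId": "a1b2c3d4-0000-0000-0000-000000000000",
}
```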
A company deploys its applications on Amazon Elastic Kubernetes Service (Amazon EKS) behind an Application Load Balancer in an AWS Region. The application needs to store data in a PostgreSQL database engine. The company wants the data in the database to be highly available. The company also needs increased capacity for read workloads. Which solution will meet these requirements with the MOST operational efficiency?
A. Create an Amazon DynamoDB database table configured with global tables.
B. Create an Amazon RDS database with a Multi-AZ deployment.
C. Create an Amazon RDS database with Multi-AZ DB cluster deployment.
D. Create an Amazon RDS database configured with cross-Region read replicas.
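As background on option C, a Multi-AZ DB cluster gives one writer plus two readable standby instances behind a reader endpoint, covering both high availability and extra read capacity. A sketch of the RDS `CreateDBCluster` parameters that select this deployment; identifiers, sizes, and the version are illustrative placeholders:

```python
# Parameters for the RDS CreateDBCluster API. Supplying an instance
# class, storage, and IOPS at the cluster level is what selects a
# Multi-AZ DB cluster (writer + two readable standbys). All values
# below are placeholders.
create_cluster = {
    "DBClusterIdentifier": "inventory-postgres",
    "Engine": "postgres",
    "EngineVersion": "15.4",                 # example version
    "DBClusterInstanceClass": "db.m6gd.large",
    "AllocatedStorage": 100,
    "StorageType": "io1",
    "Iops": 3000,
    "MasterUsername": "appadmin",
    # In practice, manage this credential in Secrets Manager.
    "MasterUserPassword": "placeholder-password",
}
```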
A company has an application that runs on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster on Amazon EC2 instances. The application has a UI that uses Amazon DynamoDB and data services that use Amazon S3 as part of the application deployment. The company must ensure that the EKS Pods for the UI can access only Amazon DynamoDB and that the EKS Pods for the data services can access only Amazon S3. The company uses AWS Identity and Access Management (IAM). Which solution meets these requirements?
A. Create separate IAM policies for Amazon S3 and DynamoDB access with the required permissions. Attach both IAM policies to the EC2 instance profile. Use role-based access control (RBAC) to control access to Amazon S3 or DynamoDB for the respective EKS Pods.
B. Create separate IAM policies for Amazon S3 and DynamoDB access with the required permissions. Attach the Amazon S3 IAM policy directly to the EKS Pods for the data services and the DynamoDB policy to the EKS Pods for the UI.
C. Create separate Kubernetes service accounts for the UI and data services to assume an IAM role. Attach the AmazonS3FullAccess policy to the data services account and the AmazonDynamoDBFullAccess policy to the UI service account.
D. Create separate Kubernetes service accounts for the UI and data services to assume an IAM role. Use IAM Roles for Service Accounts (IRSA) to provide access to the EKS Pods for the UI to DynamoDB and to the EKS Pods for the data services to Amazon S3.
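With IRSA (option D), each IAM role trusts the cluster's OIDC provider and is scoped to one Kubernetes service account, so each set of Pods gets only its own permissions. A sketch of the trust policy for the data-services role; the OIDC provider ID, account ID, namespace, and service account name are placeholders:

```python
# IRSA trust policy: the EKS cluster's OIDC provider federates one
# Kubernetes service account into this IAM role. All IDs below are
# placeholders.
oidc = "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B7"
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": f"arn:aws:iam::123456789012:oidc-provider/{oidc}"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    # Only the data-services service account in the
                    # "app" namespace may assume this role.
                    f"{oidc}:sub": "system:serviceaccount:app:data-services",
                    f"{oidc}:aud": "sts.amazonaws.com",
                }
            },
        }
    ],
}
```

The UI Pods would use a second role with the same trust shape but a different `sub` value and a DynamoDB-only permissions policy.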
A company stores data in Amazon S3. According to regulations, the data must not contain personally identifiable information (PII). The company recently discovered that S3 buckets have some objects that contain PII. The company needs to automatically detect PII in S3 buckets and to notify the company's security team. Which solution will meet these requirements?
A. Use Amazon Macie. Create an Amazon EventBridge rule to filter the SensitiveData
event type from Macie findings and to send an Amazon Simple Notification Service
(Amazon SNS) notification to the security team.
B. Use Amazon GuardDuty. Create an Amazon EventBridge rule to filter the CRITICAL
event type from GuardDuty findings and to send an Amazon Simple Notification Service
(Amazon SNS) notification to the security team.
C. Use Amazon Macie. Create an Amazon EventBridge rule to filter the
SensitiveData:S3Object/Personal event type from Macie findings and to send an Amazon
Simple Queue Service (Amazon SQS) notification to the security team.
D. Use Amazon GuardDuty. Create an Amazon EventBridge rule to filter the CRITICAL
event type from GuardDuty findings and to send an Amazon Simple Queue Service
(Amazon SQS) notification to the security team.
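For option A's wiring, the EventBridge rule filters Macie findings by type before forwarding them to an SNS topic for the security team. A sketch of an event pattern that matches Macie's sensitive-data finding types:

```python
# EventBridge event pattern matching Macie sensitive-data findings,
# to be attached to a rule whose target is the security team's SNS topic.
event_pattern = {
    "source": ["aws.macie"],
    "detail-type": ["Macie Finding"],
    "detail": {
        # Macie reports PII discoveries under SensitiveData finding types,
        # e.g. SensitiveData:S3Object/Personal; prefix matching covers them.
        "type": [{"prefix": "SensitiveData"}],
    },
}
```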
A company stores 5 PB of archived data on physical tapes. The company needs to preserve the data for another 10 years. The data center that stores the tapes has a 10 Gbps Direct Connect connection to an AWS Region. The company wants to migrate the data to AWS within the next 6 months. Which solution will meet these requirements?
A. Read the data from the tapes on premises. Use local storage to stage the data. Use
AWS DataSync to migrate the data to Amazon S3 Glacier Flexible Retrieval storage.
B. Use an on-premises backup application to read the data from the tapes. Use the backup application to write directly to Amazon S3 Glacier Deep Archive storage.
C. Order multiple AWS Snowball Edge devices. Copy the physical tapes to virtual tapes on
the Snowball Edge devices. Ship the Snowball Edge devices to AWS. Create an S3
Lifecycle policy to move the tapes to Amazon S3 Glacier Instant Retrieval storage.
D. Configure an on-premises AWS Storage Gateway Tape Gateway. Create virtual tapes in
the AWS Cloud. Use backup software to copy the physical tapes to the virtual tapes. Move
the virtual tapes to Amazon S3 Glacier Deep Archive storage.
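In the option D workflow, the Tape Gateway exposes virtual tapes that the existing backup software writes to over the Direct Connect link. A sketch of the Storage Gateway `CreateTapes` parameters for provisioning a batch of virtual tapes; the gateway ARN, sizes, and barcode prefix are placeholders:

```python
# Parameters for the Storage Gateway CreateTapes API. The gateway ARN
# and other values are placeholders for illustration.
create_tapes = {
    "GatewayARN": (
        "arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12345678"
    ),
    "TapeSizeInBytes": 5 * 1024**4,   # 5 TiB per virtual tape
    "ClientToken": "migration-batch-001",
    "NumTapesToCreate": 10,
    "TapeBarcodePrefix": "MIG",
    # Archived tapes land in the Deep Archive pool for the lowest
    # long-term storage cost over the 10-year retention period.
    "PoolId": "DEEP_ARCHIVE",
}
```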