A retail company has several businesses. The IT team for each business manages its own AWS account. Each team account is part of an organization in AWS Organizations. Each team monitors its product inventory levels in an Amazon DynamoDB table in the team's own AWS account. The company is deploying a central inventory reporting application into a shared AWS account. The application must be able to read items from all the teams' DynamoDB tables. Which authentication option will meet these requirements MOST securely?
A. Integrate DynamoDB with AWS Secrets Manager in the inventory application account. Configure the application to use the correct secret from Secrets Manager to authenticate and read the DynamoDB table. Schedule secret rotation for every 30 days.
B. In every business account, create an IAM user that has programmatic access. Configure the application to use the correct IAM user access key ID and secret access key to authenticate and read the DynamoDB table. Manually rotate IAM access keys every 30 days.
C. In every business account, create an IAM role named BU_ROLE with a policy that gives the role access to the DynamoDB table and a trust policy that trusts a specific role in the inventory application account. In the inventory account, create a role named APP_ROLE that allows access to the STS AssumeRole API operation. Configure the application to use APP_ROLE and assume the cross-account role BU_ROLE to read the DynamoDB table.
D. Integrate DynamoDB with AWS Certificate Manager (ACM). Generate identity certificates to authenticate DynamoDB. Configure the application to use the correct certificate to authenticate and read the DynamoDB table.
Answer: C
Explanation: This solution meets the requirements most securely because it uses IAM roles and the STS AssumeRole API operation to authenticate and authorize the inventory application to access the DynamoDB tables in different accounts. IAM roles are more secure than IAM users or certificates because they do not require long-term credentials or passwords. Instead, IAM roles provide temporary security credentials that are automatically rotated and can be configured with a limited duration. The STS AssumeRole API operation lets you request temporary credentials for a role that you are allowed to assume. By using this operation, you can delegate access to resources in different AWS accounts that you own or that are owned by third parties. The trust policy of the role defines which entities can assume the role, and the permissions policy of the role defines which actions can be performed on the resources. This solution avoids hard-coding credentials or certificates in the inventory application and avoids storing them in Secrets Manager or ACM. You can also leverage the built-in security features of IAM and STS, such as MFA, access logging, and policy conditions.
References:
IAM Roles
STS AssumeRole
Tutorial: Delegate Access Across AWS Accounts Using IAM Roles
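The cross-account pattern in option C can be sketched as follows. The account IDs, role names, and table ARN below are hypothetical placeholders; the policy documents follow the standard IAM policy grammar, and the commented boto3 call shows how the application side would assume BU_ROLE.

```python
import json

# Hypothetical account IDs and resources for illustration only.
APP_ACCOUNT_ID = "111111111111"  # shared inventory application account
TABLE_ARN = "arn:aws:dynamodb:us-east-1:222222222222:table/Inventory"


def bu_role_trust_policy(app_account_id: str) -> dict:
    """Trust policy for BU_ROLE: only APP_ROLE in the app account may assume it."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{app_account_id}:role/APP_ROLE"},
            "Action": "sts:AssumeRole",
        }],
    }


def bu_role_permissions_policy(table_arn: str) -> dict:
    """Permissions policy for BU_ROLE: read-only access to one DynamoDB table."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:Scan"],
            "Resource": table_arn,
        }],
    }


# The application (running as APP_ROLE) would then obtain temporary credentials:
#   import boto3
#   creds = boto3.client("sts").assume_role(
#       RoleArn="arn:aws:iam::222222222222:role/BU_ROLE",
#       RoleSessionName="inventory-report",
#   )["Credentials"]

if __name__ == "__main__":
    print(json.dumps(bu_role_trust_policy(APP_ACCOUNT_ID), indent=2))
```

Because the credentials returned by AssumeRole expire automatically, nothing long-lived needs to be stored or rotated in the application account.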
Question # 322
A company built an application with Docker containers and needs to run the application in the AWS Cloud. The company wants to use a managed service to host the application. The solution must scale in and out appropriately according to demand on the individual container services. The solution also must not result in additional operational overhead or infrastructure to manage. Which solutions will meet these requirements? (Select TWO)
A. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate.
B. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate.
C. Provision an Amazon API Gateway API. Connect the API to AWS Lambda to run the containers.
D. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes.
E. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker nodes.
Answer: A,B
Explanation: These options are the best solutions because they allow the company to run the Docker-based application in the AWS Cloud using a managed service that scales automatically and does not require any infrastructure to manage. With AWS Fargate, the company can launch and run containers without having to provision, configure, or scale clusters of EC2 instances. Fargate allocates the right amount of compute resources for each container and scales them up or down as needed. With Amazon ECS or Amazon EKS, the company can choose the container orchestration platform that suits its needs. Amazon ECS is a fully managed service that integrates with other AWS services and simplifies the deployment and management of containers. Amazon EKS is a managed service that runs Kubernetes on AWS and provides compatibility with existing Kubernetes tools and plugins.
C. Provision an Amazon API Gateway API. Connect the API to AWS Lambda to run the containers. This option is not feasible because AWS Lambda does not support running Docker containers directly. Lambda functions are executed in a sandboxed environment that is isolated from other functions and resources. To run Docker containers on Lambda, the company would need a custom runtime or a wrapper library that emulates the Docker API, which can introduce additional complexity and overhead.
D. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes. This option is not optimal because it requires the company to manage the EC2 instances that host the containers. The company would need to provision, configure, scale, patch, and monitor the EC2 instances, which can increase the operational overhead and infrastructure costs.
E. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker nodes. This option is not ideal for the same reason: the company would need to provision, configure, scale, patch, and monitor the EC2 instances that host the containers, which can increase the operational overhead and infrastructure costs.
References:
1. AWS Fargate - Amazon Web Services
2. Amazon Elastic Container Service - Amazon Web Services
3. Amazon Elastic Kubernetes Service - Amazon Web Services
4. AWS Lambda FAQs - Amazon Web Services
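A minimal sketch of what an ECS-on-Fargate task definition for option A might look like, built as a plain dict. The family name, image URI, and sizes are hypothetical; a real deployment would pass this body to boto3's `ecs.register_task_definition`.

```python
def fargate_task_definition(family: str, image: str,
                            cpu: str = "256", memory: str = "512") -> dict:
    """Build a Fargate-compatible ECS task definition request body."""
    return {
        "family": family,
        "requiresCompatibilities": ["FARGATE"],  # run without managing EC2 nodes
        "networkMode": "awsvpc",                 # required network mode for Fargate
        "cpu": cpu,                              # task-level CPU units (string form)
        "memory": memory,                        # task-level memory in MiB
        "containerDefinitions": [{
            "name": f"{family}-container",
            "image": image,
            "essential": True,
        }],
    }


# Hypothetical ECR image URI for illustration.
task_def = fargate_task_definition(
    "inventory-app",
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/inventory:latest",
)
```

An ECS service built from this task definition can then scale in and out per container service via service auto scaling, which is the scaling behavior the question asks for.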
Question # 323
A company uses Amazon S3 as its data lake. The company has a new partner that must use SFTP to upload data files. A solutions architect needs to implement a highly available SFTP solution that minimizes operational overhead. Which solution will meet these requirements?
A. Use AWS Transfer Family to configure an SFTP-enabled server with a publicly accessible endpoint. Choose the S3 data lake as the destination.
B. Use Amazon S3 File Gateway as an SFTP server. Expose the S3 File Gateway endpoint URL to the new partner. Share the S3 File Gateway endpoint with the new partner.
C. Launch an Amazon EC2 instance in a private subnet in a VPC. Instruct the new partner to upload files to the EC2 instance by using a VPN. Run a cron job script on the EC2 instance to upload files to the S3 data lake.
D. Launch Amazon EC2 instances in a private subnet in a VPC. Place a Network Load Balancer (NLB) in front of the EC2 instances. Create an SFTP listener port for the NLB. Share the NLB hostname with the new partner. Run a cron job script on the EC2 instances to upload files to the S3 data lake.
Answer: A
Explanation: This option is the most cost-effective and simple way to enable SFTP access to the S3 data lake. AWS Transfer Family is a fully managed service that supports secure file transfers over the SFTP, FTPS, and FTP protocols. You can create an SFTP-enabled server with a public endpoint and associate it with your S3 bucket. You can also use AWS Identity and Access Management (IAM) roles and policies to control access to your S3 data lake. The service scales automatically to handle any volume of file transfers and provides high availability and durability. You do not need to provision, manage, or patch any servers or load balancers.
Option B is not correct because Amazon S3 File Gateway is not an SFTP server. It is a hybrid cloud storage service that provides a local file system interface to S3. You can use it to store and retrieve files as objects in S3 using standard file protocols such as NFS and SMB. However, it does not support the SFTP protocol, and it requires deploying a file gateway appliance on premises or on EC2.
Option C is not cost-effective or scalable because it requires launching and managing an EC2 instance in a private subnet and setting up a VPN connection for the new partner. This would incur additional costs for the EC2 instance, the VPN connection, and the data transfer. It would also introduce complexity and security risks to the solution. Moreover, it would require running a cron job script on the EC2 instance to upload files to the S3 data lake, which is not efficient or reliable.
Option D is not cost-effective or scalable because it requires launching and managing multiple EC2 instances in a private subnet and placing an NLB in front of them. This would incur additional costs for the EC2 instances, the NLB, and the data transfer. It would also introduce complexity and security risks to the solution. Moreover, it would require running a cron job script on the EC2 instances to upload files to the S3 data lake, which is not efficient or reliable.
References:
What Is AWS Transfer Family?
What Is Amazon S3 File Gateway?
What Is Amazon EC2?
What Is Amazon Virtual Private Cloud?
What Is a Network Load Balancer?
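Under option A, standing up the SFTP endpoint is a couple of managed-service calls. The sketch below builds the request parameters as plain dicts; the IAM role ARN, bucket path, and user name are hypothetical placeholders, and a real setup would pass these to boto3's `transfer.create_server` and `transfer.create_user`.

```python
def sftp_server_params() -> dict:
    """Request parameters for a public, SFTP-enabled Transfer Family server."""
    return {
        "Protocols": ["SFTP"],
        "EndpointType": "PUBLIC",                    # internet-facing for the partner
        "IdentityProviderType": "SERVICE_MANAGED",   # users managed by Transfer Family
    }


def sftp_user_params(server_id: str) -> dict:
    """Parameters for a partner user whose home directory is the S3 data lake."""
    return {
        "ServerId": server_id,
        "UserName": "partner-upload",
        # Hypothetical role granting S3 access on the partner's behalf.
        "Role": "arn:aws:iam::111111111111:role/TransferS3AccessRole",
        # Hypothetical bucket/prefix used as the data lake landing zone.
        "HomeDirectory": "/data-lake-bucket/partner-uploads",
    }
```

Files the partner uploads over SFTP land directly in the S3 prefix, with no servers, cron jobs, or load balancers to operate.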
Question # 324
A company hosts an application used to upload files to an Amazon S3 bucket. Once uploaded, the files are processed to extract metadata, which takes less than 5 seconds. The volume and frequency of the uploads vary from a few files each hour to hundreds of concurrent uploads. The company has asked a solutions architect to design a cost-effective architecture that will meet these requirements. What should the solutions architect recommend?
A. Configure AWS CloudTrail trails to log S3 API calls. Use AWS AppSync to process the files.
B. Configure an object-created event notification within the S3 bucket to invoke an AWS Lambda function to process the files.
C. Configure Amazon Kinesis Data Streams to process and send data to Amazon S3. Invoke an AWS Lambda function to process the files.
D. Configure an Amazon Simple Notification Service (Amazon SNS) topic to process the files uploaded to Amazon S3. Invoke an AWS Lambda function to process the files.
Answer: B
Explanation: This option is the most cost-effective and scalable way to process the files uploaded to S3. An object-created event notification invokes the Lambda function only when a new file arrives, so the solution scales seamlessly from a few files each hour to hundreds of concurrent uploads with no idle resources to pay for. The other options misuse their services: AWS CloudTrail is used to log API calls, not to trigger processing based on them. AWS AppSync is a service for building GraphQL APIs, not for processing files. Amazon Kinesis Data Streams is used to ingest and process streaming data, not to send data to S3. Amazon SNS is a pub/sub service that can be used to notify subscribers of events, not to process files.
References:
Using AWS Lambda with Amazon S3
AWS CloudTrail FAQs
What Is AWS AppSync?
What Is Amazon Kinesis Data Streams?
What Is Amazon Simple Notification Service?
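A minimal sketch of the Lambda side of option B. The handler parses the documented S3 object-created notification shape; the `extract_metadata` call named in the comment is a hypothetical helper standing in for the sub-5-second processing step.

```python
from urllib.parse import unquote_plus


def handler(event: dict, context=None) -> list:
    """Extract (bucket, key) pairs from an S3 object-created notification event."""
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 notifications.
        key = unquote_plus(record["s3"]["object"]["key"])
        # In the real function the metadata extraction (< 5 s) would run here,
        # e.g. extract_metadata(bucket, key)  # hypothetical helper
        results.append((bucket, key))
    return results


# Abbreviated sample event in the S3 notification format.
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "upload-bucket"},
                "object": {"key": "reports/q1+summary.csv"}}}
    ]
}
```

Lambda runs one invocation per notification, so concurrency tracks the upload rate automatically.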
Question # 325
A company runs analytics software on Amazon EC2 instances. The software accepts job requests from users to process data that has been uploaded to Amazon S3. Users report that some submitted data is not being processed. Amazon CloudWatch reveals that the EC2 instances have a consistent CPU utilization at or near 100%. The company wants to improve system performance and scale the system based on user load. What should a solutions architect do to meet these requirements?
A. Create a copy of the instance. Place all instances behind an Application Load Balancer.
B. Create an S3 VPC endpoint for Amazon S3. Update the software to reference the endpoint.
C. Stop the EC2 instances. Modify the instance type to one with a more powerful CPU and more memory. Restart the instances.
D. Route incoming requests to Amazon Simple Queue Service (Amazon SQS). Configure an EC2 Auto Scaling group based on queue size. Update the software to read from the queue.
Answer: D
Explanation: This option is the best solution because it decouples the analytics software from the user requests and scales the EC2 instances dynamically based on demand. By using Amazon SQS, the company can create a queue that stores the user requests and acts as a buffer between the users and the analytics software. This way, the software can process the requests at its own pace without losing any data or overloading the EC2 instances. By using EC2 Auto Scaling, the company can create an Auto Scaling group that launches or terminates EC2 instances automatically based on the size of the queue. This way, the company can ensure that there are enough instances to handle the load and optimize the cost and performance of the system. By updating the software to read from the queue, the analytics software can consume the requests from the queue and process the data from Amazon S3.
A. Create a copy of the instance. Place all instances behind an Application Load Balancer. This option is not optimal because it does not address the root cause of the problem, which is the high CPU utilization of the EC2 instances. An Application Load Balancer can distribute the incoming traffic across multiple instances, but it cannot scale the instances based on the load or reduce the processing time of the analytics software. Moreover, this option can incur additional costs for the load balancer and the extra instances.
B. Create an S3 VPC endpoint for Amazon S3. Update the software to reference the endpoint. This option is not effective because it does not solve the issue of the high CPU utilization of the EC2 instances. An S3 VPC endpoint can enable the EC2 instances to access Amazon S3 without going through the internet, which can improve network performance and security. However, it cannot reduce the processing time of the analytics software or scale the instances based on the load.
C. Stop the EC2 instances. Modify the instance type to one with a more powerful CPU and more memory. Restart the instances. This option is not scalable because it does not account for the variability of the user load. Changing the instance type to a more powerful one can improve the performance of the analytics software, but it cannot adjust the number of instances based on demand. Moreover, this option can increase the cost of the system and cause downtime during the instance modification.
References:
1. Using Amazon SQS queues with Amazon EC2 Auto Scaling - Amazon EC2 Auto Scaling
2. Tutorial: Set up a scaled and load-balanced application - Amazon EC2 Auto Scaling
3. Amazon EC2 Auto Scaling FAQs
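The "Auto Scaling group based on queue size" part of option D is commonly implemented as backlog-per-instance scaling: divide the queue depth by the instance count and scale until each instance's share of the backlog meets a target. A minimal sketch of that arithmetic, where the target backlog and group size limits are hypothetical tuning choices:

```python
import math


def desired_capacity(queue_depth: int, target_backlog_per_instance: int,
                     min_size: int = 1, max_size: int = 20) -> int:
    """Number of instances needed so each handles at most the target backlog."""
    # Round up: a partial backlog still needs a whole instance.
    needed = math.ceil(queue_depth / target_backlog_per_instance)
    # Clamp to the Auto Scaling group's configured bounds.
    return max(min_size, min(max_size, needed))
```

For example, with 950 queued jobs and a target of 100 jobs per instance, the group should scale to 10 instances; an empty queue falls back to the minimum size.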
Question # 326
A company is deploying an application that processes streaming data in near-real time. The company plans to use Amazon EC2 instances for the workload. The network architecture must be configurable to provide the lowest possible latency between nodes. Which combination of network solutions will meet these requirements? (Select TWO)
A. Enable and configure enhanced networking on each EC2 instance.
B. Group the EC2 instances in separate accounts.
C. Run the EC2 instances in a cluster placement group.
D. Attach multiple elastic network interfaces to each EC2 instance.
E. Use Amazon Elastic Block Store (Amazon EBS) optimized instance types.
Answer: A,C
Explanation: These options are the most suitable ways to configure the network architecture to provide the lowest possible latency between nodes. Option A enables and configures enhanced networking on each EC2 instance, a feature that improves the network performance of the instance by providing higher bandwidth, lower latency, and lower jitter. Enhanced networking uses single root I/O virtualization (SR-IOV) or Elastic Fabric Adapter (EFA) to provide direct access to the network hardware. You can enable and configure enhanced networking by choosing a supported instance type and a compatible operating system, and installing the required drivers. Option C runs the EC2 instances in a cluster placement group, a logical grouping of instances within a single Availability Zone that are placed close together on the same underlying hardware. Cluster placement groups provide the lowest network latency and the highest network throughput among the placement group options. You can run the EC2 instances in a cluster placement group by creating a placement group and launching the instances into it.
Option B is not suitable because grouping the EC2 instances in separate accounts does not provide the lowest possible latency between nodes. Separate accounts are used to isolate and organize resources for different purposes, such as security, billing, or compliance. However, they do not affect the network performance or proximity of the instances. Moreover, grouping the EC2 instances in separate accounts would incur additional costs and complexity, and it would require setting up cross-account networking and permissions.
Option D is not suitable because attaching multiple elastic network interfaces to each EC2 instance does not provide the lowest possible latency between nodes. Elastic network interfaces are virtual network interfaces that can be attached to EC2 instances to provide additional network capabilities, such as multiple IP addresses, multiple subnets, or enhanced security. However, they do not affect the network performance or proximity of the instances. Moreover, attaching multiple elastic network interfaces to each EC2 instance would consume additional resources and limit the instance type choices.
Option E is not suitable because using Amazon EBS-optimized instance types does not provide the lowest possible latency between nodes. EBS-optimized instance types provide dedicated bandwidth for Amazon EBS volumes, which are block storage volumes that can be attached to EC2 instances. EBS optimization improves the performance and consistency of the EBS volumes, but it does not affect the network performance or proximity of the instances. Moreover, using EBS-optimized instance types would incur additional costs and may not be necessary for the streaming data workload.
References:
Enhanced networking on Linux
Placement groups
Elastic network interfaces
Amazon EBS-optimized instances
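Options A and C come together at launch time: pick an instance type with enhanced networking and launch into a cluster placement group. The sketch below builds the `run_instances` parameters as a plain dict; the AMI ID, instance type, and group name are hypothetical, and a real setup would first call boto3's `ec2.create_placement_group` with `Strategy="cluster"`.

```python
def cluster_launch_params(group_name: str, ami_id: str, count: int) -> dict:
    """run_instances parameters for low-latency nodes in one cluster placement group."""
    return {
        "ImageId": ami_id,
        # Hypothetical choice: a network-optimized type with ENA enhanced
        # networking built in (option A).
        "InstanceType": "c5n.9xlarge",
        "MinCount": count,
        "MaxCount": count,
        # Cluster placement group: same-AZ, rack-level proximity (option C).
        "Placement": {"GroupName": group_name},
    }


params = cluster_launch_params("streaming-cluster", "ami-0123456789abcdef0", 4)
```

Launching all nodes in one request, as above, also reduces the chance of insufficient-capacity errors that can occur when adding instances to a cluster placement group one at a time.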
Question # 327
A company runs a container application on a Kubernetes cluster in the company's data center. The application uses Advanced Message Queuing Protocol (AMQP) to communicate with a message queue. The data center cannot scale fast enough to meet the company's expanding business needs. The company wants to migrate the workloads to AWS. Which solution will meet these requirements with the LEAST operational overhead?
A. Migrate the container application to Amazon Elastic Container Service (Amazon ECS). Use Amazon Simple Queue Service (Amazon SQS) to retrieve the messages.
B. Migrate the container application to Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon MQ to retrieve the messages.
C. Use highly available Amazon EC2 instances to run the application. Use Amazon MQ to retrieve the messages.
D. Use AWS Lambda functions to run the application. Use Amazon Simple Queue Service (Amazon SQS) to retrieve the messages.
Answer: B
Explanation: This option is the best solution because it allows the company to migrate the container application to AWS with minimal changes and leverage managed services to run both the Kubernetes cluster and the message queue. By using Amazon EKS, the company can run the container application on a fully managed Kubernetes control plane that is compatible with the existing Kubernetes tools and plugins. Amazon EKS handles the provisioning, scaling, patching, and security of the Kubernetes cluster, reducing the operational overhead and complexity. By using Amazon MQ, the company can use a fully managed message broker service that supports AMQP and other popular messaging protocols. Amazon MQ handles the administration, maintenance, and scaling of the message broker, ensuring high availability, durability, and security of the messages.
A. Migrate the container application to Amazon Elastic Container Service (Amazon ECS). Use Amazon Simple Queue Service (Amazon SQS) to retrieve the messages. This option is not optimal because it requires the company to change the container orchestration platform from Kubernetes to ECS, which can introduce additional complexity and risk. Moreover, it requires the company to change the messaging protocol from AMQP to the SQS API, which can affect the application logic and performance. Amazon ECS and Amazon SQS are both fully managed services that simplify the deployment and management of containers and messages, but they may not be compatible with the existing application architecture and requirements.
C. Use highly available Amazon EC2 instances to run the application. Use Amazon MQ to retrieve the messages. This option is not ideal because it requires the company to manage the EC2 instances that host the container application. The company would need to provision, configure, scale, patch, and monitor the EC2 instances, which can increase the operational overhead and infrastructure costs. Moreover, the company would need to install and maintain the Kubernetes software on the EC2 instances, which adds further complexity and risk. Amazon MQ is a fully managed message broker service that supports AMQP and other popular messaging protocols, but it cannot compensate for the lack of a managed Kubernetes service.
D. Use AWS Lambda functions to run the application. Use Amazon Simple Queue Service (Amazon SQS) to retrieve the messages. This option is not feasible because AWS Lambda does not support running a long-lived container application directly. Lambda functions are executed in a sandboxed environment that is isolated from other functions and resources. To run the container application on Lambda, the company would need a custom runtime or a wrapper library that emulates the container API, which can introduce additional complexity and overhead. Moreover, Lambda functions have limitations in terms of available CPU, memory, and runtime duration, which may not suit the application's needs. Amazon SQS is a fully managed message queue service that supports asynchronous communication, but it does not support AMQP or other open messaging protocols.
References:
1. Amazon Elastic Kubernetes Service - Amazon Web Services
2. Amazon MQ - Amazon Web Services
3. Amazon Elastic Container Service - Amazon Web Services
4. AWS Lambda FAQs - Amazon Web Services
Question # 328
A company runs a real-time data ingestion solution on AWS. The solution consists of the most recent version of Amazon Managed Streaming for Apache Kafka (Amazon MSK). The solution is deployed in a VPC in private subnets across three Availability Zones. A solutions architect needs to redesign the data ingestion solution to be publicly available over the internet. The data in transit must also be encrypted. Which solution will meet these requirements with the MOST operational efficiency?
A. Configure public subnets in the existing VPC. Deploy an MSK cluster in the public subnets. Update the MSK cluster security settings to enable mutual TLS authentication.
B. Create a new VPC that has public subnets. Deploy an MSK cluster in the public subnets. Update the MSK cluster security settings to enable mutual TLS authentication.
C. Deploy an Application Load Balancer (ALB) that uses private subnets. Configure an ALB security group inbound rule to allow inbound traffic from the VPC CIDR block for HTTPS protocol.
D. Deploy a Network Load Balancer (NLB) that uses private subnets. Configure an NLB listener for HTTPS communication over the internet.
Answer: A
Explanation: The solution that meets the requirements with the most operational efficiency is to configure public subnets in the existing VPC and deploy an MSK cluster in the public subnets. This solution allows the data ingestion solution to be publicly available over the internet without creating a new VPC or deploying a load balancer. The solution also ensures that the data in transit is encrypted by enabling mutual TLS authentication, which requires both the client and the server to present certificates for verification. This solution leverages the public access feature of Amazon MSK, which is available for clusters running Apache Kafka 2.6.0 or later versions.
The other solutions are not as efficient as the first one because they either create unnecessary resources or do not encrypt the data in transit. Creating a new VPC with public subnets would incur additional costs and complexity for managing network resources and routing. Deploying an ALB or an NLB would also add more costs and latency for the data ingestion solution. Moreover, an ALB or an NLB would not encrypt the data in transit by itself unless it is configured with HTTPS listeners and certificates, which would require additional steps and maintenance. Therefore, these solutions are not optimal for the given requirements.
References:
Public access - Amazon Managed Streaming for Apache Kafka
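On the client side, mutual TLS against a public MSK endpoint comes down to pointing the Kafka client at the public TLS port with a client certificate. The sketch below builds the configuration as a plain dict using standard Kafka client property names; the bootstrap hostname, keystore paths, and the use of port 9194 (MSK's public mTLS port at the time of writing) are assumptions to verify against the MSK documentation.

```python
def mtls_client_config(bootstrap: str) -> dict:
    """Kafka client properties for mutual TLS to a public MSK endpoint (sketch)."""
    return {
        "bootstrap.servers": bootstrap,
        "security.protocol": "SSL",  # TLS encryption for data in transit
        # Client certificate presented for mutual TLS (hypothetical paths).
        "ssl.keystore.location": "/etc/msk/client.keystore.jks",
        "ssl.keystore.password": "changeit",  # placeholder, not a real secret
        # Trust store containing the CA that signed the broker certificates.
        "ssl.truststore.location": "/etc/msk/client.truststore.jks",
    }


# Hypothetical public bootstrap broker string for illustration.
cfg = mtls_client_config(
    "b-1.example-cluster.abc123.c2.kafka.us-east-1.amazonaws.com:9194"
)
```

The same properties map directly onto Java `Properties` or librdkafka configuration, so existing Kafka producers and consumers need only this configuration change to use the public endpoint.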
Question # 329
A company runs a Java-based job on an Amazon EC2 instance. The job runs every hour and takes 10 seconds to run. The job runs on a scheduled interval and consumes 1 GB of memory. The CPU utilization of the instance is low except for short surges during which the job uses the maximum CPU available. The company wants to optimize the costs to run the job. Which solution will meet these requirements?
A. Use AWS App2Container (A2C) to containerize the job. Run the job as an Amazon Elastic Container Service (Amazon ECS) task on AWS Fargate with 0.5 virtual CPU (vCPU) and 1 GB of memory.
B. Copy the code into an AWS Lambda function that has 1 GB of memory. Create an Amazon EventBridge scheduled rule to run the code each hour.
C. Use AWS App2Container (A2C) to containerize the job. Install the container in the existing Amazon Machine Image (AMI). Ensure that the schedule stops the container when the task finishes.
D. Configure the existing schedule to stop the EC2 instance at the completion of the job and restart the EC2 instance when the next job starts.
Answer: B
Explanation: AWS Lambda is a serverless compute service that allows you to run code without provisioning or managing servers. You can create Lambda functions in various languages, including Java, and specify the amount of memory allocated to your function; CPU is allocated in proportion to the configured memory. Lambda charges you only for the compute time you consume, which is calculated from the number of requests and the duration of your code execution. You can use Amazon EventBridge to trigger your Lambda function on a schedule, such as every hour, using cron or rate expressions. This solution optimizes the costs to run the job because you do not pay for any idle time or unused resources, unlike running the job on an EC2 instance.
References:
1. AWS Lambda - FAQs, General Information section
2. Tutorial: Schedule AWS Lambda functions using EventBridge, Introduction section
3. Schedule expressions using rate or cron - AWS Lambda, Introduction section
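Option B's schedule is a single EventBridge rule with the Lambda function as its target. The sketch below builds the `put_rule` and `put_targets` request parameters as plain dicts; the rule name and function ARN are hypothetical, while `rate(1 hour)` is EventBridge's standard rate-expression syntax.

```python
def hourly_rule(rule_name: str) -> dict:
    """EventBridge rule that fires once an hour."""
    return {
        "Name": rule_name,
        "ScheduleExpression": "rate(1 hour)",  # cron(0 * * * ? *) is equivalent
        "State": "ENABLED",
    }


def rule_target(rule_name: str, function_arn: str) -> dict:
    """Attach the Lambda function as the rule's target."""
    return {
        "Rule": rule_name,
        "Targets": [{"Id": "hourly-job", "Arn": function_arn}],
    }


rule = hourly_rule("hourly-metadata-job")
target = rule_target(
    "hourly-metadata-job",
    "arn:aws:lambda:us-east-1:111111111111:function:hourly-job",  # hypothetical
)
```

With a 10-second run every hour, the function is billed for roughly 240 GB-seconds per day at 1 GB of memory, versus 24 hours of mostly idle EC2 time.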
Question # 330
An ecommerce company runs applications in AWS accounts that are part of an organization in AWS Organizations. The applications run on Amazon Aurora PostgreSQL databases across all the accounts. The company needs to prevent malicious activity and must identify abnormal failed and incomplete login attempts to the databases. Which solution will meet these requirements in the MOST operationally efficient way?
A. Attach service control policies (SCPs) to the root of the organization to identify the failed login attempts.
B. Enable the Amazon RDS Protection feature in Amazon GuardDuty for the member accounts of the organization.
C. Publish the Aurora general logs to a log group in Amazon CloudWatch Logs. Export the log data to a central Amazon S3 bucket.
D. Publish all the Aurora PostgreSQL database events in AWS CloudTrail to a central Amazon S3 bucket.
Answer: C
Explanation: This option is the most operationally efficient way to meet the requirements because it allows the company to monitor and analyze the database login activity across all the accounts in the organization. By publishing the Aurora general logs to a log group in Amazon CloudWatch Logs, the company can enable the logging of the database connections, disconnections, and failed authentication attempts. By exporting the log data to a central Amazon S3 bucket, the company can store the log data in a durable and cost-effective way and use other AWS services or tools to perform further analysis or alerting on the log data. For example, the company can use Amazon Athena to query the log data in Amazon S3, or use Amazon SNS to send notifications based on the log data.
A. Attach service control policies (SCPs) to the root of the organization to identify the failed login attempts. This option is not effective because SCPs are not designed to identify failed login attempts but to restrict the actions that users and roles can perform in the member accounts of the organization. SCPs are applied to AWS API calls, not to database login attempts. Moreover, SCPs do not provide any logging or analysis capabilities for the database activity.
B. Enable the Amazon RDS Protection feature in Amazon GuardDuty for the member accounts of the organization. This option is not optimal because the Amazon RDS Protection feature in Amazon GuardDuty is not available for Aurora PostgreSQL databases, but only for Amazon RDS for MySQL and Amazon RDS for MariaDB databases. Moreover, the Amazon RDS Protection feature does not monitor the database login attempts, but the network and API activity related to the RDS instances.
D. Publish all the Aurora PostgreSQL database events in AWS CloudTrail to a central Amazon S3 bucket. This option is not sufficient because AWS CloudTrail does not capture the database login attempts, but only the AWS API calls made by or on behalf of the Aurora PostgreSQL database. For example, AWS CloudTrail can record events such as creating, modifying, or deleting the database instances, clusters, or snapshots, but not events such as connecting, disconnecting, or failing to authenticate to the database.
References:
1. Working with Amazon Aurora PostgreSQL - Amazon Aurora
2. Working with log groups and log streams - Amazon CloudWatch Logs
3. Exporting Log Data to Amazon S3 - Amazon CloudWatch Logs
4. Amazon GuardDuty FAQs
5. Logging Amazon RDS API Calls with AWS CloudTrail - Amazon Relational Database Service
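Once the Aurora PostgreSQL logs are centralized in S3, failed logins can be isolated with a simple filter: PostgreSQL records failed authentication as `password authentication failed for user ...`. The sketch below filters log lines for that marker; the sample log lines are illustrative, not real output.

```python
FAILED_LOGIN_MARKER = "password authentication failed for user"


def failed_logins(log_lines: list) -> list:
    """Return the log lines that record failed PostgreSQL authentication attempts."""
    return [line for line in log_lines if FAILED_LOGIN_MARKER in line]


# Illustrative log lines in the PostgreSQL log format.
sample = [
    "2024-05-01 10:00:01 UTC:app@inventory:[123]:LOG: connection authorized",
    '2024-05-01 10:00:07 UTC:app@inventory:[124]:FATAL: '
    'password authentication failed for user "app"',
]
```

The same marker string works as a CloudWatch Logs metric filter pattern or in an Athena `LIKE` predicate over the exported S3 data, which is how the alerting step mentioned above would be wired up.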