Amazon SAP-C02 Sample Questions

Question # 31

A company has an IoT platform that runs in an on-premises environment. The platform consists of a server that connects to IoT devices by using the MQTT protocol. The platform collects telemetry data from the devices at least once every 5 minutes. The platform also stores device metadata in a MongoDB cluster. An application that is installed on an on-premises machine runs periodic jobs to aggregate and transform the telemetry and device metadata. The application creates reports that users view by using another web application that runs on the same on-premises machine. The periodic jobs take 120-600 seconds to run. However, the web application is always running. The company is moving the platform to AWS and must reduce the operational overhead of the stack. Which combination of steps will meet these requirements with the LEAST operational overhead? (Select THREE.)

A. Use AWS Lambda functions to connect to the IoT devices
B. Configure the IoT devices to publish to AWS IoT Core
C. Write the metadata to a self-managed MongoDB database on an Amazon EC2 instance
D. Write the metadata to Amazon DocumentDB (with MongoDB compatibility)
E. Use AWS Step Functions state machines with AWS Lambda tasks to prepare the reports and to write the reports to Amazon S3. Use Amazon CloudFront with an S3 origin to serve the reports
F. Use an Amazon Elastic Kubernetes Service (Amazon EKS) cluster with Amazon EC2 instances to prepare the reports. Use an ingress controller in the EKS cluster to serve the reports
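For option B, a device-side sketch of the message a sensor might publish to AWS IoT Core over MQTT. The topic layout and payload schema are illustrative assumptions, not part of the question:

```python
import json
import time

def build_telemetry_message(device_id, readings):
    # Topic convention and payload fields are hypothetical examples;
    # AWS IoT Core accepts any MQTT topic the device's policy allows.
    topic = f"telemetry/{device_id}"
    payload = json.dumps({
        "deviceId": device_id,
        "timestamp": int(time.time()),
        "readings": readings,
    })
    return topic, payload

topic, payload = build_telemetry_message("sensor-42", {"temperatureC": 21.5})
```

An IoT Core rule subscribed to `telemetry/#` could then route these messages onward without any self-managed MQTT broker.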


Question # 32

A company is designing an AWS environment for a manufacturing application. The application has been successful with customers, and the application's user base has increased. The company has connected the AWS environment to the company's on-premises data center through a 1 Gbps AWS Direct Connect connection. The company has configured BGP for the connection. The company must update the existing network connectivity solution to ensure that the solution is highly available, fault tolerant, and secure. Which solution will meet these requirements MOST cost-effectively?

A. Add a dynamic private IP AWS Site-to-Site VPN as a secondary path to secure data in transit and provide resilience for the Direct Connect connection. Configure MACsec to encrypt traffic inside the Direct Connect connection.
B. Provision another Direct Connect connection between the company's on-premises data center and AWS to increase the transfer speed and provide resilience. Configure MACsec to encrypt traffic inside the Direct Connect connection.
C. Configure multiple private VIFs. Load balance data across the VIFs between the on-premises data center and AWS to provide resilience.
D. Add a static AWS Site-to-Site VPN as a secondary path to secure data in transit and to provide resilience for the Direct Connect connection.
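For option D, a sketch of the parameters a static Site-to-Site VPN (no BGP session required) would take, in the shape boto3's `ec2.create_vpn_connection` expects. The resource IDs are placeholders and the API call itself is omitted:

```python
# Parameter shape for boto3 ec2.create_vpn_connection; the gateway
# IDs below are placeholders for illustration only.
vpn_params = {
    "Type": "ipsec.1",
    "CustomerGatewayId": "cgw-0123456789abcdef0",
    "VpnGatewayId": "vgw-0123456789abcdef0",
    # Static routing: the tunnels come up without a BGP session
    "Options": {"StaticRoutesOnly": True},
}
```

The static routes for the on-premises prefixes would then be added with `ec2.create_vpn_connection_route`.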


Question # 33

A company deploys workloads in multiple AWS accounts. Each account has a VPC with VPC flow logs published in text log format to a centralized Amazon S3 bucket. Each log file is compressed with gzip compression. The company must retain the log files indefinitely. A security engineer occasionally analyzes the logs by using Amazon Athena to query the VPC flow logs. The query performance is degrading over time as the number of ingested logs grows. A solutions architect must improve the performance of the log analysis and reduce the storage space that the VPC flow logs use. Which solution will meet these requirements with the LARGEST performance improvement?

A. Create an AWS Lambda function to decompress the gzip files and to compress the files with bzip2 compression. Subscribe the Lambda function to an s3:ObjectCreated:Put S3 event notification for the S3 bucket.
B. Enable S3 Transfer Acceleration for the S3 bucket. Create an S3 Lifecycle configuration to move files to the S3 Intelligent-Tiering storage class as soon as the files are uploaded.
C. Update the VPC flow log configuration to store the files in Apache Parquet format. Specify hourly partitions for the log files.
D. Create a new Athena workgroup without data usage control limits. Use Athena engine version 2.
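Option C maps directly to the `DestinationOptions` of the flow-logs API. A sketch of the parameters in the shape boto3's `ec2.create_flow_logs` takes them (the VPC ID and bucket ARN are placeholders):

```python
# Parameter shape for boto3 ec2.create_flow_logs; IDs and the
# bucket ARN are placeholders for illustration only.
flow_log_params = {
    "ResourceType": "VPC",
    "ResourceIds": ["vpc-0123456789abcdef0"],            # placeholder
    "TrafficType": "ALL",
    "LogDestinationType": "s3",
    "LogDestination": "arn:aws:s3:::central-flow-logs",  # placeholder
    "DestinationOptions": {
        "FileFormat": "parquet",           # columnar format speeds up Athena
        "HiveCompatiblePartitions": True,  # partition layout Athena understands
        "PerHourPartition": True,          # hourly partitions prune scanned data
    },
}
```

Parquet is also compressed columnar storage, so it addresses the storage-reduction requirement at the same time.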


Question # 34

An e-commerce company is revamping its IT infrastructure and is planning to use AWS services. The company's CIO has asked a solutions architect to design a simple, highly available, and loosely coupled order processing application. The application is responsible for receiving and processing orders before storing them in an Amazon DynamoDB table. The application has a sporadic traffic pattern and should be able to scale during marketing campaigns to process the orders with minimal delays. Which of the following is the MOST reliable approach to meet the requirements? 

A. Receive the orders in an Amazon EC2-hosted database and use EC2 instances to process them.
B. Receive the orders in an Amazon SQS queue and invoke an AWS Lambda function to process them.
C. Receive the orders using the AWS Step Functions program and launch an Amazon ECS container to process them.
D. Receive the orders in Amazon Kinesis Data Streams and use Amazon EC2 instances to process them.
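Option B can be sketched as a Lambda handler consuming an SQS batch. The order schema and the DynamoDB write (shown as a comment) are illustrative assumptions:

```python
import json

def handler(event, context=None):
    # SQS event source mappings deliver messages in event["Records"];
    # each record body holds one order as JSON (hypothetical schema).
    processed = []
    for record in event["Records"]:
        order = json.loads(record["body"])
        # A real function would persist the order here, e.g.:
        # table.put_item(Item=order)  # boto3 DynamoDB Table resource
        processed.append(order["orderId"])
    return {"processed": len(processed)}

sample_event = {"Records": [{"body": json.dumps({"orderId": "A-1", "total": 25})}]}
result = handler(sample_event)
```

The queue absorbs traffic spikes while Lambda scales the consumers, which is what makes this pairing loosely coupled.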


Question # 35

A company that is developing a mobile game is making game assets available in two AWS Regions. Game assets are served from a set of Amazon EC2 instances behind an Application Load Balancer (ALB) in each Region. The company requires game assets to be fetched from the closest Region. If game assets become unavailable in the closest Region, they should be fetched from the other Region. What should a solutions architect do to meet these requirements?

A. Create an Amazon CloudFront distribution. Create an origin group with one origin for each ALB. Set one of the origins as primary.
B. Create an Amazon Route 53 health check for each ALB. Create a Route 53 failover routing record pointing to the two ALBs. Set the Evaluate Target Health value to Yes.
C. Create two Amazon CloudFront distributions, each with one ALB as the origin. Create an Amazon Route 53 failover routing record pointing to the two CloudFront distributions. Set the Evaluate Target Health value to Yes.
D. Create an Amazon Route 53 health check for each ALB. Create a Route 53 latency alias record pointing to the two ALBs. Set the Evaluate Target Health value to Yes.
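Option A corresponds to a CloudFront origin group. A sketch of the failover portion of a distribution config (the origin IDs are placeholders, and the status-code list is one reasonable choice):

```python
# Shape of the OriginGroups entry in a CloudFront distribution
# config; origin IDs are placeholders for illustration only.
origin_group = {
    "Id": "game-assets-failover",  # hypothetical group name
    "FailoverCriteria": {
        # CloudFront retries the secondary origin on these responses
        "StatusCodes": {"Quantity": 3, "Items": [500, 502, 503]}
    },
    "Members": {
        "Quantity": 2,
        "Items": [
            {"OriginId": "alb-primary-region"},    # primary, placeholder
            {"OriginId": "alb-secondary-region"},  # secondary, placeholder
        ],
    },
}
```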


Question # 36

A flood monitoring agency has deployed more than 10,000 water-level monitoring sensors. Sensors send continuous data updates, and each update is less than 1 MB in size. The agency has a fleet of on-premises application servers. These servers receive updates from the sensors, convert the raw data into a human-readable format, and write the results to an on-premises relational database server. Data analysts then use simple SQL queries to monitor the data. The agency wants to increase overall application availability and reduce the effort that is required to perform maintenance tasks. These maintenance tasks, which include updates and patches to the application servers, cause downtime. While an application server is down, data is lost from sensors because the remaining servers cannot handle the entire workload. The agency wants a solution that optimizes operational overhead and costs. A solutions architect recommends the use of AWS IoT Core to collect the sensor data. What else should the solutions architect recommend to meet these requirements?

A. Send the sensor data to Amazon Kinesis Data Firehose. Use an AWS Lambda function to read the Kinesis Data Firehose data, convert it to .csv format, and insert it into an Amazon Aurora MySQL DB instance. Instruct the data analysts to query the data directly from the DB instance.
B. Send the sensor data to Amazon Kinesis Data Firehose. Use an AWS Lambda function to read the Kinesis Data Firehose data, convert it to Apache Parquet format, and save it to an Amazon S3 bucket. Instruct the data analysts to query the data by using Amazon Athena.
C. Send the sensor data to an Amazon Managed Service for Apache Flink (previously known as Amazon Kinesis Data Analytics) application to convert the data to .csv format and store it in an Amazon S3 bucket. Import the data into an Amazon Aurora MySQL DB instance. Instruct the data analysts to query the data directly from the DB instance.
D. Send the sensor data to an Amazon Managed Service for Apache Flink (previously known as Amazon Kinesis Data Analytics) application to convert the data to Apache Parquet format and store it in an Amazon S3 bucket. Instruct the data analysts to query the data by using Amazon Athena.
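For the Firehose-to-Parquet path in option B, Firehose can also do the conversion natively. A sketch of the record-format conversion block inside an `ExtendedS3DestinationConfiguration` for `firehose.create_delivery_stream` (the ARNs and Glue database/table names are placeholders):

```python
# Shape of the Firehose S3 destination with built-in conversion
# from JSON to Parquet; all ARNs and names are placeholders.
extended_s3_config = {
    "RoleARN": "arn:aws:iam::123456789012:role/firehose-delivery",  # placeholder
    "BucketARN": "arn:aws:s3:::sensor-data",                        # placeholder
    "DataFormatConversionConfiguration": {
        "Enabled": True,
        # JSON in, Parquet out; the schema comes from an AWS Glue table
        "InputFormatConfiguration": {"Deserializer": {"OpenXJsonSerDe": {}}},
        "OutputFormatConfiguration": {"Serializer": {"ParquetSerDe": {}}},
        "SchemaConfiguration": {
            "DatabaseName": "sensors",       # placeholder Glue database
            "TableName": "water_levels",     # placeholder Glue table
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-glue",  # placeholder
        },
    },
}
```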


Question # 37

A company has many services running in its on-premises data center. The data center is connected to AWS using AWS Direct Connect (DX) and an IPsec VPN. The service data is sensitive, and connectivity cannot traverse the internet. The company wants to expand into a new market segment and begin offering its services to other companies that are using AWS. Which solution will meet these requirements?

A. Create a VPC Endpoint Service that accepts TCP traffic, host it behind a Network Load Balancer, and make the service available over DX.
B. Create a VPC Endpoint Service that accepts HTTP or HTTPS traffic, host it behind an Application Load Balancer, and make the service available over DX.
C. Attach an internet gateway to the VPC, and ensure that network access control and security group rules allow the relevant inbound and outbound traffic.
D. Attach a NAT gateway to the VPC, and ensure that network access control and security group rules allow the relevant inbound and outbound traffic.
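Options A and B describe AWS PrivateLink. The provider-side half can be sketched as the parameters for boto3's `ec2.create_vpc_endpoint_service_configuration`, with a placeholder NLB ARN:

```python
# Parameter shape for boto3 ec2.create_vpc_endpoint_service_configuration;
# the load balancer ARN is a placeholder for illustration only.
endpoint_service_params = {
    # Require manual approval of each consumer's endpoint request
    "AcceptanceRequired": True,
    "NetworkLoadBalancerArns": [
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "loadbalancer/net/service-nlb/0123456789abcdef"
    ],
}
```

Consumers then create interface endpoints to the service name, so traffic never leaves the AWS network.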


Question # 38

A company wants to establish a dedicated connection between its on-premises infrastructure and AWS. The company is setting up a 1 Gbps AWS Direct Connect connection to its account VPC. The architecture includes a transit gateway and a Direct Connect gateway to connect multiple VPCs and the on-premises infrastructure. The company must connect to VPC resources over a transit VIF by using the Direct Connect connection. Which combination of steps will meet these requirements? (Select TWO.) 

A. Update the 1 Gbps Direct Connect connection to 10 Gbps.
B. Advertise the on-premises network prefixes over the transit VIF.
C. Advertise the VPC prefixes from the Direct Connect gateway to the on-premises network over the transit VIF.
D. Update the Direct Connect connection's MACsec encryption mode attribute to must_encrypt.
E. Associate a MACsec Connection Key Name/Connectivity Association Key (CKN/CAK) pair with the Direct Connect connection.
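Options B and C are the two halves of the same BGP exchange over the transit VIF. The AWS-side half can be sketched as the allowed-prefixes list on the Direct Connect gateway association, in the shape boto3's `directconnect.create_direct_connect_gateway_association` expects (the IDs and CIDRs are placeholders):

```python
# Parameter shape for boto3's Direct Connect gateway association;
# gateway IDs and CIDRs are placeholders for illustration only.
association_params = {
    "directConnectGatewayId": "dxgw-0123456789abcdef0",  # placeholder
    "gatewayId": "tgw-0123456789abcdef0",  # transit gateway, placeholder
    # VPC CIDRs the Direct Connect gateway advertises to on premises
    "addAllowedPrefixesToDirectConnectGateway": [
        {"cidr": "10.0.0.0/16"},
        {"cidr": "10.1.0.0/16"},
    ],
}
```

The on-premises router advertises its own prefixes back over the same BGP session on the transit VIF.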


Question # 39

A company hosts an intranet web application on Amazon EC2 instances behind an Application Load Balancer (ALB). Currently, users authenticate to the application against an internal user database. The company needs to authenticate users to the application by using an existing AWS Directory Service for Microsoft Active Directory directory. All users with accounts in the directory must have access to the application. Which solution will meet these requirements? 

A. Create a new app client in the directory. Create a listener rule for the ALB. Specify the authenticate-oidc action for the listener rule. Configure the listener rule with the appropriate issuer, client ID and secret, and endpoint details for the Active Directory service. Configure the new app client with the callback URL that the ALB provides.
B. Configure an Amazon Cognito user pool. Configure the user pool with a federated identity provider (IdP) that has metadata from the directory. Create an app client. Associate the app client with the user pool. Create a listener rule for the ALB. Specify the authenticate-cognito action for the listener rule. Configure the listener rule to use the user pool and app client.
C. Add the directory as a new IAM identity provider (IdP). Create a new IAM role that has an entity type of SAML 2.0 federation. Configure a role policy that allows access to the ALB. Configure the new role as the default authenticated user role for the IdP. Create a listener rule for the ALB. Specify the authenticate-oidc action for the listener rule.
D. Enable AWS IAM Identity Center (AWS Single Sign-On). Configure the directory as an external identity provider (IdP) that uses SAML. Use the automatic provisioning method. Create a new IAM role that has an entity type of SAML 2.0 federation. Configure a role policy that allows access to the ALB. Attach the new role to all groups. Create a listener rule for the ALB. Specify the authenticate-cognito action for the listener rule.
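Option B's listener rule pairs an authenticate-cognito action with a forward action. A sketch of the action list in the shape boto3's `elbv2.create_rule` expects (the ARNs, client ID, and domain are placeholders):

```python
# Action list shape for boto3 elbv2.create_rule; all ARNs and
# Cognito identifiers below are placeholders for illustration only.
rule_actions = [
    {
        "Type": "authenticate-cognito",
        "Order": 1,
        "AuthenticateCognitoConfig": {
            "UserPoolArn": "arn:aws:cognito-idp:us-east-1:123456789012:"
                           "userpool/us-east-1_EXAMPLE",  # placeholder
            "UserPoolClientId": "example-app-client-id",  # placeholder
            "UserPoolDomain": "example-auth-domain",      # placeholder
            "OnUnauthenticatedRequest": "authenticate",   # redirect to sign-in
        },
    },
    {
        "Type": "forward",
        "Order": 2,
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:"
                          "123456789012:targetgroup/intranet-web/0123456789abcdef",
    },
]
```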


Question # 40

A public retail web application uses an Application Load Balancer (ALB) in front of Amazon EC2 instances running across multiple Availability Zones (AZs) in a Region backed by an Amazon RDS MySQL Multi-AZ deployment. Target group health checks are configured to use HTTP and pointed at the product catalog page. Auto Scaling is configured to maintain the web fleet size based on the ALB health check. Recently, the application experienced an outage. Auto Scaling continuously replaced the instances during the outage. A subsequent investigation determined that the web server metrics were within the normal range, but the database tier was experiencing high load, resulting in severely elevated query response times. Which of the following changes together would remediate these issues while improving monitoring capabilities for the availability and functionality of the entire application stack for future growth? (Select TWO.)

A. Configure read replicas for Amazon RDS MySQL and use the single reader endpoint in the web application to reduce the load on the backend database tier.
B. Configure the target group health check to point at a simple HTML page instead of a product catalog page and the Amazon Route 53 health check against the product page to evaluate full application functionality. Configure Amazon CloudWatch alarms to notify administrators when the site fails.
C. Configure the target group health check to use a TCP check of the Amazon EC2 web server and the Amazon Route 53 health check against the product page to evaluate full application functionality. Configure Amazon CloudWatch alarms to notify administrators when the site fails.
D. Configure an Amazon CloudWatch alarm for Amazon RDS with an action to recover a high-load, impaired RDS instance in the database tier.
E. Configure an Amazon ElastiCache cluster and place it between the web application and RDS MySQL instances to reduce the load on the backend database tier.
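Option B's idea of splitting shallow and deep health checks can be sketched as two configurations side by side (the paths and domain are placeholder values):

```python
# Target group check stays shallow: it should only decide whether
# the web server itself is up, so database load alone can no longer
# trigger instance replacement by Auto Scaling.
target_group_health_check = {
    "HealthCheckProtocol": "HTTP",
    "HealthCheckPath": "/health.html",  # placeholder static page
}

# Route 53 check exercises the full stack (web tier plus database)
# and feeds CloudWatch alarms for notification instead of scaling.
route53_health_check = {
    "Type": "HTTPS",
    "FullyQualifiedDomainName": "www.example.com",  # placeholder
    "ResourcePath": "/products",  # placeholder catalog page
}
```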


