MLS-C01 Dumps

Customer Rating & Feedback: 5 Star
98% report that the exact questions came from the dumps

Amazon MLS-C01 Question Answers

AWS Certified Machine Learning - Specialty Dumps July 2024

Are you tired of looking for a source that will keep you updated on the AWS Certified Machine Learning - Specialty Exam? One that also has a collection of affordable, high-quality, and incredibly easy Amazon MLS-C01 Practice Questions? Well then, you are in luck, because Salesforcexamdumps.com just updated them! Get ready to become AWS Certified Machine Learning - Specialty certified.

PDF: $40 (was $100)
Test Engine: $56 (was $140)
PDF + Test Engine: $72 (was $180)

Here are Amazon MLS-C01 PDF available features:

281 questions with answers
Update date: 22 Jul, 2024
1 day of study required to pass the exam
100% passing assurance
100% money-back guarantee
Free 3 months of updates
Last 24 Hours Results:

87 students passed
93% average marks
97% of questions came from the dumps
4387 total happy clients

What is Amazon MLS-C01?

Amazon MLS-C01 is the exam you must pass to earn the AWS Certified Machine Learning - Specialty certification. The AWS Certified Specialty certification validates a candidate's expertise in building and deploying machine learning workloads on Amazon Web Services. In this fast-paced world, a certification is the quickest way to gain your employer's approval. Take on the AWS Certified Machine Learning - Specialty Exam and become a certified professional today. Salesforcexamdumps.com is always eager to extend a helping hand by providing approved and accepted Amazon MLS-C01 Practice Questions. Passing AWS Certified Machine Learning - Specialty will be your ticket to a better future!

Pass with Amazon MLS-C01 Braindumps!

Contrary to the belief that certification exams are generally hard to get through, passing AWS Certified Machine Learning - Specialty is incredibly easy, provided you have access to a reliable resource such as the Salesforcexamdumps.com Amazon MLS-C01 PDF. We have been in this business long enough to understand where most resources go wrong. Passing the Amazon AWS Certified Specialty certification is all about having the right information, so we filled our Amazon MLS-C01 Dumps with all the data you need to pass. These carefully curated sets of AWS Certified Machine Learning - Specialty Practice Questions target the most frequently repeated exam questions, so you know they are essential and can ensure passing results. Stop wasting your time waiting around and order your set of Amazon MLS-C01 Braindumps now!

We aim to provide all AWS Certified Specialty certification exam candidates with the best resources at minimum rates. You can check out our free demo before downloading to make sure the Amazon MLS-C01 Practice Questions are what you want. And do not forget about the discount; we always give our customers a little extra.

Why Choose Amazon MLS-C01 PDF?

Unlike other websites, Salesforcexamdumps.com prioritizes the needs of AWS Certified Machine Learning - Specialty candidates. Not every Amazon exam candidate has full-time access to the internet, and it is hard to sit in front of a computer screen for too many hours. Are you one of them? We understand, which is why our AWS Certified Specialty solution, Amazon MLS-C01 Question Answers, comes in two formats: PDF and Online Test Engine. One is for customers who like online platforms with realistic exam simulation; the other is for those who prefer keeping their material close at hand. Moreover, you can download or print the Amazon MLS-C01 Dumps with ease.

If you still have queries, our team of experts is in service 24/7 to answer your questions. Just leave us a quick message in the chat box below or email us at [email protected].

Amazon MLS-C01 Sample Questions

Question # 1

A data scientist stores financial datasets in Amazon S3. The data scientist uses Amazon Athena to query the datasets by using SQL. The data scientist uses Amazon SageMaker to deploy a machine learning (ML) model. The data scientist wants to obtain inferences from the model at the SageMaker endpoint. However, when the data scientist attempts to invoke the SageMaker endpoint, the data scientist receives SQL statement failures. The data scientist's IAM user is currently unable to invoke the SageMaker endpoint.

Which combination of actions will give the data scientist's IAM user the ability to invoke the SageMaker endpoint? (Select THREE.)

A. Attach the AmazonAthenaFullAccess AWS managed policy to the user identity.
B. Include a policy statement for the data scientist's IAM user that allows the IAM user to perform the sagemaker:InvokeEndpoint action.
C. Include an inline policy for the data scientist's IAM user that allows SageMaker to read S3 objects.
D. Include a policy statement for the data scientist's IAM user that allows the IAM user to perform the sagemaker:GetRecord action.
E. Include the SQL statement "USING EXTERNAL FUNCTION ml_function_name" in the Athena SQL query.
F. Perform a user remapping in SageMaker to map the IAM user to another IAM user that is on the hosted endpoint.
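For context on what option B describes, here is a minimal sketch of an IAM identity policy that allows the sagemaker:InvokeEndpoint action. The region, account ID, and endpoint name in the ARN are placeholders, not values from the exam.

```python
import json

# Hypothetical identity policy granting the sagemaker:InvokeEndpoint
# action (the permission named in option B). The ARN below is a
# placeholder for illustration only.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sagemaker:InvokeEndpoint",
            "Resource": "arn:aws:sagemaker:us-east-1:111122223333:endpoint/example-endpoint",
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Attaching a statement like this to an IAM user (or a role the user assumes) is the general mechanism for granting endpoint invocation rights.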


Question # 2

A Machine Learning Specialist is designing a scalable data storage solution for Amazon SageMaker. There is an existing TensorFlow-based model implemented as a train.py script that relies on static training data that is currently stored as TFRecords.

Which method of providing training data to Amazon SageMaker would meet the business requirements with the LEAST development overhead?

A. Use Amazon SageMaker script mode and use train.py unchanged. Point the Amazon SageMaker training invocation to the local path of the data without reformatting the training data.
B. Use Amazon SageMaker script mode and use train.py unchanged. Put the TFRecord data into an Amazon S3 bucket. Point the Amazon SageMaker training invocation to the S3 bucket without reformatting the training data.
C. Rewrite the train.py script to add a section that converts TFRecords to protobuf and ingests the protobuf data instead of TFRecords.
D. Prepare the data in the format accepted by Amazon SageMaker. Use AWS Glue or AWS Lambda to reformat and store the data in an Amazon S3 bucket.


Question # 3

A credit card company wants to identify fraudulent transactions in real time. A data scientist builds a machine learning model for this purpose. The transactional data is captured and stored in Amazon S3. The historic data is already labeled with two classes: fraud (positive) and fair transactions (negative). The data scientist removes all the missing data and builds a classifier by using the XGBoost algorithm in Amazon SageMaker. The model produces the following results:

• True positive rate (TPR): 0.700
• False negative rate (FNR): 0.300
• True negative rate (TNR): 0.977
• False positive rate (FPR): 0.023
• Overall accuracy: 0.949

Which solution should the data scientist use to improve the performance of the model?

A. Apply the Synthetic Minority Oversampling Technique (SMOTE) on the minority class in the training dataset. Retrain the model with the updated training data.
B. Apply the Synthetic Minority Oversampling Technique (SMOTE) on the majority class in the training dataset. Retrain the model with the updated training data.
C. Undersample the minority class.
D. Oversample the majority class.
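The rates quoted in this question fit together arithmetically. The following sketch uses hypothetical confusion-matrix counts, chosen only to reproduce the stated rates, to show how overall accuracy can look strong on an imbalanced dataset while the model still misses 30% of fraud cases:

```python
# Hypothetical counts chosen to reproduce the rates in the question
# (not taken from the exam itself).
tp, fn = 70, 30     # 100 fraudulent (positive) transactions
tn, fp = 879, 21    # 900 fair (negative) transactions

tpr = tp / (tp + fn)            # true positive rate (recall)
fnr = fn / (tp + fn)            # false negative rate
tnr = tn / (tn + fp)            # true negative rate
fpr = fp / (tn + fp)            # false positive rate
accuracy = (tp + tn) / (tp + fn + tn + fp)

# Accuracy is dominated by the majority (negative) class, which is why
# rebalancing the minority class can improve fraud detection.
print(f"TPR={tpr:.3f} FNR={fnr:.3f} TNR={tnr:.3f} FPR={fpr:.3f} acc={accuracy:.3f}")
# TPR=0.700 FNR=0.300 TNR=0.977 FPR=0.023 acc=0.949
```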


Question # 4

A pharmaceutical company performs periodic audits of clinical trial sites to quickly resolve critical findings. The company stores audit documents in text format. Auditors have requested help from a data science team to quickly analyze the documents. The auditors need to discover the 10 main topics within the documents to prioritize and distribute the review work among the auditing team members. Documents that describe adverse events must receive the highest priority. A data scientist will use statistical modeling to discover abstract topics and to provide a list of the top words for each category to help the auditors assess the relevance of the topic.

Which algorithms are best suited to this scenario? (Choose two.)

A. Latent Dirichlet allocation (LDA)
B. Random Forest classifier
C. Neural topic modeling (NTM)
D. Linear support vector machine
E. Linear regression


Question # 5

A media company wants to create a solution that identifies celebrities in pictures that users upload. The company also wants to identify the IP address and the timestamp details from the users so the company can prevent users from uploading pictures from unauthorized locations.

Which solution will meet these requirements with the LEAST development effort?

A. Use AWS Panorama to identify celebrities in the pictures. Use AWS CloudTrail to capture IP address and timestamp details.
B. Use AWS Panorama to identify celebrities in the pictures. Make calls to the AWS Panorama Device SDK to capture IP address and timestamp details.
C. Use Amazon Rekognition to identify celebrities in the pictures. Use AWS CloudTrail to capture IP address and timestamp details.
D. Use Amazon Rekognition to identify celebrities in the pictures. Use the text detection feature to capture IP address and timestamp details.


Question # 6

A retail company stores 100 GB of daily transactional data in Amazon S3 at periodic intervals. The company wants to identify the schema of the transactional data. The company also wants to perform transformations on the transactional data that is in Amazon S3. The company wants to use a machine learning (ML) approach to detect fraud in the transformed data.

Which combination of solutions will meet these requirements with the LEAST operational overhead? (Select THREE.)

A. Use Amazon Athena to scan the data and identify the schema.
B. Use AWS Glue crawlers to scan the data and identify the schema.
C. Use Amazon Redshift stored procedures to perform data transformations.
D. Use AWS Glue workflows and AWS Glue jobs to perform data transformations.
E. Use Amazon Redshift ML to train a model to detect fraud.
F. Use Amazon Fraud Detector to train a model to detect fraud.


Question # 7

An automotive company uses computer vision in its autonomous cars. The company trained its object detection models successfully by using transfer learning from a convolutional neural network (CNN). The company trained the models by using PyTorch through the Amazon SageMaker SDK. The vehicles have limited hardware and compute power. The company wants to optimize the model to reduce memory, battery, and hardware consumption without a significant sacrifice in accuracy.

Which solution will improve the computational efficiency of the models?

A. Use Amazon CloudWatch metrics to gain visibility into the SageMaker training weights, gradients, biases, and activation outputs. Compute the filter ranks based on the training information. Apply pruning to remove the low-ranking filters. Set new weights based on the pruned set of filters. Run a new training job with the pruned model.
B. Use Amazon SageMaker Ground Truth to build and run data labeling workflows. Collect a larger labeled dataset with the labeling workflows. Run a new training job that uses the new labeled data with the previous training data.
C. Use Amazon SageMaker Debugger to gain visibility into the training weights, gradients, biases, and activation outputs. Compute the filter ranks based on the training information. Apply pruning to remove the low-ranking filters. Set the new weights based on the pruned set of filters. Run a new training job with the pruned model.
D. Use Amazon SageMaker Model Monitor to gain visibility into the ModelLatency metric and OverheadLatency metric of the model after the company deploys the model. Increase the model learning rate. Run a new training job.


Question # 8

A media company is building a computer vision model to analyze images that are on social media. The model consists of CNNs that the company trained by using images that the company stores in Amazon S3. The company used an Amazon SageMaker training job in File mode with a single Amazon EC2 On-Demand Instance. Every day, the company updates the model by using about 10,000 images that the company has collected in the last 24 hours. The company configures training with only one epoch. The company wants to speed up training and lower costs without the need to make any code changes.

Which solution will meet these requirements?

A. Instead of File mode, configure the SageMaker training job to use Pipe mode. Ingest the data from a pipe.
B. Instead of File mode, configure the SageMaker training job to use FastFile mode with no other changes.
C. Instead of On-Demand Instances, configure the SageMaker training job to use Spot Instances. Make no other changes.
D. Instead of On-Demand Instances, configure the SageMaker training job to use Spot Instances. Implement model checkpoints.


Question # 9

A data scientist is building a forecasting model for a retail company by using the most recent 5 years of sales records that are stored in a data warehouse. The dataset contains sales records for each of the company's stores across five commercial regions. The data scientist creates a working dataset with StoreID, Region, Date, and Sales Amount as columns. The data scientist wants to analyze yearly average sales for each region. The scientist also wants to compare how each region performed compared to average sales across all commercial regions.

Which visualization will help the data scientist better understand the data trend?

A. Create an aggregated dataset by using the Pandas GroupBy function to get average sales for each year for each store. Create a bar plot, faceted by year, of average sales for each store. Add an extra bar in each facet to represent average sales.
B. Create an aggregated dataset by using the Pandas GroupBy function to get average sales for each year for each store. Create a bar plot, colored by region and faceted by year, of average sales for each store. Add a horizontal line in each facet to represent average sales.
C. Create an aggregated dataset by using the Pandas GroupBy function to get average sales for each year for each region. Create a bar plot of average sales for each region. Add an extra bar in each facet to represent average sales.
D. Create an aggregated dataset by using the Pandas GroupBy function to get average sales for each year for each region. Create a bar plot, faceted by year, of average sales for each region. Add a horizontal line in each facet to represent average sales.


Question # 10

A data scientist is training a large PyTorch model by using Amazon SageMaker. It takes 10 hours on average to train the model on GPU instances. The data scientist suspects that training is not converging and that resource utilization is not optimal.

What should the data scientist do to identify and address training issues with the LEAST development effort?

A. Use CPU utilization metrics that are captured in Amazon CloudWatch. Configure a CloudWatch alarm to stop the training job early if low CPU utilization occurs.
B. Use high-resolution custom metrics that are captured in Amazon CloudWatch. Configure an AWS Lambda function to analyze the metrics and to stop the training job early if issues are detected.
C. Use the SageMaker Debugger vanishing_gradient and LowGPUUtilization built-in rules to detect issues and to launch the StopTrainingJob action if issues are detected.
D. Use the SageMaker Debugger confusion and feature_importance_overweight built-in rules to detect issues and to launch the StopTrainingJob action if issues are detected.


Question # 11

A company builds computer-vision models that use deep learning for the autonomous vehicle industry. A machine learning (ML) specialist uses an Amazon EC2 instance that has a CPU:GPU ratio of 12:1 to train the models. The ML specialist examines the instance metric logs and notices that the GPU is idle half of the time. The ML specialist must reduce training costs without increasing the duration of the training jobs.

Which solution will meet these requirements?

A. Switch to an instance type that has only CPUs.
B. Use a heterogeneous cluster that has two different instance groups.
C. Use memory-optimized EC2 Spot Instances for the training jobs.
D. Switch to an instance type that has a CPU:GPU ratio of 6:1.


Question # 12

An engraving company wants to automate its quality control process for plaques. The company performs the process before mailing each customized plaque to a customer. The company has created an Amazon S3 bucket that contains images of defects that should cause a plaque to be rejected. Low-confidence predictions must be sent to an internal team of reviewers who are using Amazon Augmented AI (Amazon A2I).

Which solution will meet these requirements?

A. Use Amazon Textract for automatic processing. Use Amazon A2I with Amazon Mechanical Turk for manual review.
B. Use Amazon Rekognition for automatic processing. Use Amazon A2I with a private workforce option for manual review.
C. Use Amazon Transcribe for automatic processing. Use Amazon A2I with a private workforce option for manual review.
D. Use AWS Panorama for automatic processing. Use Amazon A2I with Amazon Mechanical Turk for manual review.


Question # 13

An Amazon SageMaker notebook instance is launched into Amazon VPC. The SageMaker notebook references data contained in an Amazon S3 bucket in another account. The bucket is encrypted using SSE-KMS. The instance returns an access denied error when trying to access data in Amazon S3.

Which of the following are required to access the bucket and avoid the access denied error? (Select THREE.)

A. An AWS KMS key policy that allows access to the customer master key (CMK)
B. A SageMaker notebook security group that allows access to Amazon S3
C. An IAM role that allows access to the specific S3 bucket
D. A permissive S3 bucket policy
E. An S3 bucket owner that matches the notebook owner
F. A SageMaker notebook subnet ACL that allows traffic to Amazon S3.


Question # 14

A machine learning (ML) engineer has created a feature repository in Amazon SageMaker Feature Store for the company. The company has AWS accounts for development, integration, and production. The company hosts a feature store in the development account. The company uses Amazon S3 buckets to store feature values offline. The company wants to share features and to allow the integration account and the production account to reuse the features that are in the feature repository.

Which combination of steps will meet these requirements? (Select TWO.)

A. Create an IAM role in the development account that the integration account and production account can assume. Attach IAM policies to the role that allow access to the feature repository and the S3 buckets.
B. Share the feature repository that is associated with the S3 buckets from the development account to the integration account and the production account by using AWS Resource Access Manager (AWS RAM).
C. Use AWS Security Token Service (AWS STS) from the integration account and the production account to retrieve credentials for the development account.
D. Set up S3 replication between the development S3 buckets and the integration and production S3 buckets.
E. Create an AWS PrivateLink endpoint in the development account for SageMaker.


Question # 15

A network security vendor needs to ingest telemetry data from thousands of endpoints that run all over the world. The data is transmitted every 30 seconds in the form of records that contain 50 fields. Each record is up to 1 KB in size. The security vendor uses Amazon Kinesis Data Streams to ingest the data. The vendor requires hourly summaries of the records that Kinesis Data Streams ingests. The vendor will use Amazon Athena to query the records and to generate the summaries. The Athena queries will target 7 to 12 of the available data fields.

Which solution will meet these requirements with the LEAST amount of customization to transform and store the ingested data?

A. Use AWS Lambda to read and aggregate the data hourly. Transform the data and store it in Amazon S3 by using Amazon Kinesis Data Firehose.
B. Use Amazon Kinesis Data Firehose to read and aggregate the data hourly. Transform the data and store it in Amazon S3 by using a short-lived Amazon EMR cluster.
C. Use Amazon Kinesis Data Analytics to read and aggregate the data hourly. Transform the data and store it in Amazon S3 by using Amazon Kinesis Data Firehose.
D. Use Amazon Kinesis Data Firehose to read and aggregate the data hourly. Transform the data and store it in Amazon S3 by using AWS Lambda.


Question # 16

A data scientist is building a linear regression model. The scientist inspects the dataset and notices that the mode of the distribution is lower than the median, and the median is lower than the mean.

Which data transformation will give the data scientist the ability to apply a linear regression model?

A. Exponential transformation
B. Logarithmic transformation
C. Polynomial transformation
D. Sinusoidal transformation
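A distribution in which mode < median < mean is right-skewed. The sketch below draws a hypothetical right-skewed sample and shows how a logarithmic transformation pulls the mean and median back toward each other:

```python
import math
import random
import statistics

random.seed(0)

# Hypothetical right-skewed sample (mode < median < mean), the
# situation described in the question.
data = [random.lognormvariate(0, 1) for _ in range(10_000)]
print(statistics.mean(data), statistics.median(data))  # mean > median

# After a logarithmic transformation the sample is roughly symmetric,
# so the mean and median nearly coincide.
transformed = [math.log(x) for x in data]
print(statistics.mean(transformed), statistics.median(transformed))
```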


Question # 17

A car company is developing a machine learning solution to detect whether a car is present in an image. The image dataset consists of one million images. Each image in the dataset is 200 pixels in height by 200 pixels in width. Each image is labeled as either having a car or not having a car.

Which architecture is MOST likely to produce a model that detects whether a car is present in an image with the highest accuracy?

A. Use a deep convolutional neural network (CNN) classifier with the images as input. Include a linear output layer that outputs the probability that an image contains a car.
B. Use a deep convolutional neural network (CNN) classifier with the images as input. Include a softmax output layer that outputs the probability that an image contains a car.
C. Use a deep multilayer perceptron (MLP) classifier with the images as input. Include a linear output layer that outputs the probability that an image contains a car.
D. Use a deep multilayer perceptron (MLP) classifier with the images as input. Include a softmax output layer that outputs the probability that an image contains a car.
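As a refresher on the output layers these options mention, a softmax layer turns raw scores (logits) into probabilities that sum to 1, which is what a probability-of-car output requires; a linear layer makes no such guarantee. A minimal sketch with made-up logits:

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max logit before
    # exponentiating, then normalize so the outputs sum to 1.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits from a classifier's final layer for one image,
# for the two classes ("car", "no car").
probs = softmax([2.0, -1.0])
print(probs)  # first entry ~0.95: the model thinks a car is present
```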


Question # 18

A university wants to develop a targeted recruitment strategy to increase new student enrollment. A data scientist gathers information about the academic performance history of students. The data scientist wants to use the data to build student profiles. The university will use the profiles to direct resources to recruit students who are likely to enroll in the university.

Which combination of steps should the data scientist take to predict whether a particular student applicant is likely to enroll in the university? (Select TWO.)

A. Use Amazon SageMaker Ground Truth to sort the data into two groups named "enrolled" or "not enrolled."
B. Use a forecasting algorithm to run predictions.
C. Use a regression algorithm to run predictions.
D. Use a classification algorithm to run predictions.
E. Use the built-in Amazon SageMaker k-means algorithm to cluster the data into two groups named "enrolled" or "not enrolled."


Question # 19

An insurance company developed a new experimental machine learning (ML) model to replace an existing model that is in production. The company must validate the quality of predictions from the new experimental model in a production environment before the company uses the new experimental model to serve general user requests. Only one model can serve user requests at a time. The company must measure the performance of the new experimental model without affecting the current live traffic.

Which solution will meet these requirements?

A. A/B testing
B. Canary release
C. Shadow deployment
D. Blue/green deployment


Question # 20

A company wants to detect credit card fraud. The company has observed that an average of 2% of credit card transactions are fraudulent. A data scientist trains a classifier on a year's worth of credit card transaction data. The classifier needs to identify the fraudulent transactions. The company wants to accurately capture as many fraudulent transactions as possible.

Which metrics should the data scientist use to optimize the classifier? (Select TWO.)

A. Specificity
B. False positive rate
C. Accuracy
D. F1 score
E. True positive rate
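To see why plain accuracy is a poor target when only about 2% of transactions are fraudulent, the sketch below computes recall (true positive rate), precision, and F1 from hypothetical confusion-matrix counts:

```python
# Hypothetical confusion-matrix counts for a classifier on data where
# fraud is rare (the numbers are illustrative, not from the exam).
tp, fn, fp, tn = 160, 40, 300, 9500

recall = tp / (tp + fn)            # true positive rate: fraud caught
precision = tp / (tp + fp)         # flagged transactions that are fraud
f1 = 2 * precision * recall / (precision + recall)
accuracy = (tp + tn) / (tp + fn + fp + tn)

# Accuracy stays high because negatives dominate, even though the
# model misses 20% of fraud; TPR and F1 expose that directly.
print(f"recall={recall:.2f} precision={precision:.2f} f1={f1:.2f} acc={accuracy:.3f}")
# recall=0.80 precision=0.35 f1=0.48 acc=0.966
```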


Question # 21

A company deployed a machine learning (ML) model on the company website to predict real estate prices. Several months after deployment, an ML engineer notices that the accuracy of the model has gradually decreased. The ML engineer needs to improve the accuracy of the model. The engineer also needs to receive notifications for any future performance issues.

Which solution will meet these requirements?

A. Perform incremental training to update the model. Activate Amazon SageMaker Model Monitor to detect model performance issues and to send notifications.
B. Use Amazon SageMaker Model Governance. Configure Model Governance to automatically adjust model hyperparameters. Create a performance threshold alarm in Amazon CloudWatch to send notifications.
C. Use Amazon SageMaker Debugger with appropriate thresholds. Configure Debugger to send Amazon CloudWatch alarms to alert the team. Retrain the model by using only data from the previous several months.
D. Use only data from the previous several months to perform incremental training to update the model. Use Amazon SageMaker Model Monitor to detect model performance issues and to send notifications.


Question # 22

A retail company wants to build a recommendation system for the company's website. The system needs to provide recommendations for existing users and needs to base those recommendations on each user's past browsing history. The system also must filter out any items that the user previously purchased.

Which solution will meet these requirements with the LEAST development effort?

A. Train a model by using a user-based collaborative filtering algorithm on Amazon SageMaker. Host the model on a SageMaker real-time endpoint. Configure an Amazon API Gateway API and an AWS Lambda function to handle real-time inference requests that the web application sends. Exclude the items that the user previously purchased from the results before sending the results back to the web application.
B. Use an Amazon Personalize PERSONALIZED_RANKING recipe to train a model. Create a real-time filter to exclude items that the user previously purchased. Create and deploy a campaign on Amazon Personalize. Use the GetPersonalizedRanking API operation to get the real-time recommendations.
C. Use an Amazon Personalize USER_PERSONALIZATION recipe to train a model. Create a real-time filter to exclude items that the user previously purchased. Create and deploy a campaign on Amazon Personalize. Use the GetRecommendations API operation to get the real-time recommendations.
D. Train a neural collaborative filtering model on Amazon SageMaker by using GPU instances. Host the model on a SageMaker real-time endpoint. Configure an Amazon API Gateway API and an AWS Lambda function to handle real-time inference requests that the web application sends. Exclude the items that the user previously purchased from the results before sending the results back to the web application.


Question # 23

A machine learning (ML) specialist is using Amazon SageMaker hyperparameter optimization (HPO) to improve a model's accuracy. The learning rate parameter is specified in the following HPO configuration: During the results analysis, the ML specialist determines that most of the training jobs had a learning rate between 0.01 and 0.1. The best result had a learning rate of less than 0.01. Training jobs need to run regularly over a changing dataset. The ML specialist needs to find a tuning mechanism that uses different learning rates more evenly from the provided range between MinValue and MaxValue.

Which solution provides the MOST accurate result?

A. Modify the HPO configuration as follows: Select the most accurate hyperparameter configuration from this HPO job.
B. Run three different HPO jobs that use different learning rates from the following intervals for MinValue and MaxValue while using the same number of training jobs for each HPO job: [0.01, 0.1], [0.001, 0.01], [0.0001, 0.001]. Select the most accurate hyperparameter configuration from these three HPO jobs.
C. Modify the HPO configuration as follows: Select the most accurate hyperparameter configuration from this training job.
D. Run three different HPO jobs that use different learning rates from the following intervals for MinValue and MaxValue. Divide the number of training jobs for each HPO job by three: [0.01, 0.1], [0.001, 0.01], [0.0001, 0.001]. Select the most accurate hyperparameter configuration from these three HPO jobs.
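The behavior described in the question, where most training jobs cluster in the top decade of the range, is characteristic of linear scaling. The sketch below (range and sample count are arbitrary) contrasts that with log-uniform sampling, which spreads draws evenly across the decades of the range:

```python
import math
import random

random.seed(42)

def sample_log_uniform(min_value, max_value):
    # Logarithmic scaling: draw the exponent uniformly, so each decade
    # in [min_value, max_value] is explored with equal probability.
    lo, hi = math.log10(min_value), math.log10(max_value)
    return 10 ** random.uniform(lo, hi)

# With linear scaling over [0.0001, 0.1], ~90% of draws land above
# 0.01; with log scaling each of the three decades gets ~one third.
draws = [sample_log_uniform(1e-4, 1e-1) for _ in range(30_000)]
below_001 = sum(d < 0.001 for d in draws) / len(draws)
print(f"fraction below 0.001: {below_001:.2f}")  # roughly one third
```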


Question # 24

A data engineer is preparing a dataset that a retail company will use to predict the number of visitors to stores. The data engineer created an Amazon S3 bucket. The engineer subscribed the S3 bucket to an AWS Data Exchange data product for general economic indicators. The data engineer wants to join the economic indicator data to an existing table in Amazon Athena to merge with the business data. All these transformations must finish running in 30-60 minutes.

Which solution will meet these requirements MOST cost-effectively?

A. Configure the AWS Data Exchange product as a producer for an Amazon Kinesis data stream. Use an Amazon Kinesis Data Firehose delivery stream to transfer the data to Amazon S3. Run an AWS Glue job that will merge the existing business data with the Athena table. Write the result set back to Amazon S3.
B. Use an S3 event on the AWS Data Exchange S3 bucket to invoke an AWS Lambda function. Program the Lambda function to use Amazon SageMaker Data Wrangler to merge the existing business data with the Athena table. Write the result set back to Amazon S3.
C. Use an S3 event on the AWS Data Exchange S3 bucket to invoke an AWS Lambda function. Program the Lambda function to run an AWS Glue job that will merge the existing business data with the Athena table. Write the results back to Amazon S3.
D. Provision an Amazon Redshift cluster. Subscribe to the AWS Data Exchange product and use the product to create an Amazon Redshift table. Merge the data in Amazon Redshift. Write the results back to Amazon S3.


Question # 25

An online delivery company wants to choose the fastest courier for each delivery at the moment an order is placed. The company wants to implement this feature for existing users and new users of its application. Data scientists have trained separate models with XGBoost for this purpose, and the models are stored in Amazon S3. There is one model for each city where the company operates. The engineers are hosting these models in Amazon EC2 for responding to the web client requests, with one instance for each model, but the instances have only a 5% utilization in CPU and memory. The operations engineers want to avoid managing unnecessary resources.

Which solution will enable the company to achieve its goal with the LEAST operational overhead?

A. Create an Amazon SageMaker notebook instance for pulling all the models from Amazon S3 using the boto3 library. Remove the existing instances and use the notebook to perform a SageMaker batch transform for performing inferences offline for all the possible users in all the cities. Store the results in different files in Amazon S3. Point the web client to the files.
B. Prepare an Amazon SageMaker Docker container based on the open-source multi-model server. Remove the existing instances and create a multi-model endpoint in SageMaker instead, pointing to the S3 bucket containing all the models. Invoke the endpoint from the web client at runtime, specifying the TargetModel parameter according to the city of each request.
C. Keep only a single EC2 instance for hosting all the models. Install a model server in the instance and load each model by pulling it from Amazon S3. Integrate the instance with the web client using Amazon API Gateway for responding to the requests in real time, specifying the target resource according to the city of each request.
D. Prepare a Docker container based on the prebuilt images in Amazon SageMaker. Replace the existing instances with separate SageMaker endpoints, one for each city where the company operates. Invoke the endpoints from the web client, specifying the URL and EndpointName parameter according to the city of each request.


Question # 26

A company is using Amazon Polly to translate plaintext documents to speech for automated company announcements. However, company acronyms are being mispronounced in the current documents. How should a Machine Learning Specialist address this issue for future documents?

A. Convert current documents to SSML with pronunciation tags
B. Create an appropriate pronunciation lexicon.
C. Output speech marks to guide in pronunciation
D. Use Amazon Lex to preprocess the text files for pronunciation
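For background on option B: Amazon Polly pronunciation lexicons are W3C PLS (Pronunciation Lexicon Specification) documents, and an alias entry expands an acronym to its spoken form. A sketch that builds such a document (the acronym mapping and any lexicon name you register it under are hypothetical):

```python
def acronym_lexicon(acronyms, lang="en-US"):
    """Build a PLS lexicon document that expands each acronym grapheme
    to a spoken alias, suitable for uploading to Amazon Polly."""
    entries = "".join(
        "<lexeme><grapheme>%s</grapheme><alias>%s</alias></lexeme>" % (g, a)
        for g, a in acronyms.items()
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<lexicon version="1.0"'
        ' xmlns="http://www.w3.org/2005/01/pronunciation-lexicon"'
        ' alphabet="ipa" xml:lang="%s">%s</lexicon>' % (lang, entries)
    )
```

The resulting document can be registered with Polly's PutLexicon operation and then referenced by name in speech synthesis requests.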


Question # 27

A company wants to predict the classification of documents that are created from an application. New documents are saved to an Amazon S3 bucket every 3 seconds. The company has developed three versions of a machine learning (ML) model within Amazon SageMaker to classify document text. The company wants to deploy these three versions to predict the classification of each document. Which approach will meet these requirements with the LEAST operational overhead?

A. Configure an S3 event notification that invokes an AWS Lambda function when new documents are created. Configure the Lambda function to create three SageMaker batch transform jobs, one batch transform job for each model for each document.
B. Deploy all the models to a single SageMaker endpoint. Treat each model as a production variant. Configure an S3 event notification that invokes an AWS Lambda function when new documents are created. Configure the Lambda function to call each production variant and return the results of each model.
C. Deploy each model to its own SageMaker endpoint. Configure an S3 event notification that invokes an AWS Lambda function when new documents are created. Configure the Lambda function to call each endpoint and return the results of each model.
D. Deploy each model to its own SageMaker endpoint. Create three AWS Lambda functions. Configure each Lambda function to call a different endpoint and return the results. Configure three S3 event notifications to invoke the Lambda functions when new documents are created.
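To make the single-endpoint pattern in option B concrete: the Lambda function receives the S3 event notification, reads the new document, and calls each production variant by setting TargetVariant on invoke_endpoint. A sketch with hypothetical endpoint and variant names:

```python
def parse_s3_event(event):
    """Extract (bucket, key) pairs from an S3 event notification payload."""
    return [
        (r["s3"]["bucket"]["name"], r["s3"]["object"]["key"])
        for r in event.get("Records", [])
    ]


def classify_with_all_variants(document, variants=("v1", "v2", "v3")):
    """Call each production variant of one endpoint and collect results.

    Endpoint and variant names are assumptions for illustration.
    """
    import boto3

    runtime = boto3.client("sagemaker-runtime")
    results = {}
    for variant in variants:
        response = runtime.invoke_endpoint(
            EndpointName="doc-classifier",  # hypothetical endpoint hosting 3 variants
            TargetVariant=variant,
            ContentType="text/plain",
            Body=document,
        )
        results[variant] = response["Body"].read()
    return results
```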


Question # 28

A company wants to create an artificial intelligence (AI) yoga instructor that can lead large classes of students. The company needs to create a feature that can accurately count the number of students who are in a class. The company also needs a feature that can differentiate students who are performing a yoga stretch correctly from students who are performing a stretch incorrectly. To determine whether students are performing a stretch correctly, the solution needs to measure the location and angle of each student's arms and legs. A data scientist must use Amazon SageMaker to process video footage of a yoga class by extracting image frames and applying computer vision models. Which combination of models will meet these requirements with the LEAST effort? (Select TWO.)

A. Image Classification
B. Optical Character Recognition (OCR)
C. Object Detection
D. Pose estimation
E. Image Generative Adversarial Networks (GANs)


Question # 29

A data scientist is working on a public sector project for an urban traffic system. While studying the traffic patterns, it is clear to the data scientist that the traffic behavior at each light is correlated, subject to a small stochastic error term. The data scientist must model the traffic behavior to analyze the traffic patterns and reduce congestion. How will the data scientist MOST effectively model the problem?

A. The data scientist should obtain a correlated equilibrium policy by formulating this problem as a multi-agent reinforcement learning problem.
B. The data scientist should obtain the optimal equilibrium policy by formulating this problem as a single-agent reinforcement learning problem.
C. Rather than finding an equilibrium policy, the data scientist should obtain accurate predictors of traffic flow by using historical data through a supervised learning approach.
D. Rather than finding an equilibrium policy, the data scientist should obtain accurate predictors of traffic flow by using unlabeled simulated data representing the new traffic patterns in the city and applying an unsupervised learning approach.


Question # 30

An ecommerce company wants to use machine learning (ML) to monitor fraudulent transactions on its website. The company is using Amazon SageMaker to research, train, deploy, and monitor the ML models. The historical transactions data is in a .csv file that is stored in Amazon S3. The data contains features such as the user's IP address, navigation time, average time on each page, and the number of clicks for each session. There is no label in the data to indicate if a transaction is anomalous. Which models should the company use in combination to detect anomalous transactions? (Select TWO.)

A. IP Insights
B. K-nearest neighbors (k-NN)
C. Linear learner with a logistic function
D. Random Cut Forest (RCF)
E. XGBoost


Question # 31

A company wants to predict stock market price trends. The company stores stock market data each business day in Amazon S3 in Apache Parquet format. The company stores 20 GB of data each day for each stock code. A data engineer must use Apache Spark to perform batch preprocessing data transformations quickly so the company can complete prediction jobs before the stock market opens the next day. The company plans to track more stock market codes and needs a way to scale the preprocessing data transformations. Which AWS service or feature will meet these requirements with the LEAST development effort over time?

A. AWS Glue jobs
B. Amazon EMR cluster
C. Amazon Athena
D. AWS Lambda


Question # 32

A company wants to forecast the daily price of newly launched products based on 3 years of data for older product prices, sales, and rebates. The time-series data has irregular timestamps and is missing some values. A data scientist must build a dataset to replace the missing values. The data scientist needs a solution that resamples the data daily and exports the data for further modeling. Which solution will meet these requirements with the LEAST implementation effort?

A. Use Amazon EMR Serverless with PySpark.
B. Use AWS Glue DataBrew.
C. Use Amazon SageMaker Studio Data Wrangler.
D. Use Amazon SageMaker Studio Notebook with Pandas.
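For intuition about the resampling step itself, independent of which tool performs it: daily resampling with forward-fill takes an irregular series and emits one value per calendar day, carrying the last known value across gaps. A plain-Python sketch of that behavior:

```python
from datetime import date, timedelta  # `date` is used to key the input mapping


def resample_daily(series):
    """Resample an irregular {date: value} mapping to daily frequency,
    forward-filling missing days with the last observed value."""
    days = sorted(series)
    out = {}
    day, last = days[0], series[days[0]]
    while day <= days[-1]:
        last = series.get(day, last)  # keep the previous value on missing days
        out[day] = last
        day += timedelta(days=1)
    return out
```

This is the same transformation that a Data Wrangler resample transform or pandas' `resample('D').ffill()` would apply at scale.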


Question # 33

A company operates large cranes at a busy port. The company plans to use machine learning (ML) for predictive maintenance of the cranes to avoid unexpected breakdowns and to improve productivity. The company already uses sensor data from each crane to monitor the health of the cranes in real time. The sensor data includes rotation speed, tension, energy consumption, vibration, pressure, and temperature for each crane. The company contracts AWS ML experts to implement an ML solution. Which potential findings would indicate that an ML-based solution is suitable for this scenario? (Select TWO.)

A. The historical sensor data does not include a significant number of data points and attributes for certain time periods.
B. The historical sensor data shows that simple rule-based thresholds can predict crane failures.
C. The historical sensor data contains failure data for only one type of crane model that is in operation and lacks failure data for most other types of cranes that are in operation.
D. The historical sensor data from the cranes is available with high granularity for the last 3 years.
E. The historical sensor data contains the most common types of crane failures that the company wants to predict.


Question # 34

A company is creating an application to identify, count, and classify animal images that are uploaded to the company's website. The company is using the Amazon SageMaker image classification algorithm with an ImageNetV2 convolutional neural network (CNN). The solution works well for most animal images but does not recognize many animal species that are less common. The company obtains 10,000 labeled images of less common animal species and stores the images in Amazon S3. A machine learning (ML) engineer needs to incorporate the images into the model by using Pipe mode in SageMaker. Which combination of steps should the ML engineer take to train the model? (Choose two.)

A. Use a ResNet model. Initiate full training mode by initializing the network with random weights.
B. Use an Inception model that is available with the SageMaker image classification algorithm.
C. Create a .lst file that contains a list of image files and corresponding class labels. Upload the .lst file to Amazon S3.
D. Initiate transfer learning. Train the model by using the images of less common species.
E. Use an augmented manifest file in JSON Lines format.


Question # 35

A machine learning (ML) specialist is using the Amazon SageMaker DeepAR forecasting algorithm to train a model on CPU-based Amazon EC2 On-Demand instances. The model currently takes multiple hours to train. The ML specialist wants to decrease the training time of the model. Which approaches will meet this requirement? (Select TWO.)

A. Replace On-Demand Instances with Spot Instances
B. Configure model auto scaling dynamically to adjust the number of instances automatically.
C. Replace CPU-based EC2 instances with GPU-based EC2 instances.
D. Use multiple training instances.
E. Use a pre-trained version of the model. Run incremental training.


Question # 36

A manufacturing company has a production line with sensors that collect hundreds of quality metrics. The company has stored sensor data and manual inspection results in a data lake for several months. To automate quality control, the machine learning team must build an automated mechanism that determines whether the produced goods are good quality, replacement market quality, or scrap quality based on the manual inspection results. Which modeling approach will deliver the MOST accurate prediction of product quality?

A. Amazon SageMaker DeepAR forecasting algorithm
B. Amazon SageMaker XGBoost algorithm
C. Amazon SageMaker Latent Dirichlet Allocation (LDA) algorithm
D. A convolutional neural network (CNN) and ResNet


Question # 37

A data scientist at a financial services company used Amazon SageMaker to train and deploy a model that predicts loan defaults. The model analyzes new loan applications and predicts the risk of loan default. To train the model, the data scientist manually extracted loan data from a database. The data scientist performed the model training and deployment steps in a Jupyter notebook that is hosted on SageMaker Studio notebooks. The model's prediction accuracy is decreasing over time. Which combination of steps is the MOST operationally efficient way for the data scientist to maintain the model's accuracy? (Select TWO.)

A. Use SageMaker Pipelines to create an automated workflow that extracts fresh data, trains the model, and deploys a new version of the model.
B. Configure SageMaker Model Monitor with an accuracy threshold to check for model drift. Initiate an Amazon CloudWatch alarm when the threshold is exceeded. Connect the workflow in SageMaker Pipelines with the CloudWatch alarm to automatically initiate retraining.
C. Store the model predictions in Amazon S3. Create a daily SageMaker Processing job that reads the predictions from Amazon S3, checks for changes in model prediction accuracy, and sends an email notification if a significant change is detected.
D. Rerun the steps in the Jupyter notebook that is hosted on SageMaker Studio notebooks to retrain the model and redeploy a new version of the model.
E. Export the training and deployment code from the SageMaker Studio notebooks into a Python script. Package the script into an Amazon Elastic Container Service (Amazon ECS) task that an AWS Lambda function can initiate.


Question # 38

A data scientist uses Amazon SageMaker Data Wrangler to define and perform transformations and feature engineering on historical data. The data scientist saves the transformations to SageMaker Feature Store. The historical data is periodically uploaded to an Amazon S3 bucket. The data scientist needs to transform the new historic data and add it to the online feature store. The data scientist needs to prepare the ...historic data for training and inference by using native integrations. Which solution will meet these requirements with the LEAST development effort?

A. Use AWS Lambda to run a predefined SageMaker pipeline to perform the transformations on each new dataset that arrives in the S3 bucket.
B. Run an AWS Step Functions step and a predefined SageMaker pipeline to perform the transformations on each new dataset that arrives in the S3 bucket.
C. Use Apache Airflow to orchestrate a set of predefined transformations on each new dataset that arrives in the S3 bucket.
D. Configure Amazon EventBridge to run a predefined SageMaker pipeline to perform the transformations when new data is detected in the S3 bucket.


Question # 39

A financial services company wants to automate its loan approval process by building a machine learning (ML) model. Each loan data point contains credit history from a third-party data source and demographic information about the customer. Each loan approval prediction must come with a report that contains an explanation for why the customer was approved for a loan or was denied a loan. The company will use Amazon SageMaker to build the model. Which solution will meet these requirements with the LEAST development effort?

A. Use SageMaker Model Debugger to automatically debug the predictions, generate the explanation, and attach the explanation report.
B. Use AWS Lambda to provide feature importance and partial dependence plots. Use the plots to generate and attach the explanation report.
C. Use SageMaker Clarify to generate the explanation report. Attach the report to the predicted results.
D. Use custom Amazon CloudWatch metrics to generate the explanation report. Attach the report to the predicted results.


Question # 40

A manufacturing company has structured and unstructured data stored in an Amazon S3 bucket. A Machine Learning Specialist wants to use SQL to run queries on this data. Which solution requires the LEAST effort to be able to query this data?

A. Use AWS Data Pipeline to transform the data and Amazon RDS to run queries.
B. Use AWS Glue to catalogue the data and Amazon Athena to run queries.
C. Use AWS Batch to run ETL on the data and Amazon Aurora to run the queries.
D. Use AWS Lambda to transform the data and Amazon Kinesis Data Analytics to run queries.


Question # 41

A data scientist has been running an Amazon SageMaker notebook instance for a few weeks. During this time, a new version of Jupyter Notebook was released along with additional software updates. The security team mandates that all running SageMaker notebook instances use the latest security and software updates provided by SageMaker. How can the data scientist meet these requirements?

A. Call the CreateNotebookInstanceLifecycleConfig API operation
B. Create a new SageMaker notebook instance and mount the Amazon Elastic Block Store(Amazon EBS) volume from the original instance
C. Stop and then restart the SageMaker notebook instance
D. Call the UpdateNotebookInstanceLifecycleConfig API operation


Question # 42

A large company has developed a BI application that generates reports and dashboards using data collected from various operational metrics. The company wants to provide executives with an enhanced experience so they can use natural language to get data from the reports. The company wants the executives to be able to ask questions using written and spoken interfaces. Which combination of services can be used to build this conversational interface? (Select THREE.)

A. Alexa for Business
B. Amazon Connect
C. Amazon Lex
D. Amazon Polly
E. Amazon Comprehend
F. Amazon Transcribe


Question # 43

A manufacturing company needs to identify returned smartphones that have been damaged by moisture. The company has an automated process that produces 2,000 diagnostic values for each phone. The database contains more than five million phone evaluations. The evaluation process is consistent, and there are no missing values in the data. A machine learning (ML) specialist has trained an Amazon SageMaker linear learner ML model to classify phones as moisture damaged or not moisture damaged by using all available features. The model's F1 score is 0.6. What changes in model training would MOST likely improve the model's F1 score? (Select TWO.)

A. Continue to use the SageMaker linear learner algorithm. Reduce the number of features with the SageMaker principal component analysis (PCA) algorithm.
B. Continue to use the SageMaker linear learner algorithm. Reduce the number of features with the scikit-learn multi-dimensional scaling (MDS) algorithm.
C. Continue to use the SageMaker linear learner algorithm. Set the predictor type to regressor.
D. Use the SageMaker k-means algorithm with k of less than 1,000 to train the model.
E. Use the SageMaker k-nearest neighbors (k-NN) algorithm. Set a dimension reduction target of less than 1,000 to train the model.


Question # 44

A beauty supply store wants to understand some characteristics of visitors to the store. The store has security video recordings from the past several years. The store wants to generate a report of hourly visitors from the recordings. The report should group visitors by hair style and hair color. Which solution will meet these requirements with the LEAST amount of effort?

A. Use an object detection algorithm to identify a visitor's hair in video frames. Pass the identified hair to a ResNet-50 algorithm to determine hair style and hair color.
B. Use an object detection algorithm to identify a visitor's hair in video frames. Pass the identified hair to an XGBoost algorithm to determine hair style and hair color.
C. Use a semantic segmentation algorithm to identify a visitor's hair in video frames. Pass the identified hair to a ResNet-50 algorithm to determine hair style and hair color.
D. Use a semantic segmentation algorithm to identify a visitor's hair in video frames. Pass the identified hair to an XGBoost algorithm to determine hair style and hair color.


Question # 45

Each morning, a data scientist at a rental car company creates insights about the previous day's rental car reservation demands. The company needs to automate this process by streaming the data to Amazon S3 in near real time. The solution must detect high-demand rental cars at each of the company's locations. The solution also must create a visualization dashboard that automatically refreshes with the most recent data. Which solution will meet these requirements with the LEAST development time?

A. Use Amazon Kinesis Data Firehose to stream the reservation data directly to Amazon S3. Detect high-demand outliers by using Amazon QuickSight ML Insights. Visualize the data in QuickSight.
B. Use Amazon Kinesis Data Streams to stream the reservation data directly to Amazon S3. Detect high-demand outliers by using the Random Cut Forest (RCF) trained model in Amazon SageMaker. Visualize the data in Amazon QuickSight.
C. Use Amazon Kinesis Data Firehose to stream the reservation data directly to Amazon S3. Detect high-demand outliers by using the Random Cut Forest (RCF) trained model in Amazon SageMaker. Visualize the data in Amazon QuickSight.
D. Use Amazon Kinesis Data Streams to stream the reservation data directly to Amazon S3. Detect high-demand outliers by using Amazon QuickSight ML Insights. Visualize the data in QuickSight.


Question # 46

A company wants to conduct targeted marketing to sell solar panels to homeowners. The company wants to use machine learning (ML) technologies to identify which houses already have solar panels. The company has collected 8,000 satellite images as training data and will use Amazon SageMaker Ground Truth to label the data. The company has a small internal team that is working on the project. The internal team has no ML expertise and no ML experience. Which solution will meet these requirements with the LEAST amount of effort from the internal team?

A. Set up a private workforce that consists of the internal team. Use the private workforce and the SageMaker Ground Truth active learning feature to label the data. Use Amazon Rekognition Custom Labels for model training and hosting.
B. Set up a private workforce that consists of the internal team. Use the private workforce to label the data. Use Amazon Rekognition Custom Labels for model training and hosting.
C. Set up a private workforce that consists of the internal team. Use the private workforce and the SageMaker Ground Truth active learning feature to label the data. Use the SageMaker Object Detection algorithm to train a model. Use SageMaker batch transform for inference.
D. Set up a public workforce. Use the public workforce to label the data. Use the SageMaker Object Detection algorithm to train a model. Use SageMaker batch transform for inference.


Question # 47

A finance company needs to forecast the price of a commodity. The company has compiled a dataset of historical daily prices. A data scientist must train various forecasting models on 80% of the dataset and must validate the efficacy of those models on the remaining 20% of the dataset. How should the data scientist split the dataset into a training dataset and a validation dataset to compare model performance?

A. Pick a date so that 80% of the data points precede the date. Assign that group of data points as the training dataset. Assign all the remaining data points to the validation dataset.
B. Pick a date so that 80% of the data points occur after the date. Assign that group of data points as the training dataset. Assign all the remaining data points to the validation dataset.
C. Starting from the earliest date in the dataset, pick eight data points for the training dataset and two data points for the validation dataset. Repeat this stratified sampling until no data points remain.
D. Sample data points randomly without replacement so that 80% of the data points are in the training dataset. Assign all the remaining data points to the validation dataset.
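The chronological split this question describes can be sketched in a few lines. Sorting by date before cutting keeps every training observation earlier than every validation observation, which a random split would not guarantee for time-series data:

```python
def time_based_split(records, train_frac=0.8):
    """Split time-series records chronologically: the earliest
    `train_frac` portion becomes training data, the rest validation."""
    ordered = sorted(records, key=lambda r: r["date"])
    cut = int(len(ordered) * train_frac)
    return ordered[:cut], ordered[cut:]
```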


Question # 48

A chemical company has developed several machine learning (ML) solutions to identify chemical process abnormalities. The time series values of independent variables and the labels are available for the past 2 years and are sufficient to accurately model the problem. The regular operation label is marked as 0. The abnormal operation label is marked as 1. Process abnormalities have a significant negative effect on the company's profits. The company must avoid these abnormalities. Which metrics will indicate an ML solution that will provide the GREATEST probability of detecting an abnormality?

A. Precision = 0.91, Recall = 0.6
B. Precision = 0.61, Recall = 0.98
C. Precision = 0.7, Recall = 0.9
D. Precision = 0.98, Recall = 0.8
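As a reminder of what the two metrics trade off: recall is the probability that a true abnormality is detected, while precision is the fraction of flagged cases that are truly abnormal. A small helper makes the definitions concrete:

```python
def precision_recall(tp, fp, fn):
    """Compute precision TP/(TP+FP) and recall TP/(TP+FN) from counts
    of true positives, false positives, and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall
```

When missing an abnormality is the costly error, the metric to maximize is recall, since it directly measures the probability of detection.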


Question # 49

A machine learning (ML) specialist uploads 5 TB of data to an Amazon SageMaker Studio environment. The ML specialist performs initial data cleansing. Before the ML specialist begins to train a model, the ML specialist needs to create and view an analysis report that details potential bias in the uploaded data. Which combination of actions will meet these requirements with the LEAST operational overhead? (Choose two.)

A. Use SageMaker Clarify to automatically detect data bias.
B. Turn on the bias detection option in SageMaker Ground Truth to automatically analyze data features.
C. Use SageMaker Model Monitor to generate a bias drift report.
D. Configure SageMaker Data Wrangler to generate a bias report.
E. Use SageMaker Experiments to perform a data check.


Question # 50

A company uses sensors on devices such as motor engines and factory machines to measure parameters such as temperature and pressure. The company wants to use the sensor data to predict equipment malfunctions and reduce service outages. A machine learning (ML) specialist needs to gather the sensor data to train a model to predict device malfunctions. The ML specialist must ensure that the data does not contain outliers before training the model. How can the ML specialist meet these requirements with the LEAST operational overhead?

A. Load the data into an Amazon SageMaker Studio notebook. Calculate the first and third quartiles. Use a SageMaker Data Wrangler data flow to remove only values that are outside of those quartiles.
B. Use an Amazon SageMaker Data Wrangler bias report to find outliers in the dataset. Use a Data Wrangler data flow to remove outliers based on the bias report.
C. Use an Amazon SageMaker Data Wrangler anomaly detection visualization to find outliers in the dataset. Add a transformation to a Data Wrangler data flow to remove outliers.
D. Use Amazon Lookout for Equipment to find and remove outliers from the dataset.


Question # 51

A data scientist wants to use Amazon Forecast to build a forecasting model for inventory demand for a retail company. The company has provided a dataset of historic inventory demand for its products as a .csv file stored in an Amazon S3 bucket. The table below shows a sample of the dataset. How should the data scientist transform the data?

A. Use ETL jobs in AWS Glue to separate the dataset into a target time series dataset and an item metadata dataset. Upload both datasets as .csv files to Amazon S3.
B. Use a Jupyter notebook in Amazon SageMaker to separate the dataset into a related time series dataset and an item metadata dataset. Upload both datasets as tables in Amazon Aurora.
C. Use AWS Batch jobs to separate the dataset into a target time series dataset, a related time series dataset, and an item metadata dataset. Upload them directly to Forecast from a local machine.
D. Use a Jupyter notebook in Amazon SageMaker to transform the data into the optimized protobuf recordIO format. Upload the dataset in this format to Amazon S3.


Question # 52

The chief editor for a product catalog wants the research and development team to build a machine learning system that can be used to detect whether or not individuals in a collection of images are wearing the company's retail brand. The team has a set of training data. Which machine learning algorithm should the researchers use that BEST meets their requirements?

A. Latent Dirichlet Allocation (LDA)
B. Recurrent neural network (RNN)
C. K-means
D. Convolutional neural network (CNN)


Question # 53

A wildlife research company has a set of images of lions and cheetahs. The company created a dataset of the images. The company labeled each image with a binary label that indicates whether an image contains a lion or a cheetah. The company wants to train a model to identify whether new images contain a lion or a cheetah. Which Amazon SageMaker algorithm will meet this requirement?

A. XGBoost
B. Image Classification - TensorFlow
C. Object Detection - TensorFlow
D. Semantic segmentation - MXNet


Question # 54

A company's data scientist has trained a new machine learning model that performs better on test data than the company's existing model performs in the production environment. The data scientist wants to replace the existing model that runs on an Amazon SageMaker endpoint in the production environment. However, the company is concerned that the new model might not work well on the production environment data. The data scientist needs to perform A/B testing in the production environment to evaluate whether the new model performs well on production environment data. Which combination of steps must the data scientist take to perform the A/B testing? (Choose two.)

A. Create a new endpoint configuration that includes a production variant for each of the two models.
B. Create a new endpoint configuration that includes two target variants that point to different endpoints.
C. Deploy the new model to the existing endpoint.
D. Update the existing endpoint to activate the new model.
E. Update the existing endpoint to use the new endpoint configuration.
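For context on the production-variant approach: a single endpoint configuration can register both models as ProductionVariants with traffic weights, so live traffic is split between them. A sketch that builds the CreateEndpointConfig request body (the names, instance type, and 90/10 split are hypothetical):

```python
def ab_endpoint_config(config_name, existing_model, new_model, traffic_to_new=0.1):
    """Build a CreateEndpointConfig request body with two production
    variants that share one endpoint's traffic."""
    return {
        "EndpointConfigName": config_name,
        "ProductionVariants": [
            {
                "VariantName": "existing-model",
                "ModelName": existing_model,
                "InitialInstanceCount": 1,
                "InstanceType": "ml.m5.large",
                "InitialVariantWeight": 1.0 - traffic_to_new,
            },
            {
                "VariantName": "new-model",
                "ModelName": new_model,
                "InitialInstanceCount": 1,
                "InstanceType": "ml.m5.large",
                "InitialVariantWeight": traffic_to_new,
            },
        ],
    }
```

The resulting dict can be passed to `boto3.client("sagemaker").create_endpoint_config(**config)`, after which updating the existing endpoint to use the new configuration shifts traffic without downtime.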


Question # 55

A data science team is working with a tabular dataset that the team stores in Amazon S3. The team wants to experiment with different feature transformations such as categorical feature encoding. Then the team wants to visualize the resulting distribution of the dataset. After the team finds an appropriate set of feature transformations, the team wants to automate the workflow for feature transformations. Which solution will meet these requirements with the MOST operational efficiency?

A. Use Amazon SageMaker Data Wrangler preconfigured transformations to explore feature transformations. Use SageMaker Data Wrangler templates for visualization. Export the feature processing workflow to a SageMaker pipeline for automation.
B. Use an Amazon SageMaker notebook instance to experiment with different feature transformations. Save the transformations to Amazon S3. Use Amazon QuickSight for visualization. Package the feature processing steps into an AWS Lambda function for automation.
C. Use AWS Glue Studio with custom code to experiment with different feature transformations. Save the transformations to Amazon S3. Use Amazon QuickSight for visualization. Package the feature processing steps into an AWS Lambda function for automation.
D. Use Amazon SageMaker Data Wrangler preconfigured transformations to experiment with different feature transformations. Save the transformations to Amazon S3. Use Amazon QuickSight for visualization. Package each feature transformation step into a separate AWS Lambda function. Use AWS Step Functions for workflow automation.


Question # 56

A Machine Learning Specialist is training a model to identify the make and model of vehicles in images. The Specialist wants to use transfer learning and an existing model trained on images of general objects. The Specialist collated a large custom dataset of pictures containing different vehicle makes and models. What should the Specialist do to initialize the model to retrain it with the custom data?

A. Initialize the model with random weights in all layers, including the last fully connected layer.
B. Initialize the model with pre-trained weights in all layers, and replace the last fully connected layer.
C. Initialize the model with random weights in all layers, and replace the last fully connected layer.
D. Initialize the model with pre-trained weights in all layers, including the last fully connected layer.


Question # 57

A retail company is ingesting purchasing records from its network of 20,000 stores to Amazon S3 by using Amazon Kinesis Data Firehose. The company uses a small, server-based application in each store to send the data to AWS over the internet. The company uses this data to train a machine learning model that is retrained each day. The company's data science team has identified existing attributes on these records that could be combined to create an improved model. Which change will create the required transformed records with the LEAST operational overhead?

A. Create an AWS Lambda function that can transform the incoming records. Enable data transformation on the ingestion Kinesis Data Firehose delivery stream. Use the Lambda function as the invocation target.
B. Deploy an Amazon EMR cluster that runs Apache Spark and includes the transformation logic. Use Amazon EventBridge (Amazon CloudWatch Events) to schedule an AWS Lambda function to launch the cluster each day and transform the records that accumulate in Amazon S3. Deliver the transformed records to Amazon S3.
C. Deploy an Amazon S3 File Gateway in the stores. Update the in-store software to deliver data to the S3 File Gateway. Use a scheduled daily AWS Glue job to transform the data that the S3 File Gateway delivers to Amazon S3.
D. Launch a fleet of Amazon EC2 instances that include the transformation logic. Configure the EC2 instances with a daily cron job to transform the records that accumulate in Amazon S3. Deliver the transformed records to Amazon S3.
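To make option A concrete: a Firehose data-transformation Lambda receives base64-encoded records and must return each one with a recordId, a result status, and re-encoded data. A sketch, where the derived `revenue` attribute and the `units`/`unit_price` fields are hypothetical stand-ins for whatever attributes the data science team wants to combine:

```python
import base64
import json


def lambda_handler(event, context):
    """Kinesis Data Firehose transformation handler: decode each record,
    add a derived attribute, and return it re-encoded."""
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        # Hypothetical combined attribute built from existing fields.
        payload["revenue"] = payload.get("units", 0) * payload.get("unit_price", 0.0)
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",  # or "Dropped" / "ProcessingFailed"
            "data": base64.b64encode(json.dumps(payload).encode()).decode(),
        })
    return {"records": output}
```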


Question # 58

A company wants to enhance audits for its machine learning (ML) systems. The auditing system must be able to perform metadata analysis on the features that the ML models use. The audit solution must generate a report that analyzes the metadata. The solution also must be able to set the data sensitivity and authorship of features. Which solution will meet these requirements with the LEAST development effort?

A. Use Amazon SageMaker Feature Store to select the features. Create a data flow to perform feature-level metadata analysis. Create an Amazon DynamoDB table to store feature-level metadata. Use Amazon QuickSight to analyze the metadata.
B. Use Amazon SageMaker Feature Store to set feature groups for the current features that the ML models use. Assign the required metadata for each feature. Use SageMaker Studio to analyze the metadata.
C. Use Amazon SageMaker Feature Store to apply custom algorithms to analyze the feature-level metadata that the company requires. Create an Amazon DynamoDB table to store feature-level metadata. Use Amazon QuickSight to analyze the metadata.
D. Use Amazon SageMaker Feature Store to set feature groups for the current features that the ML models use. Assign the required metadata for each feature. Use Amazon QuickSight to analyze the metadata.


Question # 59

A company's machine learning (ML) specialist is building a computer vision model to classify 10 different traffic signs. The company has stored 100 images of each class in Amazon S3, and the company has another 10,000 unlabeled images. All the images come from dash cameras and are a size of 224 pixels x 224 pixels. After several training runs, the model is overfitting on the training data.
Which actions should the ML specialist take to address this problem? (Select TWO.)

A. Use Amazon SageMaker Ground Truth to label the unlabeled images.
B. Use image preprocessing to transform the images into grayscale images.
C. Use data augmentation to rotate and translate the labeled images.
D. Replace the activation of the last layer with a sigmoid.
E. Use the Amazon SageMaker k-nearest neighbors (k-NN) algorithm to label the unlabeled images.
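Option C (data augmentation) fights overfitting by expanding the small labeled set with transformed copies of existing images. A toy sketch of two common augmentations on an image stored as a nested list of pixel rows; a real pipeline would use a library such as Pillow or torchvision:

```python
def rotate_90(image):
    """Rotate an image (list of pixel rows) 90 degrees clockwise."""
    return [list(row) for row in zip(*image[::-1])]

def translate_right(image, shift, fill=0):
    """Shift pixels right by `shift` columns, padding the left with `fill`."""
    return [[fill] * shift + row[:len(row) - shift] for row in image]

img = [[1, 2],
       [3, 4]]
print(rotate_90(img))            # [[3, 1], [4, 2]]
print(translate_right(img, 1))   # [[0, 1], [0, 3]]
```

Each augmented copy keeps its original label, so the model sees more variation per class without any new labeling effort.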


Question # 60

An online retailer collects the following data on customer orders: demographics, behaviors, location, shipment progress, and delivery time. A data scientist joins all the collected datasets. The result is a single dataset that includes 980 variables.
The data scientist must develop a machine learning (ML) model to identify groups of customers who are likely to respond to a marketing campaign.
Which combination of algorithms should the data scientist use to meet this requirement? (Select TWO.)

A. Latent Dirichlet Allocation (LDA)
B. K-means
C. Semantic segmentation
D. Principal component analysis (PCA)
E. Factorization machines (FM)


Question # 61

A data engineer needs to provide a team of data scientists with the appropriate dataset to run machine learning training jobs. The data will be stored in Amazon S3. The data engineer is obtaining the data from an Amazon Redshift database and is using join queries to extract a single tabular dataset. A portion of the schema is as follows:
TransactionTimestamp (Timestamp)
CardName (Varchar)
CardNo (Varchar)
The data engineer must provide the data so that any row with a CardNo value of NULL is removed. Also, the TransactionTimestamp column must be separated into a TransactionDate column and a TransactionTime column. Finally, the CardName column must be renamed to NameOnCard.
The data will be extracted on a monthly basis and will be loaded into an S3 bucket. The solution must minimize the effort that is needed to set up infrastructure for the ingestion and transformation. The solution must be automated and must minimize the load on the Amazon Redshift cluster.
Which solution meets these requirements?

A. Set up an Amazon EMR cluster. Create an Apache Spark job to read the data from the Amazon Redshift cluster and transform the data. Load the data into the S3 bucket. Schedule the job to run monthly.
B. Set up an Amazon EC2 instance with a SQL client tool, such as SQL Workbench/J, to query the data from the Amazon Redshift cluster directly. Export the resulting dataset into a file. Upload the file into the S3 bucket. Perform these tasks monthly.
C. Set up an AWS Glue job that has the Amazon Redshift cluster as the source and the S3 bucket as the destination. Use the built-in transforms Filter, Map, and RenameField to perform the required transformations. Schedule the job to run monthly.
D. Use Amazon Redshift Spectrum to run a query that writes the data directly to the S3 bucket. Create an AWS Lambda function to run the query monthly.
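The built-in Glue transforms named in option C (Filter, Map, RenameField) correspond to simple row-level operations. A plain-Python sketch of the same logic on dict records, just to make the required transformation concrete; a sample timestamp format is assumed:

```python
def transform(rows):
    """Drop NULL CardNo rows, split TransactionTimestamp, rename CardName."""
    out = []
    for row in rows:
        if row["CardNo"] is None:  # Filter: remove rows with NULL CardNo
            continue
        # Map: split "YYYY-MM-DD HH:MM:SS" into date and time parts
        date_part, time_part = row["TransactionTimestamp"].split(" ")
        out.append({
            "TransactionDate": date_part,
            "TransactionTime": time_part,
            "NameOnCard": row["CardName"],  # RenameField: CardName -> NameOnCard
            "CardNo": row["CardNo"],
        })
    return out

rows = [
    {"TransactionTimestamp": "2024-07-01 09:30:00", "CardName": "J SMITH", "CardNo": "1234"},
    {"TransactionTimestamp": "2024-07-01 09:31:00", "CardName": "A DOE", "CardNo": None},
]
print(transform(rows))
```

In the actual Glue job these steps run serverlessly on a schedule, which is why option C avoids standing up EMR or EC2 infrastructure.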


Question # 62

A manufacturing company wants to create a machine learning (ML) model to predict when equipment is likely to fail. A data science team already constructed a deep learning model by using TensorFlow and a custom Python script in a local environment. The company wants to use Amazon SageMaker to train the model.
Which TensorFlow estimator configuration will train the model MOST cost-effectively?

A. Turn on SageMaker Training Compiler by adding compiler_config=TrainingCompilerConfig() as a parameter. Pass the script to the estimator in the call to the TensorFlow fit() method.
B. Turn on SageMaker Training Compiler by adding compiler_config=TrainingCompilerConfig() as a parameter. Turn on managed spot training by setting the use_spot_instances parameter to True. Pass the script to the estimator in the call to the TensorFlow fit() method.
C. Adjust the training script to use distributed data parallelism. Specify appropriate values for the distribution parameter. Pass the script to the estimator in the call to the TensorFlow fit() method.
D. Turn on SageMaker Training Compiler by adding compiler_config=TrainingCompilerConfig() as a parameter. Set the MaxWaitTimeInSeconds parameter to be equal to the MaxRuntimeInSeconds parameter. Pass the script to the estimator in the call to the TensorFlow fit() method.
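Option B stacks two levers: Training Compiler shortens the job, and managed spot training discounts the compute. A rough cost sketch, where the spot discount and the compiler speedup are illustrative assumptions, not published figures:

```python
def training_cost(on_demand_rate, hours, spot_discount=0.0, speedup=1.0):
    """Estimated cost of a training job.

    spot_discount: fraction saved by managed spot training (assumed).
    speedup: wall-clock speedup from SageMaker Training Compiler (assumed).
    """
    return on_demand_rate * (hours / speedup) * (1 - spot_discount)

baseline = training_cost(10.0, 8)  # on-demand, uncompiled
optimized = training_cost(10.0, 8, spot_discount=0.7, speedup=1.5)
print(baseline, round(optimized, 2))  # 80.0 16.0
```

Under these assumed numbers the combined configuration cuts the bill by a factor of five, which is the intuition behind choosing B over compiler-only option A.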


Question # 63

A data scientist obtains a tabular dataset that contains 150 correlated features with different ranges to build a regression model. The data scientist needs to achieve more efficient model training by implementing a solution that minimizes impact on the model's performance. The data scientist decides to perform a principal component analysis (PCA) preprocessing step to reduce the number of features to a smaller set of independent features before the data scientist uses the new features in the regression model.
Which preprocessing step will meet these requirements?

A. Use the Amazon SageMaker built-in algorithm for PCA on the dataset to transform the data.
B. Load the data into Amazon SageMaker Data Wrangler. Scale the data with a Min Max Scaler transformation step. Use the SageMaker built-in algorithm for PCA on the scaled dataset to transform the data.
C. Reduce the dimensionality of the dataset by removing the features that have the highest correlation. Load the data into Amazon SageMaker Data Wrangler. Perform a Standard Scaler transformation step to scale the data. Use the SageMaker built-in algorithm for PCA on the scaled dataset to transform the data.
D. Reduce the dimensionality of the dataset by removing the features that have the lowest correlation. Load the data into Amazon SageMaker Data Wrangler. Perform a Min Max Scaler transformation step to scale the data. Use the SageMaker built-in algorithm for PCA on the scaled dataset to transform the data.
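The reason option B scales before PCA is that PCA is driven by variance: features with larger ranges would dominate the principal components. A stdlib-only sketch of min-max scaling one feature column to [0, 1]:

```python
def min_max_scale(values):
    """Scale a feature column to [0, 1] so no feature dominates PCA."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # Constant column: no spread to normalize
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

scaled = min_max_scale([10, 20, 30, 40])
print(scaled)  # first value 0.0, last value 1.0
```

After every column sits in the same range, PCA's components reflect the data's structure rather than each feature's units.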


Question # 64

A manufacturing company has structured and unstructured data stored in an Amazon S3 bucket. A Machine Learning Specialist wants to use SQL to run queries on this data.
Which solution requires the LEAST effort to be able to query this data?

A. Use AWS Data Pipeline to transform the data and Amazon RDS to run queries.
B. Use AWS Glue to catalogue the data and Amazon Athena to run queries.
C. Use AWS Batch to run ETL on the data and Amazon Aurora to run the queries.
D. Use AWS Lambda to transform the data and Amazon Kinesis Data Analytics to run queries.


Question # 65

A Machine Learning Specialist is using Amazon SageMaker to host a model for a highly available customer-facing application.
The Specialist has trained a new version of the model, validated it with historical data, and now wants to deploy it to production. To limit any risk of a negative customer experience, the Specialist wants to be able to monitor the model and roll it back, if needed.
What is the SIMPLEST approach with the LEAST risk to deploy the model and roll it back, if needed?

A. Create a SageMaker endpoint and configuration for the new model version. Redirect production traffic to the new endpoint by updating the client configuration. Revert traffic to the last version if the model does not perform as expected.
B. Create a SageMaker endpoint and configuration for the new model version. Redirect production traffic to the new endpoint by using a load balancer. Revert traffic to the last version if the model does not perform as expected.
C. Update the existing SageMaker endpoint to use a new configuration that is weighted to send 5% of the traffic to the new variant. Revert traffic to the last version by resetting the weights if the model does not perform as expected.
D. Update the existing SageMaker endpoint to use a new configuration that is weighted to send 100% of the traffic to the new variant. Revert traffic to the last version by resetting the weights if the model does not perform as expected.
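Option C is a canary deployment: the existing endpoint gains a second production variant with a small weight, and rollback is just resetting the weights. A toy router showing how variant weights translate into a traffic split (deterministic here for illustration; SageMaker's actual weighted routing is probabilistic):

```python
def route(request_id, weights):
    """Pick a variant for a request based on variant weights.

    weights: dict mapping variant name -> relative weight.
    """
    total = sum(weights.values())
    point = (request_id % 100) / 100 * total  # deterministic stand-in for randomness
    cumulative = 0.0
    for name, w in weights.items():
        cumulative += w
        if point < cumulative:
            return name
    return name

weights = {"current-model": 95, "new-model": 5}  # 5% canary
counts = {"current-model": 0, "new-model": 0}
for i in range(100):
    counts[route(i, weights)] += 1
print(counts)  # {'current-model': 95, 'new-model': 5}
```

Because only 5% of requests hit the new variant, a misbehaving model affects few customers, and setting the canary weight back to 0 restores the old behavior instantly.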


Question # 66

A company is building a demand forecasting model based on machine learning (ML). In the development stage, an ML specialist uses an Amazon SageMaker notebook to perform feature engineering during work hours that consumes low amounts of CPU and memory resources. A data engineer uses the same notebook to perform data preprocessing once a day on average that requires very high memory and completes in only 2 hours. The data preprocessing is not configured to use GPU. All the processes are running well on an ml.m5.4xlarge notebook instance.
The company receives an AWS Budgets alert that the billing for this month exceeds the allocated budget.
Which solution will result in the MOST cost savings?

A. Change the notebook instance type to a memory optimized instance with the same vCPU number as the ml.m5.4xlarge instance has. Stop the notebook when it is not in use. Run both data preprocessing and feature engineering development on that instance.
B. Keep the notebook instance type and size the same. Stop the notebook when it is not in use. Run data preprocessing on a P3 instance type with the same memory as the ml.m5.4xlarge instance by using Amazon SageMaker Processing.
C. Change the notebook instance type to a smaller general purpose instance. Stop the notebook when it is not in use. Run data preprocessing on an ml.r5 instance with the same memory size as the ml.m5.4xlarge instance by using Amazon SageMaker Processing.
D. Change the notebook instance type to a smaller general purpose instance. Stop the notebook when it is not in use. Run data preprocessing on an R5 instance with the same memory size as the ml.m5.4xlarge instance by using the Reserved Instance option.


Question # 67

A manufacturing company wants to use machine learning (ML) to automate quality control in its facilities. The facilities are in remote locations and have limited internet connectivity. The company has 20 of training data that consists of labeled images of defective product parts. The training data is in the corporate on-premises data center.
The company will use this data to train a model for real-time defect detection in new parts as the parts move on a conveyor belt in the facilities. The company needs a solution that minimizes costs for compute infrastructure and that maximizes the scalability of resources for training. The solution also must facilitate the company's use of an ML model in the low-connectivity environments.
Which solution will meet these requirements?

A. Move the training data to an Amazon S3 bucket. Train and evaluate the model by using Amazon SageMaker. Optimize the model by using SageMaker Neo. Deploy the model on a SageMaker hosting services endpoint.
B. Train and evaluate the model on premises. Upload the model to an Amazon S3 bucket. Deploy the model on an Amazon SageMaker hosting services endpoint.
C. Move the training data to an Amazon S3 bucket. Train and evaluate the model by using Amazon SageMaker. Optimize the model by using SageMaker Neo. Set up an edge device in the manufacturing facilities with AWS IoT Greengrass. Deploy the model on the edge device.
D. Train the model on premises. Upload the model to an Amazon S3 bucket. Set up an edge device in the manufacturing facilities with AWS IoT Greengrass. Deploy the model on the edge device.


Question # 68

A company is building a predictive maintenance model based on machine learning (ML). The data is stored in a fully private Amazon S3 bucket that is encrypted at rest with AWS Key Management Service (AWS KMS) CMKs. An ML specialist must run data preprocessing by using an Amazon SageMaker Processing job that is triggered from code in an Amazon SageMaker notebook. The job should read data from Amazon S3, process it, and upload it back to the same S3 bucket. The preprocessing code is stored in a container image in Amazon Elastic Container Registry (Amazon ECR). The ML specialist needs to grant permissions to ensure a smooth data preprocessing workflow.
Which set of actions should the ML specialist take to meet these requirements?

A. Create an IAM role that has permissions to create Amazon SageMaker Processing jobs, S3 read and write access to the relevant S3 bucket, and appropriate KMS and ECR permissions. Attach the role to the SageMaker notebook instance. Create an Amazon SageMaker Processing job from the notebook.
B. Create an IAM role that has permissions to create Amazon SageMaker Processing jobs. Attach the role to the SageMaker notebook instance. Create an Amazon SageMaker Processing job with an IAM role that has read and write permissions to the relevant S3 bucket, and appropriate KMS and ECR permissions.
C. Create an IAM role that has permissions to create Amazon SageMaker Processing jobs and to access Amazon ECR. Attach the role to the SageMaker notebook instance. Set up both an S3 endpoint and a KMS endpoint in the default VPC. Create Amazon SageMaker Processing jobs from the notebook.
D. Create an IAM role that has permissions to create Amazon SageMaker Processing jobs. Attach the role to the SageMaker notebook instance. Set up an S3 endpoint in the default VPC. Create Amazon SageMaker Processing jobs with the access key and secret key of the IAM user with appropriate KMS and ECR permissions.


Question # 69

A machine learning specialist is developing a proof of concept for government users whose primary concern is security. The specialist is using Amazon SageMaker to train a convolutional neural network (CNN) model for a photo classifier application. The specialist wants to protect the data so that it cannot be accessed and transferred to a remote host by malicious code accidentally installed on the training container.
Which action will provide the MOST secure protection?

A. Remove Amazon S3 access permissions from the SageMaker execution role. 
B. Encrypt the weights of the CNN model. 
C. Encrypt the training and validation dataset. 
D. Enable network isolation for training jobs. 


Question # 70

A company wants to create a data repository in the AWS Cloud for machine learning (ML) projects. The company wants to use AWS to perform complete ML lifecycles and wants to use Amazon S3 for the data storage. All of the company's data currently resides on premises and is 40 in size.
The company wants a solution that can transfer and automatically update data between the on-premises object storage and Amazon S3. The solution must support encryption, scheduling, monitoring, and data integrity validation.
Which solution meets these requirements?

A. Use the S3 sync command to compare the source S3 bucket and the destination S3 bucket. Determine which source files do not exist in the destination S3 bucket and which source files were modified.
B. Use AWS Transfer for FTPS to transfer the files from the on-premises storage to Amazon S3.
C. Use AWS DataSync to make an initial copy of the entire dataset. Schedule subsequent incremental transfers of changing data until the final cutover from on premises to AWS.
D. Use S3 Batch Operations to pull data periodically from the on-premises storage. Enable S3 Versioning on the S3 bucket to protect against accidental overwrites.


Question # 71

A machine learning (ML) specialist must develop a classification model for a financial services company. A domain expert provides the dataset, which is tabular with 10,000 rows and 1,020 features. During exploratory data analysis, the specialist finds no missing values and a small percentage of duplicate rows. There are correlation scores of > 0.9 for 200 feature pairs. The mean value of each feature is similar to its 50th percentile.
Which feature engineering strategy should the ML specialist use with Amazon SageMaker?

A. Apply dimensionality reduction by using the principal component analysis (PCA)algorithm. 
B. Drop the features with low correlation scores by using a Jupyter notebook. 
C. Apply anomaly detection by using the Random Cut Forest (RCF) algorithm. 
D. Concatenate the features with high correlation scores by using a Jupyter notebook. 


Question # 72

A Machine Learning Specialist is designing a scalable data storage solution for Amazon SageMaker. There is an existing TensorFlow-based model implemented as a train.py script that relies on static training data that is currently stored as TFRecords.
Which method of providing training data to Amazon SageMaker would meet the business requirements with the LEAST development overhead?

A. Use Amazon SageMaker script mode and use train.py unchanged. Point the Amazon SageMaker training invocation to the local path of the data without reformatting the training data.
B. Use Amazon SageMaker script mode and use train.py unchanged. Put the TFRecord data into an Amazon S3 bucket. Point the Amazon SageMaker training invocation to the S3 bucket without reformatting the training data.
C. Rewrite the train.py script to add a section that converts TFRecords to protobuf and ingests the protobuf data instead of TFRecords.
D. Prepare the data in the format accepted by Amazon SageMaker. Use AWS Glue or AWS Lambda to reformat and store the data in an Amazon S3 bucket.


Question # 73

A data scientist is using the Amazon SageMaker Neural Topic Model (NTM) algorithm to build a model that recommends tags from blog posts. The raw blog post data is stored in an Amazon S3 bucket in JSON format. During model evaluation, the data scientist discovered that the model recommends certain stopwords such as "a," "an," and "the" as tags to certain blog posts, along with a few rare words that are present only in certain blog entries. After a few iterations of tag review with the content team, the data scientist notices that the rare words are unusual but feasible. The data scientist also must ensure that the tag recommendations of the generated model do not include the stopwords.
What should the data scientist do to meet these requirements?

A. Use the Amazon Comprehend entity recognition API operations. Remove the detected words from the blog post data. Replace the blog post data source in the S3 bucket.
B. Run the SageMaker built-in principal component analysis (PCA) algorithm with the blog post data from the S3 bucket as the data source. Replace the blog post data in the S3 bucket with the results of the training job.
C. Use the SageMaker built-in Object Detection algorithm instead of the NTM algorithm for the training job to process the blog post data.
D. Remove the stopwords from the blog post data by using the Count Vectorizer function in the scikit-learn library. Replace the blog post data in the S3 bucket with the results of the vectorizer.


Question # 74

A Data Scientist received a set of insurance records, each consisting of a record ID, the final outcome among 200 categories, and the date of the final outcome. Some partial information on claim contents is also provided, but only for a few of the 200 categories. For each outcome category, there are hundreds of records distributed over the past 3 years. The Data Scientist wants to predict how many claims to expect in each category from month to month, a few months in advance.
What type of machine learning model should be used?

A. Classification month-to-month using supervised learning of the 200 categories based on claim contents.
B. Reinforcement learning using claim IDs and timestamps where the agent will identify how many claims in each category to expect from month to month.
C. Forecasting using claim IDs and timestamps to identify how many claims in each category to expect from month to month.
D. Classification with supervised learning of the categories for which partial information on claim contents is provided, and forecasting using claim IDs and timestamps for all other categories.


Question # 75

A Machine Learning Specialist uploads a dataset to an Amazon S3 bucket protected with server-side encryption using AWS KMS.
How should the ML Specialist define the Amazon SageMaker notebook instance so it can read the same dataset from Amazon S3?

A. Define security group(s) to allow all HTTP inbound/outbound traffic and assign thosesecurity group(s) to the Amazon SageMaker notebook instance. 
B. Configure the Amazon SageMaker notebook instance to have access to the VPC. Grant permission in the KMS key policy to the notebook's KMS role.
C. Assign an IAM role to the Amazon SageMaker notebook with S3 read access to thedataset. Grant permission in the KMS key policy to that role. 
D. Assign the same KMS key used to encrypt data in Amazon S3 to the AmazonSageMaker notebook instance. 


Question # 76

A company provisions Amazon SageMaker notebook instances for its data science team and creates Amazon VPC interface endpoints to ensure communication between the VPC and the notebook instances. All connections to the Amazon SageMaker API are contained entirely and securely using the AWS network. However, the data science team realizes that individuals outside the VPC can still connect to the notebook instances across the internet.
Which set of actions should the data science team take to fix the issue?

A. Modify the notebook instances' security group to allow traffic only from the CIDR ranges of the VPC. Apply this security group to all of the notebook instances' VPC interfaces.
B. Create an IAM policy that allows the sagemaker:CreatePresignedNotebookInstanceUrl and sagemaker:DescribeNotebookInstance actions from only the VPC endpoints. Apply this policy to all IAM users, groups, and roles used to access the notebook instances.
C. Add a NAT gateway to the VPC. Convert all of the subnets where the Amazon SageMaker notebook instances are hosted to private subnets. Stop and start all of the notebook instances to reassign only private IP addresses.
D. Change the network ACL of the subnet the notebook is hosted in to restrict access to anyone outside the VPC.


Question # 77

A data scientist is working on a public sector project for an urban traffic system. While studying the traffic patterns, it is clear to the data scientist that the traffic behavior at each light is correlated, subject to a small stochastic error term. The data scientist must model the traffic behavior to analyze the traffic patterns and reduce congestion.
How will the data scientist MOST effectively model the problem?

A. The data scientist should obtain a correlated equilibrium policy by formulating this problem as a multi-agent reinforcement learning problem.
B. The data scientist should obtain the optimal equilibrium policy by formulating this problem as a single-agent reinforcement learning problem.
C. Rather than finding an equilibrium policy, the data scientist should obtain accurate predictors of traffic flow by using historical data through a supervised learning approach.
D. Rather than finding an equilibrium policy, the data scientist should obtain accurate predictors of traffic flow by using unlabeled simulated data representing the new traffic patterns in the city and applying an unsupervised learning approach.


Question # 78

A company is converting a large number of unstructured paper receipts into images. The company wants to create a model based on natural language processing (NLP) to find relevant entities such as date, location, and notes, as well as some custom entities such as receipt numbers.
The company is using optical character recognition (OCR) to extract text for data labeling. However, documents are in different structures and formats, and the company is facing challenges with setting up the manual workflows for each document type. Additionally, the company trained a named entity recognition (NER) model for custom entity detection using a small sample size. This model has a very low confidence score and will require retraining with a large dataset.
Which solution for text extraction and entity detection will require the LEAST amount of effort?

A. Extract text from receipt images by using Amazon Textract. Use the Amazon SageMaker BlazingText algorithm to train on the text for entities and custom entities.
B. Extract text from receipt images by using a deep learning OCR model from the AWS Marketplace. Use the NER deep learning model to extract entities.
C. Extract text from receipt images by using Amazon Textract. Use Amazon Comprehend for entity detection, and use Amazon Comprehend custom entity recognition for custom entity detection.
D. Extract text from receipt images by using a deep learning OCR model from the AWS Marketplace. Use Amazon Comprehend for entity detection, and use Amazon Comprehend custom entity recognition for custom entity detection.


Question # 79

A machine learning specialist is developing a regression model to predict rental rates from rental listings. A variable named Wall_Color represents the most prominent exterior wall color of the property. The following is the sample data, excluding all other variables:
The specialist chose a model that needs numerical input data.
Which feature engineering approaches should the specialist use to allow the regression model to learn from the Wall_Color data? (Choose two.)

A. Apply integer transformation and set Red = 1, White = 5, and Green = 10. 
B. Add new columns that store one-hot representation of colors. 
C. Replace the color name string by its length. 
D. Create three columns to encode the color in RGB format. 
E. Replace each color name by its training set frequency. 
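Option B (one-hot encoding) is the standard way to feed a nominal variable like Wall_Color to a numeric model without implying an order among the colors. A stdlib-only sketch:

```python
def one_hot(values):
    """Return one new 0/1 column per distinct category."""
    categories = sorted(set(values))
    return {c: [1 if v == c else 0 for v in values] for c in categories}

colors = ["Red", "White", "Green", "White"]
print(one_hot(colors))
# {'Green': [0, 0, 1, 0], 'Red': [1, 0, 0, 0], 'White': [0, 1, 0, 1]}
```

Unlike option A's arbitrary integers (Red = 1, Green = 10), the one-hot columns carry no fictitious ordering or magnitude, so the regression coefficients stay interpretable per color.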


Question # 80

A company has set up and deployed its machine learning (ML) model into production with an endpoint using Amazon SageMaker hosting services. The ML team has configured automatic scaling for its SageMaker instances to support workload changes. During testing, the team notices that additional instances are being launched before the new instances are ready. This behavior needs to change as soon as possible.
How can the ML team solve this issue?

A. Decrease the cooldown period for the scale-in activity. Increase the configured maximum capacity of instances.
B. Replace the current endpoint with a multi-model endpoint using SageMaker. 
C. Set up Amazon API Gateway and AWS Lambda to trigger the SageMaker inferenceendpoint. 
D. Increase the cooldown period for the scale-out activity. 


Question # 81

A power company wants to forecast future energy consumption for its customers in residential properties and commercial business properties. Historical power consumption data for the last 10 years is available. A team of data scientists who performed the initial data analysis and feature selection will include the historical power consumption data and data such as weather, number of individuals on the property, and public holidays.
The data scientists are using Amazon Forecast to generate the forecasts.
Which algorithm in Forecast should the data scientists use to meet these requirements?

A. Autoregressive Integrated Moving Average (ARIMA)
B. Exponential Smoothing (ETS) 
C. Convolutional Neural Network - Quantile Regression (CNN-QR) 
D. Prophet 


Question # 82

A company ingests machine learning (ML) data from web advertising clicks into an Amazon S3 data lake. Click data is added to an Amazon Kinesis data stream by using the Kinesis Producer Library (KPL). The data is loaded into the S3 data lake from the data stream by using an Amazon Kinesis Data Firehose delivery stream. As the data volume increases, an ML specialist notices that the rate of data ingested into Amazon S3 is relatively constant. There also is an increasing backlog of data for Kinesis Data Streams and Kinesis Data Firehose to ingest.
Which next step is MOST likely to improve the data ingestion rate into Amazon S3?

A. Increase the number of S3 prefixes for the delivery stream to write to. 
B. Decrease the retention period for the data stream. 
C. Increase the number of shards for the data stream. 
D. Add more consumers using the Kinesis Client Library (KCL). 
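Option C works because each Kinesis shard accepts roughly 1 MB/s or 1,000 records/s of writes; a flat ingest rate with a growing backlog suggests the stream is shard-limited. A small helper estimating the shard count needed for a target write throughput (the two per-shard limits are the documented quotas; the example numbers are illustrative):

```python
import math

SHARD_MB_PER_SEC = 1.0         # per-shard write limit (MB/s)
SHARD_RECORDS_PER_SEC = 1000   # per-shard write limit (records/s)

def required_shards(mb_per_sec, records_per_sec):
    """Shards needed to absorb the given write throughput.

    The binding constraint is whichever limit is hit first.
    """
    return max(
        math.ceil(mb_per_sec / SHARD_MB_PER_SEC),
        math.ceil(records_per_sec / SHARD_RECORDS_PER_SEC),
    )

print(required_shards(12.5, 40000))  # 40
```

Here the record-rate limit binds (40,000 records/s needs 40 shards even though 12.5 MB/s would need only 13), which is typical for small click events.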


Question # 83

A machine learning specialist is running an Amazon SageMaker endpoint using the built-in object detection algorithm on a P3 instance for real-time predictions in a company's production application. When evaluating the model's resource utilization, the specialist notices that the model is using only a fraction of the GPU.
Which architecture changes would ensure that provisioned resources are being utilized effectively?

A. Redeploy the model as a batch transform job on an M5 instance. 
B. Redeploy the model on an M5 instance. Attach Amazon Elastic Inference to the instance.
C. Redeploy the model on a P3dn instance. 
D. Deploy the model onto an Amazon Elastic Container Service (Amazon ECS) clusterusing a P3 instance. 


Question # 84

A company wants to predict the sale prices of houses based on available historical sales data. The target variable in the company's dataset is the sale price. The features include parameters such as the lot size, living area measurements, non-living area measurements, number of bedrooms, number of bathrooms, year built, and postal code. The company wants to use multi-variable linear regression to predict house sale prices.
Which step should a machine learning specialist take to remove features that are irrelevant for the analysis and reduce the model's complexity?

A. Plot a histogram of the features and compute their standard deviation. Remove features with high variance.
B. Plot a histogram of the features and compute their standard deviation. Remove features with low variance.
C. Build a heatmap showing the correlation of the dataset against itself. Remove features with low mutual correlation scores.
D. Run a correlation check of all features against the target variable. Remove features with low target variable correlation scores.
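Option D keeps the features that actually move with the target. A stdlib-only Pearson correlation helper one could use for such a check (a real workflow would reach for pandas' DataFrame.corr; the feature values below are hypothetical):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

lot_size = [5000, 7500, 10000, 12000]   # hypothetical feature
sale_price = [200, 290, 400, 480]       # hypothetical target (thousands)
print(round(pearson(lot_size, sale_price), 3))
```

Features whose absolute correlation with the sale price falls below some chosen threshold would be dropped, shrinking the regression without discarding predictive signal.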


Amazon MLS-C01 Latest Result Cards



Customers Feedback

What our clients say about MLS-C01 Study Resources

    Xander Reyes     Jul 25, 2024
I was skeptical at first, but the MLS-C01 dumps exceeded my expectations. They are a must-have for anyone taking the AWS Machine Learning Specialty exam. I got 910/1000. Thanks!
    Oliver Walker     Jul 24, 2024
I successfully utilized the "2 for discount" offer and also shared the exam with a friend as I only needed to pass one exam. I am pleased to share that the strategy worked out well for both of us, as we both passed. I would like to express my gratitude to the team. Thank you!
    Roma     Jul 24, 2024
I tried other study materials, but the MLS-C01 dumps were the most effective. They covered all the important topics, and the explanations were clear and concise. Thanks, Salesforcexamdumps.com!
    Jameson Singh     Jul 23, 2024
I was recommended these dumps by a friend and they turned out to be fantastic. I passed the AWS Certified Machine Learning - Specialty exam thanks to salesforcexamdumps.com
    Nathanial Wright     Jul 23, 2024
The MLS-C01 dumps are a game-changer. They helped me identify my weaknesses and focus my study efforts. I highly recommend them.
    William Chen     Jul 22, 2024
The MLS-C01 exam dumps have made the preparation process incredibly easy. I passed with 94% marks.
    Penelope Martinez     Jul 22, 2024
If you want to pass the AWS Machine Learning Specialty exam on the first try, then the MLS-C01 dumps are the way to go. They are easy to follow and provide everything you need to succeed.
    Mason Rodriguez     Jul 21, 2024
Salesforcexamdumps.com is a fantastic website. The questions and explanations provided are top-notch, and the MLS-C01 practice questions are a great way to test your readiness. Highly recommended!
    Emma     Jul 21, 2024
I am happy to inform you that I have passed the MLS-C01 exam and can confirm that the dump is valid.
    Khadija     Jul 20, 2024
The MLS-C01 dumps are excellent! They helped me prepare for the exam in a short amount of time, and I passed with flying colors.
