A company is using a pre-trained large language model (LLM) to build a chatbot for product recommendations. The company needs the LLM outputs to be short and written in a specific language. Which solution will align the LLM response quality with the company's expectations?
A. Adjust the prompt.
B. Choose an LLM of a different size.
C. Increase the temperature.
D. Increase the Top K value.
Answer: A
Explanation: Adjusting the prompt is the correct way to align the LLM outputs with the company's expectations for short responses in a specific language.
Option A (Correct): "Adjust the prompt": Prompt engineering lets the company explicitly instruct the model on response length and output language (for example, "Respond in no more than two sentences, in Spanish"), directly shaping response quality without changing the model.
Option B: "Choose an LLM of a different size" is incorrect because model size affects capability and cost, not adherence to length and language requirements.
Option C: "Increase the temperature" is incorrect because temperature controls the randomness of the output, not its length or language.
Option D: "Increase the Top K value" is incorrect because Top K widens the pool of candidate tokens during sampling, which affects output diversity, not length or language.
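To make Option A concrete, here is a minimal sketch using the Amazon Bedrock Converse API through boto3; the model ID, Region, and prompt wording are illustrative assumptions, not part of the question:

```python
import boto3

# Minimal sketch of Option A: the system prompt constrains both the output
# language and the response length; maxTokens acts as a hard backstop.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
    system=[{"text": "Respond in Spanish. Keep every answer under two sentences."}],
    messages=[{
        "role": "user",
        "content": [{"text": "Which laptop do you recommend for video editing?"}],
    }],
    inferenceConfig={"maxTokens": 100},
)
print(response["output"]["message"]["content"][0]["text"])
```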
Question # 12
A company is using few-shot prompting on a base model that is hosted on Amazon Bedrock. The model currently uses 10 examples in the prompt. The model is invoked once daily and is performing well. The company wants to lower the monthly cost. Which solution will meet these requirements?
A. Customize the model by using fine-tuning.
B. Decrease the number of tokens in the prompt.
C. Increase the number of tokens in the prompt.
D. Use Provisioned Throughput.
Answer: B
Explanation: Decreasing the number of tokens in the prompt reduces the cost of using a model on Amazon Bedrock, because on-demand pricing is based on the number of input and output tokens the model processes.
Option B (Correct): "Decrease the number of tokens in the prompt": Since the model is already performing well, trimming some of the 10 few-shot examples shrinks the input token count of each daily invocation and lowers the monthly cost.
Option A: "Customize the model by using fine-tuning" is incorrect because fine-tuning adds training costs and requires hosting the custom model, increasing rather than lowering cost.
Option C: "Increase the number of tokens in the prompt" is incorrect because more tokens increase the per-invocation cost.
Option D: "Use Provisioned Throughput" is incorrect because it commits to hourly capacity, which is far more expensive than on-demand pricing for a model invoked once daily.
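As a purely hypothetical illustration of the token-reduction idea, a prompt builder can include fewer few-shot demonstrations per invocation; every name and example below is invented for the sketch:

```python
# Hypothetical few-shot examples; the prompt in the question has 10.
FEW_SHOT_EXAMPLES = [
    ("Great battery life!", "positive"),
    ("Arrived broken.", "negative"),
    ("Does exactly what it promises.", "positive"),
    # ... remaining examples omitted
]

def build_prompt(review: str, num_examples: int = 3) -> str:
    # Keep only num_examples demonstrations instead of all 10; fewer input
    # tokens per daily invocation means a lower on-demand bill.
    shots = "\n".join(
        f"Review: {text}\nLabel: {label}"
        for text, label in FEW_SHOT_EXAMPLES[:num_examples]
    )
    return f"{shots}\nReview: {review}\nLabel:"

print(build_prompt("Shipping took three weeks.", num_examples=2))
```

Since the model already performs well, the number of retained examples can be reduced gradually while output quality is monitored.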
Question # 13
A company is developing a new model to predict the prices of specific items. The model performed well on the training dataset. When the company deployed the model to production, the model's performance decreased significantly. What should the company do to mitigate this problem?
A. Reduce the volume of data that is used in training.
B. Add hyperparameters to the model.
C. Increase the volume of data that is used in training.
D. Increase the model training time.
Answer: C
Explanation: When a model performs well on the training data but poorly in production, it is often due to overfitting. Overfitting occurs when a model learns patterns and noise specific to the training data, which do not generalize to new, unseen data in production. Increasing the volume of training data can mitigate this problem by providing a more diverse and representative dataset, which helps the model generalize better.
Option C (Correct): "Increase the volume of data that is used in training": More data helps the model learn generalized patterns rather than features specific to the training dataset, reducing overfitting and improving performance in production.
Option A: "Reduce the volume of data that is used in training" is incorrect, as reducing data volume would likely worsen the overfitting problem.
Option B: "Add hyperparameters to the model" is incorrect because adding hyperparameters alone does not address data diversity or model generalization.
Option D: "Increase the model training time" is incorrect because longer training does not prevent overfitting; the model needs more diverse data.
AWS AI Practitioner References: Best Practices for Model Training on AWS: AWS recommends using a larger and more diverse training dataset to improve a model's generalization capability and reduce the risk of overfitting.
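As a side illustration (scikit-learn on synthetic data, not part of the exam question), the gap described above can be diagnosed by comparing training and validation scores:

```python
# A large gap between training and validation scores is the classic
# signature of overfitting described in the explanation above.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
print("train R^2:", model.score(X_train, y_train))  # typically near 1.0
print("val   R^2:", model.score(X_val, y_val))      # noticeably lower => overfitting
```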
Question # 14
A company wants to display the total sales for its top-selling products across various retail locations in the past 12 months. Which AWS solution should the company use to automate the generation of graphs?
A. Amazon Q in Amazon EC2
B. Amazon Q Developer
C. Amazon Q in Amazon QuickSight
D. Amazon Q in AWS Chatbot
Answer: C
Explanation: Amazon QuickSight is a fully managed business intelligence (BI) service that lets users create and publish interactive dashboards with visualizations such as graphs, charts, and tables. Amazon Q in Amazon QuickSight is the natural language query capability within QuickSight: users ask questions about their data in plain language and receive visual responses such as graphs.
Option C (Correct): "Amazon Q in Amazon QuickSight": This is the correct answer because it lets users explore their data through natural language queries and automatically generates graphs, making it ideal for displaying total sales for top-selling products across retail locations over the past 12 months.
Options A, B, and D: These are incorrect because Amazon Q in Amazon EC2, Amazon Q Developer, and Amazon Q in AWS Chatbot target infrastructure guidance, developer assistance, and operational chat workflows respectively; none of them generates data visualizations.
AWS AI Practitioner References: Amazon Q in QuickSight is designed to provide insights from data through natural language queries, making it a powerful tool for generating automated graphs and visualizations. Business Intelligence (BI) on AWS: services such as Amazon QuickSight provide BI capabilities, including automated reporting and visualization, for companies seeking to visualize data like sales trends over time.
Question # 15
A company is building a large language model (LLM) question answering chatbot. The company wants to decrease the number of actions call center employees need to take to respond to customer questions. Which business objective should the company use to evaluate the effect of the LLM chatbot?
A. Website engagement rate
B. Average call duration
C. Corporate social responsibility
D. Regulatory compliance
Answer: B
Explanation: The business objective for evaluating a chatbot that is meant to reduce the actions call center employees must take is average call duration.
Option B (Correct): "Average call duration": If the chatbot answers customer questions effectively, employees spend less time and take fewer actions per call, so a decrease in average call duration directly measures the chatbot's effect on the stated goal.
Option A: "Website engagement rate" is incorrect because it measures website activity, not call center efficiency.
Option C: "Corporate social responsibility" is incorrect because it is unrelated to call center workload.
Option D: "Regulatory compliance" is incorrect because it concerns legal obligations, not operational efficiency.
Question # 16
A company wants to create a chatbot by using a foundation model (FM) on Amazon Bedrock. The FM needs to access encrypted data that is stored in an Amazon S3 bucket. The data is encrypted with Amazon S3 managed keys (SSE-S3). The FM encounters a failure when attempting to access the S3 bucket data. Which solution will meet these requirements?
A. Ensure that the role that Amazon Bedrock assumes has permission to decrypt data with the correct encryption key.
B. Set the access permissions for the S3 buckets to allow public access to enable access over the internet.
C. Use prompt engineering techniques to tell the model to look for information in Amazon S3.
D. Ensure that the S3 data does not contain sensitive information.
Answer: A
Explanation: Amazon Bedrock needs an appropriate IAM role with permission to access and decrypt the data stored in Amazon S3. When the data is encrypted with Amazon S3 managed keys (SSE-S3), the role that Amazon Bedrock assumes must have the required permissions on the bucket and its objects so that S3 can return the decrypted data.
Option A (Correct): "Ensure that the role that Amazon Bedrock assumes has permission to decrypt data with the correct encryption key": This is the correct solution because it lets the model access the encrypted data securely without changing the encryption settings or compromising data security.
Option B: "Set the access permissions for the S3 buckets to allow public access" is incorrect because it violates security best practices by exposing sensitive data to the public.
Option C: "Use prompt engineering techniques to tell the model to look for information in Amazon S3" is incorrect because it does not address the encryption and permission issue.
Option D: "Ensure that the S3 data does not contain sensitive information" is incorrect because it does not solve the access problem related to encryption.
AWS AI Practitioner References: Managing Access to Encrypted Data in AWS: AWS recommends using proper IAM roles and policies to control access to encrypted data stored in S3.
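A hedged sketch of the fix using boto3; the role name, policy name, and bucket ARNs are placeholders. With SSE-S3, Amazon S3 handles decryption transparently, so read permissions on the objects are what the assumed role needs:

```python
import json
import boto3

iam = boto3.client("iam")

# Placeholder policy granting the Bedrock execution role read access to the
# encrypted bucket; S3 decrypts SSE-S3 objects transparently on GetObject.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-data-bucket",    # placeholder bucket
            "arn:aws:s3:::example-data-bucket/*",
        ],
    }],
}

iam.put_role_policy(
    RoleName="BedrockExecutionRole",    # placeholder role name
    PolicyName="AllowEncryptedS3Read",
    PolicyDocument=json.dumps(policy),
)
```

If the data were instead encrypted with a customer managed KMS key (SSE-KMS), the role would additionally need kms:Decrypt on that key.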
Question # 17
A company wants to use a large language model (LLM) on Amazon Bedrock for sentiment analysis. The company wants to know how much information can fit into one prompt. Which consideration will inform the company's decision?
A. Temperature
B. Context window
C. Batch size
D. Model size
Answer: B
Explanation: The context window determines how much information can fit into a single prompt when using a large language model (LLM) on Amazon Bedrock.
Option B (Correct): "Context window": The context window is the maximum number of tokens the model can process at once (for many models, input and output combined), so it directly limits how much text fits in one prompt.
Option A: "Temperature" is incorrect because it controls the randomness of generated output, not input capacity.
Option C: "Batch size" is incorrect because it refers to how many inference requests are processed together, not prompt length.
Option D: "Model size" is incorrect because the number of parameters does not by itself determine how much text a single prompt can contain.
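To illustrate the practical consequence, here is a rough pre-flight check before invoking a model; the window size and the four-characters-per-token rule are assumptions, since exact counts depend on each model's tokenizer:

```python
CONTEXT_WINDOW_TOKENS = 8192  # assumed limit for an example model

def fits_context_window(prompt: str, reserved_for_output: int = 512) -> bool:
    # Crude heuristic, not a real tokenizer: ~4 characters per token.
    estimated_tokens = len(prompt) // 4
    return estimated_tokens + reserved_for_output <= CONTEXT_WINDOW_TOKENS

print(fits_context_window("Classify the sentiment of this review: ..."))
```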
Question # 18
What are tokens in the context of generative AI models?
A. Tokens are the basic units of input and output that a generative AI model operates on, representing words, subwords, or other linguistic units.
B. Tokens are the mathematical representations of words or concepts used in generative AI models.
C. Tokens are the pre-trained weights of a generative AI model that are fine-tuned for specific tasks.
D. Tokens are the specific prompts or instructions given to a generative AI model to generate output.
Answer: A
Explanation: Tokens in generative AI models are the smallest units that the model processes, typically representing words, subwords, or characters. They are essential for the model to understand and generate language, breaking text down into manageable parts for processing.
Option A (Correct): "Tokens are the basic units of input and output that a generative AI model operates on, representing words, subwords, or other linguistic units": This is the correct definition of tokens in the context of generative AI models.
Option B: "Mathematical representations of words" describes embeddings, not tokens.
Option C: "Pre-trained weights of a model" refers to the parameters of a model, not tokens.
Option D: "Prompts or instructions given to a model" refers to the queries or commands provided to a model, not tokens.
AWS AI Practitioner References: Understanding Tokens in NLP: AWS explains how tokens are used in natural language processing tasks by AI models, such as in Amazon Comprehend and other AWS AI services.
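A toy illustration of the idea (not a real tokenizer; production models use learned subword vocabularies such as byte-pair encoding):

```python
import re

def toy_tokenize(text: str) -> list[str]:
    # Split on word boundaries and punctuation as a stand-in for subword BPE.
    return re.findall(r"\w+|[^\w\s]", text)

print(toy_tokenize("Tokens are the model's input units."))
# ['Tokens', 'are', 'the', 'model', "'", 's', 'input', 'units', '.']
```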
Question # 19
A company is building an ML model to analyze archived data. The company must perform inference on large datasets that are multiple GBs in size. The company does not need to access the model predictions immediately. Which Amazon SageMaker inference option will meet these requirements?
A. Batch transform
B. Real-time inference
C. Serverless inference
D. Asynchronous inference
Answer: A
Explanation: Batch transform in Amazon SageMaker is designed for offline processing of large datasets. It is ideal for scenarios where immediate predictions are not required and inference must run over datasets that are multiple gigabytes in size. It processes data in batches, making it suitable for analyzing archived data without real-time access to predictions.
Option A (Correct): "Batch transform": This is the correct answer because batch transform is optimized for handling large datasets and suits workloads where immediate access to predictions is not required.
Option B: "Real-time inference" is incorrect because it serves low-latency, real-time prediction needs, which are not required here.
Option C: "Serverless inference" is incorrect because it is designed for intermittent, small-scale inference requests, not large batch processing.
Option D: "Asynchronous inference" is incorrect because it queues individual requests, typically with large payloads or long processing times, for near-real-time results; batch transform is the better fit for offline inference over an entire multi-GB archived dataset.
AWS AI Practitioner References: Batch Transform on Amazon SageMaker: AWS recommends batch transform for large datasets when real-time processing is not needed, for cost-effectiveness and scalability.
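A hedged boto3 sketch of launching such a job; the job name, model name, bucket paths, and instance type are placeholders for an existing SageMaker model and dataset:

```python
import boto3

sm = boto3.client("sagemaker")

# Offline inference over an archived dataset in S3: the job reads the input
# prefix, runs predictions in batches, and writes results back to S3.
sm.create_transform_job(
    TransformJobName="archived-data-inference",   # placeholder job name
    ModelName="my-trained-model",                 # placeholder SageMaker model
    TransformInput={
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://example-bucket/archive/",
        }},
        "ContentType": "text/csv",
        "SplitType": "Line",  # split large files into individual records
    },
    TransformOutput={"S3OutputPath": "s3://example-bucket/predictions/"},
    TransformResources={"InstanceType": "ml.m5.xlarge", "InstanceCount": 1},
)
```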
Question # 20
A company has built a chatbot that can respond to natural language questions with images. The company wants to ensure that the chatbot does not return inappropriate or unwanted images. Which solution will meet these requirements?
A. Implement moderation APIs.
B. Retrain the model with a general public dataset.
C. Perform model validation.
D. Automate user feedback integration.
Answer: A
Explanation: Moderation APIs, such as Amazon Rekognition's Content Moderation API, can filter and block inappropriate or unwanted images before a chatbot returns them. These APIs are specifically designed to detect and manage undesirable content in images.
Option A (Correct): "Implement moderation APIs": This is the correct answer because moderation APIs are designed to identify and filter inappropriate content, ensuring the chatbot does not return unwanted images.
Option B: "Retrain the model with a general public dataset" is incorrect because retraining does not directly prevent inappropriate content from being returned.
Option C: "Perform model validation" is incorrect because validation checks model correctness, not content moderation.
Option D: "Automate user feedback integration" is incorrect because user feedback does not prevent inappropriate images in real time.
AWS AI Practitioner References: AWS Content Moderation Services: AWS provides moderation APIs for filtering unwanted content from applications.
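A minimal boto3 sketch of this approach using Amazon Rekognition's DetectModerationLabels API; the bucket and object names are placeholders:

```python
import boto3

rekognition = boto3.client("rekognition")

# Screen a candidate image before the chatbot returns it; MinConfidence
# tunes how aggressively the filter flags content.
response = rekognition.detect_moderation_labels(
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "candidate.jpg"}},
    MinConfidence=80.0,
)

if response["ModerationLabels"]:
    print("Blocked:", [label["Name"] for label in response["ModerationLabels"]])
else:
    print("Image is safe to return.")
```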