A company uses SAML federation to grant users access to AWS accounts. A company workload that is in an isolated AWS account runs on immutable infrastructure with no human access to Amazon EC2. The company requires a specialized user known as a break glass user to have access to the workload AWS account and instances in the case of SAML errors. A recent audit discovered that the company did not create the break glass user for the AWS account that contains the workload. The company must create the break glass user. The company must log any activities of the break glass user and send the logs to a security team. Which combination of solutions will meet these requirements? (Select TWO.)
A. Create a local individual break glass IAM user for the security team. Create a trail in AWS CloudTrail that has Amazon CloudWatch Logs turned on. Use Amazon EventBridge to monitor local user activities.
B. Create a break glass EC2 key pair for the AWS account. Provide the key pair to the security team. Use AWS CloudTrail to monitor key pair activity. Send notifications to the security team by using Amazon Simple Notification Service (Amazon SNS).
C. Create a break glass IAM role for the account. Allow security team members to perform the AssumeRoleWithSAML operation. Create an AWS CloudTrail trail that has Amazon CloudWatch Logs turned on. Use Amazon EventBridge to monitor security team activities.
D. Create a local individual break glass IAM user on the operating system level of each workload instance. Configure unrestricted security groups on the instances to grant access to the break glass IAM users.
E. Configure AWS Systems Manager Session Manager for Amazon EC2. Configure an AWS CloudTrail filter based on Session Manager. Send the results to an Amazon Simple Notification Service (Amazon SNS) topic.
Answer: A,E

Explanation:
The combination of solutions that will meet the requirements is:

A. Create a local individual break glass IAM user for the security team. Create a trail in AWS CloudTrail that has Amazon CloudWatch Logs turned on. Use Amazon EventBridge to monitor local user activities. This is a valid solution because it allows the security team to access the workload AWS account and instances using a local IAM user that does not depend on SAML federation. It also enables logging and monitoring of the break glass user activities using AWS CloudTrail, Amazon CloudWatch Logs, and Amazon EventBridge123.

E. Configure AWS Systems Manager Session Manager for Amazon EC2. Configure an AWS CloudTrail filter based on Session Manager. Send the results to an Amazon Simple Notification Service (Amazon SNS) topic. This is a valid solution because it allows the security team to access the workload instances without opening any inbound ports or managing SSH keys or bastion hosts. It also enables logging and notification of the break glass user activities using AWS CloudTrail, Session Manager, and Amazon SNS456.

The other options are incorrect because:

B. Creating a break glass EC2 key pair for the AWS account and providing it to the security team is not a valid solution, because it requires opening inbound ports on the instances and managing SSH keys, which increases the security risk and complexity7.

C. Creating a break glass IAM role for the account and allowing security team members to perform the AssumeRoleWithSAML operation is not a valid solution, because it still depends on SAML federation, which might not work in case of SAML errors8.

D. Creating a local individual break glass IAM user on the operating system level of each workload instance and configuring unrestricted security groups on the instances to grant access to the break glass IAM users is not a valid solution, because it requires opening inbound ports on the instances and managing multiple local users, which increases the security risk and complexity9.

References:
1: Creating an IAM User in Your AWS Account
2: Creating a Trail - AWS CloudTrail
3: Using Amazon EventBridge with AWS CloudTrail
4: Setting up Session Manager - AWS Systems Manager
5: Logging Session Manager sessions - AWS Systems Manager
6: Amazon Simple Notification Service
7: Connecting to your Linux instance using SSH - Amazon Elastic Compute Cloud
8: AssumeRoleWithSAML - AWS Security Token Service
9: IAM Users - AWS Identity and Access Management
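To make the monitoring piece of option A concrete, here is a minimal sketch of an EventBridge rule that matches CloudTrail-delivered API calls made by the break glass user and forwards them to the security team. The user name break-glass-user, the SNS topic ARN, and the account ID are hypothetical placeholders, and the sketch assumes a trail is already delivering management events:

    # Match CloudTrail-delivered API calls made by the break glass IAM user.
    aws events put-rule \
        --name BreakGlassUserActivity \
        --event-pattern '{
          "detail-type": ["AWS API Call via CloudTrail"],
          "detail": {
            "userIdentity": {
              "type": ["IAMUser"],
              "userName": ["break-glass-user"]
            }
          }
        }'

    # Route matched events to an SNS topic that the security team subscribes to.
    aws events put-targets \
        --rule BreakGlassUserActivity \
        --targets Id=notify-security-team,Arn=arn:aws:sns:us-east-1:111122223333:security-team-alerts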
Question # 52
A security engineer must use AWS Key Management Service (AWS KMS) to design a key management solution for a set of Amazon Elastic Block Store (Amazon EBS) volumes that contain sensitive data. The solution needs to ensure that the key material automatically expires in 90 days. Which solution meets these criteria?
A. A customer managed CMK that uses customer provided key material
B. A customer managed CMK that uses AWS provided key material
C. An AWS managed CMK
D. Operating system-native encryption that uses GnuPG
Answer: A

Explanation:
See https://awscli.amazonaws.com/v2/documentation/api/latest/reference/kms/import-key-material.html:

    aws kms import-key-material \
        --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
        --encrypted-key-material fileb://EncryptedKeyMaterial.bin \
        --import-token fileb://ImportToken.bin \
        --expiration-model KEY_MATERIAL_EXPIRES \
        --valid-to 2021-09-21T19:00:00Z

The correct answer is A. A customer managed CMK that uses customer provided key material.

A customer managed CMK is a KMS key that you create, own, and manage in your AWS account. You have full control over the key configuration, permissions, rotation, and deletion. You can use a customer managed CMK to encrypt and decrypt data in AWS services that are integrated with AWS KMS, such as Amazon EBS1.

A customer managed CMK can use either AWS provided key material or customer provided key material. AWS provided key material is generated by AWS KMS and never leaves the service unencrypted. Customer provided key material is generated outside of AWS KMS and imported into a customer managed CMK. You can specify an expiration date for the imported key material, after which the CMK becomes unusable until you reimport new key material2.

To meet the criterion of automatically expiring the key material in 90 days, you need to use customer provided key material and set the expiration date accordingly. This way, you can ensure that the data encrypted with the CMK will not be accessible after 90 days unless you reimport new key material and re-encrypt the data.

The other options are incorrect for the following reasons:

B. A customer managed CMK that uses AWS provided key material does not expire automatically. You can enable automatic rotation of the key material every year, but this does not prevent access to the data encrypted with the previous key material. You would need to manually delete the CMK and its backing key material to make the data inaccessible3.

C. An AWS managed CMK is a KMS key that is created, owned, and managed by an AWS service on your behalf. You have limited control over the key configuration, permissions, rotation, and deletion. You cannot use an AWS managed CMK to encrypt data in other AWS services or applications. You also cannot set an expiration date for the key material of an AWS managed CMK4.

D. Operating system-native encryption that uses GnuPG is not a solution that uses AWS KMS. GnuPG is a command line tool that implements the OpenPGP standard for encrypting and signing data. It does not integrate with Amazon EBS or other AWS services. It also does not provide a way to automatically expire the key material used for encryption5.

References:
1: Customer Managed Keys - AWS Key Management Service
2: Importing Key Material in AWS Key Management Service (AWS KMS) - AWS Key Management Service
3: Rotating Customer Master Keys - AWS Key Management Service
4: AWS Managed Keys - AWS Key Management Service
5: The GNU Privacy Guard
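As a rough sketch of the full workflow behind option A (the key ID, description, and dates below are placeholders), you would first create a CMK with no key material, download the wrapping key, and then import your own material with an expiration about 90 days out:

    # Create a CMK whose key material must be imported (origin EXTERNAL).
    aws kms create-key \
        --origin EXTERNAL \
        --description "EBS volume key with imported, expiring key material"

    # Download the public wrapping key and import token for the new CMK.
    aws kms get-parameters-for-import \
        --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
        --wrapping-algorithm RSAES_OAEP_SHA_256 \
        --wrapping-key-spec RSA_2048

    # After encrypting your key material with the wrapping key, run the
    # import-key-material command shown above with --valid-to set to a
    # timestamp roughly 90 days in the future, for example:
    #   date -u -d "+90 days" +%Y-%m-%dT%H:%M:%SZ   (GNU date)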
Question # 53
A security engineer is trying to use Amazon EC2 Image Builder to create an image of an EC2 instance. The security engineer has configured the pipeline to send logs to an Amazon S3 bucket. When the security engineer runs the pipeline, the build fails with the following error: “AccessDenied: Access Denied status code: 403”. The security engineer must resolve the error by implementing a solution that complies with best practices for least privilege access. Which combination of steps will meet these requirements? (Choose two.)
A. Ensure that the following policies are attached to the IAM role that the security engineer is using: EC2InstanceProfileForImageBuilder, EC2InstanceProfileForImageBuilderECRContainerBuilds, and AmazonSSMManagedInstanceCore.
B. Ensure that the following policies are attached to the instance profile for the EC2 instance: EC2InstanceProfileForImageBuilder, EC2InstanceProfileForImageBuilderECRContainerBuilds, and AmazonSSMManagedInstanceCore.
C. Ensure that the AWSImageBuilderFullAccess policy is attached to the instance profile for the EC2 instance.
D. Ensure that the security engineer’s IAM role has the s3:PutObject permission for the S3 bucket.
E. Ensure that the instance profile for the EC2 instance has the s3:PutObject permission for the S3 bucket.
Answer: B,E

Explanation:
The most likely cause of the error is that the instance profile for the EC2 instance does not have the s3:PutObject permission for the S3 bucket. This permission is needed to upload logs to the bucket. Therefore, the security engineer should ensure that the instance profile has this permission.

One possible solution is to attach the AWSImageBuilderFullAccess policy to the instance profile for the EC2 instance. This policy grants full access to Image Builder resources and related AWS services, including the s3:PutObject permission for any bucket with "imagebuilder" in its name. However, this policy may grant more permissions than necessary, which violates the principle of least privilege.

Another possible solution is to create a custom policy that grants only the s3:PutObject permission for the specific S3 bucket that is used for logging. This policy can be attached to the instance profile along with the other policies that are required for Image Builder functionality: EC2InstanceProfileForImageBuilder, EC2InstanceProfileForImageBuilderECRContainerBuilds, and AmazonSSMManagedInstanceCore. This solution follows the principle of least privilege more closely than the previous one.

In summary, the security engineer should ensure that the policies EC2InstanceProfileForImageBuilder, EC2InstanceProfileForImageBuilderECRContainerBuilds, and AmazonSSMManagedInstanceCore are attached to the instance profile for the EC2 instance, and that the instance profile has the s3:PutObject permission for the S3 bucket.

References:
1: Using managed policies for EC2 Image Builder - EC2 Image Builder
2: PutObject - Amazon Simple Storage Service
3: AWSImageBuilderFullAccess - AWS Managed Policy
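A least-privilege version of the logging permission from option E might look like the following inline policy sketch. The role name ImageBuilderInstanceRole and bucket name amzn-s3-demo-imagebuilder-logs are hypothetical; substitute the instance profile role and log bucket from your own pipeline:

    # Grant only s3:PutObject, and only on the pipeline's log bucket.
    aws iam put-role-policy \
        --role-name ImageBuilderInstanceRole \
        --policy-name ImageBuilderLogUpload \
        --policy-document '{
          "Version": "2012-10-17",
          "Statement": [{
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-imagebuilder-logs/*"
          }]
        }'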
Question # 54
A company has contracted with a third party to audit several AWS accounts. To enable the audit, cross-account IAM roles have been created in each account targeted for audit. The Auditor is having trouble accessing some of the accounts. Which of the following may be causing this problem? (Choose three.)
A. The external ID used by the Auditor is missing or incorrect.
B. The Auditor is using the incorrect password.
C. The Auditor has not been granted sts:AssumeRole for the role in the destination account.
D. The Amazon EC2 role used by the Auditor must be set to the destination account role.
E. The secret key used by the Auditor is missing or incorrect.
F. The role ARN used by the Auditor is missing or incorrect.
Answer: A,C,F

Explanation:
The following may be causing the problem for the Auditor:

A. The external ID used by the Auditor is missing or incorrect. This is a possible cause, because the external ID is a unique identifier that is used to establish a trust relationship between the accounts. The external ID must match the one that is specified in the role’s trust policy in the destination account1.

C. The Auditor has not been granted sts:AssumeRole for the role in the destination account. This is a possible cause, because sts:AssumeRole is the API action that allows the Auditor to assume the cross-account role and obtain temporary credentials. The Auditor must have an IAM policy that allows them to call sts:AssumeRole for the role ARN in the destination account2.

F. The role ARN used by the Auditor is missing or incorrect. This is a possible cause, because the role ARN is the Amazon Resource Name of the cross-account role that the Auditor wants to assume. The role ARN must be valid and exist in the destination account3.
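For illustration, a trust policy on the destination account's audit role that enforces the external ID described in option A might look like the sketch below. The role name, auditor account ID, and external ID value are placeholders:

    # Require the auditor's account to present the agreed-upon external ID.
    aws iam update-assume-role-policy \
        --role-name CrossAccountAuditRole \
        --policy-document '{
          "Version": "2012-10-17",
          "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": "example-audit-id"}}
          }]
        }'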
Question # 55
A Security Engineer is working with a Product team building a web application on AWS. The application uses Amazon S3 to host the static content, Amazon API Gateway to provide RESTful services, and Amazon DynamoDB as the backend data store. The users already exist in a directory that is exposed through a SAML identity provider. Which combination of the following actions should the Engineer take to enable users to be authenticated into the web application and call APIs? (Choose three.)
A. Create a custom authorization service using AWS Lambda.
B. Configure a SAML identity provider in Amazon Cognito to map attributes to the Amazon Cognito user pool attributes.
C. Configure the SAML identity provider to add the Amazon Cognito user pool as a relying party.
D. Configure an Amazon Cognito identity pool to integrate with social login providers.
E. Update DynamoDB to store the user email addresses and passwords.
F. Update API Gateway to use a COGNITO_USER_POOLS authorizer.
Answer: B,C,F

Explanation:
The combination of actions that the Engineer should take to enable users to be authenticated into the web application and call APIs is:

B. Configure a SAML identity provider in Amazon Cognito to map attributes to the Amazon Cognito user pool attributes. This is a necessary step to federate the existing users from the SAML identity provider to the Amazon Cognito user pool, which will be used for authentication and authorization1.

C. Configure the SAML identity provider to add the Amazon Cognito user pool as a relying party. This is a necessary step to establish a trust relationship between the SAML identity provider and the Amazon Cognito user pool, which will allow the users to sign in using their existing credentials2.

F. Update API Gateway to use a COGNITO_USER_POOLS authorizer. This is a necessary step to enable API Gateway to use the Amazon Cognito user pool as an authorizer for the RESTful services, which will validate the identity or access tokens that are issued by Amazon Cognito when a user signs in successfully3.

The other options are incorrect because:

A. Creating a custom authorization service using AWS Lambda is not a necessary step, because Amazon Cognito user pools can provide built-in authorization features, such as scopes and groups, that can be used to control access to API resources4.

D. Configuring an Amazon Cognito identity pool to integrate with social login providers is not a necessary step, because the users already exist in a directory that is exposed through a SAML identity provider, and there is no requirement to support social login providers5.

E. Updating DynamoDB to store the user email addresses and passwords is not a necessary step, because the user credentials are already stored in the SAML identity provider, and there is no need to duplicate them in DynamoDB6.

References:
1: Using Tokens with User Pools
2: Adding SAML Identity Providers to a User Pool
3: Control Access to a REST API Using Amazon Cognito User Pools as Authorizer
4: API Authorization with Resource Servers and OAuth 2.0 Scopes
5: Using Identity Pools (Federated Identities)
6: Amazon DynamoDB
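Steps B and F can be sketched with the CLI as follows. The user pool ID, provider name, metadata URL, REST API ID, and account ID are hypothetical, and the SAML attribute name in the mapping depends on what your identity provider emits:

    # Step B: add the SAML identity provider to the user pool and map the
    # email attribute from the SAML assertion to the user pool attribute.
    aws cognito-idp create-identity-provider \
        --user-pool-id us-east-1_EXAMPLE \
        --provider-name CorpSAML \
        --provider-type SAML \
        --provider-details MetadataURL=https://idp.example.com/saml/metadata \
        --attribute-mapping email=http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress

    # Step F: attach the user pool to the REST API as a COGNITO_USER_POOLS
    # authorizer that reads tokens from the Authorization header.
    aws apigateway create-authorizer \
        --rest-api-id a1b2c3d4e5 \
        --name CognitoAuthorizer \
        --type COGNITO_USER_POOLS \
        --provider-arns arn:aws:cognito-idp:us-east-1:111122223333:userpool/us-east-1_EXAMPLE \
        --identity-source method.request.header.Authorization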
Question # 56
A company has an organization with SCPs in AWS Organizations. The root SCP for the organization is as follows:
The company's developers are members of a group that has an IAM policy that allows access to Amazon Simple Email Service (Amazon SES) by allowing ses:* actions. The account is a child of an OU that has an SCP that allows Amazon SES. The developers are receiving a not-authorized error when they try to access Amazon SES through the AWS Management Console. Which change must a security engineer implement so that the developers can access Amazon SES?
A. Add a resource policy that allows each member of the group to access Amazon SES.
B. Add a resource policy that allows "Principal": {"AWS": "arn:aws:iam::accountnumber:group/Dev"}.
C. Remove the AWS Control Tower control (guardrail) that restricts access to Amazon SES.
D. Remove Amazon SES from the root SCP.
Answer: D

Explanation:
The correct answer is D. Remove Amazon SES from the root SCP.

This answer is correct because the root SCP is the most restrictive policy that applies to all accounts in the organization. The root SCP explicitly denies access to Amazon SES by using the NotAction element, which means that any action that is not listed in the element is denied. Therefore, removing Amazon SES from the root SCP will allow the developers to access it, as long as there are no other SCPs or IAM policies that deny it.

The other options are incorrect because:

A. Adding a resource policy that allows each member of the group to access Amazon SES is not a solution, because resource policies are not supported by Amazon SES1. Resource policies are policies that are attached to AWS resources, such as S3 buckets or SNS topics, to control access to those resources2. Amazon SES does not have any resources that can have resource policies attached to them.

B. Adding a resource policy that allows “Principal”: {“AWS”: “arn:aws:iam::accountnumber:group/Dev”} is not a solution, because resource policies do not support IAM groups as principals3. Principals are entities that can perform actions on AWS resources, such as IAM users, roles, or AWS accounts4. IAM groups are not principals, but collections of IAM users that share the same permissions5.

C. Removing the AWS Control Tower control (guardrail) that restricts access to Amazon SES is not a solution, because AWS Control Tower does not have any guardrails that restrict access to Amazon SES6. Guardrails are high-level rules that govern the overall behavior of an organization’s accounts7. AWS Control Tower provides a set of predefined guardrails that cover security, compliance, and operations domains8.

References:
1: Amazon Simple Email Service endpoints and quotas
2: Resource-based policies and IAM policies
3: Specifying a principal in a policy
4: Policy elements: Principal
5: IAM groups
6: AWS Control Tower guardrails reference
7: AWS Control Tower concepts
8: AWS Control Tower guardrails
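Since the question's actual root SCP is not reproduced here, the following is a hypothetical reconstruction of the NotAction pattern it describes, with ses:* added to the NotAction list so that the blanket Deny no longer covers Amazon SES. The policy ID and the rest of the service list are placeholders:

    # Deny every action that is NOT listed in NotAction. Adding "ses:*"
    # removes Amazon SES from the scope of the Deny.
    aws organizations update-policy \
        --policy-id p-examplescp11 \
        --content '{
          "Version": "2012-10-17",
          "Statement": [{
            "Sid": "DenyAllOutsideApprovedServices",
            "Effect": "Deny",
            "NotAction": [
              "ec2:*",
              "s3:*",
              "ses:*"
            ],
            "Resource": "*"
          }]
        }'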
Question # 57
A company is evaluating its security posture. In the past, the company has observed issues with specific hosts and host header combinations that affected the company's business. The company has configured AWS WAF web ACLs as an initial step to mitigate these issues. The company must create a log analysis solution for the AWS WAF web ACLs to monitor problematic activity. The company wants to process all the AWS WAF logs in a central location. The company must have the ability to filter out requests based on specific hosts. A security engineer starts to enable access logging for the AWS WAF web ACLs. What should the security engineer do next to meet these requirements with the MOST operational efficiency?
A. Specify Amazon Redshift as the destination for the access logs. Deploy the Amazon Athena Redshift connector. Use Athena to query the data from Amazon Redshift and to filter the logs by host.
B. Specify Amazon CloudWatch as the destination for the access logs. Use Amazon CloudWatch Logs Insights to design a query to filter the logs by host.
C. Specify Amazon CloudWatch as the destination for the access logs. Export the CloudWatch logs to an Amazon S3 bucket. Use Amazon Athena to query the logs and to filter the logs by host.
D. Specify Amazon CloudWatch as the destination for the access logs. Use Amazon Redshift Spectrum to query the logs and to filter the logs by host.
Answer: C

Explanation:
The correct answer is C. Specify Amazon CloudWatch as the destination for the access logs. Export the CloudWatch logs to an Amazon S3 bucket. Use Amazon Athena to query the logs and to filter the logs by host.

According to the AWS documentation1, AWS WAF offers logging for the traffic that your web ACLs analyze. The logs include information such as the time that AWS WAF received the request from your protected AWS resource, detailed information about the request, and the action setting for the rule that the request matched. You can send your logs to an Amazon CloudWatch Logs log group, an Amazon Simple Storage Service (Amazon S3) bucket, or an Amazon Kinesis Data Firehose.

To create a log analysis solution for the AWS WAF web ACLs, you can use Amazon Athena, which is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL2. You can use Athena to query and filter the AWS WAF logs by host or any other criteria. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.

To use Athena with AWS WAF logs, you need to export the CloudWatch logs to an S3 bucket. You can do this by creating a subscription filter that sends your log events to a Kinesis Data Firehose delivery stream, which then delivers the data to an S3 bucket3. Alternatively, you can use AWS DMS to migrate your CloudWatch logs to S34.

After you have exported your CloudWatch logs to S3, you can create a table in Athena that points to your S3 bucket and use the AWS service log format that matches your log schema5. For example, if you are using JSON format for your AWS WAF logs, you can use the AWSJSONSerDe serde. Then you can run SQL queries on your Athena table and filter the results by host or any other field in your log data.

Therefore, this solution meets the requirement of creating a log analysis solution for the AWS WAF web ACLs with the most operational efficiency. It does not require setting up any additional infrastructure or services, and it leverages the existing capabilities of CloudWatch, S3, and Athena.

The other options are incorrect because:

A. Specifying Amazon Redshift as the destination for the access logs is not possible, because AWS WAF does not support sending logs directly to Redshift. You would need to use an intermediate service such as Kinesis Data Firehose or AWS DMS to load the data from CloudWatch or S3 to Redshift. Deploying the Amazon Athena Redshift connector is not necessary, because you can query Redshift data directly from Athena without using a connector6. This solution would also incur the additional costs and operational overhead of managing a Redshift cluster.

B. Specifying Amazon CloudWatch as the destination for the access logs is possible, but using Amazon CloudWatch Logs Insights to design a query to filter the logs by host is not efficient or scalable. CloudWatch Logs Insights is a feature that enables you to interactively search and analyze your log data in CloudWatch Logs7. However, CloudWatch Logs Insights has some limitations, such as a maximum query duration of 20 minutes, a maximum of 20 log groups per query, and a maximum retention period of 24 months8. These limitations may affect your ability to perform complex and long-running analysis on your AWS WAF logs.

D. Specifying Amazon CloudWatch as the destination for the access logs is possible, but using Amazon Redshift Spectrum to query the logs and filter them by host is not efficient or cost-effective. Redshift Spectrum is a feature of Amazon Redshift that enables you to run queries against exabytes of data in S3 without loading or transforming any data9. However, Redshift Spectrum requires a Redshift cluster to process the queries, which adds costs and operational overhead. Redshift Spectrum also charges you based on the number of bytes scanned by each query, which can be expensive if you have large volumes of log data10.

References:
1: Logging AWS WAF web ACL traffic - Amazon Web Services
2: What Is Amazon Athena? - Amazon Athena
3: Streaming CloudWatch Logs Data to Amazon S3 - Amazon CloudWatch Logs
4: Migrate data from CloudWatch Logs using AWS Database Migration Service - AWS Database Migration Service
5: Querying AWS service logs - Amazon Athena
6: Querying data from Amazon Redshift - Amazon Athena
7: Analyzing log data with CloudWatch Logs Insights - Amazon CloudWatch Logs
8: CloudWatch Logs Insights quotas - Amazon CloudWatch
9: Querying external data using Amazon Redshift Spectrum - Amazon Redshift
10: Amazon Redshift Spectrum pricing - Amazon Redshift
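Once the logs are in S3 and a table is defined over them, a host filter might look like the sketch below. The database name waf_logs_db, table name waf_logs, host value, and results bucket are placeholders; the query relies on the documented AWS WAF log schema, in which request headers are an array of name/value pairs:

    # Filter WAF log records by the Host request header.
    aws athena start-query-execution \
        --query-string "SELECT httprequest.clientip, httprequest.uri, action \
    FROM waf_logs \
    CROSS JOIN UNNEST(httprequest.headers) AS t(header) \
    WHERE header.name = 'Host' AND header.value = 'app.example.com' \
    LIMIT 100" \
        --query-execution-context Database=waf_logs_db \
        --result-configuration OutputLocation=s3://amzn-s3-demo-athena-results/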
Question # 58
A company uses AWS Organizations. The company wants to implement short-term credentials for third-party AWS accounts to use to access accounts within the company's organization. Access is for the AWS Management Console and third-party software-as-a-service (SaaS) applications. Trust must be enhanced to prevent two external accounts from using the same credentials. The solution must require the least possible operational effort. Which solution will meet these requirements?
A. Use a bearer token authentication with OAuth or SAML to manage and share a central Amazon Cognito user pool across multiple Amazon API Gateway APIs.
B. Implement AWS IAM Identity Center (AWS Single Sign-On), and use an identity source of choice. Grant access to users and groups from other accounts by using permission sets that are assigned by account.
C. Create a unique IAM role for each external account. Create a trust policy. Use AWS Secrets Manager to create a random external key.
D. Create a unique IAM role for each external account. Create a trust policy that includes a condition that uses the sts:ExternalId condition key.
Answer: D

Explanation:
The correct answer is D.

To implement short-term credentials for third-party AWS accounts, you can use IAM roles and trust policies. A trust policy is a JSON policy document that defines who can assume the role. You can specify the AWS account ID of the third-party account as a principal in the trust policy, and use the sts:ExternalId condition key to enhance the security of the role. The sts:ExternalId condition key is a unique identifier that is agreed upon by both parties and included in the AssumeRole request. This way, you can prevent the “confused deputy” problem, where an unauthorized party can use the same role as a legitimate party.

Option A is incorrect because bearer token authentication with OAuth or SAML is not suitable for granting access to AWS accounts and resources. Amazon Cognito and API Gateway are used for building web and mobile applications that require user authentication and authorization.

Option B is incorrect because AWS IAM Identity Center (AWS Single Sign-On) is a service that simplifies the management of access to multiple AWS accounts and cloud applications for your workforce users. It does not support granting access to third-party AWS accounts.

Option C is incorrect because using AWS Secrets Manager to create a random external key is not necessary and adds operational complexity. You can use the sts:ExternalId condition key instead to provide a unique identifier for each external account.
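On the caller's side, the third party would pass the agreed external ID when assuming the role, as in this sketch (the role ARN, session name, and external ID are placeholders); the call fails with AccessDenied if the value does not match the sts:ExternalId condition in the trust policy:

    # Assume the cross-account role, presenting the shared external ID.
    aws sts assume-role \
        --role-arn arn:aws:iam::222233334444:role/PartnerAccessRole \
        --role-session-name partner-audit-session \
        --external-id example-external-id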
Question # 59
A company uses AWS Organizations to manage several AWS accounts. The company processes a large volume of sensitive data. The company uses a serverless approach to microservices. The company stores all the data in either Amazon S3 or Amazon DynamoDB. The company reads the data by using either AWS Lambda functions or container-based services that the company hosts on Amazon Elastic Kubernetes Service (Amazon EKS) on AWS Fargate. The company must implement a solution to encrypt all the data at rest and enforce least privilege data access controls. The company creates an AWS Key Management Service (AWS KMS) customer managed key. What should the company do next to meet these requirements?
A. Create a key policy that allows the kms:Decrypt action only for Amazon S3 and DynamoDB. Create an SCP that denies the creation of S3 buckets and DynamoDB tables that are not encrypted with the key.
B. Create an IAM policy that denies the kms:Decrypt action for the key. Create a Lambda function that runs on a schedule to attach the policy to any new roles. Create an AWS Config rule to send alerts for resources that are not encrypted with the key.
C. Create a key policy that allows the kms:Decrypt action only for Amazon S3, DynamoDB, Lambda, and Amazon EKS. Create an SCP that denies the creation of S3 buckets and DynamoDB tables that are not encrypted with the key.
D. Create a key policy that allows the kms:Decrypt action only for Amazon S3, DynamoDB, Lambda, and Amazon EKS. Create an AWS Config rule to send alerts for resources that are not encrypted with the key.
Answer: B
Question # 60
A security engineer is creating an AWS Lambda function. The Lambda function needs to use a role that is named LambdaAuditRole to assume a role that is named AcmeAuditFactoryRole in a different AWS account. When the code is processed, the following error message appears: "An error occurred (AccessDenied) when calling the AssumeRole operation." Which combination of steps should the security engineer take to resolve this error? (Select TWO.)
A. Ensure that LambdaAuditRole has the sts:AssumeRole permission for AcmeAuditFactoryRole.
B. Ensure that LambdaAuditRole has the AWSLambdaBasicExecutionRole managed policy attached.
C. Ensure that the trust policy for AcmeAuditFactoryRole allows the sts:AssumeRole action from LambdaAuditRole.
D. Ensure that the trust policy for LambdaAuditRole allows the sts:AssumeRole action from the lambda.amazonaws.com service.
E. Ensure that the sts:AssumeRole API call is being issued to the us-east-1 Region endpoint.