A company hosts a public website on an Amazon EC2 instance. HTTPS traffic must be able to access the website. The company uses SSH for management of the web server. The website is on the subnet 10.0.1.0/24. The management subnet is 192.168.100.0/24. A security engineer must create a security group for the EC2 instance. Which combination of steps should the security engineer take to meet these requirements in the MOST secure manner? (Select TWO.)
A. Allow port 22 from source 0.0.0.0/0.
B. Allow port 443 from source 0.0.0.0/0.
C. Allow port 22 from 192.168.100.0/24.
D. Allow port 22 from 10.0.1.0/24.
E. Allow port 443 from 10.0.1.0/24.
Answer: B, C
Explanation: The correct answers are B and C.
B. Allow port 443 from source 0.0.0.0/0. This is correct because port 443 is used for HTTPS traffic, which must be able to access the website from any source IP address.
C. Allow port 22 from 192.168.100.0/24. This is correct because port 22 is used for SSH, which is the management protocol for the web server. The management subnet is 192.168.100.0/24, so only this subnet should be allowed to access port 22.
A. Allow port 22 from source 0.0.0.0/0. This is incorrect because it would allow anyone to access port 22, which is a security risk. SSH should be restricted to the management subnet only.
D. Allow port 22 from 10.0.1.0/24. This is incorrect because it would allow the website subnet to access port 22, which is unnecessary and a security risk. SSH should be restricted to the management subnet only.
E. Allow port 443 from 10.0.1.0/24. This is incorrect because it would limit HTTPS traffic to the website subnet only, which defeats the purpose of having a public website.
Reference: Security groups
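For illustration, here is a minimal boto3 sketch of answers B and C applied to an existing security group. The security group ID is a placeholder, not something given in the question.

```python
import boto3

# Minimal sketch: add the two ingress rules from answers B and C to an
# existing security group. The GroupId below is a hypothetical placeholder.
ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical web server security group
    IpPermissions=[
        {   # B: HTTPS from anywhere
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "Public HTTPS"}],
        },
        {   # C: SSH only from the management subnet
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "192.168.100.0/24", "Description": "SSH from management subnet"}],
        },
    ],
)
```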
Question # 62
A security engineer is configuring a mechanism to send an alert when three or more failed sign-in attempts to the AWS Management Console occur during a 5-minute period. The security engineer creates a trail in AWS CloudTrail to assist in this work. Which solution will meet these requirements?
A. In CloudTrail, turn on Insights events on the trail. Configure an alarm on the insight with eventName matching ConsoleLogin and errorMessage matching “Failed authentication”. Configure a threshold of 3 and a period of 5 minutes.
B. Configure CloudTrail to send events to Amazon CloudWatch Logs. Create a metric filter for the relevant log group. Create a filter pattern with eventName matching ConsoleLogin and errorMessage matching “Failed authentication”. Create a CloudWatch alarm with a threshold of 3 and a period of 5 minutes.
C. Create an Amazon Athena table from the CloudTrail events. Run a query for eventName matching ConsoleLogin and for errorMessage matching “Failed authentication”. Create a notification action from the query to send an Amazon Simple Notification Service (Amazon SNS) notification when the count equals 3 within a period of 5 minutes.
D. In AWS Identity and Access Management Access Analyzer, create a new analyzer. Configure the analyzer to send an Amazon Simple Notification Service (Amazon SNS) notification when a failed sign-in event occurs 3 times for any IAM user within a period of 5 minutes.
Answer: B
Explanation: The correct answer is B. Configure CloudTrail to send events to Amazon CloudWatch Logs. Create a metric filter for the relevant log group. Create a filter pattern with eventName matching ConsoleLogin and errorMessage matching “Failed authentication”. Create a CloudWatch alarm with a threshold of 3 and a period of 5 minutes.
This answer is correct because it meets the requirement of sending an alert when three or more failed sign-in attempts to the AWS Management Console occur during a 5-minute period. By configuring CloudTrail to send events to CloudWatch Logs, the security engineer can create a metric filter that matches the desired pattern of failed sign-in events. Then, by creating a CloudWatch alarm based on the metric filter, the security engineer can set a threshold of 3 and a period of 5 minutes, and choose an action such as sending an email or an Amazon Simple Notification Service (Amazon SNS) message when the alarm is triggered [1][2].
The other options are incorrect because:
A. Turning on Insights events on the trail and configuring an alarm on the insight is not a solution, because Insights events are used to analyze unusual activity in management events, such as spikes in API call volume or error rates. Insights events do not capture failed sign-in attempts to the AWS Management Console [3].
C. Creating an Amazon Athena table from the CloudTrail events and running a query for failed sign-in events is not a solution, because it does not provide a mechanism to send an alert based on the query results. Amazon Athena is an interactive query service that allows analyzing data in Amazon S3 using standard SQL, but it does not support creating notifications or alarms from queries [4].
D. Creating an analyzer in AWS Identity and Access Management Access Analyzer and configuring it to send an Amazon SNS notification when a failed sign-in event occurs 3 times for any IAM user within a period of 5 minutes is not a solution, because IAM Access Analyzer is not a service that monitors sign-in events, but a service that helps identify resources that are shared with external entities. IAM Access Analyzer does not generate findings for failed sign-in attempts to the AWS Management Console [5].
References:
[1] Sending CloudTrail Events to CloudWatch Logs - AWS CloudTrail
[2] Creating Alarms Based on Metric Filters - Amazon CloudWatch
[3] Analyzing unusual activity in management events - AWS CloudTrail
[4] What is Amazon Athena? - Amazon Athena
[5] Using AWS Identity and Access Management Access Analyzer - AWS Identity and Access Management
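As a hedged sketch of answer B, the following boto3 snippet creates the metric filter and the alarm. The log group name and SNS topic ARN are assumed placeholders; the filter pattern follows the commonly documented console sign-in failure pattern.

```python
import boto3

# Sketch of answer B, assuming the trail already delivers events to a
# CloudWatch Logs log group (name below is a placeholder) and that an SNS
# topic ARN exists for notifications (also a placeholder).
logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

logs.put_metric_filter(
    logGroupName="CloudTrail/DefaultLogGroup",
    filterName="ConsoleSigninFailures",
    filterPattern='{ ($.eventName = "ConsoleLogin") && ($.errorMessage = "Failed authentication") }',
    metricTransformations=[{
        "metricName": "ConsoleSigninFailureCount",
        "metricNamespace": "CloudTrailMetrics",
        "metricValue": "1",
    }],
)

cloudwatch.put_metric_alarm(
    AlarmName="ConsoleSigninFailures",
    MetricName="ConsoleSigninFailureCount",
    Namespace="CloudTrailMetrics",
    Statistic="Sum",
    Period=300,                 # 5-minute period
    EvaluationPeriods=1,
    Threshold=3,                # alert at 3 or more failures
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:security-alerts"],  # hypothetical topic
)
```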
Question # 63
A company is using AWS Organizations to implement a multi-account strategy. The company does not have on-premises infrastructure. All workloads run on AWS. The company currently has eight member accounts. The company anticipates that it will have no more than 20 AWS accounts total at any time. The company issues a new security policy that contains the following requirements:
• No AWS account should use a VPC within the AWS account for workloads.
• The company should use a centrally managed VPC that all AWS accounts can access to launch workloads in subnets.
• No AWS account should be able to modify another AWS account's application resources within the centrally managed VPC.
• The centrally managed VPC should reside in an existing AWS account that is named Account-A within an organization.
The company uses an AWS CloudFormation template to create a VPC that contains multiple subnets in Account-A. This template exports the subnet IDs through the CloudFormation Outputs section. Which solution will complete the security setup to meet these requirements?
A. Use a CloudFormation template in the member accounts to launch workloads. Configure the template to use the Fn::ImportValue function to obtain the subnet ID values.
B. Use a transit gateway in the VPC within Account-A. Configure the member accounts to use the transit gateway to access the subnets in Account-A to launch workloads.
C. Use AWS Resource Access Manager (AWS RAM) to share Account-A's VPC subnets with the remaining member accounts. Configure the member accounts to use the shared subnets to launch workloads.
D. Create a peering connection between Account-A and the remaining member accounts. Configure the member accounts to use the subnets in Account-A through the VPC peering connection to launch workloads.
Answer: C
Explanation: The correct answer is C. Use AWS Resource Access Manager (AWS RAM) to share Account-A’s VPC subnets with the remaining member accounts. Configure the member accounts to use the shared subnets to launch workloads.
This answer is correct because AWS RAM is a service that helps you securely share your AWS resources across AWS accounts, within your organization or organizational units (OUs), and with IAM roles and users for supported resource types [1]. One of the supported resource types is VPC subnets [2], which means you can share the subnets in Account-A’s VPC with the other member accounts using AWS RAM. This way, you can meet the requirements of using a centrally managed VPC, avoiding duplicate VPCs in each account, and launching workloads in shared subnets. You can also control access to the shared subnets by using IAM policies and resource-based policies [3], which can prevent one account from modifying another account’s resources.
The other options are incorrect because:
A. Using a CloudFormation template in the member accounts to launch workloads and using the Fn::ImportValue function to obtain the subnet ID values is not a solution, because Fn::ImportValue can only import values that have been exported by another stack in the same AWS account and Region [4]. This means the member accounts cannot use Fn::ImportValue to reference the subnet IDs that Account-A’s CloudFormation template exports, and this option does not give the member accounts any way to launch workloads into Account-A’s subnets.
B. Using a transit gateway in the VPC within Account-A and configuring the member accounts to use the transit gateway to access the subnets in Account-A to launch workloads is not a solution, because a transit gateway does not allow you to launch workloads in another account’s subnets. A transit gateway is a network transit hub that enables you to route traffic between your VPCs and on-premises networks [5], but it does not enable you to share subnets across accounts.
D. Creating a peering connection between Account-A and the remaining member accounts and configuring the member accounts to use the subnets in Account-A through the VPC peering connection to launch workloads is not a solution, because a VPC peering connection does not allow you to launch workloads in another account’s subnets. A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them privately [6], but it does not enable you to share subnets across accounts.
References:
[1] What is AWS Resource Access Manager?
[2] Shareable AWS resources
[3] Managing permissions for shared resources
[4] Fn::ImportValue
[5] What is a transit gateway?
[6] What is VPC peering?
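A minimal boto3 sketch of answer C, assuming hypothetical subnet ARNs in Account-A and sharing with the whole organization, might look like this:

```python
import boto3

# Sketch of answer C: share Account-A's subnets with the organization via
# AWS RAM. The subnet ARNs and organization ARN below are placeholders.
ram = boto3.client("ram")

ram.create_resource_share(
    name="central-vpc-subnets",
    resourceArns=[
        "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0abc1234def567890",
        "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0123456789abcdef0",
    ],
    principals=["arn:aws:organizations::111122223333:organization/o-exampleorgid"],
    allowExternalPrincipals=False,  # keep the share inside the organization
)
```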
Question # 64
A Security Engineer is asked to update an AWS CloudTrail log file prefix for an existing trail. When attempting to save the change in the CloudTrail console, the Security Engineer receives the following error message: `There is a problem with the bucket policy.` What will enable the Security Engineer to save the change?
A. Create a new trail with the updated log file prefix, and then delete the original trail. Update the existing bucket policy in the Amazon S3 console with the new log file prefix, and then update the log file prefix in the CloudTrail console.
B. Update the existing bucket policy in the Amazon S3 console to allow the Security Engineer's Principal to perform PutBucketPolicy, and then update the log file prefix in the CloudTrail console.
C. Update the existing bucket policy in the Amazon S3 console with the new log file prefix, and then update the log file prefix in the CloudTrail console.
D. Update the existing bucket policy in the Amazon S3 console to allow the Security Engineer's Principal to perform GetBucketPolicy, and then update the log file prefix in the CloudTrail console.
Answer: C
Explanation: The correct answer is C. Update the existing bucket policy in the Amazon S3 console with the new log file prefix, and then update the log file prefix in the CloudTrail console.
According to the AWS documentation [1], a bucket policy is a resource-based policy that you can use to grant access permissions to your Amazon S3 bucket and the objects in it. Only the bucket owner can associate a policy with a bucket. The permissions attached to the bucket apply to all of the objects in the bucket that are owned by the bucket owner.
When you create a trail in CloudTrail, you can specify an existing S3 bucket or create a new one to store your log files. CloudTrail automatically creates a bucket policy for your S3 bucket that grants CloudTrail write-only access to deliver log files to your bucket. The bucket policy also grants read-only access to AWS services that you can use to view and analyze your log data, such as Amazon Athena, Amazon CloudWatch Logs, and Amazon QuickSight.
If you want to update the log file prefix for an existing trail, you must also update the existing bucket policy in the S3 console with the new log file prefix. The log file prefix is part of the resource ARN that identifies the objects in your bucket that CloudTrail can access. If you don’t update the bucket policy with the new log file prefix, CloudTrail will not be able to deliver log files to your bucket, and you will receive an error message when you try to save the change in the CloudTrail console.
The other options are incorrect because:
A. Creating a new trail with the updated log file prefix, and then deleting the original trail is not necessary and may cause data loss or inconsistency. You can simply update the existing trail and its associated bucket policy with the new log file prefix.
B. Updating the existing bucket policy in the S3 console to allow the Security Engineer’s Principal to perform PutBucketPolicy is not relevant to this issue. The PutBucketPolicy action allows you to create or replace a policy on a bucket, but it does not affect CloudTrail’s ability to deliver log files to your bucket. You still need to update the existing bucket policy with the new log file prefix.
D. Updating the existing bucket policy in the S3 console to allow the Security Engineer’s Principal to perform GetBucketPolicy is not relevant to this issue. The GetBucketPolicy action allows you to retrieve a policy on a bucket, but it does not affect CloudTrail’s ability to deliver log files to your bucket. You still need to update the existing bucket policy with the new log file prefix.
References:
[1] Using bucket policies - Amazon Simple Storage Service
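To make the prefix-to-Resource-ARN relationship concrete, here is a hedged boto3 sketch that fetches the bucket policy, rewrites the CloudTrail write statement for a new prefix, and saves it back. The bucket name, new prefix, and account ID are placeholders, and the Sid AWSCloudTrailWrite matches the default policy CloudTrail creates; a customized policy may use a different Sid, so a real policy should be inspected before editing.

```python
import boto3
import json

# Illustrative only: the question's actual bucket policy is not shown, so this
# sketch assumes the standard CloudTrail-generated policy. Placeholders below.
s3 = boto3.client("s3")
bucket = "DOC-EXAMPLE-BUCKET"
new_prefix = "new-prefix"          # hypothetical updated log file prefix
account_id = "111122223333"        # hypothetical account ID

policy = json.loads(s3.get_bucket_policy(Bucket=bucket)["Policy"])
for statement in policy["Statement"]:
    if statement.get("Sid") == "AWSCloudTrailWrite":
        # The log file prefix is part of the object ARN that CloudTrail writes to.
        statement["Resource"] = f"arn:aws:s3:::{bucket}/{new_prefix}/AWSLogs/{account_id}/*"

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```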
Question # 65
A company needs complete encryption of the traffic between external users and an application. The company hosts the application on a fleet of Amazon EC2 instances that run in an Auto Scaling group behind an Application Load Balancer (ALB). How can a security engineer meet these requirements?
A. Create a new Amazon-issued certificate in AWS Secrets Manager. Export the certificate from Secrets Manager. Import the certificate into the ALB and the EC2 instances.
B. Create a new Amazon-issued certificate in AWS Certificate Manager (ACM). Associate the certificate with the ALB. Export the certificate from ACM. Install the certificate on the EC2 instances.
C. Import a new third-party certificate into AWS Identity and Access Management (IAM). Export the certificate from IAM. Associate the certificate with the ALB and the EC2 instances.
D. Import a new third-party certificate into AWS Certificate Manager (ACM). Associate the certificate with the ALB. Install the certificate on the EC2 instances.
Answer: D
Explanation: The correct answer is D. Import a new third-party certificate into AWS Certificate Manager (ACM). Associate the certificate with the ALB. Install the certificate on the EC2 instances.
This answer is correct because it meets the requirement of complete encryption of the traffic between external users and the application. By importing a third-party certificate into ACM, the security engineer can use it to secure the communication between the ALB and the clients. By installing the same certificate on the EC2 instances, the security engineer can also secure the communication between the ALB and the instances. This way, both the front-end and back-end connections are encrypted with SSL/TLS [1].
The other options are incorrect because:
A. Creating a new Amazon-issued certificate in AWS Secrets Manager is not a solution, because AWS Secrets Manager is not a service for issuing certificates, but for storing and managing secrets such as database credentials and API keys [2].
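A rough boto3 sketch of the ACM and ALB portion of answer D follows. The certificate file names, load balancer ARN, and target group ARN are placeholders; installing the certificate on the EC2 instances for the back-end TLS listener is done in the web server configuration, not through an API call.

```python
import boto3

# Sketch of answer D: import a third-party certificate into ACM and attach it
# to the ALB's HTTPS listener. All ARNs and file paths are hypothetical.
acm = boto3.client("acm")
elbv2 = boto3.client("elbv2")

def read_bytes(path: str) -> bytes:
    with open(path, "rb") as f:
        return f.read()

imported = acm.import_certificate(
    Certificate=read_bytes("certificate.pem"),       # placeholder file names
    PrivateKey=read_bytes("private-key.pem"),
    CertificateChain=read_bytes("chain.pem"),
)

elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web/50dc6c495c0c9188",
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": imported["CertificateArn"]}],
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/943f017f100becff",  # hypothetical
    }],
)
```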
Question # 66
A company is using Amazon Elastic Container Service (Amazon ECS) to run its container-based application on AWS. The company needs to ensure that the container images contain no severe vulnerabilities. The company also must ensure that only specific IAM roles and specific AWS accounts can access the container images. Which solution will meet these requirements with the LEAST management overhead?
A. Pull images from the public container registry. Publish the images to Amazon Elastic Container Registry (Amazon ECR) repositories with scan on push configured in a centralized AWS account. Use a CI/CD pipeline to deploy the images to different AWS accounts. Use identity-based policies to restrict access to which IAM principals can access the images.
B. Pull images from the public container registry. Publish the images to a private container registry that is hosted on Amazon EC2 instances in a centralized AWS account. Deploy host-based container scanning tools to EC2 instances that run Amazon ECS. Restrict access to the container images by using basic authentication over HTTPS.
C. Pull images from the public container registry. Publish the images to Amazon Elastic Container Registry (Amazon ECR) repositories with scan on push configured in a centralized AWS account. Use a CI/CD pipeline to deploy the images to different AWS accounts. Use repository policies and identity-based policies to restrict access to which IAM principals and accounts can access the images.
D. Pull images from the public container registry. Publish the images to AWS CodeArtifact repositories in a centralized AWS account. Use a CI/CD pipeline to deploy the images to different AWS accounts. Use repository policies and identity-based policies to restrict access to which IAM principals and accounts can access the images.
Answer: C
Explanation: The correct answer is C. Pull images from the public container registry. Publish the images to Amazon Elastic Container Registry (Amazon ECR) repositories with scan on push configured in a centralized AWS account. Use a CI/CD pipeline to deploy the images to different AWS accounts. Use repository policies and identity-based policies to restrict access to which IAM principals and accounts can access the images.
This solution meets the requirements because:
Amazon ECR is a fully managed container registry service that supports Docker and OCI images and artifacts [1]. It integrates with Amazon ECS and other AWS services to simplify the development and deployment of container-based applications.
Amazon ECR provides image scanning on push, which uses the Common Vulnerabilities and Exposures (CVEs) database from the open-source Clair project to detect software vulnerabilities in container images [2]. The scan results are available in the AWS Management Console, AWS CLI, or AWS SDKs [2].
Amazon ECR supports cross-account access to repositories, which allows sharing images across multiple AWS accounts [3]. This can be achieved by using repository policies, which are resource-based policies that specify which IAM principals and accounts can access the repositories and what actions they can perform [4]. Additionally, identity-based policies can be used to control which IAM roles in each account can access the repositories [5].
The other options are incorrect because:
A. This option does not use repository policies to restrict cross-account access to the images, which is a requirement. Identity-based policies alone are not sufficient to control access to Amazon ECR repositories [5].
B. This option does not use Amazon ECR, which is a fully managed service that provides image scanning and cross-account access features. Hosting a private container registry on EC2 instances would require more management overhead and additional security measures.
D. This option uses AWS CodeArtifact, which is a fully managed artifact repository service that supports Maven, npm, NuGet, PyPI, and generic package formats [6]. However, AWS CodeArtifact does not support Docker or OCI container images, which are required for Amazon ECS applications.
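As a sketch of answer C, assuming a hypothetical repository name, member account ID, and role name, the centralized account could create the repository with scan on push and restrict pulls with a repository policy like this:

```python
import boto3
import json

# Sketch of answer C: an ECR repository with scan-on-push enabled and a
# repository policy limiting pulls to a specific role in a member account.
# The account ID and role name are placeholders.
ecr = boto3.client("ecr")

ecr.create_repository(
    repositoryName="app-images",
    imageScanningConfiguration={"scanOnPush": True},
)

repository_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowPullFromWorkloadAccount",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::222233334444:role/EcsTaskExecutionRole"},
        "Action": [
            "ecr:GetDownloadUrlForLayer",
            "ecr:BatchGetImage",
            "ecr:BatchCheckLayerAvailability",
        ],
    }],
}

ecr.set_repository_policy(
    repositoryName="app-images",
    policyText=json.dumps(repository_policy),
)
```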
Question # 67
A company uses infrastructure as code (IaC) to create AWS infrastructure. The company writes the code as AWS CloudFormation templates to deploy the infrastructure. The company has an existing CI/CD pipeline that the company can use to deploy these templates. After a recent security audit, the company decides to adopt a policy-as-code approach to improve the company's security posture on AWS. The company must prevent the deployment of any infrastructure that would violate a security policy, such as an unencrypted Amazon Elastic Block Store (Amazon EBS) volume. Which solution will meet these requirements?
A. Turn on AWS Trusted Advisor. Configure security notifications as webhooks in the preferences section of the CI/CD pipeline.
B. Turn on AWS Config. Use the prebuilt rules or customized rules. Subscribe the CI/CD pipeline to an Amazon Simple Notification Service (Amazon SNS) topic that receives notifications from AWS Config.
C. Create rule sets in AWS CloudFormation Guard. Run validation checks for CloudFormation templates as a phase of the CI/CD process.
D. Create rule sets as SCPs. Integrate the SCPs as a part of validation control in a phase of the CI/CD process.
Answer: C
Explanation: The correct answer is C. Create rule sets in AWS CloudFormation Guard. Run validation checks for CloudFormation templates as a phase of the CI/CD process.
This answer is correct because AWS CloudFormation Guard is a tool that helps you implement policy-as-code for your CloudFormation templates. You can use Guard to write rules that define your security policies, such as requiring encryption for EBS volumes, and then validate your templates against those rules before deploying them. You can integrate Guard into your CI/CD pipeline as a step that runs the validation checks and prevents the deployment of any non-compliant templates [1][2].
The other options are incorrect because:
A. Turning on AWS Trusted Advisor and configuring security notifications as webhooks in the preferences section of the CI/CD pipeline is not a solution, because AWS Trusted Advisor is not a policy-as-code tool, but a service that provides recommendations to help you follow AWS best practices. Trusted Advisor does not allow you to define your own security policies or validate your CloudFormation templates against them [3].
B. Turning on AWS Config and using the prebuilt or customized rules is not a solution, because AWS Config is not a policy-as-code tool, but a service that monitors and records the configuration changes of your AWS resources. AWS Config does not allow you to validate your CloudFormation templates before deploying them, but only evaluates the compliance of your resources after they are created [4].
D. Creating rule sets as SCPs and integrating them as a part of validation control in a phase of the CI/CD process is not a solution, because SCPs are not policy-as-code tools, but policies that you can use to manage permissions in your AWS Organizations. SCPs do not allow you to validate your CloudFormation templates, but only restrict the actions that users and roles can perform in your accounts [5].
References:
[1] What is AWS CloudFormation Guard?
[2] Introducing AWS CloudFormation Guard 2.0
[3] AWS Trusted Advisor
[4] What Is AWS Config?
[5] Service control policies - AWS Organizations
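CloudFormation Guard rules are written in Guard's own DSL, which is not reproduced here. As a stand-in, the following plain-Python check illustrates the kind of validation a Guard rule for this policy would perform in a CI/CD phase: parse the template, find AWS::EC2::Volume resources that are not encrypted, and fail the build. The template path is a placeholder; a real pipeline would run the cfn-guard CLI against a .guard rules file instead.

```python
import json
import sys

# Not CloudFormation Guard itself: a plain-Python stand-in showing the check
# a Guard rule for unencrypted EBS volumes would encode in a pipeline phase.

def find_unencrypted_volumes(template: dict) -> list[str]:
    """Return logical IDs of AWS::EC2::Volume resources without Encrypted: true."""
    violations = []
    for logical_id, resource in template.get("Resources", {}).items():
        if resource.get("Type") == "AWS::EC2::Volume":
            if resource.get("Properties", {}).get("Encrypted") is not True:
                violations.append(logical_id)
    return violations

if __name__ == "__main__":
    with open("template.json") as f:   # placeholder template path
        template = json.load(f)
    bad = find_unencrypted_volumes(template)
    if bad:
        print(f"Policy violation: unencrypted EBS volumes: {', '.join(bad)}")
        sys.exit(1)  # non-zero exit fails the pipeline phase
    print("All EBS volumes encrypted.")
```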
Question # 68
A company is using Amazon Route 53 Resolver for its hybrid DNS infrastructure. The company has set up Route 53 Resolver forwarding rules for authoritative domains that are hosted on on-premises DNS servers. A new security mandate requires the company to implement a solution to log and query DNS traffic that goes to the on-premises DNS servers. The logs must show details of the source IP address of the instance from which the query originated. The logs also must show the DNS name that was requested in Route 53 Resolver. Which solution will meet these requirements?
A. Use VPC Traffic Mirroring. Configure all relevant elastic network interfaces as the traffic source, include amazon-dns in the mirror filter, and set Amazon CloudWatch Logs as the mirror target. Use CloudWatch Insights on the mirror session logs to run queries on the source IP address and DNS name.
B. Configure VPC flow logs on all relevant VPCs. Send the logs to an Amazon S3 bucket. Use Amazon Athena to run SQL queries on the source IP address and DNS name.
C. Configure Route 53 Resolver query logging on all relevant VPCs. Send the logs to Amazon CloudWatch Logs. Use CloudWatch Insights to run queries on the source IP address and DNS name.
D. Modify the Route 53 Resolver rules on the authoritative domains that forward to the on-premises DNS servers. Send the logs to an Amazon S3 bucket. Use Amazon Athena to run SQL queries on the source IP address and DNS name.
Answer: C
Explanation: The correct answer is C. Configure Route 53 Resolver query logging on all relevant VPCs. Send the logs to Amazon CloudWatch Logs. Use CloudWatch Insights to run queries on the source IP address and DNS name.
According to the AWS documentation [1], Route 53 Resolver query logging lets you log the DNS queries that Route 53 Resolver handles for your VPCs. You can send the logs to CloudWatch Logs, Amazon S3, or Kinesis Data Firehose. The logs include information such as the following:
• The AWS Region where the VPC was created
• The ID of the VPC that the query originated from
• The IP address of the instance that the query originated from
• The instance ID of the resource that the query originated from
• The date and time that the query was first made
• The DNS name requested (such as prod.example.com)
• The DNS record type (such as A or AAAA)
• The DNS response code, such as NoError or ServFail
• The DNS response data, such as the IP address that is returned in response to the DNS query
You can use CloudWatch Insights to run queries on your log data and analyze the results using graphs and statistics [2]. You can filter and aggregate the log data based on any field, and use operators and functions to perform calculations and transformations. For example, you can use CloudWatch Insights to find out how many queries were made for a specific domain name, or which instances made the most queries.
Therefore, this solution meets the requirements of logging and querying DNS traffic that goes to the on-premises DNS servers, showing details of the source IP address of the instance from which the query originated, and the DNS name that was requested in Route 53 Resolver.
The other options are incorrect because:
A. Using VPC Traffic Mirroring would not capture the DNS queries that go to the on-premises DNS servers, because Traffic Mirroring only copies network traffic from an elastic network interface of an EC2 instance to a target for analysis [3]. Traffic Mirroring does not include traffic that goes through a Route 53 Resolver outbound endpoint, which is used to forward queries to on-premises DNS servers [4]. Therefore, this solution would not meet the requirements.
B. Configuring VPC flow logs on all relevant VPCs would not capture the DNS name that was requested in Route 53 Resolver, because flow logs only record information about the IP traffic going to and from network interfaces in a VPC [5]. Flow logs do not include any information about the content or payload of a packet, such as a DNS query or response. Therefore, this solution would not meet the requirements.
D. Modifying the Route 53 Resolver rules on the authoritative domains that forward to the on-premises DNS servers would not enable logging of DNS queries, because Resolver rules only specify how to forward queries for specified domain names to your network [6]. Resolver rules do not have any logging functionality by themselves. Therefore, this solution would not meet the requirements.
References:
[1] Resolver query logging - Amazon Route 53
[2] Analyzing log data with CloudWatch Logs Insights - Amazon CloudWatch
[3] What is Traffic Mirroring? - Amazon Virtual Private Cloud
[4] Outbound Resolver endpoints - Amazon Route 53
[5] Logging IP traffic using VPC Flow Logs - Amazon Virtual Private Cloud
[6] Managing forwarding rules - Amazon Route 53
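A hedged boto3 sketch of answer C follows. The log group, its ARN, and the VPC ID are placeholders; the Logs Insights field names (srcaddr, query_name) reflect the Resolver query log format described above; and get_query_results normally needs to be polled until the query status is Complete.

```python
import boto3
import time

# Sketch of answer C: enable Resolver query logging to a CloudWatch Logs log
# group, then query by source IP and DNS name with Logs Insights.
r53resolver = boto3.client("route53resolver")
logs = boto3.client("logs")

config = r53resolver.create_resolver_query_log_config(
    Name="dns-query-logging",
    DestinationArn="arn:aws:logs:us-east-1:111122223333:log-group:/dns/resolver-queries",  # placeholder
    CreatorRequestId="dns-query-logging-001",
)
r53resolver.associate_resolver_query_log_config(
    ResolverQueryLogConfigId=config["ResolverQueryLogConfig"]["Id"],
    ResourceId="vpc-0abc1234def567890",  # repeat for each relevant VPC
)

# Logs Insights query: count queries per source IP address and DNS name.
query = logs.start_query(
    logGroupName="/dns/resolver-queries",
    startTime=int(time.time()) - 3600,
    endTime=int(time.time()),
    queryString=(
        "fields srcaddr, query_name "
        "| stats count(*) as queries by srcaddr, query_name "
        "| sort queries desc"
    ),
)
results = logs.get_query_results(queryId=query["queryId"])  # poll until Complete
```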
Question # 69
A security engineer is troubleshooting an AWS Lambda function that is named MyLambdaFunction. The function is encountering an error when the function attempts to read the objects in an Amazon S3 bucket that is named DOC-EXAMPLE-BUCKET. The S3 bucket has the following bucket policy:
Which change should the security engineer make to the policy to ensure that the Lambda function can read the bucket objects?
A. Remove the Condition element. Change the Principal element to the following: {"AWS": "arn:aws:lambda:::function:MyLambdaFunction"}
B. Change the Action element to the following: "s3:GetObject*", "s3:GetBucket*"
C. Change the Resource element to "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*".
D. Change the Resource element to "arn:aws:lambda:::function:MyLambdaFunction". Change the Principal element to the following: {"Service": "s3.amazonaws.com"}
Answer: C
Explanation: The correct answer is C. Change the Resource element to "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*".
The reason is that the Resource element in the bucket policy specifies which objects in the bucket are affected by the policy. In this case, the policy only applies to the bucket itself, not the objects inside it. Therefore, the Lambda function cannot access the objects with the s3:GetObject permission. To fix this, the Resource element should include a wildcard (*) to match all objects in the bucket. This way, the policy grants the Lambda function permission to read any object in the bucket.
The other options are incorrect for the following reasons:
A. Removing the Condition element would not help, because it only restricts access based on the source IP address of the request. The Principal element should not be changed to the Lambda function ARN, because it specifies who is allowed or denied access by the policy. The policy should allow access to any principal ("*") and rely on IAM roles or policies to control access to the Lambda function.
B. Changing the Action element to include s3:GetBucket* would not help, because it would grant additional permissions that are not needed by the Lambda function, such as s3:GetBucketAcl or s3:GetBucketPolicy. The s3:GetObject* permission is sufficient for reading objects in the bucket.
D. Changing the Resource element to the Lambda function ARN would not make sense, because it would mean that the policy applies to the Lambda function itself, not the bucket or its objects. The Principal element should not be changed to s3.amazonaws.com, because it would grant access to any AWS service that uses S3, not just Lambda.
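Because the question's bucket policy is not reproduced above, the following is only an illustrative statement showing the effect of answer C: the Resource ARN gains a trailing /* so it covers the objects rather than only the bucket. The principal shown (the Lambda function's execution role) is a hypothetical choice for this sketch, not the policy from the question.

```python
import json

# Illustrative statement only; the original policy from the question is not shown.
statement = {
    "Sid": "AllowLambdaRead",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:role/MyLambdaFunctionRole"},  # hypothetical execution role
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*",  # was arn:aws:s3:::DOC-EXAMPLE-BUCKET (bucket only)
}
print(json.dumps(statement, indent=2))
```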
Question # 70
A security engineer wants to forward custom application-security logs from an Amazon EC2 instance to Amazon CloudWatch. The security engineer installs the CloudWatch agent on the EC2 instance and adds the path of the logs to the CloudWatch configuration file. However, CloudWatch does not receive the logs. The security engineer verifies that the awslogs service is running on the EC2 instance. What should the security engineer do next to resolve the issue?
A. Add AWS CloudTrail to the trust policy of the EC2 instance. Send the custom logs to CloudTrail instead of CloudWatch.
B. Add Amazon S3 to the trust policy of the EC2 instance. Configure the application to write the custom logs to an S3 bucket that CloudWatch can use to ingest the logs.
C. Add Amazon Inspector to the trust policy of the EC2 instance. Use Amazon Inspector instead of the CloudWatch agent to collect the custom logs.
D. Attach the CloudWatchAgentServerPolicy AWS managed policy to the EC2 instance role.
Answer: D
Explanation: The correct answer is D. Attach the CloudWatchAgentServerPolicy AWS managed policy to the EC2 instance role.
According to the AWS documentation [1], the CloudWatch agent is a software agent that you can install on your EC2 instances to collect system-level metrics and logs. To use the CloudWatch agent, you need to attach an IAM role or user to the EC2 instance that grants permissions for the agent to perform actions on your behalf. The CloudWatchAgentServerPolicy is an AWS managed policy that provides the necessary permissions for the agent to write metrics and logs to CloudWatch [2]. By attaching this policy to the EC2 instance role, the security engineer can resolve the issue of CloudWatch not receiving the custom application-security logs.
The other options are incorrect for the following reasons:
A. Adding AWS CloudTrail to the trust policy of the EC2 instance is not relevant, because CloudTrail is a service that records API activity in your AWS account, not custom application logs [3]. Sending the custom logs to CloudTrail instead of CloudWatch would not meet the requirement of forwarding them to CloudWatch.
B. Adding Amazon S3 to the trust policy of the EC2 instance is not necessary, because S3 is a storage service that does not require any trust relationship with EC2 instances [4]. Configuring the application to write the custom logs to an S3 bucket that CloudWatch can use to ingest the logs would be an alternative solution, but it would be more complex and costly than using the CloudWatch agent directly.
C. Adding Amazon Inspector to the trust policy of the EC2 instance is not helpful, because Inspector is a service that scans EC2 instances for software vulnerabilities and unintended network exposure, not custom application logs [5]. Using Amazon Inspector instead of the CloudWatch agent would not meet the requirement of forwarding them to CloudWatch.
References:
[1] Collect metrics, logs, and traces with the CloudWatch agent - Amazon CloudWatch
[2] CloudWatchAgentServerPolicy - AWS Managed Policy
[3] What Is AWS CloudTrail? - AWS CloudTrail
[4] Amazon S3 FAQs - Amazon Web Services
[5] Automated Software Vulnerability Management - Amazon Inspector - AWS
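A one-call boto3 sketch of answer D, with a placeholder instance role name (the managed policy ARN is the one published by AWS):

```python
import boto3

# Sketch of answer D: attach the CloudWatchAgentServerPolicy managed policy
# to the role used by the EC2 instance's instance profile.
iam = boto3.client("iam")

iam.attach_role_policy(
    RoleName="WebServerInstanceRole",  # hypothetical instance role name
    PolicyArn="arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy",
)
```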