DBS-C01 dumps
5 Star


Customer Ratings & Feedback
98%


Exact Questions Came From the Dumps

Amazon DBS-C01 Question Answers

AWS Certified Database - Specialty Dumps April 2024

Are you tired of looking for a source that keeps you updated on the AWS Certified Database - Specialty Exam? One that also offers a collection of affordable, high-quality, and remarkably easy Amazon DBS-C01 Practice Questions? Then you are in luck, because Salesforcexamdumps.com has just updated them! Get ready to become AWS Certified Database certified.

PDF $100  $40
Test Engine $140  $56
PDF + Test Engine $180  $72

Here are the available features of the Amazon DBS-C01 PDF:

324 questions with answers
Update date: 24 Apr, 2024
1 day of study required to pass the exam
100% passing assurance
100% money-back guarantee
Free updates for 3 months
Last 24 Hours Results
86

Students Passed

96%

Average Marks

97%

Questions From Dumps

4898

Total Happy Clients

What is Amazon DBS-C01?

Amazon DBS-C01 is the certification exam you need to pass to get certified, and it rewards deserving candidates who achieve strong results. The AWS Certified Database Certification validates a candidate's expertise in working with Amazon database services. In this fast-paced world, a certification is the quickest way to earn your employer's recognition. Take your shot at passing the AWS Certified Database - Specialty Exam and become a certified professional today. Salesforcexamdumps.com is always eager to extend a helping hand by providing approved and accepted Amazon DBS-C01 Practice Questions. Passing AWS Certified Database - Specialty will be your ticket to a better future!

Pass with Amazon DBS-C01 Braindumps!

Contrary to the belief that certification exams are generally hard to get through, passing AWS Certified Database - Specialty is incredibly easy, provided you have access to a reliable resource such as the Salesforcexamdumps.com Amazon DBS-C01 PDF. We have been in this business long enough to understand where most resources go wrong. Passing the Amazon AWS Certified Database certification is all about having the right information, so we filled our Amazon DBS-C01 Dumps with all the data you need to pass. These carefully curated sets of AWS Certified Database - Specialty Practice Questions target the most frequently repeated exam questions, so you know they are essential and can ensure passing results. Stop waiting around and order your set of Amazon DBS-C01 Braindumps now!

We aim to provide all AWS Certified Database certification exam candidates with the best resources at minimal rates. You can check out our free demo before downloading to make sure the Amazon DBS-C01 Practice Questions are what you wanted. And do not forget about the discount: we always give our customers a little extra.

Why Choose Amazon DBS-C01 PDF?

Unlike other websites, Salesforcexamdumps.com prioritizes the needs of AWS Certified Database - Specialty candidates. Not every Amazon exam candidate has full-time access to the internet, and it is hard to sit in front of a computer screen for too many hours. Are you one of them? We understand, which is why the Amazon DBS-C01 Question Answers come in two formats: PDF and Online Test Engine. The Test Engine is for customers who like online platforms with realistic exam simulation; the PDF is for those who prefer keeping their material close at hand. Moreover, you can download or print the Amazon DBS-C01 Dumps with ease.

If you still have queries, our team of experts is at your service 24/7 to answer your questions. Just leave us a quick message in the chat box below or email us at [email protected].

Amazon DBS-C01 Sample Questions

Question # 1

A database specialist is designing the database for a software-as-a-service (SaaS) version of an employee information application. In the current architecture, the change history of employee records is stored in a single table in an Amazon RDS for Oracle database. Triggers on the employee table populate the history table with historical records. This architecture has two major challenges. First, there is no way to guarantee that the records have not been changed in the history table. Second, queries on the history table are slow because of the large size of the table and the need to run the queries against a large subset of data in the table. The database specialist must design a solution that prevents modification of the historical records. The solution also must maximize the speed of the queries. Which solution will meet these requirements?

A. Migrate the current solution to an Amazon DynamoDB table. Use DynamoDB Streams to keep track of changes. Use DynamoDB Accelerator (DAX) to improve query performance.
B. Write employee record history to Amazon Quantum Ledger Database (Amazon QLDB) for historical records and to an Amazon OpenSearch Service domain for queries.
C. Use Amazon Aurora PostgreSQL to store employee record history in a single table. Use Aurora Auto Scaling to provision more capacity.
D. Build a solution that uses an Amazon Redshift cluster for historical records. Query the Redshift cluster directly as needed.


Question # 2

A company runs a customer relationship management (CRM) system that is hosted on-premises with a MySQL database as the backend. A custom stored procedure is used to send email notifications to another system when data is inserted into a table. The company has noticed that the performance of the CRM system has decreased due to database reporting applications used by various teams. The company requires an AWS solution that would reduce maintenance, improve performance, and accommodate the email notification feature. Which AWS solution meets these requirements?

A. Use MySQL running on an Amazon EC2 instance with Auto Scaling to accommodate the reporting applications. Configure a stored procedure and an AWS Lambda function that uses Amazon SES to send email notifications to the other system.
B. Use Amazon Aurora MySQL in a multi-master cluster to accommodate the reporting applications. Configure Amazon RDS event subscriptions to publish a message to an Amazon SNS topic and subscribe the other system's email address to the topic.
C. Use MySQL running on an Amazon EC2 instance with a read replica to accommodate the reporting applications. Configure Amazon SES integration to send email notifications to the other system.
D. Use Amazon Aurora MySQL with a read replica for the reporting applications. Configure a stored procedure and an AWS Lambda function to publish a message to an Amazon SNS topic. Subscribe the other system's email address to the topic.


Question # 3

A company is running a mobile app that has a backend database in Amazon DynamoDB. The app experiences sudden increases and decreases in activity throughout the day. The company's operations team notices that DynamoDB read and write requests are being throttled at different times, resulting in a negative customer experience. Which solution will solve the throttling issue without requiring changes to the app?

A. Add a DynamoDB table in a secondary AWS Region. Populate the additional table by using DynamoDB Streams.
B. Deploy an Amazon ElastiCache cluster in front of the DynamoDB table.
C. Use on-demand capacity mode for the DynamoDB table.
D. Use DynamoDB Accelerator (DAX).
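
As background for option C, switching an existing table to on-demand capacity is a single UpdateTable call. A minimal sketch of the request shape, with a hypothetical table name (in practice this dict would be passed to boto3's dynamodb.update_table):

```python
# Sketch of the request that switches a DynamoDB table to on-demand
# capacity mode (option C). The table name is illustrative.
update_table_request = {
    "TableName": "mobile-app-backend",  # hypothetical table name
    "BillingMode": "PAY_PER_REQUEST",   # on-demand capacity mode
}
```

With on-demand mode, DynamoDB accepts the sudden traffic spikes without pre-provisioned RCUs/WCUs, which is why it removes the throttling without any app change.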


Question # 4

A company needs to deploy an Amazon Aurora PostgreSQL DB instance into multiple accounts. The company will initiate each DB instance from an existing Aurora PostgreSQL DB instance that runs in a shared account. The company wants the process to be repeatable in case the company adds additional accounts in the future. The company also wants to be able to verify if manual changes have been made to the DB instance configurations after the company deploys the DB instances. A database specialist has determined that the company needs to create an AWS CloudFormation template with the necessary configuration to create a DB instance in an account by using a snapshot of the existing DB instance to initialize the DB instance. The company will also use the CloudFormation template's parameters to provide key values for the DB instance creation (account ID, etc.). Which final step will meet these requirements in the MOST operationally efficient way?

A. Create a bash script to compare the configuration to the current DB instance configuration and to report any changes.
B. Use the CloudFormation drift detection feature to check if the DB instance configurations have changed.
C. Set up CloudFormation to use drift detection to send notifications if the DB instance configurations have been changed.
D. Create an AWS Lambda function to compare the configuration to the current DB instance configuration and to report any changes.
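
As background for option B, drift detection is a two-call workflow. A sketch of the request shapes under an assumed stack name (in practice these dicts feed boto3's cloudformation.detect_stack_drift and describe_stack_drift_detection_status):

```python
# Sketch of CloudFormation drift detection (option B). The stack name is
# illustrative. detect_stack_drift returns a StackDriftDetectionId; polling
# describe_stack_drift_detection_status with that id eventually reports the
# stack's drift status.
detect_request = {"StackName": "aurora-pg-db-stack"}  # hypothetical stack

# Possible StackDriftStatus values reported once detection completes:
possible_statuses = {"DRIFTED", "IN_SYNC", "NOT_CHECKED", "UNKNOWN"}
```

A "DRIFTED" result means the live DB instance configuration no longer matches the template, which is exactly the manual-change check the company wants, with no custom code to maintain.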


Question # 5

A company migrated an on-premises Oracle database to Amazon RDS for Oracle. A database specialist needs to monitor the latency of the database. Which solution will meet this requirement with the LEAST operational overhead?

A. Publish RDS Performance Insights metrics to Amazon CloudWatch. Add AWS CloudTrail filters to monitor database performance.
B. Install Oracle Statspack. Enable the performance statistics feature to collect, store, and display performance data to monitor database performance.
C. Enable RDS Performance Insights to visualize the database load. Enable Enhanced Monitoring to view how different threads use the CPU.
D. Create a new DB parameter group that includes the AllocatedStorage, DBInstanceClassMemory, and DBInstanceVCPU variables. Enable RDS Performance Insights.


Question # 6

A company is using an Amazon Aurora PostgreSQL database for a project with a government agency. All database communications must be encrypted in transit. All non-SSL/TLS connection requests must be rejected. What should a database specialist do to meet these requirements?

A. Set the rds.force_ssl parameter in the DB cluster parameter group to its default value.
B. Set the rds.force_ssl parameter in the DB cluster parameter group to 1.
C. Set the rds.force_ssl parameter in the DB cluster parameter group to 0.
D. Set the SQLNET.SSL_VERSION option in the DB cluster option group to 12.
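
As background for option B, forcing TLS on Aurora PostgreSQL is a parameter-group change. A sketch of the request shape with a hypothetical parameter group name (in practice this dict would be passed to boto3's rds.modify_db_cluster_parameter_group):

```python
# Sketch of the parameter change behind option B: rejecting non-SSL/TLS
# connections on an Aurora PostgreSQL cluster. The group name is illustrative.
force_ssl_request = {
    "DBClusterParameterGroupName": "aurora-pg-custom",  # hypothetical name
    "Parameters": [
        {
            "ParameterName": "rds.force_ssl",
            "ParameterValue": "1",       # 1 = reject non-SSL/TLS connections
            "ApplyMethod": "immediate",  # dynamic parameter
        }
    ],
}
```

Setting the value to 0 (option C) disables the check, and the default may not enforce TLS, which is why 1 is the value that satisfies the requirement.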


Question # 7

A company plans to use AWS Database Migration Service (AWS DMS) to migrate its database from one Amazon EC2 instance to another EC2 instance as a full load task. The company wants the database to be inactive during the migration. The company will use a dms.t3.medium instance to perform the migration and will use the default settings for the migration. Which solution will MOST improve the performance of the data migration?

A. Increase the number of tables that are loaded in parallel.
B. Drop all indexes on the source tables.
C. Change the processing mode from the batch optimized apply option to transactional mode.
D. Enable Multi-AZ on the target database while the full load task is in progress.


Question # 8

A company has deployed an application that uses an Amazon RDS for MySQL DB cluster. The DB cluster uses three read replicas. The primary DB instance is an 8XL-sized instance, and the read replicas are each XL-sized instances. Users report that database queries are returning stale data. The replication lag indicates that the replicas are 5 minutes behind the primary DB instance. Status queries on the replicas show that the SQL_THREAD is 10 binlogs behind the IO_THREAD and that the IO_THREAD is 1 binlog behind the primary. Which changes will reduce the lag? (Choose two.)

A. Deploy two additional read replicas matching the existing replica DB instance size.
B. Migrate the primary DB instance to an Amazon Aurora MySQL DB cluster and add three Aurora Replicas.
C. Move the read replicas to the same Availability Zone as the primary DB instance.
D. Increase the instance size of the primary DB instance within the same instance class.
E. Increase the instance size of the read replicas to the same size and class as the primary DB instance.


Question # 9

A media company hosts a highly available news website on AWS but needs to improve its page load time, especially during very popular news releases. Once a news page is published, it is very unlikely to change unless an error is identified. The company has decided to use Amazon ElastiCache. What is the recommended strategy for this use case?

A. Use ElastiCache for Memcached with write-through and long time to live (TTL)
B. Use ElastiCache for Redis with lazy loading and short time to live (TTL)
C. Use ElastiCache for Memcached with lazy loading and short time to live (TTL)
D. Use ElastiCache for Redis with write-through and long time to live (TTL)


Question # 10

A database specialist needs to enable IAM authentication on an existing Amazon Aurora PostgreSQL DB cluster. The database specialist already has modified the DB cluster settings, has created IAM and database credentials, and has distributed the credentials to the appropriate users. What should the database specialist do next to establish the credentials for the users to use to log in to the DB cluster?

A. Add the users' IAM credentials to the Aurora cluster parameter group.
B. Run the generate-db-auth-token command with the user names to generate a temporary password for the users.
C. Add the users' IAM credentials to the default credential profile. Use the AWS Management Console to access the DB cluster.
D. Use an AWS Security Token Service (AWS STS) token by sending the IAM access key and secret key as headers to the DB cluster API endpoint.
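
As background for option B, the auth token is generated per user and presented in place of a password. A sketch of the arguments involved, with a hypothetical endpoint and user (these keyword names match boto3's rds.generate_db_auth_token):

```python
# Sketch of option B: the inputs to generate a temporary IAM database
# authentication token. Hostname and user name are illustrative.
token_request = {
    "DBHostname": "mycluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    "Port": 5432,                # Aurora PostgreSQL default port
    "DBUsername": "app_user",    # database user mapped to IAM authentication
    "Region": "us-east-1",
}
# The returned token is short-lived and is used as the password when connecting.
```

The token is time-limited, so each login generates a fresh one rather than storing a static password.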


Question # 11

An ecommerce company uses Amazon DynamoDB as the backend for its payments system. A new regulation requires the company to log all data access requests for financial audits. For this purpose, the company plans to use AWS logging and save logs to Amazon S3. How can a database specialist activate logging on the database?

A. Use AWS CloudTrail to monitor DynamoDB control-plane operations. Create a DynamoDB stream to monitor data-plane operations. Pass the stream to Amazon Kinesis Data Streams. Use that stream as a source for Amazon Kinesis Data Firehose to store the data in an Amazon S3 bucket.
B. Use AWS CloudTrail to monitor DynamoDB data-plane operations. Create a DynamoDB stream to monitor control-plane operations. Pass the stream to Amazon Kinesis Data Streams. Use that stream as a source for Amazon Kinesis Data Firehose to store the data in an Amazon S3 bucket.
C. Create two trails in AWS CloudTrail. Use Trail1 to monitor DynamoDB control-plane operations. Use Trail2 to monitor DynamoDB data-plane operations.
D. Use AWS CloudTrail to monitor DynamoDB data-plane and control-plane operations.


Question # 12

A startup company in the travel industry wants to create an application that includes a personal travel assistant to display information for nearby airports based on user location. The application will use Amazon DynamoDB and must be able to access and display attributes such as airline names, arrival times, and flight numbers. However, the application must not be able to access or display pilot names or passenger counts. Which solution will meet these requirements MOST cost-effectively?

A. Use a proxy tier between the application and DynamoDB to regulate access to specific tables, items, and attributes.
B. Use IAM policies with a combination of IAM conditions and actions to implement fine-grained access control.
C. Use DynamoDB resource policies to regulate access to specific tables, items, and attributes.
D. Configure an AWS Lambda function to extract only allowed attributes from tables based on user profiles.


Question # 13

A database specialist is working on an Amazon RDS for PostgreSQL DB instance that is experiencing application performance issues due to the addition of new workloads. The database has 5 TB of storage space with Provisioned IOPS. Amazon CloudWatch metrics show that the average disk queue depth is greater than 200 and that the disk I/O response time is significantly higher than usual. What should the database specialist do to improve the performance of the application immediately?

A. Increase the Provisioned IOPS rate on the storage.
B. Increase the available storage space.
C. Use General Purpose SSD (gp2) storage with burst credits.
D. Create a read replica to offload Read IOPS from the DB instance.


Question # 14

A bike rental company operates an application to track its bikes. The application receives location and condition data from bike sensors. The application also receives rental transaction data from the associated mobile app. The application uses Amazon DynamoDB as its database layer. The company has configured DynamoDB with provisioned capacity set to 20% above the expected peak load of the application. On an average day, DynamoDB used 22 billion read capacity units (RCUs) and 60 billion write capacity units (WCUs). The application is running well. Usage changes smoothly over the course of the day and is generally shaped like a bell curve. The timing and magnitude of peaks vary based on the weather and season, but the general shape is consistent. Which solution will provide the MOST cost optimization of the DynamoDB database layer?

A. Change the DynamoDB tables to use on-demand capacity.
B. Use AWS Auto Scaling and configure time-based scaling.
C. Enable DynamoDB capacity-based auto scaling.
D. Enable DynamoDB Accelerator (DAX).
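
The cost reasoning behind option B can be made concrete: with a smooth, predictable bell-curve load, capacity provisioned flat at 120% of peak sits mostly idle off-peak, while time-based scaling tracks the curve. The hourly numbers below are hypothetical, chosen only to illustrate the shape:

```python
# Illustrative arithmetic behind option B (time-based scaling).
# Hypothetical bell-curve write load per hour, in thousands of WCUs.
hourly_load = [40, 30, 25, 30, 50, 80, 120, 160, 200, 230, 250, 260,
               260, 250, 230, 200, 170, 140, 110, 90, 70, 60, 50, 45]
peak = max(hourly_load)

flat_capacity_hours = 1.2 * peak * 24                         # fixed at peak + 20%
scheduled_capacity_hours = sum(1.2 * h for h in hourly_load)  # tracks the curve

savings = 1 - scheduled_capacity_hours / flat_capacity_hours
print(f"capacity-hours saved by time-based scaling: {savings:.0%}")
```

Because the peaks are predictable (weather and season, not random spikes), scheduled scaling captures this saving without the per-request premium of on-demand mode (option A).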


Question # 15

A news portal is looking for a data store to store 120 GB of metadata about its posts and comments. The posts and comments are not frequently looked up or updated. However, occasional lookups are expected to be served with single-digit millisecond latency on average. What is the MOST cost-effective solution?

A. Use Amazon DynamoDB with on-demand capacity mode. Purchase reserved capacity.
B. Use Amazon ElastiCache for Redis for data storage. Turn off cluster mode.
C. Use Amazon S3 Standard-Infrequent Access (S3 Standard-IA) for data storage and use Amazon Athena to query the data.
D. Use Amazon DynamoDB with on-demand capacity mode. Switch the table class to DynamoDB Standard-Infrequent Access (DynamoDB Standard-IA).


Question # 16

A global company is creating an application. The application must be highly available. The company requires an RTO and an RPO of less than 5 minutes. The company needs a database that will provide the ability to set up an active-active configuration and near-real-time synchronization of data across tables in multiple AWS Regions. Which solution will meet these requirements?

A. Amazon RDS for MariaDB with cross-Region read replicas
B. Amazon RDS with a Multi-AZ deployment
C. Amazon DynamoDB global tables
D. Amazon DynamoDB with a global secondary index (GSI)


Question # 17

A company uses a large, growing, high-performance on-premises Microsoft SQL Server instance with an Always On availability group cluster size of 120 TB. The company uses a third-party backup product that requires system-level access to the databases. The company will continue to use this third-party backup product in the future. The company wants to move the DB cluster to AWS with the least possible downtime and data loss. The company needs a 2 Gbps connection to sustain Always On asynchronous data replication between the company's data center and AWS. Which combination of actions should a database specialist take to meet these requirements? (Select THREE.)

A. Establish an AWS Direct Connect hosted connection between the company's data center and AWS.
B. Create an AWS Site-to-Site VPN connection between the company's data center and AWS over the internet.
C. Use AWS Database Migration Service (AWS DMS) to migrate the on-premises SQL Server databases to Amazon RDS for SQL Server. Configure Always On availability groups for SQL Server.
D. Deploy a new SQL Server Always On availability group DB cluster on Amazon EC2. Configure Always On distributed availability groups between the on-premises DB cluster and the AWS DB cluster. Fail over to the AWS DB cluster when it is time to migrate.
E. Grant system-level access to the third-party backup product to perform backups of the Amazon RDS for SQL Server DB instance.
F. Configure the third-party backup product to perform backups of the DB cluster on Amazon EC2.


Question # 18

A company's application team needs to select an AWS managed database service to store application and user data. The application team is familiar with MySQL but is open to new solutions. The application and user data are stored in 10 tables and are de-normalized. The application will access this data through an API layer by using a unique ID in each table. The company expects the traffic to be light at first, but the traffic will increase to thousands of transactions each second within the first year. The database service must support active reads and writes in multiple AWS Regions at the same time. Query response times need to be less than 100 ms. Which AWS database solution will meet these requirements?

A. Deploy an Amazon RDS for MySQL environment in each Region and leverage AWS Database Migration Service (AWS DMS) to set up multi-Region bidirectional replication.
B. Deploy an Amazon Aurora MySQL global database with write forwarding turned on.
C. Deploy an Amazon DynamoDB database with global tables
D. Deploy an Amazon DocumentDB global cluster across multiple Regions.


Question # 19

A database specialist wants to ensure that an Amazon Aurora DB cluster is always automatically upgraded to the most recent minor version available. Noticing that there is a new minor version available, the database specialist has issued an AWS CLI command to enable automatic minor version updates. The command runs successfully, but checking the Aurora DB cluster indicates that no update to the Aurora version has been made. What might account for this? (Choose two.)

A. The new minor version has not yet been designated as preferred and requires a manual upgrade.
B. Configuring automatic upgrades using the AWS CLI is not supported. This must be enabled expressly using the AWS Management Console.
C. Applying minor version upgrades requires sufficient free space.
D. The AWS CLI command did not include an apply-immediately parameter.
E. Aurora has detected a breaking change in the new minor version and has automaticallyrejected the upgrade.


Question # 20

A financial services company is using AWS Database Migration Service (AWS DMS) to migrate its databases from on-premises to AWS. A database administrator is working on replicating a database to AWS from on-premises by using full load and change data capture (CDC). During the CDC replication, the database administrator observed that the target latency was high and slowly increasing. What could be the root causes for this high target latency? (Select TWO.)

A. There was ongoing maintenance on the replication instance.
B. The source endpoint was changed by modifying the task.
C. Loopback changes had affected the source and target instances.
D. There was no primary key or index in the target database.
E. There were resource bottlenecks in the replication instance.


Question # 21

A company has an Amazon Redshift cluster with database audit logging enabled. A security audit shows that raw SQL statements that run against the Redshift cluster are being logged to an Amazon S3 bucket. The security team requires that authentication logs are generated for use in an intrusion detection system (IDS), but the security team does not require SQL queries. What should a database specialist do to remediate this issue?

A. Set the parameter to true in the database parameter group.
B. Turn off the query monitoring rule in the Redshift cluster's workload management (WLM).
C. Set the enable_user_activity_logging parameter to false in the database parameter group.
D. Disable audit logging on the Redshift cluster.


Question # 22

An online retailer uses Amazon DynamoDB for its product catalog and order data. Some popular items have led to frequently accessed keys in the data, and the company is using DynamoDB Accelerator (DAX) as the caching solution to cater to the frequently accessed keys. As the number of popular products is growing, the company realizes that more items need to be cached. The company observes a high cache miss rate and needs a solution to address this issue. What should a database specialist do to accommodate the changing requirements for DAX?

A. Increase the number of nodes in the existing DAX cluster.
B. Create a new DAX cluster with more nodes. Change the DAX endpoint in the application to point to the new cluster.
C. Create a new DAX cluster using a larger node type. Change the DAX endpoint in the application to point to the new cluster.
D. Modify the node type in the existing DAX cluster.


Question # 23

A business's production database is hosted on a single-node Amazon RDS for MySQL DB instance. The DB instance is hosted in a United States AWS Region. A week before a significant sales event, a fresh database maintenance update is released. The maintenance update has been designated as necessary. The firm wants to minimize the DB instance's downtime and requests that a database expert keep the DB instance highly available until the sales event concludes. Which solution will satisfy these criteria?

A. Defer the maintenance update until the sales event is over.
B. Create a read replica with the latest update. Initiate a failover before the sales event.
C. Create a read replica with the latest update. Transfer all read-only traffic to the read replica during the sales event.
D. Convert the DB instance into a Multi-AZ deployment. Apply the maintenance update.


Question # 24

A company runs online transaction processing (OLTP) workloads on an Amazon RDS for PostgreSQL Multi-AZ DB instance. The company recently conducted tests on the database after business hours, and the tests generated additional database logs. As a result, free storage of the DB instance is low and is expected to be exhausted in 2 days. The company wants to recover the free storage that the additional logs consumed. The solution must not result in downtime for the database. Which solution will meet these requirements?

A. Modify the rds.log_retention_period parameter to 0. Reboot the DB instance to save the changes.
B. Modify the rds.log_retention_period parameter to 1440. Wait up to 24 hours for database logs to be deleted.
C. Modify the temp_file_limit parameter to a smaller value to reclaim space on the DB instance.
D. Modify the rds.log_retention_period parameter to 1440. Reboot the DB instance to save the changes.
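
As background for option B, the retention change is a parameter-group modification that takes effect without a reboot. A sketch of the request shape with a hypothetical parameter group name (in practice this dict would be passed to boto3's rds.modify_db_parameter_group):

```python
# Sketch of option B: shrinking log retention so RDS deletes older
# PostgreSQL logs automatically, with no reboot and therefore no downtime.
# The parameter group name is illustrative.
log_retention_request = {
    "DBParameterGroupName": "oltp-pg-params",  # hypothetical name
    "Parameters": [
        {
            "ParameterName": "rds.log_retention_period",
            "ParameterValue": "1440",     # minutes: keep only the last 24 hours
            "ApplyMethod": "immediate",   # dynamic parameter, applied without reboot
        }
    ],
}
```

Options A and D require a reboot, which violates the no-downtime constraint; option B simply waits for RDS to purge logs older than the new retention window.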


Question # 25

A company has an existing system that uses a single-instance Amazon DocumentDB (with MongoDB compatibility) cluster. Read requests account for 75% of the system queries. Write requests are expected to increase by 50% after an upcoming global release. A database specialist needs to design a solution that improves the overall database performance without creating additional application overhead. Which solution will meet these requirements?

A. Recreate the cluster with a shared cluster volume. Add two instances to serve both read requests and write requests.
B. Add one read replica instance. Activate a shared cluster volume. Route all read queries to the read replica instance.
C. Add one read replica instance. Set the read preference to secondary preferred.
D. Add one read replica instance. Update the application to route all read queries to the read replica instance.


Question # 26

A financial company is hosting its web application on AWS. The application's database is hosted on Amazon RDS for MySQL with automated backups enabled. The application has caused a logical corruption of the database, which is causing the application to become unresponsive. The specific time of the corruption has been identified, and it was within the backup retention period. How should a database specialist recover the database to the most recent point before corruption?

A. Use the point-in-time restore capability to restore the DB instance to the specified time. No changes to the application connection string are required.
B. Use the point-in-time restore capability to restore the DB instance to the specified time. Change the application connection string to the new, restored DB instance.
C. Restore using the latest automated backup. Change the application connection string to the new, restored DB instance.
D. Restore using the appropriate automated backup. No changes to the application connection string are required.


Question # 27

A company is running critical applications on AWS. Most of the application deployments use Amazon Aurora MySQL for the database stack. The company uses AWS CloudFormation to deploy the DB instances. The company's application team recently implemented a CI/CD pipeline. A database engineer needs to integrate the database deployment CloudFormation stack with the newly built CI/CD platform. Updates to the CloudFormation stack must not update existing production database resources. Which CloudFormation stack policy action should the database engineer implement to meet these requirements?

A. Use a Deny statement for the Update:Modify action on the production database resources.
B. Use a Deny statement for the action on the production database resources.
C. Use a Deny statement for the Update:Delete action on the production database resources.
D. Use a Deny statement for the Update:Replace action on the production database resources.


Question # 28

A gaming company is building a mobile game that will have as many as 25,000 active concurrent users in the first 2 weeks after launch. The game has a leaderboard that shows the 10 highest scoring players over the last 24 hours. The leaderboard calculations are processed by an AWS Lambda function, which takes about 10 seconds. The company wants the data on the leaderboard to be no more than 1 minute old. Which architecture will meet these requirements in the MOST operationally efficient way?

A. Deliver the player data to an Amazon Timestream database. Create an Amazon ElastiCache for Redis cluster. Configure the Lambda function to store the results in Redis. Create a scheduled event with Amazon EventBridge to invoke the Lambda function once every minute. Reconfigure the game server to query the Redis cluster for the leaderboard data.
B. Deliver the player data to an Amazon Timestream database. Create an Amazon DynamoDB table. Configure the Lambda function to store the results in DynamoDB. Create a scheduled event with Amazon EventBridge to invoke the Lambda function once every minute. Reconfigure the game server to query the DynamoDB table for the leaderboard data.
C. Deliver the player data to an Amazon Aurora MySQL database. Create an Amazon DynamoDB table. Configure the Lambda function to store the results in MySQL. Create a scheduled event with Amazon EventBridge to invoke the Lambda function once every minute. Reconfigure the game server to query the DynamoDB table for the leaderboard data.
D. Deliver the player data to an Amazon Neptune database. Create an Amazon ElastiCache for Redis cluster. Configure the Lambda function to store the results in Redis. Create a scheduled event with Amazon EventBridge to invoke the Lambda function once every minute. Reconfigure the game server to query the Redis cluster for the leaderboard data.


Question # 29

In one AWS account, a business runs a two-tier ecommerce application. An Amazon RDS for MySQL Multi-AZ database instance serves as the application's backend. A developer removed the database instance in the production environment by accident. Although the organization recovers the database, the incident results in hours of outage and financial loss. Which combination of adjustments would reduce the likelihood that this error will occur again in the future? (Select three.)

A. Grant least privilege to groups, IAM users, and roles.
B. Allow all users to restore a database from a backup.
C. Enable deletion protection on existing production DB instances.
D. Use an ACL policy to restrict users from DB instance deletion.
E. Enable AWS CloudTrail logging and Enhanced Monitoring.


Question # 30

Application developers have reported that an application is running slower as more users are added. The application database is running on an Amazon Aurora DB cluster with an Aurora Replica. The application is written to take advantage of read scaling through reader endpoints. A database specialist looks at the performance metrics of the database and determines that, as new users were added to the database, the primary instance CPU utilization steadily increased while the Aurora Replica CPU utilization remained steady. How can the database specialist improve database performance while ensuring minimal downtime?

A. Modify the Aurora DB cluster to add more replicas until the overall load stabilizes. Then, reduce the number of replicas once the application meets service level objectives.
B. Modify the primary instance to a larger instance size that offers more CPU capacity.
C. Modify a replica to a larger instance size that has more CPU capacity. Then, promote the modified replica.
D. Restore the Aurora DB cluster to one that has an instance size with more CPU capacity. Then, swap the names of the old and new DB clusters.


Question # 31

A company is using an Amazon Aurora PostgreSQL DB cluster for the backend of its mobile application. The application is running continuously, and a database specialist is satisfied with high availability and fast failover but is concerned about performance degradation after failover. How can the database specialist minimize the performance degradation after failover?

A. Enable cluster cache management for the Aurora DB cluster and set the promotion priority for the writer DB instance and replica to tier-0.
B. Enable cluster cache management for the Aurora DB cluster and set the promotion priority for the writer DB instance and replica to tier-1.
C. Enable Query Plan Management for the Aurora DB cluster and perform a manual plan capture.
D. Enable Query Plan Management for the Aurora DB cluster and force the query optimizer to use the desired plan.


Question # 32

A large financial services company uses Amazon ElastiCache for Redis for its new application that has a global user base. A database administrator must develop a caching solution that will be available across AWS Regions and include low-latency replication and failover capabilities for disaster recovery (DR). The company's security team requires the encryption of cross-Region data transfers. Which solution meets these requirements with the LEAST amount of operational effort?

A. Enable cluster mode in ElastiCache for Redis. Then create multiple clusters across Regions and replicate the cache data by using AWS Database Migration Service (AWS DMS). Promote a cluster in the failover Region to handle production traffic when DR is required.
B. Create a global datastore in ElastiCache for Redis. Then create replica clusters in two other Regions. Promote one of the replica clusters as primary when DR is required.
C. Disable cluster mode in ElastiCache for Redis. Then create multiple replication groups across Regions and replicate the cache data by using AWS Database Migration Service (AWS DMS). Promote a replication group in the failover Region to primary when DR is required.
D. Create a snapshot of ElastiCache for Redis in the primary Region and copy it to the failover Region. Use the snapshot to restore the cluster in the failover Region when DR is required.


Question # 33

A company is planning to migrate a 40 TB Oracle database to an Amazon Aurora PostgreSQL DB cluster by using a single AWS Database Migration Service (AWS DMS) task within a single replication instance. During early testing, AWS DMS is not scaling to the company's needs. Full load and change data capture (CDC) are taking days to complete. The source database server and the target DB cluster have enough network bandwidth and CPU bandwidth for the additional workload. The replication instance has enough resources to support the replication. A database specialist needs to improve database performance, reduce data migration time, and create multiple DMS tasks. Which combination of changes will meet these requirements? (Choose two.)

A. Increase the value of the ParallelLoadThreads parameter in the DMS task settings for the tables.
B. Use a smaller set of tables with each DMS task. Set the MaxFullLoadSubTasks parameter to a higher value.
C. Use a smaller set of tables with each DMS task. Set the MaxFullLoadSubTasks parameter to a lower value.
D. Use parallel load with different data boundaries for larger tables.
E. Run the DMS tasks on a larger instance class. Increase local storage on the instance.
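For context on option D, DMS parallel load is configured through a table-settings rule in the task's table mappings. A minimal sketch follows; the schema, table, column, and boundary values are all hypothetical placeholders:

```python
import json

# Sketch of a DMS table-mapping rule that splits one large table into
# parallel load segments by data boundaries. Schema, table, column, and
# boundary values are hypothetical.
table_mapping = {
    "rules": [{
        "rule-type": "table-settings",
        "rule-id": "1",
        "rule-name": "parallel-load-orders",
        "object-locator": {"schema-name": "SALES", "table-name": "ORDERS"},
        "parallel-load": {
            "type": "ranges",
            "columns": ["ORDER_ID"],
            "boundaries": [["1000000"], ["2000000"], ["3000000"]],
        },
    }]
}
mapping_json = json.dumps(table_mapping)
```

Each boundary defines a segment that DMS can load concurrently, which is what shortens the full-load phase for very large tables.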


Question # 34

An ecommerce company is running Amazon RDS for Microsoft SQL Server. The company is planning to perform testing in a development environment with production data. The development environment and the production environment are in separate AWS accounts. Both environments use AWS Key Management Service (AWS KMS) encrypted databases with both manual and automated snapshots. A database specialist needs to share a KMS-encrypted production RDS snapshot with the development account. Which combination of steps should the database specialist take to meet these requirements? (Select THREE.)

A. Create an automated snapshot. Share the snapshot from the production account to the development account.
B. Create a manual snapshot. Share the snapshot from the production account to the development account.
C. Share the snapshot that is encrypted by using the development account default KMS encryption key.
D. Share the snapshot that is encrypted by using the production account custom KMS encryption key.
E. Allow the development account to access the production account KMS encryption key.
F. Allow the production account to access the development account KMS encryption key.
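For context, cross-account sharing of an encrypted snapshot involves two separate pieces: a KMS key policy statement on the production custom key, and a snapshot attribute change. A sketch with hypothetical account ID and snapshot name:

```python
# Sketch of the two pieces behind cross-account snapshot sharing.
# The account ID and snapshot name are hypothetical placeholders.
DEV_ACCOUNT = "222222222222"

# KMS key policy statement added to the production custom key so the
# development account can use it to work with the shared snapshot.
key_policy_statement = {
    "Sid": "AllowDevAccountUseOfKey",
    "Effect": "Allow",
    "Principal": {"AWS": f"arn:aws:iam::{DEV_ACCOUNT}:root"},
    "Action": ["kms:Decrypt", "kms:DescribeKey", "kms:CreateGrant"],
    "Resource": "*",
}

# Parameters for sharing the manual snapshot with the development account.
share_params = {
    "DBSnapshotIdentifier": "prod-manual-snap",  # hypothetical name
    "AttributeName": "restore",
    "ValuesToAdd": [DEV_ACCOUNT],
}
# In a real script: boto3.client("rds").modify_db_snapshot_attribute(**share_params)
```

Note that snapshots encrypted with an account's default KMS key cannot be shared, which is why the flow centers on a custom key.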


Question # 35

A database specialist needs to replace the encryption key for an Amazon RDS DB instance. The database specialist needs to take immediate action to ensure the security of the database. Which solution will meet these requirements?

A. Modify the DB instance to update the encryption key. Perform this update immediately without waiting for the next scheduled maintenance window.
B. Export the database to an Amazon S3 bucket. Import the data to an existing DB instance by using the export file. Specify a new encryption key during the import process.
C. Create a manual snapshot of the DB instance. Create an encrypted copy of the snapshot by using a new encryption key. Create a new DB instance from the encrypted snapshot.
D. Create a manual snapshot of the DB instance. Restore the snapshot to a new DB instance. Specify a new encryption key during the restoration process.


Question # 36

A healthcare company is running an application on Amazon EC2 in a public subnet and using Amazon DocumentDB (with MongoDB compatibility) as the storage layer. An audit reveals that the traffic between the application and Amazon DocumentDB is not encrypted and that the DocumentDB cluster is not encrypted at rest. A database specialist must correct these issues and ensure that the data in transit and the data at rest are encrypted. Which actions should the database specialist take to meet these requirements? (Select TWO.)

A. Download the SSH RSA public key for Amazon DocumentDB. Update the application configuration to use the instance endpoint instead of the cluster endpoint and run queries over SSH.
B. Download the SSL .pem public key for Amazon DocumentDB. Add the key to the application package and make sure the application is using the key while connecting to the cluster.
C. Create a snapshot of the unencrypted cluster. Restore the unencrypted snapshot as a new cluster with the --storage-encrypted parameter set to true. Update the application to point to the new cluster.
D. Create an Amazon DocumentDB VPC endpoint to prevent the traffic from going to the Amazon DocumentDB public endpoint. Set a VPC endpoint policy to allow only the application instance's security group to connect.
E. Activate encryption at rest using the modify-db-cluster command with the --storage-encrypted parameter set to true. Set the security group of the cluster to allow only the application instance's security group to connect.
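For context on option C, encryption at rest for DocumentDB can only be set when a cluster is created, so the unencrypted snapshot is restored into a new, encrypted cluster. A sketch of the restore request (cluster and snapshot identifiers are hypothetical):

```python
# Sketch of restoring an unencrypted DocumentDB snapshot as a new
# encrypted cluster. Identifiers are hypothetical placeholders.
restore_params = {
    "DBClusterIdentifier": "docdb-encrypted",        # hypothetical new cluster
    "SnapshotIdentifier": "docdb-unencrypted-snap",  # hypothetical snapshot
    "Engine": "docdb",
    "StorageEncrypted": True,  # corresponds to the --storage-encrypted CLI flag
}
# In a real script: boto3.client("docdb").restore_db_cluster_from_snapshot(**restore_params)
```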


Question # 37

A database specialist needs to move an Amazon RDS DB instance from one AWS account to another AWS account. Which solution will meet this requirement with the LEAST operational effort?

A. Use AWS Database Migration Service (AWS DMS) to migrate the DB instance from the source AWS account to the destination AWS account.
B. Create a DB snapshot of the DB instance. Share the snapshot with the destination AWS account. Create a new DB instance by restoring the snapshot in the destination AWS account.
C. Create a Multi-AZ deployment for the DB instance. Create a read replica for the DB instance in the source AWS account. Use the read replica to replicate the data into the DB instance in the destination AWS account.
D. Use AWS DataSync to back up the DB instance in the source AWS account. Use AWS Resource Access Manager (AWS RAM) to restore the backup in the destination AWS account.


Question # 38

A development team at an international gaming company is experimenting with Amazon DynamoDB to store in-game events for three mobile games. The most popular game hosts a maximum of 500,000 concurrent users, and the least popular game hosts a maximum of 10,000 concurrent users. The average size of an event is 20 KB, and the average user session produces one event each second. Each event is tagged with a time in milliseconds and a globally unique identifier. The lead developer created a single DynamoDB table for the events with the following schema:
Partition key: game name
Sort key: event identifier
Local secondary index: player identifier
Event time
The tests were successful in a small-scale development environment. However, when deployed to production, new events stopped being added to the table and the logs show DynamoDB failures with the ItemCollectionSizeLimitExceededException error code. Which design change should a database specialist recommend to the development team?

A. Use the player identifier as the partition key. Use the event time as the sort key. Add a global secondary index with the game name as the partition key and the event time as the sort key.
B. Create two tables. Use the game name as the partition key in both tables. Use the event time as the sort key for the first table. Use the player identifier as the sort key for the second table.
C. Replace the sort key with a compound value consisting of the player identifier collated with the event time, separated by a dash. Add a local secondary index with the player identifier as the sort key.
D. Create one table for each game. Use the player identifier as the partition key. Use the event time as the sort key.
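As a point of reference, the key schema described in option A would be declared roughly as follows. Table, attribute, and index names are hypothetical; the billing mode is an arbitrary choice for the sketch:

```python
# Sketch of the key schema from option A: player identifier as the partition
# key, event time as the sort key, and a global secondary index keyed by
# game name and event time. Names are hypothetical.
create_table_params = {
    "TableName": "GameEvents",
    "AttributeDefinitions": [
        {"AttributeName": "player_id", "AttributeType": "S"},
        {"AttributeName": "event_time", "AttributeType": "N"},
        {"AttributeName": "game_name", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "player_id", "KeyType": "HASH"},
        {"AttributeName": "event_time", "KeyType": "RANGE"},
    ],
    "GlobalSecondaryIndexes": [{
        "IndexName": "game-events-by-time",
        "KeySchema": [
            {"AttributeName": "game_name", "KeyType": "HASH"},
            {"AttributeName": "event_time", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "ALL"},
    }],
    "BillingMode": "PAY_PER_REQUEST",
}
# In a real script: boto3.client("dynamodb").create_table(**create_table_params)
```

The relevant design point is that the 10 GB item collection limit behind ItemCollectionSizeLimitExceededException applies only to tables with local secondary indexes; a global secondary index has no such per-partition-key cap.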


Question # 39

An online advertising website uses an Amazon DynamoDB table with on-demand capacity mode as its data store. The website also has a DynamoDB Accelerator (DAX) cluster in the same VPC as its web application server. The application needs to perform infrequent writes and many strongly consistent reads from the data store by querying the DAX cluster. During a performance audit, a systems administrator notices that the application can look up items by using the DAX cluster. However, the QueryCacheHits metric for the DAX cluster consistently shows 0 while the QueryCacheMisses metric continuously keeps growing in Amazon CloudWatch. What is the MOST likely reason for this occurrence?

A. A VPC endpoint was not added to access DynamoDB.
B. Strongly consistent reads are always passed through DAX to DynamoDB.
C. DynamoDB is scaling due to a burst in traffic, resulting in degraded performance.
D. A VPC endpoint was not added to access CloudWatch.


Question # 40

A company is using AWS CloudFormation to provision and manage infrastructure resources, including a production database. During a recent CloudFormation stack update, a database specialist observed that changes were made to a database resource that is named ProductionDatabase. The company wants to prevent changes to only ProductionDatabase during future stack updates. Which stack policy will meet this requirement?

A. Option A
B. Option B
C. Option C
D. Option D
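The four answer options are rendered as images in the original exam and are not reproduced here. For reference, a stack policy that denies updates only to the ProductionDatabase logical resource, while allowing updates to everything else, generally takes this shape (built as a dict for readability):

```python
import json

# Reference shape of a CloudFormation stack policy that blocks all update
# actions on the ProductionDatabase logical resource while allowing updates
# to every other resource in the stack.
stack_policy = {
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "Update:*",
            "Principal": "*",
            "Resource": "LogicalResourceId/ProductionDatabase",
        },
        {
            "Effect": "Allow",
            "Action": "Update:*",
            "Principal": "*",
            "Resource": "*",
        },
    ]
}
stack_policy_json = json.dumps(stack_policy, indent=2)
```

In stack policy evaluation, an explicit Deny on a resource overrides the blanket Allow, which is what scopes the protection to the one resource.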


Question # 41

A software company is conducting a security audit of its three-node Amazon Aurora MySQL DB cluster. Which finding is a security concern that needs to be addressed?

A. The AWS account root user does not have the minimum privileges required for clientapplications.
B. Encryption in transit is not configured for all Aurora native backup processes.
C. Each Aurora DB cluster node is not in a separate private VPC with restricted access.
D. The IAM credentials used by the application are not rotated regularly.


Question # 42

A company wants to improve its ecommerce website on AWS. A database specialist decides to add Amazon ElastiCache for Redis to the implementation stack to ease the workload off the database and shorten the website response times. The database specialist must also ensure the ecommerce website is highly available within the company's AWS Region. How should the database specialist deploy ElastiCache to meet this requirement?

A. Launch an ElastiCache for Redis cluster using the AWS CLI with the --cluster-enabled switch.
B. Launch an ElastiCache for Redis cluster and select read replicas in different Availability Zones.
C. Launch two ElastiCache for Redis clusters in two different Availability Zones. Configure Redis streams to replicate the cache from the primary cluster to another.
D. Launch an ElastiCache cluster in the primary Availability Zone and restore the cluster's snapshot to a different Availability Zone during disaster recovery.


Question # 43

A database specialist wants to ensure that an Amazon Aurora DB cluster is always automatically upgraded to the most recent minor version available. Noticing that there is a new minor version available, the database specialist has issued an AWS CLI command to enable automatic minor version upgrades. The command runs successfully, but checking the Aurora DB cluster indicates that no update to the Aurora version has been made. What might account for this? (Choose two.)

A. The new minor version has not yet been designated as preferred and requires a manual upgrade.
B. Configuring automatic upgrades using the AWS CLI is not supported. This must be enabled expressly using the AWS Management Console.
C. Applying minor version upgrades requires sufficient free space.
D. The AWS CLI command did not include an apply-immediately parameter.
E. Aurora has detected a breaking change in the new minor version and has automaticallyrejected the upgrade.


Question # 44

A company has more than 100 AWS accounts that need Amazon RDS instances. The company wants to build an automated solution to deploy the RDS instances with specific compliance parameters. The data does not need to be replicated. The company needs to create the databases within 1 day. Which solution will meet these requirements in the MOST operationally efficient way?

A. Create RDS resources by using AWS CloudFormation. Share the CloudFormation template with each account.
B. Create an RDS snapshot. Share the snapshot with each account. Deploy the snapshot into each account.
C. Use AWS CloudFormation to create RDS instances in each account. Run AWS Database Migration Service (AWS DMS) replication to each of the created instances.
D. Create a script by using the AWS CLI to copy the RDS instance into the other accounts from a template account.


Question # 45

A company runs an ecommerce application on premises on Microsoft SQL Server. The company is planning to migrate the application to the AWS Cloud. The application code contains complex T-SQL queries and stored procedures. The company wants to minimize database server maintenance and operating costs after the migration is completed. The company also wants to minimize the need to rewrite code as part of the migration effort. Which solution will meet these requirements?

A. Migrate the database to Amazon Aurora PostgreSQL. Turn on Babelfish.
B. Migrate the database to Amazon S3. Use Amazon Redshift Spectrum for query processing.
C. Migrate the database to Amazon RDS for SQL Server. Turn on Kerberos authentication.
D. Migrate the database to an Amazon EMR cluster that includes multiple primary nodes.


Question # 46

A company uses an Amazon Redshift cluster to run its analytical workloads. Corporate policy requires that the company's data be encrypted at rest with customer managed keys. The company's disaster recovery plan requires that backups of the cluster be copied into another AWS Region on a regular basis. How should a database specialist automate the process of backing up the cluster data in compliance with these policies?

A. Copy the AWS Key Management Service (AWS KMS) customer managed key from the source Region to the destination Region. Set up an AWS Glue job in the source Region to copy the latest snapshot of the Amazon Redshift cluster from the source Region to the destination Region. Use a time-based schedule in AWS Glue to run the job on a daily basis.
B. Create a new AWS Key Management Service (AWS KMS) customer managed key in the destination Region. Create a snapshot copy grant in the destination Region specifying the new key. In the source Region, configure cross-Region snapshots for the Amazon Redshift cluster specifying the destination Region, the snapshot copy grant, and retention periods for the snapshot.
C. Copy the AWS Key Management Service (AWS KMS) customer managed key from the source Region to the destination Region. Create Amazon S3 buckets in each Region using the keys from their respective Regions. Use Amazon EventBridge (Amazon CloudWatch Events) to schedule an AWS Lambda function in the source Region to copy the latest snapshot to the S3 bucket in that Region. Configure S3 Cross-Region Replication to copy the snapshots to the destination Region, specifying the source and destination KMS key IDs in the replication configuration.
D. Use the same customer-supplied key materials to create a CMK with the same private key in the destination Region. Configure cross-Region snapshots in the source Region targeting the destination Region. Specify the corresponding CMK in the destination Region to encrypt the snapshot.
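For context on option B, the configuration is two calls: a snapshot copy grant in the destination Region, then enabling cross-Region snapshot copy in the source Region. A sketch with hypothetical names, Regions, key ARN, and retention period:

```python
# Sketch of the two calls behind cross-Region Redshift snapshot copy with a
# customer managed key. All identifiers and Regions are hypothetical.

# Step 1 (destination Region): grant Amazon Redshift permission to use the
# new customer managed KMS key when copying snapshots into that Region.
grant_params = {
    "SnapshotCopyGrantName": "dr-copy-grant",
    "KmsKeyId": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",  # hypothetical
}
# boto3.client("redshift", region_name="us-east-1").create_snapshot_copy_grant(**grant_params)

# Step 2 (source Region): turn on automatic cross-Region snapshot copy,
# referencing the grant created in the destination Region.
copy_params = {
    "ClusterIdentifier": "analytics-cluster",  # hypothetical
    "DestinationRegion": "us-east-1",
    "RetentionPeriod": 7,
    "SnapshotCopyGrantName": "dr-copy-grant",
}
# boto3.client("redshift", region_name="us-west-2").enable_snapshot_copy(**copy_params)
```

Once enabled, Redshift copies each new snapshot automatically, so no scheduled job is needed.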


Question # 47

A database specialist needs to move a table from a database that is running on an Amazon Aurora PostgreSQL DB cluster into a new and distinct database cluster. The new table in the new database must be updated with any changes to the original table that happen while the migration is in progress. The original table contains a column to store data as large as 2 GB in the form of large binary objects (LOBs). A few records are large in size, but most of the LOB data is smaller than 32 KB. What is the FASTEST way to replicate all the data from the original table?

A. Use AWS Database Migration Service (AWS DMS) with ongoing replication in full LOB mode.
B. Take a snapshot of the database. Create a new DB instance by using the snapshot.
C. Use AWS Database Migration Service (AWS DMS) with ongoing replication in limited LOB mode.
D. Use AWS Database Migration Service (AWS DMS) with ongoing replication in inline LOB mode.


Question # 48

A company recently migrated its line-of-business (LOB) application to AWS. The application uses an Amazon RDS for SQL Server DB instance as its database engine. The company must set up cross-Region disaster recovery for the application. The company needs a solution with the lowest possible RPO and RTO. Which solution will meet these requirements?

A. Create a cross-Region read replica of the DB instance. Promote the read replica at the time of failover. 
B. Set up SQL replication from the DB instance to an Amazon EC2 instance in the disaster recovery Region. Promote the EC2 instance as the primary server. 
C. Use AWS Database Migration Service (AWS DMS) for ongoing replication of the DB instance to the disaster recovery Region. 
D. Take manual snapshots of the DB instance in the primary Region. Copy the snapshots to the disaster recovery Region. 


Question # 49

A financial services company runs an on-premises MySQL database for a critical application. The company is dissatisfied with its current database disaster recovery (DR) solution. The application experiences a significant amount of downtime whenever the database fails over to its DR facility. The application also experiences slower response times when reports are processed on the same database. To minimize the downtime in DR situations, the company has decided to migrate the database to AWS. The company requires a solution that is highly available and the most cost-effective. Which solution meets these requirements?

A. Create an Amazon RDS for MySQL Multi-AZ DB instance and configure a read replica in a different Availability Zone. Configure the application to reference the replica instance endpoint and report queries to reference the primary DB instance endpoint. 
B. Create an Amazon RDS for MySQL Multi-AZ DB instance and configure a read replica in a different Availability Zone. Configure the application to reference the primary DB instance endpoint and report queries to reference the replica instance endpoint. 
C. Create an Amazon Aurora DB cluster and configure an Aurora Replica in a different Availability Zone. Configure the application to reference the cluster endpoint and report queries to reference the reader endpoint. 
D. Create an Amazon Aurora DB cluster and configure an Aurora Replica in a different Availability Zone. Configure the application to reference the primary DB instance endpoint and report queries to reference the replica instance endpoint. 


Question # 50

A company has branch offices in the United States and Singapore. The company has a three-tier web application that uses a shared database. The database runs on an Amazon RDS for MySQL DB instance that is hosted in the us-west-2 Region. The application has a distributed front end that is deployed in us-west-2 and in the ap-southeast-1 Region. The company uses this front end as a dashboard that provides statistics to sales managers in each branch office. The dashboard loads more slowly in the Singapore branch office than in the United States branch office. The company needs a solution so that the dashboard loads consistently for users in each location. Which solution will meet these requirements in the MOST operationally efficient way?

A. Take a snapshot of the DB instance in us-west-2. Create a new DB instance in ap-southeast-1 from the snapshot. Reconfigure the ap-southeast-1 front-end dashboard to access the new DB instance. 
B. Create an RDS read replica in ap-southeast-1 from the primary DB instance in us-west-2. Reconfigure the ap-southeast-1 front-end dashboard to access the read replica. 
C. Create a new DB instance in ap-southeast-1. Use AWS Database Migration Service (AWS DMS) and change data capture (CDC) to update the new DB instance in ap-southeast-1. Reconfigure the ap-southeast-1 front-end dashboard to access the new DB instance. 
D. Create an RDS read replica in us-west-2, where the primary DB instance resides. Create a read replica in ap-southeast-1 from the read replica in us-west-2. Reconfigure the ap-southeast-1 front-end dashboard to access the read replica in ap-southeast-1. 


Question # 51

A software-as-a-service (SaaS) company is using an Amazon Aurora Serverless DB cluster for its production MySQL database. The DB cluster has general logs and slow query logs enabled. A database engineer must use the most operationally efficient solution with minimal resource utilization to retain the logs and facilitate interactive search and analysis. Which solution meets these requirements?

A. Use an AWS Lambda function to ship database logs to an Amazon S3 bucket. Use Amazon Athena and Amazon QuickSight to search and analyze the logs. 
B. Download the logs from the DB cluster and store them in Amazon S3 by using manual scripts. Use Amazon Athena and Amazon QuickSight to search and analyze the logs. 
C. Use an AWS Lambda function to ship database logs to an Amazon S3 bucket. Use Amazon Elasticsearch Service (Amazon ES) and Kibana to search and analyze the logs. 
D. Use Amazon CloudWatch Logs Insights to search and analyze the logs when the logs are automatically uploaded by the DB cluster. 


Question # 52

A gaming company uses Amazon Aurora Serverless for one of its internal applications. The company's developers use Amazon RDS Data API to work with the Aurora Serverless DB cluster. After a recent security review, the company is mandating security enhancements. A database specialist must ensure that access to RDS Data API is private and never passes through the public internet. What should the database specialist do to meet this requirement?

A. Modify the Aurora Serverless cluster by selecting a VPC with private subnets. 
B. Modify the Aurora Serverless cluster by unchecking the publicly accessible option. 
C. Create an interface VPC endpoint that uses AWS PrivateLink for RDS Data API. 
D. Create a gateway VPC endpoint for RDS Data API. 
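For context on option C, RDS Data API is reached through an interface VPC endpoint (AWS PrivateLink) whose service name follows the com.amazonaws.&lt;region&gt;.rds-data pattern. A sketch with hypothetical VPC, subnet, and security group IDs:

```python
# Sketch of creating an interface VPC endpoint for RDS Data API so calls
# never traverse the public internet. IDs are hypothetical placeholders.
endpoint_params = {
    "VpcEndpointType": "Interface",
    "VpcId": "vpc-0123456789abcdef0",            # hypothetical
    "ServiceName": "com.amazonaws.us-east-1.rds-data",
    "SubnetIds": ["subnet-0123456789abcdef0"],   # hypothetical
    "SecurityGroupIds": ["sg-0123456789abcdef0"],  # hypothetical
    "PrivateDnsEnabled": True,  # lets the default API hostname resolve privately
}
# In a real script: boto3.client("ec2").create_vpc_endpoint(**endpoint_params)
```

Gateway endpoints, by contrast, exist only for Amazon S3 and DynamoDB, which is the distinction the last two options turn on.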


Question # 53

A company runs a customer relationship management (CRM) system that is hosted on premises with a MySQL database as the backend. A custom stored procedure is used to send email notifications to another system when data is inserted into a table. The company has noticed that the performance of the CRM system has decreased due to database reporting applications used by various teams. The company requires an AWS solution that would reduce maintenance, improve performance, and accommodate the email notification feature. Which AWS solution meets these requirements?

A. Use MySQL running on an Amazon EC2 instance with Auto Scaling to accommodate the reporting applications. Configure a stored procedure and an AWS Lambda function that uses Amazon SES to send email notifications to the other system. 
B. Use Amazon Aurora MySQL in a multi-master cluster to accommodate the reporting applications. Configure Amazon RDS event subscriptions to publish a message to an Amazon SNS topic and subscribe the other system's email address to the topic. 
C. Use MySQL running on an Amazon EC2 instance with a read replica to accommodate the reporting applications. Configure Amazon SES integration to send email notifications to the other system. 
D. Use Amazon Aurora MySQL with a read replica for the reporting applications. Configure a stored procedure and an AWS Lambda function to publish a message to an Amazon SNS topic. Subscribe the other system's email address to the topic. 


Question # 54

A security team is conducting an audit for a financial company. The security team discovers that the database credentials of an Amazon RDS for MySQL DB instance are hardcoded in the source code. The source code is stored in a shared location for automatic deployment and is exposed to all users who can access the location. A database specialist must use encryption to ensure that the credentials are not visible in the source code. Which solution will meet these requirements?

A. Use an AWS Key Management Service (AWS KMS) key to encrypt the most recent database backup. Restore the backup as a new database to activate encryption. 
B. Store the source code to access the credentials in an AWS Systems Manager Parameter Store secure string parameter that is encrypted by AWS Key Management Service (AWS KMS). Access the code with calls to Systems Manager. 
C. Store the credentials in an AWS Systems Manager Parameter Store secure string parameter that is encrypted by AWS Key Management Service (AWS KMS). Access the credentials with calls to Systems Manager. 
D. Use an AWS Key Management Service (AWS KMS) key to encrypt the DB instance at rest. Activate RDS encryption in transit by using SSL certificates. 
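For context on option C, a SecureString parameter is encrypted with a KMS key at write time and decrypted on read. A sketch of the two request shapes; the parameter name, value, and key alias are hypothetical placeholders:

```python
# Sketch of storing credentials as an encrypted SecureString parameter and
# reading them back decrypted. Names and the key alias are hypothetical.
put_params = {
    "Name": "/crm/prod/db-password",   # hypothetical parameter name
    "Value": "example-password",       # placeholder only; never hardcode real secrets
    "Type": "SecureString",
    "KeyId": "alias/crm-db-key",       # hypothetical KMS key alias
}
# boto3.client("ssm").put_parameter(**put_params)

get_params = {
    "Name": "/crm/prod/db-password",
    "WithDecryption": True,  # required to read a SecureString in plaintext
}
# boto3.client("ssm").get_parameter(**get_params)
```

The application then fetches the credential at startup instead of carrying it in source code.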


Question # 55

Developers have requested a new Amazon Redshift cluster so they can load new third-party marketing data. The new cluster is ready and the user credentials are given to the developers. The developers indicate that their copy jobs fail with the following error message: “Amazon Invalid operation: S3ServiceException:Access Denied,Status 403,Error AccessDenied.” The developers need to load this data soon, so a database specialist must act quickly to solve this issue. What is the MOST secure solution?

A. Create a new IAM role with the same user name as the Amazon Redshift developer user ID. Provide the IAM role with read-only access to Amazon S3 with the assume role action. 
B. Create a new IAM role with read-only access to the Amazon S3 bucket and include the assume role action. Modify the Amazon Redshift cluster to add the IAM role. 
C. Create a new IAM role with read-only access to the Amazon S3 bucket with the assume role action. Add this role to the developer IAM user ID used for the copy job that ended with an error message. 
D. Create a new IAM user with access keys and a new role with read-only access to the Amazon S3 bucket. Add this role to the Amazon Redshift cluster. Change the copy job to use the access keys created. 
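For context, once an IAM role with S3 read access is attached to the cluster, the COPY command references the role ARN rather than access keys. A sketch of the statement; the bucket, table, and role ARN are hypothetical:

```python
# Sketch of a Redshift COPY statement that authenticates via a cluster-attached
# IAM role. Bucket, table, and role ARN are hypothetical placeholders.
ROLE_ARN = "arn:aws:iam::111122223333:role/redshift-s3-readonly"

copy_sql = (
    "COPY marketing.events "
    "FROM 's3://example-marketing-bucket/events/' "
    f"IAM_ROLE '{ROLE_ARN}' "
    "FORMAT AS CSV;"
)
```

Using a role keeps long-lived access keys out of the load jobs entirely.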


Question # 56

A company's database specialist implements an AWS Database Migration Service (AWS DMS) task for change data capture (CDC) to replicate data from an on-premises Oracle database to Amazon S3. When usage of the company's application increases, the database specialist notices multiple hours of latency with the CDC. Which solutions will reduce this latency? (Choose two.)

A. Configure the DMS task to run in full large binary object (LOB) mode. 
B. Configure the DMS task to run in limited large binary object (LOB) mode. 
C. Create a Multi-AZ replication instance. 
D. Load tables in parallel by creating multiple replication instances for sets of tables that participate in common transactions. 
E. Replicate tables in parallel by creating multiple DMS tasks for sets of tables that do not participate in common transactions. 


Question # 57

A company plans to migrate a MySQL-based application from an on-premises environment to AWS. The application performs database joins across several tables and uses indexes for faster query response times. The company needs the database to be highly available with automatic failover. Which solution on AWS will meet these requirements with the LEAST operational overhead?

A. Deploy an Amazon RDS DB instance with a read replica. 
B. Deploy an Amazon RDS Multi-AZ DB instance. 
C. Deploy Amazon DynamoDB global tables. 
D. Deploy multiple Amazon RDS DB instances. Use Amazon Route 53 DNS with failover health checks configured. 


Question # 58

A company's development team needs to have production data restored in a staging AWS account. The production database is running on an Amazon RDS for PostgreSQL Multi-AZ DB instance, which has AWS KMS encryption enabled using the default KMS key. A database specialist planned to share the most recent automated snapshot with the staging account, but discovered that the option to share snapshots is disabled in the AWS Management Console. What should the database specialist do to resolve this? 

A. Disable automated backups in the DB instance. Share both the automated snapshot and the default KMS key with the staging account. Restore the snapshot in the staging account and enable automated backups. 
B. Copy the automated snapshot specifying a custom KMS encryption key. Share both the copied snapshot and the custom KMS encryption key with the staging account. Restore the snapshot to the staging account within the same Region. 
C. Modify the DB instance to use a custom KMS encryption key. Share both the automated snapshot and the custom KMS encryption key with the staging account. Restore the snapshot in the staging account. 
D. Copy the automated snapshot while keeping the default KMS key. Share both the snapshot and the default KMS key with the staging account. Restore the snapshot in the staging account. 


Question # 59

An online retail company is planning a multi-day flash sale that must support processing of up to 5,000 orders per second. The number of orders and exact schedule for the sale will vary each day. During the sale, approximately 10,000 concurrent users will look at the deals before buying items. Outside of the sale, the traffic volume is very low. The acceptable performance for read/write queries should be under 25 ms. Order items are about 2 KB in size and have a unique identifier. The company requires the most cost-effective solution that will automatically scale and is highly available. Which solution meets these requirements?

A. Amazon DynamoDB with on-demand capacity mode 
B. Amazon Aurora with one writer node and an Aurora Replica with the parallel query feature enabled 
C. Amazon DynamoDB with provisioned capacity mode with 5,000 write capacity units (WCUs) and 10,000 read capacity units (RCUs) 
D. Amazon Aurora with one writer node and two cross-Region Aurora Replicas 
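When weighing the DynamoDB capacity-mode options above, the documented capacity-unit rules matter: one WCU covers a write of up to 1 KB per second, and one RCU covers a strongly consistent read of up to 4 KB per second, so a 2 KB item consumes two WCUs per write. A quick check in Python:

```python
import math

# DynamoDB capacity-unit rules (per AWS documentation):
#   1 WCU = one write of an item up to 1 KB per second
#   1 RCU = one strongly consistent read of an item up to 4 KB per second
def wcus_needed(writes_per_sec: int, item_kb: float) -> int:
    return writes_per_sec * math.ceil(item_kb / 1.0)

def rcus_needed(reads_per_sec: int, item_kb: float) -> int:
    return reads_per_sec * math.ceil(item_kb / 4.0)

# 5,000 writes/sec of 2 KB order items: each write consumes 2 WCUs
print(wcus_needed(5000, 2.0))   # 10000, so 5,000 provisioned WCUs would throttle
```

This arithmetic shows why a table provisioned with 5,000 WCUs cannot absorb 5,000 writes per second of 2 KB items.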


Question # 60

A company wants to build a new invoicing service for its cloud-native application on AWS. The company has a small development team and wants to focus on service feature development and minimize operations and maintenance as much as possible. The company expects the service to handle billions of requests and millions of new records every day. The service feature requirements, including data access patterns, are well-defined. The service has an availability target of 99.99% with a millisecond-level latency requirement. The database for the service will be the system of record for invoicing data. Which database solution meets these requirements at the LOWEST cost?

A. Amazon Neptune 
B. Amazon Aurora PostgreSQL Serverless 
C. Amazon RDS for PostgreSQL 
D. Amazon DynamoDB 


Question # 61

A gaming firm recently purchased an iOS game that is especially popular during the Christmas season. The business has opted to add a leaderboard to the game, powered by Amazon DynamoDB. The application's load is likely to increase significantly throughout the Christmas season. Which solution satisfies these requirements at the LOWEST possible cost?

A. DynamoDB Streams 
B. DynamoDB with DynamoDB Accelerator 
C. DynamoDB with on-demand capacity mode 
D. DynamoDB with provisioned capacity mode with Auto Scaling 


Question # 62

An ecommerce company uses a backend application that stores data in an Amazon DynamoDB table. The backend application runs in a private subnet in a VPC and must connect to this table. The company must minimize any network latency that results from network connectivity issues, even during periods of heavy application usage. A database administrator also needs the ability to use a private connection to connect to the DynamoDB table from the application. Which solution will meet these requirements? 

A. Use network ACLs to ensure that any outgoing or incoming connections to any port except DynamoDB are deactivated. Encrypt API calls by using TLS.
B. Create a VPC endpoint for DynamoDB in the application's VPC. Use the VPC endpoint to access the table. 
C. Create an AWS Lambda function that has access to DynamoDB. Restrict outgoing access only to this Lambda function from the application. 
D. Use a VPN to route all communication to DynamoDB through the company's own corporate network infrastructure. 
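For reference, the gateway VPC endpoint that option B describes is created with a single EC2 API call. A sketch of the request parameters, assuming boto3; the VPC ID, route table ID, and Region are placeholders:

```python
# Parameters for ec2_client.create_vpc_endpoint(**params) in boto3.
# DynamoDB uses a gateway endpoint, which adds a route to the listed
# route tables so traffic to DynamoDB stays on the AWS network.
params = {
    "VpcId": "vpc-0abc1234",                            # placeholder
    "ServiceName": "com.amazonaws.us-east-1.dynamodb",  # Region-specific
    "VpcEndpointType": "Gateway",
    "RouteTableIds": ["rtb-0def5678"],                  # placeholder
}
```

Gateway endpoints for DynamoDB and S3 carry no data-processing charge, unlike interface endpoints.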


Question # 63

A finance company migrated its on-premises PostgreSQL database to an Amazon Aurora PostgreSQL DB cluster. During a review after the migration, a database specialist discovers that the database is not encrypted at rest. The database must be encrypted at rest as soon as possible to meet security requirements. The database specialist must enable encryption for the DB cluster with minimal downtime. Which solution will meet these requirements? 

A. Modify the unencrypted DB cluster using the AWS Management Console. Enable encryption and choose to apply the change immediately. 
B. Take a snapshot of the unencrypted DB cluster and restore it to a new DB cluster with encryption enabled. Update any database connection strings to reference the new DB cluster endpoint, and then delete the unencrypted DB cluster. 
C. Create an encrypted Aurora Replica of the unencrypted DB cluster. Promote the Aurora Replica as the new master. 
D. Create a new DB cluster with encryption enabled and use the pg_dump and pg_restore utilities to load data to the new DB cluster. Update any database connection strings to reference the new DB cluster endpoint, and then delete the unencrypted DB cluster. 


Question # 64

An internet advertising firm stores its data in an Amazon DynamoDB table. Amazon DynamoDB Streams is enabled on the table, and one of the keys has a global secondary index. The table is encrypted using a customer managed AWS Key Management Service (AWS KMS) key. The firm has chosen to expand worldwide and wants to replicate the database by using DynamoDB global tables in a new AWS Region. Upon review, an administrator observes the following: No role with the dynamodb:CreateGlobalTable permission exists in the account. An empty table with the same name exists in the new Region where replication is desired. A global secondary index with the same partition key but a different sort key exists in the new Region where replication is desired. Which settings will prevent the creation of a global table or replica in the new Region? (Select two.)

A. A global secondary index with the same partition key but a different sort key exists in the new Region where replication is desired. 
B. An empty table with the same name exists in the Region where replication is desired. 
C. No role with the dynamodb:CreateGlobalTable permission exists in the account. 
D. DynamoDB Streams is enabled for the table. 
E. The table is encrypted using a KMS customer managed key. 


Question # 65

A company is planning to use Amazon RDS for SQL Server for one of its critical applications. The company's security team requires that the users of the RDS for SQL Server DB instance are authenticated with on-premises Microsoft Active Directory credentials. Which combination of steps should a database specialist take to meet this requirement? (Choose three.) 

A. Extend the on-premises Active Directory to AWS by using AD Connector. 
B. Create an IAM user that uses the AmazonRDSDirectoryServiceAccess managed IAM policy. 
C. Create a directory by using AWS Directory Service for Microsoft Active Directory. 
D. Create an Active Directory domain controller on Amazon EC2. 
E. Create an IAM role that uses the AmazonRDSDirectoryServiceAccess managed IAM policy. 
F. Create a one-way forest trust from the AWS Directory Service for Microsoft Active Directory directory to the on-premises Active Directory. 


Question # 66

A company hosts a 2 TB Oracle database in its on-premises data center. A database specialist is migrating the database from on premises to an Amazon Aurora PostgreSQL database on AWS. The database specialist identifies a compatibility problem: Oracle stores metadata in its data dictionary in uppercase, but PostgreSQL stores the metadata in lowercase. The database specialist must resolve this problem to complete the migration. What is the MOST operationally efficient solution that meets these requirements? 

A. Override the default uppercase format of Oracle schema by encasing object names in quotation marks during creation. 
B. Use AWS Database Migration Service (AWS DMS) mapping rules with rule-action as convert-lowercase. 
C. Use the AWS Schema Conversion Tool conversion agent to convert the metadata from uppercase to lowercase. 
D. Use an AWS Glue job that is attached to an AWS Database Migration Service (AWS DMS) replication task to convert the metadata from uppercase to lowercase. 


Question # 67

A company is developing a multi-tier web application hosted on AWS using Amazon Aurora as the database. The application needs to be deployed to production and other nonproduction environments. A Database Specialist needs to specify different MasterUsername and MasterUserPassword properties in the AWS CloudFormation templates used for automated deployment. The CloudFormation templates are version controlled in the company's code repository. The company also needs to meet a compliance requirement by routinely rotating its database master password for production. What is the MOST secure solution to store the master password?

A. Store the master password in a parameter file in each environment. Reference the environment-specific parameter file in the CloudFormation template. 
B. Encrypt the master password using an AWS KMS key. Store the encrypted master password in the CloudFormation template. 
C. Use the secretsmanager dynamic reference to retrieve the master password stored in AWS Secrets Manager and enable automatic rotation. 
D. Use the ssm dynamic reference to retrieve the master password stored in the AWS Systems Manager Parameter Store and enable automatic rotation. 
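For reference, the secretsmanager dynamic reference mentioned in option C has the form `{{resolve:secretsmanager:secret-id:SecretString:json-key}}`; CloudFormation resolves it at deploy time, so no plaintext password enters version control. A small Python sketch that builds the reference string; the secret name and JSON key are placeholders:

```python
# Build a CloudFormation dynamic reference for a Secrets Manager secret.
# "ProdDBSecret" and the "password" JSON key are placeholders for a real
# secret created (and auto-rotated) in AWS Secrets Manager.
secret_name = "ProdDBSecret"
master_password_ref = (
    f"{{{{resolve:secretsmanager:{secret_name}:SecretString:password}}}}"
)
print(master_password_ref)
# {{resolve:secretsmanager:ProdDBSecret:SecretString:password}}
```

The resulting string is what would be placed in the template's MasterUserPassword property.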


Question # 68

A Database Specialist is creating Amazon DynamoDB tables, Amazon CloudWatch alarms, and associated infrastructure for an Application team using a development AWS account. The team wants a deployment method that will standardize the core solution components while managing environment-specific settings separately, and wants to minimize rework due to configuration errors. Which process should the Database Specialist recommend to meet these requirements?

A. Organize common and environmental-specific parameters hierarchically in the AWS Systems Manager Parameter Store, then reference the parameters dynamically from an AWS CloudFormation template. Deploy the CloudFormation stack using the environment name as a parameter. 
B. Create a parameterized AWS CloudFormation template that builds the required objects. Keep separate environment parameter files in separate Amazon S3 buckets. Provide an AWS CLI command that deploys the CloudFormation stack directly referencing the appropriate parameter bucket.
C. Create a parameterized AWS CloudFormation template that builds the required objects. Import the template into the CloudFormation interface in the AWS Management Console. Make the required changes to the parameters and deploy the CloudFormation stack. 
D. Create an AWS Lambda function that builds the required objects using an AWS SDK. Set the required parameter values in a test event in the Lambda console for each environment that the Application team can modify, as needed. Deploy the infrastructure by triggering the test event in the console. 


Question # 69

A retail company uses Amazon Redshift Spectrum to run complex analytical queries on objects that are stored in an Amazon S3 bucket. The objects are joined with multiple dimension tables that are stored in an Amazon Redshift database. The company uses the database to create monthly and quarterly aggregated reports. Users who attempt to run queries are reporting the following error message: error: Spectrum Scan Error: Access throttled Which solution will resolve this error? 

A. Check file sizes of fact tables in Amazon S3, and look for large files. Break up large files into smaller files of equal size between 100 MB and 1 GB 
B. Reduce the number of queries that users can run in parallel. 
C. Check file sizes of fact tables in Amazon S3, and look for small files. Merge the small files into larger files of at least 64 MB in size. 
D. Review and optimize queries that submit a large aggregation step to Redshift Spectrum.


Question # 70

A manufacturing company has an inventory system that stores information in an Amazon Aurora MySQL DB cluster. The database tables are partitioned. The database size has grown to 3 TB. Users run one-time queries by using a SQL client. Queries that use an equijoin to join large tables are taking a long time to run. Which action will improve query performance with the LEAST operational effort?

A. Migrate the database to a new Amazon Redshift data warehouse. 
B. Enable hash joins on the database by setting the variable optimizer_switch to hash_join=on. 
C. Take a snapshot of the DB cluster. Create a new DB instance by using the snapshot, and enable parallel query mode. 
D. Add an Aurora read replica. 


Question # 71

A business is launching a new Amazon RDS for SQL Server DB instance. The organization wishes to enable auditing of the SQL Server database. Which steps should a database specialist take in combination to achieve this requirement? (Select two.) 

A. Create a service-linked role for Amazon RDS that grants permissions for Amazon RDS to store audit logs on Amazon S3. 
B. Set up a parameter group to configure an IAM role and an Amazon S3 bucket for audit log storage. Associate the parameter group with the DB instance.
C. Disable Multi-AZ on the DB instance, and then enable auditing. Enable Multi-AZ after auditing is enabled. 
D. Disable automated backup on the DB instance, and then enable auditing. Enable automated backup after auditing is enabled. 
E. Set up an options group to configure an IAM role and an Amazon S3 bucket for audit log storage. Associate the options group with the DB instance. 
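For reference, RDS for SQL Server auditing is configured through the SQLSERVER_AUDIT option in an options group, whose settings name the IAM role and S3 bucket used for audit-log storage. A sketch of the request parameters, assuming boto3; the group name, role ARN, and bucket ARN are placeholders:

```python
# Parameters for rds_client.modify_option_group(**params) in boto3.
# All identifiers and ARNs below are placeholders.
params = {
    "OptionGroupName": "sqlserver-audit-group",
    "OptionsToInclude": [{
        "OptionName": "SQLSERVER_AUDIT",
        "OptionSettings": [
            {"Name": "IAM_ROLE_ARN", "Value": "arn:aws:iam::123456789012:role/rds-sql-audit"},
            {"Name": "S3_BUCKET_ARN", "Value": "arn:aws:s3:::audit-log-bucket"},
        ],
    }],
    "ApplyImmediately": True,
}
```

The options group is then associated with the DB instance, after which SQL Server audit files are uploaded to the named bucket.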


Question # 72

A company hosts an on-premises Microsoft SQL Server Enterprise edition database with Transparent Data Encryption (TDE) enabled. The database is 20 TB in size and includes sparse tables. The company needs to migrate the database to Amazon RDS for SQL Server during a maintenance window that is scheduled for an upcoming weekend. Data-at-rest encryption must be enabled for the target DB instance. Which combination of steps should the company take to migrate the database to AWS in the MOST operationally efficient manner? (Choose two.)

A. Use AWS Database Migration Service (AWS DMS) to migrate from the on-premises source database to the RDS for SQL Server target database. 
B. Disable TDE. Create a database backup without encryption. Copy the backup to Amazon S3. 
C. Restore the backup to the RDS for SQL Server DB instance. Enable TDE for the RDS for SQL Server DB instance. 
D. Set up an AWS Snowball Edge device. Copy the database backup to the device. Send the device to AWS. Restore the database from Amazon S3. 
E. Encrypt the data with client-side encryption before transferring the data to Amazon RDS. 


Question # 73

A company uses an on-premises Microsoft SQL Server database to host relational and JSON data and to run daily ETL and advanced analytics. The company wants to migrate the database to the AWS Cloud. A database specialist must choose one or more AWS services to run the company's workloads. Which solution will meet these requirements in the MOST operationally efficient manner?

A. Use Amazon Redshift for relational data. Use Amazon DynamoDB for JSON data 
B. Use Amazon Redshift for relational data and JSON data. 
C. Use Amazon RDS for relational data. Use Amazon Neptune for JSON data 
D. Use Amazon Redshift for relational data. Use Amazon S3 for JSON data. 


Question # 74

A pharmaceutical company uses Amazon Quantum Ledger Database (Amazon QLDB) to store its clinical trial data records. The company has an application that runs as AWS Lambda functions. The application is hosted in the private subnet in a VPC. The application does not have internet access and needs to read some of the clinical data records. The company is concerned that traffic between the QLDB ledger and the VPC could leave the AWS network. The company needs to secure access to the QLDB ledger and allow the VPC traffic to have read-only access. Which security strategy should a database specialist implement to meet these requirements?

A. Move the QLDB ledger into a private database subnet inside the VPC. Run the Lambda functions inside the same VPC in an application private subnet. Ensure that the VPC route table allows read-only flow from the application subnet to the database subnet. 
B. Create an AWS PrivateLink VPC endpoint for the QLDB ledger. Attach a VPC policy to the VPC endpoint to allow read-only traffic for the Lambda functions that run inside the VPC.
C. Add a security group to the QLDB ledger to allow access from the private subnets inside the VPC where the Lambda functions that access the QLDB ledger are running. 
D. Create a VPN connection to ensure pairing of the private subnet where the Lambda functions are running with the private subnet where the QLDB ledger is deployed. 


Question # 75

A company has a quarterly customer survey. The survey uses an Amazon EC2 instance that is hosted in a public subnet to host a customer survey website. The company uses an Amazon RDS DB instance that is hosted in a private subnet in the same VPC to store the survey results. The company takes a snapshot of the DB instance after a survey is complete, deletes the DB instance, and then restores the DB instance from the snapshot when the survey needs to be conducted again. A database specialist discovers that the customer survey website times out when it attempts to establish a connection to the restored DB instance. What is the root cause of this problem?

A. The VPC peering connection has not been configured properly for the EC2 instance to communicate with the DB instance.
B. The route table of the private subnet that hosts the DB instance does not have a NAT gateway configured for communication with the EC2 instance. 
C. The public subnet that hosts the EC2 instance does not have an internet gateway configured for communication with the DB instance. 
D. The wrong security group was associated with the new DB instance when it was restored from the snapshot. 


Question # 76

A company is launching a new Amazon RDS for MySQL Multi-AZ DB instance to be used as a data store for a custom-built application. After a series of tests with point-in-time recovery disabled, the company decides that it must have point-in-time recovery reenabled before using the DB instance to store production data. What should a database specialist do so that point-in-time recovery can be successful? 

A. Enable binary logging in the DB parameter group used by the DB instance. 
B. Modify the DB instance and enable audit logs to be pushed to Amazon CloudWatch Logs. 
C. Modify the DB instance and configure a backup retention period 
D. Set up a scheduled job to create manual DB instance snapshots. 
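For context on the retention option above: automated backups, which point-in-time recovery depends on, are enabled by setting a nonzero backup retention period on the instance. A sketch of that modification, assuming boto3; the instance identifier is a placeholder:

```python
# Parameters for rds_client.modify_db_instance(**params) in boto3.
# A BackupRetentionPeriod greater than 0 turns automated backups on,
# which point-in-time recovery requires. The instance name is a placeholder.
params = {
    "DBInstanceIdentifier": "prod-mysql",
    "BackupRetentionPeriod": 7,        # days; 0 disables automated backups
    "ApplyImmediately": True,
}
```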


Question # 77

A company is running a blogging platform. A security audit determines that the Amazon RDS DB instance that is used by the platform is not configured to encrypt the data at rest. The company must encrypt the DB instance within 30 days. What should a database specialist do to meet this requirement with the LEAST amount of downtime?

A. Create a read replica of the DB instance, and enable encryption. When the read replica is available, promote the read replica and update the endpoint that is used by the application. Delete the unencrypted DB instance. 
B. Take a snapshot of the DB instance. Make an encrypted copy of the snapshot. Restore the encrypted snapshot. When the new DB instance is available, update the endpoint that is used by the application. Delete the unencrypted DB instance. 
C. Create a new encrypted DB instance. Perform an initial data load, and set up logical replication between the two DB instances When the new DB instance is in sync with the source DB instance, update the endpoint that is used by the application. Delete the unencrypted DB instance. 
D. Convert the DB instance to an Amazon Aurora DB cluster, and enable encryption. When the DB cluster is available, update the endpoint that is used by the application to the cluster endpoint. Delete the unencrypted DB instance. 
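For reference, the encrypted-copy step that option B describes uses the CopyDBSnapshot API; supplying a KMS key on the copy produces an encrypted snapshot even when the source snapshot is unencrypted. A sketch of the parameters, assuming boto3; the snapshot names and key alias are placeholders:

```python
# Parameters for rds_client.copy_db_snapshot(**params) in boto3.
# Supplying KmsKeyId makes the copy encrypted at rest.
# Snapshot identifiers and the key alias below are placeholders.
params = {
    "SourceDBSnapshotIdentifier": "blog-db-snapshot",
    "TargetDBSnapshotIdentifier": "blog-db-snapshot-encrypted",
    "KmsKeyId": "alias/aws/rds",
}
```

Restoring the encrypted copy then yields an encrypted DB instance; only the cutover to the new endpoint incurs downtime.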


Question # 78

A company is using an Amazon ElastiCache for Redis cluster to host its online shopping website. Shoppers receive an error when the website's application queries the cluster. Which solutions will resolve this memory issue with the LEAST amount of effort? (Choose three.) 

A. Reduce the TTL value for keys on the node. 
B. Choose a larger node type. 
C. Test different values in the parameter group for the maxmemory-policy parameter to find the ideal value to use. 
D. Increase the number of nodes. 
E. Monitor the EngineCPUUtilization Amazon CloudWatch metric. Create an AWS Lambda function to delete keys on nodes when a threshold is reached. 
F. Increase the TTL value for keys on the node


Question # 79

A company has an on-premises production Microsoft SQL Server with 250 GB of data in one database. A database specialist needs to migrate this on-premises SQL Server to Amazon RDS for SQL Server. The nightly native SQL Server backup file is approximately 120 GB in size. The application can be down for an extended period of time to complete the migration. Connectivity between the on-premises environment and AWS can be initiated from on-premises only. How can the database be migrated from on-premises to Amazon RDS with the LEAST amount of effort?

A. Back up the SQL Server database using a native SQL Server backup. Upload the backup files to Amazon S3. Download the backup files on an Amazon EC2 instance and restore them from the EC2 instance into the new production RDS instance. 
B. Back up the SQL Server database using a native SQL Server backup. Upload the backup files to Amazon S3. Restore the backup files from the S3 bucket into the new production RDS instance. 
C. Provision and configure AWS DMS. Set up replication between the on-premises SQL Server environment to replicate the database to the new production RDS instance. 
D. Back up the SQL Server database using AWS Backup. Once the backup is complete, restore the completed backup to an Amazon EC2 instance and move it to the new production RDS instance. 


Question # 80

An information management services company is storing JSON documents on premises. The company is using a MongoDB 3.6 database but wants to migrate to AWS. The solution must be compatible, scalable, and fully managed. The solution also must result in as little downtime as possible during the migration. Which solution meets these requirements? 

A. Create an AWS Database Migration Service (AWS DMS) replication instance, a source endpoint for MongoDB, and a target endpoint of Amazon DocumentDB (with MongoDB compatibility). 
B. Create an AWS Database Migration Service (AWS DMS) replication instance, a source endpoint for MongoDB, and a target endpoint of a MongoDB image that is hosted on Amazon EC2 
C. Use the mongodump and mongorestore tools to migrate the data from the source MongoDB deployment to Amazon DocumentDB (with MongoDB compatibility). 
D. Use the mongodump and mongorestore tools to migrate the data from the source MongoDB deployment to a MongoDB image that is hosted on Amazon EC2. 


Question # 81

A company requires near-real-time notifications when changes are made to Amazon RDS DB security groups. Which solution will meet this requirement with the LEAST operational overhead?

A. Configure an RDS event notification subscription for DB security group events.
B. Create an AWS Lambda function that monitors DB security group changes. Create an Amazon Simple Notification Service (Amazon SNS) topic for notification. 
C. Turn on AWS CloudTrail. Configure notifications for the detection of changes to DB security groups. 
D. Configure an Amazon CloudWatch alarm for RDS metrics about changes to DB security groups. 
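For reference, RDS event notification subscriptions support a db-security-group source type, which delivers change events to an SNS topic without any custom code. A sketch of the request parameters, assuming boto3; the subscription name and topic ARN are placeholders:

```python
# Parameters for rds_client.create_event_subscription(**params) in boto3.
# Subscription name and SNS topic ARN are placeholders.
params = {
    "SubscriptionName": "db-secgroup-changes",
    "SnsTopicArn": "arn:aws:sns:us-east-1:123456789012:db-alerts",
    "SourceType": "db-security-group",
    "Enabled": True,
}
```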


Question # 82

A company uses Amazon Aurora MySQL as the primary database engine for many of its applications. A database specialist must create a dashboard to provide the company with information about user connections to databases. According to compliance requirements, the company must retain all connection logs for at least 7 years. Which solution will meet these requirements MOST cost-effectively?

A. Enable advanced auditing on the Aurora cluster to log CONNECT events. Export audit logs from Amazon CloudWatch to Amazon S3 by using an AWS Lambda function that is invoked by an Amazon EventBridge (Amazon CloudWatch Events) scheduled event. Build a dashboard by using Amazon QuickSight. 
B. Capture connection attempts to the Aurora cluster with AWS CloudTrail by using the DescribeEvents API operation. Create a CloudTrail trail to export connection logs to Amazon S3. Build a dashboard by using Amazon QuickSight. 
C. Start a database activity stream for the Aurora cluster. Push the activity records to an Amazon Kinesis data stream. Build a dynamic dashboard by using AWS Lambda. 
D. Publish the DatabaseConnections metric for the Aurora DB instances to Amazon CloudWatch. Build a dashboard by using CloudWatch dashboards. 


Question # 83

A database professional is tasked with migrating 25 GB of data files from an on-premises storage system to an Amazon Neptune database. Which method of data loading is the FASTEST?

A. Upload the data to Amazon S3 and use the Loader command to load the data from Amazon S3 into the Neptune database. 
B. Write a utility to read the data from the on-premises storage and run INSERT statements in a loop to load the data into the Neptune database. 
C. Use the AWS CLI to load the data directly from the on-premises storage into the Neptune database. 
D. Use AWS DataSync to load the data directly from the on-premises storage into the Neptune database. 


Question # 84

A company is running a business-critical application on premises by using Microsoft SQL Server. A database specialist is planning to migrate the instance with several databases to the AWS Cloud. The database specialist will use SQL Server Standard edition hosted on Amazon EC2 Windows instances. The solution must provide high availability and must avoid a single point of failure in the SQL Server deployment architecture. Which solution will meet these requirements?

A. Create Amazon RDS for SQL Server Multi-AZ DB instances. Use Amazon S3 as a shared storage option to host the databases. 
B. Set up Always On Failover Cluster Instances as a single SQL Server instance. Use Multi-AZ Amazon FSx for Windows File Server as a shared storage option to host the databases. 
C. Set up Always On availability groups to group one or more user databases that fail over together across multiple SQL Server instances. Use Multi-AZ Amazon FSx for Windows File Server as a shared storage option to host the databases.
D. Create an Application Load Balancer to distribute database traffic across multiple EC2 instances in multiple Availability Zones. Use Amazon S3 as a shared storage option to host the databases. 


Question # 85

A business uses Amazon DynamoDB global tables to power an online game that is played by gamers around the globe. As the game gained popularity, the number of queries to DynamoDB rose substantially. Recently, gamers have complained that the game's state is inconsistent between countries. A database professional notices that the ReplicationLatency metric for many replica tables has an abnormally high value. Which strategy will resolve the issue? 

A. Configure all replica tables to use DynamoDB auto scaling. 
B. Configure a DynamoDB Accelerator (DAX) cluster on each of the replicas. 
C. Configure the primary table to use DynamoDB auto scaling and the replica tables to use manually provisioned capacity. 
D. Configure the table-level write throughput limit service quota to a higher value. 


Question # 86

A Database Specialist is constructing a new Amazon Neptune DB cluster and tries to load data from Amazon S3 using the Neptune bulk loader API. The Database Specialist is confronted with the following error message: "Unable to establish a connection to the s3 endpoint. The source URL is s3://mybucket/graphdata/ and the region code is us-east-1. Kindly confirm your S3 configuration." Which of the following actions should the Database Specialist take to resolve the issue? (Select two.)

A. Check that Amazon S3 has an IAM role granting read access to Neptune 
B. Check that an Amazon S3 VPC endpoint exists 
C. Check that a Neptune VPC endpoint exists 
D. Check that Amazon EC2 has an IAM role granting read access to Amazon S3 
E. Check that Neptune has an IAM role granting read access to Amazon S3 
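For background, the Neptune bulk loader is driven by an HTTP POST to the cluster's /loader endpoint; the request names an s3:// source, a format, the IAM role attached to the cluster, and the bucket's Region, and reaches S3 through an S3 VPC endpoint. A sketch of the request body; the role ARN and bucket path are placeholders:

```python
import json

# Request body POSTed to https://<neptune-endpoint>:8182/loader.
# The bucket path and role ARN are placeholders; the role must be
# attached to the Neptune cluster with read access to the bucket.
payload = {
    "source": "s3://mybucket/graphdata/",
    "format": "csv",
    "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",
    "region": "us-east-1",
    "failOnError": "FALSE",
}
print(json.dumps(payload))
```

A missing S3 VPC endpoint or a cluster without the IAM role are the two configuration gaps that surface as the connection error quoted in the question.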


Question # 87

A gaming company is evaluating Amazon ElastiCache as a solution to manage player leaderboards. Millions of players around the world will compete in annual tournaments. The company wants to implement an architecture that is highly available. The company also wants to ensure that maintenance activities have minimal impact on the availability of the gaming platform. Which combination of steps should the company take to meet these requirements? (Choose two.)

A. Deploy an ElastiCache for Redis cluster with read replicas and Multi-AZ enabled. 
B. Deploy an ElastiCache for Memcached global datastore. 
C. Deploy a single-node ElastiCache for Redis cluster with automatic backups enabled. In the event of a failure, create a new cluster and restore data from the most recent backup. 
D. Use the default maintenance window to apply any required system changes and mandatory updates as soon as they are available. 
E. Choose a preferred maintenance window at the time of lowest usage to apply any required changes and mandatory updates. 


Question # 88

A pharmaceutical company's drug search API is using an Amazon Neptune DB cluster. A bulk uploader process automatically updates the information in the database a few times each week. A few weeks ago during a bulk upload, a database specialist noticed that the database started to respond frequently with a ThrottlingException error. The problem also occurred with subsequent uploads. The database specialist must create a solution to prevent ThrottlingException errors for the database. The solution must minimize the downtime of the cluster. Which solution meets these requirements?

A. Create a read replica that uses a larger instance size than the primary DB instance. Fail over the primary DB instance to the read replica. 
B. Add a read replica to each Availability Zone. Use an instance for the read replica that is the same size as the primary DB instance. Keep the traffic between the API and the database within the Availability Zone. 
C. Create a read replica that uses a larger instance size than the primary DB instance. Offload the reads from the primary DB instance. 
D. Take the latest backup, and restore it in a DB cluster of a larger size. Point the application to the newly created DB cluster. 


Question # 89

A company is using Amazon Aurora MySQL as the database for its retail application on AWS. The company receives a notification of a pending database upgrade and wants to ensure upgrades do not occur before or during the most critical time of year. Company leadership is concerned that an Amazon RDS maintenance window will cause an outage during data ingestion. Which step can be taken to ensure that the application is not interrupted?

A. Disable weekly maintenance on the DB cluster. 
B. Clone the DB cluster and migrate it to a new copy of the database. 
C. Choose to defer the upgrade and then find an appropriate down time for patching. 
D. Set up an Aurora Replica and promote it to primary at the time of patching. 


Question # 90

A software company uses an Amazon RDS for MySQL Multi-AZ DB instance as a data store for its critical applications. During an application upgrade process, a database specialist runs a custom SQL script that accidentally removes some of the default permissions of the master user. What is the MOST operationally efficient way to restore the default permissions of the master user?

A. Modify the DB instance and set a new master user password. 
B. Use AWS Secrets Manager to modify the master user password and restart the DB instance. 
C. Create a new master user for the DB instance. 
D. Review the IAM user that owns the DB instance, and add missing permissions. 


Question # 91

A company conducted a security audit of its AWS infrastructure. The audit identified that data was not encrypted in transit between application servers and a MySQL database that is hosted in Amazon RDS. After the audit, the company updated the application to use an encrypted connection. To prevent this problem from occurring again, the company's database team needs to configure the database to require in-transit encryption for all connections. Which solution will meet this requirement?

A. Update the parameter group in use by the DB instance, and set the require_secure_transport parameter to ON. 
B. Connect to the database, and use ALTER USER to enable the REQUIRE SSL option on the database user. 
C. Update the security group in use by the DB instance, and remove port 80 to prevent unencrypted connections from being established. 
D. Update the DB instance, and enable the Require Transport Layer Security option. 
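The parameter-group change described in option A can be expressed as a `rds.modify_db_parameter_group` request. The sketch below builds only the payload (the parameter group name is a placeholder); `require_secure_transport` is a dynamic MySQL parameter, so it can be applied immediately.

```python
# Sketch only: payload for enforcing TLS on all connections to an RDS for
# MySQL instance via its custom parameter group.
modify_params = {
    "DBParameterGroupName": "custom-mysql8",   # placeholder group name
    "Parameters": [
        {
            "ParameterName": "require_secure_transport",
            "ParameterValue": "ON",
            "ApplyMethod": "immediate",   # dynamic parameter; no reboot required
        }
    ],
}
print(modify_params["Parameters"][0]["ParameterName"])
```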


Question # 92

For the first time, a database professional is setting up a test graph database on Amazon Neptune. The database professional must load millions of rows of test observations from a .csv file stored in Amazon S3 into the Neptune DB instance through a series of API calls. Which combination of actions enables the database professional to upload the data most quickly? (Select three.) 

A. Ensure Amazon Cognito returns the proper AWS STS tokens to authenticate the Neptune DB instance to the S3 bucket hosting the CSV file. 
B. Ensure the vertices and edges are specified in different .csv files with proper header column formatting. 
C. Use AWS DMS to move data from Amazon S3 to the Neptune Loader. 
D. Curl the S3 URI while inside the Neptune DB instance and then run the addVertex or addEdge commands. 
E. Ensure an IAM role for the Neptune DB instance is configured with the appropriate permissions to allow access to the file in the S3 bucket. 
F. Create an S3 VPC endpoint and issue an HTTP POST to the database's loader endpoint.
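Option F refers to the Neptune bulk loader, which accepts an HTTP POST with a JSON body at the cluster's `/loader` endpoint. The sketch below only constructs that request body (bucket, role ARN, and region are placeholders) without contacting any endpoint.

```python
import json

# Sketch only: Neptune bulk loader request body for loading CSV data from
# S3. The IAM role must grant the DB instance read access to the bucket,
# and the cluster reaches S3 through an S3 VPC endpoint.
loader_request = {
    "source": "s3://example-bucket/graph/",              # placeholder: vertex/edge .csv files
    "format": "csv",
    "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",  # placeholder role
    "region": "us-east-1",
    "failOnError": "FALSE",
}
body = json.dumps(loader_request)
print(body)
```

The POST target would be something like `https://<neptune-endpoint>:8182/loader`, issued from inside the VPC.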


Question # 93

A company is using an Amazon Aurora MySQL database with Performance Insights enabled. A database specialist is checking Performance Insights and observes an alert message that starts with the following phrase: "Performance Insights is unable to collect SQL Digest statistics on new queries…" Which action will resolve this alert message?

A. Truncate the events_statements_summary_by_digest table. 
B. Change the AWS Key Management Service (AWS KMS) key that is used to enable Performance Insights. 
C. Set the value for the performance_schema parameter in the parameter group to 1. 
D. Disable and reenable Performance Insights to be effective in the next maintenance window. 
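Option C is again a parameter-group change, this time at the cluster level via `rds.modify_db_cluster_parameter_group`. The sketch below builds only the payload (the group name is a placeholder); unlike `require_secure_transport`, `performance_schema` is a static parameter, so it takes effect only after a reboot.

```python
# Sketch only: enable the MySQL Performance Schema on an Aurora cluster so
# Performance Insights can collect SQL Digest statistics.
params = {
    "DBClusterParameterGroupName": "aurora-mysql-custom",   # placeholder group name
    "Parameters": [
        {
            "ParameterName": "performance_schema",
            "ParameterValue": "1",
            "ApplyMethod": "pending-reboot",   # static parameter; requires a reboot
        }
    ],
}
print(params["Parameters"][0]["ApplyMethod"])
```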


Question # 94

A company runs hundreds of Microsoft SQL Server databases on Windows servers in its on-premises data center. A database specialist needs to migrate these databases to Linux on AWS. Which combination of steps should the database specialist take to meet this requirement? (Choose three.)

A. Install AWS Systems Manager Agent on the on-premises servers. Use Systems Manager Run Command to install the Windows to Linux replatforming assistant for Microsoft SQL Server Databases. 
B. Use AWS Systems Manager Run Command to install and configure the AWS Schema Conversion Tool on the on-premises servers.
C. On the Amazon EC2 console, launch EC2 instances and select a Linux AMI that includes SQL Server. Install and configure AWS Systems Manager Agent on the EC2 instances. 
D. On the AWS Management Console, set up Amazon RDS for SQL Server DB instances with Linux as the operating system. Install AWS Systems Manager Agent on the DB instances by using an options group.
E. Open the Windows to Linux replatforming assistant tool. Enter configuration details of the source and destination databases. Start migration. 
F. On the AWS Management Console, set up AWS Database Migration Service (AWS DMS) by entering details of the source SQL Server database and the destination SQL Server database on AWS. Start migration. 


Question # 95

An online gaming company is using an Amazon DynamoDB table in on-demand mode to store game scores. After an intensive advertisement campaign in South America, the average number of concurrent users rapidly increases from 100,000 to 500,000 in less than 10 minutes every day around 5 PM. The on-call software reliability engineer has observed that the application logs contain a high number of DynamoDB throttling exceptions caused by game score insertions around 5 PM. Customer service has also reported that several users are complaining about their scores not being registered. How should the database administrator remediate this issue at the lowest cost? 

A. Enable auto scaling and set the target usage rate to 90%. 
B. Switch the table to provisioned mode and enable auto scaling. 
C. Switch the table to provisioned mode and set the throughput to the peak value. 
D. Create a DynamoDB Accelerator cluster and use it to access the DynamoDB table. 
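Option B combines two API calls: a DynamoDB `update_table` to switch billing modes, and an Application Auto Scaling `register_scalable_target` to let capacity track demand. The sketch below only assembles both payloads (table name and capacity numbers are illustrative assumptions); it makes no AWS calls.

```python
# Sketch only: switch a table from on-demand to provisioned mode...
update_table_params = {
    "TableName": "GameScores",                       # placeholder table name
    "BillingMode": "PROVISIONED",
    "ProvisionedThroughput": {
        "ReadCapacityUnits": 1000,                   # illustrative baseline
        "WriteCapacityUnits": 5000,
    },
}

# ...then register it as a scalable target so auto scaling can absorb the
# daily 5 PM surge without paying for peak capacity all day.
scaling_target = {
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/GameScores",
    "ScalableDimension": "dynamodb:table:WriteCapacityUnits",
    "MinCapacity": 5000,
    "MaxCapacity": 40000,                            # illustrative ceiling above peak
}
print(update_table_params["BillingMode"])
```

Note that auto scaling reacts to consumed capacity with some lag, so for a very sharp 10-minute ramp a lower target utilization (well under the 90% in option A) leaves headroom.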


Question # 96

A bike rental company operates an application to track its bikes. The application receives location and condition data from bike sensors. The application also receives rental transaction data from the associated mobile app. The application uses Amazon DynamoDB as its database layer. The company has configured DynamoDB with provisioned capacity set to 20% above the expected peak load of the application. On an average day, DynamoDB used 22 billion read capacity units (RCUs) and 60 billion write capacity units (WCUs). The application is running well. Usage changes smoothly over the course of the day and is generally shaped like a bell curve. The timing and magnitude of peaks vary based on the weather and season, but the general shape is consistent. Which solution will provide the MOST cost optimization of the DynamoDB database layer?

A. Change the DynamoDB tables to use on-demand capacity. 
B. Use AWS Auto Scaling and configure time-based scaling. 
C. Enable DynamoDB capacity-based auto scaling. 
D. Enable DynamoDB Accelerator (DAX). 
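Time-based scaling (option B) corresponds to Application Auto Scaling scheduled actions. The sketch below builds one example payload for `put_scheduled_action` (the table name, cron expression, and capacities are illustrative assumptions), scaling write capacity up ahead of a predictable daily curve; a matching evening action would scale it back down.

```python
# Sketch only: a scheduled scale-up for a table with a predictable,
# bell-curve-shaped daily load.
scheduled_action = {
    "ServiceNamespace": "dynamodb",
    "ScheduledActionName": "daily-peak-scale-up",    # placeholder name
    "ResourceId": "table/BikeTelemetry",             # placeholder table
    "ScalableDimension": "dynamodb:table:WriteCapacityUnits",
    "Schedule": "cron(0 10 * * ? *)",                # illustrative: before the daily ramp (UTC)
    "ScalableTargetAction": {"MinCapacity": 60000, "MaxCapacity": 80000},
}
print(scheduled_action["Schedule"])
```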


Question # 97

A startup company is building a new application to allow users to visualize their on-premises and cloud networking components. The company expects billions of components to be stored and requires responses in milliseconds. The application should be able to identify: the networks and routes affected if a particular component fails, the networks that have redundant routes between them, the networks that do not have redundant routes between them, and the fastest path between two networks. Which database engine meets these requirements?

A. Amazon Aurora MySQL 
B. Amazon Neptune 
C. Amazon ElastiCache for Redis 
D. Amazon DynamoDB 
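Path and connectivity questions like these map naturally onto graph traversals (option B, Amazon Neptune, supports Gremlin and SPARQL). As a purely illustrative sketch, the snippet below constructs a Gremlin traversal string for the "path between two networks" requirement; the vertex/edge labels and property names are assumptions, and the string is only built here, not executed against Neptune.

```python
# Sketch only: a Gremlin traversal (as a string) over a hypothetical graph
# where networks are vertices and routes are edges. A real weighted
# "fastest path" query would also rank paths by a latency property.
fastest_path_query = (
    "g.V().has('network', 'id', 'net-A')"
    ".repeat(out('route').simplePath())"
    ".until(has('id', 'net-B'))"
    ".path().by('id').limit(1)"
)
print(fastest_path_query)
```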




Customers Feedback

What our clients say about DBS-C01 Braindumps

    Ross     Apr 25, 2024
During my search for reliable DBS-C01 preparation materials, I came across Salesforcexamdumps.com. I purchased their PDF dumps after their customer support team assured me they were the most up-to-date and backed by a full refund if I didn't pass, and the material proved extremely beneficial in my actual exam. Thanks to their website, I am now AWS certified, and I highly recommend their services to others.
    Mohamed Rizwan     Apr 24, 2024
Salesforcexamdumps.com is the ultimate resource for anyone preparing for their DBS-C01 certification exam.
    Thomas Robinson     Apr 24, 2024
I would like to express my gratitude to Salesforcexamdumps.com for its exceptional exam materials. Thanks to their assistance, I not only cleared my certification exam but also achieved an impressive score of 89%! Passing the DBS-C01 exam has been a goal of mine since 2019, and it became a reality because of the efforts of the Salesforcexamdumps.com team.
    Jameson     Apr 23, 2024
I highly recommend this to others as the DBS-C01 material is incredibly helpful. I passed my exam !!
    Patricia     Apr 23, 2024
Salesforcexamdumps.com, thank you and your team for designing fantastic material. I cleared my DBS-C01 exam. Thanks!!
    Rachel Andrews     Apr 22, 2024
Authentic DBS-C01 Dumps are 100% valid. Excellent study guide. I got a 93% score.
    Pierre Dubois     Apr 22, 2024
Best website with updated DBS-C01 practice exam questions to prepare for your exam. Passed my last exam with 91% marks.
    Eduardo     Apr 21, 2024
Salesforcexamdumps.com is widely regarded as the most reliable resource that I have ever used. Previously, my colleagues would search multiple websites to purchase study materials for the AWS Certified Database – Specialty exam, but unfortunately, they were often deceived by other sites. However, I was confident in purchasing Salesforcexamdumps.com Amazon DBS-C01 exam questions, receiving a remarkable score of 95%.
    Grace Jackson     Apr 21, 2024
I am thrilled to say that Salesforcexamdumps.com was instrumental in my success, as they provided genuine exam questions for me to practice with. With their help, I passed my certification exam on my first try. Achieving this certification was once just a dream, but thanks to Salesforcexamdumps.com, it has now become a reality.
    Andrea Torres     Apr 20, 2024
Failing because of a scam website was a hurtful experience for me. However, a friend recommended Salesforcexamdumps.com for AWS Database Specialty exam preparation. This certification provides valuable recognition and distinguishes between various AWS database services. Thanks to Salesforcexamdumps.com, I was able to pass the exam with an impressive score of 90%. I am truly grateful for their help.
    Silva     Apr 20, 2024
Absolutely thrilled with the experience and the quality of preparation materials provided by Salesforcexamdumps.com for my DBS-C01 certification exam.
    Anushka     Apr 19, 2024
The team at Salesforcexamdumps.com truly goes above and beyond to ensure their customers' success. The questions and answers for the DBS-C01 exam were spot on and the customer service was excellent. I highly recommend this website for anyone looking to pass their certification exam on the first try.
    Rafaela     Apr 19, 2024
Amazing experience, the best platform with updated exam questions and answers. These dumps helped me a lot with my DBS-C01 certification. Thanks a lot!
    Alex Turner     Apr 18, 2024
I am delighted to share that using Salesforcexamdumps.com exam dumps for the first time resulted in a positive outcome for me. The questions were precise and up-to-date, and I am grateful for their assistance.
    Abdullah Al-Saud     Apr 18, 2024
Thank you, Salesforcexamdumps.com, for helping me prepare for and pass the DBS-C01 certification exam. I recommend this website for updated preparation materials. Thanks again.
    Tomoko     Apr 17, 2024
The Amazon DBS-C01 dumps are excellent and highly impressive in terms of content! Although it may appear challenging at first, practicing with these materials made it significantly easier to efficiently prepare for the AWS Certified Database – Specialty exam
    Yumi     Apr 17, 2024
Excellent and detailed questions and answers. Best customer service! I passed my DBS-C01 exam with the help of Salesforcexamdumps.com preparation materials. I highly recommend this product .
