Deck 5: AWS Certified Database - Specialty
1
A Database Specialist is setting up a new Amazon Aurora DB cluster with one primary instance and three Aurora Replicas for a highly intensive, business-critical application. The Aurora DB cluster has one medium-sized primary instance, one large-sized replica, and two medium-sized replicas. The Database Specialist did not assign a promotion tier to the replicas. In the event of a primary failure, what will occur?
A) Aurora will promote an Aurora Replica that is of the same size as the primary instance
B) Aurora will promote an arbitrary Aurora Replica
C) Aurora will promote the largest-sized Aurora Replica
D) Aurora will not promote an Aurora Replica
Aurora will promote the largest-sized Aurora Replica. (When no promotion tiers are assigned, every replica shares the same default tier; within a tier, Aurora promotes the largest replica first and breaks any remaining ties arbitrarily.)
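Note: promotion order can be made deterministic by assigning tiers explicitly rather than relying on the defaults described above. A minimal boto3 sketch, assuming a hypothetical replica identifier:

```python
import boto3

rds = boto3.client("rds")

# Give one replica the highest promotion priority (tier 0). Lower tier numbers
# are promoted first; within a tier, Aurora prefers the largest replica and
# breaks remaining ties arbitrarily.
rds.modify_db_instance(
    DBInstanceIdentifier="app-aurora-replica-1",  # hypothetical identifier
    PromotionTier=0,
    ApplyImmediately=True,
)
```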
2
A company is using Amazon RDS for MySQL to redesign its business application. A Database Specialist has noticed that the Development team is restoring their MySQL database multiple times a day when Developers make mistakes in their schema updates. The Developers sometimes need to wait hours for the restores to complete. Multiple team members are working on the project, making it difficult to find the correct restore point for each mistake. Which approach should the Database Specialist take to reduce downtime?
A) Deploy multiple read replicas and have the team members make changes to separate replica instances
B) Migrate to Amazon RDS for SQL Server, take a snapshot, and restore from the snapshot
C) Migrate to Amazon Aurora MySQL and enable the Aurora Backtrack feature
D) Enable the Amazon RDS for MySQL Backtrack feature
Migrate to Amazon Aurora MySQL and enable the Aurora Backtrack feature. (Backtrack rewinds the cluster in place within minutes instead of waiting hours for a restore; it is an Aurora MySQL feature and is not available on Amazon RDS for MySQL.)
3
A manufacturing company's website uses an Amazon Aurora PostgreSQL DB cluster. Which configurations will result in the LEAST application downtime during a failover? (Choose three.)
A) Use the provided read and write Aurora endpoints to establish a connection to the Aurora DB cluster.
B) Create an Amazon CloudWatch alert triggering a restore in another Availability Zone when the primary Aurora DB cluster is unreachable.
C) Edit and enable Aurora DB cluster cache management in parameter groups.
D) Set TCP keepalive parameters to a high value.
E) Set JDBC connection string timeout variables to a low value.
F) Set Java DNS caching timeouts to a high value.
Use the provided read and write Aurora endpoints to establish a connection to the Aurora DB cluster.
Edit and enable Aurora DB cluster cache management in parameter groups.
Set JDBC connection string timeout variables to a low value.
(The cluster endpoints always follow the current writer, cluster cache management keeps a designated replica's buffer cache warm for promotion, and low driver timeouts let clients detect the failover and reconnect quickly.)
4
A company is running a finance application on an Amazon RDS for MySQL DB instance. The application is governed by multiple financial regulatory agencies. The RDS DB instance is set up with security groups to allow access to certain Amazon EC2 servers only. AWS KMS is used for encryption at rest. Which step will provide additional security?
A) Set up NACLs that allow the entire EC2 subnet to access the DB instance
B) Disable the master user account
C) Set up a security group that blocks SSH to the DB instance
D) Set up RDS to use SSL for data in transit
5
A company is hosting critical business data in an Amazon Redshift cluster. Due to the sensitive nature of the data, the cluster is encrypted at rest using AWS KMS. As a part of disaster recovery requirements, the company needs to copy the Amazon Redshift snapshots to another Region. Which steps should be taken in the AWS Management Console to meet the disaster recovery requirements?
A) Create a new KMS customer master key in the source Region. Switch to the destination Region, enable Amazon Redshift cross-Region snapshots, and use the KMS key of the source Region.
B) Create a new IAM role with access to the KMS key. Enable Amazon Redshift cross-Region replication using the new IAM role, and use the KMS key of the source Region.
C) Enable Amazon Redshift cross-Region snapshots in the source Region, and create a snapshot copy grant and use a KMS key in the destination Region.
D) Create a new KMS customer master key in the destination Region and create a new IAM role with access to the new KMS key. Enable Amazon Redshift cross-Region replication in the source Region and use the KMS key of the destination Region.
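Note: the moving parts behind these options are a snapshot copy grant (created in the destination Region against a destination-Region KMS key) and cross-Region snapshot copy enabled on the source cluster. A hedged boto3 sketch; Regions, names, and ARNs are placeholders:

```python
import boto3

# 1) Create the snapshot copy grant in the destination Region, referencing a
#    KMS key that lives in that Region.
dest = boto3.client("redshift", region_name="us-east-1")
dest.create_snapshot_copy_grant(
    SnapshotCopyGrantName="dr-copy-grant",  # hypothetical grant name
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",  # placeholder
)

# 2) Enable cross-Region snapshot copy on the source cluster, pointing at the
#    grant by name so copied snapshots can be encrypted in the destination.
src = boto3.client("redshift", region_name="us-west-2")
src.enable_snapshot_copy(
    ClusterIdentifier="prod-cluster",  # hypothetical cluster identifier
    DestinationRegion="us-east-1",
    RetentionPeriod=7,
    SnapshotCopyGrantName="dr-copy-grant",
)
```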
6
A Database Specialist is troubleshooting an application connection failure on an Amazon Aurora DB cluster with multiple Aurora Replicas that had been running with no issues for the past 2 months. The connection failure lasted for 5 minutes and corrected itself after that. The Database Specialist reviewed the Amazon RDS events and determined a failover event occurred at that time. The failover process took around 15 seconds to complete. What is the MOST likely cause of the 5-minute connection outage?
A) After a database crash, Aurora needed to replay the redo log from the last database checkpoint
B) The client-side application is caching the DNS data and its TTL is set too high
C) After failover, the Aurora DB cluster needs time to warm up before accepting client connections
D) There were no active Aurora Replicas in the Aurora DB cluster
7
A company is running an Amazon RDS for PostgreSQL DB instance and wants to migrate it to an Amazon Aurora PostgreSQL DB cluster. The current database is 1 TB in size. The migration needs to have minimal downtime. What is the FASTEST way to accomplish this?
A) Create an Aurora PostgreSQL DB cluster. Set up replication from the source RDS for PostgreSQL DB instance using AWS DMS to the target DB cluster.
B) Use the pg_dump and pg_restore utilities to extract and restore the RDS for PostgreSQL DB instance to the Aurora PostgreSQL DB cluster.
C) Create a database snapshot of the RDS for PostgreSQL DB instance and use this snapshot to create the Aurora PostgreSQL DB cluster.
D) Migrate data from the RDS for PostgreSQL DB instance to an Aurora PostgreSQL DB cluster using an Aurora Replica. Promote the replica during the cutover.
8
A financial services company is developing a shared data service that supports different applications from throughout the company. A Database Specialist designed a solution to leverage Amazon ElastiCache for Redis with cluster mode enabled to enhance performance and scalability. The cluster is configured to listen on port 6379. Which combination of steps should the Database Specialist take to secure the cache data and protect it from unauthorized access? (Choose three.)
A) Enable in-transit and at-rest encryption on the ElastiCache cluster.
B) Ensure that Amazon CloudWatch metrics are configured in the ElastiCache cluster.
C) Ensure the security group for the ElastiCache cluster allows all inbound traffic from itself and inbound traffic on TCP port 6379 from trusted clients only.
D) Create an IAM policy to allow the application service roles to access all ElastiCache API actions.
E) Ensure the security group for the ElastiCache clients authorize inbound TCP port 6379 and port 22 traffic from the trusted ElastiCache cluster's security group.
F) Ensure the cluster is created with the auth-token parameter and that the parameter is used in all subsequent commands.
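Note: the encryption and AUTH controls mentioned in these options are all settable when the replication group is created; an AUTH token requires in-transit encryption. A sketch with placeholder names and a hypothetical security group:

```python
import boto3

elasticache = boto3.client("elasticache")

# Cluster-mode-enabled Redis with encryption in transit and at rest, plus an
# AUTH token that every client must present.
elasticache.create_replication_group(
    ReplicationGroupId="shared-data-cache",  # hypothetical identifier
    ReplicationGroupDescription="Shared data service cache",
    Engine="redis",
    EngineVersion="7.0",
    CacheNodeType="cache.r6g.large",
    CacheParameterGroupName="default.redis7.cluster.on",  # cluster mode enabled
    NumNodeGroups=2,
    ReplicasPerNodeGroup=1,
    Port=6379,
    TransitEncryptionEnabled=True,  # required when AuthToken is set
    AtRestEncryptionEnabled=True,
    AuthToken="use-a-long-random-secret",  # placeholder; store it securely
    SecurityGroupIds=["sg-0123456789abcdef0"],  # allow trusted clients only
)
```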
9
A team of Database Specialists is currently investigating performance issues on an Amazon RDS for MySQL DB instance and is reviewing related metrics. The team wants to narrow the possibilities down to specific database wait events to better understand the situation. How can the Database Specialists accomplish this?
A) Enable the option to push all database logs to Amazon CloudWatch for advanced analysis
B) Create appropriate Amazon CloudWatch dashboards to contain specific periods of time
C) Enable Amazon RDS Performance Insights and review the appropriate dashboard
D) Enable Enhanced Monitoring with the appropriate settings
10
A clothing company uses a custom ecommerce application and a PostgreSQL database to sell clothes to thousands of users from multiple countries. The company is migrating its application and database from its on-premises data center to the AWS Cloud. The company has selected Amazon EC2 for the application and Amazon RDS for PostgreSQL for the database. The company requires database passwords to be changed every 60 days. A Database Specialist needs to ensure that the credentials used by the web application to connect to the database are managed securely. Which approach should the Database Specialist take to securely manage the database credentials?
A) Store the credentials in a text file in an Amazon S3 bucket. Restrict permissions on the bucket to the IAM role associated with the instance profile only. Modify the application to download the text file and retrieve the credentials on start up. Update the text file every 60 days.
B) Configure IAM database authentication for the application to connect to the database. Create an IAM user and map it to a separate database user for each ecommerce user. Require users to update their passwords every 60 days.
C) Store the credentials in AWS Secrets Manager. Restrict permissions on the secret to only the IAM role associated with the instance profile. Modify the application to retrieve the credentials from Secrets Manager on start up. Configure the rotation interval to 60 days.
D) Store the credentials in an encrypted text file in the application AMI. Use AWS KMS to store the key for decrypting the text file. Modify the application to decrypt the text file and retrieve the credentials on start up. Update the text file and publish a new AMI every 60 days.
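Note: with the Secrets Manager pattern, the application's startup code reduces to one API call made under the instance profile's role. A minimal sketch; the secret name and JSON keys are assumptions:

```python
import json

import boto3

secrets = boto3.client("secretsmanager")

# Fetch the current credentials at startup. Because rotation happens on the
# secret itself, the application never stores or ships a password.
response = secrets.get_secret_value(SecretId="prod/ecommerce/postgres")  # hypothetical name
creds = json.loads(response["SecretString"])
db_user, db_password = creds["username"], creds["password"]
```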
11
An AWS CloudFormation stack that included an Amazon RDS DB instance was accidentally deleted and recent data was lost. A Database Specialist needs to add RDS settings to the CloudFormation template to reduce the chance of accidental instance data loss in the future. Which settings will meet this requirement? (Choose three.)
A) Set DeletionProtection to True
B) Set MultiAZ to True
C) Set TerminationProtection to True
D) Set DeleteAutomatedBackups to False
E) Set DeletionPolicy to Delete
F) Set DeletionPolicy to Retain
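Note: in a CloudFormation template these guards sit directly on the RDS resource: DeletionPolicy is a resource attribute, while DeletionProtection and DeleteAutomatedBackups are DBInstance properties. A sketch that only validates a minimal template; all values are placeholders:

```python
import boto3

TEMPLATE = """\
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  AppDatabase:
    Type: AWS::RDS::DBInstance
    DeletionPolicy: Retain            # keep the instance if the stack is deleted
    Properties:
      Engine: mysql
      DBInstanceClass: db.t3.medium
      AllocatedStorage: '100'
      MasterUsername: admin
      MasterUserPassword: change-me   # placeholder; prefer a dynamic reference
      DeletionProtection: true        # block DeleteDBInstance calls
      DeleteAutomatedBackups: false   # keep automated backups after deletion
"""

# Validate the template without creating any resources.
boto3.client("cloudformation").validate_template(TemplateBody=TEMPLATE)
```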
12
A company needs a data warehouse solution that keeps data in a consistent, highly structured format. The company requires fast responses for end-user queries when looking at data from the current year, and users must have access to the full 15-year dataset when needed. This solution also needs to handle a fluctuating number of incoming queries. Storage costs for the 100 TB of data must be kept low. Which solution meets these requirements?
A) Leverage an Amazon Redshift data warehouse solution using a dense storage instance type while keeping all the data on local Amazon Redshift storage. Provision enough instances to support high demand.
B) Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Provision enough instances to support high demand.
C) Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Enable Amazon Redshift Concurrency Scaling.
D) Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Leverage Amazon Redshift elastic resize.
13
A company is running its line-of-business application on AWS, which uses Amazon RDS for MySQL as the persistent data store. The company wants to minimize downtime when it migrates the database to Amazon Aurora. Which migration method should a Database Specialist use?
A) Take a snapshot of the RDS for MySQL DB instance and create a new Aurora DB cluster with the option to migrate snapshots.
B) Make a backup of the RDS for MySQL DB instance using the mysqldump utility, create a new Aurora DB cluster, and restore the backup.
C) Create an Aurora Replica from the RDS for MySQL DB instance and promote the Aurora DB cluster.
D) Create a clone of the RDS for MySQL DB instance and promote the Aurora DB cluster.
14
A company has an on-premises system that tracks various database operations that occur over the lifetime of a database, including database shutdown, deletion, creation, and backup. The company recently moved two databases to Amazon RDS and is looking for a solution that would satisfy these requirements. The data could be used by other systems within the company. Which solution will meet these requirements with minimal effort?
A) Create an Amazon CloudWatch Events rule with the operations that need to be tracked on Amazon RDS. Create an AWS Lambda function to act on these rules and write the output to the tracking systems.
B) Create an AWS Lambda function to trigger on AWS CloudTrail API calls. Filter on specific RDS API calls and write the output to the tracking systems.
C) Create RDS event subscriptions. Have the tracking systems subscribe to specific RDS event system notifications.
D) Write RDS logs to Amazon Kinesis Data Firehose. Create an AWS Lambda function to act on these rules and write the output to the tracking systems.
15
The Security team for a finance company was notified of an internal security breach that happened 3 weeks ago. A Database Specialist must start producing audit logs out of the production Amazon Aurora PostgreSQL cluster for the Security team to use for monitoring and alerting. The Security team is required to perform real-time alerting and monitoring outside the Aurora DB cluster and wants to have the cluster push encrypted files to the chosen solution. Which approach will meet these requirements?
A) Use pg_audit to generate audit logs and send the logs to the Security team.
B) Use AWS CloudTrail to audit the DB cluster and the Security team will get data from Amazon S3.
C) Set up database activity streams and connect the data stream from Amazon Kinesis to consumer applications.
D) Turn on verbose logging and set up a schedule for the logs to be dumped out for the Security team.
16
A large company is using an Amazon RDS for Oracle Multi-AZ DB instance with a Java application. As a part of its disaster recovery annual testing, the company would like to simulate an Availability Zone failure and record how the application reacts during the DB instance failover activity. The company does not want to make any code changes for this activity. What should the company do to achieve this in the shortest amount of time?
A) Use a blue-green deployment with a complete application-level failover test
B) Use the RDS console to reboot the DB instance by choosing the option to reboot with failover
C) Use RDS fault injection queries to simulate the primary node failure
D) Add a rule to the NACL to deny all traffic on the subnets associated with a single Availability Zone
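Note: the console's "reboot with failover" action corresponds to a single API parameter, which makes the drill easy to script and time. A sketch with a hypothetical instance identifier:

```python
import boto3

rds = boto3.client("rds")

# Reboot while forcing a Multi-AZ failover so the standby in the other
# Availability Zone is promoted, approximating an AZ failure for the test.
rds.reboot_db_instance(
    DBInstanceIdentifier="finance-oracle-prod",  # hypothetical identifier
    ForceFailover=True,
)
```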
17
A company maintains several databases using Amazon RDS for MySQL and PostgreSQL. Each RDS database generates log files with retention periods set to their default values. The company has now mandated that database logs be maintained for up to 90 days in a centralized repository to facilitate real-time and after-the-fact analyses. What should a Database Specialist do to meet these requirements with minimal effort?
A) Create an AWS Lambda function to pull logs from the RDS databases and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.
B) Modify the RDS databases to publish logs to Amazon CloudWatch Logs. Change the log retention policy for each log group to expire the events after 90 days.
C) Write a stored procedure in each RDS database to download the logs and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.
D) Create an AWS Lambda function to download the logs from the RDS databases and publish the logs to Amazon CloudWatch Logs. Change the log retention policy for the log group to expire the events after 90 days.
18
The Development team recently executed a database script containing several data definition language (DDL) and data manipulation language (DML) statements on an Amazon Aurora MySQL DB cluster. The release accidentally deleted thousands of rows from an important table and broke some application functionality. This was discovered 4 hours after the release. Upon investigation, a Database Specialist tracked the issue to a DELETE command in the script with an incorrect WHERE clause filtering the wrong set of rows. The Aurora DB cluster has Backtrack enabled with an 8-hour backtrack window. The Database Administrator also took a manual snapshot of the DB cluster before the release started. The database needs to be returned to the correct state as quickly as possible to resume full application functionality. Data loss must be minimal. How can the Database Specialist accomplish this?
A) Quickly rewind the DB cluster to a point in time before the release using Backtrack.
B) Perform a point-in-time recovery (PITR) of the DB cluster to a time before the release and copy the deleted rows from the restored database to the original database.
C) Restore the DB cluster using the manual backup snapshot created before the release and change the application configuration settings to point to the new DB cluster.
D) Create a clone of the DB cluster with Backtrack enabled. Rewind the cloned cluster to a point in time before the release. Copy deleted rows from the clone to the original database.
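Note: Backtrack, the mechanism named in several of these options, rewinds the cluster in place through a single API call, provided the target time falls inside the configured window. A sketch with a placeholder identifier and timestamp:

```python
from datetime import datetime, timezone

import boto3

rds = boto3.client("rds")

# Rewind the cluster to just before the bad release. BacktrackTo must lie
# within the backtrack window (8 hours in this scenario).
rds.backtrack_db_cluster(
    DBClusterIdentifier="orders-aurora-cluster",  # hypothetical identifier
    BacktrackTo=datetime(2024, 1, 15, 9, 55, tzinfo=timezone.utc),  # example time
)
```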
19
A gaming company wants to deploy a game in multiple Regions. The company plans to save local high scores in Amazon DynamoDB tables in each Region. A Database Specialist needs to design a solution to automate the deployment of the database with identical configurations in additional Regions, as needed. The solution should also automate configuration changes across all Regions. Which solution would meet these requirements and deploy the DynamoDB tables?
A) Create an AWS CLI command to deploy the DynamoDB table to all the Regions and save it for future deployments.
B) Create an AWS CloudFormation template and deploy the template to all the Regions.
C) Create an AWS CloudFormation template and use a stack set to deploy the template to all the Regions.
D) Create DynamoDB tables using the AWS Management Console in all the Regions and create a step-by-step guide for future deployments.
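Note: a stack set couples one template to many Regions, so later template updates propagate everywhere with a single operation. A hedged sketch; the account ID, Regions, and table schema are placeholders:

```python
import boto3

cfn = boto3.client("cloudformation")

TEMPLATE = """\
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  HighScores:
    Type: AWS::DynamoDB::Table
    Properties:
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - {AttributeName: PlayerId, AttributeType: S}
      KeySchema:
        - {AttributeName: PlayerId, KeyType: HASH}
"""

cfn.create_stack_set(StackSetName="game-highscores", TemplateBody=TEMPLATE)

# Deploy identical stacks into every game Region; configuration changes are
# later rolled out with update_stack_set.
cfn.create_stack_instances(
    StackSetName="game-highscores",
    Accounts=["111122223333"],  # placeholder account ID
    Regions=["us-east-1", "eu-west-1", "ap-southeast-1"],
)
```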
20
A company with branch offices in Portland, New York, and Singapore has a three-tier web application that leverages a shared database. The database runs on Amazon RDS for MySQL and is hosted in the us-west-2 Region. The application has a distributed front end deployed in the us-west-2, ap-southeast-1, and us-east-2 Regions. This front end is used as a dashboard for Sales Managers in each branch office to see current sales statistics. There are complaints that the dashboard performs more slowly in the Singapore location than it does in Portland or New York. A solution is needed to provide consistent performance for all users in each location. Which set of actions will meet these requirements?
A) Take a snapshot of the instance in the us-west-2 Region. Create a new instance from the snapshot in the ap-southeast-1 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
B) Create an RDS read replica in the ap-southeast-1 Region from the primary RDS DB instance in the us-west-2 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
C) Create a new RDS instance in the ap-southeast-1 Region. Use AWS DMS and change data capture (CDC) to update the new instance in the ap-southeast-1 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
D) Create an RDS read replica in the us-west-2 Region where the primary instance resides. Create a read replica in the ap-southeast-1 Region from the read replica located on the us-west-2 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
21
A company has multiple applications serving data from a secure on-premises database. The company is migrating all applications and databases to the AWS Cloud. The IT Risk and Compliance department requires that auditing be enabled on all secure databases to capture all logins, logouts, failed logins, permission changes, and database schema changes. A Database Specialist has recommended Amazon Aurora MySQL as the migration target, leveraging the Advanced Auditing feature in Aurora. Which events need to be specified in the Advanced Auditing configuration to satisfy the minimum auditing requirements? (Choose three.)
A) CONNECT
B) QUERY_DCL
C) QUERY_DDL
D) QUERY_DML
E) TABLE
F) QUERY
22
A Database Specialist migrated an existing production MySQL database from on-premises to an Amazon RDS for MySQL DB instance. However, after the migration, the database needed to be encrypted at rest using AWS KMS. Due to the size of the database, reloading the data into a new encrypted database would be too time-consuming, so it is not an option. How should the Database Specialist satisfy this new requirement?
A) Create a snapshot of the unencrypted RDS DB instance. Create an encrypted copy of the unencrypted snapshot. Restore the encrypted snapshot copy.
B) Modify the RDS DB instance. Enable the AWS KMS encryption option that leverages the AWS CLI.
C) Restore an unencrypted snapshot into a MySQL RDS DB instance that is encrypted.
D) Create an encrypted read replica of the RDS DB instance. Promote it to be the master.
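Note: the snapshot-copy route to at-rest encryption is a three-step, fully asynchronous flow: snapshot, encrypted copy, restore. A sketch with placeholder identifiers and a placeholder KMS alias:

```python
import boto3

rds = boto3.client("rds")

# 1) Snapshot the unencrypted instance.
rds.create_db_snapshot(
    DBInstanceIdentifier="legacy-mysql",  # hypothetical identifier
    DBSnapshotIdentifier="legacy-mysql-snap",
)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="legacy-mysql-snap")

# 2) Copy the snapshot with a KMS key; the copy is encrypted at rest.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="legacy-mysql-snap",
    TargetDBSnapshotIdentifier="legacy-mysql-snap-encrypted",
    KmsKeyId="alias/rds-at-rest",  # placeholder KMS alias
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="legacy-mysql-snap-encrypted"
)

# 3) Restore the encrypted copy as a new, encrypted DB instance.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="legacy-mysql-encrypted",
    DBSnapshotIdentifier="legacy-mysql-snap-encrypted",
)
```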
23
A company wants to migrate its existing on-premises Oracle database to Amazon Aurora PostgreSQL. The migration must be completed with minimal downtime using AWS DMS. A Database Specialist must validate that the data was migrated accurately from the source to the target before the cutover. The migration must have minimal impact on the performance of the source database. Which approach will MOST effectively meet these requirements?
A) Use the AWS Schema Conversion Tool (AWS SCT) to convert source Oracle database schemas to the target Aurora DB cluster. Verify the datatype of the columns.
B) Use the table metrics of the AWS DMS task created for migrating the data to verify the statistics for the tables being migrated and to verify that the data definition language (DDL) statements are completed.
C) Enable the AWS Schema Conversion Tool (AWS SCT) premigration validation and review the premigration checklist to make sure there are no issues with the conversion.
D) Enable AWS DMS data validation on the task so the AWS DMS task compares the source and target records, and reports any mismatches.
24
A user has a non-relational key-value database. The user is looking for a fully managed AWS service that will offload the administrative burdens of operating and scaling distributed databases. The solution must be cost-effective and able to handle unpredictable application traffic. What should a Database Specialist recommend for this user?
A) Create an Amazon DynamoDB table with provisioned capacity mode
B) Create an Amazon DocumentDB cluster
C) Create an Amazon DynamoDB table with on-demand capacity mode
D) Create an Amazon Aurora Serverless DB cluster
25
A company is planning to close for several days. A Database Specialist needs to stop all applications along with the DB instances to ensure employees do not have access to the systems during this time. All databases are running on Amazon RDS for MySQL. The Database Specialist wrote and executed a script to stop all the DB instances. When reviewing the logs, the Database Specialist found that Amazon RDS DB instances with read replicas did not stop. How should the Database Specialist edit the script to fix this issue?
A) Stop the source instances before stopping their read replicas
B) Delete each read replica before stopping its corresponding source instance
C) Stop the read replicas before stopping their source instances
D) Use the AWS CLI to stop each read replica and source instance at the same time
26
A Database Specialist needs to speed up any failover that might occur on an Amazon Aurora PostgreSQL DB cluster. The Aurora DB cluster currently includes the primary instance and three Aurora Replicas. How can the Database Specialist ensure that failovers occur with the least amount of downtime for the application?
A) Set the TCP keepalive parameters low
B) Call the AWS CLI failover-db-cluster command
C) Enable Enhanced Monitoring on the DB cluster
D) Start a database activity stream on the DB cluster
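Note: whichever client-side settings are tuned, the effect is easiest to verify by triggering a managed failover on demand and timing the application's recovery. The CLI command named in option B has a direct boto3 equivalent; identifiers below are placeholders:

```python
import boto3

rds = boto3.client("rds")

# Trigger a failover so reconnect behavior (TCP keepalives, driver timeouts,
# DNS caching) can be measured under controlled conditions.
rds.failover_db_cluster(
    DBClusterIdentifier="app-aurora-pg",  # hypothetical identifier
    TargetDBInstanceIdentifier="app-aurora-pg-replica-1",  # optional target
)
```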
27
An IT consulting company wants to reduce costs when operating its development environment databases. The company's workflow creates multiple Amazon Aurora MySQL DB clusters for each development group. The Aurora DB clusters are only used for 8 hours a day. The DB clusters can then be deleted at the end of the development cycle, which lasts 2 weeks. Which of the following provides the MOST cost-effective solution?
A) Use AWS CloudFormation templates. Deploy a stack with the DB cluster for each development group. Delete the stack at the end of the development cycle.
B) Use the Aurora DB cloning feature. Deploy a single development and test Aurora DB instance, and create clone instances for the development groups. Delete the clones at the end of the development cycle.
C) Use Aurora Replicas. From the master automatic pause compute capacity option, create replicas for each development group, and promote each replica to master. Delete the replicas at the end of the development cycle.
D) Use Aurora Serverless. Restore current Aurora snapshot and deploy to a serverless cluster for each development group. Enable the option to pause the compute capacity on the cluster and set an appropriate timeout.
28
A company is building a new web platform where user requests trigger an AWS Lambda function that performs an insert into an Amazon Aurora MySQL DB cluster. Initial tests with fewer than 10 users on the new platform yielded successful execution and fast response times. However, upon more extensive tests with the actual target of 3,000 concurrent users, the Lambda functions are unable to connect to the DB cluster and receive "too many connections" errors. Which of the following will resolve this issue?
A) Edit the my.cnf file for the DB cluster to increase max_connections
B) Increase the instance size of the DB cluster
C) Change the DB cluster to Multi-AZ
D) Increase the number of Aurora Replicas
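Note: alongside any capacity fix, a common mitigation is to open the database connection once per Lambda execution environment instead of once per invocation, so warm containers reuse it. A sketch assuming the third-party PyMySQL driver and environment-variable configuration; the event field is hypothetical:

```python
import os

import pymysql  # assumed driver, bundled with the deployment package

# Created once per execution environment; warm invocations reuse this
# connection instead of adding to the cluster's connection count.
connection = pymysql.connect(
    host=os.environ["DB_HOST"],
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    database=os.environ["DB_NAME"],
    autocommit=True,
)

def handler(event, context):
    with connection.cursor() as cursor:
        cursor.execute(
            "INSERT INTO requests (user_id) VALUES (%s)",
            (event["userId"],),  # hypothetical event field
        )
    return {"status": "ok"}
```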
29
A company is running a two-tier ecommerce application in one AWS account. The database is deployed using an Amazon RDS for MySQL Multi-AZ DB instance. A Developer mistakenly deleted the database in the production environment. The database has been restored, but this resulted in hours of downtime and lost revenue. Which combination of changes in existing IAM policies should a Database Specialist make to prevent an error like this from happening in the future? (Choose three.)
A) Grant least privilege to groups, users, and roles
B) Allow all users to restore a database from a backup that will reduce the overall downtime to restore the database
C) Enable multi-factor authentication for sensitive operations to access sensitive resources and API operations
D) Use policy conditions to restrict access to selective IP addresses
E) Use AccessList Controls policy type to restrict users for database instance deletion
F) Enable AWS CloudTrail logging and Enhanced Monitoring
30
A company is looking to migrate a 1 TB Oracle database from on-premises to an Amazon Aurora PostgreSQL DB cluster. The company's Database Specialist discovered that the Oracle database is storing 100 GB of large binary objects (LOBs) across multiple tables. The Oracle database has a maximum LOB size of 500 MB with an average LOB size of 350 MB. The Database Specialist has chosen AWS DMS to migrate the data using the largest replication instance. How should the Database Specialist optimize the database migration using AWS DMS?
A) Create a single task using full LOB mode with a LOB chunk size of 500 MB to migrate the data and LOBs together
B) Create two tasks: task1 with LOB tables using full LOB mode with a LOB chunk size of 500 MB and task2 without LOBs
C) Create two tasks: task1 with LOB tables using limited LOB mode with a maximum LOB size of 500 MB and task 2 without LOBs
D) Create a single task using limited LOB mode with a maximum LOB size of 500 MB to migrate data and LOBs together
31
A company is writing a new survey application to be used with a weekly televised game show. The application will be available for 2 hours each week. The company expects to receive over 500,000 entries every week, with each survey asking 2-3 multiple choice questions of each user. A Database Specialist needs to select a platform that is highly scalable for a large number of concurrent writes to handle the anticipated volume. Which AWS services should the Database Specialist consider? (Choose two.)
A) Amazon DynamoDB
B) Amazon Redshift
C) Amazon Neptune
D) Amazon Elasticsearch Service
E) Amazon ElastiCache
32
A Database Specialist is designing a disaster recovery strategy for a production Amazon DynamoDB table. The table uses provisioned read/write capacity mode, global secondary indexes, and time to live (TTL). The Database Specialist has restored the latest backup to a new table. To prepare the new table with identical settings, which steps should be performed? (Choose two.)
A) Re-create global secondary indexes in the new table
B) Define IAM policies for access to the new table
C) Define the TTL settings
D) Encrypt the table from the AWS Management Console or use the update-table command
E) Set the provisioned read and write capacity
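Note: settings that a DynamoDB restore does not carry over can be reapplied with two calls once the new table is active. A sketch with a placeholder table name, TTL attribute, and throughput values:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Re-enable TTL on the restored table; restores do not preserve this setting.
dynamodb.update_time_to_live(
    TableName="scores-restored",  # hypothetical table name
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# Match the original table's provisioned throughput.
dynamodb.update_table(
    TableName="scores-restored",
    ProvisionedThroughput={"ReadCapacityUnits": 500, "WriteCapacityUnits": 500},
)
```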
33
A Database Specialist is performing a proof of concept with Amazon Aurora using a small instance to confirm a simple database behavior. When loading a large dataset and creating the index, the Database Specialist encounters the following error message from Aurora: "ERROR: could not write block 7507718 of temporary file: No space left on device" What is the cause of this error and what should the Database Specialist do to resolve this issue?
A) The scaling of Aurora storage cannot catch up with the data loading. The Database Specialist needs to modify the workload to load the data slowly.
B) The scaling of Aurora storage cannot catch up with the data loading. The Database Specialist needs to enable Aurora storage scaling.
C) The local storage used to store temporary tables is full. The Database Specialist needs to scale up the instance.
D) The local storage used to store temporary tables is full. The Database Specialist needs to enable local storage scaling.
34
A Database Specialist modified an existing parameter group currently associated with a production Amazon RDS for SQL Server Multi-AZ DB instance. The change is associated with a static parameter type, which controls the number of user connections allowed on the most critical RDS SQL Server DB instance for the company. This change has been approved for a specific maintenance window to help minimize the impact on users. How should the Database Specialist apply the parameter group change for the DB instance?
A) Select the option to apply the change immediately
B) Allow the preconfigured RDS maintenance window for the given DB instance to control when the change is applied
C) Apply the change manually by rebooting the DB instance during the approved maintenance window
D) Reboot the secondary Multi-AZ DB instance
35
A company has a web-based survey application that uses Amazon DynamoDB. During peak usage, when survey responses are being collected, a Database Specialist sees the ProvisionedThroughputExceededException error. What can the Database Specialist do to resolve this error? (Choose two.)
A) Change the table to use Amazon DynamoDB Streams
B) Purchase DynamoDB reserved capacity in the affected Region
C) Increase the write capacity units for the specific table
D) Change the table capacity mode to on-demand
E) Change the table type to throughput optimized
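Note: the two capacity-side remedies among these options are each a single UpdateTable call. They are alternatives, not a sequence (DynamoDB also limits how often a table can switch billing modes); table name and units below are placeholders:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Remedy 1: raise provisioned write capacity on the hot table.
dynamodb.update_table(
    TableName="survey-responses",  # hypothetical table name
    ProvisionedThroughput={"ReadCapacityUnits": 100, "WriteCapacityUnits": 2000},
)

# Remedy 2 (alternative): switch to on-demand so the weekly peak is absorbed
# without capacity planning.
dynamodb.update_table(
    TableName="survey-responses",
    BillingMode="PAY_PER_REQUEST",
)
```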
36
A global digital advertising company captures browsing metadata to contextually display relevant images, pages, and links to targeted users. A single page load can generate multiple events that need to be stored individually. The maximum size of an event is 200 KB and the average size is 10 KB. Each page load must query the user's browsing history to provide targeting recommendations. The advertising company expects over 1 billion page visits per day from users in the United States, Europe, Hong Kong, and India. The structure of the metadata varies depending on the event. Additionally, the browsing metadata must be written and read with very low latency to ensure a good viewing experience for the users. Which database solution meets these requirements?
A) Amazon DocumentDB
B) Amazon RDS Multi-AZ deployment
C) Amazon DynamoDB global table
D) Amazon Aurora Global Database
37
A company has an Amazon RDS Multi-AZ DB instance that is 200 GB in size with an RPO of 6 hours. To meet the company's disaster recovery policies, the database backup needs to be copied into another Region. The company requires the solution to be cost-effective and operationally efficient. What should a Database Specialist do to copy the database backup into a different Region?
A) Use Amazon RDS automated snapshots and use AWS Lambda to copy the snapshot into another Region
B) Use Amazon RDS automated snapshots every 6 hours and use Amazon S3 cross-Region replication to copy the snapshot into another Region
C) Create an AWS Lambda function to take an Amazon RDS snapshot every 6 hours and use a second Lambda function to copy the snapshot into another Region
D) Create a cross-Region read replica for Amazon RDS in another Region and take an automated snapshot of the read replica
38
A Database Specialist has migrated an on-premises Oracle database to Amazon Aurora PostgreSQL. The schema and the data have been migrated successfully. The on-premises database server was also being used to run database maintenance cron jobs written in Python to perform tasks including data purging and generating data exports. The logs for these jobs show that, most of the time, the jobs completed within 5 minutes, but a few jobs took up to 10 minutes to complete. These maintenance jobs need to be set up for Aurora PostgreSQL. How can the Database Specialist schedule these jobs so the setup requires minimal maintenance and provides high availability?
A) Create cron jobs on an Amazon EC2 instance to run the maintenance jobs following the required schedule.
B) Connect to the Aurora host and create cron jobs to run the maintenance jobs following the required schedule.
C) Create AWS Lambda functions to run the maintenance jobs and schedule them with Amazon CloudWatch Events.
D) Create the maintenance job using the Amazon CloudWatch job scheduling plugin.
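Note: a schedule-driven Lambda job needs three pieces of wiring: a rule, an invoke permission, and a target (and Lambda's 15-minute cap comfortably covers jobs that finish within 10 minutes). A hedged sketch; the function ARN and schedule are placeholders:

```python
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

FUNCTION_ARN = "arn:aws:lambda:us-west-2:111122223333:function:purge-old-rows"  # placeholder

# Nightly schedule for the maintenance job.
rule = events.put_rule(
    Name="nightly-db-purge",
    ScheduleExpression="cron(0 3 * * ? *)",  # 03:00 UTC daily
)

# Allow CloudWatch Events to invoke the function.
lambda_client.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId="allow-nightly-db-purge",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)

# Point the rule at the function.
events.put_targets(
    Rule="nightly-db-purge",
    Targets=[{"Id": "purge-target", "Arn": FUNCTION_ARN}],
)
```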
39
A company is developing a multi-tier web application hosted on AWS using Amazon Aurora as the database. The application needs to be deployed to production and other non-production environments. A Database Specialist needs to specify different MasterUsername and MasterUserPassword properties in the AWS CloudFormation templates used for automated deployment. The CloudFormation templates are version controlled in the company's code repository. The company also needs to meet a compliance requirement by routinely rotating its production database master password. What is the most secure solution for storing the master password?
A) Store the master password in a parameter file in each environment. Reference the environment-specific parameter file in the CloudFormation template.
B) Encrypt the master password using an AWS KMS key. Store the encrypted master password in the CloudFormation template.
C) Use the secretsmanager dynamic reference to retrieve the master password stored in AWS Secrets Manager and enable automatic rotation.
D) Use the ssm dynamic reference to retrieve the master password stored in the AWS Systems Manager Parameter Store and enable automatic rotation.
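For context, a secretsmanager dynamic reference (option C) is resolved by CloudFormation at deployment time, so the password never lands in the template or the repository. A minimal template fragment follows, rendered as Python/JSON purely for illustration; the secret name ProdAuroraSecret is hypothetical.

```python
import json

# CloudFormation resource fragment using the
# {{resolve:secretsmanager:<secret-id>:SecretString:<json-key>}} syntax.
db_cluster = {
    "Type": "AWS::RDS::DBCluster",
    "Properties": {
        "Engine": "aurora-mysql",
        "MasterUsername": "{{resolve:secretsmanager:ProdAuroraSecret:SecretString:username}}",
        "MasterUserPassword": "{{resolve:secretsmanager:ProdAuroraSecret:SecretString:password}}",
    },
}

print(json.dumps({"Resources": {"Database": db_cluster}}, indent=2))
```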
40
A company's Security department established new requirements that state internal users must connect to an existing Amazon RDS for SQL Server DB instance using their corporate Active Directory (AD) credentials. A Database Specialist must make the modifications needed to fulfill this requirement. Which combination of actions should the Database Specialist take? (Choose three.)
A) Disable Transparent Data Encryption (TDE) on the RDS SQL Server DB instance.
B) Modify the RDS SQL Server DB instance to use the directory for Windows authentication. Create appropriate new logins.
C) Use the AWS Management Console to create an AWS Managed Microsoft AD. Create a trust relationship with the corporate AD.
D) Stop the RDS SQL Server DB instance, modify it to use the directory for Windows authentication, and start it again. Create appropriate new logins.
E) Use the AWS Management Console to create an AD Connector. Create a trust relationship with the corporate AD.
F) Configure the AWS Managed Microsoft AD domain controller Security Group.
41
A Database Specialist is planning to create a read replica of an existing Amazon RDS for MySQL Multi-AZ DB instance. When using the AWS Management Console to conduct this task, the Database Specialist discovers that the source RDS DB instance does not appear in the read replica source selection box, so the read replica cannot be created. What is the most likely reason for this?
A) The source DB instance has to be converted to Single-AZ first to create a read replica from it.
B) Enhanced Monitoring is not enabled on the source DB instance.
C) The minor MySQL version in the source DB instance does not support read replicas.
D) Automated backups are not enabled on the source DB instance.
42
A large financial services company requires that all data be encrypted in transit. A Developer is attempting to connect to an Amazon RDS DB instance using the company VPC for the first time with credentials provided by a Database Specialist. Other members of the Development team can connect, but this user is consistently receiving an error indicating a communications link failure. The Developer asked the Database Specialist to reset the password a number of times, but the error persists. Which step should be taken to troubleshoot this issue?
A) Ensure that the database option group for the RDS DB instance allows ingress from the Developer machine's IP address
B) Ensure that the RDS DB instance's subnet group includes a public subnet to allow the Developer to connect
C) Ensure that the RDS DB instance has not reached its maximum connections limit
D) Ensure that the connection is using SSL and is addressing the port where the RDS DB instance is listening for encrypted connections
43
A company runs online transaction processing (OLTP) workloads on an Amazon RDS for PostgreSQL Multi-AZ DB instance. Tests were run on the database after work hours, which generated additional database logs. The free storage of the RDS DB instance is low due to these additional logs. What should the company do to address this space constraint issue?
A) Log in to the host and run the rm $PGDATA/pg_logs/* command
B) Modify the rds.log_retention_period parameter to 1440 and wait up to 24 hours for database logs to be deleted
C) Create a ticket with AWS Support to have the logs deleted
D) Run the SELECT rds_rotate_error_log() stored procedure to rotate the logs
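As background on option B: rds.log_retention_period is expressed in minutes (1440 = 24 hours) and, like any RDS parameter, can only be changed in a custom parameter group. A minimal sketch, assuming a hypothetical custom group pg-oltp-params is already attached to the instance.

```python
import boto3

rds = boto3.client("rds")

# rds.log_retention_period is dynamic, so the change applies without a reboot.
rds.modify_db_parameter_group(
    DBParameterGroupName="pg-oltp-params",   # hypothetical custom group
    Parameters=[{
        "ParameterName": "rds.log_retention_period",
        "ParameterValue": "1440",            # keep logs for 24 hours
        "ApplyMethod": "immediate",
    }],
)
```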
44
A Database Specialist is migrating a 2 TB Amazon RDS for Oracle DB instance to an RDS for PostgreSQL DB instance using AWS DMS. The source RDS Oracle DB instance is in a VPC in the us-east-1 Region. The target RDS for PostgreSQL DB instance is in a VPC in the us-west-2 Region. Where should the AWS DMS replication instance be placed for the MOST optimal performance?
A) In the same Region and VPC of the source DB instance
B) In the same Region and VPC as the target DB instance
C) In the same VPC and Availability Zone as the target DB instance
D) In the same VPC and Availability Zone as the source DB instance
45
A retail company with its main office in New York and another office in Tokyo plans to build a database solution on AWS. The company's main workload consists of a mission-critical application that updates its application data in a data store. The team at the Tokyo office is building dashboards with complex analytical queries using the application data. The dashboards will be used to make buying decisions, so they need to have access to the application data in less than 1 second. Which solution meets these requirements?
A) Use an Amazon RDS DB instance deployed in the us-east-1 Region with a read replica instance in the ap-northeast-1 Region. Create an Amazon ElastiCache cluster in the ap-northeast-1 Region to cache application data from the replica to generate the dashboards.
B) Use an Amazon DynamoDB global table in the us-east-1 Region with replication into the ap-northeast-1 Region. Use Amazon QuickSight for displaying dashboard results.
C) Use an Amazon RDS for MySQL DB instance deployed in the us-east-1 Region with a read replica instance in the ap-northeast-1 Region. Have the dashboard application read from the read replica.
D) Use an Amazon Aurora global database. Deploy the writer instance in the us-east-1 Region and the replica in the ap-northeast-1 Region. Have the dashboard application read from the replica ap-northeast-1 Region.
46
A company has deployed an e-commerce web application in a new AWS account. An Amazon RDS for MySQL Multi-AZ DB instance is part of this deployment with a database-1.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com endpoint listening on port 3306. The company's Database Specialist is able to log in to MySQL and run queries from the bastion host using these details. When users try to utilize the application hosted in the AWS account, they are presented with a generic error message. The application servers are logging a "could not connect to server: Connection times out" error message to Amazon CloudWatch Logs. What is the cause of this error?
A) The user name and password the application is using are incorrect.
B) The security group assigned to the application servers does not have the necessary rules to allow inbound connections from the DB instance.
C) The security group assigned to the DB instance does not have the necessary rules to allow inbound connections from the application servers.
D) The user name and password are correct, but the user is not authorized to use the DB instance.
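For context on the security group rules these options describe: a timeout (rather than an authentication error) typically points at network access, and the usual fix is an inbound rule on the DB instance's security group that references the application servers' security group. A minimal sketch with hypothetical group IDs.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical IDs: the first group is attached to the DB instance,
# the second to the application servers.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",           # DB instance security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-0fedcba9876543210"}],  # app servers
    }],
)
```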
47
A Database Specialist is creating Amazon DynamoDB tables, Amazon CloudWatch alarms, and associated infrastructure for an Application team using a development AWS account. The team wants a deployment method that will standardize the core solution components while managing environment-specific settings separately, and wants to minimize rework due to configuration errors. Which process should the Database Specialist recommend to meet these requirements?
A) Organize common and environmental-specific parameters hierarchically in the AWS Systems Manager Parameter Store, then reference the parameters dynamically from an AWS CloudFormation template. Deploy the CloudFormation stack using the environment name as a parameter.
B) Create a parameterized AWS CloudFormation template that builds the required objects. Keep separate environment parameter files in separate Amazon S3 buckets. Provide an AWS CLI command that deploys the CloudFormation stack directly referencing the appropriate parameter bucket.
C) Create a parameterized AWS CloudFormation template that builds the required objects. Import the template into the CloudFormation interface in the AWS Management Console. Make the required changes to the parameters and deploy the CloudFormation stack.
D) Create an AWS Lambda function that builds the required objects using an AWS SDK. Set the required parameter values in a test event in the Lambda console for each environment that the Application team can modify, as needed. Deploy the infrastructure by triggering the test event in the console.
48
A Database Specialist must create a read replica to isolate read-only queries for an Amazon RDS for MySQL DB instance. Immediately after creating the read replica, users that query it report slow response times. What could be causing these slow response times?
A) New volumes created from snapshots load lazily in the background
B) Long-running statements on the master
C) Insufficient resources on the master
D) Overload of a single replication thread by excessive writes on the master
49
A company is using Amazon RDS for PostgreSQL. The Security team wants all database connection requests to be logged and retained for 180 days. The RDS for PostgreSQL DB instance is currently using the default parameter group. A Database Specialist has identified that setting the log_connections parameter to 1 will enable connections logging. Which combination of steps should the Database Specialist take to meet the logging and retention requirements? (Choose two.)
A) Update the log_connections parameter in the default parameter group
B) Create a custom parameter group, update the log_connections parameter, and associate the parameter with the DB instance
C) Enable publishing of database engine logs to Amazon CloudWatch Logs and set the event expiration to 180 days
D) Enable publishing of database engine logs to an Amazon S3 bucket and set the lifecycle policy to 180 days
E) Connect to the RDS PostgreSQL host and update the log_connections parameter in the postgresql.conf file
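To illustrate the mechanics behind options B and C: the default parameter group cannot be modified, so a custom group carries the log_connections change, and the retention period is then set on the CloudWatch Logs log group that RDS publishes to. A minimal sketch with hypothetical names and engine version.

```python
import boto3

rds = boto3.client("rds")
logs = boto3.client("logs")

rds.create_db_parameter_group(
    DBParameterGroupName="pg13-conn-logging",     # hypothetical
    DBParameterGroupFamily="postgres13",          # hypothetical family
    Description="Enable connection logging",
)
rds.modify_db_parameter_group(
    DBParameterGroupName="pg13-conn-logging",
    Parameters=[{"ParameterName": "log_connections",
                 "ParameterValue": "1",
                 "ApplyMethod": "immediate"}],
)
rds.modify_db_instance(
    DBInstanceIdentifier="appdb",                 # hypothetical instance
    DBParameterGroupName="pg13-conn-logging",
    CloudwatchLogsExportConfiguration={"EnableLogTypes": ["postgresql"]},
    ApplyImmediately=True,
)

# Retention lives on the log group, not on RDS.
logs.put_retention_policy(
    logGroupName="/aws/rds/instance/appdb/postgresql",
    retentionInDays=180,
)
```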
50
A retail company is about to migrate its online and mobile store to AWS. The company's CEO has strategic plans to grow the brand globally. A Database Specialist has been challenged to provide predictable read and write database performance with minimal operational overhead. What should the Database Specialist do to meet these requirements?
A) Use Amazon DynamoDB global tables to synchronize transactions
B) Use Amazon EMR to copy the orders table data across Regions
C) Use Amazon Aurora Global Database to synchronize all transactions
D) Use Amazon DynamoDB Streams to replicate all DynamoDB transactions and sync them
51
A gaming company is designing a mobile gaming app that will be accessed by many users across the globe. The company wants to have replication and full support for multi-master writes. The company also wants to ensure low latency and consistent performance for app users. Which solution meets these requirements?
A) Use Amazon DynamoDB global tables for storage and enable DynamoDB automatic scaling
B) Use Amazon Aurora for storage and enable cross-Region Aurora Replicas
C) Use Amazon Aurora for storage and cache the user content with Amazon ElastiCache
D) Use Amazon Neptune for storage
52
A gaming company has implemented a leaderboard in AWS using a Sorted Set data structure within Amazon ElastiCache for Redis. The ElastiCache cluster has been deployed with cluster mode disabled and has a replication group deployed with two additional replicas. The company is planning for a worldwide gaming event and is anticipating a higher write load than what the current cluster can handle. Which method should a Database Specialist use to scale the ElastiCache cluster ahead of the upcoming event?
A) Enable cluster mode on the existing ElastiCache cluster and configure separate shards for the Sorted Set across all nodes in the cluster.
B) Increase the size of the ElastiCache cluster nodes to a larger instance size.
C) Create an additional ElastiCache cluster and load-balance traffic between the two clusters.
D) Use the EXPIRE command and set a higher time to live (TTL) after each call to increment a given key.
53
An ecommerce company is using Amazon DynamoDB as the backend for its order-processing application. The steady increase in the number of orders is resulting in increased DynamoDB costs. Order verification and reporting perform many repeated GetItem functions that pull similar datasets, and this read activity is contributing to the increased costs. The company wants to control these costs without significant development efforts. How should a Database Specialist address these requirements?
A) Use AWS DMS to migrate data from DynamoDB to Amazon DocumentDB
B) Use Amazon DynamoDB Streams and Amazon Kinesis Data Firehose to push the data into Amazon Redshift
C) Use an Amazon ElastiCache for Redis in front of DynamoDB to boost read performance
D) Use DynamoDB Accelerator to offload the reads
54
A company is about to launch a new product, and test databases must be re-created from production data. The company runs its production databases on an Amazon Aurora MySQL DB cluster. A Database Specialist needs to deploy a solution to create these test databases as quickly as possible with the least amount of administrative effort. What should the Database Specialist do to meet these requirements?
A) Restore a snapshot from the production cluster into test clusters
B) Create logical dumps of the production cluster and restore them into new test clusters
C) Use database cloning to create clones of the production cluster
D) Add an additional read replica to the production cluster and use that node for testing
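For reference, Aurora database cloning (option C) is a copy-on-write operation exposed through the point-in-time restore API, so a clone becomes available quickly regardless of database size. A minimal sketch with hypothetical identifiers.

```python
import boto3

rds = boto3.client("rds")

# Copy-on-write clone of the production cluster.
rds.restore_db_cluster_to_point_in_time(
    SourceDBClusterIdentifier="prod-aurora",      # hypothetical
    DBClusterIdentifier="test-aurora-1",
    RestoreType="copy-on-write",
    UseLatestRestorableTime=True,
)

# The clone starts with no instances; add one so it accepts connections.
rds.create_db_instance(
    DBInstanceIdentifier="test-aurora-1-node-1",
    DBClusterIdentifier="test-aurora-1",
    DBInstanceClass="db.r5.large",                # hypothetical size
    Engine="aurora-mysql",
)
```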
55
A marketing company is using Amazon DocumentDB and requires that database audit logs be enabled. A Database Specialist needs to configure monitoring so that all data definition language (DDL) statements performed are visible to the Administrator. The Database Specialist has set the audit_logs parameter to enabled in the cluster parameter group. What should the Database Specialist do to automatically collect the database logs for the Administrator?
A) Enable DocumentDB to export the logs to Amazon CloudWatch Logs
B) Enable DocumentDB to export the logs to AWS CloudTrail
C) Enable DocumentDB Events to export the logs to Amazon CloudWatch Logs
D) Configure an AWS Lambda function to download the logs using the download-db-log-file-portion operation and store the logs in Amazon S3
56
A company has a production Amazon Aurora DB cluster that serves both online transaction processing (OLTP) transactions and compute-intensive reports. The reports run for 10% of the total cluster uptime, while the OLTP transactions run all the time. The company has benchmarked its workload and determined that a six-node Aurora DB cluster is appropriate for the peak workload. The company is now looking at cutting costs for this DB cluster, but needs to have a sufficient number of nodes in the cluster to support the workload at different times. The workload has not changed since the previous benchmarking exercise. How can a Database Specialist address these requirements with minimal user involvement?
A) Split up the DB cluster into two different clusters: one for OLTP and the other for reporting. Monitor and set up replication between the two clusters to keep data consistent.
B) Review and evaluate the peak combined workload. Ensure that utilization of the DB cluster nodes is at an acceptable level. Adjust the number of instances, if necessary.
C) Use the stop cluster functionality to stop all the nodes of the DB cluster during times of minimal workload. The cluster can be restarted again depending on the workload at the time.
D) Set up automatic scaling on the DB cluster. This will allow the number of reader nodes to adjust automatically to the reporting workload, when needed.
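As background on option D, Aurora Replica auto scaling is configured through Application Auto Scaling against the rds:cluster:ReadReplicaCount dimension. A minimal target-tracking sketch, assuming a hypothetical cluster named prod-aurora.

```python
import boto3

aas = boto3.client("application-autoscaling")

aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:prod-aurora",             # hypothetical cluster
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,                                # baseline for OLTP reads
    MaxCapacity=5,                                # peak reporting window
)
aas.put_scaling_policy(
    PolicyName="reader-cpu-tracking",
    ServiceNamespace="rds",
    ResourceId="cluster:prod-aurora",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,                      # target reader CPU, percent
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)
```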
57
A Database Specialist is designing a new database infrastructure for a ride hailing application. The application data includes a ride tracking system that stores GPS coordinates for all rides. Real-time statistics and metadata lookups must be performed with high throughput and microsecond latency. The database should be fault tolerant with minimal operational overhead and development effort. Which solution meets these requirements in the MOST efficient way?
A) Use Amazon RDS for MySQL as the database and use Amazon ElastiCache
B) Use Amazon DynamoDB as the database and use DynamoDB Accelerator
C) Use Amazon Aurora MySQL as the database and use Aurora's buffer cache
D) Use Amazon DynamoDB as the database and use Amazon API Gateway
58
A Database Specialist is migrating an on-premises Microsoft SQL Server application database to Amazon RDS for PostgreSQL using AWS DMS. The application requires minimal downtime when the RDS DB instance goes live. What change should the Database Specialist make to enable the migration?
A) Configure the on-premises application database to act as a source for an AWS DMS full load with ongoing change data capture (CDC)
B) Configure the AWS DMS replication instance to allow both full load and ongoing change data capture (CDC)
C) Configure the AWS DMS task to generate full logs to allow for ongoing change data capture (CDC)
D) Configure the AWS DMS connections to allow two-way communication to allow for ongoing change data capture (CDC)
59
A company has a database monitoring solution that uses Amazon CloudWatch for its Amazon RDS for SQL Server environment. The cause of a recent spike in CPU utilization was not determined using the standard metrics that were collected. The CPU spike caused the application to perform poorly, impacting users. A Database Specialist needs to determine what caused the CPU spike. Which combination of steps should be taken to provide more visibility into the processes and queries running during an increase in CPU load? (Choose two.)
A) Enable Amazon CloudWatch Events and view the incoming T-SQL statements causing the CPU to spike.
B) Enable Enhanced Monitoring metrics to view CPU utilization at the RDS SQL Server DB instance level.
C) Implement a caching layer to help with repeated queries on the RDS SQL Server DB instance.
D) Use Amazon QuickSight to view the SQL statement being run.
E) Enable Amazon RDS Performance Insights to view the database load and filter the load by waits, SQL statements, hosts, or users.
60
A gaming company has recently acquired a successful iOS game, which is particularly popular during the holiday season. The company has decided to add a leaderboard to the game that uses Amazon DynamoDB. The application load is expected to ramp up over the holiday season. Which solution will meet these requirements at the lowest cost?
A) DynamoDB Streams
B) DynamoDB with DynamoDB Accelerator
C) DynamoDB with on-demand capacity mode
D) DynamoDB with provisioned capacity mode with Auto Scaling
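For reference, moving a table to on-demand capacity (option C) is a single API call, after which billing follows actual request volume with no capacity planning. Hypothetical table name below.

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.update_table(
    TableName="leaderboard",          # hypothetical
    BillingMode="PAY_PER_REQUEST",    # on-demand capacity mode
)
```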
61
A large ecommerce company uses Amazon DynamoDB to handle the transactions on its web portal. Traffic patterns throughout the year are usually stable; however, a large event is planned. The company knows that traffic will increase by up to 10 times the normal load over the 3-day event. When sale prices are published during the event, traffic will spike rapidly. How should a Database Specialist ensure DynamoDB can handle the increased traffic?
A) Ensure the table is always provisioned to meet peak needs
B) Allow burst capacity to handle the additional load
C) Set an AWS Application Auto Scaling policy for the table to handle the increase in traffic
D) Preprovision additional capacity for the known peaks and then reduce the capacity after the event
62
A small startup company is looking to migrate a 4 TB on-premises MySQL database to AWS using an Amazon RDS for MySQL DB instance. Which strategy would allow for a successful migration with the LEAST amount of downtime?
A) Deploy a new RDS for MySQL DB instance and configure it for access from the on-premises data center. Use the mysqldump utility to create an initial snapshot from the on-premises MySQL server, and copy it to an Amazon S3 bucket. Import the snapshot into the DB instance utilizing the MySQL utilities running on an Amazon EC2 instance. Immediately point the application to the DB instance.
B) Deploy a new Amazon EC2 instance, install the MySQL software on the EC2 instance, and configure networking for access from the on-premises data center. Use the mysqldump utility to create a snapshot of the on-premises MySQL server. Copy the snapshot into the EC2 instance and restore it into the EC2 MySQL instance. Use AWS DMS to migrate data into a new RDS for MySQL DB instance. Point the application to the DB instance.
C) Deploy a new Amazon EC2 instance, install the MySQL software on the EC2 instance, and configure networking for access from the on-premises data center. Use the mysqldump utility to create a snapshot of the on-premises MySQL server. Copy the snapshot into an Amazon S3 bucket and import the snapshot into a new RDS for MySQL DB instance using the MySQL utilities running on an EC2 instance. Point the application to the DB instance.
D) Deploy a new RDS for MySQL DB instance and configure it for access from the on-premises data center. Use the mysqldump utility to create an initial snapshot from the on-premises MySQL server, and copy it to an Amazon S3 bucket. Import the snapshot into the DB instance using the MySQL utilities running on an Amazon EC2 instance. Establish replication into the new DB instance using MySQL replication. Stop application access to the on-premises MySQL server and let the remaining transactions replicate over. Point the application to the DB instance.
63
A company has a heterogeneous six-node production Amazon Aurora DB cluster that handles online transaction processing (OLTP) for the core business and OLAP reports for the human resources department. To match compute resources to the use case, the company has decided to have the reporting workload for the human resources department be directed to two small nodes in the Aurora DB cluster, while every other workload goes to four large nodes in the same DB cluster. Which option would ensure that the correct nodes are always available for the appropriate workload while meeting these requirements?
A) Use the writer endpoint for OLTP and the reader endpoint for the OLAP reporting workload.
B) Use automatic scaling for the Aurora Replica to have the appropriate number of replicas for the desired workload.
C) Create additional readers to cater to the different scenarios.
D) Use custom endpoints to satisfy the different workloads.
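For context on option D, a custom reader endpoint pins a stable DNS name to a fixed set of cluster members, while OLTP traffic keeps using the cluster's writer endpoint. A minimal sketch with hypothetical instance identifiers for the two small reporting nodes.

```python
import boto3

rds = boto3.client("rds")

rds.create_db_cluster_endpoint(
    DBClusterIdentifier="prod-aurora",            # hypothetical
    DBClusterEndpointIdentifier="hr-reporting",
    EndpointType="READER",
    StaticMembers=[
        "prod-aurora-small-1",                    # hypothetical small nodes
        "prod-aurora-small-2",
    ],
)
```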
64
A company's database specialist disabled TLS on an Amazon DocumentDB cluster to perform benchmarking tests. A few days after this change was implemented, a database specialist trainee accidentally deleted multiple tables. The database specialist restored the database from available snapshots. An hour after restoring the cluster, the database specialist is still unable to connect to the new cluster endpoint. What should the database specialist do to connect to the new, restored Amazon DocumentDB cluster?
A) Change the restored cluster's parameter group to the original cluster's custom parameter group.
B) Change the restored cluster's parameter group to the Amazon DocumentDB default parameter group.
C) Configure the interface VPC endpoint and associate the new Amazon DocumentDB cluster.
D) Run the syncInstances command in AWS DataSync.
65
A database specialist at a large multi-national financial company is in charge of designing the disaster recovery strategy for a highly available application that is in development. The application uses an Amazon DynamoDB table as its data store. The application requires a recovery time objective (RTO) of 1 minute and a recovery point objective (RPO) of 2 minutes. Which operationally efficient disaster recovery strategy should the database specialist recommend for the DynamoDB table?
A) Create a DynamoDB stream that is processed by an AWS Lambda function that copies the data to a DynamoDB table in another Region.
B) Use a DynamoDB global table replica in another Region. Enable point-in-time recovery for both tables.
C) Use a DynamoDB Accelerator table in another Region. Enable point-in-time recovery for the table.
D) Create an AWS Backup plan and assign the DynamoDB table as a resource.
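To illustrate the setup in option B: with global tables (version 2019.11.21), a replica Region is added through UpdateTable, and point-in-time recovery is enabled per Region. A minimal sketch with a hypothetical table name; the table needs a stream of new and old images before a replica can be added.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Global tables require a stream of new and old images.
dynamodb.update_table(
    TableName="orders",               # hypothetical
    StreamSpecification={"StreamEnabled": True,
                         "StreamViewType": "NEW_AND_OLD_IMAGES"},
)

# Add the replica in the recovery Region.
dynamodb.update_table(
    TableName="orders",
    ReplicaUpdates=[{"Create": {"RegionName": "us-west-2"}}],
)

# Enable PITR here; repeat against us-west-2 for the replica.
dynamodb.update_continuous_backups(
    TableName="orders",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)
```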
66
A company is using Amazon Aurora PostgreSQL for the backend of its application. The system users are complaining that the responses are slow. A database specialist has determined that the queries to Aurora take longer during peak times. In the Amazon RDS Performance Insights dashboard, the load in the average active sessions chart is often above the line that denotes maximum CPU usage, and most wait events are IO:XactSync. What should the company do to resolve these performance issues?
A) Add an Aurora Replica to scale the read traffic.
B) Scale up the DB instance class.
C) Modify applications to commit transactions in batches.
D) Modify applications to avoid conflicts by taking locks.
67
A financial company has allocated an Amazon RDS MariaDB DB instance with large storage capacity to accommodate migration efforts. Post-migration, the company purged unwanted data from the instance. The company now wants to downsize the storage to save money. The solution must have the least impact on production and near-zero downtime. Which solution would meet these requirements?
A) Create a snapshot of the old databases and restore the snapshot with the required storage
B) Create a new RDS DB instance with the required storage and move the databases from the old instance to the new instance using AWS DMS
C) Create a new database using native backup and restore
D) Create a new read replica and make it the primary by terminating the existing primary
68
A software development company is using Amazon Aurora MySQL DB clusters for several use cases, including development and reporting. These use cases place unpredictable and varying demands on the Aurora DB clusters, and can cause momentary spikes in latency. System users run ad-hoc queries sporadically throughout the week. Cost is a primary concern for the company, and a solution that does not require significant rework is needed. Which solution meets these requirements?
A) Create new Aurora Serverless DB clusters for development and reporting, then migrate to these new DB clusters.
B) Upgrade one of the DB clusters to a larger size, and consolidate development and reporting activities on this larger DB cluster.
C) Use existing DB clusters and stop/start the databases on a routine basis using scheduling tools.
D) Change the DB clusters to the burstable instance family.
69
A company is due to renew its database license. The company wants to migrate its 80 TB transactional database system from on-premises to the AWS Cloud. The migration should incur the least possible downtime on the downstream database applications. The company's network infrastructure has limited network bandwidth that is shared with other applications. Which solution should a database specialist use for a timely migration?
A) Perform a full backup of the source database to AWS Snowball Edge appliances and ship them to be loaded to Amazon S3. Use AWS DMS to migrate change data capture (CDC) data from the source database to Amazon S3. Use a second AWS DMS task to migrate all the S3 data to the target database.
B) Perform a full backup of the source database to AWS Snowball Edge appliances and ship them to be loaded to Amazon S3. Periodically perform incremental backups of the source database to be shipped in another Snowball Edge appliance to handle syncing change data capture (CDC) data from the source to the target database.
C) Use AWS DMS to migrate the full load of the source database over a VPN tunnel using the internet for its primary connection. Allow AWS DMS to handle syncing change data capture (CDC) data from the source to the target database.
D) Use the AWS Schema Conversion Tool (AWS SCT) to migrate the full load of the source database over a VPN tunnel using the internet for its primary connection. Allow AWS SCT to handle syncing change data capture (CDC) data from the source to the target database.
70
A company developed an AWS CloudFormation template used to create all new Amazon DynamoDB tables in its AWS account. The template configures provisioned throughput capacity using hard-coded values. The company wants to change the template so that the tables it creates in the future have independently configurable read and write capacity units assigned. Which solution will enable this change?
A) Add values for the rcuCount and wcuCount parameters to the Mappings section of the template. Configure DynamoDB to provision throughput capacity using the stack's mappings.
B) Add values for two Number parameters, rcuCount and wcuCount, to the template. Replace the hard-coded values with calls to the Ref intrinsic function, referencing the new parameters.
C) Add values for the rcuCount and wcuCount parameters as outputs of the template. Configure DynamoDB to provision throughput capacity using the stack outputs.
D) Add values for the rcuCount and wcuCount parameters to the Mappings section of the template. Replace the hard-coded values with calls to the Ref intrinsic function, referencing the new parameters.
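For reference, the Parameters-plus-Ref pattern in option B looks like the following template fragment, rendered as Python/JSON purely for illustration; the key schema is hypothetical.

```python
import json

template = {
    "Parameters": {
        "rcuCount": {"Type": "Number", "Default": 5},
        "wcuCount": {"Type": "Number", "Default": 5},
    },
    "Resources": {
        "Table": {
            "Type": "AWS::DynamoDB::Table",
            "Properties": {
                "AttributeDefinitions": [{"AttributeName": "pk",
                                          "AttributeType": "S"}],
                "KeySchema": [{"AttributeName": "pk", "KeyType": "HASH"}],
                # Ref substitutes the per-stack parameter values.
                "ProvisionedThroughput": {
                    "ReadCapacityUnits": {"Ref": "rcuCount"},
                    "WriteCapacityUnits": {"Ref": "wcuCount"},
                },
            },
        }
    },
}

print(json.dumps(template, indent=2))
```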
71
A company needs to migrate Oracle Database Standard Edition running on an Amazon EC2 instance to an Amazon RDS for Oracle DB instance with Multi-AZ. The database supports an ecommerce website that runs continuously. The company can only provide a maintenance window of up to 5 minutes. Which solution will meet these requirements?
A) Configure Oracle Real Application Clusters (RAC) on the EC2 instance and the RDS DB instance. Update the connection string to point to the RAC cluster. Once the EC2 instance and RDS DB instance are in sync, fail over from Amazon EC2 to Amazon RDS.
B) Export the Oracle database from the EC2 instance using Oracle Data Pump and perform an import into Amazon RDS. Stop the application for the entire process. When the import is complete, change the database connection string and then restart the application.
C) Configure AWS DMS with the EC2 instance as the source and the RDS DB instance as the destination. Stop the application when the replication is in sync, change the database connection string, and then restart the application.
D) Configure AWS DataSync with the EC2 instance as the source and the RDS DB instance as the destination. Stop the application when the replication is in sync, change the database connection string, and then restart the application.
72
A company runs a customer relationship management (CRM) system that is hosted on-premises with a MySQL database as the backend. A custom stored procedure is used to send email notifications to another system when data is inserted into a table. The company has noticed that the performance of the CRM system has decreased due to database reporting applications used by various teams. The company requires an AWS solution that would reduce maintenance, improve performance, and accommodate the email notification feature. Which AWS solution meets these requirements?
A) Use MySQL running on an Amazon EC2 instance with Auto Scaling to accommodate the reporting applications. Configure a stored procedure and an AWS Lambda function that uses Amazon SES to send email notifications to the other system.
B) Use Amazon Aurora MySQL in a multi-master cluster to accommodate the reporting applications. Configure Amazon RDS event subscriptions to publish a message to an Amazon SNS topic and subscribe the other system's email address to the topic.
C) Use MySQL running on an Amazon EC2 instance with a read replica to accommodate the reporting applications. Configure Amazon SES integration to send email notifications to the other system.
D) Use Amazon Aurora MySQL with a read replica for the reporting applications. Configure a stored procedure and an AWS Lambda function to publish a message to an Amazon SNS topic. Subscribe the other system's email address to the topic.
73
A database specialist is responsible for an Amazon RDS for MySQL DB instance with one read replica. The DB instance and the read replica are assigned to the default parameter group. The database team currently runs test queries against a read replica. The database team wants to create additional tables in the read replica that will only be accessible from the read replica to benefit the tests. What should the database specialist do to allow the database team to create the test tables?
A) Contact AWS Support to disable read-only mode on the read replica. Reboot the read replica. Connect to the read replica and create the tables.
B) Change the read_only parameter to false (read_only=0) in the default parameter group of the read replica. Perform a reboot without failover. Connect to the read replica and create the tables using the local_only MySQL option.
C) Change the read_only parameter to false (read_only=0) in the default parameter group. Reboot the read replica. Connect to the read replica and create the tables.
D) Create a new DB parameter group. Change the read_only parameter to false (read_only=0). Associate the read replica with the new group. Reboot the read replica. Connect to the read replica and create the tables.
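As background on the parameter-group mechanics in these options: default groups cannot be edited, so a new group carries read_only=0 and is attached to the replica, and a reboot makes the association take effect. A minimal sketch with hypothetical names.

```python
import boto3

rds = boto3.client("rds")

rds.create_db_parameter_group(
    DBParameterGroupName="replica-writable",      # hypothetical
    DBParameterGroupFamily="mysql8.0",            # hypothetical family
    Description="Allow local test tables on the read replica",
)
rds.modify_db_parameter_group(
    DBParameterGroupName="replica-writable",
    Parameters=[{"ParameterName": "read_only",
                 "ParameterValue": "0",
                 "ApplyMethod": "immediate"}],
)
rds.modify_db_instance(
    DBInstanceIdentifier="appdb-replica-1",       # hypothetical replica
    DBParameterGroupName="replica-writable",
    ApplyImmediately=True,
)
# A newly associated parameter group takes effect after a reboot.
rds.reboot_db_instance(DBInstanceIdentifier="appdb-replica-1")
```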
74
A company is running Amazon RDS for MySQL for its workloads. There is downtime when AWS operating system patches are applied during the Amazon RDS-specified maintenance window. What is the MOST cost-effective action that should be taken to avoid downtime?
A) Migrate the workloads from Amazon RDS for MySQL to Amazon DynamoDB
B) Enable cross-Region read replicas and direct read traffic to them when Amazon RDS is down
C) Enable a read replica and direct read traffic to it when Amazon RDS is down
D) Enable an Amazon RDS for MySQL Multi-AZ configuration
75
A financial company wants to store sensitive user data in an Amazon Aurora PostgreSQL DB cluster. The database will be accessed by multiple applications across the company. The company has mandated that all communications to the database be encrypted and the server identity must be validated. Any non-SSL-based connections should be disallowed access to the database. Which solution addresses these requirements?
A) Set the rds.force_ssl=0 parameter in DB parameter groups. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=allow.
B) Set the rds.force_ssl=1 parameter in DB parameter groups. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=disable.
C) Set the rds.force_ssl=0 parameter in DB parameter groups. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=verify-ca.
D) Set the rds.force_ssl=1 parameter in DB parameter groups. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=verify-full.
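For context: rds.force_ssl=1, set in a custom DB parameter group, makes the server reject plaintext connections, while sslmode=verify-full makes the client validate both the certificate chain and the hostname against the downloaded RDS CA bundle. A minimal client-side sketch with hypothetical connection details.

```python
import psycopg2  # pip install psycopg2-binary

conn = psycopg2.connect(
    host="prod-aurora.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com",  # hypothetical
    port=5432,
    dbname="appdb",
    user="app_user",
    password="...",                  # fetched from a secret store in practice
    sslmode="verify-full",           # encrypt and validate server identity
    sslrootcert="global-bundle.pem", # Amazon RDS CA certificate bundle
)
```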
76
A company is using a 5 TB Amazon RDS DB instance and needs to maintain 5 years of monthly database backups for compliance purposes. A Database Administrator must be able to provide Auditors with the data within 24 hours. Which solution will meet these requirements and is the MOST operationally efficient?
A) Create an AWS Lambda function to run on the first day of every month to take a manual RDS snapshot. Move the snapshot to the company's Amazon S3 bucket.
B) Create an AWS Lambda function to run on the first day of every month to take a manual RDS snapshot.
C) Create an RDS snapshot schedule from the AWS Management Console to take a snapshot every 30 days.
D) Create an AWS Lambda function to run on the first day of every month to create an automated RDS snapshot.
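The detail behind option B is retention: manual snapshots persist until explicitly deleted, automated snapshots expire with the retention window, and RDS snapshots live in service-managed storage rather than a customer S3 bucket. A minimal Lambda handler sketch, assuming an EventBridge cron rule invokes it on the first of each month; the instance identifier is illustrative:

```python
import boto3
from datetime import datetime, timezone

rds = boto3.client("rds")

def lambda_handler(event, context):
    # Manual snapshots are retained until explicitly deleted, which
    # satisfies the 5-year monthly compliance requirement.
    stamp = datetime.now(timezone.utc).strftime("%Y-%m")
    rds.create_db_snapshot(
        DBInstanceIdentifier="prod-db",                    # illustrative identifier
        DBSnapshotIdentifier=f"prod-db-monthly-{stamp}",
    )
```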
77
A company is going to use an Amazon Aurora PostgreSQL DB cluster for an application backend. The DB cluster contains some tables with sensitive data. A Database Specialist needs to control the access privileges at the table level. How can the Database Specialist meet these requirements?
A) Use AWS IAM database authentication and restrict access to the tables using an IAM policy.
B) Configure the rules in a NACL to restrict outbound traffic from the Aurora DB cluster.
C) Execute GRANT and REVOKE commands that restrict access to the tables containing sensitive data.
D) Define access privileges to the tables containing sensitive data in the pg_hba.conf file.
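Table-level privileges belong to the database engine itself, which is what option C describes; IAM database authentication governs who may connect, not per-table rights. A minimal sketch issuing the statements through psycopg2, with all table, role, and connection names illustrative assumptions:

```python
import os
import psycopg2

conn = psycopg2.connect(
    host="mycluster.cluster-abc123.us-east-1.rds.amazonaws.com",  # illustrative
    dbname="appdb",
    user="admin_user",
    password=os.environ["DB_PASSWORD"],
)
with conn, conn.cursor() as cur:
    # Lock the sensitive table down, then grant back only what each role needs.
    cur.execute("REVOKE ALL ON customers_pii FROM PUBLIC;")
    cur.execute("GRANT SELECT ON customers_pii TO reporting_role;")
    cur.execute("GRANT SELECT, INSERT, UPDATE ON customers_pii TO app_role;")
```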
78
A database specialist is building a system that uses a static vendor dataset of postal codes and related territory information that is less than 1 GB in size. The dataset is loaded into the application's cache at startup. The company needs to store this data in a way that provides the lowest cost and a low application startup time. Which approach will meet these requirements?
A) Use an Amazon RDS DB instance. Shut down the instance once the data has been read.
B) Use Amazon Aurora Serverless. Allow the service to spin resources up and down, as needed.
C) Use Amazon DynamoDB in on-demand capacity mode.
D) Use Amazon S3 and load the data from flat files.
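Option D matches the access pattern: the dataset is static, under 1 GB, and read once at startup, so S3 storage is close to free and a single GET keeps startup time low. A minimal boto3 sketch; the bucket, key, and column names are illustrative assumptions:

```python
import csv
import io
import boto3

s3 = boto3.client("s3")

def load_postal_codes(bucket="vendor-data", key="postal_codes.csv"):
    # One GET at application startup; the whole dataset fits in memory.
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    reader = csv.DictReader(io.StringIO(body.decode("utf-8")))
    return {row["postal_code"]: row for row in reader}
```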
79
A Database Specialist is working with a company to launch a new website built on Amazon Aurora with several Aurora Replicas. This new website will replace an on-premises website connected to a legacy relational database. Due to stability issues in the legacy database, the company would like to test the resiliency of Aurora. Which action can the Database Specialist take to test the resiliency of the Aurora DB cluster?
A) Stop the DB cluster and analyze how the website responds
B) Use Aurora fault injection to crash the master DB instance
C) Remove the DB cluster endpoint to simulate a master DB instance failure
D) Use Aurora Backtrack to crash the DB cluster
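Option B refers to Aurora's built-in fault injection queries, which simulate a crash of the writer so the cluster's failover behavior can be observed without touching real infrastructure. A minimal sketch for an Aurora PostgreSQL cluster (Aurora MySQL uses ALTER SYSTEM CRASH INSTANCE; instead); the endpoint and credentials are illustrative assumptions:

```python
import os
import psycopg2

conn = psycopg2.connect(
    host="mycluster.cluster-abc123.us-east-1.rds.amazonaws.com",  # illustrative
    dbname="appdb",
    user="admin_user",
    password=os.environ["DB_PASSWORD"],
)
conn.autocommit = True
with conn.cursor() as cur:
    # Simulate a crash of the current instance; Aurora then fails over
    # to a replica, which is exactly what the resiliency test observes.
    cur.execute("SELECT aurora_inject_crash('instance');")
```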
80
A company just migrated to Amazon Aurora PostgreSQL from an on-premises Oracle database. After the migration, the company discovered that there is a period every day around 3:00 PM when the application's response time is noticeably slower. The company has narrowed down the cause of this issue to the database and not the application. Which set of steps should the Database Specialist take to most efficiently find the problematic PostgreSQL query?
A) Create an Amazon CloudWatch dashboard to show the number of connections, CPU usage, and disk space consumption. Watch these dashboards during the next slow period.
B) Launch an Amazon EC2 instance, and install and configure an open-source PostgreSQL monitoring tool that will run reports based on the output error logs.
C) Modify the logging database parameter to log all the queries related to locking in the database and then check the logs after the next slow period for this information.
D) Enable Amazon RDS Performance Insights on the PostgreSQL database. Use the metrics to identify any queries that are related to spikes in the graph during the next slow period.
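Performance Insights (option D) records database load and can slice it by SQL statement, so the query behind the 3:00 PM spike is visible directly in the console. The same data is reachable programmatically; a minimal boto3 sketch against the Performance Insights API, where the DbiResourceId is an illustrative assumption:

```python
import boto3
from datetime import datetime, timedelta, timezone

pi = boto3.client("pi")

end = datetime.now(timezone.utc)
start = end - timedelta(hours=2)  # window covering the daily slow period

# Average active sessions (db.load.avg) grouped by SQL statement surfaces
# the top queries driving load during the window.
resp = pi.get_resource_metrics(
    ServiceType="RDS",
    Identifier="db-ABCDEFGHIJKLMNOPQRST",  # the instance's DbiResourceId (illustrative)
    MetricQueries=[{"Metric": "db.load.avg",
                    "GroupBy": {"Group": "db.sql", "Limit": 5}}],
    StartTime=start,
    EndTime=end,
    PeriodInSeconds=300,
)
for item in resp["MetricList"]:
    statement = item["Key"].get("Dimensions", {}).get("db.sql.statement", "total")
    values = [p["Value"] for p in item["DataPoints"] if "Value" in p]
    print(statement, max(values) if values else 0.0)
```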