
Amazon Web Services Updated DOP-C02 Exam Questions and Answers by inara


Amazon Web Services DOP-C02 Exam Overview:

Exam Name: AWS Certified DevOps Engineer - Professional
Exam Code: DOP-C02
Vendor: Amazon Web Services
Certification: AWS Certified Professional
Questions: 425 Q&As
Shared By: inara
Question 56

A company has deployed an application in a single AWS Region. The application backend uses Amazon DynamoDB tables and Amazon S3 buckets.

The company wants to deploy the application in a secondary Region. The company must ensure that the data in the DynamoDB tables and the S3 buckets persists across both Regions. The data must also immediately propagate across Regions.

Which solution will meet these requirements with the MOST operational efficiency?

Options:

A.

Implement two-way S3 bucket replication between the primary Region's S3 buckets and the secondary Region's S3 buckets. Convert the DynamoDB tables into global tables. Set the secondary Region as the additional Region.

B.

Implement S3 Batch Operations copy jobs between the primary Region and the secondary Region for all S3 buckets. Convert the DynamoDB tables into global tables. Set the secondary Region as the additional Region.

C.

Implement two-way S3 bucket replication between the primary Region's S3 buckets and the secondary Region's S3 buckets. Enable DynamoDB streams on the DynamoDB tables in both Regions. In each Region, create an AWS Lambda function that subscribes to the DynamoDB streams. Configure the Lambda function to copy new records to the DynamoDB tables in the other Region.

D.

Implement S3 Batch Operations copy jobs between the primary Region and the secondary Region for all S3 buckets. Enable DynamoDB streams on the DynamoDB tables in both Regions. In each Region, create an AWS Lambda function that subscribes to the DynamoDB streams. Configure the Lambda function to copy new records to the DynamoDB tables in the other Region.

Discussion
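
Option A leans entirely on managed replication features rather than custom copy jobs or Lambda glue. A minimal boto3 sketch of that approach (a DynamoDB global table replica plus cross-Region S3 replication) is shown below; the Regions, table name, bucket names, and replication role ARN are placeholders, and both buckets are assumed to already have versioning enabled, which S3 replication requires.

import boto3

# Hypothetical names and Regions used for illustration only.
PRIMARY_REGION = "us-east-1"
SECONDARY_REGION = "us-west-2"
TABLE_NAME = "app-table"
PRIMARY_BUCKET = "app-data-primary"
SECONDARY_BUCKET = "app-data-secondary"
REPLICATION_ROLE_ARN = "arn:aws:iam::123456789012:role/s3-replication-role"

# 1) Add a replica in the secondary Region, turning the table into a global table
#    (version 2019.11.21). The table is assumed to already have DynamoDB Streams
#    (NEW_AND_OLD_IMAGES) enabled, which replicas require.
dynamodb = boto3.client("dynamodb", region_name=PRIMARY_REGION)
dynamodb.update_table(
    TableName=TABLE_NAME,
    ReplicaUpdates=[{"Create": {"RegionName": SECONDARY_REGION}}],
)

# 2) Replicate new objects from the primary bucket to the secondary bucket.
#    Running the same call with the buckets swapped gives two-way replication.
s3 = boto3.client("s3", region_name=PRIMARY_REGION)
s3.put_bucket_replication(
    Bucket=PRIMARY_BUCKET,
    ReplicationConfiguration={
        "Role": REPLICATION_ROLE_ARN,
        "Rules": [
            {
                "ID": "replicate-to-secondary",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Enabled"},
                "Destination": {"Bucket": f"arn:aws:s3:::{SECONDARY_BUCKET}"},
            }
        ],
    },
)
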
Question 57

A company is launching an application that stores raw data in an Amazon S3 bucket. Three applications need to access the data to generate reports. The data must be redacted differently for each application before the applications can access the data.

Which solution will meet these requirements?

Options:

A.

Create an S3 bucket for each application. Configure S3 Same-Region Replication (SRR) from the raw data's S3 bucket to each application's S3 bucket. Configure each application to consume data from its own S3 bucket.

B.

Create an Amazon Kinesis data stream. Create an AWS Lambda function that is invoked by object creation events in the raw data's S3 bucket. Program the Lambda function to redact data for each application. Publish the data on the Kinesis data stream. Configure each application to consume data from the Kinesis data stream.

C.

For each application, create an S3 access point that uses the raw data's S3 bucket as the destination. Create an AWS Lambda function that is invoked by object creation events in the raw data's S3 bucket. Program the Lambda function to redact data for each application. Store the data in each application's S3 access point. Configure each application to consume data from its own S3 access point.

D.

Create an S3 access point that uses the raw data's S3 bucket as the destination. For each application, create an S3 Object Lambda access point that uses the S3 access point. Configure the AWS Lambda function for each S3 Object Lambda access point to redact data when objects are retrieved. Configure each application to consume data from its own S3 Object Lambda access point.

Discussion
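
Option D keeps a single copy of the raw data and redacts it at retrieval time through S3 Object Lambda. The sketch below shows the general shape of such a redaction function; the redact() logic is hypothetical, while the event fields and write_get_object_response call follow the GetObject event that S3 Object Lambda passes to the function.

import urllib.request
import boto3

s3 = boto3.client("s3")

def redact(text: str) -> str:
    # Hypothetical per-application redaction logic.
    return text.replace("SSN", "[REDACTED]")

def handler(event, context):
    # S3 Object Lambda supplies a presigned URL for the original object,
    # plus a route and token used to return the transformed object.
    ctx = event["getObjectContext"]
    with urllib.request.urlopen(ctx["inputS3Url"]) as response:
        original = response.read().decode("utf-8")

    s3.write_get_object_response(
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
        Body=redact(original).encode("utf-8"),
    )
    return {"status_code": 200}

Each application would be pointed at its own Object Lambda access point backed by a different function, so the redaction can differ per consumer while the raw objects stay in one bucket.
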
Question 58

A company is building a new pipeline by using AWS CodePipeline and AWS CodeBuild in a build account. The pipeline consists of two stages. The first stage is a CodeBuild job to build and package an AWS Lambda function. The second stage consists of deployment actions that operate on two different AWS accounts: a development environment account and a production environment account. The deployment stages use the AWS CloudFormation action that CodePipeline invokes to deploy the infrastructure that the Lambda function requires.

A DevOps engineer creates the CodePipeline pipeline and configures the pipeline to encrypt build artifacts by using the AWS Key Management Service (AWS KMS) AWS managed key for Amazon S3 (the aws/s3 key). The artifacts are stored in an S3 bucket. When the pipeline runs, the CloudFormation actions fail with an access denied error.

Which combination of actions must the DevOps engineer perform to resolve this error? (Select TWO.)

Options:

A.

Create an S3 bucket in each AWS account for the artifacts. Allow the pipeline to write to the S3 buckets. Create a CodePipeline S3 action to copy the artifacts to the S3 bucket in each AWS account. Update the CloudFormation actions to reference the artifacts S3 bucket in the production account.

B.

Create a customer managed KMS key. Configure the KMS key policy to allow the IAM roles used by the CloudFormation action to perform decrypt operations. Modify the pipeline to use the customer managed KMS key to encrypt artifacts.

C.

Create an AWS managed KMS key. Configure the KMS key policy to allow the development account and the production account to perform decrypt operations. Modify the pipeline to use the KMS key to encrypt artifacts.

D.

In the development account and in the production account, create an IAM role for CodePipeline. Configure the roles with permissions to perform CloudFormation operations and with permissions to retrieve and decrypt objects from the artifacts S3 bucket. In the CodePipeline account, configure the CodePipeline CloudFormation action to use the roles.

E.

In the development account and in the production account, create an IAM role for CodePipeline. Configure the roles with permissions to perform CloudFormation operations and with permissions to retrieve and decrypt objects from the artifacts S3 bucket. In the CodePipeline account, modify the artifacts S3 bucket policy to allow the roles access. Configure the CodePipeline CloudFormation action to use the roles.

Discussion
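
The aws/s3 AWS managed key cannot be shared across accounts, so option B centers on a customer managed key whose key policy grants decrypt access to the cross-account deployment roles. A hedged boto3 sketch of creating such a key is below; the account IDs and role names are placeholders, and the pipeline would then be updated to encrypt its artifacts with this key.

import json
import boto3

BUILD_ACCOUNT = "111111111111"          # placeholder account ID
CROSS_ACCOUNT_ROLES = [
    "arn:aws:iam::222222222222:role/cfn-deploy-dev",    # hypothetical role names
    "arn:aws:iam::333333333333:role/cfn-deploy-prod",
]

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Retain full control for the build account's administrators.
            "Sid": "EnableRootAccess",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{BUILD_ACCOUNT}:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {   # Let the cross-account CloudFormation deployment roles decrypt artifacts.
            "Sid": "AllowCrossAccountDecrypt",
            "Effect": "Allow",
            "Principal": {"AWS": CROSS_ACCOUNT_ROLES},
            "Action": ["kms:Decrypt", "kms:DescribeKey"],
            "Resource": "*",
        },
    ],
}

kms = boto3.client("kms", region_name="us-east-1")
key = kms.create_key(
    Description="CodePipeline artifact encryption key",
    Policy=json.dumps(key_policy),
)
print(key["KeyMetadata"]["Arn"])
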
Question 59

A company requires an RPO of 2 hours and an RTO of 10 minutes for its data and application at all times. An application uses a MySQL database and Amazon EC2 web servers. The development team needs a strategy for failover and disaster recovery.

Which combination of deployment strategies will meet these requirements? (Select TWO.)

Options:

A.

Create an Amazon Aurora cluster in one Availability Zone across multiple Regions as the data store. Use Aurora's automatic recovery capabilities in the event of a disaster.

B.

Create an Amazon Aurora global database in two Regions as the data store. In the event of a failure, promote the secondary Region as the primary for the application.

C.

Create an Amazon Aurora multi-master cluster across multiple Regions as the data store. Use a Network Load Balancer to balance the database traffic in different Regions.

D.

Set up the application in two Regions and use Amazon Route 53 failover-based routing that points to the Application Load Balancers in both Regions. Use health checks to determine the availability in a given Region. Use Auto Scaling groups in each Region to adjust capacity based on demand.

E.

Set up the application in two Regions and use a multi-Region Auto Scaling group behind Application Load Balancers to manage the capacity based on demand. In the event of a disaster, adjust the Auto Scaling group's desired instance count to increase baseline capacity in the failover Region.

Discussion
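
Options B and D pair an Aurora global database with Route 53 failover routing in front of the two Regions. The sketch below illustrates the Route 53 piece with placeholder identifiers (hosted zone ID, record name, health check ID, and ALB DNS names); the Aurora global database from option B would be created separately, for example with rds.create_global_cluster, and promoted in the secondary Region during a disaster.

import boto3

# Placeholder identifiers for illustration.
HOSTED_ZONE_ID = "Z0000000000000000000"
RECORD_NAME = "app.example.com"
PRIMARY_ALB = "primary-alb-123.us-east-1.elb.amazonaws.com"
SECONDARY_ALB = "secondary-alb-456.us-west-2.elb.amazonaws.com"
PRIMARY_HEALTH_CHECK_ID = "11111111-2222-3333-4444-555555555555"

route53 = boto3.client("route53")

def upsert_failover_record(value, set_id, failover, health_check_id=None):
    # Build a failover record set; PRIMARY records carry a health check so that
    # Route 53 can detect a Regional outage and answer with the SECONDARY record.
    record = {
        "Name": RECORD_NAME,
        "Type": "CNAME",
        "TTL": 60,
        "SetIdentifier": set_id,
        "Failover": failover,  # "PRIMARY" or "SECONDARY"
        "ResourceRecords": [{"Value": value}],
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={"Changes": [{"Action": "UPSERT", "ResourceRecordSet": record}]},
    )

# Route 53 answers with the primary Region's ALB while its health check passes,
# and fails over to the secondary Region's ALB when it does not.
upsert_failover_record(PRIMARY_ALB, "primary", "PRIMARY", PRIMARY_HEALTH_CHECK_ID)
upsert_failover_record(SECONDARY_ALB, "secondary", "SECONDARY")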