
Amazon Web Services Updated DOP-C02 Exam Questions and Answers by livia


Amazon Web Services DOP-C02 Exam Overview:

Exam Name: AWS Certified DevOps Engineer - Professional
Exam Code: DOP-C02
Vendor: Amazon Web Services
Certification: AWS Certified Professional
Questions: 392 Q&As, shared by livia
Question 100

A company releases a new application in a new AWS account. The application includes an AWS Lambda function that processes messages from an Amazon Simple Queue Service (Amazon SQS) standard queue. The Lambda function stores the results in an Amazon S3 bucket for further downstream processing. The Lambda function needs to process the messages within a specific period of time after the messages are published. The Lambda function has a batch size of 10 messages and takes a few seconds to process a batch of messages.

As load increases on the application's first day of service, messages accumulate in the queue faster than the Lambda function can process them. Some messages miss the required processing timelines. The logs show that many messages in the queue contain data that is not valid. The company needs to meet the timeline requirements for messages that have valid data.

Which solution will meet these requirements?

Options:

A.

Increase the Lambda function's batch size. Change the SQS standard queue to an SQS FIFO queue. Request a Lambda concurrency increase in the AWS Region.

B.

Reduce the Lambda function's batch size. Increase the SQS message throughput quota. Request a Lambda concurrency increase in the AWS Region.

C.

Increase the Lambda function's batch size. Configure S3 Transfer Acceleration on the S3 bucket. Configure an SQS dead-letter queue.

D.

Keep the Lambda function's batch size the same. Configure the Lambda function to report failed batch items. Configure an SQS dead-letter queue.
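Option D refers to Lambda's partial batch response feature for SQS event sources: the handler reports the IDs of only the failed messages, so valid messages are not retried, and the queue's redrive policy eventually moves repeatedly failing (invalid) messages to a dead-letter queue. A minimal sketch of such a handler in Python, assuming a hypothetical is_valid check and with the actual processing elided:

```python
import json

def is_valid(body):
    # Hypothetical validation: treat messages that are not valid JSON as bad data.
    try:
        json.loads(body)
        return True
    except json.JSONDecodeError:
        return False

def handler(event, context):
    # With ReportBatchItemFailures enabled on the event source mapping,
    # any message ID returned below is retried by SQS; after maxReceiveCount
    # receives, the queue's redrive policy moves it to the dead-letter queue.
    failures = []
    for record in event["Records"]:
        if not is_valid(record["body"]):
            failures.append({"itemIdentifier": record["messageId"]})
            continue
        # ... process the valid message and write the result to S3 ...
    return {"batchItemFailures": failures}
```

This depends on the event source mapping having FunctionResponseTypes set to ReportBatchItemFailures; without that setting, the return value is ignored and the whole batch is retried on any failure.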

Question 101

A company has a workflow that generates a file for each of the company's products and stores the files in a production environment Amazon S3 bucket. The company's users can access the S3 bucket.

Each file contains a product ID. Product IDs are 12 characters long. IDs for products that have not been publicly announced begin with the letter P and are prefixed with a specific UUID.

The company does not want information about products that have not been publicly announced to be available in the production environment S3 bucket.

Which solution will meet these requirements?

Options:

A.

Create a new staging S3 bucket. Generate all files in the new staging bucket. Create an Amazon Macie custom data identifier to identify product IDs in the new bucket that begin with the specific UUID. Launch an Amazon Macie sensitive data discovery job with the custom data identifier. Copy all files that do not have a Macie finding to the production S3 bucket.

B.

Create an Amazon Macie custom data identifier to identify product IDs in the production bucket that begin with the specific UUID. Launch an Amazon Macie sensitive data discovery job with the custom data identifier. Remove all files that have a Macie finding from the production S3 bucket.

C.

Create a new staging S3 bucket. Generate all files in the new staging bucket. Launch an Amazon Macie sensitive data discovery job with a managed data identifier. Copy all files that do not have a Macie finding to the production S3 bucket.

D.

Create an Amazon Macie sensitive data discovery job with a managed data identifier. Remove all files that have a Macie finding from the production S3 bucket.
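Both Macie-based approaches in these options center on a custom data identifier, which matches object contents against a regex you define. A minimal boto3 sketch of creating one and launching a one-time discovery job; the UUID prefix, account ID, and staging bucket name are all placeholders:

```python
import boto3

macie = boto3.client("macie2")

# Hypothetical regex: a fixed UUID prefix followed by a 12-character product ID
# beginning with "P". Replace the UUID with the company's actual prefix.
identifier = macie.create_custom_data_identifier(
    name="unannounced-product-ids",
    regex=r"123e4567-e89b-12d3-a456-426614174000-P[A-Z0-9]{11}",
    description="Product IDs for products that have not been publicly announced",
)

# One-time sensitive data discovery job over the staging bucket (name assumed),
# using only the custom identifier rather than Macie's managed identifiers.
macie.create_classification_job(
    jobType="ONE_TIME",
    name="scan-staging-product-files",
    customDataIdentifierIds=[identifier["customDataIdentifierId"]],
    managedDataIdentifierSelector="NONE",
    s3JobDefinition={
        "bucketDefinitions": [
            {"accountId": "111122223333", "buckets": ["staging-product-files"]}
        ]
    },
)
```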

Question 102

A company runs applications on Windows and Linux Amazon EC2 instances. The instances run across multiple Availability Zones in an AWS Region. The company uses Auto Scaling groups for each application.

The company needs a durable storage solution for the instances. The solution must use SMB for Windows and must use NFS for Linux. The solution must also have sub-millisecond latencies. All instances will read and write the data.

Which combination of steps will meet these requirements? (Select THREE.)

Options:

A.

Create an Amazon Elastic File System (Amazon EFS) file system that has mount targets in multiple Availability Zones.

B.

Create an Amazon FSx for NetApp ONTAP Multi-AZ file system.

C.

Create a General Purpose SSD (gp3) Amazon Elastic Block Store (Amazon EBS) volume to use for shared storage.

D.

Update the user data for each application's launch template to mount the file system.

E.

Perform an instance refresh on each Auto Scaling group.

F.

Update the EC2 instances for each application to mount the file system when new instances are launched.
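Among the file systems named in these options, Amazon FSx for NetApp ONTAP stands out because a single file system can serve the same data over both SMB (for Windows) and NFS (for Linux), whereas Amazon EFS supports NFS only. A minimal boto3 sketch of creating a Multi-AZ ONTAP file system, with subnet IDs and sizing as placeholders:

```python
import boto3

fsx = boto3.client("fsx")

# Multi-AZ FSx for NetApp ONTAP file system; its volumes can then be
# exported over NFS to Linux instances and shared over SMB to Windows instances.
response = fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=1024,  # GiB; placeholder sizing
    SubnetIds=["subnet-aaa111", "subnet-bbb222"],  # two AZs; placeholders
    OntapConfiguration={
        "DeploymentType": "MULTI_AZ_1",
        "PreferredSubnetId": "subnet-aaa111",
        "ThroughputCapacity": 128,  # MBps; placeholder
    },
)
print(response["FileSystem"]["FileSystemId"])
```

Instances would then mount the file system at boot, for example via mount commands placed in each launch template's user data.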

Question 103

A developer is maintaining a fleet of 50 Amazon EC2 Linux servers. The servers are part of an Amazon EC2 Auto Scaling group and use Elastic Load Balancing for load balancing.

Occasionally, some application servers are terminated after failing ELB HTTP health checks. The developer would like to perform a root cause analysis on the issue, but the servers are terminated before the application logs can be accessed.

How can log collection be automated?

Options:

A.

Use Auto Scaling lifecycle hooks to put instances in a Pending:Wait state. Create an Amazon CloudWatch alarm for EC2 Instance Terminate Successful and trigger an AWS Lambda function that invokes an SSM Run Command script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.

B.

Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create an AWS Config rule for EC2 Instance-terminate Lifecycle Action and trigger a step function that invokes a script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.

C.

Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create an Amazon CloudWatch subscription filter for EC2 Instance Terminate Successful and trigger a CloudWatch agent that invokes a script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.

D.

Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create an Amazon EventBridge rule for EC2 Instance-terminate Lifecycle Action and trigger an AWS Lambda function that invokes an SSM Run Command script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.
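Option D's flow pairs a Terminating:Wait lifecycle hook with an EventBridge rule for the EC2 Instance-terminate Lifecycle Action event; the event detail carries everything the Lambda function needs to run an SSM command and then release the instance. A minimal handler sketch in Python, with the log path and bucket name as placeholders:

```python
import boto3

ssm = boto3.client("ssm")
autoscaling = boto3.client("autoscaling")

def handler(event, context):
    detail = event["detail"]
    instance_id = detail["EC2InstanceId"]

    # Run a shell script on the terminating instance to copy its logs to S3.
    # Assumes the AWS CLI is installed and the instance role allows S3 writes;
    # the log path and bucket name are placeholders for illustration.
    ssm.send_command(
        InstanceIds=[instance_id],
        DocumentName="AWS-RunShellScript",
        Parameters={
            "commands": [
                f"aws s3 cp /var/log/app/ s3://example-log-bucket/{instance_id}/ --recursive"
            ]
        },
    )

    # Allow termination to proceed once log collection has been triggered.
    # (A production version would wait for the command to finish first.)
    autoscaling.complete_lifecycle_action(
        LifecycleHookName=detail["LifecycleHookName"],
        AutoScalingGroupName=detail["AutoScalingGroupName"],
        LifecycleActionToken=detail["LifecycleActionToken"],
        LifecycleActionResult="CONTINUE",
        InstanceId=instance_id,
    )
```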

