
Amazon Web Services DOP-C02 Exam Questions and Answers


Amazon Web Services DOP-C02 Exam Overview:

Exam Name: AWS Certified DevOps Engineer - Professional
Exam Code: DOP-C02
Vendor: Amazon Web Services
Certification: AWS Certified Professional
Questions: 407 Q&As
Question 16

A rapidly growing company wants to scale for developer demand for AWS development environments. Development environments are created manually in the AWS Management Console. The networking team uses AWS CloudFormation to manage the networking infrastructure, exporting stack output values for the Amazon VPC and all subnets. The development environments have common standards, such as Application Load Balancers, Amazon EC2 Auto Scaling groups, security groups, and Amazon DynamoDB tables.

To keep up with demand, the DevOps engineer wants to automate the creation of development environments. Because the infrastructure required to support the application is expected to grow, there must be a way to easily update the deployed infrastructure. CloudFormation will be used to create a template for the development environments.

Which approach will meet these requirements and quickly provide consistent AWS environments for developers?

Options:

A.

Use Fn::ImportValue intrinsic functions in the Resources section of the template to retrieve Virtual Private Cloud (VPC) and subnet values. Use CloudFormation StackSets for the development environments, using the Count input parameter to indicate the number of environments needed. Use the UpdateStackSet command to update existing development environments.

B.

Use nested stacks to define common infrastructure components. To access the exported values, use TemplateURL to reference the networking team’s template. To retrieve Virtual Private Cloud (VPC) and subnet values, use Fn::ImportValue intrinsic functions in the Parameters section of the root template. Use the CreateChangeSet and ExecuteChangeSet commands to update existing development environments.

C.

Use nested stacks to define common infrastructure components. Use Fn::ImportValue intrinsic functions with the resources of the nested stack to retrieve Virtual Private Cloud (VPC) and subnet values. Use the CreateChangeSet and ExecuteChangeSet commands to update existing development environments.

D.

Use Fn::ImportValue intrinsic functions in the Parameters section of the root template to retrieve Virtual Private Cloud (VPC) and subnet values. Define the development resources in the order they need to be created in the CloudFormation nested stacks. Use the CreateChangeSet and ExecuteChangeSet commands to update existing development environments.
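
For the change-set workflow that options B, C, and D describe, the following is a minimal boto3 sketch; the stack name, change set name, and template URL are hypothetical:

```python
import boto3

cfn = boto3.client("cloudformation")

STACK = "dev-env-team-a"        # hypothetical stack name
CHANGE_SET = "update-dev-env"   # hypothetical change set name

# Stage the proposed infrastructure changes without applying them yet.
cfn.create_change_set(
    StackName=STACK,
    ChangeSetName=CHANGE_SET,
    TemplateURL="https://example-bucket.s3.amazonaws.com/dev-env.yaml",  # hypothetical
    Capabilities=["CAPABILITY_IAM"],
)

# Wait until the change set has been created, review it if needed, then apply it.
cfn.get_waiter("change_set_create_complete").wait(
    StackName=STACK, ChangeSetName=CHANGE_SET
)
cfn.execute_change_set(StackName=STACK, ChangeSetName=CHANGE_SET)
```

Staging changes in a change set before executing them lets the engineer preview exactly which resources will be added, modified, or replaced in each development environment.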

Question 17

A company is implementing an Amazon Elastic Container Service (Amazon ECS) cluster to run its workload. The company's architecture will run multiple ECS services on the cluster. The architecture includes an Application Load Balancer on the front end and uses multiple target groups to route traffic.

A DevOps engineer must collect application and access logs. The DevOps engineer then needs to send the logs to an Amazon S3 bucket for near-real-time analysis.

Which combination of steps must the DevOps engineer take to meet these requirements? (Choose three.)

Options:

A.

Download the Amazon CloudWatch Logs container instance from AWS. Configure this instance as a task. Update the application service definitions to include the logging task.

B.

Install the Amazon CloudWatch Logs agent on the ECS instances. Change the logging driver in the ECS task definition to awslogs.

C.

Use Amazon EventBridge to schedule an AWS Lambda function that will run every 60 seconds and will run the Amazon CloudWatch Logs create-export-task command. Then point the output to the logging S3 bucket.

D.

Activate access logging on the ALB. Then point the ALB directly to the logging S3 bucket.

E.

Activate access logging on the target groups that the ECS services use. Then send the logs directly to the logging S3 bucket.

F.

Create an Amazon Kinesis Data Firehose delivery stream that has a destination of the logging S3 bucket. Then create an Amazon CloudWatch Logs subscription filter for Kinesis Data Firehose.
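
To make the logging flow in options B and F concrete, here is a minimal boto3 sketch; the log group, delivery stream, and role names are hypothetical:

```python
import boto3

logs = boto3.client("logs")

# In the ECS task definition, the awslogs log driver sends container output
# to CloudWatch Logs (excerpt of a containerDefinitions entry):
log_configuration = {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/order-app",      # hypothetical log group
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "ecs",
    },
}

# A subscription filter then streams every log event to the Kinesis Data
# Firehose delivery stream whose destination is the logging S3 bucket.
logs.put_subscription_filter(
    logGroupName="/ecs/order-app",
    filterName="ship-to-firehose",
    filterPattern="",  # an empty pattern matches all events
    destinationArn="arn:aws:firehose:us-east-1:123456789012:deliverystream/logs-to-s3",  # hypothetical
    roleArn="arn:aws:iam::123456789012:role/CWLToFirehose",  # hypothetical role
)
```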

Question 18

A company uses AWS Organizations with trusted access enabled for AWS CloudTrail. All events across all accounts and Regions must be logged and retained in an audit account, and failed login attempts must trigger real-time notifications.

Which solution meets these requirements?

Options:

A.

Publish the CloudTrail logs to an Amazon S3 bucket in the audit account. Create an Amazon EventBridge rule that matches failed login events and sends notifications through Amazon SNS.

B.

Store the logs in the management account and query them with Amazon Athena from an AWS Lambda function that runs every 5 minutes.

C.

Store the logs in an S3 bucket in the audit account and in a CloudWatch log group in the management account. Add a metric filter for failed logins that sends notifications through Amazon SNS.

D.

Stream the logs to Amazon Kinesis, process them with Apache Flink, and send notifications through Amazon SNS.
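
As an illustration of the EventBridge-plus-SNS notification path described in option A, a minimal boto3 sketch follows; the rule name and topic ARN are hypothetical:

```python
import json

import boto3

events = boto3.client("events")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:failed-logins"  # hypothetical topic

# Match CloudTrail console sign-in events that report a failed login attempt.
events.put_rule(
    Name="failed-console-logins",  # hypothetical rule name
    EventPattern=json.dumps(
        {
            "detail-type": ["AWS Console Sign In via CloudTrail"],
            "detail": {"responseElements": {"ConsoleLogin": ["Failure"]}},
        }
    ),
)

# Send matched events to the SNS topic; the topic's resource policy must
# allow events.amazonaws.com to publish to it.
events.put_targets(
    Rule="failed-console-logins",
    Targets=[{"Id": "sns-target", "Arn": TOPIC_ARN}],
)
```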

Question 19

A company needs to update its order processing application to improve resilience and availability. The application requires a stateful database and uses a single-node Amazon RDS DB instance to store customer orders and transaction history. A DevOps engineer must make the database highly available.

Which solution will meet this requirement?

Options:

A.

Migrate the database to Amazon DynamoDB global tables. Configure automatic failover between AWS Regions by using Amazon Route 53 health checks.

B.

Migrate the database to Amazon EC2 instances in multiple Availability Zones. Use Amazon Elastic Block Store (Amazon EBS) Multi-Attach to connect all the instances to a single EBS volume.

C.

Use the RDS DB instance as the source instance to create read replicas in multiple Availability Zones. Deploy an Application Load Balancer to distribute read traffic across the read replicas.

D.

Modify the RDS DB instance to be a Multi-AZ deployment. Verify automatic failover to the standby instance if the primary instance becomes unavailable.
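
To illustrate the Multi-AZ conversion in option D, a minimal boto3 sketch follows; the DB instance identifier is hypothetical:

```python
import boto3

rds = boto3.client("rds")

# Converting the instance to Multi-AZ provisions a synchronous standby
# replica in another Availability Zone; RDS fails over to the standby
# automatically if the primary instance becomes unavailable.
rds.modify_db_instance(
    DBInstanceIdentifier="orders-db",  # hypothetical identifier
    MultiAZ=True,
    ApplyImmediately=True,  # otherwise the change waits for the next maintenance window
)
```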
