
Amazon Web Services Updated DAS-C01 Exam Questions and Answers by reign

Page: 8 / 14

Amazon Web Services DAS-C01 Exam Overview :

Exam Name: AWS Certified Data Analytics - Specialty
Exam Code: DAS-C01
Vendor: Amazon Web Services
Certification: AWS Certified Data Analytics
Questions: 207 Q&As
Shared By: reign
Question 32

A company uses Amazon Connect to manage its contact center. The company uses Salesforce to manage its customer relationship management (CRM) data. The company must build a pipeline to ingest data from Amazon Connect and Salesforce into a data lake that is built on Amazon S3.

Which solution will meet this requirement with the LEAST operational overhead?

Options:

A.

Use Amazon Kinesis Data Streams to ingest the Amazon Connect data. Use Amazon AppFlow to ingest the Salesforce data.

B.

Use Amazon Kinesis Data Firehose to ingest the Amazon Connect data. Use Amazon Kinesis Data Streams to ingest the Salesforce data.

C.

Use Amazon Kinesis Data Firehose to ingest the Amazon Connect data. Use Amazon AppFlow to ingest the Salesforce data.

D.

Use Amazon AppFlow to ingest the Amazon Connect data. Use Amazon Kinesis Data Firehose to ingest the Salesforce data.

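Option C pairs the natively supported ingestion path for each source: Kinesis Data Firehose for Amazon Connect's streaming output and the fully managed AppFlow connector for Salesforce, neither of which requires custom consumer code. A minimal sketch of the two request payloads follows; the stream, flow, bucket, role, and connector-profile names are placeholders, not from the source, and the payload shapes are an approximation of what `firehose:CreateDeliveryStream` and `appflow:CreateFlow` accept (field mappings and tasks are omitted):

```python
def firehose_request(stream_name, bucket_arn, role_arn):
    """Approximate request body for firehose:CreateDeliveryStream,
    delivering Amazon Connect records into the S3 data lake."""
    return {
        "DeliveryStreamName": stream_name,
        "DeliveryStreamType": "DirectPut",
        "ExtendedS3DestinationConfiguration": {
            "BucketARN": bucket_arn,     # data-lake bucket (placeholder ARN)
            "RoleARN": role_arn,         # role Firehose assumes to write
            "Prefix": "connect/",        # keep Connect data in its own prefix
        },
    }

def appflow_request(flow_name, profile_name, bucket_name):
    """Approximate request body for appflow:CreateFlow, copying one
    Salesforce object into the same data-lake bucket."""
    return {
        "flowName": flow_name,
        "triggerConfig": {"triggerType": "Scheduled"},
        "sourceFlowConfig": {
            "connectorType": "Salesforce",
            "connectorProfileName": profile_name,  # hypothetical profile
            "sourceConnectorProperties": {"Salesforce": {"object": "Account"}},
        },
        "destinationFlowConfigList": [{
            "connectorType": "S3",
            "destinationConnectorProperties": {"S3": {"bucketName": bucket_name}},
        }],
    }
```

Both services are serverless, which is what keeps the operational overhead lower than running Kinesis Data Streams consumers for either source.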
Question 33

A marketing company wants to improve its reporting and business intelligence capabilities. During the planning phase, the company interviewed the relevant stakeholders and discovered that:

  • The operations team reports are run hourly for the current month’s data.
  • The sales team wants to use multiple Amazon QuickSight dashboards to show a rolling view of the last 30 days based on several categories.
  • The sales team also wants to view the data as soon as it reaches the reporting backend.
  • The finance team’s reports are run daily for last month’s data and once a month for the last 24 months of data.

Currently, there is 400 TB of data in the system with an expected additional 100 TB added every month. The company is looking for a solution that is as cost-effective as possible.

Which solution meets the company’s requirements?

Options:

A.

Store the last 24 months of data in Amazon Redshift. Configure Amazon QuickSight with Amazon Redshift as the data source.

B.

Store the last 2 months of data in Amazon Redshift and the rest of the months in Amazon S3. Set up an external schema and table for Amazon Redshift Spectrum. Configure Amazon QuickSight with Amazon Redshift as the data source.

C.

Store the last 24 months of data in Amazon S3 and query it using Amazon Redshift Spectrum. Configure Amazon QuickSight with Amazon Redshift Spectrum as the data source.

D.

Store the last 2 months of data in Amazon Redshift and the rest of the months in Amazon S3. Use a long-running Amazon EMR cluster with Apache Spark to query the data as needed. Configure Amazon QuickSight with Amazon EMR as the data source.

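The tiering in option B — hot recent months in native Redshift tables, older months in cheaper S3 storage queried through Redshift Spectrum — can be expressed in a handful of SQL statements. A sketch follows; the schema, database, table, column, bucket, and role names are all illustrative, not from the source:

```python
def spectrum_ddl(iam_role_arn):
    """Illustrative Redshift SQL for the hot/cold split: an external
    schema over the Glue Data Catalog, an external table over the
    archived months in S3, and a view that unions both tiers so
    QuickSight can keep Redshift as its single data source."""
    return [
        # External schema backed by the AWS Glue Data Catalog
        f"""CREATE EXTERNAL SCHEMA spectrum_hist
            FROM DATA CATALOG DATABASE 'marketing'
            IAM_ROLE '{iam_role_arn}'
            CREATE EXTERNAL DATABASE IF NOT EXISTS;""",
        # External table over the archived months in S3
        """CREATE EXTERNAL TABLE spectrum_hist.events (
               event_id BIGINT,
               category VARCHAR(64),
               event_ts TIMESTAMP)
           STORED AS PARQUET
           LOCATION 's3://marketing-datalake/events/';""",
        # One view spanning the hot (local) and cold (S3) tiers;
        # NO SCHEMA BINDING is required for views over external tables
        """CREATE VIEW reporting.events_all AS
               SELECT * FROM public.events_recent
               UNION ALL
               SELECT * FROM spectrum_hist.events
           WITH NO SCHEMA BINDING;""",
    ]
```

Only the two hot months consume Redshift node storage; the remaining ~22 months sit in S3 at a fraction of the cost, which is what makes B more cost-effective than keeping all 24 months (and 100 TB/month of growth) in the cluster.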
Question 34

A company has a data warehouse in Amazon Redshift that is approximately 500 TB in size. New data is imported every few hours and read-only queries are run throughout the day and evening. There is a particularly heavy load with no writes for several hours each morning on business days. During those hours, some queries are queued and take a long time to execute. The company needs to optimize query execution and avoid any downtime.

What is the MOST cost-effective solution?

Options:

A.

Enable concurrency scaling in the workload management (WLM) queue.

B.

Add more nodes using the AWS Management Console during peak hours. Set the distribution style to ALL.

C.

Use elastic resize to quickly add nodes during peak times. Remove the nodes when they are not needed.

D.

Use a snapshot, restore, and resize operation. Switch to the new target cluster.

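Concurrency scaling (option A) fits this workload because the morning peak is read-only and bursty: eligible queued queries are routed to transient scaling clusters billed per second, with no resize and no downtime. A minimal sketch of the parameter changes that would be passed to `redshift:ModifyClusterParameterGroup` follows; the queue settings are an assumed manual-WLM configuration, not taken from the source:

```python
import json

def wlm_with_concurrency_scaling(max_clusters=4):
    """Parameter entries enabling concurrency scaling on a WLM queue.
    Queries that would otherwise wait in the queue during the morning
    read-only peak run on transient scaling clusters instead."""
    wlm_queues = [{
        "query_concurrency": 5,         # slots on the main cluster
        "concurrency_scaling": "auto",  # burst eligible queued queries
    }]
    return [
        {"ParameterName": "wlm_json_configuration",
         "ParameterValue": json.dumps(wlm_queues)},
        {"ParameterName": "max_concurrency_scaling_clusters",
         "ParameterValue": str(max_clusters)},  # cap on burst clusters
    ]
```

By contrast, B and C require manual (or at least scheduled) resizing every business day, and D's snapshot-restore-resize approach involves switching clusters, so A is both the lowest-touch and the pay-only-for-burst option.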
Question 35

An insurance company has raw data in JSON format that is sent without a predefined schedule through an Amazon Kinesis Data Firehose delivery stream to an Amazon S3 bucket. An AWS Glue crawler is scheduled to run every 8 hours to update the schema in the data catalog of the tables stored in the S3 bucket. Data analysts analyze the data using Apache Spark SQL on Amazon EMR set up with AWS Glue Data Catalog as the metastore. Data analysts say that, occasionally, the data they receive is stale. A data engineer needs to provide access to the most up-to-date data.

Which solution meets these requirements?

Options:

A.

Create an external schema based on the AWS Glue Data Catalog on the existing Amazon Redshift cluster to query new data in Amazon S3 with Amazon Redshift Spectrum.

B.

Use Amazon CloudWatch Events with the rate (1 hour) expression to execute the AWS Glue crawler every hour.

C.

Using the AWS CLI, modify the execution schedule of the AWS Glue crawler from 8 hours to 1 minute.

D.

Run the AWS Glue crawler from an AWS Lambda function triggered by an S3:ObjectCreated:* event notification on the S3 bucket.

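Because Firehose delivers without a predefined schedule, the event-driven approach in option D is the one that keeps the catalog current: an `s3:ObjectCreated:*` notification invokes a Lambda function that starts the crawler. A minimal sketch of such a handler follows; the crawler name is hypothetical, and the actual Glue call (which would be `boto3.client("glue").start_crawler(...)`) is left as a comment so the sketch stays self-contained:

```python
CRAWLER_NAME = "insurance-json-crawler"  # hypothetical crawler name

def lambda_handler(event, context):
    """Invoked by an s3:ObjectCreated:* notification on the landing
    bucket; starts the Glue crawler so the Data Catalog reflects new
    Firehose deliveries immediately instead of up to 8 hours later."""
    new_keys = [rec["s3"]["object"]["key"]
                for rec in event.get("Records", [])]
    if not new_keys:
        return {"started": False, "keys": []}
    # Real function: boto3.client("glue").start_crawler(Name=CRAWLER_NAME)
    # Catch glue.exceptions.CrawlerRunningException here: Firehose may
    # deliver several objects while a crawl is already in progress.
    return {"started": True, "crawler": CRAWLER_NAME, "keys": new_keys}
```

Options B and C merely shrink the staleness window (and a 1-minute crawl schedule would mostly re-run against unchanged data), while option A changes the query engine rather than fixing catalog freshness for the existing Spark SQL users.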
