
Amazon Web Services Updated AIP-C01 Exam Questions and Answers by indiana

Page: 5 / 8

Amazon Web Services AIP-C01 Exam Overview :

Exam Name: AWS Certified Generative AI Developer - Professional
Exam Code: AIP-C01
Vendor: Amazon Web Services
Certification: AWS Certified Professional
Questions: 119 Q&As
Shared By: indiana
Question 20

An elevator service company has developed an AI assistant application by using Amazon Bedrock. The application generates elevator maintenance recommendations to support the company’s elevator technicians. The company uses Amazon Kinesis Data Streams to collect the elevator sensor data.

New regulatory rules require that a human technician must review all AI-generated recommendations. The company needs to establish human oversight workflows to review and approve AI recommendations. The company must store all human technician review decisions for audit purposes.

Which solution will meet these requirements?

Options:

A.

Create a custom approval workflow by using AWS Lambda functions and Amazon SQS queues for human review of AI recommendations. Store all review decisions in Amazon DynamoDB for audit purposes.

B.

Create an AWS Step Functions workflow that has a human approval step that uses the waitForTaskToken API to pause execution. After a human technician completes a review, use an AWS Lambda function to call the SendTaskSuccess API with the approval decision. Store all review decisions in Amazon DynamoDB.

C.

Create an AWS Glue workflow that has a human approval step. After the human technician review, integrate the application with an AWS Lambda function that calls the SendTaskSuccess API. Store all human technician review decisions in Amazon DynamoDB.

D.

Configure Amazon EventBridge rules with custom event patterns to route AI recommendations to human technicians for review. Create AWS Glue jobs to process human technician approval queues. Use Amazon ElastiCache to cache all human technician review decisions.
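Option B hinges on the Step Functions waitForTaskToken callback pattern: the state machine pauses at the human-review step until SendTaskSuccess returns the reviewer's decision, which can also be written to DynamoDB for audit. A minimal sketch of the two payloads involved, assuming hypothetical table and field names (the actual boto3 calls are shown only in comments so the sketch stays self-contained):

```python
import json
from datetime import datetime, timezone

def build_review_callback(task_token: str, technician_id: str, approved: bool):
    """Build the SendTaskSuccess parameters and the DynamoDB audit item
    for a completed human review. Helper, table, and field names here are
    illustrative, not from the question."""
    decision = "APPROVED" if approved else "REJECTED"
    timestamp = datetime.now(timezone.utc).isoformat()
    # Parameters for stepfunctions_client.send_task_success(**callback),
    # which resumes the paused state machine execution.
    callback = {
        "taskToken": task_token,
        "output": json.dumps({"decision": decision, "reviewedBy": technician_id}),
    }
    # Item for dynamodb_client.put_item(TableName="ReviewAudit", Item=audit_item),
    # retained for the regulatory audit trail.
    audit_item = {
        "reviewId": {"S": f"{technician_id}#{timestamp}"},
        "decision": {"S": decision},
        "reviewedAt": {"S": timestamp},
    }
    return callback, audit_item
```

Because the task token is issued per execution, each recommendation's review resumes exactly the workflow that produced it.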

Discussion
Question 21

A company is building a multicloud generative AI (GenAI)-powered secret resolution application that uses Amazon Bedrock and Agent Squad. The application resolves secrets from multiple sources, including key stores and hardware security modules (HSMs). The application uses AWS Lambda functions to retrieve secrets from the sources. The application uses AWS AppConfig to implement dynamic feature gating. The application supports secret chaining and detects secret drift. The application handles short-lived and expiring secrets. The application also supports prompt flows for templated instructions. The application uses AWS Step Functions to orchestrate agents to resolve the secrets and to manage secret validation and drift detection.

The company finds multiple issues during application testing. The application does not refresh expired secrets in time for agents to use. The application sends alerts for secret drift, but agents still use stale data. Prompt flows within the application reuse outdated templates, which cause cascading failures. The company must resolve the performance issues.

Which solution will meet this requirement?

Options:

A.

Use Step Functions Map states to run agent workflows in parallel. Pass updated secret metadata through Lambda function outputs. Use AWS AppConfig to version all prompt flows to gate and roll back faulty templates.

B.

Use Amazon Bedrock Agents only. Configure Amazon Bedrock guardrails to restrict prompt variation. Use an inline JSON schema for a single agent’s workflow definition to chain tool calls.

C.

Use a centralized Amazon EventBridge pipeline to invoke each agent. Store intermediate prompts in Amazon DynamoDB. Resolve agent ordering by using TTL-based backoff and retries.

D.

Use Amazon EventBridge Pipes to invoke resolvers based on Amazon CloudWatch log patterns. Store response metadata in DynamoDB with TTL and versioned writes. Use Amazon Q Developer to dynamically generate fallback prompts.
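Option A's Map state is the Amazon States Language construct that fans agent workflows out in parallel while passing updated secret metadata through each iteration's Lambda output. A sketch of such a state built as a Python dict (the ASL field names are real; the Lambda ARN is a placeholder):

```python
def make_agent_map_state(items_path: str = "$.agents", max_concurrency: int = 5):
    """Sketch of a Step Functions Map state, expressed as an Amazon States
    Language definition in a Python dict. Runs one ResolveSecret task per
    element of the input array in parallel."""
    return {
        "Type": "Map",
        "ItemsPath": items_path,          # JSONPath to the array of agents
        "MaxConcurrency": max_concurrency,
        "ItemProcessor": {
            "StartAt": "ResolveSecret",
            "States": {
                "ResolveSecret": {
                    "Type": "Task",
                    # Placeholder ARN -- each iteration returns refreshed
                    # secret metadata in the Lambda function's output.
                    "Resource": "arn:aws:lambda:REGION:ACCOUNT:function:resolve-secret",
                    "End": True,
                },
            },
        },
        "End": True,
    }
```

Versioning the prompt flows in AWS AppConfig then complements this: a faulty template can be gated or rolled back without redeploying the state machine.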

Discussion
Question 22

A university recently digitized a collection of archival documents, academic journals, and manuscripts. The university stores the digital files in an AWS Lake Formation data lake.

The university hires a GenAI developer to build a solution to allow users to search the digital files by using text queries. The solution must return journal abstracts that are semantically similar to a user's query. Users must be able to search the digitized collection based on text and metadata that is associated with the journal abstracts. The metadata of the digitized files does not contain keywords. The solution must match similar abstracts to one another based on the similarity of their text. The data lake contains fewer than 1 million files.

Which solution will meet these requirements with the LEAST operational overhead?

Options:

A.

Use Amazon Titan Embeddings in Amazon Bedrock to create vector representations of the digitized files. Store embeddings in the OpenSearch Neural plugin for Amazon OpenSearch Service.

B.

Use Amazon Comprehend to extract topics from the digitized files. Store the topics and file metadata in an Amazon Aurora PostgreSQL database. Query the abstract metadata against the data in the Aurora database.

C.

Use Amazon SageMaker AI to deploy a sentence-transformer model. Use the model to create vector representations of the digitized files. Store embeddings in an Amazon Aurora PostgreSQL database that has the pgvector extension.

D.

Use Amazon Titan Embeddings in Amazon Bedrock to create vector representations of the digitized files. Store embeddings in an Amazon Aurora PostgreSQL Serverless database that has the pgvector extension.
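Whichever vector store is chosen, the semantic matching itself reduces to nearest-neighbor search over embeddings by cosine similarity. A toy illustration with stand-in two-dimensional vectors (real embeddings would come from a model such as Amazon Titan Embeddings and have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query_vec, corpus, k=2):
    """Return the IDs of the k abstracts most similar to the query embedding.
    `corpus` maps abstract IDs to precomputed embedding vectors; a vector
    database (OpenSearch, pgvector) does this same ranking at scale."""
    scored = sorted(
        corpus.items(),
        key=lambda kv: cosine_similarity(query_vec, kv[1]),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:k]]
```

The options differ mainly in how much of this pipeline is managed for you, which is what "least operational overhead" turns on.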

Discussion
Question 23

A global healthcare company is deploying a GenAI application on Amazon Bedrock to produce treatment recommendations. Regulations vary for each country where the company operates. Some countries require the company to retain all model inputs and outputs for 2 years. Other countries require the company to submit data for local audits only. Medical providers require consistent medical terminology across all locations. However, the treatment recommendations that the model produces must adapt to local patient demographics. The solution must also integrate with existing electronic health record (EHR) systems. The application must support up to 10,000 healthcare provider queries every day with sub-second response times. The company must be able to review the application before deployments and approve prompt changes. The application must produce comprehensive logs for prompts, responses, and user context.

Which solution will meet these requirements?

Options:

A.

Use AWS CloudTrail to log API calls. Create standard prompts in Amazon Bedrock Prompt Management that include variables for patient demographics. Implement IAM policies to ensure that only approved users can access prompts.

B.

Use Amazon CloudWatch Logs to collect detailed model invocation logs. Store the logs in Amazon S3. Create parameterized prompts in Amazon Bedrock Prompt Management that include variables for treatment options. Enable prompt versioning and set up an approval workflow.

C.

Create AWS Lambda functions to dynamically generate prompts that enforce clinical language requirements. Use Amazon CloudWatch Logs to track model invocations. Use Amazon SQS queues to implement a prompt approval workflow.

D.

Store prompt templates in Amazon S3. Use S3 Object Lock to implement version control. Use Amazon EventBridge to track model invocations. Use AWS Config to monitor changes to prompt templates.
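The parameterized prompts in option B work by substituting variables into a versioned template, so the clinical wording stays fixed while demographic details vary per query. A minimal sketch using Python's `string.Template`; the template text and variable names are illustrative, not taken from Bedrock Prompt Management:

```python
import string

# Hypothetical versioned template: the constant terminology is baked in,
# while ${...} placeholders carry the per-patient demographics.
PROMPT_TEMPLATE_V2 = (
    "Using standard medical terminology, recommend treatment options for a "
    "patient in ${country} aged ${age} with condition: ${condition}."
)

def render_prompt(template: str, **variables) -> str:
    """Fill a parameterized prompt template. `safe_substitute` leaves any
    unknown placeholder intact rather than raising, which makes missing
    variables easy to detect before invoking the model."""
    return string.Template(template).safe_substitute(**variables)
```

Versioning the template (rather than the rendered prompt) is what makes the approval workflow auditable: reviewers approve one template version, and every invocation logs which version it used.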

Discussion
