
Amazon Web Services Updated AIP-C01 Exam Questions and Answers by marlowe

Page: 3 / 7

Amazon Web Services AIP-C01 Exam Overview :

Exam Name: AWS Certified Generative AI Developer - Professional
Exam Code: AIP-C01
Vendor: Amazon Web Services
Certification: AWS Certified Professional
Questions: 107 Q&As
Shared By: marlowe
Question 12

A company is using Amazon Bedrock to design an application to help researchers apply for grants. The application is based on an Amazon Nova Pro foundation model (FM). The application contains four required inputs and must provide responses in a consistent text format. The company wants to receive a notification in Amazon Bedrock if a response contains bullying language. However, the company does not want to block all flagged responses.

The company creates an Amazon Bedrock flow that takes an input prompt and sends it to the Amazon Nova Pro FM. The Amazon Nova Pro FM provides a response.

Which additional steps must the company take to meet these requirements? (Select TWO.)

Options:

A.

Use Amazon Bedrock Prompt Management to specify the required inputs as variables. Select an Amazon Nova Pro FM. Specify the output format for the response. Add the prompt to the prompts node of the flow.

B.

Create an Amazon Bedrock guardrail that applies the hate content filter. Set the filter response to block. Add the guardrail to the prompts node of the flow.

C.

Create an Amazon Bedrock prompt router. Specify an Amazon Nova Pro FM. Add the required inputs as variables to the input node of the flow. Add the prompt router to the prompts node. Add the output format to the output node.

D.

Create an Amazon Bedrock guardrail that applies the insults content filter. Set the filter response to detect. Add the guardrail to the prompts node of the flow.

E.

Create an Amazon Bedrock application inference profile that specifies an Amazon Nova Pro FM. Specify the output format for the response in the description. Include a tag for each of the input variables. Add the profile to the prompts node of the flow.
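The "notify but don't block" requirement maps to a guardrail content filter running in detect-only mode. Below is a minimal sketch of the request body such a guardrail might use, assuming the field names of the Bedrock CreateGuardrail API (the guardrail name and messaging strings are made-up placeholders; verify the exact action field names against the current boto3 documentation before relying on them):

```python
# Sketch: a guardrail that flags insults without blocking the response.
# Field names assume the Bedrock CreateGuardrail API; the name and
# messaging values are hypothetical.
import json

guardrail_request = {
    "name": "grant-app-insults-detect",  # hypothetical name
    "description": "Flag insulting language without blocking responses",
    "contentPolicyConfig": {
        "filtersConfig": [
            {
                "type": "INSULTS",
                "inputStrength": "NONE",
                "outputStrength": "HIGH",
                # A 'NONE' action means detect only: the response is
                # annotated as flagged but still returned to the caller.
                "outputAction": "NONE",
            }
        ]
    },
    "blockedInputMessaging": "Input blocked.",
    "blockedOutputsMessaging": "Output blocked.",
}

# The actual call would be:
# bedrock = boto3.client("bedrock")
# response = bedrock.create_guardrail(**guardrail_request)
print(json.dumps(guardrail_request, indent=2))
```

Note the filter type is INSULTS, not HATE: the question's "bullying language" aligns with the insults category, and detect mode satisfies "notify without blocking".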

Discussion
Question 13

A company deploys multiple Amazon Bedrock–based generative AI (GenAI) applications across multiple business units for customer service, content generation, and document analysis. Some applications show unpredictable token consumption patterns. The company requires a comprehensive observability solution that provides real-time visibility into token usage patterns across multiple models. The observability solution must support custom dashboards for multiple stakeholder groups and provide alerting capabilities for token consumption across all the foundation models that the company’s applications use.

Which combination of solutions will meet these requirements with the LEAST operational overhead? (Select TWO.)

Options:

A.

Use Amazon CloudWatch metrics as data sources to create custom Amazon QuickSight dashboards that show token usage trends and usage patterns across FMs.

B.

Use CloudWatch Logs Insights to analyze Amazon Bedrock invocation logs for token consumption patterns and usage attribution by application. Create custom queries to identify high-usage scenarios. Add log widgets to dashboards to enable continuous monitoring.

C.

Create custom Amazon CloudWatch dashboards that combine native Amazon Bedrock token and invocation CloudWatch metrics. Set up CloudWatch alarms to monitor token usage thresholds.

D.

Create dashboards that show token usage trends and patterns across the company’s FMs by using an Amazon Bedrock zero-ETL integration with Amazon Managed Grafana.

E.

Implement Amazon EventBridge rules to capture Amazon Bedrock model invocation events. Route token usage data to Amazon OpenSearch Serverless by using Amazon Data Firehose. Use OpenSearch dashboards to analyze usage patterns.
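Amazon Bedrock publishes native CloudWatch metrics (namespace AWS/Bedrock, including InputTokenCount, OutputTokenCount, and Invocations, dimensioned by ModelId), so a token-threshold alarm needs no custom pipeline. A minimal sketch of the alarm parameters, with a placeholder alarm name, model ID, and threshold:

```python
# Sketch: CloudWatch alarm on Bedrock's native token metrics.
# Namespace and metric name are real Bedrock metrics; the alarm name,
# model ID, and threshold are illustrative placeholders.
alarm_params = {
    "AlarmName": "bedrock-input-tokens-high",  # hypothetical name
    "Namespace": "AWS/Bedrock",
    "MetricName": "InputTokenCount",
    "Dimensions": [{"Name": "ModelId", "Value": "anthropic.claude-v2"}],
    "Statistic": "Sum",
    "Period": 300,                 # evaluate in 5-minute windows
    "EvaluationPeriods": 1,
    "Threshold": 1_000_000,        # tokens per window; tune to your budget
    "ComparisonOperator": "GreaterThanThreshold",
}

# The actual call would be:
# boto3.client("cloudwatch").put_metric_alarm(**alarm_params)
print(alarm_params["MetricName"])
```

Because these metrics already exist per model, dashboards and alarms built directly on them carry the least operational overhead; EventBridge-to-OpenSearch pipelines add infrastructure to maintain.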

Discussion
Question 14

A financial services company uses an AI application to process financial documents by using Amazon Bedrock. During business hours, the application handles approximately 10,000 requests each hour, which requires consistent throughput.

The company uses the CreateProvisionedModelThroughput API to purchase provisioned throughput. Amazon CloudWatch metrics show that the provisioned capacity is unused while on-demand requests are being throttled. The company finds the following code in the application:

python

    response = bedrock_runtime.invoke_model(
        modelId="anthropic.claude-v2",
        body=json.dumps(payload)
    )

The company needs the application to use the provisioned throughput and to resolve the throttling issues.

Which solution will meet these requirements?

Options:

A.

Increase the number of model units (MUs) in the provisioned throughput configuration.

B.

Replace the model ID parameter with the ARN of the provisioned model that the CreateProvisionedModelThroughput API returns.

C.

Add exponential backoff retry logic to handle throttling exceptions during peak hours.

D.

Modify the application to use the InvokeModelWithResponseStream API instead of the InvokeModel API.
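The symptom in the question (idle provisioned capacity alongside on-demand throttling) arises because invoke_model routes by the modelId parameter: a base model ID goes to on-demand capacity, while a provisioned model ARN goes to purchased capacity. A minimal sketch of the corrected call, using a made-up ARN in place of the real provisionedModelArn returned by CreateProvisionedModelThroughput:

```python
# Sketch: routing traffic to provisioned throughput. The modelId
# parameter accepts either a base model ID (on-demand) or a provisioned
# model ARN. The ARN below is a placeholder.
import json

payload = {"prompt": "Summarize this document.", "max_tokens_to_sample": 300}

# On-demand routing (what the original code did; subject to throttling):
on_demand_model_id = "anthropic.claude-v2"

# Provisioned routing (uses the purchased capacity):
provisioned_model_arn = (
    "arn:aws:bedrock:us-east-1:111122223333:provisioned-model/abcd1234"
)

invoke_args = {
    "modelId": provisioned_model_arn,
    "body": json.dumps(payload),
}
# response = bedrock_runtime.invoke_model(**invoke_args)
```

Adding model units or retry logic would not help here, since the purchased capacity is simply never being addressed.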

Discussion
Question 15

A company is implementing a serverless inference API by using AWS Lambda. The API will dynamically invoke multiple AI models hosted on Amazon Bedrock. The company needs a design that can switch between model providers in real time without modifying or redeploying Lambda code. The design must include safe rollout of configuration changes, with validation and rollback capabilities.

Which solution will meet these requirements?

Options:

A.

Store the active model provider in AWS Systems Manager Parameter Store. Configure a Lambda function to read the parameter at runtime to determine which model to invoke.

B.

Store the active model provider in AWS AppConfig. Configure a Lambda function to read the configuration at runtime to determine which model to invoke.

C.

Configure an Amazon API Gateway REST API to route requests to separate Lambda functions. Hardcode each Lambda function to a specific model provider. Switch the integration target manually.

D.

Store the active model provider in a JSON file hosted on Amazon S3. Use AWS AppConfig to reference the S3 file as a hosted configuration source. Configure a Lambda function to read the file through AppConfig at runtime to determine which model to invoke.
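Whichever configuration store is chosen, the Lambda function ultimately parses a configuration document at runtime to pick a model. A minimal sketch of that parsing step, assuming a made-up JSON schema of {"active_model": ...} (AppConfig itself imposes no schema; the fetch would use the appconfigdata API's start_configuration_session / get_latest_configuration calls or the AppConfig Lambda extension, omitted here):

```python
# Sketch: selecting the active model from a configuration payload.
# The {"active_model": ...} schema is a hypothetical example.
import json

def pick_model(config_bytes: bytes, default: str = "anthropic.claude-v2") -> str:
    """Return the model ID named in the configuration, or a default."""
    config = json.loads(config_bytes)
    return config.get("active_model", default)

# Simulated payload, shaped like a fetched hosted configuration:
sample = json.dumps({"active_model": "amazon.nova-pro-v1:0"}).encode()
print(pick_model(sample))  # -> amazon.nova-pro-v1:0
```

The distinguishing requirement in the question is safe rollout with validation and rollback, which is what AppConfig's deployment strategies and validators add over a plain Parameter Store read.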

Discussion
