| Exam Name: | Artificial Intelligence Governance Professional |
| Exam Code: | AIGP |
| Vendor: | IAPP | Certification: | Artificial Intelligence Governance |
| Questions: | 194 Q&A's | Shared By: | carla |
Scenario:
Business A provides grammar and writing assistance tools and licenses a generative AI model from Business B to enhance its offerings. Business A is concerned that the AI model might produce inappropriate or toxic content and wants to implement governance processes to prevent this.
Which of the following governance processes should Business A take to best protect its users against potentially inappropriate text?
What is the most important reason for documenting risks when developing an AI system?
CASE STUDY
A global marketing agency is adapting a large language model ("LLM") to generate content for an upcoming marketing campaign for a client's new product: a hard hat designed for construction workers of any gender to better protect them from head injuries.
The marketing agency is accessing the LLM through an application programming interface ("API") developed by a third-party technology company. They want to generate text to be used for targeted advertising communications that highlight the benefits of the hard hat to potential purchasers. Both the marketing agency and the technology company have taken reasonable steps to address AI governance.
The marketing agency has:
• Entered into a contract with the technology company with suitable representations and warranties.
• Completed an impact assessment on the LLM for this intended use.
• Built technical guidance on how to measure and mitigate bias in the LLM.
• Enabled technical aspects of transparency, explainability, robustness and privacy.
• Followed applicable regulatory requirements.
• Created specific legal statements and disclosures regarding the use of the AI in its client's advertising.
The technology company has:
• Provided guidance and resources to developers to address environmental concerns.
• Built technical guidance on how to measure and mitigate bias in the LLM.
• Provided tools and resources to measure bias specific to the LLM.
• Enabled technical aspects of transparency, explainability, robustness and privacy.
• Mapped and mitigated potential societal harms and large-scale impacts.
• Followed applicable regulatory requirements and industry standards.
• Created specific legal statements and disclosures regarding the LLM, including with respect to IP and rights to data.
Which stakeholder is responsible for the lawful collection of data used to train the foundational AI model?
What is the primary purpose of conducting ethical red-teaming on an AI system?