The correct answer is A because adversarial prompting is a defensive technique used to identify and protect against prompt injection attacks in large language models (LLMs). In adversarial prompting, developers intentionally test the model with manipulated or malicious prompts to evaluate how it behaves under attack and to harden the system by refining prompts, filters, and validation logic.
From AWS documentation:
"Adversarial prompting is used to evaluate and defend generative AI models against harmful or manipulative inputs (prompt injections). By testing with adversarial examples, developers can identify vulnerabilities and apply safeguards such as Guardrails or context filtering to prevent model misuse."
Prompt injection occurs when an attacker tries to override system or developer instructions within a prompt, leading the model to disclose restricted information or behave undesirably. Adversarial prompting helps uncover and mitigate these risks before deployment.
Explanation of other options:
B. Zero-shot prompting provides no examples and does not protect against injection attacks.
C. Least-to-most prompting is a reasoning technique used to break down complex problems step-by-step, not a security measure.
D. Chain-of-thought prompting encourages detailed reasoning by the model but can actually increase exposure to prompt injection if not properly constrained.
Referenced AWS AI/ML Documents and Study Guides:
AWS Responsible AI Practices – Prompt Injection and Safety Testing
Amazon Bedrock Developer Guide – Secure Prompt Design and Evaluation
AWS Generative AI Security Whitepaper – Adversarial Testing and Guardrails