Amazon Web Services AWS Certified AI Practitioner Exam AIF-C01 Question # 65 Topic 7 Discussion
AIF-C01 Exam Topic 7 Question 65 Discussion:
Question #: 65
Topic #: 7
A company is testing the security of a foundation model (FM). During testing, the company wants to get around the safety features and make harmful content.
Jailbreaking an LLM refers to deliberately crafting prompts to bypass its safety guardrails and produce restricted or harmful content.
Fuzzing is input testing for software.
DoS is a service availability attack.
Pen testing (C) is allowed with AWS authorization but doesn’t describe bypassing model safeguards.
???? Reference:
AWS Responsible AI – Prompt Injection and Jailbreak Risks
Contribute your Thoughts:
Chosen Answer:
This is a voting comment (?). You can switch to a simple comment. It is better to Upvote an existing comment if you don't have anything to add.
Submit