ISACA Advanced in AI Security Management (AAISM) Exam: Question #43, Topic 5 Discussion
A large language model (LLM) has been manipulated to provide advice that serves an attacker’s objectives. Which of the following attack types does this situation represent?
AAISM categorizes the manipulation of an LLM at inference time, where crafted inputs cause outputs to serve attacker objectives, as an evasion attack. Evasion attacks exploit weaknesses in the model's decision boundaries by altering queries so the model produces compromised or misleading outputs. The other options do not fit: privilege escalation refers to gaining unauthorized access rights, data poisoning targets the training phase, and model inversion reconstructs training data from model outputs. Because the manipulation here occurs at inference time and steers outputs toward the attacker's goals, this scenario represents an evasion attack.
References:
- AAISM Exam Content Outline – AI Risk Management (Adversarial Attack Types)
- AI Security Management Study Guide – Evasion and Manipulation Risks