ISACA Advanced in AI Security Management (AAISM) Exam — Topic 7, Question #68 Discussion
Question #: 68
Topic #: 7
An organization is deploying a large language model (LLM) and is concerned that input manipulations may compromise its integrity. Which of the following is the MOST effective way to determine an acceptable risk threshold?
A. Restrict all user inputs containing special characters
B. Deploy a real-time logging and monitoring system
C. Implement a static risk threshold by limiting LLM outputs
AAISM requires that risk thresholds and tolerances be set by aligning threat likelihood and impact with the organization's business context and risk appetite. Determining "acceptable" risk starts with assessing the business impact of credible threats (e.g., prompt injection leading to data exfiltration, policy evasion, or harmful actions), then translating that assessment into control intensity and thresholds. Hard input restrictions (A) and static output caps (C) are blunt measures that may degrade utility without ensuring alignment with risk appetite. Monitoring (B) is essential for detection, but by itself it does not define what level of risk is acceptable.
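The reasoning above (score each credible threat by likelihood and impact, then compare against an organization-defined acceptance threshold) can be sketched as follows. This is a minimal illustration, not an AAISM-prescribed method: the threat names, scoring scale, and the `RISK_APPETITE` value are all hypothetical.

```python
# Hypothetical threat register for an LLM deployment.
# Likelihood and impact are scored on a 1-5 scale (assumed for illustration).
threats = {
    "prompt_injection_data_exfiltration": {"likelihood": 4, "impact": 5},
    "policy_evasion": {"likelihood": 3, "impact": 3},
    "harmful_action_via_tool_use": {"likelihood": 2, "impact": 5},
}

# Hypothetical acceptance threshold derived from the organization's risk appetite.
RISK_APPETITE = 12


def risk_score(threat: dict) -> int:
    """Simple likelihood x impact scoring (one common qualitative approach)."""
    return threat["likelihood"] * threat["impact"]


for name, threat in threats.items():
    score = risk_score(threat)
    # Risks above the appetite threshold need treatment; others may be accepted.
    decision = "treat" if score > RISK_APPETITE else "accept"
    print(f"{name}: score={score} -> {decision}")
```

The point of the sketch is that the threshold (`RISK_APPETITE`) is a business decision informed by impact analysis, which is exactly what input filters, output caps, or monitoring alone cannot supply.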
[References: AI Security Management™ (AAISM) Body of Knowledge — Risk Appetite and Tolerance for AI; Threat Modeling for LLMs; Business Impact Analysis and Risk Acceptance Criteria.]