ISACA Advanced in AI Security Management (AAISM) Exam — Question #27, Topic 3 Discussion
AAISM Exam Topic 3 Question 27 Discussion:
Which of the following mitigation control strategies would BEST reduce the risk of introducing hidden backdoors during model fine-tuning via third-party components?

(Answer options reconstructed from the explanation below; exact original wording may differ.)

A. Use only open-source components
B. Apply AI-focused threat modeling combined with integrity verification of third-party components
C. Disable runtime logging during fine-tuning
D. Switch to unsupervised learning
AAISM highlights threat modeling and supply chain integrity checks as key controls for managing AI-specific risks, including hidden backdoors in third-party models, libraries, and fine-tuning artifacts. The official guidance states that organizations should “identify adversarial insertion points, verify component integrity, and continuously test for malicious behaviors introduced via external components.” This is precisely what option B describes. Merely using open-source components (A) does not guarantee security; publicly available code can still carry malicious modifications. Disabling runtime logs (C) reduces visibility and therefore makes backdoor detection harder, increasing risk rather than mitigating it. Choosing unsupervised learning (D) is a modeling approach with no inherent relation to backdoor risk reduction. The explicit combination of AI-focused threat modeling and integrity verification of external components is the recommended best practice for mitigating this class of attack.
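As a concrete illustration of the component-integrity-verification control that option B describes, the sketch below checks a downloaded third-party model artifact's SHA-256 digest against a trusted manifest before fine-tuning proceeds. This is a minimal example of the general technique, not an implementation from the AAISM guide; the file names and manifest structure are assumptions for illustration.

```python
import hashlib
from pathlib import Path

# Hypothetical trusted manifest: artifact path -> expected SHA-256 digest,
# obtained out-of-band from the component vendor (e.g. a signed release page).
TRUSTED_MANIFEST = {
    "third_party/base_model.bin": "0" * 64,  # placeholder digest
}

def sha256_of_file(path: str) -> str:
    """Stream the file in chunks so large model artifacts fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Return True only if the artifact matches its trusted digest."""
    return Path(path).is_file() and sha256_of_file(path) == expected_sha256

def gate_fine_tuning(artifact: str) -> None:
    """Refuse to start fine-tuning on an unverified or tampered artifact."""
    expected = TRUSTED_MANIFEST.get(artifact)
    if expected is None or not verify_artifact(artifact, expected):
        raise RuntimeError(f"Integrity check failed for {artifact}; aborting fine-tuning.")
```

In practice this check would be one control among several: threat modeling identifies where an adversary could insert a backdoor (the model file, a tokenizer, a training library), and integrity verification plus behavioral testing then guards each of those insertion points.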
References: AI Security Management™ (AAISM) Study Guide – AI Supply Chain Risk Management; Threat Modeling and Component Integrity.
Chosen Answer: B