The greatest challenge for IS auditors in evaluating the explainability of generative AI models is the changing nature of algorithms as the AI continues to learn (option D). Generative AI models, especially those built on advanced techniques such as deep learning and reinforcement learning, often employ continuous or dynamic learning, so the model evolves over time. This adaptability can significantly hinder explainability because the logic, parameters, and decision pathways may shift with each round of retraining or real-time learning.
The ISACA Advanced in AI Audit™ (AAIA™) Study Guide stresses that: “Continually learning AI systems present unique audit challenges, as their internal representations and reasoning can change after deployment, making it difficult to fully capture and explain the rationale for outputs at any given point.”
The other options, such as bias in input data or computational performance, are significant concerns, but they do not pose as fundamental a challenge to explainability as a model whose internal workings can change dynamically.
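To make the audit problem concrete, here is a minimal, hypothetical sketch (not from the study guide): a linear model trained online with stochastic gradient descent, where the "explanation" is simply its weight vector. A snapshot of the weights taken at audit time no longer describes the model after post-deployment data shifts its parameters.

```python
# Hypothetical illustration: an online learner whose internal parameters
# keep shifting after deployment, so an explanation captured at audit
# time may not describe later behavior.

def sgd_update(weights, x, y, lr=0.1):
    """One stochastic-gradient step for a linear model y ~ w . x."""
    pred = sum(w * xi for w, xi in zip(weights, x))
    err = pred - y
    return [w - lr * err * xi for w, xi in zip(weights, x)]

# The "explanation" here is just the weight vector: which inputs
# drive the output.
weights = [0.0, 0.0]

# Phase 1 (pre-audit): training data where only feature 0 matters.
for _ in range(200):
    weights = sgd_update(weights, [1.0, 0.0], 1.0)
snapshot_at_audit = list(weights)  # auditor records this explanation

# Phase 2 (post-deployment): new data makes feature 1 matter as well,
# and continued learning silently updates the model.
for _ in range(200):
    weights = sgd_update(weights, [0.0, 1.0], 1.0)

print(snapshot_at_audit)  # weight on feature 1 is still zero
print(weights)            # weight on feature 1 is now significant
```

The audited snapshot attributes the output entirely to feature 0, yet the live model now also relies on feature 1; any explanation report produced at audit time is already stale, which is exactly the difficulty the study guide describes for continually learning systems.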
[Reference: ISACA Advanced in AI Audit™ (AAIA™) Study Guide, Section: "AI Explainability and Dynamic Models"]