When AI models are updated frequently in production, continuous monitoring is critical to detect performance degradation, bias drift, hallucinations, and security issues introduced by new versions. A lack of continuous monitoring (option C) means the organization may fail to promptly detect harmful behaviors or compliance violations introduced by frequent changes, exposing it to operational, reputational, and regulatory risk.
Option A (no AI-specific change management) is serious but can be partially mitigated if effective monitoring reveals issues quickly. Option B (overreliance on manual review) is inefficient but still a control. Option D (no dedicated AI governance committee) is a structural weakness, yet the immediate operational risk is greatest where model changes are not continuously observed. AAIA emphasizes supervision of AI solutions and monitoring of their outputs and impacts, both of which are directly undermined when continuous monitoring is absent.
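The kind of continuous monitoring the answer points to can be illustrated with a simple post-deployment drift check. The sketch below computes the Population Stability Index (PSI) between a baseline score distribution captured at deployment and live scores from a new model version; PSI and the 0.1/0.2 thresholds are common industry rules of thumb, not controls prescribed by AAIA, and the data here is synthetic for illustration.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compute PSI between a baseline and a live score distribution.

    Rule-of-thumb interpretation: < 0.1 stable, 0.1-0.2 moderate drift,
    > 0.2 significant drift warranting investigation.
    """
    # Bin edges taken from the baseline distribution's percentiles
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # cover out-of-range live values

    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)

    # Convert counts to proportions, clipping to avoid log(0)
    exp_pct = np.clip(exp_counts / exp_counts.sum(), 1e-6, None)
    act_pct = np.clip(act_counts / act_counts.sum(), 1e-6, None)

    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Synthetic example: baseline scores vs. stable and drifted live traffic
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # distribution at deployment
stable = rng.normal(0.0, 1.0, 10_000)    # live traffic, no drift
drifted = rng.normal(0.7, 1.0, 10_000)   # live traffic, mean shifted

print(f"stable PSI:  {population_stability_index(baseline, stable):.3f}")
print(f"drifted PSI: {population_stability_index(baseline, drifted):.3f}")
```

In practice a check like this would run on a schedule after every model release, alerting reviewers when the index crosses the agreed threshold; it is one narrow example of the broader output-and-impact monitoring the exam outline describes.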
References:
- ISACA, AAIA Exam Content Outline, Domain 2: AI Operations (Supervision of AI Solutions; Change Management Specific to AI).
- ISACA materials on continuous monitoring and post-deployment oversight of AI systems.