Sudden and unexplained changes in AI-generated credit scores may result from data drift, model overfitting, or a lack of recalibration. According to the AAIA™ Study Guide, regular expert review and calibration help maintain model reliability and transparency, particularly in high-stakes decisions such as credit scoring.
“Ongoing human oversight ensures that predictive models remain stable and justifiable. In high-impact environments, such as banking, experts must review and recalibrate AI systems to prevent opaque or unexpected behavior.”
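To make the data-drift mechanism above concrete, the sketch below computes the Population Stability Index (PSI), a metric commonly used in credit-risk monitoring to flag when the live score distribution has shifted away from the distribution the model was built on. This is an illustrative example, not from the study guide; the function name, thresholds, and synthetic data are assumptions based on common industry practice.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Population Stability Index (PSI), a widely used data-drift metric.

    Compares the score distribution at model build time ('expected') with
    the live distribution ('actual'). A common rule of thumb (an industry
    convention, not a study-guide figure): PSI < 0.1 stable, 0.1-0.25
    moderate drift, > 0.25 significant drift warranting recalibration.
    """
    # Decile edges derived from the reference (training) distribution
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0] -= 1e-9  # make the lowest edge inclusive

    # Clip live values into the reference range so none fall outside the bins
    actual = np.clip(actual, edges[0], edges[-1])

    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)

    # Guard against log(0) in sparsely populated bins
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)

    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
train_scores = rng.normal(650, 50, 10_000)    # reference credit scores
stable_scores = rng.normal(650, 50, 10_000)   # same population, no drift
drifted_scores = rng.normal(620, 60, 10_000)  # applicant mix has shifted

print(f"stable PSI:  {population_stability_index(train_scores, stable_scores):.3f}")
print(f"drifted PSI: {population_stability_index(train_scores, drifted_scores):.3f}")
```

Running a check like this on a schedule, and escalating high-PSI results to human experts, is one way the "review and recalibrate" oversight described above is operationalized in practice.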
Option B risks excluding relevant long-term patterns. Option C increases risk by removing human oversight. Option D is a validation strategy, not a stability control. Therefore, Option A is the best answer.
[Reference: ISACA Advanced in AI Audit™ (AAIA™) Study Guide, Section: “AI Operations and Performance,” Subsection: “Model Monitoring and Recalibration Strategies”]