AAISM technical coverage identifies recall as the metric that specifically measures a model's ability to capture all true positive cases out of the total actual positives, computed as TP / (TP + FN). High recall means the system minimizes false negatives, ensuring that relevant instances are not overlooked. Precision, by contrast, measures correctness among predicted positives, specificity focuses on true negatives, and the F1 score balances precision and recall but does not by itself indicate how completely positives are captured. The official study guide defines recall as the most direct metric for evaluating how well a model identifies all relevant positive cases, making it the correct answer.
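As an illustration, the minimal Python sketch below (not part of the AAISM material; the function names and example counts are hypothetical) computes each of the four metrics from confusion-matrix counts, making the distinction concrete:

def recall(tp: int, fn: int) -> float:
    # Recall = TP / (TP + FN): share of actual positives the model captured.
    return tp / (tp + fn)

def precision(tp: int, fp: int) -> float:
    # Precision = TP / (TP + FP): correctness among predicted positives.
    return tp / (tp + fp)

def specificity(tn: int, fp: int) -> float:
    # Specificity = TN / (TN + FP): share of actual negatives identified.
    return tn / (tn + fp)

def f1(tp: int, fp: int, fn: int) -> float:
    # F1 = harmonic mean of precision and recall.
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

# Example: 90 true positives and 10 false negatives give recall = 0.9,
# i.e. 10% of relevant cases were missed, regardless of precision.
print(recall(tp=90, fn=10))  # 0.9

Note how recall depends only on true positives and false negatives: it answers "of everything that was actually positive, how much did the model find?", which is exactly the completeness question the item asks about.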
References:
- AAISM Study Guide – AI Technologies and Controls (Evaluation Metrics and Model Performance)
- ISACA AI Security Management – Model Accuracy and Completeness Assessments