Bias in AI models is most commonly introduced through the training data. The AAIA™ Study Guide highlights that to ensure fairness, auditors and developers must evaluate the diversity, representativeness, and quality of the data used to train the model.
“The greatest source of bias in AI comes from the training data. Reviewing and auditing this data is critical to ensuring that outputs do not disproportionately affect specific groups or skew results.”
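The kind of training-data review the guide describes can be partially automated. The sketch below is a minimal, hypothetical example (the dataset, attribute name, and baseline shares are all invented for illustration) of comparing each group's share of a training sample against an expected reference share, flagging the gaps an auditor would investigate:

```python
from collections import Counter

def representation_report(records, attribute, baseline):
    """Compare each group's share in the training data against a
    reference (e.g. population) share and report the gap."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    report = {}
    for group, expected in baseline.items():
        observed = counts.get(group, 0) / total
        report[group] = {
            "observed": round(observed, 3),
            "expected": expected,
            "gap": round(observed - expected, 3),
        }
    return report

# Hypothetical training sample skewed toward group "A".
data = [{"group": "A"}] * 70 + [{"group": "B"}] * 30
print(representation_report(data, "group", {"A": 0.5, "B": 0.5}))
```

A real audit would extend this to intersectional groups, label distributions, and data-quality checks, but even a simple share-vs-baseline comparison surfaces the skew that the guide warns can disproportionately affect specific groups.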
While adaptability (C) and model parameters such as temperature (D) influence model behavior, they do not address the root cause of most bias. The development environment (B) supports infrastructure but does not provide ethical assurance.
[Reference: ISACA Advanced in AI Audit™ (AAIA™) Study Guide, Section: “Ethical and Legal Considerations in AI,” Subsection: “Bias and Fairness in AI Systems”]