The ISTQB CT-AI syllabus explains in Section 4.5 – Testing AI-Specific Risks that adversarial testing is a structured test activity in which testers apply adversarial attacks (crafted or perturbed inputs) to intentionally expose weaknesses in the ML model. The purpose is to identify vulnerabilities that could be exploited through data poisoning, evasion attacks, or input manipulation. Option C correctly reflects this syllabus definition: adversarial testing is about using attacks to locate weaknesses so they can be removed or mitigated.
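To make the idea of a "crafted or perturbed input" concrete, here is a minimal sketch of an evasion-style attack (FGSM: step in the sign of the loss gradient) against a toy logistic-regression model. The weights, inputs, and epsilon are all hypothetical values chosen for illustration; a real adversarial test would target the system under test's actual model.

```python
import numpy as np

# Hypothetical logistic-regression "model" (weights chosen for illustration).
w = np.array([2.0, -1.5, 0.5])
b = 0.1

def predict_proba(x):
    # Probability of class 1 under the toy model.
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm_perturb(x, y_true, eps=0.5):
    # For logistic loss, the gradient w.r.t. the input is (p - y) * w.
    p = predict_proba(x)
    grad = (p - y_true) * w
    # FGSM: step in the direction that INCREASES the loss.
    return x + eps * np.sign(grad)

x = np.array([0.2, 0.1, -0.3])   # clean input, correctly classified as class 1
y = 1.0
x_adv = fgsm_perturb(x, y)

print(predict_proba(x))     # > 0.5: correct prediction on the clean input
print(predict_proba(x_adv)) # < 0.5: the small perturbation flips the prediction
```

A tester would treat the flipped prediction as evidence of a robustness weakness to be reported and mitigated, e.g. via adversarial training.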
Option A is incorrect because regression testing does not verify data sourcing policies; it verifies unchanged functionality after modifications. Option B is incorrect because adversarial examples are often added to training datasets to improve robustness (a practice called adversarial training), not excluded. Option D is incorrect because A/B testing is not described as superior to exploratory data analysis in outlier detection; the two have different purposes, and EDA remains essential for data quality assessment.
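The adversarial-training point behind Option B can be sketched as simple dataset augmentation: generate perturbed copies of the training inputs and train on the union of clean and adversarial examples. The linear model, data, and epsilon below are assumptions for illustration only.

```python
import numpy as np

# Hypothetical training data and current model weights, for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
w = np.array([1.0, -1.0, 0.5])
y = (X @ w > 0).astype(float)

def adversarial_copies(X, y, eps=0.1):
    # Logistic-loss input gradient for a linear model: (sigmoid(Xw) - y) * w.
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    grad = (p - y)[:, None] * w[None, :]
    return X + eps * np.sign(grad)  # FGSM-style perturbation per example

# Adversarial training: the perturbed examples are ADDED, not excluded.
X_aug = np.vstack([X, adversarial_copies(X, y)])
y_aug = np.concatenate([y, y])

print(X_aug.shape)  # twice as many rows as the clean set
```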
Thus, Option C is consistent with syllabus-defined adversarial testing.