Per AAISM’s ML lifecycle controls, hyperparameter tuning is performed on the validation set, reserving the test set strictly for final, unbiased performance estimation. The training set is used to fit parameters; the validation set guides model selection and hyperparameter optimization; the test set is untouched until the end to prevent leakage and optimistic bias. “Configuration” is not a dataset type in the lifecycle split.
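The split-and-tune discipline described above can be sketched in a few lines. This is an illustrative toy, not from AAISM materials: the data, the one-hyperparameter "threshold classifier," and the 60/20/20 ratios are all assumptions chosen to keep the example self-contained.

```python
import random

# Toy data (illustrative): points labeled 1 when x > 50.
random.seed(0)
data = [(x, int(x > 50)) for x in range(100)]
random.shuffle(data)

n = len(data)
train = data[: int(0.6 * n)]               # fit model parameters
val = data[int(0.6 * n): int(0.8 * n)]     # guide hyperparameter selection
test = data[int(0.8 * n):]                 # touched once, at the very end

def accuracy(threshold, split):
    # Stand-in "model" with one hyperparameter: predict positive when x > threshold.
    return sum((x > threshold) == bool(y) for x, y in split) / len(split)

# Tuning consults ONLY the validation set; the test set stays sealed.
candidates = [10, 30, 50, 70, 90]
best = max(candidates, key=lambda t: accuracy(t, val))

# Final, one-time evaluation on the held-out test set.
final = accuracy(best, test)
print(best, final)
```

Because the test set is consulted exactly once, after `best` is fixed, `final` is an unbiased estimate; reusing the test set during tuning would reintroduce the optimistic bias the answer warns about.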
References:
- AI Security Management™ (AAISM) Body of Knowledge: Model Development Controls—Data Splitting and Evaluation Integrity
- AAISM Study Guide: Overfitting Avoidance; Validation vs. Test Separation; Leakage Prevention
- AAISM Mapping to Standards: Evaluation Integrity—Hold-out Protocols and Tuning Practices