AAISM materials make clear that the best safeguard against sensitive information leaking through LLM outputs is data sanitization. This means filtering, redacting, or masking sensitive content before the model can use it, preventing unintended disclosure in its responses. Encryption protects confidentiality in storage and transmission but does not stop leaks in model outputs. Adversarial testing helps identify vulnerabilities but does not by itself prevent exposure. Least privilege access restricts who can interact with the model but does not sanitize the content of its outputs. The control most directly tied to preventing leakage is implementing data sanitization techniques.
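For illustration only, here is a minimal sketch of sanitization applied at the prompt boundary, assuming simple regex-based redaction; the patterns, labels, and function names are hypothetical and not drawn from AAISM materials. A real deployment would rely on a vetted PII-detection tool and organization-specific rules.

```python
import re

# Illustrative patterns only; a production system would use a vetted
# PII-detection library and organization-specific redaction rules.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def sanitize(text: str) -> str:
    """Mask sensitive values before the text reaches the model."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = "Customer jane.doe@example.com (SSN 123-45-6789) asked about her bill."
    print(sanitize(prompt))
    # Customer [EMAIL REDACTED] (SSN [SSN REDACTED]) asked about her bill.
```

The key design point is that sanitization runs before the model ever sees the data, so the sensitive values cannot reappear in its outputs regardless of how the model is prompted.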
References: AAISM Exam Content Outline – AI Technologies and Controls (Data Leakage Prevention); AI Security Management Study Guide – Sensitive Data Controls in Generative AI