Output provenance verification (e.g., robust watermarking, cryptographic signing, content credentials, and chain-of-custody attestations) is the primary preventive and detective control against deepfakes at scale. It enables receivers and downstream systems to verify that media originates from a trusted source and has not been tampered with. While risk assessments (Option B) and governance policies (Option C) set expectations, they do not technically prevent forged media. Input validation (Option D) does not address the authenticity of media once it has been generated or received.
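As a minimal sketch of the cryptographic-signing element of such a pipeline, the example below signs a digest of the media bytes at publication time and verifies the detached signature on receipt. It assumes the third-party Python "cryptography" package and an Ed25519 key pair held by the publisher; the function names are illustrative only, and real deployments would layer in key distribution, manifest formats such as C2PA content credentials, and revocation handling.

```python
# Illustrative sketch: detached Ed25519 signature as a media provenance check.
# Assumes the "cryptography" package is installed; key management is out of scope.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_media(private_key: Ed25519PrivateKey, media: bytes) -> bytes:
    """Publisher side: sign a SHA-256 digest of the media at creation/export time."""
    digest = hashlib.sha256(media).digest()
    return private_key.sign(digest)


def verify_media(public_key: Ed25519PublicKey, media: bytes, signature: bytes) -> bool:
    """Receiver side: accept media only if the signature matches a trusted public key."""
    digest = hashlib.sha256(media).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()       # in practice, held by the trusted source
    media = b"example media bytes"           # placeholder for actual image/video data
    sig = sign_media(key, media)

    print(verify_media(key.public_key(), media, sig))         # True: untampered
    print(verify_media(key.public_key(), media + b"x", sig))  # False: content modified
```

Verification fails on any modification of the signed bytes, which is what gives receivers a detective control; pairing it with robust watermarking covers cases where the signature or metadata has been stripped.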
References:
AAISM Body of Knowledge: Content Authenticity, Watermarking, and Provenance; Trustworthy AI Outputs and Media Integrity Controls.
AAISM Study Guide: Mitigations for Synthetic Media Risks; Watermark/Signature Verification Pipelines; Content Credentials in Enterprise Controls.