The Einstein Trust Layer is designed to support responsible and compliant AI usage. Data Masking (B) is the mechanism that directly addresses compliance with data protection regulations such as GDPR: it obscures or anonymizes sensitive personal data (e.g., names, emails, phone numbers) before the data is sent to AI models. This prevents unauthorized exposure of personally identifiable information (PII) and helps ensure adherence to privacy laws.
Salesforce documentation describes Data Masking as a core component of the Einstein Trust Layer, enabling organizations to meet GDPR requirements by automatically redacting sensitive fields before prompts are sent to large language models. Because the PII is masked at that point, it is not exposed to external model providers or retained for model training.
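To make the concept concrete, here is a minimal, hypothetical Python sketch of pattern-based PII masking. It is not Salesforce's implementation; the regexes, placeholder-token format, and de-masking step are illustrative assumptions only, but they show the general idea of swapping PII for tokens before a prompt reaches a model and restoring the values afterwards.

```python
import re

# Illustration only: the Einstein Trust Layer masks data inside the Salesforce
# platform; this sketch just demonstrates the general masking/de-masking idea.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def mask_pii(prompt: str) -> tuple[str, dict[str, str]]:
    """Replace emails and phone numbers with placeholder tokens.

    Returns the masked prompt plus a token-to-original map so the model's
    response can be de-masked afterwards.
    """
    mapping: dict[str, str] = {}

    def substitute(pattern: re.Pattern, label: str, text: str) -> str:
        def repl(match: re.Match) -> str:
            token = f"<{label}_{len(mapping)}>"
            mapping[token] = match.group(0)
            return token
        return pattern.sub(repl, text)

    masked = substitute(EMAIL_RE, "EMAIL", prompt)
    masked = substitute(PHONE_RE, "PHONE", masked)
    return masked, mapping

def demask(response: str, mapping: dict[str, str]) -> str:
    """Restore the original values in the generated response."""
    for token, original in mapping.items():
        response = response.replace(token, original)
    return response

if __name__ == "__main__":
    prompt = "Draft a follow-up to jane.doe@example.com, phone +1 415 555 0100."
    masked, pii_map = mask_pii(prompt)
    print(masked)                    # PII replaced by <EMAIL_0>, <PHONE_1>
    print(demask(masked, pii_map))   # tokens restored after the model responds
```

In this sketch only the placeholder tokens ever leave the masking function, which mirrors the compliance benefit the Trust Layer provides: the external model never sees the underlying PII.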
In contrast:
Toxicity Scoring (A) identifies harmful or inappropriate content in outputs but does not address data privacy.
Prompt Defense (C) guards against malicious prompts or injection attacks but focuses on security rather than data protection compliance.
References: Salesforce Help Article: Einstein Trust Layer ("Data Masking" section); Einstein Trust Layer Overview: "Data Protection and Compliance Features" (GDPR alignment via Data Masking).