A prompt template defines how the model is structured and guided (system prompts, roles, guardrails).
An attack that reveals or leaks this prompt template is known as a prompt extraction attack.
The other options (persona switching, exploiting friendliness, ignoring prompts) describe adversarial techniques but do not directly expose the internal configured behavior.
???? Reference:
AWS Responsible AI – Prompt Injection & Extraction Attacks
Contribute your Thoughts:
Chosen Answer:
This is a voting comment (?). You can switch to a simple comment. It is better to Upvote an existing comment if you don't have anything to add.
Submit