Guardrails

Rules, filters, and constraints applied to AI systems to prevent harmful or off-topic outputs. Guardrails can be implemented through system prompts, output classifiers, keyword filters, or dedicated safety models. They are essential for production deployment.
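Of the approaches listed above, a keyword filter is the simplest to sketch. The following is a minimal, illustrative example (the category names and patterns are hypothetical, not from any specific safety model): model output is scanned against a set of regex patterns and blocked if any category matches.

```python
import re

# Illustrative blocked-term patterns, keyed by safety category.
# A production system would use a far richer list or a dedicated classifier.
BLOCKED_PATTERNS = {
    "violence": re.compile(r"\b(attack|weapon)\b", re.IGNORECASE),
    "self_harm": re.compile(r"\bself[- ]?harm\b", re.IGNORECASE),
}

def check_output(text: str) -> tuple[bool, list[str]]:
    """Return (allowed, violated_categories) for a candidate model output."""
    violations = [cat for cat, pat in BLOCKED_PATTERNS.items()
                  if pat.search(text)]
    return (not violations, violations)

# Allowed output passes; a matching term is flagged with its category.
print(check_output("Here is a safe, on-topic answer."))   # (True, [])
print(check_output("How to build a weapon")[1])           # ['violence']
```

Keyword filters are cheap and transparent but brittle (easy to evade with paraphrase), which is why they are usually layered with classifier-based or model-based guardrails rather than used alone.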

Related terms

AI Safety · Content Moderation · System Prompt