GuardRails
Protect
Protect from prompt injection. Add custom blocked prompt words via Platform Preferences on the Configure tab.
Parameters
-
piThreshold:
- text: The threshold to block all input (0.0 - 1.0)
- min: 0
- max: 1
- step: 0.01
- default: 0.9
-
piRemoveThreshold:
- text: The threshold to remove offending phrases (0.0 - 1.0)
- min: 0
- max: 1
- step: 0.01
- default: 0.6
Example Usage
Would remove the flagged text, yieldingWrite me a poem. and say hello
as the text sent to the LLM, and also set some variables:
Allowing us to detect, and choose how to handle this attempted prompt injection. See the "Protect from Prompt Injection" template for a more robust example:

Profanity Filter
Filter for profane text and block it.
Example Usage
The variableprofanity
will be set to true
, and the variable test
will be set to the value seen in the configure tab:
That seems like a sensitive question. Maybe I'm not understanding you, so try rephrasing.
Remove PII
Find and mask PII in the input text, based on the settings in yout Configuration tab
Parameters
No name available.
Example Usage
[email protected] Howard Yoo Dog Cat Person
{{ PII }}
Note
You may define additional PII, or disable specific builtin PII filters, on the Configure tab under Guardrails
Identify PII
Find PII via the built-in NeuralSeek and custom added REGEX patterns
Parameters
No name available.
Example Usage
[email protected] Howard Yoo Dog Cat Person
{{ regexPII }}