docs/concepts/envars.md (3 additions, 3 deletions)
@@ -2,11 +2,11 @@
Guardrails recognizes a handful of environment variables that can be used at runtime. Most of these correspond to environment variables used or expected by the various LLM clients. Below is a list of these variables and their uses.
-##OPENAI_API_KEY
+### `OPENAI_API_KEY`
This environment variable can be used to set your API key credentials for OpenAI models. It will be used wherever OpenAI is called, unless an `api_key` kwarg is passed to `__call__` or `parse`.
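For illustration, here is a minimal sketch of both ways to supply the key (the RAIL file name and the OpenAI callable are placeholders; the exact return value depends on your Guardrails version):

```python
import os

import openai
import guardrails as gd

# Option 1: rely on the environment variable.
os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder key

guard = gd.Guard.from_rail("my_spec.rail")  # hypothetical RAIL spec
result = guard(openai.Completion.create)

# Option 2: pass the key explicitly to __call__ (or parse);
# an explicit api_key kwarg takes precedence over the environment variable.
result = guard(openai.Completion.create, api_key="sk-...")
```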
-##GUARDRAILS_PROCESS_COUNT
+### `GUARDRAILS_PROCESS_COUNT`
This environment variable can be used to set the process count for the multiprocessing executor. The multiprocessing executor is used to run validations in parallel where possible. To disable this behaviour and force synchronous validation, you can set this environment variable to `'1'`. The default is `'10'`.
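For example, to force synchronous validation from Python before any validation runs:

```python
import os

# A single worker process disables parallel validation and runs validators synchronously.
os.environ["GUARDRAILS_PROCESS_COUNT"] = "1"
```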
-##INSPIREDCO_API_KEY
+### `INSPIREDCO_API_KEY`
This environment variable can be used to set your API key credentials for the Inspired Cognition API client. It will be used wherever the Inspired Cognition API is called. Currently it is only used in the `is-high-quality-translation` validator.
| Variables |`${variable_name}`| These are provided by the user at runtime, and substituted in the instructions. |
-| Output Schema |`${output_schema}`| This is the schema of the expected output, and is compiled based on the `output` element. For more information on how the output schema is compiled for the instructions, check out [`output` element compilation](../output/#adding-compiled-output-element-to-prompt).|
+| Output Schema |`${output_schema}`| This is the schema of the expected output, and is compiled based on the `output` element. For more information on how the output schema is compiled for the instructions, check out [`output` element compilation](/docs/concepts/output/#adding-compiled-output-element-to-prompt).|
| Prompt Primitives |`${gr.prompt_primitive_name}`| These are pre-constructed blocks of text that are useful for common tasks. E.g., some primitives may contain information that helps the LLM understand the output schema better. To see the full list of prompt primitives, check out [`guardrails/constants.xml`](https://github.com/guardrails-ai/guardrails/blob/main/guardrails/constants.xml). |
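As a sketch of how the variable substitution works, `${...}` placeholders in the instructions (and prompt) are filled from the `prompt_params` dictionary at call time; the spec file and placeholder name below are illustrative:

```python
import openai
import guardrails as gd

# Hypothetical spec whose instructions contain ${document}.
guard = gd.Guard.from_rail("summarizer.rail")

result = guard(
    openai.Completion.create,
    prompt_params={"document": "Text to summarize goes here."},
)
```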
docs/concepts/logs.md (3 additions, 3 deletions)
@@ -1,10 +1,10 @@
-# Inspecting logs
+# Logs
All `Guard` calls are logged internally, and can be accessed via the guard history.
## Accessing logs via `Guard.history`
-`history` is an attribute of the `Guard` class. It implements a standard `Stack` interface with a few extra helper methods and properties. For more information on our `Stack` implementation see the [Helper Classes](/api_reference/helper_classes) page.
+`history` is an attribute of the `Guard` class. It implements a standard `Stack` interface with a few extra helper methods and properties. For more information on our `Stack` implementation see the [Helper Classes](/docs/api_reference_markdown/helper_classes) page.
Each entry in the history stack is a `Call` log containing information specific to a particular `Guard.__call__` or `Guard.parse` call, in the order they were executed within the current session.
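A short sketch of pulling entries off that stack (the `first` and `last` helpers and the printed property are assumptions based on the Helper Classes and History & Logs references):

```python
first_call = guard.history.first  # Call log for the earliest __call__ / parse
last_call = guard.history.last    # Call log for the most recent one

# Illustrative property; see the History & Logs API reference for the full list.
print(last_call.tokens_consumed)
```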
@@ -188,4 +188,4 @@ completion token usage: 16
token usage for this step: 633
```
-For more information on the properties available on `Iteration`, ee the [History & Logs](/api_reference/history_and_logs/#guardrails.classes.history.Iteration) page.
+For more information on the properties available on `Iteration`, see the [History & Logs](/docs/api_reference_markdown/history_and_logs/#guardrails.classes.history.Iteration) page.
| Scalar types are void elements, and can't have any child elements. | Non-scalar types can be non-void, and can have closing tags and child elements. |
| Examples: `string`, `integer`, `float`, `bool`, `url`, `email`, etc. | Examples: `list` and `object` are the only non-scalar types supported by Guardrails. |
@@ -217,20 +217,23 @@ Each element can have attributes that specify additional information about the d
1. `name` attribute that specifies the name of the field. This will be the key in the output JSON. E.g.
-=== "RAIL Spec"
-```xml
-<rail version="0.1">
-<output>
-<string name="some_key" />
-</output>
-</rail>
-```
-=== "Output JSON"
-```json
-{
-"some_key": "..."
-}
-```
+=== "RAIL Spec"
+
+    ```xml
+    <rail version="0.1">
+    <output>
+        <string name="some_key" />
+    </output>
+    </rail>
+    ```
+
+=== "Output JSON"
+
+    ```json
+    {
+        "some_key": "..."
+    }
+    ```
2. `description` attribute that specifies the description of the field. This is similar to a prompt that will be provided to the LLM. It can contain more context to help the LLM generate the correct output.
3. (Coming soon!) `required` attribute that specifies whether the field is required or not. If the field is required, the LLM will be re-asked until the field is generated correctly. If the field is not required, the LLM will not be re-asked when the field is not generated correctly.
@@ -297,7 +300,7 @@ Each quality criteria is then checked against the generated output. If the quali
### Supported criteria
- Each quality criterion is relevant to a specific data type. For example, the `two-words` quality criterion is only relevant to strings, and the `positive` quality criterion is only relevant to integers and floats.
-- To see the full list of supported quality criteria, check out the [Validation](../api_reference/validators.md) page.
+- To see the full list of supported quality criteria, check out the [Validation](/docs/api_reference_markdown/validators) page.
| Variables |`${variable_name}`| These are provided by the user at runtime, and substituted in the prompt. |
-| Output Schema |`${output_schema}`| This is the schema of the expected output, and is compiled based on the `output` element. For more information on how the output schema is compiled for the prompt, check out [`output` element compilation](../output/#adding-compiled-output-element-to-prompt). |
+| Output Schema |`${output_schema}`| This is the schema of the expected output, and is compiled based on the `output` element. For more information on how the output schema is compiled for the prompt, check out [`output` element compilation](/docs/concepts/output/#adding-compiled-output-element-to-prompt). |
| Prompt Primitives |`${gr.prompt_primitive_name}`| These are pre-constructed prompts that are useful for common tasks. E.g., some primitives may contain information that helps the LLM understand the output schema better. To see the full list of prompt primitives, check out [`guardrails/constants.xml`](https://github.com/guardrails-ai/guardrails/blob/main/guardrails/constants.xml). |
docs/concepts/validators.md (3 additions, 3 deletions)
@@ -3,7 +3,7 @@
Validators are how we apply quality controls to the schemas specified in our `RAIL` specs. They specify the criteria to measure whether an output is valid, as well as what actions to take when an output does not meet those criteria.
## How do Validators work?
-When a validator is applied to a property on a schema, and output is provided for that schema, either by wrapping the LLM call or passing in the LLM output, the validators are executed against the values for the properties they were applied to. If the value for the property passes the criteria defined, a `PassResult` is returned from the validator. This `PassResult` tells Guardrails to treat the value as if it is valid. In most cases this means returning that value for that property at the end; other advanced cases, like using a value override, will be covered in other sections. If, however, the value for the property does not pass the criteria, a `FailResult` is returned. This in turn tells Guardrails to take any corrective actions defined for the property and validation. Corrective actions are defined by the `on-fail-...` attributes in a `RAIL` spec. You can read more about what corrective actions are available [here](/concepts/output/#specifying-corrective-actions).
+When a validator is applied to a property on a schema, and output is provided for that schema, either by wrapping the LLM call or passing in the LLM output, the validators are executed against the values for the properties they were applied to. If the value for the property passes the criteria defined, a `PassResult` is returned from the validator. This `PassResult` tells Guardrails to treat the value as if it is valid. In most cases this means returning that value for that property at the end; other advanced cases, like using a value override, will be covered in other sections. If, however, the value for the property does not pass the criteria, a `FailResult` is returned. This in turn tells Guardrails to take any corrective actions defined for the property and validation. Corrective actions are defined by the `on-fail-...` attributes in a `RAIL` spec. You can read more about what corrective actions are available [here](/docs/concepts/output#%EF%B8%8F-specifying-corrective-actions).
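As a sketch, a corrective action is declared per criterion with an `on-fail-<criterion>` attribute on the field; here the criterion is assumed to be `two-words` and the chosen action is `reask`:

```python
import guardrails as gd

rail_spec = """
<rail version="0.1">
  <output>
    <!-- If two-words returns a FailResult, Guardrails re-asks the LLM for this field. -->
    <string name="pet_name" format="two-words" on-fail-two-words="reask" />
  </output>
  <prompt>Suggest a name for a pet. ${gr.complete_json_suffix}</prompt>
</rail>
"""

guard = gd.Guard.from_rail_string(rail_spec)
```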
-First step is to check the docs. Each validator has an API reference that documents both its initialization arguments and any required metadata that must be supplied at runtime. Continuing with the example used above, `ExtractedSummarySentencesMatch` accepts an optional threshold argument which defaults to `0.7`; it also requires an entry in the metadata called `filepaths` which is an array of strings specifying which documents to use for the similarity comparison. You can see an example of a Validator's metadata documentation [here](../api_reference/validators.md/#guardrails.validators.ExtractedSummarySentencesMatch).
+First step is to check the docs. Each validator has an API reference that documents both its initialization arguments and any required metadata that must be supplied at runtime. Continuing with the example used above, `ExtractedSummarySentencesMatch` accepts an optional threshold argument which defaults to `0.7`; it also requires an entry in the metadata called `filepaths` which is an array of strings specifying which documents to use for the similarity comparison. You can see an example of a Validator's metadata documentation [here](/docs/api_reference_markdown/validators#extractedsummarysentencesmatch).
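A sketch of supplying that metadata at call time, assuming `guard` was built from a spec that applies this validator (the file path is a placeholder):

```python
llm_output = "..."  # summary text previously returned by the LLM

# `filepaths` is the metadata key documented for ExtractedSummarySentencesMatch.
result = guard.parse(
    llm_output,
    metadata={"filepaths": ["data/source_document.txt"]},
)
```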
Secondly, if a piece of metadata is required and not present, a `RuntimeError` will be raised. For example, if the metadata requirements are not met for the above validator, a `RuntimeError` will be raised with the following message:
> extracted-sentences-summary-match validator expects `filepaths` key in metadata
## Custom Validators
-If you need to perform a validation that is not currently supported by the [validators](../api_reference/validators.md) included in guardrails, you can create your own custom validators to be used in your local python environment.
+If you need to perform a validation that is not currently supported by the [validators](/docs/api_reference_markdown/validators) included in guardrails, you can create your own custom validators to be used in your local python environment.
A custom validator can be as simple as a single function if you do not require additional arguments:
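A sketch of what such a function might look like, assuming the `register_validator` decorator and result classes exported from `guardrails.validators` (the validator name and rule are illustrative):

```python
from typing import Any, Dict

from guardrails.validators import (
    FailResult,
    PassResult,
    ValidationResult,
    register_validator,
)


@register_validator(name="starts-with-greeting", data_type="string")
def starts_with_greeting(value: Any, metadata: Dict) -> ValidationResult:
    """Pass if the string starts with a greeting word."""
    if str(value).lower().startswith(("hello", "hi")):
        return PassResult()
    return FailResult(error_message="Value must start with a greeting.")
```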