README.md: 3 additions, 3 deletions

@@ -15,9 +15,9 @@ _Note: Guardrails is an alpha release, so expect sharp edges and bugs._
 
 Guardrails is a Python package that lets a user add structure, type and quality guarantees to the outputs of large language models (LLMs). Guardrails:
 
-✅ does pydantic-style validation of LLM outputs,
-✅ takes corrective actions (e.g. reasking LLM) when validation fails,
-✅ enforces structure and type guarantees (e.g. JSON).
+- does pydantic-style validation of LLM outputs (including semantic validation such as checking for bias in generated text, checking for bugs in generated code, etc.)
+- takes corrective actions (e.g. reasking LLM) when validation fails,
+- enforces structure and type guarantees (e.g. JSON).
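The three guarantees in the README bullets above (validation, corrective reasking, structural enforcement) can be sketched as a minimal stdlib-only loop. This is a conceptual illustration, not the Guardrails API: `fake_llm`, `validate`, and `guard` are hypothetical names, and the "LLM" is a canned stand-in whose second attempt corrects itself.

```python
import json

def fake_llm(prompt: str, attempt: int) -> str:
    # Hypothetical stand-in for an LLM call; the second attempt "corrects" itself.
    return '{"age": "not a number"}' if attempt == 0 else '{"age": 42}'

def validate(output: str) -> dict:
    # Structure and type guarantee: output must be valid JSON with an integer "age".
    data = json.loads(output)
    if not isinstance(data.get("age"), int):
        raise ValueError("age must be an integer")
    return data

def guard(prompt: str, max_reasks: int = 2) -> dict:
    # Corrective action: on validation failure, reask with the error appended.
    for attempt in range(max_reasks + 1):
        try:
            return validate(fake_llm(prompt, attempt))
        except (ValueError, json.JSONDecodeError) as err:
            prompt = f"{prompt}\nPrevious output was invalid: {err}"
    raise RuntimeError("validation failed after all reasks")

print(guard("How old is the author?"))  # {'age': 42}
```

The first attempt fails type validation, so the error message is folded back into the prompt before the second call, mirroring the reask flow the bullets describe.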
docs/index.md: 6 additions, 6 deletions

@@ -6,9 +6,9 @@ _Note: Guardrails is an alpha release, so expect sharp edges and bugs._
 
 Guardrails is a Python package that lets a user add structure, type and quality guarantees to the outputs of large language models (LLMs). Guardrails:
 
-✅ does pydantic-style validation of LLM outputs,
-✅ takes corrective actions (e.g. reasking LLM) when validation fails,
-✅ enforces structure and type guarantees (e.g. JSON).
+- does pydantic-style validation of LLM outputs. This includes semantic validation such as checking for bias in generated text, checking for bugs in generated code, etc.
+- takes corrective actions (e.g. reasking LLM) when validation fails,
+- enforces structure and type guarantees (e.g. JSON).
 
 ## 🚒 Under the hood

@@ -40,12 +40,12 @@ To learn more about the `rail` spec and the design decisions behind it, check ou
 ## 📍 Roadmap
 
 -[ ] Adding more examples, new use cases and domains
--[] Adding integrations with langchain, gpt-index, minichain, manifest
+-[x] Adding integrations with langchain, gpt-index, minichain, manifest
 -[ ] Expanding validators offering
 -[ ] More compilers from `.rail` -> LLM prompt (e.g. `.rail` -> TypeScript)
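The "enforces structure and type guarantees (e.g. JSON)" bullet in the docs above can be illustrated with a small stdlib sketch. `SCHEMA` and `enforce` are hypothetical names invented for this example; Guardrails itself derives the expected structure from a `.rail` spec rather than a dict like this.

```python
import json

# Hypothetical schema: field name -> required Python type.
SCHEMA = {"name": str, "age": int, "zip_code": str}

def enforce(raw: str, schema: dict) -> dict:
    # Structure guarantee: the output must parse as a JSON object
    # with exactly the expected keys.
    data = json.loads(raw)
    if set(data) != set(schema):
        raise ValueError(f"expected keys {sorted(schema)}, got {sorted(data)}")
    # Type guarantee: each value must match its declared type.
    for key, expected in schema.items():
        if not isinstance(data[key], expected):
            raise TypeError(f"{key} should be {expected.__name__}")
    return data

person = enforce('{"name": "Ada", "age": 36, "zip_code": "94016"}', SCHEMA)
print(person["name"])  # Ada
```

A failure at either check is the kind of error that, in the reask flow, would be sent back to the LLM instead of surfacing to the caller.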
docs/integrations/pydantic_validation.ipynb: 4 additions, 3 deletions

@@ -8,7 +8,7 @@
 "# Validating LLM Outputs with Pydantic\n",
 "\n",
 "!!! note\n",
-"    To download this example as a Jupyter notebook, click [here](https://github.com/ShreyaR/guardrails/blob/main/docs/examples/pydantic_validation.ipynb).\n",
+"    To download this example as a Jupyter notebook, click [here](https://github.com/ShreyaR/guardrails/blob/main/docs/integrations/pydantic_validation.ipynb).\n",
 "\n",
 "In this example, we will use Guardrails with Pydantic.\n",
 "\n",

@@ -38,8 +38,9 @@
 "Ordinarily, we would create an RAIL spec in a separate file. For the purposes of this example, we will create the spec in this notebook as a string following the RAIL syntax. For more information on RAIL, see the [RAIL documentation](../rail/output.md).\n",
 "\n",
 "Here, we define a Pydantic model for a `Person` with the following fields:\n",
-"- `name`: a string\n",
-"- `age`: an integer\n",
+"\n",
+"- `name`: a string \n",
+"- `age`: an integer \n",
 "- `zip_code`: a string zip code\n",
 "\n",
 "and write very simple validators for the fields as an example. As a way to show how LLM reasking can be used to generate data that is consistent with the Pydantic model, we can define a validator that asks for a zip code in California (including being perversely opposed to the \"90210\" zip code). If this validator fails, the LLM will be sent the error message and will reask the question.\n",
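The `Person` model and zip-code validator that the notebook text above describes can be sketched with the stdlib alone. This is not the notebook's Pydantic code: `check_zip` is a hypothetical helper, and the "California" check is simplified to a 5-digit code starting with "9" plus the example's deliberate rejection of "90210".

```python
from dataclasses import dataclass

def check_zip(zip_code: str) -> str:
    # Simplified stand-in for the notebook's validator: require a 5-digit
    # zip code that looks Californian (starts with "9"), and, matching the
    # example's quirk, reject "90210" outright.
    if not (len(zip_code) == 5 and zip_code.isdigit() and zip_code.startswith("9")):
        raise ValueError(f"{zip_code!r} is not a California-style zip code")
    if zip_code == "90210":
        raise ValueError('"90210" is not allowed')
    return zip_code

@dataclass
class Person:
    name: str
    age: int
    zip_code: str

    def __post_init__(self) -> None:
        # Field validation runs on construction; a failure here is the error
        # message that would be sent back to the LLM to trigger a reask.
        self.zip_code = check_zip(self.zip_code)

print(Person(name="Ada", age=36, zip_code="94016"))
```

Constructing `Person(..., zip_code="90210")` raises, which in the notebook's flow is exactly the signal used to reask the model for a compliant zip code.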