
Commit 3c337af

Merge pull request #499 from guardrails-ai/0.3.0-backup
0.3.0
2 parents 78d5052 + d50eefc commit 3c337af

152 files changed: +10717 additions, -5841 deletions


.github/workflows/ci.yml

Lines changed: 1 addition & 0 deletions
@@ -9,6 +9,7 @@ on:
   branches:
     - main
     - dev
+    - '0.3.0'

   # Allows you to run this workflow manually from the Actions tab
   workflow_dispatch:

.github/workflows/scripts/run_notebooks.sh

Lines changed: 2 additions & 1 deletion
@@ -9,7 +9,8 @@ cd docs/examples
 # Function to process a notebook
 process_notebook() {
     notebook="$1"
-    if [ "$notebook" != "valid_chess_moves.ipynb" ] && [ "$notebook" != "translation_with_quality_check.ipynb" ] && [ "$notebook" != "competitors_check.ipynb" ]; then
+    invalid_notebooks=("valid_chess_moves.ipynb" "translation_with_quality_check.ipynb" "llamaindex-output-parsing.ipynb" "competitors_check.ipynb")
+    if [[ ! " ${invalid_notebooks[@]} " =~ " ${notebook} " ]]; then
         echo "Processing $notebook..."
         poetry run jupyter nbconvert --to notebook --execute "$notebook"
         if [ $? -ne 0 ]; then

.gitignore

Lines changed: 1 addition & 0 deletions
@@ -20,6 +20,7 @@ dist/*
 .cache
 scratch/
 .coverage*
+coverage.xml
 test.db
 test.index
 htmlcov

Makefile

Lines changed: 6 additions & 0 deletions
@@ -47,6 +47,9 @@ test-cov:
 view-test-cov:
     poetry run pytest tests/ --cov=./guardrails/ --cov-report html && open htmlcov/index.html

+view-test-cov-file:
+    poetry run pytest tests/unit_tests/test_logger.py --cov=./guardrails/ --cov-report html && open htmlcov/index.html
+
 docs-serve:
     poetry run mkdocs serve -a $(MKDOCS_SERVE_ADDR)

@@ -59,6 +62,9 @@ dev:
 full:
     poetry install --all-extras

+self-install:
+    pip install -e .
+
 all: autoformat type lint docs test

 precommit:

README.md

Lines changed: 1 addition & 1 deletion
@@ -154,7 +154,7 @@ Call the `Guard` object with the LLM API call as the first argument and add any
 import openai

 # Wrap the OpenAI API call with the `guard` object
-raw_llm_output, validated_output = guard(
+raw_llm_output, validated_output, *rest = guard(
     openai.Completion.create,
     engine="text-davinci-003",
     max_tokens=1024,
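The unpacking now ends in `*rest` because a `guard(...)` call can return more than two values. A minimal sketch of the pattern (the contents of `rest` are deliberately left unspecified; treat the extra fields as an assumption rather than the library's documented return contract):

```python
import openai
from guardrails import Guard

guard = Guard.from_rail("my_railspec.rail")  # hypothetical rail spec path

# `*rest` absorbs anything returned beyond the first two values, so this
# unpacking keeps working if the call starts returning additional fields.
raw_llm_output, validated_output, *rest = guard(
    openai.Completion.create,
    engine="text-davinci-003",
    max_tokens=1024,
)

print(validated_output)
```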
Lines changed: 4 additions & 0 deletions
@@ -0,0 +1,4 @@
+::: guardrails.classes.generic
+    options:
+      members:
+        - "Stack"
Lines changed: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
+::: guardrails.classes.history
+    options:
+      members:
+        - "Call"
+        - "CallInputs"
+        - "Inputs"
+        - "Iteration"
+        - "Outputs"

docs/concepts/guard.md

Lines changed: 1 addition & 1 deletion
@@ -19,7 +19,7 @@ from guardrails import Guard

 guard = Guard.from_rail(...)

-raw_output, validated_output = guard(
+raw_output, validated_output, *rest = guard(
     openai.Completion.create,
     engine="text-davinci-003",
     max_tokens=1024,

docs/concepts/logs.md

Lines changed: 164 additions & 51 deletions
@@ -1,78 +1,191 @@
 # Inspecting logs

-All `gd.Guard` calls are logged internally, and can be accessed via two methods, `gd.Guard.guard_state` or `guardrails.log`.
+All `Guard` calls are logged internally, and can be accessed via the guard history.

-## 🪵 Accessing logs via `guardrails.log`
+## 🇻🇦 Accessing logs via `Guard.history`

-This is the simplest way to access logs. It returns a list of all `gd.Guard` calls, in the order they were made.
+`history` is an attribute of the `Guard` class. It implements a standard `Stack` interface with a few extra helper methods and properties. For more information on our `Stack` implementation, see the [Helper Classes](/api_reference/helper_classes) page.

-In order to access logs, run:
+Each entry in the history stack is a `Call` log which contains information specific to a particular `Guard.__call__` or `Guard.parse` call, in the order they were executed within the current session.

-```bash
-
-eliot-tree --output-format=ascii guardrails.log
+For example, if you have a guard:

+```py
+my_guard = Guard.from_rail(...)
 ```

-## 🇻🇦 Accessing logs via `gd.Guard.guard_state`
+and you call it multiple times:
+
+```py
+response_1 = my_guard(...)

-`guard_state` is an attribute of the `gd.Guard` class. It contains:
+response_2 = my_guard.parse(...)
+```

-1. A list of all `gd.Guard` calls, in the order they were made.
-2. For each call, reasks needed and their results.
+Then `guard.history` will have two call logs, with the first representing the first call `response_1 = my_guard(...)` and the second representing the following `parse` call `response_2 = my_guard.parse(...)`.

-To pretty print logs, run:
+To pretty print logs for the latest call, run:

 ```python
 from rich import print

-print(guard.state.most_recent_call.tree)
+print(guard.history.last.tree)
 ```
+--8<--

-![guard_state](../img/guard_history.png)
+docs/html/single-step-history.html

-To access fine-grained logs on field validation, see the FieldValidationLogs object:
+--8<--

-```python
-validation_logs = guard.guard_state.all_histories[0].history[0].field_validation_logs
-print(validation_logs.json(indent=2))
+The `Call` log will contain initial and final information about a particular guard call.
+
+```py
+first_call = my_guard.history.first
+```
+
+For example, it tracks the initial inputs as provided:
+```py
+print("prompt\n-----")
+print(first_call.prompt)
+print("prompt params\n-------------")
+print(first_call.prompt_params)
+```
+```log
+prompt
+-----
+
+You are a human in an enchanted forest. You come across opponents of different types. You should fight smaller opponents, run away from bigger ones, and freeze if the opponent is a bear.
+
+You run into a ${opp_type}. What do you do?
+
+${gr.complete_json_suffix_v2}
+
+
+Here are a few examples
+
+goblin: {"action": {"chosen_action": "fight", "weapon": "crossbow"}}
+troll: {"action": {"chosen_action": "fight", "weapon": "sword"}}
+giant: {"action": {"chosen_action": "flight", "flight_direction": "north", "distance": 1}}
+dragon: {"action": {"chosen_action": "flight", "flight_direction": "south", "distance": 4}}
+black bear: {"action": {"chosen_action": "freeze", "duration": 3}}
+beets: {"action": {"chosen_action": "fight", "weapon": "fork"}}
+
+prompt params
+-------------
+{'opp_type': 'grizzly'}
 ```

-```json
+as well as the final outputs:
+```py
+print("status: ", first_call.status)  # The final status of this guard call
+print("validated response:", first_call.validated_output)  # The final valid output of this guard call
+```
+```log
+status: pass
+validated response: {'action': {'chosen_action': 'freeze', 'duration': 3}}
+```
+
+The `Call` log also tracks cumulative values from any iterations that happen within the call.
+
+For example, if the first response from the LLM fails validation and a reask occurs, the `Call` log can provide total tokens consumed (*currently only for OpenAI models), as well as access to all of the raw outputs from the LLM:
+```py
+print("prompt token usage: ", first_call.prompt_tokens_consumed)  # Total number of prompt tokens consumed across iterations within this call
+print("completion token usage: ", first_call.completion_tokens_consumed)  # Total number of completion tokens consumed across iterations within this call
+print("total token usage: ", first_call.tokens_consumed)  # Total number of tokens consumed; equal to the sum of the two values above
+print("llm responses\n-------------")  # A Stack of the LLM responses in the order they were received
+for r in first_call.raw_outputs:
+    print(r)
+```
+```log
+prompt token usage: 909
+completion token usage: 57
+total token usage: 966
+
+llm responses
+-------------
+{"action": {"chosen_action": "freeze"}}
 {
-  "validator_logs": [],
-  "children": {
-    "name": {
-      "validator_logs": [
-        {
-          "validator_name": "TwoWords",
-          "value_before_validation": "peter parker the second",
-          "validation_result": {
-            "outcome": "fail",
-            "metadata": null,
-            "error_message": "must be exactly two words",
-            "fix_value": "peter parker"
-          },
-          "value_after_validation": {
-            "incorrect_value": "peter parker the second",
-            "fail_results": [
-              {
-                "outcome": "fail",
-                "metadata": null,
-                "error_message": "must be exactly two words",
-                "fix_value": "peter parker"
-              }
-            ],
-            "path": [
-              "name"
-            ]
-          }
-        }
-      ],
-      "children": {}
-    }
+  "action": {
+    "chosen_action": "freeze",
+    "duration": null
   }
 }
+{
+  "action": {
+    "chosen_action": "freeze",
+    "duration": 1
+  }
+}
+```
+
+For more information on `Call`, see the [History & Logs](/api_reference/history_and_logs/#guardrails.classes.history.Call) page.
+
+## 🇻🇦 Accessing logs from individual steps
+In addition to the cumulative values available directly on the `Call` log, it also contains a `Stack` of `Iteration`s. Each `Iteration` represents the logs from within a step in the guardrails process. This includes the call to the LLM, as well as parsing and validating the LLM's response.
+
+Each `Iteration` is treated as a stateless entity, so it will only contain information about the inputs and outputs of the particular step it represents.
+
+For example, in order to see the raw LLM response as well as the logs for the specific validations that failed during the first step of a call, we can access this information via that step's `Iteration`:
+
+```py
+first_step = first_call.iterations.first

+first_llm_output = first_step.raw_output
+print("First LLM response\n------------------")
+print(first_llm_output)
+print(" ")
+
+validation_logs = first_step.validator_logs
+print("\nValidator Logs\n--------------")
+for log in validation_logs:
+    print(log.json(indent=2))
+```
+```log
+First LLM response
+------------------
+{"action": {"chosen_action": "fight", "weapon": "spoon"}}
+
+
+Validator Logs
+--------------
+{
+  "validator_name": "ValidChoices",
+  "value_before_validation": "spoon",
+  "validation_result": {
+    "outcome": "fail",
+    "metadata": null,
+    "error_message": "Value spoon is not in choices ['crossbow', 'axe', 'sword', 'fork'].",
+    "fix_value": null
+  },
+  "value_after_validation": {
+    "incorrect_value": "spoon",
+    "fail_results": [
+      {
+        "outcome": "fail",
+        "metadata": null,
+        "error_message": "Value spoon is not in choices ['crossbow', 'axe', 'sword', 'fork'].",
+        "fix_value": null
+      }
+    ],
+    "path": [
+      "action",
+      "weapon"
+    ]
+  }
+}
+```
+
+Similar to the `Call` log, we can also see the token usage for just this step:
+```py
+print("prompt token usage: ", first_step.prompt_tokens_consumed)
+print("completion token usage: ", first_step.completion_tokens_consumed)
+print("token usage for this step: ", first_step.tokens_consumed)
+```
+```log
+prompt token usage: 617
+completion token usage: 16
+token usage for this step: 633
+```

-```
+For more information on the properties available on `Iteration`, see the [History & Logs](/api_reference/history_and_logs/#guardrails.classes.history.Iteration) page.
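Taken together, the rewritten page describes a small object graph: `Guard.history` is a `Stack` of `Call` logs, and each `Call` holds a `Stack` of `Iteration`s. A minimal sketch of walking that graph, assuming `my_guard` is the guard from the examples above, has already been called at least once, and that the `Stack` containers iterate like lists:

```python
# Sketch only: `my_guard` is the Guard from the examples in the page above
# and has been called at least once in this session.
for call in my_guard.history:
    print("status:", call.status)                 # pass/fail status for the whole call
    print("total tokens:", call.tokens_consumed)  # cumulative across any reasks

    # Each Call holds one Iteration per step: the LLM call plus parsing and validation.
    for step in call.iterations:
        print("raw LLM output:", step.raw_output)
        for log in step.validator_logs:           # per-validator results for this step
            print(log.json(indent=2))
```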

docs/concepts/validators.md

Lines changed: 2 additions & 2 deletions
@@ -35,7 +35,7 @@ Sometimes validators need additional parameters that are only available during runtime
 ```python
 guard = Guard.from_rail("my_railspec.rail")

-raw_output, guarded_output = guard(
+raw_output, guarded_output, *rest = guard(
     llm_api=openai.ChatCompletion.create,
     model="gpt-3.5-turbo",
     num_reasks=3,
@@ -134,7 +134,7 @@ ${guardrails.complete_json_suffix}

 guard = Guard.from_rail_string(rail_string=rail_str)

-raw_output, guarded_output = guard(
+raw_output, guarded_output, *rest = guard(
     llm_api=openai.ChatCompletion.create,
     model="gpt-3.5-turbo"
 )
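The hunks above only loosen the unpacking, but the section they sit in is about passing runtime-only parameters through the call. As a hedged sketch of that pattern (the `metadata` keys below are hypothetical and depend on which validators the rail spec actually uses):

```python
import openai
from guardrails import Guard

guard = Guard.from_rail("my_railspec.rail")

# Hypothetical runtime-only values forwarded to validators via `metadata`.
raw_output, guarded_output, *rest = guard(
    llm_api=openai.ChatCompletion.create,
    model="gpt-3.5-turbo",
    num_reasks=3,
    metadata={"allowed_departments": ["engineering", "design"]},
)
```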
