@@ -56,7 +56,7 @@ pip install guardrails-ai
 ### Create Input and Output Guards for LLM Validation
 
 1. Download and configure the Guardrails Hub CLI.
-
+
 ```bash
 pip install guardrails-ai
 guardrails configure
@@ -96,7 +96,7 @@ pip install guardrails-ai
 ```
 
 Then, create a Guard from the installed guardrails.
-
+
 ```python
 from guardrails import Guard, OnFailAction
 from guardrails.hub import CompetitorCheck, ToxicLanguage
@@ -161,7 +161,7 @@ raw_output, validated_output, *rest = guard(
 print(validated_output)
 ```
 
-This prints:
+This prints:
 ```
 {
     "pet_type": "dog",
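As the printed output above shows, `validated_output` is a plain Python dict, so downstream code can inspect or serialize it directly. A minimal sketch (only the `"pet_type"` key is visible in the truncated hunk, so nothing more is assumed about the dict's contents):

```python
import json

# Stand-in for the validated output printed above; the diff is
# truncated, so only the "pet_type" key is taken from it.
validated_output = {"pet_type": "dog"}

# Plain dicts round-trip through JSON cleanly, which is handy for
# logging or returning the validated result from an API.
serialized = json.dumps(validated_output)
assert json.loads(serialized) == validated_output

print(serialized)  # → {"pet_type": "dog"}
```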
@@ -175,7 +175,7 @@ Guardrails can be set up as a standalone service served by Flask with `guardrail
 
 1. Install: `pip install "guardrails-ai"`
 2. Configure: `guardrails configure`
-3. Create a config: `guardrails create --validators=hub://guardrails/two_words --name=two-word-guard`
+3. Create a config: `guardrails create --validators=hub://guardrails/two_words --guard-name=two-word-guard`
 4. Start the dev server: `guardrails start --config=./config.py`
 5. Interact with the dev server via the snippets below
 ```
@@ -204,7 +204,7 @@ completion = openai.chat.completions.create(
 )
 ```
 
-For production deployments, we recommend using Docker with Gunicorn as the WSGI server for improved performance and scalability.
+For production deployments, we recommend using Docker with Gunicorn as the WSGI server for improved performance and scalability.
 
 ## FAQ
 
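The production note in the last hunk can be made slightly more concrete. A minimal container sketch, with the caveat that `server:app` is a placeholder `module:callable` for illustration, not Guardrails' actual entry point:

```dockerfile
# Sketch only: "server:app" is a placeholder, not Guardrails'
# documented WSGI entry point.
FROM python:3.11-slim
RUN pip install "guardrails-ai" gunicorn
COPY config.py server.py ./
# Gunicorn serves the Flask app with multiple workers for throughput.
CMD ["gunicorn", "--workers", "4", "--bind", "0.0.0.0:8000", "server:app"]
```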