- Update docs/models/cerebras.md: use pip/uv-add format, link to Cerebras docs
- Fix httpx.AsyncClient typo in cerebras.md, groq.md, mistral.md
- Add docs/api/models/cerebras.md and update mkdocs.yml
- Remove Cerebras section from openai.md, move to main list in overview.md
- Add str | to CerebrasModelName for arbitrary model names
- Add CerebrasModelSettings with cerebras_disable_reasoning field
- Add zai_model_profile, restore unsupported_model_settings and json_schema_transformer
- Pass lowercase model name to profile functions
- Add tests/providers/test_cerebras.py with full coverage
- Remove type ignore in models/__init__.py
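A rough sketch of how the new surface area described in these commits fits together: `CerebrasModel`, `CerebrasModelSettings` with its `cerebras_disable_reasoning` field, and a `CerebrasModelName` that now also accepts arbitrary strings. This is a hypothetical usage sketch only; the import paths, the model name string, and the keyword names are assumptions, not confirmed by this diff.

```python
# Hypothetical usage sketch based on the commit list above; import paths,
# the model name string, and keyword names are assumptions, not confirmed API.
from pydantic_ai import Agent
from pydantic_ai.models.cerebras import CerebrasModel, CerebrasModelSettings
from pydantic_ai.providers.cerebras import CerebrasProvider

model = CerebrasModel(
    'llama3.3-70b',  # CerebrasModelName now also allows arbitrary `str` names
    provider=CerebrasProvider(api_key='your-cerebras-api-key'),
)
agent = Agent(
    model,
    model_settings=CerebrasModelSettings(cerebras_disable_reasoning=True),
)
result = agent.run_sync('What is the capital of France?')
print(result.output)
```

If the settings class follows the other provider-specific `ModelSettings` in pydantic-ai, it can also be passed per run (e.g. to `agent.run_sync`) instead of at agent construction.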
docs/models/cerebras.md (4 additions, 10 deletions)
````diff
@@ -5,20 +5,14 @@
 To use `CerebrasModel`, you need to either install `pydantic-ai`, or install `pydantic-ai-slim` with the `cerebras` optional group:
 
 ```bash
-pip install "pydantic-ai-slim[cerebras]"
-```
-
-or
-
-```bash
-uv add "pydantic-ai-slim[cerebras]"
+pip/uv-add "pydantic-ai-slim[cerebras]"
 ```
 
 ## Configuration
 
-To use [Cerebras](https://cerebras.ai/) through their API, go to [cloud.cerebras.ai](https://cloud.cerebras.ai/?utm_source=3pi_pydantic-ai&utm_campaign=partner_doc) and follow your nose until you find the place to generate an API key.
+To use [Cerebras](https://cerebras.ai/) through their API, go to [cloud.cerebras.ai](https://cloud.cerebras.ai/?utm_source=3pi_pydantic-ai&utm_campaign=partner_doc) and generate an API key.
 
-`CerebrasModelName` contains a list of available Cerebras models.
+For a list of available models, see the [Cerebras models documentation](https://inference-docs.cerebras.ai/models).
 
 ## Environment variable
 
@@ -64,7 +58,7 @@ agent = Agent(model)
 ...
 ```
 
-You can also customize the `CerebrasProvider` with a custom `httpx.AsyncHTTPClient`:
+You can also customize the `CerebrasProvider` with a custom `httpx.AsyncClient`:
````
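The example that follows the corrected sentence is not shown in this diff, so here is a minimal sketch of what passing a custom `httpx.AsyncClient` to `CerebrasProvider` typically looks like, mirroring the pattern in the other provider docs (groq.md, mistral.md). The `http_client` parameter name, the timeout value, and the model name are assumptions here.

```python
# Sketch: customizing CerebrasProvider with a custom httpx.AsyncClient,
# following the pattern used in the other provider docs; the http_client
# parameter name and the model name are assumptions.
from httpx import AsyncClient

from pydantic_ai import Agent
from pydantic_ai.models.cerebras import CerebrasModel
from pydantic_ai.providers.cerebras import CerebrasProvider

custom_http_client = AsyncClient(timeout=30)  # e.g. a 30-second request timeout
model = CerebrasModel(
    'llama3.3-70b',
    provider=CerebrasProvider(api_key='your-api-key', http_client=custom_http_client),
)
agent = Agent(model)
```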
docs/models/openai.md (Cerebras example removed ahead of the LiteLLM section)

````diff
-result = agent.run_sync('What is the capital of France?')
-print(result.output)
-#> The capital of France is Paris.
-```
-
 ### LiteLLM
 
 To use [LiteLLM](https://www.litellm.ai/), set the configs as outlined in the [doc](https://docs.litellm.ai/docs/set_keys). In `LiteLLMProvider`, you can pass `api_base` and `api_key`. The value of these configs will depend on your setup. For example, if you are using OpenAI models, then you need to pass `https://api.openai.com/v1` as the `api_base` and your OpenAI API key as the `api_key`. If you are using a LiteLLM proxy server running on your local machine, then you need to pass `http://localhost:<port>` as the `api_base` and your LiteLLM API key (or a placeholder) as the `api_key`.
````
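As a sketch of what that LiteLLM configuration looks like in code, assuming the usual pairing of an OpenAI-compatible model class with `LiteLLMProvider`: the model class and model name below are assumptions, while `api_base` and `api_key` are the parameters named in the paragraph above.

```python
# Sketch of the LiteLLM configuration described above; the model class and
# model name are assumptions, while api_base/api_key are the documented knobs.
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIChatModel
from pydantic_ai.providers.litellm import LiteLLMProvider

model = OpenAIChatModel(
    'gpt-4o',
    provider=LiteLLMProvider(
        api_base='https://api.openai.com/v1',  # or 'http://localhost:<port>' for a local proxy
        api_key='your-api-key',                # your OpenAI key, or a LiteLLM key/placeholder
    ),
)
agent = Agent(model)
```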