---
title: Helicone
description: Example configuration for Helicone AI Gateway
---

# [Helicone](https://www.helicone.ai/)

> Helicone API key: [https://us.helicone.ai/settings/api-keys](https://us.helicone.ai/settings/api-keys)

**Notes:**

- **Known:** icon provided; fetching the list of models is recommended, as Helicone provides access to 100+ AI models from multiple providers.

- Helicone is an AI gateway provider that enables access to models from OpenAI, Anthropic, Google, Meta, Mistral, and other providers, with built-in observability and monitoring.

- **Key Features:**
- 🚀 Access to 100+ AI models through a single gateway
- 📊 Built-in observability and monitoring
- 🔄 Multi-provider support
- ⚡ Request logging and analytics in Helicone dashboard

- **Important Considerations:**
- Make sure your Helicone account has credits so you can access the models.
- You can find all supported models in the [Helicone Model Library](https://helicone.ai/models).
- You can set your own rate limits and caching policies within the Helicone dashboard.

```yaml
- name: "Helicone"
  # For `apiKey` and `baseURL`, you can use environment variables that you define.
  # recommended environment variables:
  apiKey: "${HELICONE_KEY}"
  baseURL: "https://ai-gateway.helicone.ai"
  headers:
    x-librechat-body-parentmessageid: "{{LIBRECHAT_BODY_PARENTMESSAGEID}}"
  models:
    default: ["gpt-4o-mini", "claude-4.5-sonnet", "llama-3.1-8b-instruct", "gemini-2.5-flash-lite"]
    fetch: true
  titleConvo: true
  titleModel: "gpt-4o-mini"
  modelDisplayLabel: "Helicone"
  iconURL: "https://marketing-assets-helicone.s3.us-west-2.amazonaws.com/helicone.png"
```

**Configuration Details:**

- **apiKey:** Use the `HELICONE_KEY` environment variable to store your Helicone API key.
- **baseURL:** The Helicone AI Gateway endpoint: `https://ai-gateway.helicone.ai`.
- **headers:** The `x-librechat-body-parentmessageid` header is essential for message tracking and conversation continuity.
- **models:** Sets the default models; with `fetch` enabled, all available models are also retrieved from Helicone's API automatically.
- **fetch:** Set to `true` to automatically retrieve available models from Helicone's API.

**Setup Steps:**
1. Sign up for a Helicone account at [helicone.ai](https://helicone.ai/)
2. Generate your API key from the [Helicone dashboard](https://us.helicone.ai/settings/api-keys)
3. Set the `HELICONE_KEY` environment variable in your `.env` file
4. Copy the example configuration to your `librechat.yaml` file
5. Rebuild your Docker containers if using Docker deployment
6. Restart LibreChat to load the new configuration
7. Test by selecting Helicone from the provider dropdown
8. Head over to the [Helicone dashboard](https://us.helicone.ai/dashboard) to review your usage and settings
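
Steps 3–6 can be sketched as shell commands. This assumes a Docker Compose deployment with a `.env` file in the LibreChat root; the key value shown is a placeholder:

```shell
# Step 3: add your Helicone API key to .env (placeholder value shown)
echo 'HELICONE_KEY=your-helicone-api-key' >> .env

# Step 4: add the YAML example above to librechat.yaml, then
# Steps 5-6: rebuild and restart the containers
docker compose down
docker compose up -d --build
```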

**Potential Issues:**

- **Model Access:** Verify that you have credits within Helicone so you can access the models.
- **Rate Limiting:** You can set your own rate limits and caching policies within the Helicone dashboard.
- **Environment Variables:** Double-check that `HELICONE_KEY` is properly set and accessible to your LibreChat instance.
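
The environment-variable check in particular is easy to automate. A minimal sketch (the variable name comes from the configuration above; the helper function is illustrative, not part of LibreChat):

```python
import os

def check_helicone_key(env=os.environ) -> bool:
    """Return True when HELICONE_KEY is present and non-empty."""
    return bool(env.get("HELICONE_KEY", "").strip())

if not check_helicone_key():
    print("HELICONE_KEY is not set; LibreChat cannot authenticate to Helicone")
```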

**Testing:**
1. After configuration, select Helicone from the provider dropdown
2. Verify that models appear in the model selection
3. Send a test message and confirm it appears in your Helicone dashboard
4. Check that conversation threading works correctly with the parent message ID header
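
Step 4 can also be exercised outside LibreChat. The sketch below assembles an OpenAI-style chat request for the gateway; the base URL, header name, and model come from the YAML above, while the `/chat/completions` path and the request shape are assumptions based on the gateway being OpenAI-compatible:

```python
import json

# Path is an assumption; only the host comes from the configuration above.
GATEWAY_URL = "https://ai-gateway.helicone.ai/chat/completions"

def build_chat_request(api_key, model, prompt, parent_message_id=None):
    """Assemble the URL, headers, and JSON body for an OpenAI-style
    chat completion routed through the Helicone gateway."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    if parent_message_id:
        # Mirrors the header LibreChat injects for conversation threading.
        headers["x-librechat-body-parentmessageid"] = parent_message_id
    body = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return GATEWAY_URL, headers, json.dumps(body)

url, headers, payload = build_chat_request(
    "your-helicone-api-key", "gpt-4o-mini", "ping", "msg-123"
)
# Sending the request requires network access and a funded Helicone account,
# e.g. with urllib.request:
#   req = urllib.request.Request(url, data=payload.encode(), headers=headers)
#   print(urllib.request.urlopen(req).read())
```

A successful request should then show up in the Helicone dashboard alongside traffic from LibreChat itself.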