---
title: Helicone
description: Example configuration for Helicone AI Gateway
---

# [Helicone](https://www.helicone.ai/)

> Helicone API key: [https://us.helicone.ai/settings/api-keys](https://us.helicone.ai/settings/api-keys)

**Notes:**

- **Known:** icon provided; fetching the list of models is recommended, as Helicone provides access to 100+ AI models from multiple providers.

- Helicone is an AI Gateway provider that enables access to models from OpenAI, Anthropic, Google, Meta, Mistral, and other providers, with built-in observability and monitoring.

- **Key Features:**
  - 🚀 Access to 100+ AI models through a single gateway
  - 📊 Built-in observability and monitoring
  - 🔄 Multi-provider support
  - ⚡ Request logging and analytics in the Helicone dashboard

- **Important Considerations:**
  - Make sure your Helicone account has credits so you can access the models.
  - You can find all supported models in the [Helicone Model Library](https://helicone.ai/models).
  - You can set your own rate limits and caching policies within the Helicone dashboard.

```yaml
    - name: "Helicone"
      # For `apiKey` and `baseURL`, you can use environment variables that you define.
      # recommended environment variables:
      apiKey: "${HELICONE_KEY}"
      baseURL: "https://ai-gateway.helicone.ai"
      headers:
        x-librechat-body-parentmessageid: "{{LIBRECHAT_BODY_PARENTMESSAGEID}}"
      models:
        default: ["gpt-4o-mini", "claude-4.5-sonnet", "llama-3.1-8b-instruct", "gemini-2.5-flash-lite"]
        fetch: true
      titleConvo: true
      titleModel: "gpt-4o-mini"
      modelDisplayLabel: "Helicone"
      iconURL: "https://marketing-assets-helicone.s3.us-west-2.amazonaws.com/helicone.png"
```
**Configuration Details:**

- **apiKey:** Use the `HELICONE_KEY` environment variable to store your Helicone API key.
- **baseURL:** The Helicone AI Gateway endpoint: `https://ai-gateway.helicone.ai`
- **headers:** The `x-librechat-body-parentmessageid` header is essential for message tracking and conversation continuity.
- **models:** Sets the default models; enabling `fetch` automatically retrieves all available models from Helicone's API.
- **fetch:** Set to `true` to automatically retrieve available models from Helicone's API.

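Since the gateway speaks an OpenAI-compatible API, you can sanity-check your key outside LibreChat with a direct request. This is a sketch, not an authoritative call: it assumes a `/chat/completions` route under the configured `baseURL` and uses a model ID from the default list above; adjust both to match your account and Helicone's current routing.

```
curl https://ai-gateway.helicone.ai/chat/completions \
  -H "Authorization: Bearer $HELICONE_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello from LibreChat setup"}]
  }'
```

A successful JSON response confirms the key and base URL before you touch `librechat.yaml`; an authentication error points at the key, while a model error points at model availability or credits.
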
**Setup Steps:**
1. Sign up for a Helicone account at [helicone.ai](https://helicone.ai/)
2. Generate your API key from the [Helicone dashboard](https://us.helicone.ai/settings/api-keys)
3. Set the `HELICONE_KEY` environment variable in your `.env` file
4. Copy the example configuration to your `librechat.yaml` file
5. Rebuild your Docker containers if using Docker deployment
6. Restart LibreChat to load the new configuration
7. Test by selecting Helicone from the provider dropdown
8. Head over to the [Helicone dashboard](https://us.helicone.ai/dashboard) to review your usage and settings.

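Steps 3 and 5 above can be sketched from the shell. The key value here is a placeholder (not a real Helicone key), and the paths assume you run this from your LibreChat checkout:

```shell
# Persist the key in LibreChat's .env file (placeholder value shown).
ENV_FILE=".env"
echo 'HELICONE_KEY=your-helicone-api-key' >> "$ENV_FILE"

# Confirm the entry exists before restarting.
grep '^HELICONE_KEY=' "$ENV_FILE"

# For Docker deployments, rebuild and restart afterwards:
#   docker compose up -d --build
```
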
**Potential Issues:**

- **Model Access:** Verify that you have credits within Helicone so you can access the models.
- **Rate Limiting:** You can set your own rate limits and caching policies within the Helicone dashboard.
- **Environment Variables:** Double-check that `HELICONE_KEY` is properly set and accessible to your LibreChat instance.

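A quick way to rule out the environment-variable pitfall is to check the variable in the same shell (or container) that launches LibreChat. This sketch only tests presence, not whether the key is valid:

```shell
# Report whether HELICONE_KEY is set and non-empty in the current environment.
# An empty or missing value means LibreChat cannot substitute ${HELICONE_KEY}.
if [ -n "${HELICONE_KEY}" ]; then
  echo "HELICONE_KEY is set"
else
  echo "HELICONE_KEY is missing or empty"
fi
```

For Docker deployments, run the equivalent inside the container (e.g. via `docker compose exec`) since the variable must reach the LibreChat process itself, not just your host shell.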
**Testing:**
1. After configuration, select Helicone from the provider dropdown.
2. Verify that models appear in the model selection.
3. Send a test message and confirm it appears in your Helicone dashboard.
4. Check that conversation threading works correctly with the parent message ID header.