Add Claude API integration with generic environment variables #327
Open — ddworken wants to merge 11 commits into master from claude-api-integration
Conversation
This commit adds support for using Anthropic's Claude API as an alternative to OpenAI for AI-powered shell command suggestions. The implementation uses Claude's OpenAI-compatible endpoint for simplicity and minimal code changes.

Key changes:
- Added provider detection logic that checks `ANTHROPIC_API_KEY` and `OPENAI_API_KEY`
- Implemented generic environment variables for multi-provider support:
  - `AI_API_KEY` (works with either provider)
  - `AI_API_MODEL` (replaces `OPENAI_API_MODEL`)
  - `AI_API_NUMBER_COMPLETIONS` (replaces `OPENAI_API_NUMBER_COMPLETIONS`)
  - `AI_API_SYSTEM_PROMPT` (replaces `OPENAI_API_SYSTEM_PROMPT`)
- All legacy `OPENAI_*` variables still work for backwards compatibility
- Auto-detection of the provider based on which API key is set
- Proper authentication headers (`x-api-key` for Claude, `Authorization: Bearer` for OpenAI)
- Updated both the client-side code and the server-side proxy to support both providers
- Added a test for the Claude API integration
- Default models: `gpt-4o-mini` for OpenAI, `claude-sonnet-4-5` for Claude

The configuration defaults to the appropriate endpoint based on which API keys are available in the environment.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
Claude's OpenAI-compatible endpoint only supports n=1 (single completion). This commit adds logic to automatically limit numberCompletions to 1 when using the Anthropic provider.
The test needs to explicitly pass a Claude model name (claude-sonnet-4-5) instead of relying on environment variable detection for model selection.
Make both OpenAI and Claude tests skip when API keys aren't available, without checking for branch or GitHub Action context. This is simpler and avoids issues with branch detection in PR contexts. Also removes unused testutils import.
Update the skip logic to check for valid key prefixes:
- OpenAI keys must start with `sk-`
- Anthropic keys must start with `sk-ant-`

This prevents tests from running with placeholder or empty keys in CI environments.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
The previous logic checked ANTHROPIC_API_KEY first, causing it to return ProviderAnthropic even when calling the OpenAI endpoint if both keys were set. This broke the OpenAI test in CI.

New logic:
1. Check the endpoint first: if it is a known endpoint, use that provider.
2. For unknown endpoints, auto-detect based on the API key prefix:
   - `sk-ant-*` → Claude/Anthropic
   - `sk-proj-*` → OpenAI
3. Default to OpenAI for backwards compatibility.

This allows both API keys to coexist without conflicts.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
The test was only clearing OPENAI_API_KEY, but when ANTHROPIC_API_KEY is set (as it is in CI), the code bypasses the hishtory proxy API and calls GetAiSuggestionsViaOpenAiApi directly, which ignores the test's server mock.

Now clearing all AI-related keys (OPENAI_API_KEY, ANTHROPIC_API_KEY, and AI_API_KEY) to ensure the test uses the hishtory proxy API path, where the mock is active.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
Previously, the code silently limited Claude to 1 completion even when users requested more (e.g., n=3), because Claude's OpenAI-compatible endpoint does not support the `n` parameter natively.

Changes:
1. Extract a makeSingleApiCall() helper for a single API call with n=1
2. Add getMultipleClaudeCompletions() to make N sequential API calls
3. Aggregate results and usage stats (tokens) across all calls
4. Update TestLiveClaudeApi to test with n=3 completions

Trade-offs:
- Pro: users get the N completions they requested
- Con: slower (N sequential network calls) and more expensive (N × API cost)

Example: requesting 3 completions now makes 3 API calls (~6s vs ~2s), but returns up to 3 unique results as requested.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
Changed getMultipleClaudeCompletions to make parallel API calls instead of sequential ones. This significantly improves performance when requesting multiple completions (n>1) from Claude's API.

Key changes:
- Added sync.WaitGroup and channels for parallel coordination
- Launch N goroutines for concurrent API calls
- Aggregate results and usage stats from all parallel calls
- Performance improvement: ~2.1s vs ~6.6s for 3 completions

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
Added a reusable ParallelMap function to shared/utils.go and refactored getMultipleClaudeCompletions to use it instead of manually managing goroutines, channels, and WaitGroups.

Changes:
- Added a generic `ParallelMap[T, R]` function to shared/utils.go
- Simplified getMultipleClaudeCompletions by using ParallelMap
- Removed manual goroutine/channel coordination code
- Made the code more maintainable and reusable

Performance remains the same (~2s for 3 parallel completions).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
Summary
This PR adds support for using Anthropic's Claude API as an alternative to OpenAI for AI-powered shell command suggestions. The implementation uses Claude's OpenAI-compatible endpoint for simplicity and minimal code changes.
Changes
- `AI_API_KEY` - Works with either OpenAI or Claude
- `AI_API_MODEL` - Replaces `OPENAI_API_MODEL`
- `AI_API_NUMBER_COMPLETIONS` - Replaces `OPENAI_API_NUMBER_COMPLETIONS`
- `AI_API_SYSTEM_PROMPT` - Replaces `OPENAI_API_SYSTEM_PROMPT`
- Legacy `OPENAI_*` variables continue to work
- Default models: `gpt-4o-mini` for OpenAI, `claude-sonnet-4-5` for Claude

Usage Examples
Using Claude directly:
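A minimal sketch, assuming only the variables named in this PR (the key value is a placeholder):

```shell
# Placeholder key; Anthropic keys start with sk-ant-
export ANTHROPIC_API_KEY=sk-ant-placeholder
# Optional: override the default model (claude-sonnet-4-5)
export AI_API_MODEL=claude-sonnet-4-5
```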
Using OpenAI directly:
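A minimal sketch for the OpenAI path (placeholder key):

```shell
# Placeholder key; project-scoped OpenAI keys start with sk-proj-
export OPENAI_API_KEY=sk-proj-placeholder
# Optional: override the default model (gpt-4o-mini)
export AI_API_MODEL=gpt-4o-mini
```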
Using generic environment variables:
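A minimal sketch of the provider-agnostic configuration; the key value and the system-prompt text are placeholders, and the provider is auto-detected from the key prefix per the detection logic in this PR:

```shell
# AI_API_KEY works with either provider; detection uses the key prefix
export AI_API_KEY=sk-ant-placeholder
export AI_API_MODEL=claude-sonnet-4-5
export AI_API_NUMBER_COMPLETIONS=3
# Hypothetical prompt text, shown only to illustrate the variable
export AI_API_SYSTEM_PROMPT="Convert the user's natural language query into a shell command."
```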
Files Modified
- `shared/ai/ai.go` - Core provider detection and API integration
- `client/ai/ai.go` - Client-side provider support
- `client/hctx/hctx.go` - Configuration defaults
- `backend/server/internal/server/api_handlers.go` - Server-side proxy support
- `shared/ai/ai_test.go` - Added Claude API test

Test Plan
🤖 Generated with Claude Code