Conversation

@ddworken (Owner)

Summary

This PR adds support for using Anthropic's Claude API as an alternative to OpenAI for AI-powered shell command suggestions. The implementation uses Claude's OpenAI-compatible endpoint for simplicity and minimal code changes.

Changes

  • Provider Detection: Added logic to automatically detect which AI provider to use based on environment variables (see the sketch after this list)
  • Generic Environment Variables: Introduced provider-agnostic environment variables:
    • AI_API_KEY - Works with either OpenAI or Claude
    • AI_API_MODEL - Replaces OPENAI_API_MODEL
    • AI_API_NUMBER_COMPLETIONS - Replaces OPENAI_API_NUMBER_COMPLETIONS
    • AI_API_SYSTEM_PROMPT - Replaces OPENAI_API_SYSTEM_PROMPT
  • Backwards Compatibility: All legacy OPENAI_* variables continue to work
  • Authentication: Proper headers for each provider (Bearer token for OpenAI, x-api-key for Claude)
  • Smart Defaults:
    • Endpoint auto-selection based on API keys
    • Default model: gpt-4o-mini for OpenAI, claude-sonnet-4-5 for Claude
  • Testing: Added test for Claude API integration
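
The detection and header handling can be pictured roughly as follows. This is a sketch only: the names Provider, detectProvider, and addAuthHeader are illustrative and not necessarily the identifiers used in shared/ai/ai.go.

```go
package ai

import (
	"net/http"
	"os"
)

// Provider identifies which AI backend handles a request.
type Provider string

const (
	ProviderOpenAI    Provider = "openai"
	ProviderAnthropic Provider = "anthropic"
)

// detectProvider picks a provider based on which API key is set in the
// environment, defaulting to OpenAI for backwards compatibility.
func detectProvider() Provider {
	if os.Getenv("ANTHROPIC_API_KEY") != "" {
		return ProviderAnthropic
	}
	return ProviderOpenAI
}

// addAuthHeader sets the provider-specific authentication header:
// a Bearer token for OpenAI, the x-api-key header for Claude.
func addAuthHeader(req *http.Request, provider Provider, apiKey string) {
	if provider == ProviderAnthropic {
		req.Header.Set("x-api-key", apiKey)
	} else {
		req.Header.Set("Authorization", "Bearer "+apiKey)
	}
}
```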

Usage Examples

Using Claude directly:

export ANTHROPIC_API_KEY=sk-ant-...
hishtory # AI features will use Claude

Using OpenAI directly:

export OPENAI_API_KEY=sk-...
hishtory # AI features will use OpenAI

Using generic environment variables:

export AI_API_KEY=sk-ant-...
export AI_API_MODEL=claude-sonnet-4-5
hishtory # Works with either provider

Files Modified

  • shared/ai/ai.go - Core provider detection and API integration
  • client/ai/ai.go - Client-side provider support
  • client/hctx/hctx.go - Configuration defaults
  • backend/server/internal/server/api_handlers.go - Server-side proxy support
  • shared/ai/ai_test.go - Added Claude API test

Test Plan

  • Build succeeds
  • Pre-commit hooks pass
  • Tests pass (skipped when API keys not present)
  • Manual testing with both OpenAI and Claude API keys

🤖 Generated with Claude Code

ddworken and others added 11 commits on October 12, 2025 at 19:35

This commit adds support for using Anthropic's Claude API as an alternative
to OpenAI for AI-powered shell command suggestions. The implementation uses
Claude's OpenAI-compatible endpoint for simplicity and minimal code changes.

Key changes:
- Added provider detection logic that checks ANTHROPIC_API_KEY and OPENAI_API_KEY
- Implemented generic environment variables for multi-provider support:
  - AI_API_KEY (works with either provider)
  - AI_API_MODEL (replaces OPENAI_API_MODEL)
  - AI_API_NUMBER_COMPLETIONS (replaces OPENAI_API_NUMBER_COMPLETIONS)
  - AI_API_SYSTEM_PROMPT (replaces OPENAI_API_SYSTEM_PROMPT)
- All legacy OPENAI_* variables still work for backwards compatibility
- Auto-detection of provider based on which API key is set
- Proper authentication headers (x-api-key for Claude, Authorization Bearer for OpenAI)
- Updated both client-side and server-side proxy to support both providers
- Added test for Claude API integration
- Default model: gpt-4o-mini for OpenAI, claude-sonnet-4-5 for Claude

The configuration intelligently defaults to the appropriate endpoint based on
which API keys are available in the environment.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>

Claude's OpenAI-compatible endpoint only supports n=1 (single completion).
This commit adds logic to automatically limit numberCompletions to 1 when
using the Anthropic provider.
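
As a minimal sketch (variable and constant names are illustrative, and a later commit in this PR replaces this clamp with multiple parallel calls):

```go
// Claude's OpenAI-compatible endpoint only honors n=1, so cap the
// requested completion count when talking to Anthropic.
if provider == ProviderAnthropic && numberCompletions > 1 {
	numberCompletions = 1
}
```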

The test needs to explicitly pass a Claude model name (claude-sonnet-4-5)
instead of relying on environment variable detection for model selection.

Make both OpenAI and Claude tests skip when API keys aren't available,
without checking for branch or GitHub Actions context. This is simpler
and avoids issues with branch detection in PR contexts.

Also removes unused testutils import.

Update skip logic to check for valid key prefixes:
- OpenAI keys must start with "sk-"
- Anthropic keys must start with "sk-ant-"

This prevents tests from running with placeholder or empty keys
in CI environments.
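
A sketch of what such skip helpers can look like in shared/ai/ai_test.go (the helper names are assumptions, not the actual test code):

```go
import (
	"os"
	"strings"
	"testing"
)

// skipIfNoOpenAIKey skips the live OpenAI test unless a plausible key is set.
func skipIfNoOpenAIKey(t *testing.T) {
	if !strings.HasPrefix(os.Getenv("OPENAI_API_KEY"), "sk-") {
		t.Skip("OPENAI_API_KEY is missing or not a valid key (expected sk- prefix)")
	}
}

// skipIfNoAnthropicKey skips the live Claude test unless a plausible key is set.
func skipIfNoAnthropicKey(t *testing.T) {
	if !strings.HasPrefix(os.Getenv("ANTHROPIC_API_KEY"), "sk-ant-") {
		t.Skip("ANTHROPIC_API_KEY is missing or not a valid key (expected sk-ant- prefix)")
	}
}
```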

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>

The previous logic checked ANTHROPIC_API_KEY first, causing it to
return ProviderAnthropic even when calling the OpenAI endpoint if both
keys were set. This broke the OpenAI test in CI.

New logic:
1. Check endpoint first - if it's a known endpoint, use that provider
2. For unknown endpoints, auto-detect based on API key prefix:
   - sk-ant-* → Claude/Anthropic
   - sk-proj-* → OpenAI
3. Default to OpenAI for backwards compatibility

This allows both API keys to coexist without conflicts.
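
Sketching that ordering (reusing the illustrative Provider type from the earlier sketch; the function name and endpoint checks here are assumptions):

```go
// resolveProvider: a known endpoint wins, then the key prefix, then OpenAI.
func resolveProvider(endpoint, apiKey string) Provider {
	switch {
	case strings.Contains(endpoint, "api.openai.com"):
		return ProviderOpenAI
	case strings.Contains(endpoint, "api.anthropic.com"):
		return ProviderAnthropic
	case strings.HasPrefix(apiKey, "sk-ant-"):
		return ProviderAnthropic
	case strings.HasPrefix(apiKey, "sk-proj-"):
		return ProviderOpenAI
	default:
		// Unknown endpoint and unknown key prefix: keep the old behavior.
		return ProviderOpenAI
	}
}
```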

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>

The test was only clearing OPENAI_API_KEY, but when ANTHROPIC_API_KEY
is set (as it is in CI), the code bypasses the hishtory proxy API
and calls GetAiSuggestionsViaOpenAiApi directly, which ignores the
test's server mock.

Now clearing all AI-related keys: OPENAI_API_KEY, ANTHROPIC_API_KEY,
and AI_API_KEY to ensure the test uses the hishtory proxy API path
where the mock is active.
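
In a Go test this can be done with t.Setenv, which restores the previous values automatically when the test finishes; a sketch only, the actual test may clear the variables differently:

```go
// Ensure no provider key is visible, so suggestions go through the
// hishtory proxy API where the server mock is installed.
for _, key := range []string{"OPENAI_API_KEY", "ANTHROPIC_API_KEY", "AI_API_KEY"} {
	t.Setenv(key, "")
}
```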

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>

Previously, the code silently limited Claude to 1 completion even when
users requested more (e.g., n=3), because Claude's OpenAI-compatible
endpoint doesn't support the n parameter natively.

Changes:
1. Extract makeSingleApiCall() helper for single API call with n=1
2. Add getMultipleClaudeCompletions() to make N sequential API calls
3. Aggregate results and usage stats (tokens) across all calls
4. Update TestLiveClaudeApi to test with n=3 completions

Trade-offs:
- Pro: Users get the N completions they requested
- Con: Slower (N sequential network calls) and more expensive (N × API cost)

Example: Requesting 3 completions now makes 3 API calls (~6s vs ~2s),
but returns up to 3 unique results as requested.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>

Changed getMultipleClaudeCompletions to make parallel API calls instead
of sequential calls. This significantly improves performance when
requesting multiple completions (n>1) from Claude's API.

Key changes:
- Added sync.WaitGroup and channels for parallel coordination (sketched below)
- Launch N goroutines for concurrent API calls
- Aggregate results and usage stats from all parallel calls
- Performance improvement: ~2.1s vs ~6.6s for 3 completions
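
A condensed sketch of that fan-out/fan-in pattern (makeSingleApiCall is the helper named in the previous commit; its exact signature here is an assumption):

```go
// Fire off n concurrent API calls and gather the completions.
results := make(chan string, n)
var wg sync.WaitGroup
for i := 0; i < n; i++ {
	wg.Add(1)
	go func() {
		defer wg.Done()
		completion, err := makeSingleApiCall(query)
		if err != nil {
			return // a full implementation would surface the error
		}
		results <- completion
	}()
}
wg.Wait()
close(results)

completions := make([]string, 0, n)
for c := range results {
	completions = append(completions, c)
}
```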

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>

Added a reusable ParallelMap function to shared/utils.go and refactored
getMultipleClaudeCompletions to use it instead of manually managing
goroutines, channels, and WaitGroups.

Changes:
- Added ParallelMap[T, R] generic function to shared/utils.go
- Simplified getMultipleClaudeCompletions by using ParallelMap
- Removed manual goroutine/channel coordination code
- Made code more maintainable and reusable

Performance remains the same (~2s for 3 parallel completions).
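
One possible shape for that helper (the actual ParallelMap in shared/utils.go may differ in signature and error handling):

```go
// ParallelMap applies f to every element of items concurrently and
// returns the results in the original order.
func ParallelMap[T, R any](items []T, f func(T) R) []R {
	results := make([]R, len(items))
	var wg sync.WaitGroup
	for i, item := range items {
		wg.Add(1)
		go func(i int, item T) {
			defer wg.Done()
			results[i] = f(item)
		}(i, item)
	}
	wg.Wait()
	return results
}
```

With a helper like this, getMultipleClaudeCompletions only has to build a slice of n requests and map the single-call function over it, which matches the simplification described above.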

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>