-
I can't get it working that way either, but it does work for me when I configure LiteLLM as an Azure endpoint:
```json
{
  "github.copilot.chat.azureModels": {
    "gpt-4.1": {
      "name": "gpt-4.1",
      "url": "http://localhost:4000/v1/chat/completions",
      "maxInputTokens": 128000,
      "maxOutputTokens": 16000,
      "toolCalling": true,
      "vision": false,
      "thinking": false
    }
  }
}
```
The downside is that you need to add models manually instead of using automatic discovery.
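For reference, here's a minimal sketch of what the LiteLLM side of this setup might look like; the Azure deployment name, endpoint, and environment variables are placeholders, not anything confirmed above:

```sh
# Hypothetical minimal LiteLLM config exposing a model named "gpt-4.1",
# matching the key used in the VS Code settings above.
cat > litellm_config.yaml <<'EOF'
model_list:
  - model_name: gpt-4.1
    litellm_params:
      model: azure/gpt-4.1                                 # placeholder Azure deployment name
      api_base: https://<your-resource>.openai.azure.com/  # placeholder endpoint
      api_key: os.environ/AZURE_API_KEY                    # read key from the environment
EOF

# Serve it on the port referenced in the settings above.
litellm --config litellm_config.yaml --port 4000
```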
-
Is this document still valid? https://github.com/BerriAI/litellm/blob/main/docs/my-website/docs/tutorials/github_copilot_integration.md
I've tried it with GitHub Copilot for IntelliJ, and it does not work. The first issue was that Copilot issues HTTP CONNECT proxy requests, which LiteLLM doesn't handle. Then, after sticking a real HTTP proxy in the mix to unencapsulate the CONNECT requests and redirect `api.individual.githubcopilot.com` into LiteLLM, it started failing because Copilot requests paths like `/agents`, which LiteLLM doesn't handle.

Even when I do a manual curl, such as `http_proxy=http://localhost:4000 curl http://api.individual.githubcopilot.com/models`, which is how it would work if Copilot were using LiteLLM as a simple HTTP proxy (which is what the document shows), LiteLLM responds with a 404.

When I search through the LiteLLM code for "copilot", all I find is about using Copilot as a provider. I don't see anything about this kind of special proxy emulation.
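For anyone reproducing this, a minimal sketch of the two behaviors I'm describing, assuming LiteLLM is running on localhost:4000:

```sh
# Direct request to LiteLLM's own OpenAI-compatible endpoint -- this works:
curl http://localhost:4000/v1/models

# Using LiteLLM as a forward HTTP proxy for the Copilot API, as the tutorial
# implies -- in my testing this returns a 404, presumably because LiteLLM has
# no route for the proxied host/path:
http_proxy=http://localhost:4000 curl http://api.individual.githubcopilot.com/models
```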
So is this document still valid?