Feature Description
Hey Vercel team - I'm requesting that you add logprobs support for Google's Gemini models in the AI SDK. This is especially important now that Gemini 2.5 has been released.
Looking at the AI SDK docs, I can see logprobs is a standard capability for some providers, but it's not currently implemented for the Google provider. The Google Generative AI API already supports this by letting developers set responseLogprobs: true in requests, plus an optional logprobs parameter to specify how many candidate tokens you get back.
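For reference, here's roughly what the raw request looks like against Google's generateContent endpoint - a minimal sketch, where the model id, prompt, and API key handling are placeholders and the generationConfig field names follow Google's API docs:

```ts
// Minimal sketch of a raw Gemini API call that requests logprobs.
// Assumes Node 18+ (global fetch) and GEMINI_API_KEY in the environment.
const res = await fetch(
  `https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent?key=${process.env.GEMINI_API_KEY}`,
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      contents: [{ parts: [{ text: "Say hello." }] }],
      generationConfig: {
        responseLogprobs: true, // return logprobs for the chosen tokens
        logprobs: 3, // also return the top-3 candidate tokens per position
      },
    }),
  },
);
const data = await res.json();
// Each candidate carries avgLogprobs plus the per-token candidate data.
console.log(data.candidates?.[0]?.avgLogprobs);
```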
What we need:
A way to pass responseLogprobs: true in our AI SDK requests when using Gemini
Access to the logprobsCandidates that come back in the response
Also expose the avgLogprobs field that Google returns by default
Ideally these would be accessible via provider metadata in both requests and responses, similar to how other providers handle it (e.g., OpenAI's gpt-4o-mini)
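To make the ask concrete, here is a hypothetical sketch of what that surface could look like, reusing the SDK's existing providerOptions / providerMetadata pattern. None of the google keys shown (responseLogprobs, logprobs, avgLogprobs, logprobsCandidates) exist in the SDK today; they simply mirror the names above:

```ts
import { google } from "@ai-sdk/google";
import { generateText } from "ai";

// Hypothetical request: the responseLogprobs/logprobs keys are the
// proposed passthroughs to Google's generationConfig, not current API.
const result = await generateText({
  model: google("gemini-2.5-flash"),
  prompt: "Is this email spam? Answer yes or no.",
  providerOptions: {
    google: {
      responseLogprobs: true,
      logprobs: 5,
    },
  },
});

// Hypothetical response metadata, mirroring the fields Google returns.
const meta = result.providerMetadata?.google;
console.log(meta?.avgLogprobs); // average logprob across chosen tokens
console.log(meta?.logprobsCandidates); // per-token candidate alternatives
```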
Use Cases
In my company, we need logprobs for:
Filtering out low-confidence responses (essential when the model is unsure)
Better token selection logic for more sensitive applications
Benchmarking different models on the same tasks
Building pipelines that use confidence scores for downstream decision-making
Detecting when to fall back to human review instead of using model output
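For the filtering and human-review fallback cases above, the consuming code can stay very simple once avgLogprobs is exposed - a minimal sketch, assuming avgLogprobs arrives as a plain number (the threshold is arbitrary and task-dependent):

```ts
// avgLogprobs is a mean log-probability over the chosen tokens, so it is
// <= 0, with values closer to 0 indicating higher model confidence.
// The threshold below is illustrative only and would need per-task tuning.
const CONFIDENCE_THRESHOLD = -0.5;

function needsHumanReview(avgLogprobs: number): boolean {
  return avgLogprobs < CONFIDENCE_THRESHOLD;
}

// Example: route a low-confidence result to a reviewer.
if (needsHumanReview(-1.2)) {
  console.log("Low confidence: escalating to human review");
}
```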
Additional context
This gap is especially frustrating now that Gemini 2.5 is released. We're currently either using OpenAI exclusively through your SDK (missing out on Gemini's capabilities), or maintaining two separate code paths: the Vercel SDK for most things, and Google's SDK specifically for the logprobs functionality.
There's an existing issue (#4969) that's been open since before Gemini 2.0. With 2.5 now available with its improved capabilities, having this feature would let smaller teams like mine stay on a single SDK rather than juggling multiple vendor libraries.
It would be awesome if you could prioritize this - it seems like a straightforward addition given that the underlying Google API already supports it!
Hi @lgrammel, sorry to tag you in this.
I am working on this and would like a little guidance and clarification on the solution I'm thinking of implementing.
The Google Vertex AI provider uses the Google Generative AI provider under the hood, which does not currently support the logprobs feature. Below are my two proposed solutions, either of which I think would fix the issue:
Create separate files for the Vertex provider and use them instead of the files from the Generative AI provider. This would move the Vertex provider's dependency from the Google Generative AI provider code into its own code files.
Or, add an optional key to the createVertex provider, like below, and then modify the endpoint or any other code in the Google Generative AI provider accordingly.
```ts
const response = await generateText({
  model: vertex("gemini-1.5-flash", {
    vertexAI: true, // optional key; could be set whenever the user requests logprobs
  }),
});
```