Is your feature request related to a problem? Please describe.
Is there a way to extract prompt (input) and completion (output) token counts from LiteLLM in PDL? To compare two approaches, I want to see whether one of them consumes additional tokens. I would like these stats per turn for the entire agent trajectory.
Describe the solution you'd like
This info is available in the litellm completion response. We would need to store it somewhere...
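For reference, a minimal sketch of what reading that field might look like. It assumes the response follows the OpenAI-style schema that LiteLLM mirrors (a `usage` object with `prompt_tokens`, `completion_tokens`, `total_tokens`); the `extract_usage` helper and the mocked response are illustrative, not part of PDL or LiteLLM:

```python
def extract_usage(response: dict) -> dict:
    """Pull token counts out of an OpenAI-style completion response (assumption)."""
    usage = response.get("usage", {})
    return {
        "prompt_tokens": usage.get("prompt_tokens", 0),
        "completion_tokens": usage.get("completion_tokens", 0),
        "total_tokens": usage.get("total_tokens", 0),
    }


# Mocked response mirroring the schema LiteLLM returns (assumption).
mock_response = {
    "choices": [{"message": {"content": "hi"}}],
    "usage": {"prompt_tokens": 12, "completion_tokens": 3, "total_tokens": 15},
}
print(extract_usage(mock_response))
```

The open question in this issue is where PDL would store these numbers, not how to read them.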
Describe alternatives you've considered
n/a
Additional context
n/a
I don't know exactly what LiteLLM supplies. This is the technique used to send OpenTelemetry, so the data is at least as good as what we send to Jaeger.
The idea would be to attach the arguments of every callback received during a ModelBlock's execution to that block's trace.
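A rough sketch of the accumulation side of that idea, assuming the callbacks surface an OpenAI-style `usage` dict per model call; the `UsageTracker` class and the turn identifier are hypothetical, not existing PDL or LiteLLM API:

```python
from collections import defaultdict


class UsageTracker:
    """Accumulate token usage per turn from model-call callbacks.

    Sketch only: assumes each callback payload carries an OpenAI-style
    `usage` dict and that the caller knows which turn it belongs to.
    """

    def __init__(self):
        self.per_turn = defaultdict(
            lambda: {"prompt_tokens": 0, "completion_tokens": 0}
        )

    def record(self, turn: int, usage: dict) -> None:
        # Would be invoked from the callback with response["usage"].
        self.per_turn[turn]["prompt_tokens"] += usage.get("prompt_tokens", 0)
        self.per_turn[turn]["completion_tokens"] += usage.get("completion_tokens", 0)

    def totals(self) -> dict:
        # Trajectory-wide sums across all turns.
        return {
            "prompt_tokens": sum(t["prompt_tokens"] for t in self.per_turn.values()),
            "completion_tokens": sum(t["completion_tokens"] for t in self.per_turn.values()),
        }


tracker = UsageTracker()
tracker.record(0, {"prompt_tokens": 12, "completion_tokens": 3})
tracker.record(1, {"prompt_tokens": 20, "completion_tokens": 7})
print(tracker.per_turn[1], tracker.totals())
```

Attaching the per-turn dict to the ModelBlock trace would then give both the per-turn and whole-trajectory numbers the original request asks for.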