Prompt Usage and Prompt Fallback
Configuration in Langfuse:
Created 2 prompts in Langfuse, each configured with its own model, as follows:
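(The exact prompts aren't shown in the post; below is a minimal sketch of how two such prompts could be created with the Langfuse Python SDK. The prompt names `prompt-a`/`prompt-b`, the prompt texts, the `{{topic}}` variable, and the model configs are all assumptions for illustration.)

```python
from langfuse import Langfuse

# Reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY / LANGFUSE_HOST from the environment
langfuse = Langfuse()

# Prompt A, attached to the primary model (name, text, and config are illustrative)
langfuse.create_prompt(
    name="prompt-a",
    type="text",
    prompt="You are a helpful assistant. Answer the question about {{topic}}.",
    labels=["production"],
    config={"model": "gpt-4o-mini", "temperature": 0.7},
)

# Prompt B, attached to the fallback model (gpt-4o, per the traces described below)
langfuse.create_prompt(
    name="prompt-b",
    type="text",
    prompt="You are a careful assistant. Answer the question about {{topic}}.",
    labels=["production"],
    config={"model": "gpt-4o", "temperature": 0.7},
)
```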
Configuration in LiteLLM:
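(The original `config.yaml` isn't included in the post; here is a sketch of what it might look like. The `langfuse/` model prefix and the `prompt_id` parameter follow LiteLLM's Langfuse prompt-management integration, but the prompt IDs and underlying models here are assumptions, so check the exact syntax against the LiteLLM docs.)

```yaml
model_list:
  - model_name: my-langfuse-model
    litellm_params:
      model: langfuse/gpt-4o-mini          # assumed underlying model
      prompt_id: "prompt-a"                # assumed Langfuse prompt name
      api_key: os.environ/OPENAI_API_KEY
  - model_name: my-langfuse-fallback
    litellm_params:
      model: langfuse/gpt-4o               # gpt-4o appears in the fallback traces
      prompt_id: "prompt-b"                # assumed Langfuse prompt name
      api_key: os.environ/OPENAI_API_KEY

# LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY are expected in the proxy's environment
```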
Use case for prompts:
Calling "my-langfuse-model" through the proxy, which should always apply prompt A (a sketch of such a call follows).
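(The original request isn't shown; this is a sketch of what the call might look like, assuming an OpenAI-compatible client pointed at the proxy. The base URL, API key, and the `topic` variable key are hypothetical.)

```python
import openai

# Point the OpenAI client at the LiteLLM proxy (URL and key are assumptions)
client = openai.OpenAI(api_key="sk-1234", base_url="http://localhost:4000")

response = client.chat.completions.create(
    model="my-langfuse-model",
    messages=[{"role": "user", "content": "Hello"}],
    # prompt_variables should fill {{topic}} in the Langfuse prompt;
    # "topic" is a hypothetical key used only for illustration
    extra_body={"prompt_variables": {"topic": "unit testing"}},
)
print(response.choices[0].message.content)
```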
Observation
Per the traces in Langfuse, sometimes the call uses prompt A and sometimes it uses no prompt at all. Also, prompt_variables does not seem to be used: anything can be given as the key and it makes no difference.
Expectation
The model should always use a prompt: every call to "my-langfuse-model" should use prompt A.
Use case for prompt fallback:
If calling a model fails, it should fall back to another model with its own prompt.
Now, in this scenario a wrong API key is deliberately configured in LiteLLM for the model "my-langfuse-model", so that it falls back to "my-langfuse-fallback" (a sketch of such a config follows).
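(A sketch of how that might be configured, using LiteLLM's standard `fallbacks` syntax in `litellm_settings`; the intentionally invalid key forces every call to the primary model to fail.)

```yaml
model_list:
  - model_name: my-langfuse-model
    litellm_params:
      model: langfuse/gpt-4o-mini
      prompt_id: "prompt-a"          # assumed Langfuse prompt name
      api_key: "sk-wrong-key"        # intentionally invalid to trigger the fallback

litellm_settings:
  fallbacks: [{"my-langfuse-model": ["my-langfuse-fallback"]}]
```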
Observation
The call does fall back to "my-langfuse-fallback" and gpt-4o is used for the response; however, per the Langfuse traces, sometimes prompt A is used, sometimes prompt B, and sometimes no prompt at all.
Expectation
The model should always use a prompt, in this case either prompt A or prompt B; instead the behaviour is random across prompt A, prompt B, and no prompt.
Ask
Are there any configurations missing, given the expected behaviour described above? There seems to be no defined behaviour for prompts during fallback at the moment.
Please help here, thanks!