
Conversation

@maciekbanas
Contributor

No description provided.

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
@maciekbanas
Contributor Author

@shajoezhu would you like to test if the new solution works for your examples?

echo <- check_echo(echo)

if (lifecycle::is_present(virtual_key)) {
  lifecycle::deprecate_warn(
Member


I wonder if it's too aggressive to immediately deprecate? Is there some other way we could tell if the user has opted in to the model catalog?
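For reference, a minimal sketch of what the lifecycle pattern quoted above typically looks like in full. The function signature, the "x.y.z" version, and the argument names here are illustrative placeholders, not the actual code from this PR:

chat_portkey <- function(model, model_provider = "", virtual_key = lifecycle::deprecated(), ...) {
  if (lifecycle::is_present(virtual_key)) {
    # Warn (rather than error) when the soft-deprecated virtual_key is supplied.
    lifecycle::deprecate_warn(
      when = "x.y.z",
      what = "chat_portkey(virtual_key)",
      with = "chat_portkey(model_provider)"
    )
  }
  # ... rest of the function unchanged ...
}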

Contributor Author


I think you may be right. I will get back with a more gentle solution.

Contributor Author


@hadley I introduced a new model_provider parameter alongside virtual_key; like virtual_key, it is optional.

Contributor Author


It now works this way: if neither model_provider (passed as the empty string "") nor virtual_key is given, chat_portkey() still chats, as long as the given model, e.g. "gpt", is available in the user's Portkey AI. So defining model_provider does not seem to be required (though that probably depends on the configs/settings used by different hosts).
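To illustrate the behaviour described above (a sketch only; the parameter names are taken from this thread and should be checked against the merged chat_portkey() signature):

library(ellmer)

# Neither model_provider nor virtual_key: works if the model is available in
# the user's Portkey AI setup.
chat <- chat_portkey(model = "gpt-4o")

# Provider from the user's Model Catalog (errors if it does not exist there).
chat <- chat_portkey(model = "gpt-4o", model_provider = "@openai-prod")

# Legacy path: the old virtual_key argument is still accepted.
chat <- chat_portkey(model = "gpt-4o", virtual_key = Sys.getenv("PORTKEY_VIRTUAL_KEY"))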

If the user passes a valid model_provider that exists in their Model Catalog, chat_portkey() works; if a wrong model_provider is passed (one that does not exist in the Model Catalog), an error occurs. Technically, in Portkey the model provider can be passed through either:

  • the request body:
curl https://api.portkey.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -d '{
    "model": "@openai-prod/gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
  • or headers:
curl https://api.portkey.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: $PORTKEY_API_KEY" \
  -H "x-portkey-provider: @openai-prod" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

I applied the second approach (headers) in ellmer.
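For comparison, the header-based call above looks roughly like this with httr2 (illustrative only; ellmer builds its requests internally, and the provider value is the example one from the curl snippet):

library(httr2)

# Rough httr2 equivalent of the header-based curl call above.
req <- request("https://api.portkey.ai/v1/chat/completions") |>
  req_headers(
    `x-portkey-api-key` = Sys.getenv("PORTKEY_API_KEY"),
    `x-portkey-provider` = "@openai-prod"
  ) |>
  req_body_json(list(
    model = "gpt-4o",
    messages = list(list(role = "user", content = "Hello!"))
  ))
resp <- req_perform(req)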

FYI @shajoezhu @hadley

@shajoezhu

@shajoezhu would you like to test if the new solution works for your examples?

will do. thank you!


@shajoezhu left a comment


lgtm! thanks @maciekbanas

tested, all working!

@shajoezhu

I think it is nice to have the deprecation option for virtual_key; it seems that free accounts have not fully moved away from virtual keys?

@maciekbanas
Contributor Author

@shajoezhu you mean the public free account? Yes, they still use virtual keys, so I am a bit confused. Still, with my solution you can still pass virtual_key to chat_portkey().

@shajoezhu

@shajoezhu you mean the public free account? Yes, they still use virtual keys, so I am a bit confused. Still, with my solution you can still pass virtual_key to chat_portkey().

hi @maciekbanas, yes, I agree with your solution. There is no problem here.
