- `chat_gemini()` can now authenticate with Google default application credentials (including service accounts, etc). This requires the gargle package (#317, @atheriel).

- `chat_gemini()` now detects viewer-based credentials when running on Posit Connect (#320, @atheriel).
- The `ellmer_verbosity` option is no longer supported; instead use the standard httr2 verbosity functions, such as `httr2::with_verbosity()`, which now support streaming data.

- `chat_cortex()` has been renamed `chat_cortex_analyst()` to better disambiguate it from `chat_snowflake()` (which also uses "Cortex") (#275, @atheriel).
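With the `ellmer_verbosity` option retired, request logging goes through httr2's standard tools. A minimal sketch (the provider choice and prompt are illustrative, and `chat_openai()` assumes an API key is configured):

```r
library(ellmer)

# Log low-level HTTP request/response info while chatting.
# verbosity = 2 prints headers; verbosity = 3 also prints bodies.
chat <- chat_openai()
httr2::with_verbosity(
  chat$chat("Tell me a joke"),
  verbosity = 2
)
```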
- All providers now wait for up to 60s to get the complete response. You can increase this with, e.g., `options(ellmer_timeout_s = 120)` (#213, #300).

- `chat_azure()`, `chat_databricks()`, `chat_snowflake()`, and `chat_cortex_analyst()` now detect viewer-based credentials when running on Posit Connect (#285, @atheriel).

- `chat_deepseek()` provides support for DeepSeek models (#242).

- `chat_openrouter()` provides support for models hosted by OpenRouter (#212).

- `chat_snowflake()` allows chatting with models hosted through Snowflake's Cortex LLM REST API (#258, @atheriel).

- `content_pdf_file()` and `content_pdf_url()` allow you to upload PDFs to supported models. The models that currently support PDFs are Google Gemini and Anthropic's Claude. With help from @walkerke and @andrie (#265).
- `Chat$get_model()` returns the model name (#299).

- `chat_azure()` has greatly improved support for Azure Entra ID. API keys are now optional: ambient credentials from Azure service principals are picked up automatically, and interactive Entra ID authentication is attempted when possible. The broken-by-design `token` argument has been deprecated (it could not handle refreshing tokens properly); a new `credentials` argument can be used instead for custom Entra ID support when needed (for instance, if you're trying to use tokens generated by the AzureAuth package) (#248, #263, #273, #257, @atheriel).

- `chat_azure()` now reports better error messages when the underlying HTTP requests fail (#269, @atheriel). It also now defaults to `api_version = "2024-10-21"`, which includes support for structured data extraction (#271).

- `chat_bedrock()` now handles temporary IAM credentials better (#261, @atheriel) and gains an `api_args` argument (@billsanto, #295).

- `chat_databricks()` now handles the `DATABRICKS_HOST` environment variable correctly whether or not it includes an HTTPS prefix (#252, @atheriel). It also respects the `SPARK_CONNECT_USER_AGENT` environment variable when making requests (#254, @atheriel).

- `chat_gemini()` now defaults to the gemini-2.0-flash model.

- `print(Chat)` no longer wraps long lines, making it easier to read code and bulleted lists (#246).
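The new `api_args` argument passes extra fields through to the underlying API. A sketch, assuming AWS credentials are available; the specific field shown follows Bedrock's inference parameters and is illustrative, not guaranteed:

```r
library(ellmer)

# Pass provider-specific parameters through to the Bedrock API.
# Which fields are accepted depends on the model being used.
chat <- chat_bedrock(
  api_args = list(temperature = 0.2)  # illustrative parameter
)
chat$get_model()
```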
- New `chat_vllm()` to chat with models served by vLLM (#140).

- The default `chat_openai()` model is now GPT-4o.

- New `Chat$set_turns()` to set turns. `Chat$turns()` is now `Chat$get_turns()`. `Chat$system_prompt()` is replaced with `Chat$set_system_prompt()` and `Chat$get_system_prompt()`.

- Async and streaming async chat are now event-driven and use `later::later_fd()` to wait efficiently on curl socket activity (#157).

- New `chat_bedrock()` to chat with AWS Bedrock models (#50).

- New `chat$extract_data()` uses the structured data API where available (and tool calling otherwise) to extract data structured according to a known type specification. You can create specs with the functions `type_boolean()`, `type_integer()`, `type_number()`, `type_string()`, `type_enum()`, `type_array()`, and `type_object()` (#31).

- The general `ToolArg()` has been replaced by the more specific `type_*()` functions. `ToolDef()` has been renamed to `tool()`.

- `content_image_url()` will now create inline images when given a data URL (#110).

- Streaming Ollama results works once again (#117).
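A sketch of `extract_data()` with the `type_*()` specification functions (the input text and field names are illustrative, and `chat_openai()` assumes an API key is configured):

```r
library(ellmer)

chat <- chat_openai()

# Describe the expected shape of the result with type_*() specs...
person <- type_object(
  name = type_string(),
  age = type_integer(),
  hobbies = type_array(items = type_string())
)

# ...then extract structured data from free text.
chat$extract_data(
  "Maya is 29 and enjoys climbing and chess.",
  type = person
)
```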
- Streaming OpenAI results now capture more details, including `logprobs` (#115).

- New `interpolate()` and `prompt_file()` make it easier to create prompts that mix static text and dynamic values.

- You can find out how many tokens you've used in the current session by calling `token_usage()`.

- `chat_browser()` and `chat_console()` are now `live_browser()` and `live_console()`.

- `echo` can now be one of three values: "none", "text", or "all". If "all", you'll now see both user and assistant turns, and all content types will be printed, not just text. When running in the global environment, `echo` defaults to "text"; inside a function it defaults to "none".

- You can now log low-level JSON request/response info by setting `options(ellmer_verbosity = 2)`.

- `chat$register_tool()` now takes an object created by `tool()`. This makes it a little easier to reuse tool definitions (#32).

- `new_chat_openai()` is now `chat_openai()`.

- Claude and Gemini are now supported via `chat_claude()` and `chat_gemini()`.

- The Snowflake Cortex Analyst is now supported via `chat_cortex()` (#56).

- Databricks is now supported via `chat_databricks()` (#152).
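Tool registration and prompt interpolation can be sketched together as follows (the tool's function, its description, and the prompt are illustrative, and the exact `tool()` signature may differ between versions):

```r
library(ellmer)

# Define a reusable tool the model can call; name and description
# here are illustrative.
get_current_time <- tool(
  function() format(Sys.time(), tz = "UTC"),
  "Gets the current time in UTC."
)

chat <- chat_openai()
chat$register_tool(get_current_time)

# Prompts can mix static text and dynamic values with interpolate().
chat$chat(interpolate("What time is it in {{tz}}?", tz = "UTC"))
```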