
Commit c018104

Release 0.19

Refs #495, #610, #640, #641, #644, #653

1 parent: ac3d008

3 files changed: 16 additions, 1 deletion

docs/changelog.md (11 additions)

@@ -1,5 +1,16 @@
 # Changelog
 
+## 0.19 (2024-12-01)
+
+- Tokens used by a response are now logged to new `input_tokens` and `output_tokens` integer columns and a `token_details` JSON string column, for the default OpenAI models and models from other plugins that {ref}`implement this feature <advanced-model-plugins-usage>`. [#610](https://github.com/simonw/llm/issues/610)
+- `llm prompt` now takes a `-u/--usage` flag to display token usage at the end of the response.
+- `llm logs -u/--usage` shows token usage information for logged responses.
+- `llm prompt ... --async` responses are now logged to the database. [#641](https://github.com/simonw/llm/issues/641)
+- `llm.get_models()` and `llm.get_async_models()` functions, {ref}`documented here <python-api-listing-models>`. [#640](https://github.com/simonw/llm/issues/640)
+- `response.usage()` and async response `await response.usage()` methods, returning a `Usage(input=2, output=1, details=None)` dataclass. [#644](https://github.com/simonw/llm/issues/644)
+- `response.on_done(callback)` and `await response.on_done(callback)` methods for specifying a callback to be executed when a response has completed, {ref}`documented here <python-api-response-on-done>`. [#653](https://github.com/simonw/llm/issues/653)
+- Fix for bug running `llm chat` on Windows 11. Thanks, [Sukhbinder Singh](https://github.com/sukhbinder). [#495](https://github.com/simonw/llm/issues/495)
+
 (v0_19a2)=
 ## 0.19a2 (2024-11-20)
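The Python-facing entries above can be exercised together. The following is a minimal sketch of how the new helpers might combine, assuming a configured OpenAI key; the model ID and prompt text are illustrative and not taken from this commit:

```python
import llm

# New in 0.19: enumerate registered models (llm.get_async_models()
# is the async-capable counterpart).
for model in llm.get_models():
    print(model.model_id)

model = llm.get_model("gpt-4o-mini")  # illustrative model ID
response = model.prompt("Two names for a pet pelican")
print(response.text())

# New in 0.19: response.usage() returns a Usage dataclass with
# input, output and details fields (await response.usage() on async responses).
usage = response.usage()
print(usage.input, usage.output, usage.details)
```

On the command line, the same counts are surfaced by the new `llm prompt -u/--usage` and `llm logs -u/--usage` flags, and they are what is written to the new `input_tokens`, `output_tokens` and `token_details` columns. As a rough illustration of reading those columns back, assuming the default log database keeps them on a `responses` table:

```python
import json
import sqlite3
import subprocess

# `llm logs path` prints the location of the log database; the
# "responses" table name is an assumption, the column names come
# from the changelog entry above.
db_path = subprocess.check_output(["llm", "logs", "path"], text=True).strip()

with sqlite3.connect(db_path) as db:
    rows = db.execute(
        "select input_tokens, output_tokens, token_details "
        "from responses order by id desc limit 5"
    )
    for input_tokens, output_tokens, token_details in rows:
        print(input_tokens, output_tokens, json.loads(token_details or "null"))
```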

docs/python-api.md (4 additions)

@@ -160,6 +160,8 @@ async for chunk in model.prompt(
     print(chunk, end="", flush=True)
 ```
 
+(python-api-conversations)=
+
 ## Conversations
 
 LLM supports *conversations*, where you ask follow-up questions of a model as part of an ongoing conversation.
@@ -195,6 +197,8 @@ response = conversation.prompt(
 
 Access `conversation.responses` for a list of all of the responses that have so far been returned during the conversation.
 
+(python-api-response-on-done)=
+
 ## Running code when a response has completed
 
 For some applications, such as tracking the tokens used by an application, it may be useful to execute code as soon as a response has finished being executed
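The `(python-api-response-on-done)` anchor added above targets the section the changelog links to for `response.on_done(callback)`. A small sketch of the token-tracking use case that section mentions, assuming the callback receives the completed response (the model ID and prompt are illustrative):

```python
import llm

model = llm.get_model("gpt-4o-mini")  # illustrative model ID

def record_usage(response):
    # Runs once the response has completed; usage() reports the same
    # input/output token counts that 0.19 now logs to the database.
    usage = response.usage()
    print(f"input={usage.input} output={usage.output}")

response = model.prompt("Describe a pelican in one sentence")
response.on_done(record_usage)

# Reading the response drives it to completion, after which the callback fires.
print(response.text())
```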

setup.py (1 addition, 1 deletion)

@@ -1,7 +1,7 @@
 from setuptools import setup, find_packages
 import os
 
-VERSION = "0.19a2"
+VERSION = "0.19"
 
 
 def get_long_description():
