@@ -90,6 +90,10 @@ that the LLM can decide to call based on the prompt. The function calling
interface must be used with chat completions and the `gpt-4-0613` or
`gpt-3.5-turbo-0613` models or later.

+> See <https://github.com/leafo/lua-openai/blob/main/examples/example5.lua> for
+> a full example that implements basic math functions to compute the standard
+> deviation of a list of numbers.
+
Here's a quick example of how to use functions in a chat exchange. First you
will need to create a chat session with the `functions` option containing an
array of available functions.
@@ -130,17 +134,19 @@ if type(res) == "table" and res.function_call then
  -- Note that res may also include a content field if the LLM produced a textual output as well

  local cjson = require "cjson"
+  local name = res.function_call.name
  local arguments = cjson.decode(res.function_call.arguments)
-  call_my_function(res.function_call.name, arguments)
+  -- ... compute the result and send it back ...
end
```

-Finally, you can evaluate the function and send the result back to the client
-so it can resume operation:
+You can evaluate the requested function and arguments and send the result back
+to the client so it can resume operation with a `role=function` message object:

> Since the LLM can hallucinate every part of the function call, you'll want to
> do robust type validation to ensure that the function name and arguments match
-> what you expect.
+> what you expect. Assume every stage can fail, including receiving malformed
+> JSON for the arguments.

```lua
local name, arguments = ... -- the name and arguments extracted from above
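-- The steps described above can be sketched as follows. This continuation is
-- illustrative: the `add` function, its `a`/`b` arguments, and the `chat`
-- session variable are assumptions, not part of the snippet above. Validate
-- the requested call, compute the result, and return it with a role=function
-- message so the LLM can resume.
if name == "add" then
  -- validate the decoded arguments; the LLM may hallucinate any part of them
  assert(type(arguments) == "table", "expected a table of arguments")
  assert(type(arguments.a) == "number" and type(arguments.b) == "number",
    "expected numeric fields a and b")

  local cjson = require "cjson"

  -- send the computed value back as a role=function message; `chat` is
  -- assumed to be the chat session created earlier
  local res = chat:send({
    role = "function",
    name = name,
    content = cjson.encode(arguments.a + arguments.b)
  })

  print(res) -- the LLM's final textual reply
else
  error("unexpected function name: " .. tostring(name))
end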