Support for multi-turn conversations that use function calling #1320
Labels: bug, enhancement, question
Is your feature request related to a problem? Please describe.
I would like to call an LLM to generate code that's returned as structured output using function calling, and then I'd like to continue the conversation to amend that code when there are errors.
Describe the solution you'd like
I would like to be able to use a `response_model` for my function call the way Instructor supports today, but then continue the conversation to make subsequent function calls that repair or modify the output of the first attempt. I would like to do this using the Instructor `response_model` primitive throughout, with the subsequent tool-related messages constructed automatically by Instructor for the particular model I'm using.

Describe alternatives you've considered
I've accomplished multi-turn function-calling conversations using litellm completions directly. First, I'll share the working version using litellm, which demonstrates what the underlying messages passed to gpt-4o need to look like.
Second, I'll share my attempt at converting this to Instructor, along with the stack trace that shows where it fails. The stack trace shows that the shapes of the assistant message and the following tool message aren't expected or supported by Instructor.
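For context, a multi-turn function-calling exchange in the OpenAI chat format (which litellm forwards to gpt-4o) threads an assistant message carrying `tool_calls` together with a `tool` message that echoes the same `tool_call_id`. A minimal sketch of that message sequence; the tool name, ID, and contents here are illustrative, not taken from the original snippet:

```python
import json

# Hypothetical tool-call ID; in practice the API generates these.
tool_call_id = "call_abc123"

messages = [
    {"role": "user", "content": "Generate a function that adds two numbers."},
    # The assistant's first turn: no plain content, just a tool call whose
    # JSON-encoded arguments carry the structured output (the generated code).
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": tool_call_id,
                "type": "function",
                "function": {
                    "name": "return_code",
                    "arguments": json.dumps(
                        {"source_code": "def add(a, b): return a + b"}
                    ),
                },
            }
        ],
    },
    # The tool result must reference the same tool_call_id; otherwise the
    # API rejects the conversation on the next completion call.
    {
        "role": "tool",
        "tool_call_id": tool_call_id,
        "content": "SyntaxError: invalid syntax",  # error fed back for repair
    },
    # A follow-up user turn asking for a fix continues the conversation.
    {"role": "user", "content": "Please fix the error and return corrected code."},
]

# This full list would then be passed back to litellm.completion(...)
# for the repair turn.
args = json.loads(messages[1]["tool_calls"][0]["function"]["arguments"])
print(args["source_code"])  # → def add(a, b): return a + b
```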
This works great. The following version is my attempt to use Instructor, with the error message below:
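In rough sketch form, the Instructor side pairs a Pydantic `response_model` with the chat call; the class name, field name, and prompt below are assumptions for illustration, not the original code:

```python
from pydantic import BaseModel


# Hypothetical response model describing the structured output: a single
# field holding the generated source code.
class GeneratedCode(BaseModel):
    source_code: str


# The calls themselves would look roughly like this (not executed here):
#
#   client = instructor.from_litellm(litellm.completion)
#   first = client.chat.completions.create(
#       model="gpt-4o",
#       response_model=GeneratedCode,
#       messages=messages,
#   )
#
# The failure appears when the assistant message (with tool_calls) and the
# matching tool message from the first turn are appended to `messages` and
# the same create() call is repeated for the repair turn.

draft = GeneratedCode(source_code="def add(a, b): return a + b")
print(draft.source_code)
```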
Here is the stack trace of the error:
Dropping into pdb gives me the following value for `message`:

(Please note that calling `eval(source_code)` on the LLM's output results in a syntax error. This is a somewhat contrived example to guarantee failure and get the demo to work. I don't believe this is material to the task at hand.)
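As an aside, this is why `eval(source_code)` reliably fails here, assuming the model returns statement-level code: `eval` compiles its argument in expression mode, so a `def` (or any other statement) raises `SyntaxError` regardless of whether the generated code is valid Python. A quick sketch with an illustrative stand-in for the LLM output:

```python
source_code = "def add(a, b): return a + b"  # stand-in for LLM output

try:
    # eval() accepts only expressions; a function definition is a
    # statement, so compilation fails before anything runs.
    eval(source_code)
    failed = False
except SyntaxError:
    failed = True

print(failed)  # → True
```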