Hi there, I was reading the readme and the docs, but I couldn't find a concise explanation for how instructor actually works!
I was evaluating whether to use instructor, and wanted to know whether it used constrained decoding, prompting, tool APIs, or something else.
I think it'd be really helpful to have 1–2 sentences in the readme and docs that says something like:
Here's how instructor gets LLMs to produce responses that follow the provided schema:
If an LLM exposes a structured endpoint, we use that
Otherwise, if an LLM exposes a tool call endpoint, we use that
Otherwise, we fall back to a prompt to ask the model to output a response of the provided schema. If it fails, we retry N times
We don't implement constrained decoding for LLMs that produce logits, but may implement that in the future
(I don't know if the above is accurate!)
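To illustrate, the prompt-with-retries fallback in the last step could be sketched roughly like this. This is a hypothetical sketch of the strategy described above, not instructor's actual implementation; `call_model`, `get_structured_response`, and the prompt wording are all illustrative names, not real instructor API.

```python
import json

def get_structured_response(call_model, schema, max_retries=3):
    """Hypothetical sketch: prompt the model for JSON matching `schema`,
    retrying up to `max_retries` times if the output doesn't parse.
    `call_model` is any callable mapping a prompt string to a response string."""
    prompt = (
        "Respond ONLY with a JSON object matching this schema:\n"
        + json.dumps(schema)
    )
    for _ in range(max_retries):
        raw = call_model(prompt)
        try:
            return json.loads(raw)  # best-effort: schema is requested, not enforced
        except json.JSONDecodeError:
            continue  # model output wasn't valid JSON; retry
    raise ValueError("model never produced valid JSON within the retry budget")
```

In this sketch the schema is only best-effort (the model is asked, not constrained), which is exactly the guarantee-vs-best-effort distinction the docs could clarify.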
As a user of the library, knowing these details is really helpful for understanding whether the schema is guaranteed by the LLM in a single call, or whether it's best-effort. Linking to the prompt would be helpful too.
Many thanks for your consideration, and thank you for making and open-sourcing this library!
Is your feature request related to a problem? Please describe.
Understanding instructor's approach.
Describe the solution you'd like
Brief documentation update
Describe alternatives you've considered
Reading the code to figure it out (it's what I did!)
Additional context
n/a