Create Usage section in documentation + API #1577
Conversation
Force-pushed from 142b945 to b2c760e
I'll keep adding comments here and there
docs_new/features/core/generator.md
Outdated
```python
from typing import List

from llama_cpp import Llama
from outlines import Generator, from_llamacpp
from pydantic import BaseModel

model = from_llamacpp(
    Llama.from_pretrained(
        repo_id="M4-ai/TinyMistral-248M-v2-Instruct-GGUF",
        filename="TinyMistral-248M-v2-Instruct.Q4_K_M.gguf",
    )
)

# Define the output structure
class Character(BaseModel):
    name: str
    age: int
    skills: List[str]

# Create a generator for the defined output type and call it to generate text
generator = Generator(model, Character)
response = generator("Create a character for my game", max_tokens=100)
```
The prompt here is not correct because we are using an instruction-tuned model without user and assistant tags. The model will generate a character, but anyone using this code will find severely degraded performance in more flexible contexts.
Brief overview here.
This is more of a general issue (feature?) of how Outlines works that we need to find a way to consistently address. The transformer tokenizer approach is the most ergonomic, and I'm wondering if we should just offer that everywhere even for different inference backends.
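To illustrate the prompt-formatting concern, here is a minimal sketch of wrapping a raw prompt in chat tags before handing it to the generator. The ChatML-style tags below are an assumption for illustration only, not the confirmed template for TinyMistral; in practice, a model's own chat template (e.g. via the transformers tokenizer's `apply_chat_template`) should be used instead of hand-written tags:

```python
# Sketch: wrap a raw user message in instruction-tuning tags before generation.
# The exact tags depend on the model; ChatML is shown as an illustrative
# assumption, not the confirmed format for this model.
def to_chat_prompt(user_message: str, system: str = "You are a helpful assistant.") -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = to_chat_prompt("Create a character for my game")
# The formatted prompt, rather than the bare string, is what the
# generator should receive for an instruction-tuned model.
```

The point of the sketch is that the bare string `"Create a character for my game"` would otherwise reach the model without any user/assistant structure, which is exactly the degradation described above.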
Yes, that's a good point; working with those prompts is a bit awkward in Outlines. I can use other models in the features section to avoid adding noise, and we can say we cover that in the guides?
Seems like my previous comments didn't propagate to the recent directory changes; I'll adjust those.
A few general comments:
- Most of the stuff in the output types section should include an example of output, as they're quite difficult to visualize for all but the most simple cases. Currently only an `output_type` variable is shown.
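As a sketch of what such an example of output could look like in the docs, here is an illustrative parsed result for the `Character` schema. The field values are made up, and a plain dataclass stands in for the pydantic model so the snippet is self-contained:

```python
import json
from dataclasses import dataclass

# Stand-in for the pydantic Character model from the docs example.
@dataclass
class Character:
    name: str
    age: int
    skills: list[str]

# A structured-generation call returns text constrained to this schema;
# the JSON below is an illustrative example of such a response.
raw = '{"name": "Aria", "age": 27, "skills": ["archery", "stealth"]}'
character = Character(**json.loads(raw))
print(character.skills)  # ['archery', 'stealth']
```

Showing a concrete parsed value like this next to each `output_type` would make the section much easier to visualize.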
I've got more to review here but I'll leave these in case you wanted to take a look.
Damn fine work.
Force-pushed from b2c760e to b6f5bc2