[WIP] HuggingFaceModelTokenizer #2723
base: main
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/2723
Note: Links to docs will display an error until the docs builds have been completed. This comment was automatically generated by Dr. CI and updates every 15 minutes.
Thanks @krammnic for taking this one on! This will be huge for lowering the barrier to onboard new models. Let's definitely make sure to add unit tests for this one. (You can likely create some dummy tokenizer_config.json files and check them directly into the repo, since they should be pretty small.)
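For illustration, here is a rough sketch of what such a checked-in fixture could look like. The helper name, path, and field values are hypothetical; the field names just follow the usual Hugging Face tokenizer_config.json layout.

```python
import json

def write_dummy_tokenizer_config(path: str) -> None:
    """Write a tiny tokenizer_config.json suitable for checking into the test assets."""
    config = {
        # Made-up values for testing; field names mirror real HF configs.
        "bos_token": "<|begin_of_text|>",
        "eos_token": "<|eot_id|>",
        "chat_template": (
            "{{ bos_token }}"
            "{% for message in messages %}"
            "<|{{ message['role'] }}|>{{ message['content'] }}"
            "{% endfor %}"
        ),
    }
    with open(path, "w") as f:
        json.dump(config, f, indent=2)

# e.g. write_dummy_tokenizer_config("tests/assets/dummy_tokenizer_config.json")
```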
special_tokens_mapping = {}
for token in self.special_tokens:
    special_tokens_mapping[token] = self.base_tokenizer.encode(token)
rendered_template = self.template.render(
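As a standalone illustration (not the PR's actual code) of why this ends up being simple: the chat_template stored in tokenizer_config.json is plain Jinja, so jinja2 can render it directly once the messages and any special tokens are supplied as variables. The template string and token values below are made up.

```python
from jinja2 import Template

chat_template = (
    "{{ bos_token }}"
    "{% for message in messages %}"
    "<|{{ message['role'] }}|>{{ message['content'] }}<|eot|>"
    "{% endfor %}"
    "{% if add_generation_prompt %}<|assistant|>{% endif %}"
)

rendered = Template(chat_template).render(
    messages=[{"role": "user", "content": "Hello!"}],
    add_generation_prompt=True,
    # Special tokens are passed as extra keyword arguments, analogous to
    # **special_tokens_mapping in the snippet above.
    bos_token="<|begin_of_text|>",
)
print(rendered)
# <|begin_of_text|><|user|>Hello!<|eot|><|assistant|>
```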
Wow this actually wound up being quite easy lol
Unfortunately, tool calling will still be quite tricky
@krammnic Other than the lack of tool calls in the tt Message class, are there any other reasons why tool calling will be tricky?
Probably not.
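To make the difficulty concrete: tool-calling chat templates generally consume structured fields beyond role/content, which torchtune's Message doesn't carry today. A rough illustration follows; the field names track common Hugging Face conventions but vary between models, so treat them as assumptions.

```python
# An assistant turn that issues a tool call, as many HF chat templates expect it.
tool_call_turn = {
    "role": "assistant",
    "tool_calls": [
        {
            "type": "function",
            "function": {"name": "get_weather", "arguments": {"city": "Paris"}},
        }
    ],
}

# The tool's response comes back as its own turn.
tool_result_turn = {"role": "tool", "content": '{"temp_c": 21}'}

# The simple {"role": ..., "content": ...} mapping used in this PR would drop
# the extra fields, so tool turns can't round-trip through Message yet.
```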
if content := token_info.get("content"):
    special_tokens.add(content)
# We sort lexicographically in order to get real tokens after all <|dummy_x|> |
Sorry I don't fully understand this comment. I assume this is referring to reserved special tokens? If so, why is string sort the thing to use here?
We can probably drop it; it might just simplify debugging in case we face problems with new configs.
self.base_tokenizer = HuggingFaceBaseTokenizer(
    tokenizer_json_path=tokenizer_json_path,
    tokenizer_config_json_path=tokenizer_config_json_path,
    generation_config_path=generation_config_path,
)
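One hedged example of how the three paths could be produced, using huggingface_hub purely for illustration; the repo id is hypothetical, and not every model repo ships all three files.

```python
from huggingface_hub import hf_hub_download

repo_id = "some-org/some-model"  # hypothetical repo
tokenizer_json_path = hf_hub_download(repo_id, filename="tokenizer.json")
tokenizer_config_json_path = hf_hub_download(repo_id, filename="tokenizer_config.json")
generation_config_path = hf_hub_download(repo_id, filename="generation_config.json")
```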
I know @joecummings had some thoughts on whether we should use a generic base_tokenizer instead of constraining to HuggingFaceBaseTokenizer. I suspect the latter is better for making sure everything works together, but I know at least Qwen2Tokenizer still relies on the merges + vocab files instead of the tokenizer.json file (I alluded to this at the very bottom of #2706). So we should figure out if this will work for that case.
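If the generic route wins out, here is one sketch of what a looser constraint could look like. This is not an existing torchtune interface, just an illustration of the trade-off being discussed.

```python
from typing import List, Protocol

class BaseTokenizerLike(Protocol):
    """Minimal surface the model tokenizer relies on (illustrative only)."""

    def encode(self, text: str, **kwargs) -> List[int]: ...

    def decode(self, token_ids: List[int], **kwargs) -> str: ...

# A merges+vocab-based tokenizer like Qwen2Tokenizer could satisfy this protocol
# even though it never reads tokenizer.json, whereas annotating the argument as
# HuggingFaceBaseTokenizer would rule it out.
```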
{"role": m.role, "content": m.content[0]["content"]} for m in messages | ||
], | ||
add_generation_prompt=add_eos, | ||
**special_tokens_mapping, # We assume that the naming is consitent |
Yeah I think this should be a reasonable assumption (as long as we are also getting the special_tokens from the same place as the template)
Context
What is the purpose of this PR? Is it to
Please link to any issues this PR addresses.
Changelog
What are the changes made in this PR?
Test plan
Please make sure to do each of the following if applicable to your PR. If you're unsure about any one of these just ask and we will happily help. We also have a contributing page for some guidance on contributing.
pre-commit install
pytest tests
pytest tests -m integration_test
UX
If your function changed a public API, please add a dummy example of what the user experience will look like when calling it.
Here is a docstring example
and a tutorial example
Basically, this is a first pass (I'm still thinking about how to add masking), but the Jinja render works surprisingly well.
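On the masking point, one possible direction (purely a sketch of an assumption, not what this PR implements): render and encode each message separately and extend the mask using the message's masked flag, similar to how existing torchtune tokenizers build theirs.

```python
# Hypothetical per-message tokenization loop; render_single_message and the
# exact encode() behavior are assumptions for illustration.
tokens, mask = [], []
for message in messages:
    rendered = render_single_message(message)  # hypothetical helper
    ids = self.base_tokenizer.encode(rendered)
    tokens.extend(ids)
    mask.extend([message.masked] * len(ids))   # masked=True for turns excluded from the loss
```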