
[WIP] HuggingFaceModelTokenizer #2723


Open
krammnic wants to merge 8 commits into main

Conversation

krammnic
Contributor

Context

What is the purpose of this PR? Is it to

  • add a new feature
  • fix a bug
  • update tests and/or documentation
  • other (please add here)

Please link to any issues this PR addresses.

Changelog

What are the changes made in this PR?

Test plan

Please make sure to do each of the following if applicable to your PR. If you're unsure about any one of these just ask and we will happily help. We also have a contributing page for some guidance on contributing.

  • run pre-commit hooks and linters (make sure you've first installed via pre-commit install)
  • add unit tests for any new functionality
  • update docstrings for any new or updated methods or classes
  • run unit tests via pytest tests
  • run recipe tests via pytest tests -m integration_test
  • manually run any new or modified recipes with sufficient proof of correctness
  • include relevant commands and any other artifacts in this summary (pastes of loss curves, eval results, etc.)

UX

If your function changed a public API, please add a dummy example of what the user experience will look like when calling it.
Here is a docstring example
and a tutorial example

  • I did not change any public API
  • I have added an example to docs or docstrings

Basically, this is a first pass (I'm still thinking about how to add masking), but Jinja rendering works surprisingly well.
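For illustration, here is a minimal sketch (not this PR's implementation) of rendering a Hugging Face chat_template with jinja2; the file path, example messages, and special-token strings below are placeholders:

import json
from jinja2 import Template

# tokenizer_config.json ships the chat template as a Jinja string.
with open("tokenizer_config.json") as f:
    config = json.load(f)

template = Template(config["chat_template"])

# Render a toy conversation; real templates may expect extra variables
# (e.g. bos_token / eos_token) depending on the model.
rendered = template.render(
    messages=[
        {"role": "user", "content": "Hello!"},
        {"role": "assistant", "content": "Hi there."},
    ],
    add_generation_prompt=False,
    bos_token="<|begin_of_text|>",
    eos_token="<|end_of_text|>",
)
print(rendered)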


pytorch-bot bot commented May 12, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/2723

Note: Links to docs will display an error until the docs builds have been completed.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label May 12, 2025
ebsmothers (Contributor) left a comment


Thanks @krammnic for taking this one on! This will be huge for lowering the barrier to onboard new models. Let's definitely make sure to add unit tests for this one. (You can likely create some dummy tokenizer_config.json files and check them directly into the repo, since they should be pretty small.)
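To make the suggestion concrete, a hedged sketch of what such a test could look like with a tiny hand-written config; the file contents, asserted output, and the use of jinja2 directly (rather than the new tokenizer class) are all placeholders:

import json

from jinja2 import Template


def test_dummy_chat_template_renders(tmp_path):
    # Dummy tokenizer_config.json with a trivial chat template.
    config = {
        "chat_template": (
            "{% for m in messages %}"
            "<|{{ m['role'] }}|>{{ m['content'] }}"
            "{% endfor %}"
        )
    }
    config_path = tmp_path / "tokenizer_config.json"
    config_path.write_text(json.dumps(config))

    # The real test would construct HuggingFaceModelTokenizer from this file;
    # here we only check that the dummy template renders as expected.
    loaded = json.loads(config_path.read_text())
    out = Template(loaded["chat_template"]).render(
        messages=[{"role": "user", "content": "hi"}]
    )
    assert out == "<|user|>hi"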

special_tokens_mapping = {}
for token in self.special_tokens:
    special_tokens_mapping[token] = self.base_tokenizer.encode(token)
rendered_template = self.template.render(
Contributor

Wow this actually wound up being quite easy lol

Contributor Author

Unfortunately, tool calling will still be quite tricky

@krammnic Other than the lack of tool calls in the tt Message class, are there any other reasons why tool calling will be tricky?

Contributor Author

Probably not.

if content := token_info.get("content"):
    special_tokens.add(content)

# We sort lexicographically in order to get real tokens after all <|dummy_x|>
Contributor

Sorry I don't fully understand this comment. I assume this is referring to reserved special tokens? If so, why is string sort the thing to use here?

Contributor Author

We can probably drop it; it might just simplify debugging in case we face problems with new configs.

Comment on lines +203 to +207
self.base_tokenizer = HuggingFaceBaseTokenizer(
    tokenizer_json_path=tokenizer_json_path,
    tokenizer_config_json_path=tokenizer_config_json_path,
    generation_config_path=generation_config_path,
)
Contributor

I know @joecummings had some thoughts on whether we should use a generic base_tokenizer instead of constraining to use HuggingFaceBaseTokenizer. I suspect the latter is better for making sure everything works together, but I know at least Qwen2Tokenizer still relies on the merges + vocab files instead of the tokenizer.json file (I alluded to this at the very bottom of #2706). So we should figure out whether this will work for that case.
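For reference, a sketch of the two loading paths being compared, using the tokenizers library (file names are placeholders):

from tokenizers import Tokenizer
from tokenizers.models import BPE

# Single-file path consumed by HuggingFaceBaseTokenizer.
tok_from_json = Tokenizer.from_file("tokenizer.json")

# Separate vocab + merges files, which Qwen2Tokenizer still relies on.
tok_from_bpe = Tokenizer(BPE.from_file("vocab.json", "merges.txt"))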

{"role": m.role, "content": m.content[0]["content"]} for m in messages
],
add_generation_prompt=add_eos,
**special_tokens_mapping, # We assume that the naming is consitent
Contributor

Yeah I think this should be a reasonable assumption (as long as we are also getting the special_tokens from the same place as the template)

Labels: CLA Signed
Projects: None yet
4 participants