
Unified Llama 3 (8b,70b) + Safetensors support #169


Merged

merged 9 commits into pytorch-labs:main on Jun 26, 2024

Conversation

nivibilla
Contributor

@nivibilla nivibilla commented Apr 29, 2024

As discussed in #158

This PR unifies the support for Llama 3.

However, you must convert the model files from the safetensors format to the pytorch_model.bin format.

This can be done with:
model.save_pretrained('/llama-3-70b-instruct-hf-pt', safe_serialization=False)

I have pre-converted and uploaded pytorch_model.bin versions of llama-3-8b-instruct and llama-3-70b-instruct for use.
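
For reference, a minimal conversion sketch, assuming the transformers library and enough memory to load the checkpoint (the repo id and output path below are illustrative, not part of this PR):

# Minimal sketch: load the safetensors checkpoint with transformers and
# re-save it as pytorch_model.bin shards via safe_serialization=False.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-70B-Instruct",  # illustrative repo id
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
)
model.save_pretrained("/llama-3-70b-instruct-hf-pt", safe_serialization=False)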

UPDATE: Thanks to @jerrymannil for the safetensors tip. We can now load from safetensors directly and use the official Meta repos. In general, this also adds support for safetensors versions of other models, not just Llama 3.
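
For anyone curious, loading safetensors shards directly looks roughly like the sketch below, using the safetensors library (the file layout is illustrative; the actual conversion script also remaps the Hugging Face key names):

# Sketch: read safetensors shards straight into a state dict on CPU.
# The glob pattern is illustrative; key remapping is omitted here.
import glob
from safetensors.torch import load_file

state_dict = {}
for shard in sorted(glob.glob("Meta-Llama-3-8B-Instruct/*.safetensors")):
    state_dict.update(load_file(shard, device="cpu"))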

Some performance numbers on 8xA10
python generate.py --compile --checkpoint_path ./llama-3-8b-instruct-hf-pt/model.pth

# 70b TP8
Average tokens/sec: 21.79
Memory used: 21.66 GB/GPU

# 8b TP8
Average tokens/sec: 112.74
Memory used: 4.19 GB/GPU

# 8b NO_TP
Average tokens/sec: 34.06
Memory used: 16.43 GB/GPU
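
The TP numbers above come from the usual multi-GPU launch; a sketch, assuming the standard torchrun invocation from the README (checkpoint path as in the single-GPU command above):

# Tensor-parallel launch sketch (TP8); these are the stock torchrun flags,
# not something added by this PR.
torchrun --standalone --nproc_per_node=8 generate.py --compile --checkpoint_path ./llama-3-8b-instruct-hf-pt/model.pth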

Thanks @Artyom17 for the initial implementation (especially the tokenizer changes!)

@facebook-github-bot

Hi @nivibilla!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

@facebook-github-bot

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

@facebook-github-bot added the CLA Signed label (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) on Apr 29, 2024
@nivibilla
Contributor Author

This is ready for review @yanboliang @Chillee. It's a quick follow-up to the previous Llama 3 PR.

Thanks!

@nivibilla
Contributor Author

I've made a request in the official repos to have the pytorch_model.bin files so that this can be easier.

@Artyom17
Contributor

I've made a request in the official repos to have the pytorch_model.bin files so that this can be easier.

That would be awesome! And thanks for doing this!

@lightmatmul

How can we run the Llama 3 70b? I used your pre-converted repo and it cannot find the .pth file.

@nivibilla
Contributor Author

You have to convert it as usual, following the instructions in the README.
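
(For reference, the flow is roughly the sketch below; the script names and flags are assumed to match the repo README, and the repo id/paths are illustrative:)

# Rough sketch of the usual download + convert steps from the README.
python scripts/download.py --repo_id meta-llama/Meta-Llama-3-70B-Instruct
python scripts/convert_hf_checkpoint.py --checkpoint_dir checkpoints/meta-llama/Meta-Llama-3-70B-Instruct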

@lightmatmul

It works! Thanks. FYI, I'm getting 39 tokens/sec on 4xH100.

@nivibilla
Contributor Author

#158 (comment)

According to this, it should work with safetensors too, but I haven't had time to make the changes and test.

@jerrymannil

@nivibilla Do you have time to work on this soon?

@nivibilla
Contributor Author

nivibilla commented May 16, 2024

@jerrymannil sorry, I forgot about this. I've made the changes and tested with Llama 3 8b; it works fine. We no longer need the PyTorch-converted files. I've tested with the official Meta repo and am able to convert and run. In fact, this should allow all safetensors models, not just Llama 3.

Thanks @Artyom17 and @jerrymannil

This PR is ready to be reviewed/merged.

@nivibilla nivibilla changed the title unify llama 3 support Unified Llama 3 (8b,70b) support + Safetensors support May 16, 2024
@nivibilla nivibilla changed the title Unified Llama 3 (8b,70b) support + Safetensors support Unified Llama 3 (8b,70b) + Safetensors support May 16, 2024
@nivibilla
Contributor Author

@yanboliang @Chillee could you please have a look? Thanks.

Contributor Author

@nivibilla nivibilla left a comment


Fixed config matching bug


@jerrymannil jerrymannil left a comment


LGTM

@nivibilla
Contributor Author

@yanboliang @Chillee could you please have a look? Thanks.

@xavierpuigf

Hi, thanks for this PR and this codebase!

I tested this, and the model works well for short contexts but fails on longer (>500 token) contexts. I found results on par with the Hugging Face implementation with rope_base=50000 for both Llama 3 models. I would suggest changing the config here:

"llama-3-8b": dict(block_size=8192, n_layer=32, n_head=32, n_local_heads=8, dim=4096, intermediate_size=14336, vocab_size=128256),

@jerrymannil

Hi, thanks for this PR and this codebase!

I tested this, and the model works well for short contexts but fails on longer (>500 token) contexts. I found results on par with the Hugging Face implementation with rope_base=50000 for both Llama 3 models. I would suggest changing the config here:

"llama-3-8b": dict(block_size=8192, n_layer=32, n_head=32, n_local_heads=8, dim=4096, intermediate_size=14336, vocab_size=128256),

@nivibilla Can you look ? Thanks.

@nivibilla
Contributor Author

Thanks @xavierpuigf for testing, I've made the changes.

And thanks @jerrymannil for the ping, I must have missed this email.

@musab-mk

@xavierpuigf Hugging Face uses 500000 instead of the 50000 you suggested. Is there a reason why you used 50k instead of 500k?

@xavierpuigf

@xavierpuigf Hugging Face uses 500000 instead of the 50000 you suggested. Is there a reason why you used 50k instead of 500k?

That's right, I had a typo. rope_base should be 500K. Thanks for checking!

@nivibilla
Contributor Author

Updated
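
(For reference, the config entry quoted earlier then becomes roughly the following, with rope_base added and the other fields unchanged:)

"llama-3-8b": dict(block_size=8192, n_layer=32, n_head=32, n_local_heads=8, dim=4096, intermediate_size=14336, vocab_size=128256, rope_base=500000),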

@jerrymannil

jerrymannil commented Jun 25, 2024

@Chillee @yanboliang
Can one of you approve?

Contributor

@yanboliang yanboliang left a comment


LGTM, thanks!

@yanboliang yanboliang merged commit 091515a into pytorch-labs:main Jun 26, 2024
1 check passed