Rope_scaling not implemented. Issue using deepseek-ai/deepseek-coder-6.7b-instruct
#439
Comments
We have seen similar behavior with togethercomputer/LLaMA-2-7B-32K. You can also replicate the example above with this code:
Output:
Could this be the effect of RoPE scaling?
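The snippets and outputs from this comment were not preserved in the thread. As a minimal illustrative sketch (not the original code), the rope_scaling entry that each model declares in its config.json can be inspected directly from the Hub, since that entry is the feature reported as unimplemented here:

```python
import json

from huggingface_hub import hf_hub_download

# Illustrative check: print the rope_scaling entry (if any) declared in each
# model's config.json. Both models discussed in this issue are expected to
# rely on RoPE scaling for their extended context length.
for model_id in (
    "deepseek-ai/deepseek-coder-6.7b-instruct",
    "togethercomputer/LLaMA-2-7B-32K",
):
    config_path = hf_hub_download(repo_id=model_id, filename="config.json")
    with open(config_path) as f:
        config = json.load(f)
    print(model_id, "->", config.get("rope_scaling"))
```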
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Thank you!
I'm running into the same issue. Are there any updates?
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
I am using the newest AMI image from yesterday, with optimum-neuron 0.0.17 (https://aws.amazon.com/marketplace/pp/prodview-gr3e6yiscria2). I have not tried another image yet.
I am trying to evaluate
AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-6.7b-instruct")
While using torch on CPU I get the following output; for the Neuron equivalent I get "\n" * 512, i.e. nothing but newlines up to the 512-token sequence length.

Reproduction script:
Output:
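The reproduction script and its output were not captured above. A minimal sketch of such a CPU-versus-Neuron comparison, assuming the NeuronModelForCausalLM export interface from optimum-neuron and an illustrative prompt (both are assumptions, not the reporter's original script):

```python
from optimum.neuron import NeuronModelForCausalLM
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/deepseek-coder-6.7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Illustrative prompt; the reporter's original prompt is not shown in the issue.
prompt = "Write a Python function that returns the n-th Fibonacci number."
inputs = tokenizer(prompt, return_tensors="pt")

# Reference run on CPU with plain transformers: generates sensible text.
cpu_model = AutoModelForCausalLM.from_pretrained(model_id)
cpu_out = cpu_model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(cpu_out[0], skip_special_tokens=True))

# Same checkpoint compiled for Neuron. The model config's rope_scaling entry
# is what this issue reports as not implemented, and the observed output here
# is only repeated "\n" tokens.
neuron_model = NeuronModelForCausalLM.from_pretrained(
    model_id,
    export=True,
    batch_size=1,
    sequence_length=512,
    num_cores=2,
    auto_cast_type="fp16",
)
neuron_out = neuron_model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(neuron_out[0], skip_special_tokens=True))
```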