Replies: 1 comment
You should be passing a list of `TokensPrompt` objects, one per prompt, rather than nesting the token lists inside a single `TokensPrompt`. `prompt_token_ids` expects a flat list of ints for a single sequence, which is why the nested lists trigger that `TypeError`.
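A minimal sketch of what that would look like with vLLM's offline `LLM` API (the model name, token IDs, and sampling settings below are placeholders, not from the original thread):

```python
from vllm import LLM, SamplingParams
from vllm.inputs import TokensPrompt

# Placeholder model; substitute whatever model you are actually running.
llm = LLM(model="facebook/opt-125m")
sampling_params = SamplingParams(max_tokens=16)

# One TokensPrompt per sequence, each with a flat list of token IDs.
batch = [
    TokensPrompt(prompt_token_ids=[1, 2]),
    TokensPrompt(prompt_token_ids=[1, 2]),
]

# generate() accepts a list of prompts and batches them internally,
# just as it does for a list of strings.
outputs = llm.generate(batch, sampling_params)
for out in outputs:
    print(out.outputs[0].text)
```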
Does anyone know how to enable batched inference using a `TokensPrompt` as input instead of text? Declaring a list of token-ID lists, the way you would pass a list of strings, doesn't work. For example:

```python
tp = TokensPrompt({"prompt_token_ids": [[1, 2], [1, 2]]})
model.generate(tp, sampling_params)
```

raises `TypeError: '>' not supported between instances of 'list' and 'int'`. However, passing a list of strings as input runs batched inference correctly.

Thanks!