Hi, thanks for your great work!

I noticed that the example code in the documentation uses `model.chat()` to generate text. However, I'm wondering whether I could use `model.generate()` instead, like other models on HuggingFace?
For example (this code snippet doesn't work, because the processor doesn't accept inputs in this form):
```python
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor, AutoTokenizer

path = 'OpenGVLab/InternVL2_5-8B'
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    use_flash_attn=True,
    trust_remote_code=True).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
processor = AutoProcessor.from_pretrained(path, trust_remote_code=True)

prompt = "Please describe the image shortly."
image = Image.open('../examples/image1.jpg')  # the small red panda

# This is where it fails: the processor doesn't accept images/text like this.
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=128)
print(f"[Output]: {processor.decode(generated_ids[0])}")
```