Getting Embedding from CLIP model #1940
lelefontaa asked this question in Q&A (Unanswered)

Is there a way to save the embeddings of images generated by the CLIP model (in chat_handler) so that they can be reused without recalculating them?

Replies: 1 comment
Yup, definitely doable. You can cache the CLIP embeddings after generation: serialize the tensor to disk, then reload it the next time the same image comes in instead of re-encoding it. Here's a very quick sketch of the idea:

```python
import hashlib
import os

import torch


def get_cache_path(image_path):
    # Key the cache on a hash of the raw image bytes, not the filename.
    with open(image_path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    return f"./clip_cache/{digest}.pt"


def get_clip_embedding(image, image_path, model):
    # `image` is the preprocessed tensor the encoder expects; `image_path`
    # is the original file on disk, used only to build the cache key.
    cache_path = get_cache_path(image_path)
    if os.path.exists(cache_path):
        return torch.load(cache_path)  # cache hit: skip the encoder
    embedding = model.encode_image(image)  # cache miss: encode once
    os.makedirs(os.path.dirname(cache_path), exist_ok=True)
    torch.save(embedding, cache_path)
    return embedding
```

That'll save you time and GPU sanity.
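For context, here's a hypothetical usage example with the OpenAI `clip` package (just an assumption about which CLIP wrapper you're using; `photo.jpg` is a placeholder path, and `get_clip_embedding` is the helper sketched above):

```python
import clip
import torch
from PIL import Image

# get_clip_embedding / get_cache_path as defined in the sketch above.

image_path = "photo.jpg"  # placeholder; any local image file works

# clip.load returns the model plus its matching preprocessing transform.
model, preprocess = clip.load("ViT-B/32", device="cpu")
image_tensor = preprocess(Image.open(image_path)).unsqueeze(0)

with torch.no_grad():
    embedding = get_clip_embedding(image_tensor, image_path, model)

print(embedding.shape)  # e.g. torch.Size([1, 512]) for ViT-B/32
```

A second call with the same file hits the cache and never touches the encoder. On recent PyTorch versions you may also want `torch.load(cache_path, weights_only=True)` to avoid the pickle warning.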