How to overwrite checkpoints with ModelCheckpoint #15803
Unanswered
nsabir2011 asked this question in Lightning Trainer API: Trainer, LightningModule, LightningDataModule
Replies: 1 comment
- Any answer on this? This looks like a very fair feature request.
- I am currently trying to extract features from some images with ResNet18. After that, I register a buffer on the model (via `register_buffer`) and, after some computations, save the embeddings of the extracted features in that buffer. I run this for only one epoch and then stop (no actual training happens, only feature extraction). Later, I run inference and compare the embeddings; this is basically how I look for anomalies.
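For context, a minimal sketch of such a setup might look like the following. The module name, the buffer shape, and the assumption that the dataloader yields (image, index) pairs are illustrative, not taken from the original post:

```python
import torch
import torchvision
import pytorch_lightning as pl


class FeatureExtractor(pl.LightningModule):
    """Hypothetical module: runs a frozen ResNet18 backbone and
    accumulates per-image embeddings in a registered buffer."""

    def __init__(self, num_images: int):
        super().__init__()
        backbone = torchvision.models.resnet18(weights="DEFAULT")
        # Drop the classification head; keep the 512-d pooled features.
        self.encoder = torch.nn.Sequential(*list(backbone.children())[:-1])
        self.encoder.requires_grad_(False)
        # Buffers are saved in the checkpoint alongside the weights.
        self.register_buffer("embeddings", torch.zeros(num_images, 512))

    def training_step(self, batch, batch_idx):
        images, indices = batch
        feats = self.encoder(images).flatten(1)  # (B, 512)
        self.embeddings[indices] = feats  # store each image's embedding
        return None  # no loss returned, so no optimization takes place

    def configure_optimizers(self):
        return None  # nothing to train
```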
Currently, I save the weights like so:
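A minimal configuration that reproduces the behavior described below, assuming a fixed checkpoint filename (`dirpath` and `filename` here are assumed placeholders), would be:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

# A fixed filename, so every run tries to write the same file.
checkpoint_callback = ModelCheckpoint(
    dirpath="checkpoints/",  # assumed location
    filename="model",        # produces checkpoints/model.ckpt
)

trainer = Trainer(max_epochs=1, callbacks=[checkpoint_callback])
trainer.fit(model, datamodule=dm)  # `model` and `dm` defined elsewhere
```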
This way, a `model.ckpt` is saved, and on each subsequent training run a new weight file (e.g. `model-v1.ckpt`) is saved if a checkpoint already exists. However, I don't want to save a new file; I want to overwrite the previous one. How do I do that?
I have tried different combinations of arguments, such as `save_top_k=0`, `mode="min"`, `every_n_epochs=1`, and `save_on_train_epoch_end=True`, but none of them work. How do I achieve what I require?
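For reference: none of those arguments control file versioning. `save_top_k=0` disables checkpoint saving entirely, and `mode`, `every_n_epochs`, and `save_on_train_epoch_end` only affect which checkpoints are kept and when they are written. Two approaches that should give overwrite semantics are sketched below; the second depends on the `enable_version_counter` flag of `ModelCheckpoint`, which is only available in newer Lightning releases, and the paths are placeholders:

```python
import os

from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

# Option 1: skip the callback and save manually after fitting.
# Trainer.save_checkpoint simply overwrites an existing file.
trainer = Trainer(max_epochs=1, enable_checkpointing=False)
trainer.fit(model, datamodule=dm)  # `model` and `dm` defined elsewhere
trainer.save_checkpoint("checkpoints/model.ckpt")

# Option 2 (newer releases): disable the version counter so the callback
# overwrites model.ckpt instead of writing model-v1.ckpt.
checkpoint_callback = ModelCheckpoint(
    dirpath="checkpoints/",
    filename="model",
    enable_version_counter=False,  # assumes a version that has this flag
)

# On versions without that flag, removing the stale file before fitting
# has the same effect:
if os.path.exists("checkpoints/model.ckpt"):
    os.remove("checkpoints/model.ckpt")
```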