Hi, thank you for your awesome work!
I noticed that in the recent update of Latent.ipynb, the preprocessing step now applies per-image normalization (normalizing each image by its own mean and std). In earlier versions, however, preprocessing used ImageNet mean/std normalization, and judging from main_finetune.py, the training code still appears to rely on ImageNet normalization.
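Just to make sure we're talking about the same two options, here is roughly what I mean (a sketch in NumPy, not the repo's actual code; the ImageNet statistics are the standard torchvision values):

```python
import numpy as np

# Standard ImageNet channel statistics (as used by torchvision transforms).
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def normalize_imagenet(img):
    """Fixed dataset-level normalization, as main_finetune.py seems to do.

    `img` is an HxWx3 float array in [0, 1].
    """
    return (img - IMAGENET_MEAN) / IMAGENET_STD

def normalize_per_image(img, eps=1e-6):
    """Per-image normalization, as in the updated Latent.ipynb:
    each image is normalized by its own mean and std."""
    return (img - img.mean()) / (img.std() + eps)

# Example: per-image normalization yields ~zero mean and ~unit std
# for every image, regardless of dataset statistics.
rng = np.random.default_rng(0)
img = rng.random((4, 4, 3))
out = normalize_per_image(img)
```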
I mainly want to make sure I follow the correct preprocessing pipeline. Should new runs
stick with ImageNet normalization, or switch to per-image normalization?
If, based on your experiments, the difference isn't significant, I'd be glad to know that as well.
Thanks again for releasing this!