It looks like the tensorflow-datasets training iterator in the ImageNet example isn't applying any augmentation. In the PyTorch ImageNet example it's standard to apply random crops and flips during training. Is that happening somewhere else, or is it not necessary for some reason? I was looking for it because I want to design an experiment involving a different type of augmentation, so I was hoping to find an augmentation block I could modify. Another good reference might be the data augmentation pipeline in the vision transformer code. The Haiku ImageNet example might also be a good reference for an implementation of dataset augmentation, here.
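For concreteness, here is a rough sketch of the kind of block I was expecting to find, assuming a plain tf.data pipeline. This is not the example's actual code: the function name `train_preprocess`, the `IMAGE_SIZE` constant, and the crop parameters are my own assumptions, roughly mirroring torchvision's `RandomResizedCrop` defaults.

```python
import tensorflow as tf
import tensorflow_datasets as tfds

IMAGE_SIZE = 224  # assumed training resolution

def train_preprocess(features):
  """Hypothetical train-time augmentation: random resized crop + flip."""
  image = features['image']
  # Sample a crop box covering 8%-100% of the image area, similar to
  # torchvision's RandomResizedCrop defaults.
  begin, size, _ = tf.image.sample_distorted_bounding_box(
      tf.shape(image),
      bounding_boxes=tf.zeros([0, 0, 4], tf.float32),
      area_range=(0.08, 1.0),
      aspect_ratio_range=(3 / 4, 4 / 3),
      use_image_if_no_bounding_boxes=True)
  image = tf.slice(image, begin, size)
  image = tf.image.resize(image, [IMAGE_SIZE, IMAGE_SIZE])  # returns float32
  image = tf.image.random_flip_left_right(image)
  image = image / 255.0  # scale to [0, 1]; real pipelines often normalize too
  return {'image': image, 'label': features['label']}

ds = tfds.load('imagenet2012', split='train')
ds = ds.map(train_preprocess, num_parallel_calls=tf.data.AUTOTUNE)
ds = ds.batch(128).prefetch(tf.data.AUTOTUNE)
```

Something shaped like this is what I'd want to swap my own augmentation into.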
Replies: 1 comment
I found it, it's in input_pipeline.py, here.