
Conversation

@eplatero97

DATA PARALLEL

When I trained the FaceNet model using your repository, the default batch size of 128 required more memory than a single GPU could provide.

To fix this, I wrapped the model in nn.DataParallel, which not only makes it possible to train with a batch size of 128 by splitting each batch across GPUs, but also speeds up training.
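For reference, the wrapping pattern looks roughly like this. This is a minimal sketch, not the repository's actual training code: the `nn.Sequential` stand-in and its layer sizes are hypothetical placeholders for the real FaceNet model, which produces 128-dimensional embeddings.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the FaceNet model (the real repo defines its own class).
model = nn.Sequential(
    nn.Linear(160, 512),
    nn.ReLU(),
    nn.Linear(512, 128),  # 128-d embedding, as in FaceNet
)

# Wrap the model so each forward pass splits the batch across all visible GPUs.
# With no (or one) GPU, DataParallel simply runs the wrapped module as-is.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# A batch of 128 is sharded across devices (e.g. 64 per GPU with 2 GPUs),
# so per-GPU memory use drops while the effective batch size stays 128.
batch = torch.randn(128, 160, device=device)
embeddings = model(batch)
print(embeddings.shape)  # torch.Size([128, 128])
```

Note that `nn.DataParallel` replicates the model on every forward pass; for multi-node or higher-throughput setups, PyTorch recommends `DistributedDataParallel` instead, but for a single machine this one-line change is the simplest fix.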
