How should I put the model onto distributed devices if I have multiple optimization loops? #12909
Unanswered · NoTody asked this question in code help: CV
Hi guys,
I'm trying to incorporate adversarial weight perturbation (AWP, https://proceedings.neurips.cc/paper/2020/file/1ef91c212e30e14bf125e9374262401f-Paper.pdf) into my Lightning code. However, because AWP computes the adversarial weight perturbation with a second optimization loop over a proxy copy of the network, the models are not placed on the devices correctly in a distributed setting. How should I set this up correctly?
Specifically, how should I incorporate the `perturb` function from the file linked below when a second model also needs to be trained inside the training loop?
https://github.com/csdongxian/AWP/blob/main/AT_AWP/awp.py
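For context, the usage pattern of that file, per my reading of it (so the exact names `AdvWeightPerturb`, `calc_awp`, `perturb`, `restore`, and `gamma` may differ slightly), is roughly:

```python
import copy

import torch
import torch.nn as nn

from awp import AdvWeightPerturb  # the helper class defined in awp.py

model = nn.Linear(10, 2)          # stand-in network for illustration
proxy = copy.deepcopy(model)      # AWP trains this copy in its inner loop
proxy_optim = torch.optim.SGD(proxy.parameters(), lr=0.01)
awp = AdvWeightPerturb(model=model, proxy=proxy,
                       proxy_optim=proxy_optim, gamma=0.005)

x, y = torch.randn(4, 10), torch.randint(0, 2, (4,))
diff = awp.calc_awp(inputs_adv=x, targets=y)  # inner optimization on the proxy
awp.perturb(diff)                             # add perturbation to the real model
# ... ordinary forward/backward/optimizer step on the perturbed model ...
awp.restore(diff)                             # undo the perturbation afterwards
```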
Right now I'm doing something like the sketches below, but it doesn't work.
During initialization, in `def __init__(self, hparams, backbone)`:
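Roughly like this (a minimal sketch; the class name, learning rate, and `gamma` are placeholder values, and `AdvWeightPerturb` is the helper from the repo above):

```python
import copy

import torch
import pytorch_lightning as pl

from awp import AdvWeightPerturb  # from the AT_AWP repo linked above


class AWPModule(pl.LightningModule):
    def __init__(self, hparams, backbone):
        super().__init__()
        self.save_hyperparameters(hparams)
        self.model = backbone
        # Proxy copy used by AWP's inner loop. Registering it as a submodule
        # means Lightning should move it to the right device, but under DDP
        # its parameters also get wrapped/synced, which may be the problem.
        self.proxy = copy.deepcopy(backbone)
        self.proxy_optim = torch.optim.SGD(self.proxy.parameters(), lr=0.01)
        self.awp = AdvWeightPerturb(model=self.model, proxy=self.proxy,
                                    proxy_optim=self.proxy_optim, gamma=0.005)
        # Two interleaved optimization loops -> manual optimization.
        self.automatic_optimization = False
```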
During training, in the training loop (`training_step`):
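Again a sketch of what I'm doing, with the loss and logging simplified; I'm using manual optimization because of the two loops:

```python
import torch
import torch.nn.functional as F

# (continuing the LightningModule above)
def training_step(self, batch, batch_idx):
    opt = self.optimizers()
    x, y = batch

    # Inner loop: train the proxy copy to find the adversarial weight
    # perturbation, then apply it to the real model.
    diff = self.awp.calc_awp(inputs_adv=x, targets=y)
    self.awp.perturb(diff)

    # Outer loop: ordinary training step on the perturbed weights.
    loss = F.cross_entropy(self.model(x), y)
    opt.zero_grad()
    self.manual_backward(loss)
    opt.step()

    # Undo the perturbation before the next iteration.
    self.awp.restore(diff)
    self.log("train_loss", loss)

def configure_optimizers(self):
    return torch.optim.SGD(self.model.parameters(), lr=0.1)
```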
The error I get is: