The question about running lstm.sh #23

Open
speaker-lover opened this issue Jun 11, 2021 · 2 comments

@speaker-lover

When I run lstm.sh, which is in eend/egs/mini_librispeech/v1/local/run_blstm, I get this UserWarning:
shared memory size is too small.
Please set shared_mem option for MultiprocessIterator.
Expect shared memory size: 4298700 bytes.
Actual shared memory size: 3118900 bytes.
But I am using a V100 with 16 GB of memory!
The error information is as follows.

Exception in main training loop: 'cupy.cuda.memory.MemoryPointer' object is not iterable
Traceback (most recent call last):
File "/home/chenyafeng.cyf/EEND-master/tools/miniconda3/envs/eend/lib/python3.7/site-packages/chainer/training/trainer.py", line 316, in run
update()
File "/home/chenyafeng.cyf/EEND-master/tools/miniconda3/envs/eend/lib/python3.7/site-packages/chainer/training/updaters/standard_updater.py", line 175, in update
self.update_core()
File "/home/chenyafeng.cyf/EEND-master/tools/miniconda3/envs/eend/lib/python3.7/site-packages/chainer/training/updaters/standard_updater.py", line 189, in update_core
optimizer.update(loss_func, **in_arrays)
File "/home/chenyafeng.cyf/EEND-master/tools/miniconda3/envs/eend/lib/python3.7/site-packages/chainer/optimizer.py", line 825, in update
loss = lossfun(*args, **kwds)
File "/home/chenyafeng.cyf/EEND-master/eend/chainer_backend/models.py", line 588, in call
F.stack([dc_loss(em, t) for (em, t) in zip(ems, ts)]))
File "/home/chenyafeng.cyf/EEND-master/eend/chainer_backend/models.py", line 588, in
F.stack([dc_loss(em, t) for (em, t) in zip(ems, ts)]))
File "/home/chenyafeng.cyf/EEND-master/eend/chainer_backend/models.py", line 333, in dc_loss
[int(''.join(str(x) for x in t), base=2) for t in label.data]] = 1
File "/home/chenyafeng.cyf/EEND-master/eend/chainer_backend/models.py", line 333, in dc_loss
[int(''.join(str(x) for x in t), base=2) for t in label.data]] = 1
Will finalize trainer extensions and updater before reraising the exception.
Traceback (most recent call last):
File "../../../eend/bin/train.py", line 82, in
train(args)
File "/home/chenyafeng.cyf/EEND-master/eend/chainer_backend/train.py", line 223, in train
trainer.run()
File "/home/chenyafeng.cyf/EEND-master/tools/miniconda3/envs/eend/lib/python3.7/site-packages/chainer/training/trainer.py", line 349, in run
six.reraise(*exc_info)
File "/home/chenyafeng.cyf/EEND-master/tools/miniconda3/envs/eend/lib/python3.7/site-packages/six.py", line 719, in reraise
raise value
File "/home/chenyafeng.cyf/EEND-master/tools/miniconda3/envs/eend/lib/python3.7/site-packages/chainer/training/trainer.py", line 316, in run
update()
File "/home/chenyafeng.cyf/EEND-master/tools/miniconda3/envs/eend/lib/python3.7/site-packages/chainer/training/updaters/standard_updater.py", line 175, in update
self.update_core()
File "/home/chenyafeng.cyf/EEND-master/tools/miniconda3/envs/eend/lib/python3.7/site-packages/chainer/training/updaters/standard_updater.py", line 189, in update_core
optimizer.update(loss_func, **in_arrays)
File "/home/chenyafeng.cyf/EEND-master/tools/miniconda3/envs/eend/lib/python3.7/site-packages/chainer/optimizer.py", line 825, in update
loss = lossfun(*args, **kwds)
File "/home/chenyafeng.cyf/EEND-master/eend/chainer_backend/models.py", line 588, in call
F.stack([dc_loss(em, t) for (em, t) in zip(ems, ts)]))
File "/home/chenyafeng.cyf/EEND-master/eend/chainer_backend/models.py", line 588, in
F.stack([dc_loss(em, t) for (em, t) in zip(ems, ts)]))
File "/home/chenyafeng.cyf/EEND-master/eend/chainer_backend/models.py", line 333, in dc_loss
[int(''.join(str(x) for x in t), base=2) for t in label.data]] = 1
TypeError: 'cupy.cuda.memory.MemoryPointer' object is not iterable

What should I do? I have no idea how to solve it.

@DiLiangWU

I have the same problem when I run local/run_blstm.sh. Have you solved it yet?

TypeError: 'cupy.cuda.memory.MemoryPointer' object is not iterable

@DiLiangWU

I may have solved the error [TypeError: 'cupy.cuda.memory.MemoryPointer' object is not iterable]. You need to change "for t in label.data" to "for t in label" in eend/chainer_backend/models.py, line 333.
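For reference, here is a minimal sketch of what that comprehension computes, using toy NumPy data and assumed variable names (not the repo's actual labels). My reading is that on GPU, label is a cupy.ndarray, whose .data attribute is a raw MemoryPointer, which is why iterating over label.data fails while iterating over label itself works:

```python
import numpy as np

# Hedged, minimal illustration of the pattern at models.py line 333 (toy data,
# not the repo's actual labels): each binary speaker-activity row is packed
# into an integer index.
label = np.array([[0, 1],
                  [1, 0],
                  [1, 1]], dtype=np.int32)

# Iterating over the array itself yields its rows, as intended.
indices = [int(''.join(str(x) for x in t), base=2) for t in label]
print(indices)  # [1, 2, 3]

# On GPU, `label` is a cupy.ndarray, and `label.data` is a
# cupy.cuda.memory.MemoryPointer rather than an array, so iterating over it
# raises "TypeError: 'cupy.cuda.memory.MemoryPointer' object is not iterable".
```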

But I still get the same UserWarning as you. The warning information is:
shared memory size is too small.
Please set shared_mem option for MultiprocessIterator.
Expect shared memory size: 2085180 bytes.
Actual shared memory size: 1643796 bytes.
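A possible (untested) workaround for that warning is to pass an explicit shared_mem size, at least as large as the reported "Expect shared memory size", where the MultiprocessIterator is constructed in eend/chainer_backend/train.py. The sketch below uses placeholder names for the dataset and batch size, not the repo's actual code:

```python
from chainer.iterators import MultiprocessIterator

# Untested sketch: give each prefetch slot a fixed shared-memory buffer that is
# comfortably larger than the "Expect shared memory size" in the warning
# (about 2.1 MB here, about 4.3 MB in the original report).
# `train_dataset` and `batchsize` are placeholders, not the repo's actual names.
train_iter = MultiprocessIterator(
    train_dataset,
    batchsize,
    n_processes=4,               # number of worker processes; adjust as needed
    shared_mem=8 * 1024 * 1024,  # 8 MiB per slot, in bytes
)
```

Note that this shared memory is the host-side buffer MultiprocessIterator uses to pass batches between worker processes, not GPU memory, so the V100's 16 GB is unrelated. As far as I understand, the warning only means Chainer falls back to a slower transfer path for oversized batches, so it should not affect results.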
