When I run lstm.sh in eend/egs/mini_librispeech/v1/local/run_blstm, I get the UserWarning: shared memory size is too small.
Please set shared_mem option for MultiprocessIterator.
Expect shared memory size: 4298700 bytes.
Actual shared memory size: 3118900 bytes.
But I am using a V100 with 16 GB of memory!
The error information is as follows.
Exception in main training loop: 'cupy.cuda.memory.MemoryPointer' object is not iterable
Traceback (most recent call last):
File "/home/chenyafeng.cyf/EEND-master/tools/miniconda3/envs/eend/lib/python3.7/site-packages/chainer/training/trainer.py", line 316, in run
update()
File "/home/chenyafeng.cyf/EEND-master/tools/miniconda3/envs/eend/lib/python3.7/site-packages/chainer/training/updaters/standard_updater.py", line 175, in update
self.update_core()
File "/home/chenyafeng.cyf/EEND-master/tools/miniconda3/envs/eend/lib/python3.7/site-packages/chainer/training/updaters/standard_updater.py", line 189, in update_core
optimizer.update(loss_func, **in_arrays)
File "/home/chenyafeng.cyf/EEND-master/tools/miniconda3/envs/eend/lib/python3.7/site-packages/chainer/optimizer.py", line 825, in update
loss = lossfun(*args, **kwds)
File "/home/chenyafeng.cyf/EEND-master/eend/chainer_backend/models.py", line 588, in call
F.stack([dc_loss(em, t) for (em, t) in zip(ems, ts)]))
File "/home/chenyafeng.cyf/EEND-master/eend/chainer_backend/models.py", line 588, in
F.stack([dc_loss(em, t) for (em, t) in zip(ems, ts)]))
File "/home/chenyafeng.cyf/EEND-master/eend/chainer_backend/models.py", line 333, in dc_loss
[int(''.join(str(x) for x in t), base=2) for t in label.data]] = 1
File "/home/chenyafeng.cyf/EEND-master/eend/chainer_backend/models.py", line 333, in dc_loss
[int(''.join(str(x) for x in t), base=2) for t in label.data]] = 1
Will finalize trainer extensions and updater before reraising the exception.
Traceback (most recent call last):
File "../../../eend/bin/train.py", line 82, in
train(args)
File "/home/chenyafeng.cyf/EEND-master/eend/chainer_backend/train.py", line 223, in train
trainer.run()
File "/home/chenyafeng.cyf/EEND-master/tools/miniconda3/envs/eend/lib/python3.7/site-packages/chainer/training/trainer.py", line 349, in run
six.reraise(*exc_info)
File "/home/chenyafeng.cyf/EEND-master/tools/miniconda3/envs/eend/lib/python3.7/site-packages/six.py", line 719, in reraise
raise value
File "/home/chenyafeng.cyf/EEND-master/tools/miniconda3/envs/eend/lib/python3.7/site-packages/chainer/training/trainer.py", line 316, in run
update()
File "/home/chenyafeng.cyf/EEND-master/tools/miniconda3/envs/eend/lib/python3.7/site-packages/chainer/training/updaters/standard_updater.py", line 175, in update
self.update_core()
File "/home/chenyafeng.cyf/EEND-master/tools/miniconda3/envs/eend/lib/python3.7/site-packages/chainer/training/updaters/standard_updater.py", line 189, in update_core
optimizer.update(loss_func, **in_arrays)
File "/home/chenyafeng.cyf/EEND-master/tools/miniconda3/envs/eend/lib/python3.7/site-packages/chainer/optimizer.py", line 825, in update
loss = lossfun(*args, **kwds)
File "/home/chenyafeng.cyf/EEND-master/eend/chainer_backend/models.py", line 588, in call
F.stack([dc_loss(em, t) for (em, t) in zip(ems, ts)]))
File "/home/chenyafeng.cyf/EEND-master/eend/chainer_backend/models.py", line 588, in
F.stack([dc_loss(em, t) for (em, t) in zip(ems, ts)]))
File "/home/chenyafeng.cyf/EEND-master/eend/chainer_backend/models.py", line 333, in dc_loss
[int(''.join(str(x) for x in t), base=2) for t in label.data]] = 1
TypeError: 'cupy.cuda.memory.MemoryPointer' object is not iterable
What should I do? I have no idea how to solve it.
Maybe I have solved the error [TypeError: 'cupy.cuda.memory.MemoryPointer' object is not iterable]. You need to change "for t in label.data" to "for t in label" in eend/chainer_backend/models.py, line 333.
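For reference, a minimal before/after sketch of that change, showing only the fragment visible in the traceback (the enclosing assignment in dc_loss is omitted). The error presumably occurs because label is already a cupy.ndarray on GPU, and ndarray.data is a cupy.cuda.MemoryPointer rather than the array contents:

# eend/chainer_backend/models.py, around line 333, inside dc_loss
# before: iterating over .data raises the TypeError on GPU
[int(''.join(str(x) for x in t), base=2) for t in label.data]
# after: iterate over the array/Variable itself, as suggested above
[int(''.join(str(x) for x in t), base=2) for t in label]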
But I still get the same UserWarning as you. The warning information is:
shared memory size is too small.
Please set shared_mem option for MultiprocessIterator.
Expect shared memory size: 2085180 bytes.
Actual shared memory size: 1643796 bytes.
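In case it helps, this warning is about the shared-memory buffer that Chainer's MultiprocessIterator uses to pass examples between worker processes, not GPU memory, so the V100's 16 GB is unrelated. Below is a minimal sketch of what the warning asks for, assuming the iterator is created in eend/chainer_backend/train.py; the dataset and batch-size variable names are placeholders, not the repo's actual names:

from chainer.iterators import MultiprocessIterator

# shared_mem is the number of bytes reserved per example for inter-process
# transfer; set it at least as large as the "Expect shared memory size"
# reported in the warning (e.g. 4298700 bytes), here rounded up to 8 MiB.
train_iter = MultiprocessIterator(
    train_set,              # placeholder dataset variable
    batch_size=batchsize,   # placeholder batch-size variable
    shared_mem=8 * 1024 * 1024,
)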