Error 950 near the end of the simulation #75
LeoDanninger asked this question in Q&A
Dear colleagues,
I would like to kindly ask for your assistance once again regarding my simulations using DAMASK.
In short, I am applying a tensile load to a sample at a fixed strain rate. In my initial attempts, I used relatively small grids (ranging from 16×16×16 up to 64×64×64), and the simulations ran without issues. However, when increasing the grid size to 128×128×128, I began encountering error 402. After consulting the GitHub forum, I found that this error might be associated with large deformations, and one suggested workaround was to enable regridding.
Although the applied deformation is not particularly large (engineering strain ≈ 0.12), I enabled regridding and, after some effort to figure out how to correctly restart the simulation, I managed to do so successfully (https://github.com/damask-multiphysics/DAMASK/discussions/72). Regridding seemed to partially solve the issue, as error 402 no longer appeared. However, I then began encountering error 950 ("max number of cutbacks exceeded, terminating").
To address this new error, I tried increasing the total number of time steps N, and also experimented with different values for N_cutback_max and N_iter_max in the numerics.yaml file. I tested both increasing and decreasing N_cutback_max (as I’m still unsure about the exact implications of this parameter). The configuration that yielded the best progress so far involved subdividing the load steps to trigger regridding more frequently, and using N_cutback_max = 2 and N_iter_max = 150. Attempts to reduce N_cutback_max further (to 1 and 0) led to the same termination. The error message from the last attempt is saved in the file Teste2_f.log.
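For reference, a minimal sketch of the two settings discussed above, as used in the best-performing attempt, is shown below. The key names and values are the ones stated; the nesting under the grid-solver section is my assumption and should be checked against the reference numerics.yaml shipped with the installed DAMASK version.

```yaml
# Minimal sketch of the numerics settings discussed above
# (best-performing attempt). Nesting is assumed to follow the
# grid-solver section of the reference numerics.yaml.
solver:
  grid:
    N_iter_max: 150     # maximum solver iterations per increment
    N_cutback_max: 2    # maximum number of time-step cutbacks;
                        # exceeding it presumably triggers error 950
```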
Below are the commands I used to run the simulation and its restarts:
mpiexec -n 16 DAMASK_grid -j Ti_Teste2 -l tensionX_1e-3_a.yaml -g poly_Ti_100_128x128x128.vti -m material.yaml -n numerics.yaml 2>&1 | tee Teste2_a.log
mpiexec -n 16 DAMASK_grid -j Ti_Teste2 -l tensionX_1e-3_b.yaml -g poly_Ti_100_128x128x128.vti -m material.yaml -n numerics.yaml -r 600 2>&1 | tee Teste2_b.log
mpiexec -n 16 DAMASK_grid -j Ti_Teste2 -l tensionX_1e-3_c.yaml -g poly_Ti_100_128x128x128.vti -m material.yaml -n numerics.yaml -r 800 2>&1 | tee Teste2_c.log
mpiexec -n 16 DAMASK_grid -j Ti_Teste2 -l tensionX_1e-3_d.yaml -g poly_Ti_100_128x128x128.vti -m material.yaml -n numerics.yaml -r 900 2>&1 | tee Teste2_d.log
mpiexec -n 16 DAMASK_grid -j Ti_Teste2 -l tensionX_1e-3_e.yaml -g poly_Ti_100_128x128x128.vti -m material.yaml -n numerics.yaml -r 1000 2>&1 | tee Teste2_e.log
mpiexec -n 16 DAMASK_grid -j Ti_Teste2 -l tensionX_1e-3_f.yaml -g poly_Ti_100_128x128x128.vti -m material.yaml -n numerics.yaml -r 1100 2>&1 | tee Teste2_f.log
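For orientation, each of the tensionX_1e-3_* files follows the usual DAMASK grid load-case layout, roughly as sketched below. The numbers here are illustrative placeholders rather than the values from the attached archive, and the regridding-related settings of the actual files are not reproduced.

```yaml
# Illustrative sketch of one tensile load step (placeholder values,
# not copied from the attached files); 'x' marks unconstrained components.
solver:
  mechanical: spectral_basic
loadstep:
  - boundary_conditions:
      mechanical:
        dot_F: [[1.0e-3, 0, 0],
                [0, x, 0],
                [0, 0, x]]
        P: [[x, x, x],
            [x, 0, x],
            [x, x, 0]]
    discretization:
      t: 60          # step duration in seconds (placeholder)
      N: 600         # number of increments (placeholder)
    f_out: 100       # write results every 100 increments (kept high, see P.S.1 below)
    f_restart: 100   # write restart data every 100 increments
```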
The input files for the simulation, along with the log files containing the error messages, are attached below for your reference:
Teste2_Frag.zip
What I am trying to understand is whether there is a flaw in my setup and, more specifically, what error 950 actually means and how best to avoid or resolve it. Additionally, I would greatly appreciate any insight into the practical meaning of N_cutback_max: how does increasing or decreasing it affect the simulation behavior?
P.S.1: The f_out value is set relatively high because I am only interested in a few output points, and smaller values generated excessively large output files.
P.S.2: The 2>&1 | tee command is only used to save the logs. The errors occurred regardless of whether this command was used.
Thank you very much in advance for your time and support. I truly appreciate all the guidance shared in this forum.
Best regards,
Leo R D