Move back MPI Init/Finalize to within HPC Virtualization #551
Open
Conversation
A top-level `MPI_Init` in XACC `Initialize()` is not ideal, since we may want to use an MPI-enabled backend without HPC Virtualization; a global `MPI_Init` could therefore be problematic. Hence, move it back within the scope of the HPCVirt decorator.

This also fixes an `MPI_Finalize` race condition when MPI-enabled ExaTN is present in the installation. ExaTN keeps an `exatnInitializedMPI` variable to decide whether it should perform the `MPI_Finalize` step; HPC Virt should use the same mechanism so that it does not finalize MPI prematurely and cause MPI errors when ExaTN's `Finalize()` subsequently calls MPI APIs.

Signed-off-by: Thien Nguyen <thien.md.nguyen@gmail.com>
Contributor
Author
@danclaudino Could you please review this PR? It is trying to fix MPI API usage after `MPI_Finalize` and an issue with double `MPI_Init` in some cases.