Not to merge - synthlayer #525
This is an experimental branch and should not be merged. It is not (yet) backwards compatible with other models since it hard-codes the nnunetv2 env, a flip before nnunet, and new tissue mappings. In principle each of these should be specified in a nnunet model config, but for now this is just a quick test of functionality.
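As a sketch of what "specified in a nnunet model config" might look like, here is a hypothetical per-model config capturing the options currently hard-coded in this branch. None of these key names or label IDs are real HippUnfold/nnunet config fields; they are purely illustrative:

```python
# Hypothetical model config (illustrative only; these keys and label IDs
# do not exist in HippUnfold or nnunet).
model_config = {
    "segmentation_env": "nnunetv2",    # instead of hard-coding the env
    "flip_lr_before_inference": True,  # the "flip before nnunet" step
    "tissue_label_mapping": {          # new tissue mappings, example IDs only
        1: "SRLM",
        2: "GM",
        3: "DG",
    },
}
```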
I trained the synthlayer model a while ago as part of a previous experiment. The basic idea is to use highly gyrified priors that come from the multihist7 ground truth (though note that 2 of these samples were relatively smooth). From this, we generate synthetic MRI images by
Here is an example segmentation (most labels are made invisible for clearer visualization):

This still shows amazing gyrification, but also some common issues that we normally resolve with template shape injection:
However, in many cases the template shape injection fails. In the above example, here is post-injection:

Notice that a lot of tissue was lost in the posterior, and though it's hard to see in this snapshot, there's a region in the head where the inferior side is thickened massively to span the superior side too. In addition, a lot of the gyrification definition is lost (as expected with template shape injection).
In retrospect I shouldn't be surprised that this level of detail breaks template shape injection, since injection also failed in most cases of the multihist7, which required extremely careful manual edits (sometimes just a few voxels) in order to unfold. Here we have a similar problem: there is (almost) as much gyrification, making the registration really tricky.
This PR also includes a new template shape that is layerified to match the nnunet segmentation. I had hoped that this would preserve better definition of the gyrification while still recovering/smoothing/regularizing all required labels, but it still seems to fail for the same reasons as above.
I did some testing under 3 conditions: no template shape injection, layerified template shape injection, and remapping nnunet to the original labels with the original upenn template shape injection. In all 3 cases there are frequent failures. In principle the best solution would be to train nnunet longer (it was still improving),

however, it seems unlikely to me that small labels like the dentate gyrus src/sink will always be present to the extent that template shape injection can be discarded entirely. Fixing template shape injection would be safer, but this is tough since the nnunet segmentations are so complex/folded in many cases (though the smoother participants generally worked fine). I'm not confident that just tweaking its parameters will solve the issue.
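For reference, the label-remapping condition above amounts to a simple lookup-table relabeling of the nnunet output volume. A minimal sketch with made-up label IDs (the real mapping between the new tissue labels and the original ones is not shown here):

```python
import numpy as np

def remap_labels(seg: np.ndarray, mapping: dict) -> np.ndarray:
    """Relabel a segmentation via a lookup table; unmapped labels become 0."""
    lut = np.zeros(max(mapping) + 1, dtype=seg.dtype)
    for src, dst in mapping.items():
        lut[src] = dst
    return lut[seg]

# toy 2x3 segmentation and an illustrative (fake) mapping
seg = np.array([[0, 1, 2], [3, 2, 1]])
remapped = remap_labels(seg, {1: 8, 2: 1, 3: 4})
# remapped == [[0, 8, 1], [4, 1, 8]]
```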
Finally, a note on why this approach is promising. First, it offloads a lot of the tricky parts onto the NN, particularly in resolving complex topologies that get broken by holes or bridges between folds. Second, it shows really detailed gyrification. Quantitatively, we can see that though consistency between sessions was a bit lower (but still R>0.90) (left), the identifiability (right) of surfaces is way higher:

Note that this result was using the layerified template shape injection and discarding all subjects who failed (maybe 25%).
So overall I think this is a really promising approach, but it has an unacceptably high rate of catastrophic failures. I had hoped to make this the new default model for hippunfold, but unless we can resolve these issues I don't think it's worthwhile.
A new approach that I'm interested in is predicting deformation fields from a template shape to folded space (i.e. going directly to a solved template shape injection, voxelMorph style). From there, predefined volumetric segmentations and standardized surfaces coming from the template could be put directly into subject space, making a significantly simpler workflow. These template shape injection warpfields could be generated in detail using the vertex equivalence we've already established (scattered data that we would interpolate onto a regular grid to make a warp field for initializing template shape registration). This is just a cool theoretical idea though.
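The scattered-to-grid step in that idea is standard interpolation. A minimal sketch, assuming hypothetical matched vertex arrays (synthetic data here stands in for the real vertex-equivalence correspondences): scattered per-vertex displacements are interpolated component-wise onto a regular grid to form a dense warp field.

```python
import numpy as np
from scipy.interpolate import griddata

# Synthetic placeholders for matched vertices in template and subject space.
rng = np.random.default_rng(0)
template_verts = rng.uniform(0, 10, size=(200, 3))
subject_verts = template_verts + rng.normal(0, 0.5, size=(200, 3))
disp = subject_verts - template_verts  # scattered displacement vectors

# Regular grid covering template space.
xs = np.linspace(0, 10, 16)
grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1).reshape(-1, 3)

# Interpolate each displacement component; points outside the convex hull
# of the scattered data get zero displacement.
warp = np.stack(
    [griddata(template_verts, disp[:, i], grid, method="linear", fill_value=0.0)
     for i in range(3)],
    axis=-1,
).reshape(16, 16, 16, 3)
```

A dense field like `warp` could then serve as the initialization for template shape registration; in practice it would also need smoothing/regularization to stay diffeomorphic.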