@jordandekraker (Collaborator)
This is an experimental branch and should not be merged. It is not (yet) backwards compatible with other models, since it hard-codes the nnUNetv2 environment, a flip before nnUNet, and new tissue mappings. In principle each of these things should be specified in an nnUNet model config, but for now this is just a quick test of functionality.

I trained the synthlayer model a while ago as part of a previous experiment. The basic idea is to use highly gyrified priors that come from the multihist7 ground truth (though note that 2 of these samples were relatively smooth). From this, we generate synthetic MRI images by:

  1. Superimpose hippocampal labelmaps onto FreeSurfer labelmaps. This both makes the synthetic images more MRI-like and gives background regions their own labels rather than letting them be confused with hippocampal tissue, as would sometimes happen, particularly in the collateral sulcus. (There are 4 background labelmaps: one with a single collateral sulcus and one with a split sulcus, for each of left and right. Together with nnUNet's random deformation fields, this should cover most variability in "background" labels.)
  2. Train with synthetic MRI images, similar to SynthSeg's synthetic-MRI generation method.
  3. Add a manual segmentation of the alveus + fimbria + part of the fornix (approximately within the corobl FOV). This resolves a specific issue where nnUNet was over-segmenting in real MRI data: by assigning these structures their own label, they should now be excluded from our hippocampal grey-matter labels.
  4. Inbuilt layerification: hippocampal grey matter is split into 4 equipotential layers, and the dentate gyrus into 2. This solves issues like holes in, or bridges over, the SRLM or between sulci, which would otherwise break layerification with LayNii or volumetric Laplace. Note that I have not yet removed layerification from the workflow here.
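The synthetic-image step (2 above) can be sketched roughly as follows. This is a minimal, hypothetical version of SynthSeg-style generation (a random mean intensity per label plus Gaussian noise); the real pipeline also applies random deformations, bias fields, and resolution degradation, and the function name here is made up:

```python
import numpy as np

def synth_image_from_labels(labelmap, rng=None):
    """Generate a synthetic MRI-like image from an integer labelmap by
    sampling a random mean intensity per label, then adding Gaussian
    noise. Minimal illustration of the SynthSeg idea only."""
    rng = rng or np.random.default_rng()
    image = np.zeros(labelmap.shape, dtype=float)
    for lab in np.unique(labelmap):
        mean = rng.uniform(0.0, 1.0)   # random tissue contrast per label
        std = rng.uniform(0.01, 0.1)   # within-label intensity variability
        mask = labelmap == lab
        image[mask] = rng.normal(mean, std, size=int(mask.sum()))
    return image
```

Because contrasts are resampled on every draw, the network never learns a fixed intensity-to-tissue mapping, which is what makes the trained model contrast-agnostic.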

Here is an example segmentation (most labels are made invisible for clearer visualization):
image

This still has amazing gyrification, but shows some common issues that we normally resolve with template shape injection:

  • disconnected components
  • the dentate src/sink labels (which are very small) are often missing
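The disconnected-components issue, at least, can often be cleaned up without full template shape injection by keeping only the largest connected component of each label. A minimal sketch using scipy (not necessarily how hippunfold handles this):

```python
import numpy as np
from scipy import ndimage

def keep_largest_component(mask):
    """Keep only the largest connected component of a binary mask.
    Small disconnected islands are discarded."""
    labeled, n = ndimage.label(mask)
    if n <= 1:
        return mask.astype(bool)
    # Component sizes for labels 1..n, then keep the biggest one.
    sizes = ndimage.sum(mask, labeled, range(1, n + 1))
    return labeled == (np.argmax(sizes) + 1)
```

This obviously cannot recover labels that are missing entirely (like the dentate src/sink), which is part of why injection is still needed.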

However, in many cases the template shape injection fails. In the above example, here is post-injection:
image
Notice that a lot of tissue was lost in the posterior, and, though it's hard to see in this snapshot, there's a region in the head where the inferior side is massively thickened to span the superior side too. In addition, a lot of the gyrification definition is lost (as expected with template shape injection).

In retrospect I shouldn't be surprised that this level of detail breaks template shape injection, since injection also failed in most cases of the multihist7, which required extremely careful manual edits (sometimes just a few voxels) in order to unfold. Here we have a similar problem: there is (almost) as much gyrification, which makes the registration really tricky.

This PR also includes a new template shape that is layerified to match the nnUNet segmentation. I had hoped that this would better preserve the definition of the gyrification while still recovering/smoothing/regularizing all required labels, but it still seems to fail for the same reasons as above.

I did some testing under 3 conditions: no template shape injection, layerified template shape injection, and remapping nnUNet to the original labels and using the original UPenn template shape injection. In all 3 cases there are frequent failures. In principle the best solution would be to train nnUNet longer (it was still improving),
image
however, it seems unlikely to me that small labels like the dentate gyrus src/sink will always be present to the extent that template shape injection can be discarded entirely. Fixing the template shape injection would be safer, but this is tough since the nnUNet segmentations are so complex/folded in many cases (though the smoother participants generally worked fine). I'm not confident that just tweaking its parameters will solve the issue.

Finally, a note on why this approach is promising. First, it offloads a lot of the tricky parts onto the NN, particularly in resolving complex topologies that get broken by holes or bridges between folds. Second, it shows really detailed gyrification. Quantitatively, we can see that although consistency between sessions was a bit lower (but still R>0.90) (left), the identifiability of surfaces (right) is much higher:
image
Note that this result used the layerified template shape injection and discarded all subjects who failed (roughly 25%).
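For reference, identifiability in this sense is often computed as an identification rate: given a subject-by-subject similarity matrix between test and retest sessions, the fraction of subjects whose own retest scan is their single best match. This is an illustrative formulation and may not be exactly what the plot above reports:

```python
import numpy as np

def identifiability(sim):
    """Identification rate from an (n_subjects x n_subjects) similarity
    matrix: rows are test-session scans, columns are retest-session
    scans. Returns the fraction of subjects whose own retest scan is
    their best match (diagonal wins its row)."""
    best_match = np.argmax(sim, axis=1)
    return float(np.mean(best_match == np.arange(sim.shape[0])))
```

High identifiability means the surfaces carry enough subject-specific detail (e.g. gyrification) to tell individuals apart, even if absolute test-retest correlation is slightly lower.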

So overall I think this is a really promising approach, but it has an unacceptably high rate of catastrophic failures. I had hoped to make this the new default model for hippunfold, but unless we can resolve these issues I don't think it's worthwhile.

A new approach that I'm interested in is predicting deformation fields from a template shape to folded space (i.e. going directly to a solved template shape injection), voxelMorph-style. From there, predefined volumetric segmentations and standardized surfaces coming from the template could be mapped directly into subject space, making for a significantly simpler workflow. These template shape injection warpfields could be generated in detail using the vertex-equivalence we've already established (which would be scattered data that we interpolate onto a regular grid to make a warp field to initialize template shape registration). This is just a cool theoretical idea for now, though.
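The scattered-to-grid step of that idea could look something like the following: take per-vertex displacements from template to subject space (from the established vertex-equivalence) and interpolate them onto a regular voxel grid with `scipy.interpolate.griddata` to get a dense warp field for initializing registration. All names here are hypothetical:

```python
import numpy as np
from scipy.interpolate import griddata

def displacements_to_warp(points, displacements, shape):
    """Interpolate scattered per-vertex displacements onto a regular
    voxel grid to build a dense (template -> subject) warp field.
    points:        (N, 3) template-space vertex coords, in voxel units
    displacements: (N, 3) subject-minus-template offsets per vertex
    shape:         output grid dimensions, e.g. the template volume"""
    grid = np.stack(
        np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"),
        axis=-1,
    ).reshape(-1, 3)
    # Interpolate each displacement component separately; voxels outside
    # the convex hull of the vertices fall back to zero displacement.
    warp = np.stack(
        [griddata(points, displacements[:, d], grid,
                  method="linear", fill_value=0.0)
         for d in range(3)],
        axis=-1,
    )
    return warp.reshape(*shape, 3)
```

Linear interpolation of scattered data only covers the convex hull of the surface vertices, so a real implementation would likely need extrapolation or smoothing before using this as a registration initializer.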
