Hi! Since we last met, I've

Finished setting up the ICON documentation (https://icon.readthedocs.io/en/doc-update). It's getting pretty fancy: I've set up Read the Docs so that I can keep including code samples and have every sample verified to work on each commit.
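
The checking is roughly the stock Sphinx doctest mechanism (a minimal sketch of the idea, assuming `sphinx.ext.doctest`; the actual ICON configuration may differ):

```python
# conf.py: minimal sketch of a Sphinx config whose embedded code samples
# are executed and checked by building with `sphinx-build -b doctest`.
project = "icon"
extensions = [
    "sphinx.ext.doctest",  # runs the >>> samples inside .. doctest:: blocks
]
```

Code samples then live in `.. doctest::` blocks in the pages, and a failing sample fails the build.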

Worked on your request that we disentangle the affine and deformable components of the knee x-ray registration. I brought ICON's affine registration code up to date and added documentation for it. It works well on its own, but it remains harder to train than deformable registration: trained end to end, the deformable component takes over and the affine component stays near the identity map. I tried several fancy ways to mitigate this, but they all diverged, leaving us with the un-fancy approach of pretraining with just the affine component and then tacking on deformable registration (sketched below).
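
Concretely, the pretraining recipe looks something like this toy, self-contained PyTorch sketch (stand-in networks, MSE similarity, and a squared-magnitude regularizer; the real ICON networks, losses, and wrappers all differ):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffineNet(nn.Module):
    """Toy stand-in: predicts a 2x3 affine matrix from an image pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(2 * 32 * 32, 6))
        self.net[1].weight.data.zero_()  # start exactly at the identity map
        self.net[1].bias.data = torch.tensor([1., 0., 0., 0., 1., 0.])

    def forward(self, a, b):
        return self.net(torch.cat([a, b], dim=1)).view(-1, 2, 3)

class DeformableNet(nn.Module):
    """Toy stand-in for a U-Net: predicts a dense displacement field."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(2, 2, 3, padding=1)

    def forward(self, a, b):
        # (N, 2, H, W) -> (N, H, W, 2) offsets for grid_sample
        return self.net(torch.cat([a, b], dim=1)).permute(0, 2, 3, 1)

def warp_affine(img, theta):
    grid = F.affine_grid(theta, img.shape, align_corners=False)
    return F.grid_sample(img, grid, align_corners=False)

affine_net, deform_net = AffineNet(), DeformableNet()
a, b = torch.rand(4, 1, 32, 32), torch.rand(4, 1, 32, 32)
identity = F.affine_grid(
    torch.tensor([[1., 0., 0.], [0., 1., 0.]]).expand(4, -1, -1),
    a.shape, align_corners=False)

# Stage 1: pretrain the affine component alone on the similarity loss.
opt = torch.optim.Adam(affine_net.parameters(), lr=1e-4)
for _ in range(10):
    loss = F.mse_loss(warp_affine(a, affine_net(a, b)), b)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: freeze the affine net and tack the deformable component on top,
# trained with similarity plus a regularity penalty on the field.
for p in affine_net.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam(deform_net.parameters(), lr=1e-4)
for _ in range(10):
    pre = warp_affine(a, affine_net(a, b))   # affine pre-alignment
    disp = deform_net(pre, b)
    warped = F.grid_sample(pre, identity + disp, align_corners=False)
    loss = F.mse_loss(warped, b) + disp.pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```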

More details on the fancy approach: I think the correct formulation is an alternating one, i.e. minimize the similarity and regularity losses by varying the parameters of the deformable network, then minimize the magnitude of the deformable network's output by varying the parameters of the affine network. Alas, this diverges no matter how I implement it; a sketch of the scheme follows.
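
Reusing the toy pieces from the sketch above, the alternating scheme (the one that diverges) would look roughly like:

```python
# Fresh copies of the toy networks defined in the previous sketch.
affine_net, deform_net = AffineNet(), DeformableNet()
opt_d = torch.optim.Adam(deform_net.parameters(), lr=1e-4)
opt_a = torch.optim.Adam(affine_net.parameters(), lr=1e-4)
for _ in range(10):
    # Deformable step: minimize similarity + regularity with affine fixed.
    pre = warp_affine(a, affine_net(a, b).detach())
    disp = deform_net(pre, b)
    warped = F.grid_sample(pre, identity + disp, align_corners=False)
    loss_d = F.mse_loss(warped, b) + disp.pow(2).mean()
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Affine step: minimize the magnitude of the deformable output, pushing
    # the affine net to absorb as much of the alignment as it can.
    pre = warp_affine(a, affine_net(a, b))
    loss_a = deform_net(pre, b).pow(2).mean()
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()
```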

Worked on documenting the interaction between MONAI and ICON. The biggest upshot is that I can now integrate MONAI's fancy attention U-Nets with our code (they don't seem to work better than my U-Net implementation, alas), as well as MONAI's mutual information similarity measure, which should be very useful if you want me to keep working on cross-modality image registration.
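
For concreteness, here is a hedged sketch of just the MONAI pieces (the wiring into ICON is what the new documentation covers):

```python
import torch
from monai.losses import GlobalMutualInformationLoss
from monai.networks.nets import AttentionUnet

# MONAI's attention U-Net, sized here for pairs of 3D images stacked on the
# channel axis and emitting a 3-channel displacement field.
unet = AttentionUnet(
    spatial_dims=3,
    in_channels=2,   # moving + fixed image
    out_channels=3,  # one displacement channel per spatial dimension
    channels=(16, 32, 64),
    strides=(2, 2),
)

# MONAI's global mutual information loss (negated internally so that
# minimizing it increases MI) as a cross-modality similarity measure.
similarity = GlobalMutualInformationLoss()

moving, fixed = torch.rand(1, 1, 32, 32, 32), torch.rand(1, 1, 32, 32, 32)
field = unet(torch.cat([moving, fixed], dim=1))
print(field.shape)               # torch.Size([1, 3, 32, 32, 32])
print(similarity(moving, fixed)) # negative mutual information
```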
