The Plan:

Two goals for the new uniGradICON: affine registration and multimodal registration

Neural affine registration approaches:

Extract affine component from uniGradICON output

  • Appears to be impossible for complicated reasons (willing to elaborate); the naive least-squares version is sketched below
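
For context, mechanically the "extraction" would just be a least-squares fit of an affine to the dense map; the complicated part is why the result isn't usable. A minimal sketch of the naive attempt, where `phi` as a (3, D, H, W) tensor of mapped voxel coordinates is an assumption, not uniGradICON's actual output format:

```python
import torch

def fit_affine_to_map(phi: torch.Tensor) -> torch.Tensor:
    """Return the (3, 4) affine that best approximates phi in L2."""
    _, D, H, W = phi.shape
    # Voxel-coordinate grid, flattened to (N, 3)
    grid = torch.stack(torch.meshgrid(
        torch.arange(D), torch.arange(H), torch.arange(W),
        indexing="ij"), dim=-1).reshape(-1, 3).float()
    X = torch.cat([grid, torch.ones(grid.shape[0], 1)], dim=1)  # (N, 4) homogeneous
    Y = phi.reshape(3, -1).T                                    # (N, 3) mapped points
    A_T = torch.linalg.lstsq(X, Y).solution                     # solves X @ A.T ~ Y
    return A_T.T                                                # (3, 4) affine
```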

Large ConstrICON model

  • Trains fine, works fine, training on new datasets nicely documented

  • Large but non-global capture radius

  • Currently in use with Basar for his KL evaluation on the OAI project

  • Doesn't generalize to images not in the uniGradICON training dataset - weird

Directly optimize affine map against LNCC

  • Works fine, doesn't require training, generalizes

  • Considered for Basar's project but slower than the previous approach

  • Not interesting or publishable, but a good baseline (minimal sketch after this list)
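
A minimal sketch of this baseline, assuming `fixed` and `moving` are intensity-normalized (1, 1, D, H, W) tensors; window size, learning rate, and step count are illustrative, not tuned values:

```python
import torch
import torch.nn.functional as F

def lncc_loss(I, J, win=9, eps=1e-5):
    """Negative local normalized cross-correlation over win^3 windows."""
    mean = lambda x: F.avg_pool3d(x, win, stride=1, padding=win // 2)
    mu_I, mu_J = mean(I), mean(J)
    cross = mean(I * J) - mu_I * mu_J
    var_I = mean(I * I) - mu_I ** 2
    var_J = mean(J * J) - mu_J ** 2
    return -(cross ** 2 / (var_I * var_J + eps)).mean()

def optimize_affine(fixed, moving, steps=200, lr=1e-3):
    """Directly optimize a (1, 3, 4) affine against LNCC; no training."""
    theta = torch.eye(3, 4).unsqueeze(0).clone().requires_grad_(True)
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        grid = F.affine_grid(theta, fixed.shape, align_corners=False)
        warped = F.grid_sample(moving, grid, align_corners=False)
        loss = lncc_loss(fixed, warped)
        loss.backward()
        opt.step()
    return theta.detach()
```

Because the twelve parameters are optimized per pair rather than predicted by a trained network, there is nothing that can fail to generalize - hence the bullet above.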

Directly optimize affine map prior to uniGradICON

  • Works fine, doesn't require new training, generalizes

  • Complicated, not interested in explaining again because...

  • Worse than previous approach (RIP) - composition sketched below anyway
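
For completeness, a hedged sketch of the composition; `deformable_net` is a stand-in for the uniGradICON call and is assumed to return a normalized sampling grid of shape (1, D, H, W, 3) - this is not the real API:

```python
import torch
import torch.nn.functional as F

def affine_then_deformable(fixed, moving, theta, deformable_net):
    """Pre-align with the optimized affine, refine deformably, compose."""
    aff_grid = F.affine_grid(theta, fixed.shape, align_corners=False)
    moving_aff = F.grid_sample(moving, aff_grid, align_corners=False)
    def_grid = deformable_net(fixed, moving_aff)   # assumed (1, D, H, W, 3)
    # Compose in normalized coords: total(x) = A(phi(x)), row-vector form.
    lin, trans = theta[:, :, :3], theta[:, :, 3]
    total_grid = def_grid @ lin.transpose(1, 2) + trans
    warped = F.grid_sample(moving, total_grid, align_corners=False)
    return warped, total_grid
```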

Extract affine transform from equivariant registration (CARL)

  • Trains ...OK, not nicely documented yet

  • Global capture radius, which is why we use CARL on the Biobank dataset - this is a major capabilities boost

  • uniCARL doesn't generalize to images not in the uniGradICON training dataset - weird

Use KeyMorph architecture

  • Fundamentally can't generalize to images not in the training set

  • Global capture radius

  • Included because it emphasizes the pattern (closed-form keypoint solve sketched below)
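
The pattern this list keeps circling: KeyMorph gets its global capture radius because the affine is a closed-form (weighted) least-squares function of network-predicted keypoints, while the keypoint detector only fires on structures it saw in training. A sketch of the closed-form solve, with the keypoint network omitted and all names illustrative:

```python
import torch

def closed_form_affine(p, q, w):
    """p, q: (N, 3) corresponding keypoints in moving/fixed space;
    w: (N,) nonnegative confidences. Returns the (3, 4) affine minimizing
    sum_i w_i * ||A [p_i; 1] - q_i||^2 (weighted least squares)."""
    P = torch.cat([p, torch.ones(p.shape[0], 1)], dim=1)  # (N, 4) homogeneous
    s = w.sqrt().unsqueeze(1)                             # sqrt-weighting trick
    A_T = torch.linalg.lstsq(s * P, s * q).solution       # (4, 3)
    return A_T.T                                          # (3, 4)
```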

Investigative steps:

MNIST generalization task: train on digits 2, 5, 8; eval on digits 6, 1 (split sketched after this list)

  • GradICON does ok

  • ConstrICON affine passes?? <- main subject for follow-up

  • translation-equivariant CARL passes - does not provide affine-ness guarantees

  • translation-equivariant CARL passes without multistep - relevant for my notes

  • rotation-equivariant CARL FAILS (addendum: passes when trained for longer (!!?!?!?))
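
Sketch of that split, assuming a torchvision-style setup (details of the actual harness may differ):

```python
import torch
from torchvision import datasets, transforms

mnist = datasets.MNIST("data", train=True, download=True,
                       transform=transforms.ToTensor())
train_digits = torch.tensor([2, 5, 8])
eval_digits = torch.tensor([6, 1])
train_idx = torch.isin(mnist.targets, train_digits).nonzero().squeeze(1)
eval_idx = torch.isin(mnist.targets, eval_digits).nonzero().squeeze(1)
train_set = torch.utils.data.Subset(mnist, train_idx.tolist())
eval_set = torch.utils.data.Subset(mnist, eval_idx.tolist())
# Registration pairs are sampled within a digit class, so evaluation
# measures alignment quality on digit shapes never seen during training.
```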

[Figure: MNIST generalization results, one panel per model - ConstrICON, GradICON, CARL, CARLrot]

New Theory: problem is global/local, not affine/deformable

Since the failure of rotation-equivariant CARL to generalize is replicable at small scale, it could be easy to investigate, but is unlikely to be a bug in the 3-D implementation. An expensive but useful experiment would be to train a translation-equivariant universal CARL and see whether that generalizes. This model might be useful for tasks where uniGradICON fails due to large translations, like the Biobank scans - not actually useless.

Since failure of ConstrICON to generalize is not replicable, it is more likely that the ConstrICON 3-D model is being held back by some bug.

Investigation with Basar: the ConstrICON checkpoint I picked for generalization testing last fall is bad. Rolling back to an earlier checkpoint shows promise (ConstrICON training was not previously known to be unstable).

UK Biobank data extraction

Teng fei emailed several minutes ago (Wednesday). He computed the features himself; he needs me to register them. Will do this as first priority after the meeting.

First step: Re-test generalization of uniConstrICON model

Submission Plan:

Last conference of the year is NeurIPS: May 1st

Thesis writing is in flight on GitHub

(missed MICCAI, MIDL, ICML)

Side project: the torch uniGradICON performance question

Investigation 1: biag-w05, 10 instance-optimization (IO) iterations of uniGradICON (timing harness sketched after the results)

torch 1.19: 18.6 seconds

torch 2.6: 17.5 seconds
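
For reference, a minimal harness of the kind behind these numbers (the actual script may differ); the CUDA synchronizations make sure queued kernels are counted in the measurement:

```python
import time
import torch

def time_iterations(step_fn, n=10):
    """Wall-clock time for n iterations; step_fn runs one IO iteration."""
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(n):
        step_fn()
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return time.perf_counter() - t0
```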

Investigation 2: biag-gpu6, 10 steps of GradICON training on noise

torch 1.19:
