CARL questions

Why is there a performance regression when training GradICON using the venv set up for CARL?

  • Is it a PyTorch version issue?

  • Is it an icon_registration version issue?

  • Does it affect CARL training performance too?

Can we train CARL using 16-bit attention? This should reduce training time.

Can we train CARL using a slightly smaller resolution for the transformer? This would also reduce training time.
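Both speedups could be prototyped around the existing transformer block. A minimal sketch, assuming the attention is a standard torch.nn.MultiheadAttention with batch_first=True; the surrounding function is hypothetical, not part of CARL:

    import torch
    import torch.nn.functional as F

    def fast_attention(features, attention):
        # features: (B, C, D, H, W) feature map feeding the transformer.
        # 1) Downsample tokens to cut the quadratic attention cost.
        small = F.interpolate(features, scale_factor=0.5, mode="trilinear",
                              align_corners=False)
        B, C, D, H, W = small.shape
        tokens = small.flatten(2).transpose(1, 2)  # (B, N, C)
        # 2) Run the attention itself in bfloat16.
        with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
            tokens, _ = attention(tokens, tokens, tokens)
        small = tokens.float().transpose(1, 2).reshape(B, C, D, H, W)
        # 3) Upsample back to the original resolution.
        return F.interpolate(small, size=features.shape[2:], mode="trilinear",
                             align_corners=False)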

Can CARL work with a spherical harmonic / Group convolution encoder?

  • preliminary results: no (see https://colab.research.google.com/drive/1C04yejs2F9v1UseygRtDaAvxWK5gvM1l?usp=sharing)

How does CARL perform on Learn2Reg abdomen?

Does nonuniform spacing help?

IXI skull strip

How to save the CARL paper

Easiest route: get equivariance to rotations working. Then we have a differentiator.
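A rough way to quantify how far we are from that: for a model equivariant to 90-degree rotations, rotating both inputs should rotate the output correspondingly. A minimal check, assuming a hypothetical net(fixed, moving) call that returns the warped moving image:

    import torch

    def rotation_equivariance_error(net, fixed, moving, dims=(3, 4)):
        # fixed, moving: (B, C, D, H, W) volumes. For an equivariant network,
        # registering the rotated pair should yield the rotated warped image.
        warped = net(fixed, moving)
        fixed_r = torch.rot90(fixed, k=1, dims=dims)
        moving_r = torch.rot90(moving, k=1, dims=dims)
        warped_r = net(fixed_r, moving_r)
        return (warped_r - torch.rot90(warped, k=1, dims=dims)).abs().mean()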

Add more power to the encoder: the 3D encoder has far fewer parameters than 3D tallUNet2, while the 2D encoder has the same number of parameters.
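The parameter gap is quick to verify, assuming both encoders are ordinary torch.nn.Modules:

    def count_parameters(model):
        # Trainable parameter count, for comparing the CARL encoder
        # against 3D tallUNet2.
        return sum(p.numel() for p in model.parameters() if p.requires_grad)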

Things Hastings wants to do with UniGRADICON

UniGRADICON dataset 2.0

Add more diverse datasets:

  • add Abdomen1K dataset to training (this is pure upside)

  • abdomen8k dataset?

  • add COPDGene inter-subject

  • add a bunch of brain datasets to training, including registration between them (IXI + HCP + OASIS)

  • add mouse brain dataset to training

  • using the high-resolution subsets of Abdomen1K and COPDGene, add cropped regions to training (cropped to heart, cropped to liver, cropped to pancreas)

  • SynthMorph dataset?

  • SynthMorph dataset built from abdomen segmentations?

Augment all datasets with

  • All 90-degree rotations (matched across the pair)

  • Random crop

  • With and without masking. When registering a masked image to an unmasked image, compute the similarity between the unmasked versions of both images

  • Add dice loss wherever we have it

  • intensity -> -intensity, (2 * intensity - 1)^2, intensity / 2

  • Mask off the top and bottom of the image with random black rectangles

Whenever performing augmentation, compute similarity on the unaugmented images (sketched below).
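A minimal sketch of the intensity part of this, keeping the originals around for the similarity term; the function name and training-loop integration are hypothetical:

    import random

    # Intensity remaps from the list above.
    INTENSITY_AUGS = [
        lambda x: x,
        lambda x: -x,
        lambda x: (2 * x - 1) ** 2,
        lambda x: x / 2,
    ]

    def augment_intensity(fixed, moving):
        # The network sees the augmented pair; the similarity loss is still
        # evaluated on the original, unaugmented images, which are returned too.
        aug_fixed = random.choice(INTENSITY_AUGS)(fixed)
        aug_moving = random.choice(INTENSITY_AUGS)(moving)
        return aug_fixed, aug_moving, fixed, moving

Geometric augmentations (the matched rotations and random crops above) would presumably also need the predicted transform mapped back to the unaugmented frame before the similarity is evaluated.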

Ablate the architecture of UniGRADICON

Once we have all that, it's just GPU time to train a bunch of architectures on it

CARL, ConstrICON, GradICON, ConstrICON / GradICON hybrid

Possibility One:

Add some ConstrICON affine

Infrastructure TODO

We want:

pip install unigradicon

unigradicon register --fixed=squirrel_jawbone.nrrd --moving=aardvark_jawbone.nrrd --transform_out=transform.nrrd

unigradicon warp --nearest --moving=aardvark_segmentation.nrrd --image_out=warped_jawbone.nrrd

so that we can finally produce this.
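A minimal argparse skeleton for that console script, mirroring the flag names above; the actual model loading and warping are left as comments since they depend on the icon_registration internals:

    import argparse

    def main():
        parser = argparse.ArgumentParser(prog="unigradicon")
        sub = parser.add_subparsers(dest="command", required=True)

        reg = sub.add_parser("register", help="register moving to fixed")
        reg.add_argument("--fixed", required=True)
        reg.add_argument("--moving", required=True)
        reg.add_argument("--transform_out", required=True)

        warp = sub.add_parser("warp", help="warp an image with a saved transform")
        warp.add_argument("--nearest", action="store_true",
                          help="nearest-neighbor interpolation, e.g. for segmentations")
        warp.add_argument("--moving", required=True)
        warp.add_argument("--image_out", required=True)

        args = parser.parse_args()
        if args.command == "register":
            # Here the pretrained UniGRADICON model would be loaded and the
            # resulting transform written to args.transform_out.
            print("register", args.fixed, args.moving, "->", args.transform_out)
        else:
            # Here the saved transform would be applied to args.moving
            # (nearest-neighbor if --nearest) and written to args.image_out.
            print("warp", args.moving, "->", args.image_out)

    if __name__ == "__main__":
        main()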

Also, icon_registration should have a built-in Learn2Reg interface alongside the ITK interface if we're going to keep participating in these contests.
