Lung results!

In all cases:

footsteps needs to log the uncommitted file if that's what's being run

footsteps needs to log package versions (in a separate thread)

footsteps GitHub link

merge code in "/media/data/hastings/InverseConsistency/" into master

LNCC loss with sigma=5 pixels
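As a sketch of what an LNCC loss with sigma=5 pixels might look like (a Gaussian-windowed local normalized cross-correlation; the function name, the `eps` guard, and the exact windowing are my assumptions, not the repo's code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lncc_loss(I, J, sigma=5.0, eps=1e-5):
    """1 - mean local normalized cross-correlation between images I and J.

    Local means and variances are estimated with a Gaussian window of the
    given sigma (in pixels). Sketch only; the repo's code may differ.
    """
    blur = lambda x: gaussian_filter(x, sigma)
    mu_I, mu_J = blur(I), blur(J)
    var_I = blur(I * I) - mu_I ** 2   # local variance of I
    var_J = blur(J * J) - mu_J ** 2   # local variance of J
    cov = blur(I * J) - mu_I * mu_J   # local covariance
    cc = cov / np.sqrt(np.maximum(var_I, eps) * np.maximum(var_J, eps))
    return 1.0 - cc.mean()
```

Identical images give a loss near 0; unrelated images a loss near 1.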

Augmentation: random axis permutation + axis flipping + 10% affine warping.

Augmentation implemented here: github
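A rough numpy/scipy sketch of that augmentation, assuming cube-shaped 3D volumes. I read "10% affine warping" as perturbing the affine matrix entries by up to ±10%; if it instead means warping 10% of samples, swap the perturbation for a probability check. All names here are hypothetical; the real code is in the linked repo.

```python
import numpy as np
from scipy.ndimage import affine_transform

def augment(img, rng, affine_scale=0.1):
    """Random axis permutation, random axis flips, and a small random
    affine warp. Assumes a cube-shaped 3D volume (sketch only)."""
    img = np.transpose(img, rng.permutation(3))   # random axis permutation
    for ax in range(3):                           # random flip per axis
        if rng.random() < 0.5:
            img = np.flip(img, axis=ax)
    # identity matrix perturbed by up to +/-10% per entry
    A = np.eye(3) + rng.uniform(-affine_scale, affine_scale, (3, 3))
    center = (np.array(img.shape) - 1) / 2
    offset = center - A @ center                  # keep the volume centered
    return affine_transform(img, A, offset=offset, order=1)
```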

Image preprocessing: clip intensity to [-1000, 0], scale to [-1, 0], shift to [0, 1], mask out lung.

Preprocessing available here: github
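The preprocessing steps translate almost directly to code; a minimal sketch (function name and mask convention are assumptions):

```python
import numpy as np

def preprocess_ct(img, lung_mask):
    """Clip HU to [-1000, 0], scale to [-1, 0], shift to [0, 1], then
    zero out everything outside the lung mask. Sketch of the steps
    described above; the repo's implementation may differ."""
    img = np.clip(img, -1000.0, 0.0)  # clip intensity to [-1000, 0]
    img = img / 1000.0                # scale to [-1, 0]
    img = img + 1.0                   # shift to [0, 1]
    return img * lung_mask            # mask out everything but lung
```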

lambda=1

On to the cases:

High resolution, no finetuning:

mTRE: 2.3006992213541535, mTRE_X: 0.8731911562027538, mTRE_Y: 1.1455374789234933, mTRE_Z: 1.3988634236498467, DICE: 0.9839835991734549
mTRE: 1.7353536802929188, mTRE_X: 0.6434888778122482, mTRE_Y: 0.8173537622369533, mTRE_Z: 1.0992206030662603, DICE: 0.9875824779902229
0.9839835991734549

High resolution, finetuning:

mTRE: 1.5035167981939945, mTRE_X: 0.5783369599852262, mTRE_Y: 0.7254410319306086, mTRE_Z: 0.9301830532375389, DICE: 0.9870256044148528
mTRE: 1.2175174987112052, mTRE_X: 0.4692889806645579, mTRE_Y: 0.5572003989919224, mTRE_Z: 0.7828604378504245, DICE: 0.9895374508621282
0.9870256044148528

Low resolution, no finetuning:

mTRE: 2.6811465187351606, mTRE_X: 0.992520793545659, mTRE_Y: 1.389975279426745, mTRE_Z: 1.5807646716690722, DICE: 0.9704927550796143
mTRE: 2.0511508312603874, mTRE_X: 0.7512121502866995, mTRE_Y: 1.0436619656409658, mTRE_Z: 1.2434038192848738, DICE: 0.9784813035820681
0.9859310202970379

Low resolution, finetuning:

mTRE: 1.9011778275171511, mTRE_X: 0.7254579471524127, mTRE_Y: 0.9877803858492393, mTRE_Z: 1.1078842197419392, DICE: 0.974102213784451
mTRE: 1.4957662203853648, mTRE_X: 0.5709490608602367, mTRE_Y: 0.7459466674727133, mTRE_Z: 0.9081815544914023, DICE: 0.9802951342755577
0.9900536332107993

✓ footsteps needs to log uncommitted file if that's what's being run

✓ merge code in "/media/data/hastings/InverseConsistency/" into master

✓ get OAI pull request merged

✓ train oai knees with LNCC + augmentation

  • currently training with just LNCC for a first taste; will try LNCC + augmentation next

✓ evaluate latest highres model

Research concepts:

registration that improves a previous result: train by randomly resetting the previous result

try segmenting the latest Kaggle challenge

try registering the latest Kaggle challenge

99%ICON: ICON, but the worst 1% of inverse-consistency errors aren't penalized
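Assuming a precomputed nonnegative per-voxel inverse-consistency error map, the 99%ICON idea could be sketched as (hypothetical, not repo code):

```python
import numpy as np

def icon99_loss(ic_error, keep_fraction=0.99):
    """Mean per-voxel inverse-consistency error, ignoring the worst
    (1 - keep_fraction) of voxels so occasional gross failures (e.g.
    sliding or tearing regions) aren't penalized. Sketch only."""
    flat = np.sort(np.ravel(ic_error))            # errors, ascending
    k = int(np.floor(keep_fraction * flat.size))  # voxels to keep
    return flat[:k].mean()
```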

mixture of Gaussians ICON: interpret ICON as "registration outputs a Gaussian at each pixel; the ICON loss is the log odds of exact inverse consistency." In that framework, rewrite the network to output a mixture of Gaussians instead of a single Gaussian: perhaps this can accurately model tearing?
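A speculative sketch of that interpretation (reading "log odds" loosely as a log-likelihood): each pixel's composed forward-then-backward displacement is modeled as a Gaussian, or a mixture of Gaussians, and the loss is the negative log density of zero displacement. A far-off mixture component then costs little as long as another component sits near zero, which is the hoped-for tearing behavior. Everything here, including the function name and parameterization, is an assumption:

```python
import numpy as np

def nll_of_zero(mu, sigma, weights=None):
    """Negative log-likelihood that the composed displacement is zero.

    Single Gaussian: mu has shape (..., D), sigma is scalar or (...,).
    Mixture: mu has shape (..., K, D), weights (..., K), sigma scalar
    or (..., K). Isotropic components. Speculative sketch only."""
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    if weights is None:              # single-Gaussian case: add K=1 axis
        mu = mu[..., None, :]
        sigma = sigma[..., None]
        weights = np.ones(mu.shape[:-1])
    else:
        weights = np.asarray(weights, dtype=float)
    d = mu.shape[-1]
    # log N(0; mu_k, sigma_k^2 I) for each mixture component k
    log_comp = (-0.5 * (mu ** 2).sum(-1) / sigma ** 2
                - d * np.log(sigma)
                - 0.5 * d * np.log(2 * np.pi))
    # log-sum-exp over components for numerical stability
    m = log_comp.max(-1, keepdims=True)
    log_mix = m[..., 0] + np.log((weights * np.exp(log_comp - m)).sum(-1))
    return -log_mix.mean()
```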

need a synthetic dataset for tearing.

one approach to mitigate this is to create a very diverse zoo of intensity transfer functions for data augmentation, and hope that this forces the network to learn to register a larger, more complete set of transfer functions, including MR -> CT, even though that's not a strict 1-1 function (see Adrien's paper)

Todo:

folds of LNCC

Documentation of preprocessing: more comments

Documentation of train_batchfunction!

evaluation during training

example of how to view warped images

description of input_shape: the network will throw an error if the input images aren't this size.

Snap to grid evaluation

Finetune a low resolution model on a high resolution image

train lung with lin loss + augmentation

put all results into the overleaf

make formal list of experiments for paper – journal article?

put OAI preprocessing into pretrained models

Create a set of images where the shapes are bright and the background dark. Create another set where it is the other way around. Train a network that gets as inputs either a random image pair from one or from the other. Will this network entirely fail if it is presented with a pair where one image is from one set and the other one from the other?
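The two image sets could be generated along these lines (a hypothetical sketch; a random rectangle stands in for "shapes"):

```python
import numpy as np

def make_shape_image(rng, size=64, bright_shapes=True):
    """One synthetic image: a random rectangle on a flat background.
    bright_shapes=True gives a bright shape on a dark background;
    False gives the inverted-polarity set. Hypothetical sketch."""
    img = np.zeros((size, size), dtype=np.float32)
    x0, y0 = rng.integers(0, size // 2, 2)        # top-left corner
    w, h = rng.integers(size // 4, size // 2, 2)  # rectangle extent
    img[x0:x0 + w, y0:y0 + h] = 1.0
    return img if bright_shapes else 1.0 - img
```

Train on random pairs drawn within one set, then evaluate on cross-polarity pairs to see whether registration fails entirely.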

non-neural finetune
