ConstrICON final table entries:

Equivariance reg Taylor expansion

The regularization of ICON is driven, in some sense, by the implicit inversion in the network. There is no such inversion in the W bipath loss, so there is no regularization of the underlying map. Instead, any regularization would have to come from the smoothness of the underlying map plus a penalty on the magnitude of the deviation from it? Or, possibly, there is some penalty on the smoothness of the deviation if U is small.

$$\mathcal{L} = \|\Phi[W \circ A, U \circ B] - W^{-1} \circ \Phi[A, B] \circ U\|$$

$$\Phi ~ \text{equivariant}, \quad \hat{\Phi} = \Phi + \epsilon\, n(x)$$

$$\mathcal{L}[\hat{\Phi}] = \epsilon\, \|\nabla W^{-1} \cdot (n(x) - n(U(x)))\|$$
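To spell out the step to that first-order term (a sketch of one route that produces the $\nabla W^{-1}$ factor, assuming the deviation enters inside $W^{-1}$ on both paths): linearize $W^{-1}(y + \epsilon\, \delta) \approx W^{-1}(y) + \epsilon\, \nabla W^{-1}(y)\, \delta$ around $y = \Phi[A,B](U(x))$.

$$
\begin{aligned}
\hat{\Phi}[W \circ A, U \circ B](x) &\approx W^{-1}\big(\Phi[A,B](U(x)) + \epsilon\, n(x)\big) \approx (W^{-1} \circ \Phi[A,B] \circ U)(x) + \epsilon\, \nabla W^{-1} \cdot n(x) \\
(W^{-1} \circ \hat{\Phi}[A,B] \circ U)(x) &= W^{-1}\big(\Phi[A,B](U(x)) + \epsilon\, n(U(x))\big) \approx (W^{-1} \circ \Phi[A,B] \circ U)(x) + \epsilon\, \nabla W^{-1} \cdot n(U(x))
\end{aligned}
$$

Subtracting, the zeroth-order terms cancel by equivariance and the stated $\mathcal{L}[\hat{\Phi}] = \epsilon\, \|\nabla W^{-1} \cdot (n(x) - n(U(x)))\|$ remains, up to $O(\epsilon^2)$.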

Grad equivariance reg Taylor expansion

$$\mathcal{L} = \|\nabla[\Phi[W \circ A, U \circ B] - W^{-1} \circ \Phi[A, B] \circ U]\|$$

On a hunch, assume W = id.

$$\mathcal{L} = \|\nabla[\Phi[A, U \circ B] - \Phi[A, B] \circ U]\|$$

$$\mathcal{L} = \|\nabla[\Phi[A, U \circ B] + \epsilon\, n(x) - \Phi[A, B] \circ U - \epsilon\, n(U(x))]\|$$

$$\mathcal{L} = \|\nabla[\epsilon\, n(x) - \epsilon\, n(U(x))]\|$$

$$\mathcal{L} = \epsilon\, \|\nabla[n(x) - n(U(x))]\|$$

This is pretty directly a penalty on the gradient of the deviation, and a penalty on the second-order derivative of the deviation if U is small.
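As a quick numeric sanity check of the W = id expansion (a minimal sketch; `phi`, `n`, and `U` below are arbitrary toy choices in 1D, not from any codebase), the two paths of the loss can be built directly from an equivariant base map plus a deviation, and the measured loss compared against $\epsilon\, \|\nabla[n(x) - n(U(x))]\|$:

```python
import numpy as np

# Toy 1D check of the grad-equivariance expansion, assuming W = id.
# phi : the (assumed equivariant) base map Phi[A, B]
# n   : the deviation direction, so Phi_hat = phi + eps * n
# U   : a warp close to the identity; all names here are illustrative.

x = np.linspace(0.1, 0.9, 2001)
dx = x[1] - x[0]

phi = lambda t: t + 0.05 * np.sin(4 * np.pi * t)   # arbitrary smooth base map
n   = lambda t: np.cos(3 * np.pi * t)              # deviation direction
U   = lambda t: t + 0.01 * np.sin(2 * np.pi * t)   # near-identity warp
eps = 1e-3

# The two paths. By equivariance with W = id, Phi[A, U o B] = phi o U,
# so the phi terms cancel and only the deviation survives.
path1 = phi(U(x)) + eps * n(x)       # Phi_hat[A, U o B](x)
path2 = phi(U(x)) + eps * n(U(x))    # (Phi_hat[A, B] o U)(x)

grad = np.gradient(path1 - path2, dx)
loss = np.sqrt(np.mean(grad ** 2))   # ||grad[...]|| as an RMS norm

# Predicted first-order term: eps * ||grad[n(x) - n(U(x))]||
pred = eps * np.sqrt(np.mean(np.gradient(n(x) - n(U(x)), dx) ** 2))

print(f"measured loss  {loss:.6e}")
print(f"predicted      {pred:.6e}")  # the two should agree
```

Since $n(x) - n(U(x)) \approx -n'(x)\,(U(x) - x)$ when $U$ is near the identity, the gradient of this difference picks up $n''$, which is the second-order penalty noted above.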

Research idea: black-box optimize the similarity by varying the parameters of U.
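A minimal sketch of what that could look like (hypothetical throughout: toy 1D images, an affine parametrization of U, and cosine similarity standing in for whatever metric is actually used), using a derivative-free optimizer so the similarity is treated as a true black box:

```python
import numpy as np
from scipy.optimize import minimize

# Black-box optimization of a similarity metric over parameters of U.
# A, B, and the similarity are toy placeholders, not from any library.

x = np.linspace(0.0, 1.0, 512)
A = np.exp(-((x - 0.45) ** 2) / 0.01)   # fixed image: bump at 0.45
B = np.exp(-((x - 0.55) ** 2) / 0.01)   # moving image: bump at 0.55

def U(theta, t):
    """Affine warp U(t) = a*t + b with theta = (a, b)."""
    a, b = theta
    return a * t + b

def neg_similarity(theta):
    """Negative cosine similarity between A and B resampled through U."""
    warped_B = np.interp(U(theta, x), x, B)   # (B o U) via linear interpolation
    return -np.dot(A, warped_B) / (np.linalg.norm(A) * np.linalg.norm(warped_B))

# Nelder-Mead is derivative-free: only similarity values are queried.
result = minimize(neg_similarity, x0=np.array([1.0, 0.0]), method="Nelder-Mead")
print("optimal (a, b):", result.x)  # expect roughly (1.0, 0.1): shift B onto A
```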
