Optimization and Control
Constrained Approximate Optimal Transport Maps
We investigate finding a map $g$ within a function class $G$ that minimises an Optimal Transport (OT) cost between a target measure $\nu$ and the image by $g$ of a source measure $\mu$. This is relevant when an OT map from $\mu$ to $\nu$ does not exist or does not satisfy the desired constraints of $G$. We address existence and uniqueness for generic subclasses of $L$-Lipschitz functions, including gradients of (strongly) convex functions and typical Neural Networks. We explore a variant that approximates a transport plan rather than a map, and show that it is equivalent to the map problem in certain cases. For the squared Euclidean cost, we propose alternating minimisation over a transport plan $\pi$ and map $g$, where the optimisation over $g$ is the $L^2$ projection onto $G$ of the barycentric mapping $\overline{\pi}$. In dimension one, this global problem is equivalent to the $L^2$ projection of $\overline{\pi^*}$ onto $G$ for an OT plan $\pi^*$ between $\mu$ and $\nu$, but this equivalence does not extend to higher dimensions. We introduce a simple kernel method to find $g$ within a Reproducing Kernel Hilbert Space in the discrete case. We present numerical methods for $L$-Lipschitz gradients of $\ell$-strongly convex potentials, and study the convergence of Stochastic Gradient Descent methods for Neural Networks. We finish with an illustration on colour transfer, applying learned maps to new images and showcasing robustness to outliers.
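To make the alternating scheme concrete, here is a minimal sketch for the discrete case, assuming $G$ is the class of affine maps fitted by weighted least squares; this is only an illustrative stand-in for the richer classes studied in the paper (Lipschitz gradients of strongly convex potentials, RKHS functions, Neural Networks). The helper name `fit_constrained_map` is hypothetical; the plan step uses the POT library's `ot.emd` and `ot.dist`.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

def fit_constrained_map(x, y, a, b, n_iters=20):
    """Alternating minimisation sketch (illustrative, not the paper's code).

    Plan step: exact OT plan between the pushforward g#mu and nu.
    Map step: weighted L^2 projection of the barycentric mapping of pi
    onto the affine class g(x) = x @ W + c, a simple stand-in for G.
    """
    n, d = x.shape
    W, c = np.eye(d), np.zeros(d)          # initialise g as the identity map
    for _ in range(n_iters):
        gx = x @ W + c
        M = ot.dist(gx, y)                 # squared Euclidean cost matrix
        pi = ot.emd(a, b, M)               # plan step: OT between g#mu and nu
        bar = (pi @ y) / a[:, None]        # barycentric mapping of pi at each x_i
        # map step: L^2(mu) projection onto G via weighted least squares
        X1 = np.hstack([x, np.ones((n, 1))])
        sw = np.sqrt(a)[:, None]
        sol, *_ = np.linalg.lstsq(sw * X1, sw * bar, rcond=None)
        W, c = sol[:d], sol[d]
    return W, c

# Toy usage: map a 2D Gaussian cloud towards a shifted one.
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 2))
y = rng.normal(size=(200, 2)) + 3.0
a = np.full(200, 1 / 200)
b = np.full(200, 1 / 200)
W, c = fit_constrained_map(x, y, a, b)
```

The affine map step has a closed-form solution, which keeps the sketch short; constrained classes such as $L$-Lipschitz gradients of $\ell$-strongly convex potentials would replace the least-squares step with a constrained $L^2$ projection.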