8:00 – 8:20 contributed 1: David Alvarez-Melis, Tommi Jaakkola and Stefanie Jegelka. Structured Optimal Transport [paper]

8:20 – 9:00 invited 1 Pierre Jacob

9:00 – 9:40 invited 2 Katy Craig, Gradient Flow in the Wasserstein Metric

9:40 – 10:00 contributed 2: Charlie Frogner and Tomaso Poggio. Approximate inference with Wasserstein gradient flows

10:00 – 10:18 6 x 3-minute spotlights:

  1. Nicolas Courty, Rémi Flamary and Mélanie Ducoffe. Learning Wasserstein Embeddings
  2. Yongxin Chen, Tryphon Georgiou and Allen Tannenbaum. Optimal transport for Gaussian mixture models
  3. Napat Rujeerapaiboon, Kilian Schindler, Daniel Kuhn and Wolfram Wiesemann. Size Matters: Cardinality-Constrained Clustering and Outlier Detection via Conic Optimization
  4. Jonas Adler, Axel Ringh, Ozan Öktem and Johan Karlsson. Learning to solve inverse problems using Wasserstein loss
  5. John Lee, Adam Charles, Nicholas Bertrand and Christopher Rozell. An Optimal Transport Tracking Regularizer
  6. Lucas Roberts, Leo Razoumov, Lin Su and Yuyang Wang. Gini-regularized Optimal Transport with an Application in Spatio-Temporal Forecasting [paper]

10:18 – 11:00 Posters + Coffee

11:00 – 11:40 invited 3 Alexandr Andoni, Optimal planar transport in near-linear time

11:40 – 12:20 invited 4 Wilfrid Gangbo, A partial Laplacian as an infinitesimal generator on the Wasserstein space

12:20 – 13:40 Lunch break

13:40 – 14:20 invited 5 Léon Bottou, Geometrical insights for unsupervised learning

14:20 – 14:40 contributed 3: Tim Salimans, Han Zhang, Alec Radford and Dimitris Metaxas. Improving GANs Using Optimal Transport

14:40 – 15:00 contributed 4: Alexis Thibault, Lénaïc Chizat, Charles Dossal and Nicolas Papadakis. Overrelaxed Sinkhorn-Knopp Algorithm for Regularized Optimal Transport

15:00 – 15:30 Coffee break

15:30 – 16:10 invited 6 Rémi Flamary, Domain adaptation with optimal transport

16:10 – 16:50 invited 7 Francis Bach, Sharp rates of convergence of empirical measures in Wasserstein distance

16:50 – 17:11 7 x 3-minute spotlights:

  1. Jérémie Bigot, Elsa Cazelles and Nicolas Papadakis. Central limit theorems for Sinkhorn divergence between probability distributions on finite spaces
  2. Aude Genevay. Learning Generative Models with Sinkhorn Divergences
  3. Gonzalo Mena, David Belanger, Gonzalo Muñoz and Jasper Snoek. Sinkhorn Networks: Using Optimal Transport Techniques to Learn Permutations
  4. Christoph Brauer, Christian Clason, Dirk Lorenz and Benedikt Wirth. A Sinkhorn-Newton method for entropic optimal transport
  5. Henning Petzka, Asja Fischer and Denis Lukovnikov. On the regularization of Wasserstein GANs
  6. Vivien Seguy, Bharath Bhushan Damodaran, Rémi Flamary, Nicolas Courty, Antoine Rolet and Mathieu Blondel. Large Scale Optimal Transport and Mapping Estimation
  7. Sho Sonoda and Noboru Murata. Transportation analysis of denoising autoencoders: a novel method for analyzing deep neural networks

17:11 – 17:40 Short roundtable

17:30 – 18:30 Final poster session