We are very excited to announce the acceptance of 42 fantastic papers to the first workshop on Deep Generative Models for Highly Structured Data. Special thanks to our wonderful program committee for their hard work in reviewing the submissions. The full proceedings will be available on OpenReview, and the papers will be presented as posters during the workshop.
See you all in New Orleans!
Correlated Variational Auto-Encoders
Da Tang, Dawen Liang, Tony Jebara, Nicholas Ruozzi
Compositional GAN (Extended Abstract): Learning Image-Conditional Binary Composition
Samaneh Azadi, Deepak Pathak, Sayna Ebrahimi, Trevor Darrell
AlignFlow: Auto cycle-consistent domain translations via normalizing flows
Aditya Grover, Christopher Chute, Rui Shu, Zhangjie Cao, Stefano Ermon
Generating Molecules via Chemical Reactions
John Bradshaw, Matt J. Kusner, Brooks Paige, Marwin H. S. Segler, José Miguel Hernández-Lobato
HYPE: Human-eYe Perceptual Evaluation of Generative Models
Sharon Zhou, Mitchell Gordon, Ranjay Krishna, Austin Narcomey, Durim Morina, Michael S. Bernstein
Deep Random Splines for Point Process Intensity Estimation
Gabriel Loaiza-Ganem, John P. Cunningham
Learning to Defense by Learning to Attack
Zhehui Chen, Haoming Jiang, Yuyang Shi, Bo Dai, Tuo Zhao
WiSE-ALE: Wide Sample Estimator for Approximate Latent Embedding
Shuyu Lin, Ronald Clark, Robert Birke, Niki Trigoni, Stephen Roberts
Visualizing and Understanding GANs
David Bau, Jun-Yan Zhu, Hendrik Strobelt, Bolei Zhou, Joshua B. Tenenbaum, William T. Freeman, Antonio Torralba
Fully differentiable full-atom protein backbone generation
Namrata Anand, Raphael Eguchi, Po-Ssu Huang
Learning Deep Latent-variable MRFs with Amortized Bethe Free Energy Minimization
Sam Wiseman
Debiasing Deep Generative Models via Likelihood-free Importance Weighting
Aditya Grover, Jiaming Song, Ashish Kapoor, Kenneth Tran, Alekh Agarwal, Eric Horvitz, Stefano Ermon
On Scalable and Efficient Computation of Large Scale Optimal Transport
Yujia Xie, Minshuo Chen, Haoming Jiang, Tuo Zhao, Hongyuan Zha
Unsupervised Demixing of Structured Signals from Their Superposition Using GANs
Mohammadreza Soltani, Swayambhoo Jain, Abhinav Sambasivan
Context Mover's Distance & Barycenters: Optimal transport of contexts for building representations
Sidak Pal Singh, Andreas Hug, Aymeric Dieuleveut, Martin Jaggi
Understanding Posterior Collapse in Generative Latent Variable Models
James Lucas, George Tucker, Roger Grosse, Mohammad Norouzi
Perceptual Generative Autoencoders
Zijun Zhang, Ruixiang Zhang, Zongpeng Li, Yoshua Bengio, Liam Paull
A Learned Representation for Scalable Vector Graphics
Raphael Gontijo Lopes, David Ha, Douglas Eck, Jonathon Shlens
Revisiting Auxiliary Latent Variables in Generative Models
Dieterich Lawson, George Tucker, Bo Dai, Rajesh Ranganath
Understanding the Relation Between Maximum-Entropy Inverse Reinforcement Learning and Behaviour Cloning
Seyed Kamyar Seyed Ghasemipour, Shane Gu, Richard Zemel
Point Cloud GAN
Chun-Liang Li, Manzil Zaheer, Yang Zhang, Barnabás Póczos, Ruslan Salakhutdinov
FVD: A new Metric for Video Generation
Thomas Unterthiner, Sjoerd van Steenkiste, Karol Kurach, Raphaël Marinier, Marcin Michalski, Sylvain Gelly
A RAD approach to deep mixture models
Laurent Dinh, Jascha Sohl-Dickstein, Razvan Pascanu, Hugo Larochelle
Generating Diverse High-Resolution Images with VQ-VAE
Ali Razavi, Aaron van den Oord, Oriol Vinyals
On the relationship between Normalising Flows and Variational- and Denoising Autoencoders
Alexey A. Gritsenko, Jasper Snoek, Tim Salimans
Deep Generative Models for Generating Labeled Graphs
Shuangfei Fan, Bert Huang
DIVA: Domain Invariant Variational Autoencoder
Maximilian Ilse, Jakub M. Tomczak, Christos Louizos, Max Welling
Storyboarding of Recipes: Grounded Contextual Generation
Anonymous
Interactive Visual Exploration of Latent Space (IVELS) for peptide auto-encoder model selection
Tom Sercu, Sebastian Gehrmann, Hendrik Strobelt, Payel Das, Inkit Padhi, Cicero Dos Santos, Kahini Wadhawan, Vijil Chenthamarakshan
Smoothing Nonlinear Variational Objectives with Sequential Monte Carlo
Antonio Moretti, Zizhao Wang, Luhuan Wu, Itsik Pe'er
Disentangled State Space Models: Unsupervised Learning of Dynamics Across Heterogeneous Environments
Đorđe Miladinović, Waleed Gondal, Bernhard Schölkopf, Joachim M. Buhmann, Stefan Bauer
Generative Models for Protein Design
John Ingraham, Vikas Garg, Regina Barzilay, Tommi Jaakkola
Adversarial Mixup Resynthesizers
Christopher Beckham, Sina Honari, Alex Lamb, Vikas Verma, Farnoosh Ghadiri, R Devon Hjelm, Christopher Pal
Discrete Flows: Invertible Generative Models of Discrete Data
Dustin Tran, Keyon Vafa, Kumar Agrawal, Laurent Dinh, Ben Poole
Interactive Image Generation Using Scene Graphs
Gaurav Mittal, Shubham Agrawal, Anuva Agarwal, Sushant Mehta, Tanya Marwah
Improved Adversarial Image Captioning
Pierre Dognin, Igor Melnyk, Youssef Mroueh, Jarret Ross, Tom Sercu
Variational autoencoders trained with q-deformed lower bounds
Septimia Sârbu, Luigi Malagò
Dual Space Learning with Variational Autoencoders
Hirono Okamoto, Masahiro Suzuki, Itto Higuchi, Shohei Ohsawa, Yutaka Matsuo
Structured Prediction using cGANs with Fusion Discriminator
Faisal Mahmood, Wenhao Xu, Nicholas J. Durr, Jeremiah W. Johnson, Alan Yuille
Adjustable Real-time Style Transfer
Mohammad Babaeizadeh, Golnaz Ghiasi
A Seed-Augment-Train framework for universal digit classification
Vinay Uday Prabhu, Sanghyun Han, Dian Ang Yap, Mihail D, Preethi S
Disentangling Content and Style via Unsupervised Geometry Distillation
Wayne Wu, Kaidi Cao, Cheng Li, Chen Qian, Chen Change Loy