Abstract: In this paper, we describe the "implicit autoencoder" (IAE), a generative autoencoder in which both the generative path and the recognition path are parametrized by implicit distributions. We use two generative adversarial networks to define the reconstruction and regularization cost functions of the implicit autoencoder, and derive the learning rules based on maximum-likelihood learning. Using implicit distributions allows us to learn more expressive posterior and conditional likelihood distributions for the autoencoder. Learning an expressive conditional likelihood distribution enables the latent code to capture only the abstract, high-level information of the data, while the remaining information is captured by the implicit conditional likelihood distribution. For example, we show that implicit autoencoders can disentangle global and local information, and perform deterministic or stochastic reconstructions of images. We further show that implicit autoencoders can disentangle discrete underlying factors of variation from continuous factors in an unsupervised fashion, and perform clustering and semi-supervised learning.
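To make the described architecture concrete, the following is a minimal PyTorch sketch of the two-GAN setup the abstract outlines: an encoder and decoder that consume auxiliary noise (making both paths implicit distributions), a reconstruction discriminator, and a regularization discriminator. All module names, layer sizes, and the exact pairings fed to each discriminator are illustrative assumptions, not the paper's code.

```python
# A minimal sketch of the two-GAN implicit-autoencoder setup.
# All names, sizes, and discriminator pairings are illustrative
# assumptions, not the authors' implementation.
import torch
import torch.nn as nn

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                         nn.Linear(256, out_dim))

x_dim, z_dim, n_dim = 784, 16, 32        # data, latent-code, and noise sizes (assumed)

encoder = mlp(x_dim + n_dim, z_dim)      # implicit posterior q(z|x): x + noise -> z
decoder = mlp(z_dim + n_dim, x_dim)      # implicit likelihood p(x|z): z + noise -> x_hat
d_recon = mlp(x_dim + z_dim, 1)          # reconstruction GAN discriminator
d_reg   = mlp(z_dim, 1)                  # regularization GAN discriminator

bce = nn.BCEWithLogitsLoss()
x = torch.rand(64, x_dim)                # stand-in for a data batch

# Recognition path: sample a code from the implicit posterior.
z = encoder(torch.cat([x, torch.randn(64, n_dim)], dim=1))
# Generative path: stochastic reconstruction from the implicit likelihood;
# the noise input carries the information the code does not.
x_hat = decoder(torch.cat([z, torch.randn(64, n_dim)], dim=1))

# Reconstruction GAN: data pairs (x, z) vs. reconstruction pairs (x_hat, z).
real_logit = d_recon(torch.cat([x, z], dim=1))
fake_logit = d_recon(torch.cat([x_hat, z], dim=1))
recon_d_loss = (bce(real_logit, torch.ones_like(real_logit)) +
                bce(fake_logit, torch.zeros_like(fake_logit)))

# Regularization GAN: prior samples vs. aggregated-posterior samples,
# as in adversarial autoencoders.
prior_logit = d_reg(torch.randn(64, z_dim))
post_logit  = d_reg(z)
reg_d_loss  = (bce(prior_logit, torch.ones_like(prior_logit)) +
               bce(post_logit, torch.zeros_like(post_logit)))
```

In this sketch, the encoder and decoder would be trained adversarially against the two discriminators; because the decoder receives its own noise source, the latent code is free to keep only global, high-level structure while the noise accounts for local detail, which is the division of labor the abstract describes.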