Generalizing Energy-based Generative ConvNets from Particle Evolution Perspective
Yang Wu, Xu Cai, Pengxu Wei, Guanbin Li, Liang Lin
TPAMI 2020

Abstract


Compared with Generative Adversarial Networks (GANs) [3], Energy-Based generative Models (EBMs) [4] possess two appealing properties: i) they can be optimized directly, without requiring an auxiliary network, during both learning and synthesis; ii) they can better approximate the underlying distribution of the observed data by explicitly learning potential functions. This paper studies a branch of EBMs, i.e., energy-based Generative ConvNets (GCNs) [1], whose energy function is defined by a bottom-up ConvNet. From the perspective of particle physics, we address the unstable energy dissipation that can damage the quality of the synthesized samples during maximum likelihood learning. Specifically, we first establish a connection between the classical FRAME model [2] and a dynamic physical process, and generalize the GCN as a discrete flow under a certain metric measure from the particle perspective. To address the KL-vanishing issue, we then reformulate the GCN from a KL discrete flow, measured by the KL divergence, to a Jordan-Kinderlehrer-Otto (JKO) discrete flow measured by the Wasserstein distance, and derive a Wasserstein GCN (wGCN). Based on these theoretical studies, we finally derive a Generalized GCN (GGCN) to further improve model generalization and learning capability. GGCN introduces a hidden-space mapping strategy, employing a normal distribution as the reference distribution, to address the learning-bias issue. Because GCNs rely on MCMC sampling, learning becomes seriously time-consuming as the number of sampling steps increases; we therefore propose a trainable non-linear upsampling function and amortized learning to improve learning efficiency. The proposed GGCN is trained in a symmetrical learning manner. Quantitative and qualitative experiments are conducted on several widely used face and natural image datasets, and the results surpass those of existing models in both model stability and the quality of generated samples.
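For readers unfamiliar with the prototype model, the GCN of [1] defines a Gibbs density whose energy is parameterized by a bottom-up ConvNet f(x; θ) relative to a Gaussian reference q(x), and draws samples by Langevin dynamics (this is the MCMC referred to above). In our notation (f, q, σ, and δ are written here for illustration and are not defined on this page):

\[
p(x;\theta) = \frac{1}{Z(\theta)}\exp\{f(x;\theta)\}\,q(x),
\qquad
\mathcal{E}(x;\theta) = \frac{\|x\|^2}{2\sigma^2} - f(x;\theta),
\]
\[
x_{\tau+1} = x_\tau - \frac{\delta^2}{2}\,\nabla_x \mathcal{E}(x_\tau;\theta) + \delta\, U_\tau,
\qquad U_\tau \sim \mathcal{N}(0, I).
\]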

 

 

Contributions


1. A reformulation of GCN from a physical particle evolution perspective

We attach a precise interpretation to the prototype energy-based Generative ConvNet, moving from the traditional information-theoretic and statistical view to a particle perspective. From the particle evolution perspective, we establish a connection between the GCN and a dynamic physical process, and provide a generalized formulation of the GCN as a discrete flow under a certain metric measure. This reformulation provides a principled way to extend the GCN with discrete flows.
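As one plausible rendering of such a discrete flow (the notation is ours, not quoted from the paper: τ is a step size, D the chosen metric measure, and F a free-energy functional pairing the learned potential with an entropy term), each iterate of the sample distribution solves a regularized minimization:

\[
p_{k+1} = \arg\min_{p}\; \mathcal{D}(p, p_k) + \tau\,\mathcal{F}(p),
\qquad
\mathcal{F}(p) = \int \mathcal{E}(x;\theta)\,p(x)\,dx + \int p(x)\log p(x)\,dx.
\]

Taking \(\mathcal{D}(p, p_k) = \mathrm{KL}(p \,\|\, p_k)\) gives the KL discrete flow underlying the original GCN; other choices of \(\mathcal{D}\) yield the generalizations below.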

 

2. A generalization of GCN with Wasserstein JKO discrete flow

To address the KL-vanishing issue, building on the reformulation above, we generalize the KL discrete flow, measured by the KL divergence, into a Jordan-Kinderlehrer-Otto (JKO) discrete flow measured by the Wasserstein distance, and derive the Wasserstein GCN (wGCN). The proposed Wasserstein learning method resolves the instability of the energy dissipation process in GCN, owing to the better geometric characteristics of the Wasserstein distance.
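The JKO scheme itself is classical (Jordan, Kinderlehrer, and Otto, 1998): it replaces the divergence with the squared 2-Wasserstein distance, so each step solves

\[
\rho_{k+1} = \arg\min_{\rho}\; \frac{1}{2\tau} W_2^2(\rho, \rho_k) + \mathcal{F}(\rho),
\qquad
W_2^2(\mu,\nu) = \inf_{\gamma \in \Pi(\mu,\nu)} \int \|x-y\|^2 \, d\gamma(x,y).
\]

As τ → 0 the iterates recover Fokker-Planck dynamics; and because W_2 charges each step by how far probability mass actually moves, it stays informative even when the supports of ρ and ρ_k barely overlap, which is the geometric property credited above for stabilizing energy dissipation.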

3. A Generalized Generative ConvNet

We propose a generalized Generative ConvNet (GGCN) model that inherits the mechanism of the Wasserstein JKO discrete flow formulation. To further reduce the learning bias and improve model generalization, we introduce a hidden-space mapping strategy and employ a normal distribution over the hidden space as the reference distribution. Because MCMC-based learning of EBMs remains seriously time-consuming as the number of sampling steps grows, a trainable non-linear upsampling function and amortized sampling are proposed to further improve learning efficiency and model generalization. We learn the GGCN with a symmetrical learning strategy. The complete framework and learning strategy are as follows:
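To make the interplay of amortized sampling and symmetrical learning concrete, the following is a minimal PyTorch-style sketch of one training iteration. It is our own simplification under stated assumptions, not the paper's implementation; the names (langevin, train_step, the energy ConvNet f, the generator/upsampler g) are hypothetical.

import torch

def langevin(f, x, n_steps=20, step_size=0.01):
    # Refine samples by Langevin dynamics on the energy E(x) = -f(x).
    for _ in range(n_steps):
        x = x.detach().requires_grad_(True)
        energy = -f(x).sum()
        grad, = torch.autograd.grad(energy, x)
        x = x - 0.5 * step_size ** 2 * grad + step_size * torch.randn_like(x)
    return x.detach()

def train_step(f, g, x_data, opt_f, opt_g, z_dim=128):
    # Amortized proposal: the generator maps z ~ N(0, I), the normal
    # reference over the hidden space, to an initial image.
    z = torch.randn(x_data.size(0), z_dim)
    x_synth = langevin(f, g(z))  # short-run MCMC refinement

    # Energy-model update: approximate maximum-likelihood gradient,
    # contrasting observed data against synthesized samples.
    loss_f = f(x_synth).mean() - f(x_data).mean()
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()

    # Symmetrical generator update: regress the proposal onto the
    # refined sample, so the generator amortizes future sampling.
    loss_g = ((g(z) - x_synth) ** 2).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

Warm-starting the chain from g(z) is what cuts the number of Langevin steps needed; the regression loss then feeds the refined samples back into the generator.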

 

Experiments


Extensive experiments are conducted on several large-scale datasets for the image generation task, demonstrating quantitatively and qualitatively the improved generation quality of the proposed models. We also evaluate the proposed model on other tasks, i.e., image inpainting and interpolation, demonstrating its appealing learning ability.

Results on CelebA:

Comparison results on CIFAR-10:

Experimental results of inpainting and interpolation:

References


[1] J. Xie, Y. Lu, S.-C. Zhu, and Y. Wu, "A theory of generative ConvNet," in International Conference on Machine Learning, 2016, pp. 2635–2644.

[2] S. C. Zhu, Y. Wu, and D. Mumford, "Filters, random fields and maximum entropy (FRAME): Towards a unified theory for texture modeling," International Journal of Computer Vision, vol. 27, no. 2, pp. 107–126, 1998.

[3] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in Advances in Neural Information Processing Systems, 2014, pp. 2672–2680.

[4] Y. LeCun, S. Chopra, R. Hadsell, M. Ranzato, and F. Huang, "A tutorial on energy-based learning," Predicting Structured Data, vol. 1, no. 0, 2006.