Triple-Translation GAN with Multi-layer Sparse Representation for Face Image Synthesis

Publication date: Available online 15 May 2019
Source: Neurocomputing
Author(s): Linbin Ye, Bob Zhang, Meng Yang, Wei Lian

Abstract: Face image synthesis with facial feature and identity preservation is one of the key challenges in computer vision. Recently, CycleGAN has achieved outstanding performance in image translation and synthesis. However, for face image synthesis several issues remain (e.g., poor visual quality of facial features, changed face identity, and unstable model optimization). To solve these issues, in this paper we propose a novel triple-translation GAN (TTGAN) with multi-layer sparse representation. We design a multi-layer sparse representation model in which an L1-norm representation constraint is integrated into image generation to enhance identity preservation and the robustness of the generated facial images to reconstruction error. Moreover, to improve the stability of model optimization, we propose a triple-translation consistency loss, which includes a third image translation from the reconstructed original input to the desired output. Face synthesis experiments on benchmark face databases clearly show superior performance over the competing methods.
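The triple-translation idea described in the abstract can be illustrated as an extension of CycleGAN's cycle-consistency loss. The sketch below is a hypothetical reading of the abstract, not the paper's actual formulation: it assumes generators G (source to target) and F (target to source), uses L1 distances as is standard for cycle-consistency terms, and adds a third translation that maps the reconstructed input back to the target domain. Function names and weighting are illustrative assumptions.

```python
import numpy as np


def l1_loss(a, b):
    # Mean absolute error; L1 terms are typical in cycle-consistency losses.
    return np.mean(np.abs(a - b))


def triple_translation_loss(x, G, F):
    """Hypothetical sketch of a triple-translation consistency loss.

    x: source-domain image (array); G: source->target generator;
    F: target->source generator. Standard cycle consistency compares
    x with F(G(x)); the third translation additionally maps the
    reconstructed input back to the target domain and compares it
    with the first translation. (Assumed structure, equal weights.)
    """
    y_hat = G(x)            # first translation: source -> target
    x_rec = F(y_hat)        # second translation: reconstruct the input
    y_tilde = G(x_rec)      # third translation: reconstructed input -> target
    cycle = l1_loss(x, x_rec)         # usual cycle-consistency term
    triple = l1_loss(y_hat, y_tilde)  # extra triple-translation term
    return cycle + triple
```

With perfectly inverse generators both terms vanish; any reconstruction error in F is penalized twice, once in the pixel domain and once after re-translation, which is one plausible way such a loss could stabilize optimization.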