SSNet: Structure-Semantic Net for Chinese Typography Generation based on Image Translation

Publication date: Available online 30 August 2019
Source: Neurocomputing
Author(s): Jianwei Zhang, Danni Chen, Guoqiang Han, Guanzhao Li, Junting He, Zhenmei Liu, Zhihui Ruan

Abstract
The abundance of complex Chinese characters makes typography production costly in both time and labor, so it cannot meet the diverse demand for typographies in daily life. Image translation methods are becoming the mainstream approach to typography generation because they facilitate typography production. Nevertheless, current translation methods do not take Chinese semantics and structure into account. In this paper, we propose a method called Structure-Semantic Net (SSNet) for Chinese typography generation, which combines disentangled stroke features from a structure module with pre-trained semantic features from a semantic module to generate target typographies. Furthermore, a novel loss, the dual-masked Hausdorff distance, is proposed to penalize incorrectly generated pixels and regularize character contours, stabilizing the training process. Qualitative and quantitative results show that the proposed SSNet surpasses existing image translation methods in image quality, and an ablation study verifies the effectiveness of each module and loss function.
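The exact dual-masked Hausdorff distance is defined in the paper itself; as a rough illustration of the general idea of a Hausdorff-style contour penalty between character masks, the sketch below uses Euclidean distance transforms to penalize generated stroke pixels that fall off the target glyph and target pixels the generation fails to cover. The function name, the use of scipy, and the masking and averaging choices are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def hausdorff_style_loss(pred_mask: np.ndarray, target_mask: np.ndarray) -> float:
    """Average-Hausdorff-style penalty between two binary character masks.

    pred_mask, target_mask: 2-D boolean arrays (True = glyph/stroke pixel),
    e.g. obtained by thresholding a generated character image and its
    ground-truth typography image.
    """
    # Distance of every pixel to the nearest foreground pixel of each mask
    # (foreground pixels themselves get distance 0).
    dist_to_target = distance_transform_edt(~target_mask)
    dist_to_pred = distance_transform_edt(~pred_mask)

    # Penalty for generated pixels lying off the target glyph ...
    forward = (pred_mask * dist_to_target).sum() / max(pred_mask.sum(), 1)
    # ... and for target glyph pixels the generation failed to cover.
    backward = (target_mask * dist_to_pred).sum() / max(target_mask.sum(), 1)

    return float(forward + backward)
```

In a training loop such a term would typically be computed per character after binarizing the generator output and added to the usual translation losses, which is how a contour-regularizing penalty of this kind helps stabilize training.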