Unsupervised font generation network integrating content and style representation

Bibliographic Details
Main Authors: Liu, Yu, Ding, Yang, Binti Khalid, Fatimah, Li, Xin, Mustaffa, Mas Rina, Azman, Azreen
Format: Article
Language:en
Published: Institute of Computing Technology 2025
Subjects:
Online Access:http://psasir.upm.edu.my/id/eprint/123157/1/123157.pdf
http://psasir.upm.edu.my/id/eprint/123157/
https://www.sciengine.com/JCADC/doi/10.3724/SP.J.1089.2023-00397
Description
Summary: Generating Chinese fonts, which comprise a very large number of characters, is a challenging task. Existing methods mainly rely on large amounts of paired data for supervised learning, but collecting such data is labor-intensive and difficult to scale to new font styles. To help font designers improve the efficiency of developing computerized Chinese font libraries, an unsupervised font generation network that separates content and style representations is proposed. First, dense semantic correspondences are established between style and content representations in the same domain to guide the decoder toward high-quality outputs. Then, deformable convolutions are introduced into the skip connections, and the model is made to focus more on the structural characteristics of fonts by learning the interdependence between the offsets and the channels. Finally, a multi-scale style discriminator is designed to evaluate the style consistency of generated images at different scales. The generation results of five font generation methods, including FUNIT, MX-Font, and DG-Font, are demonstrated and analyzed on public datasets. Experimental results show that the proposed method outperforms the others on L1, RMSE, and user-study evaluations.
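As a rough illustration of the multi-scale evaluation idea described above (a minimal sketch, not the paper's implementation), the snippet below scores a grayscale image at several resolutions obtained by repeated 2x2 average pooling; `score_fn` is a hypothetical stand-in for a trained per-scale style discriminator:

```python
import numpy as np

def downsample(img: np.ndarray) -> np.ndarray:
    """Halve spatial resolution by 2x2 average pooling (H and W must be even)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def multiscale_scores(img: np.ndarray, score_fn, n_scales: int = 3) -> list:
    """Apply a per-scale scoring function at successively halved resolutions.

    In a multi-scale style discriminator, each scale would have its own
    learned discriminator; here one score_fn is reused for illustration.
    """
    scores = []
    for _ in range(n_scales):
        scores.append(score_fn(img))
        img = downsample(img)
    return scores

# Toy usage: a constant 8x8 "image" scored by its mean intensity at 3 scales.
img = np.full((8, 8), 0.5)
scores = multiscale_scores(img, lambda x: float(x.mean()))
```

Evaluating the same image at coarse and fine scales lets the discriminator penalize both global style drift and local stroke-level artifacts, which is the motivation the abstract gives for the multi-scale design.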