As image generation techniques mature, there is a growing interest in explainable representations that are easy to understand and intuitive to manipulate. In this work, we turn to co-occurrence statistics, which have long been used for texture analysis, to learn a controllable texture synthesis model. We propose a fully convolutional generative adversarial network, conditioned locally on co-occurrence statistics, to generate arbitrarily large images while having local, interpretable control over texture appearance. To encourage fidelity to the input condition, we introduce a novel differentiable co-occurrence loss that is integrated seamlessly into our framework in an end-to-end fashion. We demonstrate that our solution offers a stable, intuitive, and interpretable latent representation for texture synthesis, which can be used to generate smooth texture morphs between different textures. We further show an interactive texture tool that allows a user to adjust local characteristics of the synthesized texture by directly using the co-occurrence values.
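To make the idea of a differentiable co-occurrence loss concrete, below is a minimal PyTorch sketch of one plausible realization: pixel intensities are softly assigned to bins with a Gaussian kernel, a co-occurrence matrix is accumulated over neighbouring pixel pairs, and the matrices of the generated and reference textures are compared. The function names, bin count, kernel width, and the choice of horizontal neighbour pairs are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def soft_binning(x, centers, sigma):
    # x: (B, 1, H, W) grayscale values in [0, 1]; centers: (K,) bin centers.
    # Gaussian soft assignment of each pixel to K intensity bins keeps the
    # operation differentiable, unlike a hard histogram.
    d = x.unsqueeze(-1) - centers.view(1, 1, 1, 1, -1)     # (B, 1, H, W, K)
    w = torch.exp(-0.5 * (d / sigma) ** 2)
    return w / (w.sum(dim=-1, keepdim=True) + 1e-8)        # normalize per pixel

def cooccurrence(x, centers, sigma):
    # Soft K x K co-occurrence matrix over horizontal neighbour pairs.
    w = soft_binning(x, centers, sigma).squeeze(1)          # (B, H, W, K)
    left, right = w[:, :, :-1, :], w[:, :, 1:, :]           # adjacent pixels
    # Outer product of bin assignments, summed over all pixel pairs.
    M = torch.einsum('bhwi,bhwj->bij', left, right)
    return M / M.sum(dim=(1, 2), keepdim=True).clamp_min(1e-8)

def cooccurrence_loss(fake, real, n_bins=32, sigma=0.05):
    # L1 distance between the co-occurrence statistics of the generated
    # texture and those of the reference texture.
    centers = torch.linspace(0, 1, n_bins, device=fake.device)
    return F.l1_loss(cooccurrence(fake, centers, sigma),
                     cooccurrence(real, centers, sigma))
```

Soft binning is the key design choice here: a hard histogram has zero gradient almost everywhere, whereas the Gaussian assignment lets the loss propagate gradients back to the generator end-to-end.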
Open Access | Research Article
Computational Visual Media 2022, 8(2): 289-302
Published: 06 December 2021