
CSAST: Content self-supervised and style contrastive learning for arbitrary style transfer.

Abstract

Arbitrary artistic style transfer has achieved great success with deep neural networks, but existing methods still struggle with the dilemma of content preservation versus style translation due to the inherent content-and-style conflict. In this paper, we introduce content self-supervised learning and style contrastive learning into arbitrary style transfer to improve content preservation and style translation, respectively. The former is based on the assumption that stylizing a geometrically transformed image is perceptually similar to applying the same transformation to the stylized result of the original image. This content self-supervised constraint noticeably improves content consistency before and after style translation, and also helps reduce noise and artifacts. Furthermore, it is especially well suited to video style transfer, since it promotes inter-frame continuity, which is crucial to the visual stability of video sequences. For the latter, we construct a contrastive loss that pulls together style representations (Gram matrices) of the same style and pushes apart those of different styles. This yields more accurate style translation and more appealing visual results. Extensive qualitative and quantitative experiments demonstrate the superiority of our method in improving arbitrary style transfer quality, both for images and videos.

Copyright © 2023 Elsevier Ltd. All rights reserved.
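The two constraints described in the abstract can be sketched in simplified form. The following is a minimal NumPy illustration, not the authors' implementation: the function names, the cosine-similarity/InfoNCE-style formulation of the contrastive term, and the L1 form of the equivariance term are assumptions for illustration; the paper's actual losses may differ in detail.

```python
import numpy as np

def gram_matrix(feat):
    """Gram matrix of a feature map of shape (C, H, W) -> (C, C),
    normalized by the number of entries (a common style representation)."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_contrastive_loss(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style contrastive loss over flattened Gram matrices:
    pull the anchor toward a Gram of the same style (positive) and
    push it away from Grams of different styles (negatives)."""
    def sim(a, b):
        a, b = a.ravel(), b.ravel()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    pos = np.exp(sim(anchor, positive) / tau)
    neg = sum(np.exp(sim(anchor, g) / tau) for g in negatives)
    return -np.log(pos / (pos + neg))

def content_self_supervised_loss(stylize, content, style, transform):
    """Equivariance constraint: stylizing a transformed image should
    match transforming the stylized image, i.e.
    || stylize(T(c), s) - T(stylize(c, s)) ||_1 for a geometric T."""
    a = stylize(transform(content), style)
    b = transform(stylize(content, style))
    return float(np.abs(a - b).mean())
```

As a sanity check, a stylization network that is perfectly equivariant to the chosen transform (e.g. a horizontal flip) incurs zero content self-supervised loss, while the contrastive loss is smaller when the positive truly shares the anchor's style than when it does not.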
