SeMani: Semantic Manipulation for
Entity-Level Text-Guided Image Manipulation

Yikai Wang*, Jianan Wang*, Guansong Lu, Hang Xu, Zhenguo Li, Wei Zhang, Yanwei Fu†.
Fudan University, Huawei Noah's Ark Lab.

Introduction

Entity-Level Text-Guided Image Manipulation (eL-TGIM) is a novel task that manipulates real-world images at the entity level under the guidance of text. eL-TGIM takes three inputs: an entity prompt word, a target text description, and the original real-world image. The following illustration provides a visual overview of eL-TGIM.
eL-TGIM imposes three basic requirements:
  • To edit the entity so that it is consistent with the target text description;
  • To preserve the entity-irrelevant regions;
  • To merge the manipulated entity into the image naturally.
Although most current image editors partially satisfy the first requirement, they tend to neglect the other two, particularly the preservation of entity-irrelevant regions. Moreover, many editors depend on a user-provided mask to identify the entity-relevant region. The examples below illustrate these points.
Muse and Imagic can create new content in the entity-irrelevant regions instead of preserving them, while DALLE2 needs a user-provided mask to perform the manipulation and can only generate square images. As a result, existing editors cannot be applied to eL-TGIM in a straightforward way.

Framework

To address eL-TGIM, we propose SeMani (Semantic Manipulation of real-world images), a framework that consists of two phases: semantic alignment and image manipulation. In the semantic alignment phase, SeMani uses a semantic alignment module to locate the image regions that need to be manipulated. In the image manipulation phase, SeMani employs a generative model to synthesize the new image, conditioned on the entity-irrelevant regions and the target description. Below is an illustration of SeMani.
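In code terms, the two phases compose as follows, mirroring the task inputs above. This is only a minimal structural sketch; the names (Aligner, Generator, semani_edit) are hypothetical placeholders, not the released SeMani API.

    from typing import Protocol
    from PIL import Image

    class Aligner(Protocol):
        def locate(self, image: Image.Image, entity: str) -> Image.Image:
            """Return a binary mask over the entity named by the prompt word."""

    class Generator(Protocol):
        def inpaint(self, image: Image.Image, mask: Image.Image,
                    text: str) -> Image.Image:
            """Regenerate the masked region, conditioned on the target text."""

    def semani_edit(aligner: Aligner, generator: Generator,
                    image: Image.Image, entity: str, target: str) -> Image.Image:
        # Phase 1: semantic alignment -- find the entity-relevant region.
        mask = aligner.locate(image, entity)
        # Phase 2: image manipulation -- edit only inside the mask, so
        # entity-irrelevant pixels are preserved by construction.
        return generator.inpaint(image, mask, target)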
To implement SeMani, we adopt two popular perspectives on images: discrete and continuous. The discrete perspective draws inspiration from auto-regressive transformers, while the continuous perspective is inspired by denoising diffusion probabilistic models. These perspectives give rise to two variants, SeMani-Trans and SeMani-Diff, each with its own architecture and generation process. SeMani can manipulate multiple objects either simultaneously or sequentially, as shown below.
For details about SeMani-Trans and SeMani-Diff, please refer to our paper.
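As a rough intuition for the continuous variant: diffusion-based editors commonly preserve the unedited region by re-imposing a noised copy of the source image outside the mask after every denoising step, and a discrete variant would, by analogy, regenerate only the tokens inside the masked region with an auto-regressive transformer. The sketch below shows this generic masked-denoising step, not SeMani-Diff's exact sampler; denoise and add_noise are assumed, model-provided functions.

    import torch

    def masked_denoise_step(x_t: torch.Tensor, x_orig: torch.Tensor,
                            mask: torch.Tensor, t: int,
                            denoise, add_noise) -> torch.Tensor:
        # One reverse-diffusion step that edits only where mask == 1.
        x_prev = denoise(x_t, t)            # generated content at step t-1
        x_known = add_noise(x_orig, t - 1)  # source image noised to step t-1
        # Keep generated pixels inside the mask; restore the (noised)
        # original pixels outside it, so irrelevant regions survive.
        return mask * x_prev + (1 - mask) * x_known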

Experiments

We conducted experiments on the CUB, Oxford, and COCO datasets; the results are shown below.

We also designed an interface for using SeMani.
Please refer to our papers (CVPR22 oral, extension) for more details.