Abstract
Graph representation of objects and their relations in a scene, known as a scene graph, provides a precise and interpretable interface for manipulating a scene by modifying the nodes or edges of the graph. Although existing works have shown promising results in modifying the placement and pose of objects, scene manipulation often loses visual characteristics such as the appearance or identity of objects. In this work, we propose DisPositioNet, a model that learns a disentangled representation for each object in a self-supervised manner for the task of image manipulation using scene graphs. Our framework disentangles both the variational latent embeddings and the feature representations in the graph. In addition to producing more realistic images thanks to the decomposition of features such as pose and identity, our method exploits probabilistic sampling of the intermediate features to generate more diverse images in object replacement and addition tasks. Our experiments show that disentangling the feature representations in the latent manifold of the model yields qualitative and quantitative improvements over previous works on two public benchmarks.
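To make the idea concrete, the following is a minimal PyTorch-style sketch (our own illustration, not the paper's exact architecture; all module and variable names are hypothetical) of a disentangled variational object embedding: each object feature is mapped to two independent Gaussian posteriors, one for appearance/identity and one for pose, and each is sampled with the reparameterization trick.

import torch
import torch.nn as nn

class DisentangledObjectEncoder(nn.Module):
    def __init__(self, feat_dim=512, z_dim=128):
        super().__init__()
        # Separate heads predict mean and log-variance for each factor.
        self.appearance_head = nn.Linear(feat_dim, 2 * z_dim)
        self.pose_head = nn.Linear(feat_dim, 2 * z_dim)

    @staticmethod
    def reparameterize(mu, logvar):
        # Reparameterization trick: differentiable sample from N(mu, sigma^2).
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, obj_feat):
        mu_a, logvar_a = self.appearance_head(obj_feat).chunk(2, dim=-1)
        mu_p, logvar_p = self.pose_head(obj_feat).chunk(2, dim=-1)
        z_appearance = self.reparameterize(mu_a, logvar_a)
        z_pose = self.reparameterize(mu_p, logvar_p)
        # Downstream, z_pose can be edited via the scene graph while
        # z_appearance is kept fixed to preserve object identity; repeated
        # sampling yields diverse outputs for object addition or replacement.
        return z_appearance, z_pose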
Contributions
- a self-supervised approach for disentangling pose and appearance in semantic image manipulation that does not require label information for the disentanglement task
- a disentangled scene graph neural network (see the sketch after this list)
- a variational latent representation that provides higher diversity in image manipulation
- superior quantitative and qualitative performance compared to the state of the art on two public benchmarks
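As referenced in the list above, here is an illustrative sketch (under our own assumptions, not the authors' exact layer; names and dimensions are hypothetical) of a disentangled scene graph convolution: messages along each (subject, predicate, object) triplet are computed by factor-specific networks, so pose and appearance information propagate through the graph without being mixed.

import torch
import torch.nn as nn

class DisentangledTripletConv(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        # One message network per factor so pose and appearance never mix.
        self.pose_net = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.app_net = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, pose, app, pred, edges):
        # pose, app: (N, dim) per-object features; pred: (E, dim) predicate
        # features; edges: (E, 2) long tensor of subject/object node indices.
        s, o = edges[:, 0], edges[:, 1]

        def propagate(x, net):
            msg = net(torch.cat([x[s], pred, x[o]], dim=-1))  # (E, dim) messages
            out = x.index_add(0, s, msg)    # deliver messages to subject nodes
            out = out.index_add(0, o, msg)  # deliver messages to object nodes
            return out

        return propagate(pose, self.pose_net), propagate(app, self.app_net)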
Paper
British Machine Vision Conference (BMVC) 2022
Link to paper: BMVC Website | arXiv
@inproceedings{Farshad_2022_BMVC,
  author    = {Azade Farshad and Yousef Yeganeh and Helisa Dhamo and Federico Tombari and Nassir Navab},
  title     = {DisPositioNet: Disentangled Pose and Identity in Semantic Image Manipulation},
  booktitle = {33rd British Machine Vision Conference 2022, {BMVC} 2022, London, UK, November 21-24, 2022},
  publisher = {{BMVA} Press},
  year      = {2022},
  url       = {https://bmvc2022.mpi-inf.mpg.de/0340.pdf}
}
Source Code
The source code is publicly available in the following repository: Source Code
Team
Our Team
Azade Farshad
Yousef Yeganeh
Helisa Dhamo
Federico Tombari
Nassir Navab
Contact
Contact Us
If you have any questions or are looking for collaborations, feel free to contact us.
Email:
azade.farshad@tum.de
y.yeganeh@tum.de