Learning Disentangled Representation for Robust Person Re-identification

Fig. Visual comparison of identity-related and -unrelated features. We generate new person images by interpolating (left) identity-related features and (right) identity-unrelated ones between two images, while keeping the others fixed. We can see that identity-related features encode, e.g., clothing and color, while identity-unrelated ones capture, e.g., human pose and scale changes.



We address the problem of person re-identification (reID), that is, retrieving person images from a large dataset, given a query image of the person of interest. The key challenge is to learn person representations robust to intra-class variations, as different persons can have the same attribute, and the same person's appearance looks different with viewpoint changes. Recent reID methods focus on learning features that are discriminative but robust to only a particular factor of variation (e.g., human pose), and this requires corresponding supervisory signals (e.g., pose annotations). To tackle this problem, we propose to disentangle identity-related and -unrelated features from person images. Identity-related features contain information useful for specifying a particular person (e.g., clothing), while identity-unrelated ones hold other factors (e.g., human pose, scale changes). To this end, we introduce a new generative adversarial network, dubbed identity shuffle GAN (IS-GAN), that factorizes these features using identification labels without any auxiliary information. We also propose an identity-shuffling technique to regularize the disentangled features. Experimental results demonstrate the effectiveness of IS-GAN, largely outperforming the state of the art on standard reID benchmarks including Market-1501, CUHK03, and DukeMTMC-reID.
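To make the identity-shuffling idea concrete, here is a schematic sketch of the data flow, not the authors' implementation: the encoders `E_R`, `E_U` and generator `G` are stand-in linear maps of our own choosing (in IS-GAN they are learned CNNs), purely to show how identity-related features are swapped between two images of the same person while each image keeps its own identity-unrelated features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the identity-related encoder E_R, the
# identity-unrelated encoder E_U, and the generator G. These names and
# shapes are illustrative placeholders, not the paper's architecture.
D = 8                            # toy "image" dimensionality
W_R = rng.normal(size=(4, D))    # projection to identity-related features
W_U = rng.normal(size=(4, D))    # projection to identity-unrelated features
W_G = rng.normal(size=(D, 8))    # "generator" mapping features back to an image

def E_R(x):  # identity-related features (e.g., clothing, color)
    return W_R @ x

def E_U(x):  # identity-unrelated features (e.g., pose, scale)
    return W_U @ x

def G(f_r, f_u):  # synthesize an image from the two feature parts
    return W_G @ np.concatenate([f_r, f_u])

# Two images of the SAME person (same identification label).
x1, x2 = rng.normal(size=D), rng.normal(size=D)

# Identity shuffling: swap the identity-related features across the
# pair, keeping each image's own identity-unrelated features.
y1 = G(E_R(x2), E_U(x1))   # x2's identity features rendered in x1's pose
y2 = G(E_R(x1), E_U(x2))   # x1's identity features rendered in x2's pose

# Since both inputs share an identity, the shuffled outputs should still
# depict that person; enforcing this regularizes the disentanglement.
print(y1.shape, y2.shape)
```

The same factorization underlies the interpolation figure above: blending one feature part between two images while the other is held fixed changes only the corresponding factors in the generated image.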


Quantitative results

†: ReID methods trained using both classification and (hard) triplet losses; ∗: Our implementation.

Qualitative results

Fig. (left) Visual comparison of retrieval results on the Market-1501 dataset. Results with green boxes have the same identity as the query, while those with red boxes do not. (right) An example of generated images using a part-level identity-shuffling technique.


C. Eom, B. Ham
Learning Disentangled Representation for Robust Person Re-identification
[Paper] [Code]


@inproceedings{eom2019learning,
		title={Learning Disentangled Representation for Robust Person Re-identification},
		author={Eom, Chanho and Ham, Bumsub},
		booktitle={Advances in Neural Information Processing Systems},
		year={2019}
}


This research was supported by the R&D program for Advanced Integrated-intelligence for Identification (AIID) through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (NRF-2018M3E3A1057289).