Video-based person re-identification (reID) aims to retrieve person videos with the same identity as a query person across multiple cameras. Spatial and temporal distractors in person videos, such as background clutter and partial occlusions over frames, respectively, make this task much more challenging than image-based person reID. We observe that (a) spatial distractors appear consistently in particular locations, and (b) temporal distractors show several patterns, e.g., partial occlusions occurring in the first few frames, and such patterns provide informative cues for predicting which frames to focus on (i.e., temporal attentions). Based on this, we introduce Spatial and Temporal Memory Networks (STMN), a novel framework for video-based reID. The spatial memory stores features of spatial distractors that frequently emerge across video frames, while the temporal memory saves attentions optimized for typical temporal patterns in person videos. We leverage the spatial and temporal memories to refine frame-level person representations and to aggregate the refined frame-level features into a sequence-level person representation, respectively, effectively handling spatial and temporal distractors in person videos. We also introduce a memory spread loss that prevents our model from addressing only a few particular items in the memories. Experimental results on standard benchmarks, including MARS, DukeMTMC-VideoReID, and LS-VID, demonstrate the effectiveness of our method.
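To make the memory-addressing idea concrete, below is a minimal PyTorch sketch of a generic key-value memory read together with an entropy-style spread penalty. The class `MemoryRead`, the function `memory_spread_loss`, and all dimensions are illustrative assumptions for exposition only, not the released implementation (see the [code] link for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryRead(nn.Module):
    """Generic key-value memory read: a query addresses learnable memory keys
    via softmax similarity and retrieves a weighted sum of memory values."""
    def __init__(self, num_items, key_dim, value_dim):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_items, key_dim))
        self.values = nn.Parameter(torch.randn(num_items, value_dim))

    def forward(self, query):
        # query: (B, key_dim) -> addressing weights over memory items: (B, num_items)
        weights = F.softmax(query @ self.keys.t(), dim=-1)
        read = weights @ self.values  # (B, value_dim), item read from memory
        return read, weights

def memory_spread_loss(weights, eps=1e-8):
    """Penalize addressing weights (averaged over the batch) that collapse
    onto a few memory items; minimizing this maximizes usage entropy."""
    usage = weights.mean(dim=0)  # (num_items,)
    return (usage * (usage + eps).log()).sum()

# Toy usage (shapes are arbitrary choices for this sketch):
# mem = MemoryRead(num_items=10, key_dim=256, value_dim=256)
# read, w = mem(torch.randn(4, 256))
# loss = memory_spread_loss(w)
```

In this sketch, the read output would be used to refine or aggregate frame-level features, and the spread penalty would be added to the training objective; the paper's actual memory structure, addressing, and loss weighting may differ in detail.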
We compare STMN with state-of-the-art methods on standard video-based reID benchmarks, including MARS, DukeMTMC-VideoReID, and LS-VID. For fair comparisons, we classify the methods into two groups, depending on whether they follow the RRS or the all-frames strategy for evaluation (please refer to the paper for details). We set a new state of the art on these benchmarks. Notably, the results of STMN using RRS even surpass those of previous methods, e.g., COSAM, M3D, and GLTR, under the all-frames setting.
[link]
[code]
@inproceedings{eom2021video,
  title={Video-based Person Re-identification with Spatial and Temporal Memory Networks},
  author={Eom, Chanho and Lee, Geon and Lee, Junghyup and Ham, Bumsub},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={12036--12045},
  year={2021}
}
This research was supported in part by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (NRF-2019R1A2C2084816) and the Yonsei University Research Fund of 2021 (2021-22-0001).