In vision transformers, the position embedding (PE) plays a crucial role in capturing the order of tokens. However, because the PE is simply added to the token embedding at the input, its expressiveness is limited. Layer-wise methods, which deliver the PE to every layer and apply independent Layer Normalizations to the token embedding and the PE, have been adopted to overcome this limitation. In this paper, we identify a conflicting result that arises in the layer-wise structure when global average pooling (GAP) is used instead of the class token. To resolve this problem, we propose MPVG, which maximizes the effectiveness of PE in a layer-wise structure with GAP. Specifically, we show that the PE counterbalances the token embedding values at each layer of a layer-wise structure, and that this counterbalancing role is insufficient in the existing layer-wise design; MPVG addresses this by maximizing the effectiveness of PE. Our experiments demonstrate that PE indeed performs a counterbalancing role and that maintaining this counterbalancing directionality has a significant impact on vision transformers. Consequently, MPVG outperforms existing methods across vision transformers on various tasks.
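To make the layer-wise structure concrete, below is a minimal PyTorch sketch of one transformer block that receives the PE at every layer and normalizes the token embedding and the PE with independent LayerNorms, in the spirit of LaPE. The class name `LayerWisePEBlock` and the internal block layout are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LayerWisePEBlock(nn.Module):
    """One ViT block with layer-wise PE delivery: the token embedding and
    the PE are passed through independent LayerNorms before being summed.
    A sketch under assumed standard ViT internals (pre-norm, MHSA + MLP)."""
    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.norm_tokens = nn.LayerNorm(dim)  # LN for token embedding
        self.norm_pe = nn.LayerNorm(dim)      # independent LN for PE
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor, pe: torch.Tensor) -> torch.Tensor:
        # PE is re-injected here, at every layer, instead of once at the input.
        h = self.norm_tokens(x) + self.norm_pe(pe)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        x = x + self.mlp(self.norm2(x))
        return x
```

With GAP, the classifier would then consume `x.mean(dim=1)` over the token axis in place of a class token, which is the setting where the conflicting result above appears.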
Figure: Heatmaps depicting the characteristics of each layer in the original structure and in the layer-wise structure with the GAP method. For the layer-wise structure, heatmaps are shown both with and without the PE in the Last LN. In each heatmap, based on DeiT-Ti, the x-axis represents the embedding dimension of DeiT-Ti (192) and the y-axis the number of tokens (196). In (a) and in the top row (token embedding) of (b), the heatmaps show the average token embedding values at each layer, while the bottom row of (b) shows the heatmaps of the PE. The correlation in (b) is the correlation coefficient between the token embedding and the position embedding.
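A small sketch of how such per-layer diagnostics could be computed is given below. The function name `layer_diagnostics` and the exact averaging protocol (mean over the batch, correlation over the flattened maps) are assumptions made for illustration.

```python
import torch

def layer_diagnostics(token_emb: torch.Tensor, pe: torch.Tensor):
    """Per-layer diagnostics in the style of the figure.
    token_emb: token embeddings at one layer, shape (B, N, D).
    pe: that layer's position embedding, shape (N, D).
    Returns the batch-averaged heatmap and the correlation coefficient
    between the token-embedding map and the PE map."""
    heat = token_emb.mean(dim=0)  # (N, D) heatmap, averaged over the batch
    corr = torch.corrcoef(
        torch.stack([heat.flatten(), pe.flatten()])
    )[0, 1]
    return heat, corr
```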
Figure: Overview of the various methods. (a) ViT. (b) LaPE~\cite{lape}. (c) PVG, an improved layer-wise structure: the token embedding and the PE are added before entering layer 0, and a hierarchical structure delivers the PE to every layer except layer 0. (d) MPVG. The main difference from PVG is that the initial PE is also delivered to the Last LN.
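The MPVG-specific step in (d) can be sketched as follows: the initial PE is delivered to the final LayerNorm (Last LN) through its own independent LN before global average pooling, so the PE keeps counterbalancing the token embedding at the output. The module name `MPVGHead` and its layout are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class MPVGHead(nn.Module):
    """Sketch of the output stage in (d): the Last LN receives both the
    final token embedding and the initial PE, each with its own LN,
    followed by GAP and a linear classifier."""
    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.norm_tokens = nn.LayerNorm(dim)
        self.norm_pe = nn.LayerNorm(dim)  # independent LN for the initial PE
        self.fc = nn.Linear(dim, num_classes)

    def forward(self, x: torch.Tensor, initial_pe: torch.Tensor) -> torch.Tensor:
        # PVG in (c) would use norm_tokens(x) alone; MPVG adds the initial PE.
        x = self.norm_tokens(x) + self.norm_pe(initial_pe)
        return self.fc(x.mean(dim=1))  # GAP over the token axis, then classify
```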
Table: Left, Top-1 accuracy comparison of various methods with DeiT-Ti, DeiT-S, DeiT-B, Swin-Ti, CeiT-Ti, and T2T-ViT-7 on ImageNet-1K. Upper right, Top-1 accuracy comparison of various methods with ViT-Lite and T2T-ViT-7 on CIFAR-100. Middle right, object detection performance on COCO 2017. Lower right, semantic segmentation performance on ADE20K.
@inproceedings{Lee2025maximizing,
  author    = {Lee, Wonjun and Ham, Bumsub and Kim, Suhyun},
  title     = {Maximizing the Position Embedding for Vision Transformers with Global Average Pooling},
  booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence},
  year      = {2025},
}