FeatER: An Efficient Network for Human Reconstruction via Feature Map-Based TransformER

1 Center for Research in Computer Vision, University of Central Florida
2 OPPO Seattle Research Center, USA
3 Westlake University
CVPR 2023

Abstract

Recently, vision transformers have shown great success in a set of human reconstruction tasks such as 2D/3D human pose estimation (2D/3D HPE) and human mesh reconstruction (HMR). In these tasks, feature map representations of the human structural information are typically first extracted from the image by a CNN (such as HRNet), and then further processed by a transformer to predict the heatmaps for HPE or HMR. However, existing transformer architectures cannot process these feature map inputs directly, forcing an unnatural flattening of the location-sensitive human structural information. Furthermore, much of the performance gain in recent HPE and HMR methods has come at the cost of ever-increasing computation and memory requirements. To address these problems simultaneously, we propose FeatER, a novel transformer design that preserves the inherent structure of feature map representations when modeling attention, while reducing memory and computational costs. Taking advantage of FeatER, we build an efficient network for a set of human reconstruction tasks including 2D HPE, 3D HPE, and HMR. A feature map reconstruction module is applied to improve the quality of the estimated human pose and mesh. Extensive experiments demonstrate the effectiveness of FeatER on various human pose and mesh datasets. For instance, FeatER outperforms the SOTA method MeshGraphormer on the Human3.6M and 3DPW datasets while requiring only 5% of the parameters and 16% of the MACs.
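To make the core idea concrete, below is a minimal, hypothetical PyTorch sketch of one way to model attention over a (B, C, H, W) feature map without flattening it into a single long sequence: attention is applied along the height axis and then the width axis, so the 2D spatial layout is preserved and the sequence lengths stay short. The class name, the axial decomposition, and all dimensions are illustrative assumptions and do not reproduce FeatER's actual block design.

    # Sketch only: attention along the spatial axes of a feature map,
    # assumed layout (B, C, H, W). Not the paper's implementation.
    import torch
    import torch.nn as nn


    class AxialFeatureMapAttention(nn.Module):
        """Applies multi-head self-attention along H and then W, so the 2D
        layout of the feature map is never collapsed into one sequence."""

        def __init__(self, channels: int, num_heads: int = 4):
            super().__init__()
            self.attn_h = nn.MultiheadAttention(channels, num_heads, batch_first=True)
            self.attn_w = nn.MultiheadAttention(channels, num_heads, batch_first=True)
            self.norm1 = nn.LayerNorm(channels)
            self.norm2 = nn.LayerNorm(channels)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, h, w = x.shape
            # Attend along the height axis: each column is a sequence of length H.
            t = x.permute(0, 3, 2, 1).reshape(b * w, h, c)      # (B*W, H, C)
            y = self.norm1(t)
            t = t + self.attn_h(y, y, y, need_weights=False)[0]
            x = t.reshape(b, w, h, c).permute(0, 3, 2, 1)        # back to (B, C, H, W)
            # Attend along the width axis: each row is a sequence of length W.
            t = x.permute(0, 2, 3, 1).reshape(b * h, w, c)       # (B*H, W, C)
            y = self.norm2(t)
            t = t + self.attn_w(y, y, y, need_weights=False)[0]
            return t.reshape(b, h, w, c).permute(0, 3, 1, 2)     # (B, C, H, W)


    if __name__ == "__main__":
        # Example: a pose-style feature map, e.g. 32 channels at 64x48 resolution.
        feat = torch.randn(2, 32, 64, 48)
        out = AxialFeatureMapAttention(channels=32)(feat)
        print(out.shape)  # torch.Size([2, 32, 64, 48])

Compared with flattening H*W tokens into one sequence, attending over the two axes separately keeps each attention call short (length H or W), which is one generic way to trade full spatial attention for lower memory and compute.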


Overall framework




Our proposed FeatER blocks





Results on the image classification task




Results of human mesh recovery




Mesh visualization



Visualization of pose and mesh



More qualitative results



Qualitative comparison with SOTA methods





Video


Bibtex


@InProceedings{zheng2023feater,
    title={FeatER: An Efficient Network for Human Reconstruction via Feature Map-Based TransformER},
    author={Zheng, Ce and Mendieta, Matias and Yang, Taojiannan and Qi, Guo-Jun and Chen, Chen},
    booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2023}
}

This webpage template was adapted from here.