Articulated Pose Estimation Using Discriminative Armlet Classifiers
Abstract
We propose a novel approach for human pose estimation in real-world cluttered scenes, focusing on the challenging problem of predicting the pose of both arms for each person in the image. For this purpose, we build on the notion of poselets [4] and train highly discriminative classifiers to differentiate among arm configurations, which we call armlets. We propose a rich representation which, in addition to standard HOG features, integrates information from strong contours, skin color, and contextual cues in a principled manner. Unlike existing methods, we evaluate our approach on a large subset of images from the PASCAL VOC detection dataset, where critical visual phenomena such as occlusion, truncation, multiple instances, and clutter are the norm. On this new pose estimation dataset, our approach outperforms the state-of-the-art technique of Yang and Ramanan [26], improving PCP accuracy on the arm keypoint prediction task from 29.0% to 37.5%.
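For reference, the PCP (Percentage of Correct Parts) metric cited above counts a limb as correctly localized when both of its predicted endpoints fall within a fraction of the ground-truth limb length of their true positions. The sketch below is a minimal Python illustration of the strict variant with the commonly used 0.5 threshold; the function name `pcp_score`, the keypoint indexing, and the example coordinates are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def pcp_score(pred, gt, limbs, thresh=0.5):
    """Percentage of Correct Parts (strict variant, an assumption here).

    A limb is counted as correct when BOTH of its predicted endpoints
    lie within `thresh` * (ground-truth limb length) of the true endpoints.

    pred, gt : (num_keypoints, 2) arrays of (x, y) coordinates.
    limbs    : list of (i, j) keypoint-index pairs, one per limb.
    """
    correct = 0
    for i, j in limbs:
        limb_len = np.linalg.norm(gt[i] - gt[j])
        ok_i = np.linalg.norm(pred[i] - gt[i]) <= thresh * limb_len
        ok_j = np.linalg.norm(pred[j] - gt[j]) <= thresh * limb_len
        correct += ok_i and ok_j
    return correct / len(limbs)

# Hypothetical example: one arm with keypoints
# 0: shoulder, 1: elbow, 2: wrist.
gt = np.array([[100.0, 100.0], [100.0, 150.0], [100.0, 200.0]])
pred = np.array([[102.0, 101.0], [110.0, 155.0], [140.0, 210.0]])
arms = [(0, 1), (1, 2)]  # upper arm, lower arm
print(pcp_score(pred, gt, arms))  # -> 0.5 (the wrist misses, so the lower arm fails)
```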
Additional Information
This research was supported by the Intel Visual Computing Center and by ONR SMARTS MURI N00014-09-1-1051. We also thank R1 for thoughtful comments, as well as Yi Yang and Deva Ramanan for their help.
Attached Files
Name | Size | MD5
---|---|---
Accepted Version - GkioxariCVPR2013.pdf | 5.4 MB | 7d9503ec08f57279e060f6bf71ddf2bc
Additional details
- Eprint ID
- 118364
- Resolver ID
- CaltechAUTHORS:20221215-789688000.2
- Funders
- Intel Visual Computing Center
- Office of Naval Research (ONR) N00014-09-1-1051
- Created
- 2022-12-17 (from EPrint's datestamp field)
- Updated
- 2022-12-17 (from EPrint's last_modified field)