Detecting and Recognizing Human-Object Interactions
Abstract
To understand the visual world, a machine must not only recognize individual object instances but also how they interact. Humans are often at the center of such interactions, and detecting human-object interactions is an important practical and scientific problem. In this paper, we address the task of detecting ⟨human, verb, object⟩ triplets in challenging everyday photos. We propose a novel model that is driven by a human-centric approach. Our hypothesis is that the appearance of a person -- their pose, clothing, action -- is a powerful cue for localizing the objects they are interacting with. To exploit this cue, our model learns to predict an action-specific density over target object locations based on the appearance of a detected person. Our model also jointly learns to detect people and objects, and by fusing these predictions it efficiently infers interaction triplets in a clean, jointly trained end-to-end system we call InteractNet. We validate our approach on the recently introduced Verbs in COCO (V-COCO) and HICO-DET datasets, where we show quantitatively compelling results.
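The fusion described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes the action-specific target density is an isotropic Gaussian around a location predicted from the person's appearance, and all function names and parameters here are hypothetical.

```python
import math

def target_density(obj_center, mu, sigma=1.0):
    """Hypothetical action-specific density over the target object's
    location: an isotropic Gaussian centered at mu, the location
    predicted from the detected person's appearance (assumption)."""
    dx = obj_center[0] - mu[0]
    dy = obj_center[1] - mu[1]
    return math.exp(-(dx * dx + dy * dy) / (2.0 * sigma ** 2))

def triplet_score(s_human, s_object, s_action, obj_center, mu, sigma=1.0):
    """Fuse the human detection score, object detection score, the
    action score for the human, and the target-location density into
    a single interaction-triplet score (illustrative sketch)."""
    return s_human * s_object * s_action * target_density(obj_center, mu, sigma)
```

In this sketch, a candidate object whose center lies near the predicted target location receives a higher triplet score than an equally confident detection far from it, which is how the person's appearance cue helps localize the interacted-with object.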
Additional details
- Eprint ID: 118434
- Resolver ID: CaltechAUTHORS:20221219-204829682
- arXiv: arXiv:1704.07333
- URL: https://resolver.caltech.edu/CaltechAUTHORS:20221215-789762000.15
- Created: 2022-12-20 (from EPrint's datestamp field)
- Updated: 2022-12-20 (from EPrint's last_modified field)