Interpreting Expert Annotation Differences in Animal Behavior
Abstract
Hand-annotated data can vary due to factors such as subjective differences, intra-rater variability, and differing annotator expertise. We study annotations from different experts who labelled the same behavior classes on a set of animal behavior videos, and observe variation in annotation styles. We propose a new method using program synthesis to help interpret annotation differences for behavior analysis. Our model selects relevant trajectory features and learns a temporal filter as part of a program, which corresponds to the estimated importance an annotator places on that feature at each timestamp. Our experiments on a dataset from behavioral neuroscience demonstrate that, compared to baseline approaches, our method is more accurate at capturing annotator labels and learns interpretable temporal filters. We believe that our method can lead to greater reproducibility of behavior annotations used in scientific studies. We plan to release our code.
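The abstract describes learning a temporal filter over a selected trajectory feature, where the filter weights reflect how much an annotator weighs that feature at each time offset. A minimal sketch of this idea is below; it is a hypothetical illustration, not the authors' released code, and the feature values, filter length, and normalization choice are assumptions.

```python
import numpy as np

def temporal_filter_response(feature, weights):
    """Apply a learned temporal filter to a 1-D per-frame trajectory feature.

    The raw weights are softmax-normalized so they can be read as the
    relative importance placed on the feature at each time offset.
    """
    w = np.exp(weights) / np.exp(weights).sum()  # normalize to sum to 1
    return np.convolve(feature, w, mode="same")  # same length as input

# Toy usage: a hypothetical per-frame feature (e.g., distance between two
# animals) over 10 frames, filtered with an untrained 3-frame filter.
feature = np.array([0.1, 0.2, 0.5, 0.9, 1.0, 0.8, 0.4, 0.2, 0.1, 0.1])
weights = np.zeros(3)  # uniform before training
smoothed = temporal_filter_response(feature, weights)
print(smoothed.shape)  # one filtered value per frame
```

In the paper's setting the filter weights would be fit as part of a synthesized program so that the filtered feature predicts a given annotator's labels; here they are simply initialized to uniform.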
Attached Files
Submitted - 2106.06114.pdf
Files
- Size: 3.9 MB
- md5: f414d0bfaca5b386c88a7d66019db3bf
Additional details
- Eprint ID: 113576
- Resolver ID: CaltechAUTHORS:20220224-200758198
- Created: 2022-02-25 (from EPrint's datestamp field)
- Updated: 2023-06-02 (from EPrint's last_modified field)