Hu, Peiyun and Lipton, Zachary C. and Anandkumar, Anima and Ramanan, Deva (2019) Active Learning with Partial Feedback. In: 7th International Conference on Learning Representations (ICLR 2019), 6-9 May 2019, New Orleans, LA. https://resolver.caltech.edu/CaltechAUTHORS:20190327-085746172
PDF - Published Version. See Usage Policy. 644kB
PDF - Submitted Version. See Usage Policy. 783kB
Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20190327-085746172
Abstract
While many active learning papers assume that the learner can simply ask for a label and receive it, real annotation often presents a mismatch between the form of a label (say, one among many classes) and the form of an annotation (typically yes/no binary feedback). To annotate example corpora for multiclass classification, we might need to ask multiple yes/no questions, exploiting a label hierarchy if one is available. To address this more realistic setting, we propose active learning with partial feedback (ALPF), where the learner must actively choose both which example to label and which binary question to ask. At each step, the learner selects an example, asking if it belongs to a chosen (possibly composite) class. Each answer eliminates some classes, leaving the learner with a partial label. The learner may then either ask more questions about the same example (until an exact label is uncovered) or move on immediately, leaving the first example partially labeled. Active learning with partial labels requires (i) a sampling strategy to choose (example, class) pairs, and (ii) learning from partial labels between rounds. Experiments on Tiny ImageNet demonstrate that our most effective method improves top-1 classification accuracy by 26% (relative) over i.i.d. baselines and standard active learners, given 30% of the annotation budget that would be required (naively) to annotate the dataset. Moreover, ALPF learners fully annotate Tiny ImageNet at 42% lower cost. Surprisingly, we observe that accounting for per-example annotation costs can alter the conventional wisdom that active learners should solicit labels for hard examples.
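The query protocol in the abstract is compact enough to sketch. The Python snippet below is a minimal, illustrative rendering of one ALPF annotation round using an expected-information-gain acquisition score; the function names, the `oracle` callback, and the data layout are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a (sub)distribution, ignoring zero entries."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def expected_information_gain(probs, mask):
    """Expected entropy reduction from asking 'is the label in this
    composite class?'. probs: model probabilities renormalized over the
    example's remaining candidate classes; mask: membership of each
    candidate class in the composite class."""
    p_yes = float(probs[mask].sum())
    p_no = 1.0 - p_yes
    h_yes = entropy(probs[mask] / p_yes) if p_yes > 0 else 0.0
    h_no = entropy(probs[~mask] / p_no) if p_no > 0 else 0.0
    return entropy(probs) - (p_yes * h_yes + p_no * h_no)

def alpf_round(model_probs, candidate_sets, composites, oracle):
    """One round: choose the (example, composite class) query with the
    highest expected information gain, ask the binary question, and
    shrink that example's partial label accordingly."""
    best = None
    for i, cands in enumerate(candidate_sets):
        if len(cands) == 1:               # already exactly labeled
            continue
        probs = model_probs[i][cands]
        probs = probs / probs.sum()       # renormalize over candidates
        for name, members in composites.items():
            mask = np.array([c in members for c in cands])
            if mask.all() or not mask.any():
                continue                  # answer is already determined
            gain = expected_information_gain(probs, mask)
            if best is None or gain > best[0]:
                best = (gain, i, name, mask)
    if best is None:                      # every example exactly labeled
        return None
    _, i, name, mask = best
    keep = mask if oracle(i, name) else ~mask   # yes/no partial feedback
    candidate_sets[i] = [c for c, k in zip(candidate_sets[i], keep) if k]
    return i, name, candidate_sets[i]

def partial_label_loss(probs, cands):
    """Learning from partial labels between rounds: negative log of the
    total probability mass on the remaining candidate classes."""
    return float(-np.log(probs[cands].sum()))
```

In the paper's setting, the composite classes come from a label hierarchy (e.g., WordNet-style groupings over the Tiny ImageNet classes), and learning from partial labels amounts to maximizing the probability mass the model places on each example's remaining candidate set, as in `partial_label_loss` above.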
| Item Type: | Conference or Workshop Item (Paper) |
| --- | --- |
| Additional Information: | This work was done while the author was an intern at Amazon AI. |
| DOI: | 10.48550/arXiv.1802.07427 |
| Record Number: | CaltechAUTHORS:20190327-085746172 |
| Persistent URL: | https://resolver.caltech.edu/CaltechAUTHORS:20190327-085746172 |
| Usage Policy: | No commercial reproduction, distribution, display or performance rights in this work are provided. |
| ID Code: | 94173 |
| Collection: | CaltechAUTHORS |
| Deposited By: | George Porter |
| Deposited On: | 28 Mar 2019 22:19 |
| Last Modified: | 02 Jun 2023 00:38 |