Published December 2017
Conference Paper | Open Access

Interpretable Machine Teaching via Feature Feedback

Abstract

A student's ability to learn a new concept can be greatly improved by providing them with clear and easy-to-understand explanations from a knowledgeable teacher. However, many existing approaches to machine teaching give the student only a limited form of feedback. For example, when learning visual categories, this feedback may be just the class label of the object in the image. Instead, we propose a teaching framework that provides human learners with instance-level labels as well as explanations in the form of feature-level feedback. For image categorization, our feature-level feedback consists of a highlighted part or region of an image that explains the class label. We run experiments with real human participants and show that learners taught with feature-level feedback perform better at test time than learners taught with existing methods.
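To make the teaching protocol concrete, below is a minimal sketch of how a learner might consume feature-level feedback. This is an illustration only, not the paper's implementation: the linear learner, the teach loop, and the binary mask encoding of a highlighted region are all assumptions introduced here. It demonstrates the idea stated above: the teacher supplies a class label and, optionally, a mask over the features (the "highlighted region"), and the learner confines its update to the highlighted features instead of guessing which part of the input explains the label.

    import numpy as np

    # Illustrative only -- not the paper's implementation. `Learner`, `teach`,
    # and the binary `mask` representation of a highlighted region are
    # hypothetical stand-ins for the framework described in the abstract.

    class Learner:
        def __init__(self, n_features):
            self.w = np.zeros(n_features)

        def predict(self, x):
            return 1 if self.w @ x >= 0 else -1

        def update(self, x, y, mask=None):
            # Perceptron-style update on a mistake. When the teacher also
            # supplies feature-level feedback, `mask` (1 on highlighted
            # features, 0 elsewhere) confines the update to those features,
            # so the learner does not credit irrelevant parts of the example.
            if self.predict(x) != y:
                step = y * x
                if mask is not None:
                    step = step * mask
                self.w += step

    def teach(learner, examples):
        # `examples` is an iterable of (x, label, mask-or-None) triples;
        # mask=None corresponds to label-only teaching.
        for x, y, mask in examples:
            learner.update(x, y, mask)

    # Toy usage: only feature 0 is predictive, and the teacher highlights it.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 5))
    y = np.where(X[:, 0] > 0, 1, -1)
    mask = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # feature-level feedback
    learner = Learner(5)
    teach(learner, [(x, yi, mask) for x, yi in zip(X, y)])

In the paper's setting the feedback is a highlighted region of an image rather than a literal feature mask, but the mechanism sketched here, directing the learner's credit assignment to the features the teacher highlighted, is the same.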

Additional Information

The authors thank Google for supporting the Visipedia project, and gratefully acknowledge kind donations from Northrop Grumman, Bloomberg, and AWS Research Credits. Yuxin Chen was supported in part by a Swiss NSF Mobility Postdoctoral Fellowship.

Attached Files

Published - nips17-teaching_paper-5.pdf (1.4 MB)


Additional details

Identifiers

Eprint ID: 87329
Resolver ID: CaltechAUTHORS:20180622-113758617

Funding

Northrop Grumman Corporation
Bloomberg
Amazon Web Services
Swiss National Science Foundation (SNSF)

Dates

Created: 2018-06-23 (from EPrint's datestamp field)
Updated: 2020-03-09 (from EPrint's last_modified field)