Coordinated Multi-Agent Imitation Learning
- Creators
- Le, Hoang M.
- Yue, Yisong
- Carr, Peter
- Lucey, Patrick
Abstract
We study the problem of imitation learning from demonstrations of multiple coordinating agents. One key challenge in this setting is that learning a good model of coordination can be difficult, since coordination is often implicit in the demonstrations and must be inferred as a latent variable. We propose a joint approach that simultaneously learns a latent coordination model along with the individual policies. In particular, our method integrates unsupervised structure learning with conventional imitation learning. We illustrate the power of our approach on a difficult problem of learning multiple policies for fine-grained behavior modeling in team sports, where different players occupy different roles in the coordinated team strategy. We show that having a coordination model to infer the roles of players yields substantially improved imitation loss compared to conventional baselines.
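The joint learning described above — inferring a latent agent-to-role assignment while fitting per-role policies — can be illustrated with a toy alternating-optimization sketch. This is only a minimal illustration, not the paper's actual method: the data, linear "policies", and use of Hungarian matching with a least-squares refit are all invented here for demonstration purposes.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

# Toy demonstrations: 2 agents whose index order is shuffled each frame,
# but whose underlying roles follow fixed linear behaviors.
# Role 0 acts as a = 2*s; role 1 acts as a = -s. (Invented for illustration.)
T, K = 200, 2
states = rng.normal(size=(T, K))
true_roles = np.array([rng.permutation(K) for _ in range(T)])  # agent -> role
coefs_true = np.array([2.0, -1.0])
actions = np.take(coefs_true, true_roles) * states

# Alternate between role assignment (latent structure) and policy fitting.
coefs = rng.normal(size=K)  # per-role linear policy coefficients
for _ in range(10):
    # Assignment step: match each agent to the role whose current policy
    # best predicts its action, via min-cost bipartite matching.
    assign = np.empty((T, K), dtype=int)
    for t in range(T):
        cost = (coefs[None, :] * states[t][:, None]
                - actions[t][:, None]) ** 2  # cost[agent, role]
        rows, cols = linear_sum_assignment(cost)
        assign[t, rows] = cols
    # Policy step: refit each role's policy by least squares
    # on the data currently assigned to it.
    for r in range(K):
        s, a = states[assign == r], actions[assign == r]
        coefs[r] = (s @ a) / (s @ s)

print(np.round(np.sort(coefs), 2))  # recovered per-role coefficients
```

With noise-free toy data the alternation converges in a couple of iterations, recovering the two role behaviors up to a permutation of role labels; the full method in the paper handles real tracking data and richer policy classes.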
Additional Information
© 2017 the authors. This work was funded in part by NSF Awards #1564330 & #1637598, JPL PDF IAMS100224, a Bloomberg Data Science Research Grant, and a gift from Northrop Grumman.
Attached Files
Accepted Version - 1703.03121.pdf
Accepted Version - icml2017_coordinated_long.pdf
Published - p1995-le.pdf
Additional details
- Eprint ID
- 80920
- DOI
- 10.48550/arXiv.1703.03121
- Resolver ID
- CaltechAUTHORS:20170829-143913716
- arXiv
- arXiv:1703.03121
- IIS-1564330
- NSF
- CCF-1637598
- NSF
- IAMS100224
- JPL
- Bloomberg Data Science
- Northrop Grumman
- Created
- 2017-08-30 (from EPrint's datestamp field)
- Updated
- 2023-06-02 (from EPrint's last_modified field)