Published December 21, 2022

The MABe22 Benchmarks for Representation Learning of Multi-Agent Behavior


Real-world behavior is often shaped by complex interactions between multiple agents. To scalably study multi-agent behavior, advances in unsupervised and self-supervised learning have enabled a variety of behavioral representations to be learned from trajectory data. To date, no unified set of benchmarks exists for comparing methods quantitatively and systematically across a broad range of behavior analysis settings. We aim to address this by introducing a large-scale, multi-agent trajectory dataset from real-world behavioral neuroscience experiments that covers a range of behavior analysis tasks. Our dataset consists of trajectory data from common model organisms, with 9.6 million frames of mouse data and 4.4 million frames of fly data, in a variety of experimental settings, such as different strains, lengths of interaction, and optogenetic stimulation. A subset of the frames also includes expert-annotated behavior labels. Improvements on our dataset correspond to behavioral representations that work across multiple organisms and are able to capture differences relevant to common behavior analysis tasks.
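To make the dataset description concrete, the following is a minimal sketch of how multi-agent pose trajectories of this kind might be represented and used to derive a simple baseline feature. The array shape and all names here are illustrative assumptions, not the actual MABe22 data format or loading API.

```python
import numpy as np

# Assumed (hypothetical) layout for multi-agent pose trajectories:
# (frames, agents, keypoints, 2) for 2-D keypoint coordinates per frame.
# Sizes below are illustrative only, not the real dataset dimensions.
n_frames, n_agents, n_keypoints = 1000, 3, 7
rng = np.random.default_rng(0)
trajectories = rng.normal(size=(n_frames, n_agents, n_keypoints, 2))

# A simple hand-crafted feature often used as a baseline in behavior
# analysis: pairwise distances between agent centroids in each frame.
centroids = trajectories.mean(axis=2)                  # (frames, agents, 2)
diffs = centroids[:, :, None, :] - centroids[:, None, :, :]
pairwise_dist = np.linalg.norm(diffs, axis=-1)         # (frames, agents, agents)
print(pairwise_dist.shape)  # (1000, 3, 3)
```

Learned representations evaluated on a benchmark like this would replace such hand-crafted features with embeddings produced by unsupervised or self-supervised models trained on the trajectory data.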

Additional Information

This work was generously supported by the Simons Collaboration on the Global Brain grant 543025 (to PP), NIH Award #R00MH117264 (to AK), NSF Award #1918839 (to YY), NSERC Award #PGSD3-532647-2019 (to JJS), as well as a gift from Charles and Lily Trimble (to PP). We would like to thank Tom Sproule for mouse breeding and dataset collection. The mouse dataset was supported by the National Institute of Health DA041668 (NIDA), DA048634 (NIDA, and Simons Foundation SFARI Director's Award) (to VK). We also greatly appreciate Google, Amazon, HHMI, and the Simons Foundation for sponsoring the MABe 2022 Challenge and Workshop.
