DeepWild: Application of the pose estimation tool DeepLabCut for behaviour tracking in wild chimpanzees and bonobos
Abstract
1. Studying animal behaviour allows us to understand how different species and individuals navigate their physical and social worlds. Video coding of behaviour is considered a gold standard: it allows researchers to extract rich, nuanced behavioural datasets, to validate their reliability, and for research to be replicated. However, in practice, videos are only useful if data can be efficiently extracted. Manually locating relevant footage in 10,000s of hours is extremely time-consuming, as is the manual coding of animal behaviour, which requires extensive training to achieve reliability.
2. Machine learning approaches are used to automate the recognition of patterns within data, considerably reducing the time taken to extract data and improving reliability. However, tracking visual information to recognise nuanced behaviour is a challenging problem and, to date, the tracking and pose-estimation tools used to detect behaviour have typically been applied where the visual environment is highly controlled.
3. Animal behaviour researchers are interested in applying these tools to the study of wild animals, but it is not clear to what extent doing so is currently possible, or which tools are most suited to particular problems. To address this gap in knowledge, we describe the new tools available in this rapidly evolving landscape, suggest guidance for tool selection, provide a worked demonstration of the use of machine learning to track movement in video data of wild apes, and make our base models available for use.
4. We use a pose-estimation tool, DeepLabCut, to demonstrate successful training of two pilot models on an extremely challenging pose-estimation and tracking problem: multi-animal tracking of wild, forest-living chimpanzees and bonobos across behavioural contexts from hand-held video footage.
5. With DeepWild we show that, without requiring specific expertise in machine learning, pose estimation and movement tracking of free-living wild primates in visually complex environments is an attainable goal for behavioural researchers.
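DeepLabCut exposes the workflow summarised in point 4 as a small Python API. The sketch below is a minimal, hedged illustration of applying a trained model of this kind to new footage; it uses only documented DeepLabCut calls, and the config and video paths are hypothetical placeholders rather than files shipped with this record (the actual DeepWild model configs are in the GitHub repository listed under Data Availability).

```python
# Hedged sketch: applying a trained DeepLabCut model to new hand-held
# footage. Paths are hypothetical placeholders, not shipped files.
import deeplabcut

config_path = "DeepWild/chimp_model/config.yaml"  # hypothetical model config
videos = ["footage/wild_chimp_clip.mp4"]          # hypothetical video clip

# Run the trained network over the video; predicted body-part
# coordinates are saved alongside the video (HDF5, plus CSV here).
deeplabcut.analyze_videos(config_path, videos, videotype="mp4",
                          save_as_csv=True)

# Overlay the predicted keypoints on the footage for visual inspection.
deeplabcut.create_labeled_video(config_path, videos)
```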
Additional Information
© 2023 The Authors. Journal of Animal Ecology published by John Wiley & Sons Ltd on behalf of the British Ecological Society. This is an open access article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License, which permits use and distribution in any medium, provided the original work is properly cited, the use is non-commercial and no modifications or adaptations are made.

Acknowledgements

We thank the Guest Editors, Dr Thibaud Gruber and Dr Erica van de Waal, for their invitation to contribute this paper, and we thank the journal editor and two anonymous reviewers for their constructive guidance on how to improve it. We thank the staff and communities at the field stations where we collected our data in Uganda, the Democratic Republic of Congo and Guinea. In particular, we thank the staff of the Budongo Conservation Field Station for the years of support that have facilitated video data collection across projects from 2004 to 2022. We thank Professors Hashimoto and Furuichi for permission to collect video data at the Kalinzu and Wamba field sites. We thank Dr Aly Gaspard Soumah and the Institut de Recherche Environnementale de Bossou (IREB) for permission to collect data at the Bossou field site, which has run continuously through collaboration between scholars of the Kyoto University Primate Research Institute, led by Yukimaru Sugiyama, and Guinean scholars including Jeremie Koman, Soh Pletah Bonimy, Bakary Coulibary, Tamba Tagbino, Makan Kourouma, Mamadou Diakite, Cécé Kolié, Iba Conde and Sekou Moussa Keita. We also thank the Guinean authorities who provided permission for the long-term research, including the Ministère de l'Enseignement Supérieur et de la Recherche Scientifique and the Direction Générale de la Recherche Scientifique et de l'Innovation Technologique. We thank Alexander Mielke for his help with the R code for the figures. All research projects within Uganda were conducted with permission from the Uganda Wildlife Authority and the Ugandan National Council for Science and Technology. All research projects were conducted under ethical permissions from the Animal Welfare and Ethics Committee of the University of St Andrews. We thank the developers of the programs we evaluated and the machine learning communities for their work and their patience in answering many of our questions, as well as our lab group for their constructive discussions. This project received funding from the European Union's 8th Framework Programme, Horizon 2020 (grant agreement number: 802719), and the St Andrews Restarting Research Funding Scheme (2020).

Data Availability Statement

The DeepWild models, as well as all data and code used in this paper, are available for download from our GitHub repository at https://github.com/Wild-Minds/DeepWild, which is archived in Zenodo at https://doi.org/10.5281/zenodo.7414432 (Wiltshire et al., 2022). Information on use of data from the Great Ape Video Database is available at https://doi.org/10.5281/zenodo.5600472 (Hobaiter et al., 2021). An online open-access interface for marking additional frames is available at https://contrib.deeplabcut.org/label. Frames received through this process will be used to regularly update the base model in our GitHub repository and will be shared with the DeepLabCut Model Zoo.

The authors declare they have no conflicts of interest.
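The frame-contribution process described in the Data Availability Statement feeds a standard DeepLabCut labelling-and-retraining cycle. The following is a minimal sketch of that cycle, assuming a locally downloaded model config; the path is a hypothetical placeholder and the training settings will differ per project.

```python
# Hedged sketch: extending a base model with newly labelled frames via
# DeepLabCut's standard retraining cycle. The config path is a
# hypothetical placeholder for a locally downloaded model.
import deeplabcut

config_path = "DeepWild/bonobo_model/config.yaml"  # hypothetical path

# Sample new frames from added videos and label them in the DLC GUI.
deeplabcut.extract_frames(config_path, mode="automatic", algo="kmeans")
deeplabcut.label_frames(config_path)

# Rebuild the training dataset with the new labels, retrain, and
# check performance on held-out frames.
deeplabcut.create_training_dataset(config_path)
deeplabcut.train_network(config_path, maxiters=200000)
deeplabcut.evaluate_network(config_path, plotting=True)
```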
Files (3.2 MB)

| Name | md5 | Size |
|---|---|---|
| Supplemental Material - jane13932-sup-0001-tables1.docx | 0fb0e781721626f4f3af49f3879e6c0e | 17.0 kB |
| Supplemental Material - jane13932-sup-0002-tables2.docx | 1669455b04695b7b880bdc2676b04ca0 | 159.2 kB |
| Supplemental Material - jane13932-sup-0003-tables3.docx | 77d1c5f98f99030d8f430adda60ca3ff | 12.9 kB |
| Supplemental Material - jane13932-sup-0004-tables4.docx | f7dd49b8c83ea6ce3cb62180b75ec3f7 | 9.8 kB |
| Supplemental Material - jane13932-sup-0005-videos1.docx | 822f357a087cf8a749ee62929043a662 | 13.1 kB |
| Journal_of_Animal_Ecology_-_2023_-_Wiltshire_-_DeepWild__Application_of_the_pose_estimation_tool_DeepLabCut_for_behaviour.pdf | fe32c9c2ca49ea54c716601e2631c7da | 2.9 MB |
Additional details
Identifiers
- Eprint ID: 121633
- Resolver ID: CaltechAUTHORS:20230530-441768000.73
Related works
- Describes: 10.5281/zenodo.7414432 (DOI)
Funding
- European Research Council (ERC): 802719
Dates
- Created: 2023-07-14 (from EPrint's datestamp field)
- Updated: 2023-07-14 (from EPrint's last_modified field)