CaltechAUTHORS
  A Caltech Library Service

Deep learning-based computer vision to recognize and classify suturing gestures in robot-assisted surgery

Luongo, Francisco and Hakim, Ryan and Nguyen, Jessica H. and Anandkumar, Animashree and Hung, Andrew J. (2020) Deep learning-based computer vision to recognize and classify suturing gestures in robot-assisted surgery. Surgery. ISSN 0039-6060. (In Press) https://resolver.caltech.edu/CaltechAUTHORS:20200928-140721280

Full text is not posted in this repository. Consult Related URLs below.

Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20200928-140721280

Abstract

Background: Our previous work classified a taxonomy of needle driving gestures during the vesicourethral anastomosis of robotic radical prostatectomy in association with tissue tears and patient outcomes. Herein, we train deep learning-based computer vision to automate the identification and classification of suturing gestures for needle driving attempts.

Methods: Two independent raters manually annotated live suturing video clips to label timepoints and gestures. Identification (2,395 videos) and classification (511 videos) datasets were compiled to train computer vision models to produce 2- and 5-class label predictions, respectively. Networks were trained on inputs of raw red/green/blue pixels as well as optical flow for each frame. We explore the effect of different recurrent models (long short-term memory versus convolutional long short-term memory). All models were trained on 80/20 train/test splits.

Results: We observe that all models are able to reliably predict both the presence of a gesture (identification, area under the curve: 0.88) and the type of gesture (classification, area under the curve: 0.87) at significantly above chance levels. For both the gesture identification and classification datasets, we observed no effect of recurrent classification model choice on performance.

Conclusion: Our results demonstrate computer vision's ability to recognize features that not only can identify the action of suturing but also distinguish between different classifications of suturing gestures. This demonstrates the potential to utilize deep learning computer vision toward future automation of surgical skill assessment.
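The pipeline the abstract describes (per-frame appearance and motion features fed to a recurrent model that emits a gesture-presence probability) can be sketched in miniature. The code below is an illustrative NumPy toy, not the authors' actual architecture: the feature extractor, a mean frame difference standing in for true optical flow, and the single hand-rolled LSTM cell with random (untrained) weights are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def frame_features(rgb_clip):
    """Per-frame features: mean RGB color plus a crude motion proxy.
    (Mean absolute frame difference stands in for optical flow here.)"""
    T = rgb_clip.shape[0]
    color = rgb_clip.reshape(T, -1, 3).mean(axis=1)            # (T, 3)
    diff = np.abs(np.diff(rgb_clip, axis=0)).mean(axis=(1, 2, 3))
    motion = np.concatenate([[0.0], diff])[:, None]            # (T, 1)
    return np.concatenate([color, motion], axis=1)             # (T, 4)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTM:
    """Minimal LSTM cell plus a sigmoid head for the binary
    gesture-identification task (random weights, untrained)."""
    def __init__(self, d_in, d_hid):
        s = 1.0 / np.sqrt(d_in + d_hid)
        self.W = rng.uniform(-s, s, (4 * d_hid, d_in + d_hid))
        self.b = np.zeros(4 * d_hid)
        self.w_out = rng.uniform(-s, s, d_hid)
        self.d_hid = d_hid

    def forward(self, x_seq):
        h = np.zeros(self.d_hid)
        c = np.zeros(self.d_hid)
        for x in x_seq:                          # one step per video frame
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f, g, o = np.split(z, 4)          # input/forget/cell/output gates
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
            h = sigmoid(o) * np.tanh(c)
        return sigmoid(self.w_out @ h)           # P(gesture present)

clip = rng.random((16, 32, 32, 3))               # 16 frames of 32x32 RGB
model = TinyLSTM(d_in=4, d_hid=8)
p = float(model.forward(frame_features(clip)))
```

A 5-class classifier for gesture type would replace the scalar sigmoid head with a softmax over five logits; the convolutional LSTM variant the study compares keeps the gating structure but replaces the matrix multiplies with convolutions over spatial feature maps.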


Item Type: Article
Related URLs:
URL | URL Type | Description
https://doi.org/10.1016/j.surg.2020.08.016 | DOI | Article
Additional Information: © 2020 Elsevier Inc. Accepted 6 August 2020; available online 26 September 2020. Conflict of interest/disclosure: Andrew J. Hung has financial disclosures with Quantgene, Inc (consultant), Mimic Technologies, Inc (consultant), and Johnson & Johnson (consultant). This study is supported in part by the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health under award number K23EB026493.
Funders:
Funding Agency | Grant Number
NIH | K23EB026493
Record Number: CaltechAUTHORS:20200928-140721280
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20200928-140721280
Official Citation: Francisco Luongo, Ryan Hakim, Jessica H. Nguyen, Animashree Anandkumar, Andrew J. Hung, Deep learning-based computer vision to recognize and classify suturing gestures in robot-assisted surgery, Surgery, 2020, ISSN 0039-6060, https://doi.org/10.1016/j.surg.2020.08.016. (http://www.sciencedirect.com/science/article/pii/S0039606020305481)
Usage Policy: No commercial reproduction, distribution, display, or performance rights in this work are provided.
ID Code: 105591
Collection: CaltechAUTHORS
Deposited By: Tony Diaz
Deposited On: 28 Sep 2020 21:15
Last Modified: 28 Sep 2020 21:15
