
Quantification of Robotic Surgeries with Vision-Based Deep Learning

Kiyasseh, Dani and Ma, Runzhuo and Haque, Taseen F. and Nguyen, Jessica and Wagner, Christian and Anandkumar, Animashree and Hung, Andrew J. (2022) Quantification of Robotic Surgeries with Vision-Based Deep Learning. (Unpublished) https://resolver.caltech.edu/CaltechAUTHORS:20220714-212511473

PDF (Submitted Version, Creative Commons Attribution, 14MB)


Abstract

Surgery is a high-stakes domain where surgeons must navigate critical anatomical structures and actively avoid potential complications while achieving the main task at hand. Such surgical activity has been shown to affect long-term patient outcomes. To better understand this relationship, whose mechanics remain unknown for the majority of surgical procedures, we hypothesize that the core elements of surgery must first be quantified in a reliable, objective, and scalable manner. We believe this is a prerequisite for the provision of surgical feedback and modulation of surgeon performance in pursuit of improved patient outcomes. To holistically quantify surgeries, we propose a unified deep learning framework, named Roboformer, which operates exclusively on videos recorded during surgery to independently achieve multiple tasks: surgical phase recognition (the what of surgery), and gesture classification and skills assessment (the how of surgery). We validated our framework on four video-based datasets of two commonly encountered types of steps (dissection and suturing) within minimally invasive robotic surgeries. We demonstrated that our framework can generalize well to unseen videos, surgeons, medical centres, and surgical procedures. We also found that our framework, which naturally lends itself to explainable findings, identified relevant information when achieving a particular task. These findings are likely to instill in surgeons greater confidence in our framework's behaviour, increasing the likelihood of clinical adoption and thus paving the way for more targeted surgical feedback.
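The record notes (under Additional Information) that the models were built in Python with PyTorch, but the Roboformer architecture itself is not described here. The sketch below is therefore only a minimal, hypothetical illustration of the general pattern the abstract describes: a single video encoder whose representation is shared across several task heads (phase recognition, gesture classification, skills assessment). All names, dimensions, and label counts (e.g. VideoMultiTaskModel, n_phases) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only; not the Roboformer implementation, whose details
# are not given in this record. Shows one video encoder feeding multiple
# task heads, as the abstract describes. All sizes are hypothetical.
import torch
import torch.nn as nn

class VideoMultiTaskModel(nn.Module):
    def __init__(self, feat_dim=256, n_phases=10, n_gestures=8, n_skill_levels=2):
        super().__init__()
        # Per-frame feature extractor: a small CNN over RGB frames.
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Temporal transformer over the sequence of frame features.
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)
        # One lightweight head per task, sharing the same video representation.
        self.phase_head = nn.Linear(feat_dim, n_phases)
        self.gesture_head = nn.Linear(feat_dim, n_gestures)
        self.skill_head = nn.Linear(feat_dim, n_skill_levels)

    def forward(self, video):
        # video: (batch, time, channels, height, width)
        b, t = video.shape[:2]
        feats = self.frame_encoder(video.flatten(0, 1)).view(b, t, -1)
        pooled = self.temporal(feats).mean(dim=1)  # average over time
        return {
            "phase": self.phase_head(pooled),
            "gesture": self.gesture_head(pooled),
            "skill": self.skill_head(pooled),
        }

model = VideoMultiTaskModel()
clip = torch.randn(2, 16, 3, 64, 64)  # two 16-frame clips
outputs = model(clip)
print({k: v.shape for k, v in outputs.items()})
```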


Item Type: Report or Paper (Discussion Paper)
Related URLs: https://doi.org/10.48550/arXiv.2205.03028 (arXiv, Discussion Paper)
ORCID: Anandkumar, Animashree (0000-0002-6974-6797)
Additional Information: Attribution 4.0 International (CC BY 4.0). Data availability: the data from the University of Southern California and St. Antonius Hospital are not publicly available. Code availability: all models were developed using Python and standard deep learning libraries such as PyTorch; the code and model parameters will be made publicly available via GitHub. Reporting summary: further information on research design is available in the Nature Research Reporting Summary linked to this article.
Record Number: CaltechAUTHORS:20220714-212511473
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20220714-212511473
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 115583
Collection: CaltechAUTHORS
Deposited By: George Porter
Deposited On: 15 Jul 2022 22:41
Last Modified: 15 Jul 2022 22:41
