Length-scale study in deep learning prediction for
non-small cell lung cancer brain metastasis
Haowen Zhou^1,+, Siyu (Steven) Lin^1,+, Mark Watson^2, Cory T. Bernadt^2, Oumeng Zhang^1, Ling Liao^2, Ramaswamy Govindan^3, Richard J. Cote^2, and Changhuei Yang^1,*

^1 California Institute of Technology, Department of Electrical Engineering, Pasadena, CA 91125, USA
^2 Washington University School of Medicine, Department of Pathology and Immunology, St. Louis, MO 63110, USA
^3 Washington University School of Medicine, Department of Medicine, St. Louis, MO 63110, USA
^* chyang@caltech.edu
^+ These authors contributed equally to this work.
ABSTRACT
A supplementary document to the paper titled "Length-scale study in deep learning prediction for non-small cell lung cancer brain metastasis."
Illustration of model interpretation methods for early-stage NSCLC metastatic progression
We applied additional model interpretation methods to evaluate the early-stage non-small-cell lung cancer (NSCLC) metastatic progression risk prediction. Three popular methods are implemented: gradient^1, integrated gradients^2, and gradient SHAP^3. Example tiles are shown in Figure S1, where the visualization of the model's attention is overlaid on the grayscale image tiles. We initially attempted to draw conclusions from these attention maps. However, the expert pulmonary pathologists on our team concluded that, in general, the model's attention was directed to subtle and varied details in each image tile that could not be associated with any previously recognized, or any newly identifiable, cellular or histopathologic features.
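As an illustration of the first two attribution methods, the sketch below computes a vanilla gradient (saliency) map and an integrated-gradients map in plain PyTorch. The small convolutional network, the random input tile, and the all-zero baseline are hypothetical stand-ins for the paper's trained classifier and H&E tiles, not the actual pipeline; the gradient SHAP variant would additionally average over noisy baselines.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in for the trained tile classifier
# (2 outputs: progressor / non-progressor).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),
).eval()

tile = torch.rand(1, 3, 64, 64)  # random stand-in for one H&E image tile
target = 1                       # class index whose score we attribute

def vanilla_gradient(model, x, target):
    """Saliency map: gradient of the target-class score w.r.t. input pixels."""
    x = x.clone().requires_grad_(True)
    model(x)[0, target].backward()
    return x.grad.detach()

def integrated_gradients(model, x, target, steps=32):
    """Integrated gradients with an all-zero (black) baseline:
    average the gradient along the straight path from baseline to x,
    then scale by (x - baseline)."""
    baseline = torch.zeros_like(x)
    total = torch.zeros_like(x)
    for alpha in torch.linspace(1.0 / steps, 1.0, steps):
        point = baseline + alpha * (x - baseline)
        total += vanilla_gradient(model, point, target)
    return (x - baseline) * total / steps

grad_map = vanilla_gradient(model, tile, target)
ig_map = integrated_gradients(model, tile, target)

# Collapse the channel axis to one heat map per tile for overlay on grayscale.
heat = ig_map.abs().sum(dim=1)
```

Each attribution map has the same shape as the input tile; summing absolute values over the channel axis yields the single-channel heat map that is normalized and overlaid on the grayscale tile in Figure S1.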
References
1. Simonyan, K., Vedaldi, A. & Zisserman, A. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034 (2013).
2. Sundararajan, M., Taly, A. & Yan, Q. Axiomatic attribution for deep networks. arXiv preprint arXiv:1703.01365 (2017).
3. Lundberg, S. M. & Lee, S.-I. A unified approach to interpreting model predictions. In Guyon, I. et al. (eds.) Advances in Neural Information Processing Systems, vol. 30 (Curran Associates, Inc., 2017).
Figure S1. Images for different model interpretation methods. The left column shows the original H&E-stained images. The right three columns show our model's attention maps computed with the gradient, integrated gradients, and gradient SHAP methods, respectively. The normalized attention is overlaid on a grayscale version of the image in the left column.