
Tensor Contraction Layers for Parsimonious Deep Nets

Kossaifi, Jean and Khanna, Aran and Lipton, Zachary and Furlanello, Tommaso and Anandkumar, Anima (2017) Tensor Contraction Layers for Parsimonious Deep Nets. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). IEEE, Piscataway, NJ, pp. 1940-1946. ISBN 978-1-5386-0733-6.

PDF - Submitted Version. See Usage Policy.


Tensors offer a natural representation for many kinds of data frequently encountered in machine learning. Images, for example, are naturally represented as third-order tensors, whose modes correspond to height, width, and channels. Tensor decompositions, in particular, are noted for their ability to discover multi-dimensional dependencies and to produce compact low-rank approximations of data. In this paper, we explore the use of tensor contractions as neural network layers and investigate several ways to apply them to activation tensors. Specifically, we propose the Tensor Contraction Layer (TCL), the first attempt to incorporate tensor contractions as end-to-end trainable neural network layers. Applied to existing networks, TCLs reduce the dimensionality of the activation tensors and thus the number of model parameters. We evaluate the TCL on the task of image recognition, augmenting popular networks (AlexNet, VGG) on the CIFAR100 and ImageNet datasets; the resulting models are trainable end-to-end. Studying the effect of parameter reduction via tensor contraction on performance, we demonstrate significant model compression without significant loss of accuracy and, in some cases, improved performance.
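The core operation the abstract describes is contracting each non-batch mode of an activation tensor with a (learnable) factor matrix, shrinking the tensor before any fully connected layer. A minimal NumPy sketch of this idea, assuming a forward pass only (no training) and illustrative function names (`mode_dot`, `tcl_forward`) and sizes chosen here for demonstration:

```python
import numpy as np

def mode_dot(tensor, matrix, mode):
    """n-mode product: contract `tensor` with `matrix` along axis `mode`."""
    # Move the contracted mode to the front, multiply, then move it back.
    t = np.moveaxis(tensor, mode, 0)
    front, rest = t.shape[0], t.shape[1:]
    t = matrix @ t.reshape(front, -1)
    return np.moveaxis(t.reshape((matrix.shape[0],) + rest), 0, mode)

def tcl_forward(x, factors):
    """Tensor contraction layer (forward pass, sketch):
    contract each non-batch mode of x with its factor matrix."""
    out = x
    for mode, w in enumerate(factors, start=1):
        out = mode_dot(out, w, mode)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 6, 6, 32))  # batch of 6x6x32 activation tensors
# Factor matrices shrink each mode: 6 -> 4, 6 -> 4, 32 -> 16.
factors = [rng.standard_normal((r, d)) for r, d in [(4, 6), (4, 6), (16, 32)]]
y = tcl_forward(x, factors)
print(y.shape)  # (8, 4, 4, 16)
```

In a trained network the factor matrices would be learned jointly with the rest of the weights; the reduced output size is what cuts the parameter count of any subsequent fully connected layer.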

Item Type: Book Section
Related URLs: Paper
Additional Information: © 2017 IEEE.
Record Number: CaltechAUTHORS:20180321-103123441
Persistent URL:
Official Citation: J. Kossaifi, A. Khanna, Z. Lipton, T. Furlanello and A. Anandkumar, "Tensor Contraction Layers for Parsimonious Deep Nets," 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, 2017, pp. 1940-1946. doi: 10.1109/CVPRW.2017.243.
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 85394
Deposited By: Tony Diaz
Deposited On: 26 Mar 2018 21:56
Last Modified: 03 Oct 2019 19:30
