
Adaptive Fourier Neural Operators: Efficient Token Mixers for Transformers

Guibas, John and Mardani, Morteza and Li, Zongyi and Tao, Andrew and Anandkumar, Anima and Catanzaro, Bryan (2021) Adaptive Fourier Neural Operators: Efficient Token Mixers for Transformers. (Unpublished) https://resolver.caltech.edu/CaltechAUTHORS:20220714-224636083

PDF (Submitted Version, Creative Commons Attribution, 629kB)

Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20220714-224636083

Abstract

Vision transformers have delivered tremendous success in representation learning. This is primarily due to effective token mixing through self-attention. However, self-attention scales quadratically with the number of pixels, which becomes infeasible for high-resolution inputs. To cope with this challenge, we propose the Adaptive Fourier Neural Operator (AFNO) as an efficient token mixer that learns to mix in the Fourier domain. AFNO is based on a principled foundation of operator learning, which allows us to frame token mixing as a continuous global convolution without any dependence on the input resolution. This principle was previously used to design the Fourier Neural Operator (FNO), which solves global convolution efficiently in the Fourier domain and has shown promise in learning challenging PDEs. To handle challenges in visual representation learning, such as discontinuities in images and high-resolution inputs, we propose principled architectural modifications to FNO that result in memory and computational efficiency. These include imposing a block-diagonal structure on the channel-mixing weights, adaptively sharing weights across tokens, and sparsifying the frequency modes via soft-thresholding and shrinkage. The resulting model is highly parallel, with quasi-linear complexity and memory that is linear in the sequence size. AFNO outperforms self-attention mechanisms for few-shot segmentation in terms of both efficiency and accuracy. For Cityscapes segmentation with the Segformer-B3 backbone, AFNO can handle a sequence size of 65k and outperforms other efficient self-attention mechanisms.
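
The token-mixing recipe described above (FFT over the token grid, block-diagonal channel mixing shared across tokens, soft-thresholding of the frequency modes, inverse FFT) can be made concrete in a few lines. The PyTorch module below is a minimal sketch based only on this description: names such as AFNOMixerSketch, num_blocks, and sparsity_threshold are illustrative rather than the authors' API, and the layer norms, residual connections, and MLP of a full transformer block are omitted. The reference implementation is in the GitHub repository listed under Related URLs.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AFNOMixerSketch(nn.Module):
    """Fourier-domain token mixing (illustrative sketch, not the official code)."""

    def __init__(self, dim, num_blocks=8, sparsity_threshold=0.01):
        super().__init__()
        assert dim % num_blocks == 0, "channel dim must split evenly into blocks"
        self.num_blocks = num_blocks
        self.block_dim = dim // num_blocks
        self.threshold = sparsity_threshold
        # Block-diagonal channel-mixing weights shared across all frequency
        # modes (the weight sharing described in the abstract), stored as
        # separate real and imaginary parts.
        self.w = nn.Parameter(0.02 * torch.randn(2, num_blocks, self.block_dim, self.block_dim))

    def forward(self, x, h, w):
        # x: (batch, h*w, dim) tokens laid out on an h-by-w grid.
        b, n, d = x.shape
        x = x.view(b, h, w, d)
        # Global token mixing as a convolution, computed in O(N log N)
        # via a 2D FFT over the spatial axes.
        x = torch.fft.rfft2(x, dim=(1, 2), norm="ortho")
        x = x.view(b, h, w // 2 + 1, self.num_blocks, self.block_dim)
        # Apply the same block-diagonal weights at every frequency mode.
        w_c = torch.complex(self.w[0], self.w[1])
        x = torch.einsum("bhmkd,kde->bhmke", x, w_c)
        # Sparsify frequency modes by soft-thresholding (shrinkage) the
        # real and imaginary parts.
        x = torch.view_as_real(x)
        x = F.softshrink(x, lambd=self.threshold)
        x = torch.view_as_complex(x.contiguous())
        x = x.reshape(b, h, w // 2 + 1, d)
        # Back to the spatial domain; output keeps the input resolution.
        x = torch.fft.irfft2(x, s=(h, w), dim=(1, 2), norm="ortho")
        return x.reshape(b, n, d)

# Example: mix a 16x16 grid of 64-dimensional tokens.
mixer = AFNOMixerSketch(dim=64)
tokens = torch.randn(2, 16 * 16, 64)
mixed = mixer(tokens, h=16, w=16)  # same shape as tokens

Mixing via the FFT is what yields the quasi-linear complexity claimed in the abstract: the 2D FFT over N tokens costs O(N log N), and the per-mode block-diagonal multiply is linear in N.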


Item Type: Report or Paper (Discussion Paper)
Related URLs:
    https://doi.org/10.48550/arXiv.2111.13587 (arXiv, Discussion Paper)
    https://github.com/jtguibas/AdaptiveFourierNeuralOperator (Related Item, Code)
ORCID:
    Li, Zongyi: 0000-0003-2081-9665
    Anandkumar, Anima: 0000-0002-6974-6797
Additional Information: Attribution 4.0 International (CC BY 4.0). Joint first authors, contributed equally. The first author did this work during an internship at NVIDIA, and the second author led the project.
Record Number: CaltechAUTHORS:20220714-224636083
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20220714-224636083
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 115602
Collection: CaltechAUTHORS
Deposited By: George Porter
Deposited On: 15 Jul 2022 23:17
Last Modified: 15 Jul 2022 23:17
