
SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers

Xie, Enze and Wang, Wenhai and Yu, Zhiding and Anandkumar, Anima and Alvarez, Jose M. and Luo, Ping (2021) SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers. In: Advances in Neural Information Processing Systems 34 (NeurIPS 2021). Neural Information Processing Foundation, La Jolla, CA, pp. 1-14. ISBN 9781713845393.

Full text is not posted in this repository. Consult Related URLs below.

We present SegFormer, a simple, efficient, yet powerful semantic segmentation framework that unifies Transformers with lightweight multilayer perceptron (MLP) decoders. SegFormer has two appealing features: 1) SegFormer comprises a novel hierarchically structured Transformer encoder that outputs multiscale features. It does not need positional encoding, thereby avoiding the interpolation of positional codes, which degrades performance when the testing resolution differs from the training resolution. 2) SegFormer avoids complex decoders. The proposed MLP decoder aggregates information from different layers, combining both local and global attention to render powerful representations. We show that this simple and lightweight design is the key to efficient segmentation with Transformers. We scale our approach up to obtain a series of models from SegFormer-B0 to SegFormer-B5, which reach much better performance and efficiency than previous counterparts. For example, SegFormer-B4 achieves 50.3% mIoU on ADE20K with 64M parameters, being 5x smaller and 2.2% better than the previous best method. Our best model, SegFormer-B5, achieves 84.0% mIoU on the Cityscapes validation set and shows excellent zero-shot robustness on Cityscapes-C. Code is available at:
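The decoder idea summarized in the abstract (project each multiscale feature map to a common width with a linear layer, upsample all maps to a shared resolution, concatenate, and fuse with another linear layer) can be sketched as follows. This is a minimal NumPy illustration of that aggregation scheme, not the paper's implementation; the channel widths, embedding size, and class count below are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(x, w):
    # Pointwise (1x1) projection: (H, W, C_in) @ (C_in, C_out) -> (H, W, C_out)
    return x @ w

def upsample_nearest(x, factor):
    # Nearest-neighbor upsampling by integer factor along both spatial axes.
    return x.repeat(factor, axis=0).repeat(factor, axis=1)

def mlp_decoder(features, proj_ws, w_fuse, w_cls):
    """All-MLP aggregation: unify channel widths per stage, upsample every
    stage to the highest-resolution (1/4-scale) map, concatenate along the
    channel axis, fuse with one linear layer, then predict per-pixel logits."""
    target = features[0].shape[0]
    unified = []
    for f, w in zip(features, proj_ws):
        p = linear(f, w)                              # unify channel dims
        p = upsample_nearest(p, target // f.shape[0])  # match 1/4 resolution
        unified.append(p)
    fused = linear(np.concatenate(unified, axis=-1), w_fuse)
    return linear(fused, w_cls)                       # per-pixel class logits

# Toy multiscale features at strides 4/8/16/32 of a 128x128 input.
dims = [32, 64, 160, 256]   # per-stage channel widths (illustrative)
sizes = [32, 16, 8, 4]      # spatial sizes from 1/4 down to 1/32 scale
feats = [rng.standard_normal((s, s, c)) for s, c in zip(sizes, dims)]

embed, n_cls = 64, 19       # common embedding width and class count (illustrative)
proj_ws = [rng.standard_normal((c, embed)) * 0.1 for c in dims]
w_fuse = rng.standard_normal((4 * embed, embed)) * 0.1
w_cls = rng.standard_normal((embed, n_cls)) * 0.1

logits = mlp_decoder(feats, proj_ws, w_fuse, w_cls)
print(logits.shape)  # (32, 32, 19): one class-score vector per 1/4-scale pixel
```

Because the lowest-resolution stages already carry large effective receptive fields, this simple concatenate-and-fuse step is what lets the decoder mix local and global context without attention layers of its own.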

Item Type: Book Section
Related URLs:
URL | URL Type | Description
 | Item | Discussion Paper
ORCID:
Xie, Enze: 0000-0001-6890-1049
Anandkumar, Anima: 0000-0002-6974-6797
Luo, Ping: 0000-0002-6685-7950
Additional Information: We thank Ding Liang, Zhe Chen and Yaojun Liu for insightful discussion without which this paper would not be possible. Ping Luo is supported by the General Research Fund of Hong Kong No. 27208720.
Funding Agency | Grant Number
Research Grants Council of Hong Kong | 27208720
Record Number: CaltechAUTHORS:20221222-232723221
Persistent URL:
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 118602
Deposited By: George Porter
Deposited On: 23 Dec 2022 20:21
Last Modified: 23 Dec 2022 20:21
