ZerO Initialization: Initializing Residual Networks with only Zeros and Ones
Abstract
Deep neural networks are usually initialized with random weights, with the initial variance chosen carefully to ensure stable signal propagation during training. However, there is no consensus on how to select the variance, and this becomes especially challenging as the number of layers grows. In this work, we replace the widely used random weight initialization with a fully deterministic initialization scheme, ZerO, which initializes residual networks with only zeros and ones. By augmenting the standard ResNet architectures with a few extra skip connections and Hadamard transforms, ZerO allows us to start training entirely from zeros and ones. This has many benefits, such as improving reproducibility (by reducing the variance over different experimental runs) and allowing network training without batch normalization. Surprisingly, we find that ZerO achieves state-of-the-art performance on various image classification datasets, including ImageNet, which suggests random weights may be unnecessary for modern network initialization.
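The sketch below illustrates the kind of deterministic, Hadamard-based initialization the abstract describes: identity-like weights when input and output dimensions match, and a Hadamard-transformed partial identity when they differ, so every entry derives from zeros and (signed) ones up to normalization. This is only a minimal illustration of the idea; the function names and the exact construction are assumptions for exposition, not the authors' released code or their precise algorithm (see the attached paper for that).

```python
import numpy as np

def hadamard(n: int) -> np.ndarray:
    """Sylvester-construction Hadamard matrix of size n (n must be a power of two)."""
    assert n > 0 and (n & (n - 1)) == 0, "n must be a power of two"
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def zero_style_init(fan_out: int, fan_in: int) -> np.ndarray:
    """Hypothetical ZerO-style initialization sketch (not the authors' exact algorithm):
    a plain identity for square layers, and a scaled Hadamard transform composed with
    partial identities for rectangular layers. No randomness is used anywhere."""
    if fan_out == fan_in:
        return np.eye(fan_out)                # square layer: identity (ones and zeros)
    # Rectangular layer: embed into the nearest power-of-two size and mix with a
    # normalized Hadamard matrix, keeping the construction fully deterministic.
    n = 1 << int(np.ceil(np.log2(max(fan_out, fan_in))))
    H = hadamard(n) / np.sqrt(n)              # orthonormal Hadamard transform
    P_in = np.eye(n)[:, :fan_in]              # partial identity, n x fan_in
    P_out = np.eye(n)[:fan_out, :]            # partial identity, fan_out x n
    return P_out @ H @ P_in                   # built only from 0 and +/-1 entries (up to scaling)

# Example: a 512 -> 256 linear layer receives a deterministic weight matrix,
# so repeated runs start from exactly the same point.
W = zero_style_init(256, 512)
print(W.shape)  # (256, 512)
```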
Attached Files
- Submitted - 2110.12661.pdf (662.2 kB, md5:76355b2d04e72f0fec7af5ecd81bd575)
Additional details
- Eprint ID: 115609
- Resolver ID: CaltechAUTHORS:20220714-224704502
- Created: 2022-07-15 (from EPrint's datestamp field)
- Updated: 2023-06-02 (from EPrint's last_modified field)