Title | SGD Noise and Implicit Low-Rank Bias in Deep Neural Networks |
Publication Type | CBMM Memos |
Year of Publication | 2022 |
Authors | Galanti, T., Poggio, T.
Date Published | 03/2022 |
Abstract | We analyze deep ReLU neural networks trained with mini-batch stochastic gradient descent (SGD) and weight decay. We prove that the source of the SGD noise is an implicit low-rank constraint across all of the weight matrices within the network. Furthermore, we show, both theoretically and empirically, that when training a neural network using SGD with a small batch size, the resulting weight matrices are expected to be of small rank. Our analysis relies on a minimal set of assumptions; the networks may include convolutional layers, residual connections, and batch normalization layers. |
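The abstract's low-rank claim lends itself to a quick empirical check. The following is a minimal PyTorch sketch, not the authors' code: it trains a small ReLU MLP on synthetic data with small-batch SGD and weight decay, then reports an effective rank (the number of singular values above 1% of the largest) for each weight matrix. The architecture, data, hyperparameters, and rank threshold are all illustrative assumptions.

```python
# Minimal sketch: probing the implicit low-rank bias of small-batch SGD + weight decay.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic regression data (assumption: any simple task suffices to observe the effect).
X = torch.randn(1024, 32)
y = torch.randn(1024, 1)

# Small ReLU MLP; sizes are illustrative.
model = nn.Sequential(
    nn.Linear(32, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),
)

# Small batch size and nonzero weight decay, matching the setting in the abstract.
opt = torch.optim.SGD(model.parameters(), lr=1e-2, weight_decay=5e-3)
loss_fn = nn.MSELoss()
batch_size = 4

for step in range(5000):
    idx = torch.randint(0, X.shape[0], (batch_size,))
    opt.zero_grad()
    loss = loss_fn(model(X[idx]), y[idx])
    loss.backward()
    opt.step()

# Effective rank: count singular values above 1% of the largest (assumed threshold).
for name, p in model.named_parameters():
    if p.ndim == 2:  # weight matrices only; biases are 1-D
        s = torch.linalg.svdvals(p.detach())
        eff_rank = int((s > 1e-2 * s[0]).sum())
        print(f"{name}: shape {tuple(p.shape)}, effective rank {eff_rank}")
```

If the paper's prediction holds, the reported effective ranks should shrink as the batch size decreases; rerunning with a large batch or with weight_decay=0 provides a natural control.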
DSpace@MIT |
CBMM Relationship:
- CBMM Funded