Title | Feature learning in deep classifiers through Intermediate Neural Collapse |
Publication Type | CBMM Memos |
Year of Publication | 2023 |
Authors | Rangamani, A, Lindegaard, M, Galanti, T, Poggio, T |
Number | 141 |
Date Published | 02/2023 |
Abstract | In this paper, we conduct an empirical study of the feature learning process in deep classifiers. Recent research has identified a training phenomenon called Neural Collapse (NC), in which the top-layer feature embeddings of samples from the same class tend to concentrate around their means, and the top layer’s weights align with those features. Our study aims to investigate whether these properties extend to intermediate layers. We empirically study the evolution of the covariance and mean of representations across different layers and show that as we move deeper into a trained neural network, the within-class covariance decreases relative to the between-class covariance. Additionally, we find that in the top layers, where the between-class covariance is dominant, the subspace spanned by the class means aligns with the subspace spanned by the most significant singular vectors of the weight matrix in the corresponding layer. Finally, we discuss the relationship between NC and Associative Memories (Willshaw et al., 1969). |
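The quantities mentioned in the abstract can be illustrated with a short sketch. The snippet below is a hypothetical, minimal illustration and not the authors' code: given one layer's feature vectors, class labels, and that layer's weight matrix (the names `features`, `labels`, `W`, and the helper `nc_metrics` are assumptions), it computes the within-class and between-class covariance matrices and a simple measure of alignment between the class-mean subspace and the top singular vectors of the weights.

```python
import numpy as np

def nc_metrics(features, labels, W, k=None):
    """Sketch of the collapse/alignment measurements described above.

    features: (N, d) array of a layer's feature vectors
    labels:   (N,) integer class labels
    W:        (out_dim, d) weight matrix acting on these features
    """
    classes = np.unique(labels)
    global_mean = features.mean(axis=0)

    class_means = []
    Sigma_W = np.zeros((features.shape[1], features.shape[1]))
    for c in classes:
        fc = features[labels == c]
        mu_c = fc.mean(axis=0)
        class_means.append(mu_c)
        centered = fc - mu_c
        Sigma_W += centered.T @ centered / len(fc)   # within-class covariance
    Sigma_W /= len(classes)

    M = np.stack(class_means) - global_mean          # centered class means
    Sigma_B = M.T @ M / len(classes)                 # between-class covariance

    # NC1-style ratio: within-class variability relative to between-class variability
    nc1 = np.trace(Sigma_W @ np.linalg.pinv(Sigma_B)) / len(classes)

    # Alignment between the class-mean subspace and the top-k right singular vectors of W
    k = k or len(classes) - 1
    _, _, Vt = np.linalg.svd(W, full_matrices=False)
    U_w = Vt[:k].T                                   # top-k right singular vectors of W
    U_m, _, _ = np.linalg.svd(M.T, full_matrices=False)
    U_m = U_m[:, :k]
    # Mean squared cosine of the principal angles between the two subspaces (in [0, 1])
    alignment = np.linalg.norm(U_w.T @ U_m, ord="fro") ** 2 / k
    return nc1, alignment
```

Applied layer by layer, the trend described in the abstract would correspond to `nc1` shrinking with depth and `alignment` approaching 1 in the top layers.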
CBMM Relationship | CBMM Funded |