Description
Recent research increasingly investigates how neural networks, over-parameterized as they are, manage to generalize. According to traditional statistics, the more parameters, the more the…
Summary
- A surprising explanation for generalization in neural networks: recent research increasingly investigates how over-parameterized neural networks manage to generalize at all.
- Minyoung Huh et al. proposed in a recent paper, “The Low-Rank Simplicity Bias in Deep Networks”, that depth increases the proportion of simpler solutions in parameter space.
- Here, “simpler” means lower rank: a low-rank linear map has fewer effective degrees of freedom, so it represents a simpler function.
- For linear networks this follows from a basic fact: stacking layers can only lower the rank of the end-to-end map, since rank(AB) ≤ min(rank(A), rank(B)). What is more interesting is that the same low-rank bias shows up empirically in nonlinear networks (see the sketch below).
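A minimal NumPy sketch of both points (my own illustration, not code from the paper): the hard bound rank(AB) ≤ min(rank(A), rank(B)), and the way an entropy-based effective rank (in the spirit of the measure Huh et al. discuss) of a product of random square matrices shrinks as depth grows, even though the exact rank stays full. The dimensions and depth range here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64

# Hard bound: rank(A @ B) <= min(rank(A), rank(B)).
A = rng.standard_normal((n, 10)) @ rng.standard_normal((10, n))  # rank 10 by construction
B = rng.standard_normal((n, n))                                  # full rank (almost surely)
assert np.linalg.matrix_rank(A @ B) <= min(np.linalg.matrix_rank(A),
                                           np.linalg.matrix_rank(B))

def effective_rank(M):
    """Entropy-based effective rank: exp of the Shannon entropy of the
    normalized singular-value distribution. Scale-invariant, and sensitive
    to how fast the spectrum decays, unlike the exact rank."""
    s = np.linalg.svd(M, compute_uv=False)
    p = s / s.sum()
    return float(np.exp(-np.sum(p * np.log(p + 1e-12))))

# A "deep linear network": keep multiplying in more random layers.
W = np.eye(n)
for depth in range(1, 9):
    W = W @ (rng.standard_normal((n, n)) / np.sqrt(n))  # 1/sqrt(n) keeps magnitudes tame
    print(f"depth={depth}: exact rank={np.linalg.matrix_rank(W)}, "
          f"effective rank={effective_rank(W):5.1f}")
```

In a run like this, the exact rank typically stays at 64 throughout while the effective rank drops steadily with depth: composing layers concentrates the spectrum, which is the low-rank simplicity bias in its simplest, linear form.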