Deeper Neural Networks Lead to Simpler Embeddings

By Medium - 2021-03-21

Description

Recent research is increasingly investigating how neural networks, being as over-parameterized as they are, generalize. That is, according to traditional statistics, the more parameters, the more the…

Summary

  • A surprising explanation for generalization in neural networks: recent research is increasingly investigating how over-parameterized neural networks generalize.
  • Minyoung Huh et al. proposed in a recent paper, “The Low-Rank Simplicity Bias in Deep Networks”, that depth increases the proportion of simpler solutions in the parameter space.
  • A mapping of lower rank can be considered simpler, and in a linear network each added layer multiplies in another weight matrix, so the rank of the overall mapping is bounded by that of every factor.
  • rank(AB) ≤ min(rank(A), rank(B)). What is more interesting, though, is that the same low-rank bias applies to nonlinear networks.
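The rank inequality behind this argument is easy to check numerically. Below is a minimal sketch (not code from the paper) using NumPy: the matrix sizes, ranks, and random seed are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build two 10x10 matrices with controlled rank from thin factors:
# A has rank 3, B has rank 5 (almost surely, for Gaussian factors).
A = rng.standard_normal((10, 3)) @ rng.standard_normal((3, 10))
B = rng.standard_normal((10, 5)) @ rng.standard_normal((5, 10))

rank_A = np.linalg.matrix_rank(A)
rank_B = np.linalg.matrix_rank(B)
rank_AB = np.linalg.matrix_rank(A @ B)

print(rank_A, rank_B, rank_AB)

# The product can never have higher rank than either factor.
assert rank_AB <= min(rank_A, rank_B)
```

Stacking more linear layers only multiplies in more factors, so the bound applies again at every layer: depth cannot increase the rank of the end-to-end linear map.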


Topics

  1. Machine_Learning (0.56)
  2. Backend (0.08)
  3. NLP (0.07)

Similar Articles

10 Deep Learning Terms Explained in Simple English

By datasciencecentral - 2020-12-27

  Deep Learning is a new area of Machine Learning research that has been gaining significant media interest owing to the role it is playing in artificial intel…

New deep learning models: Fewer neurons, more intelligence

By techxplore - 2020-10-14

Artificial intelligence has arrived in our everyday lives—from search engines to self-driving cars. This has to do with the enormous computing power that has become available in recent years. But new ...