Description
A groundbreaking, relatively recent discovery upends classical statistical intuition, with practical implications for data science practitioners and…
Summary
- Introduction: Data science is a fascinating field.
- However, parts of the machine learning community routinely train models that fit the training data perfectly, achieving zero training error, and yet these models go on to perform well on unseen test data.
- The second model is over-parameterized, i.e., it has more parameters than training examples.
- The above four points are examples of “model-wise double descent”: as model capacity / complexity / flexibility increases, test error first follows the traditional bias-variance tradeoff (the classic U-shaped curve) and then undergoes a second descent (see the sketch after this list).
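
To make the second descent concrete, here is a minimal sketch of model-wise double descent using a random-features regression model fit by minimum-norm least squares, in the spirit of the double-descent literature. The specific setup (a noisy sine target, random ReLU features, the `random_relu_features` and `min_norm_fit` helpers, and the chosen feature counts) is an illustrative assumption, not taken from the article; with a sweep like this, test error typically peaks near the interpolation threshold (number of features ≈ number of training points) and falls again beyond it.

```python
# Illustrative sketch of model-wise double descent (not the article's code).
# A random-features regression model is swept across the interpolation
# threshold; the exact error values depend on the noise level and seeds.
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression problem: noisy sine wave.
n_train, n_test = 30, 200
x_train = rng.uniform(-1, 1, size=(n_train, 1))
y_train = np.sin(2 * np.pi * x_train[:, 0]) + 0.3 * rng.standard_normal(n_train)
x_test = np.linspace(-1, 1, n_test).reshape(-1, 1)
y_test = np.sin(2 * np.pi * x_test[:, 0])


def random_relu_features(x, n_features, seed=1):
    """Project inputs through a fixed set of random ReLU features.

    The same seed is reused so train and test share the same projection.
    """
    feat_rng = np.random.default_rng(seed)
    w = feat_rng.standard_normal((x.shape[1], n_features))
    b = feat_rng.standard_normal(n_features)
    return np.maximum(x @ w + b, 0.0)


def min_norm_fit(phi, y):
    """Minimum-norm least-squares fit; the pseudo-inverse handles both the
    under-parameterized and over-parameterized (interpolating) regimes."""
    return np.linalg.pinv(phi) @ y


# Sweep model capacity (number of random features) across the
# interpolation threshold at n_features == n_train.
for n_features in [5, 10, 20, 30, 40, 100, 1000]:
    phi_train = random_relu_features(x_train, n_features)
    phi_test = random_relu_features(x_test, n_features)
    coef = min_norm_fit(phi_train, y_train)
    train_mse = np.mean((phi_train @ coef - y_train) ** 2)
    test_mse = np.mean((phi_test @ coef - y_test) ** 2)
    print(f"features={n_features:5d}  train MSE={train_mse:.4f}  test MSE={test_mse:.4f}")
```

Using the pseudo-inverse is the key design choice here: once the model has enough features to fit the training data exactly, it returns the minimum-norm interpolating solution, which is the regime where the second descent in test error appears.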