Forgetting in Deep Learning. Team members: Qiang Fei, Yingsi Jian

By Medium - 2020-12-16

Description

Neural network models suffer from the phenomenon of catastrophic forgetting: a model can drastically lose its generalization ability on a task after being trained on a new task. This usually means a…
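The phenomenon is easy to reproduce. Below is a minimal sketch, assuming PyTorch and two hypothetical synthetic tasks (the datasets, model, and hyperparameters are illustrative stand-ins, not the authors' setup): train on task 1, record task 1 accuracy, train on task 2, then re-measure task 1 accuracy; the drop is the forgetting.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

def make_task(shift):
    # Hypothetical synthetic binary task; `shift` makes the two tasks differ.
    x = torch.randn(512, 10) + shift
    y = (x.sum(dim=1) > shift * 10).long()
    return x, y

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

def train(model, x, y, lr=0.1, epochs=50):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
x1, y1 = make_task(0.0)
x2, y2 = make_task(3.0)

train(model, x1, y1)
acc_before = accuracy(model, x1, y1)  # task 1 accuracy right after task 1
train(model, x2, y2)                  # now train on task 2 only
acc_after = accuracy(model, x1, y1)   # task 1 accuracy typically collapses
print(f"task 1 accuracy: {acc_before:.2f} -> {acc_after:.2f}")
```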

Summary

  • Problem statement: neural network models suffer from the phenomenon of catastrophic forgetting. One reported result is the percentage change in forgetting at different levels of mixup, compared to the baseline model.
  • One observation is that the diagonal shows the least forgetting, corresponding to cases where task 1 and task 2 have the same initial learning rate.
  • Training-epoch experiments: we fix the number of task 2 training epochs to study the effect of the number of task 1 epochs on forgetting (a minimal sketch of the full two-task sweep follows this list).
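To make these bullets concrete, here is one minimal sketch of such a sweep, again assuming PyTorch and the same kind of synthetic stand-in tasks; the mixup variant, the `mixup_alpha` level, and the learning-rate grid are assumptions for illustration, not the authors' code. Forgetting is measured as the drop in task 1 accuracy after task 2 training, recorded over a grid of (task 1, task 2) learning rates, with mixup optionally applied to task 2 batches.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

def make_task(shift):
    # Hypothetical synthetic task (same stand-in as the sketch above).
    x = torch.randn(512, 10) + shift
    y = (x.sum(dim=1) > shift * 10).long()
    return x, y

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

def train(model, x, y, lr, epochs, mixup_alpha=0.0):
    # Soft-label training loop; mixup_alpha > 0 enables mixup each epoch.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        xb, yb = x, F.one_hot(y, 2).float()
        if mixup_alpha > 0:
            # mixup: convex combination of shuffled pairs; alpha sets the level
            lam = torch.distributions.Beta(mixup_alpha, mixup_alpha).sample()
            perm = torch.randperm(xb.size(0))
            xb = lam * xb + (1 - lam) * xb[perm]
            yb = lam * yb + (1 - lam) * yb[perm]
        opt.zero_grad()
        # cross-entropy against (possibly mixed) soft targets
        loss = -(yb * F.log_softmax(model(xb), dim=1)).sum(dim=1).mean()
        loss.backward()
        opt.step()

task1, task2 = make_task(0.0), make_task(3.0)
for lr1 in [0.01, 0.05, 0.1]:
    for lr2 in [0.01, 0.05, 0.1]:
        model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
        train(model, *task1, lr=lr1, epochs=50)
        acc1 = accuracy(model, *task1)               # before task 2
        train(model, *task2, lr=lr2, epochs=50, mixup_alpha=0.2)
        forgetting = acc1 - accuracy(model, *task1)  # drop on task 1
        print(f"lr1={lr1:.2f} lr2={lr2:.2f} forgetting={forgetting:+.3f}")
```

The least-forgetting diagonal from the summary corresponds to runs where lr1 == lr2; the epoch experiment is the same loop with task 2's `epochs` held fixed while task 1's is varied.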


Topics

  1. Machine_Learning (0.34)
  2. Backend (0.24)
  3. Database (0.08)

Similar Articles

Introduction to Active Learning

By KDnuggets - 2020-12-15

An extensive overview of Active Learning, with an explanation of how it works and how it can assist with data labeling, as well as its performance and potential limitations.

What is semi-supervised machine learning?

By TechTalks - 2021-01-04

Semi-supervised learning helps you solve classification problems when you don't have enough labeled data to train your machine learning model.

30 Most Asked Machine Learning Questions Answered

By Medium - 2021-03-18

Machine Learning is the path to a better, more advanced future. Machine Learning Developer is among the most in-demand jobs in 2021, and demand is expected to grow by 20–30% over the next 3–5 years. Machine…