A Primer on Contrastive Pretraining in Language Processing: Methods, Lessons Learned and Perspectives

By arXiv.org - 2021-02-28

Description

Modern natural language processing (NLP) methods employ self-supervised pretraining objectives such as masked language modeling to boost the performance of various application tasks. These pretraining ...

Summary

  • Modern natural language processing (NLP) methods employ self-supervised pretraining objectives such as masked language modeling to boost the performance of various application tasks.
  • In this survey, we summarize recent self-supervised and supervised contrastive NLP pretraining methods and describe where they are used to improve language modeling, few- or zero-shot learning, pretraining data efficiency, and specific NLP end tasks.
  • We introduce key contrastive learning concepts with lessons learned from prior research and structure the surveyed works by application and cross-field relations; a minimal loss sketch follows this list.
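
Since the summary centers on contrastive objectives, a small illustration may help ground the idea. Below is a minimal sketch of an InfoNCE-style contrastive loss of the kind most of the surveyed pretraining methods build on; the encoder outputs, batch construction, and temperature value are illustrative assumptions and are not taken from the survey itself.

# Minimal InfoNCE-style contrastive loss sketch (assumed setup, not the survey's code).
import torch
import torch.nn.functional as F

def info_nce_loss(anchor_emb, positive_emb, temperature=0.07):
    """Contrast each anchor with its positive against all other positives in the batch.

    anchor_emb, positive_emb: (batch_size, dim) tensors, e.g. two encoded views of the same text.
    """
    # Normalize so the dot product equals cosine similarity.
    anchor = F.normalize(anchor_emb, dim=-1)
    positive = F.normalize(positive_emb, dim=-1)

    # Similarity matrix: entry (i, j) compares anchor i with positive j.
    logits = anchor @ positive.t() / temperature

    # The matching pair sits on the diagonal, so the target class for row i is i.
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)

# Toy usage with random embeddings standing in for encoder outputs.
if __name__ == "__main__":
    a = torch.randn(8, 128)
    b = a + 0.1 * torch.randn(8, 128)  # slightly perturbed "positive" views
    print(info_nce_loss(a, b).item())

In words, each example is pulled toward its positive pair and pushed away from the other examples in the batch, which is the core mechanism the survey's methods vary through different choices of positives, negatives, and encoders.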

Topics

  1. NLP (0.31)
  2. Machine_Learning (0.26)
  3. Backend (0.13)

Similar Articles