Summary
- Modern natural language processing (NLP) methods employ self-supervised pretraining objectives such as masked language modeling to boost the performance of various application tasks.
- In this survey, we summarize recent self-supervised and supervised contrastive NLP pretraining methods and describe where they are used to improve language modeling, few- or zero-shot learning, pretraining data efficiency, and specific NLP end-tasks.
- We introduce key contrastive learning concepts with lessons learned from prior research and structure works by applications and cross-field relations.
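The core idea behind the contrastive objectives surveyed above can be illustrated with the widely used InfoNCE loss, which pulls embeddings of positive pairs together while pushing apart all other in-batch pairs. The snippet below is an illustrative sketch, not code from the survey; the function name, batch setup, and temperature value are our own assumptions.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE over a batch: (z1[i], z2[i]) are positive pairs;
    every other in-batch combination acts as a negative."""
    # L2-normalize so dot products become cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives sit on the diagonal; minimize their negative log-likelihood
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
anchor = rng.normal(size=(8, 16))
positive = anchor + 0.01 * rng.normal(size=(8, 16))  # two "views" of one input
print(info_nce_loss(anchor, positive))
```

With near-identical views the loss is close to zero, while unrelated pairs drive it toward log(batch_size), which is what makes the objective a useful pretraining signal.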