Self-supervised learning: The dark matter of intelligence

By Facebook - 2021-03-04

Description

How can we build machines with human-level intelligence? There’s a limit to how far the field of AI can go with supervised learning alone. Here's why...

Summary

  • Instead of contrastive methods, NLP systems use a predictive architecture in which the model directly produces a prediction for y.
  • One starts from a complete segment of text y, then corrupts it, e.g., by masking some words to produce the observation x.
  • The corrupted input is fed to a large neural network that is trained to reproduce the original text y (a minimal sketch of this setup follows this list).
  • An uncorrupted text will be reconstructed as itself (low reconstruction error), while a corrupted text will be reconstructed as an uncorrupted version of itself (large reconstruction error).
  • With a properly trained model, as the latent variable varies over a given set, the output prediction varies over the set of plausible predictions compatible with the input x. Latent-variable models can be trained with contrastive methods (sketched after this list).
  • The volume of the set over which the latent variable can vary limits the volume of outputs that take low energy.
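Below is a minimal, hedged sketch of the masked-reconstruction setup the first bullets describe, in PyTorch. The tiny transformer encoder, vocabulary size, masking rate, and random token batch are illustrative assumptions, not the models discussed in the article:

    import torch
    import torch.nn as nn

    VOCAB, MASK_ID, DIM = 1000, 0, 64

    class Denoiser(nn.Module):
        # Trained to reproduce the original tokens y from a corrupted input x.
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB, DIM)
            layer = nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(DIM, VOCAB)  # scores every word in the vocabulary

        def forward(self, x):
            return self.head(self.encoder(self.embed(x)))

    model = Denoiser()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    y = torch.randint(1, VOCAB, (8, 16))   # stand-in for a batch of tokenized text y
    mask = torch.rand(y.shape) < 0.15      # corrupt ~15% of positions
    x = y.masked_fill(mask, MASK_ID)       # observation x = masked version of y

    logits = model(x)                      # a prediction for every position
    loss = nn.functional.cross_entropy(    # reconstruction error on the masked words
        logits[mask], y[mask])
    loss.backward()
    opt.step()

The reconstruction error plays the role of an energy: it is low when the network's output matches the original text and high otherwise.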
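The contrastive training mentioned in the last two bullets can be illustrated with a standard InfoNCE-style loss (an assumption here; the article does not prescribe a specific loss). Negative similarity acts as the energy: it is pushed down for compatible (x, y) pairs and up for mismatched pairs drawn from the same batch:

    import torch
    import torch.nn.functional as F

    def contrastive_loss(x_emb, y_emb, temperature=0.1):
        # Entry (i, j) scores observation x_i against candidate y_j.
        x = F.normalize(x_emb, dim=1)
        y = F.normalize(y_emb, dim=1)
        sim = x @ y.t() / temperature
        # Diagonal entries are the compatible pairs (energy pushed down);
        # off-diagonal entries are the contrastive pairs (energy pushed up).
        targets = torch.arange(x.size(0))
        return F.cross_entropy(sim, targets)

    x_emb = torch.randn(32, 128)  # placeholder embeddings of observations x
    y_emb = torch.randn(32, 128)  # placeholder embeddings of matching targets y
    loss = contrastive_loss(x_emb, y_emb)

Regularized latent-variable models take the non-contrastive route the last bullet alludes to: capping the volume of the set the latent variable ranges over (for example, with a KL penalty in a VAE) directly caps the volume of outputs that can take low energy.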


Topics

  1. Machine_Learning (0.49)
  2. NLP (0.3)
  3. Backend (0.24)

Similar Articles

What is semi-supervised machine learning?

By TechTalks - 2021-01-04

Semi-supervised learning helps you solve classification problems when you don't have enough labeled data to train your machine learning model.

Introduction to Active Learning

By KDnuggets - 2020-12-15

An extensive overview of Active Learning, with an explanation of how it works and how it can assist with data labeling, as well as its performance and potential limitations.

CLIP: Connecting Text and Images

By OpenAI - 2021-01-05

We’re introducing a neural network called CLIP which efficiently learns visual concepts from natural language supervision.