Fit More and Train Faster With ZeRO via DeepSpeed and FairScale

By huggingface - 2021-01-20

Description

We’re on a journey to solve and democratize artificial intelligence through natural language.

Summary

  • A guest blog post by Hugging Face fellow Stas Bekman. As recent Machine Learning models have been growing much faster than the amount of GPU memory added to newly released cards, many users are unable to train, or even just load, some of those huge models onto their hardware.
  • Following the 80:20 rule, I have spent only a few hours on these benchmarks and haven't tried to squeeze out every MB and second by refining the command-line arguments and configuration, since it's fairly obvious from the simple table what you'd want to try next.
  • The Magic Behind ZeRO: since transformers only integrated these fabulous solutions rather than inventing them, I will share the resources where you can discover all the details for yourself.
  • You can, of course, modify your own trainer to integrate DeepSpeed and FairScale based on each project's instructions, or you can "cheat" and see how we did it in the transformers Trainer (a minimal sketch follows this list).
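For illustration, here is a minimal sketch of wiring DeepSpeed's ZeRO into the transformers Trainer. It assumes a transformers version with the DeepSpeed integration and the deepspeed package installed; the model name, toy dataset, and the ZeRO stage-2 config values are illustrative placeholders, not the exact setup benchmarked in the post.

```python
# A minimal sketch, assuming transformers with the DeepSpeed integration
# and the `deepspeed` package are installed. Model name, toy dataset, and
# config values are placeholders. Run under the DeepSpeed launcher, e.g.:
#   deepspeed this_script.py
import torch
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Illustrative ZeRO stage-2 config: shards optimizer state and gradients
# across GPUs, which is where most of the memory savings come from.
ds_config = {
    "zero_optimization": {
        "stage": 2,
        "overlap_comm": True,
        "reduce_scatter": True,
    },
    "fp16": {"enabled": True},
    "train_micro_batch_size_per_gpu": 4,  # must agree with the Trainer args
}

model_name = "bert-base-uncased"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)


class ToyDataset(torch.utils.data.Dataset):
    """A tiny stand-in dataset so the sketch is self-contained."""

    def __init__(self):
        self.encodings = tokenizer(
            ["a short example", "another short example"],
            truncation=True,
            padding="max_length",
            max_length=16,
        )
        self.labels = [0, 1]

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item


args = TrainingArguments(
    output_dir="output",
    per_device_train_batch_size=4,
    fp16=True,
    deepspeed=ds_config,  # hand the ZeRO config straight to the Trainer
)

trainer = Trainer(model=model, args=args, train_dataset=ToyDataset())
trainer.train()
```

For the FairScale path, the post's examples instead enable sharded training through the Trainer's --sharded_ddp option rather than a config file; the memory savings come from the same ZeRO-style sharding idea.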


Topics

  1. NLP (0.19)
  2. Security (0.18)
  3. Management (0.11)

Similar Articles

timoschick/pet

By GitHub - 2020-10-01

This repository contains the code for "Exploiting Cloze Questions for Few-Shot Text Classification and Natural Language Inference".

K-fold Cross Validation with PyTorch

By MachineCurve - 2021-02-02

Explanations and code examples showing you how to use K-fold Cross Validation for Machine Learning model evaluation/testing with PyTorch.