Description
A new model surpassed human baseline performance on the challenging natural language understanding benchmark.
Summary
- SuperGLUE met its match this week when, for the first time, a new model surpassed human baseline performance on the challenging natural language understanding (NLU) benchmark.
- In the paper DeBERTa: Decoding-enhanced BERT with Disentangled Attention, the authors report that on the NLU benchmark SuperGLUE, a DeBERTa model scaled up to 1.5 billion parameters outperformed Google’s 11-billion-parameter T5 language model by 0.6 percent and was the first model to surpass the human baseline.
- The team will update their GitHub code repository soon with the latest DeBERTa code and models.