Introducing ABENA: BERT Natural Language Processing for Twi

By Medium - 2020-10-23

Description

Transformer Language Modeling for Akuapem and Asante Twi

Summary

  • Introduction: In our previous blog post we introduced a preliminary Twi embedding model based on fastText and visualized it using the TensorFlow Embedding Projector.
  • BERT is typically trained with a “fill-in-the-blanks” (masked language modeling) objective, which is practical to implement and does not require labeled data: randomly mask some words and train the model to predict them.
  • ABENA Twi BERT Models: The first step is to initialize a BERT architecture and tokenizer from the multilingual BERT (mBERT) checkpoint (see the sketch after this list).
  • Description of all the models we trained and shared in this work.
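
For readers curious what the two steps summarized above typically look like in code, here is a minimal sketch using the Hugging Face transformers library: load the mBERT checkpoint, then fine-tune it with the masked-language-modeling ("fill-in-the-blanks") objective on monolingual Twi text. The corpus file name, output directory, and hyperparameters below are illustrative assumptions, not the authors' actual settings.

```python
from transformers import (
    BertTokenizerFast,
    BertForMaskedLM,
    DataCollatorForLanguageModeling,
    LineByLineTextDataset,
    Trainer,
    TrainingArguments,
)

# Step 1: initialize the model and tokenizer from the multilingual BERT checkpoint.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-cased")
model = BertForMaskedLM.from_pretrained("bert-base-multilingual-cased")

# "twi_corpus.txt" is a hypothetical file with one Twi sentence per line.
dataset = LineByLineTextDataset(
    tokenizer=tokenizer, file_path="twi_corpus.txt", block_size=128
)

# Step 2: the collator randomly masks 15% of tokens; the model learns to
# predict them, so no labeled data is required.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="abena-twi-bert", num_train_epochs=1),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()
```

After fine-tuning, the model can be sanity-checked with the library's fill-mask pipeline to see whether it proposes plausible Twi words for masked positions.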


Topics

  1. NLP (0.27)
  2. Backend (0.12)
  3. Machine_Learning (0.1)

Similar Articles

FastFormers: 233x Faster Transformers inference on CPU

By Medium - 2020-11-04

Since the birth of BERT, it and the Transformers that followed have dominated NLP in nearly every language-related task, whether it is Question-Answering, Sentiment Analysis, Text classification or Text…