The Language Interpretability Tool (LIT): Interactive Exploration and Analysis of NLP Models

By Google AI Blog - 2020-11-20

Description

Posted by James Wexler, Software Developer, and Ian Tenney, Software Engineer, Google Research. As natural language processing (NLP) models...

Summary

  • Posted by James Wexler, Software Developer, and Ian Tenney, Software Engineer, Google Research. As natural language processing (NLP) models become more powerful and are deployed in more real-world contexts, understanding their behavior is becoming increasingly critical.
  • With these challenges in mind, we built and open-sourced the Language Interpretability Tool (LIT), an interactive platform for NLP model understanding.
  • In this example, a user can explore a BERT-based binary classifier that predicts if a sentence has positive or negative sentiment.
  • It can also be used as an easy and fast way to create an interactive demo for any NLP model.
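The bullets above describe wrapping an NLP classifier so an interpretability tool can query it interactively. As a minimal, self-contained sketch of that idea, the toy wrapper below exposes a `predict()` method that maps input examples to class probabilities, roughly the shape a tool like LIT expects from a wrapped model. The lexicon, class names, and `ToySentimentModel` wrapper are illustrative stand-ins, not the real LIT API or a real BERT classifier.

```python
# Illustrative stand-in for a wrapped sentiment model: a tiny
# lexicon-based binary classifier with a predict() method that
# returns per-example class probabilities.

POSITIVE = {"good", "great", "excellent", "love", "wonderful"}
NEGATIVE = {"bad", "terrible", "awful", "hate", "boring"}

class ToySentimentModel:
    """Binary sentiment classifier over a tiny word lexicon."""

    LABELS = ("negative", "positive")

    def predict(self, inputs):
        """Map a list of {'sentence': str} dicts to class probabilities."""
        outputs = []
        for ex in inputs:
            tokens = ex["sentence"].lower().split()
            pos = sum(t in POSITIVE for t in tokens)
            neg = sum(t in NEGATIVE for t in tokens)
            total = pos + neg
            # With no lexicon hits, fall back to an uninformative 50/50.
            p_pos = 0.5 if total == 0 else pos / total
            outputs.append({"probas": [1.0 - p_pos, p_pos]})
        return outputs

model = ToySentimentModel()
preds = model.predict([
    {"sentence": "a great and wonderful film"},
    {"sentence": "a boring terrible mess"},
])
```

An interpretability frontend would call `predict()` on user-edited sentences and render the returned probabilities, which is what makes the "edit an input, see the prediction change" workflow possible.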


Topics

  1. NLP (0.27)
  2. Management (0.07)
  3. UX (0.07)

Similar Articles

The Model’s Shipped; What Could Possibly go Wrong

By Medium - 2021-02-18

In our last post we took a broad look at model observability and the role it serves in the machine learning workflow. In particular, we discussed the promise of model observability & model monitoring…