Description
Posted by James Wexler, Software Developer and Ian Tenney, Software Engineer, Google Research
Summary
- As natural language processing (NLP) models become more powerful and are deployed in more real-world contexts, understanding their behavior is becoming increasingly critical.
- With these challenges in mind, we built and open-sourced the Language Interpretability Tool (LIT), an interactive platform for NLP model understanding.
- In the demo, for example, a user can explore a BERT-based binary classifier that predicts whether a sentence has positive or negative sentiment.
- LIT can also be used as an easy and fast way to create an interactive demo for any NLP model (see the sketch after this list).
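As a concrete illustration of that last point, below is a minimal sketch of how one might wrap a sentiment classifier and a small dataset for LIT, following the demo pattern from the lit-nlp documentation. The `load_my_bert_classifier` function and its `predict_proba` method are placeholder stubs standing in for real model code, and exact API names (e.g., `predict_minibatch`) may differ between LIT versions; treat this as a sketch rather than a definitive implementation.

```python
"""Minimal LIT demo sketch. Placeholder model hooks are marked below."""
from absl import app

from lit_nlp import dev_server
from lit_nlp import server_flags
from lit_nlp.api import dataset as lit_dataset
from lit_nlp.api import model as lit_model
from lit_nlp.api import types as lit_types

LABELS = ['0', '1']  # negative, positive


def load_my_bert_classifier(model_path):
  """Hypothetical stand-in so the script runs; swap in real BERT loading."""
  class _Stub:
    def predict_proba(self, sentences):
      # Placeholder: a uniform distribution for every sentence.
      return [[0.5, 0.5] for _ in sentences]
  return _Stub()


class SSTData(lit_dataset.Dataset):
  """A toy stand-in for SST-2; a real demo would load the full dev set."""

  def __init__(self):
    self._examples = [
        {'sentence': 'a gripping, stylish thriller.', 'label': '1'},
        {'sentence': 'tedious and shapeless.', 'label': '0'},
    ]

  def spec(self):
    return {
        'sentence': lit_types.TextSegment(),
        'label': lit_types.CategoryLabel(vocab=LABELS),
    }


class SentimentModel(lit_model.Model):
  """Wraps a binary sentiment classifier so LIT can call it."""

  def __init__(self, model_path):
    self._model = load_my_bert_classifier(model_path)

  def input_spec(self):
    return {'sentence': lit_types.TextSegment()}

  def output_spec(self):
    return {'probas': lit_types.MulticlassPreds(vocab=LABELS, parent='label')}

  def predict_minibatch(self, inputs):
    # `inputs` is a list of dicts matching input_spec().
    sentences = [ex['sentence'] for ex in inputs]
    for probs in self._model.predict_proba(sentences):
      yield {'probas': probs}


def main(_):
  models = {'sst_bert': SentimentModel('/path/to/your/model')}
  datasets = {'sst_dev': SSTData()}
  lit_demo = dev_server.Server(models, datasets, **server_flags.get_flags())
  lit_demo.serve()


if __name__ == '__main__':
  app.run(main)
```

Running this script starts a local web server (on port 5432 in LIT's default configuration); opening that address in a browser loads the LIT UI with the model and dataset available for interactive exploration.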