Description
AI researchers at DarwinAI and the University of Waterloo have established a set of metrics to measure trust in deep learning models.
Summary
- Whether the task is diagnosing patients or driving cars, we want to know if we can trust a person before assigning them a sensitive task.
- Recent work by scientists at the University of Waterloo and DarwinAI, a Toronto-based AI company, provides new metrics to measure the trustworthiness of deep learning systems in an intuitive and interpretable way.
- The metric rewards correct answers in proportion to the model's confidence, but it also rewards wrong answers by the inverse of the confidence score (i.e., 100% − confidence), so a hesitant mistake costs less trust than an overconfident one. A rough sketch of this scoring rule follows the list.
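To make the scoring rule concrete, here is a minimal Python sketch of a per-prediction trust score of this form. The function name, the `alpha`/`beta` exponent parameters, and the exact formula are illustrative assumptions based on the summary above, not the authors' published implementation.

```python
def question_answer_trust(confidence, is_correct, alpha=1.0, beta=1.0):
    """Per-prediction trust score, sketched from the summary above.

    confidence  : softmax confidence the model placed in its answer (0..1)
    is_correct  : whether the answer matched the ground truth
    alpha, beta : hypothetical reward/penalty exponents (not from the source)
    """
    if is_correct:
        # Correct answers are rewarded in proportion to the confidence score.
        return confidence ** alpha
    # Wrong answers are rewarded by the inverse of the confidence score
    # (1 - confidence): an overconfident mistake scores near zero, while a
    # hesitant mistake retains some trust.
    return (1.0 - confidence) ** beta


# A confident wrong answer erodes trust far more than a hesitant one.
print(question_answer_trust(0.95, is_correct=False))  # ~0.05
print(question_answer_trust(0.55, is_correct=False))  # ~0.45
print(question_answer_trust(0.95, is_correct=True))   # ~0.95
```

Averaging such per-prediction scores over a test set would then give an overall, interpretable picture of how often the model is confidently right versus confidently wrong.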