Metric Matters, Part 1: Evaluating Classification Models

By KDnuggets - 2021-03-16

Description

You have many options when choosing metrics for evaluating your machine learning models. Select the right one for your situation with this guide that considers metrics for classification models.

Summary

  • Imagine taking a 100-question multiple-choice test and giving the right answer to 85 questions; that is an accuracy of 85%.
  • When we discuss “balanced” datasets in the context of classification, we mean that your outcome variable is pretty evenly distributed between/among the potential options, not heavily skewed or “imbalanced” such that one or some outcomes dominate.
  • For a binary classification problem, precision is the proportion of times the model predicted outcome A correctly out of the total predictions of outcome A (whether correct or incorrect); see the sketch after this list.
  • However, that doesn’t mean that the F1 score is always the perfect metric for all scenarios.
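
The bullets above touch on accuracy, class balance, precision, and the F1 score. As a minimal sketch of how those metrics are usually computed (the labels and the scikit-learn calls below are illustrative assumptions, not code from the article):

    # Minimal sketch: accuracy, precision, recall, and F1 for a binary classifier.
    # Assumes scikit-learn is installed; the labels are made up for illustration.
    from collections import Counter

    from sklearn.metrics import (
        accuracy_score,
        precision_score,
        recall_score,
        f1_score,
    )

    # Hypothetical ground-truth labels and model predictions (1 = outcome A, 0 = not A)
    y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

    # Check how evenly the outcome variable is distributed before trusting accuracy alone
    print("class counts:", Counter(y_true))

    # Accuracy: fraction of all predictions that were correct (the "85 out of 100" idea)
    print("accuracy: ", accuracy_score(y_true, y_pred))

    # Precision: of everything predicted as outcome A, the fraction that actually was A
    print("precision:", precision_score(y_true, y_pred))

    # Recall: of everything that actually was outcome A, the fraction the model found
    print("recall:   ", recall_score(y_true, y_pred))

    # F1: harmonic mean of precision and recall
    print("f1:       ", f1_score(y_true, y_pred))

On a heavily imbalanced outcome variable, accuracy can look high even for a model that always predicts the majority class, which is why precision, recall, and F1 are typically reported alongside it.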


Topics

  1. NLP (0.15)
  2. Machine_Learning (0.13)
  3. Backend (0.12)

Similar Articles

How to Use AutoKeras for Classification and Regression

By Machine Learning Mastery - 2020-09-01

AutoML refers to techniques for automatically discovering the best-performing model for a given dataset. When applied to neural networks, this involves both discovering the model architecture and the ...

The Model’s Shipped; What Could Possibly go Wrong

By Medium - 2021-02-18

In our last post we took a broad look at model observability and the role it serves in the machine learning workflow. In particular, we discussed the promise of model observability & model monitoring…