Machine Learning Performance Evaluation: Precision, Recall, Error Rate & More (Lecture Notes)

An introduction to performance evaluation in machine learning, focusing on precision, recall, error rate, and other criteria for assessing the practical utility of classifiers. It covers the basic counts, correct and incorrect classification, error rate vs. rejection rate, and the interplay between precision and recall. The notes also discuss the ROC curve, the F-measure, sensitivity, specificity, and performance in multi-label domains.


Experiments and Evaluation

* Know your data (How much data?)
* Running & evaluating experiments
* Statistical characterization
* Overall goal
* Sample error vs. true error
* Confidence in results

Introduction to Machine Learning

CH11: PERFORMANCE EVALUATION

Correct/Incorrect Classification

Let c be the number of correctly classified examples, e the number of misclassified examples, and n = c + e the total number of examples.

Error Rate and Classification Accuracy

Error rate: E = e / n. Classification accuracy: Acc = c / n. Note that Acc = 1 − E, since c + e = n.
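A minimal Python sketch of the two quantities, using the counts c, e, and n defined above; the function names are illustrative, not from the slides.

```python
def error_rate(e: int, n: int) -> float:
    """Error rate E = e / n: the fraction of misclassified examples."""
    return e / n

def accuracy(c: int, n: int) -> float:
    """Classification accuracy Acc = c / n: the fraction classified correctly."""
    return c / n

# Since c + e = n, accuracy is the complement of the error rate.
c, e = 940, 60
n = c + e
assert abs(accuracy(c, n) - (1 - error_rate(e, n))) < 1e-12
```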

Precision and Recall: Motivation

Consider a heavily imbalanced set of examples:
◦ E.g., 970 examples are pos and 30 are neg
◦ Consider a classifier that labels all examples as pos
◦ The error rate is only 3%, but the classifier is useless

Such domains are quite common. Therefore, we need criteria capable of quantifying the classifier's practical utility.

Precision and Recall

A classifier has been applied to a set of examples.

Precision: percentage of truly pos examples among those labeled as such by the classifier, Pr = TP / (TP + FP).

Recall: percentage of pos examples labeled as such by the classifier among all truly positive examples, Re = TP / (TP + FN).

Here TP, FP, and FN denote the numbers of true positives, false positives, and false negatives.
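A small Python sketch of both measures, applied to the all-positive classifier from the motivation slide (970 pos, 30 neg); the TP/FP/FN naming follows the definitions above.

```python
def precision(tp: int, fp: int) -> float:
    """Fraction of the examples labeled pos that are truly pos."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Fraction of the truly pos examples that were labeled pos."""
    return tp / (tp + fn)

# The all-positive classifier: every pos example is caught (recall 1.0),
# but the 30 neg examples pollute the pos labels (precision 0.97).
print(precision(tp=970, fp=30))  # 0.97
print(recall(tp=970, fn=0))      # 1.0
```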

Which Criterion Matters?

This depends on the concrete application:
◦ Recommender systems:
  ◦ Need high precision, to make sure the customer is rarely disappointed
  ◦ Recall is unimportant here (no need to identify all relevant movies)
◦ Medical diagnosis:
  ◦ Usually, recall is more important
  ◦ Precision can be improved by follow-up tests

ROC-Curve: The Interplay Between the Two Error Types

Different types of error (false positives vs. false negatives) can often be influenced by the classifier's parameters. The ROC curve plots the true-positive rate against the false-positive rate as such a parameter varies; comparing the ROC curves of two classifiers shows which one offers the better trade-off.
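A sketch of how an ROC curve could be traced for a classifier that outputs confidence scores, assuming the decision threshold is the parameter being varied; the scores, labels, and threshold sweep are illustrative assumptions, not part of the original slides.

```python
def roc_points(scores, labels):
    """Return (FPR, TPR) pairs obtained by sweeping a decision threshold.

    scores: the classifier's confidence that each example is pos
    labels: the true classes, 1 = pos, 0 = neg
    """
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))  # (false-pos rate, true-pos rate)
    return points

# The classifier whose curve lies closer to the top-left corner wins.
print(roc_points([0.9, 0.8, 0.55, 0.4], [1, 1, 0, 1]))
```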

The F-Criterion: Example

The F-measure combines precision and recall into a single number, their harmonic mean: F = 2 · Pr · Re / (Pr + Re).
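The slide's worked example did not survive extraction; below is a sketch of the criterion itself, with the general F_beta weighting included as an assumption about which variant the lecture uses (beta = 1 gives the plain F-measure above).

```python
def f_measure(pr: float, re: float, beta: float = 1.0) -> float:
    """F_beta combination of precision and recall.

    beta = 1 is the harmonic mean (F1); beta > 1 weights recall
    more heavily, beta < 1 favors precision.
    """
    b2 = beta ** 2
    return (1 + b2) * pr * re / (b2 * pr + re)

# The all-positive classifier from the motivation slide still scores high:
print(f_measure(pr=0.97, re=1.0))  # ~0.985 -- F1 ignores the neg class
```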

Some Other Criteria

Sensitivity (recall measured on positive examples): Se = TP / (TP + FN).

Specificity (recall measured on negative examples): Sp = TN / (TN + FP).

Gmean

The geometric mean of sensitivity and specificity: gmean = √(Se · Sp). It is high only when the classifier does well on both classes.
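A sketch of all three criteria, again on the all-positive classifier; unlike the error rate, gmean exposes its uselessness. The function names are illustrative.

```python
import math

def sensitivity(tp: int, fn: int) -> float:
    """Recall measured on the pos examples: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Recall measured on the neg examples: TN / (TN + FP)."""
    return tn / (tn + fp)

def gmean(se: float, sp: float) -> float:
    """Geometric mean of sensitivity and specificity."""
    return math.sqrt(se * sp)

# The all-positive classifier: perfect on pos, hopeless on neg.
print(gmean(sensitivity(tp=970, fn=0), specificity(tn=0, fp=30)))  # 0.0
```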

Performance in Multi-Label Domains: Micro-Averaging

Each class is weighted according to its frequency among the examples: the per-class counts (TP, FP, FN) are pooled before precision and recall are computed.
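A minimal sketch of micro-averaged precision under the assumption that per-class (TP, FP) counts are available; pooling the counts before dividing is what weights each class by its frequency.

```python
def micro_precision(per_class):
    """Micro-averaged precision: pool TP and FP over all classes first.

    per_class: list of (tp, fp) pairs, one pair per class.
    """
    tp = sum(t for t, _ in per_class)
    fp = sum(f for _, f in per_class)
    return tp / (tp + fp)

# Three classes of very different sizes; the frequent class dominates.
print(micro_precision([(900, 50), (30, 10), (5, 5)]))  # 935 / 1000 = 0.935
```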

Learning Curve

A learning curve plots the classifier's performance (e.g., error rate) against the number of training examples.

Methodology of Performance Evaluation

Unless the set of pre-classified examples is really big, the results can be unreliable. Therefore, specific methodologies of repeated trials are used (a sketch of N-fold cross-validation follows below):
◦ Random subsampling
◦ N-fold cross-validation
◦ Stratified versions of these approaches
◦ cross-validation
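A sketch of N-fold cross-validation under stated assumptions: induce(train) returns a classifier and evaluate(classifier, test) returns its error rate; both are hypothetical placeholders for whatever learner the course pairs this with.

```python
import random

def n_fold_cross_validation(examples, n, induce, evaluate):
    """Each of the n folds serves once as the testing set while the
    remaining n - 1 folds form the training set; errors are averaged."""
    shuffled = random.sample(examples, len(examples))
    folds = [shuffled[i::n] for i in range(n)]
    errors = []
    for i in range(n):
        test = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        errors.append(evaluate(induce(train), test))
    return sum(errors) / n  # average error rate over the n trials
```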

Random Subsampling

Let T be the set of pre-classified examples

  1. Divide T randomly into the training and testing subsets
  2. Induce the classifier on the training set
  3. Evaluate the classifier on the testing set
  4. Repeat N times
  5. Calculate the average error rate
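A direct transcription of steps 1-5 into Python; as in the cross-validation sketch, induce and evaluate are hypothetical placeholders, and the train_fraction parameter is an assumption (the slides do not specify the split ratio).

```python
import random

def random_subsampling(T, N, induce, evaluate, train_fraction=0.7):
    """Average error rate over N random train/test splits of T."""
    error_rates = []
    for _ in range(N):                                   # step 4: repeat N times
        shuffled = random.sample(T, len(T))
        cut = int(train_fraction * len(T))
        train, test = shuffled[:cut], shuffled[cut:]     # step 1: random split
        classifier = induce(train)                       # step 2: induce
        error_rates.append(evaluate(classifier, test))   # step 3: evaluate
    return sum(error_rates) / N                          # step 5: average
```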