The Effect of Annotator Error on Classifier Evaluation
from LingPipe Blog by lingpipe
Anyone who’s looked at a corpus realizes that the “gold standards” are hardly 24 karat; they are full of impurities in the form of mislabeled items. Just how many, no one really knows. That’d require a true gold standard to evaluate! We’ve been wondering how we can conclude we have a 99.9% recall entity annotator when the gold standard data’s highly unlikely to be 100% accurate itself.
Check out this 2003 paper from the OpenMind Initiative's publications list:
Lam, Chuck P. and David G. Stork. 2003. Evaluating classifiers by means of test data with noisy labels. In Proceedings of IJCAI.
In particular, check out their table 1, which reports observed error rates for a range of true classifier error rates and corpus mislabeling rates. Here's a small slice:
| True Classifier Error | 1% Corpus Mislabeling | 3% Corpus Mislabeling | 5% Corpus Mislabeling |
|---|---|---|---|
| 2% | 3.0% | 4.9% | 6.8% |
| 6% | 6.9% | 8.6% | 10.4% |
| 10% | 10.8% | 12.4% | 14.0% |

Observed Classifier Error vs. Mislabeled Corpus
For simplicity, Lam and Stork assumed that the classifier being evaluated and the data annotators make independent errors. This is not particularly realistic, as problems that are hard for humans tend to be hard for classification algorithms, too. The authors themselves point out that the same annotation errors commonly appear in both training and test data, making it very likely the errors will be correlated.
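Under the independence assumption the arithmetic behind table 1 is simple: with binary labels, the observed label and the classifier disagree exactly when one of the two is wrong, so the observed error rate is e(1 - m) + (1 - e)m for true classifier error e and corpus mislabeling rate m. Here's a minimal sketch (our reconstruction, not Lam and Stork's code) that reproduces the slice above:

```java
// Observed error under independent errors: the corpus label and the
// classifier disagree exactly when one of the two is wrong.
public class ObservedError {

    // e = true classifier error, m = corpus mislabeling rate
    static double observed(double e, double m) {
        return e * (1.0 - m) + (1.0 - e) * m;
    }

    public static void main(String[] args) {
        double[] trueErrors = { 0.02, 0.06, 0.10 };
        double[] mislabelRates = { 0.01, 0.03, 0.05 };
        for (double e : trueErrors)
            for (double m : mislabelRates)
                System.out.printf("true=%2.0f%%  mislabel=%.0f%%  observed=%4.1f%%%n",
                                  100 * e, 100 * m, 100 * observed(e, m));
    }
}
```

The nine printed values match the table slice: a classifier with a true 2% error rate scored against a 5% mislabeled corpus looks like a 6.8% classifier.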
Lam and Stork’s paper is also the first I know to raise the problem addressed by Sheng et al.’s 2008 KDD paper, Get Another Label?, namely:
… it is no longer obvious how one should spend each additional labeling effort. Should one spend it labeling the unlabeled data, or should one spend it increasing the accuracy of already labeled data? (Lam and Stork 2003)
Lam and Stork also discuss a problem related to one addressed in Snow et al.'s 2008 EMNLP paper, Cheap and Fast - But is it Good?, namely how many noisy annotators are required to estimate the true error rate (their figure 1). The answer, if the annotators are noisy, is "a lot". Snow et al. considered how many really noisy annotators were required to recreate a gold standard approximated by presumably less noisy annotators, which is a rather different estimation problem.
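To put a number on "a lot", here's a back-of-the-envelope sketch. It's not Lam and Stork's estimator; it just assumes annotators are independent and each correct with probability p, then computes, by an exact binomial tail, how many of them a simple majority vote needs before the voted label is right 99% of the time:

```java
// How many independent annotators, each correct with probability p,
// does a simple majority vote need before the voted label is right
// 99% of the time?  Computed with an exact binomial tail.
public class MajorityVote {

    // P(more than half of k annotators are correct); k is odd, so no ties
    static double majorityAccuracy(int k, double p) {
        double acc = 0.0;
        for (int correct = k / 2 + 1; correct <= k; correct++)
            acc += choose(k, correct)
                   * Math.pow(p, correct)
                   * Math.pow(1.0 - p, k - correct);
        return acc;
    }

    // binomial coefficient n-choose-r, computed as a double
    static double choose(int n, int r) {
        double c = 1.0;
        for (int i = 1; i <= r; i++)
            c *= (double) (n - r + i) / i;
        return c;
    }

    public static void main(String[] args) {
        for (double p : new double[] { 0.6, 0.7, 0.8, 0.9 }) {
            int k = 1;
            while (majorityAccuracy(k, p) < 0.99)
                k += 2; // keep k odd to avoid ties
            System.out.printf("p=%.1f needs k=%d annotators%n", p, k);
        }
    }
}
```

The required number blows up as p drops toward chance, roughly like 1/(p - 0.5) squared, which is the quantitative sense in which noisy annotators mean "a lot".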
Of course, this is all very Platonic in assuming that the truth is out there. Here at Alias-i, we are willing to accept that there is no truth, or at least that some cases are so borderline as to not be encompassed by a coding standard, and that any attempt to extend the standard will still leave a fuzzy boundary.
The question we have now is whether our models of annotation will distinguish the borderline cases from hard cases. With hard cases, enough good annotators should converge to a single truth. With borderline cases, there should be no convergence.
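As a toy illustration of that distinction (hypothetical numbers, not a sketch of our actual annotation models): suppose a hard case has a true label that each annotator reports correctly with probability 0.7, while a borderline case comes up either way with probability 0.5. The hard case's vote share settles decisively away from 50%; the borderline case's never does:

```java
import java.util.Random;

// Toy contrast between a hard case and a borderline case.  The hard
// case has a true label each annotator reports correctly with
// probability 0.7; the borderline case is a fair coin flip.
public class Convergence {
    public static void main(String[] args) {
        Random rng = new Random(42);
        int hard = 0, borderline = 0;
        for (int n = 1; n <= 10000; n++) {
            if (rng.nextDouble() < 0.7) hard++;       // annotator agrees with truth
            if (rng.nextDouble() < 0.5) borderline++; // annotator picks label A
            if (n == 10 || n == 100 || n == 1000 || n == 10000)
                System.out.printf("n=%5d  hard vote share=%.3f  borderline=%.3f%n",
                                  n, (double) hard / n, (double) borderline / n);
        }
    }
}
```

With enough annotators the hard item's majority label stops flipping, while the borderline item's majority keeps flipping indefinitely; that persistent instability is the signal a model of annotation would need to pick up.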