Frederick Reiss, Hong Xu*, Bryan Cutler*, Karthik Muthuraman*, and Zachary Eichenberger*.
Identifying incorrect labels in the CoNLL-2003 corpus.
In Proceedings of the 24th Conference on Computational Natural Language Learning (CoNLL), pages 215–226, 2020.
The CoNLL-2003 corpus for English-language named entity recognition (NER) is one of the most influential corpora for NER research. A large number of publications, including many landmark works, have used this corpus as a source of ground truth for NER tasks. In this paper, we examine the corpus and identify over 1,300 incorrect labels (out of 35,089 in the corpus). In particular, the number of incorrect labels in the test fold is comparable to the number of errors that state-of-the-art models make when running inference over this corpus. We describe the process by which we identified these incorrect labels, using novel variants of techniques from semi-supervised learning. We also summarize the types of errors we found, and we revisit several recent NER results in light of the corrected data. Finally, we show experimentally that our corrections to the corpus have a positive impact on three state-of-the-art models.
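The abstract mentions using variants of semi-supervised learning techniques to surface suspect labels. The sketch below is not the paper's actual method, only an illustration of the general idea behind such approaches: flag tokens where an ensemble of trained models agrees with itself but disagrees with the gold label. The function name, its parameters, and the toy tag strings are all hypothetical.

```python
from collections import Counter

def flag_suspicious_labels(gold_labels, model_predictions, min_agreement=1.0):
    """Illustrative sketch (not the paper's exact procedure): flag token
    positions where the ensemble's majority prediction differs from the
    gold label and at least `min_agreement` of the models back it.

    gold_labels: list of gold tags, one per token, e.g. "B-PER".
    model_predictions: list of per-model tag lists, same length as gold_labels.
    Returns a list of (index, gold_tag, predicted_tag) tuples.
    """
    flagged = []
    num_models = len(model_predictions)
    for i, gold in enumerate(gold_labels):
        # Collect each model's prediction for this token.
        preds = [p[i] for p in model_predictions]
        top_tag, count = Counter(preds).most_common(1)[0]
        # A confident, unanimous-enough disagreement suggests a label error.
        if top_tag != gold and count / num_models >= min_agreement:
            flagged.append((i, gold, top_tag))
    return flagged

# Toy example: three models all tag token 1 as B-ORG, but gold says B-PER.
gold = ["O", "B-PER", "O"]
ensemble = [["O", "B-ORG", "O"],
            ["O", "B-ORG", "O"],
            ["O", "B-ORG", "O"]]
print(flag_suspicious_labels(gold, ensemble))  # → [(1, 'B-PER', 'B-ORG')]
```

Positions flagged this way would still need human review, since models can also be unanimously wrong; the technique only prioritizes candidates for inspection.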