Evaluating and automating the annotation of a learner corpus

Alexandr Rosen, Jirka Hana, Barbora Štindlová, Anna Feldman

Research output: Contribution to journal › Article › peer review

21 Scopus citations


The paper describes a corpus of texts produced by non-native speakers of Czech. We discuss its annotation scheme, which consists of three interlinked tiers designed to handle the wide range of error types present in the input. Each tier corrects different types of errors; links between the tiers make it possible to capture errors in word order and in complex discontinuous expressions. Errors are not only corrected but also classified. The annotation scheme is tested on a data set of approximately 175,000 words, with fair inter-annotator agreement results. We also explore the possibility of applying automated linguistic annotation tools (taggers, spell checkers and grammar checkers) to the learner text to support or even substitute for manual annotation.

Original language: English
Pages (from-to): 65-92
Number of pages: 28
Journal: Language Resources and Evaluation
Issue number: 1
State: Published - Mar 2014


Keywords
  • Czech
  • Error annotation
  • Learner corpus
  • Second language acquisition


