Measuring argumentative reasoning: What's behind the numbers?

Alina Reznitskaya, Li-Jen Kuo, Monica Glina, Richard C. Anderson

Research output: Contribution to journal › Article › peer-review

49 Scopus citations

Abstract

The aim of this paper is to develop a more thorough, empirically based understanding of how the measurement of written argumentation differs when alternative scoring frameworks are employed. Reflective compositions of 127 elementary school children were analyzed using analytic and holistic scales. The scales were derived from Argument Schema Theory, an explicit model of argumentation development. We investigated the relationships among the different scales, as well as their relative reliability and efficiency. The scores derived using analytic and holistic methods had adequate reliability. Although less efficient, analytic scoring yields more sensitive and detailed information about differences in student performance. The results suggest that the choice of a scoring framework for measuring argumentation should not be arbitrary, as each scoring method taps into distinct facets of the construct.

Original language: English
Pages (from-to): 219-224
Number of pages: 6
Journal: Learning and Individual Differences
Volume: 19
Issue number: 2
DOIs
State: Published - Jun 2009

Keywords

  • Alternative assessment
  • Argumentation
  • Reliability
  • Scoring rubrics
  • Validity
