Las consecuencias jurídicas de imponer decisiones de consecuencias severas basadas en Información de baja calidad

Evaluación docente en la era de 'Carrera hacia la cima'

Translated title of the contribution: The legal consequences of mandating high stakes decisions based on low quality information: Teacher evaluation in the race-to-the-top era

Bruce D. Baker, Joseph Oluwole, Preston C. Green

Research output: Contribution to journal › Article › Research › peer-review

36 Citations (Scopus)

Abstract

In this article, we explain how overly prescriptive, rigid state statutory and regulatory policy frameworks regarding teacher evaluation, tenure and employment decisions outstrip the statistical reliability and validity of proposed measures of teaching effectiveness. We begin with a discussion of the emergence of highly prescriptive state legislation regarding the use of student testing data within teacher evaluation systems, specifically for purposes of making employment decisions. Next, we explain the most problematic features of those policies, which include a) requirements that test-based measures constitute a fixed, non-negotiable weight in final decisions, b) that test-based measures are used to place teachers into categories of effectiveness by applying numerical cutoffs beyond the precision or accuracy of the available data, and c) that professional judgment is removed from personnel decisions by legislating (or regulating) that specific actions be taken when teachers fall into certain performance categories. In the subsequent section, we point out that different types of measures are being developed and implemented across states, and we explain that while value-added metrics in particular are, in fact, designed to estimate a teacher's effect on student outcomes, descriptive growth percentile measures are not designed for making such inferences and thus have no place in making determinations regarding teacher effectiveness. We also explain that, due to the properties of value-added estimates, they have no place in making high-stakes decisions based on rigid policy frameworks like those described herein. We evaluate the legal implications of rigid reliance on measures of teaching effectiveness that a) lack reliability and b) may be entirely invalid.

Original language: Spanish
Journal: Education Policy Analysis Archives
Volume: 21
State: Published - 11 Feb 2013

Keywords

  • High stakes
  • Race to the top
  • Value added models (VAM)

Cite this

@article{92074d30cb914e7faee13759477dbff9,
title = "Las consecuencias jur{\'i}dicas de imponer decisiones de consecuencias severas basadas en Informaci{\'o}n de baja calidad: Evaluaci{\'o}n docente en la era de 'Carrera hacia la cima'",
abstract = "In this article, we explain how overly prescriptive, rigid state statutory and regulatory policy frameworks regarding teacher evaluation, tenure and employment decisions outstrip the statistical reliability and validity of proposed measures of teaching effectiveness. We begin with a discussion of the emergence of highly prescriptive state legislation regarding the use of student testing data within teacher evaluation systems, specifically for purposes of making employment decisions. Next, we explain the most problematic features of those policies, which include a) requirements that test-based measures constitute a fixed, non-negotiable weight in final decisions, b) that test-based measures are used to place teachers into categories of effectiveness by applying numerical cutoffs beyond the precision or accuracy of the available data, and c) that professional judgment is removed from personnel decisions by legislating (or regulating) that specific actions be taken when teachers fall into certain performance categories. In the subsequent section, we point out that different types of measures are being developed and implemented across states, and we explain that while value-added metrics in particular are, in fact, designed to estimate a teacher's effect on student outcomes, descriptive growth percentile measures are not designed for making such inferences and thus have no place in making determinations regarding teacher effectiveness. We also explain that, due to the properties of value-added estimates, they have no place in making high-stakes decisions based on rigid policy frameworks like those described herein. We evaluate the legal implications of rigid reliance on measures of teaching effectiveness that a) lack reliability and b) may be entirely invalid.",
keywords = "High stakes, Race to the top, Value added models (VAM)",
author = "Baker, {Bruce D.} and Joseph Oluwole and Green, {Preston C.}",
year = "2013",
month = "2",
day = "11",
language = "Spanish",
volume = "21",
journal = "Education Policy Analysis Archives",
issn = "1068-2341",
publisher = "Arizona State University",
}

TY - JOUR

T1 - Las consecuencias jurídicas de imponer decisiones de consecuencias severas basadas en Información de baja calidad

T2 - Evaluación docente en la era de 'Carrera hacia la cima'

AU - Baker, Bruce D.

AU - Oluwole, Joseph

AU - Green, Preston C.

PY - 2013/2/11

Y1 - 2013/2/11

N2 - In this article, we explain how overly prescriptive, rigid state statutory and regulatory policy frameworks regarding teacher evaluation, tenure and employment decisions outstrip the statistical reliability and validity of proposed measures of teaching effectiveness. We begin with a discussion of the emergence of highly prescriptive state legislation regarding the use of student testing data within teacher evaluation systems, specifically for purposes of making employment decisions. Next, we explain the most problematic features of those policies, which include a) requirements that test-based measures constitute a fixed, non-negotiable weight in final decisions, b) that test-based measures are used to place teachers into categories of effectiveness by applying numerical cutoffs beyond the precision or accuracy of the available data, and c) that professional judgment is removed from personnel decisions by legislating (or regulating) that specific actions be taken when teachers fall into certain performance categories. In the subsequent section, we point out that different types of measures are being developed and implemented across states, and we explain that while value-added metrics in particular are, in fact, designed to estimate a teacher's effect on student outcomes, descriptive growth percentile measures are not designed for making such inferences and thus have no place in making determinations regarding teacher effectiveness. We also explain that, due to the properties of value-added estimates, they have no place in making high-stakes decisions based on rigid policy frameworks like those described herein. We evaluate the legal implications of rigid reliance on measures of teaching effectiveness that a) lack reliability and b) may be entirely invalid.

AB - In this article, we explain how overly prescriptive, rigid state statutory and regulatory policy frameworks regarding teacher evaluation, tenure and employment decisions outstrip the statistical reliability and validity of proposed measures of teaching effectiveness. We begin with a discussion of the emergence of highly prescriptive state legislation regarding the use of student testing data within teacher evaluation systems, specifically for purposes of making employment decisions. Next, we explain the most problematic features of those policies, which include a) requirements that test-based measures constitute a fixed, non-negotiable weight in final decisions, b) that test-based measures are used to place teachers into categories of effectiveness by applying numerical cutoffs beyond the precision or accuracy of the available data, and c) that professional judgment is removed from personnel decisions by legislating (or regulating) that specific actions be taken when teachers fall into certain performance categories. In the subsequent section, we point out that different types of measures are being developed and implemented across states, and we explain that while value-added metrics in particular are, in fact, designed to estimate a teacher's effect on student outcomes, descriptive growth percentile measures are not designed for making such inferences and thus have no place in making determinations regarding teacher effectiveness. We also explain that, due to the properties of value-added estimates, they have no place in making high-stakes decisions based on rigid policy frameworks like those described herein. We evaluate the legal implications of rigid reliance on measures of teaching effectiveness that a) lack reliability and b) may be entirely invalid.

KW - High stakes

KW - Race to the top

KW - Value added models (VAM)

UR - http://www.scopus.com/inward/record.url?scp=84873398346&partnerID=8YFLogxK

M3 - Article

VL - 21

JO - Education Policy Analysis Archives

JF - Education Policy Analysis Archives

SN - 1068-2341

ER -