Teacher Quality

  • That's Not Teacher-Like

    Written on September 24, 2012

    I’ve been reading Albert Shanker’s “The Power of Ideas: Al In His Own Words,” the American Educator’s compendium of Al’s speeches and columns, published posthumously in 1997. What an enjoyable, witty and informative collection of essays.

    Two columns especially caught my attention: “That’s Very Unprofessional, Mr. Shanker!” and “Does Pavarotti Need to File an Aria Plan?” – where Al discusses expectations for (and treatment of) teachers. They made me reflect, yet again, on whether perceptions of teacher professionalism might be gendered. In other words, when society thinks of the attributes of a professional teacher, might we unconsciously be thinking of women teachers? And, if so, why might this be important?

    In “That’s Very Unprofessional, Mr. Shanker!” Al writes:

    READ MORE
  • Do Top Teachers Produce "A Year And A Half Of Learning?"

    Written on September 11, 2012

    One claim that gets tossed around a lot in education circles is that “the most effective teachers produce a year and a half of learning per year, while the least effective produce a half of a year of learning.”

    This talking point is used all the time in advocacy materials and news articles. Its implications are pretty clear: Effective teachers can make all the difference, while ineffective teachers can do permanent damage.

    As with most prepackaged talking points circulated in education debates, the “year and a half of learning” argument, when used without qualification, is both somewhat valid and somewhat misleading. So, seeing as it comes up so often, let’s very quickly identify its origins and what it means.
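    The arithmetic behind the talking point can be sketched quickly. Suppose – purely for illustration, as these numbers are assumptions rather than figures from any particular study – that a typical year of learning corresponds to 0.25 standard deviations of student achievement, and that teacher effects have a standard deviation of 0.125 in the same units:

```python
# Illustrative arithmetic behind the "year and a half of learning" claim.
# Both parameters below are assumptions for this sketch, not estimates
# taken from any study cited in the post.

ANNUAL_GAIN_SD = 0.25      # assumed: one year of typical learning, in student-level SDs
TEACHER_EFFECT_SD = 0.125  # assumed: SD of teacher effects, in the same units

def years_of_learning(teacher_z):
    """Convert a teacher's effectiveness (in teacher-level SDs) into
    the 'years of learning' produced in one school year."""
    return (ANNUAL_GAIN_SD + teacher_z * TEACHER_EFFECT_SD) / ANNUAL_GAIN_SD

print(years_of_learning(1.0))   # a teacher one SD above average
print(years_of_learning(-1.0))  # a teacher one SD below average
```

    Under these assumed parameters, a teacher one standard deviation above average produces 1.5 “years” of learning, and one standard deviation below produces 0.5 – exactly the shape of the talking point. The sketch also shows how completely the claim depends on the two effect-size assumptions, which is part of why it can mislead when quoted without qualification.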

    READ MORE
  • A Look At The Changes To D.C.'s Teacher Evaluation System

    Written on August 22, 2012

    D.C. Public Schools (DCPS) recently announced a few significant changes to its teacher evaluation system (called IMPACT), including the alteration of its test-based components, the creation of a new performance category (“developing”), and a few tweaks to the observational component (discussed below). These changes will be effective starting this year.

    As with any new evaluation system, a period of adjustment and revision should be expected and encouraged (though it might be preferable if the first round of changes occurs during a phase-in period, prior to stakes becoming attached). Yet, despite all the attention given to the IMPACT system over the past few years, these new changes have not been discussed much beyond a few quick news articles.

    I think that’s unfortunate: DCPS is an early adopter of the “new breed” of teacher evaluation policies being rolled out across the nation, and any adjustments to IMPACT’s design – presumably based on results and feedback – could provide valuable lessons for states and districts in earlier phases of the process.

    Accordingly, I thought I would take a quick look at three of these changes.

    READ MORE
  • The Irreconcilables

    Written on July 30, 2012

    ** Also posted here on “Valerie Strauss’ Answer Sheet” in the Washington Post

    The New Teacher Project (TNTP) has a new, highly-publicized report about what it calls “irreplaceables,” a catchy term that is supposed to describe those teachers who are “so successful they are nearly impossible to replace.” The report’s primary conclusion is that these “irreplaceable” teachers often leave the profession voluntarily, and TNTP offers several recommendations for retaining them.

    I’m not going to discuss this report fully. It shines a light on teacher retention, which is a good thing. Its primary purpose is to promulgate the conceptual argument that not all teacher turnover is created equal – i.e., that it depends on whether “good” or “bad” teachers are leaving (see here for a strong analysis on this topic). The report’s recommendations are standard fare – improve working conditions, tailor pay to “performance” (see here for a review of evidence on incentives and retention), etc. Many are widely supported, while others are more controversial. All of them merit discussion.

    I just want to make one quick (and, in many respects, semantic) point about the manner in which TNTP identifies high-performing teachers, as I think it illustrates larger issues. In my view, the term “irreplaceable” doesn’t apply, and I think it would have been a better analysis without it.

    READ MORE
  • Teachers: Pressing The Right Buttons

    Written on June 5, 2012

    The majority of social science research does not explicitly dwell on how we go from condition A to condition B. Instead, most social scientists focus on associations between different outcomes. This “static” approach has advantages but also limitations. Looking at associations might reveal that teachers who experience condition A are twice as likely to leave their schools as teachers who experience condition B. But what does this knowledge tell us about how to move from condition A to condition B? In many cases, very little.

    Many social science findings are not easily “actionable” for policy purposes precisely because they say nothing about processes or sequences of events and activities unfolding over time, and in context. While conventional quantitative research provides indications of what works — on average — across large samples, a look at processes reveals how factors or events (situated in time and space) are associated with each other. This kind of research provides the detail that we need, not just to understand the world, but to do so in a way that is useful and enables us to act on it constructively.

    Although this kind of work is rare, every now and then a quantitative study showing “process sensitivity” sees the light of day. This is the case with a recent paper by Morgan and colleagues (2010) examining how the events that teachers experience routinely affect their commitment to remain in the profession.

    READ MORE
  • Staff Matters: Social Resilience In Schools

    Written on May 7, 2012

    In the world of education, particularly in the United States, educational fads, policy agendas, and funding priorities tend to change rapidly. The attention of education research fluctuates accordingly. And, as David Cohen persuasively argues in Teaching and Its Predicaments, the nation has little coherent educational infrastructure to fall back upon. As a result of all this, teachers’ work is almost always surrounded by considerable uncertainty (e.g., the lack of a common curriculum) and variation. In such a context, it is no surprise that collaboration and collegiality figure prominently in teachers’ world (and work) views.

    After all, difficulties can be dealt with more effectively when/if individuals are situated in supportive and close-knit social networks from which to draw strength and resources. In other words, in the absence of other forms of stability, the ability of a group – a group of teachers in this case – to work together becomes indispensable to cope with challenges and change.

    The idea that teachers’ jobs are surrounded by uncertainty made me think of problems often encountered in the field of security. In this sector, because threats are increasingly complex and unpredictable, much of the focus has shifted away from heightened protection and toward increased resilience. Resilience is often understood as the ability of communities to survive and thrive after disasters or emergencies.

    READ MORE
  • The Allure Of Teacher Quality

    Written on April 23, 2012

    Those following education know that policy focused on “teacher quality” has been by far the dominant paradigm for improving schools over the past few years. Some (but not nearly all) components of this all-hands-on-deck effort are perplexing to many teachers, and have generated quite a bit of pushback. No matter one’s opinion of this approach, however, what drives it is the tantalizing allure of variation in teacher quality.

    Fueled by the ever-increasing availability of detailed test score datasets linking teachers to students, the research literature on teachers’ test-based effectiveness has grown rapidly, in both size and sophistication. Analysis after analysis finds that, all else being equal, the variation in teachers’ estimated effects on students’ test growth – the difference between the “top” and “bottom” teachers – is very large. In any given year, some teachers’ students make huge progress, others’ very little. Even if part of this estimated variation is attributable to confounding factors, the discrepancies are still larger than almost any other measured “input” within the jurisdiction of education policy. The underlying assumption here is that “true” teacher quality varies to a degree that is at least somewhat comparable in magnitude to the spread of the test-based estimates.

    Perhaps that’s the case, but it does not, by itself, help much. The key question is whether and how we can measure teacher performance at the individual level and, more importantly, influence the distribution – that is, raise the ceiling, the middle and/or the floor. The variation hangs out there like a drug to which we’re addicted, but haven’t really figured out how to administer. If there were some way to harness it efficiently, the potential benefits could be considerable. The focus of current education policy is in large part an effort to do anything and everything to try and figure this out. And, as might be expected given the enormity of the task, progress has been slow.

    READ MORE
  • Value-Added Versus Observations, Part Two: Validity

    Written on April 18, 2012

    In a previous post, I compared value-added (VA) and classroom observations in terms of reliability – the degree to which they are free of error and stable over repeated measurements. But even the most reliable measures aren’t useful unless they are valid – that is, unless they’re measuring what we want them to measure.

    Arguments over the validity of teacher performance measures, especially value-added, dominate our discourse on evaluations. There are, in my view, three interrelated issues to keep in mind when discussing the validity of VA and observations. The first is definitional – in a research context, validity is less about a measure itself than the inferences one draws from it. The second point might follow from the first: The validity of VA and observations should be assessed in the context of how they’re being used.

    Third and finally, given the difficulties in determining whether either measure is valid in and of itself, as well as the fact that so many states and districts are already moving ahead with new systems, the best approach at this point may be to judge validity in terms of whether the evaluations are improving outcomes. And, unfortunately, there is little indication that this is happening in most places.

    READ MORE
  • Value-Added Versus Observations, Part One: Reliability

    Written on April 12, 2012

    Although most new teacher evaluations are still in various phases of pre-implementation, it’s safe to say that classroom observations and/or value-added (VA) scores will be the most heavily weighted components of teachers’ final scores, depending on whether teachers are in tested grades and subjects. One gets the general sense that many - perhaps most - teachers strongly prefer the former (observations, especially peer observations) over the latter (VA).

    One of the most common arguments against VA is that the scores are error-prone and unstable over time - i.e., that they are unreliable. And it’s true that the scores fluctuate between years (also see here), with much of this instability due to measurement error, rather than “real” performance changes. On a related note, different model specifications and different tests can yield very different results for the same teacher/class.
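    To see how much measurement error alone can shake up year-to-year scores, here is a minimal simulation – with assumed, not empirical, parameters – in which every teacher’s “true” performance is perfectly stable and only the noise differs across years:

```python
# A sketch of value-added instability driven purely by measurement error.
# The variances below are illustrative assumptions, not estimates from
# any of the studies discussed in the post.
import random

random.seed(0)
N = 5000                       # simulated teachers
TRUE_SD, NOISE_SD = 1.0, 1.0   # assumed: equal signal and noise variance

true_quality = [random.gauss(0, TRUE_SD) for _ in range(N)]
year1 = [t + random.gauss(0, NOISE_SD) for t in true_quality]  # year 1 estimate
year2 = [t + random.gauss(0, NOISE_SD) for t in true_quality]  # year 2 estimate

def corr(x, y):
    """Pearson correlation, computed from scratch to keep this self-contained."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

print(round(corr(year1, year2), 2))
```

    With equal signal and noise variance, the expected correlation between a teacher’s two annual estimates is only var_true / (var_true + var_noise) = 0.5 – even though, by construction, nothing about “real” performance changed between years. That is the kind of instability the reliability critiques point to, and the simulation also shows why instability alone does not tell us whether the underlying performance differences are real.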

    These findings are very important, and often too casually dismissed by VA supporters, but the issue of reliability is, to varying degrees, endemic to all performance measurement. Actually, many of the standard reliability-based criticisms of value-added could also be leveled against observations. Since we cannot observe “true” teacher performance, it’s tough to say which is “better” or “worse,” despite the certainty with which both “sides” often present their respective cases. And, the fact that both entail some level of measurement error doesn’t by itself speak to whether they should be part of evaluations.*

    Nevertheless, many states and districts have already made the choice to use both measures, and in these places, the existence of imprecision is less important than how to deal with it. Viewed from this perspective, VA and observations are in many respects more alike than different.

    READ MORE
  • Learning From Teach For America

    Written on March 19, 2012

    There is a small but growing body of evidence about the (usually test-based) effectiveness of teachers from Teach for America (TFA), an extremely selective program that trains and places new teachers in mostly higher-needs schools and districts. Rather than review this literature paper-by-paper, which has already been done by others (see here and here), I’ll just give you the super-short summary of the higher-quality analyses, and quickly discuss what I think it means.*

    The evidence on TFA teachers focuses mostly on comparing their effect on test score growth vis-à-vis other groups of teachers who entered the profession via traditional certification (or through other alternative routes). This is no easy task, and the findings do vary quite a bit by study, as well as by the group to which TFA corps members are compared (e.g., new or more experienced teachers). One can quibble endlessly over the methodological details (and I’m all for that), and this area is still underdeveloped, but a fair summary of these papers is that TFA teachers are no more or less effective than comparable peers in terms of reading tests, and sometimes but not always more effective in math (the differences, whether positive or negative, tend to be small and/or only surface after 2-3 years). Overall, the evidence thus far suggests that TFA teachers perform comparably, at least in terms of test-based outcomes.

    Somewhat in contrast with these findings, TFA has been the subject of both intensive criticism and fawning praise. I don’t want to engage this debate directly, except to say that there has to be some middle ground on which a program that brings talented young people into the field of education is not such a divisive issue. I do, however, want to make a wider point specifically about the evidence on TFA teachers – what it might suggest about the current push to “attract the best people” to the profession.

    READ MORE


DISCLAIMER

This web site and the information contained herein are provided as a service to those who are interested in the work of the Albert Shanker Institute (ASI). ASI makes no warranties, either express or implied, concerning the information contained on or linked from shankerblog.org. The visitor uses the information provided herein at his/her own risk. ASI, its officers, board members, agents, and employees specifically disclaim any and all liability from damages which may result from the utilization of the information provided herein. The content in the Shanker Blog may not necessarily reflect the views or official policy positions of ASI or any related entity or organization.