
Accountability

  • "Show Me What Democracy Looks Like"

    Written on April 29, 2014

    Our guest author today is John McCrann, a Math teacher and experiential educator at Harvest Collegiate High School in New York City. John is a member of the America Achieves Fellowship, Youth Opportunities Program, and Teacher Leader Study Group. He tweets at @JohnTroutMcCran.

    New York City’s third through eighth graders are in the middle of state tests, and many of our city’s citizens have taken strong positions on the value (or lack thereof) of these assessments. The protests, arguments and activism surrounding these tests remind me of a day when I was a substitute civics teacher during summer school. “I need help,” Charlotte said as she approached my desk. “What is democracy?”

    On that day, my mind flashed to a scene I witnessed outside the White House in the spring of 2003. On one side of the fence, protestors shouted: “Show me what democracy looks like! This is what democracy looks like!” On the other side worked an administration that had invaded another country in an effort to “expand democracy.” Passionate, bright people on both sides of that fence believed in the idea that Charlotte was asking about, but came to very different conclusions about how to enact the concept.

  • Is Selective Admission A School Improvement Plan?

    Written on January 23, 2014

    The Washington Post reports that parents and alumni of D.C.’s Dunbar High School have quietly been putting together a proposal to revitalize what the article calls “one of the District’s worst performing schools.”

    Those behind the proposal are not ready to speak about it publicly, and details are still very thin, but the Post article reports that it calls for greater flexibility in hiring, spending and other core policies. Moreover, the core of the plan – or at least its most drastic element – is to make Dunbar a selective high school, to which students must apply and be accepted, presumably based on testing results and other performance indicators (the story characterizes the proposal as a whole with the term “autonomy”). I will offer no opinion as to whether this conversion, if it is indeed submitted to the District for consideration, is a good idea. That will be up to administrators, teachers, parents, and other stakeholders.

    I am, however, a bit struck by two interrelated aspects of this story. The first is the unquestioned characterization of Dunbar as a “low performing” or “struggling” school. This fateful label appears to be based mostly on the school’s proficiency rates, which are indeed dismally low – 20 percent in math and 29 percent in reading.

  • ESEA Waivers And The Perpetuation Of Poor Educational Measurement

    Written on November 26, 2013

    Some of the best research out there is a product not of sophisticated statistical methods or complex research designs, but rather of painstaking manual data collection. A good example is a recent paper by Morgan Polikoff, Andrew McEachin, Stephani Wrabel and Matthew Duque, which was published in the latest issue of the journal Educational Researcher.

    Polikoff and his colleagues performed a task that makes most of the rest of us cringe: They read and coded every one of the over 40 state applications for ESEA flexibility, or “waivers.” The end product is a simple but highly useful presentation of the measures states are using to identify “priority” (low-performing) and “focus” (schools “contributing to achievement gaps”) schools. The results are disturbing to anyone who believes that strong measurement should guide educational decisions.

    There's plenty of great data and discussion in the paper, but consider just one central finding: How states are identifying priority (i.e., lowest-performing) schools at the elementary level (the measures are of course a bit different for secondary schools).

  • A Quick Look At The DC Charter School Rating System

    Written on November 19, 2013

    Having taken a look at several states’ school rating systems (see our posts on the systems in IN, OH, FL and CO), I thought it might be interesting to examine a system used by a group of charter schools – starting with the one used by charters in the District of Columbia. This is the third year the DC charter school board has released the ratings.

    For elementary and middle schools (upon which I will focus in this post*), the DC Performance Management Framework (PMF) is a weighted index composed of 40 percent absolute performance, 40 percent growth, and 20 percent what the board calls “leading indicators” (a more detailed description of this formula can be found in the second footnote).** The index scores are then sorted into one of three tiers, with Tier 1 being the highest and Tier 3 the lowest.
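
    To make the weighting concrete, here is a minimal sketch of an index of this general 40/40/20 form; the component scales, tier cutoffs and names below are illustrative placeholders of my own, not the PMF’s actual specification.

    ```python
    # Minimal sketch of a 40/40/20 weighted index of the PMF's general form.
    # Component scores and tier cutoffs here are hypothetical placeholders,
    # not the charter board's actual scales or thresholds.

    WEIGHTS = {"absolute": 0.40, "growth": 0.40, "leading": 0.20}

    def index_score(absolute, growth, leading):
        """Combine three 0-100 component scores into a single weighted index."""
        return (WEIGHTS["absolute"] * absolute
                + WEIGHTS["growth"] * growth
                + WEIGHTS["leading"] * leading)

    def tier(score, tier1_cut=65.0, tier2_cut=35.0):
        """Sort an index score into Tier 1 (highest) through Tier 3 (lowest)."""
        if score >= tier1_cut:
            return 1
        if score >= tier2_cut:
            return 2
        return 3

    score = index_score(absolute=55, growth=70, leading=80)
    print(score, tier(score))  # 66.0 1 (under these placeholder cutoffs)
    ```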

    So, these particular ratings weight absolute performance – i.e., how highly students score on tests – a bit less heavily than do most states that have devised their own systems, and they grant slightly more importance to growth and alternative measures. We might therefore expect to find a somewhat weaker relationship between PMF scores and student characteristics such as free/reduced price lunch eligibility (FRL), as these charters are judged less predominantly on the students they serve. Let’s take a quick look.

  • Are There Low Performing Schools With High Performing Students?

    Written on October 3, 2013

    I write often (probably too often) about the difference between measures of school performance and student performance, usually in the context of school rating systems. The basic idea is that schools cannot control the students they serve, and so absolute performance measures, such as proficiency rates, tell you more about the students a school or district serves than about how effective it is in improving outcomes (which is better captured by growth-oriented indicators).

    Recently, I was asked a simple question: Can a school with very high absolute performance levels ever actually be considered a “bad school?”

    This is a good question.

  • Underlying Issues In The DC Test Score Controversy

    Written on October 1, 2013

    In the Washington Post, Emma Brown reports on a behind-the-scenes decision about how to score last year’s new, more difficult tests in the District of Columbia Public Schools (DCPS) and the District’s charter schools.

    To make a long story short, the choice faced by the Office of the State Superintendent of Education, or OSSE, which oversees testing in the District, was about how to convert test scores into proficiency rates. The first option, put simply, was to convert them such that the proficiency bar was more “aligned” with the Common Core, thus resulting in lower aggregate proficiency rates in math compared with last year’s (in other states, such as Kentucky and New York, rates declined markedly). The second option was to score the tests while “holding constant” the difficulty of the questions, in order to facilitate comparisons of aggregate rates with those from previous years.
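
    To see why this choice matters, here is a toy sketch of how the placement of the proficiency cut score drives the aggregate rate; the raw scores and both cutoffs below are invented for illustration.

    ```python
    # Toy illustration of how the proficiency cut score drives the aggregate
    # rate; the raw scores and both cutoffs below are invented.

    scores = [12, 18, 22, 25, 27, 30, 31, 33, 36, 40]

    def proficiency_rate(scores, cut):
        """Share of students scoring at or above the proficiency cut."""
        return sum(s >= cut for s in scores) / len(scores)

    # Option 1: a higher, Common Core-"aligned" cut lowers the rate.
    print(proficiency_rate(scores, cut=30))  # 0.5

    # Option 2: holding the old, lower cut constant keeps rates comparable
    # with previous years' figures.
    print(proficiency_rate(scores, cut=25))  # 0.7
    ```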

    OSSE chose the latter option (according to some, in a manner that was insufficiently transparent). The end result was a modest increase in proficiency rates (which DC officials absurdly called “historic”).

  • The Great Proficiency Debate

    Written on August 26, 2013

    A couple of weeks ago, Mike Petrilli of the Fordham Institute made the case that absolute proficiency rates should not be used as measures of school effectiveness, as they are heavily dependent on where students “start out” upon entry to the school. A few days later, Fordham president Checker Finn offered a defense of proficiency rates, noting that how much students know is substantively important, and associated with meaningful outcomes later in life.

    They’re both correct. This is not a debate about whether proficiency rates are at all useful (by the way, I don't read Petrilli as saying that). It’s about how they should be used and how they should not.

    Let’s keep this simple. Here is a quick, highly simplified list of how I would recommend interpreting and using absolute proficiency rates, and how I would avoid using them.

  • New York State Of Mind

    Written on August 13, 2013

    Last week, the results of New York’s new Common Core-aligned assessments were national news. For months, officials throughout the state, including New York City, have been preparing the public for the release of these data.

    Their basic message was that the standards, and thus the tests based upon them, are more difficult, and that they represent an attempt to truly gauge whether students are prepared for college and the labor market. The inevitable consequence of raising standards, officials have been explaining, is that fewer students will be “proficient” than in previous years (which was, of course, the case). This does not mean that students are performing worse, only that they are being held to higher expectations, and that the skills and knowledge being assessed require a new, more expansive curriculum. Therefore, interpretation of the new results versus those from previous years must be extremely cautious, and educators, parents and the public should not jump to conclusions about what they mean.

    For the most part, the main points of this public information campaign are correct. It would, however, be wonderful if similar caution were evident in the roll-out of testing results in past (and, more importantly, future) years.

  • Under The Hood Of School Rating Systems

    Written on August 5, 2013

    Recent events in Indiana and Florida have resulted in a great deal of attention to the new school rating systems that over 25 states are using to evaluate the performance of schools, often attaching high-stakes consequences and rewards to the results. We have published reviews of several states' systems here over the past couple of years (see our posts on the systems in Florida, Indiana, Colorado, New York City and Ohio, for example).

    Virtually all of these systems rely heavily, if not entirely, on standardized test results, most commonly by combining two general types of test-based measures: absolute performance (or status) measures, or how highly students score on tests (e.g., proficiency rates); and growth measures, or how quickly students make progress (e.g., value-added scores). As discussed in previous posts, absolute performance measures are best seen as gauges of student performance, since they can’t account for the fact that students enter the schooling system at vastly different levels. Growth-oriented indicators, by contrast, can be viewed as more appropriate for gauging school performance per se, as they seek (albeit imperfectly) to control for students’ starting points (and other characteristics known to influence achievement levels) in order to isolate the impact of schools on testing performance.*
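
    As a toy illustration of this distinction, the sketch below contrasts a status measure with a growth measure for two hypothetical schools; all of the data are fabricated.

    ```python
    # Toy contrast between a status measure (mean spring score) and a growth
    # measure (mean fall-to-spring gain); all data below are fabricated.

    school_a = {"fall": [20, 25, 30], "spring": [35, 42, 47]}  # low status, high growth
    school_b = {"fall": [70, 75, 80], "spring": [71, 75, 82]}  # high status, low growth

    def status(school):
        """Absolute performance: average score, regardless of starting point."""
        return sum(school["spring"]) / len(school["spring"])

    def growth(school):
        """Progress: average gain from fall to spring for the same students."""
        gains = [s - f for f, s in zip(school["fall"], school["spring"])]
        return sum(gains) / len(gains)

    for name, school in (("A", school_a), ("B", school_b)):
        print(name, round(status(school), 1), round(growth(school), 1))
    # B looks far better on status (76.0 vs. 41.3), but A shows far more
    # growth (16.3 vs. 1.0): the two measures answer different questions.
    ```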

    One interesting aspect of this distinction, which we have not discussed thoroughly here, is the possibility that these two measures are “in conflict.” Let me explain what I mean by that.

  • So Many Purposes, So Few Tests

    Written on August 1, 2013

    In a new NBER working paper, economist Derek Neal makes an important point, one of which many people in education are aware but which is infrequently reflected in actual policy: using the same assessment to measure both student and teacher performance often contaminates the results for both purposes.

    In fact, as Neal notes, some of the very features required to measure student performance are the ones that make this contamination possible when the tests are used in high-stakes accountability systems. Consider, for example, a situation in which a state or district wants to compare the test scores of a cohort of fourth graders in one year with those of fourth graders the next year. One common means of facilitating this comparability is administering some of the questions to both groups (or to some “pilot” sample of students prior to the actual administration). Otherwise, any difference in scores between the two cohorts might simply be due to differences in the difficulty of the questions. If you cannot check that, it’s tough to make meaningful comparisons.
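
    Here is a rough sketch of the kind of comparability check that such repeated questions make possible; the item labels and percent-correct figures are hypothetical.

    ```python
    # Rough sketch of a comparability check using questions administered to
    # both cohorts; item labels and percent-correct values are hypothetical.

    anchors_2012 = {"q1": 0.72, "q2": 0.55, "q3": 0.64}
    anchors_2013 = {"q1": 0.70, "q2": 0.58, "q3": 0.63}

    def mean_shift(year_a, year_b):
        """Average change in percent correct on the shared ("anchor") items.
        A shift near zero suggests the two cohorts are similar in ability,
        so score differences on the new items likely reflect question
        difficulty rather than real cohort differences."""
        return sum(year_b[q] - year_a[q] for q in year_a) / len(year_a)

    print(round(mean_shift(anchors_2012, anchors_2013), 3))  # ~0.0
    ```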

    But it’s precisely this need to repeat questions that enables one form of so-called “teaching to the test,” in which administrators and educators use questions from prior assessments to guide their instruction for the current year.


