Accountability

  • Performance And Chance In New York's Competitive District Grant Program

    Written on January 23, 2012

    New York State recently announced a new $75 million competitive grant program, which is part of its Race to the Top plan. To receive a share of the money, districts must apply, and their applications receive a score between 0 and 115. Almost a third of the points (35) are based on proposals for programs geared toward boosting student achievement, 10 points are based on need, and 20 possible points are awarded for a description of how the proposal fits into districts’ budgets.

    The remaining 50 points – almost half of the total – are based on “academic performance” over the prior year. Four measures are used to produce the 0-50 point score: one is the year-to-year change (between 2010 and 2011) in the district’s graduation rate, and the other three are changes in the state “performance index” in math, English Language Arts (ELA) and science. The “performance index” in these three subjects is calculated using a simple weighting formula that accounts for the proportions of students scoring at Levels 2 (basic), 3 (proficient) and 4 (advanced).
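
    To make the weighting concrete, here is a minimal sketch of that kind of index. New York’s version, as I understand it, counts students at Level 2 or above once and students at Level 3 or above a second time, yielding a 0-200 scale; treat the exact weights below as an illustration rather than a quotation of the state’s rules.

    ```python
    def performance_index(counts):
        """Illustrative 'performance index' on a 0-200 scale: students at
        Level 2 or above count once, and those at Level 3 or above count
        a second time (an approximation of New York's weighting).

        counts: dict mapping level (1-4) to number of students.
        """
        tested = sum(counts.values())
        at_or_above_2 = counts[2] + counts[3] + counts[4]
        at_or_above_3 = counts[3] + counts[4]
        return 100.0 * (at_or_above_2 + at_or_above_3) / tested

    # Example: 40% Level 1, 30% Level 2, 20% Level 3, 10% Level 4 -> index of 90.
    print(performance_index({1: 40, 2: 30, 3: 20, 4: 10}))
    ```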

    The idea of using testing results as a criterion in awarding grants is to reward districts that are performing well. Unfortunately, due to the choice of measures and how they are used, the 50 points will be biased and, to no small extent, based on chance.

    READ MORE
  • Is California's "API Growth" A Good Measure Of School Performance?

    Written on January 4, 2012

    California calls its “Academic Performance Index” (API) the “cornerstone” of its accountability system. The API is calculated as a weighted average of the proportions of students meeting proficiency and other cutoffs on the state exams.

    It is a high-stakes measure. “Growth” in schools’ API scores determines whether they meet federal AYP (Adequate Yearly Progress) requirements, and it is also important in the state’s own accountability regime. In addition, toward the middle of last month, the California Charter Schools Association called for the closing of ten charter schools based in part on their (three-year) API “growth” rates.

    Putting aside the question of whether the API is a valid measure of student performance in any given year, using year-to-year changes in API scores in high-stakes decisions is highly problematic. The API is a cross-sectional measure – it doesn’t follow students over time – and so one must assume that year-to-year changes in a school’s index do not reflect a shift in demographics or other characteristics of the cohorts of students taking the tests. Moreover, even if the changes in API scores do in fact reflect “real” progress, they do not account for all the factors outside of schools’ control that might affect performance, such as funding and differences in students’ backgrounds (see here and here, or this Mathematica paper, for more on these issues).

    Better data are needed to test these assumptions directly, but we might get some idea of whether changes in schools’ API scores are good measures of school performance by testing how stable they are over time.
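
    One simple version of that stability test, sketched below with an assumed file layout and column names, is to correlate each school’s API change in one period with its change in the next: if the measure mostly captures persistent school performance, the two should correlate, while a correlation near zero suggests the changes are driven largely by noise and cohort differences.

    ```python
    import pandas as pd

    # Hypothetical file and column names, assumed for illustration.
    api = pd.read_csv("api_scores.csv")  # school_id, api_2009, api_2010, api_2011

    # Each school's API "growth" in two consecutive periods.
    api["growth_09_10"] = api["api_2010"] - api["api_2009"]
    api["growth_10_11"] = api["api_2011"] - api["api_2010"]

    # Year-to-year correlation of the growth measure across schools.
    print(api["growth_09_10"].corr(api["growth_10_11"]))
    ```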

    READ MORE
  • The Ratings Game: New York City Edition

    Written on October 27, 2011

    Gotham Schools reports that the New York City Department of Education rolled out this year’s school report card grades by highlighting the grades’ stability between this year and last. That is, the city argued that schools’ grades were roughly the same between years, which is supposed to serve as evidence of the system’s quality.

    The city’s logic here is generally sound. As I’ve noted before, most schools don’t undergo drastic changes in their operations over the course of a year, and so fluctuations in grades among a large number of schools might serve as a warning sign that there’s something wrong with the measures being used. Accordingly, it’s not unreasonable to expect from a high-quality rating system that, over a two-year period, some schools would get higher grades and some lower, but that most would stay put. That was the city’s argument this year.

    The only problem is that this wasn’t really the case.

    READ MORE
  • Making (Up) The Grade In Ohio

    Written on October 13, 2011

    In a post last week over at Flypaper, the Fordham Institute’s Terry Ryan took a “frank look” at the ratings of the handful of Ohio charter schools that Fordham’s Ohio branch manages. He noted that the Fordham schools didn’t make a particularly strong showing, ranking 24th among the state’s 47 charter authorizers in terms of the aggregate “performance index” among the schools it authorizes. Mr. Ryan took the opportunity to offer a few valid explanations as to why Fordham ranked in the middle of the charter authorizer pack, such as the fact that the state’s “dropout recovery schools,” which accept especially hard-to-serve students who left public schools, aren’t included (a change that would likely bump up Fordham’s relative ranking).

    Mr. Ryan doth protest too little. His primary argument, which he touches on but does not flesh out, should be that Ohio’s performance index is more a measure of student characteristics than of any defensible concept of school effectiveness. By itself, it reveals relatively little about the “quality” of schools operated by Ohio’s charter authorizers.

    But the limitations of measures like the performance index, which are discussed below (and in the post linked above), have implications far beyond Ohio’s charter authorizers. The primary means by which Ohio assesses school/district performance is the state’s overall “report card grades,” which are composite ratings composed of multiple test-based measures, including the performance index. Unfortunately, these ratings are also not a particularly useful measure of school effectiveness. Not only are the grades unstable between years, but they also rely too heavily on test-based measures, including the index, that fail to account for student characteristics. While any attempt to measure school performance using testing data is subject to imprecision, Ohio’s effort falls short.

    READ MORE
  • The Stability Of Ohio's School Value-Added Ratings And Why It Matters

    Written on September 28, 2011

    I have discussed before how most testing data released to the public are cross-sectional, and how comparing them between years entails comparing two different groups of students. One way to address these issues is to calculate and release school- and district-level value-added scores.

    Value-added estimates are not only longitudinal (i.e., they follow students over time), but the models also go a long way toward accounting for differences in the characteristics of students between schools and districts. Put simply, these models calculate “expectations” for student test score gains based on student (and sometimes school) characteristics, which are then used to gauge whether schools’ students did better or worse than expected.
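
    As a rough illustration of that logic – not the model Ohio actually uses, which is far more elaborate – one can regress students’ current scores on their prior scores and characteristics, and treat a school’s average residual (actual minus expected) as its value-added estimate. The file and variable names below are assumptions.

    ```python
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical student-level data; column names are assumptions.
    df = pd.read_csv("students.csv")  # score_2011, score_2010, frpl, ell, school_id

    # "Expected" 2011 scores, given prior scores and student characteristics.
    X = sm.add_constant(df[["score_2010", "frpl", "ell"]])
    model = sm.OLS(df["score_2011"], X).fit()
    df["residual"] = df["score_2011"] - model.predict(X)

    # A school's value-added estimate: how much better (or worse) its
    # students did, on average, than the model expected.
    value_added = df.groupby("school_id")["residual"].mean()
    print(value_added.sort_values(ascending=False).head())
    ```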

    Ohio is among the few states that release school- and district-level value-added estimates (though this number will probably increase very soon). These results are also used in high-stakes decisions, as they are a major component of Ohio’s “report card” grades, which can be used to sanction or even close specific schools. So, I thought it might be useful to take a look at these data and their stability over the past two years. In other words, what proportion of the schools that receive a given rating in one year will get that same rating the next year?
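
    One way to answer that question, sketched below with an assumed file layout, is a simple transition table: cross-tabulate each school’s rating in the two years and read off the share that stays on the diagonal.

    ```python
    import pandas as pd

    # Hypothetical data; file and column names are assumptions.
    ratings = pd.read_csv("ohio_ratings.csv")  # school_id, rating_2010, rating_2011

    # Rows: 2010 rating; columns: 2011 rating; cells: share of each row.
    print(pd.crosstab(ratings["rating_2010"], ratings["rating_2011"],
                      normalize="index"))

    # Overall share of schools keeping the same rating in both years.
    same = (ratings["rating_2010"] == ratings["rating_2011"]).mean()
    print(f"Stable ratings: {same:.1%}")
    ```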

    READ MORE
  • Learning Versus Punishment And Accountability

    Written on November 15, 2010

    Our guest author today is Jeffrey Pfeffer, Thomas D. Dee II Professor of Organizational Behavior at the Stanford University Graduate School of Business. We find his post intriguing, given the current obsession with “accountability” in education reform. It is reprinted with permission from Dr. Pfeffer’s blog, Rational Rants, found at http://www.jeffreypfeffer.com.

    People seem to love to exact retribution on those who screw up—it satisfies some primitive sense of justice. For instance, research in experimental economics shows that people will voluntarily give up resources to punish others who have acted unfairly or inappropriately, even though doing so is costly to them, and even in circumstances where there will be no future interaction to be affected by the signal the punishment sends. In other words, people will mete out retribution even when such behavior is economically irrational.

    READ MORE
  • The Cost Of Success In Education

    Written on August 26, 2010

    Many are skeptical of the current push to improve our education system by means of test-based “accountability” - hiring, firing, and paying teachers and administrators, as well as closing and retaining schools, based largely on test scores. They say it won’t work. I share their skepticism, because I think it will.

    There is a simple logic to this approach: when you control the supply of teachers, leaders, and schools based on their ability to increase test scores, then this attribute will become increasingly common among these individuals and institutions. This is called “selecting on the dependent variable,” and it is, given the talent of the people overseeing this process and the money behind it, a decent bet to work in the long run.
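
    A toy simulation can make that logic concrete; every quantity below (effect sizes, noise levels, the retention rule) is invented for illustration. Even when observed gains are mostly noise, repeatedly dismissing the lowest scorers and replacing them from the original applicant pool slowly raises the average “true” effect of the workforce.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Invented setup: each teacher has a fixed "true" ability to raise
    # scores; observed gains are that ability plus year-to-year noise.
    n, rounds = 10_000, 10
    true_effect = rng.normal(0.0, 1.0, n)

    for r in range(rounds):
        observed = true_effect + rng.normal(0.0, 2.0, n)  # noisy gains
        keep = observed > np.quantile(observed, 0.10)     # dismiss bottom 10%
        kept = true_effect[keep]
        # Replace dismissals with new hires from the original distribution.
        hires = rng.normal(0.0, 1.0, n - kept.size)
        true_effect = np.concatenate([kept, hires])
        print(f"round {r + 1}: mean true effect = {true_effect.mean():+.3f}")
    ```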

    Now, we all know the arguments about the limitations of test scores. We all know they’re largely true. Some people take them too far; others are too casual in their disregard. The question is not whether test scores provide a comprehensive measure of learning or subject mastery (of course they don’t). The better question is the extent to which teachers (and schools) who increase test scores a great deal are imparting and/or reinforcing the skills and traits that students will need after their K-12 education, relative to teachers who produce smaller gains. And this question remains largely unanswered.

    This is dangerous, because if there is an unreliable relationship between teaching essential skills and the boosting of test scores, then success is no longer success. And by selecting teachers and schools based on those scores, we will have deliberately engineered our public education system to fail in spite of success.

    It may be only then that we truly realize what we have done.

    READ MORE
  • Accountability For Us, No Way; We're The Washington Post

    Written on August 25, 2010

    In his August 4th testimony before the Senate’s Committee on Health, Education, Labor and Pensions, Government Accountability Office (GAO) official Gregory D. Kutz offered an earful of scandalous stories about how for-profit post-secondary institutions use misrepresentation, fraud, and generally unethical practices to tap the federal loan and grant-making trough. One of these companies, so says the Washington Post itself, is Kaplan Inc., a for-profit education company that contributes a whopping amount to the paper’s bottom line (67 percent of the Washington Post Company’s $92 million in second-quarter earnings, according to the Washington Examiner; 62 percent according to the Post’s Ombudsman Andrew Alexander).

    One might assume that the Post’s deep financial involvement in Kaplan Inc. would prompt its editorial board to recuse itself from commenting on newly proposed federal regulations designed to correct the problems. Instead of offering “point-counterpoint” op-eds on this issue, this bastion of journalistic integrity has launched a veritable campaign in support of its corporate education interests, and offered up its op-ed page to education business allies. It is a sad and disappointing chapter in the history of this once-great institution.

    READ MORE
