Education Research

  • A 'Summary Opinion' Of The Hoxby NYC Charter School Study

    Written on July 6, 2011

    Almost two years ago, a report on New York City charter schools rocked the education policy world. It was written by Hoover Institution scholar Caroline Hoxby with co-authors Sonali Murarka and Jenny Kang. Their primary finding was that:

    On average, a student who attended a charter school for all of grades kindergarten through eight would close about 86 percent of the “Scarsdale-Harlem achievement gap” [the difference in scores between students in Harlem and those in the affluent NYC suburb] in math, and 66 percent of the achievement gap in English.
    The headline-grabbing conclusion was uncritically repeated by most major news outlets, including the New York Post, which called the charter effects “off the charts,” and the NY Daily News, which announced that, from that day forward, anyone who opposed charter schools was “fighting to block thousands of children from getting superior educations.” A week or two later, Mayor Michael Bloomberg specifically cited the study in announcing that he was moving to expand the number of NYC charter schools. Even today, the report is often mentioned as primary evidence favoring the efficacy of charter schools.

    I would like to revisit this study, but not as a means to relitigate the “do charters work?” debate. Indeed, I have argued previously that we spend too much time debating whether charter schools “work,” and too little time asking why some few are successful. Instead, my purpose is to illustrate an important research maxim: Even well-designed, sophisticated analyses with important conclusions can be compromised by a misleading presentation of results.

    READ MORE
  • Investment Counselors

    Written on June 15, 2011

    NOTE: With this post, we are starting a new “feature” here at Shanker Blog – periodically summarizing research papers that carry interesting and/or important implications for the education policy debates. We intend to focus on papers that are either published in peer-reviewed journals or are still in working paper form, and are unlikely to get significant notice. Here is the first:

    Are School Counselors a Cost-Effective Education Input?

    Scott E. Carrell and Mark Hoekstra, Working paper (link to PDF), September 2010

    Most teachers and principals will tell you that non-instructional school staff can make a big difference in school performance. Although we may all know this, it’s always useful to have empirical research to confirm it, and to examine the size and nature of the effects. In this paper, economists Scott Carrell and Mark Hoekstra put forth one of the first rigorous tests of how one particular group of employees – school counselors – affects both discipline and achievement outcomes. The authors use a unique administrative dataset of third, fourth, and fifth graders in Alachua County, Florida, a diverse district that serves over 30,000 students. Their approach exploits year-to-year variation in the number of counselors in each school – i.e., whether the outcomes of a given school change from the previous year when a counselor is added to the staff.
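
    The basic logic of that approach can be summarized with a simple fixed effects regression. The sketch below is mine, not the authors’ actual specification (theirs is estimated on student-level data with richer controls, including sibling and student fixed effects), and the file and column names are hypothetical.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical school-by-year panel: one row per school per year, with a
    # standardized achievement score and the number of full-time counselors.
    df = pd.read_csv("school_year_panel.csv")

    # School fixed effects absorb stable differences between schools, so the
    # counselor coefficient is identified only by year-to-year changes in
    # counselor staffing within the same school -- the variation the paper uses.
    model = smf.ols("achievement ~ n_counselors + C(school_id) + C(year)", data=df)
    result = model.fit(cov_type="cluster", cov_kwds={"groups": df["school_id"]})
    print(result.params["n_counselors"])  # within-school counselor "effect"
    ```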

    Their results are pretty striking: The addition of a single full-time counselor is associated with a 0.04 standard deviation increase in boys’ achievement (about 1.2 percentile points). These effects are robust across different specifications (including sibling and student fixed effects). The disciplinary effects are, as expected, even more impressive: a single additional counselor helps to decrease boys’ disciplinary infractions by 15 to 26 percent.
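
    One caveat on reading effect sizes like these: the percentile-point equivalent of a standard deviation gain depends on where a student starts in the distribution. The back-of-the-envelope calculation below is mine, not the authors’, and assumes normally distributed scores.

    ```python
    from scipy.stats import norm

    def percentile_gain(start_pct, effect_sd):
        """Percentile-point gain from an effect of `effect_sd` standard
        deviations for a student starting at the `start_pct` percentile."""
        z = norm.ppf(start_pct / 100.0)
        return norm.cdf(z + effect_sd) * 100.0 - start_pct

    # A 0.04 standard deviation effect is worth more near the middle of the
    # distribution than in the tails:
    for start in (20, 25, 50):
        print(start, round(percentile_gain(start, 0.04), 2))
    # roughly 1.1, 1.3, and 1.6 percentile points, respectively
    ```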

    READ MORE
  • When It Comes To How We Use Evidence, Is Education Reform The New Welfare Reform?

    Written on June 2, 2011

    ** Also posted here on “Valerie Strauss’ Answer Sheet” in the Washington Post

    In the mid-1990s, after a long and contentious debate, the U.S. Congress passed the Personal Responsibility and Work Opportunity Reconciliation Act of 1996, which President Clinton signed into law. It is usually called the “Welfare Reform Act,” as it effectively ended the Aid to Families with Dependent Children (AFDC) program (which is what most people mean when they say “welfare,” even though it was [and its successor is] only a tiny part of our welfare state). Established during the New Deal, AFDC was mostly designed to give assistance to needy young children (it was later expanded to include support for their parents/caretakers as well).

    In place of AFDC was a new program – Temporary Assistance for Needy Families (TANF). TANF gave block grants to states, which were directed to design their own “welfare” programs. Although the states were given considerable leeway, their new programs were to have two basic features: first, for welfare recipients to receive benefits, they had to be working; and second, there was to be a time limit on benefits, usually 3-5 years over a lifetime, after which individuals were no longer eligible for cash assistance (states could exempt a proportion of their caseload from these requirements). The general idea was that time limits and work requirements would “break the cycle of poverty”; recipients would be motivated (read: forced) to work, and in doing so, would acquire the experience and confidence necessary for a bootstrap-esque transformation.

    There are several similarities between the bipartisan welfare reform movement of the 1990s and the general thrust of the education reform movement happening today. For example, there is the reliance on market-based mechanisms to “cure” longstanding problems, and the unusually strong liberal-conservative alliance of the proponents. Nevertheless, while calling education reform “the new welfare reform” might make a good sound bite, it would also take the analogy way too far.

    My intention here is not to draw a direct parallel between the two movements in terms of how they approach their respective problems (poverty/unemployment and student achievement), but rather in how we evaluate their success in doing so. In other words, I am concerned that the manner in which we assess the success or failure of education reform in our public debate will proceed using the same flawed and misguided methods that were used by many for welfare reform.

    READ MORE
  • To Understand The Impact Of Teacher-Focused Reforms, Pay Attention To Teachers

    Written on May 17, 2011

    You don’t need to be a policy analyst to know that huge changes in education are happening at the state and local levels right now – teacher performance pay, the restriction of teachers’ collective bargaining rights, the incorporation of heavily-weighted growth model estimates in teacher evaluations, the elimination of tenure, etc. Like many, I am concerned about the possible consequences of some of these new policies (particularly about their details), as well as about the apparent lack of serious efforts to monitor them.

    Our “traditional” gauge of “what works” – cross-sectional test score gains – is totally inadequate, even under ideal circumstances. Even assuming high quality tests that are closely aligned to what has been taught, raw test scores alone cannot account for changes in the student population over time and are subject to measurement error. There is also no way to know whether fluctuations in test scores (even fluctuations that are real) are the result of any particular policy (or lack thereof).
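
    To make the first point concrete, here is a toy simulation (the numbers are entirely made up): the school’s instruction is held fixed across two years, yet its cross-sectional average score rises simply because the mix of students changes.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Two subgroups with different average scores; instruction is identical
    # in both years, so any "gain" reflects composition alone.
    def cohort_mean(share_high, n=500):
        n_high = int(n * share_high)
        high = rng.normal(0.3, 1.0, n_high)      # higher-scoring subgroup
        low = rng.normal(-0.3, 1.0, n - n_high)  # lower-scoring subgroup
        return np.concatenate([high, low]).mean()

    year1 = cohort_mean(share_high=0.40)
    year2 = cohort_mean(share_high=0.55)
    print(round(year2 - year1, 3))  # an apparent "gain" with no real improvement
    ```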

    Needless to say, test scores can (and will) play some role, but I for one would like to see more states and districts commissioning reputable, independent researchers to perform thorough, longitudinal analyses of their assessment data (which would at least mitigate the measurement issues). Even so, there is really no way to know how these new, high-stakes test-based policies will influence the validity of testing data, and, as I have argued elsewhere, we should not expect large, immediate testing gains even if policies are working well. If we rely on these data as our only yardstick of how various policies are working, we will be getting a picture that is critically incomplete and potentially biased.

    What are the options? Well, we can’t solve all the measurement and causality issues mentioned above, but insofar as the policy changes are focused on teacher quality, it makes sense to evaluate them in part by looking at teacher behavior and characteristics, particularly in those states with new legislation. Here are a few suggestions.

    READ MORE
  • Revisiting The CREDO Charter School Analysis

    Written on May 2, 2011

    ** Also posted here on “Valerie Strauss’ Answer Sheet” in the Washington Post

    Most people involved in education policy know exactly what you mean when you refer to “the CREDO study.” I can’t prove this, but suspect it may be the most frequently mentioned research report over the past two years (it was released in 2009).

    For those who haven’t heard of it (or have forgotten), this report, done by the Center for Research on Education Outcomes (CREDO), which is based at Stanford University, was a comparison of charter schools and regular public schools in 15 states and the District of Columbia. Put simply, the researchers matched up real charter school students with fictional amalgamations of statistically similar students in the same area (the CREDO team called them “virtual twins”), and compared charter school students’ performance (in terms of test score gains) to that of their “twins.” The “take home” finding – the one that everybody talks about – was that, in 17 percent of the charter schools included in the analysis, students did better on the whole than their public school twins; in 37 percent they did worse; and in 46 percent there was no statistical difference. Results varied a bit by student subgroup and over time.
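
    For readers who want a rough sense of how a “virtual twin” comparison works, here is a deliberately simplified sketch. It is not CREDO’s actual procedure (which, among other things, draws comparison students from each charter’s own feeder schools and matches on prior test scores as well as demographics), and the file and column names are hypothetical.

    ```python
    import pandas as pd

    # Hypothetical student-level file: one year's test score gain plus the
    # traits used for matching. Column names are illustrative only.
    df = pd.read_csv("students.csv")
    traits = ["grade", "gender", "race_ethnicity", "lunch_status", "prior_score_band"]

    # The "virtual twin" for each charter student is the average gain of
    # observably similar non-charter students.
    twins = (df[df["charter"] == 0]
             .groupby(traits)["score_gain"]
             .mean()
             .rename("twin_gain")
             .reset_index())

    charter = df[df["charter"] == 1].merge(twins, on=traits, how="inner")
    print((charter["score_gain"] - charter["twin_gain"]).mean())
    ```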

    There are good reasons why this analysis is mentioned so often. For one thing, it remains the largest study of charter school performance to date, and despite some criticism that the “matching” technique biased charter effects downward, it was also a well done large-scale study (for a few other good multi-state charter studies, see here, here, and here). Nevertheless, as is so often the case, the manner in which its findings are discussed and understood sometimes reflects a few key errors of interpretation. Given that it still gets attention in major media outlets, as well as the fact that the CREDO team continues to release new state-specific reports (the latest one is from Pennsylvania), it makes sense to quickly clear up three of the most common misinterpretations.

    READ MORE
  • A List Of Education And Related Data Resources

    Written on March 1, 2011

    We frequently present quick analyses of data on this blog (and look at those done by others). As a close follower of the education debate, I often get the sense that people are hungry for high-quality information on a variety of different topics, but searching for these data can be daunting, which probably deters many people from trying.

    So, while I’m sure that many others have compiled lists of data resources relevant to education, I figured I would do the same, with a focus on more user-friendly sources.

    But first, I would be remiss if I didn’t caution you to use these data carefully. Almost all of the resources below have instructions or FAQs, most of them non-technical. Read them. Remember that improper or misleading presentation of data is one of the most counterproductive features of today’s education debates, and it occurs to the detriment of all.

    That said, here are a few key resources for education and other related quantitative data. The list is far from exhaustive, so feel free to leave comments and suggestions if you think I missed anything important.

    READ MORE
  • The Year In Research On Market-Based Education Reform

    Written on January 4, 2011

    ** Also posted here on “Valerie Strauss’ Answer Sheet” in the Washington Post.

    Race to the Top and Waiting for Superman made 2010 a banner year for the market-based education reforms that dominate our national discourse. By contrast, a look at the “year in research” presents a rather different picture for the three pillars of this paradigm: merit pay, charter schools, and using value-added estimates in high-stakes decisions.

    There will always be exceptions (especially given the sheer volume of reports generated by think tanks, academics, and other players), and one year does not a body of research make.  But a quick review of high-quality studies from independent, reputable researchers shows that 2010 was not a particularly good year for these policies.

    READ MORE
  • "No Comment" Would Have Been Better

    Written on November 9, 2010

    Bruce Baker is a professor at Rutgers University who writes an informative blog called School Finance 101.  He presented some descriptive analysis of New Jersey charter schools in a post, and ended up being asked to comment on the data by a reporter.  The same reporter dutifully asked the New Jersey Charter Schools Association (NJCSA) to comment on the analysis. 

    The NJCSA describes itself as “the leading statewide advocate for charter public schools in New Jersey and a principal source of public information about charter schools in the state.”  The organization issued the following response to Baker’s analysis:

    The New Jersey Charter Schools Association seriously questions the credibility of this biased data. Rutgers University Professor Bruce Baker is closely aligned with teachers unions, which have been vocal opponents of charter schools and have a vested financial interest in their ultimate failure.

    Baker is a member of the Think Tank Review Panel, which is bankrolled by the Great Lakes Center for Education Research and Practice. Great Lakes Center members include the National Education Association and the State Education Affiliate Associations in Illinois, Indiana, Michigan, Minnesota, Ohio and Wisconsin. Its chairman is Lu Battaglieri, the executive director of the Michigan Education Association.

    There are now thousands of children on waiting lists for charter schools in New Jersey. This demand shows parents want the option of sending their children to these innovative schools and are satisfied with the results.

    Note the stretch that they have to make to allege that Baker is “closely aligned” with teachers unions—he occasionally reviews papers for an organization that is partly funded by unions. There is no formal connection beyond that. Note also that the NJCSA statement “questions the credibility of [sic] this biased data”—meaning they doubt the credibility of data from the State of New Jersey, which Baker merely recasts as graphs and maps. There is not a shred of substance in this statement that addresses the data or Baker’s description of them. It’s pure guilt by association (and there’s not really even an association).

    READ MORE
  • Research Wars

    Written on September 20, 2010

    Though it is still weeks away, a Sept. 29 forum sponsored by the Economic Policy Institute and the National Education Policy Center has already sparked some interesting debate over at the National Journal. The event, centered on the recent book, Think Tank Research Quality: Lessons for Policy Makers, the Media and the Public, is an effort to separate "the junk research from the science."

    The crux of the debate is whether the recent explosion of self-published reports by various educational think tanks has helped or hindered the effort to improve the quality of educational research. (Full disclosure: The Albert Shanker Institute is often called a "think tank" and we frequently self-publish.) The push and pull of dueling experts and conflicting reports, say some, has turned education research into a political football—moved down the field by one faction, only to be punted all the way to the other end by a rival faction—each citing "research" as their guide.

    "My research says this works and that doesn’t," can always be countered by, "Oh yeah, well my research says that works and this doesn’t." There are even arguments about what "what works" means because, except for performance on standardized tests, our goals remain diverse, decentralized and subject to local control. As a result, public education is plagued by trial and error policies that rise and fall, district by district and state by state, like some sort of crazed popularity contest.

    READ MORE

