Education Research

  • Lessons And Directions From The CREDO Urban Charter School Study

    Written on March 26, 2015

    Last week, CREDO, a Stanford University research organization that focuses mostly on charter schools, released an analysis of the test-based effectiveness of charter schools in “urban areas” – that is, charters located in cities within 42 urban areas throughout 22 states. The math and reading testing data used in the analysis are from the 2006-07 to 2010-11 school years.

    In short, the researchers find that, across all areas included, charters’ estimated impacts on test scores, vis-à-vis the regular public schools to which they are compared, are positive and statistically discernible. The magnitude of the overall estimated effect is somewhat modest in reading, and larger in math. In both cases, as always, results vary substantially by location, with very large and positive impacts in some places and negative impacts in others.

    These “horse race” charter school studies are certainly worthwhile, and their findings have useful policy implications. In another sense, however, the public’s relentless focus on the “bottom line” of these analyses is tantamount to continually asking a question ("do charter schools boost test scores?") to which we already know the answer (some do, some do not). This approach is somewhat inconsistent with the whole idea of charter schools, and with harvesting their largest potential contribution to U.S. public education. But there are also a few more specific issues and findings in this report that merit a bit of further discussion, and we’ll start with those.

    READ MORE
  • Turning Conflict Into Trust Improves Schools And Student Learning

    Written on March 3, 2015

    Our guest author today is Greg Anrig, vice president of policy and programs at The Century Foundation and author of Beyond the Education Wars: Evidence That Collaboration Builds Effective Schools.

    In recent years, a number of studies (discussed below; also see here and here) have shown that effective public schools are built on strong collaborative relationships, including those between administrators and teachers. These findings have helped to accelerate a movement toward constructing such partnerships in public schools across the U.S. However, the growing research and expanding innovations aimed at nurturing collaboration have largely been neglected by both mainstream media and the policy community.

    Studies that explore the question of what makes successful schools work never find a silver bullet, but they do consistently pinpoint commonalities in how those schools operate. The University of Chicago's Consortium on Chicago School Research produced the most compelling research of this type, published in a book called Organizing Schools for Improvement. The consortium gathered demographic and test data, and conducted extensive surveys of stakeholders, in more than 400 Chicago elementary schools from 1990 to 2005. That treasure trove of information enabled the consortium to identify with a high degree of confidence the organizational characteristics and practices associated with schools that produced above-average improvement in student outcomes.

    The most crucial finding was that the most effective schools, based on test score improvement over time after controlling for demographic factors, had developed an unusually high degree of "relational trust" among their administrators, teachers, and parents.

    READ MORE
  • The Debate And Evidence On The Impact Of NCLB

    Written on February 17, 2015

    There is currently a flurry of debate focused on the question of whether “NCLB worked.” This question, which surfaces regularly in the education field, is particularly salient in recent weeks, as Congress holds hearings on reauthorizing the law.

    Any time there is a spell of “did NCLB work?” activity, one can hear and read numerous attempts to use simple NAEP changes to assess its impact. Individuals and organizations, including both supporters and detractors of the law, attempt to make their cases by presenting trends in scores, parsing subgroup estimates, and so on. These efforts, though typically well-intentioned, do not, of course, tell us much of anything about the law’s impact. One can use simple, unadjusted NAEP changes to prove or disprove just about any policy argument, because they are not valid evidence of an intervention's effects. There’s more to policy analysis than subtraction.
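
    To see why, consider a toy sketch with hypothetical numbers (these are not actual NAEP results): a simple before/after subtraction bundles together whatever trend was already underway with any effect of the policy, so the same observed change is consistent with very different conclusions about impact.

```python
# Toy illustration with made-up numbers (not actual NAEP data): the same
# observed change in average scores is consistent with very different
# policy effects, depending on what would have happened anyway.

score_before = 235.0
score_after = 241.0
observed_change = score_after - score_before  # what simple subtraction gives: +6

# Two hypothetical counterfactuals (scores in the absence of the policy):
scenarios = {
    "scores were already rising": 240.0,  # counterfactual score_after
    "scores would have been flat": 235.0,
}

for label, counterfactual_after in scenarios.items():
    implied_effect = score_after - counterfactual_after
    print(f"If {label}: observed change = {observed_change:+.0f}, "
          f"implied policy effect = {implied_effect:+.0f}")
```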

    But it’s not just the inappropriate use of evidence that makes these “did NCLB work?” debates frustrating and, often, unproductive. It is also the fact that NCLB really cannot be judged in simple, binary terms. It is a complex national policy, with considerable inter-state variation in design and implementation, and with various types of effects, intended and unintended. This is not a situation that lends itself to clear-cut yes/no answers to the “did it work?” question.

    READ MORE
  • The Increasing Academic Ability Of New York Teachers

    Written on February 12, 2015

    For many years now, a common talking point in education circles has been that U.S. public school teachers are disproportionately drawn from the “bottom third” of college graduates, and that we have to “attract better candidates” in order to improve the distribution of teacher quality. We discussed the basis for this “bottom third” claim in this post, and I will not repeat the points here, except to summarize that “bottom third” teachers (based on SAT/ACT scores) were indeed somewhat overrepresented nationally, although the magnitudes of such differences varied by cohort and other characteristics.

    A very recent article in the journal Educational Researcher addresses this issue head-on (a full working version of the article is available here). It is written by Hamilton Lankford, Susanna Loeb, Andrew McEachin, Luke Miller and James Wyckoff. The authors analyze the SAT scores of New York State teachers over a 25-year period (between 1985 and 2009). Their main finding is that these SAT scores, after a long-term decline, improved between 2000 and 2009 among all certified teachers, with the increases being especially large among incoming (new) teachers and among teachers in high-poverty schools. For example, the proportion of incoming New York teachers whose SAT scores were in the top third increased by more than 10 percentage points, while the proportion with scores in the bottom third decreased by a similar amount (these figures define “top third” and “bottom third” in terms of New York State public school students who took the SAT between 1979 and 2008).
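
    As a rough sketch of how a “top third”/“bottom third” classification of this kind might be computed (the data below are made up, and the paper's actual procedure is more involved), one can take tercile cutoffs from a reference distribution of student SAT scores and then classify teachers' scores against those cutoffs:

```python
# A minimal sketch, using made-up data, of classifying teachers' SAT scores
# into terciles defined by a reference distribution of student SAT scores.
# (Illustrative only; the study's actual samples and procedure differ.)
import numpy as np

rng = np.random.default_rng(0)
student_sat = rng.normal(1000, 180, size=50_000).clip(400, 1600)  # hypothetical student reference distribution
teacher_sat = rng.normal(1050, 170, size=5_000).clip(400, 1600)   # hypothetical teacher scores

lower_cut, upper_cut = np.percentile(student_sat, [100 / 3, 200 / 3])

share_top = np.mean(teacher_sat >= upper_cut)
share_bottom = np.mean(teacher_sat <= lower_cut)
print(f"Share of (hypothetical) teachers in top third:    {share_top:.1%}")
print(f"Share of (hypothetical) teachers in bottom third: {share_bottom:.1%}")
```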

    This is an important study that bears heavily on the current debate over improving the teacher labor supply, and there are a few important points about it worth discussing briefly.

    READ MORE
  • Feeling Socially Connected Fuels Intrinsic Motivation And Engagement

    Written on November 20, 2014

    Our "social side of education reform" series has emphasized that teaching is a cooperative endeavor, and as such is deeply influenced by the quality of a school's social environment -- i.e., trusting relationships, teamwork and cooperation. But what about learning? To what extent are dispositions such as motivation, persistence and engagement mediated by relationships and the social-relational context?

    This is, of course, a very complex question, which can't be addressed comprehensively here. But I would like to discuss three papers that provide some important answers. In terms of our "social side" theme, the studies I will highlight suggest that efforts to improve learning should include and leverage social-relational processes, such as how learners perceive (and relate to) -- how they think they fit into -- their social contexts. Finally, this research, particularly the last paper, suggests that translating this knowledge into policy may be less about top-down, prescriptive regulations and more about what Stanford psychologist Gregory M. Walton has called "wise interventions" -- i.e., small but precise strategies that target recursive processes (more below).

    The first paper, by Lucas P. Butler and Gregory M. Walton (2013), describes the results of two experiments testing whether the perceived collaborative nature of an activity that was done individually would cause greater enjoyment of and persistence on that activity among preschoolers.

    READ MORE
  • Building And Sustaining Research-Practice Partnerships

    Written on October 1, 2014

    Our guest author today is Bill Penuel, professor of educational psychology and learning sciences at the University of Colorado Boulder. He leads the National Center for Research in Policy and Practice, which investigates how school and district leaders use research in decision-making. Bill is co-Principal Investigator of the Research+Practice Collaboratory (funded by the National Science Foundation) and of a study about research use in research-practice partnerships (supported by the William T. Grant Foundation). This is the second of two posts on research-practice partnerships - read part one here; both posts are part of The Social Side of Education Reform series.

    In my first post on research-practice partnerships, I highlighted the need for partnerships and pointed to some potential benefits of long-term collaborations between researchers and practitioners. But how do you know when an arrangement between researchers and practitioners is a research-practice partnership? Where can people go to learn about how to form and sustain research-practice partnerships? Who funds this work?

    In this post I answer these questions and point to some resources researchers and practitioners can use to develop and sustain partnerships.

    READ MORE
  • The Superintendent Factor

    Written on September 16, 2014

    One of the more visible manifestations of what I have called “informal test-based accountability” -- that is, how testing results play out in the media and public discourse -- is the phenomenon of superintendents, particularly big city superintendents, making their reputations based on the results during their administrations.

    In general, big city superintendents are expected to promise large testing increases, and their success or failure is judged to no small extent on whether those promises are fulfilled. Several superintendents almost seem to have built entire careers on a few (misinterpreted) points in proficiency rates or NAEP scale scores. This particular phenomenon, in my view, is rather curious. For one thing, any district leader will tell you that many of their core duties, such as improving administrative efficiency, communicating with parents and the community, and strengthening the district's finances, might have little or no impact on short-term testing gains. In addition, even those policies that do have such an impact often take many years to show up in aggregate results.

    In short, judging superintendents based largely on the testing results during their tenures seems misguided. A recent report issued by the Brown Center at Brookings, and written by Matt Chingos, Grover Whitehurst and Katharine Lindquist, adds a little bit of empirical insight to this viewpoint.

    READ MORE
  • Why Teachers And Researchers Should Work Together For Improvement

    Written on September 4, 2014

    Our guest author today is Bill Penuel, professor of educational psychology and learning sciences at the University of Colorado Boulder. He leads the National Center for Research in Policy and Practice, which investigates how school and district leaders use research in decision-making. This is the first of two posts on research-practice partnerships; both are part of The Social Side of Education Reform series.

    Policymakers are asking a lot of public school teachers these days, especially when it comes to the shifts in teaching and assessment required to implement new, ambitious standards for student learning. Teachers want and need more time and support to make these shifts. A big question is: What kinds of support and guidance can educational research and researchers provide?

    Unfortunately, that question is not easy to answer. Most educational researchers spend much of their time answering questions that are of more interest to other researchers than to practitioners.  Even if researchers did focus on questions of interest to practitioners, teachers and teacher leaders need answers more quickly than researchers can provide them. And when researchers and practitioners do try to work together on problems of practice, it takes a while for them to get on the same page about what those problems are and how to solve them. It’s almost as if researchers and practitioners occupy two different cultural worlds.

    READ MORE
  • Research And Policy On Paying Teachers For Advanced Degrees

    Written on September 2, 2014

    There are three general factors that determine most public school teachers’ base salaries (which are usually laid out in a table called a salary schedule). The first is where they teach; districts vary widely in how much they pay. The second factor is experience. Salary schedules normally grant teachers “step raises” or “increments” each year they remain in the district, though these raises end at some point (when teachers reach the “top step”).

    The third typical factor that determines teacher salary is their level of education. Usually, teachers receive a permanent raise for acquiring additional education beyond their bachelor’s degree. Most commonly, this means a master’s degree, which roughly half of teachers have earned (though most districts award raises for accumulating a certain number of credits towards a master’s and/or a Ph.D., and for getting a Ph.D.). The raise for receiving a master’s degree varies, but just to give an idea, it is, on average, about 10 percent over the base salary of bachelor’s-only teachers.
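
    To make the mechanics concrete, here is a minimal sketch of how such a schedule lookup might work, using entirely hypothetical figures (not any actual district's schedule), with step raises by experience and a roughly 10 percent lane premium for a master's degree:

```python
# A minimal, purely illustrative salary schedule (hypothetical figures,
# not any actual district's): base pay is determined by years of
# experience ("step") and education level ("lane").

BASE_BY_STEP = {0: 42_000, 1: 43_500, 2: 45_000, 3: 46_500, 4: 48_000, 5: 49_500}
TOP_STEP = max(BASE_BY_STEP)  # step raises stop once a teacher reaches the top step

# Lane premiums, expressed as multipliers over the bachelor's-only base
# (the ~10 percent master's premium mirrors the average cited above).
LANE_MULTIPLIER = {"BA": 1.00, "BA+30": 1.05, "MA": 1.10, "PhD": 1.18}

def base_salary(years_in_district: int, lane: str) -> float:
    """Look up base salary from experience step and education lane."""
    step = min(years_in_district, TOP_STEP)
    return BASE_BY_STEP[step] * LANE_MULTIPLIER[lane]

print(base_salary(3, "BA"))  # 46500.0
print(base_salary(3, "MA"))  # 51150.0 -- about 10 percent more
print(base_salary(9, "MA"))  # 54450.0 -- capped at the top step
```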

    This practice of awarding raises for teachers who earn master’s degrees has come under tremendous fire in recent years. The basic argument is that these raises are expensive, but that having a master’s degree is not associated with test-based effectiveness (i.e., is not correlated with scores from value-added models of teachers’ estimated impact on their students’ testing performance). Many advocates argue that states and districts should simply cease giving teachers raises for advanced degrees, since, they say, it makes no sense to pay teachers for a credential that is not associated with higher performance. North Carolina, in fact, passed a law last year ending these raises, and there is talk of doing the same elsewhere.

    READ MORE
  • A Quick Look At The ASA Statement On Value-Added

    Written on August 26, 2014

    Several months ago, the American Statistical Association (ASA) released a statement on the use of value-added models in education policy. I’m a little late getting to this (and might be repeating points that others made at the time), but I wanted to comment on the statement, not only because I think it's useful to have ASA add their perspective to the debate on this issue, but also because their statement seems to have become one of the staple citations for those who oppose the use of these models in teacher evaluations and other policies.

    Some of these folks claimed that the ASA supported their viewpoint – i.e., that value-added models should play no role in accountability policy. I don’t agree with this interpretation. To be sure, the ASA authors described the limitations of these estimates, and urged caution, but I think that the statement rather explicitly reaches a more nuanced conclusion: That value-added estimates might play a useful role in education policy, as one among several measures used in formal accountability systems, but this must be done carefully and appropriately.*

    Much of the statement puts forth the standard, albeit important, points about value-added (e.g., moderate stability between years/models, potential for bias, etc.). But there are, from my reading, three important takeaways that bear on the public debate about the use of these measures, which are not always so widely acknowledged.

    READ MORE
