Econintersect: Last week Stanford University’s Center for Research on Education Outcomes (CREDO) issued its Charter School Growth and Replication report. The report specifically criticized:
(1) the lack of consistency in Charter School performance;
(2) the “rocky” early years experienced by many Charter Schools; and
(3) the inconsistent replication of Charter Schools within the same school districts.
This week The Center for Education Reform issued a pointed criticism of the Stanford study, saying it failed to recognize the wide variation in parameters across the nation’s Charter Schools, even within a single state, which makes generalized conclusions invalid.
Here are the 13 major findings from the Stanford report:
1. It is possible to organize a school to be excellent on Day One. New schools do not universally struggle in their early years; in fact, a surprising proportion in each gradespan produce strong academic progress from the start. Interestingly, the attributes of a school — urban, high poverty or high minority — have no relation to the performance of the school. Based on the evidence, there appears to be no structural “new school” phenomenon of wobbly performance for several years.
2. The initial signals of performance are predictive of later performance. We use the distribution of schools’ value-add for all schools in each of our included states, divided into quintiles, to map an individual charter school as low performing (Quintile 1), high performing (Quintile 5), or in between. For middle and high schools, we can obtain an initial signal of performance at the end of the first year for a new school, since their enrolled students have prior test scores. The earliest we can measure an elementary school’s quality is in the second year (since it takes two years to create a growth measure).
Taking the first available performance measure and using it to predict one-year increments going forward, 80 percent of schools in the bottom quintile of performance remain low performers through their fifth year. Additionally, 94 percent of schools that begin in the top quintile remain there over time.
If we wait until the third year to start the predictions (i.e., use two growth periods as the basis for setting the initial performance for the subsequent conditional probabilities), the patterns are even stronger: 89 percent of low performing schools remain low performing and 97 percent of all the high flyers persist at the top of the distribution.
Only the schools in the 2nd quintile show any substantial pattern of movement, with half of the schools moving to a higher quintile (mostly to the 3rd) and half remaining in the bottom two quintiles.
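To make the mechanics behind finding 2 concrete, here is a minimal Python sketch of the quintile mapping and conditional persistence calculation described above. The table layout, column names and synthetic data are illustrative assumptions; CREDO’s actual data and code are not public, and real schools show the 80 to 94 percent persistence reported above rather than the roughly 20 percent a random draw produces.

```python
import numpy as np
import pandas as pd

# Illustrative input: one row per school-year, with "growth" holding the
# school's value-added estimate for that year (synthetic data).
rng = np.random.default_rng(0)
schools = pd.DataFrame({
    "school": np.repeat(np.arange(500), 5),
    "year": np.tile(np.arange(1, 6), 500),
    "growth": rng.normal(size=2500),
})

# Map each school-year into statewide quintiles of value-add (1 = lowest,
# 5 = highest), mirroring the report's Quintile 1 / Quintile 5 labels.
schools["quintile"] = schools.groupby("year")["growth"].transform(
    lambda g: pd.qcut(g, 5, labels=False) + 1
)

# Conditional persistence: of schools starting in each quintile in year 1,
# what share is still in that same quintile by year 5?
first = schools.loc[schools["year"] == 1].set_index("school")["quintile"]
last = schools.loc[schools["year"] == 5].set_index("school")["quintile"]
print((first == last).groupby(first).mean())
```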
3. Substantial improvement over time is largely absent from middle schools, multi-level schools and high schools. Only elementary schools show an upward pattern of growth if they start out in the lower two quintiles. Elementary schools showed a greater tendency than other grade spans to be strong in one subject and weak in the other. In math, 80 percent of initially lowest-performing elementary schools showed enough improvement to move themselves out of the bottom of the distribution; from the 2nd quintile the share was about 40 percent. In reading, the rise took longer to manifest, leaving about one-quarter of the schools in the lowest quintiles. About 40 percent of the 2nd quintile elementary schools improved into higher quintiles. The elementary schools in the higher quintiles behaved similarly to other schools.
4. The process of morphing into charter management organizations (CMOs) can be successfully managed. We were able to observe 21 new CMOs as they moved from running a single school to operating as a CMO. Most of the CMOs in operation today began before consistent accountability testing was adopted, but we are able to observe the “birth” of 21 CMOs during our study window. Due to small numbers, we are hesitant to place too much weight on the findings, but they present interesting patterns that merit discussion. Of these, 14 of the 21 have flagship schools with quality in the top two quintiles, with the notable counterpoint that 7 of the 21 flagships had performance that placed them in the bottom three quintiles. The math performance of the flagship school as the first replications occurred held steady or improved in 14 of the 20 nascent CMOs for which we have pre- and post-replication data. In reading, 11 of the 21 new CMOs held the flagship performance steady or posted improvements.
5. CMOs on average are pretty average. The growing focus and importance of CMOs in education reform discussions leads to questions about their contributions in the aggregate. To be included in our CMO impact analysis, an operator needed to have at least three schools operating in our participating states during our study period. Across the 25 states in the study, 167 operating CMOs were identified for the years 2007 – 2011. CMOs on average are not dramatically better than non-CMO schools in terms of their contributions to student learning. The difference in learning for CMOs compared to the traditional public school alternatives is -0.005 standard deviations in math and 0.005 in reading; both values are statistically significant, but obviously not materially different from the comparison.
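The caveat in finding 5 (statistically significant but not materially different) is easy to illustrate: with the enormous student counts a 25-state study implies, even a 0.005 standard deviation gap clears conventional significance thresholds. The sample size below is an assumption chosen purely for demonstration, not a figure from the report.

```python
from math import sqrt
from scipy import stats

effect = 0.005    # difference in standard deviation units (from the report)
n = 1_000_000     # assumed student records per group; illustrative only

# Two-sample z-test for a mean difference of `effect` SDs with unit variance.
se = sqrt(1 / n + 1 / n)            # standard error of the difference in means
z = effect / se
p = 2 * stats.norm.sf(abs(z))       # two-sided p-value
print(f"z = {z:.2f}, p = {p:.4g}")  # z ≈ 3.54, p ≈ 0.0004: "significant"
```

The point is that a vanishingly small effect becomes “statistically significant” purely through sample size, which is exactly the report’s caveat.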
6. CMOs post superior results with historically disadvantaged student subgroups. They produce stronger academic gains for students of color and students in poverty than those students would have realized either in traditional public schools (TPS) or, in many categories, in independent charter schools.
7. The real story of CMOs is found in their range of quality. The measures of aggregate performance, however, mask considerable variation across CMOs in terms of their overall quality and impact. Across the 167 CMOs, 43 percent outpace the learning gains of their local TPS in reading; 37 percent of CMOs do so in math. These proportions are more positive than those seen for charter schools as a whole, where 17 percent posted better results. However, about a third (37 percent) of CMOs have portfolio average learning gains that are significantly worse in reading, and half lag their TPS counterparts in math.
Interestingly, across the range of performance, the range of quality around the CMO’s portfolio average is the same, regardless of the nominal value of the average. This finding holds regardless of the size or age of the portfolio.
8. CMO-affiliated new schools on average deliver larger learning gains than independent charter schools. However, both types of new charter schools still lag the learning gains in the average TPS. These effects were consistent for reading and math.
9. Two thirds of CMOs start new schools that are of the same or slightly better quality as the existing portfolio. This demonstrates the feasibility of replication, but also highlights that the resulting schools for the most part still mirror the overall distribution in CMO quality. The finding takes on more importance when considered in concert with the fact that the lowest third of CMOs replicate more rapidly than middling or high-performing CMOs. Of the 245 new schools that were started by CMOs over the course of this study, 121 (49 percent) were begun by organizations whose average performance was in the bottom third of the range. Another 47 schools (19 percent) were started by CMOs in the middle third of the quality distribution. The final 77 new schools (31 percent) were opened by CMOs in the top third of the distribution. This finding highlights the need to be vigilant about which CMOs replicate; CMOs with high average learning gains remain high performers as they grow, and CMOs with poor results remain inferior.
10. Few observable attributes of CMOs provide reliable signals of performance. We sought to identify attributes of CMOs that were associated with the overall quality of their portfolio. For the most part, the factors we examined had no value as external signals of CMO performance. Specifically, there is no evidence to suggest that maturity, size (by either number of schools or total enrollment) or the spatial proximity of the schools in the network have any significant relationship to the overall quality of the CMO portfolio. Operating in multiple states dampened a CMO’s results on average. One bright signal was found in being a recipient of Charter School Growth Fund support; those CMOs that were supported by the Charter School Growth Fund had significantly higher learning gains than other CMOs or independent charter schools.
11. CMOs that are driving to scale show that scale and quality are not mutually assured. Some CMO networks have grown to the point that some of their member schools have in turn replicated in their local communities; we refer to these federated entities as super-networks. Performance as measured by student academic growth differs strikingly across the four super-networks we identified. Strong and positive learning gains were realized for students in the Uncommon Schools and KIPP super-networks. The other two, Responsive Education Solutions (ResponsiveEd) and White Hat Management, had less favorable results.
12. Some CMOs get better over time. Besides replication, the alternate path to higher quality results is to improve all schools within the CMO portfolio. Tracking how the portfolio-wide average student learning gain in each CMO changes over time reveals the proportions of CMOs that have positive, negative or flat trajectories to their performance. Using statistical tests of differences, the trend analysis showed that about a third of CMOs have significant and positive growth in performance over time. In one quarter of CMOs, the average learning gain declines significantly over time. The rest of the CMOs remain stable. These findings illustrate that it is possible for CMOs to evolve their performance to higher levels. At the same time, the portfolio growth findings show that the largest share of CMOs do not change much from their initial levels of performance, which again returns to the underlying range in quality.
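Finding 12’s trend classification lends itself to a short sketch: fit a linear trend to each CMO’s portfolio-average gains and label the trajectory by the significance of the slope. The function name, data and the choice of a simple regression test are assumptions for illustration; the report says only that statistical tests of differences were used.

```python
import numpy as np
from scipy import stats

def classify_trend(years, gains, alpha=0.05):
    """Label a CMO's trajectory by the significance of a linear trend
    in its portfolio-average student learning gains."""
    slope, intercept, r, p, se = stats.linregress(years, gains)
    if p < alpha:
        return "improving" if slope > 0 else "declining"
    return "flat"

# Hypothetical example: one CMO's portfolio-average gains, 2007-2011.
years = np.array([2007, 2008, 2009, 2010, 2011])
gains = np.array([-0.02, -0.01, 0.00, 0.02, 0.03])
print(classify_trend(years, gains))  # prints "improving" for this series
```

Under a scheme like this, roughly a third of CMOs would land in the “improving” bucket, a quarter in “declining,” and the remainder “flat,” matching the proportions the report describes.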
13. The average student in an Education Management Organization (EMO) posted significantly more positive learning gains than students in CMOs, independent charter schools or the traditional public school comparisons. EMO results were also relatively more positive for black and Hispanic students and English Language Learners.
The somewhat mixed review by Stanford was criticized by The Center for Education Reform in the following press release, dated 05 February 2013:
A national research study across 23 states and DC assessing charter school performance over time makes erroneous conclusions about the impact of charter schools on students, while ignoring critical distinctions among state proficiency standards and the components of each state’s widely differing charter school laws.
“It is hard to believe that year-after-year, smart, well-intentioned researchers believe they can make national conclusions about charter school performance using uneven data, flawed definitions of poverty and ignoring variations in state charter school laws,” said Jeanne Allen, president of The Center for Education Reform (CER).
Among the two-dozen states that were the subject of study for Stanford University’s Center for Research on Education Outcomes (CREDO) in its Charter School Growth and Replication report released last week, there are more than two-dozen varieties of charter law:
• Fewer than half of all states studied — ten plus the District of Columbia — have authorizers that are independent from existing education entities, a notable difference in laws and outcomes;
• Nine states have only either school districts or the state board of education authorizing charter schools, compromising school freedoms;
• Three states in the report do not permit flexibility from rules and regulations;
• Eleven states guarantee less than 75% of average per pupil funding; and
• Six states limit teacher freedom from collective bargaining agreements.
All the states in the study have vastly different ways of assessing student performance. For example, charter schools in Washington, DC, are evaluated on criteria that ineffectively measure growth; however, the independent DC Public Charter School Board uses the city’s assessment and combines it with other data to create its own performance metrics, which analyze school performance over time and provide a clear, unambiguous data set from which to judge the quality of DC charter schools.
By looking at the quality of a charter school law, it is possible to predict the quality of the charter schools in that area. States with independent, multiple authorizers that provide their schools a high degree of freedom for operations and financial management, and that ensure equitable funding, have shown and will continue to show progress among students, while states that do not afford such autonomy and freedom have less successful schools, as evidenced in CER’s 2013 Charter School Laws Across the States: Ranking and Scorecard.
Thus, aggregating states into one research universe and drawing conclusions about their relative achievement, in addition to relying on a flawed virtual twin methodology, is highly misleading and ignores the so-called “gold standard” of academic research that compares individual student achievement on identical measures. Stanford University economist Caroline Hoxby has reported additional insights into the problems of the CREDO study and has pointed out numerous inconsistencies dating from when CREDO first deployed its unique methodology to make conclusions about student achievement.
The Center also solicited comments from other researchers; while not on the record, those comments informed the reports on CREDO that the Center has issued over the past three years.
The Center for Education Reform has a history of disagreement with the Stanford University Center for Research on Education Outcomes regarding assessments of Charter Schools. The supporters of The Center for Education Reform are listed here. Econintersect did not find a listing of support at the CREDO website.
Sources:
- Charter School Growth and Replication (Stanford University Center for Research on Education Outcomes, 30 January 2013)
- National Charter Research Misfires on Charter Schools – CREDO report ignores wide variation in state assessments and state law (Press release, The Center for Education Reform, 05 February 2013)