How is the value that a school adds shared between the students that it serves?


There’s a large body of literature on the learning gaps created when more advantaged students attend higher-quality schools (see also Darling-Hammond 1998). Previous analysis of Young Lives data has also explored whether gaps between more and less advantaged students are reduced or reinforced even within the same school. Both topics concern how an education system supports different groups of students differently, and what that means for educational inequalities at a national level.

Using Young Lives’ 2016-17 Ethiopia school survey data, it is possible to further explore whether schools reduce or reinforce learning gaps. Specifically, in this post I explore how learning progress varies according to a student’s prior achievement and how this is influenced by school context.

To set the scene: in the scatter plot below, each dot represents a school, plotted by its average maths score at the end of the year against its average ‘advantage’ [i]. There’s an obvious relationship between advantage and achievement, which we would expect. However, there’s also a fair amount of variation in school mean score even at lower levels of advantage (most school means fall between 400 and 500). It is also important to note that there will be substantial variation in the scores of individual students within each school.


Figure 1: School mean maths score against school mean advantage
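
For readers who want to reproduce this kind of plot, the sketch below shows one way it could be built from school-level data. It is a minimal illustration only: the file name and column names (`school_means.csv`, `school_mean_advantage`, `school_mean_maths`) are hypothetical placeholders, not the actual Young Lives variables.

```python
# Minimal sketch of a school-level scatter plot like Figure 1.
# File and column names are hypothetical placeholders, not Young Lives variables.
import pandas as pd
import matplotlib.pyplot as plt

# One row per school, with its mean end-of-year maths score and mean advantage.
schools = pd.read_csv("school_means.csv")

fig, ax = plt.subplots()
ax.scatter(schools["school_mean_advantage"], schools["school_mean_maths"], alpha=0.6)
ax.set_xlabel("School mean advantage (average student wealth index)")
ax.set_ylabel("School mean maths score (end of year)")
plt.show()
```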

The scale of the maths score is not linked directly to curriculum expectations. However, early attempts to anchor the scale show a pattern consistent with the idea of a small ‘elite’ (perhaps 2-4 percent) of students in Ethiopia performing at levels comparable to OECD norms, and a long tail of poorly performing students and schools. The latest Grade 8 National Learning Assessment (published in 2012) supports this general picture, concluding that 61% of students had obtained only ‘Below Basic’ maths competency (up one percentage point from the equivalent assessment in 2008).

But whether a particular school is more or less ‘effective’ cannot be read from the scatter plot: school achievement levels depend most of all on the student intake. As a first look at school effectiveness, therefore, we use a ‘value-added’ model that includes student test scores at both the beginning and end of the school year, and incorporates school advantage. This approach allows us to examine the relationship between a student’s prior achievement and their progress within the year, and how this varies according to school context (a school’s ‘advantage’ in this case).
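
To make the modelling idea concrete, here is a minimal sketch of one such value-added specification: a mixed model with a random intercept for each school (a simple measure of school ‘value-added’) and an interaction that lets the slope on the beginning-of-year score vary with school advantage. This is an illustration under assumed variable names (`wave1`, `wave2`, `advantage`, `school_id`), not the exact model used in the Young Lives analysis.

```python
# Sketch of a simple value-added model (illustrative, not the exact
# specification used in the analysis). wave1/wave2: beginning/end-of-year
# maths scores; advantage: school mean wealth index; school_id: school code.
import pandas as pd
import statsmodels.formula.api as smf

students = pd.read_csv("student_scores.csv")  # hypothetical student-level file

# The random intercept gives each school its own 'value-added' after
# conditioning on prior score; wave1 * advantage lets progress vary
# with school context.
model = smf.mixedlm(
    "wave2 ~ wave1 * advantage",
    data=students,
    groups=students["school_id"],
).fit()
print(model.summary())
```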

What we find is that the highest achieving schools are not always the most effective. Nor are the most effective schools found only in the most advantaged areas. These are useful but unsurprising insights from a value-added approach. What is more interesting is how ‘effectiveness’ and ‘advantage’ combine to produce what appear to be quite different outcomes for students. I have pulled out four example schools in Figure 2, with a sketch of one way such a grouping could be constructed following the figure (note throughout that we are dealing with a pro-poor sample of students in predominantly government schools, so ‘more advantaged’ doesn’t get anywhere near the extreme of the population):

  1. A more effective school in a more advantaged area (the solid blue line)

  2. A more effective school in a less advantaged area (the solid red line)

  3. A less effective school in a more advantaged area (the dashed blue line)

  4. A less effective school in a less advantaged area (the dashed red line)


Figure 2: Four school types depending on ‘effectiveness’ and ‘advantage’
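
Continuing from the model sketch above, one way such a grouping could be constructed is to median-split schools on their estimated random intercept (‘effectiveness’) and their mean advantage. This is purely illustrative: the post’s four examples are individual schools, not group averages.

```python
# Sketch: classify schools into the four types via median splits on estimated
# 'effectiveness' and mean advantage. Continues from the model sketch above
# (`model`, `students`); purely illustrative.
import pandas as pd

# Each school's estimated random intercept, used here as its 'effectiveness'.
effects = pd.Series(
    {sid: ranef["Group"] for sid, ranef in model.random_effects.items()},
    name="effect",
)
school_adv = students.groupby("school_id")["advantage"].first()

schools = pd.concat([effects, school_adv], axis=1)
schools["more_effective"] = schools["effect"] >= schools["effect"].median()
schools["more_advantaged"] = schools["advantage"] >= schools["advantage"].median()

# The four combinations correspond to the four example school types.
print(schools.groupby(["more_effective", "more_advantaged"]).size())
```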

As a reference, for the average student in the average school, the Wave 2 (end of year) test score would be 536, and this student would sit at an x-axis value of 0 (shown as the orange circle). The first thing to note is that the relationship between Wave 1 (beginning of year) and Wave 2 (end of year) test scores is different for each example school. Second, differences between the example schools are more pronounced at higher rather than lower Wave 1 scores. Third, less effective schools tend to be more equalising (the slopes of the dashed lines are flatter, suggesting that lower achieving students in these schools ‘catch up’ with their peers more than the average), while more effective schools tend to be less ‘equalising’ (steeper slopes). But what does this mean for educational inequalities and possibilities for progress ‘through the middle’?
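
To see where slopes like those in Figure 2 come from, the sketch below plots the model’s predicted Wave 2 score over a grid of Wave 1 scores at two advantage levels, using the fixed effects from the earlier sketch. Again illustrative: Figure 2 itself plots four specific example schools, which also differ in their estimated school effects.

```python
# Sketch: predicted end-of-year score across beginning-of-year scores at a
# higher and a lower advantage level, using fixed effects from the model above.
# Illustrative only -- Figure 2 plots four specific example schools.
import numpy as np
import matplotlib.pyplot as plt

grid = np.linspace(students["wave1"].min(), students["wave1"].max(), 50)
b = model.fe_params  # Intercept, wave1, advantage, wave1:advantage

for adv, colour, label in [
    (students["advantage"].quantile(0.75), "blue", "more advantaged"),
    (students["advantage"].quantile(0.25), "red", "less advantaged"),
]:
    pred = (b["Intercept"] + b["wave1"] * grid
            + b["advantage"] * adv + b["wave1:advantage"] * grid * adv)
    plt.plot(grid, pred, color=colour, label=label)

plt.xlabel("Wave 1 (beginning of year) score")
plt.ylabel("Predicted Wave 2 (end of year) score")
plt.legend()
plt.show()
```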

Take a look at the two extremes: a less effective school in a more advantaged area (dashed blue line) and a more effective school in a less advantaged area (solid red line). For a student with a low score at the beginning of the year, the school context dominates and students in the more advantaged area do better irrespective of school effectiveness (i.e. both blue lines are above both red lines).

For a student with a higher score at the beginning of the year, however, the solid red line is far steeper and rises above the blue dashed line. School effectiveness has overcome school advantage for these students. It’s almost as if the higher school effectiveness of the less advantaged school is driven by the advancement of higher performing students, not by the advancement of all students. In contrast, the less ‘effective’ school in a more advantaged area tends to be far more equalising for students with lower scores at the start of the year.

This is simply a descriptive analysis. It should not be interpreted as saying that moving a student from one school to the other will cause a change in their later score. Nonetheless, the fact that the relationship between beginning and end of year scores is related to school context is a potentially interesting finding, since it adds a dimension to the point estimate of school effectiveness, and brings into question how we view more and less effective schools.

These relationships raise further questions. If schools in more advantaged areas are more ‘equalising’ and schools in less advantaged areas are less ‘equalising’, this can widen gaps in student outcomes. To what extent is this a ‘real’ phenomenon, and to what extent does it reflect teachers doing something different in different school types? What are schools doing differently for higher achievers in less advantaged areas? What are the links to student differences within schools, e.g. language of instruction and individual child wealth? Forthcoming analysis of the Young Lives 2016-17 Ethiopia school survey data will seek to answer some of these questions.


[i] ‘Advantage’ is measured as the school’s average student wealth index score, based on a set of durable assets.