
Assessing the Variance in Pupil Attainment: How Important is the School Attended?

Published online by Cambridge University Press:  01 January 2020

David Wilkinson*
Affiliation:
UCL and NIESR
Alex Bryson
Affiliation:
UCL, NIESR and IZA
Lucy Stokes
Affiliation:
NIESR
*
corresponding author, e-mail: d.wilkinson@ucl.ac.uk

Abstract

We explore the variation in pupil attainment at the end of secondary schooling in England. The paper links data on all schools and all pupils within these schools to analyse the role of the school in accounting for this variation. We analyse a number of different indicators of pupil attainment including value added between the end of primary and secondary schooling and attainment levels at the end of secondary schooling. We examine indicators that were the focus of the school accounting framework as well as other indicators that were not directly part of how schools were assessed. We show that schools account for a minority of the variance in pupil attainment, and the extent of the variation accounted for by the school is sensitive to the measure of pupil attainment used. In addition, we find that the majority of the explained school-level variance in attainment is related to school composition. However, most of the variance in attainment remains unexplained, raising questions about what other factors contribute to the variation in school performance.

Type
Research Articles
Copyright
Copyright © 2018 National Institute of Economic and Social Research

1. Introduction

There is academic and policy concern regarding pupil attainment in England's schools. These concerns reflect perceptions of high variance in educational attainment both across and within schools, even when accounting for the quality of pupil intake. Such concerns have been evident for some time and are highlighted by the triennial publication of results from the Programme for International Student Assessment (PISA), which in recent years have indicated little or no change in pupil performance in England relative to the OECD average (OECD, 2014 and 2016).

The aim of this paper is to explore the variation in pupil attainment at the end of secondary schooling in England between 2009/10 and 2015/16. Concern about low pupil attainment has focused on the roles played by individual characteristics, family and socio-economic characteristics, community and societal characteristics, and educational experience (Sammons, 2007). The main focus of this paper is the role of the school in explaining variation in pupil attainment in English secondary schools over this period. We also explore pupil characteristics and the differing mix of pupils within schools.

Recent developments in schools in England have placed a sharper focus on the quality of delivery, through the development of school league tables, a tightening of the school inspection regime and the academies programme. This sharper focus, in particular on the improvement of ‘failing’ schools, might be expected to reduce the variation in school performance. However, the academies programme, which has expanded rapidly since 2010, allowed schools to move out of Local Authority control and gave school leaders greater autonomy over the curriculum that pupils follow and over the hiring and firing of teachers. School leadership may therefore have become more important during this period; since school leaders use these greater freedoms to varying degrees and with varying success, this could increase the variance in school performance. It is therefore important to examine, for this period of rapid change in schools policy in England, the extent to which schools account for the variance in pupil attainment.

There is a long history of studies addressing questions around the importance of schools. A review by Rutter (1983) highlighted the controversies of the 1960s and 1970s: studies by Coleman et al. (1966) and Jencks et al. (1972) indicated that schools made little difference to educational outcomes in comparison to personal characteristics, whilst later authors (Brookover et al., 1979; Rutter et al., 1979; Madaus et al., 1979; Halsey et al., 1980) argued that schools can make a difference, highlighting significant school effects whilst also acknowledging the influence of personal characteristics.

The Department for Education and Skills (DfES, 2004) examined variation in test results,Footnote 1 pupil progress and value added using data for 2003 in England. It found that 20 per cent of the variation in results at the end of Key Stage (KS) 4, the end of secondary schooling, occurred between schools as opposed to within schools.

Value added indicators were also examined, recognising that prior attainment is the strongest predictor of KS4 attainment. At this time value added in secondary schooling was considered in two stages, namely KS2 to KS3 and KS3 to KS4, as KS3 covered the first half of secondary schooling and KS4 the second half. The results show that between KS2 and KS3, 13 per cent of the variance in value added scores was between schools, whilst between KS3 and KS4, 8 per cent was between schools.

The results for England in 2003 are broadly in line with earlier research. For example, a systematic meta-analysis by Scheerens and Bosker (1997) reports that schools accounted for 19 per cent of attainment differences between pupils when initial differences between students are not accounted for, but 8 per cent of the variance when these initial differences are accounted for.

Analysis of PISA (OECD, 2016) gives some recent results for 2015 for a sample of UK schools, with the variance in science performance between schools estimated at around 22 per cent of the total variance in pupil science performance.

Other studies have focused on other determinants of educational achievement. Shakeshaft et al. (2013), considering twins born between 1994 and 1996, show that just over half of the variation in end-of-secondary-schooling attainment in English, mathematics and science in England was due to genetics, whilst Nicoletti and Rabe (2013) show that the family explains at least 43 per cent of the variance in educational attainment for pupils who took KS4 exams between 2007 and 2009, with the neighbourhood explaining a further 10 to 15 per cent.

Our analysis is close in spirit to the DfES (2004) analysis in that it covers the whole of secondary schooling, and considers both value added indicators (i.e. KS2 to KS4) and end-of-KS4 indicators of educational attainment. We exploit the same large dataset for pupils in England used by the DfES. We consider a range of pupil attainment indicators and examine how much of the variance in pupil attainment is attributable to the school attended.

The remainder of the paper is set out as follows. Section 2 briefly discusses the dataset used in our analysis; Section 3 introduces the measures of pupil attainment used in the analysis; Section 4 shows the variance in our measures between 2009/10 and 2015/16 both at individual and school level. In Section 5, we set out a model to decompose the variance in pupil attainment between and within schools and highlight the importance of pupil characteristics. Section 6 focuses on school characteristics and Section 7 concludes.

2. Data

Our data are from three linked datasets for the period 2009/10 to 2015/16. The School census is a statutory return covering all local authority maintained schools, as well as some specific types of non-maintained schools such as academies. Schools are required to complete the School census three times a year; it collects information on a range of school and pupil characteristics. The School census is our starting point for identifying secondary schools in England. We include all schools classified as either a secondary school or middle-deemed secondary school in any year of our analysis period; there are around 3,300 schools in each year on this basis (table 1).

Key Stage 4 attainment data are available for more than 90 per cent of schools in each year; the majority of schools for which these data were not available were middle-deemed secondary schools, which would generally not include pupils of this age.

We also match in other school-level data, for example from the School Workforce Census (SWF), a census of all publicly funded schools in England. These data are available for the vast majority of schools leaving us with an analysis sample of just under 3,000 schools, roughly 90 per cent of all secondary schools.

We are concerned by the impact of school entry and exit over this period, so also conduct separate analysis for the subset of schools that are in the sample in each of the seven years 2009/10 to 2015/16 and also for the schools that either enter or exit the sample during this period. Roughly 500 schools in each year are not in the sample in all seven years.Footnote 2

3. Measuring pupil attainment in secondary schools

State education in England is split into a number of ‘Key Stages’, first defined in the 1988 Education Reform Act upon the introduction of the National Curriculum. Key Stages 1 and 2 cover primary education, typically between ages 5 and 11, whilst secondary education from ages 12 to 16 is covered by Key Stages 3 and 4. Compulsory schooling in England lasts until the last Friday in June in the school year in which the child reaches the age of 16. However, following the Education and Skills Act 2008, from academic year 2013/14 participation in some form of education or training was required until the school year in which the child turned 17, with the age being raised to 18 in academic year 2015/16. Attainment targets have been set within the National Curriculum at each of the Key Stages, allowing pupil progress to be traced and school performance to be monitored.

Two types of attainment measure are used in the school accounting system at the end of secondary schooling, namely threshold measures and value added point scores. The former are indicators that pupils have attained a certain level; the latter assign a score to combinations of qualification and grade and control for prior attainment.

The main threshold indicator is the attainment of five or more A∗–C grades in General Certificate of Secondary Education qualifications (GCSE), or equivalent qualifications including English and maths.Footnote 3 This indicator was the primary focus of the school accounting system up to 2014/15. From 2015/16 a new value added indicator, Progress 8, became the main indicator.

For the value added indicators a points score is calculated for each individual based on a system whereby each qualification and grade has an associated points score. This score includes pupils' best eight qualifications with bonuses for English and mathematics qualifications. The main school qualification taken at the end of KS4 is the GCSE. Up to 2014/15 an A∗ grade GCSE was worth 58 points, an A grade worth 52 points, with a reduction of six points for each grade down to 16 points for a G grade. For the new Progress 8 indicator, a different points system was used with an A∗ grade worth 8 points, with a reduction of one point for each grade down to 1 point for a G grade.
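The two grade-to-points scales described above can be sketched as follows. This is a minimal illustration of the arithmetic only: the function name is our own, and the English and mathematics bonuses mentioned in the text are deliberately omitted.

```python
# Points per GCSE grade under the two scoring regimes described above:
# the legacy scale (used up to 2014/15) and the Progress 8 scale (from 2015/16).
GRADES = ["A*", "A", "B", "C", "D", "E", "F", "G"]

# Legacy scale: A* = 58, each grade below worth 6 points fewer, down to G = 16.
LEGACY_POINTS = {g: 58 - 6 * i for i, g in enumerate(GRADES)}

# Progress 8 scale: A* = 8, each grade below worth 1 point fewer, down to G = 1.
PROGRESS8_POINTS = {g: 8 - i for i, g in enumerate(GRADES)}


def best_eight_score(grades, points=LEGACY_POINTS):
    """Sum the points of a pupil's best eight results (ignoring the
    English/maths bonuses described in the text, for simplicity)."""
    scores = sorted((points[g] for g in grades), reverse=True)
    return sum(scores[:8])
```

For example, eight grade C GCSEs score 320 points on the legacy scale but 40 points on the Progress 8 scale, which is why the dispersion of the indicator shrinks so sharply in 2015/16 (see section 3).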

Value added indicators for KS2 to KS4 were introduced in 2003/4 following a successful pilot in 2003. These value added indicators seek to give a measure of pupil progress during secondary schooling by comparing attainment at the end of KS4 with attainment at the end of primary education at the end of Key Stage 2 (KS2). Value added measures usually compare attainment of pupils at KS4 with the attainment of pupils who had the same attainment levels at KS2. Thus, a positive value added score indicates progress better than the average for pupils with the same prior attainment.
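The comparison with same-prior-attainment peers can be illustrated with a toy calculation. The numbers and column names below are made up; the real indicator uses finer prior-attainment groupings and the best-eight points score, but the logic is the same: a positive score means better-than-average progress.

```python
import pandas as pd

# Illustrative value added calculation: each pupil's KS4 score is compared
# with the mean KS4 score of pupils who had the same KS2 result.
pupils = pd.DataFrame({
    "ks2_score": [20, 20, 25, 25, 25, 30],
    "ks4_score": [300, 340, 350, 370, 390, 420],
})

# Expected KS4 score = mean among pupils with identical prior attainment.
expected = pupils.groupby("ks2_score")["ks4_score"].transform("mean")
pupils["value_added"] = pupils["ks4_score"] - expected
```

By construction the value added scores average to zero within each prior-attainment group, which is why the indicator has a mean close to zero in every year (see table 2).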

These value added measures and the threshold measures based on five or more A∗–C grades are high stakes indicators in that they are published annually in school performance tables.Footnote 4

During our analysis period there have been a number of changes that affect these headline measures. These cover inclusion/exclusion of certain qualifications, and methodological changes to the calculation of value added (Department for Education, 2010, 2011 and 2017).

The main changes between 2009/10 and 2015/16 are outlined below:

  1. In 2010/11 the method for calculating value added changed. Up to 2009/10 a ‘contextualised value added’ approach was used, which compared attainment of pupils at KS4 with the attainment of pupils with similar characteristicsFootnote 5 who had similar KS2 attainment. From 2010/11 the value added indicator does not take into account pupil characteristics, comparing attainment of pupils at KS4 only with the attainment of pupils who had the same KS2 attainment. In both periods, the KS4 attainment indicator was the points-based indicator associated with the best eight GCSE and equivalent outcomes. For each pupil a predicted score is calculated from a statistical model estimated on pupils in the same year group in England (roughly 600,000 pupils). The value added score for a pupil is then the difference (positive or negative) between their actual KS4 attainment and the model's prediction for similar pupils nationally.

  2. From 2013/14 substantial changes were made in relation to non-GCSE qualifications following the adoption of the recommendations from Professor Alison Wolf's review of vocational education (Wolf, 2011). This resulted in the removal of around 3,000 unique qualifications from the performance measures and an adjustment in the associated point scores for non-GCSE qualifications so that no qualification counts for more than one GCSE.Footnote 6 In addition, the number of non-GCSE qualifications that count in performance measures was limited to two per pupil.

  3. A further change was introduced in 2013/14, meaning that where pupils took a qualification multiple times, only the first result counted in performance measures.Footnote 7 This applied only to English and mathematics taken in Year 11 in 2013/14, but in subsequent years it applied to GCSEs in all EBacc subjects taken in Year 10 or later. Previously, the best result a pupil achieved was counted.Footnote 8

  4. In October 2013 the DfE announced that a new secondary school accountability system would be implemented in 2016. It includes two new headline measures: Progress 8 and Attainment 8. Attainment 8 measures the achievement of a pupil across eight qualifications including mathematics and English. In addition, the points associated with different qualifications were changed, as discussed above. Progress 8 is calculated for each pupil in a similar way to the value added indicators used between 2010/11 and 2014/15. The cap on the number of eligible non-GCSE qualifications was lifted from two to three.

  5. Up to 2014/15, where the English language and English literature options were chosen, exams in both subjects had to be taken and a C grade or above achieved in English language in order to count towards the five A∗–C indicator. From 2015/16, a C grade or above in either English language or English literature counts towards the five A∗–C indicator, and there is no requirement to take both subjects (although taking both remains necessary to meet the English requirement of the EBacc).

The above changes are important as they affect the comparability over time of the school and pupil performance indicators used in the school performance tables. Our analysis of the variance of pupil attainment indicators at the end of KS4 focuses on the two headline indicators, but concerns over the consistency of these indicators mean that we also consider a set of indicators that include the same qualifications over time.

These additional indicators are based exclusively on GCSE qualifications. These are the most common form of qualification taken by pupils at the end of KS4 comprising roughly two-thirds of exam entries that are eligible for inclusion in the headline measures, discussed above, in each year from 2009/10 to 2012/13. From 2013/14, when a large number of qualifications were excluded from the eligible qualifications list for the headline indicators, the GCSE share of exam entries increased to around 90 per cent of all eligible qualifications.

The total number of eligible qualifications entered per pupil fell in 2013/14 from 11.3 to 8.9, whilst the average number of GCSE entries increased from 7.6 per pupil in 2012/13 to 8.0 in 2013/14 and 8.5 and 8.8 in 2014/15 and 2015/16 respectively. This is consistent with changes in the value of qualifications, whereby before 2013/14 some vocational qualifications counted as equivalent to four GCSEs, but from 2013/14 they only counted as one GCSE. It is also consistent with some pupils and schools substituting out of now ineligible qualifications and instead taking more GCSEs. However, in 2013/14 the increase in GCSEs taken did not match the overall fall in eligible qualifications entered.

Throughout our main analysis we focus on four indicators. Two are reported in school performance tables:

  • The attainment of five or more A∗–C grades in GCSE or equivalent qualifications including English and maths,

  • the KS2 to KS4 value added score, based on the best eight qualifications.

The other two are our own calculations:

  • Total GCSE points score

  • GCSE points score per entry.

The total GCSE points score will be affected by the increase in the average number of GCSEs entered by each pupil, but the GCSE points score per entry will not. In addition, it is important to note that whilst the points scores associated with different GCSE grades changed with the adoption of the Attainment 8 and Progress 8 measures, to maintain consistency over time we retain the previous points allocation in our analysis.

Table 2 gives some summary information for each of the four indicators.

The percentage of pupils achieving five or more A∗–C grades in GCSE or equivalent qualifications increased between 2009/10 and 2012/13 from 57 to 62 per cent. In 2013/14 the percentage fell back to 58 per cent as numerous qualifications were no longer counted. In addition, for pupils who took qualifications multiple times only their first attempt was considered (previously their best attempt counted).

In 2015/16 the percentage of pupils achieving five or more A∗–C grades in GCSE or equivalent qualifications increased to 62 per cent as results in either English language or English literature counted as an English qualification (previously only the English language qualification was considered).

The standard deviation of this indicator was 0.49 throughout the period.

The value added indicator has a mean close to zero in all years. As noted previously, the indicator changes over time: it was a contextualised value added measure in 2009/10, a simple value added indicator between 2010/11 and 2014/15, and the new Progress 8 indicator in 2015/16. All measures of dispersion are similar in 2009/10 and 2010/11 despite the change in how value added was calculated. However, as the points scores associated with each qualification are much lower under the Progress 8 indicator, this is reflected in a much lower standard deviation of the indicator (10.5 compared with 74.1 in the previous year).

The value added indicators are also affected by the change in the eligible qualifications and only counting the first attempt at a qualification discussed above. This is reflected in an increased standard deviation from 66.6 in 2012/13 to 73.4 in 2013/14. This increased dispersion is evident at both ends of the distribution with a fall in the 10th percentile score from −64.4 in 2012/13 to −81.7 in 2013/14 and an increase in the 90th percentile score from 66.4 in 2012/13 to 75.8 in 2013/14.

We explore this further by looking at the variation in the value added measure in 2013/14 by the share of exam entries that were GCSEs in the previous year. The idea here is that exam entries in the previous year were not affected by the change in qualification eligibility and GCSEs remained eligible qualifications for the value added calculation in all years, so that schools which had previously offered almost exclusively GCSEs would not be greatly affected by the change in eligible qualifications effective from 2013/14. The variation in value added scores in 2013/14 is similarly high for schools that had previously had 95 per cent of exam entries as GCSE compared to the overall variation in scores, so the increase in variation of the best 8 value added measure does not appear to be directly related to the change in eligible qualifications.

The mean GCSE points score shows some changes over time, increasing by 16 points in 2013/14 and a further 20 points by 2015/16. This is in line with the increase in the average number of GCSEs taken after 2013/14 as other qualifications were excluded from the league tables. The increase in points was greatest at the 10th percentile, an increase from 100 in 2012/13 to 124 in 2013/14 and 176 in 2015/16, so is consistent with a switch into GCSEs and out of qualifications that are no longer eligible for inclusion in the value added and five or more A∗–C indicators for pupils that are towards the bottom of the GCSE points score distribution.

The GCSE points score per entry is relatively stable throughout the period at around 39 points, roughly the equivalent to a grade C GCSE.

4. Assessing the variance in pupil attainment at school and pupil level

We now turn to an examination of the variance in school level attainment. Table 3 shows school-level variance as a percentage of pupil level variance for each of our indicators. These figures do not account for any other factors related to pupil attainment, but provide a benchmark for later estimates that take into account pupil characteristics.
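The quantity reported in table 3 is the variance of school mean scores expressed as a percentage of the total pupil-level variance. A minimal sketch of this calculation, on simulated data with a built-in school effect (all numbers illustrative, not drawn from the paper's data):

```python
import numpy as np
import pandas as pd

# Simulate pupils nested in schools: a school-level effect (sd 5) plus
# pupil-level noise (sd 12), then compute the school-level share of variance.
rng = np.random.default_rng(0)
n_schools, n_pupils = 50, 5000
df = pd.DataFrame({"school": rng.integers(0, n_schools, size=n_pupils)})
school_effect = rng.normal(0, 5, size=n_schools)
df["score"] = school_effect[df["school"]] + rng.normal(0, 12, size=n_pupils)

# School-level variance as a percentage of pupil-level variance (table 3 style).
school_means = df.groupby("school")["score"].transform("mean")
school_share = school_means.var(ddof=0) / df["score"].var(ddof=0) * 100
```

With these parameter choices the true share is about 25/(25 + 144) ≈ 15 per cent, comparable in magnitude to the figures the paper reports for the headline indicators.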

The amount of variation accounted for by schools varies between the different indicators and over time, with different time trends for the different measures. For the five or more A∗–C indicator, the amount of variance accounted for by the school varied between 10.4 per cent (2011/12 and 2012/13) and 12.3 per cent (2013/14 and 2014/15). However, the share of the variance accounted for by schools was only slightly higher at the end of the period than at the beginning (12.1 per cent in 2015/16 compared with 11.9 per cent in 2009/10).

For the value added indicator (row 2), which is most similar to the indicators used in the early DfES work, between 9.4 and 13.8 per cent of the pupil variance is accounted for by the school. Note the 2003 estimates for the first and second half of secondary schooling were 13 and 8 per cent, so our estimates are of a similar order. There is some evidence of an increase over time in the variance accounted for by schools in this indicator, with 9.4 per cent of the variance accounted for in 2009/10 and 13.8 per cent in 2015/16. The largest changes occur from 2013/14 onwards, which is in line with the announcement by the DfE that the Progress 8 value added measure would be introduced in 2015/16. This is consistent with schools, on average, placing a higher priority on value added indicators from 2013/14 onwards.

The variance of the GCSE points score and points per entry indicators accounted for by schools is much higher than for the other measures. These range from 29.4 to 35.8 per cent for GCSE points scores and 22.3 to 23.6 per cent for points per entry. This is in line with the higher variance accounted for by schools reported by Scheerens and Bosker (1997) when initial differences between students were not taken into account. The amount of variance accounted for by schools in the points per entry indicator is relatively stable over time, but for the GCSE points indicator the variance accounted for by schools fell by 6.4 percentage points between 2009/10 and 2015/16 – in contrast to the value added indicator.

Further analysis, reported in Appendix table A1, uses newly derived indicators that attempt to turn the five or more A∗–C indicator and the GCSE points score and points per entry indicators into value added indicators. To do this, we run a pupil-level regression for each indicator on attainment recorded at the end of primary schooling (the KS2 total points score). The difference between the actual indicator and the model prediction provides a crude measure of value added for each indicator. The results reported in Appendix table A1 (available on-line) show, in line with Scheerens and Bosker, that taking into account initial differences reduces the amount of variance accounted for by schools.
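The crude value added construction just described amounts to taking the residual from a pupil-level regression on prior attainment. A sketch on simulated data (the variable names and parameter values are illustrative only):

```python
import numpy as np

# Regress an end-of-KS4 indicator on KS2 prior attainment at pupil level
# and treat the residual as a crude value added measure.
rng = np.random.default_rng(1)
ks2 = rng.normal(27, 4, size=2000)              # simulated KS2 total points
gcse = 10 * ks2 + rng.normal(0, 30, size=2000)  # simulated GCSE points score

# OLS line via polyfit: highest-degree coefficient first.
slope, intercept = np.polyfit(ks2, gcse, deg=1)
value_added = gcse - (intercept + slope * ks2)  # residual = crude value added
```

Because the regression includes an intercept, the residuals have mean zero and are uncorrelated with prior attainment, which is precisely the property that strips out initial differences before asking how much variance schools account for.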

Additional analysis for a group of schools that are in the data in all seven years, and for schools that enter or exit the sample at some point between 2009/10 and 2015/16, yields results that are not qualitatively different from those for all schools, indicating that the changing composition of schools does not account for much of the overall variance in pupil attainment. However, analysis excluding selective schools shows that the share of the variance accounted for by schools is slightly reduced for all measures, indicating that selective schools contribute disproportionately to the between-school variance in pupil attainment.

5. Pupil characteristics and the impact of the school

We next examine the interaction between school-level attainment and the effects of pupil characteristics on attainment. Following the approach used to examine the variance in earnings in Barth et al. (2016), we estimate an equation for pupil attainment, V, for pupil i in school s. Estimates are run separately for each year, so no time subscript appears in these equations.

(1) Vis = xis b + φs(i) + uis,  with E(uis | xis, φs) = 0

In this equation xis is a vector of pupil characteristics (gender, ethnicity, whether eligible for Free School Meals and whether they have Special Educational Needs) for pupil i in school s. Also included is a vector of dummy variables φs(i) for the school s where pupil i studies. We impose the assumption that pupils in a school share the same school effect on scores. This implies that any individual heterogeneity in the school effect (representing the quality of the pupil–school match) is in the error term, uis.

The variance of the score is then decomposed into the part due to the variance of predicted scores from observed pupil characteristics (xb), the variance of scores among schools, net of these pupil characteristics (φ), the covariance between them, and the variance in the error term (u).

(2) Var(V) = Var(xb) + Var(φ) + 2Cov(xb, φ) + Var(u)

Defining S as the school average level of the predicted score from observable pupil characteristics, x, we can define ρ = Cov(xb, S)/Var(xb) as a measure of the similarity of pupils in a school. This is akin to Kremer and Maskin's (1996) index of worker-worker segregation across establishments discussed in Barth et al. (2016). When ρ = 1, pupils are perfectly sorted into schools by their characteristics. When ρ = 0, pupils' observable characteristics appear as if pupils had been randomly allocated to schools. We also measure the extent to which the attributes of pupils that contribute to scores are associated with the school effect on scores by ρs = Cov(xb, φ)/Var(xb). When the characteristics of a school's pupil intake are independent of the school's impact on scores, ρs = 0.

The between school variance, Varb, then divides into a part due to sorting of pupils and a part due to ‘pure’ variation of scores among schools:

(3) Varb = Var(xb)(ρ + 2ρs) + Var(φ)

Similarly, the within-school variance, Varw, can be decomposed:

(4) Varw = Var(xb)(1 − ρ) + Var(u)

When schools only have pupils with the same characteristics (ρ = 1) the variance of the pupil score contributes nothing to within-school variance. When the pupil intake of schools is independent of pupil characteristics (ρ = 0) the variance of the distribution of pupil characteristics contributes to the within-school variance, but not to the between school variance.
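The decomposition in equations (2)–(4) can be checked numerically. The sketch below simulates pupils nested in schools with a sorted predicted score xb, a school effect φ and noise u (all names and parameter values are illustrative), then verifies that the directly computed between- and within-school variances match the formulas up to sampling error.

```python
import numpy as np
import pandas as pd

# Simulate the ingredients of equation (1): xb has a school-level component
# (so rho > 0), phi is constant within school, u is idiosyncratic noise.
rng = np.random.default_rng(2)
n_schools, n_pupils = 40, 4000
school = rng.integers(0, n_schools, size=n_pupils)
sorting = rng.normal(0, 1, size=n_schools)
xb = rng.normal(0, 1, size=n_pupils) + 0.5 * sorting[school]
phi = rng.normal(0, 1, size=n_schools)[school]
u = rng.normal(0, 1, size=n_pupils)
V = xb + phi + u

df = pd.DataFrame({"school": school, "xb": xb, "phi": phi, "V": V})
S = df.groupby("school")["xb"].transform("mean")  # school mean predicted score

var = lambda a: np.var(a)                         # population variance
cov = lambda a, b: np.cov(a, b, bias=True)[0, 1]

rho = cov(df["xb"], S) / var(df["xb"])            # pupil sorting index
rho_s = cov(df["xb"], df["phi"]) / var(df["xb"])  # sorting on the school effect

# Direct between- and within-school variances of V.
var_b = var(df.groupby("school")["V"].transform("mean"))
var_w = var(df["V"]) - var_b

# The same quantities from equations (3) and (4).
formula_b = var(df["xb"]) * (rho + 2 * rho_s) + var(df["phi"])
formula_w = var(df["xb"]) * (1 - rho) + var(u)
```

In this simulation the school intake is sorted on xb but independent of φ, so rho comes out clearly positive while rho_s hovers near zero, mirroring the pattern the paper reports for the value added indicators.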

Tables 4a–d show our decomposition for each of the four attainment indicators. The total variance figures are the same as those shown in tables 2 and 3.

The same patterns of the share of between and within-school variance shown in table 3 are evident for all indicators.

In general, the estimate of ρ is around 0.1 in all models indicating that the pupil intake of schools is not completely independent of pupil characteristics, and hence the distribution of pupil characteristics contributes to the within-school variance. This is consistent with the reduced share of the variance explained by schools for the value added indicators when compared to the simple pupil attainment measures discussed in relation to table 3.

The estimate of ρs is more variable, but typically positive, indicating that the impact of pupil characteristics on attainment is also not completely independent of the school effect on attainment. For the value added indicators (table 4b), however, some of the estimates are close to zero, and for 2009/10, when the value added indicator already controlled for differences in pupil characteristics, the estimate of ρs is negative.

The implications of this are that pupil characteristics account for part of the between-school variance, reducing the ‘pure’ school effect by up to 16 per cent for the value added indicators and up to 30 per cent for the end of KS4 indicators.

Pupil characteristics also explain part of the within-school variance, contributing between 10 and 19 per cent of the total variance for the end-of-KS4 indicators and between 5 and 8 per cent for the value added indicators (bottom row of tables 4a–4d). These lower figures for the value added indicators arise because pupil characteristics are also an important determinant of pupil attainment at the end of primary schooling, and so are partly encapsulated in the value added indicators.

6. The role of school characteristics

This final substantive section of the paper decomposes the between-school element of variance for the four attainment indicators. Recall that this accounts for different shares of total variance depending on the indicator considered. Here we consider the composition of pupils in a school more directly than in the previous analysis, and also take into account the size of the school and school resources, captured by the pupil–teacher ratio.

To do this we regressed mean school scores on school characteristics using the following equation:

(5) Vs = Cs a + Ns b + Rs c + φp

where Vs is the average score in school s. The vector φp gives the mean school score net of the other variables in the regression. Cs is a vector giving the composition of pupils in school s. It covers the share of girls, the share of pupils from different ethnic groups, the share of pupils eligible for Free School Meals, and the share of pupils with different Special Educational Needs in each school. Ns is the number of pupils in the school and Rs is the pupil–teacher ratio in each school.Footnote 9 Again, estimates are run separately for each year, so no time subscript appears in these equations.
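A minimal sketch of the equation (5) exercise, on simulated school-level data: regress school mean scores on composition, size and the pupil–teacher ratio, and read off the share of school-level variance explained (the R-squared). The column names and parameter values are hypothetical stand-ins for the composition measures described above.

```python
import numpy as np
import pandas as pd

# Simulated school-level data: in this toy example only the Free School
# Meals share drives mean scores; the other regressors are pure noise.
rng = np.random.default_rng(3)
n = 300
schools = pd.DataFrame({
    "share_fsm": rng.uniform(0.0, 0.5, n),   # share eligible for Free School Meals
    "share_girls": rng.uniform(0.4, 0.6, n),
    "n_pupils": rng.integers(400, 1600, n),  # school size
    "ptr": rng.uniform(12.0, 20.0, n),       # pupil-teacher ratio
})
schools["mean_score"] = 400 - 150 * schools["share_fsm"] + rng.normal(0, 20, n)

# OLS of school mean scores on composition, size and resources.
X = np.column_stack([np.ones(n),
                     schools[["share_fsm", "share_girls", "n_pupils", "ptr"]]])
y = schools["mean_score"].to_numpy()
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
r_squared = 1 - resid.var() / y.var()  # share of school-level variance explained
```

Here r_squared recovers the composition-driven share of the school-level variance (a little over half, by construction), which is the quantity summarised in table 5.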

Table 5 summarises the results from estimating equation 5.

For each indicator the composition and size of the school account for a significant proportion of the school-level variance – as much as 63 per cent for each of our three end of KS4 attainment indicators. For our value added indicator, school composition and size still account for between 13 and 30 per cent of the school-level variance. The exception is 2009/10, when the best 8 value added indicator itself controls for pupil characteristics; in that year just 5 per cent of the school-level variance is accounted for by composition and size.

We conducted the same analysis for our constructed value added indicators for the 5 or more A∗–C measure, the GCSE points score and points per entry. Results are reported in Appendix table A3 (available online) and show that the amount of school-level variance accounted for by school composition and size is reduced compared with the original indicators, but is higher than for the headline value added indicator.

The pupil–teacher ratio accounts for a negligible share of the variance in all models. However, school resources are related to the composition of pupils in schools, which is also included in our model; the pupil–teacher ratio will be partly determined by pupil composition, so the effects of school resources may be picked up by the school composition variables. Sensitivity analysis using the ratio of pupils to the total school workforce produces similar results, as does analysis conducted separately for London and the rest of England.
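The point that the pupil–teacher ratio adds little explanatory power once composition is controlled can be illustrated with nested regressions on simulated data (all variables and coefficients here are hypothetical; when resources track composition, the incremental R-squared of the ratio is small even if it has a real effect):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300  # schools (illustrative)

fsm = rng.uniform(0, 0.6, n)              # composition: FSM share
ptr = 18 - 5 * fsm + rng.normal(0, 1, n)  # resources track composition
V = 55 - 25 * fsm - 0.3 * ptr + rng.normal(0, 3, n)  # mean school score

def r_squared(X, y):
    """In-sample R-squared from OLS with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ coef).var() / y.var()

full = r_squared(np.column_stack([fsm, ptr]), V)
no_ptr = r_squared(fsm.reshape(-1, 1), V)
print(f"incremental R^2 of pupil-teacher ratio: {full - no_ptr:.3f}")
```

Because most of the variation in the simulated ratio is collinear with composition, dropping it barely changes the R-squared, which is the pattern described above.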

A Department for Education (2014) review of the literature concluded that additional school resources positively influence attainment, although the effects are relatively modest at all Key Stages. Dearden et al. (2002) show that, having controlled for ability and family background, the pupil–teacher ratio has no impact on educational qualifications, although it did influence women's (but not men's) wages at the age of 33. Our findings on educational attainment are consistent with these studies, but we are not able to examine the effect of the pupil–teacher ratio on wages.

It is possible that the school composition variables overstate the influence of these factors on school attainment. Our model is limited by the availability of data on relevant factors that are associated with effective school processes, as discussed in the review by Sammons (2007). A synthesis of reviews on this topic by Scheerens and Bosker (1997) highlights: a productive climate and culture; a focus on central learning skills; appropriate monitoring; practice-orientated staff development; professional leadership; parental involvement; effective institutional arrangements; and high expectations. The leadership and institutional aspects of these factors are a key element of the academies programme, with appropriate monitoring being part of the tighter inspection regime. In parallel research we explore leadership and monitoring, staff development and institutional arrangements through examination of Human Resource Management indicators (Stokes et al., forthcoming; Bryson et al., forthcoming).

7. Conclusion

The analysis shows the relative importance of schools in explaining the variance in pupil attainment. In line with previous studies, schools account for a minority of the variance in attainment. For the current headline pupil attainment indicator, Progress 8, we find that in 2015/16 schools explain 14 per cent of the variation in Progress 8, with a similar percentage for other value added (or progress) indicators that relate to earlier years. Part of this is related to the similarity of pupils within schools, in terms of observable characteristics, and how these characteristics relate to pupil attainment. Excluding this component of the school effect on pupil attainment reduces the amount of variance explained by schools to around 9 per cent.

This contribution, whilst relatively small, is still important; schools do matter for academic attainment, though perhaps not quite as much as the policy debate sometimes suggests. Evidence from Nicoletti and Rabe (2013), for example, shows that family explains at least 43 per cent of the variance in educational attainment at the end of KS4, whilst neighbourhood explains 10 to 15 per cent.

Our analysis also shows that the extent of the variation explained by schools is sensitive to the measure of pupil attainment used and whether prior attainment is taken into account. For example, looking at other end of secondary schooling indicators, up to 35.8 per cent of pupil variance is explained by schools. However, once we control for prior attainment the amount of variance explained by schools is reduced to a maximum of 30.7 per cent.

Our analysis covers a relatively short time period, 2009/10 to 2015/16, but one in which the expansion of the academy programme in English secondary schools was rapid. Although earlier evidence showed that academies improved average levels of attainment prior to 2010 (Eyles and Machin, 2015), the latest evidence indicates that the recent academy expansion had little impact on average attainment in these schools (Andrews et al., 2017). Our analysis over this same period indicated some fluctuations in pupil and school-level variance, which are sensitive to changes in the method of calculation of some of our indicators. The trends in the share of pupil attainment accounted for by the school over this period move in different directions for different indicators. For the headline indicator, five or more A∗–C grades in GCSE or equivalent qualifications including English and Mathematics, the variance accounted for by schools was relatively stable over time, whilst the variance accounted for by schools for the value added measure increased, particularly between 2012/13 and 2013/14. The variance accounted for by schools for the GCSE points score indicator, however, fell in that year, which saw significant definitional changes in eligible qualifications as well as the announcement that Progress 8 would become the primary attainment measure against which schools were to be assessed, so schools may have chosen to focus more on value added from this point.

Our school-level analysis shows that school composition makes a significant contribution to the school-level variance, yet much of that variance is left unexplained. This suggests that broader measures than school composition need to be examined to explain the variance in school-level attainment. These could include the quality of teaching and learning, and the curriculum offered, given that qualification eligibility for inclusion in school performance tables changed over the period. Further, classroom variation may be an important element of within-school variance.

Footnotes

Acknowledgements: We thank the Nuffield Foundation (grant EDU/41926) for funding and members of the project advisory group, and participants at seminars at the Department for Education and NIESR for comments. The authors acknowledge the Department for Education (DfE) for granting access to data from the National Pupil Database and School Workforce Census. The views expressed are those of the authors, and all errors and omissions remain the authors' sole responsibility.

1 At this time results were based on GCSEs and General National Vocational Qualifications (GNVQs) only. By 2009/10, when our study begins, a large number of additional qualifications were included, so there was more variation between schools in the qualifications offered.

2 Our analysis is not sensitive to this selection of schools, indicating that the changing school composition is not an important factor in explaining the variance of pupil attainment in this period.

3 Information on all accredited qualifications approved by the Secretary of State for Education can be found at the Office of Qualifications and Examinations Regulation (Ofqual) website at: http://register.ofqual.gov.uk.

4 For the latest performance tables see https://www.compare-school-performance.service.gov.uk/.

5 The characteristics used were: gender; whether the pupil had Special Educational Needs; ethnicity; eligibility for Free School Meals; whether the pupil's first language was not English; whether the pupil had moved between schools at non-standard times; age; whether the pupil had been in care at any time whilst at the current school; and a measure of deprivation (the Income Deprivation Affecting Children Index, IDACI) based on the pupil's postcode. In addition, school average and standard deviation KS2 scores were included.

6 For example, whereas previously a Business and Technology Education Council (BTEC) qualification may have counted as the equivalent of four GCSEs, from 2013/14 it could only be considered as equivalent to a single GCSE in its contribution to performance measures.

7 This was a response to Departmental analysis that found the number of pupils taking qualifications multiple times had increased in recent years.

8 This new rule only affects a school's performance measure calculations; pupils are still accredited with every grade they have achieved, regardless of the number of entries.

9 We focus on the pupil–teacher ratio as our indicator of school resources as it is consistent between different types of school. Published expenditure data, for example, relate to different time periods for local authority maintained schools and academy schools, so are not strictly comparable across all schools.

References

Andrews, J. and Perera, N., with Eyles, A., Sahlgren, G.H., Machin, S., Sandi, M. and Silva, O. (2017), ‘The impact of academies on educational outcomes’, Education Policy Institute, July.
Barth, E., Bryson, A., David, J. and Freeman, R. (2016), ‘It's where you work: increases in the dispersion of earnings across establishments and individuals in the United States’, Journal of Labor Economics, 34, 2, pt. 2.
Brookover, W., Beady, C., Flood, P., Schweitzer, J. and Wisenbaker, J. (1979), School Social Systems and Student Achievement: Schools Can Make a Difference, New York: Praeger.
Bryson, A., Stokes, L. and Wilkinson, D. (forthcoming), ‘Can HRM improve schools' performance?’, mimeo.
Coleman, J.S., Campbell, E., Hobson, C., McPartland, J., Mood, A., Weinfeld, F. and York, R. (1966), Equality of Educational Opportunity, Washington DC: US Government Printing Office.
Dearden, L., Ferri, J. and Meghir, C. (2002), ‘The effect of school quality on educational attainment and wages’, The Review of Economics and Statistics, 84, 1, pp. 1–20.
Department for Education (2010), ‘A technical guide to contextualised value added (including English and maths) Key stage 2 to 4 2010 model’, available at http://webarchive.nationalarchives.gov.uk/20150814121645/http://www.education.gov.uk/schools/performance/archive/schools_10/cvacalc.pdf.
Department for Education (2011), ‘A guide to value added Key stage 2 to 4 in 2011 school and college performance tables and RAISEonline’, available at http://webarchive.nationalarchives.gov.uk/20151008063953/http://www.education.gov.uk/schools/performance/2011/secondary_11/KS2-4_General_VA_Guide_2011_FINAL_AMENDED.pdf.
Department for Education (2013), ‘2012 OECD PISA results’, Oral statement to Parliament, Secretary of State for Education Michael Gove, 3 December.
Department for Education (2014), ‘What impact does school spending have on pupil attainment? A review of the recent literature’, Department for Education, June, available at http://www.parliament.uk/documents/commons-committees/Education/Impact-of-school-spending-on-pupil-attainment.pdf.
Department for Education (2017), ‘Progress 8 and Attainment 8: guide for maintained secondary schools, academies and free schools’, January, http://www.gov.uk/government/uploads/system/uploads/attachment_data/file/659860/Secondary_accountability_measures_guide.pdf.
Department for Education and Skills (DfES) (2004), ‘Statistics in education: variation in pupil progress 2003’, Department for Education and Skills National Statistics Bulletin, Issue No 02/04, July.
Eyles, A. and Machin, S. (2015), ‘The introduction of Academy Schools to England's education’, CEP Discussion Paper No 1368, August.
Halsey, A.H., Heath, A. and Ridge, J. (1980), Origins and Destinations: Family, Class, and Education in Modern Britain, Oxford: Clarendon Press.
Jencks, C.S., Smith, M., Ackland, H., Bane, M.J., Cohen, D., Gintis, H., Heyns, B. and Michelson, S. (1972), Inequality: A Reassessment of the Effect of the Family and Schooling in America, New York: Basic Books.
Kremer, M. and Maskin, E. (1996), ‘Wage inequality and segregation by skill’, National Bureau of Economic Research Working Paper Series, Working Paper 5718, August.
Madaus, G., Kellaghan, T., Rakow, T. and King, D. (1979), ‘The sensitivity of measures of school effectiveness’, Harvard Educational Review, Summer.
Nicoletti, C. and Rabe, B. (2013), ‘Inequality in pupils' test scores: how much do family, sibling type and neighbourhood matter?’, Economica, 80, pp. 197–218.
Organisation for Economic Co-operation and Development (OECD) (2014), PISA 2012 Results: What Students Know and Can Do: Student Performance in Mathematics, Reading and Science, Volume I, Paris: OECD.
Organisation for Economic Co-operation and Development (OECD) (2016), PISA 2015 Results: Excellence and Equity in Education, Volume I, Paris: OECD.
Rutter, M. (1983), ‘School effects on pupil progress – findings and policy implications’, Child Development, 54, 1, pp. 1–29.
Rutter, M., Maughan, B., Mortimore, P. and Ouston, J., with Smith, A. (1979), Fifteen Thousand Hours: Secondary Schools and Their Effects on Children, London: Open Books and Boston, MA: Harvard University Press.
Sammons, P. (2007), School Effectiveness and Equity: Making Connections; A Review of School Effectiveness and Improvement Research – Its Implications for Practitioners and Policy Makers, CfBT Education Trust.
Scheerens, J. and Bosker, R. (1997), The Foundations of Educational Effectiveness, Pergamon.
Shakeshaft, N., Trzaskowski, M., McMillan, A., Rimfield, K., Krapohl, E., Haworth, C., Dale, P. and Plomin, R. (2013), ‘Strong genetic influence on a UK nationwide test of educational achievement at the end of compulsory education at age 16’, PLoS ONE, 8(12): e80341, doi:10.1371/journal.pone.0080341.
Stokes, L., Bryson, A. and Wilkinson, D. (forthcoming), ‘What does leadership look like in schools and does it matter for performance?’, mimeo.
Wolf, A. (2011), Review of Vocational Education – The Wolf Report, DfE Report DFE-00031-2011, http://www.gov.uk/government/uploads/system/uploads/attachment_data/file/180504/DFE-00031-2011.pdf.
Supplementary material: Wilkinson et al. Supplementary Material (PDF, 81.4 KB), available online.