Generating ‘Centre Assessment Grades’: the challenges

April 6, 2020
By: Andy Byers, Headteacher, Framwellgate School, Durham

In the guidance published by Ofqual on 3rd April, centres have been asked to provide a ‘centre assessment grade’ for each student in each subject, and to rank order the students within each grade. The guidance states: [screenshot of the relevant Ofqual guidance omitted]. School leaders across the country will be determining how best to manage this process but also questioning the likely outcome for their cohort.

To help think my way through all of this, I have carried out some analysis on last year’s cohort in GCSE English Literature. The graph below shows the percentage of students achieving each grade in the summer of 2019, alongside the predictions (made at Easter) and the outcomes of the mock exams, which in my school are taken immediately after Christmas (Paper 1 in the E-Bacc subjects) and then immediately after February half-term (Paper 2 in the E-Bacc subjects and all other papers).

[Graph: 2019 GCSE English Literature results by grade — actual outcomes, Easter predictions, and mock exam results]

Some thoughts about our actual GCSE results

There has been much debate on social media about students in turnaround or rapidly improving schools losing out if the previous year’s results (or those of recent years) are taken into account when turning the centre assessment grade into actual results. In truth, all schools have significant in-school variations between subjects and across years. In my school, Progress 8 in 2017 was -0.4. The improvements we put in place have meant national average progress (-0.1) for the last two years (2018 and 2019). But that improvement hasn’t been seen uniformly across all subjects.

[Chart omitted]

In English Literature, 18% of students achieved a 7+ in 2017, with 28% in 2018 and 29% in 2019, and that pattern (a big improvement in 2018, largely maintained in 2019) could be seen in the percentage of students achieving grades 4+ too. The prior attainment of the cohort over these three years is similar although cohort sizes fell as the impact of the school’s troubles over the previous 5 or 6 years was felt.

But English was not typical of all departments. Some subjects have not improved in line with the school average.  Others have seen fluctuations which buck the trend. Cohort sizes and makeup in some subjects will have had an impact, as will subject leadership, the stability and experience of the teaching teams, and a variety of other factors.

It may be too simplistic to say that students in improving schools will lose out, and those in schools going in the other direction will gain. Only those at Ofqual and the exam boards, far more able than me, will know how they will treat the centre assessed grades when they receive them.

All we can do is control those things we can control. Which brings us to predicted grades.

Some thoughts about our predictions

I have spent a large part of my career in four schools discussing what we mean by a ‘predicted grade’. I don’t think my experience will be hugely different to those elsewhere and I would be surprised if every teacher within any given school (let alone between schools) gave the same definition of a ‘predicted grade’.  I have seen it confused with both target grades and ‘working at grades’. We also know that asking some teachers to ‘predict a grade’, even at the end of Y11, without the crutch of the target grade next to them in their mark book, causes anxiety!

Predicting grades with any degree of accuracy can be surprisingly difficult but that should not deter us on this occasion. Our predictions for English Literature in 2019 were, seemingly, relatively accurate.  They followed the reassuring bell curve we see with the final outcomes. At the ‘top end’, the English department predicted 25% of students would achieve 7+ and the actual outcomes were slightly better (29%). At 5+ the department was spot on (75% predicted and achieved).

It is also worth saying that the purpose of these predicted grades (and any predicted grades) differs significantly from the ones we have been asked to produce. The predictions we have given in the past may (like it or not) have been used by some teachers as a kick up the backside for their students. They will almost certainly not have undergone any intensive standardisation or scrutiny in the way that mock examination grades might have done, and they will have suffered from the inherent assumption that they don’t really matter!

I am old enough to remember the endless sheets of paper which had to be completed (HB pencil only) with predictions for the exam boards for their use, should there be queries or special consideration required. Whatever happened to them?

To return to our predictions and taking all of the above into consideration, it is still worth noting that however accurate these predictions are, we haven’t considered them at a student level. The accountability measures have been waived, and the stress senior leaders feel about whether a department can provide reassuring headline figures to tide us over the summer is now irrelevant. What matters now is all at a student level.

If our English department were able to accurately predict that 75% of students would achieve a 5+, did they get it right with every student? Nearly. Of the 102 students they predicted would achieve 5+, 8 failed to reach that threshold (a success rate of 92%), and all bar one of those who didn’t achieved a 4 (the other student achieved a 3).
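The threshold arithmetic here can be sanity-checked in a few lines of Python; the figures are the ones quoted above, and nothing else is assumed:

```python
# Threshold accuracy for the 2019 English Literature 5+ predictions,
# using the figures quoted in the text.
predicted_5_plus = 102   # students predicted to achieve a grade 5 or above
fell_short = 8           # students who did not reach the threshold
achieved = predicted_5_plus - fell_short
success_rate = achieved / predicted_5_plus
print(f"{achieved} of {predicted_5_plus} reached 5+ ({success_rate:.0%})")
# → 94 of 102 reached 5+ (92%)
```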

Do thresholds matter?

I think they do. We know that this system is imperfect. We should probably stop thinking about whether all students will achieve the grade they would have got had they taken exams. They won’t (more on that below), so we should hold onto what we are trying to achieve.

Whilst we hope to make this process both as robust and as fair as possible, we are relying on university admissions officers and Heads of Sixth Forms to provide any fairness that this process is unable to deliver.

My own A Level grades weren’t the best, but I met the requirement (just) for my first-choice course at Newcastle University, and I moved on. I had a second chance to prove my academic credentials there. I believe in comprehensive and inclusive education but have sat in many a (difficult) meeting with students and parents where sixth form entry requirements have not been met.  None of the Sixth Forms I have worked in have been entirely open. You couldn’t pitch up with a 3 in GCSE maths and take A Level Physics.  You won’t be able to again this year, but which Head of Sixth Form is going to sit in front of a parent and ‘hold the line’ with a child who might have needed a 5 in English to take History but has been ‘awarded’ a 4?

So, keep that in mind as we delve more deeply into the accuracy of individual predictions. Of the 18 students the English department predicted in 2019 would achieve a 7 (not a 7+ but that specific grade), only 7 did so. 3 achieved a grade 8, 7 achieved a grade 6, and 1 achieved a grade 5. That is a success rate of only 39% (7 of 18). Clearly, the 3 students achieving the 8 would have been happy to have done better than they were predicted. They would have been (hypothetically) less happy under this process as these are the students who would potentially ‘lose out’ by being awarded a 7. The rest would have ‘gained’ as they were predicted a higher grade than they achieved.
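The exact-grade accuracy works out like this, again using only the distribution quoted above:

```python
# Actual grades achieved by the 18 students predicted a specific grade 7.
outcomes = {8: 3, 7: 7, 6: 7, 5: 1}   # grade -> number of students
total = sum(outcomes.values())         # 18 students in all
exact = outcomes[7]                    # those who got exactly the predicted 7
print(f"Exact-grade success rate: {exact}/{total} = {exact/total:.0%}")
# → Exact-grade success rate: 7/18 = 39%
```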

You don’t know what you don’t know of course, so most of our current students will only be disappointed if they are awarded grades which don’t reflect the work they believe they put in. We all know students who believe they can pull it out of the bag in the final exams despite all evidence to the contrary. They might be disappointed this summer. Our least accurate prediction last year was a girl who was predicted a 5 and achieved an 8. Sadly, those success stories won’t happen this year. But being positive, those students who, despite all their effort and hard work, go to pieces in the exam, won’t be penalised either. These are the extremes; they are few and far between every year.

The curse of overpredicting

The example I have given here is of a department which usually predicts cautiously. Last year 60 of the 133 students achieved the grade they were predicted, and a further 33 students achieved a grade (or occasionally 2) more. Those departments which have typically given students the benefit of the doubt and overpredicted throughout their GCSE courses could well have disappointed students on their hands. In the past, students (and their parents) will have attributed this discrepancy to their performance in the examination (although in my last – very middle class – school, this was still often seen as our fault anyway).  This time, if we don’t anticipate this and manage expectations and communications accordingly, we’ll have some parents saying, ‘you predicted him a 6 all the way through and he achieved a 3’. Of course, these discrepancies may not be due to overprediction, but because of the improving school or department syndrome outlined earlier.

We need to keep returning to the things we can control.  Remember, the predicted grades of 2019, outlined above, were made in very different circumstances. It may be that where teachers “overpredicted” (there were 40 such predictions in English Lit) it was because they were expecting a bigger improvement from the mock exam results than materialised.  Incidentally, looking at our mock exam results in 2019 (in the first graph), I am relieved that these aren’t being called for or used to work out the final grade. That was never a realistic possibility anyway.

Generating a centre assessed grade this year

I have read a number of good suggestions on social media already and will incorporate these into our system.  This has still to be finalised but is likely to involve some of the following (in each subject):

  • Finding a starting point – it could be the mock examinations (see above) or the last set of teacher predictions which were given to the students. Using the mock exams has the benefit of us being able to rank order the raw scores (it is a starting and not an end point)
  • Overlay any other data the department might have (we don’t use grades much and give raw scores out to students for assessments they do) which can be ranked too
  • The Curriculum Leader will need to pull this together first but they will need a set of predictions and, if possible, a starting rank order, for each grade
  • Fix up a series of meetings. This is a huge challenge as it will involve all teachers of the cohort “meeting” together remotely.  If that doesn’t happen, the logistics of sending round lists don’t bear thinking about.
  • The meetings should allow a case to be made for every single child. It is too easy to ignore the quiet students so that they find themselves lower down the ranking, or to promote the case of the lovely hard-working students at the expense of someone who has been challenging.
  • When each student is considered, the class teacher should be able to say, “they are a solid 5”, “they are a very strong 5 and should be near the top of that group”, or “they are a borderline 4/5”.
  • It is impossible for us to get a ranking like this 100% right, but we are trying to ensure that if boundaries are set so that 5 of our Grade 6 students drop to a grade 5, it is those we consider the weaker ones and not those who could have been pushing for a grade 7.
  • There will need to be a few of these meetings, and plenty of reflection. Teachers will know 30 (or even 60) of the students on the list. The class ranking can at least be accurate (or as accurate as this process allows).
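The shape of the data this process produces — a starting rank order within each proposed grade, derived from mock marks and then adjusted in the meetings — can be sketched in a few lines of code. This is only an illustration; all names, grades, and marks below are invented:

```python
from collections import defaultdict

# Invented data: (student, proposed centre assessment grade, mock raw mark).
students = [
    ("Student A", 6, 78),
    ("Student B", 5, 71),
    ("Student C", 6, 82),
    ("Student D", 5, 64),
]

# Group students by their proposed grade.
by_grade = defaultdict(list)
for name, grade, mark in students:
    by_grade[grade].append((name, mark))

# Within each grade, use the mock raw mark as a starting rank order
# (strongest first); the subject meetings then argue each case in turn.
for grade in sorted(by_grade, reverse=True):
    ranked = sorted(by_grade[grade], key=lambda s: s[1], reverse=True)
    print(f"Grade {grade}: " + ", ".join(name for name, _ in ranked))
# → Grade 6: Student C, Student A
# → Grade 5: Student B, Student D
```

The point of the sketch is that the mock mark only seeds the order; it is a starting point, not an end point, exactly as the bullets above describe.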

Once these meetings have taken place, and a predicted grade and ranking achieved, the result needs to be considered by senior leaders. We have to be realistic. Are we predicting 20 students to get a 9, having never had any before?  We owe it to the system not to submit unrealistic grades. As Duncan Baldwin from ASCL has said, much as we may be tempted, this is not the time to right the wrongs of the system and inflate the grades of those in vulnerable groups, unless their work and prediction demands that.

If this section is still a little muddled, my only excuse is that I am still thinking about it.  When we ‘return’ to school after Easter, I will have a plan.


I want to return to an earlier point.  This is the least-worst option. Far from being one who believes the future will not involve exams, we may find after this that we need them more. Teachers aren’t enjoying this process at all! But remember, we should be devoting our energies to the things we can control or influence, and that is to ensure that students move on to their chosen course or career. Sure, they will need these grades in years to come, and will never shake off being the 2020 cohort, but for now, we want them to join our sixth forms, colleges and universities, and we don’t need an exact grade for that to happen.

There are more students than we care to mention (I live with one) for whom achieving the exact (high) grade they have worked for matters, but we aren’t dealing with exact anymore. Of all the uncertainties in the world right now, achieving an A rather than an A* or a 4 rather than a 5, is not our most challenging.
