To project or not to project, that is the question… Are you worried about how best to evaluate your 2017 cohort? You may be finding that your headline measures don’t seem comparable to previous years, and that your Progress 8 scores and EBacc pass percentages look awfully low.
What is the problem? The issue, as you are no doubt aware, lies in the mismatch between the 9-1 and A*-G grades, and between the 2016 and 2017 point scales, which makes the calculation and use of some of the new performance measures, though not all, quite problematic.
Let’s look at the measures one by one. I’m going to start with the EBacc and work up.
At a headline level, any projected % of Year 10 achieving the EBacc is going to be accurate, since it is easy to calculate who has a 5 in English and maths, and a C in the other relevant subjects. However, it’s not going to be that useful. It isn’t comparable to previous years, since in order to achieve it, your students need to attain a 5 in English and maths, whereas previously they would have had to attain a C.
As we can see from the Ofqual postcard, only one third of those who would have attained a C will get the new 5, so you can expect your EBacc attainment to drop.
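The projection itself really is this simple. Below is a minimal sketch of a 2017-style EBacc check; the subject names and the simplified one-subject-per-pillar structure are my own illustrative assumptions, not the full DfE entry rules (which allow, for example, different science routes).

```python
# Simplified 2017-style EBacc check: grade 5+ in reformed (9-1) English
# and maths, grade C+ in the unreformed EBacc subjects.
# One subject per pillar is an assumption for brevity.

LEGACY_PASS = {"A*", "A", "B", "C"}  # C or better on the old scale

def achieved_ebacc(grades):
    """grades is a dict: 9-1 grades as ints for english/maths,
    A*-G letters for the unreformed subjects."""
    if grades["english"] < 5 or grades["maths"] < 5:
        return False
    return all(grades[s] in LEGACY_PASS
               for s in ("science", "humanity", "language"))

print(achieved_ebacc({"english": 5, "maths": 6, "science": "B",
                      "humanity": "C", "language": "D"}))  # False: D in language
```

The check is easy; as noted above, the hard part is that the resulting percentage no longer means what it meant in 2016.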
Threshold in English & maths
The next measure, the % achieving the threshold in English and maths, is also easy to project. Again though, it’s not comparable to previous year groups due to the difference between a C and a 5, so it’s hard to judge whether your Year 10 percentage is a good one or not!
Your Attainment 8 figures can also be calculated quite easily, by allocating the new 2017 points to the A*-G grades for your unreformed qualifications. Once again, the resulting scores are not comparable to previous years, so be careful how you use them.
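The calculation can be sketched in a few lines. The A*-G point values below are the published 2017 DfE points for unreformed qualifications, and for reformed 9-1 English and maths the grade number itself is the point score; the slot-filling here (grades already sorted into their baskets, no entry-rule checks) is a simplifying assumption.

```python
# Sketch of a 2017-style Attainment 8 total. English and maths are
# double-weighted; the three EBacc slots and three open slots take the
# 2017 points for A*-G grades.

POINTS_2017 = {"A*": 8.5, "A": 7.0, "B": 5.5, "C": 4.0,
               "D": 3.0, "E": 2.0, "F": 1.5, "G": 1.0, "U": 0.0}

def attainment8(english, maths, ebacc, open_slots):
    """english/maths are 9-1 grades (ints); ebacc and open_slots are
    lists of three A*-G grades each."""
    score = 2 * english + 2 * maths            # double-weighted baskets
    score += sum(POINTS_2017[g] for g in ebacc)
    score += sum(POINTS_2017[g] for g in open_slots)
    return score

print(attainment8(5, 4, ["B", "C", "C"], ["C", "C", "D"]))  # 42.5
```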
Of course the Attainment 8 scores are also used to calculate Progress 8, and it’s here that things get really tricky.
This is because the only national estimates we have are based on 2016 exam results, and are therefore calculated using 2016 points. Comparing in-school figures using 2017 points with national figures using 2016 points is not going to produce Progress 8 figures that mean anything. I have heard data managers and members of SLT say ‘but even if they are not accurate, aren’t they better than nothing?’ I personally think it’s the other way round: ‘nothing’ is better than data that is at best misleading and at worst downright wrong. The change in the points, as you can see from the graphic, is not even linear, so there is no straightforward conversion between the two scales.
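To see the non-linearity in numbers: the 2016 scale was linear (one point per grade), while the 2017 scale is not, so no single offset or multiplier converts one into the other. The values below are the published DfE point scores for each year.

```python
# The two point scales side by side: the grade-by-grade shift is
# different for almost every grade, so no linear adjustment fits.

POINTS_2016 = {"A*": 8, "A": 7, "B": 6, "C": 5,
               "D": 4, "E": 3, "F": 2, "G": 1}
POINTS_2017 = {"A*": 8.5, "A": 7.0, "B": 5.5, "C": 4.0,
               "D": 3.0, "E": 2.0, "F": 1.5, "G": 1.0}

for grade in POINTS_2016:
    shift = POINTS_2017[grade] - POINTS_2016[grade]
    print(f"{grade:>2}: 2016={POINTS_2016[grade]}  "
          f"2017={POINTS_2017[grade]}  shift={shift:+.1f}")
# The shift runs from +0.5 at A* down to -1.0 at C, D and E.
```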
An alternative option would seem to be to stick with 2016 points across the board. Surely we could compare in-school Attainment 8 based on 2016 points with national estimates based on 2016 points, and it will at least give us an idea of where we stand?
Sadly not. We can’t actually choose to stick with 2016 points in school, due to the 9-1 grades in English and maths. The 8 points a student will be allocated for a grade 8, 7 points for a grade 7 etc. are the 2017 points, and therefore don’t equate directly to the available English/maths estimates, which are based on 2016 points. The easiest way to demonstrate this is by way of an example. Let’s have a look at the English basket, and how the change from A*-G to 9-1 would affect Attainment and Progress 8 figures for similar students.
One of last year’s Y11, Bobby Smith, arrived in Year 7 with a KS2 English/maths average of 4.4, giving him an English basket estimate of 9.88. He is predicted a low to middling grade C in English language or literature (it doesn’t matter which, as they’re now equally weighted), giving him an English basket score of 10, i.e. 5 doubled. This gives him an English Progress 8 score of +0.06, just slightly positive.
However, a Y11 student this year with the same KS2 level and the same projected English achievement, a low to middling C (which will actually be a grade 4 in 2017), will end up with an English basket score of 8. If we compare this to the 2016 estimate, he gets a pretty hefty negative English basket progress score, which is clearly wrong. This of course will in turn affect the overall P8 calculation.
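The Bobby Smith example in numbers: the English element score is the best English grade doubled, and the element progress score is the score minus the estimate, divided by two, since English fills two of the ten slots. The 9.88 estimate is taken from the example above.

```python
# English-element progress for a KS2 average of 4.4 (estimate 9.88),
# comparing the same C-grade student under the two point scales.

ESTIMATE = 9.88  # English element estimate, from the example in the text

def english_progress(points_for_grade):
    score = 2 * points_for_grade      # English counts twice
    return (score - ESTIMATE) / 2     # per-slot progress

print(round(english_progress(5), 2))  # 2016: a C is 5 points -> +0.06
print(round(english_progress(4), 2))  # 2017: the same C is a grade 4 -> -0.94
```

Same student, same teaching, same exam performance: a swing of a whole grade’s worth of progress, purely from mixing the point scales.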
What would make all our lives a lot easier would be a set of ‘shadow’ estimates based on the 2016 national achievement, but calculated using the 2017 points, to enable projection of P8 scores for current Y11. However, the DfE have said that they are not going to produce any ‘shadow’ estimates, because they actually don’t WANT us to project our scores for the new measures.
Now, if I am completely honest, there are a couple of ways in which you could get around this. Firstly, you could create your own set of national estimates based on 2017 points, using a formula to adjust each estimate. Then you could compare your in-school 2017 Attainment 8 scores to your fake 2017 national estimates to get Progress 8 scores based on 2017 points. Secondly, you could take your 9-1 grades, and adjust the points allocated to them, to give them equivalency to the 2016 points allocated to the A*-G grades. You would then have a set of in-school 2016-style Attainment 8 scores to compare to the actual 2016-style national estimates. Due to enormous pressure from our customers, SISRA are actually looking into the least misleading way of doing it.
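To make the second workaround concrete, here is one possible version of it: converting 9-1 English and maths grades back onto the 2016-style 1-8 point scale before calculating Attainment 8. The anchor points (bottom of grade 7 = bottom of A, bottom of grade 4 = bottom of C, bottom of grade 1 = bottom of G) follow the published grade equivalences, but everything between them is a linear interpolation of my own invention, which is exactly why any such conversion should be treated with caution.

```python
# Hypothetical 9-1 -> 2016-points conversion via linear interpolation
# between the published equivalence anchors. The in-between values are
# an assumption, not anything official.

ANCHORS = [(1, 1.0), (4, 5.0), (7, 7.0), (9, 8.0)]  # (9-1 grade, 2016 points)

def points_2016_equivalent(grade):
    """Map a 9-1 grade onto the 2016 1-8 point scale."""
    for (g0, p0), (g1, p1) in zip(ANCHORS, ANCHORS[1:]):
        if g0 <= grade <= g1:
            return p0 + (p1 - p0) * (grade - g0) / (g1 - g0)
    raise ValueError("grade must be between 1 and 9")

print(points_2016_equivalent(4))  # 5.0 -> a C, as in the Bobby Smith example
```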
However, I have concerns about using any kind of workaround. Both these options, and any others that may be available, carry the risk of encouraging you, as school leaders, to base decisions on data that is unreliable, because it is calculated using figures that are not real. Just because you can do it doesn’t mean you should, and doing so could lead to some major surprises come summer 2017. If you really have to use a workaround, I would use the second option, since it is almost impossible to know by how much to adjust the national estimates.
So what can you do?
For our new Y11 students, calculating useful headline figures seems to be a problem. In some ways, I can’t help feeling that this should be liberating for school leaders. Instead of constantly worrying about the headline figures, the focus can shift to the departments, the classes and the individual students. There are many ways we can analyse the data to ensure that we know our students are making good progress. We are going to have to adopt the idea that, in the same way that if we look after the pennies the pounds will look after themselves, if we look after the students and departments, the overall performance measures will look after themselves. We need to focus on the small data, rather than the big data.
For details of how, see part two of this blog, ‘Effectively tracking and evaluating your new Y11’.