Comparing against targets (PT.2)

June 16, 2017

With Results Day fast approaching, we will soon be making comparisons between our exams and targets data. As discussed in my earlier blog ‘Getting Results Day Ready’, preparation is key! With this in mind, now is a good time to ensure that the number of targets tallies with the number of results you are expecting. Let’s take a look at some data to see why this matters.

Below we can see the Attainment 8, Progress 8 and EBacc data taken from Headlines > Charts in SISRA Analytics. I have compared the Y11 Spring data against school targets. All of the school’s timetabled qualifications have been included in both datasets.

[Chart: Attainment 8, Progress 8 and EBacc headlines, Y11 Spring vs Targets]

Imagine we have just found out that 6 students will take a GCSE in Polish. The Head of MFL expects they will all achieve a grade B. I have added these grades to our Y11 Spring collection. As a result, our Attainment 8 figure has increased slightly and our Progress 8 figure is now positive. *Big cheer!*

[Chart: headline figures after adding the six Polish grades]
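To see why filling an empty slot moves both measures, here is a minimal sketch of the per-pupil calculation. The slot points, the 41-point estimate and the simplified slot-filling are all illustrative assumptions; the real DfE methodology double-weights English and maths and uses published points scores.

```python
# Minimal illustrative sketch, NOT the official DfE calculation:
# the points values and the estimate below are assumptions.

def attainment8(slot_points):
    """Sum a pupil's points across 8 slots; empty slots count as 0."""
    padded = (slot_points + [0] * 8)[:8]
    return sum(padded)

def progress8(actual_a8, estimated_a8):
    """Per-pupil Progress 8: actual minus estimated Attainment 8, divided by 10."""
    return (actual_a8 - estimated_a8) / 10

before = attainment8([7, 7, 6, 6, 5, 5, 4])    # the 8th slot is empty
after = attainment8([7, 7, 6, 6, 5, 5, 4, 6])  # Polish B (assumed 6 points) fills it

# Against an assumed estimate, a negative Progress 8 turns positive
print(progress8(before, 41), progress8(after, 41))
```

Filling an empty slot can only raise Attainment 8, which is why missing grades drag the headline figures down.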

Is there anything else we need to consider? Yes: for complete accuracy, we should also ensure that any datasets we compare against have the same number of grades uploaded. Having entered 6 exam grades, I should also enter 6 target grades so that my comparisons are like-for-like.
See how this has affected the Attainment and Progress 8 target figures.

[Chart: headline figures with matching Polish target grades added]
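One way to automate that tally, assuming you can export each collection as a list of (student, qualification, grade) rows, is to count the grades per qualification in both datasets and flag any mismatch. A hypothetical sketch:

```python
from collections import Counter

# Hypothetical exports: (student, qualification, grade) rows from the
# assessment collection and the targets collection.
results = [
    ("S1", "Polish", "B"), ("S2", "Polish", "B"), ("S3", "Polish", "B"),
    ("S4", "Polish", "B"), ("S5", "Polish", "B"), ("S6", "Polish", "B"),
]
targets = []  # Polish target grades not yet entered

result_counts = Counter(qual for _, qual, _ in results)
target_counts = Counter(qual for _, qual, _ in targets)

# Report any qualification where the grade counts do not tally
for qual in sorted(set(result_counts) | set(target_counts)):
    r, t = result_counts[qual], target_counts[qual]
    if r != t:
        print(f"{qual}: {r} result grades vs {t} target grades - does not tally")
```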

This is extremely important at qualification and class level, particularly if the data forms part of a teacher’s performance management. Here we are looking at the cumulative pass for the MFL faculty without the Polish grades:

[Table: cumulative pass for the MFL faculty, without the Polish grades]

Once the Polish grades have been added to the spring collection this affects some of the summary figures for the department, most notably the average points and residual.

[Table: MFL faculty summary figures with the Polish grades added]

When we factor the targets in too, see how the figures change again.

[Table: MFL faculty figures with the Polish targets included]

As Heads of Department and Class Teachers can be judged on A*-C performance, ensuring the Polish results and targets are added has, in this example, a positive effect on the data. The average points have increased too, as has the Residual figure, which shows how well the faculty is performing overall across qualifications with the same point scale. This could make the difference between receiving a pay increase and not!
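The arithmetic behind that shift is simple to sketch. Assuming legacy A*-G points and made-up faculty grades (both are illustrative only; Analytics’ own residual calculation is more involved), adding six grade Bs pulls the faculty’s average points up, and a simple Polish residual is the subject’s average minus the faculty average:

```python
# Assumed legacy points scale; real scales vary by qualification type.
POINTS = {"A*": 8, "A": 7, "B": 6, "C": 5, "D": 4, "E": 3, "F": 2, "G": 1, "U": 0}

def average_points(grades):
    return sum(POINTS[g] for g in grades) / len(grades)

# Illustrative MFL faculty grades
french = ["A", "B", "C", "C", "D"]
spanish = ["B", "B", "C", "D"]
polish = ["B"] * 6

without_polish = average_points(french + spanish)
with_polish = average_points(french + spanish + polish)

# A simplified residual: subject average minus the faculty average
polish_residual = average_points(polish) - with_polish
print(f"{without_polish:.2f} -> {with_polish:.2f}, Polish residual {polish_residual:+.2f}")
```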

This also applies to any other dataset you compare with – whether it’s FFT estimates, performance management targets, CATs, MIDYIS, or YELLIS. Always ensure the figures tally!

Many schools use Analytics to model targets or for forecasts; again, a complete set of grades is essential for accuracy.

Another common mistake is qualifications not being correctly nominated as EBacc subjects. Here we can see the effect of Computer Science on some of the key headlines when it is incorrectly set up, compared with when it is correctly set up as a special.

[Chart: key headlines with Computer Science incorrectly vs correctly set up]

Another subject often incorrectly nominated is RE, which is sometimes set up as a humanity. This has the opposite effect to Computer Science on the EBacc basket.

A simple check can be made in Analytics to see whether your datasets tally: compare an assessment collection against your targets and check the ‘Total Grades’ column. Your colleagues may just thank you for it!

Hopefully, having read my earlier blog as well as this one, you are now feeling more confident about Results Day and the accuracy of your data. Within the next few days, you will be able to read a further blog to help with troubleshooting if there are discrepancies between your figures and the DfE’s.

by Emma Maltby, Data Consultant
