So, what better thing to do on a Sunday evening than to write some comments about data and the like! There has been a lot of talk of late about the reliability and validity of attempting to predict outcomes, and Ofsted’s School Inspection Update (March 2017) wrote…
’As inspectors, we can help schools by not asking them during inspections to provide predictions for cohorts about to take tests and examinations. It is impossible to do so with any accuracy until after the tests and examinations have been taken, so we should not put schools under any pressure to do so – it’s meaningless. Much better to ask schools how they have assessed whether pupils are making the kind of progress they should in their studies and if not, what their teachers have been doing to support them to better achievement.’
I understand the sentiment of this statement and to some extent support this, especially when one looks at the reliability and accuracy of some Ofsted judgements prior to results actually being published! I am very much aware that a school should not be judged on outcomes alone, but it is interesting to note the percentage of schools that have been judged as good (based on predictions) and then observe the Progress 8 scores as illustrated in the diagram below. All the green dots represent schools judged as ‘good’ as of January 2017.
There is such a wide range of outcomes here, yet some schools that, on the face of it, have ‘inadequate’ outcomes and, conversely, some that have ‘outstanding’ outcomes have all been labelled as ‘good’.
So where does the data influence judgements, and what do Ofsted mean by ‘assessed whether pupils are making the kind of progress they should in their studies’? Is this an objective process or a subjective one? I have seen first-hand how some inspectors have made judgements about ‘the quality of teaching and learning’ (and standards) because a child had drawn in their English book…and it wasn’t a very good drawing! So how can we make this process more objective? I would suggest considering and reflecting on the notion of ‘assessment without levels’: mapping the curriculum to the programme of study from year 7 through to year 11, matching this to what a child should know, understand and be able to do in each year, and backing it up with evidence of testing. Taken together, these measures would allow for pretty objective formative assessments.
These are testing times, and because of the changes in the examination system schools really do not know what the outcomes will be, apart from the fact (as announced by Ofqual) that a broadly fixed percentage of the cohort will obtain each grade regardless, to some extent, of what the absolute standard is, as illustrated in this diagram:
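To make that ‘comparable outcomes’ idea concrete, here is a minimal sketch of grading by cohort rank rather than by absolute mark. The grade bands and proportions are invented for illustration and are not Ofqual’s actual figures or method:

```python
# Illustrative sketch of 'comparable outcomes': boundaries are set so that
# roughly fixed proportions of the cohort achieve each grade band, whatever
# the absolute standard of the marks. Bands and shares below are invented.

def assign_grades(scores, grade_shares):
    """Assign grade bands by rank so each band gets its share of the cohort.

    scores       -- raw marks for the cohort
    grade_shares -- list of (band, proportion) from highest band down
    """
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    grades = [None] * len(scores)
    pos = 0
    for band, share in grade_shares:
        count = round(share * len(scores))
        for i in ranked[pos:pos + count]:
            grades[i] = band
        pos += count
    for i in ranked[pos:]:  # any rounding remainder falls into the lowest band
        grades[i] = grade_shares[-1][0]
    return grades

cohort = [82, 75, 71, 64, 58, 55, 49, 42, 37, 21]
shares = [("9-7", 0.2), ("6-4", 0.5), ("3-1", 0.3)]
print(assign_grades(cohort, shares))
```

The point of the sketch is that shifting every raw mark up or down would leave the grade distribution unchanged, which is exactly why predictions made from raw performance alone are so uncertain.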
The new EAP area in SISRA Analytics is allowing me (for the first time ever) to see whether my cohort are on track, regardless of which year they are in, by matching up teacher assessments to where pupils should be in any particular year group or term (subject to how a school decides to assess, i.e. current/predicted/end-of-year grades, different KS3 and KS4 grading systems, etc.). I do feel the SISRA developers have been as flexible as possible in creating a system that caters for most (if not all) methods, which is pretty impressive. Ofsted inspectors can then observe what is going on in the classroom and in books, and triangulate those observations with some objective assessments, as illustrated by the data, in whatever year group they like. (Mind you, that’s what they were meant to have done before, wasn’t it? But how then does that account for the wide range of ‘goods’ based on the outcomes above!)
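The kind of ‘on track’ check described above can be sketched very simply: compare a teacher’s current assessment against an expected point on a flight path towards an end-of-year-11 target. The grading scale, flight path and tolerance here are all invented for illustration; this is not SISRA’s actual model:

```python
# Hypothetical sketch of an 'on track' check: compare a teacher assessment
# against an expected working-at grade for the pupil's year group. The 1-9
# scale, flight path and tolerance are invented, not SISRA's real method.

# Expected working-at grade by year group for a pupil targeting grade 7
# at the end of year 11 (illustrative values only).
FLIGHT_PATH = {7: 2, 8: 3, 9: 4, 10: 5.5, 11: 7}

def on_track(year, current_grade, target=7, tolerance=0.5):
    """Return 'above', 'on track' or 'below' against the expected grade."""
    expected = FLIGHT_PATH[year] * (target / 7)  # scale path to pupil's target
    if current_grade >= expected + tolerance:
        return "above"
    if current_grade >= expected - tolerance:
        return "on track"
    return "below"

# A year 9 pupil assessed at grade 4 against an expected grade of 4:
print(on_track(9, 4.0))
```

Whatever the actual model, the principle is the same: the judgement comes from comparing an assessment with an explicit expectation, which is what makes it more objective than an impression formed from a drawing in an exercise book.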