
School data – What have we learned over 10 years of assessment?


School data is more pervasive now than ever, but what have we learned about how to use it to our advantage over the last decade?

by Joshua Perry

In October 2013, the Department for Education (DfE) announced that levels would be removed from the national curriculum. But what have we learned about assessment and school data since then? And are we in a better place than we were a decade ago? 

Well, the removal of levels has helped to expose some of the more bonkers things that used to happen in the name of tracking student progress.

It’s much less common these days, for instance, to find a school tracking ‘progress points’, as if progress were a perfectly linear and divisible commodity.

Mercifully, we’ve also learned that measuring progress cannot be achieved by ‘taking what they got last term and adding two’.

Of course, we all want children to make progress. But when we attempt to measure it we may contort our data to breaking point. 

School performance data

A decade ago the default approach of many schools was to have a half-termly data drop. Leaders and governors would stare blankly at reports in the hope that meaningful insights would jump out, like those 3D stereogram pictures from the 90s.

The DfE’s 2018 Making Data Work report was helpful in moving on our thinking. It stated clearly that a school need have no more than two or three attainment milestones a year.

Nowadays we collect less data, and (hopefully) think more about the purpose of that collection and how it will inform meaningful actions.

That has a benefit in terms of teacher workload. It also frees up time for assessment approaches designed to feed into the learning process. 

Teacher observation

There are, of course, some circumstances where teacher judgments are vital.

In EYFS, for example, you may want to track concepts like curiosity or risk-taking, and that requires recording teacher observations. 

That said, authors like Daisy Christodoulou have helped us to understand the limitations of teacher judgment.

Tracking systems historically included tons of descriptors like ‘recognise and name common 2D shapes’. Teachers would decide if the child reached the ‘expected’ standard in that area.

But surely it is more meaningful to set a test that asks children to identify a square, a triangle, and so on?

That way, if there’s a gap in knowledge, you can pinpoint and correct it. 

What’s more, teacher judgments can be skewed by subconscious bias.

Research from the Institute of Education in 2015 found that children from disadvantaged backgrounds tend to be perceived by teachers as less able than their more advantaged classmates.

So increasingly, schools have moved towards question-led forms of assessment to provide more precise and objective feedback.

Standardised assessments have played an important role here, too, offering objective and benchmarked data. 

Ways to measure progress

Another hot topic is how the DfE should calculate a baseline for its progress measures.

Until recently the answer has been to use KS1 outcomes, but this has been unpopular because: (a) it’s not the start of the primary phase; (b) the data relies on teacher judgments; and (c) the system cannot distinguish between the different trajectories of children with English as an Additional Language (EAL) as opposed to, say, children with Special Educational Needs (SEN).

So the government went searching for a better approach, and settled on a Reception baseline.  

The problem is, nobody seems to care much for that baseline either.

Infant and junior schools point out that a measure running from Reception to Y6 doesn’t help them; school staff sigh at having to administer an assessment that they can’t use, as the government does not share the data with schools.

Others fret about how meaningful it is to track the change between a four- or five-year-old and their 11-year-old self.

Seven teachers could have contributed to that journey, overseen by different school leaders; and that’s before you start to factor in external influences such as the home environment, student mobility, month of birth, and so on.

So, over a decade we’ve learned plenty of positive things, like how to collect less data of better quality.

But as we understand school data better we also become more aware of its limitations.

And when it comes to progress measures, we’ve perhaps learned more about why we should be wary of them than we have about how to make them work! 

Joshua Perry is co-founder of Smartgrade and Carousel Learning, and will be speaking at the Data In Schools Conference (DISCo) series. For more details search for ‘Data In Schools Conference’ online. 
