‘To understand and use the results of education research, we need to know how they were gained’

We can believe that ‘baked beans raise children’s attainment in maths’ – or we can look more deeply at different approaches to research and how valid they are before acting on ‘evidence’, says Phil Wood…

- by Phil Wood

In the past few years there has been an increasing interest in research evidence in schools. This has included the development of teacher-led conferences, the use of tools such as lesson study, and the rise of quasi-experimental approaches advocated by the Education Endowment Foundation.
Increasing the level of evidence-informed practice in schools can only be a good thing but, like teaching and learning, research is a complex business, and pedagogical research is a messy science. Researchers use many different approaches to investigate education, and as a result there is a wide range of tools that can be used. So, to understand and use the results of research, we need a clear idea of how they were gained – and we need to be aware of the continuing debates about the validity of different research approaches.
Full of beans?
At a recent workshop on research design, a novice teacher researcher discussed with us his plan to try out an intervention with half of his English class while teaching the other half in the way he normally taught. To ascertain the effectiveness of the intervention, he would compare the two groups’ levels of attainment at the start and end of the project in a test-retest model. Two aspects of this approach concerned us. First, we asked him to consider whether he would know for certain that the intervention was the sole factor behind any measurable improvement. As he looked somewhat bemused, we explained it in these terms:
My colleague’s Friday-afternoon maths class was achieving below the national average, so she trialled taking the children out for lunch on Fridays for a month and buying them beans on toast. After a month, every child had improved by at least one level. Thus, her research ‘proved’ that giving a child beans will improve their maths.
“Is this a reasonable claim,” we asked him, “or could other factors have been at play?” This discussion went on for some time. “Maybe they tried harder for her because she paid them more attention”; “Maybe they used to be too hungry to concentrate in the afternoon”; “Maybe their self-esteem improved because she remembered their names after taking them out and chatting”; and so on. So yes, other factors could well have been at play – adding beans alone is unlikely to lead to improved attainment. Learning is a highly complex area to study. Can we isolate variables and establish causal relationships in a teaching and learning context? Or are the processes simply too complex to tease out one by one?
A question of control
Our other concern was the use of control groups in educational research with children. It is something that teachers starting out as practitioner researchers quite commonly do: they identify a test group and a control group, trial the intervention – whether it is a new teaching strategy or a set of activities – with the test group, and let the control group carry on with its ‘normal’ classroom activities. In doing so, however, they are potentially and deliberately disadvantaging one group relative to the other. Should this approach therefore be seen as unethical?
A very different approach is practitioner research, which focuses on your own classroom and on developing and improving your own practice. Practitioner research does not merely attempt to understand and describe a context, but to change it for the better.
The emphasis is on transformation, linked to the concept of praxis – which in this context highlights the need for a symbiotic relationship between theory and practice. Theory can therefore be used in the initial framing of a project, but with the explicit purpose of playing a role in bringing about practical change.
For practising teachers it is a positive choice, as it can be used to interrogate and develop pedagogy and to serve as the basis for collaborative work with colleagues that offers direct benefits for children. The two approaches to research are fundamentally different, and educationalists argue for and against each of these contrasting perspectives as valid ways of understanding pedagogy. This highlights the need for teachers to have a good grounding in research methods if they are to engage critically with the insights that educational research can offer. No baked beans need be involved.
This article is adapted from the book Educational Research: Taking the Plunge, co-authored by Phil Wood and Joan Smith and published by Independent Thinking Press
Phil Wood teaches master’s programmes in international education at the University of Leicester and blogs at hereflections.wordpress.com