Giving the developing world the evidence that it deserves
Richard Churches
Richard is Global Head of Research at Education Development Trust and leads its programme of global public benefit research. He has worked in education for over thirty years as a teacher, school leader, consultant and government adviser. He is the author of a number of books including Neuroscience for Teachers: Applying brain science in the classroom and Teacher-Led Research: How to design and implement randomised controlled trials written with Eleanor Dommett and Ian Devonshire.
16 February 2021
As the first coronavirus vaccines roll out across the globe and we witness the ‘joy of the jab’, as a recent BBC commentator put it, it is hard not to pause for a moment and reflect again on the state of evidence in education.
Lessons from medical practice
Repeatedly, we seem more than capable of identifying the problems we face with increasing accuracy. To borrow a medical term, we become better and better at ‘diagnosing’ where we fail. Yet we have never fully grasped the potential of truly systematic experimentation to find out what works in education, in which context and with which children, to solve these problems.
For teachers and systems in developing countries, the picture is even worse. Not only is it rare for studies into specific aspects of classroom pedagogy to take place in developing countries (not that such studies are common in the developed world), but the lack of context-specific evidence leaves only one other route: applying interventions developed and researched elsewhere, by people from outside these contexts.
What is also noticeably different about the types of programmes undertaken in medicine and healthcare, compared to education, is the use of replication (the repetition of studies) to establish effectiveness, and an awareness of the importance of dosage and treatment windows. Of course, such things still provoke debate and differences in interpretation, as we have seen with the recent discussions about the length of time that needs to elapse between doses of one vaccine compared to another. Nonetheless, our cousins in medicine and healthcare recognise these considerations as vital, and they are a key aspect of clinical trial research.
In education, we have rarely looked at the efficacy of a pedagogy in terms of how long it should be used, or whether different pupil groups might respond differently depending on dosage. Instead, a blanket approach is often adopted in which all children are exposed to the same approach, irrespective of the learning ‘symptoms’ the children ‘present’ with. Should a child still be exposed to intensive phonics teaching if, or once, their reading is already fluent? Should we be espousing similar approaches in systems where the alphabet is inherently phonetic (as in the Arab world)? I would argue that because we do not yet know the answers, we need to experiment further, even with treatments that have efficacy for the average Western child.
In a series of programmes, we have been working to teach teachers to design randomised controlled trials, paralleling the role that serving clinicians have in the development of clinical evidence (particularly the application of a treatment to a specific group in a specific context). In education, by contrast, classroom pedagogy research is often conducted by people who no longer practise as teachers, or who never have. This contributes to a democratic deficit, one that is amplified in the developing world when evidence from other contexts is transferred with the assumption that the practice will produce improved pupil outcomes irrespective of context.
The programmes we have led have included teaching teachers to explore the neuroscience and cognitive psychology evidence for themselves, rather than simply assuming how it will translate to the classroom. We also implemented a large-scale programme of controlled studies in which teachers designed and implemented workload reduction strategies and evaluated their impact on pupil attainment and staff wellbeing.
A similar teacher-led randomised controlled trial programme has also just begun, applying the scientific method to the exploration of science pedagogy. This aims to produce an extensive summary of the evidence from multiple repetitions of similar approaches with different children in different contexts – an analysis that will take into account the length of the ‘treatment windows’ and pupil outcomes.
Teacher-led trials have an additional advantage: compared with the vast expense of running some externally implemented trials in education, they are inexpensive and can often be embedded in existing plans and workloads. With training and support, teacher researchers can successfully design, implement and report findings from smaller-scale trials, with those results amalgamated to enable effective interpretation.
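To make the core mechanic concrete, the sketch below shows one simple way a small classroom trial could randomly assign pupils to arms. This is an illustrative assumption on my part, not a procedure described in the programmes above; the pupil names and arm labels are hypothetical, and real trials would add considerations such as stratification and blinded outcome measurement.

```python
import random

def assign_to_arms(pupils, arms, seed=None):
    """Randomly allocate a class list across trial arms of near-equal size."""
    rng = random.Random(seed)  # fixed seed makes the allocation reproducible
    shuffled = pupils[:]
    rng.shuffle(shuffled)
    # Deal pupils out round-robin so arm sizes differ by at most one.
    allocation = {arm: [] for arm in arms}
    for i, pupil in enumerate(shuffled):
        allocation[arms[i % len(arms)]].append(pupil)
    return allocation

# Hypothetical class of 12 pupils, with three arms loosely echoing
# the Sierra Leone reading trial described below.
pupils = [f"pupil_{n:02d}" for n in range(1, 13)]
groups = assign_to_arms(
    pupils, ["read_to_doll", "read_alone", "read_in_pairs"], seed=42
)
for arm, members in groups.items():
    print(arm, len(members))
```

Recording the seed alongside the class list means the allocation can be independently reproduced, which matters when many small trials are later amalgamated.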
Taking forward the principle that it should be serving teachers, in a specific context, who are at the forefront of evidence generation, a further project sought to demonstrate the utility of such an approach in very different settings. We therefore supported teachers to conduct randomised controlled trials in a wide range of countries, including Chile, Colombia, India, Lebanon, Sierra Leone, Malawi, Nigeria and the Philippines.
In Sierra Leone, we supported Miriam Mason-Sesay to implement an innovative randomised controlled trial that looked at whether reading to a doll (made by the child) was more effective than reading alone or to another child. This trial illustrates another issue in the uptake of education evidence: ideas that have never been tested in any classroom often spread quickly from conference to conference, blog to blog (and teacher to teacher). Reading to an inanimate object (or even a dog) is something that took off in a number of schools and has been assumed to be effective. As it happens, in Miriam’s trial, reading to a doll was less effective than reading alone or reading in pairs.
Working with these teachers across a wide range of developing contexts, it has become clear not only that such countries have the potential to place teachers at the forefront of evidence generation, but that the need for this goes far beyond potential: it can be seen as a moral imperative in education system reform. If we are serious about evidence in education, we must move to generating evidence that is not only tested in context but meets the needs of local teachers in their contexts. There must also be enough replication to answer questions about when and for how long such approaches should be used, and which pupils are most suited to them. More than that, we need to recognise that imposing pedagogy from one context on another, without testing, is simply no longer acceptable.