Posted by: lidcblog | September 8, 2014

Evaluating social interventions: What works? In whose terms? And how do we know it works?

Women participating in a farmer field school organised by CARE in Bangladesh (source: Wikimedia Commons).

What do farmers attending field schools in rural Africa have in common with women attending maternity clinics in England? Both groups have played a role in rigorous academic research. They have influenced studies evaluating programmes that were designed to improve their lives.

In the mid-1990s Farmer Field Schools were spreading across Africa. These schools use active, hands-on learning and collaboration to improve agricultural productivity. Their strong participatory ethos makes the field schools very relevant to those involved.

Logic tells us that these schools should make a big difference to the farmers’ yields and to their lives. However, a strong theoretical base, enthusiasm and participatory principles don’t guarantee success. A research study seeking to collect, analyse and synthesise a wide range of evaluations of field schools found their success is largely limited to pilot projects. Furthermore, success is less likely with poorer farmers and women farmers.

It would be helpful to know how Farmer Field Schools compared with other approaches to improving agriculture – but the authors found a dearth of such rigorous impact evaluations. They see a need for studies that track potential changes through the whole course of the project — from the preparatory work of training facilitators and identifying potential participating farmers through to the ideas they discuss, try out and share with their neighbours.

They particularly recommend rigorous evaluations assessing impact in broad terms – not just agricultural productivity, but also empowerment, health and the environment.

Carrying out such evaluations is highly skilled work. In fact, knowing how to commission research that will yield really practical information – that will answer the questions and concerns of the people whose lives it is seeking to benefit – is not straightforward either.

Such issues will be part of a short course in Evaluation for Development Programmes offered by LIDC later this year, on which I will be teaching.

The course will offer opportunities for participants and tutors alike to learn from each other, and is designed for:

  • development professionals who commission and use evaluation studies;
  • academics who plan to work in multi-disciplinary teams on future evaluation studies of development programmes; and
  • postgraduate students who wish to gain a better understanding of the terminology and fundamentals of evaluation methods.

Our vision for the new course is that, by training professionals who design social interventions, it will help to deliver effective and appropriate support for better health and wellbeing. It will help them to understand, commission, use and interpret evaluation studies, and to work with potential beneficiaries such as farmers in Africa or pregnant women in the UK.

Research on anti-smoking support for pregnant women in the UK offers a contrasting example of why rigorous academic evaluation of the impact of social interventions is not enough.

In many high income countries in the 1990s, pregnant women were commonly advised to avoid or give up smoking for the health of their baby. The success of this strategy was assessed by rigorous randomised controlled trials, which reported reduced proportions of women smoking and fewer babies born too soon, too small or sick.

However, these trials took little notice of other criteria considered important by health promotion specialists and pregnant women themselves. What, they wondered, were the effects of encouraging women to give up smoking, if smoking helped them cope with the daily pressures of disadvantaged lives? Might asking midwives to counsel women against smoking interfere with supportive midwife-mother relationships?

Concerned practitioners and women who smoked (some who gave up, and some who did not) discussed their theories about the impact of smoking cessation programmes in pregnancy. At that time these theories had not been tested. Drawing attention to this gap in our collective knowledge encouraged a new generation of randomised controlled trials that took into account the social and emotional consequences, not just biomedical measures, of smoking cessation programmes. Subsequent studies showed that concerns about potential harm, such as stressed mothers and damaged family relationships, were largely unfounded. Now national and international guidelines are based on rigorous evaluations designed with women, not just for them.

These two very different examples raise questions in common about theories of change, research methodology, criteria for success, equity and ethics. They also feature not just individual studies, but whole literatures of similar studies which strengthen the evidence underpinning current recommendations. These key characteristics for evaluating complex social interventions require research approaches that cut across traditional academic disciplines, and draw heavily on the policy, practice and personal knowledge of those directly involved.

Contributed by Prof. Sandy Oliver, Professor of Public Policy, Institute of Education.
This post was first published on the IOE blog.
