30 years of Experiments in Evaluation

Whether evaluating energy projects in Kenya or Sri Lanka, services provided to UK MPs in Westminster or a UK university’s grant-making, anthropological research skills helped me side-step the customary assumptions. Evaluation often involves simplistic questions like, ‘Is this project a success or failure?’, when most human endeavour is both. It frequently fails to take account of plural views, complex histories and the socio-political processes involved in evaluating people. It is often mind-deadeningly dull and formulaic when creativity is needed.

In 1988 I was asked to undertake an evaluation of a project in Western Kenya. I had recently read an angry article by Adrian Adams about agricultural development in Senegal; in particular, about how foreigners fail to understand that they are not creating something new when they embark on projects in other people’s countries: they are intervening in the lives of others who are already busy pursuing their own aspirations. I plunged into the world of development with an anti-foreigner disposition, highly sceptical of my own value.

I was given a brief training by the INGO I was working for – about 20 minutes – telling me that evaluation was about measuring (a) progress (or lack of it) against the objectives and (b) impact for the intended beneficiaries, together with a template of headings to help me organise my report. In theory, the project should have collected the data for me to make these assessments along the way. In practice, the data was thin – a few reports revealing that the beneficiaries loved the project. So I visited some of the beneficiaries. The project aim was to train women’s groups to make and sell fuel-efficient stoves as a way of generating income. They loved the project staff, and enjoyed the training, but did not believe that there was a market for these stoves. So the women’s groups continued what they usually did – trading maize – and told me that loans for their existing business would have been more useful. I reported these findings. When the head of the department asked me if she should close the project and spend the funding elsewhere, I thought yes but said no. I couldn’t sack the women’s groups or the project staff from development; I did not have the heart to exclude people who had been kind to me, given me food and conversation.

The next evaluation was equally tricky. I found that although the intended beneficiaries of another income generation project (this time in Sri Lanka) – including highly productive potters from about 20 households – benefitted hugely, the rest of the potter community (some thousands of people across the country) were excluded. The richest few started employing their poorer relatives as waged labourers in a community previously characterised by relative egalitarianism. The project had created new capitalist enterprises, once again ignoring what the political economy of the community had been for decades. Some were so angry that they sabotaged the firing of new products, but this did not stop a few becoming far richer than even the project staff or government officials. I pointed this out to the INGO but they were not especially troubled by this finding.

I was beginning to understand that evaluation is as political a process as any other aspect of work within the international aid and development industry. Much later I even joined the board of the INGO to argue that we should be more politically savvy about the changes we were bringing about. If we anticipate that we will increase inequality, surely we should think again? But the senior management of the organisation did not agree. They were more focused on providing evidence of impact for some, without worrying too much about who they were, than on scrutinising the effects on relationships between people.

I jumped sideways in 1998 from studying the politics of aid to researching the social life of politics. Some years later I was asked by officials in the House of Commons about how to evaluate the services that the administration delivers to MPs – ranging from clerkly advice about amendments to bills to library research and catering. After a series of conversations with the now Clerk/CEO of the House, John Benger (who was head of Service Delivery at the time), we co-designed a new way of doing evaluation. They stopped the annual survey carried out by an external marketing organisation, which provided little more than a ranking and the odd comment about failings. I suggested that they use their own staff, rather than consultants, to interview MPs and MPs’ staff, not only in Westminster but also in constituencies, to find out about existing services but also to elicit ideas for new ones. Interviewing would be hard for the most junior staff, while the most senior officials might intimidate MPs, so I recommended middle-ranking officials as interviewers. I briefed the interviewers on how to handle the delicate process of talking to MPs, they were trained by the Social Research Association in interviewing techniques, and they set off in pairs – armed with a checklist of questions and instructions to be responsive to whatever MPs wanted to raise – to interview MPs and their staff.

The results were illuminating. Officials discovered precisely how MPs experienced the various services provided by parliament, but also realised that there were gaps. Constituency staff felt isolated, and one of the many innovations that came out of the project was a huge expansion in the services provided for them. Just as importantly, officials understood far more about the diverse needs of MPs, as revealed in the report that they wrote about it. That was the end of my involvement because, unlike many evaluators, I resisted the temptation to create further evaluation work for myself. This wasn’t entirely altruism. I felt that to remain impartial towards the House of Commons as a researcher I should not be employed by them.

My third significant experiment in evaluation has arisen out of a project that marries my obsessive interest in parliaments with a commitment to working with colleagues in South Asia and Eastern Africa. We have been giving grants to scholars and artists in Ethiopia and Myanmar to design and carry out their own research on the relationship between parliaments and society. The explosion of outstanding research speaks for itself – all available on our website on partners’ research pages as well as in an outputs library. But the more complex task was to evaluate ourselves – a small team of four at SOAS co-ordinating the management of the project. Since we are all trained in anthropology, we have been doing a collaborative ethnography.

How did we do our ethnographic evaluation? It is premised on our own participant-observation in the project since 2017. We have emailed, phoned, visited, Skyped/Zoomed and worked with colleagues across a multitude of organisations – endlessly reflecting on how the project has been experienced from different perspectives. The four SOAS team members discussed the ‘findings’ as narrated in various files on each funded project, logs, visit reports, videos, and other outputs, often debating furiously about how we remembered what happened and why we thought events unfolded as they did. We critiqued ourselves and each other, sometimes uneasily, and always learned something from every conversation. Evaluation is partly history and history is always contested. So rather than worrying about contestation, we embraced it. 

We are producing various articles from our different viewpoints but we have also attempted to write about our shared experience in a policy briefing. This is in draft form and we are currently inviting all participants to add their perceptions, comments and suggestions. Please add your own and send them to grnpp@soas.ac.uk.
