Posted by: lidcblog | May 4, 2012

Evaluating the impact of higher education for development – opportunities and challenges

In conjunction with the Association of Commonwealth Universities, LIDC recently organized a two-day conference entitled “Measuring impact of higher education for development”. The event aimed to generate critical discussion around assessing the impact of higher education interventions in developing contexts, and it attracted a wide variety of stakeholders, from development professionals to academics to evaluation experts. Although Southern perspectives were somewhat underrepresented, the conference did an excellent job of bringing together diverse viewpoints and raising a number of significant challenges and opportunities.

For decades, higher education was largely ignored as a potential contributor to development. Rather than supporting higher education, donor agencies focused their efforts on what they perceived as a more “pro-poor” education agenda – the attainment of Universal Primary Education, prioritized in the Millennium Development Goals (MDGs). Any aid to higher education since the 1980s has tended to take the form of scholarship programs for students from the developing world to attend universities in the West.

In recent years, however, the situation has started to change. A new perception of the role of universities in the context of globalization and the emerging “knowledge economy” has triggered a wave of revitalization efforts aimed at improving the quality of universities in the developing world.

Although this renewed focus on higher education for development is encouraging, it is unclear whether the approaches adopted – scholarships, research partnerships and structural institutional reforms – are really the best methods for strengthening higher education capacity. In fact, there is some evidence that current interventions may actually be damaging (see, for example, Mahmood Mamdani’s damning critique of neoliberal reforms at Makerere University in Scholars in the Marketplace).

The LIDC conference attempted to address this issue by considering the potential for using rigorous evaluation methods to determine the impact of higher education interventions on development. Although the event was a crucial first step in determining the feasibility of such an approach, discussions during the conference underlined a number of significant challenges.

The first and most fundamental challenge is articulating a consistent definition of evaluation for higher education interventions – and, in particular, deciding who should be doing the evaluating. Although individual programs and initiatives certainly need to establish procedures for evaluating the outcomes of their own interventions, the wider question of how to evaluate the impact of higher education interventions for development can only be answered at a more macro level. If one of the questions to be asked is “which interventions have an impact on development?”, it seems unreasonable to expect practitioners to answer it, as the honest answer might be that their own intervention has very little impact on development! Impact evaluation of higher education interventions, therefore, seems to be a question for the development and/or research communities – or, even better, a question to be answered within individual national contexts by recipient nations themselves.

Another significant challenge is the meaning of the word “impact.” There first needs to be a shared understanding of what higher education should actually be doing in terms of development, which raises complicated questions about the purpose of higher education and the definition of development. Both are probably best addressed from within individual national contexts.

The nature of higher education further complicates the use of impact evaluation methods. First, attribution of cause and effect, a fundamental component of impact evaluation, is difficult when working with a highly complex institution running multiple programs and initiatives. How, for instance, can particular development outcomes at a societal level be specifically attributed to individual programs implemented within particular faculties or departments of a large university?

The time lag inherent in education initiatives adds further complexity. How, for example, might the impact of a research capacity-building partnership be evaluated, given that training of research professionals may not yield results for 10 or 15 years? Given the lack of accurate historical data at many universities in the developing world, any evaluation would likely need to start with the articulation of baseline data – but that would prevent retrospective analysis of past initiatives and necessitate an extremely long evaluation timeline, with no real prospect of results for many years.

Finally, there is the question of which level of impact to assess. Can the impact of interventions that focus on the individual level, such as scholarships, really be compared fairly with interventions that focus on the institutional level, such as research capacity partnerships, or on the national systems level, such as differentiation efforts? How can the societal impact of individual interventions even be assessed? Impact evaluation depends on the definition of a counterfactual (or, more simply, what would have happened if the intervention had never taken place). How can this be defined for higher education interventions?
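To make the counterfactual problem concrete, the sketch below states the standard potential-outcomes definition of impact from the evaluation literature. This is a generic textbook formulation offered purely for illustration – not something presented at the conference – and the notation (Y for outcomes, ATE for the average treatment effect) is the conventional one:

    % A minimal sketch of the potential-outcomes ("counterfactual")
    % definition of impact, in standard evaluation-literature notation.
    % Y_i(1): development outcome for unit i (a graduate, an institution,
    %         a national system) with the intervention
    % Y_i(0): outcome for the same unit without it - the counterfactual,
    %         which can never be directly observed
    \[
      \text{impact}_i \;=\; Y_i(1) - Y_i(0),
      \qquad
      \text{ATE} \;=\; \mathbb{E}\big[\,Y(1) - Y(0)\,\big]
    \]
    % Since only one of Y_i(1) and Y_i(0) is ever observed for any given
    % unit, estimating the ATE requires a credible comparison group.

For scholarships, a comparison group might at least be imaginable (similar students who did not receive awards); for an institution-level or system-level reform, it is far from obvious what the “untreated” counterpart would be – which is precisely the difficulty the question above points to.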

The use of evaluation methods for higher education also poses a significant risk if they are not implemented with care. Given the difficult financial and political circumstances affecting many universities in the developing world, there is a very real possibility that an impact evaluation might conclude that higher education interventions have had no impact on development. Erosion of public support for universities has been an unfortunate side effect of many studies into academic outcomes at institutions in the West. Given the particular history of aid to higher education, there is a significant danger that any “negative” evaluation of higher education interventions could, similarly, lead to renewed pressure to move away from funding higher education in the future. As a result, the most productive use of the methodology would be to highlight areas for improvement in the field, rather than to attempt to demonstrate the donor “value for money” of supporting higher education in the developing world.

The end of the timeline for attaining the Millennium Development Goals (2015) offers an opportunity to re-establish higher education as a critical driver of development in many national contexts. The diversity of participants at the LIDC conference – and the clear lack of communication between initiatives, even within the same national contexts – suggests that the first priority of any such agenda must be to establish better communication between practitioners in the field. Increased coordination between programs could eliminate overlap, increase the possibility of mutually reinforcing initiatives and assist future evaluation efforts by coordinating data collection.

If implemented thoughtfully and carefully, impact evaluation methodology could be a useful tool for assessing what’s working – and what isn’t – in higher education, offering an opportunity for recipient nations to determine the most effective interventions for increasing the development potential of their higher education institutions.

Contributed by Rebecca Schendel, a PhD candidate at the Institute of Education. Her research focuses on student learning outcomes at Rwanda’s public universities.


Responses

  1. Interesting and well-written piece. However, there are already too many examples in (higher) education of “thoughtful and careful” implementation of “rigorous evaluation methods” producing results opposite to what was promised. Take just the example of the last decade’s fad for “rigorous” university rankings that are not at all that rigorous. It is also ironic to read about a “productive methodology” going beyond the “value for money” obsession and lack of vision. The productive part is what stays behind the “value for money” approach… it should be significant, diversified, comprehensive, meaningful, human – anything but “productive”. The article – using the same neoliberal jargon, with bits like “eliminate potential overlap” and “assist in future evaluation efforts by coordinating data” – leaves the conclusion unclear. I read that this will work if it is implemented carefully and thoughtfully, and it could. But what if it is implemented by the same accountants with no idea about education, care or any thoughts beyond immediate profit? The analysis of these ifs may be very interesting.

