A Brief Account of Four Evaluation Paradigms (Blog 1 of 3)

The evaluation discipline and practice are generally guided by four main schools of thought, or paradigms: (1) the post-positivist (known as the method-driven), (2) the pragmatist (mostly referred to as the use- or utilization-based), (3) the constructivist (best captured as the value-driven) and (4) the transformative, which is deeply entrenched in the values and principles of equity and justice. Over this series of three short blogs, I will provide a snapshot of these four main evaluation paradigms, describing their key foundations and taking stock of the similarities and differences among them.

 

To do so, however, it is useful to shed light on Thomas Kuhn’s conception of the notion of “paradigm”. Conceived in his masterpiece “The Structure of Scientific Revolutions”, a paradigm is “the entire constellation of beliefs, values, techniques, and so on shared by the members of a given community” (Kuhn, 1970). In other words, within a paradigm, the community of researchers shares a common construct of reality, with all its axiological (values and ethics), ontological (assumptions about the nature of reality), epistemological and methodological assumptions.

 

Replacing a paradigm necessitates the presence of anomalies that culminate in a crisis, one that disturbs the existing theories, concepts and other means of understanding. To survive the crisis, scholars and researchers conceive new beliefs, values and methods, and build a faith that the newly conceived “paradigm” explains the anomalies and that it “will succeed with the many large problems that confront it” (Kuhn, 1970, p.158). Despite the resistance of the scientific and research community, the existing paradigm collapses when the new paradigm proves its ability to yield new axiological, ontological and epistemological assumptions, as well as methodologies capable of explaining the observed anomalies. For Kuhn, two paradigms never coexist. The vanishing paradigm exhibits an “incommensurability” (incompatibility) with the emerging one, which takes over to the extent that “the whole profession will again be practicing under a single, but now a different, paradigm” (Kuhn, 1970, p.152).

While in scientific revolutions paradigms do not coexist, given the existing paradigm’s inability to answer emerging anomalies, it is noticeable that in most social science disciplines (evaluation included) paradigms cohabitate. In evaluation research particularly, Daniels and Wirth noticed that “experimentation did not vanish from paradigm to paradigm, but changed its emphasis and application with each paradigm” (Daniels & Wirth, 1983).

Studying the different paradigms is better informed by exploring their underpinning philosophical assumptions about (1) axiology (the ethics and value systems on which a paradigm is founded), (2) ontology (concerned with the nature of social reality: what do we believe about the nature of reality?), (3) epistemology (which inquires about the nature of knowledge and the ways of knowing: what are the sources of knowledge, and how do we know what we know?), and (4) methodology (the approaches and means of inquiry and understanding: how should we study the world?).

Next, we will explore the post-positivist and pragmatist paradigms.

 

 

How to explain OM in 2 minutes…

I was intrigued by the question of “how to explain Outcome Mapping (OM) in 2 minutes?”, as it takes me – like many other program evaluators – hours to address it! Nevertheless, when challenged to describe the approach to a non-OM-conversant audience, I tend to highlight the key features that distinguish OM from other tools. Ideally, I do not bombard the client/discussant with OM terminology. I prefer to speak the conventional evaluation language, and often match and supplement it with OM-specific terms and notions (such as boundary partners ~ stakeholders; progress markers ~ indicators, etc.). Here are my two cents:

OM’s power is that it uncovers the unintended outcomes that the project/program contributes to in a complex dynamic system (typical of any development project/program) – something that other tools overlook!

  1. Conceptually:
    • Unintended outcomes relate to boundary partners’ (BP) behaviors (I usually use the terms stakeholders or partners/actors to introduce the boundary partner notion). These outcomes are a key guarantee for sustaining the intervention’s effects in a community: they are the gains that communities will accumulate and use.
    • Contribution rather than attribution: (1) in a dynamic system, actors are exposed to various factors that shape, influence and affect their behaviors; (2) we can’t isolate one intervention and trace it to evaluate its effects (we are not experimenting in vitro!)
  2. Methodologically- as a tool:
    • It provides another check on the intervention outcomes by revisiting and redefining them as they emerge in real-time interventions, through the theory of change framework
    • It provides a realistic narrative of the intervention, linking the project’s intended outputs and outcomes with unintended ones
    • It provides an appropriate mapping of the partners/actors involved in the intervention
    • The OM map is an essential tool to trace changes through the progress markers, and is fundamental in the next planning cycle
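For readers who think in data structures, the “progress markers ~ indicators” analogy above can be sketched in a few lines of code. This is purely a hypothetical illustration, not part of the OM toolkit: the class, field names and the sample boundary partner are all invented for the example. It shows the core idea of graduated progress markers (“expect / like / love to see”) recorded per boundary partner, rather than project-level indicators.

```python
# Toy sketch (illustrative only, not an official OM tool): one boundary
# partner with graduated progress markers, and a journal of observed changes.
from dataclasses import dataclass, field

@dataclass
class BoundaryPartner:                # OM term, roughly "stakeholder/actor"
    name: str
    # Graduated markers, roughly playing the role of "indicators"
    progress_markers: dict = field(default_factory=dict)
    observed: set = field(default_factory=set)

    def record(self, marker: str) -> None:
        """Log an observed behavior change (intended or unintended)."""
        self.observed.add(marker)

    def progress(self) -> float:
        """Share of the defined markers observed so far."""
        total = sum(len(v) for v in self.progress_markers.values())
        return len(self.observed) / total if total else 0.0

# Hypothetical example partner and markers
bp = BoundaryPartner(
    name="local health committee",
    progress_markers={
        "expect to see": ["attends planning meetings"],
        "like to see":   ["co-designs outreach activities"],
        "love to see":   ["secures its own funding"],
    },
)
bp.record("attends planning meetings")
print(round(bp.progress(), 2))  # one of three markers observed -> 0.33
```

The point of the sketch is the unit of analysis: change is tracked as observed behaviors of each partner, which is also where unintended changes get journaled as they appear.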

I conclude, in a nutshell, that OM provides the opportunity to look at the intervention inductively. I believe this is a great learning opportunity for clients to match their deductive approach in planning with the OM inductive lens! It is definitely an eye-opener for program evaluators and actors too.

This is my quick take on OM. Am I done in 2 minutes?

As a member of the Outcome Mapping Learning Community (http://www.outcomemapping.ca/members/4494), this blog reacts to a question raised on the OM community forum (http://www.outcomemapping.ca).

The missing partner in the Arab post-2015 development (SDGs) agenda

In the context of the post-2015 development agenda, and in the lead-up to adopting the Sustainable Development Goals (SDGs) in September 2015, multi-stakeholder debates have lately focused on two critical enablers for sustainable development: “Financing for Development” (FfD) and the “Data Revolution”.

The post-2015 process has been characterized as consultative, participatory and bottom-up, making serious efforts to learn from the pitfalls of the MDGs. From a monitoring and evaluation perspective, MDG reporting was constrained by data coverage and representation, lags in reporting and weak national statistical capacity, besides being donor-driven. To that end, the High-Level Panel of Eminent Persons (HLP), in its 2013 report on the post-2015 agenda, concluded that lack of data has hampered development efforts, and that monitoring and evaluation “[at all levels and in all processes of development] will help guide decision making, update priorities and ensure accountability”. The report then called for a “data revolution” that would enable overseeing the implementation of the 17 SDGs (through more than 100 indicators[1] identified so far).

The data revolution, as promoted by the HLP and the SDSN[2], is happening and is shaped by technological innovation. The challenge is to leverage it to ensure (1) high-quality data, (2) unified definitions and conceptualization, and (3) timely reporting, in order to serve and improve real-time decision-making and implementation. The key challenges in this regard revolve around developing the capabilities, resources and principles to blend and integrate traditional data and data sources with emerging ones, while devising means to ensure their reliability.

In the Arab region (Middle East and North Africa), efforts to monitor and evaluate the MDGs have been only marginally successful. A recent study[3] commissioned by UNESCWA revealed little news by concluding that, on average, Arab countries produced official statistics for almost 50% of the 45 MDG indicators they reported on; the remainder was reported on an ad-hoc basis through the UN or other funding agencies. The report questioned the region’s readiness for monitoring, reporting and ultimately evaluating the post-2015 era. It highlighted key challenges related to the following M&E foundations:

  • Institutional – mostly related to the mandate for compiling and reporting sustainable development data. Nationally, instead of being directly linked to the center of government, reporting on sustainable development is often mandated to either the ministry of social affairs or the ministry of environment (mostly the latter). The same applies regionally, with the League of Arab States’ council of the ministers of the environment. This institutional challenge also affects processes, hence the lack of integration nationally with the national statistics offices and weak coordination regionally.
  • Capacity – directly influenced by the institutional challenge. It is mostly related to the availability of human and financial resources, combined with lack of interest (compared to governments’ focus on, and fascination with, macroeconomic indicators), lack of knowledge, and weak coordination.
  • Quality of the collected data – mostly related to (a) its national and sub-national representation; (b) its comparability, given variations from the agreed-upon definitions and concepts, which limits benchmarking against others; (c) its timeliness; and (d) its accessibility.
  • Measurement approaches and methodologies – mostly too conventional, with heavy emphasis on quantitative/numeric indicators and little interest in the qualitative aspects of capturing lessons, institutionalizing knowledge and exploring unintended outcomes.

It is worth noting that these attributes of Arab M&E and statistical capabilities have long been overlooked by the various partners when reporting on the MDGs. MDG-related reports were mostly objective- and target-driven; yet, when addressing development enablers, their accounts of weak sustainable development governance and processes failed to highlight the monitoring and evaluation elements.

In parallel to the global and regional consultations in preparation for the post-2015 development agenda, leading evaluation networks have promoted and facilitated the establishment of VOPEs[4] worldwide, and advocated for declaring 2015 the International Year of Evaluation. These efforts were timely. VOPEs have emerged as primary stakeholders whose expertise, capability and mandate (mostly advocating for high-quality, reliable, relevant and timely monitoring and evaluation processes) are cornerstones for fostering informed decision-making.

In MENA, the Evaluation Network and its associated national VOPEs (in Morocco, Egypt, Jordan, Lebanon, Sudan and Tunisia, among others) have been established and officially registered to support good governance and influence evidence-based decision-making by advocating and mainstreaming monitoring and evaluation. The boom of these organizations in the region seized the momentum of the “Arab Uprising”, responded to necessities, and set out to fill the gaps identified above, which had long been neglected. Over the last four years, evaluation has been widely recognized in the MENA region and has earned broader visibility and greater emphasis among various development stakeholders – primarily governments, parliamentarians and international organizations.

Fundamentally, the evaluation societies in the region share a common mission: addressing the institutional, data-quality and methodological challenges above. They are mandated to mainstream M&E, promote non-conventional M&E theories and approaches, and advance M&E standards and practices. They are well equipped to build capacity and provide the quality resources needed to keep a close eye on the SDGs. They are well positioned to contextualize and provide use-based analysis drawn from any M&E system – even highly ICT-driven ones. They are the actors without whom the “data revolution” will prove inefficient. Yet, to date, they have not been harnessed! In fact, the various regional consultations on the SDGs and the post-2015 agenda (both government-led and UN-led over the last three years) have fallen short of engaging the emerging evaluation community in MENA. It is time for the post-2015 agenda’s custodians to tap into such a resource…

And it is always time for the MENA VOPEs to strive to push the M&E component and devise innovative means to lead the post-2015 M&E agenda forward…

[1] Open Working Group proposal for Sustainable Development Goals (https://sustainabledevelopment.un.org/focussdgs.html) (Accessed June 30, 2015)

[2] Sustainable Development Solutions Network

[3] Measuring Sustainable Development in the Arab Region, UNESCWA, 2015

[4] Voluntary Organizations for Professional Evaluation

Evaluating Evalpartners: some reflections

I participated in a webinar run last week on the findings of the first external evaluation of EvalPartners. The evaluation, conducted by Nancy and Sarah, provides a broad snapshot of what EvalPartners is and does, with the intention of shaping decisions about what EvalPartners could be and achieve beyond 2015 (the International Year of Evaluation). The presentation highlighted the key findings around EvalPartners’ functions (the initiatives it has been managing) and its institutional setup, yet most of the recommendations addressed the latter.

I see the report as an eye-opener for all of us within the EvalPartners network; I learned a lot and am still reflecting! But it is definitely more than that for EvalPartners’ management. By addressing the gaps, the report has highlighted the key programs EvalPartners is managing, and indirectly provoked many questions that help shape the future agenda. These seem most relevant within the context of EvalPartners’ strategic discussions. I am happy it already caught the immediate attention of EvalPartners’ management and triggered a management response. I trust the reflections below will be insightful in such “big picture” discussions too.

There is a clear acknowledgment that EvalPartners is a young global movement with loose boundaries within the evaluation landscape. Driven by champions, it emerged from a strong partnership between UNICEF and IOCE. It has grown into a global network and has succeeded in reaching out to a wide set of partners across government, civil society and international organizations. I think its branding and its affinity for attracting such a wide array of partners stem from its intrinsic structural format. This is not a call to undermine the well-crafted governance recommendations; rather, it is a flag to raise when engineering its governance. An over-structured network merely adds to existing ones, and rigidity might create a barrier to outreach with other networks, triggering a competing rather than a collective spirit. Of course, more transparency in decision-making and implementation processes, as elegantly suggested by the report, is fundamental to promoting EvalPartners as the “network of networks”.

On another note, the key issues identified in the report unveil critical aspects that would ensure the network’s sustainability and added value in the international evaluation landscape. Yet from a program perspective, there was an opportunity to highlight strategic thematic directions that promote its mandate and relevance. I am happy Ziad (IOCE president) addressed this dimension in his intervention in the webinar. Still, I believe there is a further prospect for EvalPartners to build on the momentum created by the coincidence of the International Year of Evaluation and the launch of the Sustainable Development Goals (SDGs). Through its web of regional and national VOPEs, the network is capable of driving the post-2015 development agenda’s enablers and implementation mechanisms. It would be of great value to further explore the network’s future directions in the context of the post-2015 agenda; indeed, the latter hugely depends on innovative evaluation capabilities, transformative policies and implementation mechanisms. Though the clock is ticking, there are a couple of months left to pitch in with contributions on data and evaluation capability, among others, to inform global decisions (scheduled for September 2015). Platforms will then emerge, and huge efforts will be needed to gear the SDGs’ implementation globally, regionally and nationally in the next decade. I am confident EvalPartners is up to it. EvalPartners will smartly align its programs within the context of the post-2015 development agenda… VOPEs will follow… Everyone will follow…