Learning Portal - Theory-based approaches

Theory-based evaluation aims to understand and assess how programme interventions contribute to policy objectives, with a strong focus on their effectiveness. It is known for its systematic use of a causal chain or theory of change, which explains how and why an intervention is expected to succeed.

Basics

In a nutshell

Explaining the core

Theory-based evaluation (TBE) is a systematic and comprehensive approach that can be used to explain and appraise the contribution of programme interventions to policy objectives in terms of effectiveness. The starting point of a TBE is always a causal chain or theory of change, which explains how and why an intervention will work and is expected to lead to the intended outcomes. The approach places a strong emphasis on collecting empirical evidence to test and validate the theories that explain the processes of change and the role of an intervention in achieving policy objectives. TBE is particularly beneficial for policymakers, programme managers and other stakeholders who seek a deeper and more nuanced understanding of programme effectiveness. Its premise is that programmes are based on an explicit or implicit theory about how and why they will work.

The underlying theory of change can refer to a programme theory, or it can encompass broader social, economic or political theories that explain how allocating funds will produce outputs through which the intended results (specific objectives) and impacts (general objectives) are to be achieved (the expected change). In the case of a ‘programme-theory-based evaluation’, a plausible programme theory is established by the intervention logic, which forms an essential cornerstone of the assessment.

Programme-theory at heart

Programme-theory-based evaluation follows each step of the programme's intervention logic, with a focus on identifying causal links and mechanisms of change. Various analytical methods can be applied to scrutinise these links, enabling an assessment of whether the theoretical framework has manifested in practice. Valuable sources of programme theory include programme-related observations, exploratory research designed to test critical assumptions, pre-existing theoretical knowledge and research findings, and the implicit theories of those closely associated with the programme.

The theory of change evaluation approach answers a key question related to effectiveness: how and to what extent have the stated objectives been achieved? With this approach, a coherent and logical set of criteria is developed that formulates, as a logical chain, the preconditions and conditions necessary to ultimately achieve the desired (positive) effects. These criteria are then checked step by step to see whether they are met and, thus, the degree to which the postulated results can be achieved. The more preconditions along the impact chain are fulfilled, the more likely it is that the expected results and impacts will be achieved. The review of the chain of results should indicate whether the funding strategy has been successful or whether it should be adapted. Although the effects are mainly recorded qualitatively, the overall consideration of the building blocks along the impact chain increases the robustness of the evaluation.
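
To make the step-by-step check of the impact chain more concrete, the following minimal sketch shows one possible way of recording a chain of preconditions and walking through it in order; the chain links, evidence flags and the advisory-service example are hypothetical illustrations, not part of any official methodology.

```python
from dataclasses import dataclass

@dataclass
class ChainLink:
    """One precondition in a hypothetical impact chain (inputs -> outputs -> results -> impacts)."""
    name: str
    evidence_found: bool  # did the collected evidence confirm this precondition?
    note: str = ""

# Hypothetical chain for an illustrative advisory-service intervention.
chain = [
    ChainLink("Funds allocated and calls published", True),
    ChainLink("Advisory services delivered to target farms", True),
    ChainLink("Farmers adopt the recommended practices", True, "survey evidence, partial coverage"),
    ChainLink("Practices improve environmental performance", False, "insufficient data so far"),
]

def review_chain(links):
    """Walk the chain in order and report how far the preconditions are fulfilled."""
    for step, link in enumerate(links, start=1):
        status = "confirmed" if link.evidence_found else "not confirmed"
        extra = f" ({link.note})" if link.note else ""
        print(f"Step {step}: {link.name} -> {status}{extra}")
        if not link.evidence_found:
            print("Chain interrupted here: later results and impacts cannot yet be attributed.")
            break
    fulfilled = sum(link.evidence_found for link in links)
    print(f"{fulfilled} of {len(links)} preconditions fulfilled.")

review_chain(chain)
```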

Empirical testing

Empirically, theory-based evaluations seek to test programme theory and to investigate whether, why or how policies or programmes cause intended or observed outcomes. The theories can be tested on the basis of existing or new data, both quantitative (experimental and non-experimental) and qualitative. Testing requires gathering evidence to strengthen the contribution story, using appropriate data gathering techniques such as surveys and the review and analysis of administrative data. From a theory-based perspective, several frequently used data gathering techniques can be applied (e.g. key informant interviews, focus groups and workshops, or case studies). The newly collected empirical evidence is then used to build a more credible contribution story, with strengthened conclusions on the causal links in the theory of change.

The programme theory approach should be applied as a first step in estimating a programme's indirect effects, inter alia by providing answers to questions such as: who might be affected by the programme, and how is this linked to the programme's intervention logic?

Pros and cons

Advantages

  • Explaining causality: TBE excels in explaining the causal processes underlying programme effects. Following the sequence of stages provides a comprehensive understanding of the micro-steps that link programme inputs to desired outcomes. This level of detail allows evaluators to precisely explain how and why certain effects occurred, offering valuable insights for programme improvement.
  • Utilising diverse information: Beyond quantifiable causal effects, TBE recognises the significance of diverse information. It acknowledges that policymakers require more than just numbers to make informed decisions about policy implementation and design. TBE generates qualitative insights and contextual understanding, offering a comprehensive perspective that supports policy formulation and improvement.
  • Stakeholder engagement: TBE fosters active engagement with a wide range of stakeholders. Involving stakeholders in the development, review and refinement of the programme theory ensures that an evaluation process captures a comprehensive spectrum of effects that are valued by different stakeholders. This inclusivity enhances the relevance and credibility of evaluation findings.
  • Systematic identification of evidence: TBE facilitates the systematic identification of evidence required for evaluation. It directs attention to the specific data and information needed to assess a programme's impacts, thereby streamlining the data collection process and ensuring that an evaluation is focused on the most relevant aspects of an intervention.
  • Promoting causal analytical methods: The approach encourages the systematic use of causal analytical methods to develop and test hypotheses regarding programme effects. This enhances the validity and reliability of evaluation findings, as it employs established analytical techniques to assess causal contributions accurately.
  • Iterative data collection: Analysis conducted within a theory-based evaluation often leads to the identification of emerging patterns and hypotheses. TBE recognises the dynamic nature of programme impacts and allows for iterative data collection to test these evolving hypotheses. This adaptability ensures that an evaluation remains responsive to changing circumstances and evolving programme dynamics.

Disadvantages

  • Focus on stated intended effects: TBE often places a strong emphasis on evaluating the stated intended effects of a programme or policy. This focus can be a limitation, as it may overlook unintended consequences or effects that were not explicitly outlined in a programme's initial theory of change.
  • Implicit causal contribution: While TBE assumes a causal contribution if there is evidence of the expected causal chain, it does not always explicitly address the degree of causal contribution an intervention made. This can make it challenging to determine the extent to which a programme was responsible for observed outcomes.
  • Invisible or unexpressed theories: The underlying theories that inform a policy or programme are not always readily visible or expressed in official documents. Evaluators may need to invest significant effort in searching for and articulating these theories in a testable manner, which can be time-consuming and complex.
  • General and loosely constructed theory statements: Empirical testing of underlying theories in TBE can be cumbersome, especially when theory statements are overly general or loosely constructed. This vagueness can hinder clear-cut testing, identification of assumptions and the establishment of precise causal relationships.
  • Measurement challenges: Evaluators may encounter difficulties in measuring each step of a theory-based evaluation, particularly if appropriate instruments and data are not readily available. Inadequate measurement tools can compromise the accuracy and reliability of the evaluation results.
  • Interpretation and generalisation issues: Problems of interpretation can arise in TBE, making it challenging to generalise findings. The complex and multifaceted nature of programme theories, especially those linked to various economic, social, and political concepts, can lead to ambiguity and hinder the ability to draw broader conclusions or policy implications.

When to use?

TBE is appropriate when there is a need to assess the validity of the assumptions embedded in a CAP Strategic Plan's intervention logic. The intervention logic, accompanied by well-defined evaluation questions and relevant indicators, serves as the foundational framework for conducting a TBE. This approach ensures that the evaluation is firmly rooted in a plausible programme theory. TBE allows for a systematic examination of the various links within the intervention logic. It employs diverse methods to analyse and build a compelling argument about whether the theory proposed in an intervention logic has been effectively realised in practice. The method is able to explain why and how results have been achieved and to appraise the contribution of a CAP Strategic Plan to the CAP’s objectives.

In general, this approach does not produce a quantified estimate of impact; it produces narrative and non-parametric data, such as qualitative classifications (e.g. low, medium or high contribution of a measure to achieving its objectives). The most effective way to develop a programme theory is an interactive process combining literature review with engagement with programme stakeholders.

According to the European Commission’s 2012 impact evaluation guidelines, if an experimental design is not possible, and if different non-experimental designs are also not feasible, then one can move to more qualitative approaches, including a theory of change, to establish the counterfactual: “A theory of change sets out why it is believed that the intervention’s activities will lead to a contribution to the intended outcomes; that is, why and how the observed outcomes can be attributed to the intervention. The analysis tests this theory against available logic and evidence on the outcomes observed and the various assumptions behind the theory of change, and examines other influencing factors. It either confirms the postulated theory of change or suggests revisions in the theory where the reality appears otherwise”.

The theory of change is also suitable for evaluating complex contexts, such as the involvement of various actors in processes of change, as observed in initiatives like LEADER or AKIS. The AKIS strategic approach is closely tied to the context of each Member State. For this reason, one of the key principles for designing and conducting an evaluation is to provide for the systematic production of specific evidence-based knowledge using the theory of change approach. This allows reflection on the different features of the AKIS in each Member State.

Moreover, TBE can contribute to the ex post assessment of programme effects in situations where an impact evaluation, including a counterfactual established through quasi-experimental or non-experimental approaches, has already been carried out but a deeper understanding of the results is needed. In such cases, TBE can help clarify the causal pathways and contributing factors that led to the observed outcomes.

Preconditions

Clearly defined programme and availability of programme theory: TBE requires a well-defined programme, policy or intervention with clear objectives and activities to be evaluated. An existing or developed programme theory or theory of change should outline how the intervention is expected to work, the causal pathways and underlying assumptions.

Knowledge of alternative theories: Evaluators should possess knowledge of alternative theories explaining why and how specific outcomes can be attributed to a given intervention. This understanding enables a robust comparison of alternative theories during the evaluation process.

High analytical skills and knowledge of testing principles: Evaluators should have high analytical skills and a strong grasp of basic testing principles. This includes the ability to design and conduct evaluations effectively, analyse data rigorously and apply appropriate testing methods.

Step-by-step

  • Step 1 – Map out (reconstruct) the conceptual model of the interventions to capture the goals at different levels and the planned activities and target groups through which the desired change is to be achieved. The explicit statement of the ‘programme theory’ is important, as it provides the underlying logic for the evaluation.
  • Step 2 – Verify the implementation of the different building blocks of the impact model using mixed information sources, and tell the ‘performance story’ at a detailed activity level through empirical research that explores how the conceptual model has worked in practice.
  • Step 3 – Draw evidence-based conclusions on whether implementation and practice actually fit with expected goals and theory of change. Based on the collected evidence, a judgement is made on the effectiveness of achieving the strategic and operational goals mapped out at the beginning, i.e. in the CAP Strategic Plan.

In its simple form, the theory of change approach to evaluation is based on less rigorous methods, such as the analysis of hard monitoring data (quantitative) and interviews, focus groups and case studies (qualitative), which deliver the information necessary to verify (or not) that planned activities have been implemented in line with the intended change.

Hence, it relies on quantitative information on financial inputs and outputs, and on qualitative estimates of results and impacts. The exercise ends with a judgement on the contribution of the main outputs and identified results of a given intervention (or interventions) to the intended change. It produces narrative and non-parametric data, such as qualitative classifications, e.g. a low, medium or high contribution of an intervention to achieving the defined objectives.
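
Purely as an illustration of how such a qualitative classification might be reached, the short sketch below aggregates evidence ratings for the building blocks of an impact chain into a low/medium/high label; the rating scale, thresholds and building-block names are assumptions made for this example, not a prescribed scoring rule.

```python
# Hypothetical evidence ratings for the building blocks of an impact chain:
# 0 = no supporting evidence, 1 = partial evidence, 2 = strong evidence.
evidence = {
    "inputs spent as planned": 2,  # financial monitoring data (quantitative)
    "outputs delivered": 2,        # output indicators (quantitative)
    "results observed": 1,         # surveys, interviews (qualitative)
    "impacts plausible": 0,        # case studies inconclusive (qualitative)
}

def contribution_class(ratings, max_per_block=2):
    """Map the share of the maximum possible evidence score to a narrative class.
    The thresholds below are illustrative choices, not an official rule."""
    share = sum(ratings.values()) / (max_per_block * len(ratings))
    if share >= 0.75:
        return "high contribution"
    if share >= 0.5:
        return "medium contribution"
    return "low contribution"

print(contribution_class(evidence))  # -> "medium contribution" for the ratings above
```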

See Section 4.3, ‘The theory of change: theory and practice,’ and Section 7, ‘Practical example of the suggested approach’ for a more detailed description of how the theory of change is used in evaluating the AKIS strategic approach.

Main takeaway points

  • TBE identifies and analyses the underlying mechanisms of a programme, providing a comprehensive understanding of how interventions lead to desired outcomes.
  • It combines qualitative and quantitative data to provide a rich understanding of programme impacts.
  • Stakeholder involvement is crucial in TBE, ensuring diverse and relevant perspectives in the evaluation process.
  • While TBE excels in providing deep insights, it can struggle with unintended effects and complex, unexpressed theories.
  • TBE is most effective in complex, multidimensional settings, such as evaluating CAP Strategic Plans, offering detailed qualitative assessments, and fostering deep understanding of intervention impacts.

Learning from practice

  • Donaldson, S. I. (2001)
    Overcoming our negative reputation: Evaluation becomes known as a helping profession. American Journal of Evaluation, 22(3), 355–361
  • Donaldson, S. I. (2003)
    Theory-driven program evaluation in the new millennium. In S. I. Donaldson & M. Scriven (Eds.), Evaluating social programs and problems: Visions for the new millennium (pp. 109–141)
  • Donaldson, S. I. (2007)
    Program Theory-Driven Evaluation Science: Strategies and Applications (1st ed.). Routledge
  • Michalek, J. (2012)
    Counterfactual impact evaluation of EU rural development programmes - Propensity Score Matching methodology applied to selected EU Member States. Volume 2: A regional approach. EUR 25419 EN. Luxembourg: Publications Office of the European Union. JRC72060

Further reading