Program effectiveness

Introduction

Program evaluation has officially been part of the federal government since 1977, when the Treasury Board released its first formal policy suggesting programs should be evaluated every 3-5 years (Segsworth, 2005).

Treasury Board circular 1977-47 established four main evaluation issues:
"1. Program Rationale (does the program make sense)
a) To what extent are the objectives and mandate of the program still relevant?
b) Are the activities and outputs of the program consistent with its mandate and plausibly linked to the attainment of the objectives and the intended impacts and effects?
2. Impacts and Effects (what has happened as a result of the program)
a) What impacts and effects, both intended and unintended, resulted from carrying out the program?
b) In what manner and to what extent does the program complement, duplicate, overlap or work at cross purposes with other programs?
3. Objectives Achievement (has the program achieved what was expected)
a) In what manner and to what extent were appropriate program objectives achieved as a result of the program?
4. Alternatives (are there better ways of achieving the results)
a) Are there more cost-effective alternative programs which might achieve the objectives and intended impacts and effects?
b) Are there more cost-effective ways of delivering the existing program?" (OCG, 1981a, p. 7).


However, in the early 1990s this policy was rescinded and a more general approach was outlined that didn't require all programs to be evaluated and extended the timelines. The core evaluation issues were simplified. This approach lasted only a few years and was replaced in 1994, and again in 2001, with more general approaches that treated evaluation as primarily a management function, once again narrowing the frame. Many of these changes resulted from shifts at the centre of government that produced unit reorganizations and mergers. Resources available to departments to conduct evaluations were also highly variable; however, compared to the 1990s, there were signs of increased evaluative activity, although narrower in scope, in the mid-2000s (Segsworth, 2005).

A significant criticism is that evaluations have tended to focus on management practices and operational concerns rather than outcomes, that is, whether the program actually delivers what is intended (Segsworth, 2005). In most areas discussed on this site, programming is criticized for combinations of: application and accounting complexity, designs that don't properly address the core challenges, insufficient consultation with target groups on program design, poor timing of program delivery relative to the problem cycles being targeted, short funding cycles, lack of targeting, and underfunding. This focus on management and operations can be seen in AAFC's evaluation of its Innovation and Adaptation programming, which concluded that there was a need for:

  • "distinctive guidelines around program objectives and eligibility requirements;
  • streamlining of administrative processes;
  • expansion of communication strategy plans for proponents;
  • improved performance reporting and data management processes; and
  • enhanced coordination and communication between the Science and Technology Branch and the Programs Branch to provide consistent performance reporting and project monitoring systems." (AAFC Evaluation of Agriculture and Agri-Food Canada's Innovation and Adaptation Programs)

These recommendations are primarily related to management and operations. Assessing outcomes, the evaluators concluded, would require further study.

Canadian governments have generally been reluctant to evaluate food-related programming. AAFC was named in a 2013 Auditor General report as out of compliance with federal legislation regarding evaluation of grant and contribution agreement programs (OAG, 2013). Since then, AAFC evaluations have focused on business risk management programs, marketing, and innovation. Earlier, the Commissioner of the Environment and Sustainable Development had criticized the department's evaluation of the environment pillar programs of the Agricultural Policy Framework, saying they were operational assessments that ultimately did not address whether the programs were having any impact on farm environmental performance (CESD, 2008). Since 2015, no further assessments of environmental programs appear to have been conducted.

However, some meta-reviews of programming from different jurisdictions offer guidance for program design.

Health

McGill et al. (2015) conducted a graphical and narrative analysis of articles evaluating programs for healthy eating, with an emphasis on the degree to which they addressed socio-economic equity. They undertook the analysis because there is some evidence in the literature that certain program designs are more likely to reduce nutritional inequalities, while others actually exacerbate them. They found 36 studies that met their criteria (programs designed to reduce intake of salt, sugar, trans fats, saturated fat, total fat, or total calories, or to increase consumption of fruit, vegetables, and whole grains; in other words, to change dietary intake), and sorted them by primary intervention:

  • 18 were “Price” interventions (fiscal measures such as taxes, subsidies, or economic incentives),
  • 6 “Place” (environmental measures in specific settings such as schools or workplaces, e.g. vending machines; planning measures, e.g. the location of supermarkets and fast food outlets; or community-based health education),
  • 1 “Product” (changing foods to make them healthier or less harmful),
  • 0 “Prescriptive” (advertising/marketing restrictions through controls or bans, labelling, recommendations or guidelines),
  • 4 “Promotion” (mass media public information campaigns), and
  • 18 “Person” interventions (individual-based information and education, e.g. cooking lessons, tailored nutritional education/counselling, or nutrition education in the school curriculum).

Key design dimensions that can affect equity include: intervention efficacy, service provision or access, uptake, and compliance. The literature suggests that compliance is higher among better-off groups, and that designs focusing on individual behaviour modification are more likely to have some success with those in higher socio-economic categories. Lower socio-economic groups are typically more difficult to reach, but when reached, impacts can be very positive. In contrast, structural changes that are population-wide reduce the resources required for individual implementation. The McGill et al. study reached similar conclusions: “Price” interventions appeared most likely to decrease health inequalities, while “Person” interventions appeared most likely to increase them. Other types of interventions had mixed results.

Environment


Economy