
A Model Evaluation Framework for Industrial Policy in Canada

by Douglas Nevison, November 25, 2025

Governments have shown a renewed interest in industrial policy in recent years as Canada navigates geopolitical tensions and trade uncertainty, along with long-term challenges such as the global energy transition. While Canada has a long history with industrial policy, robust and consistent evaluation practices are not always followed. This can perpetuate suboptimal policies and inhibit learning from successes and failures.

Canadian governments need a new and consistently applied evaluation framework for industrial policy. Too many evaluations focus on process and operational issues without considering the information needed for funding decisions. While there are methodological challenges in evaluating policy outcomes, modern approaches and tools offer promising avenues for progress.

The author recommends that governments:

  1. Show leadership from senior decision-makers, with an explicit commitment at the highest possible level to evaluate industrial policy, backed by sufficient resources.
  2. Incentivize industrial policy evaluation by elevating its status in budget and funding decisions.
  3. Create a dedicated, centralized industrial policy evaluation unit, resourced over and above the budgets of existing departmental/ministerial evaluation units.
  4. Spell out clear guidelines for evaluation of industrial policy through mechanisms such as a Cabinet Directive, and mandate the publication of all evaluations.
  5. Develop an industrial policy evaluation schedule that is synchronized with key funding decisions, such as budget cycles.
  6. Devise a system of fast-track evaluations for industrial policy decisions to provide sufficient insights to inform evidence-based decision-making.
  7. Develop a general logic model template to help frame industrial policy evaluations and translate outcomes and impacts into measurable performance indicators.
  8. Adopt a consistent approach to industrial policy evaluation reporting and dissemination to allow for comparison across policies, including tax expenditures.
  9. Invest in advanced digital technologies, such as big data analytics, artificial intelligence and machine learning, to improve the design of industrial policies and lower the cost of evaluations.

The renewed interest in Canadian industrial policy should be accompanied by a renewed focus on sound evaluation practices. Governments need to break the cycle of disinterest in evaluation, given the scale of industrial policies and the risks involved. Robust evaluation practices are critical to the successful use of industrial policy to address Canada’s most pressing challenges.

Industrial policy is drawing renewed interest in Canada. But to be successful, it should be accompanied by more useful, refined and consistent evaluation. Timely, credible and objective policy evaluation is essential to ensure the effective use of public resources and avoid inefficient or failed initiatives. However, too many evaluations miss the mark on providing the information decision-makers need. This paper proposes a framework to guide industrial policy evaluation — one that aims to provide the best possible information to design impactful policies, enable evaluators to contribute more effectively to the policy-making process and better engage Canadians. The author recommends the creation of a dedicated industrial policy evaluation unit and the development of clear guidelines for evaluation to ensure consistency and usability.

Introduction

Industrial policy has long been a part of Canada’s economic landscape, even if it is often treated as a novel concept. While controversial in some corners, industrial policy has seen renewed interest in recent years.

Numerous thorny public policy challenges have led to this interest, including threats and opportunities stemming from Canada’s low productivity growth, rising geopolitical tensions, competitive pressures, advances in digital technologies and climate change. Indeed, in the past few years, the federal, provincial and territorial governments have increasingly turned to big-ticket industrial policy measures in the hope of attracting and stimulating business investment, creating jobs, and positioning Canada to succeed in a changing global economy. These measures include the Canada Growth Fund, a suite of federal Clean Economy investment tax credits, and significant investments by the federal, Ontario and Quebec governments in an electric vehicle (EV) supply chain. Giswold (2024) estimates the latter to be worth up to $52.5 billion in government support.

Yet one critical aspect has largely been overlooked. To wield the tools of industrial policy effectively and efficiently, Canadian decision-makers must evaluate the success of their efforts in a credible and objective way. Only through robust evaluation can governments demonstrate results to Canadians and avoid wasting scarce public resources on low-value or even failed initiatives. Effective evaluation is particularly important at a time of limited fiscal resources, given the significant opportunity cost of misguided projects.

The potential pitfalls of industrial policy have been well documented, including inefficient allocation of labour and capital, anti-competitive effects, scope creep, exit challenges, government capture by vested interests, rent-seeking by firms and the risk of retaliatory actions by trading partners. Due to the scope for mistakes, corruption and unintended consequences, the Organisation for Economic Co-operation and Development (OECD) stresses that “governments need to put a strong emphasis on evaluation and the regular assessment of targeted industrial policies” (Criscuolo et al., 2022a, p. 5).

What’s more, the OECD notes that the evaluation of industrial policy is “weak relative to other fields,” such as health care, education, labour markets and poverty reduction (Warwick & Nolan, 2014). This is not for lack of trying. Indeed, governments around the world, including those at the federal, provincial and territorial levels in Canada, have undertaken a wide range of impact and evaluation studies in this area. The halting progress toward crafting a robust evaluation model may, however, reflect the fact that industrial policy is a broad concept. No single formal definition encompasses its typical multiple economic, environmental, social and security objectives.

An industrial policy may involve a wide range of supply- and demand-side instruments. Such measures may include grants, public loans, tax expenditures, public credit guarantees, government-backed venture capital funds, trade restrictions and promotion tools, local content and procurement requirements, intellectual property protection and skills provision. These measures can be horizontal and untargeted, such as R&D tax credits available to all firms. Alternatively, they can be vertical and targeted at specific segments, units or activities within an economy defined, for example, by sector, technology, firm size or development stage. Industrial policies can also be oriented toward “place-based” objectives designed to promote economic development in a specific geographical area that is underdeveloped or has specific endowments of natural resources or skills (European Bank for Reconstruction and Development [EBRD], 2024). They may even be trumpeted as transformational “missions” to address societal challenges, such as climate change (Criscuolo et al., 2022a). For example, the Net-Zero Advisory Board recently proposed a net-zero industrial policy strategy to promote clean technologies in Canada (NZAB, 2025). Finally, such policies can be implemented as individual measures or combined in policy packages or strategies, which may have overlapping goals and approaches. For example, a mission-oriented strategy aimed at promoting a “green transformation” could have a variety of economic, environmental and social goals, and incorporate technology-focused, sectoral and place-based measures to achieve them.

According to the OECD, evaluation of industrial policy has been hindered by “the lack of transparent, publicly available and easily accessible information” (Criscuolo et al., 2022a, p. 35). This concern is based on the complexities inherent in most industrial policies:

  • The difficulty of identifying suitable control groups and counterfactuals.
  • Uncertain time lags in assessing long-term outcomes and impacts.
  • Multiple influences on outcomes, including spillover effects, such as from global linkages associated with Canada’s relatively small, open economy, as well as spill-ins from non-targeted players.
  • The need to account for interactive effects between policies, as well as between instruments within a strategy.

These challenges may limit evaluators’ ability to apply modern methods that have proven successful in other policy areas, such as randomized controlled trials, to the assessment of industrial policy effectiveness.

The OECD notes that, taken together, the challenges in evaluating industrial policy may have contributed to the conclusion that “the evidence on the effectiveness of targeted [industrial policy] interventions is limited and mixed so far” (Criscuolo et al., 2022a, p. 36) and that “little consensus exists on the effectiveness of [industrial policy] interventions” (Criscuolo et al., 2022b, p. 3). This conclusion is corroborated by the International Monetary Fund (IMF), which notes that “the empirical evaluation of net economic benefits of past industrial policies has been challenging” and that “despite recent efforts to overcome methodological challenges, the empirical evidence on the net economic impact of industrial policy is mixed” (IMF, 2024, p. 10). EBRD (2024, p. 6) adds that industrial policy’s “track record is mixed at best.”

Better evaluation means better policies

While the fields of industrial policy and policy evaluation are broad, the scope of this paper is deliberately narrow. It is not a critique of Canada’s current evaluation practices. Nor is it an assessment of the methods used to evaluate industrial policy interventions. Rather, it aims to identify the challenges that arise from using evaluations in the policy-making and approval process, and to recommend practical steps to overcome these hurdles. My ultimate goal is to propose a framework that provides government decision-makers with the best possible information to design industrial policies, enables evaluators to make an effective contribution to the policy-making process, and informs and engages Canadians in this important public policy area.

Despite the methodological challenges and mixed results, the OECD stresses that a “strong push on policy evaluation is urgently required” (Criscuolo & Lalanne, 2023). The IMF takes much the same view, noting that “industrial policy should be supported by an appropriate governance framework … [which] involves … regular monitoring and reviews” (IMF, 2024, p. 12).

To that end, this paper offers recommendations for a practical evaluation model that could help government decision-makers in Canada demonstrate results and design more effective industrial policies. Such a model could apply to either individual measures or policy packages. It would be based on a set of well-defined objectives and measurable performance indicators, and would be relevant to a variety of industrial policy instruments.

The main elements of a framework

As a starting point, I undertook a literature review to determine best practices in policy and program evaluation in general and, where applicable, the specific field of industrial policy. Source materials include: publications and documents prepared by multilateral institutions, such as the OECD and the IMF; domestic jurisdictions, including the government of Canada and provincial/territorial governments; and foreign governments, such as Australia, Ireland, New Zealand, the United Kingdom and the United States.

Based on this review, I recommend a framework for industrial policy evaluation comprising two main elements:

  • A set of principles to guide industrial policy evaluations in Canada, including a description of a policy cycle that explicitly embeds evaluation into the decision-making process across a program’s entire life cycle.
  • A set of recommendations grouped under two pillars: (1) leadership and culture; and (2) policies and practices. The approach would include a logic model template that could be applied flexibly to the wide range of activities, outputs, outcomes and impacts common to industrial policy, covering both specific measures and policy packages.

This framework draws extensively on the IRPP’s multi-year project Canada’s Next Economic Transformation: What Role Should Industrial Policy Play? It owes much to that project in terms of rationale, objectives, activities, inputs and outputs, as well as expected outcomes and impacts.

Building blocks for an evaluation framework

There is a rich literature on the role of evaluation in public policy-making. An array of practical directives, backgrounders, how-to guides, handbooks and toolkits highlights the rationale for evaluation and describes its various purposes, types, timings and methods. These resources also demonstrate the potential uses of evaluation results, such as developing new policies and programs, and assessing the performance of existing ones (Australian Centre for Evaluation, 2023; Department of Children, Disability and Equality, 2021; HM Treasury, 2020; OECD, 2025; Social Policy Evaluation and Research Unit [Superu], 2017).

In Canada, the federal government’s 2016 Policy on Results, together with an accompanying Directive on Results and supporting documents, sets out the general requirements for all program evaluation and provides answers to the frequently asked questions of why, what, who, when and how to evaluate (Treasury Board of Canada Secretariat, 2016a, 2016b, 2020a, 2020b).

Despite the plentiful literature, the OECD has noted a “dearth of good examples of influential industrial policy evaluation” (Warwick & Nolan, 2014, p. 67). To address that gap, it set up an expert group on the evaluation of industrial policy in 2012, comprising government officials and academics with experience in the area. The group drew on evaluation materials and members’ experiences to develop a set of principles for effective industrial policy evaluation.

While the reference may be somewhat dated and “both the economic and policy environment have changed dramatically since that work” (Criscuolo et al., 2022a, p. 8), its recommendations remain relevant today, if only because little progress has been made in developing an up-to-date evaluation framework specific to the field of industrial policy. (IMF [2024] provides a conceptual framework for assessing industrial policy, but only in the specific context of IMF surveillance exercises.)

Taken together, the materials described above provide the building blocks for a robust framework to evaluate industrial policy. Each of these elements is described below.

Governance

To start, evaluation of industrial policy needs to be set within the context of an appropriate governance framework and policy cycle. The IMF recommends “regular monitoring and reviews, supported by credible expertise and fact-based approaches to decision-making, as well as clearly defined sunset clauses to ensure that industrial policy support is phased out gradually” (IMF, 2024, p. 12). Similarly, the OECD identifies the key attributes of effective industrial policy as including explicit policy rationales, clear objectives, measurable performance indicators, an oversight regime and regular assessment to measure results and recommend adjustments over time (Criscuolo et al., 2022a).

Types of evaluation

While they may have different labels across the various sources cited in this paper, evaluations generally fall into three main types, based on their purpose:

  • Formative, delivery or process evaluations relate to the activities involved in implementing and delivering a policy or program.
  • Summative, outcome or impact evaluations focus on the changes caused by or attributed to a policy or program.
  • Value-for-money evaluations assess the costs and benefits of an intervention.

According to Superu (2017, p. 4), “formative evaluation is about steering the ship, while summative evaluation is about making sure it arrived in the right place.”

An evaluation’s approach can be based on any one of these types or a combination of all three. It can assess: relevance (whether a policy or program addresses a demonstrated need and reflects a government’s priorities and responsibilities); efficiency (whether inputs are used in the most efficient way to maximize outputs); and effectiveness (whether outcomes and impacts align with policy-makers’ intentions) (Lester, 2024).

In this way, decision-makers can use an evaluation to assess the functioning, merit, or value of a policy or program, as well as its unintended consequences. The evaluation can also help them determine whether the policy should be sustained, improved, scaled up, replicated or abandoned, with the associated resources flowing to other policy priorities.

The purpose of a policy evaluation

An evaluation has two main purposes:

  • Learning. Decision-makers can use the feedback from evaluations to help manage risk and uncertainty, and to influence the design and implementation of existing or future programs.
  • Accountability. Evaluations help ensure that public funds are used for their intended purpose, thereby encouraging public trust.

Along with capacity and capability issues, the purpose of an evaluation may influence whether it is conducted by an internal or external team. An internal group typically has much to gain from such an experience, but an external one may be best suited for accountability purposes that require a greater degree of independence and objectivity.

Timing

An evaluation can take place at various points across the policy or program development and implementation cycle.

Evaluations are traditionally scheduled at or near the end of a mature policy or program to determine whether the expected outcomes and impacts were achieved. These assessments can be used, for example, to determine policy and funding renewals under budget and cabinet approval processes.

Developmental evaluations, which take place while a program is in operation, encourage continuous improvement and allow for course adjustments. Even in the early design phase, decision-makers can benefit from past evaluations of similar measures by both domestic and international players. They can also take lessons from evaluations of pilot programs that test a policy intervention in a controlled environment and on a smaller scale prior to the program being rolled out for full-scale implementation. In this way, “evaluations feed into each other — the ‘after’ of one phase is the ‘before’ of a new phase” (Superu, 2017, p. 4).

Design

Developing and agreeing on an evaluation plan is an essential starting point. These early steps involve identifying the policy or program to be evaluated, followed by the objectives of the intervention and the focus of the evaluation and its approach, including questions to be addressed. Attention then turns to these items: the existing evidence base and the data to be collected and analyzed; methods to produce the findings and recommendations; and requirements for funding, staffing, time and skills. Early evaluation planning can help maximize learning and reduce the cost of data collection by ensuring that the right information is collected in the most cost-effective and efficient way.

To develop, monitor and, if necessary, adjust an evaluation plan, the U.K. Treasury suggests setting up an evaluation steering group that includes representatives from the policy or program design and delivery teams, the evaluators and, possibly, key stakeholders and independent experts. The latter could also take on a quality control role through an expert peer review process (HM Treasury, 2020). These types of governance safeguards tend to enhance the credibility of an evaluation and help promote ownership or buy-in by users of its results.

A typical evaluation plan has two key elements:

  • A logic model, which represents the relationships or connections between a policy or program’s inputs (e.g., funding), activities, outputs (e.g., number of funded projects), outcomes (i.e., behavioural changes) and impacts (i.e., longer-term consequences).
  • A theory of change, which describes how an intervention is expected to work, as well as the assumptions underlying the rationale for a policy or program.

Taken together, these elements “describe how and why a desired change is expected to happen in a particular context” and help “link what you want to achieve with how you will measure success” (Australian Centre for Evaluation, 2023). Logic models and theories of change can help policy-makers develop policies and programs, consult with stakeholders regarding shared objectives, and produce a roadmap for future evaluation. As a result, they can be used “as part of the policy planning process, or … developed retrospectively to help frame an evaluation of a policy or programme that does not have one” (Department of Children, Disability and Equality, 2021, p. 13).
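
To make these elements concrete, the sketch below shows one way an evaluation team might record a logic model as a simple data structure, pairing each outcome and impact with a measurable indicator from the outset. It is a minimal illustration under assumed field names and sample entries, not a template drawn from the sources cited.

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Minimal logic model: inputs -> activities -> outputs -> outcomes -> impacts."""
    name: str
    inputs: list[str] = field(default_factory=list)         # e.g., funding, staff
    activities: list[str] = field(default_factory=list)     # what the program does
    outputs: list[str] = field(default_factory=list)        # direct, countable products
    outcomes: dict[str, str] = field(default_factory=dict)  # outcome -> indicator
    impacts: dict[str, str] = field(default_factory=dict)   # impact -> indicator

# Hypothetical entries for a generic business-support program.
model = LogicModel(
    name="Illustrative clean-technology support program",
    inputs=["Program funding", "Delivery staff"],
    activities=["Assess applications", "Disburse contributions"],
    outputs=["Number of funded projects", "Value of contributions"],
    outcomes={"Increased firm R&D spending": "R&D intensity of recipients (% of revenue)"},
    impacts={"Higher productivity": "Labour productivity growth relative to comparison firms"},
)

for outcome, indicator in model.outcomes.items():
    print(f"{outcome}: measured by '{indicator}'")
```

Attaching indicators to outcomes and impacts at the design stage reflects the guidance above that measurement should be planned before a program begins.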

Methods and data

An evaluation can draw on a variety of approaches to social science research. Among the options are:

  • Document reviews, interviews, surveys, case studies, focus groups, output or performance monitoring, and structural econometric modelling
  • Theory-based methods, in cases where it is not possible to compare units affected and not affected by an intervention (Treasury Board of Canada Secretariat, 2021)
  • Experimental approaches, such as randomized controlled trials (which the Netherlands, the United Kingdom and the United States have used to evaluate innovation vouchers) and natural experiments (Criscuolo et al., 2022b; Warwick & Nolan, 2014)
  • Quasi-experimental approaches, when randomization is not possible

Value-for-money evaluations typically use a structured cost-benefit analysis to identify the positive and negative impacts of a program. An example is the approach applied to all regulatory proposals in Canada under the Cabinet Directive on Regulation, with the goal of demonstrating that “the benefits to Canadians outweigh the costs” (Treasury Board of Canada Secretariat, 2024). The U.S. Department of Energy provides a helpful summary of the characteristics of various evaluation methods for research and development programs, which are also applicable to broader industrial policy programming (Ruegg & Jordan, 2007). Even so, the U.K. Treasury notes that “no one evaluation approach can appropriately evaluate all types of interventions — each design has its advantages and disadvantages — and often approaches may need to be combined” (HM Treasury, 2020). As a result, many evaluations use “mixed methods,” which combine quantitative and qualitative information (Australian Centre for Evaluation, 2023).
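
To show the arithmetic behind a value-for-money assessment, here is a minimal sketch that discounts hypothetical streams of program costs and benefits to a net present value and a benefit-cost ratio. All figures, including the discount rate, are invented for illustration; an actual cost-benefit analysis would follow the guidance cited above.

```python
def npv(flows: list[float], rate: float) -> float:
    """Net present value of yearly flows; year 0 is undiscounted."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

# Hypothetical yearly figures (millions of dollars) for a five-year program.
costs = [100, 20, 20, 20, 20]   # up-front funding plus ongoing administration
benefits = [0, 15, 40, 60, 80]  # estimated incremental benefits, net of displacement

rate = 0.07  # illustrative discount rate
net = npv(benefits, rate) - npv(costs, rate)
bcr = npv(benefits, rate) / npv(costs, rate)
print(f"Net present value: ${net:.1f}M; benefit-cost ratio: {bcr:.2f}")
```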

Data collection is an “essential component of any evaluation” and should be planned at the same time as the policy or program is developed (HM Treasury, 2020). In particular, baseline data should be gathered before the policy or program begins. A logic model or theory of change can help identify data needs and gaps with respect to inputs, outputs, outcomes and impacts. Sources may include existing administrative and monitoring data, such as from firms participating in a program, as well as new data collected specifically for the evaluation. Data quality is key. Existing data should be assessed at the outset, as poor-quality or partial data will distort the scope and scale of the evaluation. Data also need to be seen as valuable to those using them. As the U.K. Treasury warns, “if [data collection is] seen purely as an administrative burden, the incentive to ensure data quality is typically low” (HM Treasury, 2020).

Scale and ambition

Evaluations can differ in terms of scale and ambition. In terms of calibrating these efforts, the U.K. Treasury (2020) notes that a “good evaluation” should be “fit for purpose” and “reflect the needs of decision-makers and those scrutinizing policy from the outside.” The latter could include public auditors, parliamentary committees and budget officers, civil society groups and non-governmental organizations, as well as the general public.

With that in mind, the Treasury recommends that evaluations be designed to be: (1) useful, by providing accessible and actionable findings at the right time; (2) credible, in terms of objectivity and transparency; (3) robust, in terms of technical design; and (4) proportionate to the strategic importance, scale, cost and risk profile of the intervention (HM Treasury, 2020).

Ultimately, as the Treasury points out, “the value of evaluation comes through its use and influence.” The OECD adds that “ensuring evaluation is influential is of course every bit as important as ensuring it is soundly based” (HM Treasury, 2020; Warwick & Nolan, 2014, p. 54).

The phases of an evaluation

A robust evaluation should go through these distinct phases:

  • Planning, as described above
  • Conducting, which involves data collection and analysis
  • Reporting, or drafting evaluation findings and recommendations
  • Dissemination, in other words, releasing the evaluation report and sharing its findings through various communication products tailored to a specific audience (Treasury Board of Canada Secretariat, 2020b) — which may range from front-line delivery staff and ministers for internal communications to parliamentarians and the wider public

The OECD stresses the importance of the reporting and dissemination phases, noting that “no matter how good the quality of the underlying evaluation, its value will only be realized if there are effective channels for communication and influencing to increase the likelihood that the results are used” (Warwick & Nolan, 2014, p. 54).

In each of these phases, evaluators and decision-makers, as the key users or “consumers” of evaluation, need to maintain strong links and work together (Bourgeois & Whynot, 2018). Only by doing so can they develop and maintain the sound processes and communication channels needed to ensure that feedback from evaluators is useful to policy-makers, and that evaluators understand the context, needs and constraints of the decision-makers. Indeed, the New Zealand government takes the view that “with evaluation, the processes you follow are as important as the content that is produced” (Superu, 2017, p. 3).

Objectives

The OECD notes that the objectives of industrial policy have expanded greatly, now going “beyond productivity growth and innovation to include, inter alia, sustainability, resilience and strategic autonomy” (Criscuolo et al., 2022b, p. 3). The IMF (2024) lists economic, environmental, social and security goals under its taxonomy of industrial policy objectives and tools. And an industrial policy program or strategy may have multiple objectives. For example, in a disadvantaged region undergoing significant economic transition, enlightened industrial policies could help address the distributional impacts and promote inclusiveness, while supporting the economic objectives of greater investment and employment through a mix of instruments such as domestic subsidies and training.

However, EBRD (2024) warns that, without a clear prioritization of objectives, multiple goals often clash. The OECD also cautions that, given the relatively recent emergence of these various new industrial policy objectives, there is an urgent need for more empirical work on the effects of industrial policy on objectives other than innovation, productivity and growth, as well as on the evaluation of industrial strategies (Criscuolo et al., 2022b). Bourgeois and Whynot (2018) take the view that such “portfolio-wide” evaluations would provide senior policy-makers with a more strategic view in their reallocation decisions across programs.

Measuring performance

As part of the evaluation process, performance measures can be applied to inputs and outputs, as well as to the intended outcomes and impacts of a policy or program. This information helps decision-makers and evaluators define what constitutes success and failure. According to the U.K. Treasury, evaluation and monitoring “are closely related, and a typical evaluation will rely heavily on monitoring data” (HM Treasury, 2020). To distinguish between the two, the New Zealand authorities note that monitoring is required to do evaluation, but evaluation applies critical thinking to monitoring and other data (Superu, 2017). In terms of timing, both the British and Irish governments recommend that “monitoring and evaluation occur across the policy life cycle and should be built in from the very beginning” (Department of Children, Disability and Equality, 2021, p. 14; HM Treasury, 2020). Indeed, New Zealand notes that “the most useful evaluations occur when a programme has been designed with evaluation in mind” (Superu, 2017, p. 37).

In the case of industrial policy, the OECD notes that outcome metrics are increasingly likely to be “multidimensional” and not just based on aggregate productivity (Criscuolo et al., 2022b). It adds that, while “many … evaluations focus on the impact of policies on treated firms … a complete cost-benefit analysis would also require identifying the effect on non-treated firms” to assess spillover effects through labour and product markets (Criscuolo et al., 2022b, p. 19). The New Zealand government stresses that it usually makes more sense to monitor a small number of good indicators rather than a large number of “only vaguely-helpful-at-best indicators” (Superu, 2017, p. 34). Similarly, the OECD recommends “a pragmatic approach to indicators that are likely to be available at an early stage” of the evaluation planning exercise (Warwick & Nolan, 2014, p. 54).

Evaluations can be conducted at either a micro level, centred on a specific firm or program recipient, or a macro level, involving a broader industry, sector or the entire economy. The OECD cautions that firm-specific studies “cannot easily take into account policy packages or strategies,” but also that “analyses at the industry or country level are poorly designed to identify a causal relationship between the policy instrument and the outcome, such as productivity, employment or growth” (Criscuolo et al., 2022b, p. 16).

In Canada, federal industrial policy programs that have undergone recent evaluations, such as the Strategic Innovation Fund and the Innovation Superclusters Initiative, have focused on the firm-level results of program recipients to “assess the relevance, performance and efficiency” of the programs (Innovation, Science and Economic Development Canada [ISED], 2021, p. 8, 2022a, p. 12). In the case of the Strategic Innovation Fund, however, an “impact report” based on aggregated project results over the first five years of the program compares “select aggregate metrics … to the averages found in Canada’s national statistics … to observe the impacts of projects supported through SIF” at a more macro level (ISED, 2024, p. 9).

Principles and best practices

Based on the building blocks described above, I can articulate a set of principles and best practices to guide industrial policy evaluation in Canada:

  • Make an explicit commitment, at the highest possible level, to the evaluation of industrial policy in order to build and embed a culture of evaluation within an organization or a wider industrial policy “community of practice,” which would include program design and delivery teams, as well as relevant evaluators.
  • Establish robust governance arrangements to ensure the evaluation process is relevant, rigorous, transparent, ethical and free from political and other undue influences.
  • Provide the funding, skills and time for evaluation commensurate with a program’s scale, strategic importance and risk profile.
  • Secure the early involvement of key decision-makers, program managers and stakeholders.
  • Develop and agree on an evaluation plan and data collection strategy before an industrial policy program begins. The goal is to ensure the following: the program’s purpose, objectives and audience are clear; it is adequately resourced; and the required evidence base, including baseline data and measurable performance metrics, is collected from the start.
  • Choose credible evaluators — whether internal, external or a combination of both — and rigorous techniques that are appropriate or “fit for purpose” for the industrial policies concerned.
  • Maintain links with key decision-makers, program managers and stakeholders throughout the evaluation process.
  • Establish clear communication plans, including full disclosure of evaluation reports. These should contain comprehensible and meaningful findings and actionable recommendations for decision-makers, and should explicitly require a management response.
  • Develop effective learning mechanisms so that evaluation findings feed back into future industrial policy-making processes.
  • Build a vibrant knowledge management system to disseminate industrial policy evaluation data and findings to key internal and external partners and stakeholders through, for example, workshops, seminars and conferences, as well as accessible data portals.

Most national and sub-national government evaluation frameworks in Canada encompass many, if not all, of the guiding principles mentioned above. However, while these best practices are necessary, they may not be sufficient. As the OECD (2020) notes, “most countries face significant challenges in promoting policy evaluation across government” (p. 3) and “the use of evaluations … often falls below expectations” (p. 4). While inadequate resources — in terms of capacity and capability — are typically identified as a key supply-side obstacle, the root of the problem may be, in the OECD’s words, that “sustaining political interest and demand for policy evaluation remains challenging” (p. 13).

In the Canadian context, Lester (2024, p. 9) corroborates this point by noting that “disinterest in evaluation by policy-makers is a longstanding issue,” while Bourgeois and Whynot (2018, p. 339) add that “evaluation is viewed by program personnel as a cost of doing business … rather than as a management support function.”

There are a number of explanations for why decision-makers do not make more use of evaluations. One is a misalignment between their needs and evaluation results, in terms of timing, timeliness and focus. For example, with fixed, mandated assessment cycles, such as under the federal government’s Policy on Results (i.e., once every five years), Mayne (2018) notes that evaluations may not be ready or may be too old for policy-makers at key decision points. Moreover, evaluation units in ministries tend to focus on process evaluations and operational issues, which are useful for program managers, but may have less relevance for officials making budget and funding decisions (Mayne, 2018). Decision-makers may also be more engaged at the broad portfolio level than in individual programs, given their strategic role in an organization. More generally, an evaluation initiative may find itself competing with other factors that influence policy and programming decisions, such as political imperatives, special interests and anecdotal information (Bourgeois & Whynot, 2018).

This disinterest may also reflect the fact that evaluations are often mandatory due to program or legislative requirements. Under the Canadian government’s Policy on Results, all ongoing programs of grants and contributions with average annual expenditures of $5 million or more over five years must be evaluated once every five years under section 42.1 of the Financial Administration Act. This updated policy replaced the 2009 Policy on Evaluation. At the same time, the Treasury Board of Canada Secretariat’s Centre of Excellence in Evaluation wound down, with evaluation activities being handled by a newly created Results Division.

However, as the New Zealand authorities have cautioned, undertaking an evaluation just because it’s a requirement is “the worst possible reason to do it, and tends to result in something done neither particularly well, nor used particularly effectively, which is a waste of time and resources” (Superu, 2017, p. 6). Similarly, the U.K. Treasury stresses the need for “a shift away from seeing evaluation as a necessary evil or a ritualistic necessity to one in which attributing value to the evaluation of policy is built in as a core dimension of good policy practice” (HM Treasury, 2020). The OECD (2020, p. 23) adds that if “an effective connection between evaluation results and policy-making remains elusive,” it will result in “insufficient use,” which weakens the rationale for conducting an evaluation.

We clearly need to break this vicious cycle of disinterest and neglect, given the significant scale of industrial policy and the risks involved. Instead, Canadians would be well served by a virtuous cycle of ever-growing interest in, and demand for, rigorous evaluation of industrial policies. The OECD, however, notes that “the culture of evaluation is still in its infancy for industrial policies” (Criscuolo et al., 2022b, p. 17). A key finding of this study is the need to overcome this hurdle and build a culture in Canada’s industrial policy community that values evaluation as an “embedded practice” and, as the U.K. Treasury puts it, an “integrated part of an organization’s culture and operational structure” (HM Treasury, 2020). In the words of Polastro (2024), the goal is to move “from ‘having to’ to ‘wanting to’ evaluate” industrial policy.

Recommendations

The OECD (2020, p. 3) notes that “frameworks provide high-level guidance and clarity to institutions by outlining overarching best practices and goals.” To this end, the industrial policy evaluation framework proposed in this paper is built on two pillars: (1) leadership and culture; and (2) policies and practices. These are two of the four “capacity areas” identified by Results for America, a Washington-based consultancy that helps organizations develop evaluation frameworks in a much broader policy context. These pillars support an organization’s evaluation incentives and capabilities, and already exist in many of the general evaluation systems used by Canada’s federal, provincial and territorial governments. Consequently, there is “no reason to start from scratch” (Results for America, 2024, p. 9).

The recommendations below aim to shore up these pillars and apply them consistently in the challenging area of industrial policy.

Leadership and culture

Recommendation 1: Senior decision-makers should show leadership and make an explicit commitment at the highest possible level to the evaluation of industrial policy, backed by sufficient resources.

Leadership is essential to help staff understand why evaluation is important and how it adds value and contributes to the success of different evaluation users. Leadership is also needed to dispel fears that evaluation is about apportioning blame. Rather, it is about learning and growth and enabling informed decision-making.

Cultivating an evaluation culture is a long-term endeavour, requiring sustained and formal commitment across three areas: (1) vision and commitment; (2) structures and resources; and (3) skills and knowledge (Results for America, 2024). This commitment should include these actions:

  • Promote the leadership’s vision for evaluation.
  • Mobilize sufficient resources to support this vision.
  • Provide the staffing, tools (e.g., for data analysis and visualization) and processes required to implement the vision.
  • Apply a range of evaluation methods, analyze data and use the results productively.
  • Build and sustain relationships between evaluators, policy or program teams, and external stakeholders.

Recommendation 2: Incentivize industrial policy evaluation by elevating its status in budget and funding decisions.

As the OECD (2020, p. 3) puts it, “sound institutional setups can provide incentives to ensure that evaluations are effectively conducted.” It sees “a close connection between incentives to enhance the quality of public expenditures and incentives to deliver results” (p. 4). Similarly, Results for America (2024, p. 80) notes that “budget and funding decisions are a great place for an institution to commit to conducting evaluations and using evidence of effectiveness.” Allocating funds to “what works” is especially important at a time of budgetary constraints.

One purpose of the Canadian government’s Policy on Results is to “inform Memoranda to Cabinet, Treasury Board submissions and expenditure management generally” (Treasury Board of Canada Secretariat, 2016b). Similarly, the Treasury Board’s (2023) submission template asks whether an “initiative’s design and delivery model … has been informed by past evaluations” and inquires about “the anticipated timing and scope of the evaluation of the initiative,” as per the Directive on Results. The Department of Finance’s template for budget proposals requests “where applicable, past program evaluations” (Department of Finance Canada, 2024; Treasury Board of Canada Secretariat, 2023).

Mayne (2018) argues for a tailored form of evaluation, which he calls rapid expenditure evaluations, given the risk of information misalignment between typical evaluations and the specific needs of decision-makers during the budget and funding process. In the context of industrial policy, I support this proposal, described below.

Policies and practices

Recommendation 3: Create a dedicated, centralized industrial policy evaluation unit.

To signal the importance of industrial policy evaluation and enhance its usefulness to senior decision-makers, I recommend the creation of a dedicated industrial policy evaluation unit, resourced over and above the budgets of existing departmental evaluation units. The head of the unit should have sufficient seniority to champion industrial policy evaluation across government. They would work closely with other leaders in the industrial policy community, including those with program design and delivery responsibilities, as well as key stakeholders, to schedule, scope, design, conduct and disseminate industrial policy evaluations. For example, select OECD countries have effectively used “Evaluation Champions” (OECD, 2025).

Two-thirds of governments surveyed by the OECD have located the evaluation function at their centre to provide strategic direction and embed a whole-of-government approach. Locating the policy evaluation function close to decision-making power may allow it to effectively commission policy evaluations and follow up on commitments by ministries (OECD, 2020). Given that it is not reasonable to expect departmental evaluators to question the existence of programs that have been endorsed by their minister, Mayne (2018) argues for a centralized evaluation team that can “focus on horizontal programming themes and policies across ministries” (p. 323) and “manage a targeted schedule of evaluations … specifically designed to support budget preparation” (p. 322). Under this proposal, departmental evaluation units would continue to conduct process evaluations to help senior management adjust programs to make them more effective and cost-efficient.

One example of such a model is the Australian Centre for Evaluation, established in May 2023 and situated in the Treasury department “to put evaluation and evidence at the heart of policy design and decision-making.” The centre partners with government departments and agencies to “conduct flagship evaluations on agreed priorities.” It works closely with the evaluation units of other departments and agencies, helps “ensure government programs deliver value for money,” and aims to build a “culture of continuous improvement” within the Australian government (Australian Centre for Evaluation, 2023).

In Canada’s case, I would recommend that the industrial policy evaluation unit be part of the Privy Council Office, in partnership with both the Department of Finance and the Treasury Board Secretariat, given the former’s responsibilities for tax policy and budgeting and the latter’s for funding decisions. Extra support would be provided by key departments, such as ISED.

Recommendation 4: Spell out clear guidelines for evaluation of industrial policy.

One of the first jobs of the proposed industrial policy evaluation unit should be to develop an explicit policy on evaluations. According to Results for America (2024), evaluations are more likely to become “business as usual” if clear guidelines for the process are in place. Such guidelines would articulate the goals and requirements of evaluation. They can also help promote and protect the integrity of these activities, justify the resources and time devoted, and sustain an organization’s commitment to the practice “beyond the tenure of one supportive, influential leader” (Results for America, 2024, p. 40).

In some cases, evaluation policies are enshrined in legislation, sending a clear message of their high priority to a government — for example, the U.S. federal Foundations of Evidence-Based Policymaking Act (2018), known as the “Evidence Act.” Such visibility can help secure buy-in and smooth implementation, and ensure the policy endures over time, through changes in the political environment and turnover of elected officials (Results for America, 2024). However, a legislated policy may take time to adopt, be difficult to amend and attract heightened scrutiny, which may impact the willingness of policy-makers or program managers to participate. As a result, I recommend that an industrial policy evaluation strategy be established through a less onerous and more flexible approach, such as through a cabinet directive. For example, the May 2024 Cabinet Directive on Strategic Environmental and Economic Assessment aims to build climate and environmental considerations into federal policy, budget and funding decision-making processes.

The Canadian government has followed the requirements of its Policy on Results in conducting industrial policy evaluations for the Innovation Superclusters Initiative and the Strategic Innovation Fund, among others. However, Lester (2024) pinpoints a number of shortcomings in this process, including: a lack of focus on value-for-money assessments; exclusion of tax expenditures, such as investment tax credits; and the “considerable flexibility” given to government departments to decide which results they wish to publish. To correct these deficiencies, Lester suggests that the Policy on Results be amended to require value-for-money assessments for all program spending and tax expenditures, and to mandate publication of all evaluations.

These proposals align with the government’s Cabinet Directive on Regulation, which mandates that all regulations pass a benefit-cost analysis prior to implementation. The same requirement should apply to stand-alone evaluation guidelines for industrial policy. Indeed, a specific policy on evaluations could serve as a pilot program for Lester’s proposed amendments to the Policy on Results, which recently underwent its own formal evaluation, the first since the policy was introduced in 2016. That review was completed by the Treasury Board Secretariat’s Results Division in 2024, and the main findings and recommendations have been shared through internal and external channels.

To make an evaluation policy stick, details should be shared widely, often and creatively within the industrial policy community. The message could be pressed home in various ways, including: training programs and development opportunities, such as rotational assignments between program and evaluation teams; performance management exercises; and reward and recognition programs (Results for America, 2024). These efforts should be targeted at all levels of the organization and the wider industrial policy community, and should include “significant investments upstream in the policy-making chain,” such as targeted programs to help decision-makers interpret and use evaluation evidence effectively (OECD, 2020, p. 9).

Recommendation 5: Develop an industrial policy evaluation schedule that is synchronized with key funding decisions.

The industrial policy evaluation unit should develop and maintain a multi-year evaluation schedule, which synchronizes the industrial policy evaluation and budget cycles. In this way, evaluations would align with the government’s strategic priorities and be embedded in the industrial policy decision-making process across a policy or program’s life cycle. The timing and approach of evaluations would thus be linked to key milestones, such as initial budget decisions for new policies or programs, or the assessment of pilot or other programs nearing an end that may merit renewed or additional funding. This approach would also link evaluation more closely to the needs of decision-makers, in terms of both timing and content (Bourgeois & Whynot, 2018; Mayne, 2018).

Recommendation 6: Devise a system of fast-track evaluations for industrial policy funding decisions.

As noted above, Mayne (2018) recommends a specialized approach to evaluations that support budget and funding decisions. These “rapid expenditure evaluations” would be conducted by the centralized industrial policy evaluation unit at the direction of senior budget officials. They would assess alignment with current government priorities and draw on prior evaluations, such as those conducted at the departmental level, to help improve delivery. As Mayne sees it, the evaluations would use a program logic analysis and theory-based approach to assess “the likelihood that a program is contributing in a meaningful way to desired outcomes, rather than trying to confirm the causal links” (p. 323) through experimental approaches. These specialized evaluations could be conducted within four months and would give decision-makers sufficient insights for “evidence-informed decision making” (p. 321).

A final, important point: to succeed, these quick, centralized evaluations would require a close working relationship between the dedicated industrial policy evaluation unit and departmental evaluation teams. Without such collaboration, it may be difficult to secure access to data, information and relevant internal evaluations.

Recommendation 7: Devise a generic logic model template to help frame industrial policy evaluations.

The OECD concludes that “it is too difficult to draw up a single framework for industrial policy evaluation, given the multitude of approaches, and it would be unhelpful to be too prescriptive” (Warwick & Nolan, 2014, p. 63). As a result, a fresh industrial policy evaluation framework needs to be sufficiently flexible to guide Canadian decision-makers and evaluators across the wide range of industrial policy objectives and instruments.

The IRPP has identified examples of industrial policy objectives that could be relevant for Canada at this time (IRPP Steering Group, 2024). These examples fit roughly into the IMF’s (2024, p. 9) illustrative taxonomy of industrial policy objectives and tools shown in Table 1.

In a logic model, these objectives and tools should be translated into specific impacts and activities, respectively. Table 2 is a highly simplified outline of a generic logic model covering a sample of inputs and activities and their wide-ranging outcomes and impacts for industrial policy.

Finally, it is important to translate outcomes and impacts into measurable performance indicators, organized around the IMF’s (2024) four categories of industrial policy objectives: economic, environmental, social and security.

Such performance indicators are by no means exhaustive and can be expressed in various ways, such as levels, growth rates, shares and ratios (e.g., R&D intensity ratios [per cent of revenues] and investment leverage ratios [additional private expenditure generated per unit of public support]). Depending on the scope of the study, they can be collected at the micro (firm) and/or macro (sectoral, regional or national) levels. In any case, as noted above, the key step in designing an industrial policy evaluation system is to determine the data requirements at the start of the process to assess availability and quality. Performance information can be sourced from administrative and monitoring data collected during the program, as well as from targeted surveys of program recipients (e.g., on the number and nature of research collaborations). Evaluations can — and should — also draw on publicly available commercial information and national statistics.
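
As a worked example of two of the ratios mentioned above, the sketch below computes R&D intensity and investment leverage from hypothetical firm-level monitoring data; the figures and field names are invented.

```python
# Hypothetical monitoring data for recipient firms (millions of dollars).
firms = [
    {"name": "Firm A", "revenue": 50.0, "rd_spend": 4.0,
     "public_support": 2.0, "private_investment": 6.0},
    {"name": "Firm B", "revenue": 120.0, "rd_spend": 6.0,
     "public_support": 3.0, "private_investment": 12.0},
]

for f in firms:
    rd_intensity = 100 * f["rd_spend"] / f["revenue"]         # per cent of revenues
    leverage = f["private_investment"] / f["public_support"]  # private $ per public $
    print(f"{f['name']}: R&D intensity {rd_intensity:.1f}%, leverage {leverage:.1f}x")
```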

Where possible, having information on firms that applied for a program but were not selected could help to build a counterfactual. Unfortunately, the OECD notes that “this information is seldom collected” (Criscuolo et al., 2022b, p. 17). However, the federal government’s Strategic Innovation Fund performs what it calls “active monitoring,” comparing the projects it supports (which must each provide an annual performance benefits report) with those that applied for funding but did not receive it. This practice is recommended at the inception of any industrial policy intervention to help facilitate quasi-experimental analysis.
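
Where such applicant data exist, even a simple quasi-experimental comparison becomes possible. Below is a bare-bones difference-in-differences sketch contrasting funded firms with unfunded applicants before and after a program. The numbers are invented, and a real analysis would also control for firm characteristics and selection into the program.

```python
# Hypothetical average outcome (e.g., annual R&D spending, $M) per firm.
funded_before, funded_after = 2.0, 3.5      # firms that received support
unfunded_before, unfunded_after = 2.1, 2.6  # comparable applicants that did not

# Difference-in-differences: change among funded firms minus change among unfunded firms.
effect = (funded_after - funded_before) - (unfunded_after - unfunded_before)
print(f"Estimated program effect: {effect:+.2f}M per firm")  # +1.00M in this example
```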

Recommendation 8: Adopt a consistent approach to industrial policy evaluation reporting and dissemination.

To assess trade-offs across industrial policy programs and strategies, it is helpful to have a consistent approach to evaluation reporting and dissemination. At the federal level, reflecting the reporting flexibility inherent in the Policy on Results, recent evaluations of the Strategic Innovation Fund, the Innovation Superclusters Initiative, the National Intellectual Property Strategy and the not-for-profit national research group Mitacs all used inconsistent terminology in their recommendations. For example, the National Intellectual Property Strategy evaluation referred to “opportunities for improvement” (ISED, 2023), while the Mitacs evaluation provided just one “lesson learned” (ISED, 2022b, p. 39). Only the evaluation of the Innovation Superclusters Initiative contained a management response and action plan.

As noted above, tax expenditures are not subject to the evaluation requirements that apply to spending programs under the Policy on Results. Only a regression-based analysis of the responsiveness of R&D expenditures to tax incentives was published in the 2021 Report on Federal Tax Expenditures (Department of Finance Canada, 2021). Nonetheless, one popular tax incentive, the Scientific Research and Experimental Development (SR&ED) program, publishes annual program statistics (number of claims, value of investment tax credits) and “success stories.”

Best practice suggests the findings and recommendations of all industrial policy evaluations should be published in a consistent format, supported by a management response and action plan, with implementation progress tracked by the centralized industrial policy evaluation unit. These reports should be supported by tailored communications initiatives to reach specific audiences, ranging from ministers to the general public. An annual report on industrial policy evaluation findings and recommendations is also worth considering. In any case, relevant data should be accessible to academics and practitioners outside the government to build a broad community of evaluation.

Recommendation 9: Invest in digital technologies.

The OECD notes that digital technologies, such as big data and machine learning, have the potential to improve the design of targeted industrial policies and to lower the costs of policy impact evaluations, as well as to make them more timely (Criscuolo et al., 2022a, 2022b). Furthermore, improving access to more granular data can facilitate the use of standard policy evaluation methodologies, as big data can provide evaluators with invaluable information to identify relevant control groups. This information can also help policy and program designers understand the characteristics of firms for which a policy or program may be most effective and, therefore, enhance their ability to identify the most suitable candidates.
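
As one illustration, the sketch below applies a simple propensity-score match to synthetic data: a logistic regression estimates each firm’s probability of receiving support from observable characteristics, and each supported firm is paired with the unsupported firm whose estimated probability is closest. This is a hypothetical sketch of a standard technique, not a description of any government’s actual practice.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic firm characteristics: column 0 = log employment, column 1 = R&D intensity.
X = rng.normal(size=(200, 2))
# Selection into the program depends on firm size, plus chance.
treated = (rng.random(200) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)

# Estimate each firm's propensity to receive support.
scores = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Pair each supported firm with the unsupported firm whose score is closest.
treated_idx = np.flatnonzero(treated == 1)
control_idx = np.flatnonzero(treated == 0)
matches = {i: control_idx[np.argmin(np.abs(scores[control_idx] - scores[i]))]
           for i in treated_idx}
print(f"Matched {len(matches)} supported firms to comparison firms")
```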

Next steps

Building an evaluation framework along the lines set out above is by no means the end of the process. The model needs to be carefully examined by decision-makers and evaluators and adjusted accordingly based on their feedback. It can then be used as the basis for numerous other initiatives:

  • A benchmark for the approaches to industrial policy evaluation currently followed by the federal government, the provinces and the territories.
  • A tool to assess significant legacy industrial policy measures, such as the federal government’s Strategic Innovation Fund and tax incentives under the Scientific Research and Experimental Development (SR&ED) program.
  • A guide for evaluations of more recent industrial policy initiatives, such as Canada’s clean economy investment tax credits and the Canada Growth Fund.
  • An input into the design and development of new industrial policies.
  • A source of evidence for budget and funding decisions related to industrial policy in Canada.

With an effective evaluation framework in place, decision-makers can better assess the benefits and costs of industrial policy interventions. They can demonstrate results to Canadians and, perhaps most important, justify the use of scarce public funds to achieve the diverse but critical goals of a well-grounded industrial policy.


References

Australian Centre for Evaluation. (2023). Commonwealth evaluation toolkit. Government of Australia. https://evaluation.treasury.gov.au/toolkit/commonwealth-evaluation-toolkit

Bourgeois, I. & Whynot, J. (2018). Strategic evaluation utilization in the Canadian federal government. Canadian Journal of Program Evaluation, 32(3), 273-289. https://doi.org/10.3138/cjpe.43179

Criscuolo, C., Gonne, N., Kitazawa, K., & Lalanne, G. (2022a). An industrial policy framework for OECD countries. OECD Science, Technology and Industry Policy Papers, No. 127. OECD Publishing. https://doi.org/10.1787/0002217c-en

Criscuolo, C., Gonne, N., Kitazawa, K., & Lalanne, G. (2022b). Are industrial policy instruments effective? A review of evidence in OECD countries. OECD Science, Technology and Industry Policy Papers, No. 128. OECD Publishing. https://doi.org/10.1787/57b3dae2-en

Criscuolo, C., Gonne, N., Kitazawa, K., & Lalanne, G. (2023, January 17). A new framework for better industrial policies. ProMarket. https://www.promarket.org/2023/01/17/a-new-framework-for-better-industrial-policies/

Department of Children, Disability and Equality. (2021, September 29). Frameworks for policy, planning and evaluation. Government of Ireland. https://www.gov.ie/en/department-of-children-disability-and-equality/publications/frameworks-for-policy-planning-and-evaluation-evidence-into-policy-guidance-note-7/

Department of Finance Canada. (2024). Budget/off-cycle proposals. Government of Canada. https://www.canada.ca/en/department-finance/services/publications/federal-budget/proposals.html

European Bank for Reconstruction and Development (EBRD). (2024, November 26). Transition report 2024-25: Navigating industrial policy. https://www.ebrd.com/home/news-and-events/publications/economics/transition-reports/transition-report-2024-25.html

Giswold, J. (2024, June 18). Tallying government support for EV investment in Canada. Office of the Parliamentary Budget Officer. https://www.pbo-dpb.ca/en/additional-analyses--analyses-complementaires/BLOG-2425-004--tallying-government-support-ev-investment-in-canada--bilan-aide-gouvernementale-investissement-dans-ve-canada

HM Treasury. (2020). Magenta book: Central government guidance on evaluation. Government of the United Kingdom. https://www.gov.uk/government/publications/the-magenta-book/magenta-book-central-government-guidance-on-evaluation-html

Innovation, Science and Economic Development Canada (ISED). (2021). Evaluation of the Strategic Innovation Fund (SIF). Government of Canada. https://ised-isde.canada.ca/site/audits-evaluations/sites/default/files/attachments/h_03942_en.pdf

Innovation, Science and Economic Development Canada (ISED). (2022a). Evaluation of ISED’s Innovation Superclusters Initiative (ISI). Government of Canada. https://ised-isde.canada.ca/site/audits-evaluations/sites/default/files/documents/2022-07/ISI_Evaluation_Report-2022-EN.pdf

Innovation, Science and Economic Development Canada (ISED). (2022b). Evaluation of Innovation, Science and Economic Development Canada (ISED) funding to Mitacs. Government of Canada. https://ised-isde.canada.ca/site/audits-evaluations/sites/default/files/attachments/2022/evalmitacs.pdf

Innovation, Science and Economic Development Canada (ISED). (2023). Evaluation of the National Intellectual Property Strategy. Government of Canada. https://ised-isde.canada.ca/site/audits-evaluations/en/evaluation/evaluation-national-intellectual-property-strategy

Innovation, Science and Economic Development Canada (ISED). (2024). Strategic Innovation Fund: Impact report. Government of Canada. https://ised-isde.canada.ca/site/strategic-innovation-fund/en/impact-report

International Monetary Fund (IMF). (2024, March 11). Industrial policy coverage in IMF surveillance – broad considerations (Policy Paper No. 2024/008). https://doi.org/10.5089/9798400266683.007

IRPP Steering Group. (2024). Should governments steer private investment decisions? Framing Canada’s industrial policy choices. Institute for Research on Public Policy. https://irpp.org/wp-content/uploads/2024/10/Should-Governments-Steer-Private-Investment-Decisions-Framing-Canada-Industrial-Policy-Choices.pdf

Lester, J. (2024, September 5). Minding the purse strings: Major reforms needed to the federal government’s expenditure management system. C.D. Howe Institute. https://cdhowe.org/publication/minding-purse-strings-major-reforms-needed-federal-governments-expenditure/

Mayne, J. (2018). Linking evaluation to expenditure reviews: Neither realistic nor a good idea. Canadian Journal of Program Evaluation, 32(3), 316-326. https://doi.org/10.3138/cjpe.43178

Net-Zero Advisory Body (NZAB). (2025). Collaborate to succeed: The Government of Canada’s role in growing domestic clean technology champions. https://www.nzab2050.ca/publications/collaborate-to-succeed-the-government-of-canadas-role-in-growing-domestic-clean-technology-champions

Organisation for Economic Co-operation and Development (OECD). (2020). How can governments leverage policy evaluation to improve evidence informed policy making? Highlights from an OECD comparative study. OECD Publishing. https://legalinstruments.oecd.org/api/download/?uri=/private/temp/7ecfe9af-5b05-4992-9a10-ebdc45a944a8.pdf&name=policy-evaluation-comparative-study-highlights.pdf

Organisation for Economic Co-operation and Development (OECD). (2025). OECD recommendation on public policy evaluation: Implementation toolkit. OECD Publishing.
https://www.observatoriopoliticaspublicas.es/wp-content/uploads/2025/02/OCDE_2025.pdf

Polastro, R. (2024, December 10). Moving organisations from “having to” to “wanting to” evaluate: Five recommendations for practice. BetterEvaluation. https://www.betterevaluation.org/blog/moving-organisations-having-wanting-evaluate-five-recommendations-for-practice

Results for America. (2024). Evaluation policy guide. https://results4america.org/tools/evaluation-policy-guide/

Ruegg, R., & Jordan, G. B. (2007). Overview of evaluation methods for R&D programs: A directory of evaluation methods relevant to technology development programs. U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy.
https://www1.eere.energy.gov/analysis/pdfs/evaluation_methods_r_and_d.pdf

Social Policy Evaluation and Research Unit (Superu). (2017). Making sense of evaluation: A handbook for everyone. Government of New Zealand. https://www.dpmc.govt.nz/publications/making-sense-evaluation-handbook-everyone

Treasury Board of Canada Secretariat. (2016a). Policy on results. Government of Canada.
https://publications.gc.ca/collections//collection_2017/sct-tbs/BT22-172-2016-eng.pdf

Treasury Board of Canada Secretariat. (2016b, July 1). Directive on results. Government of Canada. https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=31306

Treasury Board of Canada Secretariat. (2020a, July 23). Policy on results: What is evaluation? Government of Canada. https://www.canada.ca/en/treasury-board-secretariat/services/audit-evaluation/evaluation-government-canada/policy-results-what-evaluation.html

Treasury Board of Canada Secretariat. (2020b, November 20). Evaluation 101 backgrounder. Government of Canada. https://www.canada.ca/en/treasury-board-secretariat/services/audit-evaluation/evaluation-government-canada/evaluation-101-backgrounder.html

Treasury Board of Canada Secretariat. (2021, March 22). Theory-based approaches to evaluation: Concepts and practices. Government of Canada. https://www.canada.ca/en/treasury-board-secretariat/services/audit-evaluation/evaluation-government-canada/theory-based-approaches-evaluation-concepts-practices.html

Treasury Board of Canada Secretariat. (2023, September 5). Guidance for drafters of Treasury Board submissions. Government of Canada. https://www.canada.ca/en/treasury-board-secretariat/services/treasury-board-submissions/guidance-for-drafters-of-treasury-board-submissions.html

Treasury Board of Canada Secretariat. (2024). Cabinet directive on regulation. Government of Canada. https://www.canada.ca/en/government/system/laws/developing-improving-federal-regulations/requirements-developing-managing-reviewing-regulations/guidelines-tools/cabinet-directive-regulation.html

Warwick, K., & Nolan, A. (2014). Evaluation of industrial policy: Methodological issues and policy lessons. OECD Science, Technology and Industry Policy Papers, No. 16. OECD Publishing. https://doi.org/10.1787/5jz181jh0j5k-en

This Insight was published as part of the Building New Foundations for Economic Growth program, under the direction of research director Steve Lafleur. The paper also supports the IRPP’s initiative, Canada’s Next Economic Transformation: What Role Should Industrial Policy Play? The manuscript was edited by Bernard Simon, copy-editing was by Prasanthi Vasanthakumar, proofreading was by Zofia Laubitz, editorial co-ordination was by Étienne Tremblay, production was by Chantal LĂ©tourneau and art direction was by Anne Tremblay.

Douglas Nevison is an economist and a former senior public servant at Environment and Climate Change Canada, the European Bank for Reconstruction and Development, the Privy Council Office and the Department of Finance Canada. Throughout his career, he has been a strong advocate for policy and program evaluation and evidence-informed decision-making.

To cite this document:

Nevison, D. (2025). A model evaluation framework for industrial policy in Canada. IRPP Insight No. 62. Institute for Research on Public Policy.


The opinions expressed in this paper are those of the author and do not necessarily reflect the views of the IRPP, its Board of Directors or sponsors. Research independence is one of the IRPP’s core values, and the IRPP maintains editorial control over all publications.

IRPP Insight is a refereed series that is published irregularly throughout the year. It provides commentary on timely topics in public policy by experts in the field. Each publication is subject to rigorous internal and external peer review for academic soundness and policy relevance.

If you have questions about our publications, please contact irpp@irpp.org. If you would like to subscribe to our newsletter, IRPP News, please go to our website at irpp.org.

Illustration: iStock
ISSN 2291-7748 (Online)