Governments have shown a renewed interest in industrial policy in recent years as Canada navigates geopolitical tensions and trade uncertainty, along with long-term challenges such as the global energy transition. While Canada has a long history with industrial policy, robust and consistent evaluation practices are not always followed. This can perpetuate suboptimal policies and inhibit learning from successes and failures.
Canadian governments need a new and consistently applied evaluation framework for industrial policy. Too many evaluations focus on process and operational issues without considering the information needed for funding decisions. While there are methodological challenges in evaluating policy outcomes, modern approaches and tools offer promising avenues for progress.
The author recommends that governments:
- establish a dedicated industrial policy evaluation unit with the seniority and resources to champion evaluation across government; and
- develop clear, consistently applied guidelines for industrial policy evaluation to ensure that results are usable by decision-makers.
The renewed interest in Canadian industrial policy should be accompanied by a renewed focus on sound evaluation practices. Governments need to break the cycle of disinterest in evaluation, given the scale of industrial policies and the risks involved. Robust evaluation practices are critical to the successful use of industrial policy to address Canada's most pressing challenges.
Industrial policy is drawing renewed interest in Canada. But to be successful, it should be accompanied by more useful, refined and consistent evaluation. Timely, credible and objective policy evaluation is essential to ensure the effective use of public resources and avoid inefficient or failed initiatives. However, too many evaluations miss the mark on providing the information decision-makers need. This paper proposes a framework to guide industrial policy evaluation – one that aims to provide the best possible information to design impactful policies, enable evaluators to contribute more effectively to the policy-making process and better engage Canadians. The author recommends the creation of a dedicated industrial policy evaluation unit and the development of clear guidelines for evaluation to ensure consistency and usability.
Industrial policy has long been a part of Canada's economic landscape, even if it is often treated as a novel concept. While controversial in some corners, industrial policy has seen renewed interest in recent years.
Numerous thorny public policy challenges have led to this interest, including threats and opportunities stemming from Canada's low productivity growth, rising geopolitical tensions, competitive pressures, advances in digital technologies and climate change. Indeed, in the past few years, the federal, provincial and territorial governments have increasingly turned to big-ticket industrial policy measures in the hope of attracting and stimulating business investment, creating jobs, and positioning Canada to succeed in a changing global economy. These measures include the Canada Growth Fund, a suite of federal Clean Economy investment tax credits, and significant investments by the federal, Ontario and Quebec governments in an electric vehicle (EV) supply chain. Giswold (2024) estimates the latter to be worth up to $52.5 billion in government support.
Yet one critical aspect has largely been overlooked. To wield the tools of industrial policy effectively and efficiently, Canadian decision-makers must evaluate the success of their efforts in a credible and objective way. Only through robust evaluation can governments demonstrate results to Canadians and avoid wasting scarce public resources on low-value or even failed initiatives. Effective evaluation is particularly important at a time of limited fiscal resources, given the significant opportunity cost of misguided projects.
The potential pitfalls of industrial policy have been well documented, including inefficient allocation of labour and capital, anti-competitive effects, scope creep, exit challenges, government capture by vested interests, rent-seeking by firms and the risk of retaliatory actions by trading partners. Due to the scope for mistakes, corruption and unintended consequences, the Organisation for Economic Co-operation and Development (OECD) stresses that "governments need to put a strong emphasis on evaluation and the regular assessment of targeted industrial policies" (Criscuolo et al., 2022a, p. 5).
What's more, the OECD notes that the evaluation of industrial policy is "weak relative to other fields," such as health care, education, labour markets and poverty reduction (Warwick & Nolan, 2014). This is not for lack of trying. Indeed, governments around the world, including those at the federal, provincial and territorial levels in Canada, have undertaken a wide range of impact and evaluation studies in this area. The halting progress toward crafting a robust evaluation model may, however, reflect the fact that industrial policy is a broad concept. No single formal definition encompasses its typical multiple economic, environmental, social and security objectives.
An industrial policy may involve a wide range of supply- and demand-side instruments. Such measures may include grants, public loans, tax expenditures, public credit guarantees, government-backed venture capital funds, trade restrictions and promotion tools, local content and procurement requirements, intellectual property protection and skills provision. These measures can be horizontal and untargeted, such as R&D tax credits available to all firms. Alternatively, they can be vertical and targeted at specific segments, units or activities within an economy defined, for example, by sector, technology, firm size or development stage. Industrial policies can also be oriented toward "place-based" objectives designed to promote economic development in a specific geographical area that is underdeveloped or has specific endowments of natural resources or skills (European Bank for Reconstruction and Development [EBRD], 2024). They may even be trumpeted as transformational "missions" to address societal challenges, such as climate change (Criscuolo et al., 2022a). For example, the Net-Zero Advisory Body recently proposed a net-zero industrial policy strategy to promote clean technologies in Canada (NZAB, 2025). Finally, such policies can be implemented as individual measures or combined in policy packages or strategies, which may have overlapping goals and approaches. For example, a mission-oriented strategy aimed at promoting a "green transformation" could have a variety of economic, environmental and social goals, and incorporate technology-focused, sectoral and place-based measures to achieve them.
According to the OECD, evaluation of industrial policy has been hindered by "the lack of transparent, publicly available and easily accessible information" (Criscuolo et al., 2022a, p. 35). This concern is based on the complexities inherent in most industrial policies:
These challenges may limit evaluators' ability to apply modern assessment methods, such as randomized controlled trials, which have proven successful in other policy areas.
The OECD notes that, taken together, the challenges in evaluating industrial policy may have contributed to the conclusion that "the evidence on the effectiveness of targeted [industrial policy] interventions is limited and mixed so far" (Criscuolo et al., 2022a, p. 36) and that "little consensus exists on the effectiveness of [industrial policy] interventions" (Criscuolo et al., 2022b, p. 3). This conclusion is corroborated by the International Monetary Fund (IMF), which notes that "the empirical evaluation of net economic benefits of past industrial policies has been challenging" and that "despite recent efforts to overcome methodological challenges, the empirical evidence on the net economic impact of industrial policy is mixed" (IMF, 2024, p. 10). EBRD (2024, p. 6) adds that industrial policy's "track record is mixed at best."
While the fields of industrial policy and policy evaluation are broad, the scope of this paper is deliberately narrow. It is not a critique of Canada's current evaluation practices. Nor is it an assessment of the methods used to evaluate industrial policy interventions. Rather, it aims to identify the challenges that arise from using evaluations in the policy-making and approval process, and to recommend practical steps to overcome these hurdles. My ultimate goal is to propose a framework that provides government decision-makers with the best possible information to design industrial policies, enables evaluators to make an effective contribution to the policy-making process, and informs and engages Canadians in this important public policy area.
Despite the methodological challenges and mixed results, the OECD stresses that a "strong push on policy evaluation is urgently required" (Criscuolo & Lalanne, 2023). The IMF takes much the same view, noting that "industrial policy should be supported by an appropriate governance framework … [which] involves … regular monitoring and reviews" (IMF, 2024, p. 12).
To that end, this paper offers recommendations for a practical evaluation model that could help government decision-makers in Canada demonstrate results and design more effective industrial policies. Such a model could apply to either individual measures or policy packages. It would be based on a set of well-defined objectives and measurable performance indicators, and would be relevant to a variety of industrial policy instruments.
As a starting point, I undertook a literature review to determine best practices in policy and program evaluation in general and, where applicable, the specific field of industrial policy. Source materials include: publications and documents prepared by multilateral institutions, such as the OECD and the IMF; domestic jurisdictions, including the government of Canada and provincial/territorial governments; and foreign governments, such as Australia, Ireland, New Zealand, the United Kingdom and the United States.
Based on this review, I recommend a framework for industrial policy evaluation comprising two main elements:
- leadership and culture; and
- policies and practices.
This framework draws extensively on the IRPP's multi-year project Canada's Next Economic Transformation: What Role Should Industrial Policy Play? It owes much to that project in terms of rationale, objectives, activities, inputs and outputs, as well as expected outcomes and impacts.
There is a rich literature on the role of evaluation in public policy-making. An array of practical directives, backgrounders, how-to guides, handbooks and toolkits highlight the rationale for evaluation, and describe various purposes, types, timings and methods. These resources also demonstrate the potential uses of evaluation results, such as developing new policies and programs, and assessing the performance of existing ones (Australian Centre for Evaluation, 2023; Department of Children, Disability and Equality, 2021; HM Treasury, 2020; OECD, 2025; Social Policy Evaluation and Research Unit [Superu], 2017).
In Canada, the federal government's 2016 Policy on Results, together with an accompanying Directive on Results and supporting documents, sets out the general requirements for all program evaluation and provides answers to the frequently asked questions of why, what, who, when and how to evaluate (Treasury Board of Canada Secretariat, 2016a, 2016b, 2020a, 2020b).
Despite the plentiful literature, the OECD has noted a "dearth of good examples of influential industrial policy evaluation" (Warwick & Nolan, 2014, p. 67). To address that gap, it set up an expert group on evaluation of industrial policy in 2012, comprising government officials and academics with experience in this area. The group drew upon evaluation materials and member experiences to come up with a set of principles for effective industrial policy evaluation.
While the reference may be somewhat dated and "both the economic and policy environment have changed dramatically since that work" (Criscuolo et al., 2022a, p. 8), its recommendations remain relevant today, if only because little progress has been made in developing an up-to-date evaluation framework specific to the field of industrial policy. (IMF [2024] provides a conceptual framework for assessing industrial policy, but only in the specific context of IMF surveillance exercises.)
Taken together, the materials described above provide the building blocks for a robust framework to evaluate industrial policy. Each of these elements is described below.
To start, evaluation of industrial policy needs to be set within the context of an appropriate governance framework and policy cycle. The IMF recommends "regular monitoring and reviews, supported by credible expertise and fact-based approaches to decision-making, as well as clearly defined sunset clauses to ensure that industrial policy support is phased out gradually" (IMF, 2024, p. 12). Similarly, the OECD identifies the key attributes of effective industrial policy as including explicit policy rationales, clear objectives, measurable performance indicators, an oversight regime and regular assessment to measure results and recommend adjustments over time (Criscuolo et al., 2022a).
While they may have different labels across the various sources cited in this paper, evaluations generally fall into three main types, based on their purpose:
According to Superu (2017, p. 4), "formative evaluation is about steering the ship, while summative evaluation is about making sure it arrived in the right place."
An evaluation's approach can be based on any one of these types or a combination of all three. It can assess: relevance (whether a policy or program addresses a demonstrated need and reflects a government's priorities and responsibilities); efficiency (whether inputs are used in the most efficient way to maximize outputs); and effectiveness (whether outcomes and impacts align with policy-makers' intentions) (Lester, 2024).
In this way, decision-makers can use an evaluation to assess the functioning, merit, or value of a policy or program, as well as its unintended consequences. The evaluation can also help them determine whether the policy should be sustained, improved, scaled up, replicated or abandoned, with the associated resources flowing to other policy priorities.
An evaluation has two main purposes:
- learning, by generating insights to improve the design and delivery of policies and programs; and
- accountability, by demonstrating results and the responsible use of public resources.
Along with capacity and capability issues, the purpose of an evaluation may influence whether it is conducted by an internal or external team. An internal group typically has much to gain from such an experience, but an external one may be best suited for accountability purposes that require a greater degree of independence and objectivity.
An evaluation can take place at various points across the policy or program development and implementation cycle.
Evaluations are traditionally scheduled at or near the end of a mature policy or program to determine whether the expected outcomes and impacts were achieved. These assessments can be used, for example, to determine policy and funding renewals under budget and cabinet approval processes.
Developmental evaluations, which take place while a program is in operation, encourage continuous improvement and allow for course adjustments. Even in the early design phase, decision-makers can benefit from past evaluations of similar measures by both domestic and international players. They can also take lessons from evaluations of pilot programs that test a policy intervention in a controlled environment and on a smaller scale prior to the program being rolled out for full-scale implementation. In this way, "evaluations feed into each other – the 'after' of one phase is the 'before' of a new phase" (Superu, 2017, p. 4).
Developing and agreeing on an evaluation plan is an essential starting point. These early steps involve identifying the policy or program to be evaluated, followed by the objectives of the intervention and the focus of the evaluation and its approach, including questions to be addressed. Attention then turns to these items: the existing evidence base and the data to be collected and analyzed; methods to produce the findings and recommendations; and requirements for funding, staffing, time and skills. Early evaluation planning can help maximize learning and reduce the cost of data collection by ensuring that the right information is collected in the most cost-effective and efficient way.
To develop, monitor and, if necessary, adjust an evaluation plan, the U.K. Treasury suggests setting up an evaluation steering group that includes representatives from the policy or program design and delivery teams, the evaluators and, possibly, key stakeholders and independent experts. The latter could also take on a quality control role through an expert peer review process (HM Treasury, 2020). These types of governance safeguards tend to enhance the credibility of an evaluation and help promote ownership or buy-in by users of its results.
A typical evaluation plan has two key elements:
- a logic model, which maps an intervention's inputs and activities to its outputs, outcomes and impacts; and
- a theory of change, which explains how and why those activities are expected to produce the desired results.
Taken together, these elements "describe how and why a desired change is expected to happen in a particular context" and help "link what you want to achieve with how you will measure success" (Australian Centre for Evaluation, 2023). Logic models and theories of change can help policy-makers develop policies and programs, consult with stakeholders regarding shared objectives, and produce a roadmap for future evaluation. As a result, they can be used "as part of the policy planning process, or … developed retrospectively to help frame an evaluation of a policy or programme that does not have one" (Department of Children, Disability and Equality, 2021, p. 13).
An evaluation can draw on a variety of approaches to social science research. Among the options are:
- experimental designs, such as randomized controlled trials;
- quasi-experimental methods that construct comparison groups from non-participants;
- theory-based approaches that test the logic of an intervention; and
- value-for-money assessments, such as cost-benefit analysis.
Value-for-money evaluations typically use a structured cost-benefit analysis to identify the positive and negative impacts of a program. An example is the approach applied to all regulatory proposals in Canada under the Cabinet Directive on Regulation, with the goal of demonstrating that "the benefits to Canadians outweigh the costs" (Treasury Board of Canada Secretariat, 2024). The U.S. Department of Energy provides a helpful summary of the characteristics of various evaluation methods for research and development programs, which are also applicable to broader industrial policy programming (Ruegg & Jordan, 2007). Even so, the U.K. Treasury notes that "no one evaluation approach can appropriately evaluate all types of interventions – each design has its advantages and disadvantages – and often approaches may need to be combined" (HM Treasury, 2020). As a result, many evaluations use "mixed methods," which combine quantitative and qualitative information (Australian Centre for Evaluation, 2023).
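To make the arithmetic concrete, a generic cost-benefit calculation (a textbook formulation, not the specific template prescribed by the Cabinet Directive on Regulation) discounts the stream of benefits $B_t$ and costs $C_t$ of an intervention over its horizon $T$ at a social discount rate $r$:

$$\mathrm{NPV} = \sum_{t=0}^{T} \frac{B_t - C_t}{(1+r)^t}$$

A positive net present value indicates that discounted benefits exceed discounted costs; evaluators often also report a benefit-cost ratio, the ratio of discounted benefits to discounted costs.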
Data collection is an "essential component of any evaluation" and should be planned at the same time as the policy or program is developed (HM Treasury, 2020). In particular, baseline data should be gathered before the policy or program begins. A logic model or theory of change can help identify data needs and gaps with respect to inputs, outputs, outcomes and impacts. Sources may include existing administrative and monitoring data, such as from firms participating in a program, as well as new data collected specifically for the evaluation. Data quality is key. Existing data should be assessed at the outset, as poor-quality or partial data will distort the scope and scale of the evaluation. Data also need to be seen as valuable to those using them. As the U.K. Treasury warns, "if [data collection is] seen purely as an administrative burden, the incentive to ensure data quality is typically low" (HM Treasury, 2020).
Evaluations can differ in terms of scale and ambition. In terms of calibrating these efforts, the U.K. Treasury (2020) notes that a "good evaluation" should be "fit for purpose" and "reflect the needs of decision-makers and those scrutinizing policy from the outside." The latter could include public auditors, parliamentary committees and budget officers, civil society groups and non-governmental organizations, as well as the general public.
With that in mind, the Treasury recommends that evaluations should be designed to be: (1) useful, by providing accessible and actionable findings at the right time; (2) credible, in terms of objectivity and transparency; (3) robust, in terms of technical design; and (4) proportionate to the strategic importance, scale, cost and risk profile of the intervention (HM Treasury, 2020).
Ultimately, as the Treasury points out, "the value of evaluation comes through its use and influence." The OECD adds that "ensuring evaluation is influential is of course every bit as important as ensuring it is soundly based" (HM Treasury, 2020; Warwick & Nolan, 2014, p. 54).
A robust evaluation should go through these distinct phases: planning and design; data collection and analysis; reporting of findings and recommendations; and dissemination and follow-up.
The OECD stresses the importance of the reporting and dissemination phases, noting that "no matter how good the quality of the underlying evaluation, its value will only be realized if there are effective channels for communication and influencing to increase the likelihood that the results are used" (Warwick & Nolan, 2014, p. 54).
In each of these phases, evaluators and decision-makers, as the key users or "consumers" of evaluation, need to maintain strong links and work together (Bourgeois & Whynot, 2018). Only by doing so can they develop and maintain the sound processes and communication channels needed to ensure that feedback from evaluators is useful to policy-makers, and that evaluators understand the context, needs and constraints of the decision-makers. Indeed, the New Zealand government takes the view that "with evaluation, the processes you follow are as important as the content that is produced" (Superu, 2017, p. 3).
The OECD notes that the objectives of industrial policy have expanded greatly, now going "beyond productivity growth and innovation to include, inter alia, sustainability, resilience and strategic autonomy" (Criscuolo et al., 2022b, p. 3). The IMF (2024) lists economic, environmental, social and security goals under its taxonomy of industrial policy objectives and tools. And an industrial policy program or strategy may have multiple objectives. For example, in a disadvantaged region undergoing significant economic transition, enlightened industrial policies could help address the distributional impacts and promote inclusiveness, while supporting the economic objectives of greater investment and employment through a mix of instruments such as domestic subsidies and training.
However, EBRD (2024) warns that, without a clear prioritization of objectives, multiple goals often clash. The OECD also cautions that, given the relatively recent emergence of these various new industrial policy objectives, there is an urgent need for more empirical work on the effects of industrial policy on objectives other than innovation, productivity and growth, as well as on the evaluation of industrial strategies (Criscuolo et al., 2022b). Bourgeois and Whynot (2018) take the view that such "portfolio-wide" evaluations would provide senior policy-makers with a more strategic view in their reallocation decisions across programs.
As part of the evaluation process, performance measures can be applied to inputs and outputs, as well as to the intended outcomes and impacts of a policy or program. This information helps decision-makers and evaluators define what constitutes success and failure. According to the U.K. Treasury, evaluation and monitoring "are closely related, and a typical evaluation will rely heavily on monitoring data" (HM Treasury, 2020). To distinguish between the two, the New Zealand authorities note that monitoring is required to do evaluation, but evaluation applies critical thinking to monitoring and other data (Superu, 2017). In terms of timing, both the British and Irish governments recommend that "monitoring and evaluation occur across the policy life cycle and should be built in from the very beginning" (Department of Children, Disability and Equality, 2021, p. 14; HM Treasury, 2020). Indeed, New Zealand notes that "the most useful evaluations occur when a programme has been designed with evaluation in mind" (Superu, 2017, p. 37).
In the case of industrial policy, the OECD notes that outcome metrics are increasingly likely to be "multidimensional" and not just based on aggregate productivity (Criscuolo et al., 2022b). It adds that, while "many … evaluations focus on the impact of policies on treated firms … a complete cost-benefit analysis would also require identifying the effect on non-treated firms" to assess spillover effects through labour and product markets (Criscuolo et al., 2022b, p. 19). The New Zealand government stresses that it usually makes more sense to monitor a small number of good indicators rather than a large number of "only vaguely-helpful-at-best indicators" (Superu, 2017, p. 34). Similarly, the OECD recommends "a pragmatic approach to indicators that are likely to be available at an early stage" of the evaluation planning exercise (Warwick & Nolan, 2014, p. 54).
Evaluations can be conducted at either a micro level, centred on a specific firm or program recipient, or a macro level, involving a broader industry, sector or the entire economy. The OECD cautions that firm-specific studies "cannot easily take into account policy packages or strategies," but also that "analyses at the industry or country level are poorly designed to identify a causal relationship between the policy instrument and the outcome, such as productivity, employment or growth" (Criscuolo et al., 2022b, p. 16).
In Canada, federal industrial policy programs that have undergone recent evaluations, such as the Strategic Innovation Fund and the Innovation Superclusters Initiative, have focused on the firm-level results of program recipients to "assess the relevance, performance and efficiency" of the programs (Innovation, Science and Economic Development Canada [ISED], 2021, p. 8, 2022a, p. 12). In the case of the Strategic Innovation Fund, however, an "impact report" based on aggregated project results over the first five years of the program compares "select aggregate metrics … to the averages found in Canada's national statistics … to observe the impacts of projects supported through SIF" at a more macro level (ISED, 2024, p. 9).
Based on the building blocks described above, I can articulate a set of principles and best practices to guide industrial policy evaluation in Canada:
Most national and sub-national government evaluation frameworks in Canada encompass many, if not all, of the guiding principles mentioned above. However, while these best practices are necessary, they may not be sufficient. The OECD (2020) notes that "most countries face significant challenges in promoting policy evaluation across government" (p. 3) and that "the use of evaluations … often falls below expectations" (p. 4). While inadequate resources – in terms of capacity and capability – are typically identified as a key supply-side obstacle, the root of the problem may be, in the OECD's words, that "sustaining political interest and demand for policy evaluation remains challenging" (p. 13).
In the Canadian context, Lester (2024, p. 9) corroborates this point by noting that "disinterest in evaluation by policy-makers is a longstanding issue," while Bourgeois and Whynot (2018, p. 339) add that "evaluation is viewed by program personnel as a cost of doing business … rather than as a management support function."
There are a number of explanations for why decision-makers do not make more use of evaluations. One is a misalignment between their needs and evaluation results, in terms of timing, timeliness and focus. For example, with fixed, mandated assessment cycles, such as under the federal government's Policy on Results (i.e., once every five years), Mayne (2018) notes that evaluations may not be ready or may be too old for policy-makers at key decision points. Moreover, evaluation units in ministries tend to focus on process evaluations and operational issues, which are useful for program managers, but may have less relevance for officials making budget and funding decisions (Mayne, 2018). Decision-makers may also be more engaged at the broad portfolio level than in individual programs, given their strategic role in an organization. More generally, an evaluation initiative may find itself competing with other factors that influence policy and programming decisions, such as political imperatives, special interests and anecdotal information (Bourgeois & Whynot, 2018).
This disinterest may also reflect the fact that evaluations are often mandatory due to program or legislative requirements. Under the Canadian government's Policy on Results, all ongoing programs of grants and contributions with average annual expenditures of $5 million or more over five years must be evaluated once every five years under section 42.1 of the Financial Administration Act. This updated policy replaced the 2009 Policy on Evaluation. At the same time, the Treasury Board of Canada Secretariat's Centre of Excellence in Evaluation wound down, with evaluation activities being handled by a newly created Results Division.
However, as the New Zealand authorities have cautioned, undertaking an evaluation just because it's a requirement is "the worst possible reason to do it, and tends to result in something done neither particularly well, nor used particularly effectively, which is a waste of time and resources" (Superu, 2017, p. 6). Similarly, the U.K. Treasury stresses the need for "a shift away from seeing evaluation as a necessary evil or a ritualistic necessity to one in which attributing value to the evaluation of policy is built in as a core dimension of good policy practice" (HM Treasury, 2020). The OECD (2020, p. 23) adds that if "an effective connection between evaluation results and policy-making remains elusive," it will result in "insufficient use," which weakens the rationale for conducting an evaluation.
We clearly need to break this vicious cycle of disinterest and neglect, given the significant scale of industrial policy and the risks involved. Instead, Canadians would be well served by a virtuous cycle of ever-growing interest in, and demand for, rigorous evaluation of industrial policies. The OECD, however, notes that "the culture of evaluation is still in its infancy for industrial policies" (Criscuolo et al., 2022b, p. 17). A key finding of this study is the need to overcome this hurdle and build a culture in Canada's industrial policy community that values evaluation as an "embedded practice" and, as the U.K. Treasury puts it, an "integrated part of an organization's culture and operational structure" (HM Treasury, 2020). In the words of Polastro (2024), the goal is to move "from 'having to' to 'wanting to' evaluate" industrial policy.
The OECD (2020, p. 3) notes that "frameworks provide high-level guidance and clarity to institutions by outlining overarching best practices and goals." To this end, the industrial policy evaluation framework proposed in this paper is built on two pillars: (1) leadership and culture; and (2) policies and practices. These are two of the four "capacity areas" identified by Results for America, a Washington-based consultancy that helps organizations develop evaluation frameworks in a much broader policy context. These pillars support an organization's evaluation incentives and capabilities, and already exist in many of the general evaluation systems used by Canada's federal, provincial and territorial governments. Consequently, there is "no reason to start from scratch" (Results for America, 2024, p. 9).
The recommendations below aim to shore up these pillars and apply them consistently in the challenging area of industrial policy.
Leadership is essential to help staff understand why evaluation is important and how it adds value and contributes to the success of different evaluation users. Leadership is also needed to dispel fears that evaluation is about apportioning blame. Rather, it is about learning and growth and enabling informed decision-making.
Cultivating an evaluation culture is a long-term endeavour, requiring sustained and formal commitment across three areas: (1) vision and commitment; (2) structures and resources; and (3) skills and knowledge (Results for America, 2024). This commitment should include these actions:
As the OECD (2020, p. 3) puts it, "sound institutional setups can provide incentives to ensure that evaluations are effectively conducted." It sees "a close connection between incentives to enhance the quality of public expenditures and incentives to deliver results" (p. 4). Similarly, Results for America (2024, p. 80) notes that "budget and funding decisions are a great place for an institution to commit to conducting evaluations and using evidence of effectiveness." Allocating funds to "what works" is especially important at a time of budgetary constraints.
One purpose of the Canadian government's Policy on Results is to "inform Memoranda to Cabinet, Treasury Board submissions and expenditure management generally" (Treasury Board of Canada Secretariat, 2016b). Similarly, the Treasury Board's (2023) submission template asks whether an "initiative's design and delivery model … has been informed by past evaluations" and inquires about "the anticipated timing and scope of the evaluation of the initiative," as per the Directive on Results. The Department of Finance's template for budget proposals requests "where applicable, past program evaluations" (Department of Finance Canada, 2024; Treasury Board of Canada Secretariat, 2023).
Mayne (2018) argues for a tailored form of evaluation, which he calls rapid expenditure evaluations, given the risk of information misalignment between typical evaluations and the specific needs of decision-makers during the budget and funding process. In the context of industrial policy, I support this proposal, described below.
To signal the importance of industrial policy evaluation and enhance its usefulness to senior decision-makers, I recommend the creation of a dedicated industrial policy evaluation unit, resourced over and above the budgets of existing departmental evaluation units. The head of the unit should have sufficient seniority to champion industrial policy evaluation across government. They would work closely with other leaders in the industrial policy community, including those with program design and delivery responsibilities, as well as key stakeholders, to schedule, scope, design, conduct and disseminate industrial policy evaluations. For example, select OECD countries have effectively used "Evaluation Champions" (OECD, 2025).
Two-thirds of governments surveyed by the OECD have located the evaluation function at their centre to provide strategic direction and embed a whole-of-government approach. Locating the policy evaluation function close to decision-making power may allow it to effectively commission policy evaluations and follow up on commitments by ministries (OECD, 2020). Given that it is not reasonable to expect departmental evaluators to question the existence of programs that have been endorsed by their minister, Mayne (2018) argues for a centralized evaluation team that can "focus on horizontal programming themes and policies across ministries" (p. 323) and "manage a targeted schedule of evaluations … specifically designed to support budget preparation" (p. 322). Under this proposal, departmental evaluation units would continue to conduct process evaluations to help senior management adjust programs to make them more effective and cost-efficient.
One example of such a model is the Australian Centre for Evaluation, established in May 2023 and situated in the Treasury department "to put evaluation and evidence at the heart of policy design and decision-making." The centre partners with government departments and agencies to "conduct flagship evaluations on agreed priorities." It works closely with the evaluation units of other departments and agencies, helps "ensure government programs deliver value for money," and aims to build a "culture of continuous improvement" within the Australian government (Australian Centre for Evaluation, 2023).
In Canada's case, I would recommend that the industrial policy evaluation unit be part of the Privy Council Office, in partnership with both the Department of Finance and the Treasury Board Secretariat, given the former's responsibilities for tax policy and budgeting and the latter's for funding decisions. Extra support would be provided by key departments, such as ISED.
One of the first jobs of the proposed industrial policy evaluation unit should be to develop an explicit policy on evaluations. According to Results for America (2024), evaluations are more likely to become "business as usual" if clear guidelines for the process are in place. Such guidelines would articulate the goals and requirements of evaluation. They can also help promote and protect the integrity of these activities, justify the resources and time devoted, and sustain an organization's commitment to the practice "beyond the tenure of one supportive, influential leader" (Results for America, 2024, p. 40).
In some cases, evaluation policies are enshrined in legislation, sending a clear message of their high priority to a government – for example, the U.S. federal Foundations of Evidence-Based Policymaking Act (2018), known as the "Evidence Act." Such visibility can help secure buy-in and smooth implementation, and ensure the policy endures over time, through changes in the political environment and turnover of elected officials (Results for America, 2024). However, a legislated policy may take time to adopt, be difficult to amend and attract heightened scrutiny, which may impact the willingness of policy-makers or program managers to participate. As a result, I recommend that an industrial policy evaluation strategy be established through a less onerous and more flexible approach, such as through a cabinet directive. For example, the May 2024 Cabinet Directive on Strategic Environmental and Economic Assessment aims to build climate and environmental considerations into federal policy, budget and funding decision-making processes.
The Canadian government has followed the requirements of its Policy on Results in conducting industrial policy evaluations for the Innovation Superclusters Initiative and the Strategic Innovation Fund, among others. However, Lester (2024) pinpoints a number of shortcomings in this process, including: a lack of focus on value-for-money assessments; exclusion of tax expenditures, such as investment tax credits; and the "considerable flexibility" given to government departments to decide which results they wish to publish. To correct these deficiencies, Lester suggests that the Policy on Results be amended to require value-for-money assessments for all program spending and tax expenditures, and to mandate publication of all evaluations.
These proposals align with the government's Cabinet Directive on Regulation, which mandates that all regulations undergo a benefit-cost analysis prior to implementation. This requirement should also apply to stand-alone evaluation guidelines for industrial policy. Indeed, a specific policy on evaluations could serve as a pilot program for Lester's proposed amendments to the Policy on Results, which recently went through its own formal evaluation, the first since the document was compiled in 2016. This review of the Policy on Results was completed by the Treasury Board Secretariat's Results Division in 2024, and the main findings and recommendations have been shared through internal and external channels.
To make an evaluation policy stick, details should be shared widely, often and creatively within the industrial policy community. The message could be pressed home in various ways, including: training programs and development opportunities, such as rotational assignments between program and evaluation teams; performance management exercises; and reward and recognition programs (Results for America, 2024). These efforts should be targeted at all levels of the organization and the wider industrial policy community, and should include "significant investments upstream in the policy-making chain," such as targeted programs to help decision-makers interpret and use evaluation evidence effectively (OECD, 2020, p. 9).
The industrial policy evaluation unit should develop and maintain a multi-year evaluation schedule, which synchronizes the industrial policy evaluation and budget cycles. In this way, evaluations would align with the government's strategic priorities and be embedded in the industrial policy decision-making process across a policy or program's life cycle. The timing and approach of evaluations would thus be linked to key milestones, such as initial budget decisions for new policies or programs, or the assessment of pilot or other programs nearing an end that may merit renewed or additional funding. This approach would also link evaluation more closely to the needs of decision-makers, in terms of both timing and content (Bourgeois & Whynot, 2018; Mayne, 2018).
As noted above, Mayne (2018) recommends a specialized approach to evaluations that support budget and funding decisions. These "rapid expenditure evaluations" would be conducted by the centralized industrial policy evaluation unit at the direction of senior budget officials. They would assess alignment with current government priorities and draw on prior evaluations, such as those conducted at the departmental level, to help improve delivery. As Mayne sees it, the evaluations would use a program logic analysis and theory-based approach to assess "the likelihood that a program is contributing in a meaningful way to desired outcomes, rather than trying to confirm the causal links" (p. 323) through experimental approaches. These specialized evaluations could be conducted within four months and would give decision-makers sufficient insights for "evidence-informed decision making" (p. 321).
A final, important point: to succeed, these quick, centralized evaluations would require a close working relationship between the dedicated industrial policy evaluation unit and departmental evaluation teams. Without such collaboration, it may be difficult to secure access to data, information and relevant internal evaluations.
The OECD concludes that "it is too difficult to draw up a single framework for industrial policy evaluation, given the multitude of approaches, and it would be unhelpful to be too prescriptive" (Warwick & Nolan, 2014, p. 63). As a result, a fresh industrial policy evaluation framework needs to be sufficiently flexible to guide Canadian decision-makers and evaluators across the wide range of industrial policy objectives and instruments.
The IRPP has identified examples of industrial policy objectives that could be relevant for Canada at this time (IRPP Steering Group, 2024). These examples fit roughly into the IMF's (2024, p. 9) illustrative taxonomy of industrial policy objectives and tools shown in table 1.
In a logic model, the examples and tools should be translated into specific impacts and activities, respectively. Table 2 is a highly simplified outline of a generic logic model, covering a sample of industrial policy inputs and activities and their wide-ranging outcomes and impacts:
Finally, it is important to translate outcomes and impacts into measurable performance indicators. Below is a list of indicators based on the IMF's (2024) four categories of industrial policy objectives:
The above performance indicators are by no means exhaustive and can be expressed in various ways, such as levels, growth rates, shares and ratios (e.g., R&D intensity ratios [per cent of revenues] and investment leverage ratios [additional private expenditure generated per unit of public support]). Depending on the scope of the study, they can be collected at the micro (firm) and/or macro (sectoral, regional or national) levels. In any case, as noted above, the key step in designing an industrial policy evaluation system is to determine the data requirements at the start of the process to assess availability and quality. Performance information can be sourced from administrative and monitoring data collected during the program, as well as from targeted surveys of program recipients, such as the number and nature of research collaborations. Evaluations can – and should – also draw on publicly available commercial information and national statistics.
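As a minimal sketch of how such indicators might be computed from monitoring data (the field names and figures below are hypothetical, not drawn from any actual program):

```python
# Hypothetical firm-level monitoring records: (firm, revenues, R&D spending,
# private investment induced, public support received), all in dollars.
firms = [
    ("A", 40_000_000, 2_400_000, 9_000_000, 3_000_000),
    ("B", 12_000_000, 1_500_000, 2_500_000, 1_000_000),
]

for firm, revenues, rd, private_inv, support in firms:
    rd_intensity = rd / revenues      # R&D spending as a share of revenues
    leverage = private_inv / support  # private dollars per public dollar
    print(f"Firm {firm}: R&D intensity {rd_intensity:.1%}, "
          f"leverage {leverage:.1f}x")
```

The same ratios can be aggregated to the sectoral or national level when the scope of the evaluation is broader.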
Where possible, having information on firms that applied for a program but were not selected could help to build a counterfactual. Unfortunately, the OECD notes that "this information is seldom collected" (Criscuolo et al., 2022b, p. 17). However, the federal government's Strategic Innovation Fund performs what it calls "active monitoring," comparing the projects it supports (which must each provide an annual performance benefits report) with those that applied for funding but did not receive it. This practice is recommended at the inception of any industrial policy intervention to help facilitate quasi-experimental analysis.
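A stylized sketch of the quasi-experimental logic this enables, comparing funded recipients with unsuccessful applicants before and after a program (all numbers are invented for illustration; a real analysis would control for firm characteristics and test statistical significance):

```python
# Average outcome (e.g., R&D spending in $M) for each group, before and after.
funded_before, funded_after = [5.0, 6.2, 4.8], [7.1, 8.0, 6.5]
rejected_before, rejected_after = [5.1, 5.9, 4.7], [5.6, 6.4, 5.0]

def mean(xs):
    return sum(xs) / len(xs)

# Difference-in-differences: the change among funded firms minus the change
# among comparable unsuccessful applicants, netting out common time trends.
effect = (mean(funded_after) - mean(funded_before)) - (
    mean(rejected_after) - mean(rejected_before))
print(f"Estimated program effect: {effect:.2f} ($M per firm)")
```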
To assess trade-offs across industrial policy programs and strategies, it is helpful to have a consistent approach to evaluation reporting and dissemination. At the federal level, reflecting the reporting flexibility inherent in the Policy on Results, recent evaluations of the Strategic Innovation Fund, the Innovation Superclusters Initiative, the National Intellectual Property Strategy and the not-for-profit national research group Mitacs all used inconsistent terminology in their recommendations. For example, the National Intellectual Property Strategy evaluation referred to "opportunities for improvement" (ISED, 2023), while the Mitacs evaluation provided just one "lesson learned" (ISED, 2022b, p. 39). Only the evaluation of the Innovation Superclusters Initiative contained a management response and action plan.
As noted above, tax expenditures are not subject to the evaluation requirements applicable to spending programs under the Policy on Results. Only a regression-based analysis of the responsiveness of R&D expenditures to tax incentives was published in the 2021 Report on Federal Tax Expenditures (Department of Finance Canada, 2021). Nonetheless, one popular tax incentive scheme, the Scientific Research and Experimental Development (SR&ED) program, publishes annual program statistics (number of claims, value of investment tax credits) and "success stories."
Best practice suggests the findings and recommendations of all industrial policy evaluations should be published in a consistent format, supported by a management response and action plan, with implementation progress tracked by the centralized industrial policy evaluation unit. These reports should be supported by tailored communications initiatives to reach specific audiences, ranging from ministers to the general public. An annual report on industrial policy evaluation findings and recommendations is also worth considering. In any case, relevant data should be accessible to academics and practitioners outside the government to build a broad community of evaluation.
The OECD notes that digital technologies, such as big data and machine learning, have the potential to improve the design of targeted industrial policies and to lower the costs of policy impact evaluations, as well as to make them more timely (Criscuolo et al., 2022a, 2022b). Furthermore, improving access to more granular data can facilitate the use of standard policy evaluation methodologies, as big data can provide evaluators with invaluable information to identify relevant control groups. This information can also help policy and program designers understand the characteristics of firms for which a policy or program may be most effective and, therefore, enhance their ability to identify the most suitable candidates.
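As a sketch of the control-group idea (the variables and data are hypothetical; a production analysis would use richer data and established matching estimators), granular data allow each supported firm to be paired with the most similar unsupported firm on observable characteristics:

```python
# Hypothetical firm records with two observable characteristics.
treated = [{"id": "T1", "employees": 120, "rd_intensity": 0.06},
           {"id": "T2", "employees": 45,  "rd_intensity": 0.12}]
candidates = [{"id": "C1", "employees": 110, "rd_intensity": 0.05},
              {"id": "C2", "employees": 50,  "rd_intensity": 0.13},
              {"id": "C3", "employees": 400, "rd_intensity": 0.02}]

def distance(a, b):
    # Rescale each characteristic so that neither dominates the comparison.
    return (abs(a["employees"] - b["employees"]) / 100
            + abs(a["rd_intensity"] - b["rd_intensity"]) * 10)

# Nearest-neighbour matching: each treated firm gets its closest control.
for firm in treated:
    match = min(candidates, key=lambda c: distance(firm, c))
    print(f"{firm['id']} matched with control {match['id']}")
```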
Building an evaluation framework along the lines set out above is by no means the end of the process. The model needs to be carefully examined by decision-makers and evaluators and adjusted accordingly based on their feedback. It can then be used as the basis for numerous other initiatives:
With an effective evaluation framework in place, decision-makers can better assess the benefits and costs of industrial policy interventions. They can demonstrate results to Canadians and, perhaps most important, justify the use of scarce public funds to achieve the diverse but critical goals of a well-grounded industrial policy.
Australian Centre for Evaluation. (2023). Commonwealth evaluation toolkit. Government of Australia. https://evaluation.treasury.gov.au/toolkit/commonwealth-evaluation-toolkit
Bourgeois, I., & Whynot, J. (2018). Strategic evaluation utilization in the Canadian federal government. Canadian Journal of Program Evaluation, 32(3), 273-289. https://doi.org/10.3138/cjpe.43179
Criscuolo, C., Gonne, N., Kitazawa, K., & Lalanne, G. (2022a). An industrial policy framework for OECD countries. OECD Science, Technology and Industry Policy Papers, No. 127. OECD Publishing. https://doi.org/10.1787/0002217c-en
Criscuolo, C., Gonne, N., Kitazawa, K., & Lalanne, G. (2022b). Are industrial policy instruments effective? A review of evidence in OECD countries. OECD Science, Technology and Industry Policy Papers, No. 128. OECD Publishing. https://doi.org/10.1787/57b3dae2-en
Criscuolo, C., & Lalanne, G. (2023, January 17). A new framework for better industrial policies. ProMarket. https://www.promarket.org/2023/01/17/a-new-framework-for-better-industrial-policies/
Department of Children, Disability and Equality. (2021, September 29). Frameworks for policy, planning and evaluation. Government of Ireland. https://www.gov.ie/en/department-of-children-disability-and-equality/publications/frameworks-for-policy-planning-and-evaluation-evidence-into-policy-guidance-note-7/
Department of Finance Canada. (2024). Budget/off-cycle proposals. Government of Canada. https://www.canada.ca/en/department-finance/services/publications/federal-budget/proposals.html
European Bank for Reconstruction and Development (EBRD). (2024, November 26). Transition report 2024-25: Navigating industrial policy. https://www.ebrd.com/home/news-and-events/publications/economics/transition-reports/transition-report-2024-25.html
Giswold, J. (2024, June 18). Tallying government support for EV investment in Canada. Office of the Parliamentary Budget Officer. https://www.pbo-dpb.ca/en/additional-analyses–analyses-complementaires/BLOG-2425-004–tallying-government-support-ev-investment-in-canada–bilan-aide-gouvernementale-investissement-dans-ve-canada
HM Treasury. (2020). Magenta book: Central government guidance on evaluation. Government of the United Kingdom. https://www.gov.uk/government/publications/the-magenta-book/magenta-book-central-government-guidance-on-evaluation-html
Innovation, Science and Economic Development Canada (ISED). (2021). Evaluation of the Strategic Innovation Fund (SIF). Government of Canada. https://ised-isde.canada.ca/site/audits-evaluations/sites/default/files/attachments/h_03942_en.pdf
Innovation, Science and Economic Development Canada (ISED). (2022a). Evaluation of ISEDâs Innovation Superclusters Initiative (ISI). Government of Canada. https://ised-isde.canada.ca/site/audits-evaluations/sites/default/files/documents/2022-07/ISI_Evaluation_Report-2022-EN.pdf
Innovation, Science and Economic Development Canada (ISED). (2022b). Evaluation of Innovation, Science and Economic Development Canada (ISED) funding to MITACs. Government of Canada. https://ised-isde.canada.ca/site/audits-evaluations/sites/default/files/attachments/2022/evalmitacs.pdf
Innovation, Science and Economic Development Canada (ISED). (2023). Evaluation of the National Intellectual Property Strategy. Government of Canada. https://ised-isde.canada.ca/site/audits-evaluations/en/evaluation/evaluation-national-intellectual-property-strategy
Innovation, Science and Economic Development Canada (ISED). (2024). Strategic Innovation Fund: Impact report. Government of Canada. https://ised-isde.canada.ca/site/strategic-innovation-fund/en/impact-report
International Monetary Fund (IMF). (2024, March 11). Industrial policy coverage in IMF surveillance – broad considerations (Policy Paper No. 2024/008). https://doi.org/10.5089/9798400266683.007
IRPP Steering Group. (2024). Should governments steer private investment decisions? Framing Canada's industrial policy choices. Institute for Research on Public Policy. https://irpp.org/wp-content/uploads/2024/10/Should-Governments-Steer-Private-Investment-Decisions-Framing-Canada-Industrial-Policy-Choices.pdf
Lester, J. (2024, September 5). Minding the purse strings: Major reforms needed to the federal government's expenditure management system. C.D. Howe Institute. https://cdhowe.org/publication/minding-purse-strings-major-reforms-needed-federal-governments-expenditure/
Mayne, J. (2018). Linking evaluation to expenditure reviews: Neither realistic nor a good idea. Canadian Journal of Program Evaluation, 32(3), 316-326. https://doi.org/10.3138/cjpe.43178
Net-Zero Advisory Body (NZAB). (2025). Collaborate to succeed: The Government of Canadaâs role in growing domestic clean technology champions. https://www.nzab2050.ca/publications/collaborate-to-succeed-the-government-of-canadas-role-in-growing-domestic-clean-technology-champions
Organisation for Economic Co-operation and Development (OECD). (2020). How can governments leverage policy evaluation to improve evidence informed policy making? Highlights from an OECD comparative study. OECD Publishing. https://legalinstruments.oecd.org/api/download/?uri=/private/temp/7ecfe9af-5b05-4992-9a10-ebdc45a944a8.pdf&name=policy-evaluation-comparative-study-highlights.pdf
Organisation for Economic Co-operation and Development (OECD). (2025). OECD recommendation on public policy evaluation: Implementation toolkit. OECD Publishing. https://www.observatoriopoliticaspublicas.es/wp-content/uploads/2025/02/OCDE_2025.pdf
Polastro, R. (2024, December 10). Moving organisations from 'having to' to 'wanting to' evaluate: Five recommendations for practice. BetterEvaluation. https://www.betterevaluation.org/blog/moving-organisations-having-wanting-evaluate-five-recommendations-for-practice
Results for America. (2024). Evaluation policy guide. https://results4america.org/tools/evaluation-policy-guide/
Ruegg, R., & Jordan, G. B. (2007). Overview of evaluation methods for R&D programs: A directory of evaluation methods relevant to technology development programs. U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy. https://www1.eere.energy.gov/analysis/pdfs/evaluation_methods_r_and_d.pdf
Social Policy Evaluation and Research Unit (Superu). (2017). Making sense of evaluation: A handbook for everyone. Government of New Zealand. https://www.dpmc.govt.nz/publications/making-sense-evaluation-handbook-everyone
Treasury Board of Canada Secretariat. (2016a). Policy on results. Government of Canada. https://publications.gc.ca/collections//collection_2017/sct-tbs/BT22-172-2016-eng.pdf
Treasury Board of Canada Secretariat. (2016b, July 1). Directive on results. Government of Canada. https://www.tbs-sct.canada.ca/pol/doc-eng.aspx?id=31306
Treasury Board of Canada Secretariat. (2020a, July 23). Policy on results: What is evaluation? Government of Canada. https://www.canada.ca/en/treasury-board-secretariat/services/audit-evaluation/evaluation-government-canada/policy-results-what-evaluation.html
Treasury Board of Canada Secretariat. (2020b, November 20). Evaluation 101 backgrounder. Government of Canada. https://www.canada.ca/en/treasury-board-secretariat/services/audit-evaluation/evaluation-government-canada/evaluation-101-backgrounder.html
Treasury Board of Canada Secretariat. (2021, March 22). Theory-based approaches to evaluation: Concepts and practices. Government of Canada. https://www.canada.ca/en/treasury-board-secretariat/services/audit-evaluation/evaluation-government-canada/theory-based-approaches-evaluation-concepts-practices.html
Treasury Board of Canada Secretariat. (2023, September 5). Guidance for drafters of Treasury Board submissions. Government of Canada. https://www.canada.ca/en/treasury-board-secretariat/services/treasury-board-submissions/guidance-for-drafters-of-treasury-board-submissions.html
Treasury Board of Canada Secretariat. (2024). Cabinet directive on regulation. Government of Canada. https://www.canada.ca/en/government/system/laws/developing-improving-federal-regulations/requirements-developing-managing-reviewing-regulations/guidelines-tools/cabinet-directive-regulation.html
Warwick, K., & Nolan, A. (2014). Evaluation of industrial policy: Methodological issues and policy lessons. OECD Science, Technology and Industry Policy Papers, No. 16. OECD Publishing. https://doi.org/10.1787/5jz181jh0j5k-en
This Insight was published as part of the Building New Foundations for Economic Growth program, under the direction of research director Steve Lafleur. The paper also supports the IRPP's initiative, Canada's Next Economic Transformation: What Role Should Industrial Policy Play? The manuscript was edited by Bernard Simon, copy-editing was by Prasanthi Vasanthakumar, proofreading was by Zofia Laubitz, editorial co-ordination was by Étienne Tremblay, production was by Chantal Létourneau and art direction was by Anne Tremblay.
Douglas Nevison is an economist and a former senior public servant at Environment and Climate Change Canada, the European Bank for Reconstruction and Development, the Privy Council Office and the Department of Finance Canada. Throughout his career, he has been a strong advocate for policy and program evaluation and evidence-informed decision-making.
To cite this document:
Nevison, D. (2025). A model evaluation framework for industrial policy in Canada. IRPP Insight No. 62. Institute for Research on Public Policy.
The opinions expressed in this paper are those of the author and do not necessarily reflect the views of the IRPP, its Board of Directors or sponsors. Research independence is one of the IRPP’s core values, and the IRPP maintains editorial control over all publications.
IRPP Insight is a refereed series that is published irregularly throughout the year. It provides commentary on timely topics in public policy by experts in the field. Each publication is subject to rigorous internal and external peer review for academic soundness and policy relevance.
If you have questions about our publications, please contact irpp@irpp.org. If you would like to subscribe to our newsletter, IRPP News, please go to our website at irpp.org.
Illustration: iStock
ISSN 2291-7748 (Online)