
Friday, 31 January 2025

Are we there yet? Evaluating NFP outputs, outtakes, outcomes & impact

 


Evaluation – Part 2

Evaluation is one of the central elements in the EDM (Evaluate, Direct, Monitor) Governance Model, but its role in governance (and management) is often obscured by the use of other terms, like ‘problem-solving’ or ‘decision-making’. The importance of evaluation in non-profit governance is highlighted in the extracts from AICD’s NFP Governance Principles illustrated below.


Part 1 (https://bit.ly/41GhRCr) in this 2-part series on Evaluation focused mainly on directors using evaluation measures to address their performance and conformance roles. Acknowledging the diverse nature of evaluative activities carried out across the organisation reveals the wide scope of work implied by the following definition of ‘evaluation’. That broader scope is the theme explored in Part 2 of the series.

Beyond seeking recognition of this wider scope of evaluation activities, the shift of emphasis in recent years towards the evaluation of outcomes and impacts also requires that we understand these terms, so that shared understandings can inform board and management deliberations.

Integrated evaluation framework

There are many types of evaluation activity and numerous methods to choose from. Evaluation is used in virtually all aspects of organisational life, yet few organisations have (monitoring and) evaluation frameworks or policies to guide directors and staff in this key aspect of their work.

There are numerous ways one could approach the development of an integrated framework, and each has its merits and drawbacks. Looking through multiple lenses may help to overcome some of these drawbacks (think blind people each touching a different part of an elephant). Here are just some of the lenses that could be employed in thinking about evaluation in a non-profit entity.

Working with directors and managers in many organisations, I have seen ample evidence of ‘evaluation silos’. It is often the case that people involved in internal audits see no connection between their work and that of colleagues involved in program evaluation, risk management, performance management, tendering, or project management.

Much of the evaluation literature focuses on development projects or education, and relatively little is overtly identified as relevant to evaluation across the whole non-profit organisation. Some boards have adopted a monitoring and evaluation framework to bring structure and consistency to evaluations conducted by or for the board; however, these tend to focus on indicators attached to their strategy, along with selected dashboard elements (like solvency and membership trends).

Thinking of evaluation only in terms of directors using monitoring data to determine whether and how well strategic and operational outcomes were achieved, and to guide future strategy, is a limited view of the role played by a spectrum of evaluation activities, some of which go by other names.

Boards wishing to ensure that a coherent approach to evaluation is taken throughout their organisation may wish to consider developing an integrated evaluation framework, which will help ensure that the results of evaluative activities are presented to directors in a more coherent form. For such a framework to apply across the whole spectrum of evaluation activities undertaken, it would doubtless need to rest on a set of evaluation principles rather than any single approach. Here are two sample sets of such principles which may offer starting points for your organisation’s Monitoring and Evaluation Framework.
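As one illustration of what a principles-based framework might record, here is a minimal sketch in Python; the principle names, activities, and owners are entirely hypothetical and would be replaced by whatever set your board adopts. The point is simply that registering diverse evaluative activities against one shared set of principles makes silos and gaps visible.

```python
from dataclasses import dataclass, field

# Hypothetical shared principles; substitute the set your board adopts.
PRINCIPLES = {"transparency", "proportionality", "stakeholder_voice", "use_of_findings"}

@dataclass
class EvaluationActivity:
    """One evaluative activity (internal audit, program evaluation, risk review, etc.)."""
    name: str
    owner: str                                   # board, committee, or management unit
    principles_applied: set = field(default_factory=set)

    def gaps(self) -> set:
        """Framework principles this activity does not yet address."""
        return PRINCIPLES - self.principles_applied

# Registering diverse activities in one place makes silos visible.
register = [
    EvaluationActivity("Internal audit", "Audit & Risk Committee",
                       {"transparency", "proportionality"}),
    EvaluationActivity("Program evaluation", "Service delivery team",
                       {"stakeholder_voice", "use_of_findings"}),
]

for activity in register:
    print(f"{activity.name}: principle gaps = {activity.gaps() or 'none'}")
```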

Critical thinking

In Bloom’s (original) Taxonomy of Educational Objectives, evaluation was at the top of the hierarchy of thinking skills – the pinnacle of critical thinking. Perhaps partly in recognition that evaluation did not necessarily solve problems or result in innovation, Bloom’s Revised Taxonomy (2001) added ‘creating’ to the top of the hierarchy.

The EDM (Evaluate, Direct, Monitor) Governance Model already recognised that evaluation was not the last or highest step in governance thinking. Once the ‘What?’ and ‘So What?’ questions have been answered via monitoring and evaluation, the ‘Now What?’ question remains to be answered by the board setting directions for future action (creating). (See header image above.)

Evaluation for quality

A parallel can be seen in the operational uses of evaluation, where conclusions drawn about the value, standard, or status of a matter within a given ‘silo’ are only some aspects of quality assurance and precursors to quality improvement.

Given its significant role in shaping the insights which inform future plans and activities, the recent shift in evaluation practice to an outcomes focus is a welcome development.


Evaluation logic

Attempting to devise an organisational evaluation framework or model that accommodates this wider collection of evaluative activities risks oversimplifying parts of the evaluative ecosystem. Failing to seek a coherent framework, however, could mean missing the opportunity to see relationships and patterns that offer significant insights into organisational development opportunities and enhanced quality management. The logic framework suggested below packs a lot of detail into a single chart, but hopefully offers insights that will help your board and senior managers as they seek to improve the efficiency and effectiveness of your non-profit or for-purpose entity.

When we recognise our organisation as a system comprising interdependent sub-systems and relationships, we break down silos and challenge narrow views about how people, systems, processes, and technology interact to achieve our purpose/s or execute strategy.

Measuring success, progress, and impact

The evaluation methods, metrics, and milestones identified in your evaluation plan will benefit from looking beyond mere outputs to identify lessons learned along the way (outtakes), outcomes achieved for stakeholders, and the longer-term impact of your strategy, service model, and campaign activities (where applicable). Identifying your evaluation criteria after the initiative or program has concluded runs the risk of hindsight bias clouding the picture; using a logic framework like the one suggested above should help to avoid that risk.
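To show how such a plan can fix criteria before the work begins, here is a minimal sketch in Python of an evaluation plan keyed to the output-outtake-outcome-impact chain; every question and measure is a hypothetical placeholder.

```python
# A minimal evaluation-plan sketch keyed to the logic chain described above.
# All measures are hypothetical placeholders, defined before the program
# starts so that hindsight bias cannot shape the criteria later.
evaluation_plan = {
    "outputs":  {"question": "What did we deliver?",
                 "measures": ["workshops run", "clients served"]},
    "outtakes": {"question": "What did we learn along the way?",
                 "measures": ["lessons logged", "process changes adopted"]},
    "outcomes": {"question": "What changed for our stakeholders?",
                 "measures": ["client wellbeing scores", "member retention"]},
    "impact":   {"question": "What long-term difference did we make?",
                 "measures": ["community-level indicators over 3-5 years"]},
}

for stage, detail in evaluation_plan.items():
    print(f"{stage}: {detail['question']} -> {', '.join(detail['measures'])}")
```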


Wednesday, 18 December 2024

The Scales of Governance: Weighing options, arguments, evidence & consequences

 


Evaluation – Part 1

We use the term ‘on balance’ as a shorthand way of saying that we have come to a decision or choice after considering the power, influence, or ‘weight’ of both sides of a question or issue. This invokes metaphoric reference to a set of balance scales – as in the ‘scales of justice’ (see header image).

Evaluation skills, sometimes described as good judgment, are fundamental to good governance. The EDM (Evaluate, Direct, Monitor) Governance Model acknowledges this. This article is the first in a short series looking at some aspects of evaluation in the work of non-profit directors and managers.

Weighing options

Regrettably, most non-profit governance and management decisions involve more than two options or alternatives. Simple choices between good and bad options are rare. If the issues were that simple, then they could probably be resolved by reference to a checklist or filter system, without having to include them on a board agenda. Often enough in governance deliberations, we can also be faced with a choice between ‘least worst’ options rather than ‘best case’ scenarios.

Debates over complex public policy issues inevitably involve more than two perspectives, unless they are seen through the lens of a polarising media story or a ‘school debating club’ approach. Such framings contrive to restrict debate to black-and-white positions, with ‘government’ and ‘opposition’ sides taking a stance for or against a given claim or contention.

Even the choice between action and inaction usually involves additional sub-options, such as whether to act one way or another. For inaction, you could choose to leave the matter off the agenda, or include it, but recommend that the situation simply be monitored for significant developments.

We usually employ arguments for and against each of the options to develop a collective view on which of the options is the most robust, and therefore likely to offer the most satisfactory response to the situation. The criteria we employ to make judgments regarding our option preferences are discussed further below under ‘Weighing evidence’.

Weighing arguments


Deliberative processes in non-profit and for-purpose settings are much less about winning an argument than they are about negotiating the best possible outcomes for key stakeholders.

Simply counting the number of arguments for or against the proposal would not pay sufficient regard to the relative value of some criteria compared with others. That’s where recognition of the evidence called upon to support each argument comes into play.

The importance of employing evidence-based decision-making has been formally recognised by the International Organization for Standardization, which lists it among the quality management principles underpinning ISO 9000.

Weighing evidence

Identifying the arguments for or against a particular proposal or position, and mapping them so they can be fully examined, is a good start in weighing the arguments. When we acknowledge that not all arguments are supported by the same standard of evidence, however, we recognise the need to attach some form of importance ranking or weight to the criteria we apply to our decision making. This tilts the scales in recognition of our values, strategic priorities, and commitment to evidence-informed decision-making.
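To make the arithmetic concrete, here is a minimal sketch in Python, assuming hypothetical criteria, weights, and 1-5 scores. Note that a simple count of the scores leaves the two options tied; only the agreed weights separate them.

```python
# Hypothetical criteria weights (summing to 1.0) and 1-5 scores per option.
weights = {"strategic_fit": 0.40, "evidence_quality": 0.35, "cost": 0.25}

options = {
    "Option A": {"strategic_fit": 4, "evidence_quality": 2, "cost": 5},
    "Option B": {"strategic_fit": 3, "evidence_quality": 5, "cost": 3},
}

# Unweighted totals are equal (11 each); the weighted sum tilts the scales
# toward the better-evidenced option.
for name, scores in options.items():
    weighted = sum(weights[c] * scores[c] for c in weights)
    print(f"{name}: unweighted {sum(scores.values())}, weighted {weighted:.2f}")
```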

When we establish a tender process or initiate CEO recruitment and selection, we are happy to identify criteria for use by a tender committee or selection panel, helping to ensure that an objective decision can be reached on the preferred candidate. Skill in crafting such criteria exists in most boards and senior management teams; however, the establishment of evaluation criteria for other kinds of decisions is not always addressed. Taking the time to agree on the evaluation criteria, and their relative importance, will be rewarded when the time comes to make a decision.


For complex and high-value decisions, I have found argument maps to be a helpful aid to the deliberative process. There are numerous desktop and online mapping systems available, but I have preferred Rationale* and Bcisive* for many years. The capacity to unpack the debate, capture the supporting and opposing arguments, and identify the evidence underpinning those arguments (along with its sources) is particularly helpful to a board seeking to weight or rank the arguments according to the standard of evidence they rest upon.

*No referral fees or commission arrangements apply.
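Rationale and Bcisive are commercial tools, so the sketch below is not their data model or API; it is only a hypothetical Python illustration of the structure an argument map captures: a contention, arguments for and against, and the evidence (with sources) each argument rests upon.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    description: str
    source: str
    strength: int                  # e.g. 1 (anecdote) to 5 (independent audit)

@dataclass
class Argument:
    claim: str
    supports: bool                 # True = for the contention, False = against
    evidence: list = field(default_factory=list)

    def weight(self) -> int:
        """Rank the argument by the strongest evidence it rests upon."""
        return max((e.strength for e in self.evidence), default=0)

contention = "Adopt the proposed service model"
arguments = [
    Argument("Improves client outcomes", True,
             [Evidence("Pilot program results", "2024 internal pilot", 3)]),
    Argument("Increases delivery costs", False,
             [Evidence("Finance team estimate", "CFO briefing paper", 4)]),
]

# List arguments strongest-evidence-first to support deliberation.
print(f"Contention: {contention}")
for a in sorted(arguments, key=lambda a: a.weight(), reverse=True):
    side = "FOR" if a.supports else "AGAINST"
    print(f"[{side}] {a.claim} (evidence weight {a.weight()})")
```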


When assessing whether a board sought access to relevant data and analyses to support its decisions, courts will seek to confirm that directors informed themselves to the level expected of a reasonable person before making their decision. Mere access to the relevant data would not be sufficient, of course; the extent to which probing questions were asked and answered also enters into consideration.

Weighing consequences

Certainly, when we assess the likelihood and severity of adverse consequences of action (or inaction) on a given issue, we are weighing the consequences. This considers only what could go wrong, of course; a balanced approach to deliberation would also look at the value proposition and the ‘benefit dividend’ for our client, member, or community. The sweet spot which (at minimum) balances benefits, costs, and risks should be identified in board decision making, with a preference for proposals in which benefits outweigh costs and risks.
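As a back-of-envelope illustration with entirely hypothetical figures, risk can be expressed as expected loss (likelihood x severity) so that benefits, costs, and risks sit on the same scale:

```python
# Hypothetical figures for a single proposal; risk is expressed as expected
# loss (likelihood x severity) so it can be weighed against benefits and costs.
benefit = 120_000        # estimated annual benefit to members ($)
cost = 70_000            # estimated annual cost of delivery ($)
likelihood = 0.2         # chance of the adverse event in the period
severity = 50_000        # loss if the adverse event occurs ($)

expected_risk = likelihood * severity
net_position = benefit - cost - expected_risk

print(f"Expected risk: ${expected_risk:,.0f}")
# A positive net position means benefits outweigh costs and risks.
print(f"Net position:  ${net_position:,.0f}")
```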

Higher-order governance

The evaluative skills involved in weighing options, arguments, evidence, and consequences are examples of higher-order critical thinking skills. This aspect of evaluation will be explored further in Part 2 of this series.
