Showing posts with label evaluation. Show all posts

Wednesday, 18 March 2026

Locus Focus Vs ‘Hocus Pocus’

 


Measuring success is part of the governance and management responsibility of all non-profit organisations. This responsibility augments obligations related to compliance with legal, regulatory and ethical requirements.

‘Hocus Pocus’

According to vocabulary.com, ‘hocus-pocus’ is an illusion or a meaningless distraction that tricks you in some way. Originally derived from invocations used in magic shows (like ‘abracadabra’), it’s actually fake Latin.

Some of what passes for performance and conformance governance might also be called hocus pocus, as it does not necessarily use the most appropriate indicator or measure to permit a valid evaluation to be carried out.

Confusion can also arise from leaders using certain terms interchangeably. Differentiating between measurement terms and establishing shared understandings of their meaning would therefore be helpful. The header image above offers one way to distinguish between key terms. This may be helpful if you feel your board could benefit from an improved understanding of this central aspect of their governance role.

All incorporated entities need to be able to demonstrate their solvency – their ability to pay all their debts as and when they become due and payable (refer S.95A of the Corporations Act). This is a conformance requirement, and so it’s one of the key metrics all boards should be continuously monitoring. It is also a key focus of scrutiny by independent auditors.
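As an illustration only (not the statutory s.95A test, which turns on the ability to pay debts as and when they fall due), one simple proxy boards often monitor alongside cash-flow forecasts can be sketched as follows. All figures are hypothetical:

```python
# Illustrative solvency monitoring sketch (hypothetical figures).
# The current ratio is a common dashboard proxy: current assets
# divided by current liabilities. It is an indicator only, and is
# no substitute for the cash-flow test in s.95A.

def current_ratio(current_assets: float, current_liabilities: float) -> float:
    """Return current assets divided by current liabilities."""
    return current_assets / current_liabilities

ratio = current_ratio(current_assets=250_000, current_liabilities=180_000)
print(f"Current ratio: {ratio:.2f}")  # a ratio above 1.0 suggests
                                      # short-term debts are covered
```

A board dashboard would typically track this ratio over time rather than as a single snapshot, so that deteriorating trends trigger scrutiny well before solvency is in doubt.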

KPIs, Key Metrics and OKRs

Key Performance Indicators (KPIs) such as achieving a certain percentage of revenue growth year on year, or recruiting a certain number of new members/donors, are performance targets. They are not of concern to regulators like ASIC or the ACNC. Such measures may be important, but they do not normally qualify as ‘key metrics’ (like solvency ratios, member/donor growth, etc.).

Supporting KPIs, key success factors (also known as competitive or strategic ‘posture’) identify the settings required for an organisation to operate effectively in its domain, i.e. what it must do well to achieve its strategic goals, e.g. agility, reliability, diversity, and client engagement.

Objectives and Key Results (OKR) measures have become popular in some quarters as they separate outcomes-based results (which measure quantifiable outcomes) from effort-based results (which measure the relative success of innovations, projects, or initiatives).
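The OKR pattern described above can be sketched as a simple data structure: one objective, with quantifiable key results each scored as a fraction of target. The objective, key results, and figures below are hypothetical illustrations, not a recommended template:

```python
# Minimal OKR sketch (hypothetical objective and figures):
# each key result is scored as achieved/target, capped at 1.0,
# and the objective score is the average of its key results.

okr = {
    "objective": "Grow and retain our donor base",
    "key_results": [
        {"name": "Recruit new donors", "target": 500, "actual": 430},
        {"name": "Donor retention rate (%)", "target": 85, "actual": 80},
    ],
}

def score(kr: dict) -> float:
    """Score a key result as the achieved fraction of target, capped at 1.0."""
    return min(kr["actual"] / kr["target"], 1.0)

overall = sum(score(kr) for kr in okr["key_results"]) / len(okr["key_results"])
print(f"Objective score: {overall:.2f}")
```

The point of the structure is the separation the post describes: each key result is independently quantifiable, so the evaluation conversation moves from opinion to evidence.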

Key result areas (KRAs) may be used by directors when devising the CEO performance plan; otherwise, identification of KRAs is a management function performed by the CEO and other executives for senior staff roles.

‘Objective’ objective assessment

Adopting vague indicators which could only be assessed subjectively would not represent good governance.

While qualitative measures are required for some types of strategic objectives, it should be possible to identify the types of evidence that could objectively be used to determine whether or not, and with what measure of success, a strategic initiative was achieved. An objective (evidence-based) assessment of objective (goal) achievement, if you will.

The set of such measures will include both quantitative and qualitative evidence, according to the nature of the matters being evaluated. The selection of methods and measures will also depend on a range of variables, such as the nature of the organisation, its service profile, regulatory obligations, size, age, and governance maturity.

Locus Focus

Unlike hocus pocus, ‘locus’ and ‘focus’ are real words.

Etymonline.com advises that locus is a noun (plural loci), meaning “place, spot, locality,” and that it derives from the Latin locus “a place, spot; appointed place, position; locality, region, country; degree, rank, order; topic, subject”.

The same authority advises that focus too is of Latin origin. It means “point of convergence,” with its Latin origin referring to “hearth, fireplace” (also, figuratively, “home, family”).

‘Modus’ is the third Latin term used in the header chart, meaning the “way in which anything is done” (hence modus operandi).  

The locus of control is a factor in the selection of the right evaluation measures. If the board is responsible for setting strategy and the associated key performance indicators (albeit with advice from management), then they are the locus. If management is responsible for setting goals for a given position, then management is the locus (of control).

Applying Robert Tricker‘s 2X2 corporate governance matrix (illustrated below), the focus of monitoring and evaluation needs to be both internal and external. It also needs to be both performance- and conformance-related. Depending on which aspect of governance the board is monitoring and evaluating, different types of measures will be suitable.


Your Monitoring and Evaluation Framework


https://tinyurl.com/fnaus7bh

Friday, 31 January 2025

Are we there yet? Evaluating NFP outputs, outtakes, outcomes & impact

 


Evaluation – Part 2

Evaluation is one of the central elements in the EDM (Evaluate, Direct, Monitor) Governance Model, but its role in governance (and management) is often obscured by the use of other terms, like ‘problem-solving’ or ‘decision-making’. The importance of evaluation in non-profit governance is highlighted in the extracts from AICD’s NFP Governance Principles illustrated below.


Part 1 (https://bit.ly/41GhRCr) in this 2-part series on Evaluation mainly focussed on directors using evaluation measures to address their performance and conformance roles. The diverse nature of evaluative activities carried out across the organisation reflects the wide scope of work implied by the following definition of ‘evaluation’. That broader scope is the theme explored in Part 2 of the series.

As well as seeking recognition of this wider scope of evaluation activities, the shift of emphasis in recent years towards the evaluation of outcomes and impacts also requires that we understand these terms, so that shared understandings inform board and management deliberations.

Integrated evaluation framework

There are many types of evaluation activity and numerous methods to choose from. Evaluation is used in virtually all aspects of organisational life, yet there are few organisations with (monitoring and) evaluation frameworks or policies to guide directors and staff in this key aspect of their work.

There are numerous ways one could approach the development of an integrated framework, and each has its merits and drawbacks. Looking through multiple lenses may help to overcome some of these drawbacks (think blind people each touching a different part of an elephant). Here are just some of the lenses that could be employed in thinking about evaluation in a non-profit entity.

Working with directors and managers in many organisations, the existence of ‘evaluation silos’ has been evident. It is often the case that people involved in internal audits see no connection between their work and that of their colleagues involved in program evaluation, risk management, performance management, tendering, or project management.

Much of the evaluation literature focuses on development projects or education, and there is relatively little which is overtly identified as relevant to evaluation across the non-profit organisation. Some boards have adopted a monitoring and evaluation framework to bring structure and consistency to evaluations conducted by or for the board; however, these tend to focus on indicators attached to their strategy, along with selected dashboard elements (like solvency and membership trends).

Thinking of evaluation only in terms of directors using data from monitoring activities to determine whether and how well strategic and operational outcomes were achieved, and to guide future strategy, is a limited view of the role played by a spectrum of evaluation activities, some of which are described with different names.

Boards wishing to ensure that a coherent approach to evaluation is taken throughout their organisation may wish to consider the development of an integrated evaluation framework, which will help to ensure that the results of evaluative activities are presented to directors in a more coherent form. For such a framework to apply across the spectrum of evaluation activities undertaken, it would doubtless need to focus on a set of evaluation principles rather than any single approach. Here are two sample sets of such principles which may offer starting points for your organisation’s Monitoring and Evaluation Framework.

Critical thinking

In Bloom’s (original) Taxonomy of Educational Objectives, evaluation was at the top of the hierarchy of thinking skills – the pinnacle of critical thinking. Perhaps partly in recognition that evaluation did not necessarily solve problems or result in innovation, Bloom’s Revised Taxonomy (2001) added ‘creating’ to the top of the hierarchy.

The EDM (Evaluate, Direct, Monitor) Governance model already recognised that evaluation was not the last or highest step in governance thinking. Once the ‘What?’ and ‘So What?’ questions have been answered via monitoring and evaluation, the ‘Now What?’ question remains to be answered by the board setting directions for future action (creating). (See header image above.)

Evaluation for quality

A parallel can be seen in the operational uses of evaluation, where conclusions drawn about the value, standard, or status of a matter within a given ‘silo’ are only some aspects of quality assurance and precursors to quality improvement.

Given its significant role in shaping the insights which inform future plans and activities, the recent shift in evaluation practice to an outcomes focus is a welcome development.

Evaluation logic

Attempting to devise an organisational evaluation framework or model that accommodates this wider collection of evaluative activities could run the risk of oversimplifying various parts of the evaluative ecosystem. Failure to seek a coherent framework, however, could miss the opportunity to see relationships and patterns offering significant insights into organisational development opportunities and enhanced quality management. The logic framework suggested below packs a lot of detail into a single chart, but hopefully offers insights that will be helpful to your board and senior managers as they seek to improve the efficiency and effectiveness of your non-profit or for-purpose entity.

When we recognise our organisation as a system comprising interdependent sub-systems and relationships, we break down silos and challenge narrow views about how people, systems, processes, and technology interact to achieve our purpose/s or execute strategy.

Measuring success, progress, and impact

The evaluation methods, metrics, and milestones identified in your evaluation plan will benefit from looking beyond mere outputs, to identify lessons learned along the way (outtakes), outcomes achieved for stakeholders, and the longer-term impact of your strategy, service model, and campaign activities (where applicable). Identifying your evaluation criteria after the initiative or program has concluded runs the risk of hindsight bias clouding the picture. Using a logic framework like the one suggested above should help to avoid that risk.
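The output → outtake → outcome → impact chain described above can be sketched as a simple structure for an evaluation plan. The measures below are hypothetical placeholders for a generic campaign, not drawn from any actual framework:

```python
# Minimal sketch of an evaluation plan organised by the
# output -> outtake -> outcome -> impact chain (hypothetical measures).

evaluation_plan = {
    "outputs": ["workshops delivered", "participants trained"],
    "outtakes": ["lessons learned about scheduling and reach"],
    "outcomes": ["participants report improved skills after 3 months"],
    "impact": ["sustained change in community employment rates"],
}

for stage, measures in evaluation_plan.items():
    print(f"{stage}: {', '.join(measures)}")
```

Setting out the chain before the initiative begins, as the post suggests, means the criteria at each stage are fixed in advance and hindsight bias has less room to cloud the picture.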

https://www.aes.asn.au/talking-about-evaluation
https://www.councilofnonprofits.org/tools-resources/evaluation-and-measurement-of-outcomes

https://balancedscorecard.org/bsc-basics-overview/

https://www.criticalthinking.org/pages/index-of-articles/1021/


https://tinyurl.com/yresy54e

Wednesday, 18 December 2024

The Scales of Governance: Weighing options, arguments, evidence & consequences

 


Evaluation – Part 1

We use the term ‘on balance’ as a shorthand way of saying that we have come to a decision or choice after considering the power, influence, or ‘weight’ of both sides of a question or issue. This invokes metaphoric reference to a set of balance scales – as in the ‘scales of justice’ (see header image).

Evaluation skills, sometimes described as good judgment, are fundamental to good governance. The EDM (Evaluate, Direct, Monitor) Governance Model acknowledges this. This article is the first in a short series looking at some aspects of evaluation in the work of non-profit directors and managers.

Weighing options

Regrettably, most non-profit governance and management decisions involve more than two options or alternatives. Simple choices between good and bad options are rare. If the issues were that simple, then they could probably be resolved by reference to a checklist or filter system, without having to include them on a board agenda. Often enough in governance deliberations, we can also be faced with a choice between ‘least worst’ options rather than ‘best case’ scenarios.

Debates over complex public policy issues inevitably involve more than two perspectives, unless they are seen through the lens of a polarising media story or a ‘school debating club’ approach. Such framings contrive to restrict debate to black and white positions, with ‘government’ and ‘opposition’ sides taking a stance for or against a given claim or contention.

Even the choice between action and inaction usually involves additional sub-options, such as whether to act one way or another. For inaction, you could choose to leave the matter off the agenda, or include it, but recommend that the situation simply be monitored for significant developments.

We usually employ arguments for and against each of the options to develop a collective view on which of the options is the most robust, and therefore likely to offer the most satisfactory response to the situation. The criteria we employ to make judgments regarding our option preferences are discussed further below under ‘weighing evidence’.

Weighing arguments


Deliberative processes in non-profit and for-purpose settings are much less about winning an argument than they are about negotiating best possible outcomes for key stakeholders.

Simply counting the number of arguments for or against the proposal would not pay sufficient regard to the relative value of some criteria compared with others. That’s where recognition of the evidence called upon to support each argument comes into play.

The importance of employing evidence-based decision-making has been formally recognised by the International Organization for Standardization (ISO) with the adoption of ISO 9000.

Weighing evidence

Identification of arguments for or against a particular proposal or position, and mapping these so they can be fully examined, is a good start in weighing the arguments. When we acknowledge that not all arguments are supported by the same standard of evidence, however, we recognise that we need to attach some form of importance ranking or weight to the criteria we apply to our decision making. This tilts the scales in recognition of our values, strategic priorities, and commitment to evidence-informed decision-making.

When we establish a tender process or initiate CEO recruitment and selection, we are happy to identify criteria for use by a tender committee or selection panel. This helps to ensure that an objective decision can be reached on the preferred candidate. Skill in crafting such criteria exists in most boards and senior management teams; however, the establishment of evaluation criteria for other kinds of decisions is not always addressed. Taking the time to agree on the evaluation criteria, and their relative importance, will be rewarded when the time comes to make a decision.
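The weighted-criteria approach described above can be sketched in a few lines. The criteria, weights, options, and scores below are hypothetical illustrations; in practice a board would agree these before evaluating any option:

```python
# Minimal weighted-criteria sketch for comparing options
# (hypothetical criteria, weights, and scores). Each option is
# scored 1-5 per criterion; weights sum to 1.0 and reflect the
# agreed relative importance of each criterion.

criteria_weights = {"strategic fit": 0.4, "cost": 0.3, "risk": 0.3}

options = {
    "Option A": {"strategic fit": 4, "cost": 3, "risk": 5},
    "Option B": {"strategic fit": 5, "cost": 2, "risk": 3},
}

def weighted_score(scores: dict) -> float:
    """Sum each criterion score multiplied by its agreed weight."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

for name, scores in options.items():
    print(f"{name}: {weighted_score(scores):.2f}")
```

The value of the exercise is less in the arithmetic than in the discipline: agreeing the weights first forces the deliberative conversation about relative importance before positions harden around a preferred option.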


For complex and high-value decisions, I have found argument maps to be a helpful aid to the deliberative process. There are numerous desktop and online mapping systems available, but I have preferred Rationale* and Bcisive* for many years. The capacity to unpack the debate, capture the supportive and opposing arguments, identify the evidence underpinning those arguments, and the sources of that evidence, is particularly helpful to a board seeking to weight or rank the arguments according to the standard of evidence they rest upon.

*No referral fees or commission arrangements apply.


When assessing whether a board sought access to relevant data and analyses to support their decisions, courts will seek to confirm that directors informed themselves to a level expected by a reasonable person before making their decision. Mere access to the relevant data would not be sufficient of course. The extent to which probing questions were asked and answered also enters into consideration.

Weighing consequences

Certainly, when we assess the likelihood and severity of adverse consequences from action (or inaction) on a given issue, we are weighing the consequences. Of course, this only considers what could go wrong; a balanced approach to deliberation would also look at the value proposition and ‘benefit dividend’ for our client, member, or community. Board decision making should identify the sweet spot which (at minimum) balances benefits, costs, and risks, with a preference for proposals in which benefits outweigh costs and risks.
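The balance described above can be sketched as a simple net-position calculation, treating risk exposure as likelihood multiplied by severity. All figures are hypothetical, and a real deliberation would of course weigh qualitative factors that resist this kind of arithmetic:

```python
# Hedged sketch of "benefits outweigh costs and risks"
# (hypothetical figures). Risk exposure is modelled as
# likelihood x severity, a common rough-order approximation.

def net_position(benefits: float, costs: float,
                 risk_likelihood: float, risk_severity: float) -> float:
    """Return benefits less costs less expected risk cost."""
    return benefits - costs - (risk_likelihood * risk_severity)

net = net_position(benefits=120_000, costs=80_000,
                   risk_likelihood=0.2, risk_severity=50_000)
print(f"Net position: {net:,.0f}")  # positive favours the proposal
```

A positive net position supports the preference expressed above; a negative one signals that costs and risks outweigh the benefit dividend, pointing the board towards another option or towards risk treatments that change the balance.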

Higher-order governance

The evaluative skills involved in weighing options, arguments, evidence, and consequences are examples of higher-order critical thinking skills. This aspect of evaluation will be explored further in Part 2 of this series.

https://tinyurl.com/bp6ewc4s