Friday, 31 January 2025

Are we there yet? Evaluating NFP outputs, outtakes, outcomes & impact

Evaluation – Part 2

Evaluation is one of the central elements in the EDM (Evaluate, Direct, Monitor) Governance Model, but its role in governance (and management) is often obscured by the use of other terms, like ‘problem-solving’ or ‘decision-making’. The importance of evaluation in non-profit governance is highlighted in the extracts from AICD’s NFP Governance Principles illustrated below.


Part 1 (https://bit.ly/41GhRCr) of this 2-part series on Evaluation focussed mainly on directors using evaluation measures to address their performance and conformance roles. The diverse nature of evaluative activities carried out across the organisation reflects the wide scope of work implied by the following definition of ‘evaluation’. That broader scope is the theme explored in Part 2 of the series.

As well as recognising this wider scope of evaluation activities, the shift of emphasis in recent years towards evaluating outcomes and impacts requires that we understand these terms, so that shared understandings inform board and management deliberations.

Integrated evaluation framework

There are many types of evaluation activity and numerous methods to choose from. Evaluation is used in virtually all aspects of organisational life, yet few organisations have (monitoring and) evaluation frameworks or policies to guide directors and staff in this key aspect of their work.

There are numerous ways one could approach the development of an integrated framework, and each has its merits and drawbacks. Looking through multiple lenses may help to overcome some of these drawbacks (think blind people each touching a different part of an elephant). Here are just some of the lenses that could be employed in thinking about evaluation in a non-profit entity.

Working with directors and managers in many organisations has made the existence of ‘evaluation silos’ evident. It is often the case that people involved in internal audits see no connection between their work and that of their colleagues involved in program evaluation, risk management, performance management, tendering, or project management.

Much of the evaluation literature focuses on development projects or education, and relatively little is overtly identified as relevant to evaluation across the non-profit organisation. Some boards have adopted a monitoring and evaluation framework to bring structure and consistency to evaluations conducted by or for the board; however, these tend to focus on indicators attached to their strategy, along with selected dashboard elements (like solvency and membership trends).

Thinking of evaluation only in terms of directors using data from monitoring activities to determine whether and how well strategic and operational outcomes were achieved, and to guide future strategy, is a limited view of the role played by a spectrum of evaluation activities, some of which are described with different names.

Boards wishing to ensure that a coherent approach to evaluation is taken throughout their organisation may consider developing an integrated evaluation framework, which will help to ensure that the results of evaluative activities are presented to directors in a consistent form. For such a framework to apply across the spectrum of evaluation activities undertaken, it would doubtless need to focus on a set of evaluation principles rather than any single approach. Here are two sample sets of such principles which may offer starting points for your organisation’s Monitoring and Evaluation Framework.

Critical thinking

In Bloom’s (original) Taxonomy of Educational Objectives, evaluation was at the top of the hierarchy of thinking skills – the pinnacle of critical thinking. Perhaps partly in recognition that evaluation did not necessarily solve problems or result in innovation, Bloom’s Revised Taxonomy (2001) added ‘creating’ to the top of the hierarchy.

The EDM (Evaluate, Direct, Monitor) Governance model already recognised that evaluation was not the last or highest step in governance thinking. Once the ‘What?’ and ‘So What?’ questions have been answered via monitoring and evaluation, the ‘Now What?’ question remains to be answered by the board setting directions for future action (creating). (See header image above.)

Evaluation for quality

A parallel can be seen in the operational uses of evaluation, where conclusions drawn about the value, standard, or status of a matter within a given ‘silo’ are only one aspect of quality assurance and a precursor to quality improvement.

Given its significant role in shaping the insights which inform future plans and activities, the recent shift in evaluation practice to an outcomes focus is a welcome development.

Evaluation logic

Attempting to devise an organisational evaluation framework or model that accommodates this wider collection of evaluative activities could run the risk of oversimplifying various parts of the evaluative ecosystem. Failure to seek a coherent framework, however, could miss the opportunity to see relationships and patterns offering significant insights into organisational development opportunities and enhanced quality management. The logic framework suggested below packs a lot of detail into a single chart, but hopefully offers insights that will be helpful to your board and senior managers as they seek to improve the efficiency and effectiveness of your non-profit or for-purpose entity.

When we recognise our organisation as a system comprising interdependent sub-systems and relationships, we break down silos and challenge narrow views about how people, systems, processes, and technology interact to achieve our purpose/s or execute strategy.

Measuring success, progress, and impact

The evaluation methods, metrics, and milestones identified in your evaluation plan will benefit from looking beyond mere outputs, to identify lessons learned along the way (outtakes), outcomes achieved for stakeholders, and the longer-term impact of your strategy, service model, and campaign activities (where applicable). Identifying your evaluation criteria after the initiative or program has concluded runs the risk of hindsight bias clouding the picture. Using a logic framework like the one suggested above should help to avoid that risk.

https://www.aes.asn.au/talking-about-evaluation
https://www.councilofnonprofits.org/tools-resources/evaluation-and-measurement-of-outcomes
https://balancedscorecard.org/bsc-basics-overview/
https://www.criticalthinking.org/pages/index-of-articles/1021/
https://tinyurl.com/yresy54e
