Showing posts with label accountability.

Friday, 31 January 2025

Are we there yet? Evaluating NFP outputs, outtakes, outcomes & impact

 


Evaluation – Part 2

Evaluation is one of the central elements in the EDM (Evaluate, Direct, Monitor) Governance Model, but its role in governance (and management) is often obscured by the use of other terms, like ‘problem-solving’ or ‘decision-making’. The importance of evaluation in non-profit governance is highlighted in the extracts from AICD’s NFP Governance Principles illustrated below.


Part 1 (https://bit.ly/41GhRCr) in this 2-part series on Evaluation mainly focused on directors using evaluation measures to address their performance and conformance roles. Acknowledging the diverse nature of evaluative activities carried out across the organisation means recognising the wide scope of work implied by the following definition of ‘evaluation’. That broader scope is the theme explored in Part 2 of the series.

As well as seeking recognition of this wider scope of evaluation activities, the shift of emphasis in recent years towards the evaluation of outcomes and impacts also requires that we understand these terms, so that shared understandings inform board and management deliberations.

Integrated evaluation framework

There are many types of evaluation activity and numerous methods to choose from. Evaluation is used in virtually all aspects of organisational life, yet few organisations have (monitoring and) evaluation frameworks or policies to guide directors and staff in this key aspect of their work.

There are numerous ways one could approach the development of an integrated framework, and each has its merits and drawbacks. Looking through multiple lenses may help to overcome some of these drawbacks (think blind people each touching a different part of an elephant). Here are just some of the lenses that could be employed in thinking about evaluation in a non-profit entity.

Working with directors and managers in many organisations, the existence of ‘evaluation silos’ has been evident. It is often the case that people involved in internal audits see no connection between their work and that of their colleagues involved in program evaluation, risk management, performance management, tendering, or project management.

Much of the evaluation literature focuses on development projects or education, and there is relatively little which is overtly identified as relevant to evaluation across the non-profit organisation. Some boards have adopted a monitoring and evaluation framework to bring structure and consistency to evaluations conducted by or for the board, however, these tend to focus on indicators attached to their strategy, along with selected dashboard elements (like solvency and membership trends).

Thinking of evaluation only in terms of directors using data from monitoring activities to determine whether and how well strategic and operational outcomes were achieved, and to guide future strategy, is a limited view of the role played by a spectrum of evaluation activities, some of which are described with different names.

Boards wishing to ensure that a coherent approach to evaluation is taken throughout their organisation may wish to consider the development of an integrated evaluation framework, which will help to ensure that the results of evaluative activities are presented to directors in a more coherent form. For such a framework to apply across the spectrum of evaluation activities undertaken, it would doubtless need to focus on a set of evaluation principles rather than any single approach. Here are two sample sets of such principles which may offer starting points for your organisation’s Monitoring and Evaluation Framework.

Critical thinking

In Bloom’s (original) Taxonomy of Educational Objectives, evaluation was at the top of the hierarchy of thinking skills – the pinnacle of critical thinking. Perhaps partly in recognition that evaluation did not necessarily solve problems or result in innovation, Bloom’s Revised Taxonomy (2001) added ‘creating’ to the top of the hierarchy.

The EDM (Evaluate, Direct, Monitor) Governance model already recognised that evaluation was not the last or highest step in governance thinking. Once the ‘What?’ and ‘So What?’ questions have been answered via monitoring and evaluation, the ‘Now What?’ question remains to be answered by the board setting directions for future action (creating). (See header image above.)

Evaluation for quality

A parallel can be seen in the operational uses of evaluation, where conclusions drawn about the value, standard, or status of a matter within a given ‘silo’ are only some aspects of quality assurance and precursors to quality improvement.

Given its significant role in shaping the insights which inform future plans and activities, the recent shift in evaluation practice to an outcomes focus is a welcome development.


Evaluation logic

Attempting to devise an organisational evaluation framework or model that accommodates this wider collection of evaluative activities could run the risk of oversimplifying various parts of the evaluative ecosystem. Failure to seek a coherent framework, however, could miss the opportunity to see relationships and patterns offering significant insights into organisational development opportunities and enhanced quality management. The logic framework suggested below packs a lot of detail into a single chart, but hopefully offers insights that will be helpful to your board and senior managers as they seek to improve the efficiency and effectiveness of your non-profit or for-purpose entity.

When we recognise our organisation as a system comprising interdependent sub-systems and relationships, we break down silos and challenge narrow views about how people, systems, processes, and technology interact to achieve our purpose/s or execute strategy.

Measuring success, progress, and impact

The evaluation methods, metrics, and milestones identified in your evaluation plan will benefit from looking beyond mere outputs, to identify lessons learned along the way (outtakes), outcomes achieved for stakeholders, and the longer-term impact of your strategy, service model, and campaign activities (where applicable). Identifying your evaluation criteria after the initiative or program has concluded runs the risk of hindsight bias clouding the picture. Using a logic framework like the one suggested above should help to avoid that risk.
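The outputs → outtakes → outcomes → impact chain described above can be sketched in code. The following is a minimal, hypothetical illustration (the class names, stages, and sample indicators are assumptions, not part of any published framework): it simply shows the discipline of agreeing indicators for each stage of the logic chain *before* the initiative runs, then recording results against them as data arrives.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a programme logic chain. Evaluation criteria
# (indicators) are defined up front, before the initiative runs,
# to avoid hindsight bias when judging results later.
@dataclass
class LogicStage:
    name: str                                       # e.g. "outputs", "impact"
    indicators: list = field(default_factory=list)  # measures agreed in advance
    results: list = field(default_factory=list)     # filled in as data arrives

@dataclass
class EvaluationPlan:
    initiative: str
    stages: dict = field(default_factory=dict)

    def add_stage(self, name, indicators):
        """Register a stage of the logic chain with its agreed indicators."""
        self.stages[name] = LogicStage(name, list(indicators))

    def record(self, stage, result):
        """Attach observed results to a previously defined stage."""
        self.stages[stage].results.append(result)

# Illustrative example only; the initiative and indicators are invented.
plan = EvaluationPlan("Community literacy program")
plan.add_stage("outputs", ["workshops delivered", "participants enrolled"])
plan.add_stage("outtakes", ["lessons learned about the delivery model"])
plan.add_stage("outcomes", ["participant reading levels improved"])
plan.add_stage("impact", ["longer-term education and employment prospects"])

plan.record("outputs", {"workshops delivered": 24})
```

Because the indicators exist before any results are recorded, later evaluation questions ("did we achieve what we set out to?") are answered against criteria fixed in advance rather than reconstructed after the fact.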

https://www.aes.asn.au/talking-about-evaluation
https://www.councilofnonprofits.org/tools-resources/evaluation-and-measurement-of-outcomes

https://balancedscorecard.org/bsc-basics-overview/

https://www.criticalthinking.org/pages/index-of-articles/1021/


https://tinyurl.com/yresy54e

Thursday, 28 September 2023

Do things happen because of you, or do things happen to you?

 


Harish Bhatia

I found this simple framework quite powerful for coaching and mentoring individuals who tend to slip into victim behaviours or mindsets when the going gets tough.

I shared this with a few business leaders, owners, and CXOs, and their immediate response was to identify people in their teams whom they could slot into the respective levels.

P.S. Not taking credit for the model; I came across it elsewhere, and due credit goes to whoever illustrated it in this simple form.


Friday, 26 May 2023

Discipline 4: Create a Cadence of Accountability

 


Accountability breeds response-ability.

Each team engages in a simple weekly process that highlights successes, analyzes failures, and course-corrects as necessary, creating the ultimate performance-management system.

The cadence of accountability is a rhythm of regular and frequent team meetings that focus on the Wildly Important Goal®. These meetings happen weekly, sometimes daily. They ideally last no more than 20 minutes. In that brief time, team members hold each other accountable for commitments made to move the score.

The secret to Discipline 4, in addition to the weekly cadence, is the commitments that team members create in the meeting. One by one, team members answer a simple question: “What are the one or two most important things I can do this week that will have the biggest impact on the scoreboard?” In the meeting, each team member reports first on whether they met last week’s commitments, second on whether those commitments moved the lead or lag measures on the scoreboard, and finally on which commitments they will make for the upcoming week.

People are more likely to commit to their own ideas than to orders from above. When individuals commit to fellow team members instead of only to the boss, the commitment goes beyond professional job performance to become a personal promise. When the team sees they are having a direct impact on the Wildly Important Goal, they know they are winning, and nothing drives morale and engagement more than winning.

Create a cadence of accountability


With Discipline 4, teams share accountability. Commitments are not only made between leaders and their teams; they are made between individuals on the teams. By keeping weekly commitments, team members influence the lead measure which in turn is predictive of success on the lag measure of the WIG®.

Where do you find people who are passionately committed to their work? You find them working for leaders who are passionately committed to them.

— Jim Huling, Co-author of The 4 Disciplines of Execution


Where execution actually happens

The fourth discipline is to create a cadence of accountability: a frequently recurring cycle of accounting for past performance and planning to move the score forward. Discipline 4 is where execution happens. Disciplines 1, 2, and 3 set up the game, but until you apply Discipline 4, your team isn't in the game.

This is the discipline that brings the team members together.

In Discipline 4, your team meets at least weekly in a WIG session. This meeting lasts no longer than 20 to 30 minutes, has a set agenda and goes quickly, establishing your weekly rhythm of accountability for driving progress toward the WIG.

Here's the three-part agenda for a WIG session and the kind of language you should be hearing in the session:

1. Account: Report on commitments.

"I committed to make a personal call to three customers who gave us lower scores. I did, and here's what I learned..."

2. Review the scoreboard: Learn from successes and failures. 

"Our lag measure is green, but we've got a challenge with one of our lead measures that just fell to yellow. Here's what happened..."

3. Plan: Clear the path and make new commitments.

"I'll meet with Bob on our numbers and come back next week with at least three ideas for helping us improve."

To prepare for the meeting, every team member thinks about the same question: "What are the one or two most important things I can do this week to impact the lead measures?"

The WIG session should move at a fast pace. It also gives the team the chance to process what they've learned. You should often ask each team member, "What can I do this week to clear the path for you?"

Each commitment must meet two standards:

  1. The commitment must represent a specific deliverable.
  2. The commitment must influence the lead measure. 
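The two standards above lend themselves to a simple check. This is a hypothetical sketch, not anything from the 4DX materials: the set of lead measures and the "specificity" heuristic (requiring a number in the deliverable as a crude proxy for a concrete target) are invented purely for illustration.

```python
# Hypothetical sketch: screening weekly WIG-session commitments against
# the two standards named above. The lead-measure names and the
# specificity heuristic are illustrative assumptions only.
LEAD_MEASURES = {"customer_callbacks", "quality_audits"}  # assumed lead measures

def is_valid_commitment(deliverable: str, lead_measure: str) -> bool:
    """A commitment must (1) name a specific deliverable and
    (2) influence a known lead measure."""
    # Crude proxy for specificity: non-empty text containing a number
    # (e.g. "Call 3 customers"), standing in for a concrete target.
    specific = bool(deliverable.strip()) and any(ch.isdigit() for ch in deliverable)
    influences_lead = lead_measure in LEAD_MEASURES
    return specific and influences_lead

# A concrete deliverable tied to a lead measure passes the screen:
is_valid_commitment("Call 3 customers who gave low scores", "customer_callbacks")
# A vague intention tied to a lag measure does not:
is_valid_commitment("Try harder", "revenue")
```

The point of the sketch is the shape of the test, not the heuristic itself: each commitment is judged both on how concrete the deliverable is and on whether it acts on a lead measure the team can actually influence.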

If you simply tell your team what to do, they will learn little. What you ultimately want is for each member of your team to take personal ownership of the commitments they make. 

A Different Kind of Accountability

The accountability created in a WIG session is not organizational; it's personal. Instead of accountability to a broad outcome you can't influence, it's accountability to a weekly commitment that you yourself made and that is within your power to keep. When members of the team see their peers consistently following through on the commitments they make, they learn that the people they work with can be trusted to follow through. When this happens, performance improves dramatically.

The WIG session encourages experimentation with fresh ideas. It engages everyone in problem-solving and promotes shared learning. 4DX produces results not from the exercise of authority, but from the fundamental desire of each individual team member to feel significant, to do work that matters and, ultimately, to win.

https://cutt.ly/awqOcWg6