
Wednesday, August 7, 2024

How to Use the PDCA Cycle for Continuous Improvement

 


The PDCA Cycle is a powerful tool for managing and improving processes and products. It consists of four steps: plan, do, check, and act. In this post, I’ll explain what each step entails and how you can apply the PDCA Cycle to your own work.

Plan 📝
The first step is to plan what you want to achieve and how you will measure your progress. You need to define your objectives, identify the current situation, analyze the root causes of problems, and devise solutions.

Do 🚀
The second step is to execute your plan and test your solutions. You must implement the changes, collect data, and document the results.

Check 🔎
The third step is to evaluate the outcomes and compare them with your expectations. You need to analyze the data, identify gaps, and determine the effectiveness of your solutions.

Act 🛠️
The fourth step is to act based on your findings and feedback. You must standardize successful solutions, communicate the results, and plan for the next cycle.

The PDCA Cycle is a continuous loop that allows you to learn from your experience and make improvements over time. It can be used for both major performance breakthroughs and small incremental improvements in various contexts.
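To make the loop concrete, here is a minimal Python sketch of a few PDCA iterations. It is an illustration added to this post, not part of PDCA itself; the function names, the 10% target and the simulated 12% lift are placeholders for your own planning, measurement and standardisation steps.

    # A minimal, hypothetical sketch of the PDCA loop. The metric, the target
    # and the simulated experiment result below are placeholders.

    def run_experiment(change, baseline_metric):
        # Do: stand-in for implementing the change and measuring the result.
        return baseline_metric * 1.12  # pretend the change produced a 12% lift

    def pdca_cycle(objective, baseline_metric, cycles=3):
        """Run a few PDCA iterations and record what happened in each."""
        history = []
        current = baseline_metric
        for i in range(1, cycles + 1):
            # Plan: set a target and decide on a change to test.
            target = current * 1.10  # aim for a 10% improvement
            change = f"countermeasure #{i} for {objective}"

            # Do: implement the change on a small scale and collect data.
            measured = run_experiment(change, current)

            # Check: compare the outcome against the target.
            met_target = measured >= target

            # Act: standardise the change if it worked, otherwise keep the old standard.
            if met_target:
                current = measured
            history.append({"cycle": i, "change": change, "target": round(target, 1),
                            "measured": round(measured, 1), "met target": met_target})
        return history

    print(pdca_cycle("checkout conversion rate", baseline_metric=100.0))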

The PDCA Cycle was originally proposed by Walter Shewhart in the 1920s and later popularized by W. Edwards Deming in the 1950s. It is also known as the Shewhart Cycle or the Deming Cycle.

https://tinyurl.com/nhdd63kr

Saturday, February 24, 2024

ICE Framework: The original prioritisation framework for marketers

 

by Stuart Brameld

Introducing the ICE framework

The ICE framework is the original scoring mechanism for growth marketing teams. It is widely credited to Sean Ellis, regarded by many as one of the leaders of the growth hacking movement and now the CEO of GrowthHackers.com.

As a result, the ICE framework is probably the dominant scoring framework used by growth teams today.

What is a prioritisation framework?

A prioritisation framework, or growth strategy framework, is used by product teams, growth teams and marketing teams to prioritise work and to assist with decision making.

Using these frameworks, ideas from marketing teams, product teams, stakeholders, partners and consultants are assigned a quantitative score to determine the order in which work should be done.

There are a number of prioritisation frameworks, or scoring frameworks, all centred around the same theme. These include:

Framework | Developed by                 | Scoring factors
RICE      | Sean McBride at Intercom     | Reach, Impact, Confidence, Effort
ICE       | Sean Ellis at GrowthHackers  | Impact, Confidence, Ease
PIE       | Chris Goward at WiderFunnel  | Potential, Importance, Ease
HiPPO     | n/a                          | Highest paid person's opinion
BRASS     | David Arnoux at Growth Tribe | Blink, Relevance, Availability, Scalability, Score
HIPE      | Jeff Chang at Pinterest      | Hypothesis, Investment, Precedent, Experience
DICET     | Jeff Mignon at Pentalog      | Dollars (or revenue) generated, Impact, Confidence, Ease, Time-to-money
PXL       | Peep Laja at CXL             | Above the fold, noticeable within 5 seconds, high-traffic pages, ease of implementation, and more
Prioritisation frameworks for growth

What does ICE stand for?

ICE is an acronym for the three factors that make up an ICE score:

  • I – Impact
  • C – Confidence
  • E – Ease
Factor     | Description
Impact     | How impactful do I expect this test to be? Consider any relevant metrics and past data to calculate the likely impact on your baseline metric.
Confidence | How sure am I that this test will prove my hypothesis? Use data and past experience to assign a confidence score.
Ease       | How easily can I launch this test? As a marketer, if an idea needs no development work and you can complete it on your own, give it a higher score.

How do you calculate an ICE score?

Firstly, a score of between 1 and 10 is assigned to each individual factor (Impact, Confidence and Ease), and then all three scores are multiplied together to give an overall score of between 1 and 1,000.
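As a rough illustration of the calculation, here is a minimal Python sketch. It is not part of the original article: the idea names and scores are made up, and the helper simply multiplies the three factors and then sorts the backlog so that the highest-scoring ideas rise to the top (as in the template shown further below).

    # Hypothetical ICE scoring sketch: each factor is scored 1-10 and the three
    # scores are multiplied together, giving an overall score between 1 and 1,000.

    def ice_score(impact, confidence, ease):
        for factor in (impact, confidence, ease):
            if not 1 <= factor <= 10:
                raise ValueError("each factor must be scored between 1 and 10")
        return impact * confidence * ease

    # A made-up backlog of ideas.
    ideas = [
        {"idea": "Add exit-intent popup", "impact": 6, "confidence": 7, "ease": 8},
        {"idea": "Rewrite onboarding flow", "impact": 9, "confidence": 6, "ease": 3},
        {"idea": "New pricing page headline", "impact": 5, "confidence": 8, "ease": 9},
    ]

    for item in ideas:
        item["ice"] = ice_score(item["impact"], item["confidence"], item["ease"])

    # The highest-scoring ideas rise to the top of the prioritised backlog.
    for item in sorted(ideas, key=lambda i: i["ice"], reverse=True):
        print(item["ice"], item["idea"])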


The biggest advantage of ICE prioritisation is its simplicity: decisions can be made very quickly, particularly where ‘good enough’ is the goal. That said, the main criticism is the subjectivity within the scoring, particularly for newer growth teams where there may be little or no historical data with which to accurately determine potential impact or confidence in an idea.

ICE framework template

The image below shows a set of ideas scored based on Impact, Confidence and Ease. The ideas with the highest score rise to the top of the list (or backlog) thereby creating a prioritised to-do list for the growth team.



Why use the ICE framework?

As with any prioritisation framework, the scores themselves are largely meaningless. The goal of using ICE prioritisation is to:

  1. Assess the idea in a thoughtful, structured and unbiased way
  2. Provide the ability to easily compare the ideas against others

Further resources


https://bitly.ws/3e4Wn

ICE Scoring Model

The ICE Scoring Model is a relatively quick way to assign a numerical value to different potential projects or ideas to prioritize them based on their relative value, using three parameters: Impact, Confidence, and Ease.

What is the ICE Scoring Model?

ICE Scoring is one of the many prioritization strategies available for choosing the right/next features for a product. The ICE Scoring Model helps prioritize features and ideas by multiplying three numerical values assigned to each project: Impact, Confidence and Ease. Each item being evaluated gets a ranking from one to ten for each of the three values, those three numbers are multiplied, and the product is that item’s ICE Score.

Impact looks at how much the project will move the needle on the key metric being targeted. Confidence is the certainty that the project will actually have the predicted Impact. Ease looks at the level of effort to complete the project.

For example, Item One has an Impact of seven, a Confidence of six, and an Ease of five, while Item Two has an Impact of nine, a Confidence of seven and an Ease of two. The ICE Scores would be 210 for Item One and 126 for Item Two.

Those scores can then be compared with a glance and the item with the highest score gets the top slot in the prioritization hierarchy (which would be Item One in our example). Item Two’s relatively low Ease value dragged that item’s ICE score way down, despite the fact it would have a greater Impact at a higher level of Confidence than Item One. This is because all three elements of the equation are treated equally, unlike a weighted scoring model.
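The arithmetic behind that comparison can be written out directly; the short snippet below is an added illustration that simply reproduces the numbers from the example.

    # Reproduce the worked example: Item One (7, 6, 5) vs Item Two (9, 7, 2).
    item_one = 7 * 6 * 5  # Impact * Confidence * Ease = 210
    item_two = 9 * 7 * 2  # Impact * Confidence * Ease = 126

    # The item with the higher ICE Score takes the top slot.
    winner = "Item One" if item_one > item_two else "Item Two"
    print(item_one, item_two, winner)  # 210 126 Item One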

Why is ICE Scoring Useful and Who Created it?

There are many different scoring models out there, but ICE primarily separates itself from the pack by being simpler and easier than most of the alternatives. Because ICE only requires three inputs (Impact, Confidence, and Ease) for each idea under consideration, teams can rapidly calculate the ICE score for everything and make prioritization decisions accordingly.

It’s an even simpler calculation than the RICE model, which adds Reach as a fourth element in the equation (it also swaps Ease for Effort, so it uses Reach * Impact * Confidence / Effort for the formula).
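For comparison, here is an illustrative side-by-side of the two formulas as described above; the scores passed in are invented, and the Reach figure is a hypothetical "users per quarter" count.

    # Illustrative comparison of the two formulas; all inputs are made up.
    def ice(impact, confidence, ease):
        return impact * confidence * ease

    def rice(reach, impact, confidence, effort):
        # RICE adds Reach and divides by Effort instead of multiplying by Ease.
        return reach * impact * confidence / effort

    print(ice(7, 6, 5))        # 210
    print(rice(800, 7, 6, 5))  # 6720.0, where 800 is a hypothetical reach per quarter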

The fact that ICE is so speedy is in no small part due to the person who created it, Sean Ellis. Ellis is most famous for coining the term “growth hacking” and helping companies quickly ramp up experimentation. Growth hacking experiments are supposed to be quick and iterative, so it makes sense that the scoring model used to determine which experiments to prioritize would also be quick and easy to use.

ICE scoring is basically a “good enough” estimation and far less rigorous than other scoring models that product teams typically rely upon. There is also a high level of variability in any item’s ICE Score based on who’s doing the scoring. Since it’s almost completely subjective, two people could assign very different values to the attributes of different ideas and end up with contrasting opinions.

The fact that a low Ease score can so easily drag down an item’s ICE score also highlights the “experimental” origin of the model; in the land of growth hacking “failing fast” is extremely valuable due to the lessons learned, and teams typically don’t want to invest a ton of time in any single project. However, some things that will have a bigger impact require a larger resource and time commitment. Solely relying on ICE scores could lead a team to keep going for “low hanging fruit” instead of making a larger investment in a project that could make a much bigger difference in the long run.

ICE Scoring is best used for relative prioritization; if you are considering a few contenders, it’s a great way to pick a winner. Similarly, if you apply it to the entire backlog, it will help bubble up a top tier of options for the goal being targeted at that moment.

A major drawback of ICE Scoring is that relatively few people in an organization will have enough information to accurately predict all three elements of the equation. Impact and Confidence are business considerations while Ease falls into the technical domain.

Engaging product development to provide an Ease ranking for every item being considered in a scoring exercise is one way to limit the subjectivity to areas where the scorers should have a stronger body of knowledge and gets decision-makers out of the business of guesstimating development timelines. However, the fast and cheap nature of ICE Scoring may run counter to asking developers to create a level of effort for dozens or hundreds of possible projects.

It’s also important to have a consistent definition of the 1-10 scale for ranking each of the ICE elements. If there isn’t agreement on what a confidence of “7” means, it could lead to some very inconsistent assessments by various team members.

While ICE Scoring definitely has its merits, it’s likely not the best method for prioritizing an entire product roadmap, but is better suited for pre-work or taking advantage of a particular opportunity.

Conclusion

Speed and simplicity are ICE Scoring’s biggest selling points and can help product teams narrow things down. Its strength is also one of its weaknesses, however, since it is only assessing an item’s impact relative to a single goal—in an organization with multiple, concurrent goals it falls short of other scoring models’ capabilities.

Despite its lack of nuance and complexity, ICE Scoring can offer a slick way to trim things down and provide some relative comparison points for decision-makers. And when you’re trying to reach a consensus, sometimes ruling things out is just as helpful as figuring out which item is the cream of the crop.

https://www.productplan.com/

Sunday, December 17, 2023

The Hewlett-Packard Return Map

 


The Return Map is intended to be used by all of the functional managers on the new product introduction business team. It is basically a two-dimensional graph displaying time and money on the x and y axes respectively.

 

Time is drawn on a linear scale while money is more effectively drawn on a logarithmic scale, because for successful products the ratio of sales to investment costs will be greater than 100:1.

 

It is important to remember that money amounts are shown rising cumulatively.

The time axis is divided into three parts:

  • Investigation,
  • Development, and
  • Manufacturing and Sales.

Investigation

The purpose of Investigation is to determine the desired product features, the product's cost and price, the feasibility of the proposed technologies, and the plan for product development and introduction. At this point all numbers are estimates. Investigation is usually the responsibility of a small team and requires a relatively modest investment. At the end of Investigation, the company commits to develop a product with specific features using agreed-upon technologies.

 

Development

The development phase is usually the primary domain of R&D in consultation with manufacturing; its purpose is to determine how to produce the product at the desired price.

 

Manufacturing Release

The formal end of the development phase is Manufacturing Release (MR) - that is, when the company commits to manufacture and sell the product. When the product is ready to be manufactured and shipped to customers, sales become a reality and manufacturing, marketing, sales costs and profits are finally more than estimates.

 

New Metrics

The map tracks - in money and months - R&D and manufacturing investment, sales, and profit. At the same time, it provides the context for new metrics:

  • Break-Even-Time,
  • Time-to-Market,
  • Break-Even-After-Release, and
  • the Return Factor.

References

  • House, C. H. and Price, R. L., "The Return Map: Tracking Product Teams," Harvard Business Review, January-February 1991, pp. 92-101.

The Return Map: Tracking Product Teams


by C. H. House and R. L. Price

Exhibit 1: The Return Map Captures Both Money and Time

The purpose of Investigation is to determine the desired product features, the product’s cost and price, the feasibility of the proposed technologies, and the plan for product development and introduction. At this point, all numbers are estimates. Investigation is usually the responsibility of a small team and requires a relatively modest investment. Obviously, Marketing and R&D should collaborate to determine what features customers want and how they could be provided. At the end of Investigation, the company commits to develop a product with specific features using agreed-upon technologies.

The Development phase is usually the primary domain of R&D in consultation with manufacturing; its purpose is to determine how to produce the product at the desired price. Challenges during this phase include changing product features, concurrent design of the product and the manufacturing processes, and, often, the problems associated with doing something that has not been done before.

The formal end of the Development phase is Manufacturing Release (MR)—that is, when the company commits to manufacture and sell the product. When the product is ready to be manufactured and shipped to customers, sales become a reality and manufacturing, marketing, sales costs, and profits are finally more than estimates. The transitions between these phases or the project checkpoints are key times for the Return Map.

Perhaps the best way to grasp how the Return Map evolves from these early stages is to examine a map for a completed project, where all the variables are known—in this case, the map for a recent Hewlett-Packard pocket calculator (see Exhibit 2).

Exhibit 2: First, the Team Plots Estimates

Investigation took 4 months and cost about $400,000; Development required 12 months and $4.5 million, with many new manufacturing process designs for higher quality and higher production volumes. Hence, the total product development effort from beginning to manufacturing and sales release took 16 months and cost $4.9 million (see the investment line on Exhibit 2). Once the product was released, sales increased consistently for 5 months and then increased at a slightly faster pace during the next 9 months. Sales volume for the first year was $56 million and for the second year was $145 million (see the sales line on Exhibit 2—and remember, the differences are greater than they appear owing to the log scale). Cumulative sales volume gives a sense of how quickly and effectively the product was introduced and sold.

In the first year, net profits of $2.2 million were less than expected due to the sales volume lag and the resulting increase in cost per unit. But profits (see the profit line in Exhibit 2) increased significantly in the second year and passed through the investment line about 16 months after Manufacturing Release. For the second year, net profits reached $13 million. Profits are the best indicator of the contribution the product made to the customers since they reflect both the total volume of sales and the price the product can command in the marketplace.

So the critical lines that are systematically plotted on the Return Map include new product investment dollars, sales dollars, and profit dollars. Each of these is plotted as both time and money, with money in total cumulative dollars. Now that we have the basic elements of the map, we can focus on some novel metrics.
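Before turning to the metrics, here is a rough plotting sketch of those three cumulative lines. It is an added illustration, not part of the article: the monthly values are crude piecewise-linear interpolations of the cumulative totals quoted above for the pocket calculator, and the chart simply puts money on a log scale and time on a linear scale, as described earlier.

    # Rough sketch of a Return Map: cumulative money over time, money on a log scale.
    # The monthly values are crude interpolations of the cumulative totals in the text.
    import matplotlib.pyplot as plt

    months = range(0, 41)  # months from the start of Investigation; release is at month 16

    def interp(m, points):
        """Piecewise-linear interpolation through (month, cumulative $M) points."""
        for (m0, v0), (m1, v1) in zip(points, points[1:]):
            if m0 <= m <= m1:
                return v0 + (v1 - v0) * (m - m0) / (m1 - m0)
        return points[-1][1]

    investment = [interp(m, [(0, 0), (4, 0.4), (16, 4.9), (40, 4.9)]) for m in months]
    sales = [interp(m, [(0, 0), (16, 0), (28, 56), (40, 201)]) for m in months]
    profit = [interp(m, [(0, 0), (16, 0), (28, 2.2), (40, 15.2)]) for m in months]

    plt.plot(months, investment, label="Cumulative investment ($M)")
    plt.plot(months, sales, label="Cumulative sales ($M)")
    plt.plot(months, profit, label="Cumulative profit ($M)")
    plt.yscale("log")  # money on a log scale, time on a linear scale
    plt.xlabel("Months from start of Investigation")
    plt.ylabel("Cumulative dollars (millions, log scale)")
    plt.legend()
    plt.show()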

New Metrics

The map tracks—in dollars and months—R&D and manufacturing investment, sales, and profit. At the same time, it provides the context for new metrics: Break-Even-Time, Time-to-Market, Break-Even-After-Release, and the Return Factor. These four metrics (see Exhibit 3, plotted for the pocket calculator) become the focus of management reviews, functional performance discussions, learning, and most important, they are the basis for judging overall product success.

Exhibit 3: Key Metrics Complete the Picture

Break-Even-Time, or BET, is the key metric. It is defined as the time from the start of investigation until product profits equal the investment in development. In its simplest form, BET is a measure of the total time until the break-even point on the original investment; for the pocket calculator, BET is 32 months. BET is the one best measure for the success of the whole product development effort because it conveys a sense of urgency about time; it shows the race to generate sufficient profit to justify the product in the first place.

Time-to-Market, or TM, is the total development time from the start of the Development phase to Manufacturing Release. For the calculator project, TM was 12 months. This time and its associated costs are determined primarily by R&D efficiency and productivity. TM makes visible the major check-points of Investigation and Development. It is obviously the most important R&D measure.

Break-Even-After-Release, or BEAR, is the time from Manufacturing Release until the project investment costs are recovered in product profit. BEAR for the calculator was 16 months. This measure focuses on how efficiently the product was transferred to marketing and manufacturing and how effectively it was introduced to the marketplace. Just as TM is considered the most important R&D metric, BEAR is the most important measure for marketing and manufacturing.

Finally, the Return Factor, or RF, is a calculation of profit dollars divided by investment dollars at a specific point in time after a product has moved into manufacturing and sales. In the case of the pocket calculator, RF after one year was .45 (that is, cumulative profit of $2.2 million divided by total investment of $4.9 million) and was 3.1 (a profit of $15.2 million divided by $4.9 million) after two years. The RF gives an indication of the total return on the investment without taking into account how long it took to achieve that return.
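To make the four metrics concrete, here is a small added sketch that computes them from the pocket-calculator figures quoted above; the helper function itself is hypothetical, while the inputs come from the text.

    # Hypothetical helper for the four Return Map metrics described above.
    def return_map_metrics(investigation_months, development_months,
                           months_to_breakeven_after_release,
                           cumulative_profit, total_investment):
        tm = development_months                                 # Time-to-Market
        bear = months_to_breakeven_after_release                # Break-Even-After-Release
        bet = investigation_months + development_months + bear  # Break-Even-Time
        rf = cumulative_profit / total_investment               # Return Factor at a chosen point
        return {"TM": tm, "BEAR": bear, "BET": bet, "RF": round(rf, 2)}

    # Pocket-calculator example: 4 months of Investigation, 12 months of Development,
    # break-even 16 months after Manufacturing Release, and a two-year cumulative
    # profit of $15.2 million against a $4.9 million total investment.
    print(return_map_metrics(4, 12, 16, 15.2, 4.9))
    # {'TM': 12, 'BEAR': 16, 'BET': 32, 'RF': 3.1}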

The effectiveness of the Return Map hinges on the involvement of all three major functional areas in the development and introduction of new products. The map captures the link between the development team and the rest of the company and the customer. If the product does not sell and make money, for whatever reasons, the product development efforts were wasted. The team is accountable for designing and building products that the customers want, doing it in a timely manner, and effectively transferring the products to the rest of the company.

Making the Most of the Return Map

We have argued that an interfunctional team uses the Return Map most appropriately during the Investigation phase by generating estimates for a final map, including investment, sales, and profit. These initial estimates or forecasts are a “stake in the ground” for the team and will be used for comparison and learning throughout the project. By focusing on the accuracy of the forecasts, marketing, R&D, and manufacturing are forced to examine problems as a team; all three functions are thus sharing the burden of precision.

Too often, the whole burden during the initial phase of a project is placed on the R&D team—to generate schedules, functionality, and cost goals. But the Return Map requires accurate sales forecasts, which forces market researchers to get better and better at competitive analysis, customer understanding, and market development. Similarly, manufacturing involvement is essential for forecasting cost and schedule goals; manufacturing engineers should never be left to develop the manufacturing process after the design is set.

We cannot emphasize enough that missed forecasts generated for the Return Map in the Investigation phase should be viewed as valuable information, comparable with defect rates in manufacturing—deviations from increasingly knowable standards, proof that either the process is out of whack or the means for setting the standard are. The Return Map can be used to provide a visual perspective on sales forecasts and expected profits given any number of hypothetical scenarios. What if the forecasts varied by 20%, over or under? What are the implications of reaching mature sales six months earlier than expected? The Return Map allows for a kind of graphic sensitivity analysis of product development and introduction decisions; it sets the stage for further, indeed, continual investigation.
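As an added illustration of that kind of "what if" question, the sketch below rescales a profit forecast by 20% in either direction and reports how the break-even month moves. Only the $4.9 million investment comes from the calculator example; the monthly profit figures are invented.

    # Illustrative sensitivity sketch: how does break-even move if the profit
    # forecast runs 20% over or under? The monthly profit stream is invented.
    total_investment = 4.9  # $M, from the pocket-calculator example
    monthly_profit_forecast = [0.0] * 16 + [0.2] * 12 + [1.1] * 12  # $M per month

    def breakeven_month(monthly_profit, investment, scale=1.0):
        cumulative = 0.0
        for month, profit in enumerate(monthly_profit):
            cumulative += profit * scale
            if cumulative >= investment:
                return month
        return None  # never breaks even within the forecast horizon

    for scale in (0.8, 1.0, 1.2):
        month = breakeven_month(monthly_profit_forecast, total_investment, scale)
        print(f"forecast x{scale}: break-even at month {month}")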

By no means should the Return Map be used by management to punish people whose forecasts prove inaccurate. Estimates are a team responsibility or at least a functional one—no individual should be held responsible for generating the information on which they are based. Moreover, if functions are penalized for being late, they will simply learn to estimate conservatively, building slack into a discipline whose very purpose is the elimination of slack.

Consider the estimated Return Map for a proposed ultrasound machine (Exhibit 4). The Investigation phase is planned to last 5 months and cost $500,000, TM is estimated to be 9 months and cost $2.3 million, while anticipated BET is 18 months. Mature sales volume is expected to be 300 units a month, or $16 million, and mature profits are expected to be $2 million per month. These estimates were made by a project team that had developed two other ultrasound products and were quite knowledgeable about the medical instrument business. They wanted to take more time in the Investigation phase to completely understand the desired product features in order to move more quickly during the Development phase.

Exhibit 4: During Investigation, the Ultrasound Team Makes Early Estimates

During the Development phase, that is, once product features and the customer requirements are understood, TM becomes vital and other concerns fade. The Return Map implicitly stresses execution now. The faster a product is introduced into a competitive marketplace, the longer potential life it will have—hence the greater its return. TM emphasizes the need to respond to market windows and competitive pressures.

It is difficult to keep project goals focused during development. Creeping features, management redirection, and “new” marketing data all push the project to change things in midstream, much to the detriment of engineering productivity and TM. The Return Map can help put this new information into perspective and help team members analyze the impact of changes on the entire project. For example, how much will new features increase sales and profits? If adding the features delays the introduction of the product, how will that affect sales and profit?

For example, two months into the Development phase of the ultrasound machine, Hewlett-Packard labs had a breakthrough in ultrasound technology that would enable the machine to offer clearer images. Should the project incorporate this new technology or proceed as planned? The project team determined that customers would value the features and they would result in more sales, though the new technology would be more expensive to produce. But would the changes be worth the extra expense?

The original Return Map was updated with a new set of assumptions (see Exhibit 5). Development costs would increase by 40% to $3.2 million, the TM would be extended by at least 4 months, but the sales could increase by 50% and net profits could increase by 30%. The Return Map demonstrated that the BET for the project would extend to 22 months, the BEAR would remain the same, but the RF would be reduced slightly.

Exhibit 5: During Development, New Technology Calls for Revised Estimates
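As an added illustration, the revised assumptions can be applied directly to the baseline ultrasound estimates from Exhibit 4; the sketch below simply restates in code the percentage changes quoted above.

    # Apply the revised assumptions from the text to the baseline ultrasound plan.
    baseline = {
        "development cost ($M)": 2.3,
        "time to market (months)": 9,
        "mature sales ($M/month)": 16,
        "mature profit ($M/month)": 2,
        "BET (months)": 18,
    }

    revised = dict(baseline)
    revised["development cost ($M)"] = round(baseline["development cost ($M)"] * 1.4, 1)  # +40%, about $3.2M
    revised["time to market (months)"] = baseline["time to market (months)"] + 4          # at least 4 more months
    revised["mature sales ($M/month)"] = baseline["mature sales ($M/month)"] * 1.5        # +50%
    revised["mature profit ($M/month)"] = baseline["mature profit ($M/month)"] * 1.3      # +30%
    revised["BET (months)"] = 22                                                          # from the updated map

    for key in baseline:
        print(f"{key}: {baseline[key]} -> {revised[key]}")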

What on the surface seemed like a great idea proved not to increase significantly the economic return. In the end, the team decided to incorporate the new technology anyway, but in order to capture a greater market share, not to make more money. The team went into the changes with its eyes open; it made a strategic decision, not one driven by optimistic numbers.

Once the team gets beyond Investigation, the Development phase, represented by TM, can itself be segmented into subphases and submetrics that provide greater understanding and accuracy and more effective management. Teams that compare projects over time become more and more sophisticated about the development processes underlying TM forecasts. Within Hewlett-Packard, for instance, the time required for printed circuit board turnaround emerged as something of a bottleneck for many projects. So the company developed streamlined processes for printed circuit board development and eventually simulation tools that reduced the need for board prototypes.

Another HP study showed that company managers were much more effective at predicting total engineering months than total calendar months (that is, effort rather than time). The big mistake here seemed to be a tendency on the company’s part to try to do too many projects with the available engineers—resulting in understaffed projects. Once management focused on staffing projects adequately, the company experienced a significant reduction in this kind of forecast error.

Beyond Manufacturing Release

The Manufacturing Release, or MR, meeting is perhaps the single most important built-in checkpoint in the system. It is structured to allow the management team to focus on the original goals of the entire development effort and to compare the goals at the project’s inception with the new estimates based on the realities of the Development phase. The team can now analyze the adjustments required by the marketing and manufacturing estimates, in consequence of any elapsed time, schedule slippages, or the changed competitive and economic scene.

Consider the updated Return Map for the ultrasound project showing the estimates that were made during the Development phase and the real development cost and TM (see Exhibit 6). The project took two extra months to complete and cost $700,000 more than estimated. At this time, the estimates for sales and profits are lower than Development phase estimates. The important point for team members to understand, obviously, is the effect of a slide in TM on the other measures, and they should take corrective action in the future to avoid delays. A series of MR meetings, supported by documentation, will sharpen any team’s estimating and development skills.

Exhibit 6: At Manufacturing Release, the Team Sees What Actually Happened

During the Manufacturing and Sales phase, the emphasis shifts from estimates to real data for sales and profits. This is the moment of truth for the development team. As sales and profits vary from the forecast, everyone has the right—and responsibility—to ask why. One HP study tracked sales forecasts for 16 products over a two-year period. The targets were set rather broadly—a 20% deviation either way would be acceptable—on the assumption that manufacturing and sales could adjust successfully to either surging or weakening demand within that range. Nevertheless, only 12% of the forecasts fell into the acceptable range.

From our experience, this is the time when the project team gets the most constructive criticism, insight, and enthusiasm to do things right, now and the next time. At the MR stage, analysis, thoughtfulness, and responsiveness are vital. We have seen products falter because of such varied problems as unreliable performance, an unprepared sales force, or an inability to manufacture the product in appropriate volumes—all problems that should be correctable, even at this late date. Problems such as inappropriate features, high costs, and poor designs, however, will have to wait for the next generation.

Incidentally, one important lesson we have learned is the need to keep a nucleus of the project team together for at least six months after introduction. Team members should be available to help smooth the transition to full production and sales, and the company’s next generation of products will benefit greatly from the team’s collective learning.

In addition to helping manage and analyze individual products, the Return Map can be used for families of products, programs, and major systems. As companies establish a market presence with a product, it must be buttressed with corresponding products based on customer requirements and the next appropriate technologies. A complete program strategy for an important market usually embraces products from three overlapping generations, which we call a “strategic cycle.”

A family of products can lead to overall success, even if some of the products do not reach standard return goals and are not seen as successful. The success of major new programs may not be obvious until the second generation is produced: it may take many years before success or failure is completely determined.

The table illustrates the cumulative data for an entire product family, divided into three generations, with the second generation divided into five major products (A, B, C, D, E).

Three Generations of One Product Family

A quick scan of the second generation of products suggests that productivity could have been improved by doing only those products that end up with a low BET, low BEAR, and a high RF. Unfortunately, this requires more luck than foresight. One product in the series (product D) made much of the economic difference, but it is not possible to establish a full program with only one product. In fact, the success of product D was almost entirely dependent on the investment made in the technology for product C (note the long Time-to-Market) and the market understanding gained from product B.

To have a winning and sustained market presence usually requires at least three generations of products. Frequently, one of those generations will develop a new and significant technology, while the next generation will exploit that technology by means of rapid product development cycles—products tailored to specific markets.

Easier Said Than Done

The Return Map, along with its BET, TM, BEAR, and RF measures, provides a useful indicator of the effectiveness of new product development and introduction. It provides a general management tool to track the development process and to take corrective action in real time. But while the map is simple, using it well is difficult and time-consuming, and requires significant commitment. The first challenge is to get the forecast data out and to track the actual costs, sales, and profits against those forecasts. Unfortunately, most development teams will have to pull this data manually from the period expense reports, since most companies track costs, sales, and profits on a period rather than a project basis. If development teams can prove project data useful, though, accounting departments may start tracking numbers for major projects and not only aggregate numbers month by month. (HP is now overhauling its cost accounting systems to provide project as well as period data.)

Another challenge is to get functional managers to work together toward common goals and to share openly the subset measures that govern their function’s contribution to BET, TM, and BEAR. The map exposes each function’s weaknesses insofar as each function’s performance is clearly measurable against its own predictions—predictions upon which important project decisions were based. Again, if there is going to be open sharing of data, the Return Map should not be used to penalize people for their forecasts. The race to market is a concerted effort that requires enthusiasm, not fear and apprehension. The judgments of the marketplace are generally all the correction a talented team requires.

The Return Map provides a visible superordinate goal for all the functions of the team and, in graphically representing the common task, helps them collaborate. No graph can substitute for judgment and experience—yet there is no substitute either for basing judgment on an accurate picture of experience.

https://hbr.org/1991/01/the-return-map-tracking-product-teams