Introducing the ICE framework
The ICE framework is the original scoring mechanism for growth marketing teams. It is widely credited to Sean Ellis, regarded by many as one of the leaders of the growth hacking movement and now the CEO of GrowthHackers.com.
Perhaps as a result, the ICE framework is probably the most widely used scoring framework among growth teams today.
What is a prioritisation framework?
A prioritisation framework, or growth strategy framework, is used by product teams, growth teams and marketing teams to prioritise work and to assist with decision making.
Using these frameworks, ideas from marketing teams, product teams, stakeholders, partners and consultants are each assigned a quantitative score that determines the order in which work should be done.
There are a number of prioritisation frameworks, or scoring frameworks, all centred around the same theme. These include:
| Framework | Developed by | Scoring factors |
|---|---|---|
| RICE | Sean McBride at Intercom | Reach, Impact, Confidence, Effort |
| ICE | Sean Ellis at GrowthHackers | Impact, Confidence, Ease |
| PIE | Chris Goward at WiderFunnel | Potential, Importance, Ease |
| HiPPO | n/a | Highest paid person’s opinion |
| BRASS | David Arnoux at Growth Tribe | Blink, Relevance, Availability, Scalability, Score |
| HIPE | Jeff Chang at Pinterest | Hypothesis, Investment, Precedent, Experience |
| DICET | Jeff Mignon at Pentalog | Dollars (or revenue) generated, Impact, Confidence, Ease, Time-to-money |
| PXL | Peep Laja at CXL | Above the fold, noticeable within 5 seconds, high-traffic pages, ease of implementation, and more |
What does ICE stand for?
ICE is an acronym for the three factors that make up an ICE score:
- I – Impact
- C – Confidence
- E – Ease
| Factor | Description |
|---|---|
| Impact | How impactful do I expect this test to be? Consider any relevant metrics and past data to estimate the likely impact on your baseline metric. |
| Confidence | How sure am I that this test will prove my hypothesis? Use data and past experience to assign a confidence score. |
| Ease | How easily can I launch this test? As a marketer, if an idea needs no development work and you can complete it on your own, give it a higher score. |
How do you calculate an ICE score?
First, a score of between 1 and 10 is assigned to each individual factor (Impact, Confidence and Ease), and then all three scores are multiplied together to give an overall score of between 1 and 1,000.
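The calculation above can be sketched in a few lines of code. The idea names and factor scores below are made-up examples, not from any real backlog:

```python
# Minimal sketch of ICE scoring: each factor is rated 1-10 and the
# three ratings are multiplied to give a score between 1 and 1,000.

def ice_score(impact, confidence, ease):
    """Multiply the three 1-10 factor scores together."""
    for factor in (impact, confidence, ease):
        if not 1 <= factor <= 10:
            raise ValueError("each factor must be scored from 1 to 10")
    return impact * confidence * ease

# Hypothetical backlog: (idea, impact, confidence, ease)
backlog = [
    ("Redesign signup form", 8, 6, 7),
    ("Add exit-intent popup", 5, 7, 9),
    ("Rebuild onboarding flow", 9, 5, 2),
]

# Ideas with the highest score rise to the top of the prioritised list.
prioritised = sorted(backlog, key=lambda idea: ice_score(*idea[1:]), reverse=True)
for name, i, c, e in prioritised:
    print(f"{ice_score(i, c, e):>4}  {name}")
```

Sorting by score is what turns a pile of raw ideas into a prioritised to-do list.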
The biggest advantage of ICE prioritisation is its simplicity: decisions can be made very quickly, particularly where ‘good enough’ is the goal. That said, the main criticism is the subjectivity of the scoring, particularly for newer growth teams that may have little or no historical data with which to accurately estimate the potential impact of, or confidence in, an idea.
ICE framework template
A typical template shows a set of ideas scored on Impact, Confidence and Ease. The ideas with the highest scores rise to the top of the list (or backlog), creating a prioritised to-do list for the growth team.
Why use the ICE framework?
As with any prioritisation framework, the scores themselves are largely meaningless. The goal of using ICE prioritisation is to:
- Assess the idea in a thoughtful, structured and unbiased way
- Provide the ability to easily compare the ideas against others
Further resources
https://bitly.ws/3e4Wn
ICE Scoring Model
The ICE Scoring Model is a relatively quick way to assign a numerical value to different potential projects or ideas to prioritize them based on their relative value, using three parameters: Impact, Confidence, and Ease.
What is the ICE Scoring Model?
ICE Scoring is one of many prioritization strategies for choosing which features to build next. The ICE Scoring Model helps prioritize features and ideas by multiplying three numerical values assigned to each project: Impact, Confidence and Ease. Each item being evaluated gets a rating from one to ten for each of the three values; those three numbers are multiplied, and the product is that item’s ICE Score.
Impact looks at how much the project will move the needle on the key metric being targeted. Confidence is the certainty that the project will actually have the predicted Impact. Ease looks at the level of effort to complete the project.
For example, Item One has an Impact of seven, a Confidence of six, and an Ease of five, while Item Two has an Impact of nine, a Confidence of seven and an Ease of two. The ICE Scores would be 210 for Item One and 126 for Item Two.
Those scores can then be compared with a glance and the item with the highest score gets the top slot in the prioritization hierarchy (which would be Item One in our example). Item Two’s relatively low Ease value dragged that item’s ICE score way down, despite the fact it would have a greater Impact at a higher level of Confidence than Item One. This is because all three elements of the equation are treated equally, unlike a weighted scoring model.
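A short sketch makes the contrast concrete. The plain ICE calculation reproduces the 210 vs. 126 comparison above; the weighted variant is a hypothetical illustration (the exponents are made up) of how a weighted model could soften a low Ease score:

```python
# Item One and Item Two from the worked example above.
item_one = {"impact": 7, "confidence": 6, "ease": 5}
item_two = {"impact": 9, "confidence": 7, "ease": 2}

def ice(item):
    # Plain ICE: all three factors are treated equally.
    return item["impact"] * item["confidence"] * item["ease"]

def weighted_score(item, weights):
    # Hypothetical weighted variant: raising a factor to an exponent
    # below 1.0 reduces its influence on the final score.
    return (item["impact"] ** weights["impact"]
            * item["confidence"] ** weights["confidence"]
            * item["ease"] ** weights["ease"])

print(ice(item_one), ice(item_two))  # 210 126

# De-emphasising Ease flips the ranking in favour of Item Two.
weights = {"impact": 1.0, "confidence": 1.0, "ease": 0.25}
print(weighted_score(item_two, weights) > weighted_score(item_one, weights))
```

This is why the choice between an equal-weight model like ICE and a weighted one is itself a prioritization decision.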
Why is ICE Scoring Useful and Who Created it?
There are many different scoring models out there, but ICE primarily separates itself from the pack by being simpler and easier than most of the alternatives. Because ICE only requires three inputs (Impact, Confidence, and Ease) for each idea under consideration, teams can rapidly calculate the ICE score for everything and make prioritization decisions accordingly.
It’s an even simpler calculation than the RICE model, which adds Reach as a fourth element in the equation (it also swaps Ease for Effort, so it uses Reach * Impact * Confidence / Effort for the formula).
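For comparison, the RICE formula from the paragraph above can be sketched as follows. The sample numbers are illustrative only; RICE implementations often express Reach as users per time period and Confidence as a percentage, which is the convention assumed here:

```python
def rice_score(reach, impact, confidence, effort):
    # Reach * Impact * Confidence / Effort, per the RICE formula.
    # Effort divides rather than multiplies, so costlier work scores lower.
    return reach * impact * confidence / effort

# e.g. a feature reaching 2,000 users, medium impact (2),
# 80% confidence, roughly 4 person-months of effort:
print(rice_score(2000, 2, 0.8, 4))  # 800.0
```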
The fact that ICE is so speedy is in no small part due to the person who created it, Sean Ellis. Ellis is most famous for coining the term “growth hacking” and helping companies quickly ramp up experimentation. Growth hacking experiments are supposed to be quick and iterative, so it makes sense that the scoring model used to determine which experiments to prioritize would also be quick and easy to use.
ICE scoring is basically a “good enough” estimation and far less rigorous than other scoring models that product teams typically rely upon. There is also a high level of variability in any item’s ICE Score based on who’s doing the scoring. Since it’s almost completely subjective, two people could assign very different values to the attributes of different ideas and end up with contrasting opinions.
The fact that a low Ease score can so easily drag down an item’s ICE score also highlights the “experimental” origin of the model; in the land of growth hacking “failing fast” is extremely valuable due to the lessons learned, and teams typically don’t want to invest a ton of time in any single project. However, some things that will have a bigger impact require a larger resource and time commitment. Solely relying on ICE scores could lead a team to keep going for “low hanging fruit” instead of making a larger investment in a project that could make a much bigger difference in the long run.
ICE Scoring is best used for relative prioritization; if you are considering a few contenders, it’s a great way to pick a winner. Similarly, if you were to apply it to the entire backlog it will help bubble up a top tier of options for the goal being targeted at that moment.
A major drawback of ICE Scoring is that relatively few people in an organization will have enough information to accurately predict all three elements of the equation. Impact and Confidence are business considerations while Ease falls into the technical domain.
Engaging product development to provide an Ease ranking for every item being considered in a scoring exercise is one way to limit the subjectivity to areas where the scorers should have a stronger body of knowledge and gets decision-makers out of the business of guesstimating development timelines. However, the fast and cheap nature of ICE Scoring may run counter to asking developers to create a level of effort for dozens or hundreds of possible projects.
It’s also important to have a consistent definition of the 1-10 scale for ranking each of the ICE elements. If there isn’t an agreement on what a confidence of “7” means it could lead to some very inconsistent assessments by various team members.
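One way to pin down that agreement, sketched below, is a shared rubric that maps qualitative anchors to numbers before anyone scores an idea. The anchor descriptions here are invented examples, not a standard:

```python
# Hypothetical shared rubric for the Confidence factor (1-10 scale).
confidence_rubric = {
    2: "gut feel only",
    5: "anecdotal evidence or a comparable past test",
    7: "supporting quantitative data from our own product",
    9: "a prior experiment showed this exact effect",
}

def nearest_anchor(score, rubric=confidence_rubric):
    """Return the rubric description closest to a proposed score."""
    return rubric[min(rubric, key=lambda anchor: abs(anchor - score))]

print(nearest_anchor(6))
```

With a rubric like this, a “7” means the same thing to every scorer on the team.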
While ICE Scoring definitely has its merits, it’s likely not the best method for prioritizing an entire product roadmap, but is better suited for pre-work or taking advantage of a particular opportunity.
Conclusion
Speed and simplicity are ICE Scoring’s biggest selling points and can help product teams narrow things down. Its strength is also one of its weaknesses, however, since it is only assessing an item’s impact relative to a single goal—in an organization with multiple, concurrent goals it falls short of other scoring models’ capabilities.
Despite its lack of nuance and complexity, ICE Scoring can offer a slick way to trim things down and provide some relative comparison points for decision-makers. And when you’re trying to reach a consensus, sometimes ruling things out is just as helpful as figuring out which item is the cream of the crop.