This blog post outlines four crucial factors to consider when evaluating new growth opportunities: Hypothesis, Investment, Precedent, and Experience, forming the acronym HIPE.
Hypothesis: the idea's potential impact on key metrics, given the size of the user base affected; impact usually comes from increasing user intent or decreasing friction in the user journey.
Investment: the time required to build and maintain the project, prioritizing work with high expected value and short time investments.
Precedent: past experiments, both within the company and across the industry, with internal data weighted well above general benchmarks.
Experience: keeping the user experience healthy while optimizing for specific metrics, judged by long-term retention and value-generating user actions.
A strong evaluation process is crucial for any high-impact growth team: it ensures resources go to impactful projects and maximizes the chances of significant results.
Jeff Chang (@JeffChang30) is a growth technical leader at Pinterest and an angel investor. If your startup is looking for an angel investor who can help with all things growth, please send over an email!
The two most important skills in growth are finding great opportunities and executing with high velocity. This blog post is about the first one: finding great opportunities. When new members join a growth team, their initial mindset is usually to take on the projects given to them and execute them well. To have more impact, though, growth team members need to expand their scope and deliver growth end to end, from ideation to execution to analysis. This post covers an essential part of the ideation process: evaluating new ideas!
Talking about evaluating ideas before sourcing them is important because learning how to evaluate will shape how you look for opportunities. There are many factors that you can consider when evaluating opportunities, but I boil it down to four main factors: Hypothesis, Investment, Precedent, and Experience. It’s easy to remember these four with the acronym HIPE (sounds like hype!).
Why will this idea have a significant impact on metrics?
You should have a good hypothesis as to why certain metrics will change. Your hypothesis should take into account the opportunity size, which is the number of users who might be affected by the feature. It doesn’t matter how good an idea is if very few people will be affected by it in the first place. Most hypotheses fall under the categories of increasing intent or decreasing friction.
Examples:
Increasing intent: Highlighting our unique features will increase the intent of the user to sign up, and therefore increase signups. The increase in signups will be significant because 1 million users per day visit this page.
Decreasing friction: Removing this extra step in the new user flow will decrease the number of steps it takes to get to a key product feature, which will increase activation rates. The increase will be significant because 1 million sign-ups go through this flow every day.
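To make the opportunity-size math concrete, here is a back-of-envelope sketch. The baseline conversion rate and hypothesized lift are made-up placeholders for illustration, not benchmarks:

```python
# Back-of-envelope opportunity sizing for a signup hypothesis.
# All numbers are illustrative placeholders, not benchmarks.

daily_visitors = 1_000_000  # users who see the page each day
baseline_rate = 0.05        # assumed current signup conversion rate
relative_lift = 0.10        # hypothesized relative improvement

extra_signups_per_day = daily_visitors * baseline_rate * relative_lift
print(f"Expected extra signups per day: {extra_signups_per_day:,.0f}")
# -> Expected extra signups per day: 5,000
```

Even a rough estimate like this tells you whether the opportunity is measured in dozens or thousands of users, which is often enough to rank it against other ideas.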
How much time will we have to invest in this project?
Growth is all about making smart time investments. It is better to work on 10 one-day projects with an expected value of $1K each than a single month-long project with a similar total expected value: you realize the impact sooner, spread your risk across more bets, and learn something from every experiment you ship. When sizing the investment, count ongoing maintenance as well as the initial build (see the sketch after the examples).
Examples:
The time investment for this project is 1 day, plus a few hours every month to maintain.
The time investment for this project is 1 month, plus a few days every month to maintain.
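A minimal sketch of that prioritization, ranking projects by expected value per day of total time invested, with maintenance amortized over a year. The projects and numbers are invented for illustration:

```python
# Rank hypothetical projects by expected value per day of total time
# invested, counting a year of maintenance on top of the initial build.

projects = [
    # (name, expected_value, build_days, maintenance_days_per_month)
    ("copy tweak",      1_000,  1, 0.2),
    ("new onboarding", 10_000, 30, 2.0),
]

for name, ev, build_days, maint_days_per_month in projects:
    total_days = build_days + maint_days_per_month * 12
    print(f"{name}: {ev / total_days:,.0f} expected value per day invested")
# -> copy tweak: 294 ..., new onboarding: 185 ...
```

The small project wins on expected value per day even though its absolute impact is lower, which is exactly the trade-off described above.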
Is there a precedent for this working in the past?
This factor looks at past experiments run by your team or in the industry. In the beginning you won't have any previous results to look at, so you will have to see what works in the industry, but over time you should rely mainly on your own past experiment results. Your own results are much more valuable than industry results because every product has a different set of features and customers.
I almost never trust industry benchmark metrics because the variance is usually very large. For example, if I google "email industry open rates", the first link tells me the industry standard falls between 20-30%. Does that mean I should expect my open rates to land between 20-30%? No: a 10% open rate wouldn't be crazy, and neither would one over 50%. In fact, my own open rates have been over 50%, but that's because of the kind of audience I've attracted. I tell visitors exactly what they'll get by subscribing (notifications when I publish new posts), and subscribing is completely optional. Since subscribers know exactly what they will receive, and that is all they receive, they have high intent to open. If I instead required an email subscription to read posts, my open rates would probably be significantly lower, because some users would have signed up just to access a post, not because they wanted emails. To summarize: industry benchmarks vary heavily with context and aren't worth much when judging how good your metrics are.
Examples:
In the past, we tried an experiment on another similar page and it increased signup conversion rate by 10%.
In the past, we tried an experiment on another similar email and it increased open rate by 10%.
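One simple way you might translate a precedent like these into a forecast for a new surface is to discount the past lift by how similar the old context is to the new one. The discounting heuristic and all numbers here are my own assumptions for illustration; the post doesn't prescribe a formula:

```python
# Crude precedent-based forecast: discount the past lift by how similar
# the old surface is to the new one. The discount is a judgment call,
# not a measured quantity.

past_relative_lift = 0.10       # similar page: +10% signup conversion
similarity_discount = 0.5       # new page is only somewhat similar (assumption)
baseline_signups_per_day = 2_000

forecast_extra_signups = (
    baseline_signups_per_day * past_relative_lift * similarity_discount
)
print(f"Forecast: ~{forecast_extra_signups:,.0f} extra signups/day")
# -> Forecast: ~100 extra signups/day
```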
Is this change a good user experience?
When working on growth, a common problem is focusing on one "north star" metric and making ship decisions solely on that metric. If you optimize for only one metric, your experience will trend toward an extreme that performs very well for that metric. For example, if you are optimizing for subscribers, the highest-performing experience might aggressively block core user features, but that is not a good user experience. So how do you determine what is a good experience and what isn't?
It can seem subjective, since people have different notions of what a "good experience" is. To make matters worse, company employees are usually pretty different from their customers, so it's hard to know exactly what users think. One way (of many) to judge experience is to look at quality metrics, such as long-term retention and value-generating user actions. For example, if you use an aggressive upsell to increase subscriptions, but users who don't subscribe still retain well and perform just as many value actions, perhaps the experience isn't that bad. However, if users who don't subscribe show a significant drop in retention, the user experience is likely poor.
Examples:
Showing a non-dismissible modal immediately when a page loads is likely a bad experience.
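Here is a minimal sketch of the quality-metric check described above: compare retention for users who did not subscribe in the treatment arm versus the control arm. The data, column names, and retention window are all hypothetical:

```python
import pandas as pd

# Hypothetical per-user experiment data: which arm each user saw,
# whether they subscribed, and whether they returned in week 4.
df = pd.DataFrame({
    "arm":         ["control"] * 4 + ["treatment"] * 4,
    "subscribed":  [False, False, False, True, False, False, True, True],
    "retained_w4": [1, 1, 0, 1, 0, 0, 1, 1],
})

# Week-4 retention by arm and subscription status. A large retention drop
# among treatment users who did NOT subscribe suggests the upsell hurts
# the experience even if it wins on the subscription metric.
print(df.groupby(["arm", "subscribed"])["retained_w4"].mean())
```

In practice you would run this on the full experiment population and over a longer window, but the question stays the same: did the metric win come at the expense of the users who said no?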
Take idea evaluation seriously. In growth, working on the right projects instead of the wrong ones usually makes at least a 10x difference in experiment impact. It's normal for many experiments not to beat the control group and therefore have no impact, so you want some major successes to balance them out. Great evaluation is a required skill for any high-impact growth team.
Want some advice on building a high impact growth team? Email me at [email protected]