Premise and assumptions:
> Assumption 1- I am a Product Manager responsible for Monetization. Also, I am a part of a larger Product team with other PMs.
> Assumption 2- none of these 10 features is an absolute ‘junk idea’. They were accepted for discussion because each had some potential to positively impact the business.
> Assumption 3- I receive the features as a ‘one-line action item’; the details of these features are not shared. So, one of the items in the list can look something similar to-
“Send an email if the user who has initiated posting a task abandons posting midway.”
Solution:
Stage #0: Making a set of user-story documents
As all of these product features are shared as action items, it becomes essential to understand and explain these features as user stories. The objective here is to ensure my understanding of each feature and to document it with a brief narrative.
Following the standard approach of building a user story-
As a [persona], I want to [do something] so that I can [realize a reward].
As an example, we can say-
As a task poster, I want to be reminded via email about the task I left halfway through posting so that I can complete posting the task I had started.
In this way, we ensure that all the features are now clearly visualized.
Stage #1: The filter
Now as these 10 features are randomly chosen, they would very likely be a part of a laundry list of product feature ideas. The first filter that I would apply to these feature ideas will be to check which of these are in line with ‘Monetization’ (my core responsibility as a PM).
Say, for example, one of the features in the list is-
“As a tasker, I want to be able to chat through the app with Airtasker’s customer support team during a task so that I can highlight any ongoing issues and feel secure while doing the task.”
While this may turn out to be a great value add to overall engagement and user satisfaction on the platform, and might improve retention (and hence indirectly influence monetization), no direct correlation between this feature and monetization metrics can be established. I would therefore prefer to hand over this feature and look for others with greater alignment with my intent.
The ‘LITMUS TEST’ here is to figure out whether the feature has an impact on monetization or not by trying to find a direct answer to the following question-
“Is it possible to directly estimate the revenue (in dollars) that can be influenced by this feature?”
The exercise to find the answer to this question can be a simple brainstorming to put all the metrics together. Here’s an example-
For the feature-
“Abandoned Post Email”
As a task poster, I want to be reminded via email about the task I left halfway through posting so that I can complete posting the task I had started.
The output of the brainstorming exercise can look like-
While I have included some assumed numbers here, we need not find the actual values of these data points at this stage, as they are often complex to pull from the database; our aim is just to figure out whether there is a way to calculate the impact in terms of dollars.
Once we do this exercise for all the tasks, we will not only be sure that all these tasks have an impact on monetization but also have a starting point for our later stages.
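The brainstorming output for a feature like “Abandoned Post Email” can be reduced to simple arithmetic over a handful of metrics. The sketch below is a back-of-the-envelope estimate in Python; every number and rate in it is an assumption for illustration, not real platform data.

```python
# Back-of-the-envelope revenue estimate for the "Abandoned Post Email" feature.
# Every number below is an assumed placeholder, not real data.

monthly_abandoned_posts = 10_000   # users who start posting a task but drop off
email_open_rate = 0.30             # share of reminder emails that get opened
recovery_rate = 0.15               # share of openers who finish posting the task
task_completion_rate = 0.60        # share of posted tasks that get done and paid
avg_task_value = 80.0              # average task price in dollars
platform_take_rate = 0.15          # commission the platform earns per task

recovered_posts = monthly_abandoned_posts * email_open_rate * recovery_rate
monthly_revenue_impact = (
    recovered_posts * task_completion_rate * avg_task_value * platform_take_rate
)
print(f"Estimated monthly revenue impact: ${monthly_revenue_impact:,.2f}")
```

The exact chain of metrics will differ per feature; what matters at this stage is that such a chain from the feature to dollars exists at all.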
> Why did I use this filter?
I find it necessary to justify this stage: as all teams work under constraints (usually time or resources), it is necessary to filter the features at this stage itself. If a feature does not have a direct or near-direct monetary impact, I would pass it on (after coordinating with the Lead) to the PM on the team responsible for engagement/acquisition.
Stage #2: Looking for actionable data points to measure the $$$ (dollar) impact value
Once we have a clear idea about the features and know that they are worth implementing, we start building our case for each of them.
Our objective in this stage is to investigate all metrics and signals relevant to the feature. These might be positive as well as negative signals; either way, documenting them is very important.
We will look at the following kinds of data-
- Quantitative data- These are data points obtained from the product analytics tools in use. If we track event-based data, we can create a complete, stepwise picture of the user funnel for each action.
Moreover, it is ideally at this stage that we find the actual numbers for the hypotheses we built in Stage #1.
- Qualitative data- If we have any screen-grabs of how users interact on a particular screen where the feature may be deployed, we might get hints from users’ mouse clicks (in the case of desktops) or touch heatmaps (in the case of app/m-web).
- Support Emails/Chats, Playstore/Appstore reviews, third party review sites- This is an attempt to check “Voice of User” from all easily available sources. If we find any reviews/support conversations favouring (or opposing) the feature, it should be treated with utmost importance in decision making.
- Market Research- If we do not find sufficient quantitative or qualitative data regarding the feature, we can also explore conducting quick in-product surveys to gauge probable user interest in the feature. Leveraging tools like https://qualaroo.com/ can save any probable coding effort.
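For the quantitative piece, the event-based funnel mentioned above is just a sequence of counts with step-over-step conversion. A minimal sketch, with assumed event names and counts rather than real analytics exports:

```python
# Sketch of a stepwise funnel built from event-based analytics data.
# Event names and counts are assumed placeholders, not real exports.
funnel_events = [
    ("task_post_started", 10_000),
    ("details_filled", 6_500),
    ("budget_set", 4_200),
    ("task_post_submitted", 3_100),
]

prev = funnel_events[0][1]
for step, count in funnel_events:
    # Conversion of each step relative to the previous one
    print(f"{step}: {count} ({count / prev:.0%} of previous step)")
    prev = count
```

The step with the largest drop-off is where a feature like the abandoned-post email would plausibly intervene, and the drop-off count feeds directly into the dollar-impact estimate.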
With the above variety of data, we can build a case for each feature by putting an estimate on the $$$ dollar value it can impact. If the impact value is very low compared to the targets we want to achieve, we can consider eliminating the feature or putting it on a very low priority list.
This will be the key element in ranking the features.
Stage #3: Getting an estimation of TIME/COMPLEXITY/EFFORT
With the $$$ value estimation, we will be able to rank the features in descending order of revenue impact. However, that is only half the picture. The product features are of value to the business only if they are actually executed.
Hence, in order to gauge how complex or time-consuming the features are, I would set up a meeting with the Engineering Lead where I can discuss all the features in detail. The expectation is to get a rough, tentative estimate of the effort to actually execute each feature.
In order to make the complexity and resource requirements comparable across features, we would need to bring all of them to a common unit- something like the ‘Man hours’ needed for the task. The unit can be anything: days to launch the feature, the number of sprints for one engineer, etc.
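Normalizing mixed estimates into one unit is a mechanical conversion. A small sketch, where the conversion factors (8 working hours per day, one engineer's 2-week sprint ≈ 80 hours) and the example estimates are all assumptions:

```python
# Convert mixed engineering estimates into a single unit ("man hours").
# Conversion factors are assumptions: 8 working hours per day,
# one engineer's 2-week sprint ~ 80 hours.
HOURS_PER_DAY = 8
HOURS_PER_SPRINT = 80

def to_man_hours(value: float, unit: str) -> float:
    factors = {"hours": 1, "days": HOURS_PER_DAY, "sprints": HOURS_PER_SPRINT}
    return value * factors[unit]

# Hypothetical estimates gathered from the Engineering Lead
estimates = {
    "Abandoned Post Email": (3, "days"),
    "In-task support chat": (2, "sprints"),
}
effort = {name: to_man_hours(v, u) for name, (v, u) in estimates.items()}
print(effort)  # {'Abandoned Post Email': 24, 'In-task support chat': 160}
```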
Stage #4: Finding the balance between ‘Impact’ and ‘Time’
We try to find the following ratio-
‘$$$ Impact’/’Man hours’ (where total available engineering bandwidth remains constant for all features).
While we may not actually plot a graph and treat this metric mathematically, we can use it directionally to get a view of these product features; something similar to this-
Here, we prioritize Q II > Q III > Q I > Q IV; we can even consider not doing the features in Q IV at all.
Also, within a quadrant, we can rank a feature with a higher ‘$$$ impact’/‘Man hours’ ratio over the others. In this way, we can have a ranking of the product features ready for execution.
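The quadrant-then-ratio ranking can be made concrete in a few lines. In the sketch below, the feature data and the cutoffs splitting the 2x2 are assumed, and the axis mapping of the quadrant labels is my interpretation of the ordering above (Q II = high impact/low effort first, Q IV = low impact/high effort last), since the original chart is not shown:

```python
# Rank features by quadrant, then by '$$$ impact' per man hour within a quadrant.
# Feature data (impact in $/month, effort in man hours) and cutoffs are assumed.
features = {
    "Abandoned Post Email": {"impact": 3240, "effort": 24},
    "Premium listing upsell": {"impact": 9000, "effort": 200},
    "Checkout tip prompt": {"impact": 1200, "effort": 16},
    "Dynamic pricing engine": {"impact": 800, "effort": 400},
}
impact_cutoff, effort_cutoff = 2000, 100  # assumed medians splitting the 2x2

def quadrant(f):
    high_impact = f["impact"] >= impact_cutoff
    low_effort = f["effort"] < effort_cutoff
    if high_impact and low_effort:
        return "QII"   # high impact, low effort: do first
    if not high_impact and low_effort:
        return "QIII"  # quick wins with smaller impact
    if high_impact and not low_effort:
        return "QI"    # big bets needing more bandwidth
    return "QIV"       # low impact, high effort: candidates to drop

order = {"QII": 0, "QIII": 1, "QI": 2, "QIV": 3}
ranked = sorted(
    features.items(),
    key=lambda kv: (order[quadrant(kv[1])], -kv[1]["impact"] / kv[1]["effort"]),
)
for name, f in ranked:
    print(quadrant(f), name, round(f["impact"] / f["effort"], 1))
```

Within each quadrant the tie-break is the impact-per-man-hour ratio, so the final list is directly executable as a backlog order.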
Stage #5: Talking to other stakeholders
It is extremely important to explain the features and their potential to internal stakeholders. These can be the Marketing team, Business/Partnership teams or Customer support. While talking to the stakeholders, we have to focus more on the impact on their ‘lives’ through the feature while trying to represent the end-user of the product.
If these teams give a justified response calling for modifications in the requirement, then, after properly vetting the use cases, we can consider changing the feature’s priority or even reconsidering it entirely.
> Why didn’t we pick this stage earlier? (before spending time on other stages)
As it is extremely important to meet these stakeholders with the highest conviction in the feature and its thesis, it becomes necessary for us to take a deep dive into analyzing the feature before talking to them.
Stage #6: The actual execution
On a lighter note, as I would already know the ‘Estimation Time’ for these features, we can now expect the following to happen-