Software Release Glossary
The terms and acronyms most commonly used by product managers, engineers, and DevOps teams

Fake Door Testing

What is fake door testing?

Fake door testing is a method for measuring interest in a product or new feature without actually building it.

Instead, you create the visual aspect of the feature and present it to customers. The feature itself, however, is not actually active.

Hence, fake door testing embodies the idea of “fake it till you make it”.

The purpose of fake door testing, also known as trap door testing or painted door testing, is to validate your ideas and see if they resonate with your customers before going through the process of building and implementing them.

It’s one way to test whether there’s market demand for a feature or product before investing in building and delivering it.

Where does the name ‘fake door’ come from?

Typically, in a fake door test, the user is shown a feature that doesn’t exist. After the user takes the action, for example by clicking on a CTA on your website, the page the fake door links to notifies this user that the feature is not available yet, which is where the name ‘fake door’ comes from.

The idea is just to get an estimate of how many people will click on the call to action or CTA to gauge interest.

How are these tests implemented?

As we have seen so far, fake door tests present users with an invitation to use a feature or product that’s not really there.

The invitation can take several forms, from CTAs and pop-ups to in-app notifications.

As already mentioned, once the user clicks on the invitation, they receive a message or land on a page informing them that the product doesn’t exist just yet. They may then be prompted to provide their email address to join a list of users to be notified when the product or feature becomes available.
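In code, this click-handling flow might look like the following minimal in-memory sketch. The function and variable names here are hypothetical, and a real implementation would send these events to your analytics and email tooling rather than store them in lists:

```python
# Minimal in-memory sketch of a fake door click handler (hypothetical names).
# In production these would be calls to your analytics and CRM systems.
click_events = []
waitlist = []

def handle_fake_door_click(user_id, email=None):
    """Record interest in the not-yet-built feature and optionally capture an email."""
    click_events.append({"user": user_id, "event": "fake_door_click"})
    if email:
        waitlist.append(email)
    # Message shown on the page the fake door links to.
    return "This feature isn't available yet - sign up and we'll let you know when it launches."
```

Counting `click_events` over time gives you the raw interest signal, while `waitlist` doubles as the notification list for the eventual launch.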

Therefore, the data collected from fake door tests, such as the clickthrough rate, helps teams decide which products or features work (and which ones don’t). To get that data, make sure you include a method of measurement and implement some tracking.

The clickthrough rate is determined by dividing the number of people who clicked on the button by the total number of people who were exposed to it. In short, you can measure how many people clicked a button, how many wanted to visit your website to register, and so on.
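As a quick illustration (the numbers below are made up), the clickthrough rate is simply clicks divided by exposures:

```python
def clickthrough_rate(clicks: int, exposures: int) -> float:
    """Fraction of users shown the fake door CTA who clicked it."""
    if exposures == 0:
        return 0.0  # avoid division by zero before the test has any traffic
    return clicks / exposures

# e.g. 120 clicks from 4,000 users who saw the CTA
print(f"{clickthrough_rate(120, 4000):.1%}")  # prints 3.0%
```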

Alternatively, you can design a fake ad and place it on social media channels such as Twitter or Facebook; these channels will, in turn, give you metrics on how your ads perform, and you will also be able to track the number of people who visit the page after clicking on the ad.

Where fake door testing could go wrong

In a worst-case scenario, fake door tests could lead to consumers losing trust in your brand, possibly damaging your brand reputation, as you are luring in customers under false pretences, i.e. a product that does not yet exist.

As users expect something upon taking the action, they may feel they have been scammed or lied to, and some may not take too well to that.

To avoid such disappointments, you will have to pick your users carefully. In other words, you can choose to run the fake door test on beta testers, i.e. the segment of your users who would be most interested in your new release. This will also help you gather useful and relevant feedback from the segment of your users that matters the most.

Not to mention that this will help you build close relationships with these early adopters so once testing is over and you’re ready to release the feature, you have a group of users that you know are interested in it.

Benefits of fake door testing

Despite the possible trepidation regarding fake door tests, they do offer a number of benefits as well.

Perhaps the most obvious benefit of fake door tests is the ability to gather feedback on how your users respond to the new offering, allowing you to validate a product or idea and test the waters before starting the development process.

This is unlike A/B testing, where you actually have to build the feature, since that kind of testing requires gathering live data from people using the new functionality.

If you’re not targeting a certain group of users, you can also determine what kind of audience would be most interested in this specific feature you’re planning to develop.

This, in turn, ensures that you are spending time and resources on developing products and features your users actually want. Thus, you are gaining valuable insight into the needs of your target audience without investing too much time or wasting resources.

What’s more, if you’re collecting emails in your fake door test, you have a way to communicate with these potential registrants when the time comes for the actual release.

Fake door tests are also useful for testing out new pricing models, for example presenting subscriptions to various payment plans, to see how people react and whether they’re interested in paying and which pricing plan they react most positively to.

Last but certainly not least, fake door testing provides a low-risk way to release your products without actually releasing them, so you’re not risking shipping a product you worked hard on only to find there isn’t enough demand for it.

Conclusion

To sum up, fake door testing is a good way to gauge whether there is actual demand for your product or feature. It gives you valuable insight into your users and their preferences, but do tread carefully to avoid losing current and/or potential customers.

Also keep in mind not to hastily misinterpret the results of this test and discard a perfectly good idea that could have succeeded had it been adequately tested.

The important thing to remember when it comes to a fake door test is to do it right and in moderation. For example, once a user goes through the fake door, let them down easy by choosing your words carefully. Apologise for the missing product they were promised and let them know you’ll inform them if/once it is actually developed, for example by giving them the opportunity to sign up for a waitlist.

More terms from the glossary
Soak Testing

Soak testing is a type of performance and load test that evaluates how a software application handles a growing number of users for an extended period of time.

Read description
User Acceptance Testing

User acceptance testing (UAT) is used to verify whether a software meets business requirements and whether it’s ready for use by customers.

Read description
Multi-Armed Bandits

Multi-armed bandits are a complex form of A/B testing that use machine learning algorithms to dynamically allocate traffic to better performing variations.

Read description