Use the power of experiments

We discussed how customer interviews can be a useful way to test your assumptions. Customer interviews are, in fact, a type of experiment. In this unit, we discuss several other types of experiments that help you expand on the insights gleaned from customer interviews.

Experiments range in complexity from simply asking users to provide their email address, to observing them as they interact with your minimum viable product (MVP), to asking them to purchase or prepurchase your product.

Startup founders can use experiments to test a specific hypothesis by providing an input to users and measuring the output. The output is usually in the form of user actions.

Each experiment should have at least the following elements:

  • Hypothesis: A concise, falsifiable statement that represents one of your core assumptions.
  • Actions: The steps you take to test your hypothesis.
  • Data: What you measure or observe in the experiment.
  • Success criterion: The minimum response that you need to validate your hypothesis.

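The four elements above can be captured in a simple record before you run anything. The following is a minimal sketch in Python; the class and field names are illustrative, not part of the unit:

```python
from dataclasses import dataclass

@dataclass
class ExperimentPlan:
    """One experiment: a falsifiable hypothesis plus how you'll test it."""
    hypothesis: str           # concise, falsifiable statement
    actions: list[str]        # steps you take to test the hypothesis
    data: str                 # what you measure or observe
    success_criterion: float  # minimum response needed to validate

    def is_validated(self, observed: float) -> bool:
        # The hypothesis is validated when the observed response
        # meets or exceeds the success criterion.
        return observed >= self.success_criterion

plan = ExperimentPlan(
    hypothesis="At least 5% of ad viewers will click through to the site",
    actions=["Create search ad", "Run it for two weeks", "Measure clicks"],
    data="click-through rate",
    success_criterion=0.05,
)
print(plan.is_validated(0.07))  # True: 7% observed meets the 5% criterion
```

Writing the success criterion down as a number before the experiment starts keeps you honest: you decide in advance what counts as validation, rather than rationalizing weak results afterward.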
It's generally a good idea to run multiple experiments to test each of your critical hypotheses. In most cases, start with the cheapest, quickest experiments to generate some initial data, even if it's imperfect. If the results of these initial experiments are promising, you can progress to more complex experiments. The complex experiments might take more time and effort to complete, but they give you a greater level of confidence in the strength of your hypotheses.

At the end of every experiment, evaluate what you learned and what decisions you can make based on that information.

The following examples of commonly used experiments start with simple, low-fidelity ones and move on to more complex, high-fidelity ones.

Experiment type: Online ad

Description: Create an online ad (search-based ad, display ad, or social media ad) based on your proposed value proposition. Focus on customers who match your ideal customer persona.

Purpose: Test whether your target customers respond to a call to action such as visiting your website.

Pros: This experiment type produces easy-to-track click-through and conversion rates, provided that you set up analytics correctly before launching the ad.

Cons: This experiment type demonstrates only relatively weak interest. Getting users to select a link might not translate to strong-enough interest to use or pay for your product.

Practical tips: Search term ads are valuable for testing interest among users who are already aware of the problem and are searching for a solution. Display ads and social media ads are better suited to users who have yet to reach this point of awareness.
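The two metrics from an online ad reduce to simple ratios. The following sketch shows the arithmetic; the numbers are invented for illustration:

```python
def click_through_rate(clicks: int, impressions: int) -> float:
    """Fraction of ad impressions that resulted in a click."""
    return clicks / impressions if impressions else 0.0

def conversion_rate(conversions: int, clicks: int) -> float:
    """Fraction of clicks that completed the call to action."""
    return conversions / clicks if clicks else 0.0

# Illustrative numbers: 10,000 impressions, 250 clicks, 20 sign-ups
ctr = click_through_rate(250, 10_000)  # 0.025 -> 2.5%
cvr = conversion_rate(20, 250)         # 0.08  -> 8.0%
print(f"Click-through {ctr:.1%}, conversion {cvr:.1%}")
```

Note that the two rates answer different questions: click-through measures whether the value proposition attracts attention, while conversion measures whether visitors act once they arrive.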

Experiment type: Landing page

Description: Create a basic website (usually a single page) that describes your product and value proposition, and that asks customers to respond to a call to action. This call to action might be a request to provide their email address (weak evidence of interest), complete an online form (stronger evidence), or prepurchase your product (even stronger).

Purpose: Test whether your target customers respond to a call to action.

Pros: This experiment type is inexpensive to set up and run.

Cons: You need a suitable domain and sufficient design input to ensure that the page looks professional.

Practical tips: Ensure that your call to action is above the fold because not all visitors scroll through the whole page. You can drive traffic to the site by using methods like online ads, email campaigns, social media, and posting in relevant online forums. Use quotes from your customer interviews to highlight customer pain points. Ensure that you're always up front about the status of your product.

Experiment type: Clickable prototype

Description: Create a realistic mock-up of key screens from within your product by using a tool like Figma, InVision, or Microsoft Visio.

Purpose: Observe users interacting with something that resembles your final product, and collect their feedback afterward.

Pros: This experiment type can be a great way to find out what features customers get excited about. The length of time that a user spends engaging with the prototype can be a good indicator of interest.

Cons: This experiment type requires design expertise and an investment of time in capturing individual feedback. It requires users to commit a meaningful amount of time to engage with your prototype.

Practical tips: A clickable prototype is best delivered in person. You provide users with context at the start and invite their feedback at the end.

Experiment type: Concierge

Description: Deliver an outcome to customers manually. Walk customers through the steps that your software product is going to ultimately automate. For example, if the outcome is a report that you provide to customers based on their inputs, you might be able to capture the inputs via a simple form. Then, manually create the report, and send it to them.

Purpose: By delivering an outcome to customers, you can test whether they perceive the outcome as valuable. In many cases, this assumption is more important to test than anything to do with the process by which you achieve the outcome.

Pros: This experiment type can often be done quickly and cheaply, because you can deliver an outcome without having to build the product. It allows for collection of feedback from customers after they receive the outcome and derive the value. This experiment can also be an opportunity to make sales, as long as customers see sufficient value in the outcome. Walking customers through the process is a good way to test it and to ensure that you integrate any learnings when you actually begin building the product.

Cons: This experiment type doesn't scale well, so you're only able to deliver an outcome to a limited number of customers. Depending on the complexity of the process, you might need to set expectations so that customers know when they can expect a response.

Practical tips: It's often a good idea to have at least a landing page that customers can visit to start the process of signing up and to provide any required inputs. Make sure it's easy for customers to leave written feedback and a testimonial if they found the outcome valuable.

Experiment type: Wizard of Oz

Description: A Wizard of Oz experiment is similar to a concierge experiment. The critical difference is that here, customers are unaware that the process is being completed manually "behind the curtain."

Purpose: A Wizard of Oz experiment allows you to test both the perceived value of the outcome and the process by which you deliver it.

Pros: This experiment type provides a more robust test of pricing than the concierge method, because from the customers' perspective, they're buying and using your product.

Cons: This experiment type generally doesn't scale to a large number of customers because the process is manual. The experiment is suited to products that create a single output for customers (such as a report or a completed action), but not to products that require significant customer interaction.

Practical tips: Be prepared to deliver an outcome to customers quickly, because they're unaware that the behind-the-scenes process is being done manually. It's generally a good idea to price your product so that you can deliver it profitably by using Wizard of Oz. Then, you can continue to deliver value manually for as long as you like. When you automate the process, your profit margin can only improve.

Experiment type: Mock sale

Description: In a mock sale experiment, you position your product alongside plans and pricing information, and you test customers' interest in buying without actually taking any payment. When customers select a pricing option, you can tell them that the product isn't available to purchase yet and ask them to provide their details to be notified when it is.

Purpose: A mock sale is ideal for testing whether customers perceive value in your product, because selecting a pricing option signals an intent to purchase. It's also useful for testing various price points or plans.

Pros: You can use this experiment type before the product is built by placing mockups of screenshots and other information on a landing page. It can be a valuable way to create an email list of prospects who show strong interest.

Cons: An intent to purchase doesn't always equate to actual purchases when the product is live.

Practical tips: Make sure you're not taking payment or giving any misleading information to customers. Track various traffic sources to establish which are most likely to bring paying customers to your site.
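Tracking traffic sources can be as simple as tallying visits and pricing-option clicks per source, then ranking the sources by conversion. The following is a hypothetical sketch; the source names and counts are made up:

```python
# Hypothetical tallies per traffic source: (visits, pricing-option clicks)
sources = {
    "search_ad": (1200, 60),
    "social": (3000, 45),
    "email": (400, 32),
}

# Rank sources by how often their visitors select a pricing option.
by_conversion = sorted(
    sources.items(),
    key=lambda kv: kv[1][1] / kv[1][0],
    reverse=True,
)
for name, (visits, clicks) in by_conversion:
    print(f"{name}: {clicks / visits:.1%}")
```

In this made-up data, email converts best despite driving the least traffic, which is exactly the kind of insight that tells you where to spend acquisition effort when the product goes live.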

Experiment type: Minimum viable product (MVP)

Description: Create a basic functioning software product that delivers the minimum feature set (usually a single feature) to test a core assumption.

Purpose: Deliver sufficient value to customers through an MVP to meet a particular customer job, solve a pain point, and enable you to learn about customers' needs and experience.

Pros: An MVP experiment can convert users from a free trial to paying customers. Paying for a single feature is a strong signal of customer interest.

Cons: For some startups, a significant effort is required to create an MVP that actually delivers value. In some industries (for example, healthcare and cybersecurity), there might be an unacceptable risk of the MVP failing or not complying with regulatory requirements.

Practical tips: Keep the MVP to one feature that best represents the core job that your product needs to do. Focus on attracting users for whom the limited feature set is likely to solve an important problem. Make it easy for users to provide written feedback. If the feedback is positive, invite them to supply a customer testimonial. It's usually a good idea to create your MVP based on learnings from a lower-fidelity experiment such as customer interviews, followed by a clickable prototype, concierge, or Wizard of Oz experiment.

It's easy to think of an MVP as "version 1.0" of your product, but this thinking can easily lead founders to build more than they need to. For many products, an MVP is better viewed as a disposable tool with the sole purpose of testing assumptions with customers.

It's often possible to build an MVP quickly and cheaply by using low-code or no-code tools and still deliver value via a single feature. In these instances, you can throw away the MVP after the experiment is completed. You can then start building your product based on your learnings, rather than try to use a rough MVP as the basis for your product.

Task: Plan an experiment

Select at least one experiment type that makes sense for your startup. Map out the steps for completing the experiment. Remember to consider what hypothesis you intend to test, and express it as a concise, falsifiable statement. Spell out what you plan to measure or observe in the experiment, and the minimum response that you need to validate your hypothesis.