Case study: Home health check

Shi Wah Tse
7 min read · Nov 10, 2022


Finding customer value with new technology

In innovation, the business had acquired a new imagery technology and I was brought on to investigate how we might use it to create customer and business value.

Understanding the technology

The first port of call was to understand this piece of technology, which required meetings with the company behind it, its tech team and developers, to ascertain what might and might not be useful for us.

Trying to understand the capability of the imagery technology

We ran an ideation session and prioritised the ideas using the RICE method, and the first idea to work on was called ‘Home health check’.
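For anyone unfamiliar with the RICE method, the score is simply Reach × Impact × Confidence divided by Effort. Below is a minimal sketch of how that prioritisation works in practice; the ideas and numbers are invented for illustration, not our actual workshop scores.

```typescript
// Hypothetical illustration of RICE scoring: (Reach x Impact x Confidence) / Effort.
// The ideas and numbers below are made up for the example, not real workshop scores.
interface Idea {
  name: string;
  reach: number;      // how many customers it touches per quarter
  impact: number;     // 0.25 (minimal) to 3 (massive)
  confidence: number; // 0 to 1, how sure we are about the estimates
  effort: number;     // person-months
}

const riceScore = (i: Idea): number =>
  (i.reach * i.impact * i.confidence) / i.effort;

const ideas: Idea[] = [
  { name: "Home health check", reach: 5000, impact: 2, confidence: 0.8, effort: 4 },
  { name: "Another imagery idea", reach: 2000, impact: 1, confidence: 0.5, effort: 3 },
];

// Highest RICE score first = priority order for the backlog.
[...ideas]
  .sort((a, b) => riceScore(b) - riceScore(a))
  .forEach((i) => console.log(`${i.name}: ${riceScore(i).toFixed(0)}`));
```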

Lean UX canvas for the idea ‘Home health check’

Gathering business requirements

I also gathered the business direction (the playground we would be playing in), the business goals and opportunities, and the success metrics we could measure against.

Who are we targeting?

I had already created customer segments and empathy maps in the imagery strategy, so I knew which customers we were targeting. I mainly used MOSAIC data overlaid with internal data to create our customer segments:

Customer problem and idea

Next, I fleshed out the customer problems I was trying to solve and created a storyboard of what the idea is:

Fleshing out the customer problem
Storyboard of the idea

Value proposition

I also mapped out with the team what the real value proposition of Home health check is:

Creating our hypothesis

To finish prepping our idea, I ran a team exercise to define our hypothesis, wrote a press release to stress test the idea and created a stakeholder map.

Assumption mapping

After running the team through the idea, I held an hour-long assumption mapping workshop, where everyone wrote their assumptions about Home health check across the 4 categories.

Everyone writing their assumptions against desirability, viability, tech feasibility and operational feasibility
Subject matter experts would rank by importance
Snapshot of our riskiest assumptions with the categories merged

How I defined which assumption is the most important:

RANK BY MOST IMPORTANT

The top assumptions are absolutely critical for HOME HEALTH CHECK to succeed. If one of those hypotheses is proven wrong, then all the other hypotheses become irrelevant.

Tech feasibility and operational feasibility

A lot of work has gone into tech feasibility with the tech team, and operational feasibility with the legal and risk team, but for the purposes of this case study I will concentrate on desirability.

Experimentation planning

To determine what experiments to run, I asked the following 4 questions:

  1. What are your current beliefs and assumptions?
  2. What’s the most important thing to learn right now?
  3. What data will help inform these decisions? (what can we kill off in terms of features)
  4. What experiments can get you that data and learnings?
Listing out all the potential experiments to choose from (doesn’t mean I will execute them all!)
A little table I created to see what experiments were possible with this idea, team timings and budget. The ones on the right were not suitable for this idea, so I took them out.

Discovery experiment 1: FAQ analysis

I first chose to look at our analytics to see if there were any existing home FAQs that users were looking at.
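As a rough idea of what that analysis looked like, here is a hedged sketch: it assumes the analytics tool can export FAQ page views as a simple list, and the field names and sample pages are invented for illustration.

```typescript
// Hypothetical sketch of the FAQ analysis. Assumes our analytics tool can
// export FAQ page views as a list; field names and sample pages are invented.
interface FaqPageView {
  url: string;
  title: string;
  views: number;
}

// Surface home-related FAQs, most viewed first.
function homeRelatedFaqs(pages: FaqPageView[]): FaqPageView[] {
  return pages
    .filter((p) => /home|property|roof|garden/i.test(p.title))
    .sort((a, b) => b.views - a.views);
}

const sample: FaqPageView[] = [
  { url: "/faq/roof-damage", title: "Am I covered for roof damage to my home?", views: 1240 },
  { url: "/faq/car-excess", title: "How do I change my car excess?", views: 980 },
];

console.log(homeRelatedFaqs(sample)); // only the home-related FAQ survives the filter
```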

Experiment 2: Survey with card sort

My next experiment was a low-confidence survey (2/5 confidence). Before I got started, I asked these four questions:

Some images of the survey and card sort results and synthesis:

Logging the results against the main assumptions we wanted to test:

Some assumptions passed and some failed, and our PM decided it was still worth continuing to experiment with this idea.

Experiment 2 (continued): Follow-up survey

I ran a smaller survey of only 33 participants to follow up on the first survey and find out more information.

Positive results on the survey

Our PM decided it was still worth pursuing so I continued with the next experiment.

Experiment 3: Feature stub

Setting up the feature stub by defining the assumptions we wanted to test:

Wireframes of how it would look on the mobile app:

Results of the feature stub (blacked out numbers)
Logging the results against our riskiest assumption to test (blacked out result number)

The results were interesting: if we built it right now, we knew that a high percentage of users would click on it in the mobile app.
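For context, a feature stub is essentially a painted door: the entry point exists in the product, the feature behind it does not, and every impression and tap is measured. Here is a minimal sketch of the wiring, assuming a generic analytics track() helper; the event names and message copy are invented, not our actual implementation.

```typescript
// Minimal "painted door" feature stub sketch. `track` and `showMessage`
// stand in for whatever analytics SDK and UI toolkit the app actually uses;
// event names and copy are invented for illustration.
declare function track(event: string, props?: Record<string, unknown>): void;
declare function showMessage(text: string): void;

// Count impressions so a click-through rate can be calculated later.
export function onHomeHealthCheckTileShown(): void {
  track("home_health_check_stub_viewed", { surface: "mobile_app" });
}

// Record the click, then be honest with the user: the feature is still in development.
export function onHomeHealthCheckTileClicked(): void {
  track("home_health_check_stub_clicked", { surface: "mobile_app" });
  showMessage("Home health check is coming soon. Thanks for your interest!");
}
```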

Experiment 4: Feature stub on the website

I wanted to test something similar with engagement, but on the website rather than the mobile app.

Feature stub on the website. When users click on it, they get a message that it's in development.

Interestingly, the engagement levels were vastly different from the mobile app feature stub, but the percentage of users who went on to do the survey was quite high on the website.

Our PM decided the idea still had legs and wasn't ready to be killed off yet.

Experiment 5: Feature stub on webchat

I also wanted to explore on webchat whether users were interested in Home health check, so I wrote the script with our webchat team. Unfortunately, only 3 users asked about home insurance within the 2-week time frame, so the results were not useful.

Example script for our webchat team

Experiment 6, 7 & 8: User testing and survey

I ended up doing two rounds of user testing and a survey to fine-tune the user journey, content and design. Some images of the user testing from these experiments:

Prototype in user testing

Final experiment

When I stack experiments together, I call it a triple threat. This one involved sending out an email campaign that linked to a landing page, and if users were interested we would show them a feature stub.

I wanted to show them a Wizard of Oz experience instead of a feature stub, telling users we would send their home health check within 5 business days, but unfortunately the technology wasn't ready yet for us to do that in this experiment.

Results

The idea passed the experiment and is ready to progress to MVP, though we have to wait for the AI to be trained with the capability to do a home health check before we start the build.

