The three hurdles I faced in creating a feature stub experiment

Shi Wah Tse
4 min read · Sep 28, 2020


And the clicking behaviour I wasn’t expecting.

I was given a concept to experiment on — having a personalised video explainer when purchasing home insurance.

Do users want to watch this video?

I set out to create a feature stub test. This test seems to have many names; I've encountered 'painted door test', 'fake door test' and 'smoke test'. I know there is also a '404 test', the difference being that we're not showing a '404' error page.

Essentially, we show a personalised video that looks real, with some text saying the video will cover this type of content (I ran a validation survey first to figure out what type of content users want in this hypothetical video).

This video doesn't exist. No designer or developer has created it.

When the user clicks on the video, we show a message saying the video isn't available.

It seemed simple enough until I met with our legal and risk team…

Hurdle #1: False representation

“But we’re not even touching the current product or pricing, it’s an extra video…”

I fumble in the risk/legal meeting, trying to push this experiment through.

Concerns were raised about it being misleading — especially the text.

“Is there anything we can change that will make it less misleading?” I ask, trying to find a middle ground.

“Well, we could change the text from…”

Here’s a short video about…

to

You might be interested in watching a short video that…

Done! Easy change!

I was on a roll and nothing could stop this feature stub going live. Amid the hustle and bustle of briefing the developers and the analytics team and setting up the experimentation board, there was one section I couldn't fill in.

I struggle to answer when the team asks,

“How many clicks on this fake video constitutes a pass or fail?”

Hurdle #2: Defining success

Some experiments make it easier to set up a success metric.

For example, experiments with conversion tied into them. But this is a feature stub experiment and we are measuring desirability — how many people click on this video, without taking the conversion rate into account (as they can't actually watch the video).

I suggest to the team,

“Well, we can work backwards. Our business goal is a +1% uplift in conversion, so as a minimum we need at least 1% to click on this video (assuming 100% of those who watch will buy). So 1%?”

“Maybe we can benchmark against video engagement rates — a quick Google says for social media it's 52%.”

“Whoa… that's a big jump. Can we benchmark against other experiments run here with engagement rates?”

“We had a button experiment with a 24% click rate.”

“But that’s a button…”

“How about the engagement rates on the videos on our marketing pages?”

“Or how about we work out how much it will cost to build these videos and then…”

“Or let’s just put 5% for now and see what happens…”

So on it went with the team, and we struggled to settle on a success metric. Suggestions ranged from a 1% to a 60% click rate (the 'work backwards' maths is sketched below).
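To make the 'work backwards' suggestion concrete, here's a rough sketch with hypothetical numbers (the 100% watch-to-buy assumption is deliberately optimistic, and nothing here was ever formalised):

```python
# Rough "work backwards" sketch: all numbers are hypothetical.
target_uplift = 0.01        # business goal: +1% uplift in conversion
watch_to_buy_rate = 1.0     # optimistic assumption: everyone who watches, buys

# Minimum click rate on the stub needed for the video to plausibly
# deliver the whole uplift on its own.
min_click_rate = target_uplift / watch_to_buy_rate
print(f"Minimum click rate needed: {min_click_rate:.0%}")   # -> 1%

# A less heroic assumption raises the floor quickly.
watch_to_buy_rate = 0.25    # only a quarter of watchers go on to buy
print(f"Minimum click rate needed: {target_uplift / watch_to_buy_rate:.0%}")  # -> 4%
```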

And when the results started to come in, it became even more important to figure out what success looks like…

Hurdle #3: When to pull the plug

I found myself checking the analytics every day — a bit like how I check my shares every day, and that might not be a good thing!

The highs, the lows, when to sell…><

“So few people clicked on it.”

“Oh so many people clicked on it.”

Taking into account our number of daily users, and unlike an A/B test that tells you how long to run for at what confidence level, I found this experiment different. Especially since we didn't have a defined success metric.

I asked some people for their advice.

“Wait for a bit longer until results stabilise. Sometimes people just click on things because it’s just there or they are curious.”

I asked our analytics guru, and she said,

“Hm… from an experiment point of view, the data volume is low and it's only been running for a week. We'd usually suggest running the experiment for at least two weeks.”
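She had a point. Here's a rough sketch of why low volume makes a click rate hard to read (made-up numbers and a plain normal approximation, not something we actually ran): the confidence interval around the observed rate stays wide until you've seen a lot of visitors.

```python
import math

def ci_half_width(p, n, z=1.96):
    """95% confidence interval half-width for an observed click rate p over n visitors."""
    return z * math.sqrt(p * (1 - p) / n)

observed_rate = 0.03  # hypothetical click rate on the stub
for visitors in (200, 1_000, 5_000, 20_000):
    hw = ci_half_width(observed_rate, visitors)
    print(f"{visitors:>6} visitors: {observed_rate:.1%} ± {hw:.1%}")
```

With only a few hundred visitors the estimate is roughly ±2%, so you can't even tell a 1% click rate from a 5% one, which is exactly the range the team was arguing over.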

After a few weeks

The team gathered around, with the Adobe Analytics screen shared in our video meeting.

“This is the number of clicks on the video… but look here — this is weird. These users were refreshing the page and re-clicking on the video…”
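One way to read that kind of behaviour is to separate raw clicks from unique clickers. Here's a minimal sketch, assuming a simple click log with a visitor ID (the real data sat in Adobe Analytics; the column names and values below are made up):

```python
import pandas as pd

# Hypothetical click log: the real data lived in Adobe Analytics.
clicks = pd.DataFrame({
    "visitor_id": ["a", "a", "a", "b", "c", "c"],
    "event":      ["video_click"] * 6,
})

raw_clicks = len(clicks)                          # every click, refreshes included
unique_clickers = clicks["visitor_id"].nunique()  # distinct people who clicked at least once
print(f"Raw clicks: {raw_clicks}, unique clickers: {unique_clickers}")

# Visitors who refreshed and clicked again
repeat_clickers = clicks.groupby("visitor_id").size()
print(repeat_clickers[repeat_clickers > 1])
```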


Written by Shi Wah Tse

Sydney based UX Designer who plays with code. I crack open ideas as a living!
