Scale experiment

Scale Winner Of An Experiment

Adobe, 2024

 

Project Overview

Scale experiment is a product feature in Adobe Journey Optimizer that allows optimization marketers to send the winning experience of an A/B test to a larger audience in a campaign, either automatically or manually. It is an enhancement to the existing content experiment workflow, which enables marketers to test and compare multiple variations of content.

 

My Role

Research, UX, UI, User testing

I led the UI/UX design and research for this project, including defining the design strategy, conducting user research, and running user testing.

 

The problem

 

The manual process of scaling the winner of an experiment slows down delivery of the best experience and, with it, the campaign's conversions. How can we automate the process with low risk?

The existing workflow for scaling an experiment was disruptive: users had to end the experiment campaign first and duplicate it in order to send the winner to a larger audience. As a result, the campaign inventory grew unnecessarily large.

 

The strategy

 

The proposed solution includes two ways for users to act on the experiment results:

  1. Auto scale

    Set up auto scale during experiment creation so that once a winner is found, it is sent at the preferred time.

  2. Manual scale

    Allow users to act on the experiment manually at any time after it is activated.

 

The challenge

 

There are many possible outcome combinations for an experiment, and many criteria need to be considered when reading the results, so simplifying the settings became the challenge. Instead of giving users too many options to control every scenario, we decided to focus on the most common use case: a clear winner being declared from the experiment.

HMW empower users to set up auto scaling of the winner in an easy way, and handle other outcomes with manual actions?

We iterated the design after the strategy pivot and brought those mocks into the first round of user research for concept validation.

 

Wireframes

 

Research

With the initial version, we interviewed users from five customers who participated in the alpha program for feedback and validation.

Key findings

1. “Holdout” = Control group?

This was a surprise to me. The experiment creation workflow had not been designed by me, and I assumed the design was solid. So we assumed the auto scale setting was meant to let users send the winner to the “Holdout” group, which was not included in the experiment.

However, the research participants saw “Holdout” as an audience group that would never receive any treatment experience, which is more like a control group. They expected the winner to be scaled to the rest of the audience, not to the holdout group.

2. Users want precise control over the criteria and timing for scaling the winner.

“I’d like to scale the winner after 9am even if the winner is found earlier than that and make sure the winner has enough opens.”

Each customer wanted to customize the winning threshold and set their own standard for auto scaling the winner.

3. Manual scale belongs in the report, but the status info and CTA were confusing.

Users thought it was a reminder that the experiment was not properly set up, and they hoped to see a summary of what the experiment is about in the report.

Iterations

1. Introduced “Reserved group” for scaling the experiment.

To distinguish the audience for scaling the winner from the “Holdout”, a “Reserved group” was introduced. Meanwhile, users can decide how many profiles participate in the experiment as the “Test group”. The challenge of redesigning this section was making the new groups understandable and visualizing the new distribution.
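To make the distribution concrete, here is a minimal illustrative sketch (hypothetical names and percentages, not Adobe Journey Optimizer code) of how a campaign’s profiles could be partitioned into the three groups:

```python
# Illustrative only: hypothetical partition of campaign profiles into
# "Holdout", "Test group", and "Reserved group".

def split_audience(total_profiles: int, holdout_pct: float, test_pct: float) -> dict:
    """Partition profiles into holdout, test group, and reserved group."""
    holdout = round(total_profiles * holdout_pct)      # never receives a treatment
    test_group = round(total_profiles * test_pct)      # participates in the experiment
    reserved = total_profiles - holdout - test_group   # receives the winner when it is scaled
    return {"holdout": holdout, "test_group": test_group, "reserved_group": reserved}

# Example: 100,000 profiles, 5% holdout, 20% test group -> 75,000 reserved for scaling.
print(split_audience(100_000, holdout_pct=0.05, test_pct=0.20))
```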


2. Increased flexibility in controlling when to auto scale

Users can flexibly specify separate times for auto scaling the winner and for sending the alternative treatment.

An advanced setting was added to let users fine tune the standard a winner must meet for auto scaling. They can adjust not only the confidence level for statistical significance but also the guardrail metrics. For example, if a treatment is declared the winner by the highest open rate but its unsubscribe rate exceeds the guardrail value, that treatment will not be auto scaled to a larger audience, in order to prevent losing engagement.
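The rule can be summarized in a small sketch (illustrative only; the metric names and thresholds are hypothetical, not the product’s actual logic): a winning treatment is auto scaled only if the result is statistically significant and the guardrail metric stays within its limit.

```python
# Illustrative only: hypothetical check for whether a declared winner
# should be auto scaled, combining significance with a guardrail metric.

def should_auto_scale(confidence: float,
                      unsubscribe_rate: float,
                      min_confidence: float = 0.95,
                      max_unsubscribe_rate: float = 0.01) -> bool:
    """Auto scale only a significant winner whose guardrail metric holds."""
    is_significant = confidence >= min_confidence
    passes_guardrail = unsubscribe_rate <= max_unsubscribe_rate
    return is_significant and passes_guardrail

# Winner by open rate, statistically significant, but the unsubscribe rate
# exceeds the guardrail value -> it is not auto scaled to more audience.
print(should_auto_scale(confidence=0.97, unsubscribe_rate=0.03))  # False
```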

 
 

3. Clarified experiment status and added contextual info to the manual action in the report.

Experiment statuses were rewritten in plain language for easier understanding, and a short description of the experiment and an audience distribution chart were added to the report, so that users who may not know the goal and setup of the test can better understand the context.

 
Reflection

  1. Always pay attention to the existing workflow when designing a new feature. Don’t be afraid to break the old design, since it might no longer suit the new feature.

  2. Giving context and using natural language to explain what’s going on avoids confusion and helps users make decisions faster.

  3. I wish I could have done more visual exploration of the manual scale action in the report, but due to product and time constraints, implementing a more visually appealing component was not an option. So I looked for another opportunity to redesign that report through the design principle project.
