Campaign Experiment Report
Adobe 2022
Overview
The campaign experiment report is an out-of-the-box report in Adobe Journey Optimizer that lets marketers view the results of a campaign experiment and the performance of each treatment, gain insights, and make decisions about the experiment. This was a new feature added to the product.
Team & Role
I was the product designer on the project, collaborating with PM, engineering, and research to design the UX and UI of the feature.
The problem
Marketers need a way to track the performance of a content experiment and understand the results of the test so they can decide on next actions.
User flow
The experimentation report would be provided in both campaigns and journeys that have a content experiment. Campaigns and journeys already have reports for delivery and channel performance data. I created a flow chart to show how the experimentation report would be organized within the campaign and journey reports.
Key Research Findings
1. Data is great, but I want something readable to tell me the status of the experiment.
In the initial test version, the experiment's winner was highlighted at the top along with key performance metrics, followed by a table of the experiment results. To my surprise, many users I spoke with during the research didn’t understand the experiment's status, as they either missed or didn’t grasp the meaning of the “Conclusive” badge. Additionally, users found the results table difficult to interpret due to the abundance of data, leaving them unsure of what to focus on.
2. Users’ knowledge of reading the chart varies.
This report is designed to serve multiple user roles, not just optimization or experiment marketers who are more familiar with analyzing experiment results. The confidence interval is a critical indicator for helping users understand treatment performance and decide on next steps. However, research revealed that some users struggled to interpret the confidence interval chart due to differences in their knowledge levels. In the first version, when the confidence interval was embedded within the results table as small bars, some users even mistook it for a slider they could control. This made me realize that a complex chart like this doesn’t belong in the table. Users need a larger chart for clearer data comparisons, along with clear guidance on how to interpret it.
3. Reporting is not only about data; it is also about knowing what works and what doesn't.
When it comes to reporting, most people assume it’s all about analytics. But is that really all users are looking for in reports? During the research, participants frequently asked about the experiment setup, the target audience, and what each treatment involved. This feedback highlighted that the users reading the report might not be the ones who created the experiment. Ultimately, the goal of an experiment is to understand which content works better, not just to identify the winning treatment.
Iteration
1. Clarify the experiment status
I designed a new custom widget for the experiment status that uses a plain, readable sentence to briefly explain the conclusive status.
The big number widget was discarded and replaced with the new summary widgets.
I also reduced the information inside the results table: the most important metrics were moved to the end of the table with a column divider added, and the complex confidence interval column was moved to a new, separate widget, which I explain next.
2. Give space to the complex chart
I designed a new chart widget for the confidence interval and included a brief guide on how to interpret it, complete with examples.
The updated chart features an axis and a zero line, making the data range much clearer for users.
Users can hover over the chart for additional information and have the option to add multiple metrics for a more comprehensive evaluation of the experiment results.
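To make the zero line concrete, here is a minimal sketch of the kind of interval the chart plots, assuming a classical 95% two-proportion comparison between a treatment and the baseline (the product's actual statistical method may differ):

\hat{p}_A = \frac{x_A}{n_A}, \qquad \hat{p}_B = \frac{x_B}{n_B}, \qquad (\hat{p}_B - \hat{p}_A) \;\pm\; 1.96 \sqrt{ \frac{\hat{p}_A(1-\hat{p}_A)}{n_A} + \frac{\hat{p}_B(1-\hat{p}_B)}{n_B} }

For example, with hypothetical numbers, if the baseline converts 500 of 10,000 recipients (5.0%) and a treatment converts 560 of 10,000 (5.6%), the 95% interval for the lift is roughly 0.6% ± 0.62%, i.e. about −0.02% to +1.22%; because that interval crosses zero, the bar on the chart would touch the zero line and the result would read as not yet conclusive.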
3. Show more context
To give more context to audiences who might be seeing this experiment for the first time, I designed a dedicated widget for the experiment settings and removed the condensed audience distribution info at the top.
Additionally, the thumbnails in the experiment results chart allow users to click and view a screenshot of each treatment. That way, the winning treatment is no longer just a name or label; users can directly connect the data with the content in the report without going back to check the campaign details.
Reflection
Reporting is not just about beautiful data visualizations; users read reports with a specific goal in mind, especially in this experiment report scenario. I learned a lot from the user research and usability testing. If users don't understand what the report is showing, it becomes useless. Making the report readable and able to serve users of different roles and knowledge levels was the biggest challenge for me. I am glad the feedback on the final design was very positive.