Experiments
Understand how Experiments measures before-and-after performance once a suggestion is published live.
The Experiments page helps users measure what happened after a suggestion was applied and published. Currently, only CSS suggestions can be applied and measured — additional suggestion types are coming soon.
Instead of showing ideas or diagnostics, this page focuses on one outcome: did the change improve the page or not?
What It Does
Experiments in DynoWeb are before-and-after comparisons, not split-traffic A/B tests.
The page compares a baseline window from before the change went live with the experiment window after the updated theme is live.
Each experiment card is tied to a suggestion and shows the page, selector, status, and comparison window.
When enough data is available, it also shows key performance changes such as Conversion rate, Revenue / visit, Click rate, and Frustration index, along with page views, sessions, and estimated lift.
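To make the "estimated lift" figure concrete, here is a minimal sketch of how a before-and-after lift can be computed as a relative change between the two windows. This is an illustration only: the function name, example numbers, and the relative-change formula are assumptions for explanation, not DynoWeb's actual calculation.

```python
# Hypothetical illustration of deriving an "estimated lift" from a
# before-and-after comparison. The formula below (relative change vs.
# the baseline window) is an assumption, not DynoWeb's documented method.

def relative_lift(baseline: float, experiment: float) -> float:
    """Percentage change of the experiment window relative to the baseline window."""
    if baseline == 0:
        raise ValueError("baseline metric must be non-zero")
    return (experiment - baseline) / baseline * 100

# Example: conversion rate moves from 2.0% (baseline) to 2.3% (experiment).
lift = relative_lift(0.020, 0.023)
print(f"Estimated lift: {lift:+.1f}%")  # prints "Estimated lift: +15.0%"
```

The same comparison applies to any of the metrics above (revenue per visit, click rate, and so on); the key point is that each figure is a baseline-versus-experiment delta, not a split-traffic test result.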
One important detail: applying a suggestion to a draft theme is not enough by itself.
Experiment counting starts only after that draft theme becomes the live main theme.
How To Use It Best
Users should treat this page as the proof layer of the workflow.
The typical flow is to review a suggestion, apply a supported change to a draft theme, publish that draft live in Shopify, and then return here after traffic has passed through the updated experience.
The best way to read an experiment is to compare the baseline and experiment metrics together, not in isolation.
A change may improve clicks but not revenue, or reduce frustration without clearly improving conversion yet.
This page is most useful when users check both performance movement and data volume before deciding whether a change truly worked.
If an experiment is still Pending or Running, users should wait for more traffic.
If the card says it is waiting for the draft theme to go live, the change has not started counting yet.
