How should I interpret a wide confidence interval in an A/B test?
#1
I’m trying to interpret the results of my A/B test on a website feature, and my confidence interval for the lift in conversion rate is really wide. It makes me question whether the observed difference is just noise or something meaningful, and I’m not sure how to proceed with a decision.
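For anyone wondering what "really wide" looks like in practice, here's a minimal sketch of the standard Wald (normal-approximation) interval for the difference in conversion rates. The counts are hypothetical, just to show how small samples produce an interval that straddles zero:

```python
import math

def lift_ci(conv_a, n_a, conv_b, n_b, z=1.96):
    """Approximate 95% Wald CI for the absolute lift (B - A) in conversion rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Standard error of the difference between two independent proportions
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    diff = p_b - p_a
    return diff - z * se, diff + z * se

# Hypothetical counts: 100/2000 vs 120/2000 conversions
lo, hi = lift_ci(100, 2000, 120, 2000)
```

With these numbers the interval contains zero, so the observed +1pp lift is consistent with no effect at all.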
Reply
#2
Been there. The interval stays wide and you start doubting the whole thing. Some days it feels like you're watching noise in a rainstorm rather than a signal.
Reply
#3
We waited for more data and tried a quick Bayesian read on the numbers. The probability of a real benefit stayed small enough that it was not convincing, which left us stuck between continuing and stopping.
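For reference, a "quick Bayesian read" like this can be done with a few lines of Monte Carlo: put a flat Beta(1,1) prior on each arm's rate and estimate P(rate_B > rate_A) from posterior draws. The counts below are hypothetical, not from our test:

```python
import random

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, draws=100_000, seed=0):
    """Monte Carlo estimate of P(rate_B > rate_A) under flat Beta(1,1) priors."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        # Posterior for a Binomial rate with a Beta(1,1) prior is Beta(1+s, 1+f)
        pa = rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        pb = rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        if pb > pa:
            wins += 1
    return wins / draws

p = prob_b_beats_a(100, 2000, 120, 2000)
```

A value around 0.9 sounds high but still means roughly a 1-in-10 chance B is no better, which is often not enough to ship on.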
Reply
#4
Could the problem be something else entirely, like how we count conversions or the traffic mix? Maybe the uplift is real only in a subset.
Reply
#5
We paused the test to avoid spending more on a possibly noisy result and started a quick review of the data pipeline and the baseline.
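One cheap pipeline check worth doing during that review is a sample-ratio-mismatch (SRM) test: if the split of visitors between arms deviates significantly from the intended ratio, the assignment or logging is broken and the CI is untrustworthy regardless of its width. A minimal chi-square sketch (df = 1, no SciPy needed):

```python
def srm_check(n_a, n_b, expected_ratio=0.5):
    """Chi-square sample-ratio-mismatch check for a two-arm test (df = 1)."""
    total = n_a + n_b
    exp_a = total * expected_ratio
    exp_b = total - exp_a
    chi2 = (n_a - exp_a) ** 2 / exp_a + (n_b - exp_b) ** 2 / exp_b
    # 3.84 is roughly the 95th percentile of the chi-square(1) distribution
    return chi2, chi2 > 3.84

chi2, flagged = srm_check(1000, 1200)  # hypothetical arm sizes
```

A flagged SRM is a reason to discard the run, not to keep collecting data.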
Reply
#6
Sometimes I drifted to other experiments and then circled back to this one. It felt like chasing a ghost; the sample size never quite felt right.
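The "sample size never felt right" feeling usually means the test was sized by gut. The standard two-proportion power formula gives a concrete target up front; here's a sketch using the usual z-values for 5% significance and 80% power (the baseline rate and minimum detectable effect are placeholder numbers):

```python
import math

def sample_size_per_arm(p_base, mde, alpha_z=1.96, power_z=0.84):
    """Approximate per-arm n to detect an absolute lift `mde`
    over baseline rate `p_base` at ~5% alpha and ~80% power."""
    p_alt = p_base + mde
    var = p_base * (1 - p_base) + p_alt * (1 - p_alt)
    return math.ceil((alpha_z + power_z) ** 2 * var / mde ** 2)

# Placeholder inputs: 5% baseline, hoping to detect a +1pp lift
n = sample_size_per_arm(0.05, 0.01)
```

Running the sizing before launch also tells you when the interval *should* have narrowed enough to decide, so you stop second-guessing mid-test.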
Reply

