How do I know if an A/B test result is real or just noise with overlapping CI?
#1
I’m trying to interpret the results of my A/B test for a new website feature, but I’m getting tripped up by the confidence interval overlapping zero. My point estimate suggests a positive lift, but that interval makes me question if the effect is real or just noise. How do you decide whether to call this a win or not?
#2
That lift looked solid in the early numbers, but the confidence interval touches zero, so I'd pause before calling it a win. I usually wait for more data or a larger sample before declaring success, unless the upside is truly game-changing.
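To make "touching zero" concrete, here is a minimal sketch of the usual check: a 95% normal-approximation confidence interval for the absolute lift between two conversion rates. All of the counts below are hypothetical, just to illustrate the shape of the calculation.

```python
import math

def lift_ci(conv_a, n_a, conv_b, n_b, z=1.96):
    """95% CI for p_b - p_a (absolute lift), unpooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    diff = p_b - p_a
    return diff - z * se, diff + z * se

# Hypothetical numbers: 10.0% vs 11.0% conversion on 5,000 users per arm.
lo, hi = lift_ci(500, 5000, 550, 5000)
print(f"lift CI: [{lo:.4f}, {hi:.4f}]")  # interval straddles zero here
```

If the interval includes zero, as it does with these numbers, the data are consistent with no effect at all, which is exactly the "pause before calling it a win" situation.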
#3
As a quick follow-up, we checked the metric over a longer period and compared it against a nearby control, then decided to run a second small batch rather than push a full rollout. Even so, the guardrail metrics were not clear, so we kept expectations modest.
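A rough way to size that "second small batch" is a standard power calculation: how many users per arm you would need to detect a given absolute lift at 80% power and a two-sided alpha of 0.05, under the normal approximation. The baseline rate and lift below are hypothetical placeholders.

```python
import math

def sample_size_per_arm(p_base, lift, z_alpha=1.96, z_beta=0.84):
    """Approximate users per arm to detect `lift` over `p_base`
    at 80% power (z_beta) and two-sided alpha = 0.05 (z_alpha)."""
    p_new = p_base + lift
    var = p_base * (1 - p_base) + p_new * (1 - p_new)
    return math.ceil((z_alpha + z_beta) ** 2 * var / lift ** 2)

# Hypothetical: detect a +1 percentage point lift on a 10% baseline.
n = sample_size_per_arm(0.10, 0.01)
print(n)  # on the order of ~15k users per arm
```

The punchline is that small lifts on modest baselines need far more traffic than intuition suggests, which is why an underpowered first run so often lands with its interval straddling zero.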
#4
Maybe the issue isn't the feature at all, but data quality or the wrong metric entirely. I've seen big lifts disappear when the sample splits looked off or users churned at the wrong moment, and then you're just chasing noise.
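One concrete version of "the sample splits looked off" is a sample ratio mismatch (SRM) check: a chi-square test of whether the observed arm sizes are consistent with the intended split. If this fails, the assignment pipeline is suspect and any lift on top of it is unreliable. The counts below are hypothetical.

```python
import math

def srm_p_value(n_a, n_b, expected_ratio=0.5):
    """Chi-square (1 df) test of observed arm counts vs intended split."""
    total = n_a + n_b
    exp_a = total * expected_ratio
    exp_b = total * (1 - expected_ratio)
    chi2 = (n_a - exp_a) ** 2 / exp_a + (n_b - exp_b) ** 2 / exp_b
    # For chi-square with 1 df: p = erfc(sqrt(chi2 / 2))
    return math.erfc(math.sqrt(chi2 / 2))

# Hypothetical: a 300-user gap on ~10k total under an intended 50/50 split.
p = srm_p_value(5000, 4700)
print(f"SRM p-value: {p:.4f}")  # very small -> split is suspect
```

A tiny p-value here means the split itself is broken, so the right move is to fix the assignment mechanism before re-interpreting any lift, rather than collecting more data through the same faulty pipeline.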
#5
I want to call it a win, but I'm not sure: am I chasing the wrong metric, or is this effect real enough to justify taking on more risk?