(A talk given at PIPELINE Conference – March 2017 – Beyond Continuous Delivery – Can Your Insights Keep Up – https://pipelineconf.info/2017-event/speakers/)

“You have moved to Continuous Delivery and are delivering features fast. Unfortunately, as you know, responsibility for a feature doesn’t end when you hit ‘deploy’, and when it comes to deciding how best to iterate on your work, you are often unsure which option will deliver the most impact next.

Even with substantial tracking, it’s not easy to understand whether and how a feature has changed user behaviour. It’s not that you lack data, but that the data is muddied by everything that happened at the same time as the release – product offers, a big news event, other simultaneous feature releases, or simply the difference in how your products are used at the weekend.

This was our experience. Fortunately, the answer lay in A/B testing – treating feature releases as scientific experiments, withholding changes from a portion of the audience to understand the impact of the feature, and using statistical methods to determine the differences in behaviour between the two groups.

Learn how we have adopted A/B testing methodology to understand feature impact, automating the process with an in-house tool that has enabled us to test at the speed and scale continuous delivery allows.”
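The statistical comparison the abstract alludes to can be sketched with a two-proportion z-test, a common choice for conversion-rate metrics in A/B tests. This is an illustrative example only – the talk does not specify which test the Guardian’s tool uses, and the counts below are made up:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rate
    between a control group (A) and a variant group (B)."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference.
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF (using math.erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical data: 200 of 10,000 control users converted,
# 260 of 10,000 users who saw the new feature converted.
z, p = two_proportion_z_test(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
```

A small p-value (conventionally below 0.05) suggests the difference between the groups is unlikely to be noise from unrelated events such as news spikes or weekend usage patterns.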

Amy has worked in Digital at the Guardian in roles spanning Data, Product Management and, currently, Software Development. She recently led a project to improve the way teams test feature releases, creating a tool that is now widely used across the organisation to determine feature impact.