Title
Make a Difference with Simple A/B Testing (Danielle Gordon, RailsConf 2021)
Description
There are a lot of myths around A/B tests: they’re difficult to implement, difficult to keep track of, difficult to remove, and the costs don’t seem to outweigh the benefits unless you’re at a large company. But A/B tests don’t have to be a daunting task. And let’s be honest, how can you say you’ve made positive changes in your app without them? A/B tests are a quick way to gain more user insight. We’ll start with a few easy A/B tests, then create a simple system to organise them. By the end, we’ll see how easy it is to convert to an (A/B) Test Driven Development process.
Summary
*(Summarized using AI)*
In the RailsConf 2021 presentation titled "Make a Difference with Simple A/B Testing," Danielle Gordon addresses common misconceptions surrounding A/B testing, emphasizing that these tests are not only feasible but also essential for validating app improvements. Gordon, who works at the startup Nift, describes her journey from struggling with the difficulties of implementing A/B tests to developing an efficient system for running them without cluttering the codebase.

**Key Points Covered:**

- **Understanding A/B Testing:** A/B testing compares two or more versions of a component (such as a button or a web page) to determine which performs better against a defined success metric, such as user engagement, purchases, or retention.
- **Example A/B Tests:** Gordon presents two initial tests: changing the color of call-to-action buttons and adding a featured-dessert section to a bakery marketplace app.
- **Ensuring Consistency:** She explains the importance of deterministic tests, in which a given user always sees the same variant. This is achieved by using the user’s ID as the seed for a random number generator, creating a predictable, reliable testing environment (see the first sketch below).
- **Code Organization:** Gordon outlines the need for clean code management, suggesting that each A/B test get its own experiment class to simplify updates and deletions, keeping the codebase tidy (see the second sketch below).
- **Data Tracking Mechanism:** She introduces an experiment events table that logs each user’s participation in experiments, enabling the data collection needed to evaluate a test’s success (see the migration sketch below).
- **Statistical Significance:** A critical conclusion of the talk concerns statistical significance, which determines how confident one can be in the results; Gordon advocates at least a 95% confidence level in test outcomes (see the final sketch below).
- **Future Enhancements:** She concludes by discussing potential system improvements, such as supporting multiple variants in the same test and providing a user interface so that non-developer stakeholders can easily access A/B testing data.

Overall, Gordon’s presentation aims to demystify A/B testing, encouraging developers to integrate such methodology into their projects to enhance user experience and make informed design decisions. She leaves the audience with the empowering message that A/B tests can be straightforward and beneficial for any project’s improvement efforts.
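The summary describes the seeded-RNG technique only in prose; a minimal Ruby sketch of the idea might look like the following. The `ab_variant` helper, the CRC32 hash, and the variant names are illustrative assumptions, not code from the talk.

```ruby
require "zlib"

# Deterministic bucketing: hashing the user's ID together with the
# experiment name seeds the RNG, so the same user always lands in
# the same variant, with no assignment state to store.
def ab_variant(user_id, experiment_name, variants: %w[control treatment])
  seed = Zlib.crc32("#{experiment_name}:#{user_id}")
  variants[Random.new(seed).rand(variants.length)]
end

ab_variant(42, "cta_button_color", variants: %w[green orange])
# => the same color for user 42 on every call
```

Mixing the experiment name into the seed keeps a single user from landing in the same bucket position across every experiment.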
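The one-class-per-experiment layout could then be sketched as below; the class name and methods are hypothetical and reuse the `ab_variant` helper above.

```ruby
# One class per experiment keeps every branch of a test in a single
# place: retiring the test means deleting this file and its call sites.
class ButtonColorExperiment
  NAME     = "cta_button_color"
  VARIANTS = %w[green orange].freeze

  def initialize(user)
    @user = user
  end

  # Views ask the experiment for the variant rather than branching
  # on raw user attributes themselves.
  def variant
    ab_variant(@user.id, NAME, variants: VARIANTS)
  end
end
```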
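For the experiment events table, assuming a conventional Rails schema (the table and column names here are guesses, as the summary gives no details), a migration might look like this:

```ruby
class CreateExperimentEvents < ActiveRecord::Migration[6.1]
  def change
    # One row per user per experiment event; enough to compute
    # per-variant view and conversion counts later.
    create_table :experiment_events do |t|
      t.references :user, null: false
      t.string :experiment, null: false # e.g. "cta_button_color"
      t.string :variant,    null: false # e.g. "green"
      t.string :event,      null: false # e.g. "viewed", "clicked"
      t.timestamps
    end
    add_index :experiment_events, %i[experiment variant event]
  end
end
```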
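Finally, the 95% confidence threshold can be checked with a standard two-proportion z-test; this is a textbook method, not necessarily the calculation from the talk. A |z| of at least 1.96 corresponds to p < 0.05, two-tailed:

```ruby
# Returns true when two conversion rates differ at the 95% level
# (two-tailed, two-proportion z-test with a pooled standard error).
def significant?(conv_a, total_a, conv_b, total_b)
  p_a = conv_a.fdiv(total_a)
  p_b = conv_b.fdiv(total_b)
  pooled = (conv_a + conv_b).fdiv(total_a + total_b)
  se = Math.sqrt(pooled * (1 - pooled) * (1.0 / total_a + 1.0 / total_b))
  ((p_b - p_a) / se).abs >= 1.96
end

significant?(120, 1000, 160, 1000) # => true (12% vs 16%)
```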