Title
Testing in Production: There is a Better Way
Description
Testing in production is a common CI/CD practice nowadays, but feature flags and canary deployments can only get you so far. This talk walks you through the [Branch By Abstraction](https://martinfowler.com/bliki/BranchByAbstraction.html) pattern and the tools that give you confidence when shipping code to production, because we should be able to move fast and break nothing.
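As context for the description above (not code from the talk itself), here is a minimal Ruby sketch of the Branch By Abstraction pattern; the class names are hypothetical. Call sites depend on a single stable abstraction while the legacy and new implementations coexist behind it until the migration completes.

```ruby
# Call sites only ever use ReportGenerator#generate; the old/new branch
# lives behind the abstraction, so it can be flipped without touching them.
class LegacyReportBackend
  def generate(user)
    "legacy report for #{user}" # existing, battle-tested code path
  end
end

class NewReportBackend
  def generate(user)
    "new report for #{user}"    # new implementation, rolled out gradually
  end
end

class ReportGenerator
  def initialize(use_new_backend: false)
    @backend = use_new_backend ? NewReportBackend.new : LegacyReportBackend.new
  end

  def generate(user)
    @backend.generate(user)
  end
end

puts ReportGenerator.new(use_new_backend: false).generate("ada") # legacy path
puts ReportGenerator.new(use_new_backend: true).generate("ada")  # new path
```

Once every caller goes through the abstraction, the flag can be flipped and the legacy backend deleted, one small, reversible step at a time.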
Summary
In his talk "Testing in Production: There is a Better Way" at RubyConf AU 2023, Igor Kapkov addresses the prevalent practice of testing in production and introduces the Branch By Abstraction pattern as a means to improve confidence when deploying code. Igor starts by highlighting common industry practices around testing in production, which often involve releasing untested code or relying heavily on feature flags, canary deployments, and A/B testing, raising concerns about the adequacy of these methods.

Key points discussed include:

- **Confidence in Refactoring**: Many developers are hesitant to refactor for fear of breaking fragile code, especially if they are new to the project.
- **Problems with Test Suites**: Igor emphasizes that even teams with 100% code coverage may lack confidence because their tests rely on outdated or artificial data, which makes it hard to trust the tests against real-world behavior.
- **Efficiency and Cost**: The ongoing costs of continuous integration systems can strain business resources, making efficient and effective testing practices essential.
- **Observability Practices**: Modern software practice demands observable metrics and evidence of performance improvements, yet these often fall short of reflecting actual user interactions and behavior.
- **Testing in Production Methods**: Traditional testing practices are standard but do not mirror real-world scenarios accurately, making performance evaluation in staging challenging.

Igor uses the example of the Basilica de la Sagrada Familia to illustrate the long-term nature of complex projects and the importance of gradual improvement over sweeping change. He advocates for embracing scientific methodology in software development, treating hypotheses about code behavior as experiments.

The main tool he introduces is the Scientist library, which originated at GitHub and allows developers to compare old and new implementations in production without disrupting the user experience. Key capabilities of Scientist include:

- Running experiments that evaluate the new and the existing code's behavior simultaneously.
- Facilitating gradual refactoring by permitting controlled observation of the changes being introduced.
- Allowing for conditional runs and improved insight into user interactions (see the sketch below).

Igor concludes his talk by dispelling common myths about testing in production, affirming that the Scientist library is not solely for refactoring but can also be useful for comparing new features. He highlights how organizations can derive continuous value from well-structured engineering principles.

Overall, Igor's talk emphasizes that with the right tools and methodologies, teams can effectively navigate the risks of testing in production, showcasing the need for a methodical, data-driven approach to software deployment and refactoring. Attendees are encouraged to explore the Scientist library further and to consider its capabilities for improving their own software processes.
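To make the Scientist workflow concrete, here is a minimal sketch against the `scientist` gem's documented `science`/`use`/`try`/`run_if` API; the class, method, and experiment names here are hypothetical, and the gem's default experiment is used, which always returns the control's value and does not publish results anywhere.

```ruby
require "scientist"

class SubscriptionChecker
  include Scientist

  # Runs the legacy check (control) and the new check (candidate) in
  # production, compares their results, and always returns the control's
  # value, so users never see output from a misbehaving candidate.
  def allowed?(user)
    science "subscription-permissions" do |experiment|
      experiment.use { legacy_allowed?(user) } # old code path (control)
      experiment.try { new_allowed?(user) }    # new code path (candidate)

      # Conditionally enable the experiment, e.g. for ~10% of calls.
      experiment.run_if { rand < 0.1 }
    end
  end

  private

  def legacy_allowed?(user)
    user.fetch(:legacy_flag, false)
  end

  def new_allowed?(user)
    user.fetch(:roles, []).include?(:subscriber)
  end
end

puts SubscriptionChecker.new.allowed?(legacy_flag: true, roles: [:subscriber])
```

In real use you would define an experiment class that includes `Scientist::Experiment` in order to publish timings and mismatches to your metrics system; the gem's README covers that wiring.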