Description
Does your test suite fail randomly now and then? Maybe your cucumbers are a little flakey. Maybe you're busy and just kick off the build again... and again. If you've had a week or two where the build just won't pass, you know how fragile a TDD culture is. Keep yours healthy, or start bringing it back to par today. Why do tests fail randomly? How do we hunt down the causes? We'll review known sources of "randomness", with extra focus on test pollution and integration suites.

I empower people to make a difference with appropriate technology. Now I'm working to turn around the diabetes epidemic. I was a founding partner and lead dev in a tech cooperative for nonprofits, and a team anchor for Pivotal Labs, training and collaborating with earth's best agile engineers.

Help us caption & translate this video! http://amara.org/v/FG1E/
Summary
The video titled "Eliminating Inconsistent Test Failures," presented by Austin Putman at RailsConf 2014, addresses the common issue of random test failures in software development, particularly those involving Cucumber and Capybara tests. The speaker discusses the severity of these failures and their impact on team productivity and morale.

### Key Points Discussed

- **Random Failures in Testing**: The prevalence of random test failures is highlighted, with audience participation showing that many attendees have experienced similar issues.
- **Testing Culture**: Putman emphasizes the importance of a strong testing culture and the consequences of ignoring test failures, which lead to reduced trust in build integrity and delayed feedback on code deployments.
- **Sources of Randomness**: Various sources of random failures are examined, including test pollution caused by shared state between tests, race conditions, and external dependencies such as third-party services and time zone issues.
- **Specific Cases**: Putman shares anecdotes from his experience at Omada Health, where random test failures were rampant during a critical development cycle, and describes the strategies employed to manage the chaos.
- **Mitigation Strategies**: The speaker presents several tactics for mitigating random failures (sketched in the code examples after this summary):
  - Always assert the existence of necessary elements before interacting with them, to avoid race conditions.
  - Use immutable fixture data to prevent database state inconsistencies, especially with PostgreSQL.
  - Rely on tools like mutexes to avoid database access collisions.
  - Record the database state at the time of a failure to help reproduce and diagnose the issue.
  - Run tests in a specific, repeatable order when test pollution is suspected.
  - Use libraries such as WebMock and VCR to handle external API calls reliably during tests.
- **Conclusion**: By focusing on the root causes of random failures and applying the right set of tactics, teams can achieve a remarkably stable testing environment, significantly reducing the occurrence of random test failures over time.

Through this presentation, Putman aims to provide actionable insights into managing test reliability, transforming a problematic test culture into a productive and trustworthy one.
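A minimal sketch of the "assert before you act" tactic from the first mitigation bullet, written as a Capybara feature spec. The route, selectors, and button label are hypothetical; the point is that a waiting assertion such as `have_css` makes Capybara retry until the element is rendered, so the interaction that follows cannot race the page's JavaScript.

```ruby
require "capybara/rspec"

RSpec.feature "Joining a group", type: :feature, js: true do
  scenario "participant joins a group" do
    visit "/groups/42" # hypothetical route

    # Waiting assertion: have_css retries until the element appears or
    # Capybara's default wait time runs out, so the click below cannot
    # fire before the button has been rendered.
    expect(page).to have_css("button", text: "Join group")

    click_button "Join group"
    expect(page).to have_content("Welcome to the group")
  end
end
```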
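For the test-ordering point, one common setup (an assumption on my part; the talk itself centers on Cucumber suites) is an RSpec configuration that randomizes example order while recording the seed, so a pollution-induced failure can be replayed exactly:

```ruby
# spec/spec_helper.rb
RSpec.configure do |config|
  # Run examples in a random but reproducible order; the seed is printed
  # with the results, so a flaky run can be replayed later.
  config.order = :random
  Kernel.srand config.seed
end
```

A failing order can then be reproduced with `rspec --seed 12345`, and `rspec --bisect` will search for the minimal set of examples that still triggers the failure, which usually points straight at the polluting test.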
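And a sketch of the WebMock/VCR point: block real HTTP during the suite and replay recorded cassettes instead. The `WeatherClient` class, cassette contents, and expectation are illustrative assumptions, not code from the talk.

```ruby
require "vcr"
require "webmock/rspec" # WebMock blocks real HTTP requests in specs

VCR.configure do |c|
  c.cassette_library_dir = "spec/cassettes"
  c.hook_into :webmock
  c.configure_rspec_metadata! # enables the :vcr example tag
end

RSpec.describe WeatherClient, :vcr do
  it "reads the forecast from a recorded cassette, never the live API" do
    expect(WeatherClient.new.forecast("Portland")).to include("rain")
  end
end
```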