Title
Re-thinking Regression Testing
Description
Regression testing is invaluable for knowing whether changes to the code have broken the software. However, no matter how many tests you have in your regression buckets, bugs always seem to creep in undetected. As a result, you are no longer sure you can trust your tests or your methodology, and you are ready to change that. I will present a powerful technique called mutation testing that will help make your tests capable of detecting future bugs. I will also give you a metric for assessing the effectiveness of your tests in terms of regression, so that future changes to your software can be made with impunity.

The audience will learn:
- What mutation testing is and why it works.
- When and how to apply mutation testing.
- How to improve their tests so they detect bugs introduced during the normal evolution of software.

Help us caption & translate this video! http://amara.org/v/FG2m/
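To make the idea concrete, here is a minimal Ruby sketch of the concept (the method names and values are illustrative and are not taken from the talk): a deliberate small change, a "mutant," is applied to the code, and a test is judged by whether it notices.

```ruby
# Original implementation under test.
def eligible_for_discount?(age)
  age >= 65
end

# A hand-made "mutant": the boundary operator is changed from >= to >.
def eligible_for_discount_mutant?(age)
  age > 65
end

# A weak test passes against BOTH versions, so this mutant would
# survive undetected -- a sign the test adds little regression value.
raise "weak test failed" unless eligible_for_discount?(70)
raise "weak test failed" unless eligible_for_discount_mutant?(70)

# A boundary test passes against the original but fails against the
# mutant: it "kills" the mutant, which is what we want.
raise "boundary test failed" unless eligible_for_discount?(65)
puts "mutant killed" unless eligible_for_discount_mutant?(65)
```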
Date
Summary
In this talk, titled "Re-thinking Regression Testing" and given at MountainWest RubyConf 2014, speaker Mario Gonzalez emphasizes the importance of effective regression testing, focusing in particular on the inherent limitations of traditional code coverage metrics. Gonzalez argues that, despite achieving high code coverage percentages, many test suites fail to detect regressions, leading to a false sense of security about software reliability.

**Key Points Discussed:**

- **Lack of Confidence in Testing:** Traditional regression testing practices often fail to provide a clear understanding of their effectiveness, leading to uncertainty about which areas of the code are sufficiently tested.
- **Redundancy Issues:** Gonzalez points out that high test redundancy can obscure meaningful coverage contributions, with an average of 25% redundancy leading to only 55% regression detection even with 85% code coverage in Ruby.
- **Mutation Testing Introduction:** To remedy the flaws of traditional metrics, the speaker introduces mutation testing (or mutation analysis) as a powerful technique for evaluating test effectiveness. The technique involves modifying existing code and assessing whether the tests can detect these changes (loss of functionality).
- **Mutation Score:** Mutation analysis yields a numeric mutation score based on the percentage of "mutants" (modified versions of the code) that the tests successfully detect. A higher score correlates with better regression-detection capability; a sketch of the calculation follows below.
- **Practical Application:** Gonzalez shares guidelines for implementing mutation analysis, suggesting at least 50 tests for a good sample size and highlighting the importance of focusing on lower-order mutants for detecting common errors, while also trying higher-order mutants for more complex scenarios.
- **Implications of Code Coverage:** He critiques the practice of relying heavily on code coverage, since it often does not accurately represent the reliability of tests over time, and proposes mutation analysis as a preferable metric.

Through research findings, Gonzalez illustrates that higher mutation scores correlate with lower redundancy and thus provide a more reliable measure for assessing test quality.

**Conclusions and Takeaways:**

- Prioritize mutation testing over mere code coverage to achieve better regression detection.
- Use mutation scores to guide improvements in testing practices.
- Understand that technical debt and under-testing can lead to undetected bugs, even in areas of the codebase that appear over-tested.
- Keep coupling in the code low to enhance the effectiveness of mutation testing.

Gonzalez concludes by encouraging developers to embrace mutation testing as an integral part of their testing strategy, providing the confidence required to make changes and improvements in their codebases.
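As a rough sketch of how a mutation score could be computed (a toy illustration with assumed names and values, not the tooling or figures from the talk; in practice a Ruby tool such as the mutant gem generates and runs the mutants automatically):

```ruby
# Toy mutation analysis: run the same test against several mutant
# implementations of a method and count how many are "killed"
# (i.e. cause the test to fail).

ORIGINAL = ->(a, b) { a + b }

# Hand-written mutants; a real tool derives these automatically.
MUTANTS = [
  ->(a, b) { a - b },   # operator replaced
  ->(a, b) { a },       # second argument ignored
  ->(a, b) { a + b },   # "equivalent" mutant: behaves like the original
]

# The regression test being evaluated.
test = ->(impl) { impl.call(2, 3) == 5 }

killed = MUTANTS.count { |mutant| !test.call(mutant) }
score  = 100.0 * killed / MUTANTS.size

puts "mutation score: #{score.round}% (#{killed}/#{MUTANTS.size} mutants killed)"
# => mutation score: 67% (2/3 mutants killed)
```

The surviving third mutant in this sketch also hints at why a score below 100% is not automatically a test failure: some mutants are semantically equivalent to the original code and cannot be killed by any test.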