Using Metrics To Take a Hard Look at Your Code
Jake Scruggs • September 04, 2008

Summarized using AI

In the presentation titled "Using Metrics To Take a Hard Look at Your Code," Jake Scruggs, a consultant at Obtiva, discusses the importance of using metrics to improve code quality. Drawing on a background that includes ThoughtWorks and Object Mentor, Scruggs emphasizes that even highly skilled developers can produce subpar code. His talk covers several key points and strategies for improving coding practices:

  • Misconceptions About Code Quality: High intelligence in developers does not guarantee good code. Scruggs cites a seven-year-old Java application that had decayed badly despite being maintained by a skilled team.

  • The Reality of Poor Code: Writing bad code is a common occurrence across various skill levels. The need for effective metrics arises from recognizing that poor code is not confined to inexperienced developers.

  • Utilizing Metrics: Scruggs explores various code metrics, including:

    • Code Coverage: A common metric that indicates the percentage of code tested. However, relying solely on high coverage can create a false sense of security.
    • Cyclomatic Complexity: This measures the number of independent paths through the code and helps identify areas that may require refactoring. Tools like Flog assign complexity scores that point developers to where refactoring effort will pay off.
  • The Importance of Code Review: Regularly reviewing older code can reveal how it has aged and whether it needs improvement or refactoring. This practice encourages accountability among developers.

  • Collaborative Practices: The benefits of pair programming are highlighted as a way to enhance code quality and prevent errors from going unnoticed.

  • Mindset Towards Metrics: Metrics should act as tools for collective improvement rather than as punitive measures against developers. Scruggs encourages fostering a supportive environment where developers can embrace metrics as empowering rather than intimidating.

  • Real-World Applications: The presentation includes an example of refactoring a complex method for measuring distances between words. By simplifying the method and introducing helper functions, the code quality improved significantly.

In conclusion, Jake Scruggs advocates a diligent approach to code analysis using metrics to promote better coding practices and enhance code quality. The session underscores the need for developers to consistently assess their work while fostering a collaborative culture to uphold coding standards. He encourages ongoing professional development and seeking understanding over assigning blame when it comes to code quality issues.


LoneStarRuby Conf 2008

00:00:06.359 Video equipment rental cost paid for by PeepCode.
00:00:20.840 Hello, I'm Jake Scruggs. I work at Obtiva, and I'm here to tell you about using metrics to improve your code quality.
00:00:27.679 Chances are your own code is in a state of disarray too; it's not just other people's code that can be bad.
00:00:34.239 I come from a unique background; I used to teach high school physics before getting an apprenticeship at Object Mentor.
00:00:40.399 This opportunity led to a position at ThoughtWorks where I worked on six different projects.
00:00:47.520 Now, I am a consultant at Obtiva, an excellent environment that emphasizes learning about code quality.
00:00:58.280 All of these places are known for their obsession with high-quality code; Object Mentor in particular focuses on training people to become better programmers.
00:01:11.240 With this experience, you might assume that I only encounter perfect code, but that is far from the truth.
00:01:16.840 Throughout my career, I have seen plenty of truly horrendous code across different projects.
00:01:22.560 This reality highlights an important fact: hiring intelligent individuals alone does not guarantee good code.
00:01:27.680 Many companies propose systems where they check everyone, perform code reviews, and evaluate candidates thoroughly. Yet, I’ve worked on projects where everything fell short despite having a skilled team.
00:01:40.360 One project at ThoughtWorks was a monstrous seven-year-old Java application that was maintained for billing purposes.
00:01:45.399 The reality is that every single person working on that project was a ThoughtWorker, and the code was absolutely dreadful.
00:01:52.880 When I first evaluated the code coverage, it was a dismal 21%, reflecting not only poor quality but also the massive challenges present in the application.
00:02:05.200 It's important to note that writing bad code is something everyone does, regardless of their intelligence or skills.
00:02:19.080 To tackle the issue of poor code quality, several strategies can be implemented.
00:02:25.519 One valid measurement of code quality is the rate of 'WTFs per minute.' Interestingly, even good code will still provoke a few WTFs.
00:02:35.080 One popular strategy you may have heard of is pair programming, where two people work together, often preventing potential mistakes.
00:02:41.319 It’s important to keep the end-user, or maintainer, in mind when writing code.
00:02:48.040 Many developers believe their own good coding practices are preparation enough for whoever comes next, but someone else will inevitably work on that code.
00:02:58.440 Even a perfect piece of code can suffer once it's handed over to another person, especially if new features are added without understanding the original design.
00:03:06.200 People often write code relevant to their current context, forgetting that new developers join their projects frequently.
00:03:15.799 Another valuable strategy is to review your older code every six months, assessing how well it stands the test of time.
00:03:24.120 There have been numerous occasions where I’ve loved a particular piece of code at first but found it to be an incomprehensible mess after some time.
00:03:35.660 Many of the audience members likely already practice these strategies or something similar, yet the result can still be seemingly poor code.
00:03:47.760 I've been part of XP teams where every best practice is applied, yet I still encounter problematic sections of code, the kind developers instinctively warn each other away from.
00:04:03.480 This practice of letting things slide often results from a combination of excitement for newly learned techniques and a lack of adherence to best practices.
00:04:10.319 One of the funny things is that the late George Carlin had a rather insightful observation about driving speeds.
00:04:18.360 He noted how, when driving along, the tendency is to view everyone going faster as erratic and everyone going slower as foolish.
00:04:26.360 This same phenomenon happens in coding, where programmers assume that their current level of skill represents a quality standard.
00:04:34.200 There have been times I first saw a piece of Ruby code and thought it was nonsense, but later, upon further understanding, I recognized cleverness behind it.
00:04:50.840 It's essential to remember that your perspective may not reflect where others are in their coding journey.
00:05:00.680 We often forget to review old code, and many developers are likely to avoid using the blame tool to examine past mistakes.
00:05:07.440 The rare occasions when we do look at old code often bring unpleasant revelations, especially when you realize you were the one who wrote those messy pieces.
00:05:17.600 Whenever I encounter poor code, I seek out the author to understand their rationale, and I often discover they faced constraints or demands that shaped their decisions.
00:05:29.720 Often that code was written in response to demands that have since disappeared, leaving behind convoluted processes or overly complex implementations that are no longer necessary.
00:05:40.680 Another issue arises when multiple developers add to a piece of framework code, leading it to become overly complicated.
00:05:49.440 Therefore, I advocate for using metrics—code metrics—as a way to gain another perspective on our work; a means to step outside of our bubble.
00:05:59.240 Let’s begin by discussing code coverage, which is quite simple and likely already in use by your team.
00:06:09.720 Often, a team member will set up code coverage tools, which then become largely ignored soon thereafter.
00:06:22.960 It's common to find code coverage tools attached to a project that haven’t been maintained or checked for months.
00:06:34.000 This phenomenon often spurs debate around what constitutes an acceptable coverage number.
00:06:42.520 As your tests execute, they generate coverage stats that translate into a percentage, leading some to aim for 100%.
00:06:53.640 Nonetheless, full coverage doesn't necessarily imply quality, just as low coverage doesn't always mean bad code.
00:07:00.560 For example, you can cover multiple lines of code with a single insufficient test that doesn’t actually prevent bugs.
00:07:07.600 So, asking, 'Are my tests effective?' becomes a significant query.
00:07:17.560 A worthwhile goal is to have as many tests as there are possible paths through a method; ideally, testing every route that could be taken.
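As an aside, here is a minimal sketch of that distinction; the shipping_cost method and its tests are invented for illustration (using today's minitest for brevity), not taken from the talk. The first test executes every branch, so a coverage tool reports 100%, yet it would pass even if the method returned garbage; the one-test-per-path versions actually pin the behavior down.

```ruby
require 'minitest/autorun'

# Hypothetical method with two paths through it.
def shipping_cost(weight)
  if weight > 10
    weight * 2.0 # heavy parcels charged by weight
  else
    5.0          # flat rate for light parcels
  end
end

class ShippingCostTest < Minitest::Test
  # Touches both branches, so line coverage hits 100%,
  # but asserts nothing about the results.
  def test_runs_without_error
    shipping_cost(20)
    shipping_cost(1)
    assert true
  end

  # One test per path actually verifies the behavior.
  def test_heavy_parcel_charged_by_weight
    assert_equal 40.0, shipping_cost(20)
  end

  def test_light_parcel_flat_rate
    assert_equal 5.0, shipping_cost(1)
  end
end
```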
00:07:29.840 To summarize my sentiments regarding code coverage: it arguably serves as one of the more flawed metrics.
00:07:39.440 Despite having some merits, it often creates a false sense of security as individuals become enchanted with high numbers.
00:07:47.600 Companies sometimes mandate coverage percentages that can lead to low-quality tests being created, which ultimately hurt the codebase.
00:08:01.680 Low coverage at least points to where further testing is essential, and a file with zero coverage is an unambiguous call for attention.
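For context: in 2008 the usual Ruby coverage tool was rcov; current Ruby ships a Coverage module in the standard library. A minimal sketch using that module, with the measured code written to a tempfile purely so the example is self-contained:

```ruby
require 'coverage'
require 'tempfile'

# Stand-in for real project code, written to a tempfile so this runs on its own.
demo = Tempfile.new(['greeter', '.rb'])
demo.write(<<~'RUBY')
  def greet(name)
    if name
      "hi, #{name}"
    else
      "hi, stranger"
    end
  end
RUBY
demo.close

Coverage.start    # must run before the measured code is loaded
require demo.path
greet('lonestar') # exercises only the first branch

Coverage.result.each do |file, lines|
  code_lines = lines.compact # nil marks non-executable lines
  covered    = code_lines.count { |hits| hits > 0 }
  puts format('%s: %d/%d lines covered', File.basename(file), covered, code_lines.size)
end
```

A file showing zero covered lines in a report like this is exactly the zero-coverage signal described above.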
00:08:16.520 Turning to cyclomatic complexity: it's another useful metric, aimed at evaluating code with many paths and branches.
00:08:27.880 One of the common tools for this is Flog, which provides scores based on assignments, branches, and method calls.
00:08:40.360 Flog uses an ABC-style metric, scoring assignments, branches, and calls, with each operation weighted by how much complexity it adds to the method.
00:08:57.840 A frequent criticism of Flog is the ambiguity around what the scores actually mean.
00:09:06.040 As a rule of thumb, Flog scores between 0 and 10 usually mean uncomplicated code, while scores in the 21 to 60 range typically indicate a need for refactoring.
00:09:13.760 Scores above 100 suggest more substantial refactoring needs to take place due to underlying complexity.
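To make those thresholds concrete, here is an invented example, not code from the talk, and exact scores vary by Flog version. Flog runs from the command line and prints per-method scores sorted worst first:

```ruby
# Shell usage (output format varies by version):
#   gem install flog
#   flog lib/*.rb
#
# Assignments, branches, and calls each add points, so a tangle like
# this scores far higher than an equivalent straight-line method:
def describe_number(n)
  if n < 0
    sign = 'negative'
    size = n.abs > 100 ? 'large' : 'small'
    "#{sign} #{size}"
  elsif n.zero?
    'zero'
  else
    n > 100 ? 'large positive' : 'small positive'
  end
end

puts describe_number(-500) # => "negative large"
```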
00:09:24.000 One suggestion is to keep a hit list of the project's worst offenders, so the team knows which methods to prioritize when refactoring.
00:09:48.320 That practice identifies and addresses the specific methods that have gone awry, rather than chasing vague complaints about bad code.
00:10:06.080 My presentation includes a real-world code example, because contrived or fabricated examples only get you so far.
00:10:15.680 We can find actual methods from the codebase that need examination, measuring them against best practices.
00:10:25.520 For instance, I've discovered a method responsible for measuring the distance between two words, which is rather complex.
00:10:39.480 This method, used in a search engine, maps similar terms together even when the spelling isn't exact.
00:10:53.200 As I began refactoring this complex method, I focused on breaking the accumulated complexity down into manageable components.
00:11:06.720 By introducing meaningful helper methods, the core function became clearer, allowing me to better understand what each component contributed to the overall process.
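The actual method isn't reproduced in this transcript. As a hedged stand-in, assuming the distance in question is a Levenshtein-style edit distance (a common choice for matching words whose spelling isn't exact), this sketch shows the shape of the result: a small core loop with the details pushed into named helpers.

```ruby
# Hypothetical reconstruction: edit distance between two words, with the
# dynamic-programming details extracted into small, named helper methods.
def edit_distance(a, b)
  rows = Array.new(a.length + 1) { |i| initial_row(i, b.length) }
  (1..a.length).each do |i|
    (1..b.length).each do |j|
      rows[i][j] = cheapest_step(rows, i, j, a, b)
    end
  end
  rows[a.length][b.length]
end

def initial_row(i, width)
  # Row 0 is the distance from the empty prefix: just the target length.
  i.zero? ? (0..width).to_a : [i] + Array.new(width, 0)
end

def cheapest_step(rows, i, j, a, b)
  substitution = a[i - 1] == b[j - 1] ? 0 : 1
  [rows[i - 1][j] + 1,                # delete a character
   rows[i][j - 1] + 1,                # insert a character
   rows[i - 1][j - 1] + substitution  # substitute (free if they match)
  ].min
end

puts edit_distance('rubby', 'ruby') # => 1
```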
00:11:23.999 The rewards of this exercise bear out the adage that good practices yield better understanding, and better code.
00:11:40.360 As a result, the shared responsibility in writing code and maintaining quality can ultimately lead to better outcomes within teams, benefiting everyone involved.
00:11:54.360 I encourage you all to take a diligent approach towards code analysis, recognizing that consistent evaluation leads to good quality improvements.
00:12:00.080 Metrics can guide prioritization, highlighting the most significant issues so the team can get ahead of bugs.
00:12:12.440 Always remember that your mindset is critical in nurturing a culture of quality—consistently strive to improve.
00:12:20.640 Thank you. I'm Jake Scruggs from Obtiva, and I'm happy to take any questions you might have.
00:12:34.960 I suggest visiting my blog for more insights and resources around metrics.
00:12:42.480 Thank you for your time and interest. Are there any questions regarding what we discussed?
00:12:49.760 Yes, I understand the concerns regarding metrics, especially within a development team—what's your take on utilizing them effectively without micromanaging?
00:12:56.240 That's a crucial question; metrics should serve more as a guide than a weapon to penalize developers.
00:13:08.920 Education around what metrics denote and improving the overall coding strategies are essential for fostering a supportive development environment.
00:13:20.480 Ultimately, my hope is that developers embrace contributing to the metrics actively—let it serve as empowerment, not fear.
00:13:32.960 That's the takeaway: work together responsibly to uplift coding standards while leveraging metrics as a tool for unified improvement.
00:13:43.760 So let us share knowledge and continue to refine our coding practices. There's no single solution to this equation—our collective efforts are vital.
00:13:54.320 Thank you once again for your time, and I look forward to our collaborative coding endeavors.
00:14:03.920 I hope you found this session informative and engaging.
00:14:10.000 Thank you for being a part of this discussion. I'll answer any further questions during the Q&A.