
Q&A: Code Metrics

by Piotr Solnica and Markus Schirp

In the Q&A session titled 'Code Metrics', recorded at the wroc_love.rb 2014 event, speakers Piotr Solnica and Markus Schirp addressed the use of code metric tools in software development. They opened the discussion by gauging the audience's familiarity with metric tools including Flog, SimpleCov, and Mutant, and underscored some crucial points about applying these metrics effectively.

Key points discussed include:
- The audience's experiences with metric tools, highlighting variations in adoption and utilization.
- The challenges of striving for 100% test coverage, illustrated by one speaker's setbacks on commercial projects, where the cost was high and the return on investment low.
- The speakers emphasized that metrics should serve as supplementary tools supporting development practices rather than the primary focus, warning against an obsession with numbers.
- They mentioned Code Climate favorably, noting its effectiveness in visualizing software quality for clients, which enhances communicative clarity without causing undue alarm.
- Important insights were shared on interpreting metrics and their implications, stressing the significance of trends over strict numeric values.
- One of the speakers shared his experience of restructuring a poorly rated codebase, emphasizing that a metric's relevance shifts as a project evolves.
- The speakers also discussed the integration of metrics in Continuous Integration (CI) processes to promote accountability among team members, while balancing automation and human interaction to maintain code quality.
- They highlighted which metrics are vital for management discussions, recommending that complexity and duplication be closely examined to identify aspects that may require refactoring.
- Lastly, the speakers concluded with a strong reminder that while metrics are helpful, developers should not become reliant on them as a crutch; instead, open discussions about code quality should remain at the forefront.

Overall, the session reinforced that effective communication about metrics can lead to better decision-making in coding practices and the successful management of code quality over time.

00:00:13.360 Welcome to our Q&A session regarding metric tools. Please welcome Piotr Solnica and Markus Schirp.
00:00:22.880 So, who here is using a metric tool? Please raise your hands. Any metric tool at all? Nice! Now let's do the inverse. Who is not using any metric tools? Interesting. And for those who didn’t raise their hands, could you share why you haven’t started using them?
00:00:44.600 How many of you are using a metric tool named Flog? Ah, it seems not many. How about SimpleCov? And who uses SimpleCov with 100% coverage enforced? Okay, now who is using Mutant? Oh my God, nobody! Well, there’s one person here.
00:01:05.400 Thank you. Is your business rule to achieve 100% coverage? No? That’s as expected. This Q&A session is about metric tools: how to use them and how not to use them. I have some perfect examples of how not to use them. I tried to achieve 100% mutation coverage and 100% SimpleCov coverage on a commercial project, and it failed because the cost was just too high and the return on investment didn’t exist.
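(For context: a minimal sketch of enforcing a coverage floor like the one described above. `minimum_coverage` is real SimpleCov API; the filter and project entry point are placeholders.)

```ruby
# spec/spec_helper.rb -- SimpleCov must start before application code loads
require 'simplecov'

SimpleCov.start do
  add_filter '/spec/'    # exclude the test suite itself from the report
  minimum_coverage 100   # fail the run when line coverage drops below 100%
end

require 'my_gem'         # hypothetical project entry point
```

Mutation coverage is a separate concern: Mutant runs from the command line against the test suite rather than from a config block like this.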
00:01:37.320 But this Q&A session involves you, the audience, so we want to hear your stories. Who among you has tried 100% testing but failed to achieve it? Is there anyone who wants to share their experience, perhaps a failure story?
00:02:04.240 It might not feel like sharing a failure, but I’ve tried and it never worked out. That experience was the reason we completely stopped utilizing coverage metrics. The nature of the projects we handle heavily dictates how much of the code we can test and how much testing is sensible. You just can’t compare projects using these metrics; some tests are very difficult to implement.
00:02:55.480 Thank you for sharing. Now, who has ever had a pull request rejected due to metric violations? No one? I feel rather lonely then. So, to kick this Q&A into gear, do you have specific questions? Here’s one: Is Code Climate hot or not?
00:03:39.440 From my point of view, Code Climate is quite impressive. It’s not overly strict with the metrics, which is beneficial, especially since we use these metrics in our open-source projects. It's relatively easy to achieve a score of 4.0, which is decent. While you can always aim for higher, I’d suggest using Code Climate in projects where security monitoring is essential, especially in open-source libraries.
00:04:35.360 I have a different experience with Code Climate. For me, it excels at giving clients an intelligible picture of code quality. Clients may not understand Flog or Flay scores, but they can grasp quality through color indicators like red, green, and yellow. If the code is green, it is in better shape overall than yellow code. This makes the situation explainable to clients, which is vital in commercial projects.
00:05:49.000 Now, let's discuss how to interpret metrics. This depends on the specific metric in question. My journey with metrics began years ago with a tool called MetricFu, which made it easy to generate graphs from various metric tools. I initially didn’t pay much attention to it but eventually began using it daily, heavily influenced by our work on DataMapper and Ruby Object Mapper.
00:07:41.640 I became overly focused on scores and constant refactoring, eventually realizing that I was getting lost in the details. Metrics should not be the main focus, but rather a by-product of excellent TDD practices. It’s important to understand the costs associated with maintaining code, and if you get stuck chasing numbers, you're likely wasting your time. It’s essential to have an overview of your code quality without being overly fixated on numbers.
00:09:39.720 So when interpreting numbers, don't get obsessed with hard metrics. There are tools that visualize documentation coverage without displaying specific numbers, but rather through color-coding. Understanding whether libraries have acceptable coverage based purely on the color indicators is often more practical than fixating on numeric percentages.
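(To make the color-coding idea concrete: a hypothetical helper that reduces a raw percentage to a coarse band, so readers react to red/yellow/green instead of chasing decimals. The thresholds are invented for the example.)

```ruby
# Hypothetical: map a raw coverage percentage to a coarse color band.
def coverage_band(percent)
  case percent
  when 0...70  then :red     # clearly insufficient
  when 70...90 then :yellow  # acceptable, worth watching
  else              :green   # good enough; stop polishing the number
  end
end

coverage_band(85.3) # => :yellow
```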
00:11:36.720 I’ve learned to write code that naturally achieves high scores in code quality without designing for the metrics. Learning to write clean code took time, and while the journey was challenging, I now find it easier to produce efficient code.
00:12:15.200 As for whether the metrics would improve if we rewrote everything in Java: I'm just kidding. What I believe is that Code Climate does well by focusing on trends rather than absolute numbers. The idea is to ensure that no commit deteriorates code quality.
00:12:52.000 Keeping the trend pointed in the right direction is what matters. Hardly anyone focuses on this approach today, but we should.
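(One way to encode "no commit may make things worse" is a ratchet script in CI: compare the current complexity total against a committed baseline and fail on regression. A sketch, assuming Flog's Ruby reporting API (`Flog#flog`, `#total_score`); the baseline file name is an invented convention.)

```ruby
#!/usr/bin/env ruby
# Hypothetical CI ratchet: fail when total Flog complexity regresses.
require 'flog'

BASELINE_FILE = '.flog_baseline' # invented convention for this sketch

flog = Flog.new
flog.flog(*Dir['lib/**/*.rb'])   # score every Ruby file under lib/
current = flog.total_score.round(1)

baseline = File.exist?(BASELINE_FILE) ? File.read(BASELINE_FILE).to_f : current

abort "Flog total rose from #{baseline} to #{current} -- refactor or justify." if current > baseline

# Ratchet downward: record improvements (or the first run) as the new floor.
File.write(BASELINE_FILE, current.to_s) if current <= baseline
puts "Flog total: #{current} (baseline: #{baseline})"
```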
00:13:33.240 I recently took over the to_source gem, which began as one massive class. Using metric tools, I saw horrible scores on Code Climate and recognized that the code had to be restructured into smaller, manageable classes. A metric's relevance is relative to how the project evolves, and metrics can provide a solid baseline for discussions about removals or changes.
00:14:47.360 As for the productivity versus code quality dilemma, I think integrating metrics into CI is crucial. They add value on pull requests because they keep the team accountable to its coding standards without derailing the discussion of the change itself.
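(A sketch of what such a CI hook might look like, here gating on duplication. It assumes Flay's Ruby API (`Flay#process`, `#total`); the threshold is an arbitrary example, not a recommendation.)

```ruby
# Rakefile -- hypothetical CI gate on structural duplication
require 'flay'

DUPLICATION_THRESHOLD = 200 # arbitrary example value; agree on it as a team

desc 'Fail the build when duplication mass exceeds the agreed threshold'
task :duplication do
  flay = Flay.new
  flay.process(*Dir['lib/**/*.rb'])
  mass = flay.total
  abort "Duplication mass #{mass} exceeds #{DUPLICATION_THRESHOLD}" if mass > DUPLICATION_THRESHOLD
  puts "Duplication mass: #{mass} (limit: #{DUPLICATION_THRESHOLD})"
end
```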
00:16:09.560 However, the challenge arises when team members do not embrace the metric as an intrinsic part of development. High metric adherence often correlates with improved code quality; nevertheless, there is a risk in letting a single team member own metric compliance, as this can lead to burnout for that person and resentment among the rest of the team.
00:17:01.440 It can also erode the team's effectiveness, as members may struggle to accomplish tasks without clear metric guidance. That said, one must also consider individual coding styles and preferences, as each engineer brings a different approach to the team.
00:18:01.680 Regarding linting metrics: they check whether code adheres to standards, such as variable naming conventions. Tools like RuboCop can help establish consistency across projects, but I also value human interaction.
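(For reference, RuboCop ships a Rake integration, so lint checks can run alongside the test suite. This uses RuboCop's documented `RuboCop::RakeTask`; the patterns and the `spec` task are project-specific assumptions.)

```ruby
# Rakefile -- run RuboCop alongside the test suite
require 'rubocop/rake_task'

RuboCop::RakeTask.new(:rubocop) do |task|
  task.patterns = ['lib/**/*.rb', 'spec/**/*.rb'] # limit linting to project code
  task.fail_on_error = true                       # non-zero exit fails CI
end

# Assumes an existing :spec task from the project's test framework.
task default: %i[rubocop spec]
```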
00:18:59.680 Initially, I relied heavily on automation but came to see the value of direct communication. It’s essential to balance automation with interpersonal communication, fostering shared understanding while preserving code quality.
00:20:30.520 In response to which metrics to prioritize, I believe complexity and duplication are significant. Recognizing code smells is tricky but essential. Tools designed for detecting code smells are valuable, but it’s crucial not to force refactoring immediately.
00:21:41.440 It’s enough to identify these code smells and then address them through team discussion, deciding on the necessary adjustments together rather than acting impulsively. Code quality is a shared responsibility within the team.
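(To make "code smell" concrete: a small invented example of the kind of construct a detector such as Reek flags, with one possible refactoring direction in comments. Names and figures are made up.)

```ruby
# Smell: Feature Envy -- the method is more interested in `order`'s data
# than in its own object. A detector reports it; the team decides what to do.
Order = Struct.new(:customer_name, :net_total, :tax_total)

class InvoicePrinter
  def print(order)
    puts "#{order.customer_name} owes #{order.net_total + order.tax_total}"
  end
end

# One possible refactoring, agreed in discussion rather than forced:
# move the calculation next to the data it uses.
class Order
  def total
    net_total + tax_total
  end
end

InvoicePrinter.new.print(Order.new('ACME', 100, 23)) # prints "ACME owes 123"
```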
00:22:42.640 Clarity is crucial, because the code embodies decisions made for the future. Developers must be proactive in ensuring that what they deliver can be maintained in the long run, especially since development inevitably leaves complexity behind.
00:23:51.480 In exploring whether metrics can be misleading, it’s imperative to apply common sense. A poor code metric trend might indeed indicate impending performance issues. I have encountered situations where bad coding led to inefficiencies that became significant pitfalls down the line.
00:25:03.640 We need metrics that translate into clear communication with everyone involved, and poor metrics may signal the need for frank conversations with experienced developers who push for a higher coding standard.
00:27:18.480 On the topic of what metrics to present to managers, Code Climate comes as highly recommended. It provides a good overview of how our code is faring from a quality perspective, and it’s critical for understanding the relationship between coding standards and resource allocation.
00:28:25.600 On how to convey the necessity of refactoring to clients, it all depends on the client's experience with software issues. They should be educated on the impact of bad code on long-term productivity and the issues that could arise if action is not taken sooner.
00:29:18.800 It may also involve painting a picture of the future: if we ignore these issues now, the costs may multiply later. Metrics can support your argument, but if they are not understood correctly by the clients, they might lead to unnecessary trepidation.
00:30:43.200 Clients frequently react to colors rather than numbers; therefore, it’s important to navigate discussions with the right approach, so the message comes across effectively without creating panic.
00:31:40.240 As for questions about harmful metrics, it’s crucial to understand that no tool should be blindly followed. Ideally, they should facilitate understanding and improve coding practices without imposing extreme restrictions on the development process.
00:32:49.760 In conclusion, it is essential that we don’t allow ourselves to be locked into the numbers. The metrics culture should be a guide rather than a chain shackling creativity and efficiency.
00:34:02.080 Metrics serve a critical function in developing maintainable and efficient code bases, but discussion and transparency are just as significant. Thank you, Markus and Piotr, for your insights today!