
Summarized using AI

You Can't Miss What You Can't Measure

Kerri Miller • March 07, 2013 • Earth

The video titled "You Can't Miss What You Can't Measure" presented by Kerri Miller at the Ruby on Ales 2013 conference discusses the significant role that code metrics play in software development and code maintenance. Kerri draws an analogy between navigating unfamiliar territory without proper tools and managing legacy code without understanding its inherent metrics.

Key points discussed throughout the video include:

- Importance of Understanding Metrics: Just as GPS coordinates need context to be meaningful, various coding metrics must be interpreted to be useful in reflecting the health and clarity of the codebase.
- Static Code Analysis: Kerri emphasizes that static code analysis provides valuable insights, acting as a compass for developers navigating complex legacy systems. Metrics like code complexity, churn rate, and coverage are vital for understanding code quality.
- Types of Metrics: The speaker introduces several metrics used in code analysis:
  - Code Coverage: The percentage of code exercised by automated tests; Kerri stresses that 100% coverage doesn't guarantee quality.
  - Lines of Code: An outdated metric once used to measure developer productivity, which Kerri criticizes.
  - Churn: How often files are modified in version control, signaling areas of potential instability.
  - Complexity Metrics: Measures such as Flog scores and cyclomatic complexity identify hard-to-read, hard-to-maintain code that may need refactoring.
- Defining Good Code: Highlights the subjective nature of what constitutes 'good code' and acknowledges that each team should define their own acceptable metrics instead of blindly chasing arbitrary thresholds.
- Tools and Practices: Kerri mentions several tools used in Ruby development, including Code Climate, Metric Fu, and Flay, which help guide teams toward better code practices.
- Visualization and Team Collaboration: Discusses using visual tools to map code structure, which can aid in onboarding new developers and facilitate discussions about code improvements.

In conclusion, the primary takeaway from this presentation is the necessity of understanding and leveraging code metrics not just for awareness but as tools to foster better coding practices and enhance collaboration within development teams. Metrics should drive conversations and reflections among developers rather than being used as arbitrary judgments of their coding capabilities.

You Can't Miss What You Can't Measure
Kerri Miller • March 07, 2013 • Earth

Adrift at sea, a GPS device will report your precise latitude and longitude, but if you don't know what those numbers mean, you're just as lost as before. Similarly, there are many tools that offer a wide variety of metrics about your code, but other than making you feel good, what are you supposed to do with this knowledge? Let's answer that question by exploring what the numbers mean, how static code analysis can add value to your development process, and how it can help us chart the unexplored seas of legacy code.

Help us caption & translate this video!

http://amara.org/v/FGbI/

Ruby on Ales 2013

00:00:00 Hello, world!
00:00:20 How many engineers does it take to change a light bulb? Let me check. One, two, one, two. There we go!
00:00:30 In context, has anybody thanked the organizing team for Ruby on Ales yet? Well, thank you!
00:00:45 This is a super fun conference every year. This is actually my second year at Ruby on Ales. How many of you are from Bend?
00:00:52 Okay, so you all know how to drive on rotaries and dodge SUVs on our one-way roads and everything.
00:00:57 This is actually my third time in Bend, and I drive up here every year from Seattle. I get lost every single time I leave the conference venue trying to get back to where I'm staying.
00:01:04 This reminds me a lot of my home; I'm originally from Vermont.
00:01:11 It looks exactly like this bridge, which is about a mile and a half from my parents' house. The entire state is this beautiful, pristine, historical monument where every year, hundreds of thousands of people come from all over the world to see our foliage.
00:01:23 Getting around Vermont and getting around Bend is very similar. The joke in Vermont is that if you get directions from a Vermonter, this is usually what you get.
00:01:34 If I have a couple more beers, I'll actually be able to say that properly.
00:01:56 This is a map of southeastern Vermont; I was actually born and raised in Brattleboro, down at the bottom. Say you come to visit, starting in Putney, Vermont, a beautiful town where my parents' farm is located, and you hear about an awesome organic milk tasting going on in Brookline, Vermont.
00:02:08 You think, 'That sounds amazing! I want to get over there,' but you don't have a map. You might not even have Google because we don’t have Wi-Fi; we have 1G coverage. You manage to get directions from a local, and you figure out that you have to go down to Brattleboro and then come back up the West River Valley.
00:02:28 It seems pretty easy, right?
00:02:40 The challenge is that about two years ago, Hurricane Irene came through and dumped about eight inches of rain on the state. Vermont is shaped like a wad of tin foil that got crumpled up and then someone tried to flatten it out again, which is why many roads washed out and whole towns were wiped off the map.
00:02:54 So you hit this detour, turn around, and find some farmer. You interpret his directions as he says the classic: 'Two miles before the dead end, you want to take a left. Then head five, maybe 15 minutes towards the Hitchin's place, which burned down in '53. You can't miss it; take your seventh left after the white farmhouse.'
00:03:10 Classic Vermont directions: you can't get there from here. But eventually you figure it out. You suffer a little, you have an adventure in my home state, and you learn a viable route between points A and B while enjoying some pretty leaves.
00:03:24 Oh, and you got to taste the milk before they closed. That's good enough, right? Probably, since you're just a visitor. You're a tourist; please come give us your money!
00:03:43 But what if you lived in Putney and had to go to Brookline every single day for work? Is there a quantitatively or qualitatively better route between these two points? Obviously, someone has asked that question. There is this road right here, Putney Mountain Road, and it's only called Putney Mountain Road by people who know it exists.
00:04:06 There are no signs. It got its name a few years ago when they brought 911 into the state, mandating that every road must have a name. But you're not going to find it on maps; I guarantee you--Google Maps isn't going to find it, nor will iOS Maps.
00:04:22 This road is only open about eight months of the year; the rest of the year, it's not even plowed—it’s snowy or muddy. If you've got a four-wheel drive, go for it! However, it's going to shave 20 minutes off every time you go from Brookline to Putney or vice versa. A classic country shortcut!
00:04:53 We write code, we get the tests working and, according to Sarah, we go back and we write some more tests.
00:05:01 But eventually, it comes time to refactor, or maybe we are staring at a mud ball of code and every time we touch it, little pieces fall off, exceptions get thrown, and bosses get involved. We need to figure out how to make the voyage from A to B, from initialization to final output better. We're trying to refactor; we're trying to fix it. We want to get better.
00:05:24 If it's a car trip, it's pretty easy, right? Once we get where we're going, we know we have some locals, we have some reference points.
00:05:37 We know we can't go up West River Road. How do I know when my code is good enough, though? Where's the guide? Maybe I have a new client who gives me an 80,000-line legacy Rails app that has had 1,800 developers who've worked on it and quit in frustration. It's mine now. Where's my map?
00:06:03 Code metrics can really help with this problem; they act as a compass, GPS, local guide, and location-specific survival kit. They're not going to get you to your destination, but they can help turn this voyage through uncharted regions into just a regular trip across the bridge for milk.
00:06:19 When I was first thinking about this talk, I pitched it to a co-worker, and he said it sounded really good but that I was just selling vegetables, you know? He said, 'You're telling people to eat their vitamins; be better.' And you know you should eat your vitamins.
00:06:34 I'm a veggie, mostly Italian, so I kind of like talking about vegetables, but that's not the point of code metrics. You all know that you should eat vegetables. You know you can write better code, and you probably know something about some code metrics tools.
00:06:54 Maybe you've actually gotten one of these tools working at some point. Actually, has anybody ever used Metric Fu? Did you get it working? Okay, I want to talk to you later; I can't seem to make it work.
00:07:10 Maybe you got that working, or maybe you used Code Climate from Bryan Helmkamp. Yes, great tool! You got some metric tool running, and then you're going to experience this...
00:07:26 It's the classic five stages of grief.
00:07:35 But probably, you experienced it this way: you get sad, 'My code sucks. I'm getting an F somewhere,' so you decide to fix it. You bargain, and then you get angry because someone on your team wrote some bad, shoddy code.
00:08:02 Then you think, 'Okay, someone on my team wrote bad code; that's okay. Nobody's perfect, especially not my awful coworkers. Big love!' So you commit a fix, and then you never look at code metrics again.
00:08:19 And the reason you don't is that you've been told you're bad. This cold, unfeeling program, this robot, came out of nowhere and insulted you and your team, declaring that you were bad at coding. So why would you seek out that harsh, judgy feedback again?
00:08:36 You've probably heard the quote that programs are meant to be read by humans and are only incidentally for computers to execute. We're writing code to communicate logic and intent—work that needs to be done.
00:08:54 These tools don’t have hearts or feelings, and they don’t think that you look funny or smell bad. They serve to tell us something about our code, trying to indicate that we may not be communicating to our teammates or future developers exactly what we think we're communicating.
00:09:11 By the way, this is interactive; I ask a lot of questions! One of the things that one wise person said that struck me was that a block of Ruby code could be read aloud, and people are able to understand it. Ruby is a very literate language in a lot of ways; it's very expressive.
00:09:24 Does anyone want to volunteer? I'll give you the mic right now, you can come up here. No? Okay. This isn't the worst code that I could find in one of the apps I work on right now, but it's pretty bad.
00:09:39 I actually tried reading this out loud last night. I don't know what it does. I mean, I can figure it out; I'm not dumb. But read aloud, it doesn't say anything. You can't just look at this code and gather its intent; there are three or four different levels of abstraction going on here.
00:09:59 Code metrics aren't about eating vegetables, and they're not about forcing us to do something right; they scold us a little when we stray from their particular vision of good. But they also serve to change our context for looking at our code. This is a card from the Oblique Strategies deck, which was made by Brian Eno and Peter Schmidt in the '70s.
00:10:30 They wrote down these little ideas to shake up their creative process. Code metrics serve the same role; they can change how we look at our code, jumpstart our creative approaches, and start conversations on our teams about what is good code.
00:10:42 It starts conversations with ourselves: Am I good enough? How can I get better? What's the road to that? But how do you actually use these tools? How can they improve your process? Do you follow a test-driven development process?
00:10:58 Everyone tests first? Yeah? Always? Yeah, totally! Or maybe you write code first, then tests.
00:11:13 A few people, maybe? Okay, does anybody have 100% test coverage? I always want to know. Nobody? How about 80%?
00:11:30 Okay, who knows what their code coverage is? Awesome! You all are on the first step because you're already using a code metric.
00:11:43 Given our obsession with testing, test coverage is usually one of the first metrics we talk about. It's often one of the first things we run when we inherit a new piece of code.
00:11:58 There are three ways to calculate code coverage: C0 (line coverage), C1 (branch coverage), and C2 (path coverage). For the most part in the Ruby world, we focus on the first one, C0.
00:12:12 If you run Rcov or SimpleCov, C0 is the technique used to generate their numbers. And 100% coverage doesn't mean anything, as I'm sure you know.
00:12:30 It's entirely possible to write tests that will get you to 100% coverage without writing a single assertion! It's just about the lines of code that are actually executed.
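As a minimal sketch of that point (hypothetical code, not from the talk): the spec below executes every line of the method, so C0 line coverage reports 100%, yet it asserts nothing about the results.

```ruby
require 'ostruct'

# Hypothetical method with two branches.
def discount(order)
  if order.total > 100
    order.total * 0.1
  else
    0
  end
end

# This spec runs both branches, so line coverage reads 100%,
# but there is not a single assertion in it.
RSpec.describe 'discount' do
  it 'exercises both branches without checking anything' do
    discount(OpenStruct.new(total: 150))
    discount(OpenStruct.new(total: 50))
  end
end
```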
00:12:44 At this point, I think I'm obligated to answer the question: what's the right number? I don't know, and I don't really care. What's right for you and your team is what matters, and that's true for all of these metrics.
00:13:00 When it comes to code coverage, DHH calls it 'security theater.' Uncle Bob says, 'Of course you're going to get 100% if you're doing TDD right!' But it's really up to you.
00:13:15 You need to determine what your coverage metrics should be. The same applies to other code metrics; maybe you like complex code, maybe you don’t.
00:13:35 Lines of code is another metric, one that was used to measure developer productivity back in the dark, dark days of 2012. It's a simple accounting of the number of lines of executable code within a codebase.
00:13:52 There is a rough correlation between the number of lines of code in an application and the rate of defects. Obviously, the larger the application, the more chance there are for defects.
00:14:08 But really, if you're ever somewhere that measures productivity in lines of code, you need to get out. Update your LinkedIn; don't worry about what your coworkers think!
00:14:24 A lot of people run rake stats from time to time. It's actually a pretty opinionated stats tool; it doesn't show you anything about your views, nothing about your JavaScript application.
00:14:41 It's so high level that it's almost pointless in a way. But this is one of the first commands I run when I get to a new application.
00:14:57 It gives me a rough view: is this a service app? Is it nice and slim and tight on controllers?
00:15:11 Code coverage and lines of code demonstrate two different kinds of code metrics we encounter. Calculating lines of code is a static analysis process performed without evaluating the code.
00:15:34 Dynamic analysis, obviously, is the opposite; it actually runs the code, benchmarking its time or tracking some characteristic of its behavior.
00:15:50 There are far more static analysis tools available to us in the Ruby world, but static analysis tools never really understand the code. They're, in a way, very complicated regexes that look for patterns in the shape of the code and then spit out a number, a rating, or a chart.
00:16:09 Static tools might indicate problematic code, and that word 'might' is really important. Static analysis will usually return a fair number of false positives.
00:16:29 Because it is just pattern matching, it can't know that there might be a good reason for an empty rescue block that captures all the exceptions. Sometimes you really need a complicated metaprogramming block.
00:16:44 Sometimes code just has to be duplicated. It's up to you, the developer, the human, to figure out what these numbers mean to you.
00:17:01 If you ever find yourself writing code just to pass some arbitrary metric threshold, you're wasting your time. These metrics are just helpers for finding possibly problematic code in your codebase.
00:17:17 The metric I use most these days is complexity. I work on a lot of legacy Rails apps, and roughly speaking, this is a decent measurement of how painful it will be to understand a piece of code.
00:17:40 There are three basic forms of complexity metrics. The first, cyclomatic complexity, proposed by Thomas McCabe, tracks the number of linearly independent paths through the source code.
00:17:56 Each new branch adds another independent path, another decision point, within your function.
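To make that concrete, here is a small invented method annotated with its decision points; each one adds a linearly independent path:

```ruby
# Hypothetical example: cyclomatic complexity = decision points + 1.
def shipping_rate(order)
  return 0 if order.total > 100   # decision 1
  if order.international?         # decision 2
    25
  elsif order.expedited?          # decision 3
    15
  else
    5
  end
end
# 3 decisions + 1 = cyclomatic complexity of 4
```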
00:18:30 The second kind is ABC complexity, which counts assignments, branches, and conditionals: assignments like foo = bar, branches like function and method calls, and conditionals like comparisons.
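As another hedged illustration (invented method, counts done by hand), the ABC score is usually reported as the magnitude of the vector of the three counts:

```ruby
# Hypothetical example: A = assignments, B = branches (calls/message
# sends), C = conditionals.
def classify(reading)
  level = reading.value   # A: level =     B: call to .value
  if level > threshold    # C: >           B: call to threshold
    alert!                # B: call to alert!
    level = 0             # A: level =
  end
  level
end
# A = 2, B = 3, C = 1
# ABC magnitude = Math.sqrt(2**2 + 3**2 + 1**2), roughly 3.7
```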
00:18:50 Flog is the one that most of us use; it's the highest-profile metric, certainly within the Ruby world. It counts assignments, branches, and calls, and it penalizes you for metaprogramming.
00:19:05 Good Flog scores tend to be around 10 for models and 20 for controllers. That's a general guideline. I spoke to Ryan Davis, and that’s pretty much his hard and fast opinion.
00:19:21 If you ever run Flog and get a complexity score around 60, you want to fix that right away. Fun fact: for those who use Code Climate, the highest Flog score ever recorded was 11,354. Yeah, that wasn't me!
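For orientation, a Flog run prints per-method scores along with a total and a per-method average; the paths and numbers below are invented, but the shape matches the tool's output:

```ruby
# $ flog app/models
#
#    482.6: flog total
#      8.2: flog/method average
#
#     61.9: Invoice#recalculate    app/models/invoice.rb:142
#     24.3: User#merge_accounts    app/models/user.rb:18
#     ...
```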
00:19:37 Like most code smells, high complexity is a pointer to deeper problems in your code. It might just mean that you need to break out the extract method pattern and apply it in the right places.
00:19:50 But sometimes it means that you must rethink your domain model. The original developers might not have understood the problem space well, meaning we have to decompose this object and restructure things.
00:20:06 Churn is absolutely fabulous. It tracks the number of times a file in your version control system changes over a given period.
00:20:22 Understanding where your most volatile code lives is incredibly useful—it can help you to identify brittle code.
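The idea is easy to approximate yourself. This little sketch (my own, not the speaker's tooling; it assumes you run it inside a git repository) counts how many commits touched each file in the past year:

```ruby
# Count commits per file over the last year; the noisiest files float up.
counts = Hash.new(0)
`git log --since="1 year ago" --name-only --pretty=format:`.each_line do |line|
  file = line.strip
  counts[file] += 1 unless file.empty?
end

counts.sort_by { |_, n| -n }.first(10).each do |file, n|
  puts format('%4d  %s', n, file)
end
```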
00:20:39 Now, for those who have hiked the Wonderland Trail, thank you! I really want to hear about your trip sometime. I used to be a long-distance hiker, and I would love to hike the Wonderland someday.
00:20:54 The trail runs around the base of Mount Rainier, and the halfway point is about 47.2 miles from arrow to arrow.
00:21:10 If I could do 10 miles a day, that works out to about five hours of walking each day. Then you set up a tent and repeat the cycle until you're back at the parking lot.
00:21:26 If you haven't seen it before, this is an elevation graph showing peaks and troughs. The peak is around 6,000 feet, while the lowest trough is barely under 2,500 feet.
00:21:43 I don't want to walk up and down that—it hurts just thinking about schlepping a 15-pound pack up and down those volcanic ridges for five days.
00:21:56 Just like the raw number of miles, the raw churn number can be misleading. If we analyze the churn numbers with an added dimension of time, we can gain more insight.
00:22:10 The user model changed a lot; it's the one in green. I see three distinct areas: two major changes and a few smaller ones, indicating someone working on a new feature or doing a significant refactoring.
00:22:26 However, the invoice model in blue is a mess. It changes every two or three days. It’s not a CSS file, a config file, a gem file, or a translation file. What is going on with this file? I have to find out.
00:22:47 Sometimes, a churning file is just a config, but the frequency should raise a red flag. Sometimes, they are just junk drawers into which problems are thrown.
00:23:00 Churn alone won’t tell you much; it gives you the tempo of a project. But when you start mixing metrics, that’s where it starts to shine.
00:23:13 This is an output from a gem called Turbulence by Chad Fowler. It's based on a blog post by Michael Feathers, who proposed mixing code metrics to improve our understanding.
00:23:30 This graphs complexity against churn rate; code that is both highly complex and frequently changed implies a higher rate of errors.
00:23:47 The more often we touch something with high complexity, the greater the likelihood of bugs. Sometimes, it's easier to tack on another 'else if' than to refactor!
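The arithmetic behind that graph is simple. Here is a toy version of the churn-times-complexity ranking, with invented numbers standing in for real churn counts and Flog scores:

```ruby
churn = {                            # commits touching each file this year
  'app/models/invoice.rb' => 120,
  'app/models/user.rb'    => 40,
  'config/routes.rb'      => 90,
}

complexity = {                       # Flog score per file
  'app/models/invoice.rb' => 1600.0,
  'app/models/user.rb'    => 220.0,
  'config/routes.rb'      => 15.0,
}

# Rank hotspots: files that are both complex and frequently changed.
hotspots = churn.map { |file, n| [file, n * complexity.fetch(file, 0.0)] }
hotspots.sort_by { |_, score| -score }.each do |file, score|
  puts format('%12.1f  %s', score, file)
end
```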
00:24:02 When I ran Turbulence, I found this crazy file. Its complexity is over 1600, and its churn rate is about a thousand for the last year. Clearly, it’s a candidate for some attention.
00:24:19 Believe it or not, this is actually a model file; some developers clearly didn’t know what they were doing.
00:24:36 After running Flog on it, I discovered four large God objects doing too much.
00:24:47 Flog lets us enable a 'show details' flag to see what it’s complaining about. This indicates a lot of branching, and assignments are going crazy.
00:25:04 After a deep dive, I noticed that ticket and user models appear, indicating potential Law of Demeter violations.
00:25:20 With the extract method pattern, I can simplify the code. Now, everything's denser and cleaner.
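A generic before-and-after of that move, with invented code rather than the actual file from the talk:

```ruby
# Before: one method mixes totaling, formatting, and delivery details.
def notify_owner
  total = line_items.sum { |li| li.price * li.quantity }
  body  = "Invoice #{number}: #{line_items.size} items, total #{total}"
  Mailer.deliver(to: owner.email, subject: "Invoice #{number}", body: body)
end

# After extract method: each intent gets a small, named method.
def notify_owner
  Mailer.deliver(to: owner.email, subject: subject_line, body: summary)
end

private

def total
  line_items.sum { |li| li.price * li.quantity }
end

def subject_line
  "Invoice #{number}"
end

def summary
  "#{subject_line}: #{line_items.size} items, total #{total}"
end
```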
00:25:36 However, I now have duplications. I ran Flay on it, which shows me multiple methods repeating similar tactics for sending notifications.
00:25:53 As I examined this, I realized I'd taken a file with a dozen and a half methods and turned it into one with 30, most of them now private, all working with one set of notifications.
00:26:09 This behavior should likely be moved into its own service object for better organization.
00:26:24 So I juggled this around and ended up with a nice little service object, and after a few more rounds of refactoring, the model is much slimmer.
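A hypothetical shape for that extraction (the names are invented, not the talk's actual code): the model's pile of private notification methods collapses into one collaborator with a single entry point.

```ruby
class TicketNotifier
  def initialize(ticket, mailer: Mailer)
    @ticket = ticket
    @mailer = mailer
  end

  def call
    recipients.each do |user|
      @mailer.deliver(to: user.email,
                      subject: "Ticket ##{@ticket.id} updated",
                      body: summary)
    end
  end

  private

  def recipients
    [@ticket.owner, @ticket.assignee].compact.uniq
  end

  def summary
    "#{@ticket.title}: #{@ticket.status}"
  end
end

# The model then delegates in one line:
#   TicketNotifier.new(ticket).call
```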
00:26:41 After checking back to my tree map to see how the project was doing, I discovered that the ticket model is now the next challenge.
00:26:58 This is a little canned exercise, but I hope it demonstrates my personal work pattern: using these tools to get a high-level view of where the problems are.
00:27:15 When you're in the middle of it, you'll become numb to the rubbish around you. It’s no longer obvious that a 100-line method is just adding to the issue.
00:27:32 For my projects, churn and complexity are essential metrics to monitor. I believe they have a lot of applications even for greenfield projects.
00:27:46 You can track where things may be heading south; sniff the milk, if you will, and see if it's a little off.
00:28:05 The more you gather and pay attention to metrics, the more they will speak up and clue you in on problem areas.
00:28:22 We’ve talked about size, code coverage, churn, and complexity. A few other tools we can use include Flay, a syntax duplication detector. It analyzes your code’s syntax tree and identifies similar blocks using fuzzy matching.
00:28:43 This can help DRY out your code; however, you must avoid going too far with it, or you'll end up chasing your tail.
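To make the idea concrete, here are two invented methods of the kind Flay's fuzzy matching flags; because Flay compares parsed syntax trees, the different names don't hide the duplication. The report shape is shown in the comments (the scores are made up):

```ruby
def notify_owner(ticket)
  Mailer.deliver(to: ticket.owner.email, body: "Updated: #{ticket.title}")
end

def notify_assignee(ticket)
  Mailer.deliver(to: ticket.assignee.email, body: "Updated: #{ticket.title}")
end

# $ flay app/models/ticket.rb
# Total score (lower is better) = 32
#
# 1) Similar code found in :defn (mass = 32)
#      app/models/ticket.rb:1
#      app/models/ticket.rb:5
```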
00:29:05 With Flay, developers often find unfortunate false positives; for instance, in a RESTful Rails application, duplication across those controllers is common.
00:29:21 It's simply a side effect of the framework, but we can make better tools. We have the power.
00:29:37 There are numerous best-practice and style gems and tools available. Some focus strictly on specific code smells, like Rails Best Practices and Reek.
00:29:50 Others examine object-oriented design, like Roodi and Pelusa, which are opinionated out of the box yet extremely configurable.
00:30:04 Cane looks at code style, measuring white space and line length; it also assesses ABC complexity and checks for documentation in your code.
00:30:18 I hope some of you document as you go, right? Well, good code should document itself.
00:30:33 Cane was developed at Square by Xavier Shay. He realized his team's code was starting to look rough, so they integrated Cane, running it as part of their CI build.
00:30:48 If the number of violations exceeds a threshold, it fails; this way, it prevents bad code from being merged into the main branch.
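A sketch of that CI wiring, based on Cane's documented Rake task; the thresholds here are illustrative, not Square's actual values:

```ruby
# Rakefile: fail the build when quality thresholds are exceeded.
require 'cane/rake_task'

desc 'Run cane to check quality metrics'
Cane::RakeTask.new(:quality) do |cane|
  cane.abc_max        = 15    # max ABC complexity per method
  cane.style_measure  = 100   # max line length
  cane.max_violations = 0     # any violation fails the build
end

task default: :quality
```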
00:31:06 I use it once or twice a week to periodically check that we're moving in a good direction, or to tighten up code reviews.
00:31:21 This map of object relationships was generated by a gem called RailRoady. It simply outputs a DOT file, which you can pull into other visualization tools.
00:31:39 It shows the actual relationships in your models, mapping out how they connect. In applications that have grown large organically, these maps can become confusing.
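If you want to try it, something like this Rake task works, assuming the railroady gem and Graphviz's dot are installed (the output path is just an example):

```ruby
desc 'Render the model relationship diagram as a PNG'
task :model_diagram do
  sh 'railroady -M | dot -Tpng > doc/models.png'
end
```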
00:31:57 When I onboard new developers, I use maps to guide them through the system. They can visualize how each piece connects, getting a higher view of where their work fits into the larger picture.
00:32:20 We put this up on the wall—Kanban style—to visualize both our progress and our coding messes.
00:32:43 As developers, our code is often a mess. It’s prone to rot, decay, and entropy. Every quick fix or cut and paste adds up invisibly until we’re faced with mud balls.
00:33:04 You might want to write good code, but perhaps you can’t see it anymore. You might be a lone developer without a mentor to help guide you.
00:33:20 Metrics can help in every situation. They act as rumble strips along the highway, alerting you to drifting off course.
00:33:40 We are blessed in Ruby with a rich ecosystem of metrics and tools. It’s up to us to leverage them to improve.
00:34:00 The tools I mentioned are just the tip of the iceberg. Figure out which ones work for you, your team, and your project needs. Get Code Climate running, download the new 3.0 version of Metric Fu.
00:34:21 There’s no magic to using code metrics—the tools are straightforward. You run a command or two to get some numbers.
00:34:40 Use them for a while; don’t get tied to whether the numbers are good or bad. Instead, see what they tell you.
00:35:00 Ultimately, our code has a lot to say about what came before, what we're doing now, and where we're heading. Take the time to listen.
00:35:20 Thank you, everyone.
00:35:36 Thank you!
00:35:40 Goodbye!