
The Technical Debt Trap

Doc Norton • October 22, 2014 • Talk

In the presentation "The Technical Debt Trap" by Doc Norton at Rocky Mountain Ruby 2014, the speaker delves into the concept of technical debt, a term coined by Ward Cunningham in his seminal work back in 1992. The video seeks to clarify common misconceptions surrounding technical debt and how such misunderstandings can lead to poor software development decisions.

Key points discussed throughout the presentation include:

- Definition of Technical Debt: Norton engages the audience to define technical debt, revealing that it is commonly taken to mean unfinished code, things that need cleanup, or code that slows the team down. Technical debt, as originally described by Cunningham, is about speeding up development at a cost, akin to borrowing money.

- The Nature of Debt: The speaker emphasizes that technical debt is either a strategic design decision made to get rapid feedback or an indication of learning, whether intentional or not. He stresses that the debt metaphor is a warning: unaddressed technical debt accrues additional cost over time, like interest.

- Problems with Metaphors: Norton introduces 'metaphorphosis,' his term for what happens when a metaphor goes wrong: the comparison takes on a life of its own, obscuring the actual issue and leading to unintended consequences.
- Distinction Between Technical Debt and Cruft: The speaker points out that true technical debt involves clean, tested code, a defined learning objective, a payback plan, and an informed business. Code that lacks these attributes is simply cruft—badly constructed or redundant code.
- Concept of the Technical Debt Quadrant: Norton critiques Martin Fowler's technical debt quadrant, suggesting that labeling poor coding practices as technical debt is misguided. He argues that reckless coding is far from prudent and can lead to long-term project failure, echoing the importance of pursuing code quality over speed.
- Maintaining Code Quality: Emphasizing the Boy Scout rule, the speaker advises monitoring and improving code quality incrementally through regular maintenance, thus avoiding the trap of cruft that leads to potential project demise. Cleaning and refactoring code as a consistent practice is essential to maintain an adaptable and effective codebase.
- Objective Metrics for Quality: Norton advocates using code quality metrics such as coverage, complexity, and coupling as objective measures, encouraging teams to monitor trends over time rather than relying on subjective assessments or fixed targets.

In conclusion, Doc Norton’s talk underlines that understanding technical debt, maintaining quality practices, and distinguishing between technical debt and cruft are crucial for effective software development. The overall message encourages developers to avoid quick-and-dirty shortcuts, to improve their code continually, and to never ask permission to do their jobs correctly.


Technical Debt has become a catch-all phrase for any code that needs to be re-worked. Much like Refactoring has become a catch-all phrase for any activity that involves changing code. These fundamental misunderstandings and comfortable yet mis-applied metaphors have resulted in a plethora of poor decisions. What is technical debt? What is not technical debt? Why should we care? What is the cost of misunderstanding? What do we do about it? Doc discusses the origins of the metaphor, what it means today, and how we properly identify and manage technical debt.


Rocky Mountain Ruby 2014

00:00:26.540 Nothing like starting your talk with a massive group hug! That's awesome. So, as I mentioned briefly in the lightning talk, I'm Doc Norton. You can find me pretty much anywhere as 'docondev'; whack docondev on just about any site, that's probably me. And of course, that's my Twitter handle.
00:00:33.420 As I mentioned, I am the Global Director of Engineering Culture at Groupon. When they can't give you a raise, they add a word to your title! A little bit more about me: I am a husband, I am a father, and there’s actually only one reason that I even mention this—I love sharing this! I am also a grandfather.
00:00:49.140 So enough about me. I've got a question for you: What is technical debt? It's hard for me to see folks in the audience, but I am looking for an answer from someone. Feel free to shout it out.
00:01:07.260 What’s technical debt? Stuff you’ve got to clean up later? Stuff you’re not sure you want to keep? Things that slow down your team? These are all pretty common definitions of technical debt. Do you agree?
00:01:30.720 Ward Cunningham is actually the guy that came up with this phrase. In a paper presented at OOPSLA 1992, Ward said, 'Shipping first-time code is kind of like going into debt. A little debt speeds development, and as long as you pay it back promptly with interest, you're pretty good.' He noted that danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt.
00:01:58.740 A couple of things about this: First of all, while it was written in an OOPSLA '92 paper, a year or two prior, Ward used this exact same explanation in an email to his client while working on a financial system. The metaphor he chose fit their domain; it made sense to them. That’s why he chose the debt metaphor. I don't know what metaphor he would have used if he had been working on medical instrumentation.
00:02:22.200 If you know Ward, he's a really nice guy, very gentle and soft-spoken, and I think sometimes he's a little subtle, which we might miss. So I want to hone in on exactly what Ward said.
00:02:43.590 He warns us of danger, right? He doesn’t say ‘the challenge,’ ‘the concern,’ or ‘the potential eventual problem’—he says ‘danger.’ Danger should actually invoke fear in us. He says: every minute spent on code that’s not quite right— not quarter, not month, not iteration, not release cycle, not day—every minute...
00:03:06.870 ...spent on code that’s not quite right is dangerous. But I'm here to tell you that technical debt is good. How is that possible?
00:03:25.590 Well, technical debt, as Ward described it, is basically two things: it is either a strategic design decision, something that we've done to allow for rapid feedback, or an indication of learning. It’s an indication of learning whether it's intentional or not.
00:03:44.030 We get something out there, see what happens with it, and adjust as we go. When Ward was talking about technical debt, this is what he meant. But remember, it’s a metaphor. It’s a metaphor for danger.
00:04:06.780 Metaphors absolutely rock. They allow us to talk about a complex problem that maybe we don’t all understand in terms of some other situation that hopefully we have a common understanding of. However, something happens—something goes awry. Eventually, every one of you has been in a conversation where you began with a metaphor that seemed to make sense.
00:04:22.860 And then before you know it, you're having an argument about the thing you were comparing to, not the actual issue at hand. This is what I call 'metaphorphosis'—when a metaphor goes wrong.
00:04:43.720 Eventually, that's what happened to the debt metaphor in our industry. It was simple, people understood it: 'Well, I know what that is—it's like a credit card, it's like a home loan, it's short-term, it's intentional, it's prudent, it's pragmatic leverage, there's a loan shark, there's inadvertent reckless debt in the third quadrant'—which, by the way, we will talk about. This became so pervasive in our industry that our leaders adopted this same kind of thinking.
00:05:09.550 Eventually, Martin Fowler came up with a technical debt quadrant. Well, dude, now it’s official! Now you can throw tons of money at the consultant who came up with that quadrant. The way he describes it is that there are a few different types of debt: it’s either reckless or prudent and it’s either deliberate or inadvertent.
00:05:36.150 Now, keep in mind that in 2009 Ward spoke up again and said, 'Yeah, that's not what I was talking about.' A lot of people have confused the debt metaphor with the idea that you could write code poorly, but the ability to pay that debt back depends upon you writing code that is clean enough to be able to refactor.
00:06:01.630 So, what is refactoring? Everyone in the audience knows what refactoring is, right? We change the design without changing the behavior. The internal implementation changes, but the observable behavior does not. So how do we do that? What's the safest way for us to be able to do that?
00:06:21.700 The safest way for us to do that is to have good tests around the code, good coverage. The cleaner the code is, the better shape it's in, the easier it is for us to actually refactor it. So it turns out that in order for you to actually have technical debt, you have to have code that is clean. Clean code is a prerequisite for technical debt.
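To make that point concrete, here is a minimal Ruby sketch (not code from the talk; the class and figures are invented) of a behavior-pinning test that makes a later refactor safe: the internal design of `total` can change however we like, as long as this test stays green.

```ruby
# Hypothetical example, not from the talk: a test that pins behavior so the
# implementation underneath can be reshaped safely.
require "minitest/autorun"

class Invoice
  def initialize(line_items)
    @line_items = line_items
  end

  # Current implementation: an explicit loop. A later refactor, say to
  # `@line_items.sum { |item| item[:price] * item[:qty] }`, must preserve
  # the observable result this test describes.
  def total
    total = 0
    @line_items.each { |item| total += item[:price] * item[:qty] }
    total
  end
end

class InvoiceTest < Minitest::Test
  def test_total_is_the_sum_of_price_times_quantity
    invoice = Invoice.new([{ price: 10, qty: 2 }, { price: 5, qty: 1 }])
    assert_equal 25, invoice.total
  end
end
```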
00:06:53.920 The last thing Ward had to say on this was on Twitter in 2009, when he stated, 'Dirty code is to technical debt as pawnbrokers are to financial debt. Don't think you're ever going to get your code back.' Right?
00:07:05.580 So fine folks, you want to go with this metaphor? Here’s how it goes: you’re never going to get it back!
00:07:19.880 So if you’re looking at your code and asking yourself, 'Is this technical debt?' check these things: Is the code actually clean? Is it tested? Is there a learning objective? Did we move this into production because we want to learn what the users actually need? Or are we waiting to see how the market responds?
00:07:38.890 Or are we just waiting to see if that federal regulation actually goes through? But in the meantime, we can generate revenue doing this thing. Is there a payback plan for the debt? Is the business truly informed? This is important.
00:07:54.040 Us saying in a meeting, 'Okay, but you know it’s going to be a problem later,' is not the business being truly informed. We have to get them more engaged, to help them understand the cost of these short-term decisions we’ve made. If you say no to even one of these things, you don’t actually have technical debt. What do you have then?
00:08:15.200 You have cruft. You literally have cruft, which from the dictionary is defined as 'an unpleasant substance; the result of shoddy construction' or 'redundant, old or improperly written code.' You basically just have a mess—it's just cruft in code.
00:08:30.049 I’ve given this talk a number of times, and I’ve had conversations over dinner with folks, and one common response is, 'Hey, you know, chill man, it’s just semantics. It’s just a word, and it’s not that big of a deal.' But if we agree that technical debt is good, and we also agree that quick and dirty is technical debt, then we’re agreeing that quick and dirty is good, and I can’t abide by that.
00:08:50.200 I’ve been on enough projects that started off quick and dirty and ended in a horrible death. If we’re lucky, we start off quick and dirty, and at some point we generate enough revenue to convince the company to do the grand rewrite—and this time it’ll be better, and we’ll get it correct.
00:09:17.170 So I don't think it's just semantics. I look at this technical debt quadrant again, and I feel agitated by some aspects of it. I mean, one, just the fact that it's a quadrant. And then there's the label 'We must ship now and deal with the consequences.'
00:09:36.130 To whom in the audience does that sound prudent? This is deliberate and prudent, and it still sounds pretty reckless to me! I actually think that should say, 'Let’s deploy and gather more information.' That’s different than dealing with the consequences.
00:10:00.250 But there are some other aspects of this that bother me, and I was trying to articulate why it bothers me. What is wrong about this? So, I started thinking: what if we looked at technical debt, this metaphor, this concept, and we applied it to other fields? Because it’s so pervasive in ours, if it actually makes sense, it should make sense in other places.
00:10:27.870 Let’s look at automotive. Can you imagine taking your vehicle to a mechanic, and he says to you, 'Hey listen, we incurred some mechanical debt to stay in budget; we should probably add some metrics around that and make sure we pay that down in the future?'
00:10:43.520 What about medical? This is not photoshopped, by the way. At the end of a surgery, you're talking with a surgeon about your loved one, still in recovery, and the conversation goes, 'You see, we incurred some health debt during the surgery. It's kind of like you paid for the surgeon with a credit card instead of a home equity loan.'
00:11:01.440 So when we apply this to other industries, it’s obviously ridiculous. So how is it that it’s okay in ours? Recklessness and deliberation are not things to be taken lightly.
00:11:23.770 I don't think that's an accident. I'm willing to give the benefit of the doubt on the inadvertent decisions, but when we apply these labels to our technical debt quadrant, reckless and deliberate is just irresponsible, and reckless and inadvertent is just incompetence.
00:11:38.650 So, on our technical debt quadrant, we are left with irresponsibility and incompetence. The only thing left that’s actually technical debt is that which is prudent. We wrote good code and moved something into production that had a design that we weren’t entirely sure of.
00:11:54.220 But we knew we were close and wanted to learn. If we find that something we were highly confident in actually isn't what the users wanted, we make some adjustments. That's the only aspect of this quadrant that's actually technical debt.
00:12:18.680 So I want to play a game. Does anybody here want to play a game with me? We’re going to play Cruft or Debt. This is usually a much longer talk, but we’re only going to do two rounds of this. I apologize in advance; none of this code is Ruby.
00:12:36.650 It is also, by the way, not Java or C# or any other language you’ve seen—it’s all kind of pseudocode, and that’s sort of intentional. So now take a look at this and I want you to tell me: Is this cruft or debt?
00:12:56.380 It’s cruft. How do we know that? It just looks like it is. I can read through this, and every time I read through it, I start to feel, 'What is this doing?' It’s just not really clear what this does.
00:13:09.490 So, how can I clean that up? Maybe if the person who wrote this had commented it, because that's the universal solution for unreadable code, right? No! If the customer is federally regulated, well, there's something else going on here.
00:13:27.170 So if you look at this, I don't know what this class is, I don't know what this method is, but I do know it's interrogating my customer; it's asking an awful lot of questions for no good reason. As we go on, maybe what I've got here is not just a case of high cyclomatic complexity but also feature envy.
00:13:47.790 This responsibility is probably in the wrong place. So I'm already getting the idea of what this should be—and that actual logic should probably be over in the customer object.
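The slides are not reproduced in this transcript, so the following Ruby sketch only illustrates the kind of move Doc is describing; the class and method names are hypothetical. The report class stops interrogating the customer, and the decision moves onto the customer itself.

```ruby
# Hypothetical sketch of the feature-envy fix described above (not the talk's slide).

# Before: the caller interrogates the customer's internals.
class ComplianceReport
  def include?(customer)
    (customer.country == "US" && customer.industry == "banking") ||
      customer.licenses.any?(&:federal?) ||
      customer.audit_flag
  end
end

# After: the customer answers the question itself, so callers ask only one thing.
class Customer
  attr_reader :country, :industry, :licenses, :audit_flag

  def initialize(country:, industry:, licenses: [], audit_flag: false)
    @country, @industry, @licenses, @audit_flag = country, industry, licenses, audit_flag
  end

  def federally_regulated?
    (country == "US" && industry == "banking") ||
      licenses.any?(&:federal?) ||
      audit_flag
  end
end

class ComplianceReport
  def include?(customer)
    customer.federally_regulated?
  end
end
```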
00:14:06.850 Let’s take a look at that. Now, what about this one? Is this cruft or debt? And bonus points for anyone who gets the joke hidden in here.
00:14:24.090 What do you think? Cruft? Debt? Some would say cruft, but it’s hard to say. This is borderline. If you follow Uncle Bob's guidance, the fact that you’ve got three cases here might indicate there’s a problem. This is just nested ifs flattened to look sexier.
00:14:44.520 But it's still high cyclomatic complexity; it's still a nested conditional. If I've got one conditional, that's okay. If I have two, then maybe I should look at my composition.
00:15:01.500 Maybe there’s some polymorphism or something I could be doing differently to actually clean it up a little bit. So let’s try that and see what it looks like.
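As a rough illustration of that idea (hypothetical Ruby, not the talk's pseudocode), the three-way case statement dissolves once each option becomes an object that knows its own behavior:

```ruby
# Hypothetical sketch: replacing a conditional with polymorphism.

# Before: one method switches on the shipping method.
def shipping_cost(order)
  case order.shipping_method
  when :standard  then order.weight * 1.0
  when :express   then order.weight * 2.5
  when :overnight then order.weight * 4.0
  end
end

# After: each shipping option owns its own rate, and the conditional disappears.
class StandardShipping
  def cost(weight)
    weight * 1.0
  end
end

class ExpressShipping
  def cost(weight)
    weight * 2.5
  end
end

class OvernightShipping
  def cost(weight)
    weight * 4.0
  end
end

def shipping_cost(order)
  order.shipping_method.cost(order.weight) # no branching left at the call site
end
```

The trade-off is visible here too: the conditional is gone, but there are now several small classes that did not exist before, which is exactly the "other code hidden here" that makes the call borderline.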
00:15:19.270 Now, obviously, there’s other code hidden here. We kind of introduced a couple of methods that maybe we didn’t see previously. But it’s hard to say if this is really better.
00:15:38.410 My point is that sometimes it’s super obvious, and sometimes it comes down to personal choice. If that case statement had 27 different options in it, everybody in this room would agree that it was cruft.
00:15:59.250 But right now, it's borderline, so we still have to make some personal judgments. Intentional cruft is a bad decision every single time. Why? We are all professionals. We are professional software developers—this is what they pay us to do.
00:16:19.400 And you’re going to create unintentional cruft. You know why? Because it’s unintentional—you didn’t mean to, but it’s going to happen, and you’ll have to clean up that existing cruft sooner or later.
00:16:35.000 So the title of this talk is 'The Technical Debt Trap,' and so far I haven’t even talked about the trap. What is the trap? The trap is cruft. When you start down this path, it gets harder and harder to get out of it.
00:16:55.600 First of all, you’ve set a precedent for speed over quality. So you made a compromise early on, and now the precedent is set. If you’re in an agile shop, and your management has read the book, they likely expect you to be moving faster.
00:17:15.340 They've read somewhere that velocity increases as teams get better. So they expect you to go faster. But cruft slows you down, so you actually have to write more cruft to keep up.
00:17:34.170 Eventually, what you have to do is ask permission to do your job correctly. Now we may hide this in some way; we might say, 'Well, this whole thing started when we were on Java 5, and now we want to go to Ruby, or we need to get to Java 8.'
00:17:53.080 There’s some rational reason why we have to completely rewrite the entire application—some technical reason behind it. But the odds are, the real reason is because we can’t keep up anymore.
00:18:10.700 I’ve seen systems written in RPG 30 years ago that are still running fine today and are still being maintained. It’s not always the need for the technology; in many cases, we make up the reasons for needing the rewrite.
00:18:31.570 So what do we do? Avoid the trap—avoid it in the first place. There’s this technique of writing incremental fixes; some of you may have experienced this—cruising along and then doing some kind of debt iteration, or some kind of cleanup.
00:18:53.000 Every so often, you focus on cleaning up the code. Well, studies have shown that when you follow this approach, the actual cost of change in the application over time keeps getting higher and higher.
00:19:17.020 You can see that this is the curve on this—it gets to the point where you can’t do anything anymore; you can't write new features because the cost of change is so high.
00:19:37.100 Now let's look at a codebase where we're constantly cleaning and refactoring. You'll notice that the cost of change still increases, because the codebase gets larger and you can't hold all of that context in your head, but it climbs far more gently.
00:19:57.080 There are rational reasons for the difference. This is what happens when teams do these cleaning sprints: you go through a period of building, then spend time cleaning up the code. But then what are you doing?
00:20:14.610 You’re trying to catch back up to the schedule the business expects, and even if you aren’t, you’ve basically jumped right back into those old habits.
00:20:38.940 So it turns out that really, you’re on the same path; you’re just fooling yourself as you go along. You’ve got to clean constantly; don’t make an intentional mess.
00:21:00.430 Monitor your technical debt; follow the Boy Scout rule: What’s the Boy Scout rule? Leave it cleaner than you found it. It's not that hard; if everyone on the team is doing that, the work you’re doing is good quality, and you clean up as you go.
00:21:19.130 Over time, it actually does get better. It can feel futile; it can feel like you’re spitting into an ocean of debt. But everyone making the effort can actually help. If you're following the Boy Scout rule, you’ll be cleaning the code you most often change where the highest churn is.
00:21:36.770 Turns out, that’s where your highest risk is. Just like coverage: if you’ve got a code base with zero coverage, and you start writing tests, write tests around the stuff you’re actually working on.
00:21:53.660 If everything else is ugly but working and you don’t need to touch it, don’t touch it. Remember, quality is your responsibility, and never ask for permission to do your job correctly.
00:22:09.500 This is really hard; it can be very scary. We’re under a lot of pressure sometimes to deliver. We are also very fortunate—we are in a field that is in extremely high demand.
00:22:27.130 People who stand up for what’s right and do their jobs excellently can be fired for that very thing, and I guarantee you will find a job the next day. In fact, if that happens to you and you don’t find a job the next day, call me; you will have a job the day after that.
00:22:40.800 There are a few key metrics we can look at for monitoring our cruft and debt: coverage, code complexity, coupling, and then maintainability—which used to be only a .NET thing, but I believe there's now a maintainability index available with a couple of Ruby libraries.
00:22:55.430 I know that metric_fu, which is kind of old, was working on a maintainability index as a heuristic along with all the other stuff it did. I'm not going to get into details on these things—like what tools you should use for each one of them. What I do want to talk about is how we often have these arguments.
00:23:15.300 I often hear about how code coverage can be gamed, how cyclomatic complexity can be gamed, how coupling can be gamed, and how this doesn't mean quality. And you know what? You’re right. Put all three together, and I challenge you to game them.
00:23:33.970 I'd like to see someone come up with a way to game coverage, complexity, and coupling all at the same time and not inadvertently improve the quality of their code.
00:23:48.320 The maintainability index is basically just a heuristic that uses several of these together and gives you a number on a scale of like 0 to 99. It’s kind of nice for graphing. The other thing I watch with these is: don’t set targets.
00:24:08.340 If your coverage is at 30 percent, but that 30 percent is covering the stuff that has high churn, that’s awesome. If your coverage is at 100 percent, that’s a smell as far as I’m concerned. Someone's probably gaming the system.
00:24:26.660 Just monitor the trends. Is it getting better? Is it staying the same? Is it getting worse? Based on what you’re doing as a team, is that what you expect? If you've made the decision to rush to hit a deadline because there’s a large financial opportunity, and you’re okay with the code quality staying flat or making a slight dip, then that’s fine.
00:24:44.540 That’s what you planned. Most importantly, I’d say try to get it trending up, but just watch the trends over time—don’t watch any single point on any of these.
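A minimal sketch of what "watch the trend, not the number" could look like in practice, assuming you already record one metrics snapshot per build into a JSON file; the file name and metric keys here are made up, not taken from any specific tool:

```ruby
# Hypothetical trend monitor: compare the latest snapshot to the trailing average
# and report direction, rather than failing against a fixed target.
require "json"

# Assumed format: [{ "coverage" => 72.4, "complexity" => 18.3, "coupling" => 7.1 }, ...]
snapshots = JSON.parse(File.read("metric_history.json"))

%w[coverage complexity coupling].each do |metric|
  values = snapshots.map { |snapshot| snapshot[metric] }
  next if values.size < 2

  baseline = values[0..-2].sum / (values.size - 1).to_f
  latest = values.last

  direction =
    if (latest - baseline).abs < 0.5
      "holding steady"
    elsif latest > baseline
      "trending up"
    else
      "trending down"
    end

  puts format("%-12s %6.1f (baseline %6.1f)  %s", metric, latest, baseline, direction)
end
```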
00:25:02.920 To make sure there's time for Q&A, a quick review: Technical debt is a strategic design decision. It requires the business to be informed; it includes a payback plan.
00:25:14.090 Cruft happens; it needs to be monitored and cleaned, but it is not technical debt. And my final reminder is: never ask for permission to do your job correctly.
00:25:29.530 Comments? Questions? Someone, speak up! Come on, it happens at every conference!
00:25:47.820 How would I recommend reversing the debt ratio in a large legacy code base? I would take the same approach that I would take to introducing testing to a code base that doesn’t have it—really focus on the areas where there’s high churn.
00:26:09.070 We know where the team is actually working, because that's where the greatest risk lies. Coupling metrics are very helpful there. Look at classes with a lot of afferent coupling, where a lot of other classes depend on them. That's a good place to take a look.
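One way to find those high-churn, high-risk spots is to count how often each file has changed recently. The sketch below shells out to git, so it assumes a git repository; it is not a tool Doc names, just an illustration of the idea:

```ruby
# Hypothetical churn report: rank files by how often they changed in recent history.
SINCE = "6 months ago"

# `--pretty=format:` suppresses commit headers, leaving only the touched file names.
log = `git log --since="#{SINCE}" --name-only --pretty=format:`

churn = Hash.new(0)
log.each_line do |line|
  file = line.strip
  churn[file] += 1 unless file.empty?
end

churn.sort_by { |_file, count| -count }.first(20).each do |file, count|
  puts format("%4d  %s", count, file)
end
```

Cross-referencing that list with coupling data points at the classes where cleanup and new tests pay off fastest.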
00:26:28.950 SonarQube, which was originally a tool available in the Java space, grabs all of the static analysis information and shows you reports over time. There are now plugins that let you run static analysis tools against your Ruby codebase, Python codebase, JavaScript codebase, etc.
00:26:51.690 Sonar has this very cool feature where it shows a heat map of the codebase and says, 'Hey, this is a space, an area of the code, that you can make an easy change to that will have a high impact.' It shows you quick hitters.
00:27:12.640 So, you can actually make some informed decisions about where to focus. For the most part, focus on the stuff that’s currently changing. If you’ve got some old, crusty classes that no one touches anymore—where things just do their job—even if they’re flawed, if you’re not touching them, don’t worry about it.
00:27:27.830 When it's time to actually touch them, then consider cleaning them up. I would do that differently if they don’t have unit tests around them; I profile them first. Profiling for me means writing unit tests that indicate the behavior I observe.
00:27:48.310 I write them as if that’s the behavior I expect. One of the fun things about this, as a consultant: I went into an organization, wrote a bunch of tests, and one stated that it should allow you to change the state name without adjusting the abbreviation.
00:28:05.600 One of the developers got very upset that I would write a test stating that. My response was, 'No, dude, this is what the class does; this is what the class has always done. It's always been broken; you just didn't have a test to tell you that!'
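A characterization test of the kind Doc calls "profiling" might look like the sketch below; the `StateRecord` class is a stand-in invented for illustration, and the stub models the observed (arguably broken) behavior from the story.

```ruby
# Hypothetical "profiling" (characterization) test: it asserts what the code does
# today, even where that looks like a bug, so later cleanup can't change the
# behavior unnoticed.
require "minitest/autorun"

# Stand-in for the legacy class under test.
class StateRecord
  attr_accessor :name
  attr_reader :abbreviation

  def initialize(name:, abbreviation:)
    @name = name
    @abbreviation = abbreviation
  end
end

class StateRecordCharacterizationTest < Minitest::Test
  def test_changing_the_name_does_not_adjust_the_abbreviation
    state = StateRecord.new(name: "Colorado", abbreviation: "CO")
    state.name = "California"

    # Observed behavior as of today: the abbreviation is left untouched.
    assert_equal "CO", state.abbreviation
  end
end
```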
00:28:21.350 So, I like profiling. What are some techniques to use to actually push back more on the business? One of the things I advocate for is getting static analysis around the code.
00:28:42.430 One reason I do that is because it is objective. It may not be perfect, but it is objective. We can show that the quality of the code is getting better or it’s getting worse. If we contrast that against our velocity, we can start showing that, 'Hey, when the team is pushed to move this fast, quality goes down.'
00:29:02.800 When the team is allowed to slow down a little bit, quality comes back up. Now we’re making an informed decision that isn’t based on the opinion of different developers.
00:29:20.400 That’s part of the problem, especially in large organizations, where one team says, 'Man, that code is crap,' and another team is perfectly proud of that code, because it’s very subjective. But if we can actually have objective measurements, we can change the conversation.
00:29:44.990 Cool, well, thank you very much, everybody!