
Summarized using AI

A Practical Taxonomy of Bugs

Kylie Stradley • November 04, 2016 • Earth

Introduction

The video titled "A Practical Taxonomy of Bugs" by Kylie Stradley addresses different types of bugs in software development and provides insights on effective debugging strategies. Stradley builds upon experiences from both consulting and product development, aiming to normalize the process of identifying and squashing bugs in applications.

Key Points

  • Understanding Debugging

    Debugging is often thought of as an instinctual skill that developers acquire over time through experience. Stradley emphasizes the importance of having a set of logical rules rather than relying on arbitrary instincts, which can lead to poor team communication.

  • Instincts and Patterns

    Debugging instincts can be converted into actionable patterns through observation and experience. Developers can use logical reasoning to determine where to look when a bug arises based on past experiences.

  • Taxonomy of Bugs

    Stradley introduces a classification system for bugs, likening it to biological taxonomy, where bugs are categorized by their behavioral attributes. She identifies two primary categories and several subtypes of bugs:

    • Upsettingly Observable Bugs: Easily reproducible errors often rooted in specific workflows or data states. Two subtypes are:
      • Bohr Bugs: Named for the simple, deterministic Bohr model of the atom, these reliably produce the same faulty output for the same input. Examples include validation errors where processes fail silently, causing major issues.
      • Schrödinger Bugs: Bugs that appear to function correctly until closely examined. Identifying these requires careful observation and logging.
    • Wildly Chaotic Bugs: Bugs that manifest inconsistently and may lead to multiple failures across applications. Two notable examples are:
      • Heisenbugs: These bugs cannot be observed without affecting their outcomes, making them challenging to debug.
      • Mandelbugs: These present as overwhelming failures in many parts of a system simultaneously, requiring systematic identification and resolution of issues.
  • Debugging Tools and Techniques

    Stradley discusses various debugging tools that can help mitigate the reliance on intuition. She stresses the importance of using logging and profiling tools to analyze and resolve bugs effectively, enhancing team communication while improving system performance.

Conclusion

Stradley's presentation concludes with an encouraging message that developers need not solely depend on instinct when it comes to debugging. With the right tools, knowledge, and documentation, developers can adopt structured approaches to address and manage bugs in software applications. She invites attendees to connect for more insights on debugging techniques and web development best practices.

A Practical Taxonomy of Bugs by Kylie Stradley

Keep Ruby Weird 2016

00:00:07.519 Welcome to this presentation, titled "A Practical Taxonomy of Bugs." Today, I'll be discussing various types of bugs and how to squash them. Before we begin, I want to thank Trina Owen for her excellent keynote this morning, which set a positive tone for my talk.
00:00:22.020 This presentation serves as my personal field guide to debugging. When we talk about debugging, we usually focus on debugging skills, which are essential tools in any developer's toolkit. Skilled debuggers can troubleshoot issues ranging from databases that refuse connections to the infamous 'undefined is not a function' error. They seem to know the right path to take or the next question to ask, even if they don't always have the immediate answer.
00:00:49.829 Watching adept debuggers at work can make it seem like debugging is purely instinctual. I've worked on various consulting projects and now at a large product company. Recently, I encountered a saying about debugging new applications: once you get familiar with a code base, you start to develop some debugging instincts. However, some people refer to this as developing calluses over time, which I find quite gross. The idea that our work might callous us is unsettling, and I don't think it's entirely true.
00:01:27.720 You might hear developers say they knew to look in a particular place because of scars from past experiences. Then, they might provide specifics like, 'Whenever I see this happen, the first thing I do is check the logs.' You can memorize those kinds of points, developing instincts—or as some might think, 'magic'—around debugging. However, these instincts are not created through witchcraft or proximity to successful debuggers.
00:01:54.210 The instincts might seem mystifying. Watching someone debug can feel like watching MacGyver—someone who seems to fix everything effortlessly. MacGyver, if you don't know, was a television character who solved ridiculous problems through sheer instinct and creativity. But in the real world, you don't want to rely on a MacGyver; relying on one person can be dangerous.
00:02:03.750 When we turn someone into a hero, we risk making them the sole source of truth instead of relying on the code itself. This dependence leads to a breakdown in communication within teams. Ultimately, instincts don't scale, and you can't just create another 'MacGyver' when things go wrong. Let's take a step back and look at debugging more generally.
00:02:57.060 When I say, 'Whenever I see X, I always check Y,' that's a simple conditional. In programming, I appreciate logic and rules, and in my experience, these instincts or scars develop into internalized rule sets. We can transform those instincts into patterns by observing them. However, we must keep a few ground rules in mind. First, we may need to contain a bug before squashing it, stopping the damage before conducting a retrospective to understand what occurred.
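Those internalized rule sets can literally be written down. As a hedged sketch in Ruby (the error messages and advice below are invented for illustration, not from the talk), a team's "whenever I see X, I check Y" instincts become shared, inspectable data:

```ruby
# Hypothetical sketch: debugging "instincts" recorded as shared rules
# instead of one person's scar tissue.
DEBUGGING_RULES = {
  /undefined is not a function/ => "look for a nil or missing function reference",
  /connection refused/          => "check that the database is up and reachable",
  /timeout/                     => "check the logs for slow external calls"
}.freeze

def first_check(error_message)
  _, advice = DEBUGGING_RULES.find { |pattern, _| pattern.match?(error_message) }
  advice || "no recorded rule yet: observe, then write one down"
end

puts first_check("ERROR: connection refused by db-01")
# prints the database advice, because the message matches /connection refused/
```

Because the rules live in code rather than in one developer's head, anyone on the team can extend them, which is exactly the scaling that a lone MacGyver cannot provide.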
00:03:38.400 This approach is acceptable because, in reality, we often need to prioritize getting the application back online rather than deeply analyzing every issue. Second, we can only work with facts; we must base our conclusions on observable evidence, not on assumptions. Lastly, this talk was never intended to cover everything; we don't have enough time to address every bug we might encounter in the world. As much as I would like to, I can't hand you my Swiss Army knife for debugging.
00:04:28.290 What is practical, then? It's learning how to identify bugs based on their behavioral attributes. Fortunately, biology already has a branch of science for this: taxonomy, and in particular phenetics, which classifies organisms based on observable traits. We can similarly identify bugs in software by their behavioral patterns.
00:05:28.260 For this presentation, I'll create convenient hypothetical scenarios to focus on identifying attributes of bugs we often encounter in live applications. I'll define two major types of bugs: the 'upsettingly observable' and the 'wildly chaotic.' These bugs turn up in many codebases, and they often linger in our applications for various reasons, such as being under-tested or untested.
00:07:02.990 The upsettingly observable bugs often lead developers to feel frustrated as they wonder how these issues escaped notice. Such bugs are often tied to specific workflows or data states. If you're answering yes to my questions about whether the bug can be easily reproduced or restricted to a specific area, it may indicate a 'Bohr bug.' We refer to these as such because they are simple and deterministic, like the Bohr model of the atom.
00:07:57.900 A Bohr bug will consistently produce the same output for a specific input. They're commonly embedded within code or lurking in server configurations. Their favorite hiding spots seem to be functions or classes with complex branching logic. An excellent example is validation errors. Validator classes are prime candidates for unit testing due to their complexity, but it's crucial to ensure that validation workflows are thoroughly tested to avoid silent failures, which often lead to major issues down the line.
00:09:19.080 When these silent failures occur, they may not produce errors, giving developers a false sense of security. For example, a user might think they successfully sent an email, but behind the scenes, the validation never triggered an action. As a result, catching Bohr bugs is comparatively easy: we can replicate them in both local and test environments thanks to their predictable output.
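A minimal Ruby sketch of the silent validation failure described above (the class and method names are hypothetical, not from the talk). Because a Bohr bug is deterministic, plain assertions reproduce it every single run:

```ruby
# Hypothetical Bohr bug: same input, same faulty output, and no error raised.
class EmailValidator
  def valid?(email)
    return false if email.nil?   # silently treated as invalid: nothing raised, nothing logged
    email.include?("@")
  end
end

class Mailer
  def initialize(validator = EmailValidator.new)
    @validator = validator
  end

  # Bug: on failed validation the caller just gets nil back,
  # so the user believes the email was sent.
  def deliver(address)
    return unless @validator.valid?(address)
    :sent
  end
end

mailer = Mailer.new
p mailer.deliver("kylie@example.com")  # => :sent
p mailer.deliver(nil)                  # => nil, silently, every time
```

A unit test asserting that `deliver` reports failure loudly (raises, or returns an explicit error value) would have caught this before it shipped.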
00:10:50.910 By implementing tests and ensuring the code remains highly readable, we can protect against future occurrences of the same issues. This readability creates a clear understanding of how code functions, allowing future developers to implement changes without introducing new bugs.
00:11:57.120 Now, moving on to the next type of bug. The 'Schrödinger bug' is named after Schrödinger's thought experiment. It appears to work until examined closely, revealing itself to be faulty. These bugs may give the illusion of functioning code, but they can mislead users and developers alike.
00:13:09.640 Schrödinger bugs often arise from side effects within functions, where return values obscure the actual result. For example, if a function provides an incorrect return value after saving an entry, users may believe the change was made when it was not. Identifying these bugs requires careful observation, logging, and often 'git bisect,' which helps find the exact commit where code stopped functioning as it should.
00:14:52.960 In reproducing and resolving these kinds of bugs, it's essential to add log statements to track what causes the malfunctioning state. By isolating these bugs and validating prior code states, we can resolve issues more systematically.
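To make that concrete, here is a hedged Ruby sketch (the Entry class is invented for illustration) of a return value that obscures the actual result, and the kind of log statement that collapses the illusion:

```ruby
# Hypothetical Schrödinger bug: save reports success while persistence fails.
class Entry
  attr_reader :persisted

  def save
    @persisted = false   # the write silently fails...
    true                 # ...but the method still returns true
  end
end

entry = Entry.new
result = entry.save

# Logging the actual state, not just the return value, exposes the bug:
puts "save returned #{result.inspect}, persisted? #{entry.persisted.inspect}"
```

The code "worked" only until we looked closely; once the log compares the reported result with the real state, the bug is pinned down and can be bisected back to the commit that introduced it.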
00:15:43.270 Now, let's discuss 'wildly chaotic bugs,' the second major category. The observable characteristics of this bug type can make it difficult to discern their patterns. For instance, they may manifest inconsistently across server instances or disappear entirely when you start debugging them.
00:16:52.230 The Heisenbug takes its name from the observer effect in physics: you cannot observe a system without affecting it. In programming, debugging tools can alter outcomes simply by being invoked. Thus, relying on debugging statements can inadvertently hide the bug instead of illuminating it. Profiling tools can help identify these elusive bugs without modifying code behavior.
00:18:54.080 For example, profiling can capture runtime data usage and identify heavy data calls that cause the buggy state. Understanding your system's performance can alleviate many debugging challenges related to Heisenbugs, rooting out issues without compromising code integrity.
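One way to observe from the outside, as a sketch, is Ruby's standard Benchmark library. The method names below are stand-ins (not from the talk), and the sleep mimics an expensive data call:

```ruby
require "benchmark"

# Hypothetical probe: time candidate calls from the outside instead of
# sprinkling puts statements through the code under suspicion.
def cheap_lookup
  (1..1_000).sum
end

def heavy_lookup
  sleep 0.05          # stand-in for a heavy data call
  (1..1_000).sum
end

timings = {
  cheap_lookup: Benchmark.realtime { cheap_lookup },
  heavy_lookup: Benchmark.realtime { heavy_lookup }
}

suspect, seconds = timings.max_by { |_, t| t }
puts format("slowest call: %s (%.3fs)", suspect, seconds)
```

Because the measurement wraps the calls rather than editing them, the code under observation runs unchanged, which is exactly what a Heisenbug demands.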
00:20:09.260 The 'Mandelbug' presents as an overwhelming issue: everything seems broken at once, contributing to a high-stress environment. Users may find simultaneous failures in multiple applications or functionalities, raising alarms within development teams. Identifying the single point of failure allows for more efficient resolution.
00:21:59.540 To resolve the kind of chaos a Mandelbug presents, we need to check our systems methodically, returning frequently to our logs and diagnostics to find likely causes. Keeping an eye on server usage and activity can show when issues are happening, giving context to the chaotic nature of such bugs.
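As a small sketch of that methodical log check (the log format here is invented for illustration), tallying errors per subsystem can surface the likely single point of failure behind the apparent chaos:

```ruby
# Hypothetical log excerpt: many things look broken at once...
LOG_LINES = [
  "ERROR [api]    upstream timeout",
  "ERROR [db]     connection refused",
  "ERROR [worker] job failed: db unreachable",
  "ERROR [db]     connection refused",
  "ERROR [db]     connection refused"
].freeze

# ...but counting failures per component points at one root cause.
counts = LOG_LINES.each_with_object(Hash.new(0)) do |line, tally|
  component = line[/\[(\w+)\]/, 1]
  tally[component] += 1 if component
end

root_cause, hits = counts.max_by { |_, n| n }
puts "most-failing component: #{root_cause} (#{hits} errors)"
```

Here the api and worker failures are downstream symptoms; the tally makes the database the first place to look rather than the third.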
00:24:54.500 Ultimately, I have good news. There are tools available to help debug these types of issues; you don't need to rely solely on instinct. Armed with the right knowledge and tools, including shared documentation and resources, we can develop a structured approach to debugging.
00:25:02.500 Thank you for attending this session. If you have any questions about debugging techniques or just want to chat about my work at MailChimp, please feel free to find me during lunch. I'd love to connect and provide valuable insights on web development.