Mental Models

Summarized using AI

Mind Over Error

Michel Martens • November 15, 2015 • San Antonio, TX

In the presentation "Mind Over Error," Michel Martens explores the relationship between human error and system design, particularly in the context of programming and software development. He emphasizes the importance of understanding how systems work to minimize human error, especially in high-stakes situations such as production emergencies. Martens draws insights from significant works like "Human Error" by James Reason and "The Design of Everyday Things" by Don Norman, highlighting that many errors stem from design flaws.

Key points discussed include:

- Human Error in Programming: Martens acknowledges that while it is common to make mistakes in programming, the consequences can be severe in production environments where time is critical.

- Mental Models: He stresses the need to build accurate mental models of systems to ensure effective problem-solving when things go wrong, emphasizing that understanding the inner workings of tools is crucial.

- Complexity and Simplicity: The presentation discusses how reducing complexity leads to better outcomes. Martens illustrates this through the evolution of chess notation as well as programming languages, using Ruby as an example to showcase the balance between expressiveness and complexity.

- Metrics of Complexity: Different metrics for measuring code complexity are presented, such as lines of code and cyclomatic complexity, which correlate with the difficulty of understanding software.

- Choosing Tools Wisely: Martens cautions against selecting tools based solely on popularity or superficial qualities. Instead, he advocates for a culture that encourages critical engagement with code to ensure appropriate tool selection.

- Emphasizing Stability: He advises focusing on lightweight, stable tools, drawing on his own projects and experience, to minimize the cognitive load on developers.

- Programming Culture: Building a culture centered around understanding and simplicity in code will improve software development practices, leading to better outcomes and reduced errors.

In conclusion, Martens emphasizes the ongoing responsibility of programmers to cultivate an understanding of their tools, leading to simpler, more efficient solutions that inherently reduce the likelihood of human error.


Mind Over Error by Michel Martens

Industries like aviation and power plants have improved their safety mechanisms using our growing understanding of the nature of human error, but have we kept up? How do we incorporate the ideas from Human Error, The Design of Everyday Things, and other great resources into what we build? I want to show you how to improve the safety of our systems by reducing their complexity and generating accurate mental models.


RubyConf 2015

00:00:14.620 Hello, hi! My name is Michel Martens. I'm from Argentina and I've been using Ruby since 2003. At that time there was no Ruby on Rails, RSpec, or Bundler. As far as I can remember, it was all about small libraries that highlighted how expressive Ruby is. That experience had a significant impact on me, making me strive to create small tools that solve very specific problems with minimal code. My username on Twitter and GitHub is soveran, and most of the code I write is open source. I also have a company called Openredis, which provides hosted instances of Redis, an in-memory database. This presentation will focus on human error and how it relates to mental models and simplicity.
00:00:44.239 We are all familiar with human error; we make mistakes all the time. During development and programming, we can hit the wrong key, forget a comma, or write an incorrect algorithm. Thankfully, programming is a forgiving environment, as we have plenty of time and minor consequences for errors. With the help of our text editors and interpreters or compilers, we can run the program and see if it works. However, the situation changes dramatically when we're dealing with an issue in production, such as when our website goes down during an emergency.
00:01:14.820 A few years ago, I realized that I had never trained for that kind of situation. Dealing with an emergency means we are usually racing against time. I also became aware that many types of human error underlie these crises. For example, if my jacket were poorly designed, I might make an error simply because of that design flaw.
00:01:50.540 The primary idea here is that while we might make errors during development without severe repercussions, when it comes to production scenarios where customers are complaining, each mistake can be critical. I came to understand this after reading extensively about accidents in aviation and power plants. The topic is fascinating and addictive. In everything I read, there was always a reference to a book called 'Human Error' by James Reason. He conducted a great deal of research in this area and proposed a model to classify human errors and behaviors, also suggesting ways to prevent various types of mistakes.
00:02:32.080 Another significant book related to this subject is 'The Design of Everyday Things' by Don Norman. He is a psychologist as well as an engineer, and much of what he discusses directly applies to what we build. The central idea of this book is that for every human error, there's often a design error. As humans, we make mistakes; it's part of our nature. Good design should anticipate this and help prevent silly errors while also allowing us to detect and correct these mistakes.
00:03:13.799 While I can't cover everything in these books, I highly recommend reading them. I want to focus on an idea that closely relates to programming and its complexities. It pertains to building accurate mental models of the systems we create or use. A mental model is our representation of a system. It encompasses how that system works and its internal design.
00:03:46.140 It's essential to clarify that knowing how to use something is not the same as knowing how it works. For instance, in software development, understanding how it works means we have to read the code and comprehend its functionality. If we can create an accurate mental model, we’re less likely to misuse the tool. Moreover, when something goes wrong, we'll know exactly where to look for the issue. If our knowledge only extends to how to use a tool, we may feel like experts, but when problems arise, we could be clueless about their origins and struggle to fix them.
00:04:14.790 An emergency is the worst time to be trying to understand how a system works. Therefore, we must proactively learn how our tools are designed and understand their workings in advance. The primary barrier to understanding how something operates is its inherent complexity.
00:04:55.200 I have an example of how complexity can be reduced from a historical perspective, unrelated to programming—specifically, chess notation. Four hundred years ago, people wrote chess moves in a cumbersome manner: 'the White King commands his knight to the third house before its own bishop.' Clearly, that was not an efficient way to describe a move, and they recognized this.
00:05:14.420 Over the years, they discovered the value of keeping track of their games. A hundred years later, the notation had evolved to compress the information significantly. Eventually, they adopted a standardized coordinate system to describe moves, which led to better communication about chess strategies and improved the overall level of play. Similarly, when I began working with Ruby, we wrote test cases in a straightforward manner. Later, frameworks like RSpec introduced more refined syntax that still communicated our assertions effectively. But we also witnessed a paradigm shift in which developers opted for ever more complex syntax, ultimately sacrificing performance and clarity in favor of additional functionality.
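As a rough illustration of that shift in test syntax, here is a hedged sketch contrasting a plain assertion-style test with an RSpec-style expectation; the Cart class is hypothetical, invented only for this example.

```ruby
# A hypothetical class used only for this example.
class Cart
  def initialize(prices)
    @prices = prices
  end

  def total
    @prices.sum
  end
end

# Assertion style (Minitest / Test::Unit): the test is plain Ruby.
require "minitest/autorun"

class CartTest < Minitest::Test
  def test_total
    assert_equal 42, Cart.new([20, 22]).total
  end
end

# The equivalent RSpec expectation reads more like prose, but relies on a
# much larger framework underneath:
#
#   RSpec.describe Cart do
#     it "computes the total" do
#       expect(Cart.new([20, 22]).total).to eq(42)
#     end
#   end
```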
00:06:12.180 This illustrates an intuitive approach to handling complexity: we need to understand the boundaries of minimal complexity. For example, consider a simple function that returns the number 42; it can be defined simply, but we can also make it more complex unnecessarily.
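A hedged sketch of that point in Ruby; the convoluted variant shown here is just one of countless possibilities:

```ruby
# The simplest possible definition:
def answer
  42
end

# A needlessly complex definition with the same observable result:
def convoluted_answer
  ("*" * 6).length * [7].sum
end

answer            #=> 42
convoluted_answer #=> 42
```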
00:06:40.080 In fact, we can define it in infinitely many more complex ways while still achieving the same result, which highlights that simplicity tends to lead to better outcomes. When we talk about software complexity here, we mean the relationship between a program and the programmer: how challenging it is for a person to comprehend the program. This is not computational complexity in the Big O sense; it's a psychological measure of understanding, driven by the clarity and simplicity of the code.
00:07:23.120 Several metrics try to capture this kind of complexity, although none are perfect yet. One early idea, proposed by Wiebe in 1974, was simply to count lines of code, a method that, while simplistic, became popular for estimating software effort and cost. Another well-known metric is cyclomatic complexity, which McCabe introduced in 1976; it counts the independent execution paths through a program and yields a score that correlates with its complexity. Also popular is Halstead's program volume, which counts operators and operands to arrive at a complexity number.
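To make the cyclomatic complexity metric concrete, here is a small hypothetical Ruby method, invented for illustration, annotated with how the metric counts it:

```ruby
# Cyclomatic complexity = number of decision points + 1 (the base path).
def shipping_cost(weight, express)
  cost = weight > 10 ? 8 : 5  # decision point 1: the ternary
  cost += 4 if express        # decision point 2: the if modifier
  cost
end
# Two decision points + 1 base path = cyclomatic complexity of 3.
```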
00:08:09.760 Interestingly, even though these metrics seem disparate, they correlate closely with experimental measurements of software understanding difficulty. A paper from 2007 demonstrated a strong linear correlation between lines of code and complexity. This suggests that reducing the lines of code in software reliably correlates with a decrease in complexity, which is something we want to achieve. Here, we're assuming good clarity—in other words, the code is readable and clear.
00:09:09.920 Ruby embodies an interesting paradox: it's an extremely expressive language, often requiring significantly less code to convey a solution than other languages. Yet while that expressiveness is Ruby's strength, the community often uses it to build increasingly complex tools. As a result, we tend to reach for heavyweight frameworks, even though it's entirely feasible to create smaller, more efficient tools within the same community.
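A trivial, hedged illustration of that expressiveness (the numbers are arbitrary):

```ruby
# Written imperatively, summing the even numbers takes several lines:
total = 0
[1, 2, 3, 4, 5, 6].each do |n|
  total += n if n.even?
end
total  #=> 12

# Idiomatic Ruby reads almost like the problem statement:
[1, 2, 3, 4, 5, 6].select(&:even?).sum  #=> 12
```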
00:10:10.920 Rails, for instance, has hundreds of thousands of lines of code to solve problems similar to those Sinatra solves with far fewer. In fact, countless simpler libraries effectively solve the problems we encounter daily. To underscore this, we can build modern web applications on a lightweight stack: a library like 'Cuba' can serve as the router, and a library like 'Shield' suffices for authentication in less than 100 lines of code, in contrast to 'Devise,' which exceeds 60,000 lines.
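As a rough sketch of what such a lightweight stack can look like, here is a minimal Cuba application following the routing style in Cuba's documentation; the routes themselves are invented for illustration:

```ruby
# config.ru — a minimal Cuba application (assumes the cuba gem is installed).
require "cuba"

Cuba.define do
  on get do
    # Matches GET /hello
    on "hello" do
      res.write "Hello, world!"
    end

    # Matches GET /
    on root do
      res.redirect "/hello"
    end
  end
end

run Cuba
```

Run it with `rackup` and the entire router fits in a screenful of code.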
00:11:14.279 My latest project, called 'Syro,' is a routing library similar to Cuba, designed to stay efficient by taking a modular approach. I wrote a tutorial for it that guides users through building a demo application with user accounts, activation emails, and template rendering, covering the essential components of a typical application. The goal is to help newcomers build with simplicity in mind.
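A minimal sketch of a Syro router, based on the style of examples in Syro's documentation; the specific routes and the `users` segment are invented for illustration:

```ruby
# config.ru — a minimal Syro application (assumes the syro gem is installed).
require "syro"

app = Syro.new do
  get do
    res.text "Welcome!"
  end

  on "users" do
    on :id do          # captures the path segment into inbox[:id]
      get do
        res.text "User ##{inbox[:id]}"
      end
    end
  end
end

run app
```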
00:12:14.520 The philosophy behind these libraries is that they remain stable over time. The contributing guidelines for some of the libraries I've created say as much: if the problem a library solves hasn't changed, there's no reason to change the library. Many developers choose tools based on how recently they were updated, which is often misleading; tools I've used successfully may have gone years without an update and still work perfectly well.
00:13:14.390 Moreover, a tool that changes frequently forces constant relearning, which creates instability in development. It's crucial to build a culture of understanding and reading code, striving for simplicity to effectively mitigate errors. I also want to bring attention to an observation from Leslie Lamport contrasting programs with cars: a car wears out and needs maintenance, but a program, being a mathematical object, does not wear out at all. An if-else statement remains intact and correct regardless of how many times it's executed.
00:14:17.600 The idea is that we can prove a program's correctness, whereas we can never guarantee that a vehicle will run well every time. The comparison reframes how we think about software maintenance: we are dealing with a mathematical object, not a physical artifact that wears out over time.
00:15:27.400 As we deal with increasingly intricate systems that we don’t fully understand, our decision-making can become irrational, particularly regarding tools. For instance, users might select a library based on superficial qualities or popularity instead of truly understanding its function and applicability. We need to foster a culture where programmers engage with the code, comprehensively read it, and evaluate it critically to fulfill their specific use cases.
00:16:45.320 Finishing up, I believe it's our responsibility as programmers to cultivate this understanding, which will ultimately lead to simpler, more effective solutions. If you resonate with these ideas, I invite you to join our community, where we have discussions on IRC and a subreddit where we can compile information on these topics.
00:17:18.490 Regarding the question about using something from Active Support, such as extracting a handy utility from a larger library, I recommend understanding the core of the problem first and finding solutions that are lightweight and effective. In my experience, focusing on the essential algorithms and data structures yields lasting results and minimizes complexity. Simple, effective solutions tend to endure.
00:18:00.850 As for the transition from large frameworks to smaller tools, there is no one-size-fits-all method. While some developers incorporate minimalist tools into larger projects, others demonstrate to clients how a simpler application can have significantly less code and outperform more complex alternatives. Your proof lies in quantifiable improvements and performance, as these can become persuasive arguments.
00:19:03.780 To close, on the topic of metaprogramming: I found it fascinating at first. However, treading the line between flexibility and cognitive load is crucial; tools must be easy on both the developer and the machine. I would advocate cautious use of metaprogramming, as it adds cognitive load and can introduce performance bottlenecks.
00:20:17.670 Thank you for your attention.