Keynote: On the Care and Feeding of Feedback Cycles by Elisabeth Hendrickson

Summarized using AI


Elisabeth Hendrickson • November 08, 2021 • Denver, CO

The keynote by Elisabeth Hendrickson at RubyConf 2021 focuses on the significance of feedback cycles in software development. Hendrickson frames feedback as empirical evidence of what works and what doesn't, gathered through cycles such as Deming's Plan-Do-Check-Act and Agile iterations. She emphasizes the risks of delayed feedback, which leads to unnecessary speculation and inefficiency in development processes.

Key points discussed include:

- Understanding Feedback: Feedback is the empirical assessment of actions taken, distinguishing valid data from mere opinions.

- Feedback Cycles in Software: The life cycle of software projects is laden with speculation, requiring systematic feedback through testing and iterations to minimize risks.

- The Importance of Release Feedback: Relying solely on in-the-wild user feedback can lead to disastrous outcomes if not approached cautiously.

- Types of Feedback: Different types of feedback are highlighted, including unit tests, system tests, and user feedback, each with varying cycle times and importance in the development process.

- Cautionary Tales: Hendrickson shares various cautionary tales from her experience, including the drawbacks of long pull request processes, the ramifications of not addressing test pollution, and the importance of developer ownership in maintaining the quality of tests.

- Improving Feedback Loops: Strategies for improving feedback loops are proposed, emphasizing shorter cycles, cleaning up pollution in feedback, and fostering a culture of continuous improvement.

- The Learning Cycle: The keynote concludes with the notion that every feedback cycle offers learning opportunities, turning failures into growth experiences.

Hendrickson underlines that by nurturing feedback mechanisms and reducing latency in responses, teams can enhance their agility and deliver higher-quality software products more confidently. The session wraps up by asserting that there’s no failure in the software process, only learning, thus encouraging an iterative and reflective approach to software development.


RubyConf 2021

00:00:10.719 I'm going to confess, this is a little bit surreal for me. Even without COVID, I wasn't getting out a whole lot. In 2019, I don't believe that I attended any in-person live conference events like this, so it's been probably at least three years. I'm a little nervous, so I hope this all works for all of us. But it is so good to see you all in person, and I'm so grateful to be here and to see you.
00:00:43.040 So, let’s talk about feedback. Where to start? Well, better to just get started. This talk is divided into four parts. First, we’re going to talk about the nature of feedback in an abstract way; we’ll go through that fairly quickly. Then, we’re going to talk about software and how feedback applies to it. After that, I’m going to tell some cautionary tales. Finally, I’m going to bring it home.
00:01:09.040 Let’s start with the question: What is feedback? It is the simplest thing in the world. You do a thing, you see what happens, and you gather empirical evidence that tells you if the action you took had the intended effect. The empirical evidence part is the super important aspect, because opinions are not actually feedback unless they provide concrete information about the effect of what you did.
00:01:41.520 We have many fancy ways of describing feedback loops. There’s the Deming cycle: Plan, Do, Check, Act. It's just a feedback cycle. We plan what we’re going to do, then we do it, check to see how it went, and act based on the information we've gathered. If you receive feedback and do not act on it, then that’s a problem. There’s also the OODA loop, where John Boyd introduced us to Observe, Orient, Decide, Act—that's another feedback cycle. And then there's the Lean Startup model: Build, Measure, Learn. The idea is to build the smallest possible version of your product to test your hypothesis. It’s basically the scientific method at work.
00:02:25.680 Now let’s talk about software. This is where things get a little more complicated. Once upon a time, when I gave variations of this talk—by the way, I have given this talk many times, and it is slightly different each time. This time, for reasons I still don't understand, I decided to rebuild all the slides. So, you’re looking at an entirely new deck, even if it looks a bit familiar. I will also mention this is the first time I’m giving this talk from Google Slides. This is not a Google advertisement, but I’ve realized that the world has sufficiently changed that I no longer feel the need to have everything local on my computer; the internet is ubiquitous now.
00:03:19.760 Let's talk about every software project ever. Whether you work in an agile manner and ship frequently or you work on large projects that ship every five years, the process begins with analysis. This might involve an analysis phase or just informal user research to understand what we want to build. This stage is all about analyzing the problem we want to solve and forming some hypotheses as we engage in design, which might involve UI design or architecture.
00:04:01.680 Notice that there is a speculation curve I haven’t mentioned yet. At each step of the way, we’re building assumptions based on previous assumptions. For instance, we assume that we understood the problem. We speculate that when business analysts gather requirements from stakeholders, they comprehend what the stakeholders actually need instead of just providing solutions disguised as needs. We are continually speculating that our design will indeed solve the problem we identified. As we proceed, we inevitably iterate, whether we are using a waterfall or agile approach. You’ll notice the slope of the speculation curve starts to decline because now we receive some empirical evidence.
00:04:53.760 Then we conduct final testing. If you work in a traditional environment, you might have a dedicated QA department carrying out the final test cycles. If you work in an agile setup, you might be deploying to a staging server. Regardless, there's a release, and this is when the results of our efforts reveal themselves. That entire span, from taking an action to observing its outcome, is one long feedback cycle, and all the risks inherent in software development accumulate inside it. Clearly, the longer the cycle, the greater the risk we take on.
00:06:18.720 Here's a digression: let's talk about Schrödinger's cat. Why on earth am I discussing Schrödinger's cat? You may have heard of it. Schrödinger proposed this thought experiment in 1935 to illustrate superposition in quantum mechanics: until an observation is made, a quantum system exists in multiple states at once.
00:06:30.800 In this thought experiment, there is a cat in a sealed box, along with a device that has a 50% chance of releasing poison gas based on the decay of a radioactive isotope. If the isotope decays, the cat dies; if it doesn’t, the cat lives. Until the box is opened and the cat is observed, it exists in both states simultaneously. This absurdity demonstrates that without observation, we remain in a state of speculation, akin to the state of awaiting empirical feedback.
00:07:02.320 So, until you observe the results of your efforts in the wild, you are merely speculating. In theory, Agile solves this for us by promoting frequent shipping, and in practice, teams that adhere to Agile principles do ship with more confidence and less risk. But let's face reality: even in a SaaS environment, empirical evidence can still be elusive.
00:07:42.560 Theoretically, we should have access to ample testing opportunities through system tests, unit tests, and performance tests. In practice, we often fall short of that ideal. Sometimes the shortfall is intentional, because testing environments are expensive; sometimes it is avoidance, because performance tests are burdensome to run. The result is a significant gap between the feedback we want and the feedback we actually get.
00:08:56.000 As we iterate, that gap is speculation. We tell ourselves it will be fine and maybe even write it off as inconsequential, but the longer we go between observations, the more risk we build up, trading agility for fragility.
00:09:45.600 If we wait for in-the-wild feedback, we often realize we've waited too long and exposed ourselves to significant risk. The tragic reality is that delaying feedback, whether from testing or from users, can lead to catastrophic failures, as the 737 MAX disaster exemplifies. What I'm hoping is that you take away an understanding of the different levels and types of feedback you could be obtaining.
00:10:02.320 For example, a unit test answers a specific programming question: Did the code I wrote perform as intended? However, even with a comprehensive suite of unit tests, we might not accurately depict the system's overall behavior from the user's perspective.
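The narrow question a unit test answers, and the gap it leaves, can be sketched in a few lines of Ruby. This example (the method name, values, and test cases are illustrative, not from the talk) uses Minitest, which ships with Ruby:

```ruby
require "minitest/autorun"

# A small, hypothetical piece of application code under test.
def apply_discount(price, percent)
  (price * (1 - percent / 100.0)).round(2)
end

class ApplyDiscountTest < Minitest::Test
  # Answers one focused question in milliseconds:
  # did the code I just wrote do what I intended?
  def test_applies_percentage_discount
    assert_equal 90.0, apply_discount(100.0, 10)
  end

  def test_handles_zero_discount
    assert_equal 100.0, apply_discount(100.0, 0)
  end
end
```

Even a comprehensive suite of tests like these says nothing about how the whole system behaves from a user's perspective; it only confirms that each small piece does what its author intended.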
00:10:18.400 One of my past consulting clients sought help three years into a delayed project. They hadn't yet reached formal QA, and though they claimed to have tested all their components, they struggled to maintain those tests as development progressed. Their component-level tests had not protected them from late-breaking surprises, and the schedule kept slipping as unforeseen complexity surfaced.
00:10:56.720 A CI system provides information across configurations and can paint a comprehensive picture of a project's health, but acceptance still relies on someone examining the work and confirming it delivers the intended value. Feedback from stakeholders provides critical course correction as development continues.
00:11:35.680 Each of these feedback levels has a distinct cycle time: unit tests take seconds to minutes, CI integration tests take minutes to hours, and acceptance tests may take hours or even days to be evaluated. Stakeholder feedback can take days to weeks, and user feedback can extend over years. The longer the cycle, the longer we operate on speculation before we can deliver value with confidence.
00:12:35.839 Now, let's consider some cautionary tales. One that stands out is a team I worked with that used a pull request-based process. In that environment, individual developers worked solo on their features and received no code review until they opened a pull request at the end.
00:12:49.920 Once a developer completed some code, they would run slow tests locally for an hour, check the work in on a branch, and wait for approval. The organizational structure made it difficult for junior developers to get timely feedback, and some pull requests sat without engagement for extended periods, which was demotivating.
00:13:09.440 As a result, the process was frustrating for those less experienced. One junior developer expressed immense frustration when she could not get two lines of code merged in over a week; she spent that entire week waiting and adjusting her pull request while no real progress was made.
00:13:44.560 The end result was an inefficient flow that undercut team morale. The solution was to streamline the process by moving to pair programming instead of relying solely on pull requests. By fostering collective code ownership, two developers worked on the code simultaneously, which increased opportunities for immediate feedback and let work progress more swiftly.
00:14:29.360 So in thinking about your processes, consider how much waiting there is for feedback. If you can implement these kinds of changes, the goal is to reduce cycle time and thus allow for quicker iterations in feedback, which in turn can lead to more successful outcomes. Now let’s shift gears and examine branching strategies and batch sizes.
00:15:26.160 If possible, I prefer to keep all work on the main branch. When you branch features rather than keeping work in one principal place, batch sizes grow and merges become less frequent, creating a lengthy lifecycle that adds risk. The more distance between merges, the more we open ourselves to unforeseen issues.
00:16:00.960 Feature branching is a common practice with its own advantages and disadvantages, but it's important to recognize that it introduces additional risk through the elongated cycles between merges.
00:16:39.920 Now for another cautionary tale, about a practice I don't recommend. An organization decided that every team should have its own branch, which initially seemed reasonable. What happened, however, was that developers became disconnected from the evolving main branch and wasted months of effort trying to sync their work.
00:17:06.720 Developers kept merging their work into their team branches but neglected to sync regularly with main; in the end, six months of development effort was effectively lost.
00:17:27.680 The incident is not unique; many organizations face the same challenge: delayed feedback produces waste in the development process. Such an approach also fosters an environment where developers discount tests owned by a separate quality assurance group, because they don't feel responsible for them.
00:18:18.240 In my previous experience working on rapidly changing, complex systems, we discovered that insufficient confidence in the tests made software delivery unpredictable. We needed a cultural shift toward accountability and ownership of testing, creating an environment that valued tests as vital sources of feedback rather than as someone else's problem.
00:19:22.720 One significant decision required an entire team to cease all feature development until we could rectify the testing environment. This drastic move could be contentious; however, it ultimately laid the groundwork for better confidence in the products being delivered.
00:20:10.880 So as you aim to improve processes, remember to clean up feedback loops while the problems are still small. Once they become superfund sites, fixing them is a far more grueling process. Keep the focus on the testing environments and ensure they are understood across all levels of the team.
00:20:59.840 Finally, let's extend our discussion to testing and risk management. Stakeholders often misinterpret or downplay testing results, creating confusion about actual risk levels. Testing must be clean, clear, and understood; only then can it serve its purpose in building proper feedback cycles.
00:21:42.880 Regarding organization and ownership of feedback processes, individuals must feel a sense of shared responsibility to maintain the quality of our systems while also committing to producing the best software possible.
00:22:22.240 Next, I'd like to explore the concept of pollution in feedback cycles. When feedback lacks clarity or integrity, it erodes trust within teams. We must strive to maintain clean feedback loops in order to harness the true value of feedback systems.
00:23:09.440 One impactful example comes from intermittent build failures that compounded into complex ones. Teams must balance urgency in addressing pollution against the barriers that exist; resources often must be allocated judiciously to untangle the superfund site of accumulated technical debt.
00:24:05.280 Once we identified the sources of flakiness in our tests, one effective strategy was to separate blocking from non-blocking tests, while ensuring the blocking set still had sufficient coverage. Reducing execution time is another essential approach. Cultivating a culture of shared responsibility allows for greater ownership and a more engaged approach to resolving persistent issues.
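The blocking/non-blocking split described above can be sketched as a simple tagging scheme. This is a minimal illustration of the idea, not the tooling the team actually used; suite names and timings are hypothetical:

```ruby
# Illustrative sketch: tag each suite, then let the merge gate run only the
# blocking set, while slow or flaky suites run out-of-band on a schedule.
SUITES = [
  { name: "unit",            blocking: true,  avg_seconds: 40 },
  { name: "api-integration", blocking: true,  avg_seconds: 300 },
  { name: "full-system",     blocking: false, avg_seconds: 5400 },
  { name: "performance",     blocking: false, avg_seconds: 7200 },
].freeze

def suites_for(gate)
  case gate
  when :pre_merge then SUITES.select { |s| s[:blocking] }
  when :scheduled then SUITES.reject { |s| s[:blocking] }
  end
end

# Pre-merge wait is bounded by the blocking suites alone.
pre_merge_wait = suites_for(:pre_merge).sum { |s| s[:avg_seconds] }
puts "pre-merge gate: ~#{pre_merge_wait / 60} minutes"
```

The caveat from the talk still applies: the blocking set must retain enough coverage that a green merge gate actually means something, otherwise the split just hides the speculation gap instead of shrinking it.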
00:25:05.760 To illustrate how this can be fun, I once ran a campaign where we offered pie rewards for shortening lengthy test times. It transformed the atmosphere among developers, fostering collaboration and community engagement in the pursuit of efficiency.
00:26:05.920 To wrap up my thoughts, I want to emphasize the importance of healthy feedback loops. Maintain tight cycles, minimize wait states, and continually address pollution in these cycles. It is critical to recognize the relevance of varying types of feedback as they serve distinct purposes in the development process.
00:27:06.080 Every feedback cycle is a learning opportunity: stages such as experiencing, observing, reflecting, and abstracting create a platform for deeper understanding. Every cycle enhances your learning; there's no failure, only an opportunity to learn.
00:27:49.600 Given that time is fleeting, I will not hold a Q&A here, but I would love to converse further about any of these ideas throughout the day. Thank you so much for having me, and thank you for laughing at my jokes.