RubyConf AU 2020

Summarized using AI

My Experience With Experience

Xavier Shay • February 21, 2020 • Earth

In his talk at RubyConf AU 2020, Xavier Shay explores the complexities of learning from experience in professional environments, particularly within software engineering. Drawing from a decade of personal experiences, Shay emphasizes that while becoming a senior contributor takes time, understanding and improving the learning process can help individuals make the most of their professional journey.

Key Points

  • Learning from Experience: Shay starts by acknowledging that learning from experience is inherently challenging. He asks how individuals can enhance their own learning, and notes that coaching others to improve can be equally difficult.

  • Feedback Loops: The concept of feedback loops is discussed, particularly in relation to software engineering practices like red-green-refactor and A/B testing. However, Shay highlights the limitations of this model when applied to wider career-related challenges.

  • Wicked Environments: Shay introduces the idea of wicked learning environments, where the mismatch between learning settings and real-world applications can lead to ineffective learning. He differentiates between kind environments (where practice and application align) and wicked environments (where they do not) using the hiring process as an example.

    • Interview Process: Shay discusses how interviews often fail to accurately predict job performance. He explores potential blind spots and the implications of mismatches between the learning and performance environments.

  • Broader Learning Paradigms: He reflects on how Ruby developers may miss out on learning from other methodologies. Although the Ruby community excels in certain areas, that focus can also limit exposure to diverse engineering practices.

  • Expert’s Curse: This concept outlines how useful techniques can become restrictive philosophies, leading to dogmatic thinking. Shay argues for a more flexible approach to problem-solving that avoids rigid lenses.

  • Collaboration for Better Decision-Making: Shay references Philip Tetlock’s research on forecasting and emphasizes that collaboration among diverse experts fosters better decision-making in uncertain environments.

Conclusions

Xavier Shay's talk culminates in the understanding that improving our ability to learn from experiences requires a recognition of the complexities involved. By exploring different models of learning, acknowledging our blind spots, and remaining open to diverse methodologies, we can enhance our learning processes and professional development.

Takeaway: To succeed in the evolving landscape of software engineering and beyond, it’s essential to embrace a variety of learning experiences and methodologies while being mindful of the limitations posed by our established practices.

My Experience With Experience
Xavier Shay • February 21, 2020 • Earth

Xavier Shay

Becoming a senior contributor to your organization takes years. It's a process that is stubbornly hard to accelerate - it takes much more than drilling code katas!

In this talk, Xavier reflects on situations he's encountered over the last decade and applies some academic models he's found useful to explain and learn from them. By doing so we can better understand the limits of learning and prepare ourselves to make the most of our experience.

Xavier recently moved back home to Melbourne after spending eight years in San Francisco, mostly as an engineering leader at Square. He currently works at Ferocia building Up, a fancy new digital bank. He's scheming to introduce the RubyConf 5K tradition to Australia.

Produced by NDV: https://youtube.com/channel/UCQ7dFBzZGlBvtU2hCecsBBg

#ruby #rubyconf #rubyconfau #rubyconf_au #rails #programming

Fri Feb 21 16:35:00 2020 at Plenary Room


00:00:02.810 Thank you so much, Dana! That was fantastic, and yes, you managed to fit so many pink slides! I loved it! Not that you can tell, I like pink. My mother is so disappointed that I love pink; she spent so many years trying to make sure her daughters weren't into pink.
00:00:08.130 She tried to hide it from us. She tried to do the same with sugar, but that didn't work. Now we are up to our final talk of the day, the last presentation of the conference. We've got Xavier Shay coming to talk to us. He works with the wonderful folks at Ferocia building Up, which you may have heard about; I've been talking about them, their great Mario Kart competition, and their excellent banking service over the last few days.
00:00:21.439 He's an active member of the open-source community and a member of the RSpec team. He previously worked on Bundler and is a contributor to Ruby on Rails. He recently moved back home to Melbourne after spending eight years in San Francisco, mostly as an engineering leader at Square. He blogs about code, books, and politics, but I don't think all at the same time. He's a runner, a swing dancer, a vegan, and a piano player. Please welcome, for our last presentation at the conference, Xavier!
00:01:16.230 Quick intro: I work at Ferocia, where I'm working on Up. You've probably seen this up at the back; we've been around a bit. If you scan the QR code here and haven't used it before, you'll get ten bucks, which is easy money as far as I'm concerned. Before Up, I was doing a lot of leadership coaching for executive teams, and before that, I was an engineering leader at Square.
00:01:31.650 Before that, I worked a lot with startups around Melbourne. My takeaway from all this is that learning from experience is hard. That's probably the most obvious statement you've heard today. However, I think about this a lot: how can we learn better?
00:01:39.120 From a personal perspective, how can I learn better? But also, in my role, I've had to coach many people who often ask, 'I want to get better. How do I get better?' They are usually very good at what they're doing, so it's unsatisfying to simply advise them to do more of it. I've thought a lot about this question, and I'll spoil the talk by telling you that I don't really have any great answers, but I wanted to share some of the things I've encountered, the things I think about, and what has actually helped me move through this problem.
00:02:02.280 To start with, yes, learning from experience is hard. But specifically, why is it hard? In what ways is it hard? I want to present a couple of different models for learning, environments, and decision-making, and see if we can glean anything from them to better understand what we're actually dealing with here. This is part one of the talk, addressing exactly why this kind of learning is challenging.
00:02:13.480 I'll start by talking about a concept called feedback loops. I'm sure you're familiar with feedback loops; these include things like red-green-refactor, review, CI/CD, and A/B testing. These are the nuts and bolts of software engineering, and I expect engineers to get these basic feedback loops down in a couple of years.
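As a concrete refresher, the red-green-refactor loop can be sketched in plain Ruby. This is my own illustration, not from the talk, using FizzBuzz as a stand-in exercise: write failing assertions first (red), make them pass with the simplest implementation (green), then reshape the code with the assertions as a safety net (refactor).

```ruby
# Red: write the assertions first; they fail until fizzbuzz exists.
# Green: the simplest implementation that makes them pass.
# Refactor: reshape the code freely, rerunning the assertions each time.

def fizzbuzz(n)
  return "FizzBuzz" if (n % 15).zero?
  return "Fizz" if (n % 3).zero?
  return "Buzz" if (n % 5).zero?
  n.to_s
end

raise "red!" unless fizzbuzz(3) == "Fizz"
raise "red!" unless fizzbuzz(5) == "Buzz"
raise "red!" unless fizzbuzz(15) == "FizzBuzz"
raise "red!" unless fizzbuzz(7) == "7"
puts "green"
```

This is a kind learning environment in miniature: each run of the assertions gives immediate, unambiguous feedback on the last change.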
00:02:31.650 However, I wanted to spend this talk discussing what comes after that. If we take the concept of feedback loops and look at the other tasks we need to do to be successful in our careers, we see that the concept doesn't necessarily apply well at all. For example, what about code organization? What does a feedback loop look like there? What about introducing new technology, or the features we work on, interviewing, hiring, and mentoring? Applying the feedback loop model here can feel unsatisfying.
00:02:50.320 For example, is having a shorter feedback loop better in these instances? It doesn't really fit. We can say that longer feedback loops are more ambiguous or harder to assess, which is true, but it's not particularly helpful. So, we can't just shrug our shoulders and say, 'Yep, feedback loops are hard.' I wanted to move beyond this and say that while the concept of feedback loops is useful in itself, it isn't enough for many of the harsher problems I deal with.
00:03:10.780 So, let's look at something different. I want to try a model of wicked environments. I learned about wicked environments via a book called "Range" and a paper by Hogarth and friends that details two settings in which inference occurs. In the first, information is acquired, and this represents learning; in the second, it is applied – predictions or choices are made.
00:03:39.420 This essentially states that we learn, then use that knowledge to make predictions or decisions. These are two separate environments. When those environments align perfectly, we create what we call a kind learning environment. For example, in the feedback loops we discussed, if you're learning how to perform a red-green-refactor loop, you can practice it in exercises, and when it comes to actually doing it for real, it applies fairly similarly.
00:04:02.110 Similarly, games like chess enable practice and play to mirror each other consistently, hence creating a kind learning environment. These environments are generally easier to learn from because practice yields meaningful feedback.
00:04:32.920 Conversely, a wicked learning environment has mismatches, where what you're learning doesn't necessarily correspond with the problem you're trying to solve. It's interesting because you may not even realize the mismatch is occurring. The paper goes on to categorize how these mismatches can manifest.
00:04:49.110 To illustrate this, I’ll discuss different types of wicked learning environments using hiring as a concrete example. In this scenario, we have a learning environment defined by the interview process.
00:05:00.300 In this situation, I need to predict how well candidates will perform at work based on their interviews. If the learning environment aligns perfectly with the target environment, we would see a clear correlation.
00:05:36.820 However, we all know that interviewing is often imperfect; we don't see this exact line. So what happens when our learning environment is smaller than the target environment? In this case, using an interview process, we might not learn about candidates who don't pass the interview process. This creates a blind spot, meaning our predictions about candidate performance may not be accurate.
00:05:58.660 Alternatively, what if the target set is smaller than the learning set? In this case, a strong interview process may not reflect accurately on performance because it misses good applicants who decline to apply.
00:06:06.340 There can also be cases where both sets overlap but are distinctly different. For example, you might have candidates who pass the interview but shine due to an excellent mentoring program that skews your overall results.
00:06:33.490 There might be individuals who may not perform well based on the interview but excel at their roles due to a supportive environment. This complexity is captured by a famous exchange with W. Edwards Deming, the management theorist. Asked, 'What should we do with the dead weight in our organization?', he replied, 'It depends: were they dead when you hired them, or did you kill them?'
00:07:06.690 This highlights the challenges when your learning environment doesn't match up with your target environment, leading to a lack of correlation between tests and performance, something you want to avoid. The interesting point here is that the model itself isn't quantifiable. You can't simply state that they intersect at 20% or whatever. Instead, the utility lies in listening to these mismatches and reflecting on which case you are dealing with.
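The blind spot can be made concrete with a toy simulation (my own illustration, not from the talk). Suppose interview score and job performance are both noisy readings of an underlying skill, so they genuinely correlate across all candidates. Because we only observe performance for the people we hire, and we hire the top scorers, the observed correlation shrinks: this is the statistical effect known as range restriction.

```ruby
# Toy model: interview score and job performance are each
# skill + independent noise, so they correlate in the full population.
srand(42)

def pearson(xs, ys)
  n   = xs.size.to_f
  mx  = xs.sum / n
  my  = ys.sum / n
  cov = xs.zip(ys).sum { |x, y| (x - mx) * (y - my) }
  sx  = Math.sqrt(xs.sum { |x| (x - mx)**2 })
  sy  = Math.sqrt(ys.sum { |y| (y - my)**2 })
  cov / (sx * sy)
end

candidates = Array.new(5000) do
  skill = rand
  { interview: skill + (rand - 0.5), performance: skill + (rand - 0.5) }
end

full = pearson(candidates.map { |c| c[:interview] },
               candidates.map { |c| c[:performance] })

# We only ever observe performance for the top 10% we actually hire.
hired    = candidates.sort_by { |c| -c[:interview] }.first(500)
observed = pearson(hired.map { |c| c[:interview] },
                   hired.map { |c| c[:performance] })

puts format("correlation, everyone:   %.2f", full)
puts format("correlation, hires only: %.2f", observed)
# The hires-only correlation is markedly weaker than the population
# correlation, even though the interview is genuinely informative.
```

The point matches the talk's warning: looking only at the data your process lets you see can convince you the process is worse (or better) than it really is.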
00:07:49.000 From a technical, mathematical perspective, we see that missing data can present major challenges. There’s nothing concrete we can do about it. However, with interviewing, we actually understand the domain better, which gives rise to potential corrective measures.
00:08:02.110 For example, if we want to improve our blind spot in interviews with candidates that didn’t get hired, we can maintain relationships with these candidates and share our assessment from the interview process. I’ve personally seen cases where I believed the candidate would succeed in the future, so I kept in touch only to see them thrive down the line.
00:08:37.079 We might also consider taking a chance on those whom we're unsure about. If you have the luxury of being able to say, 'Okay, let's try this out,' you can potentially learn from these 'edge' cases and often be pleasantly surprised.
00:09:18.630 Additionally, we can also be explicit about what we screen candidates for and experiment with our screening methods. For example, if I’m hiring for curiosity as an attribute, I might create a question like, ‘Tell me about something you learned recently.’ This question might not guarantee a good interview, but it gives us the chance to see how people respond.
00:09:43.660 With a more extensive pipeline, we can create a feedback loop that helps us understand how our questions correlate to performance, thereby allowing us to tackle the structural problem of interviewing more effectively.
00:10:06.790 Alright, this is the concept of wicked environments, which I thought was interesting. Now let’s apply this to our work with Ruby. In our jobs, most of us use Ruby and had to make architectural decisions about how to write code. When I think about the learning environment as Ruby itself and the performance environment as our success, I'm struck by things we may or may not learn due to our Ruby community focus.
00:10:54.690 For example, Ruby developers typically lean towards processes over threads for historical reasons. We also tend to avoid certain database practices, and very few experience working with stored procedures.
00:11:14.150 Even with test-driven development (TDD), we are generally proponents of testing, but have you considered how what we focus on limits our exposure to other methodologies? For instance, property testing is much bigger in more strongly typed functional languages.
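Property testing, mentioned above, can even be hand-rolled in a few lines of Ruby (this sketch is my own, not from the talk; mature libraries like Haskell's QuickCheck do far more, such as shrinking counterexamples). Instead of fixed examples, you assert an invariant over many random inputs.

```ruby
# A minimal property checker: run a block against many random integer
# arrays and fail loudly on the first counterexample.
def check_property(trials: 200)
  trials.times do
    input = Array.new(rand(0..20)) { rand(-1000..1000) }
    raise "property failed for #{input.inspect}" unless yield(input)
  end
  true
end

# Property: sorting is idempotent and preserves length.
check_property do |xs|
  sorted = xs.sort
  sorted.sort == sorted && sorted.size == xs.size
end

# Property: reversing twice is the identity.
check_property { |xs| xs.reverse.reverse == xs }
```

The shift in mindset is the interesting part: you state what must always hold rather than enumerating cases you happened to think of.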
00:11:33.520 Similarly, concepts like dependency injection in many modern Java projects or Command Query Responsibility Segregation (CQRS) aren't highlighted as much within Ruby. The challenge here is recognizing that while we excel in our community focus, we often miss out on broader software engineering methodologies. Thinking about this idea of wicked environments, my instinct is to expand my 'blue circle' of learning to better align with the performance goals.
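Dependency injection, as mentioned, translates naturally to Ruby even though the community rarely reaches for it by name. A minimal constructor-injection sketch (all class names here are illustrative, not from the talk):

```ruby
# The service receives its collaborator instead of hard-coding it,
# so a test can substitute a fake with no stubbing framework.
class SmtpMailer
  def deliver(to:, body:)
    puts "sending #{body.inspect} to #{to}"
  end
end

class FakeMailer
  attr_reader :sent

  def initialize
    @sent = []
  end

  def deliver(to:, body:)
    @sent << { to: to, body: body }
  end
end

class WelcomeService
  def initialize(mailer:)
    @mailer = mailer
  end

  def welcome(email)
    @mailer.deliver(to: email, body: "Welcome aboard!")
  end
end

# In production you would inject SmtpMailer; in a test, inject the
# fake and inspect what it recorded.
mailer = FakeMailer.new
WelcomeService.new(mailer: mailer).welcome("ada@example.com")
raise unless mailer.sent == [{ to: "ada@example.com", body: "Welcome aboard!" }]
```

Nothing here requires a framework; the technique is simply passing collaborators in rather than constructing them internally.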
00:12:06.190 But as I pondered more about it, I realized the idea that my blue circle would always expand over time didn’t ring true. I get better at solving problems with the tools I have, yet remain largely unaware or quite separate from other methods that exist.
00:12:17.770 Next, we navigate to the expert's curse, which I think has much to teach us about how we approach these topics. The expert's curse describes when a useful technique transforms into a philosophy. It occurs when a tool designed for problem-solving becomes a lens through which we assess all problems.
00:12:55.280 For instance, we might view everything through the TDD lens, automatically judging code as bad if it lacks tests. This approach reinforces dogmatic thinking rather than critical analysis of new situations. This not only leads to poor architectural decisions but blinds us to alternate solutions.
00:13:25.390 Next, there's the science of forecasting, based on making better decisions in wicked environments. Philip Tetlock is a leading author in this realm, investigating how to navigate unknown settings.
00:13:38.280 What I learned is that collaboration among diverse experts can yield better decision-making than allowing a singular philosophical lens to dominate.
00:14:00.460 This resonates with a truth in the industry where overconfidence can lead us to poor decisions. I'll share this quote from 'Range': 'Experts viewed every world event through their preferred keyhole. Doing this made it easy to craft compelling stories about anything that occurred, told with adamant authority.'