Panel: Performance... performance

by Sam Saffron, Richard Schneeman, Eileen M. Uchitelle, Nate Berkopec, and Rafael Mendonça França

The video features a panel discussion titled "Performance... performance" from RailsConf 2017, moderated by Sam Saffron and including notable speakers Richard Schneeman, Nate Berkopec, Eileen Uchitelle, and Rafael França. The panelists explore various aspects of application performance, focusing on how developers can enhance the speed and efficiency of their Rails applications.

Key points discussed during the panel include:

  • Importance of Performance: The panelists underscore that performance is critical for user experience and business success, especially in e-commerce.
  • Getting Started in Performance Work: They advise newcomers to engage with performance by observing existing metrics, practicing optimizations, and collaborating with the community.
  • Ruby 2.4 and Garbage Collection: There is a discussion on whether tuning the Ruby garbage collector is still necessary. Panelists highlight the importance of measuring performance first, suggesting that changes should be based on measurable outcomes rather than assumptions.
  • User Experience Impact: The definition of fantastic performance goes beyond server-side metrics; it must consider user experience across different networks and devices. Variance in request handling is also emphasized, as it can affect user perception significantly.
  • Focus on Slow Endpoints: The panelists recommend prioritizing optimization efforts on the slowest endpoints, as fixing these can lead to the most substantial performance improvements.
  • Community Engagement: They encourage contributing optimizations back to the Rails framework if multiple developers are experiencing similar issues, enhancing collective performance.
  • Managing Performance Regressions: After framework upgrades, it’s essential to monitor performance to quickly identify and address regressions, utilizing effective monitoring tools.

In conclusion, the panel highlights the importance of thorough measurement before and after optimizations, focusing on community efforts to improve performance collectively, and ensuring applications maintain effectiveness after upgrades. The session provides valuable insights for developers looking to enhance the performance of their Rails applications.

00:00:11.809 Welcome everyone! We have this amazing panel on performance. I'm going to introduce Sam, who's the moderator. Sam Saffron is a co-founder of Discourse and the creator of several gems, including Mini Profiler, Memory Profiler, and Mini Racer. He has written extensively about various performance topics on his blog, samsaffron.com. Sam loves making sure Discourse keeps running fast. Enjoy!
00:00:25.019 Hey! Is everybody still awake? I'm just going to have the panelists introduce themselves. I have a bunch of questions prepared, but we're also accepting questions through the chat, so please log in and type your questions. I'll triage everything as we go along. If anything interesting pops up, or if there's any technical issue, we can revert to the traditional format.
00:00:51.629 Let's start! My name is Nate Berkopec. I'm an independent performance consultant. I work on people's Rails applications to make them faster, and I blog about it online at speedshop.co.
00:01:10.320 I am Rafael França. I work at Shopify as a Production Engineer; my job is to make sure Shopify runs well, with performance as a primary consideration. I'm also a member of the Rails core team.
00:01:35.610 Hi, I'm Eileen Uchitelle, and I work at GitHub. My job sometimes involves performance, but not always. However, I have given a few talks on performance topics, especially on how to speed up integration tests and other aspects in Rails. I'm also on the Rails core team with Rafael.
00:02:00.240 Hi, my name is Richard Schneeman, but I go by schneems on the internet. I work for a startup in San Francisco called Heroku; they're quite up-and-coming, and some of you may have experienced issues with them recently. My role involves performance work: when customers report issues, my job is to help make things faster.
00:02:21.320 I wrote a tool called Derailed Benchmarks, and I also blog about performance issues. Thanks for having me here!
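As a rough illustration of the workflow Derailed Benchmarks enables (the Gemfile entry and commands below follow the gem's documented usage as best I recall, so treat the exact task names as assumptions to verify against its README):

```ruby
# Gemfile: add the gem in development so its benchmark tasks are available.
gem "derailed_benchmarks", group: :development

# Then, from the shell:
#   bundle exec derailed bundle:mem     # memory each gem consumes at require time
#   bundle exec derailed exec perf:test # boot the app locally and hit it with repeated requests
```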
00:02:35.000 To kick off the discussion, I'd like to talk about how newcomers can get involved in performance work. If I have never done anything publicly before and have no experience in performance work, where should I start? Let's go down the line.
00:03:02.270 I started in performance work when I worked at an e-commerce company. It was clear to me then, as it is now, that fast sites make money and enhance user experience. I've always had a customer-focused view where the speed of a website is a primary factor in user interaction. I wanted customers to buy more, and as I dived deeper, I found that fixing discrete problems was highly rewarding; for example, if a page used to take ten seconds and I reduced it to one second, it felt like a great achievement.
00:03:20.239 Working at Heroku, one thing I do is when tickets come in for Ruby performance issues that cannot be handled by our support staff, they escalate to me. People often mention Heroku not providing the performance they expected; I see those performance issues as bugs. When someone has a bug, they might blame Heroku, but 99% of the time, unless there is an ongoing outage, it's not our fault.
00:03:37.490 I spend a fair amount of time helping customers find the right solutions, but I also have to prove that it's often an issue with their application, rather than Heroku itself. The troubles I experienced that led to the development of Derailed Benchmarks arose from performance issues I couldn't debug on Heroku. Before we had tools to access metrics and performance monitoring directly from dynos, I had to reproduce issues locally. If a performance problem surfaced there too, I could demonstrate it wasn't about Heroku.
00:04:13.010 Along the way, I met others who care about performance in the community. People regularly reach out to me, asking about certain behaviors in their apps. It's akin to someone saying a cookie went missing from the cookie jar, and I have to investigate thoroughly.
00:04:48.320 Is performance work only for advanced developers? It certainly helps to be experienced, but curiosity is essential. Performance work connects to numerous layers of the architecture, many of which you may not have encountered before. If a web request takes five seconds, you need to assess multiple layers including the network, application server, application code, Ruby VM, and even the system kernel.
00:05:17.460 I would advise that if you want to engage in performance work, you simply need to start doing it. Writers write; similarly, performance engineers need to work to understand performance. If you’re unsure where to begin, observe what others are doing with performance, the tools they're leveraging, and potential spots for optimization.
00:06:24.560 The chat room is bustling with questions, so I will pivot to those. One question is: 'Is tuning the Ruby garbage collector still needed in version 2.x?' My response is that this is a trickier question than it seems; the real answer is that you should be measuring performance first.
00:06:48.000 If your measurement shows that adjusting certain settings leads to improved performance, then, by all means, do it. However, blindly adjusting garbage collection settings can lead to performance degradation if you don't know what the impact is.
00:07:09.830 At Discourse, we have made adjustments to our garbage collection settings based on feedback from monitoring data. You asked whether I regret publishing a blog post listing all the GC settings we use. My answer is no; I believe in sharing information.
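A minimal sketch of the "measure first" approach to GC tuning: the GC.stat keys shown exist in Ruby 2.x (exact names vary by version), and the environment variables at the end are only examples of knobs to try after measuring, not a recommendation.

```ruby
# Snapshot GC statistics around the code path you care about before touching any settings.
before = GC.stat
# ... exercise the workload, e.g. render a representative page many times ...
after = GC.stat

puts "minor GC runs: #{after[:minor_gc_count] - before[:minor_gc_count]}"
puts "major GC runs: #{after[:major_gc_count] - before[:major_gc_count]}"
puts "live heap slots: #{after[:heap_live_slots]}"

# Only with numbers like these in hand would you experiment with variables such as
# RUBY_GC_HEAP_GROWTH_FACTOR or RUBY_GC_HEAP_INIT_SLOTS, re-measuring after each change.
```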
00:07:48.510 When dealing with performance problems, many developers jump to fixing issues without measuring first. You can't fix a performance issue without knowing its cause, much like you can't make an error go away just by wrapping it in a rescue block. A past incident highlighted the necessity of measurement: even when a profiler indicated an issue was resolved, benchmarking revealed it was still slow, which is why pre- and post-optimization measurements are fundamental.
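For example, a before/after comparison can be as small as a benchmark-ips script; the two reports below are hypothetical stand-ins for the old and new implementation.

```ruby
require "benchmark/ips"

Benchmark.ips do |x|
  x.report("before") { (1..1_000).map(&:to_s).join(",") } # original code path
  x.report("after")  { (1..1_000).to_a.join(",") }        # proposed optimization
  x.compare! # prints iterations per second and the relative speedup
end
```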
00:08:23.150 I believe performance issues are often tied to understanding how allocation levels change as the app runs and how garbage collection behaves. You should adjust settings based on measurable outcomes.
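A small sketch of measuring allocations with the memory_profiler gem mentioned in the introduction; the block being profiled is just a placeholder workload.

```ruby
require "memory_profiler"

report = MemoryProfiler.report do
  10_000.times { "user-#{rand(1_000)}".upcase } # stand-in for the code path under test
end

report.pretty_print # totals plus allocations grouped by gem, file, and class
```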
00:08:40.180 Another interesting topic that comes up is understanding fantastic performance: what does it mean? It involves looking at the entire experience, not just server-side metrics. From user experience to handling JavaScript and making sure your site loads quickly on various networks, all of these contribute to what fantastic performance looks like. We can't just assume our server-side performance metrics indicate user satisfaction.
00:09:09.580 Additionally, I often think about variance in request handling. If most requests clock in at ten milliseconds, but one request lingers around for ten seconds, this high-latency request significantly impacts user perception and load times.
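As a rough illustration, percentiles make that variance visible where an average hides it; the timings below are made-up numbers.

```ruby
# Hypothetical response times pulled from logs or a monitoring tool, in milliseconds.
timings_ms = [10, 11, 9, 12, 10, 11, 10_000]

def percentile(values, pct)
  sorted = values.sort
  sorted[((pct / 100.0) * (sorted.length - 1)).round]
end

puts "p50: #{percentile(timings_ms, 50)} ms" # ~10 ms, what most users see
puts "p99: #{percentile(timings_ms, 99)} ms" # dominated by the ten-second outlier
```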
00:09:26.960 I generally prefer to focus on the slowest endpoints as these are the ones that, when optimized, yield the most significant performance benefits. In balancing performance work, it’s essential to prioritize.
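One hedged example of tooling for finding those endpoints is rack-mini-profiler, one of the moderator's gems; the setup below reflects its usual Gemfile-based installation.

```ruby
# Gemfile
gem "rack-mini-profiler"

# In development, each page then shows a timing badge with a breakdown of SQL and
# rendering time, which helps rank endpoints by where the real cost lives.
```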
00:09:43.300 Sometimes, I find myself engaging in preemptive optimizations even before realizing if they are needed. This can result in unnecessary work, as well as getting caught up in trendy optimizations, rather than focusing on real performance issues.
00:10:10.600 If you're frequently optimizing code that others are likely encountering too, do push those changes upstream to Rails or relevant libraries. If multiple developers are facing the same issue repeatedly, uniform solutions will benefit everyone.
00:10:25.040 We should also talk about how to manage pull requests effectively. When submitting performance changes, providing detailed benchmarks, reproducible test cases, and a narrative around the improvement significantly increases the chances of the PR being accepted.
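A sketch of the kind of self-contained script often attached to such pull requests; the compared snippets are hypothetical, and the inline-Gemfile pattern is an assumption about how you might package the benchmark.

```ruby
require "bundler/inline"

gemfile(true) do
  source "https://rubygems.org"
  gem "benchmark-ips"
  gem "activesupport"
end

require "benchmark/ips"
require "active_support/core_ext/object/blank"

Benchmark.ips do |x|
  x.report("blank?") { "".blank? } # behavior under discussion in the hypothetical PR
  x.report("empty?") { "".empty? } # proposed faster alternative
  x.compare!
end
```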
00:10:41.560 If everyone in the community is encountering similar performance problems, putting forth actionable measures will give us a clearer path to enhance applications collectively.
00:11:00.440 To tackle major performance regressions following a framework upgrade, the first step should always be identifying the problem. Effective monitoring tools can highlight how endpoint timings change with each upgrade; if something regresses, we need to address it promptly.
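A minimal sketch of doing such monitoring inside the app itself, using Rails' own instrumentation; the initializer name and log format are arbitrary choices for illustration.

```ruby
# config/initializers/perf_logging.rb (hypothetical file)
ActiveSupport::Notifications.subscribe("process_action.action_controller") do |_name, start, finish, _id, payload|
  total_ms = (finish - start) * 1000
  Rails.logger.info(
    "perf endpoint=#{payload[:controller]}##{payload[:action]} " \
    "total=#{total_ms.round(1)}ms db=#{payload[:db_runtime].to_f.round(1)}ms " \
    "view=#{payload[:view_runtime].to_f.round(1)}ms"
  )
end
```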
00:11:15.950 So, to conclude this session, the overarching lessons revolve around measuring thoroughly before and after optimizations, focusing on a collective strategy towards performance improvements, and ensuring that everyone's applications maintain effectiveness post-upgrade through collaborative efforts.
00:11:34.570 Thank you all for joining our session today, and we hope this discussion has provided valuable insights into enhancing Rails performance for everyone involved in software development!