Summarized using AI

Your App Server Config is Wrong

Nate Berkopec • May 26, 2017 • Phoenix, AZ • Talk

In his RailsConf 2017 talk 'Your App Server Config is Wrong', Nate Berkopec emphasizes the importance of application server configuration in optimizing the performance of Ruby on Rails applications. The presentation covers aspects that developers often overlook when running application servers like Puma, Passenger, Unicorn, Thin, and Webrick, highlighting how misconfigurations can lead to poor resource utilization, high costs, and slow response times. Berkopec walks through a structured approach to server configuration, which includes:

  • Determining Worker Count: Assess how many concurrent requests need to be handled, using tools like Heroku’s dashboard to gauge request rates and average response times.
  • Memory Usage Estimation: Calculate memory demands per worker and avoid overloading the dyno. Misconfigured limits can lead to unnecessary memory spikes and restarts.
  • Container Sizing: Choose the correct dyno size based on performance requirements, understanding how memory usage behaves in Ruby applications.
  • Connection Limits: Be mindful of database connection limits while configuring worker threads to prevent bottlenecks and ensure efficiency.
  • Monitoring Performance: After deployment, continuously monitor application health indicators like CPU usage, memory consumption, and response time, and adjust the configuration as needed.

Berkopec illustrates these points with practical examples, including a case study of the Envato platform, showing how a poor server configuration led to significant underutilization of resources.

The key takeaways from Berkopec's presentation show that many performance issues in Rails applications can be resolved through thoughtful server configuration. By understanding and fine-tuning factors such as worker count, memory allocation, and connection limits, developers can deliver a vastly improved user experience and lower operational costs. The overarching conclusion encourages developers to invest time in server configuration, since it can lead to substantial performance gains without requiring extensive code optimizations.

RailsConf 2017: Your App Server Config is Wrong by Nate Berkopec

As developers we spend hours optimizing our code, often overlooking a simpler, more efficient way to make quick gains in performance: application server configuration.

Come learn about configuration failures that could be slowing down your Rails app. We’ll use tooling on Heroku to identify configuration that causes slower response times, increased timeouts, high server bills and unnecessary restarts.

You’ll be surprised by how much value you can deliver to your users by changing a few simple configuration settings.

RailsConf 2017

00:00:13.429 Thank you for coming! If you haven't seen our booth yet, we're doing some cool things.
00:00:20.640 You can vote for your favorite open-source project, and we're going to donate $500 to the project that wins.
00:00:27.060 There are also a few questions to answer, I think a total of four. Here's the QR code, but I don't actually expect you to scan it with your phone.
00:00:38.910 We're also doing a thing called 'Saru Zeroes.' While it's not happening anymore, I enjoyed the spirit of saying thanks to the people in the community who have helped you on your journey as a developer.
00:00:46.620 So we have these cool postcards at the booth where you can write your thanks, and you can either give it to the person if they're here at RailsConf, or you can post it on the whiteboards we have at the booth. We'll either tweet them or find a way to make that public.
00:01:06.869 Right after this talk, there will be a break. There will be a bunch of people from the Rails Core and contributor team at our booth during office hours.
00:01:13.500 If you have questions or want to meet folks like Aaron, Eileen, Rafael, or others, you can come and get those questions answered. I know a lot of people came by to try to get shirts, and we ran out within the first 30 minutes, maybe even less.
00:01:24.240 But we will have more shirts tomorrow. So if you stop by tomorrow, hopefully we'll have shirts for you. With that, I'll hand this over to Nate to take away the session.
00:01:41.700 Thank you! This is Heroku's sponsored talk slot; I don't know if this is on. I am not an employee of Heroku, but they were very nice to let me have this slot.
00:01:52.470 This talk is titled "Your App Server Config is Wrong." I'll be discussing application servers. When I say application servers, I mean servers like Puma, Passenger, Unicorn, Thin, and Webrick. These are all application servers that start and run Ruby applications.
00:02:22.470 First, a little about who I am. I am a skier and recently moved to Taos, New Mexico, just for the skiing. I also enjoy motorcycle riding and have ridden my motorcycle cross-country three times on dirt roads.
00:02:38.460 Here's a picture of my motorcycle taking a nap in the middle of nowhere in Nebraska. Additionally, I was on Shark Tank when I was 19 years old during the very first season.
00:02:51.810 One of my readers gave me this Shark Tank-related gift, which I enjoy very much. I also like to make spicy programming memes. Here's another spicy meme I created.
00:03:13.580 You may know me from my blog, where I write about Ruby performance topics, focusing on making Rails applications run faster. I also run a consultancy called Speed Shop, where I help people optimize their Ruby applications to be faster and use less memory.
00:03:38.510 I've written a book and course, "The Complete Guide to Rails Performance," about optimizing Rails applications. One common issue I encounter in client applications is an incorrect application server configuration. It's very easy to undermine your app's performance with a server configuration that isn't optimized.
00:04:20.130 Overprovisioning is another pitfall: you may end up paying for more dynos and resources than you actually need. It's quite easy to spend a lot of money on Heroku, which is great for them, because you can simply scale your way out of problems by dragging that dyno slider.
00:04:47.220 As a rule of thumb, if you're spending more dollars on Heroku each month than your app serves in requests per minute, you might be overprovisioned. You don't need to spend $5,000 a month on a 1,000 RPM app. The exception might be if you have an unusual or unique add-on.
00:05:04.130 Whether that changes the math is situational, but in most cases that's a good general guideline based on what I've observed.
00:05:15.060 Another issue that can arise with a misconfigured app server is overusing resources; you might be using a dyno size that is too small for your settings.
00:05:21.330 Let's define some terms. I use the word 'container' interchangeably with 'dyno' because that's what a dyno essentially is—a container within a larger server instance. Since this is a Heroku talk, I will be using their terminology.
00:05:42.530 In Puma, which I am a maintainer of, the term 'workers' is used. Other application servers like Passenger or Unicorn may refer to them differently, but the top three modern Ruby application servers implement a forking process model. This means they initialize the Rails app and call 'fork,' creating copies of that process, which we call workers.
00:06:47.390 One important configuration setting is determining how many processes will run per dyno. We all know what a thread is, but it's essential to differentiate between a process and a thread.
00:07:03.810 In regular Ruby (MRI), separate processes run independently, so multiple processes can handle multiple requests in parallel. Threads share the same memory, and because of the global VM lock they cannot execute Ruby code at the same time. Threads still give us useful concurrency, though, because Ruby releases the global VM lock while a thread is waiting on I/O, such as a database call.
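To make the thread-versus-process distinction concrete, here is a small illustration (not from the talk itself): the sleep call stands in for an I/O wait such as a database query, during which MRI releases the global VM lock, while the CPU-bound loop stays serialized by it.

    require "benchmark"

    # CPU-bound work: MRI's global VM lock serializes Ruby execution,
    # so two threads take roughly as long as doing the work twice.
    cpu_time = Benchmark.realtime do
      2.times.map { Thread.new { 3_000_000.times { Math.sqrt(42) } } }.each(&:join)
    end

    # Simulated I/O wait (a stand-in for a database call): sleep releases
    # the GVL, so two threads waiting one second each finish in about one second.
    io_time = Benchmark.realtime do
      2.times.map { Thread.new { sleep 1 } }.each(&:join)
    end

    puts format("CPU-bound: %.2fs, I/O wait: %.2fs", cpu_time, io_time)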
00:07:42.850 Here’s the general process we'll go through in this talk: First, we'll determine how many concurrent workers we need and the ideal number of requests to handle at once. We'll then assess the memory usage of each worker process and choose the appropriate container size.
00:09:31.230 Following that, we'll check our connection limits with the database to ensure we aren't exceeding them. Finally, we will deploy and monitor various metrics such as queue depths, response times, CPU usage, memory usage, restart frequency, and the number of timeouts.
00:10:52.370 This is a principle known as Little's Law, which comes from queueing theory. It relates the resources we need to the arrival rate of requests and the time each request spends in the system. At a high level, it says that the average number of requests in flight at any given moment equals the request arrival rate multiplied by the average response time.
00:11:52.330 For example, let's say we receive 115 requests per second with an average response time of 147 milliseconds. Multiplying those two values, 115 times 0.147 seconds, tells us roughly 17 requests are being handled at any given moment. Dividing that by the number of worker processes we're running tells us how effectively we're utilizing our resources.
00:12:50.370 I always recommend performing this calculation to find your effective worker count. Your request rate and average response time are readily available on the Heroku dashboard. Multiplying the Little's Law result by a factor of about five gives a rough estimate of how many processes you'll need, with enough headroom that you're not running at full capacity.
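A rough sketch of that calculation in Ruby, using the example numbers above (the factor of five is the headroom multiplier just mentioned; substitute your own figures from the Heroku dashboard):

    requests_per_second = 115
    avg_response_time   = 0.147 # seconds

    # Little's Law: requests in flight = arrival rate * average time in system
    in_flight = requests_per_second * avg_response_time  # ~16.9 requests

    # Apply the ~5x headroom factor mentioned above
    workers_needed = (in_flight * 5).ceil                 # ~85 worker processes in total

    puts "About #{in_flight.round(1)} requests in flight on average"
    puts "Roughly #{workers_needed} worker processes, with 5x headroom"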
00:13:54.160 After arriving at a process count, the next step is deciding how to distribute those processes across your dynos. You might have to determine whether you're better off using 1X, 2X, or performance dynos.
00:14:39.860 Common mistakes can occur with container sizes due to misinterpretations of Ruby's memory usage. Many assume applications should have a flat memory graph, yet in reality, Ruby applications tend to follow a logarithmic pattern.
00:15:12.840 During the startup period, memory ramps up as the application requires various components and builds caches. Memory usage will plateau but never completely flatten out. Because of this, Heroku typically restarts your dynos every 24 hours to prevent potentially unbounded memory usage.
00:16:02.020 When deploying an app, we need to ensure we're not using too much memory relative to the dyno size. It's important to tune the numbers down until you find the right balance; lowering your web concurrency (the worker count per dyno) is usually what stabilizes memory usage.
00:16:30.530 If you suspect your application has a memory leak, let your processes live beyond six hours before drawing conclusions. Because memory growth levels off over time, observing it over a longer window gives a much more accurate picture of your true average memory usage.
00:16:58.990 Step two is determining how much memory each worker process needs. Workers should fit comfortably in their assigned dyno; a good rule of thumb is to aim for around 80 percent of the dyno's total memory.
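As a back-of-the-envelope sketch of that rule (the dyno size and per-worker memory below are assumed example numbers, not figures from the talk; substitute your own from the Heroku metrics dashboard):

    dyno_ram_mb       = 1024 # e.g. a 2X dyno
    per_worker_rss_mb = 300  # steady-state memory of one worker after a few hours

    memory_budget_mb = dyno_ram_mb * 0.8                            # aim for ~80% utilization
    max_workers      = (memory_budget_mb / per_worker_rss_mb).floor

    puts "Memory budget: #{memory_budget_mb.round} MB"
    puts "Fits #{max_workers} workers of ~#{per_worker_rss_mb} MB each"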
00:17:48.470 Dyno types vary greatly in memory size and CPU count, so it's worth comparing the common production options. Performance dynos are notable for providing more stable, consistent performance because they aren't shared with other tenants, but the right choice depends on your specific application's needs.
00:18:41.420 For effective settings, size your application server to the number of workers you need without exceeding your memory or connection limits. A recommended practice is to run at least three to four processes per dyno so the load is balanced effectively.
00:19:24.580 It's also important not to restrict your process count to the number of cores; because requests spend much of their time waiting on I/O, applications can often benefit from running more processes than cores. Just be careful not to fragment memory by using excessive threads.
00:20:14.980 For Rails apps, thread counts should stay between three and five threads per process. That keeps database connection counts manageable and limits memory fragmentation.
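As a concrete sketch of those guidelines in Puma's configuration DSL (the defaults of three workers and five threads are just the example numbers from the talk, and WEB_CONCURRENCY / RAILS_MAX_THREADS are conventional environment variable names, not requirements):

    # config/puma.rb
    workers Integer(ENV.fetch("WEB_CONCURRENCY", 3))

    max_threads = Integer(ENV.fetch("RAILS_MAX_THREADS", 5))
    threads max_threads, max_threads

    # Boot the app before forking so workers share memory via copy-on-write
    preload_app!

    port ENV.fetch("PORT", 3000)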
00:21:03.880 Monitor your connections actively, especially database connections, where the limits are commonly reached quickly. You may need to provision additional connections if you use features that involve long wait periods, such as rack-timeout.
00:21:55.430 It's critical to monitor your app post-deployment. Watch memory usage patterns closely. If you notice spikes or degradation, it’s essential to conduct deeper analysis of controller actions and potentially optimize code logic.
00:22:48.210 Stay alert to connection limits. For example, if you have 20 app workers with five threads each, that's up to 100 database connections, so you can hit your limit very quickly. Calculate how many instances you can scale to before you reach your connection cap.
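A quick sketch of that arithmetic (the 400-connection limit is a hypothetical database plan limit, not a number from the talk; the worker and thread counts are the per-dyno values discussed earlier):

    workers_per_dyno    = 3
    threads_per_worker  = 5   # the Active Record pool size should be at least this
    db_connection_limit = 400 # hypothetical limit for your database plan

    connections_per_dyno = workers_per_dyno * threads_per_worker  # 15 connections
    max_web_dynos        = db_connection_limit / connections_per_dyno

    puts "Up to #{connections_per_dyno} database connections per dyno"
    puts "Roughly #{max_web_dynos} web dynos before hitting the connection cap"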
00:23:43.400 When all pieces are in place, routinely check your application’s queue times and response latency. Scaling up during peak usage can greatly enhance user experience.
00:24:43.460 Lastly, keep an eye on recurring timeouts. If your application routinely suffers from timeouts, consider adding workers or dynos to minimize those interruptions. Keep in mind that it's usually better for your app to time out than to hang indefinitely.
00:25:37.040 For fine-tuning your server, utilize performance monitoring tools effectively to identify bottlenecks in memory and execution speed, adjusting numbers as necessary for optimal results.
00:26:51.150 Don’t hesitate to explore multi-threading with Puma if you haven't already; start slow and incrementally test. Overall, successful Ruby application performance hinges on consistent tuning and monitoring.
00:28:01.210 Establishing an efficient, effective server architecture and settings based on your specific needs can vastly improve response times and overall performance. Keep tuning and tweaking parameters until you reach the stability you're after.
00:28:55.500 Lastly, always maintain a close eye on your application’s connection limits, adjusting as necessary to avoid problems arising from resource constraints. Thank you! That concludes my talk. If anyone has questions, I'm happy to answer them!