Title
Determining Ruby Process Counts: Theory and Practice
Description
You have a Ruby web application or service that receives a certain number of requests per minute. How do you know how many Ruby processes you will need to serve that load? The answer is actually very complex, and getting it wrong can cost you a lot of money! In this talk, we'll go through the mathematics and theory of queues, and then apply them to the configuration and provisioning of Ruby services (web, background job, and otherwise). Finally, we'll discuss the application of this theory in the real world, including multi-threading and the GVL, "autoscaling", containers and deployment processes, and how Guilds may impact this process in the future.

RubyKaigi 2019: https://rubykaigi.org/2019/presentations/nateberkopec.html#apr18
Date
April 18, 2019
Summary
The video titled 'Determining Ruby Process Counts: Theory and Practice' by Nate Berkopec, presented at RubyKaigi 2019, focuses on the complexities of determining the number of Ruby processes necessary to efficiently handle web application requests. The talk is particularly beneficial for developers managing dyno sliders on Heroku and EC2 instances on AWS, aiming for optimal horizontal scaling and provisioning of Ruby applications.

Key points discussed include:

- **Understanding the Need for Scaled Resources**: Many developers estimate server needs based on hunches rather than data, often resulting in over-scaling and unnecessary costs.
- **Application of Little's Law**: The talk introduces Little's Law, which relates the number of busy processes to request arrival rates and response times. This law is central to understanding system efficiency (a worked sketch follows after this summary).
- **Examples to Illustrate Capacity Planning**: Nate explains how to apply these theoretical principles to real-world cases. He uses hypothetical data to determine process needs for a Ruby application handling 120 requests per minute and discusses the significance of proper metrics.
- **Queue Management**: He emphasizes that reducing request waiting times is crucial for effective scaling and highlights the importance of monitoring request queue times instead of relying solely on response times, which can be misleading during traffic spikes (see the middleware sketch below).
- **Real-World Case Studies**: Nate references metrics from companies like Twitter and Shopify to exemplify optimal process configurations. For instance, he notes that Twitter efficiently processed 600 requests per second with 180 application processes, indicating a well-tuned setup.
- **Recommendations for Scaling**: Maintaining roughly 25% operational capacity relative to theoretical processing capability is advised, as well as considering core availability and preventing background jobs from congesting web processes.
- **Future Considerations**: The discussion also touches on Ruby's threading limitations and the implications of future initiatives like Guilds and autoscaling, which may allow for more streamlined process management.

In conclusion, by applying Little's Law to determine Ruby process needs, developers can significantly reduce costs and avoid over-provisioning hardware while ensuring efficient application performance. Nate invites further inquiries and emphasizes ongoing learning through practical measurement in live environments.
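To make the Little's Law bullet concrete, here is a minimal Ruby sketch of the sizing arithmetic. The 0.3 s average response time, and the reading of the "25% capacity" guideline as "provision roughly four times the theoretical minimum", are illustrative assumptions, not figures taken from the talk.

```ruby
# Little's Law: L = lambda * W
#   L      = average number of requests in flight (roughly, busy processes needed)
#   lambda = arrival rate in requests per second
#   W      = average time a request spends in the system, in seconds

def processes_needed(arrival_rate_rps:, avg_response_time_s:, target_utilization: 0.25)
  in_flight = arrival_rate_rps * avg_response_time_s # L = lambda * W
  # Running at ~25% of theoretical capacity means provisioning ~4x the minimum.
  (in_flight / target_utilization).ceil
end

# Hypothetical app from the summary: 120 requests/minute.
# The 0.3 s average response time is an assumed figure for illustration.
puts processes_needed(arrival_rate_rps: 120 / 60.0, avg_response_time_s: 0.3)
# => 3  (2 req/s * 0.3 s = 0.6 requests in flight; 0.6 / 0.25 = 2.4, rounded up)

# Sanity-checking the Twitter figure cited above: 180 processes serving
# 600 req/s implies W = 180 / 600 = 0.3 s average time in the system.
puts 180.0 / 600 # => 0.3
```

The same formula run in reverse, as in the last two lines, is a quick way to check whether a published "processes vs. throughput" figure is internally consistent.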
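The queue-management bullet recommends watching request queue time rather than response time. One common way to measure it is a small Rack middleware that compares the timestamp the fronting router stamped on the request with the time the application process actually picked it up. The `X-Request-Start` header name, the optional `t=` prefix, and the millisecond unit are assumptions about the router in front of the app (setups differ), so treat this as a sketch to adapt rather than a drop-in implementation.

```ruby
# Minimal Rack middleware sketch for logging request queue time.
class QueueTimeLogger
  def initialize(app, io: $stdout)
    @app = app
    @io = io
  end

  def call(env)
    if (header = env["HTTP_X_REQUEST_START"])
      # Strip an optional "t=" prefix and treat the value as milliseconds
      # since the Unix epoch; verify the exact format your router sends.
      start_ms = header.delete_prefix("t=").to_f
      queue_ms = (Time.now.to_f * 1000) - start_ms
      @io.puts("request queue time: #{queue_ms.round(1)} ms") if queue_ms.positive?
    end
    @app.call(env)
  end
end

# config.ru (usage sketch):
#   use QueueTimeLogger
#   run MyApp
```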