RubyKaigi 2017

Asynchronous and Non-Blocking IO with JRuby

RubyKaigi 2017
http://rubykaigi.org/2017/presentations/codefinger.html

Asynchronous and non-blocking IO yields higher throughput, lower resource usage, and more predictable behaviour under load. This programming model has become increasingly popular in recent years, but you don't need to use Node.js to see these benefits in your program. You can build asynchronous applications with JRuby. In this talk, we’ll look at libraries and patterns for doing high performance IO in Ruby.


00:00:00.000 Hello everyone. Today, we're going to discuss asynchronous and non-blocking I/O using JRuby.
00:00:08.790 My name is Joe Kutner, and I go by Codefinger across various online platforms, including IRC and Twitter.
00:00:14.910 I work at Heroku and have authored a couple of books, including 'The Healthy Programmer,' which has been translated into five languages, including Japanese.
00:00:20.909 Some people may know me for my opinions about beer, but today we're here to focus on JRuby and how to use it in some less common ways.
00:00:34.340 Before diving into the specifics, let's discuss how most people utilize JRuby or Ruby.
00:00:41.460 Typically, Ruby applications use Rails or Sinatra, where each request is handled by a single thread, and that thread is blocked for the entire time it services the request.
00:01:00.010 This means that if a request requires database work or external service interaction, the thread handling that request will block and wait during that time. We call this a blocking wait, which is inefficient because each thread consumes resources.
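(As a rough illustration, and not code from the talk: this is the familiar blocking style in a Sinatra app. The /search route and the example.com URL are placeholders.)

    require 'sinatra'
    require 'net/http'

    get '/search' do
      # The request thread blocks right here until the external service
      # responds; it can do no other work in the meantime.
      body = Net::HTTP.get(URI('https://api.example.com/items?q=ruby'))
      "Got #{body.length} bytes"
    end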
00:01:20.909 To scale a system like this, you have to add more threads, which consumes more memory and puts more strain on the CPU, driving up resource usage and cost.
00:01:40.050 Today, we will talk about asynchronous wait, which allows us to handle a request while performing I/O operations, freeing the thread to do other work and enabling a single thread to handle multiple requests simultaneously.
00:02:07.979 In this asynchronous and non-blocking model, we save money as each request uses fewer resources, allowing multiple requests to share a single thread and ultimately reducing our server costs.
00:02:27.040 Major companies such as Apple, Google, and Twitter use the technology I'll discuss today: the Netty framework.
00:02:40.060 Netty is a Java framework designed for high-performance non-blocking and asynchronous I/O operations, supporting both servers and clients. Apple is one of its largest users, operating 400,000 instances in production and handling tens of millions of requests per second.
00:03:00.430 This impressive scale showcases the benefits of Netty, but it operates at a fairly low level and is best suited to specialized use cases. Instead, we will be using a higher-level framework called Ratpack.
00:03:17.150 Ratpack is a high-performance micro framework similar to Sinatra, enabling the creation of web applications on top of Netty. Ratpack uses an event loop provided by Netty, allowing it to handle requests more efficiently.
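(For orientation, here is a minimal sketch of bootstrapping Ratpack from JRuby via Java interop. It assumes the Ratpack jars are already on the classpath and relies on JRuby converting Ruby blocks to Ratpack's single-method Java interfaces; it is not the exact code from the talk.)

    require 'java'
    java_import 'ratpack.server.RatpackServer'

    RatpackServer.start do |spec|
      spec.handlers do |chain|
        # Handlers run on Netty event-loop threads, so they must not block.
        chain.get do |ctx|
          ctx.render('Hello from JRuby on Ratpack')
        end
      end
    end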
00:03:40.320 Unlike traditional approaches where each request is confined to a particular blocking thread, Ratpack utilizes an event loop that processes requests asynchronously, freeing up the handling thread for other work.
00:04:05.790 This results in a more efficient use of resources and allows for handling multiple requests concurrently. It introduces concepts like event emitters, which generate events that are processed in a serial manner.
00:04:20.340 The core construct in Ratpack is the promise, which represents a unit of work that can be completed at a later time. Promises help manage asynchronous work, allowing us to encapsulate I/O operations and return a result once they are completed.
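(A sketch of the promise idea from JRuby, meant to run inside a Ratpack execution such as a request handler. The value and the work are placeholders, but Promise.async, success, and then are part of Ratpack's Java API.)

    java_import 'ratpack.exec.Promise'

    promise = Promise.async do |downstream|
      # Some asynchronous operation eventually calls back with a result.
      downstream.success('result of some I/O')
    end

    # Nothing has run yet; the block below executes when the promised
    # work completes, back on the event loop.
    promise.then do |value|
      puts "got: #{value}"
    end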
00:04:54.680 I'll provide an example application to demonstrate how Ratpack works. This simple application takes requests from clients and processes search terms by making requests to the eBay API. It showcases various methods of making requests, both blocking and non-blocking.
00:05:55.540 One method is blocking, where each search term waits sequentially for its response. The other shows an asynchronous approach, freeing the thread to handle other requests at the same time.
00:06:04.140 In the blocking example, we iterate through the search terms, making synchronous requests to the eBay API, causing the request thread to block while waiting for each response. In contrast, the asynchronous example allows the request thread to remain active.
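(Both styles, sketched in JRuby against a placeholder URL rather than the real eBay API. These handlers would sit inside the spec.handlers block from the earlier sketch; Blocking.get and Ratpack's non-blocking HttpClient are the relevant pieces.)

    require 'net/http'
    java_import 'ratpack.exec.Blocking'
    java_import 'ratpack.http.client.HttpClient'

    # Blocking style: hand the slow call to Ratpack's blocking thread pool
    # so the event-loop thread is released while it runs.
    chain.get('blocking') do |ctx|
      promise = Blocking.get { Net::HTTP.get(URI('https://api.example.com/search?q=ruby')) }
      promise.then { |body| ctx.render(body) }
    end

    # Non-blocking style: Ratpack's HttpClient returns a promise, and the
    # request thread stays free while the response is in flight.
    chain.get('async') do |ctx|
      http = ctx.get(HttpClient.java_class)
      http.get(java.net.URI.new('https://api.example.com/search?q=ruby')).then do |response|
        ctx.render(response.body.text)
      end
    end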
00:07:05.000 In this case, while we are waiting for the eBay API to respond, the thread can handle other tasks, further optimizing resource usage. However, it is still a serial process, and while freeing the thread helps, we are limited by the time it takes for each of those responses to come back.
00:08:19.230 To optimize further, we can utilize multiple event loops, allowing us to handle I/O operations in parallel, minimizing waiting times. While this adds a bit of complexity, it allows for much greater efficiency.
00:09:40.680 I will demonstrate how to implement this parallel processing in code, using Ratpack's support for handling multiple promises and combining results to optimize request throughput.
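(One way to express that, sketched here rather than quoted from the talk, is Ratpack's ParallelBatch utility, available in recent Ratpack releases: one promise per search term, scheduled across event loops, combined into a single promise of results. The terms and the URL are placeholders.)

    java_import 'ratpack.exec.util.ParallelBatch'

    chain.get('parallel') do |ctx|
      http  = ctx.get(HttpClient.java_class)
      terms = %w[macbook iphone ipad]

      # One promise per search term; each can run on a different event loop.
      promises = terms.map do |term|
        http.get(java.net.URI.new("https://api.example.com/search?q=#{term}"))
            .map { |response| response.body.text }
      end

      # Combine them: yield produces a single promise for the list of results.
      ParallelBatch.of(promises).yield.then do |bodies|
        ctx.render(bodies.to_a.join("\n"))
      end
    end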
00:10:24.560 The result is that we can handle multiple requests across several threads, significantly reducing the overall time for request completion and improving our application's performance. The Ratpack framework allows us to write clean and manageable Ruby code that remains readable and efficient.
00:11:35.890 In conclusion, I've provided an overview of using asynchronous and non-blocking I/O in JRuby, specifically exploring frameworks like Netty and Ratpack. I hope you found it insightful and applicable to your own projects.
00:12:47.210 If you have any questions or would like to share your experiences with asynchronous programming or JRuby, feel free to reach out. Thank you for your time, and enjoy the rest of the conference!
00:14:06.440 Now, I'd like to open the floor to any questions or discussions regarding the topics covered. Your thoughts and experiences would be greatly appreciated.
00:14:37.260 Thank you again for joining me today, and thank you to Heroku for supporting this talk. I look forward to our conversation.