Asynchronous Programming

Summarized using AI

EventMachine

Aman Gupta • February 11, 2015 • Earth

The video features a talk by Aman Gupta at the MountainWest RubyConf 2010, focusing on EventMachine, a Ruby library designed for handling asynchronous I/O operations. Gupta, a maintainer of EventMachine, discusses its functionalities, underlying concepts, and practical applications for efficiently managing network I/O in Ruby applications.

Key Points Discussed:
- Introduction to EventMachine: EventMachine is an implementation of the Reactor pattern allowing for non-blocking I/O, supporting various Ruby VMs, including 1.8.7, 1.9, Rubinius, and JRuby.
- Production Use: Emphasized its stability and capabilities, managing thousands of concurrent connections within a single Ruby process, ideal for I/O-bound applications.
- Non-blocking I/O vs. Blocking I/O: Explained the difference between blocking and non-blocking I/O. The latter allows other tasks to process while waiting for I/O operations to complete, thus optimizing application performance.
- Reactor Pattern: Described the reactor model as a single-threaded event loop, stressing the importance of avoiding any blocking calls to keep the reactor responsive.
- Asynchronous Programming: Delved into asynchronous programming using callbacks instead of return values, which can complicate code but ultimately provides the scalability necessary for high-traffic applications.
- Timers and Event Handling: Discussed timer management, which benefits from not blocking the reactor. Timers can be one-shot or periodic, and their management is crucial for event-driven applications.
- EventMachine Features: Highlighted features like deferrables for encapsulating events, queues and channels, and subprocess management, benefiting multi-threaded work and external command execution.
- Practical Application - Chat Server: Gupta showcased a chat server implemented using EventMachine, illustrating many discussed principles. This server efficiently handled multiple connections and included integration with Twitter’s streaming API.

Conclusions: Gupta urged developers to adopt EventMachine for network I/O tasks to leverage its capabilities in building scalable and efficient applications. He reiterated the importance of keeping all operations non-blocking to sustain high throughput across many client connections, managing I/O prudently through EventMachine's APIs and tools.



MountainWest RubyConf 2010

00:00:15.519 This talk is about EventMachine. I'm going to cover a lot of code and various topics, so if you have questions at any point, feel free to ask. If something is unclear, just interrupt me and let me know. My name is Aman Gupta, and I live in San Francisco. I've been maintaining EventMachine for 18 months now and was responsible for the last four releases. Another release is coming up in about two weeks. I work on several other projects, mostly related to performance and debugging for MRI. You can follow me on GitHub and Twitter.
00:00:27.320 So what is EventMachine? I'm curious how many people have heard of EventMachine and how many have actually used it. EventMachine is an implementation of the Reactor pattern; it's similar to Python's Twisted project. Currently, EventMachine supports a variety of Ruby VMs, including 1.8.7, 1.9, and Rubinius. All three use the C++ reactor. We also support JRuby, which has its own Java reactor. Additionally, we have a simple pure Ruby version of the reactor that sort of works, but not many people use it, and it doesn't have all the features that the other reactors do.
00:01:11.200 A lot of people use EventMachine; it is in heavy production use in hundreds, if not thousands, of applications. EventMachine is definitely production tested and very stable. If you are trying to do some of the things we are about to discuss, EventMachine is something you should definitely consider. This talk focuses on I/O. For us, I/O usually refers to network I/O, which includes talking to network services like MySQL, HTTP, Memcached, and more. In the context of web applications, most web applications are I/O bound, not CPU bound. It is uncommon to write a Ruby web application that generates fractals, though we did see that today.
00:01:59.920 The basic idea behind EventMachine is that instead of waiting for I/O — for a MySQL response, for instance, or waiting for data to be returned from an HTTP request you made — you can use that time to do other tasks. We will dig into this a little more, but essentially, this is what EventMachine excels at: scaling I/O-heavy applications. In production, EventMachine can easily handle 5,000 to 10,000 concurrent connections with just a single Ruby process. This capability applies to any type of network I/O: whether you're sending emails, making HTTP requests, or writing your custom HTTP proxies. People have done all of those things, including writing proxies in front of MySQL or Memcached, often utilizing EventMachine for data access.
00:02:43.640 Now, let's talk about how you can perform I/O in Ruby with EventMachine. There is a range of APIs, including TCPSocket and TCPServer, which are quite standard. Here's a look at the class hierarchy of Ruby's I/O classes. You will notice there is a class called Socket that is actually not the superclass of TCPSocket; it is its own entity, which is a bit unusual. It should probably be renamed to RawSocket, as it provides raw access to the BSD socket API, similar to what you would use if you were writing C code for these tasks.
00:03:19.440 Creating a socket in Ruby at this level can require a lot of code: you create the socket, build a socket address, and tell the socket to connect to that address. This is not something you would typically do, but it closely mirrors what the equivalent C code looks like. Using the higher-level APIs, you can write a very simple server in just a few lines of code. You start by creating a TCPServer object that can accept new connections off its socket. When a new client connects, the server's accept method returns that client, which is its own TCPSocket for reading and writing data.
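A minimal sketch of that higher-level version, using only Ruby's standard socket library (the helper name and line-echo behavior are illustrative, not code from the talk):

```ruby
require 'socket'

# A minimal blocking line-echo server using Ruby's high-level socket API.
# accept and readline both block, so only one client is served at a time.
def run_blocking_echo(server, max_clients: 1)
  max_clients.times do
    client = server.accept          # blocks until a client connects
    line = client.readline          # blocks until a full line arrives
    client.write("echo: #{line}")   # write the response back
    client.close
  end
end
```

Passing port 0 to `TCPServer.new` picks a free ephemeral port, which makes the sketch easy to try locally.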
00:04:17.280 One important thing to note is that anytime you call a read function or even a write function on a socket, it will be a blocking call. What this means is that you can only handle one client at a time, because you will be sitting there waiting for the read operation to complete. If another client attempts to connect during that time, they will be unable to as you are still processing the first client. A typical solution to this problem is to implement a thread per client. It is straightforward to make that change: you wrap the server's accept in a thread so that every new client spawns a new thread.
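The thread-per-client change he describes can be sketched like this (again with stdlib sockets and an illustrative helper name):

```ruby
require 'socket'

# Thread-per-client: each accepted connection is handled in its own
# thread, so a slow or idle client no longer blocks new connections.
def run_threaded_echo(server, max_clients:)
  threads = []
  max_clients.times do
    client = server.accept
    threads << Thread.new(client) do |c|
      line = c.readline             # blocking, but only within this thread
      c.write("echo: #{line}")
      c.close
    end
  end
  threads.each(&:join)
end
```

With this version a second client gets served even while the first one is still sitting idle, which is exactly what the single-threaded version could not do.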
00:05:18.920 However, while this approach can work, it isn’t a great way to scale. What we want to discuss next is the alternative to this: non-blocking I/O. The basic idea behind non-blocking I/O is that instead of blocking the read call, you do not block at all. You may notice that this non-blocking version tends to be much longer and more complex. There are plenty of additional tasks to handle, such as keeping a list of clients and buffering data properly. In non-blocking I/O, you will pass this list into an API called IO.select. This method will watch those clients, and once any of them become readable or writable, it returns an array of those connections.
00:06:29.560 It may seem harder to read, but instead of calling read or readline, you call read_nonblock. Non-blocking versions of all the I/O functions are available; they return whatever data is available at that moment, up to a maximum size you specify, rather than waiting for a full line or a fixed amount. This makes buffering necessary: you cannot predict how much data you will get at once, so you must accumulate a full line before processing it.
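A rough stdlib sketch of that select-based loop, with per-client buffering of partial lines (the helper name and stopping condition are made up for illustration):

```ruby
require 'socket'

# A single-threaded server using IO.select and read_nonblock.
# Buffers partial data per client and only processes complete lines.
def run_select_echo(server, lines_to_serve:)
  clients = {}                    # socket => inbound buffer
  served = 0
  while served < lines_to_serve
    readable, = IO.select([server] + clients.keys)
    readable.each do |io|
      if io == server
        clients[server.accept] = String.new     # new client: empty buffer
      else
        begin
          clients[io] << io.read_nonblock(4096) # take whatever bytes are ready
        rescue EOFError
          clients.delete(io)
          next
        end
        # process only once a full line has been buffered
        while (line = clients[io].slice!(/\A.*\n/))
          io.write("echo: #{line}")
          served += 1
        end
      end
    end
  end
  clients.each_key(&:close)
end
```

Note how a line sent in two TCP chunks is handled correctly: the first fragment just sits in the buffer until the newline arrives.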
00:07:01.680 This is essentially what EventMachine does: it handles non-blocking I/O effectively. It takes care of much of the boilerplate and low-level details for you, allowing you to focus on writing logic; the code you write mainly handles, for example, parsing a line and responding to it. EventMachine also performs a lot of work behind the scenes to manage both inbound and outbound buffers for maximum throughput, using the most efficient system calls available. It also supports epoll and kqueue, which we'll discuss shortly.
00:08:04.680 Now, you might be wondering what exactly a reactor is. This term often confuses people. Simply put, a reactor is just a single-threaded event loop. It’s commonly referred to as the reactor loop. Below is some Ruby code illustrating a very simple reactor. As long as the reactor is active, you will keep iterating. Within this reactor, you can process expired timers and handle any new network I/O accordingly. The principle is that the code you write reacts to incoming events, which is why it's referred to as a reactor.
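A sketch in that spirit, in plain Ruby (all names here are illustrative, not EventMachine's internals):

```ruby
# A toy single-threaded reactor: one loop that fires expired timers.
# A real reactor would also dispatch ready network I/O each iteration.
class ToyReactor
  def initialize
    @timers  = []     # [fire_at, proc] pairs
    @running = false
  end

  def add_timer(seconds, &block)
    @timers << [Time.now + seconds, block]
  end

  def stop
    @running = false
  end

  def run
    @running = true
    while @running
      now = Time.now
      due, @timers = @timers.partition { |at, _| at <= now }
      due.each { |_, block| block.call }   # react to expired timers
      break if @timers.empty?              # nothing left to wait for
      sleep 0.001                          # stand-in for the I/O wait (select/epoll)
    end
  end
end
```

The handler blocks run inside the loop itself, which is why a slow handler delays everything else scheduled on the reactor.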
00:08:50.680 A crucial point to note is that if the code in your event handler takes too long, it delays when every other event gets processed: the longer your code runs, the longer other events wait for their turn. This brings us to an important lesson that we will revisit frequently: in a reactor-based system, you can never block the reactor. This means that many of the typical APIs you'd expect to be able to use as a developer cannot be used here. For example, you can't use 'sleep': a sleep call inside the loop will block that loop and prevent any other operations from occurring.
00:09:47.800 Similarly, if you frequently process a substantial amount of data, e.g., iterating over 100,000 items, this would potentially block other operations. Blocking I/O and polling functions can lead to similarly harmful delays; MySQL queries, for instance, often take a long time. If you're waiting on a five-second query, that’s five seconds where no one can connect to the server, disrupting necessary operations. To illustrate this again, if you are inside a reactor loop and invoke your own while loop within your processing code, it will prevent the outer loop from executing further.
00:10:55.920 Events in the reactor are managed asynchronously, and you will hear this term frequently when discussing EventMachine or reactor-based systems. To clarify the distinction, let's compare how synchronous Ruby code looks against asynchronous (evented) Ruby code. Synchronous Ruby code typically uses return values: when you call a function, it returns a value, which you can then manipulate. However, in asynchronous evented code, you cannot use return values. Instead, you utilize Ruby blocks. Rather than getting a return value directly, you pass in a block that will eventually be invoked at a later stage with the return value.
00:11:48.640 It's essential to understand that with asynchronous code, the block you pass does not execute immediately. Instead, it is stored and invoked later when the return value is available, and the timing of this is unpredictable. It's an essential detail that we will explore further. In the reactor model, you react to events, which is conceptually simple: rather than waiting for something to occur and then executing some code, you take the code you expect to execute, encapsulate it inside a proc, and invoke that proc whenever the event triggers, so nothing has to sit and wait.
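The difference can be shown with a toy example (FetchOp and its method names are made up for illustration; in a real reactor, `complete` would be called when the I/O finishes):

```ruby
# Synchronous style: the result comes back as a return value.
def fetch_sync
  42
end

# Evented style: no return value; the caller hands over a block that is
# stored and invoked later, once the result is ready.
class FetchOp
  def fetch_async(&block)
    @pending = block      # nothing runs yet; the block is just stored
  end

  def complete(result)    # the reactor would call this when data arrives
    @pending.call(result)
  end
end
```

The key observation is that after `fetch_async` returns, nothing has happened yet; the block only runs at some unpredictable later point.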
00:12:33.560 The complications emerge when we start working with events. While events as a concept are straightforward, evented code can become complex. To illustrate this, consider the following: the code on the left is executing three tasks sequentially. It retrieves a URL from a database, makes an HTTP request using that URL, collects the response, sends off an email with that response, and then prints something out. When we attempt to rewrite this in an asynchronous style, we quickly dive into nested blocks, which can be quite challenging to read. Another limitation is that with this approach, we lose the ability to handle exceptions directly with 'begin' and 'rescue'. Instead, you will typically have to pass in an additional block for error handling.
00:13:58.480 It falls to the developer to understand the trade-offs involved and choose the approach that best suits their application. There are three scenarios to consider. First, you may not need to scale: you don't expect much traffic, and your code works properly as it is. Second, there is a middle ground: your code can be adjusted to scale using a model like Unicorn, which forks multiple processes sharing a centralized accept queue. Finally, there are cases where a complete rewrite of your code to be asynchronous is necessary, especially if you require high levels of scalability.
00:14:46.920 Another possibility involves utilizing EventMachine with threads, which combines both strategies: you execute blocking code within threads while running all your asynchronous code within the reactor. There have been issues with this in the past, but most of those concerns have been resolved. Threading is still not the optimal solution, though, as it introduces unnecessary overhead per client. To get started with EventMachine, it is simply a gem that you install and require. EventMachine is compatible with various environments, including Rubinius and JRuby. It offers a wealth of APIs, and while I will breeze through many of them, I don't want you to get bogged down by details. The slides will be available for reference; they are already posted on the web.
00:15:43.680 One key point to keep in mind is to watch for common patterns; I will point out situations that occur repeatedly, many of which exist to avoid blocking the reactor. To run the reactor, you simply call em.run. This starts the while loop and takes a block that is executed as soon as the loop is operational. Keep in mind that em.run is itself a blocking call; any code after it will not run until the loop is stopped. After stopping the reactor, your code following that point will execute as expected.
00:16:45.760 Within the reactor, iterations are processed continuously. Various APIs exist to manage these iterations effectively, and one powerful API is called em.next_tick. This method queues a proc; you simply pass a block that ensures the code executes on the next iteration of the reactor loop. Breaking this down: a fundamental rule is to avoid blocking the reactor. If you are tasked with heavy processing, you need to divide that workload into smaller chunks to be processed over multiple iterations of the reactor.
00:17:33.760 You will notice a pattern where a proc self-schedules to be called in the future. For example, in a straightforward synchronous loop, you might process a thousand tasks in a single pass. In its asynchronous rewrite, you'll generate a proc that processes one chunk of work before scheduling itself after execution. You kick it off once using em.next_tick to start the process. It processes each chunk of work and schedules itself repeatedly as long as needed.
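The self-scheduling pattern can be sketched outside EventMachine with a toy tick queue (TickLoop is an illustrative stand-in, not an EM API):

```ruby
# A toy stand-in for the reactor's next_tick queue: blocks queued here
# run on later "iterations" of the loop.
class TickLoop
  def initialize
    @ticks = []
  end

  def next_tick(&block)
    @ticks << block       # run this block on a later iteration
  end

  def run
    @ticks.shift.call until @ticks.empty?
  end
end

# Process the items a chunk at a time, yielding the loop between chunks:
# the worker proc handles one chunk and then re-schedules itself.
def process_in_chunks(loop_, items, chunk_size, &handler)
  worker = proc do
    items.shift(chunk_size).each(&handler)
    loop_.next_tick(&worker) unless items.empty?  # self-schedule for later
  end
  loop_.next_tick(&worker)                        # kick it off once
end
```

Between chunks, the reactor is free to fire timers and service I/O, which is the whole point of splitting the work up.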
00:18:25.480 This is indeed a common pattern, and a simple wrapper helps to implement it. It's worth mentioning that the reactor loop iterates at a very high frequency; on my laptop, you may hit 30,000 iterations per second, so running any significant Ruby code on every iteration could overburden the CPU. In practice, next_tick is used more often for synchronization: scheduling code from other threads to run safely on the reactor's thread.
00:19:10.680 Now, let’s compare synchronous code and asynchronous code. When writing synchronously, you implement your iteration simply and wait for the block’s execution to finish before moving on to the next iteration. However, when dealing with asynchronous code, it is necessary to handle it differently. You may not know when the block finishes and thus must signal when you've completed the work. This means you explicitly convey your readiness for the next iteration. Additionally, there is a second argument allowing you to define the maximum level of concurrency; for example, if you pass an array of URLs, you could set a concurrency of 10, allowing 10 HTTP requests to process simultaneously while signaling the next iteration as each completes.
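The explicit-signaling iteration he describes, with a concurrency window, can be sketched like this (SimpleIterator is an illustrative stand-in for EventMachine's iterator, not its implementation):

```ruby
# Each job receives the item plus an `iter` handle and must call
# iter.next to signal completion; only then does the next item start.
class SimpleIterator
  def initialize(items, concurrency)
    @items = items.dup
    @concurrency = concurrency
  end

  def each(&job)
    @job = job
    # start at most `concurrency` jobs; more start only as others finish
    [@concurrency, @items.size].min.times { start_next }
  end

  def next              # a job calls this when its work is done
    start_next
  end

  private

  def start_next
    @job.call(@items.shift, self) unless @items.empty?
  end
end
```

With asynchronous work (say, HTTP requests), each job would call `iter.next` from its completion callback, keeping at most `concurrency` requests in flight.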
00:20:32.720 We touched on threading briefly and will go through this section quickly. If you need to mix threads with EventMachine, you can run the reactor inside its own thread. A common approach involves pausing the current thread until the reactor starts running. The em.schedule method is a simple way to ensure thread safety: it executes a block on the reactor's thread regardless of which thread calls it, whether you are running EventMachine inside a thread or calling into the reactor from your own threads.
00:21:34.720 Back to our central theme and its basic rule: don't block the reactor. This is crucial for using timers correctly. EventMachine provides two types of timers: one-shot timers and periodic timers. They are straightforward to use: you create a timer object and provide it a block, and it will invoke that block after the specified interval, or at every interval for a periodic timer. The important point is to avoid calling sleep; if you sleep inside a timer's block, you block the reactor and prevent any other timers or events from firing.
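The bookkeeping behind a periodic timer can be sketched outside the reactor (PeriodicTimer here is an illustrative stand-in; `check` simulates the reactor driving the timer with the current time each iteration):

```ruby
# A periodic timer as pure bookkeeping: the reactor calls check(now)
# every iteration, and the timer decides whether it is due.
class PeriodicTimer
  def initialize(interval, now: Time.now, &block)
    @interval  = interval
    @block     = block
    @cancelled = false
    @next_at   = now + interval
  end

  def cancel
    @cancelled = true       # cancelled timers are simply never fired again
  end

  def check(now)
    return if @cancelled || now < @next_at
    @next_at += @interval   # re-arm for the next interval: this makes it periodic
    @block.call
  end
end
```

Because firing is just a method call inside the loop, a block that sleeps would stall every other timer, which is the failure mode the talk warns about.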
00:22:36.120 The timer objects offer several APIs that allow you to manage and cancel timers effectively. We have also covered the concept of events within a reactor and how to manage those events through specific helpers. Callbacks can be specified in several syntactic forms: you can pass event handlers around as blocks, as proc objects, or as methods bound to specific objects.
00:23:33.680 An important feature in EventMachine is the deferrable, often confused with em.defer because of the name. A deferrable is a way to encapsulate an event as an object. This is powerful because it allows you to register callbacks for success or failure on a specific event, regardless of when it occurs; you can even attach a callback after the event has already been triggered and still be notified. The Deferrable module is easy to include in your own classes, allowing for simpler event tracking.
00:24:11.280 To use an example, consider an HTTP request that returns a defer object. If multiple interested parties want to perform actions when an HTTP request succeeds, you can easily add callbacks for each desired action, creating a reactive but non-blocking design. We also touched earlier on queues, which provide an asynchronous queuing mechanism that does not return a value directly; instead, it takes a block causing it to invoke the block once it has something to process. If the queue is empty, it stores the block until someone pushes an item into it.
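The deferrable idea, an event as an object whose callbacks fire even when attached late, can be sketched in a few lines (MiniDeferrable is illustrative, not EventMachine's implementation):

```ruby
# An event as an object: callbacks attached after the event has already
# succeeded still fire, immediately, with the stored result.
class MiniDeferrable
  def initialize
    @callbacks = []
    @errbacks  = []
    @state     = :pending
  end

  def callback(&block)
    @state == :succeeded ? block.call(*@result) : @callbacks << block
  end

  def errback(&block)
    @state == :failed ? block.call(*@result) : @errbacks << block
  end

  def succeed(*args)
    @state, @result = :succeeded, args
    @callbacks.each { |cb| cb.call(*args) }
  end

  def fail(*args)
    @state, @result = :failed, args
    @errbacks.each { |cb| cb.call(*args) }
  end
end
```

This is what makes the HTTP-request example work: several interested parties can each attach their own callback to the same request object, whenever they like.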
00:25:11.520 For the queue, you'll use a recursive proc approach, kicking off a worker that pops an item and then processes it before re-scheduling itself. Similar to the queue, EventMachine also features channels. The main difference is that channels allow multiple subscribers to receive messages. Subscribers can also unsubscribe or resubscribe as desired. Now let's delve into practical applications for EventMachine, beginning with subprocess management. You can execute external commands without blocking your application.
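Both the block-based queue and the multi-subscriber channel can be sketched with plain Ruby (class names are made up; real EventMachine queues and channels are reactor-aware):

```ruby
# Async queue: pop takes a block instead of returning a value. If the
# queue is empty, the block is stored until someone pushes an item.
class MiniQueue
  def initialize
    @items = []
    @waiting = []
  end

  def pop(&block)
    @items.empty? ? @waiting << block : block.call(@items.shift)
  end

  def push(item)
    @waiting.empty? ? @items << item : @waiting.shift.call(item)
  end
end

# Channel: every pushed message is delivered to all current subscribers,
# who can unsubscribe by id at any time.
class MiniChannel
  def initialize
    @subs = {}
    @next_id = 0
  end

  def subscribe(&block)
    @subs[@next_id += 1] = block
    @next_id                       # subscription id, usable for unsubscribe
  end

  def unsubscribe(id)
    @subs.delete(id)
  end

  def push(message)
    @subs.each_value { |blk| blk.call(message) }
  end
end
```

The difference is exactly the one in the talk: a queue hands each item to one waiting consumer, while a channel broadcasts each message to every subscriber.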
00:26:18.960 This can be useful, for example, if you have a daemon process converting images or invoking image processing libraries. Rather than blocking, you can receive notifications upon completion—so you might leverage this for running an external process and getting feedback on success or failure. EventMachine’s 'em.system' builds on this with a streaming interface, allowing you to receive data as processes output it. In this context, handlers in EventMachine are essentially modules or classes defining methods for different events.
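Streaming a subprocess's output as it is produced can be sketched with stdlib IO.popen and IO.select (this mimics the streaming idea, not EventMachine's own popen/system implementation):

```ruby
# Run an external command and collect its output chunk by chunk as the
# child produces it, instead of blocking for the whole run at once.
def stream_command(argv)
  chunks = []
  IO.popen(argv) do |io|
    loop do
      IO.select([io])                  # wait until the child has produced output
      begin
        chunks << io.readpartial(4096) # grab whatever is ready, not the whole stream
      rescue EOFError
        break                          # child closed its stdout
      end
    end
  end
  [chunks, $?.exitstatus]              # collected output plus exit status
end
```

In a real reactor, the readiness wait would be folded into the main event loop, so the process output arrives as events rather than via a dedicated blocking read.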
00:27:34.960 Handlers simplify the management of events by creating an instance for each connection or subprocess. This per-instance mechanism allows you to maintain state via instance variables while promoting a cleaner coding style. EventMachine shines most when it comes to network I/O, which is the primary use case for many developers. Writing network servers and clients is straightforward. The key APIs are start_server for initiating TCP servers and connect for TCP clients: connect takes either a Unix domain socket path or a host and port, while start_server initializes a TCP listener with a handler for each client.
00:28:56.960 For every client connecting to the server, an instance of the given client handler is instantiated. Handlers often subclass EM::Connection, which incorporates built-in methods applicable to handling connections. Methods within the connection class let you interact with the reactor easily, with event hooks for connection completion, data received, and connection closure. This built-in functionality makes network communications straightforward to scale and maintain.
00:30:18.960 Returning to the topic of non-blocking code, remember it's your responsibility to parse logical packets from the incoming buffer. Each time the client or server sends data, TCP treats everything as a stream. There is no guarantee for how the data is delivered; thus as a developer, you are responsible for structuring code that can dissect the incoming stream and recognize full packets. The naive approach often leads to inefficiencies. Instead, you should implement a buffer that can efficiently parse based on the protocol you're using, such as a line-based protocol.
00:31:24.960 EventMachine provides numerous helpers for handling this process, including a buffered tokenizer, which assists in parsing based on specific markers in the protocol. Protocol modules make this simpler still: for example, including the object protocol automatically handles turning received data into usable Ruby objects. This minimizes the amount of raw data manipulation required, allowing for clearer and more maintainable code. As mentioned earlier, EventMachine contains a variety of protocols, including support for email and HTTP.
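A minimal line tokenizer in that spirit (an illustrative sketch, not EventMachine's BufferedTokenizer): feed it raw TCP chunks, get back only the complete lines.

```ruby
# Accumulates raw stream chunks and yields complete delimiter-terminated
# tokens; a trailing partial token stays buffered for the next chunk.
class LineTokenizer
  def initialize(delimiter = "\n")
    @delimiter = delimiter
    @buffer = String.new
  end

  # Append a chunk and return every complete line now available.
  def extract(chunk)
    @buffer << chunk
    lines = @buffer.split(@delimiter, -1)
    @buffer = lines.pop          # the trailing partial line stays buffered
    lines
  end
end
```

A handler's receive_data would feed every chunk through something like this and only act on whole lines.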
00:32:57.760 We mentioned earlier that EventMachine defaults to using select for I/O operations, but this has limitations in terms of scaling. select can only manage up to 1024 open file descriptors, and its performance degrades as the number of connections increases. However, EventMachine also supports epoll and kqueue, allowing it to scale to tens of thousands of connections seamlessly. To leverage them, invoke em.epoll or em.kqueue before calling em.run, depending on your system.
00:34:12.160 Additionally, there are a plethora of features available within the reactor that allow for monitoring files and processes. You can receive notifications when files change, get modified, or even discover when processes fork or terminate. You can also capture standard input. As such, practical usage scenarios involve readying your EventMachine for web applications. One way to maximize performance is through the use of EventMachine-enabled servers like Thin or Rainbows. These pre-built servers already initiate an EventMachine reactor, enabling you to integrate EventMachine’s API calls directly into your application components.
00:35:37.480 For even more dynamic situations, you can employ async Sinatra, which provides extended capabilities for handling streaming and delayed responses. It allows operations like the delayed responses seen in this example, which is quite advantageous for long-polling scenarios. I also want to showcase a simple demonstration: I've created a chat server that uses a line-based protocol and encapsulates many of the concepts we've discussed. It uses buffering techniques for efficient data processing and includes a channel object so that all connected clients receive messages, showing how much can be achieved within the EventMachine framework.
00:36:47.680 To provide an interactive element, I also implemented a command to make the computer read aloud any messages entered into the chat. Additionally, I’ve linked this chat server to Twitter's streaming API to push tweets mentioning EventMachine or MWRC directly to the chat. However, I encountered some difficulties with internet connectivity at the conference. If anyone connected, you should be able to engage through telnet on the showcased IP address and port. The chat server provides a fun demonstration of collaboration among multiple threads, invocation of asynchronous actions, and effective communication channels.
00:38:10.800 In summary, the entire codebase for this chat server is quite concise, organized around the core reactor loop. Despite everything it does, it handles many connections efficiently using less code overall than many conventional approaches, managing shared state across its channels and connections to deliver a seamless communication experience.
00:39:55.560 Feel free to download this demonstration or run it on your machine for a hands-on experience. More extensive documentation is available in the project's GitHub repository. We welcome contributions or assistance with technical improvements for the upcoming release. For any further inquiries, I'm accessible via IRC, Twitter, or GitHub. Are there any questions? Someone mentioned there is a Java version of EventMachine. Yes, indeed: JRuby features a fully functional reactor that is compatible and works reliably. I recently refactored a significant portion of it and have used it successfully in production environments.