
Summarized using AI

Introduction to Concurrency in Ruby

Thijs Cadier • May 04, 2016 • Kansas City, MO

In the video titled Introduction to Concurrency in Ruby, Thijs Cadier explores various concurrency models in Ruby, providing insights into how to effectively implement simultaneous tasks within the language. Cadier introduces the concept of concurrency and offers a structured approach to understanding three primary methods: multiprocess, multithreading, and event-driven programming. Throughout the presentation, he illustrates these concepts using a simple chat server implementation, which he demonstrates through live coding.

Key Points:

  • Concurrency Models: Cadier outlines three main concurrency models in Ruby:
    • Multiprocess Model: This model, used by servers like Unicorn, involves forking processes for handling tasks. Each child process operates independently, which simplifies concurrency management but increases system resource usage.
    • Multithreading Model: By creating multiple threads within a single process, this model allows shared memory access among threads. Cadier explains the benefits and complexities, such as needing mutexes to handle data consistency, potentially leading to deadlocks if not managed carefully.
    • Event Loop Model: This method efficiently handles many connections by processing one event at a time using constructs like fibers. Cadier emphasizes that while it appears to enable concurrency, it actually runs tasks sequentially but does so with minimal overhead.

Examples & Demonstrations:

  • Chat Server Implementation: Cadier builds a chat server as a practical example across all three concurrency models. He discusses:
    • Client-Side Execution: Using basic networking features from Ruby's standard library, allowing user input to initiate connections and read messages.
    • Server-Side Execution: Implementing the server logic using the different concurrency models to showcase their operation in real-time.
  • Cadier performs live demonstrations of each implementation, emphasizing the structure of connections and process management.

Takeaways:

  • Each concurrency model has its specific use cases, strengths, and weaknesses:
    • Multiprocess model is robust against failures, suitable for applications needing fault tolerance.
    • Multithreading provides a lower memory footprint but requires careful state management.
    • Event-driven programming excels at scalability, making it ideal for high-traffic applications, though it can lead to complex programming scenarios.

In conclusion, the choice of concurrency model in Ruby depends largely on the requirements of the application, with each approach offering distinct advantages and operational challenges. Cadier invites the audience to explore these models further through hands-on experimentation with the provided code examples.

Introduction to Concurrency in Ruby
Thijs Cadier • May 04, 2016 • Kansas City, MO

In this talk we'll learn about the options we have to let a computer running Ruby do multiple things simultaneously. We'll answer questions such as: What's the difference between how Puma and Unicorn handle serving multiple Rails HTTP requests at the same time? Why does ActionCable use EventMachine? How do these underlying mechanisms actually work if you strip away the complexity?

RailsConf 2016

00:00:10.559 Hey, welcome! My name is Thijs. If you want to join, I'll be doing a little demo later on, so if you're interested in looking at the code I'll be discussing, please clone this repository. Today, I'm going to talk about how to do concurrency in Ruby using some very simple features from the standard library.
00:00:30.480 I work on a monitoring product for Ruby called AppSignal, and we support a variety of different web servers and tools. This has forced me to learn about all the different ways to implement concurrency in Ruby. I used to find this topic quite intimidating, and I felt like I didn't fully understand what was going on, but I realized it was actually much easier than I thought. Today, I'm here to share some of these insights with you.
00:00:58.359 There are a few exceptions to this, but for simplicity's sake we're going to discuss three main ways to handle concurrency: running multiple processes, running multiple threads, or running an event loop, which achieves a form of concurrency.
00:01:16.280 You're probably familiar with these three web servers, as they all use one of these models, each with its own ups and downs. We'll try to frame this discussion by building a very simple chat server that's somewhat similar to Slack.
00:01:35.479 I've implemented a small chat server in Ruby in these three different ways. My colleague Roy is already logged into it; hopefully, it’s working well. Here’s our very minimalistic Slack alternative. Unfortunately, I don’t think we'll be getting millions in VC funding for this, but at least it works.
00:02:08.759 We'll start by discussing the chat client, which is located in a file called client.rb if you checked out the repository. This file uses basic networking features from the Ruby standard library to establish a connection. It begins by requiring the 'socket' library, which brings in all the networking logic, and then opens a TCP connection to a specified address. Then it starts a small thread that listens for incoming data from the server.
00:02:51.599 Basically, anything the server sends back to the client will be printed to the command line. Finally, it listens for user input on the command line. The client simply waits for you to type something and press enter, which triggers that loop. Once the input is entered, it gets sent back to the server, allowing the server to return data to the client. This setup creates a fully functional chat client, although I must regrettably note that it doesn't support any animated GIFs.
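To make this concrete, here is a minimal sketch of what such a client looks like. The host and port are assumptions, and the real client.rb in the repository may differ in its details:

```ruby
require 'socket'

# Open a TCP connection to the chat server (host and port are assumptions).
socket = TCPSocket.new('localhost', 2000)

# Background thread: print anything the server sends us to the terminal.
Thread.new do
  while line = socket.gets
    puts line
  end
end

# Foreground loop: wait for the user to type a line, then send it on.
while input = $stdin.gets
  socket.puts(input)
end
```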
00:03:39.599 As we've discussed, there are three ways to implement this in Ruby. The first and simplest method is to use multiple processes. This is how Unicorn works. In this model, there's one master process started by the system. Whenever work needs to be done, it forks a child process, which performs the task and then either terminates or stays alive for further work.
00:04:10.159 The master process manages these child processes. If you were to check your activity monitor on a Mac or use 'top' on a server, it would show a structure where you have one master process and several Unicorn workers. If any of these workers crashes, the master process will ensure that a new one gets spawned, making this a robust architecture.
00:04:53.639 Now, what does this look like in Ruby code? We’ll start with actually initializing the server. The code for this is essentially the same across all examples. We start a TCP server on a certain port, which thereafter waits for new incoming connections.
00:05:24.840 Since we’re working with multiple processes here, each process operates in its own namespace. Therefore, modifying a variable in one process does not affect the others. This isolation necessitates a means of communication between processes, especially when a chat message is received in one process that needs to be sent to all other connections in separate processes.
00:06:04.759 I've simplified the explanation a bit. You can see the full code in the examples, but what it ultimately comes down to is using a pipe. A pipe is a stream of data from one process to another. Writing data from one process into the pipe makes it readable by another. By establishing this communication, we ensure that all processes receive chat messages.
00:06:46.759 Next, we move on to the management aspect, which aligns closely with what Unicorn would do. We begin a loop where we try to accept new connections from the server. The server's 'accept' method blocks and waits for someone to connect. When this happens, it prompts the next iteration of the loop. We then set up a pipe to facilitate communication between the new process and the master process.
00:07:22.480 We add this pipe to the list of processes in the master to ensure we can communicate back to the child process. A key function in this process is 'fork,' which creates a complete duplicate of the process as it exists at that moment. Everything within the 'do...end' block of code becomes the new child process, while the previous process remains intact and continues to operate.
00:08:11.439 This concept can indeed be a bit challenging at first, and I recommend experimenting with it on your own machine to truly grasp it. In essence, we create a child process that knows which stream it is connected to, allowing it to interact accordingly.
00:09:01.279 Now, we can implement some chat logic. The first step involves reading the first line from the socket, which we assume is a nickname, following the protocol of our chat server. The client writes this nickname as its first line, and we send a little message back to the client.
00:09:49.640 Furthermore, once we receive this input, we initiate a thread that sends incoming messages back to the client. Unfortunately, this means we do need a thread even in a multiprocess example. Otherwise, the full functionality cannot be realized. The thread will wait for you to type something, continuously looking to read a line of text from the socket. When it obtains that line, it writes it back through the pipe to the master process.
00:10:57.360 Now, let's recap how the master process communicates with this data. Once we have this text message, the master can relay it back to all child processes. Those child processes will then be capable of writing it back to your terminal. We'll observe how this works in practice during the demo at the end of this talk.
00:11:41.599 There are several advantages to using this multiprocess concurrency model. One key benefit is that you can almost disregard the complexities of concurrency because each process operates independently. Since each process operates in a single thread, there are no concerns regarding thread safety issues. Additionally, if a worker crashes, the master process can simply reboot the affected worker.
00:12:29.240 However, the downside to this approach is the significant resource usage involved. Whenever you want to perform concurrent operations, you will require multiple processes, which will naturally consume memory. Hence, it's not the most efficient choice for a chat server, but it does function, as we will see shortly.
00:13:08.960 Next, let's discuss the multithreading model, which may be more suitable for chat applications. This approach works with a single process but allows you to create threads within that process that work together. The key advantage here is that all threads share the same memory. When one thread modifies a variable in memory, the change is instantly visible to all other threads, unlike in the multiprocess scenario.
00:14:47.680 This implementation looks similar to what we've discussed before. Here we still open a TCP server, but the logic changes slightly. We utilize a shared 'messages' array as a mock database, allowing us to store incoming messages and send them to other connected users. However, if multiple threads attempt to read from and write to this array simultaneously, it can lead to inconsistent states. To mitigate this, we utilize a mutex.
00:15:36.320 A mutex operates like a traffic light. A thread can lock the mutex when it begins working with shared data, preventing other threads from accessing it until the lock is released. This ensures that the data remains consistent. However, excessive locking can result in the entire system operating as slowly as a single-threaded application. If all threads end up executing one after another instead of concurrently, we miss the benefits of parallel execution.
00:16:57.679 So, just like with our previous examples, we again listen for a connection from the server. Instead of forking, we now initiate a thread. Any code executed within a 'do...end' block runs independently in its own thread within that same process. Again, we read the nickname from the socket and relay it back.
00:17:52.560 In this instance, rather than setting up numerous pipes, we again create a thread that relays incoming messages back to the client and collects messages from the client. When a new message arrives, we push it into the shared 'messages' array, utilizing 'mutex.synchronize' to ensure only one thread accesses the array at a time. This method provides a way to react to incoming messages without stepping over any other data.
00:18:32.480 In a typical scenario, you would likely leverage a dedicated database or message queue to manage data persistence and avoid losing messages on process crashes. We send messages back to each client roughly every 200 milliseconds, collecting outgoing messages until that interval elapses.
00:19:29.680 When dealing with mutexes, one should be aware of the chance of deadlocks occurring. Deadlocks arise when two threads each hold a lock that the other needs. If each thread is waiting for the other to release its lock, neither can proceed, causing the system to effectively hang. This is one of the complexities associated with multithreaded programming, making it somewhat challenging.
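A classic two-lock deadlock can be reproduced in a few lines; MRI even detects the situation and aborts. This is a sketch for illustration only:

```ruby
lock_a = Mutex.new
lock_b = Mutex.new

# Each thread takes one lock, then waits forever for the other's.
t1 = Thread.new do
  lock_a.synchronize do
    sleep 0.1                       # give t2 time to grab lock_b
    lock_b.synchronize { puts 'never reached' }
  end
end

t2 = Thread.new do
  lock_b.synchronize do
    sleep 0.1                       # give t1 time to grab lock_a
    lock_a.synchronize { puts 'never reached' }
  end
end

[t1, t2].each(&:join) # MRI raises: "No live threads left. Deadlock?"
```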
00:20:52.320 Next, we will examine how our messages are dispatched from the shared array. Ultimately, we sleep briefly to yield control to the thread scheduler, letting other threads process incoming messages. This can expose a weakness of the model: if multiple threads continuously contend for the same resource, the time spent waiting for that resource to become available increases.
00:21:35.639 In an ideal scenario, you would want to minimize overhead per connection to maintain responsiveness. In this model, we have a single process with one process ID handling several threads independently. However, Ruby's Global Interpreter Lock (GIL) restricts the execution of Ruby code to one thread at any given time within the MRI implementation.
00:22:47.520 In practice, this means MRI cannot truly run Ruby code in parallel across threads, except during I/O-bound operations, which is why multithreading still works well for most web applications. It allows Ruby to perform efficiently in networking contexts, where most time is spent waiting for database calls to return.
00:23:26.279 The caveat here is that multithreaded programming necessitates careful management of state consistency. You must be acutely aware of potential variable mutations and prevent simultaneous access to mutable state. If a thread crashes, the entire process can also crash, leading to a complete loss of functionality.
00:24:32.960 Finally, we arrive at the last method for handling concurrency in Ruby: the event loop. An interesting aspect of the event loop is that, although it seems to enable concurrency, it technically does not execute multiple tasks simultaneously. It switches between many fine-grained operations, which makes it appear concurrent, but it only processes one task at a time.
00:25:19.440 An event loop is efficient in its memory usage per connection, which makes it valuable for serving many clients. The event loop relies on the operating system to manage interactions, such as monitoring when a connection is ready for reading. You can ask the OS to inform you when an event occurs, leading to an organized queue of actions to handle.
00:26:57.920 This creates a single-threaded, single-process environment that endlessly loops, processing events when they become available. Implementing this requires a tight integration with the operating system, and a Ruby gem called EventMachine can facilitate this integration effectively.
00:27:35.679 If you were to deploy this in a production environment, you'd typically use a robust event loop implementation. For this demonstration, I used a simpler event loop built with Fiber and IO.select. Fibers are lightweight concurrency constructs that allow yielding control until a resumable event occurs.
00:28:18.480 In this example, we maintain a TCP server while handling connections with fibers. Each fiber remains associated with its client while preserving its own state. The event loop itself checks for new connections and assesses which connections are ready to read or write.
00:29:04.520 This system further allows us to track the state of each client, letting us know when data can be read from or written to. When reading or writing data, the fiber will determine whether it can proceed based on the current state. The design is somewhat convoluted, but after examining the diagrams and code you'll see how the system is implemented.
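To give a feel for the shape of such a loop, here is a compact sketch using Fiber and IO.select. It is deliberately simplified: it ignores write readiness and partial reads, which the real example tracks, and the port number is an assumption:

```ruby
require 'socket'

server = TCPServer.new(2000)
clients = {} # socket => the fiber handling that socket

# Build a fiber for one connection. The fiber yields whenever it has
# nothing left to read; the loop below resumes it when the OS reports
# that its socket is readable again.
new_handler = lambda do |socket|
  Fiber.new do
    nickname = socket.gets.chomp    # first resume: the nickname line is ready
    socket.puts "Welcome, #{nickname}!"
    loop do
      Fiber.yield                   # park until the loop sees more data
      line = socket.gets
      if line.nil?                  # client disconnected
        clients.delete(socket)
        socket.close
        break
      end
      clients.each_key { |s| s.puts "#{nickname}: #{line.chomp}" }
    end
  end
end

loop do
  # Ask the OS which sockets are ready: the listening socket means a
  # new connection; any other socket means its fiber can read.
  readable, = IO.select([server] + clients.keys)
  readable.each do |io|
    if io == server
      socket = server.accept
      clients[socket] = new_handler.call(socket)
    else
      clients[io].resume
    end
  end
end
```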
00:30:09.520 The benefits of an event loop include reduced overhead for each connection and scalability to handle numerous connections simultaneously. However, when managing more complex operations, event-loop programming can lead to issues with callback hell or deep call stacks. Consequently, if the event loop is blocked, it halts the entire application because everything operates as a single-thread process.
00:30:54.160 In deciding which concurrency model to utilize, it ultimately depends on your specific application requirements. For situations where failures can occur, the multiprocess model works very effectively. Multithreading is convenient as it doesn't require substantial rewrites, while the event loop shines in scenarios demanding high levels of concurrency.
00:32:12.600 Now, let's conduct a live demo of the chat server to see how it functions on my laptop. If you checked out the example code, please make sure to pull the latest updates as I've made fixes since the beginning of this presentation.
00:34:04.480 As we start our evented version, you’ll see the process running on one side of my terminal while the client operates on the other side. Inspecting the server reveals a single process with an active thread handling the connections.
00:35:25.680 Next, we’ll switch to the threaded version of the chat application. As connections come in, threads will be instantiated to handle each incoming connection accordingly. I’ve been attempting to gauge the performance difference between both versions but it's currently negligible due to the way we've set up this demo.
00:36:28.399 Finally, now let’s test the multiprocess version as our last demonstration. The display shows the master process at the top, with several nested child processes handling the connections. You can observe how this process architecture manages the connections effectively.
00:37:15.880 This concludes my presentation. If you have any questions about applying the knowledge gained today or about any one of the models discussed, please feel free to ask. Thank you for your time!