Modern Concurrency Practices in Ruby

by Arnab Deka

In this video from RubyConf AU 2014, Arnab Deka presents on 'Modern Concurrency Practices in Ruby.' The talk is inspired by concepts from the book 'Seven Languages in Seven Weeks,' discussing the potential need for better concurrency models in Ruby. Deka provides a comprehensive exploration of concurrency versus parallelism, emphasizing that while both are important, the focus will be on concurrency, particularly in the context of Ruby programming.

Key points discussed in the talk include:

  • Definitions: Concurrency means multiple tasks making progress toward a common goal, though not necessarily at the same instant; parallelism means tasks executing at exactly the same time.
  • Threads and Mutexes: Deka illustrates how to manage state with Ruby threads and the complications that arise from using them, such as potential race conditions and the global interpreter lock. He emphasizes the importance of mutexes to mitigate issues but warns about their impact on debugging and real-world application complexity.
  • Atomic Operations: Introducing the concept of atomic operations as a higher level of abstraction, Deka showcases how they utilize the processor's capability to execute operations atomically, reducing risks associated with traditional threading models. He mentions the atomic gem as a valuable tool in Ruby to achieve this functionality.
  • Futures: Deka explains futures, a concept used to manage network calls without blocking the main thread. He uses the Celluloid gem to demonstrate creating future operations in Ruby, which allow for efficient asynchronous programming.
  • Actor Model: He describes the actor model, with an example of how Elixir handles processes and message passing, showcasing its benefits for building robust systems by decoupling state from processes.
  • Software Transactional Memory (STM): Deka touches on the concept of STM, similar to database transactions for in-memory operations, highlighting how it ensures state consistency among competing threads.
  • Channels and CSP: Drawing from Go's capabilities, he discusses how channels can help structure communication between processes efficiently, promoting effective data flow.

Deka concludes by encouraging developers to explore these concurrency models, which can significantly enhance the scalability and resilience of applications. He advocates for an open-minded approach toward different technologies and systems, emphasizing simplicity as key in Ruby development.

Overall, the talk presents practical insights into modern concurrency patterns applicable to Ruby developers, promoting a deeper understanding of managing concurrency effectively in software design.

00:00:06.100 I'm actually glad that the talk is at 2:40 and not at 4 o'clock because I'll be asleep by then. I have no idea what time zone I'm in right now.
00:00:11.389 The premise of this talk is based on my experience reading the book titled 'Seven Languages in Seven Weeks.' One of the languages featured in the book is Ruby, and the author, Bruce Tate, asks Matz what he would change if he could modify one thing in Ruby. His response was that he would essentially remove the threading model that we currently have and replace it with a higher-level advanced concurrency feature.
00:00:29.960 Today, I'd like to share with you some of these advanced concurrency models and demonstrate how you can implement some of them in Ruby. We will start by exploring what concurrency and parallelism mean, take a brief look at threads, and then move on to some other interesting topics before concluding the talk with my final thoughts. Expect a lot of code mixed with different programming paradigms, including some spoonfuls of Clojure, a dash of Ruby, a hint of Elixir, a smidgen of Erlang, and a sprinkle of Scala. To keep you engaged, I’ll add a pinch of Java as well.
00:01:05.540 Last month, I delivered my first conference talk in Bangalore. Since it was my inaugural presentation, the organizers advised me to start with a joke to engage the audience. I had a really good joke, and I was laughing, but nobody else did. It was very awkward, so this time I decided to skip the jokes. Instead, I looked for some interesting Australian trivia, and I found plenty of it. I’ll share some of those facts throughout the presentation, so feel free to tell me if they're true or not, but remember, these are facts because they are written on the internet.
00:01:39.349 So, what exactly is concurrency and what is parallelism? Here’s an example of a concurrent system: multiple individuals and groups work towards a common goal, and although each person is doing their individual tasks at the same time, the system as a whole is making progress towards that common objective. In contrast, a parallel system assumes that all elements, like trains, leave at exactly the same time, heading towards the same destination. Today, we're not going to delve deeper into parallelism, as most of you are Ruby developers, and I'm sure you’re already familiar with that.
00:02:20.730 Instead, I want to focus more on the concurrency aspect. There's also the event-driven model present in Node.js, jQuery, or Ajax calls that you may be familiar with. This model allows you to make I/O calls without waiting and instead attach callbacks to handle the results. While that's another excellent method for achieving concurrency, we won't be discussing it today.
00:03:02.880 Let's talk about threads, locks, and mutexes. Here's a simple example of a Ruby class called Counter, which has a variable `count` initialized to zero. I have a method that increments this count by one. I then open two threads, each incrementing `count` 1,000 times. Now, let me ask you, what do you think the final value will be? Raise your hands if you think it's 2,000. No hands? The answer is actually 2,000, but only if you're running on MRI Ruby, where the global interpreter lock serializes the threads. If you try this in JRuby or Rubinius, which run threads in parallel, you will likely see a number less than 2,000.
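A minimal sketch of the kind of counter being described (names are illustrative, not the speaker's exact code):

```ruby
# Toy counter with a race: two threads each increment 1,000 times.
class Counter
  attr_reader :count

  def initialize
    @count = 0
  end

  def increment
    @count += 1 # read, add, write: not atomic, so updates can be lost
  end
end

counter = Counter.new
threads = 2.times.map do
  Thread.new { 1_000.times { counter.increment } }
end
threads.each(&:join)
puts counter.count # 2,000 on MRI in practice; often less on JRuby or Rubinius
```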
00:03:41.790 In this example, if one thread reads the value and increments it locally without saving it, another thread could come along and read the same value and then update it. When the first thread updates the value afterward, the second thread’s update is lost, which illustrates the kind of problems we can encounter with threading. To prevent these issues, we can use mutexes. By creating a mutex and placing our increment operation inside a synchronized block, we ensure that only one thread can perform the increment operation at a time while all other threads wait for it to complete. Although this looks neat, consider how this will affect your real-world applications when you’re not simply incrementing variables. You can run into problems like deadlocks, out-of-sequence executions, and it becomes challenging to test, debug, and reproduce issues.
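A sketch of the mutex-protected variant, under the same assumptions:

```ruby
# Same counter, but the increment is guarded by a mutex.
class SafeCounter
  attr_reader :count

  def initialize
    @count = 0
    @mutex = Mutex.new
  end

  def increment
    # Only one thread at a time may run this block; the others wait their turn.
    @mutex.synchronize { @count += 1 }
  end
end
```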
00:05:57.660 Now, if you still want to manage concurrency this way, there's an excellent gem by Charles Nutter, the creator of JRuby, called `thread_safe`. This gem makes the fundamental Hash and Array structures in Ruby thread-safe. If someone else had written it, I wouldn't necessarily recommend it, but given his expertise, I feel confident using it. If you look closely, he adds synchronized blocks to every method on Hashes and Arrays. That buys you thread safety, but writing code in this locking style yourself invites deadlocks and makes issues much harder to debug. So, the question remains: do you want to be a locksmith, or would you prefer writing your application code instead?
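Using the `thread_safe` gem looks roughly like this; the class names follow the gem's documented drop-in replacements, but treat it as a sketch:

```ruby
require "thread_safe"

hash  = ThreadSafe::Hash.new  # synchronized drop-in for Hash
array = ThreadSafe::Array.new # synchronized drop-in for Array

10.times.map do |i|
  Thread.new do
    hash[i] = i * i
    array << i
  end
end.each(&:join)
```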
00:06:59.810 When using mutexes and locks, your focus while developing applications can shift from the application code to managing concurrency constructs, which is why it's beneficial to use higher-level abstractions. One of them is atomic operations. Modern processors implement a mechanism called compare-and-swap, which lets the sequence we discussed earlier (fetch a value, increment it, set it back) complete as a single atomic action in the processor. The folks over in Java land have built a beautiful hierarchy of classes on top of this, such as AtomicInteger and AtomicBoolean, and since we often need more, there are even array variants like AtomicIntegerArray and AtomicLongArray.
00:07:46.330 So, how can you achieve this in Ruby? Once again, Charles Nutter wrote a fantastic gem called `atomic`. Instead of initializing count directly to zero, we wrap it in an Atomic operation. Any object can be wrapped, but for simplicity, we'll use zero. Inside the increment method, we call the `update` method provided by the Atomic class, generating a new value based on the current value. Once `update` is called, it sets the atom to this next value. Using this approach, you can ensure that you receive 2,000 consistently across all Ruby implementations, and since this approach is non-blocking, you won’t face deadlocks or similar issues. You could also utilize more complex objects in this manner.
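The counter rewritten with the `atomic` gem might look like this (a sketch based on the gem's `update`/`value` API):

```ruby
require "atomic"

class AtomicCounter
  def initialize
    @count = Atomic.new(0) # wrap the initial value in an Atomic
  end

  def increment
    # update yields the current value and expects the next one; internally it
    # retries via compare-and-swap, so no locks are held and no deadlocks occur.
    @count.update { |current| current + 1 }
  end

  def count
    @count.value
  end
end
```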
00:09:04.410 In Clojure, for example, you can easily create an atom and dereference it to get its value. You can also use functions like `reset!`, which sets a new value directly, or `swap!`, which updates the atom's value based on the function you provide, analogous to the `update` method we discussed in Ruby. Clojure atoms also support validators and watchers, a little like ActiveRecord validators and observers, which helps maintain in-memory data integrity.
00:09:38.480 For example, when using an atom to hold a positive number, you can use Clojure's anonymous-function reader macro to define a validator that only accepts positive numbers. You can also add watchers to your Clojure atoms. If an update would make the value negative, the validator rejects the change, preserving the integrity of your data.
00:10:17.650 Clojure also handles contention with retries. If the atom's value has changed since your thread read it, `swap!` retries your function with the fresh value instead of applying a stale update, so be cautious about side effects in the code you pass in, because it may run more than once.
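For a Ruby analogue of the validator idea, concurrent-ruby provides a Clojure-inspired `Concurrent::Atom`; the sketch below assumes its documented `validator:` option and `swap` method:

```ruby
require "concurrent"

# Clojure-style atom: the validator rejects any update that would go negative.
balance = Concurrent::Atom.new(10, validator: ->(value) { value >= 0 })

balance.swap { |current| current - 3 } # like Clojure's swap!; retried on contention
puts balance.value                     # => 7

# An update that fails validation (for example, subtracting 50 here) is simply
# not applied, so the atom keeps its previous value.
```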
00:10:49.360 Now, how many of you think this concept about atoms is true? It seems like quite a few of you have encountered similar cases.
00:11:18.520 Next up are futures, which are heavily used in situations involving network calls. Futures allow you to fork your code, where you might want to retrieve something from a web service without blocking your current execution thread. When you create a future, it will execute code in a separate thread, while you continue doing something else. Once you try to access the value from the future, if it's ready, you’ll get it immediately; otherwise, your code will block while waiting for it.
00:11:55.919 Creating a future in Ruby is quite straightforward with a gem called Celluloid. For example, when creating a future that simulates a slow network call, my code might sleep for three seconds before counting to 500. While this work is being done in its own thread, you can still query whether the result is ready. If not, you can wait a bit before trying again. Celluloid makes this efficient by managing the underlying threads for you.
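Roughly what such a Celluloid future looks like (a sketch; only the sleep and the count are from the talk):

```ruby
require "celluloid/autostart"

# Kick off the slow work on another thread; the network call is simulated with
# a sleep, as in the talk.
future = Celluloid::Future.new do
  sleep 3
  (1..500).reduce(:+) # "count to 500"
end

# ... do other useful work here, or poll whether the result is ready ...

puts future.value # blocks only if the result is not ready yet
```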
00:12:58.130 Let’s say you're working within a command class and you have a method called `cool_beans` that you want to execute in a future. By including Celluloid in your class and creating a thread pool, you can run methods asynchronously without overwhelming the system with excessive thread creation. An important consideration is limiting the number of threads—typically no more than 8 to 10 threads on a four-core system. Hence, you can select available threads from the pool as needed.
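A sketch of the pooled setup described here, assuming Celluloid's `pool` API and using the talk's hypothetical `cool_beans` method:

```ruby
require "celluloid/autostart"

class Command
  include Celluloid

  def cool_beans
    sleep 1 # stand-in for real work
    "done"
  end
end

# A fixed-size pool keeps thread creation bounded instead of spawning a thread
# per request; workers are handed out as they become free.
pool = Command.pool(size: 8)

futures = 20.times.map { pool.future.cool_beans }
futures.each { |f| puts f.value }
```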
00:13:38.419 There are other gems available that simplify creating futures as well, such as the Ruby `thread` gem, and there's a gem called `concurrent-ruby` by Jerry D'Antonio. If you're interested in futures, I encourage you to check out his talk. Promises are similar to futures; the key difference is that a future is eager and starts its work immediately, while a promise is fulfilled later, typically by another thread supplying its value.
00:14:54.840 Now, let's discuss actors. How many of you attended RubyConf last year and remember the inside joke related to this topic? Aaron Patterson, also known as Tenderlove, joked about it, but actors are essentially processes or threads that have their own mailboxes, meaning they have queues of things to do. Messages are sent to these queues, and each actor works through its queue sequentially. Erlang is well-known for its actor model since it allows you to create lightweight processes, sometimes numbering in the thousands. We'll look at an example using Elixir, which has a syntax that resembles Ruby more closely.
00:15:48.810 In the example, we have a module called Player, where a method called `loop` takes a player's name and waits for messages. When a message is received, it can pattern match the incoming messages and execute code based on that. For instance, spawning the player process allows us to send and receive messages like `serve` or `play_next`. The player's state and communication are handled through message passing, with the rally count being part of the message—encapsulating state without tying it directly to the object.
00:17:22.080 As we simulate a tennis game between Federer and Nadal, when I send a message for Federer to serve, he sends a message to Nadal, who then responds based on the game logic. The state of the game, represented by the rally count, is managed through message passing, meaning the state information is part of the message itself rather than being directly part of the object, emphasizing how processes communicate through messaging rather than direct state manipulation.
00:18:29.360 The lightweight nature of actor processes in Erlang or Elixir allows systems to manage a vast number of concurrent processes efficiently. This capability supports robust network applications, as exemplified by WhatsApp, which is built using Erlang. Erlang follows a fault-tolerant philosophy: when an error occurs, the process crashes, but a supervising parent process is notified and can restart it, allowing for seamless recovery. The Open Telecom Platform (OTP) is the set of libraries and design principles that facilitate building resilient systems following this model.
00:19:43.880 Furthermore, while many of the mechanisms we’ve discussed, such as atoms and closures, operate within the confines of a single computer, the actor model allows for distributed computing. You can run an actor on one server, while another can run elsewhere, with the same code base, fostering scalability and resilience. In Ruby, libraries like Celluloid can offer similar functionality and can help facilitate such distributed systems, making it straightforward to implement concurrent designs.
00:20:32.000 In terms of implementation, when we use Celluloid, we can define an actor class that holds attributes like the player's name. Through Celluloid's async proxies, ordinary method calls become messages, so the players can communicate without blocking each other. Celluloid's supervision feature adds fault tolerance by monitoring actors and restarting them automatically after failures.
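In Celluloid terms, the tennis exchange might be sketched like this; the class shape and game logic are illustrative, not the speaker's exact code:

```ruby
require "celluloid/autostart"

class Player
  include Celluloid

  attr_accessor :opponent

  def initialize(name)
    @name = name
  end

  # The rally count travels inside the message rather than living in shared state.
  def play(rally)
    puts "#{@name} hits ball ##{rally}"
    opponent.async.play(rally + 1) if rally < 5
  end
end

federer = Player.new("Federer") # Celluloid returns actor proxies here
nadal   = Player.new("Nadal")
federer.opponent = nadal
nadal.opponent   = federer

federer.async.play(1) # the serve; every hit after that is asynchronous message passing
sleep 1               # give the actors time to finish before the script exits
```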
00:21:49.700 Now, let’s discuss agents. They are similar to atoms with one key distinction: while atoms perform synchronous operations, agents represent asynchronous entities. Agents manage their state over time. For example, if you create an agent representing an object, you can tell it to execute certain commands over time without waiting for a result. This approach prevents threading issues since agents do not block operations.
00:22:53.600 In Ruby, agents can provide valuable functionality. You can define actions that log details over time or perform other tasks while keeping the calling code non-blocking. Of course, if an action raises an error, the agent simply fails without notifying you, so you must check its status yourself before relying on it further.
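Ruby has no single canonical agent, so here is a deliberately tiny, hand-rolled sketch of the idea: state owned by one worker thread, updated asynchronously through a queue:

```ruby
# Toy agent: state owned by a single worker thread; updates are queued and the
# caller never blocks.
class ToyAgent
  def initialize(initial)
    @value  = initial
    @queue  = Queue.new
    @worker = Thread.new do
      while (update = @queue.pop)
        @value = update.call(@value) # apply updates one at a time, in order
      end
    end
  end

  def send_update(&block)
    @queue << block # returns immediately
    self
  end

  attr_reader :value
end

log = ToyAgent.new([])
log.send_update { |entries| entries + ["request received"] }
log.send_update { |entries| entries + ["request handled"] }
sleep 0.1   # crude wait, good enough for a toy example
p log.value # => ["request received", "request handled"]
```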
00:24:19.790 Next, let’s talk about Software Transactional Memory (STM). Its concept resembles database transactions but applies to in-memory operations. When modeling a banking scenario where Alice has $1,000 and Bob has $2,000, you can set up the transactions such that their accounts cannot drop below zero. Upon attempting 25 transfers, you’ll see that the system will only allow 11 successfully, demonstrating how the validation prevents the balance from going negative.
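The banking example isn't tied to a particular language in this summary; one possible Ruby rendering uses concurrent-ruby's `TVar` and `atomically` (a sketch, not the speaker's code):

```ruby
require "concurrent"

alice = Concurrent::TVar.new(1_000)
bob   = Concurrent::TVar.new(2_000)

# Both balances are read and written inside one transaction; if another thread
# commits a conflicting change first, the block is retried, keeping them consistent.
def transfer(from, to, amount)
  Concurrent::atomically do
    next false if from.value - amount < 0 # enforce the "never below zero" rule
    from.value = from.value - amount
    to.value   = to.value + amount
    true
  end
end

25.times { transfer(alice, bob, 100) }
puts alice.value # => 0; once Alice hits zero, the remaining transfers are rejected
puts bob.value   # => 3000
```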
00:25:14.180 Another characteristic of STM transactions is that they are atomic. If multiple threads run the same block of code, the transactions stay isolated from one another, so the state remains consistent and you still retrieve the values you expect. However, unlike database transactions, durability is not provided: if you need the state to survive a crash, you must explicitly save it to disk yourself.
00:27:58.260 STM isn't widely supported across languages yet, but experimental implementations suggest paths for future development. Additionally, Intel has been developing hardware-level support for transactional memory, which holds promise for improving performance and reliability.
00:29:08.690 Let’s move on to channels and Communicating Sequential Processes (CSP). Inspired by Go, libraries like core.async in Clojure make this style of concurrent programming straightforward. A simple channel can be created for reading and writing values: reads block while the channel is empty, and buffered channels queue up a bounded number of writes. This behavior makes the data flow between processes clearer and easier to manage.
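Ruby's standard library has no Go-style channels, but `SizedQueue` gives a rough feel for the blocking, buffered behavior described here (a conceptual sketch only):

```ruby
# A SizedQueue behaves a bit like a buffered channel: pushes block when the
# buffer is full, pops block when it is empty.
channel = SizedQueue.new(3)

producer = Thread.new do
  5.times do |i|
    channel << i # blocks once 3 unconsumed items are queued
    puts "produced #{i}"
  end
  channel << :done
end

consumer = Thread.new do
  while (item = channel.pop) != :done # blocks while the channel is empty
    puts "consumed #{item}"
  end
end

[producer, consumer].each(&:join)
```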
00:30:06.830 core.async's design deliberately keeps buffers bounded, which pushes you to manage producer and consumer rates rather than letting unprocessed items pile up. It's smarter to solve that root issue during development than to rely on automatic, unbounded buffering that can lead to bigger problems later.
00:31:07.886 Clojure also provides utilities for mapping functions across channels, which makes batch processing straightforward, unlike conventional threading code that quickly becomes cumbersome with callbacks. Under the hood, core.async's go blocks are compiled into state machines so that channel operations don't tie up threads, and Go's goroutines achieve a similar effect, handling message passing without blocking execution.
00:32:26.830 In Ruby, there’s an agent gem by Ilya Grigorik that allows for similar behavior to Go. It's a powerful tool you can utilize to manage concurrency in your applications. Now, as we wrap up, I encourage you to explore these technologies and methodologies. You might encounter problems later that you won’t initially know how to solve, but understanding these tools could provide you valuable insights.
00:33:44.670 When programming in Ruby or any other language, simplicity should remain your guiding principle. Using multiple processes and queues can vastly enhance your application’s scalability. Technologies such as SQS, RabbitMQ, or similar systems can facilitate communication across distributed processes, ensuring a smooth scaling process when you need to push beyond a single machine.
00:35:00.000 Don't shy away from mixing different technologies; diverse systems cohabiting and communicating through APIs can yield significant benefits. There are plenty of organizations whose systems are built on multiple technologies that communicate over HTTP or Thrift without being tightly coupled. Ultimately, being open-minded about the technologies you adopt will enhance your development capabilities. Thank you for your attention, and I'm grateful to be here.