00:00:11.580
Hello everyone! I am here to talk about Concurrent Ruby and the new framework that is part of Concurrent Ruby version 1.1. Unfortunately, I didn't have enough free time before the conference, so I decided to postpone the release until after it. It will be available next week.
00:00:19.020
My name is Petr Chalupa, and I work for a research group at Oracle Labs. Among other things, we are also working on a new Ruby implementation called TruffleRuby. I'm just curious: how many of you have heard about TruffleRuby, previously known as JRuby+Truffle?
00:00:31.349
Wow! That’s quite a response! For those of you who haven’t heard of it, you can look up a video from last week's conference, which has a status update after the 27th minute. However, I'm not here today as an Oracle employee; I'm here as a maintainer of Concurrent Ruby, and that’s what I’ll be focusing on.
00:00:47.940
Concurrent Ruby is not a new implementation, nor does it include any language extensions. It was created in 2014 and serves as a toolbox of abstractions from which you can choose based on your specific needs. It contains both low-level and high-level abstractions, and it has no dependencies, since we want to avoid the complications dependencies can introduce. Concurrent Ruby is therefore independent of any particular Ruby implementation, although it works best with CRuby, JRuby, and TruffleRuby. It's open source, has about 3,000 stars, and around 207 gems currently depend directly on it, including major ones like Sucker Punch and Sidekiq.
00:01:15.810
Let me share the current state of concurrency in CRuby. The CRuby implementation has a Global Interpreter Lock (GIL), which means it does not support true parallelism. The only threading tools available are Mutex, Monitor, and Condition Variable from the standard library. Additionally, there are a few features that are specific to certain Ruby implementations like JRuby, such as synchronized blocks and channels, but they are not portable. Unfortunately, we also lack volatile variables, requiring us to rely on Mutex for thread safety.
00:02:00.660
When working with parallelism, a common solution on MRI is to use forking, which is not portable since it cannot be done on JRuby. This variability complicates development when migrating applications between Ruby implementations. To address these issues, we've developed Concurrent Ruby. The library consists of three gems: the core gem (concurrent-ruby) is stable and includes the Java extensions, which are precompiled before release, so they install without compilation issues. The edge gem (concurrent-ruby-edge) contains experiments and new abstractions, while the extension gem (concurrent-ruby-ext) provides C extensions for CRuby.
00:02:52.920
If you encounter compilation issues, you can use the core gem independently, as it contains numerous high-level abstractions along with atomic synchronization primitives and various tools. If you have a concurrency problem, it’s advisable to look into Concurrent Ruby to see if it offers a solution. Now, let’s discuss promises.
00:03:22.700
How many of you are familiar with promises in JavaScript? Great! This framework is essentially a superset of JavaScript promises, which means it has even more functionality. It integrates previous abstractions like futures and promises into a single tool, eliminating any issues with conventions and compatibility across classes. The names are also derived from JavaScript to maintain familiarity.
00:04:04.019
What's new about this framework? It utilizes a synchronization layer that we've built into Concurrent Ruby, allowing for the implementation of volatile variables and compare-and-set operations. This is a breakthrough since these operations allow the construction of promises without the need to lock resources, making it significantly faster.
00:04:30.390
It also integrates with other abstractions such as actors and channels. Here is the outline: we'll cover the basics of futures, how to chain them, and the rest of the functionality. The core classes in this framework are Event and Future. An Event instance represents some occurrence that will happen in the future, though it does not carry any value.
00:05:03.640
On the other hand, a Future instance represents a value that is not yet computed but will be available sometime in the future. Events can be either pending or resolved, while Futures can be pending, fulfilled, or rejected. The fulfilled and rejected states signify that the Future has been resolved.
00:05:40.929
There are several convenience methods on Events to help you determine their state after being resolved. Similarly, Futures include methods that check whether a Future is fulfilled or not. If you fulfill a Future with a value, its state changes, and you can then read that value. Futures can also be rejected with an error, which can later be accessed via the reason method.
00:06:22.400
Let’s look at a practical example of how an Event could be used. Imagine two threads, in the first one we call a calculation method, and in the second thread, we have another method that depends on the result of that calculation. We want to synchronize the two threads to ensure that the dependent method is called only after the first method finishes.
00:07:09.170
We can create an Event using a factory method from the Promises module. In the first thread, we execute the first method and resolve the Event afterward. The second thread also starts, but it blocks on the Event until it is resolved before it can continue with its execution.
00:07:56.910
Often, we also want to communicate results between threads, so we can use a Resolvable Future instead of an Event. Instead of simply resolving an Event, we can fulfill the Future with the result of the long-running calculation. After fulfilling the Future, we can read the value from it in the second thread.
00:08:40.630
We can use a different factory method to run operations asynchronously. When you call the future factory method, the block is immediately scheduled for evaluation on a pool thread, eliminating the need for you to create one manually.
00:09:20.290
Moreover, it’s possible to use chaining. After creating the first Future, we can call 'then' on it, which creates a second Future that executes as soon as the first one fulfills, carrying the result of the calculation forward. However, note that passing data into the blocks by capturing local variables is not thread-safe, since those variables may be reassigned later. Instead, pass the arguments to the factory method, which supplies them to the block.
00:10:09.240
You can also create branches from a Future; for instance, after the first Future resolves, you can create two parallel branches to process the results independently. These branches can later be combined using a 'zip' method to return a Future that contains the results of both branches.
00:11:03.240
For example, if you have four parallel tasks and you want to wait until all are complete, you can create Futures for each task and use the zip method to combine them. The combined Future will yield an array of results for all tasks when they're done.
00:11:57.540
Avoid the naive solution of calling blocking methods directly inside a Future's block, as this blocks the thread evaluating the outer Future and can starve your thread pool. Instead, calling the 'flat' method on the outer Future returns a new Future that is fulfilled with the value of the inner Future.
00:12:45.370
If you need lazy computation, create the Future with the delay factory method. Such a Future does not start executing immediately; it runs only when its value is actually needed. Further Futures chained onto it stay lazy as well, until the chain is touched.
00:13:43.280
Another significant feature in Concurrent Ruby is the ability to schedule code to run after a delay or at a specific time, using the schedule factory method. You can pass either a number of seconds or an absolute time for precise execution.
00:14:38.970
Finally, we have introduced cooperative cancellation to the library, which allows programmers to share the cancellation of tasks across multiple Futures and actors. This approach avoids issues that may arise with traditional timeouts, as it doesn’t rely on creating additional threads that could lead to complications.
00:15:32.820
In the example, we create several Futures to simulate computations; if one fails, all the others are canceled gracefully. A shared cancellation token lets every task observe the cancellation and stop cooperatively.
00:16:30.840
We can also enforce limits on concurrent tasks. This feature is beneficial when working with limited resources, such as a shared database. By implementing a throttle mechanism, we can ensure that only a set number of tasks run simultaneously.
00:17:17.500
For instance, if we create ten Futures but set a limit of three concurrent tasks, the throttling mechanism will ensure that only three tasks execute at any given time.
00:18:08.710
We also have an implementation of actors that allows for concurrent processing. With actors, messages are processed when they arrive, storing the state internally to maintain context without relying on global state.
00:19:01.190
You can use the actor pattern to manage databases or other shared resources efficiently, allowing you to focus on sending messages to actors rather than worrying about threading complexities.
00:20:03.800
This implementation is coupled with channels, which allow for the sending and receiving of messages with back pressure built-in to avoid overwhelming the receiver. Channels function over Futures, allowing seamless communication.
00:20:53.890
For instance, if a channel has a fixed capacity and receives more messages than it can handle, any excess messages will remain pending until space becomes available. This behavior allows you to manage flow control effectively.
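Ruby's stdlib SizedQueue exhibits the same bounded-capacity behavior; the library's Channel builds future-based push/pop operations on top of this idea:

```ruby
channel = SizedQueue.new(2)      # room for two pending messages
channel << 1
channel << 2                     # channel is now full

consumer = Thread.new { 3.times.map { channel.pop } }
channel << 3                     # blocks until the consumer makes space
received = consumer.value        # => [1, 2, 3]
```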
00:21:50.440
You can choose to select messages from multiple channels, and if a message is available in one of them, the selection method fulfills the Future for that channel, enabling responsive processing.
00:22:40.700
Additionally, you can simulate processes using Futures without needing to create a thread for each operation. This is useful when managing resources, as creating too many threads can lead to performance degradation.
00:23:35.990
A counting process, for example, can be written as a Future that keeps re-scheduling itself until it reaches a specified condition. This way, no thread is blocked while the process runs, yet the behavior is asynchronous.
00:24:51.320
Back pressure is also essential in this context. It ensures that the producer slows down if the receiver cannot keep up with processing messages, thus avoiding overwhelming the system.
00:25:45.200
In a concrete example, a producer pushes integers to a channel while the receiver processes them, and if the producer generates values faster than the receiver can consume them, the channel's capacity ensures that the producer will wait until space becomes available.
00:26:37.170
This lets us observe the receiver's results while minimizing resource usage: only a small, fixed number of threads serves all the processes.
00:27:51.070
In summary, processes can run on a global fast executor, allowing for efficient message passing and back pressure management without needing to create numerous threads.
00:28:46.510
The new actor model simplifies the design and allows for maintaining a clean separation of concerns while utilizing predictable asynchronous behavior.
00:29:42.780
The processing of messages can now be handled effectively, enhancing scalability and consolidating the benefits of combined channels, processes, and actors.
00:30:32.060
Error handling in the library is also straightforward: blocks chained with 'then' only run if the preceding Future fulfilled successfully; otherwise the rejection propagates down the chain, where it can be handled by chaining 'rescue'.
00:31:40.420
There are two global thread pools: a fast executor for short, non-blocking tasks, and an IO executor for tasks that may block. The IO executor is the default, since it is the safe choice when a task might block.
00:32:34.580
Ultimately, the goal is to maximize efficiency by managing threads properly while offering advanced features without complicating the developer's experience. The factory methods can also be mixed into any of your classes, so you can construct Futures conveniently wherever you need them.
00:33:47.500
To summarize, the benefits of using this framework include running on thread pools without needing to manage threads directly, higher performance via lock-free operations, back pressure support, and flexibility to use various abstractions like promises, channels, and actors together.
00:34:37.200
As we move toward the release of version 1.1, the core components include all the essential methods and capabilities, and I welcome feedback as we finalize everything.
00:35:06.650
Thank you for your attention! I am now happy to answer any of your questions.