Asynchronous Workers to the Resque
Summarized using AI


by Dave Kapp

In the video titled "Asynchronous Workers to the Resque" by Dave Kapp, the speaker discusses how to effectively utilize asynchronous workers in Ruby applications, particularly with the Resque library. Asynchronous processing is crucial for enhancing application responsiveness, especially when dealing with slow operations. Kapp outlines several key techniques for implementing asynchronous workers:

  • Isolating Slow Activities: Asynchronous workers can move slow tasks out of the request/response cycle, ensuring user operations are not hindered by delays caused by slow API calls or computations. For instance, freeing the main application to process other requests while an API call is being made improves overall user experience.

  • Pipelining Processing: Pipelining involves breaking larger tasks into smaller, manageable jobs that can be executed sequentially. This not only clarifies job structure, improving maintainability, but also enhances debugging efficacy. Kapp recommends that each worker perform a specific task, ensuring separation of concerns and reducing complexity.

  • Periodic Maintenance Tasks: Resque and similar libraries can automate routine background jobs like data collection and reporting. Kapp suggests using rake tasks to enqueue these periodic jobs, emphasizing that this can help in maintaining data integrity and application performance.

Throughout the talk, Kapp emphasizes two best practices—"shared nothing" architecture to minimize concurrency issues and ensuring jobs are idempotent, meaning repeat executions do not lead to unintended state changes.

The concluding points highlight the importance of managing concurrency wisely, stressing that while it offers many advantages, it requires careful consideration to avoid complications such as race conditions. Kapp leaves the audience with critical rules for handling concurrency and offers resources for further exploration. Overall, the discussion provides valuable insights into using asynchronous workers effectively within Ruby applications, with practical advice and examples relevant to developers in various contexts.

00:00:16.400 Hello everyone! My name is Dave Kapp, and I am a cat video fiction author who moonlights as a software developer. I work for Coshx Labs, a company based out of Charlottesville, Virginia. I used to work in the Boulder, Colorado office of the company, but I moved to Austin about a year ago, and I’m now the only person from the company currently in Austin. I'm happy to be part of the tech scene here, which I've really enjoyed.
00:00:39.920 I gave a talk on an introduction to asynchronous workers at RailsConf this year, and they were kind enough to invite me to give a follow-up talk about some patterns of asynchronous worker use, along with some details on ways to deal with concurrency problems that can arise. I have a reasonably large amount of material, and I’ll try to keep an eye on the time to allow for some questions. However, concurrent processing and parallel processing are vast fields, so it will be challenging to cover a lot in just half an hour. That said, I hope to cover as much as I can about asynchronous workers to the rescue and their parallelism patterns.
00:01:20.720 I have some good news if you haven't heard yet—Ruby 2.0 has been released! Many of you are likely happy about this, and I know I was excited because it brought several great improvements. However, some of you may feel disappointed since the global interpreter lock (GIL) still exists, which is something we have to deal with in MRI Ruby. If you are using JRuby or the latest builds of Rubinius, the situation is different.
00:02:05.040 I want to encourage everyone to stay calm. The reality is that there are still options for parallelism in Ruby, even with MRI. However, traditional parallel programming techniques often don't work well with Ruby. You may or may not be familiar with how MRI works, but the GIL prevents more than one thread in a single MRI process from running code simultaneously.
00:02:24.160 Ruby now has true native threads, but under MRI the GIL means only one thread can execute Ruby code at a time. This kind of limitation is not unique to Ruby; other languages have it as well. In an HTTP request/response-oriented system, which I'm guessing many of you are using, a form of parallelism is often already present because you run multiple processes on your servers. Few people in production run only a single instance of Passenger, Puma, Unicorn, or other servers.
00:03:01.200 The basic idea of parallel processing is straightforward: you have a big problem that is too large to be handled all at once. Instead, you break the problem down into smaller pieces and work on each piece separately. Once those individual pieces have been completed, you put them back together to construct or reconstruct the solution. This concept is similar to solving a puzzle where different pieces represent an aggregate solution, and you aim to reassemble them. It's worth noting that parallel processing has been around long before the Ruby language itself, and every language offers various ways to approach it.
00:03:58.080 So, while some languages have built-in support for it, Ruby leverages asynchronous workers, an approach that works well with multiple flavors of Ruby. Asynchronous workers can be utilized in various ways, providing flexible tools to apply to different situations. Let's explore three primary techniques or patterns for using asynchronous workers: isolating slow activities, pipelining processing, and managing periodic maintenance tasks.
00:04:26.720 Before diving deeper, I want to emphasize two best practices that can save you frustration and keep your sanity. These practices are particularly crucial for avoiding concurrency problems when working with asynchronous workers: the concept of "shared nothing" and aiming for "idempotent" jobs. The idea behind shared nothing is that you do not share state or objects across asynchronous workers. Even if you’re working with the same domain models or the same database backend, sharing specific details across workers can lead to complications. The less you share between workers, the better off you'll be. This approach minimizes the risk of encountering concurrency problems.
00:05:26.080 Next, let's talk about idempotency. Idempotent operations are those that, even if repeated, won't change the outcome or state beyond the first execution. The classic goal here is an ATM transaction: if a withdrawal fails and is retried, the user's account must not be debited twice. However, idempotency does not apply universally. For instance, anything that depends on the current time, such as calling 'Time.now,' will not produce the same output across multiple calls without special circumstances. Where idempotency isn't achievable, aim for understood failure instead, ensuring that you know exactly what happens if an operation fails.
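The distinction can be sketched in a few lines of plain Ruby. The job names and the hash-based "records" below are hypothetical stand-ins, not code from the talk:

```ruby
# Idempotent: running it twice leaves the record in the same state,
# because it sets a value rather than accumulating one.
class MarkInvoicePaidJob
  def self.perform(invoice)
    invoice[:status] = :paid
  end
end

# NOT idempotent: a retried run double-counts the same logical event.
class IncrementLoginCountJob
  def self.perform(user)
    user[:login_count] += 1
  end
end

invoice = { status: :open }
2.times { MarkInvoicePaidJob.perform(invoice) }   # retry is harmless
# invoice[:status] is :paid either way

user = { login_count: 0 }
2.times { IncrementLoginCountJob.perform(user) }  # retry corrupted the count
# user[:login_count] is now 2 for one login
```

If a job must increment or append, that is where "understood failure" matters: you need to know whether a crashed job ran zero times or once before retrying it.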
00:05:40.320 Referential transparency is another key concept: it refers to a function's output always being the same when given the same inputs. In the context of asynchronous workers, this means aiming for workers that yield consistent results for the same inputs, making it easier to understand and compose your asynchronous jobs. The takeaway here is that while working with asynchronous workers, it’s essential to understand the principles that underpin their design and implementation.
00:06:40.320 Asynchronous workers run in separate processes distinct from your main application. Although I'll be using Rails in my examples, the same principles apply to apps built with Sinatra or any non-web-based application. An asynchronous worker operates independently, reading data from a specified place and processing it, periodically polling for new data or tasks. There are three main libraries for asynchronous workers in Ruby: Resque, Sidekiq, and Delayed Job.
00:07:19.600 Resque is the library I'll be using in my examples, while Sidekiq was initially designed to be API-compatible with Resque but employs a different model for managing multiple jobs. Delayed Job, on the other hand, uses SQL as its backend instead of Redis. When choosing a library, consider what best fits your project, as there is no one-size-fits-all solution, and your choice may depend significantly on requirements like running on Windows.
00:07:54.880 While discussing asynchronous workers, the term "workers queue" may come up. This phrase can be misleading, as it denotes a conceptual level where jobs are treated like items in a data queue. However, they may not function exactly as a traditional queue. Before we continue, let me address a quick note for those using JRuby and planning to use Sidekiq, as you can sidestep the GIL and leverage the JVM for your worker processes. However, I recommend reading an article titled "Ruby Core Classes Aren't Thread Safe" to avoid frustration and better understand potential pitfalls.
00:08:21.280 Now, let’s address the first of the three patterns: how to handle slow operations effectively by outsourcing them to asynchronous workers. Applications often encounter slow processes, whether due to complex calculations or slow API calls. For example, if you have an API call that takes 15 seconds, no algorithmic changes on your side will solve the problem; the external server is inherently slow. Users of your application will surely be disappointed if they have to click buttons and wait excessively for responses.
00:09:01.400 Even if you implement modifications to enhance performance, the real issue may remain unaddressed. HTTP request time differs fundamentally from a batch processing scenario, and an HTTP request taking just a minute is as detrimental as one that takes an hour. Therefore, a restructuring of your application architecture is necessary. You want to move slow tasks outside of the request/response cycle to ensure responsiveness. This principle applies to both drawing graphics on an event loop in iOS development and to Rails applications where you aim not to block the handling of HTTP requests.
00:09:40.240 So, how do asynchronous workers help with handling slow operations? By making the slow call asynchronous, you free up your application to continue processing other tasks, achieving a form of parallel processing. The goal is to ensure that slow operations do not hinder your application's essential work, especially those involved in handling HTTP requests that generate revenue for your business. In summary, utilizing asynchronous workers allows you to move slow processes away from core functionality and enhance overall application responsiveness.
00:10:22.720 Let’s now look at some code examples to illustrate the concept. A simple worker class in Resque looks something like this. Essentially, this is a class defined for a job that includes a queue set as a class instance variable and a method named 'perform,' which is a class method—this part is crucial. The method accepts arguments and processes them accordingly. While this may vary slightly in syntax for Delayed Job or Sidekiq, the fundamental structures remain consistent.
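The slides themselves aren't reproduced here, but the shape the speaker describes looks roughly like this. The job name and body are illustrative, not from the talk:

```ruby
# A minimal Resque-style job class.
class ThumbnailJob
  # Resque reads this class instance variable to decide which
  # queue the job belongs to.
  @queue = :thumbnails

  # Resque invokes `perform` as a *class* method, passing along the
  # arguments that were handed to Resque.enqueue.
  def self.perform(image_id, width, height)
    # A real job would load the image and resize it; this sketch just
    # returns a description so it runs on its own.
    "resized image #{image_id} to #{width}x#{height}"
  end
end
```

Note that `@queue` at class body level is a class instance variable, not an ordinary instance variable, and that `perform` is defined with `self.` rather than as an instance method: both details are what Resque's worker process expects.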
00:10:59.440 The important aspects to notice include the class queuing designation and the 'perform' method that handles incoming arguments. Do not worry too much about the details here; focus on the structure. Each job type should have its class, which enables you to manage specific pieces of work efficiently. Regarding organization within a Rails project, some choose to place workers in the library, while others place them in models or other directories. It's vital, however, that your team maintains consistency in wherever you decide to put them.
00:11:36.480 It's also crucial to understand that you can use Rails components inside your asynchronous workers. While you need to configure this correctly, it’s straightforward and generally worthwhile, as it gives you full access to your Active Record models and other parts of your Rails domain. To call the worker, after an action like 'if user.save' succeeds, you call 'Resque.enqueue' with the job class and the relevant arguments. The worker processes the job separately, allowing the main application to continue without delay.
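A sketch of that call site follows. To keep it runnable without Redis, a tiny in-memory module stands in for the real Resque here (the real 'Resque.enqueue' pushes a JSON payload of the class name and arguments onto a Redis list); the model and job names are hypothetical:

```ruby
require 'json'

# Stand-in for the real Resque module, so this sketch runs without
# Redis. The real library JSON-serializes {class:, args:} into Redis.
module Resque
  QUEUE = []
  def self.enqueue(job_class, *args)
    QUEUE << JSON.generate("class" => job_class.name, "args" => args)
  end
end

class WelcomeEmailJob
  @queue = :email
  def self.perform(user_id)
    # A real job would look the user up and send mail.
  end
end

# Stand-in for an ActiveRecord model.
User = Struct.new(:id) do
  def save
    true # pretend the record persisted
  end
end

user = User.new(42)
if user.save
  # Pass the id, not the object: arguments are JSON round-tripped,
  # so rich Ruby objects would not survive the trip.
  Resque.enqueue(WelcomeEmailJob, user.id) # returns immediately
end
```

The enqueue call returns as soon as the payload is queued; a separate worker process picks it up and calls `WelcomeEmailJob.perform(42)` on its own schedule.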
00:12:14.320 However, upon completion, you must ensure the application can collect the processed data. Here are two primary ways to do that: first, use a 'check back later' approach where you instruct the user to return for the results at a later time. This can work well for longer-running jobs, like generating financial reports. Alternatively, implement asynchronous loading via JavaScript with polite loading indicators, ensuring users don’t have to stare at endless spinning wheels, particularly on mobile devices.
00:12:54.240 It's important to consider potential data synchronization concerns. Be cautious of mutable states. You might encounter race conditions if multiple workers modify the same data simultaneously. Therefore, adjust your workflow to prevent collisions that could result in data anomalies. It’s generally easier to restructure your processing order than to implement locks to manage access to mutable data.
00:13:41.760 I'd like to share the four rules of concurrency, sourced from the JRuby wiki. First, avoid concurrency if you can while still achieving your application goals. Second, if concurrency is necessary, do not share data across threads or workers. Third, if you must share data, do not share mutable data. Fourth, if you must share mutable data, synchronize access to it. Treating these as priorities, in that order, will yield better results.
00:14:26.560 Now let's delve into the second technique: pipelining. Pipelining refers to doing data manipulation in a sequence of steps. Since asynchronous jobs don't inherently bog down your application, it's tempting to overload a single worker. However, consider breaking operations into smaller, focused tasks—this approach enhances clarity and maintainability.
00:15:12.560 When writing asynchronous workers, it’s easy for a single worker to take on far too many responsibilities. I’ve seen this happen often where a single worker tries to handle complex tasks, resulting in code that becomes difficult to debug when things go wrong. By using well-defined smaller jobs, you can enhance your ability to trace issues and isolate failures.
00:15:51.200 Pipelining allows you to define a series of stages where each stage receives only the parameters it needs. This streamlining mimics dependency injection patterns that improve testability. Instead of handling a large mass of operations, each small operation can be tested and verified independently, aiding in clarity and simplifying error-tracing when they arise.
00:16:42.560 Here’s an example of a pipeline process broken down into manageable parts. First, you obtain the user and their followers, then enqueue the next step. Each stage follows this pattern, ensuring logical continuity while keeping the separation of concerns in check. You may wonder if this method truly makes a significant difference, and I can confirm that the clarity achieved when breaking things down is invaluable.
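That pipeline can be sketched as two small job classes, each enqueueing the next stage with only the parameters it needs. To keep the sketch self-contained, a toy queue that performs jobs immediately stands in for Resque, and the job names and follower data are hypothetical:

```ruby
# Toy synchronous "queue": enqueueing a job performs it immediately,
# standing in for Resque + Redis so the sketch runs on its own.
module ToyQueue
  def self.enqueue(job_class, *args)
    job_class.perform(*args)
  end
end

RESULTS = []

# Stage 1: look up the user's followers, then hand ONLY the ids
# forward to the next stage.
class FetchFollowersJob
  def self.perform(user_id)
    follower_ids = { 1 => [10, 11] }.fetch(user_id, []) # stand-in lookup
    ToyQueue.enqueue(NotifyFollowersJob, user_id, follower_ids)
  end
end

# Stage 2: receives exactly the parameters it needs, nothing more.
class NotifyFollowersJob
  def self.perform(user_id, follower_ids)
    follower_ids.each { |fid| RESULTS << "notified #{fid} about user #{user_id}" }
  end
end

ToyQueue.enqueue(FetchFollowersJob, 1)
# RESULTS => ["notified 10 about user 1", "notified 11 about user 1"]
```

Because each stage is its own class with a narrow argument list, a failure in notification can be retried without re-fetching followers, and each stage can be tested in isolation.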
00:17:31.040 Now, let's discuss where to save temporary data between the stages. Directly saving to Redis works well, as does passing parameters as needed. When you need time-specific operations, passing explicit time windows is far more effective than making assumptions on when jobs should execute.
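The time-window advice looks like this in practice: the enqueueing side decides the window once and passes it as explicit arguments, rather than letting the job compute "now" when it eventually runs. The job name is hypothetical:

```ruby
require 'time'

class HourlyStatsJob
  # The window arrives as ISO 8601 strings because Resque arguments
  # are JSON-serialized; Time objects would not survive the trip.
  def self.perform(window_start_iso, window_end_iso)
    window_start = Time.parse(window_start_iso)
    window_end   = Time.parse(window_end_iso)
    # A real job would query events between window_start and window_end.
    # Because the window is an argument, a retried or delayed run still
    # processes exactly the same hour. Here we return the window length.
    (window_end - window_start).to_i
  end
end

HourlyStatsJob.perform("2013-08-01T10:00:00Z", "2013-08-01T11:00:00Z") # => 3600
```

Had the job called 'Time.now' internally, a job delayed in the queue by twenty minutes would silently process the wrong window.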
00:18:04.320 With many different worker types, handling failed jobs can become tricky. Each asynchronous job has its own failure conditions, but most libraries provide only minimal built-in handling for them, so it's wise to develop custom failure handling in your application.
00:18:47.600 Lastly, using a good log aggregator can greatly enhance your logging experience when working with asynchronous workers. Having dozens of workers' logs consolidated into one easily searchable format can vastly improve your ability to track down issues.
00:19:29.440 Logging in Resque is configurable, and you can write logs to standard output or route them alongside the Rails logs. However, I would advise against mixing worker output into the Rails logs: asynchronous workers tend to be verbose and can generate far more detail than you want cluttering your application log.
00:20:08.720 Now let’s briefly cover the final pattern: periodic maintenance tasks. Resque and its counterparts can also effectively manage periodic jobs, enabling you to automate tasks such as data collection, periodic reporting, or collating information.
00:20:52.480 To execute periodic tasks, create a worker for the job and a rake task to enqueue it. If you have multiple periodic tasks to run, implement a single task that finds what needs to be done and enqueues jobs appropriately. For simple, routine scheduling, cron works well, or consider the 'whenever' gem for a friendlier syntax if you're not fond of the standard cron representation.
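The worker-plus-rake-task pattern can be sketched as follows. The job and task names are hypothetical, an in-memory array stands in for Redis so the sketch runs on its own, and in a Rails app the task would also depend on :environment so models are loaded:

```ruby
require 'rake'

# In-memory stand-in for Redis, so the sketch is self-contained.
ENQUEUED = []

class NightlyReportJob
  @queue = :reports
  def self.perform(date_string)
    # A real job would collate that day's data into a report.
  end
end

include Rake::DSL

desc "Enqueue the nightly report job"
task :enqueue_nightly_report do
  # The real body would be:
  #   Resque.enqueue(NightlyReportJob, Date.today.to_s)
  ENQUEUED << { "class" => "NightlyReportJob",
                "args"  => [Time.now.strftime("%Y-%m-%d")] }
end

# A crontab entry (path assumed) then runs the task every night at 2am:
#   0 2 * * * cd /var/www/app && bundle exec rake enqueue_nightly_report
```

The rake task stays tiny on purpose: cron's only job is to enqueue, and the worker process does the actual work on its own schedule.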
00:21:46.560 Just a note: if you're on a cloud service that doesn't grant you access to cron, they usually provide alternatives that you can use instead. While 'resque-scheduler' functions well, its future may be uncertain as it is currently without a maintainer, so be cautious about depending on it.
00:22:17.840 To summarize, we've covered isolating slow activities, pipelining processing, and periodic maintenance tasks; all these techniques can work harmoniously together. Concurrency is indeed challenging. It's essential to remember to approach it with care, embrace shared nothing, and utilize idempotent transactions.
00:23:07.040 Remember the four rules of concurrency: firstly, avoid doing it if you can. If not, avoid sharing data across threads or workers, and be cautious when it comes to mutable data. Evaluate if the concurrency methods provide real benefits; if they do, the effort spent will be worthwhile.
00:24:07.760 I hope you've found this talk insightful. As I mentioned, I’d love to take your questions outside if there's no time here. If you prefer, feel free to reach out to me on Twitter at @happy_mr_dave. I will also be posting the slides online later, so please check my GitHub account for them. Thank you very much!
Explore all talks recorded at LoneStarRuby Conf 2013