
Concurrency Showdown: Threads vs. Fibers

by Vishwajeetsingh Desurkar and Ishani Trivedi

In their conference talk at RubyConf AU 2024, Vishwajeetsingh Desurkar and Ishani Trivedi delve into the enhancements in concurrency within Ruby 3.0, highlighting the trade-offs between traditional threads and lightweight fibers. They discuss various aspects of concurrency, a crucial element in optimizing programming for performance, scalability, and efficient resource management.

Key Points Discussed:
- Understanding Concurrency: Concurrency is defined as the simultaneous execution of multiple tasks or processes, often utilizing shared memory for coordination.
- Importance of Threads: Threads are presented as lightweight processes that can improve throughput through multithreading, although in CRuby the Global Interpreter Lock (GIL) allows only one thread to execute Ruby code at a time, limiting true parallelism.
- Introduction to Fibers: Fibers are introduced as a more efficient alternative to threads, especially in Ruby 3.0, offering a simple way to handle asynchronous operations without incurring significant memory overheads and avoiding the complexities associated with race conditions.
- Race Conditions and Synchronization: The presentation includes a discussion of race conditions in multi-threaded applications, illustrated through a banking example. Threads require mutex locks for synchronization, while fibers largely avoid race conditions because control transfers only at explicit, developer-chosen yield points.
- Deadlocks: They explain deadlocks as a scenario where threads wait indefinitely for resources. Fibers mitigate this issue due to their non-preemptive nature and the direct control developers have over execution flow.
- Interrupt Handling: The management of interrupts is addressed; fibers are less susceptible to interrupt-related problems than threads because their execution flow is explicitly controlled by the developer.
- Practical Applications: The speakers conclude with a practical example using both threads and fibers in an image processing service, where fibers manage I/O-bound tasks while threads handle CPU-intensive work.

The talk emphasizes that both methods have their strengths and weaknesses; threads excel in CPU-bound tasks, while fibers shine in I/O-bound tasks. Ultimately, they advocate for hybrid solutions where both concurrency models can be effectively utilized together to maximize application performance. The session highlights not just technical nuances but a passion for Ruby, aiming to empower developers with effective strategies to enhance concurrency in their applications.

00:00:03.679 Hello everyone! This is my first time traveling abroad from India and speaking at a conference outside my country. Today, I will be discussing the topic of "Concurrency Showdown," which focuses on how we can efficiently manage concurrency in Ruby.
00:00:14.360 My name is Vishwajeetsingh Desurkar, but you can call me Vishi. Joining me is Ishani Trivedi. We are both from India, and it's our first time in Australia. The weather here is quite nice! Coming from different cultures—I'm from Ahmedabad and Ishani is from Pune—we can see how despite being just a few hundred kilometers apart, the cultural differences are quite significant.
00:00:37.680 India is often described as a diverse country. For instance, Pune, where I come from, is a melting pot of cultures with historical forts dating back to the 1600s and 1700s. It's known as the "Oxford of the East" due to its numerous colleges. On the other hand, Ahmedabad is famous for its vibrant Navaratri festival, which lasts for nine colorful nights and is recognized as a World Heritage City.
00:01:13.680 Now, let's dive into the main topic: What is concurrency? Concurrency refers to multiple processes or tasks executing at the same time. How is concurrency implemented in programming? It often involves shared memory or data among the tasks or processes, enabling them to coordinate and work simultaneously.
00:01:25.760 Concurrency is important as it maximizes performance and, if used efficiently, increases scalability. To help illustrate this, I like to refer to threads, which can be thought of as lightweight processes. This diagram shows how multithreading essentially involves spawning multiple workers to optimize system performance.
00:01:45.760 Take, for example, a basic task of mapping over a set of search results sequentially. This takes a certain amount of time. We can optimize it by converting it into a threaded version, where each search result is processed in its own thread. However, keep in mind the practical limits on how many threads you can spawn.
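The sequential-versus-threaded mapping described here can be sketched roughly as follows; `slow_lookup` is a hypothetical stand-in for an I/O-bound search call, not code from the talk:

```ruby
# Sequential version: each lookup runs one after another.
def fetch_all_sequential(ids)
  ids.map { |id| slow_lookup(id) }
end

# Threaded version: spawn one thread per id, then collect each
# thread's return value with Thread#value (which also joins it).
def fetch_all_threaded(ids)
  ids.map { |id| Thread.new { slow_lookup(id) } }.map(&:value)
end

# Hypothetical stand-in for an I/O-bound search call.
def slow_lookup(id)
  sleep 0.05 # simulates network latency
  "result-#{id}"
end
```

Because each lookup mostly waits on I/O, the threaded version finishes in roughly the time of one lookup instead of the sum of all of them.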
00:02:03.640 Next, let's discuss the Global Interpreter Lock (GIL), also known in recent Ruby versions as the Global VM Lock (GVL). In traditional multithreading, we spawn multiple threads to improve CPU utilization, but the GIL allows only one thread to execute Ruby code at a time, which can undermine the performance gains we seek from concurrency.
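A minimal way to observe this effect on CRuby: two CPU-bound computations take about as long on two threads as they do run back to back, because only one thread can execute Ruby code at a time. (The exact timings will vary by machine; this is a sketch, not a rigorous benchmark.)

```ruby
require 'benchmark'

# A pure-CPU computation; while it runs, the thread holds the GVL.
def cpu_work(n)
  (1..n).reduce(:+)
end

N = 2_000_000

sequential = Benchmark.realtime { 2.times { cpu_work(N) } }
threaded   = Benchmark.realtime do
  2.times.map { Thread.new { cpu_work(N) } }.each(&:join)
end

# On CRuby, `threaded` is roughly the same as `sequential` (or slightly
# worse), because the GVL serializes execution of Ruby code.
puts format("sequential: %.3fs, threaded: %.3fs", sequential, threaded)
```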
00:02:45.360 While I may be advocating for threads, I also acknowledge their limitations. With threads, the operating system manages scheduling and the interleaving of work. Fibers are another approach: lightweight, cooperatively scheduled concurrency primitives in Ruby.
00:03:09.360 Fibers operate within a thread and can handle asynchronous input/output operations without blocking the entire thread, and they carry much lower memory overhead than threads. Although fibers were introduced in Ruby 1.9, their popularity surged with the introduction of the fiber scheduler in Ruby 3.0.
00:03:25.760 The fiber scheduler allows multiple tasks to be scheduled via fibers, giving developers control over how and when fibers are executed. For example, developers can use Fiber.yield to pause a fiber and fiber.resume to continue it, explicitly controlling the concurrency in their code.
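The pause-and-resume control described here can be sketched with Ruby's core Fiber API (no scheduler involved):

```ruby
fiber = Fiber.new do
  Fiber.yield "step 1" # pauses here, handing control back to the caller
  Fiber.yield "step 2"
  "done"               # the block's final value, returned by the last resume
end

steps = []
steps << fiber.resume # runs until the first Fiber.yield
steps << fiber.resume # runs until the second Fiber.yield
steps << fiber.resume # runs to completion
# steps => ["step 1", "step 2", "done"]
```

Nothing inside the fiber runs until the caller explicitly resumes it; that explicit hand-off is what the speakers mean by developers controlling the concurrency.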
00:04:19.680 This leads us to the question: How do fibers handle race conditions? Unlike threads, where race conditions can often occur, fibers avoid this issue as developers control when to yield execution.
00:04:37.680 To illustrate race conditions, consider a scenario with a simple bank account class, where multiple threads depositing into the same balance can lead to inconsistencies. To mitigate this, we can use synchronization mechanisms like mutex locks to ensure that only one thread enters the critical section at a time. With fibers, because control is transferred manually, race conditions are practically non-existent.
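A minimal sketch of the banking example, assuming a simple `BankAccount` class like the one the speakers describe (the class name and amounts are illustrative):

```ruby
class BankAccount
  attr_reader :balance

  def initialize
    @balance = 0
    @mutex = Mutex.new
  end

  # Without the mutex, the read-modify-write on @balance could interleave
  # between threads, silently losing deposits.
  def deposit(amount)
    @mutex.synchronize do
      current = @balance
      @balance = current + amount
    end
  end
end

account = BankAccount.new
threads = 10.times.map { Thread.new { 100.times { account.deposit(1) } } }
threads.each(&:join)
account.balance # => 1000 with the mutex in place
```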
00:04:56.880 Now, moving on to deadlocks—another major concern with threading. Deadlocks occur when processes wait indefinitely for resources held by each other. To avoid deadlocks in threaded environments, we must ensure a specific order when acquiring locks, set timeouts for locks, and maintain synchronization effectively.
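The lock-ordering advice can be sketched like this; `transfer` is a hypothetical helper that enforces one global acquisition order, not code from the talk:

```ruby
lock_a = Mutex.new
lock_b = Mutex.new

# Deadlock-prone pattern: thread 1 takes A then B while thread 2 takes
# B then A. The fix sketched here: every caller acquires the locks in
# the same global order (A before B), so circular waiting cannot occur.
def transfer(first, second)
  first.synchronize do
    second.synchronize do
      yield
    end
  end
end

results = []
t1 = Thread.new { transfer(lock_a, lock_b) { results << :t1 } }
t2 = Thread.new { transfer(lock_a, lock_b) { results << :t2 } }
[t1, t2].each(&:join)
```

For the timeout strategy the speakers mention, `Mutex#try_lock` (give up immediately if the lock is held) is the core-library building block.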
00:05:34.720 On the contrary, fibers manage deadlock scenarios more gracefully due to their lack of preemptive scheduling. If a fiber is yielding, it's up to the programmer to ensure that execution flows correctly, preventing potential deadlocks. If a programmer forgets to call yield, the flow could stall, but this is a controllable scenario.
00:06:12.960 As we wrap up this comparison, let's address interrupt handling in threads versus fibers. Interrupts can be tricky: threads are generally more exposed to interrupt-related problems, while fibers, with their manual control over execution flow, are less affected.
00:06:48.800 In the case where we need to halt a thread, programming strategies such as using flags or timeouts to prevent indefinite execution can help. Alternatively, you can directly terminate a thread by killing it. However, for fibers, the execution is more manageable without needing such drastic measures.
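The flag-based and kill-based strategies mentioned here can be sketched as follows (the worker bodies are placeholders for real work):

```ruby
# Cooperative stop flag: the worker checks the flag and exits cleanly,
# leaving any shared state consistent.
stop = false
worker = Thread.new do
  sleep 0.01 until stop # stand-in for repeated units of work
  :stopped
end

sleep 0.05
stop = true
worker.value # => :stopped (Thread#value joins and returns the result)

# Drastic alternative: Thread#kill terminates the thread abruptly,
# which risks leaving shared state half-updated.
slow = Thread.new { sleep }
slow.kill
slow.join
```

`Thread#join` also accepts a timeout in seconds, which covers the "prevent indefinite execution" case without killing anything.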
00:07:14.240 Now let’s kick off round one with race conditions in multi-threaded programming! Race conditions arise from unpredictable thread scheduling, leading to uncertain outcomes. This challenge emphasizes the importance of synchronization. Synchronization only works when applied consistently; a single unguarded access to shared state reintroduces the race, so it's crucial for developers to understand this well.
00:08:01.360 For example, in a banking application, multiple deposits might lead to unexpected results due to a race condition. To solve it, mutex locks can be employed for synchronization, allowing only one thread to operate on the balance at any moment. On the other hand, fibers handle these issues inherently since the control is programmed.
00:08:48.360 In this round, we take a moment to appreciate that fibers do not encounter race conditions because of the manual control given to developers. The scheduler does not preempt execution, ensuring that at any given time, only one fiber runs.
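A sketch of the same deposit scenario with fibers, illustrating the non-preemption point (the amounts and structure are illustrative):

```ruby
balance = 0

# Two fibers take turns depositing. Control transfers only at the
# explicit Fiber.yield, so the increment below can never be interrupted
# mid-update the way a preempted thread can.
depositors = Array.new(2) do
  Fiber.new do
    50.times do
      balance += 1
      Fiber.yield
    end
  end
end

depositors.each { |f| f.resume if f.alive? } until depositors.none?(&:alive?)

balance # => 100, with no mutex anywhere
```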
00:09:34.640 At the conclusion of round one, we must ask you: Who won this round, threads or fibers? Raise your hands if you think fibers did! Now, what about threads? It’s a close call, folks! Let’s move to round two, where we’ll discuss deadlocks.
00:10:06.960 Deadlocks are situations of mutual blocking where processes wait endlessly on each other. To handle deadlocks in threads, it's vital to define a strict lock acquisition order and implement timeouts effectively. Meanwhile, fibers can avoid deadlocks altogether by ensuring explicit yields in programming; however, programmers must remain vigilant throughout.
00:11:14.480 Moving to round three, let’s explore how interrupts are managed between the two methods. Interrupts are less problematic for fibers since developers maintain control over their execution. In contrast, threads may need explicit timeout checks or the ability to terminate them when they take too long.
00:12:33.680 This leads us to a lightning round! What are your concerns regarding threads? They are often complex and difficult to debug, especially when issues in production arise. Fibers, on the other hand, offer greater control, yet they lack native parallelism.
00:13:52.120 Despite their differences, both threads and fibers have their pros and cons. Threads can manage heavy CPU-bound tasks, while fibers excel in I/O-bound tasks thanks to the lightweight structure they possess. The shared memory structure in threads might lead to inconsistencies if not handled properly, while fibers do not suffer from this issue due to their controlled execution.
00:15:15.760 Let’s end this with an example where both threads and fibers can be utilized together for an image processing service. Here, we use fibers to retrieve data from URLs, leveraging their asynchronous capabilities, while threads handle the CPU-intensive transformation of the images.
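A simplified sketch of that hybrid pipeline. The URLs and fetch step are simulated here; in a real service, running the fetch fibers under a fiber scheduler (for example, via the async gem) would let the network waits overlap.

```ruby
require 'digest'

urls = %w[https://example.com/a.png https://example.com/b.png] # hypothetical

# Stage 1 (I/O-bound): fetch each URL inside a fiber. The fetch is
# simulated with a string standing in for the downloaded bytes.
fetched = []
urls.each do |url|
  Fiber.new { fetched << "bytes-of-#{url}" }.resume
end

# Stage 2 (CPU-bound): process each "image" on its own thread; hashing
# stands in for the CPU-intensive transformation step.
digests = fetched.map { |data| Thread.new { Digest::SHA256.hexdigest(data) } }
                 .map(&:value)
```

The split follows the talk's rule of thumb: fibers for the waiting, threads for the crunching.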
00:16:39.120 In conclusion, while threads and fibers can work together to enhance performance, we also touched on the need for higher-level solutions, like incorporating additional frameworks or utilizing procedural programming to overcome the limitations of either method.
00:18:27.680 Before we close, let’s discuss 'josh'—a Hindi word that means passion. We both share a profound dedication to Ruby, and over the years, have organized numerous events surrounding it. Our shared love for the language has fueled our journey together.
00:19:40.000 Thank you for your attention, and feel free to connect with us on social media if you’d like to discuss further about building a 'Justice League' of sorts in programming.
00:19:59.200 Thank you, everyone!