00:00:14.660
All right, so yesterday I took a look at our Twitter account to see what was going on. I saw some speakers spending their beach time doing last-minute updates to their slides. I talked to some speakers who hadn't updated their slides at all, and one speaker even freaked out because he thought his session was only 30 minutes long and didn't know what to do. Apparently, instead of cutting down on slides, he was adding more! I'm not sure that really works out very well. However, I did run into something special from our keynote speaker, Mr. Patterson.
00:01:03.539
I was wondering what this meant and what we could really do about it. I'm guessing he must really want spam, or he's a really big fan of it. So what's the situation here? Did you get to eat spam yet? Oh, that's great! Because we have one for you right here! If you can come on up, I'll give it to you. Awesome, thank you! Yes, spam. This is Hawaii; we can't let you come here and leave without it.
00:01:42.110
All right, who's ready for day one? Who here wants me to get off stage and let this guy talk? All right, let's do that! So I'm going to talk about Rails 4 and the future, or as I like to call it, Rails 4 for you and me. I was told that you're not supposed to introduce yourself when you're giving keynotes, but since I'm not very good at speaking, I think I will anyway. My name is Aaron Patterson, and if you want to follow me on Twitter, you can do so.
00:02:08.090
For the record, I'm not introducing myself here. I just want to say hi to everyone first. I also have to start out by thanking some people. First, I have to thank my employer, AT&T. Without them, I wouldn't be here, so thank you! I also want to thank the conference organizers for having this conference in such an amazing location. I've never been to Hawaii before; this is my first time, and I'm super excited. The reason I'm excited is because I love spam! Thank you!
00:03:01.389
Also, my favorite TV shows are set here, like Dog the Bounty Hunter. I was looking for him last night but couldn't find him. I also love Magnum, P.I., as you can see by my mustache. If you don't believe me, I even named my computers after Magnum, P.I. characters. I wanted to share a quick story. While flying to Hawaii, I had to go to the bathroom, and the guy next to me in the same row also had to go.
00:03:32.410
He went first and then came back, and when I went to the bathroom, I noticed there was a five-dollar bill on the floor right in front of the toilet. I figured he must have dropped it, so I picked it up, returned to his seat, and gave him the money, saying, 'Hey, you must have dropped this.' Then I went back to my seat and the crew was collecting donations on the aircraft for breast cancer awareness. If you donate money, you get entered into a raffle for prizes.
00:04:10.890
Needless to say, I didn’t think much of it since I wasn’t expecting to win anything. So I donated the five dollars, thinking it wouldn’t hurt anyone. Of course, I ended up winning the raffle and was the first to pick a prize! I could have chosen chocolates, champagne, and all sorts of things, but I didn’t know what to do since it wasn’t my money. In the end, I picked the chocolates, but now it feels like I have these illegal chocolates that I shouldn’t have. I think I'll just eat them in my hotel room and not tell anyone that I won them, but I guess it's too late for that.
00:05:14.200
I don’t know if you can tell, but I am insanely nervous on this stage. One of the ways I comfort myself is by remembering advice a friend gave me. He told me when I feel nervous, I should ask myself, 'What would Freddie Mercury do?' So now, every time I give a talk, I try to think of Freddie Mercury. Now, I know most speakers say to imagine the audience in their underwear, but for me, I imagine I'm on stage in my own underwear. I figure that's what Freddie Mercury would do.
00:05:58.010
I also want to introduce you to my cat, whose full legal name is Gorbachev Puff-Puff Thunderhorse. We call him Gorby Puff, and I love him a lot. He’s the first cat I've owned, and I thought ownership would be 99% fun. I imagined we would go bike riding and get ice cream together, but it turns out 99% of the time, he’s just sleeping. I try to take pictures of him, but the only ones I ever manage to capture are of him yawning, as he is always about to go to sleep.
00:06:57.000
So we never get to go bike riding together. If you want to see more pictures of him yawning, you can follow him on Twitter! All right, so let's talk about Rails 4 – for you and me. We'll look at some of the features of Rails 4 and discuss the future of the web. Now, despite my looks, I'm not a television psychic, so I can't tell you exactly what’s going to happen in the future. What I can share is where I think it’s going.
00:07:24.640
The point of my talk is to get ideas flowing, to get all of us thinking about where I believe we're heading. We're going to look at some behaviors in Ruby, then at some changes in Rails, and then at how those changes interact with the web, working our way from the server out toward the client.
00:08:36.040
First, I want to talk about concurrency in Ruby, or parallelization. I like to shorten that to p13n, though I can never remember whether I've counted the letters correctly. Most of you are probably familiar with this: MRI (Matz's Ruby Interpreter) has a GIL (Global Interpreter Lock), which prevents concurrent CPU execution. That means we cannot schedule two threads to run on two different CPUs simultaneously. If you want a Ruby interpreter that can do that, look into alternatives like JRuby or Rubinius; those are GIL-free alternatives.
00:09:56.480
Now, here is some good news: the GIL was removed in 1.9! So we can be super happy about that. But here comes the bad news: it was simply replaced with a GVL, a Global VM Lock, which serves the same function; it just has a 'V' instead of an 'I.' If you read the 1.9 source code, you'll see plenty of references to the GVL. This leads to the question: is MRI useless for p13n? It might seem like it, since your programs don't actually run in parallel; they just seem to.
00:10:52.230
This is a philosophical question better left to someone smarter than me. What I want to do is look at the impact of the GVL on MRI and see what it means for our day-to-day Ruby usage. I like to use the Fibonacci sequence to illustrate how the GVL impacts MRI, since I work in online advertising and most of online advertising is calculating Fibonacci numbers. So I run benchmarks on this sequence to demonstrate certain behaviors.
00:12:37.060
For example, running a standard Fibonacci sequence computation on my machine takes about 5.7 seconds. I figured I need to deliver ads faster, so I calculated the Fibonacci sequence using threads. I've got four CPUs on my machine, so I tried running four threads to calculate the Fibonacci sequence in parallel. I expected a timing improvement, but it took exactly the same amount of time to complete. This is because time spent in the VM cannot be done in parallel, hence the term GVL.
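To make that concrete, here is a minimal sketch of the kind of benchmark I mean; the naive fib and the thread count are just illustrative:

```ruby
require 'benchmark'

# A deliberately naive, CPU-bound Fibonacci.
def fib(n)
  n < 2 ? n : fib(n - 1) + fib(n - 2)
end

N = 30

# Sequential: four computations, one after another.
puts Benchmark.measure { 4.times { fib(N) } }

# "Parallel": four threads on a four-core machine. On MRI this takes roughly
# the same wall-clock time, because the GVL keeps two threads from running
# Ruby code on two CPUs at once.
puts Benchmark.measure { 4.times.map { Thread.new { fib(N) } }.each(&:join) }
```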
00:13:41.550
In practical terms, time spent inside the virtual machine cannot run in parallel. One solution is to switch to JRuby or Rubinius for true multithreading on your machine. In practice, many developers use multiple processes instead, which is what happens when you run your Rails application under a web server like Unicorn: it starts multiple Ruby processes to handle requests concurrently.
00:14:50.660
Next, I want to discuss how to handle slow web services in our applications. In online advertising we have to deal with slow web services all the time. I have a simple web service here that is ridiculously slow: it returns 'hello world' with a built-in delay of half a second per request. If I make these requests without threads, one after another, the whole thing takes over two seconds, because the half-second delays add up.
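Roughly what the client side looks like when you hit that service sequentially; the endpoint URL here is made up:

```ruby
require 'net/http'

# Hypothetical slow endpoint: imagine the server sleeps 0.5 seconds before
# answering "hello world" to every request.
SLOW_URI = URI('http://localhost:9292/hello')

start = Time.now
4.times { Net::HTTP.get(SLOW_URI) }       # four requests, one after another
puts "sequential: #{Time.now - start}s"   # roughly 4 * 0.5s = 2+ seconds
```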
00:15:59.580
Now, remember, we supposedly can't execute anything in parallel with threads because of the GVL. But if we run the threaded version anyway, it still completes in about half a second. This works because when a thread is waiting on a socket, the interpreter knows it isn't executing any Ruby code, so it releases the lock and lets the other threads run while that thread waits for data.
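And the threaded version of the same sketch, against the same made-up endpoint:

```ruby
require 'net/http'

SLOW_URI = URI('http://localhost:9292/hello')   # same hypothetical endpoint

start = Time.now
# One thread per request; while a thread waits on the socket, MRI releases
# the GVL and lets the other threads run.
4.times.map { Thread.new { Net::HTTP.get(SLOW_URI) } }.each(&:join)
puts "threaded: #{Time.now - start}s"   # about 0.5s instead of 2+
```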
00:16:58.279
There is a specific set of operations that behave this way, chiefly I/O. For those interested in C extensions, the function in MRI is called `rb_thread_blocking_region`: it releases the GVL, lets other threads execute in parallel, and then reacquires the lock. You can use it for CPU-intensive work too, such as cryptography, where the execution happens outside the Ruby virtual machine.
00:18:14.270
So, what does it mean to run things in parallel in MRI? From the perspective of advertising, it means we need to consider building 'Fibonacci as a Service', or as I like to call it, FaaS! Yes, I'm looking for investors, so please come talk to me after this session. More importantly, it means that a block in the VM blocks everyone. If you come across a library promising fiber- or thread-based execution that runs Ruby code in parallel on MRI, chances are it's adding more complexity than actual benefit.
00:19:31.629
Being mindful of threads is essential: if you are doing I/O on MRI, threads matter immensely. Much of what a web application does, or at least a good portion of your code, is I/O, and that is exactly the part you want to overlap. So threaded web servers like Puma, which process requests in parallel by threading the I/O, should prove beneficial and will only become more important in the future.
00:20:07.589
Now, let's dig into thread safety in Rails and the changes we've made so that it's safe for developers by default. I want to cover the common thread-safety concerns we ran into and the fixes we considered essential for a typical application. The first change we made was removing `config.threadsafe!`. The option still exists, but it's a no-op and effectively redundant.
00:21:21.480
You can still call it, and it will just print a message along the lines of: cool story bro, you're already thread-safe! This raises an important question: why remove this configuration flag at all? If we strive to always write thread-safe applications, then checking a flag just to decide whether to behave safely makes no sense.
00:22:06.020
It also unnecessarily complicates the Rails codebase with branches that check for thread safety; eliminating those checks simplifies the code and makes future maintenance easier. However, you might wonder whether it is truly safe to delete the thread-safe flag, and to answer that we should look at what the flag actually did.
00:23:17.840
The thread-safe flag essentially set four configuration options. But first, remember that loading code isn't thread-safe. Even though the `require` method has been made thread-safe in implementations like JRuby, loading code concurrently still isn't something you want: circular requires can lead to deadlocks, which is a scenario we want to avoid during Rails application startup.
00:24:30.830
As for the configuration options: the first is preloading the frameworks, so all of Rails is loaded up front. Rails is normally lazy-loaded, so referencing an Active Record model is what triggers its loading; preloading ensures all the code is there from the start. We also enable class caching so code isn't reloaded in production, since reloading could lead to deadlocks.
00:25:55.340
Next, we disable dependency loading, so that referencing a constant no longer triggers autoloading; since all your code is preloaded, Rails doesn't need to go out and find those constants. Finally, the most contentious option: enabling concurrency, which removes a middleware called `Rack::Lock`.
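For reference, those four settings look roughly like this in a Rails 3 era environment file; the option names are from memory and MyApp is a placeholder, so treat this as a sketch rather than a definitive listing:

```ruby
# config/environments/production.rb: roughly what config.threadsafe! enabled.
MyApp::Application.configure do
  config.preload_frameworks = true   # load all of Rails up front
  config.cache_classes      = true   # never reload application code in production
  config.dependency_loading = false  # don't autoload on constant reference
  config.allow_concurrency  = true   # remove the Rack::Lock middleware
end
```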
00:27:06.830
That middleware is pointless in a multi-process setup, where each process runs a single thread, so its overhead is pure liability. In a multithreaded environment it's worse: `Rack::Lock` wraps each request in a lock, so a second thread cannot process a request while the first thread is still working on one.
00:28:53.780
I suspect this default is part of why nobody chooses threaded servers: developers start one up, realize it only processes one request at a time, and move on to something else. So, with the thread-safe flag removed and its behavior now the default, you're probably wondering how this affects boot time in production. Boot time may increase, since we're preloading all your code, but remember that you were already paying that cost; it was just spread across your first several requests.
00:30:32.430
The practical outcome is that your application should run at about the same speed once production has warmed up, and threaded servers should now work without further fuss. Interestingly, there was a survey asking which web server is most popular, and amusingly it didn't include WEBrick, which is the default web server that ships with Ruby and is what you get on Heroku if you don't specify a server.
00:31:26.620
WEBrick is actually a threaded web server, which makes it an interesting one to consider. It also means that many people pushing applications to Heroku may not realize they're running WEBrick, since it stays out of the larger conversation. Removing the thread-safe option was not the only change needed to make Rails support threaded applications; we also had to fix bugs that produced thread-unsafe behavior.
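For example, on Heroku you would opt out of that default by naming a server in your Procfile, something like the line below (assuming Puma is in your Gemfile); otherwise you fall back to plain `rails server`, which means WEBrick:

```
web: bundle exec puma -p $PORT
```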
00:32:31.580
I want to walk through some common scenarios where we encountered bugs, primarily around caching, and what you can do to fix similar threading issues in your own Rails applications. Almost all of the bugs we found were race conditions stemming from caching, which is actually good news: we didn't run into any deadlock issues.
00:33:18.230
Take memoization. We tend to forget that `||=` is not atomic: it reads the variable, checks it, and then writes it, so the calculation on the right-hand side can run more than once across different threads. If the memoized data is shared across threads, that work ends up duplicated, and one thread's result can silently overwrite another's.
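A sketch of the pattern, with made-up names:

```ruby
class Stats
  # Not atomic: each thread reads @all, may see nil, and then runs the query.
  # Two threads hitting this at the same moment both do the expensive work,
  # and one result silently overwrites the other.
  def self.all
    @all ||= expensive_calculation
  end

  def self.expensive_calculation
    sleep 1              # stand-in for a slow query or computation
    [:a, :b, :c]
  end
end
```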
00:34:34.290
One way to handle this is eager initialization: if you're worried about multiple threads racing on the same instance variable, set it when the class boots. Another is synchronization: put a mutex around the calculation so that only one thread can execute that block of code at a time. It can also help to move from class methods to instance methods, since classes and their state are shared across threads while per-request instances usually aren't.
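Sketches of those three approaches, again with made-up names:

```ruby
class Stats
  def self.expensive_calculation
    sleep 1              # stand-in for a slow query or computation
    [:a, :b, :c]
  end

  # 1) Eager initialization: do the work once at boot, before any threads exist.
  @all = expensive_calculation

  # 2) Synchronization: a mutex ensures only one thread runs the calculation.
  MUTEX = Mutex.new
  def self.all
    MUTEX.synchronize { @all ||= expensive_calculation }
  end
end

# 3) Prefer instance state over class state: each object (say, one per request)
#    gets its own cache, so nothing is shared between threads.
class StatsPresenter
  def all
    @all ||= Stats.expensive_calculation
  end
end
```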
00:35:54.780
Another common problem we encountered involved hash operations, related to the `||=` issue above. A method can look safe, but checking a shared hash for a key and then storing into it is the same read-check-write race. One workaround is, again, synchronizing access with a mutex.
00:37:08.310
Another alternative is to reach for libraries that provide thread-safe hashes and other concurrency tools. This, I think, is a real limitation: Ruby's standard library doesn't ship many primitives for thread safety, which makes it harder for Ruby developers to write thread-safe code.
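As one example (my suggestion, not something in the standard library): the concurrent-ruby gem provides a thread-safe map whose compute_if_absent does the check-and-store atomically:

```ruby
require 'concurrent'   # gem install concurrent-ruby

CACHE = Concurrent::Map.new

def heavy_lookup(key)
  sleep 1              # stand-in for real work
  key.to_s.upcase
end

# compute_if_absent runs the block at most once per key, even when many
# threads ask for the same key at the same time.
def expensive(key)
  CACHE.compute_if_absent(key) { heavy_lookup(key) }
end
```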
00:38:47.790
Shifting the focus to the application level: many of you are working on Rails applications, so what can you actually do there to stay thread-safe? The first step is strangely simple: avoid shared data. And once you can identify the shared state you can't avoid, it becomes manageable to put locks around it so access stays synchronized under load.
00:39:55.440
Also remember that creating threads by hand is quite unusual. If you aren't calling `Thread.new` yourself, your main job is spotting shared data in your application logic. That shared data usually hides in global variables, constants, and class-level state, and it's what leads to instability across threads.
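The usual hiding places look something like this; the class and variables are hypothetical:

```ruby
require 'set'

# Shared state tends to hide in globals, in constants that point at mutable
# objects, and in class-level variables; every thread in the process sees these.
$request_count = 0        # global variable: shared across threads
SEEN_IDS = Set.new        # a constant, but the Set it points to is mutable and shared

class Tracker
  @@events = []           # class variable: shared across threads

  def self.record(event)
    @@events << event     # unsynchronized append from many threads
  end
end
```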
00:41:08.740
Moving out toward the web, I want to talk about streaming. In Rails 4, streaming is not a new feature per se; the work was about making it easier to use in your applications. To see why it matters, consider how Rails renders templates at a high level and what that means for memory: Rails buffers the entire rendered view and only then sends the result to the client.
00:42:55.860
The downside of this process is that the client is blocked while Rails processes the request, which is neither optimal nor scalable. The client just sits there waiting for information while the server works, which can mean frustrating delays. On top of that, the entire page must live in memory until it is completely generated, so the buffer keeps getting resized and memory consumption grows as more strings are built up.
00:44:32.860
But with `ActionController::Live`, we can give our controllers an I/O-like API for streaming data down to the client. In a Rails 4 application, including `ActionController::Live` lets us stream responses: we can send data progressively as it becomes ready, so clients don't have to wait for the whole page, and the interaction feels much more responsive. Concretely, a controller can write to the response stream in real time.
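Here is a minimal sketch of what that can look like in a Rails 4 controller; the controller name and the payload are made up:

```ruby
class TickerController < ApplicationController
  include ActionController::Live

  def index
    response.headers['Content-Type'] = 'text/plain'
    10.times do |i|
      # Each write goes down the wire immediately instead of being buffered.
      response.stream.write "tick #{i}\n"
      sleep 1
    end
  ensure
    # Always close the stream so the connection is released.
    response.stream.close
  end
end
```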
00:45:50.490
One particularly exciting use case is server-sent events. This technology lets the server push updated data to clients continuously: imagine real-time updates that keep your users' views current without them having to refresh.
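With the same streaming API you can speak the server-sent events wire format; a rough sketch with a made-up payload, which a browser could consume with EventSource:

```ruby
class EventsController < ApplicationController
  include ActionController::Live

  def index
    response.headers['Content-Type'] = 'text/event-stream'
    loop do
      # SSE frames are just "data: ...\n\n"; the browser's EventSource API
      # hands each one to your JavaScript as it arrives.
      payload = { time: Time.now }.to_json
      response.stream.write "data: #{payload}\n\n"
      sleep 2
    end
  rescue IOError
    # Client disconnected (the exact error can vary); nothing left to do.
  ensure
    response.stream.close
  end
end
```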
00:47:30.790
For instance, in the demo: the server pushes an event telling clients to reload the page whenever certain assets change, and a small JavaScript handler on the client performs the reload each time an event arrives. It makes for a smooth, seamless client-server interaction.
00:48:49.100
Coming back to our three themes of concurrency, thread safety in Rails applications, and streaming, I hope you can see how they weave through both the framework and your applications. As technology continues to evolve, with more cores showing up in everyday devices, we need to capitalize on what we have; leveraging these advancements seems paramount.
00:49:55.640
To summarize, I would encourage you all to explore and maximize performance by implementing smarter strategies. Efficiency is key in web applications and adapting to clients of varying bandwidth or latency is a challenge we’re likely to face continually as user patience wanes.
00:50:45.000
To enhance performance, I encourage you to make use of caching, deliver partial updates instead of complete loads, and offload some computational work to client-side JavaScript when possible. In conclusion, as we progress forward, I invite you to be innovative while optimizing our software practices to enhance overall user experiences. Thank you for your attention.
00:51:37.320
Now, if you have any questions, I’d love to address them after the presentation. I am not entirely sure about the protocol in terms of question timing here, but I would be happy to discuss with anyone so please feel free to approach me.
00:52:30.000
Although we may not have time for additional questions, I did have a cheeky inquiry while here: are there any specific spam shops recommended in Hawaii? I’d enjoy trying options like organic, shade-grown, or Fairtrade spam if they exist—deep-fried spam sounds delicious!
00:53:12.790
In summary, if you happen to find any interesting local delicacies, do share! And please give a shout if you have any other questions or curiosities. It's been a pleasure engaging with you, and thank you for being such an attentive audience during my talk!
00:54:12.190
Before I wrap up, I want to express that I genuinely appreciate your patience! I am excited about the future of Rails and its community; let’s continue expanding our knowledge together. Thank you!