Performance Optimization
Rails 5, Turbolinks 3, and the future of View-Over-the-Wire

Summarized using AI


Nate Berkopec • June 20, 2015 • Earth

In this talk, Nate Berkopec explores optimizing web applications using Rails 5 and Turbolinks, focusing on achieving sub-100 millisecond UI response times. Berkopec, a freelance Rails consultant, emphasizes that a common misconception is that developers need to abandon Ruby for JavaScript frameworks to improve website performance. Instead, he advocates for leveraging Rails along with Turbolinks to enhance speed without rewriting applications.

Key Points Discussed:

  • Understanding UI Response Time: Berkopec defines 'to glass' as the duration from user interaction to complete rendering, highlighting the significance of achieving reaction times under 100 milliseconds for a seamless user experience.
  • Current Performance Challenges: A typical Rails application faces latency issues, not solely due to server response times but also due to heavy client-side processing, such as document-ready handlers that delay rendering.
  • Improving Responsiveness with Turbolinks: By utilizing Turbolinks, developers can skip redundant work on each navigation and significantly reduce loading times. Existing JavaScript is retained while HTML updates are sent from the server, resulting in snappier UI interactions.
  • Real-World Comparisons: Berkopec presents a comparison between a Turbolinks implementation of a To-do MVC app and a standard Ember app, showing that the Turbolinks version exhibits reduced loading times by sidestepping unnecessary re-renders.
  • Future Enhancements in Turbolinks 3: With Rails 5’s upgrade to Turbolinks 3, new features such as partial replacement and a public API for a progress bar will further boost performance and enhance user experience.
  • Tools for Optimization: Berkopec recommends tools like the rack-mini-profiler gem and the Chrome Timeline for profiling and diagnosing performance, allowing developers to measure speed and identify bottlenecks effectively.
  • The Concept of View Over the Wire: This method allows applications to reduce the amount of JavaScript required by favoring HTML document replacements, making development simpler while retaining performance gains.

Conclusion:

In conclusion, Berkopec stresses the importance of focusing on view over the wire for more efficient application development. He shares his GitHub repository for further exploration of his Turbolinks implementations and encourages developers interested in performance enhancement to consider this approach. Berkopec's insights provide a promising pathway for future web development using Rails.


@nateberkopec
With Rails 5, Turbolinks is getting a nice upgrade, with new features like partial replacement and a progress bar with a public API. This talk will demonstrate how Rails 5 Turbolinks can achieve sub-100ms UI response times, and demonstrate some tools to help you get there.

Talk given at GORUCO 2015: http://goruco.com


00:00:13.629 Let's talk about Turbolinks because everyone loves it, right? Well, maybe not everyone, but that's why I'm here to discuss it. My name is Nate Berkopec, and I'm a freelance Rails consultant. Many of the projects I work on tend to be slow, and no business owner enjoys having a sluggish website. Today, I'll show you how you can utilize Rails to achieve sub-100 millisecond UI response times.
00:00:32.599 If you've read programming blogs, you may think that the only way to get a fast website is to throw everything out and rewrite your application using Node and some JavaScript framework. But I’m not a fan of that approach. I love Ruby and prefer to work with it instead of JavaScript and other languages. So let’s start with some definitions.
00:00:52.850 The term 'to glass' isn’t formal, but I’ll define it as the time from a user interaction, like a key press or click, until the final render is complete. It’s different from DOMContentLoaded or when the page is first ready; it’s about when the page is actually stable and the interaction feels complete, which can happen well after document ready because of multiple document-ready handlers. The important point is that while our computers have gotten dramatically faster over the past fifty years, human beings have not. Back in the 1960s, researchers at Xerox studied how fast computer interactions need to be for humans to perceive them differently. If an interaction takes less than 100 milliseconds, it is perceived as instantaneous. Between 100 milliseconds and one second, users notice the delay but still feel as if they're working directly with the data. Between one second and ten, you still have their attention, but they feel the delay. More than ten seconds and forget it; they’ve likely moved on to something else.
00:01:49.460 I want to share a typical Chrome Timeline readout from a Rails website that will remain nameless. The timeline covers the period from when I hit refresh to the final paint and shows where the time went. In a typical Rails application, most of the load time isn’t spent waiting on the server response; that portion shows up as idle time while the browser waits. A large chunk of the time is actually spent in the document-ready handlers that have accumulated over the years. These scripts can take up to a second and a half just to attach event listeners, initialize A/B testing tools, or swap fonts. Much of the delay comes from re-rendering: all the scripts executed at document ready touch the render tree and force the browser to render everything again.
00:02:59.390 Let’s break this down. For a typical Rails app, just getting a server response usually takes around 250 to 500 milliseconds. Factor that in and it's almost impossible to stay below the 100-millisecond threshold: after the server responds, you still need time to parse the HTML, build the render tree, evaluate all the JavaScript, and finally paint the result on the screen. If your app takes 250 or 500 milliseconds to respond, that's a problem, but it doesn't have to be that way. It isn't necessarily the fault of Rails; plenty of large websites achieve considerably faster response times, primarily through effective caching strategies. It's entirely possible to build a website with a 50-millisecond response time.
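As a rough illustration of that kind of caching (not taken from the talk; the TodosController and Todo model are hypothetical stand-ins), a conditional-GET setup in Rails looks like this:

    class TodosController < ApplicationController
      def index
        @todos = Todo.order(:created_at)
        # Conditional GET: if the client's cached copy is still current,
        # Rails answers 304 Not Modified and skips rendering entirely.
        fresh_when etag: @todos, last_modified: @todos.maximum(:updated_at)
      end
    end

Combined with fragment caching in the views, most requests never touch the slow parts of the stack, which is one way sites get into the 50-millisecond range.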
00:04:22.910 Now, let’s say your server response time is down to 50 milliseconds. Add roughly ten milliseconds of network time to get the request to the server and another ten for the response to come back, and you're at 70 milliseconds, leaving just 30 milliseconds for all the client-side work that previously took two seconds. How can that possibly fit without cutting the needless workload? What if I told you that you don't have to perform all of that work on every single page load? That's where Turbolinks comes in.
00:04:49.539 Traditionally, when faced with the challenge of attaching all those event listeners and performing various actions, developers turn to JSON APIs. That means implementing logic on the client side: the client receives JSON and decides how to update the view, so the client becomes responsible for managing the data flow. Many developers appreciate that flexibility, but I personally prefer to write more Ruby. A major issue with this approach is that you end up with two separate code bases, one in JavaScript and one in Ruby, each requiring different knowledge and tools. It creates division in teams, particularly at small companies that often have only a handful of engineers.
00:06:02.630 View over the wire is a simpler solution: get an HTML document from the server and replace the existing document with the new one. That isn't exactly what Turbolinks does, but it captures the essence. What's beneficial about this method is that you don't throw away your entire JavaScript runtime, and you don't need to completely re-parse or re-evaluate the scripts in the head. You retain your JavaScript's global scope and reap the benefits without spreading the logic across client-side JavaScript files.
00:07:06.840 I conducted an experiment by rebuilding the classic To-do MVC application using Turbolinks instead of a client-side JavaScript framework. You can find it at the URL I’ll share later. Now let’s look at the loading times of both this new app and a standard Ember app during the same interaction, in this case, completing a to-do item. Even though both apps are doing similar amounts of work, the Turbolinks version has a greatly reduced loading time because it avoids unnecessary re-rendering and event listener reattachment.
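The server side of a Turbolinks to-do app can stay plain Rails. Here's a minimal sketch of what completing an item might look like; the controller, route, and model names are assumptions for illustration, not code from the talk:

    class TodosController < ApplicationController
      def complete
        todo = Todo.find(params[:id])
        todo.update!(completed: true)
        # An ordinary redirect: Turbolinks intercepts the follow-up GET,
        # fetches the new HTML over XHR, and swaps in the new <body>
        # without re-parsing the <head> or rebuilding the JS runtime.
        redirect_to todos_path
      end
    end

All of the "how should the view change" logic stays in server-rendered templates rather than client-side JavaScript.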
00:08:05.130 This is a timeline comparison between my Turbolinks implementation and the Ember app. The Turbolinks app has to wait on the server, while the Ember app only writes to local storage, but the time spent in scripting and rendering is quite similar for both. The difference lies in the waiting time, and a typical Rails response can slow that down a bit. The takeaway is that with Turbolinks you get noticeably fast interactions with minimal adjustments: just a few lines of JavaScript and the Turbolinks setup.
00:09:43.140 The result feels almost instantaneous. To highlight the performance, I'll show you two implementations side by side, one with Turbolinks and one using Ember. It's telling that you may not be able to discern which is which just by looking at them, which underscores Turbolinks' ability to provide a responsive experience. The sense of speed is remarkable, particularly considering that my Turbolinks app sends each request to a Heroku server that updates an Active Record model, and still returns results at speeds comparable to the Ember app's local storage operations.
00:10:59.490 Coming up with Rails 5, Turbolinks gets upgraded to Turbolinks 3. You can use Turbolinks 3 today with certain setups, and it incorporates improvements drawn from Shopify's production experience. One of the standout features is partial replacement, which updates only specific elements of the page instead of replacing the entire body, which speeds things up further. You also gain a public API for controlling a progress bar, which improves the perceived experience.
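Based on the Turbolinks 3 betas available around this time, the server-side hook for partial replacement looked roughly like the sketch below; treat the change: option and the id-based targeting as assumptions about a pre-release API rather than a finalized one:

    class TodosController < ApplicationController
      def complete
        Todo.find(params[:id]).update!(completed: true)
        # Ask Turbolinks to swap only the element with id="todo-list"
        # out of the response, leaving the rest of the page untouched.
        redirect_to todos_path, change: 'todo-list'
      end
    end

The element to replace is identified by its DOM id, so the layout needs a matching wrapper such as a div with id="todo-list".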
00:11:44.820 To achieve sub-100 millisecond performance, it's essential to look at a couple of layers beyond Turbolinks. One invaluable tool is the rack-mini-profiler gem, which displays a badge in the upper left of your application showing load times and query metrics. It can even produce detailed flame graphs of the stack during a page load, which is incredibly useful for diagnosing and fixing slow requests.
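Setting the profiler up is a few lines in the Gemfile; flame graphs additionally require the stackprof and flamegraph gems (this is the standard rack-mini-profiler setup rather than anything specific to this talk):

    # Gemfile
    gem 'rack-mini-profiler'   # timing badge for every page in development
    gem 'flamegraph'           # renders the flame graph view
    gem 'stackprof'            # sampling profiler that feeds the flame graph

Once the gems are installed, the badge appears automatically in development, and appending ?pp=flamegraph to any URL renders a flame graph for that request.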
00:12:27.720 An additional analysis tool, the Chrome timeline, can break down the execution time of your JavaScript effectively, which is essential when you aim for responsiveness around the 100-millisecond mark. Interestingly, you'll find that introducing progress bars or spinners can sometimes give users the impression that an application is running more slowly than it truly is.
00:12:32.550 In conclusion, view over the wire points to a faster, more efficient way to develop applications by reducing the volume of JavaScript required. The approach has been battle-tested at large companies with significant user bases. If you're interested and want to learn more, check out my GitHub repository linked at the end of this presentation. Thank you for your time. I've been Nate Berkopec, and that wraps up my talk on Turbolinks and the future of view over the wire.