Performance

Summarized using AI

Turbo Boosting Real-world Applications

Akira Matsuda • April 17, 2018 • Pittsburgh, PA

In the presentation 'Turbo Boosting Real-world Applications' at RailsConf 2018, Akira Matsuda discusses optimizing performance in large, legacy Rails applications. The focus lies not only on application code but also on the Rails framework itself. The key points covered include:

  • Performance Bottlenecks: As applications grow, performance issues arise, often due to blocking operations and inefficient component interactions.
  • Non-blocking Operations: Matsuda emphasizes making API calls and database queries non-blocking to reduce waiting time, highlighting the 'future pattern', which uses threads to run operations in the background.
  • Optimizing Database Queries: He introduces strategies for improving database interaction by running heavy queries on separate connections from the connection pool, while taking care not to exhaust it.
  • Action Views: He addresses the performance of rendering partial templates, advocating for using concurrent rendering to reduce wait times in view processing.
  • Lazy Attributes: The presentation emphasizes tweaking attribute accessors in Active Record to improve response times significantly by minimizing method call overhead.
  • Named URLs: The speaker identifies named URLs as a performance drain and suggests caching mechanisms as a solution.

Matsuda supports his technical points with practical examples from his experience and explorations within the Rails framework, detailing potential optimizations and sharing code snippets on GitHub for further exploration. His main takeaways stress the importance of identifying slow components within the framework, the complexity of thread programming, and the necessity of proactive performance hacking. He concludes by encouraging the audience to adopt these performance improvements in production environments and in the future development of the Rails framework.

RailsConf 2018: Turbo Boosting Real-world Applications by Akira Matsuda

One day I joined a team maintaining a huge, legacy, and slow Rails app. We thought we had to optimize the application's throughput for our users' happiness, so we formed a task force focused on performance.

Now, here's the record of the team's battle, including how we inspected and measured the app, how we found the bottlenecks, and how we tackled each real problem, covering the following topics:

Tuning Ruby processes
Finding slow parts inside Rails, and significantly improving them
Fixing existing slow libraries, or crafting ultimately performant alternatives


00:00:11.120 Hello everyone, this is Akira from Japan. Today, I'm going to talk about performance.
00:00:19.230 This talk is specifically focused on web application performance, but I won't be discussing general programming techniques like eliminating N+1 queries or using Rails caching.
00:00:26.060 Instead, I will focus on identifying real problems within the Rails framework and how we can effectively address them.
00:00:37.710 The main focus is not solely on our application code but rather on Rails itself.
00:00:43.079 Today, I have brought some actual examples of performance issues that I've encountered, which I can share with you. Hopefully, you can find them useful later.
00:00:55.050 So, let me start by asking you a question: Do you think your application runs fast?
00:01:03.510 My answer is no—quite often, it doesn't.
00:01:10.200 When we start a project, everything runs smoothly, but as the application grows, it eventually becomes slow.
00:01:16.220 I believe this is a common experience, right? This slowness essentially arises because of the architecture and some of the very slow components within Rails.
00:01:29.420 Here's a diagram of an actual application's request trace. It's not from my own application; I downloaded it from Skylight's website.
00:01:36.750 The diagram shows that the queries are often executed serially on the main thread, one after another, which causes delays.
00:01:48.460 For example, while a query is being processed in the database, Ruby just waits. In other words, these are all blocking operations.
00:01:55.050 So, what if we could perform these activities without blocking the main thread, perhaps in parallel and in a non-blocking manner?
00:02:02.520 To begin with, I have five topics to cover today.
00:02:10.650 Starting with API calls, which are commonly performed through HTTP.
00:02:18.700 These may involve invoking external APIs or microservices.
00:02:25.540 However, introducing microservices into your application usually adds extra network overhead, which can slow it down.
00:02:34.380 The real issue with calling external APIs is that it blocks the main thread while waiting for the HTTP response, during which the CPU remains idle.
00:02:49.790 So how can we make these calls non-blocking?
00:02:57.020 Let me show you an example: a simple case where a client requests a very slow API that takes one second to respond.
00:03:03.830 The client code makes three requests to this API.
00:03:09.629 As a result of these synchronous calls, it takes three seconds.
00:03:16.530 However, we can fix this problem quite easily using Ruby.
00:03:23.370 By utilizing threads for API calls, we can reduce the execution time to just one second. This approach is known as the 'future pattern.'
00:03:50.190 In this pattern, when a thread is created, it begins executing in the background, allowing the main thread to continue working.
00:04:02.990 The main thread can perform other tasks while the API call runs asynchronously.
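A minimal sketch of the future pattern described here, assuming a placeholder endpoint that takes about one second to respond (the URL is illustrative, not from the talk):

```ruby
require "net/http"
require "uri"

uri = URI("http://localhost:4567/slow_api")  # assume this endpoint takes ~1 second

# Synchronous version: three calls take roughly 3 seconds in total.
bodies = 3.times.map { Net::HTTP.get(uri) }

# Future-style version: each Thread starts its request immediately in the
# background, and Thread#value joins and returns the response body, so the
# three overlapping calls finish in roughly 1 second.
futures = 3.times.map { Thread.new { Net::HTTP.get(uri) } }
bodies  = futures.map(&:value)
```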
00:04:12.290 This concept is important as we progress to the next topic.
00:04:18.050 Now, let’s look at boosting database queries.
00:04:25.760 Database queries can be quite time-consuming—it’s a well-known issue.
00:04:32.710 When we query the database, the main thread is often left waiting.
00:04:40.520 This represents a significant opportunity for optimization.
00:04:48.910 We can actually leverage threads here as well.
00:04:55.050 To illustrate, consider a heavy database query example.
00:05:04.290 We can use multiple connections to split the load.
00:05:10.600 This technique allows us to handle more requests concurrently without blocking the main thread.
00:05:19.460 Now let’s look at a practical implementation.
00:05:26.640 This version of the query will use multiple threads effectively.
00:05:35.660 It's important to note that while threads are useful, we need to be careful not to exhaust the connection pool.
00:05:47.020 Using too many threads can lead to resource contention, which is something we must avoid.
00:05:55.740 We can still apply the future pattern to execute database queries asynchronously.
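A sketch of the same future pattern applied to a slow query, assuming a hypothetical Order model. Checking a connection out of Active Record's pool inside the thread keeps it separate from the main thread's connection; spawning too many such threads would exhaust the pool, as noted above.

```ruby
future = Thread.new do
  ActiveRecord::Base.connection_pool.with_connection do
    Order.where(status: "paid").count  # the heavy query runs in the background
  end
end

# ... the main thread keeps doing other work here ...

@paid_orders_count = future.value  # blocks only if the query hasn't finished yet
```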
00:06:04.320 Let's look at another aspect next: action views.
00:06:11.300 Action views often render partials that can slow down the application's performance.
00:06:18.620 When rendering partials, there's an opportunity to optimize this process.
00:06:26.050 Perhaps we can render these in a non-blocking fashion as well.
00:06:32.960 The idea is to offload rendering to background threads, thereby freeing the main thread for other activities.
00:06:39.360 Let’s explore how this could be achieved in practice.
00:06:46.160 If we consider a setup where we have multiple partials, we can leverage threading to improve efficiency.
00:06:53.760 This implementation can help mitigate some of the delays produced by partial rendering.
00:07:00.750 It’s worth noting that results will vary depending on the complexity of the views being rendered.
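A simplified sketch of the idea using the renderer API available since Rails 5 (ApplicationController.render); the controller, partial names, and locals are placeholders, and real view code needs care around thread safety and per-request state:

```ruby
class PostsController < ApplicationController
  def show
    @post = Post.find(params[:id])

    # Kick off the slow partials in background threads while the action continues.
    futures = {
      sidebar:  Thread.new { ApplicationController.render(partial: "shared/sidebar") },
      comments: Thread.new { ApplicationController.render(partial: "comments/list", locals: { post: @post }) }
    }

    # Join the threads and hand the pre-rendered HTML strings to the view.
    @prerendered = futures.transform_values { |thread| thread.value.html_safe }
  end
end
```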
00:07:08.090 Moving on, let’s talk about lazy attributes in view code.
00:07:16.390 In one scenario, rendering many attributes on a single page caused performance to degrade.
00:07:30.750 To address this, we can look at the methods being called.
00:07:38.250 Tuning them can provide significant performance benefits.
00:07:45.790 Instead of accessing attributes through method calls, we can retrieve their values through literals, eliminating method call overhead.
00:07:55.160 This method can reduce response times dramatically.
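An illustrative micro-benchmark of where that overhead lives, not the speaker's exact patch: an Active Record reader goes through the attribute set and type casting on every call, while a plain method returning a literal does none of that. A User model with a name column is assumed.

```ruby
require "benchmark/ips"

class LiteralUser
  def name
    "Akira"  # a plain Ruby method returning a literal value
  end
end

ar_user      = User.first
literal_user = LiteralUser.new

Benchmark.ips do |x|
  x.report("Active Record accessor") { ar_user.name }
  x.report("literal method")         { literal_user.name }
  x.compare!
end
```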
00:08:01.640 Now, let’s discuss the last topic relating to named URLs.
00:08:07.320 We often overlook the performance impact of named routes.
00:08:16.640 However, by caching these URLs or optimizing how they’re generated, we can improve responsiveness.
00:08:24.240 Caching those results, or rethinking when and how they are generated, leaves room for substantial improvement.
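A minimal sketch of caching named-route output in a view helper; the module and method names are hypothetical, not from the talk. Route helpers rebuild the URL string on every call, which adds up when the same kind of link is generated thousands of times per page.

```ruby
module CachedUrlHelper
  # Memoize the generated path per record for the duration of the view instance.
  def cached_user_path(user)
    @__url_cache ||= {}
    @__url_cache[[:user_path, user.id]] ||= user_path(user)
  end
end

# In a view: link_to user.name, cached_user_path(user)
```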
00:08:33.000 In conclusion, we covered various performance improvements today.
00:08:49.350 Whether through threads or by optimizing Rails itself, we can tackle many of the latency issues found in web applications.
00:09:00.950 Don’t shy away from investigating performance bottlenecks—you might find they originate from the framework.
00:09:12.890 The experience of optimizing applications with Ruby threading is challenging but rewarding.
00:09:19.180 In the future, I plan to continue this journey of performance improvements, focusing on changes that are viable in production applications.