Sam Rawlins
New Ruby 2.1 Awesomeness: Pro Object Allocation Tracing

Summarized using AI


Sam Rawlins • March 17, 2014 • Earth

In this talk, Sam Rawlins introduces a powerful new feature in Ruby 2.1, ObjectSpace#trace_object_allocations, which lets developers trace object allocations down to the file and line number where they occur. This is a significant advance, because memory profiling tools for Ruby had long been limited, especially after the memprof gem stopped working beyond Ruby 1.8.x. With Ruby 2.1, trace_object_allocations and its start, stop, and clear variants make it much easier to understand memory usage within applications.

Key Points:

  • Introduction to Ruby 2.1: Sam Rawlins greets attendees and highlights the excitement surrounding Ruby 2.1 and its new profiling capabilities.
  • Challenges of Memory Profiling: Previously, tools for memory profiling were limited, making it hard to diagnose memory issues within applications. An anecdote from GitHub exemplifies the necessity for efficient profiling after they discovered high object counts at app startup.
  • Tracing Object Allocations: Developers can now trace object allocations using trace_object_allocations with examples demonstrating how to incorporate this feature in code blocks or through method calls.
  • Importance of Reducing Allocations: The talk emphasizes reducing the memory footprint to enhance garbage collection efficiency. GitHub's insights on high object counts at startup underscore the need for this feature.
  • Using the allocation_stats Gem: This gem simplifies the aggregation of allocation data, allowing users to view allocations by source file and class, offering better clarity on memory usage.
  • Performance Optimization Techniques: The presentation explores using frozen strings in Ruby 2.1 to minimize memory allocations, which can dramatically improve performance by reducing garbage collection overhead.
  • Active Record and Rails Optimization: Sam points out common sources of excessive allocations in Rails applications, particularly through Active Record and Active Support, suggesting refactoring methods for efficiency.
  • Gems for Rack Applications: The rack-allocation-stats gem can be employed in Rack applications to identify top allocation sites and optimize memory usage as requests are processed.
  • Concluding Remarks: The session wraps up with a call to action for developers to utilize the tools available in Ruby 2.1 and be proactive in optimizing their applications to reduce memory allocation overhead.

Overall, Rawlins urges developers to take advantage of these new tools and features to improve application performance and efficiency in memory management, contributing to a more robust programming environment.


By Sam Rawlins

Ruby 2.1 is coming out soon with an amazing new feature under ObjectSpace: #trace_object_allocations. We are now able to trace the file and line number (as well as method) where any Ruby object is allocated from. This is a very welcome feature, as object-level tracing has been very difficult in Ruby, especially since the memprof gem could not support Ruby past 1.8.x.
This new Ruby 2.1 feature is really just exposing some raw (and vast) data, so it can be difficult to tease out meaningful information. Two gems are introduced in this talk to solve just that problem. The objspace-stats gem allows us to view and summarize new object allocations in meaningful ways. We'll look at how to filter, group, and sort new object allocations. The second gem is rack-objspace-stats. We'll see how this tool can intercept requests to a Rack stack and measure new object allocations taking place during the request. (For those familiar, this works very similarly to the rack-perftools_profiler gem.)
We'll look at various examples of how this new Ruby 2.1 feature, and these tools can help an organization reduce unnecessary memory allocations, and speed up their code, especially mature Rack applications.


MountainWest RubyConf 2014

00:00:25.599 Hey everyone, I'm Sam Rawlins, and this is the last talk before lunch. I'll do my best to make it quick. I might not take questions today because I'm eager to get to lunch, and I may have packed more content into this talk than I should have. Today, I will be discussing a new feature in Ruby 2.1 called tracing object allocations. It's super exciting!
00:00:36.280 So Ruby 2.1 is out! Raise your hand if you've installed it or used any new features in Ruby 2.1. Awesome! That's a better adoption rate than Ruby 1.9 had when it came out. You can grab Ruby 2.1 easily using tools like rbenv or RVM. The NEWS file outlines all the new features, but what I will focus on is one tucked away in a corner: ObjectSpace's trace_object_allocations.
00:01:00.399 ObjectSpace is not a new concept; you may be familiar with methods like count_objects and garbage_collect. However, Ruby 2.1 introduces a few new methods: trace_object_allocations and its siblings trace_object_allocations_start, _stop, and _clear. Let's start with an anecdote from GitHub, which is where this story begins.
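To ground that, here is a quick sketch (not code from the talk) contrasting the long-standing ObjectSpace.count_objects with the Ruby 2.1 additions; the only extra requirement is loading the objspace extension:

```ruby
# count_objects has been around for a long time and needs no require.
p ObjectSpace.count_objects
# => {:TOTAL=>..., :FREE=>..., :T_OBJECT=>..., :T_STRING=>..., ...}

# The new tracing methods live in the objspace extension:
require 'objspace'

ObjectSpace.respond_to?(:trace_object_allocations)        # => true on Ruby 2.1+
ObjectSpace.respond_to?(:trace_object_allocations_start)  # => true
ObjectSpace.respond_to?(:trace_object_allocations_stop)   # => true
ObjectSpace.respond_to?(:trace_object_allocations_clear)  # => true
```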
00:01:12.640 They published a blog post titled 'Hey Judy, don't make it bad.' In it, they explain that when they boot the GitHub app and immediately count how many objects are in memory, they find over 600,000 Ruby objects. That was far more than they expected, and it became a mystery, because until Ruby 2.1 there weren't many good ways to profile memory usage. Before Ruby 2.1, the profiling tools were quite limited: memprof was a good one for Ruby 1.8, but it never supported 1.9.
00:02:04.880 There was a void in profiling tools, especially for understanding memory allocations. While there were good SQL and CPU profiling tools, memory profiling was lacking. This situation prompted the development of the trace object allocations feature to answer the question: 'Where am I hogging all this memory?'
00:02:23.800 Let’s examine a simple example. You have a class with one method that allocates and returns an array, and another method that allocates and returns a string. I want to trace those object allocations. There are two important lines where we save the returned array into a variable 'a' and the string into a variable 's'; we wrap that code in a block and pass it to trace_object_allocations, as sketched below.
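A minimal sketch of the pattern described; the class and file names here are illustrative, not taken verbatim from the slides:

```ruby
require 'objspace'

class MyClass
  def returns_array
    [1, 2, 3]   # allocates a new Array
  end

  def returns_string
    "a string"  # allocates a new String
  end
end

a = s = nil
# Record allocation metadata for everything allocated inside the block.
ObjectSpace.trace_object_allocations do
  obj = MyClass.new
  a = obj.returns_array
  s = obj.returns_string
end
```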
00:03:47.600 What you get are additional helper methods on ObjectSpace that tell you which file allocated an object and on which line. For instance, they tell you the class, method, and allocation site of the variable 'a'. That’s pretty neat! In my example it shows the object was allocated in example3.rb on line 3. There is also an alternate form: instead of wrapping the code in a block, you can simply start and stop tracking allocations without any additional wrapper.
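Continuing the sketch above, the per-object helpers and the start/stop form look roughly like this (the return values shown are illustrative):

```ruby
# Per-object helpers, available once tracing data has been recorded:
ObjectSpace.allocation_sourcefile(a)   # => "example3.rb"
ObjectSpace.allocation_sourceline(a)   # => 3
ObjectSpace.allocation_class_path(a)   # => "MyClass"
ObjectSpace.allocation_method_id(a)    # => :returns_array

# Alternative to the block form: start and stop tracing explicitly.
ObjectSpace.trace_object_allocations_start
str = MyClass.new.returns_string
ObjectSpace.trace_object_allocations_stop

ObjectSpace.allocation_sourceline(str)      # => line of the string literal
ObjectSpace.trace_object_allocations_clear  # discard the recorded data
```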
00:04:00.239 So, why is this important? We want to do two main things: reduce the memory footprint and help with garbage collection time. GitHub's issue was that they had too many objects at startup, which negatively affected their performance. With fewer objects, garbage collection runs faster and more efficiently.
00:04:30.120 If your application isn't running on Ruby 2.1 yet, that’s fine! You can still use this as a diagnostic tool. If you can get your app to run locally in Ruby 2.1, you can play with these features without needing to upgrade your production application.
00:05:05.439 The trace object allocations feature is limited on its own: it hands you raw data about individual objects, and it's very fine-grained. But I think this is just the beginning of a great feature. We can write tools around this data to gain meaningful insights into our applications.
00:05:40.079 Next, let's talk about aggregation. I created a gem called allocation_stats that provides a simplified API on top of this new feature; it requires Ruby 2.1. Here’s a simple usage: you have a class with a method allocating a hash with three string values. By requiring allocation_stats, we can wrap our method call in a block to trace the allocations.
00:06:02.040 You will see a concise table displaying the allocations, listing where each allocation occurred. This gives you a better overview than looking at each individual allocation separately. Aggregating the results by source file and class makes the output much more useful.
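A minimal sketch of that usage, following the allocation_stats API as I understand it (the Library class is illustrative; verify the exact option names against the gem's documentation):

```ruby
require 'allocation_stats'

class Library
  def build
    { "name" => "Ruby", "version" => "2.1", "released" => "yes" }  # a Hash plus several Strings
  end
end

# Trace every allocation that happens while the block runs.
stats = AllocationStats.trace { Library.new.build }

# Print the allocations aggregated by source location and class.
puts stats.allocations(alias_paths: true)
          .group_by(:sourcefile, :sourceline, :class)
          .to_text
```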
00:06:44.119 For a more complex example, consider using the Psych library. Here, a simple array of two strings is dumped to YAML format. By grouping allocations by source file and class, you can see the total allocated strings in the output, which helps identify resource-heavy areas.
00:07:00.599 It's fascinating to note that even a simple operation like dumping a small array to YAML allocates many objects, shedding light on allocation-heavy methods within the Psych library. We can drill down further, sorting and grouping allocations by various criteria, for deeper insight into the underlying code.
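A sketch of that Psych exploration, again under the assumption that the allocation_stats API is as described above:

```ruby
require 'allocation_stats'
require 'yaml'

# Dump a tiny two-element array and trace every allocation Psych performs.
stats = AllocationStats.trace { YAML.dump(["one", "two"]) }

# Group by source file and class to see which Psych files allocate the most Strings.
puts stats.allocations(alias_paths: true)
          .group_by(:sourcefile, :class)
          .to_text
```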
00:08:11.540 Using the Hike library, we can explore even more complex scenarios and gain a deeper understanding of allocations across files and classes. This kind of comprehensive analysis can reveal insights that inform development decisions and optimizations.
00:09:44.079 One common issue that arises concerns repeated allocation of similar objects, causing unnecessary garbage collection overhead. By freezing strings in Ruby 2.1, we can significantly reduce memory allocations. Instead of allocating new strings every time a method runs, frozen strings ensure the same object is reused, improving performance.
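An illustrative sketch of the freezing idea (the names here are mine, not from the talk): a frozen String constant is allocated once and reused, while a bare literal allocates a new String on every call in Ruby 2.1.

```ruby
class Status
  OK = "ok".freeze

  def unfrozen
    "ok"   # a new String object on every call
  end

  def frozen_constant
    OK     # always the same frozen object
  end
end

s = Status.new
s.frozen_constant.equal?(s.frozen_constant)  # => true  (same object)
s.unfrozen.equal?(s.unfrozen)                # => false (two separate allocations)
```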
00:10:52.560 While examining performance metrics, we find that using frozen strings versus non-frozen strings can drastically cut down on the time spent in garbage collection, a critical concern for performance optimization in Ruby applications. Developers should consider employing these strategies where applicable to enhance application speed.
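This is not the benchmark from the talk, but a rough, illustrative way to see the allocation difference yourself using ObjectSpace.count_objects:

```ruby
require 'objspace'

FROZEN = "ok".freeze

# Count (approximately) how many heap objects a block leaves behind.
def rough_allocation_count
  GC.start
  GC.disable
  before = ObjectSpace.count_objects
  yield
  after = ObjectSpace.count_objects
  (after[:TOTAL] - after[:FREE]) - (before[:TOTAL] - before[:FREE])
ensure
  GC.enable
end

puts rough_allocation_count { 100_000.times { "ok" } }    # roughly 100,000 new Strings
puts rough_allocation_count { 100_000.times { FROZEN } }  # close to zero new Strings
```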
00:12:19.360 If you're working with Ruby on Rails, a considerable number of objects can be allocated due to various methods in Active Record and Active Support. Code segments frequently regenerate similar strings, leading to excessive allocations. Investigating these methods further allows for the identification of potential optimizations.
00:12:59.440 One example is the callback machinery in Active Support, where method names are generated with string interpolation. Refactoring these areas to cache or memoize those strings results in a more efficient application by eliminating redundant allocations.
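A hypothetical illustration of that pattern (not the actual Active Support source): an interpolated name rebuilt on every call versus a memoized one.

```ruby
class Callbacks
  def initialize(kind)
    @kind = kind
  end

  # Allocates a fresh String each time it is called.
  def callback_method_name
    "_run_#{@kind}_callbacks"
  end

  # Memoized: interpolate once, reuse the same String afterwards.
  def cached_callback_method_name
    @callback_method_name ||= "_run_#{@kind}_callbacks"
  end
end
```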
00:14:20.799 Likewise, we found that Active Record methods often create new instance variables that involve string interpolations. Addressing these inefficiencies in Rails 4 has minimized unnecessary bloat and optimized resource management.
00:15:50.239 Using gems like `rack-allocation-stats` in your Rack applications allows you to see top allocation sites directly. These insights will guide you in identifying and addressing potential bottlenecks in real-time as your app responds to requests.
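The following is not the gem's actual interface, just a hypothetical, minimal Rack middleware sketch showing the general idea: trace allocations around the downstream call and report the top allocation sites for each request.

```ruby
require 'allocation_stats'

class AllocationTracingMiddleware
  def initialize(app)
    @app = app
  end

  def call(env)
    response = nil
    stats = AllocationStats.trace { response = @app.call(env) }

    # Log aggregated allocation sites for this request.
    puts stats.allocations(alias_paths: true)
              .group_by(:sourcefile, :class)
              .to_text

    response
  end
end

# config.ru (illustrative):
#   use AllocationTracingMiddleware
#   run MyApp
```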
00:16:25.520 Also, take advantage of tools like allocation stats to trace object allocations during development, making it easier to identify areas for improvement. Utilize middleware features to analyze your application's performance, grouping and sorting allocations as needed to isolate the most resource-intensive operations.
00:17:18.640 As we wrap up, remember, the goal is to reduce allocations and optimize your applications effectively. Consider using tools and libraries like allocation_stats, trace object allocations in your applications, and make informed optimizations to enhance performance and reduce garbage collection overhead.
00:18:39.960 Stay tuned for more tools coming out for Ruby 2.1 and upcoming improvements in memory profiling. Be conscious of your object allocations for effective garbage collection. Lastly, utilize language features like `freeze` wisely, helping to reduce unwanted allocations that could slow down your application.
00:19:21.920 Thank you for joining me today! If you're interested in diving deeper, check out the blog post from GitHub that highlights how they solved their memory issues. Remember, this is just the beginning, and with Ruby 2.1's powerful new features, we can do so much more. Let's optimize and innovate together in the Ruby community!