00:00:00.030
The next speaker is Mr. Nate Berkopec, and the title is "Memory Fragmentation and Bloat in Ruby." Hello, thank you everyone.
00:00:07.649
You’re all very quiet when you come into these rooms here; it’s like a church. Of course, the reason it is so hot in here is that I am now in the room and this talk is going to be fire. It’s going to be lit.
00:00:17.900
Okay, so my name is Nate Berkopec. We’re going to talk about memory fragmentation and bloat in Ruby. I am the maintainer of Puma now, along with Richard in the front row here. I write online about Ruby performance issues on my blog, Speed Shop.
00:00:38.430
I also wrote a course called "The Complete Guide to Rails Performance." If you purchased it and you're here, thank you very much! That’s how I pay to get to conferences and stuff; I buy the consultant ticket, and it’s all out of my own pocket. So if you have bought the course, thank you, you have paid for me to be here.
00:01:12.600
Okay, so Rails is awesome; it’s very fun to use, but it often uses a lot of memory. This isn’t just a Rails problem; it’s a Ruby problem, as I will show. For most of my clients, memory is the bottleneck that determines how many Ruby instances they can run on their hardware.
00:01:39.329
That limit is set by how much memory they have available. The reason memory usage is so important is that so many of us work in memory-constrained environments. Whether you’re on Heroku dealing with the prototypical 512 MB dyno, or running on a restricted virtual private server, memory isn’t free. It is cheap, but it’s not free.
00:02:11.610
I haven’t actually learned enough about this yet, but I know that Ruby is used in many different environments here in Japan. If you are using Ruby in a memory-constrained environment that’s not a web app, I would actually really like to hear about it after the talk. If you could find me in the hallways or something, I would find that really interesting. Everyone has memory constraints, and no one has unlimited memory, which is why this issue should concern us as Rubyists.
00:02:30.230
It’s also very difficult to debug memory problems for Rubyists. There are some tools to help you fix these problems, but they’re often difficult to use and understand. Therefore, we should find ways as a language or as a community to mitigate these memory problems.
00:03:05.480
Part of the reason why this is so difficult to understand is that memory has so many layers of abstraction. There are numerous layers of indirection between calling `Object.new` and that data actually landing in a RAM location somewhere on physical hardware. The first layer is, of course, your Ruby code itself, which is calling `Object.new` or creating a new array.
00:03:39.329
Then the Ruby runtime, YARV, has its own functions for organizing its memory usage. The Ruby runtime interacts with your memory allocator; for most of us this will be glibc malloc, but it could be jemalloc or tcmalloc or whatever allocator you happen to be using. Then the allocator may or may not interact with the hardware MMU, which translates virtual memory addresses into physical memory addresses.
00:04:25.909
The MMU interacts with your actual physical hardware, pulling stuff in and out. Maybe there’s something that’s swapping memory. Really, there are just so many layers of abstraction, and memory usage issues can arise from any one of these layers. This complexity makes handling memory even more challenging.
00:04:58.970
I think as Rubyists we don’t want to have to think about memory, so we should either make these problems happen less often or make them easier to understand and debug. I’m going to start by discussing bloat. I call memory bloat the pattern where an application’s memory usage is constant and then suddenly spikes to a new, often excessive level.
00:05:39.350
Typically, memory only jumps to that new level after something happens, and it doesn’t come back down. For example, if before some event you were using 512 megabytes of RAM per process and then suddenly, due to some action, you are using one gigabyte of memory per process, that’s what I refer to as memory bloat. This can occur during normal web application operation, and when it happens it can make the application really slow and inefficient.
00:06:48.050
An important distinction here is that memory bloat is excessive memory usage, but it isn’t really a bug: for at least a short period of time, that memory is strictly necessary. It usually happens when someone performs an action that genuinely requires one gigabyte of memory at once. During that window we need to use that gigabyte of memory, otherwise we would crash. But then, usually, we don’t need that memory afterward, so why doesn’t it drop back down? That behavior is primarily a function of how the Ruby runtime and its memory allocator work.
00:07:42.000
Large collections are the primary culprit for this behavior. Loading the entire users table into memory is one common, and excessive, example: if a query condition matches a huge number of rows and you load all of those Active Record objects into a single array, memory usage can spike very easily.
00:08:16.820
Exporting a very large CSV file can also cause memory usage to double, for example when the marketing department needs a CSV of every user. It’s best not to do this if it can be avoided. One way to catch it early is to work in development with data that resembles production data, so that when you call `User.all.each` in development you iterate over 40,000 records instead of just the ten from your seed data.
00:09:19.110
I realize that’s not possible in every environment, so it isn’t a perfect solution, but it is the only one I’ve seen truly work. Being mindful about iterating in batches and using destructive, in-place modifications for collections that may be large can also help mitigate big spikes in memory usage, as sketched below. However, none of that explains why we can’t get all that memory back again after use.
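A rough sketch of both mitigations, assuming a typical Rails app with a `User` model that has `id` and `email` columns (the model, columns, and file name are just illustrative):

```ruby
require "csv"

# Iterate in batches instead of materializing every record at once.
# find_each pulls 1,000 records per query by default, so peak memory stays
# roughly flat no matter how large the users table is.
CSV.open("users.csv", "w") do |rows|
  rows << %w[id email]
  User.find_each(batch_size: 1_000) do |user|
    rows << [user.id, user.email]
  end
end

# For plain Ruby collections that may be large, prefer destructive, in-place
# modification so the original and the transformed copy never coexist.
emails = User.pluck(:email)
emails.map!(&:downcase) # map! mutates in place; map would allocate a second array
```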
00:09:57.660
Now, this is malloc’s fault. Freeing memory does not, in general, return it to the operating system for other applications to use. In other words, `free` does not mean free. Memory is only returned when the top chunk of a heap, the chunk adjacent to unmapped address space, becomes large enough that some of it can be unmapped and given back to the operating system.
00:10:37.480
What this means for Ruby is that we have a big memory heap, and the only space malloc can give back is a contiguous, sufficiently large chunk at the end of the heap’s address space. If anything live sits in that region, malloc cannot release it. For example, if you have one gigabyte of memory in your heap but a live allocation at the very end, we cannot free anything below it. This behavior is specific to malloc, which is why I bring it up.
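Here is a rough way to see this for yourself on Linux with glibc malloc; it reads resident set size from `/proc`, so it won’t run on macOS, and the exact numbers will vary:

```ruby
# Allocate a lot of heap data, drop the references, run the GC, and watch
# resident memory stay high even though everything has been freed.
def rss_mb
  # VmRSS is reported in kB in /proc/self/status (Linux only).
  File.read("/proc/self/status")[/VmRSS:\s+(\d+)/, 1].to_i / 1024
end

puts "baseline: #{rss_mb} MB"

big = Array.new(1_000_000) { "x" * 100 } # roughly 100 MB of malloc'd string data
puts "peak:     #{rss_mb} MB"

big = nil
GC.start
# Ruby and malloc consider the strings freed, but unless the freed chunks sit
# at the end of the heap, the process's resident memory barely drops.
puts "after GC: #{rss_mb} MB"
```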
00:11:55.220
This frequently shows up as bug reports against malloc, where people say they’ve found a memory leak, but it’s not a leak. Everything is still being tracked and accounted for; we just cannot give address space back to the kernel when the main arena is discontinuous. Memory that is not compactly arranged is expected behavior.
00:12:36.750
So what are some things we can do to prevent memory bloat? We can make the actions that lead to memory bloat more painful to perform. DHH refers to this as "syntactic vinegar." I find that most people think memory bloat is someone else’s fault, when often they are doing something excessive themselves, like calling `.map` on a one-million-element array.
00:13:15.130
By making these interfaces more painful to use, we can signal to the user, "Hey, you might be doing something wrong here, so please be careful." One example is MiniTest, whose block format is simple to use for stubbing something like the time. However, if you start nesting these and create four stubs in a test, it becomes quite painful to use, roughly as in the example below. That’s deliberate, because Ryan, the author of MiniTest, doesn’t want people to use that many stubs.
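A small made-up test showing the nesting; the test case and stubbed values are purely illustrative:

```ruby
require "date"
require "minitest/autorun"

class MidnightSaleTest < Minitest::Test
  def test_stubbing_time_and_date
    # Each block-style stub adds another level of nesting; with three or
    # four stubs in one test this gets painful, and that is intentional.
    Time.stub :now, Time.at(0) do
      Date.stub :today, Date.new(2018, 6, 1) do
        assert_equal 0, Time.now.to_i
        assert_equal 2018, Date.today.year
      end
    end
  end
end
```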
00:13:59.050
Some ideas to improve this user experience might include a strict mode, where you force users to select only the fields they need and raise an exception if they attempt to access a field that hasn’t been selected; see the sketch below. Another idea could be making methods that try to fetch every row, like `.all.each`, always operate in batch mode. Perhaps we could even raise an exception if someone tries to create an enumerable with more than a million members.
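Active Record already behaves a bit like that first idea today: if you `select` only some columns, reading an unselected attribute raises. A sketch, again assuming an illustrative `User` model with `name` and `email` columns:

```ruby
# Selecting only the columns you need keeps each Active Record object small,
# and reading a column you did not select fails loudly instead of silently.
users = User.select(:id, :email)

users.first.email # works, because email was selected
users.first.name  # raises ActiveModel::MissingAttributeError, name was not selected
```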
00:14:29.040
Of course, we wouldn’t do this in production, only in development, to surface these problems sooner, potentially with logging. However, Ruby is a language that generally doesn’t require us to think about memory. If we overuse painful interfaces, we may end up thinking more like computers, which runs against the intention of Ruby.
00:15:01.390
Another solution is simply to not allocate as much memory. Rails tends to create more objects than Sinatra, so we could be more careful about allocations. However, since Rails is a big, powerful framework, that may not be a sustainable solution in the long term either.
00:15:45.260
Consequently, we could try to be more aggressive about deallocating, about giving memory back to the operating system. The painful part of bloat is often that we reach a high level of memory usage and it never decreases again. So what prevents that? The issue is fragmentation.
00:16:23.620
Fragmentation generally appears as a memory usage curve that looks like long, slow logarithmic growth: memory usage that seems to approach a limit but never quite stops increasing. It is caused by memory becoming less and less contiguous over time.
00:16:53.820
Over time, allocation leaves holes in memory, so the layout starts to resemble Swiss cheese. For instance, if we allocate memory for several objects and then free some of the blocks in the middle, we end up with scattered gaps that cannot be effectively reused when new allocations come in; a toy illustration follows.
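Here is that hole-punching acted out on the Ruby object heap using `GC.stat` (this reflects Ruby 2.x behavior, before manual compaction with `GC.compact` existed; exact numbers will vary):

```ruby
# Fill the object heap, then free every other object. No heap page becomes
# completely empty, so Ruby keeps almost all of its pages even though half
# of the objects are gone: the heap is now full of holes.
objs = Array.new(500_000) { Object.new }
GC.start
pages_before = GC.stat[:heap_eden_pages]

objs.each_index { |i| objs[i] = nil if i.even? } # punch holes
GC.start
pages_after = GC.stat[:heap_eden_pages]

puts "pages with everything live: #{pages_before}"
puts "pages after freeing half:   #{pages_after} (mostly unchanged)"
```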
00:17:39.240
In theory, a Rails application starts up, allocates the memory it needs for your code, and then allocates per request and deallocates at the end of each request. In practice, not every object allocated during a request gets garbage collected when the request ends. Some stay in memory longer, for example because they were added to longer-lived collections, and that is what leads to fragmentation.
00:18:27.640
Sadly, this fragmentation is primarily a product of how the Ruby runtime and the memory allocator are engineered. It is not something you can fix merely by writing better Ruby code, even though it looks like a memory leak.
00:19:01.680
The two are often confused, but a memory leak is memory that gets lost: an object is never freed, usually because something still holds a reference to it. Fragmentation is different: the system retains allocated but unused space while still keeping track of every individual allocation.
00:19:51.780
You can tell these patterns apart by looking at a memory usage graph. If memory builds steadily, increasing at a roughly constant rate, that likely implies a leak. If it grows logarithmically over time, starting quickly and then gradually slowing, that indicates fragmentation. Leaks keep leaking without bound, whereas fragmentation produces ever-slower increases as the application ages.
00:20:53.810
One way to measure fragmentation is to inspect `GC.stat`. It’s available in any Ruby session and returns a hash of statistics about garbage collection and the state of the object heap. To explain what those numbers mean, we need to look at GC internals briefly.
00:21:31.710
In Ruby, every object is backed by an RVALUE, a magical C struct that can be interpreted in different ways, as a string, a number, and so on. Each RVALUE is 40 bytes, and RVALUEs are organized into heap pages of 16 KB each. The fewer live objects a page holds, the more empty slots it contains; when free slots are scattered across pages instead of forming contiguous runs, that is fragmentation.
00:22:26.420
To measure this from a fresh IRB session, you can run a major garbage collection and then compare the number of live slots to the number of eden pages. Dividing the live slots by the total number of slots available across all used pages gives you a rough fragmentation gauge, as in the snippet below. A perfectly packed heap would be at 100%, while a severely fragmented one can drop toward 1%.
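A sketch of that calculation; the `GC.stat` keys and the `GC::INTERNAL_CONSTANTS` name here are from Ruby 2.x and may differ in other versions:

```ruby
# Force a full GC so the counters reflect only live objects, then compare
# live slots to the total slots in the pages Ruby is holding on to.
GC.start(full_mark: true, immediate_sweep: true)

stat           = GC.stat
slots_per_page = GC::INTERNAL_CONSTANTS[:HEAP_PAGE_OBJ_LIMIT] # ~408 slots per 16 KB page
total_slots    = stat[:heap_eden_pages] * slots_per_page
live_slots     = stat[:heap_live_slots]

occupancy = 100.0 * live_slots / total_slots
puts format("object-heap occupancy: %.1f%%", occupancy)
# Close to 100% means densely packed pages; much lower means a fragmented heap.
```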
00:23:09.820
Fragmentation also occurs outside the object heap, in the memory Ruby obtains through malloc for data too large to fit in an RVALUE, larger strings for example. That area often dominates total memory usage, and it is where the limitations of the allocator and the Ruby runtime really bite.
00:23:46.340
glibc malloc’s per-thread memory arenas also contribute to fragmentation, especially in Ruby applications that create and destroy many threads or do a lot of I/O. When a Ruby program starts, it has a single memory arena; when a thread tries to allocate and hits lock contention on that arena, malloc creates a new arena instead of waiting, and more arenas mean more fragmentation.
00:24:32.150
If this happens frequently, fragmentation escalates. Each arena is its own heap, and as before, malloc can only release space that lies at the end of a heap, so allocations spread across many arenas stay fragmented. This hurts overall memory usage and performance, particularly in multithreaded Ruby web applications.
00:25:40.540
To reduce fragmentation, we could lower the number of objects we create, though this isn’t always feasible in Rails applications where extensive object creation is inherent.
00:26:30.040
A more practical approach could involve moving some memory management responsibility into the Ruby runtime itself, for instance a slab-based approach to memory management, or switching to an allocator like jemalloc, which generally performs better with respect to fragmentation.
00:27:11.340
However, there are trade-offs with every approach, which is part of why defaults tend to be conservative: they have to avoid hurting performance across very different environments.
00:27:59.050
Notably, you can configure the relevant environment variable, `MALLOC_ARENA_MAX`, to limit the number of memory arenas glibc creates. Lowering it from its default can improve memory usage without significantly impacting processing speed; a small example follows.
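The variable has to be set before the Ruby process boots, because glibc reads it at startup. A minimal illustration (the value 2 is a commonly suggested starting point rather than an official recommendation, and the Puma command line is just an example):

```ruby
# Start the server with the variable already set, for example:
#   MALLOC_ARENA_MAX=2 bundle exec puma -C config/puma.rb
# From inside the process you can only confirm what it was started with:
puts ENV.fetch("MALLOC_ARENA_MAX") { "unset (glibc default: 8 arenas per core on 64-bit)" }
```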
00:29:22.190
Memory fragmentation stems from many factors, but improving how Ruby manages memory and considering alternative allocators are real steps toward resolving excessive memory usage.
00:30:06.750
So we can implement solutions that work best for us as programmers, which can be shaped by the specific needs of our applications. Thank you very much! I have a few minutes left, so I would be happy to take any questions.
00:30:54.010
If there are no immediate questions from here, feel free to connect with me after this session. Thank you so much, Mr. Nate!