Ruby Programming

Summarized using AI

How Are Method Calls Formed?

Aaron Patterson • April 12, 2016 • Earth

In this presentation by Aaron Patterson, titled "How Are Method Calls Formed?", the focus is on method dispatch optimizations in Ruby, particularly the various types of inline method caching. Key points include:

  • Introduction and Context: Aaron discusses how methods work in Ruby, emphasizing that understanding Ruby's VM internals is crucial for optimizing code.
  • Method Formation: He explains the structure of call sites, including how to recognize them and their significance in method dispatch.
  • Ruby VM Mechanics: An analogy likening Ruby’s VM operations to calculator functions illustrates how Ruby’s bytecode interacts with the stack to execute methods.
  • Understanding Method Lookup: The presentation highlights the method lookup process, detailing how class hierarchies affect performance. The algorithm for searching methods demonstrates that lookup times increase with more ancestors in the class hierarchy.
  • Optimizations with Inline Caches: Aaron introduces the concept of inline caches, which can significantly speed up method calls by storing previously accessed method information.
  • Benchmarking and Case Studies: A variety of benchmarks are presented, comparing performance across different method calling techniques (e.g., if statements vs. case statements) and their caching behaviors.
  • Polymorphic and Monomorphic Caches: Aaron differentiates between monomorphic and polymorphic call sites, explaining how caching strategies differ and how having too many types leads to inefficiencies.
  • Best Practices: He emphasizes the importance of measurement before optimization to avoid unnecessary performance enhancements and to focus on actual bottlenecks in code.
  • Conclusions and Recommendations: The key takeaway is to apply these learned caching techniques judiciously and to validate them through testing in specific code bases to ensure meaningful performance improvements.

Overall, the presentation equips audience members with insights into Ruby's method call optimizations, making them more adept at analyzing and improving their Ruby applications.

How Are Method Calls Formed?
Aaron Patterson • April 12, 2016 • Earth

Today, we'll dive into optimizations we can make on method dispatch, including various types of inline method caching. Audience members should leave with a better understanding of Ruby's VM internals, as well as ways to analyze and optimize their own code.

MountainWest RubyConf 2016

00:00:22.449 With that said, I'm really happy to introduce our first speaker, Aaron Patterson, also known as Tender Love. I feel weird doing all these introductions because, like, who doesn't know who Aaron Patterson is? He's our local boy turned famous. I think Aaron is the epitome of how you too can become a major global success, if you get out of Utah. He is a Ruby core team member and a Rails core team member. So, thank you, Aaron.
00:00:41.510 All right, all of you other speakers out here, remember when you come up on stage to ground yourself. Not just get down to earth, but I mean literally touch the ground so that you don't ruin the equipment. All right, sorry, that's a different presentation... Sorry, hi!
00:01:06.820 If you get anything out of this presentation, this is what I want you to go home with: if you go to Preferences > Keyboard > Text on your machine, you can enter a bunch of shortcuts. This is what I have, and what it does is when you're typing, it automatically replaces certain phrases with what you want. For example, when I type 'face' it turns into a face emoji, or if I type 'hearts', it turns into a heart emoji. I'll give you a quick little demo here; this is practical information that you can use.
00:01:37.850 So, please, if you learn anything from this presentation, take this home with you today. So, I came into this building and I heard that Confreaks was recording. Then, I saw this sign that said 'No taping or photography allowed in the theater,' which made me laugh because I thought we were having a Mountain West event. So, use that hashtag if you want to.
00:01:57.380 All right, so today I'm going to talk about how method calls are formed. I personally wanted to call this 'Method Mania,' but I thought that might be a little too non-descriptive. We're here today at Mountain West, in case you forgot where you are on this Monday morning. Just as a reminder, I tried to represent the conference name in emoji, and this is the best that I could do. We have a mountain for 'Mountain', a map emoji for 'West', and then for 'Ruby', I used 'Route B' because there's no Ruby emoji.
00:02:36.499 This may or may not be known to you, but this is the final Mountain West conference. I was extremely nervous about performing well, entertaining everybody, and giving a good presentation. However, I realized that it doesn’t matter how poorly I do; I’ll never be invited back again. So, the pressure is off now for the rest of you speakers; it’s fine. So Mike said my name is Aaron Patterson. You might know me on the internet as Tender Love; that's my avatar, just so you know.
00:03:00.530 I should mention that that is not my real hair. You should follow me on Twitter; ninety percent of my tweets are puns, and the other ten percent might be cat pics or technical content. It’s true that I am from Utah; I grew up here, but I moved out in 2001. If you're from out of town, don't ask me for any restaurant recommendations because I don't think I can help you. Actually, how many of you are here from out of town? Raise your hands. Okay, like maybe twenty-five or thirty percent? That's good.
00:03:58.159 So, this is the last Mountain West Conference, and I want to tell a story about when I spoke here two years ago about my Tender Bahamut. I was the backup backup speaker back then—apparently, one of the speakers canceled, and then the next backup person canceled. Mike IMed me and said, "Aaron, would you want to come give a talk at Mountain West?" He'd asked me many times before, and I usually said no because I come home every year for Christmas, and I’m like, ‘I’m already home; I’ve had enough of my parents.’ I don’t need to see them twice a year.
00:04:26.540 But I thought to myself, ‘You know, it would be nice if my parents knew what I actually do.’ So, I decided I would like them to see me give a talk. This happened to be perfect because my parents live here in town. So I said I would give a talk at Mountain West, but only if you give me two free tickets for my parents. I was driving a hard bargain there, right? Two free tickets for my parents. Mike said yes, of course, absolutely.
00:05:15.610 So, I showed up with my parents. The thing is, both of my parents are engineers, and I talk to them very frequently, so they know what I do. They don’t think it’s weird that I type on a computer all day for a living, but I tell them everything about what I do. However, I’ve never told them my internet name; that is one thing that I never told them.
00:06:18.930 So, we showed up at the conference, and I met Mike. We went down to the front, and there were three seats with signs on them. The signs said 'Tender Love,' 'Tender Mom,' and 'Tender Dad.' I was just like, 'Oh no, why right now?' I had to very quickly say to them, 'Mom, Dad, people know me by this name, Tender Love. Just don't worry about it. People are going to ask you about me and this name, but just don’t worry about it; it’s fine.' That was the end of it; we have never talked about it since then.
00:06:51.389 So that was my nice story I wanted to share about Mountain West from a couple of years ago. All right, so let's move on. I work for a company called Red Hat, where I'm on a team called ManageIQ. We develop a project that manages clouds. So, if you have clouds that you need to manage, we can help you manage those clouds. Our project is open source; it's up on GitHub. I'll be talking about this project a little bit later, near the end of the presentation.
00:07:03.540 I used it for some of the research I did for these slides; you'll see that later. Also, I love cats! I brought some stickers of my cats, but I left them at my parents' place, so ask to see them tomorrow. This is one of my cats; her name is SeaTac Airport Facebook YouTube. She likes to sit on my chair. I decided that I would dress her up as Donald Trump, and it was adorable.
00:07:15.690 This is my other cat; his name is Gorbachev, or Gorby Puff. His full name is Gorbachev Puff Puff Thunderhorse. When my wife and I got married, I wanted to change our last names to Thunderhorse because I think it's awesome, but she said no. That's unfortunate. Anyway, my wife really wants me to give a TED talk, so she made this slide for me, and I'm obligated to put this into my slides now. But I love it; it’s amazing! I also enjoy hugs, so please come give me a hug later in the day.
00:07:50.200 I would be very happy if I got that. I know it’s not Friday, but I will absolutely accept Monday hugs too. Monday is a very hard day, although we’re at a conference, which is really awesome. This is a great way to start the week, I think!
00:08:29.520 All right, we're going to talk about methods today, specifically method optimizations. We're going to talk about types of methods, how methods work, bytecode, and the Ruby VM internals. Then we're going to look at some method optimizations, specifically inline caches and polymorphic inline caches, implementing them and testing them against real code.
00:08:59.610 The important thing that I want you to know is basically there is a method to my madness on this Monday. All right? This is a very highly technical presentation. I apologize for that on a Monday morning. Usually, I start out with some jokes, then go on to the technical portion of my presentation. But I want to start out with something a little bit softer.
00:09:13.020 I want to give some advice for new people in the audience as well as experienced people. This is an advanced presentation, but I want to make sure that it’s accessible. So even if you’re new to programming, I want you to be able to get something out of this presentation. My goal for this talk is to ensure that there’s something in this for everyone at all levels.
00:09:45.270 The other thing I want to ensure is that if you are new, if you don’t understand some of the things that I’m talking about, don’t be embarrassed to ask me questions. At some point in my life, I didn’t understand these things either. I had to learn them somehow, and the same is true for all of you as well.
00:10:07.650 So if you have questions, please ask me. You don’t have to wait until the end of the presentation; you can come up afterwards. I promise I don’t bite. And I want to say to those of you who are more experienced: if someone new approaches you and asks a question, be kind; answer their questions. They need to learn too.
00:10:24.210 As long as you’re asking questions, make sure to be genuine about it. You're trying to learn. That’s just advice that I want to give for new people and experienced folks alike. We’re going to look at high-level method concepts and low-level concepts. I want to ensure that anyone can pick up something from this presentation.
00:10:43.050 So let’s get started! First off, I want to say this presentation is a failure. I have failed; that is it. We’re going to get to the end of the talk, and you’ll find out that everything failed at the end. But that’s fine because we’re going to learn about all this stuff. It's all about the journey, right? So we will go through this journey together.
00:11:14.450 So first off, I want to talk about call sites. Here's some example code; you can easily recognize the call site by the dot you see right there. That's a call site. It's very interesting, and I want you to know that call sites in your code are unique: if we were to repeat that line multiple times, there would be multiple call sites.
00:11:46.490 Throughout the presentation, I might refer to the left-hand side and the right-hand side of a call site. The left-hand side is the object you’re calling the method on, and the right-hand side is the method itself. Let’s look at some more examples of call sites. Here’s some sample code. Of course, we have that initial example, and we see a similar one here where the left-hand side is a class rather than an instance.
00:12:22.310 We have another one right here. This is a call site as well, but you’ll notice there’s no left-hand side. It’s an implicit left-hand side where the left-hand side is ‘self’. In this case, when you’re just writing a script like this, the left-hand side will be main or whatever object you’re inside of at that time. We also have more examples down here. We’ve got one here with a case statement that we’ll discuss later.
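Since the slides aren't visible in the transcript, here's a sketch of what such call sites look like (the names are illustrative, not the slide's actual code):

```ruby
user = "gorby"
user.upcase             # explicit left-hand side (an instance); upcase is the method
String.instance_methods # the left-hand side is a class
puts "hi"               # implicit left-hand side: self (here, main)

case user               # each `when` clause is also a call site:
when String then :str   # it implicitly calls String === user
end
```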
00:13:12.080 So now we have an idea of where these call sites are. We know they’re all over the place. Let's talk a little bit about how Ruby’s VM works. I’m going to breeze through this fairly quickly. It works similarly to a calculator, and I’ll explain how I visualize this in my head. When I was in school, I used an HP calculator that looked like this, kind of like this newer model.
00:13:40.620 For example, if we want to use this calculator to calculate 9 times 18, you would enter 9, then 18, and hit enter. This puts those two numbers on the stack. So you hit nine and put it on the stack; you hit 18 and put that onto the stack. When you hit multiply, it pops both values off the stack and then pushes the result back onto the stack. I know many of you are saying, 'That's so much work; why bother?' And to you, I say: go back to your TI-83 Plus; we don't need you here.
00:14:19.200 Anyway, Ruby's VM works very similarly. It pushes things onto the stack and pops them off. We can do exactly the same thing with our Ruby VM. On the left-hand side is our bytecode, and on the right-hand side is our stack, which works its way through the bytecode. For example, if we say 'push six,' it pushes six onto the stack, and then if we say 'push eight,' it pushes eight onto the stack. If we then say 'add,' both values are popped off the stack and 14 is pushed onto the stack.
00:14:57.580 That’s it! That’s pretty straightforward. Now, an important thing to note is that this bytecode is actually stored somewhere. It isn’t some magical thing; it exists in memory. It’s stored as an array of arrays. If you look into Ruby’s VM internals, you’ll see it's essentially stored like this: if I were to translate this into a Ruby data structure, it’s an array of arrays.
00:15:14.210 The outer array is our list of bytecode, and on the inside, we have each individual bytecode with an operator and then an operand. This is essentially what the entire array looks like. So it's important to know that this bytecode is stored in memory, and you can manipulate it—it’s there for you to utilize.
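You can poke at this yourself with `RubyVM::InstructionSequence` (a minimal sketch; the exact instruction names vary between Ruby versions):

```ruby
require "pp"

iseq = RubyVM::InstructionSequence.compile("6 + 8")

puts iseq.disasm # human-readable bytecode: putobject 6, putobject 8, opt_plus, ...
pp iseq.to_a     # the same bytecode as a plain Ruby array of arrays
```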
00:15:48.710 Now, moving on to how methods work at a high level: we find the name of the method, then we look at the type of the left-hand side, and then we determine where the method is given that information, and finally, we execute the method. Now let’s look at this from a low level by inspecting the instruction sequences of a particular method.
00:16:28.130 This is how we can take a look at the bytecode. If you dump the instruction sequence for a method, the output will include at least two important lines for executing our method. If you repeat that call in your code and dump it again, you'll find the output is almost exactly the same, except that the pair of instructions is repeated: each call site gets its own. You can then match these back to your code.
00:17:02.150 The `getlocal` instruction matches up with the local variable 'bar,' and `opt_send_without_block` matches the actual method call. In the bytecode, we've got our operator and our operand. I know those lines probably aren't lining up very well on the slide, but that's the idea; they're essentially split into those two parts, and here's a close-up view of how that appears.
00:17:33.530 Now, as this executes, we can watch the stack. We'll see the `getlocal` instruction push the value of 'bar' onto the stack, so that local variable is now on the stack. Then we execute the next instruction; it pops that value off the stack, calls the method on it, and then pushes the return value back onto the stack.
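A sketch of that inspection, assuming a recent Ruby (`RubyVM::InstructionSequence.of` needs 2.6+; the operand formatting varies by version, and the method names are illustrative):

```ruby
def foo(bar)
  bar.baz
end

puts RubyVM::InstructionSequence.of(method(:foo)).disasm
# Among the output, look for a pair like:
#   getlocal                bar      (pushes the local onto the stack)
#   opt_send_without_block  mid:baz  (pops it, calls baz, pushes the result)
```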
00:18:17.590 This is how the VM works internally, and it's not much different from our calculator example. We can see how this bytecode is implemented by checking out the file 'insns.def'. If you check out Ruby's source, you'll find this file, which defines the bytecode instructions. You'll see that the send instruction essentially just says: search for the method, then call that method.
00:19:10.659 Now, for your homework, what I want you to do is to mess around with this. Grab some methods, output their bytecode, and just write some small methods. Play around with some Ruby code and analyze the resulting bytecode outputs. Then, look at 'insns.def' and see what those different bytecodes do. This is a really good way for you to start learning how Ruby’s VM internals work. You don’t really need to know C to get started, but you can get a general idea of what’s going on inside.
00:19:57.009 So, before executing a method, we must first find it. I've rewritten the algorithm in Ruby so we can see how it works; essentially, the process is: ask the class for its method table and check whether it has the method. If it does, great, we've found it. If not, try the superclass, and keep recursing up the ancestor chain.
00:20:40.920 For example, if we have some code that looks like this, when we go to find the method, the algorithm works like this: 'Hey class, give me your method table. Do you have foo?' Nope. 'All right, let's try B.' Nope. 'C?' Again Nope. 'D?' Yes! We have it; great—we found the method, and we can call it.
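In plain Ruby, that lookup looks roughly like this (a sketch: the real implementation is C, and `instance_methods(false)` stands in for direct access to each module's method table):

```ruby
def search_method(klass, name)
  klass.ancestors.each do |mod|                 # walk A, B, C, D, ...
    if mod.instance_methods(false).include?(name)
      return mod.instance_method(name)          # found it in this table
    end
  end
  raise NoMethodError, "undefined method `#{name}'"
end

p search_method(String, :upcase) # => #<UnboundMethod: String#upcase>
```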
00:21:03.380 If you think about this algorithm, it means that our method lookup is O(N), where N is the size of the ancestors. It means that the more ancestors we have, the longer it takes to look up that method. So let’s do some tests on method speed. This is a great idea! We know the algorithm states that an object with 10,000 ancestors will be slower than one with ten ancestors.
00:21:57.580 So we have a test setup where we have a class called Ten, which has ten ancestors, and another class called TenThousand, which has 10,000 ancestors. Note: it's not exactly ten or 10,000; there are a few more in there, but come on, who cares, right? When we run this benchmark, the results are reported in iterations per second, meaning the more iterations per second, the faster it is.
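A sketch of a setup like this, assuming the `benchmark-ips` gem (the talk's actual class definitions aren't in the transcript):

```ruby
require "benchmark/ips"

root  = Class.new { def foo; end }  # the method lives at the top of the chain
build = ->(n) { n.times.inject(root) { |klass, _| Class.new(klass) } }

ten          = build.(10).new
ten_thousand = build.(10_000).new

Benchmark.ips do |x|
  x.report("10 ancestors")     { ten.foo }
  x.report("10,000 ancestors") { ten_thousand.foo }
  x.compare!
end
```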
00:22:34.630 If we look at this output, the two are almost identical. I said earlier that the more ancestors you have, the slower lookup gets, yet the benchmark clearly shows that isn't happening. The lookup algorithm really is O(N), yet the timings suggest otherwise. So how is Ruby speeding up these method calls?
00:23:12.639 The way that we speed up these method calls is by essentially caching things that never change. If you look at this code, you’ll notice that the ancestors for the 'ten' variable never change, and the ancestors for the 'ten thousand' variable never change either. So, why do we need to look up that chain every single time? If we know the ancestors are constant, we can just look it up once, cache that value, and then use it on subsequent calls.
00:23:57.140 This is where inline caching comes in. This method lookup cache is stored inline with the bytecode. So, when you go back to work, you can say, 'Hey, I know about inline caches! They are caches that are stored inline with the bytecode.' That's great! Now, what's interesting is when we talk about breaking this cache: when people say 'breaking the method cache,' this is the particular cache they're referring to.
00:24:39.240 I want to take a slight detour and look at case/when statements versus if statements. When we looked at our call sites earlier, there was one special one: the case/when, where each 'when' clause takes an object and implicitly calls `===` on it. What I want to do is take that case/when statement and break it down into an if/else statement.
00:25:17.470 On the far left, we have an if/else statement using triple equals (`===`). In the middle, we have our case statement that just uses case/when; these two should behave exactly the same. We're going to benchmark the two and compare them. If we execute our benchmarks, we'll find that the if statement actually runs faster than the case/when statement, even though they do the same thing!
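A sketch of that comparison, again assuming `benchmark-ips` (the talk's exact code isn't in the transcript):

```ruby
require "benchmark/ips"

obj = "hello"

Benchmark.ips do |x|
  x.report("if/elsif with ===") do
    if    Integer === obj then :int
    elsif String  === obj then :str
    else                       :other
    end
  end

  x.report("case/when") do
    case obj
    when Integer then :int
    when String  then :str
    else              :other
    end
  end

  x.compare!
end
```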
00:26:02.120 So why is that? The reason is that the case/when statement doesn't have a cache, while the if statement's call sites each have their own call caches. You can verify this by inspecting the bytecode: if we dump the bytecode for the if/else statement, you'll see that there's a call cache, while in the case/when statement the `checkmatch` instruction does the comparison but has no cache attached.
00:26:41.580 Now, I’m saying this: don’t change all of your code. Don’t go changing all of your case when statements. This is fine; it’s perfectly fine! It's a good structure! But just note that, in some cases, we don’t have a call cache; and you can use those instruction sequences to see that.
00:27:22.740 The other key point: notice how we've got call caches everywhere. This is important because the size of the cache matters. If we doubled the size of every cache, that would likely double the size of your bytecode, and you probably don't want that; the memory footprint of your program might grow too much.
00:28:00.470 The next thing I want to look at: we've discussed where this cache lives, in the bytecode, but what's actually in the cache? We need a key and a value. I'm sure most of you work with caches at work, like memcached, and know you need a key and a value. The cache here uses the class of the left-hand side as the key and the method we looked up as the value.
00:28:38.280 To fill this cache, we say, 'Okay, give me the class of hello.' We get a serial number for that class, and that serial number is the key. The value is the method we looked up. The class of the receiver is simply the first class in its ancestor chain.
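In plain Ruby, the whole hit-or-miss dance looks roughly like this (a sketch: the VM's real per-class serial numbers live in C, so we fake them here with a counter):

```ruby
Serials   = Hash.new { |h, klass| h[klass] = h.size + 1 } # fake serial numbers
CallCache = Struct.new(:serial, :method)

def dispatch(cache, receiver, name)
  klass = receiver.class                      # class of the left-hand side
  if cache.serial == Serials[klass]
    cache.method                              # hit: skip the ancestor walk
  else
    cache.serial = Serials[klass]             # miss: look it up and refill
    cache.method = klass.instance_method(name)
  end
end

cache = CallCache.new
dispatch(cache, "hello", :upcase)             # miss: fills the cache
m = dispatch(cache, "world", :upcase)         # hit: same class, same serial
p m.bind("world").call                        # => "WORLD"
```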
00:29:17.200 Now let's discuss how to create cache misses, and how to measure them. We can use `RubyVM.stat`, which returns a hash with various counters; today I'll only discuss two of them. The first, the global method state, impacts every call site cache in your program, so it's critical not to break it.
00:29:51.890 (I unnecessarily shortened 'very' to 'V' on the slide, which captures the attitude of today's kids.) It is very bad if you break this one. The second value, the 'class serial', only impacts the specific class that you broke and its descendants.
00:30:23.070 If we define a new module, the serial number increases. If we define a new class, the serial number increases, and if we monkey patch a class by adding a new method, the serial number also increases. Essentially, anytime the shape of our code changes, this cache breaks. So, we have to think about the structure of our code in the classes and modules that we define and the methods that we define.
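You can watch those counters move yourself (key names as of Ruby 2.x, the talk's era; newer Rubies have renamed or removed some of them):

```ruby
p RubyVM.stat
# e.g. {:global_method_state=>137, :global_constant_state=>824, :class_serial=>5664}

before = RubyVM.stat
Class.new                # defining a class changes the shape of the program
p RubyVM.stat == before  # => false: a serial counter went up
```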
00:30:57.110 This behavior happens as soon as your code gets loaded. You might be thinking, ‘Oh my god, when I require all these files, the cache gets broken!’ However, it doesn't matter because this only happens once; this happens at the very beginning when you boot your program, and this cost should be amortized across your program.
00:31:42.530 Let's look for cases where the cache gets broken at runtime. To find examples, I wrote a method called `stat_diff`, which prints out a diff of `RubyVM.stat` showing which pieces of code break your caches. On the left there's no diff; that code is fine. But extending an instance breaks the cache, and calling `instance_exec` on it breaks the cache as well.
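A plausible reconstruction of that helper (the talk's actual implementation isn't in the transcript; on Ruby 2.x you'll see class_serial bumps, while newer Rubies expose different counters):

```ruby
def stat_diff
  before = RubyVM.stat
  result = yield
  RubyVM.stat.each do |key, val|
    puts "#{key}: +#{val - before[key]}" if val != before[key]
  end
  result
end

a, b = Object.new, Object.new
stat_diff { a.frozen? }               # ordinary call: prints nothing
stat_diff { a.extend(Comparable) }    # extend: creates a's singleton class
stat_diff { b.instance_exec { 42 } }  # instance_exec: same story for b
```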
00:32:23.920 Accessing the singleton class will break the cache. The commonality here is that we're accessing the singleton class of that instance. So, when considering an object, ask yourself: ‘What is this an instance of?’ It seems trivial, but here’s a simple example.
00:32:58.400 Let's say we have a 'Foo' class and an instance of it, 'foo'. What is 'foo' an instance of? It's straightforward: it's an instance of the 'Foo' class. Now, what if we define a singleton method on 'foo'? What is the class of that instance now? It can't just be 'Foo,' since it has methods that 'Foo' doesn't.
00:33:38.390 What it becomes is an instance of a singleton class that inherits from 'Foo.' When you access the singleton class, it is created on the fly, and it inherits from 'Foo.' So we distinguish between the two: the 'real class' of foo is still 'Foo,' while its immediate class is now the singleton class.
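Concretely, the distinction looks like this:

```ruby
class Foo; end

foo = Foo.new
foo.class                      # => Foo

def foo.bar; end               # define a singleton method on this instance
foo.singleton_class            # => #<Class:#<Foo:0x...>>
foo.singleton_class.superclass # => Foo (the singleton class inherits from Foo)
foo.class                      # => Foo (the "real class" is unchanged)
```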
00:34:24.860 You'll find this 'real class' terminology used throughout the Ruby source code. Here's how it impacts us: let's say we access the object using 'instance_exec'. When we calculate the cache key for that call, on top we're looking at the singleton class, while on the bottom the real class remains just 'Foo,' so the keys no longer match.
00:35:11.920 This leads to the conclusion that accessing the singleton class will break those caches. So far, we've seen two kinds of cache misses: first, defining new methods or new classes at boot time, which is common and fine; second, runtime misses that occur when we access the singleton class.
00:36:01.000 Now let's discuss another way to break the method cache: polymorphism. Hopefully, we're all writing polymorphic code! Although, I'll give you a hint: by the end of this talk, we'll see that we may not be doing polymorphism as heavily as we think we are.
00:36:39.240 Take a look at this example. On the left side, we call foo with instances of A only, while on the right side we call foo with instances of both A and B. I made sure both sides call foo the same number of times to keep it consistent. If we compare their performance, it turns out the left side runs a bit faster than the right.
00:37:06.850 Interestingly, the mixed call site is slower because the inline cache stores only one class: every time the receiver flips between A and B at that call site, the cached class no longer matches, and the method has to be looked up again. So even though both sides make the same number of calls, the B instances introduce cache misses that the A-only side never sees.
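A sketch of that experiment, assuming `benchmark-ips` (class names follow the A/B description above):

```ruby
require "benchmark/ips"

class A; def foo; end; end
class B; def foo; end; end

MONO = [A.new, A.new]
POLY = [A.new, B.new]

def run_mono(list)
  list.each { |o| o.foo } # this call site only ever sees A
end

def run_poly(list)
  list.each { |o| o.foo } # this call site alternates between A and B
end

Benchmark.ips do |x|
  x.report("monomorphic") { run_mono(MONO) }
  x.report("polymorphic") { run_poly(POLY) }
  x.compare!
end
```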
00:37:49.200 When a call site only ever sees instances of A, its calls keep hitting the lookup cache, and we call it a monomorphic call site. If it sees instances of both A and B, we call it polymorphic. And if a call site sees too many types, we call it megamorphic: at that point there are too many classes for caching to help.
00:38:33.800 So how can we speed this up? Our cache operates this way: on each call it asks, 'Does the cache hold the same serial number as the receiver's class?' If yes, we have a cache hit; if no, we go look up that method again.
00:39:20.890 We call this a monomorphic inline cache because it stores one type inline. What if we had a cache that keeps a list of previous entries and checks the receiver against each of them? Say we keep track of two or three types; that gives us a more capable caching system.
00:40:07.070 If the receiver's class is among the two or three classes in our cache, we hit! But if we see four or more, that's too many, and we fall back to a normal lookup. This is a polymorphic inline cache. I wrote a patch to implement this in Ruby, and much like most patches, it seems daunting at first glance. However, it's quite manageable.
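Extending the earlier sketch into a polymorphic inline cache might look like this (still pseudocode-grade: fake serial numbers, and the entry limit is illustrative):

```ruby
serials = Hash.new { |h, klass| h[klass] = h.size + 1 } # fake serial numbers
entry   = Struct.new(:serial, :method)
max     = 3                                             # cap on cached types

dispatch = lambda do |entries, receiver, name|
  serial = serials[receiver.class]
  if (hit = entries.find { |e| e.serial == serial })
    hit.method                                # hit on any cached type
  else
    m = receiver.class.instance_method(name)  # miss: do the full lookup
    entries.shift if entries.size >= max      # too many types: evict oldest
    entries << entry.new(serial, m)
    m
  end
end

cache = []
dispatch.(cache, "s", :to_s)  # miss: caches String
dispatch.(cache, :s, :to_s)   # miss: caches Symbol
dispatch.(cache, "t", :to_s)  # hit: String is already cached
```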
00:40:58.920 If we run our test again with the patch applied, using identical methods, we find the numbers are now essentially the same: the monomorphic and polymorphic benchmarks run at the same speed. Mission accomplished! But unfortunately, we now arrive at the part of every optimization story that can suck.
00:41:37.280 The most crucial aspect of any optimization is to measure its impact—if you speed something up but no one uses that feature, does it matter? No! The most valuable performance tip I can offer is to only speed up bottlenecks.
00:42:16.800 To identify a bottleneck, you must measure your code. There are various tools available for this, and I'm not going to go into all of them today. Measure your code, identify the bottlenecks, and then optimize only those bottlenecks. The real question is: 'What percentage of our call sites are actually polymorphic?' This optimization will only improve polymorphic call sites.
00:43:09.050 To find out, I added logging to the VM's cache lookup function: whenever the Ruby VM hit or missed a cache, I logged that event along with the receiver's type. That gives you concrete data on how many types flow through each call site, and how often each site hits or misses.
00:43:56.780 I ran our application with this logging in place until we had captured about four million calls in total, then built a histogram of how many types each call site processed.
00:44:39.250 On the x-axis we have the number of types seen at a call site, and on the y-axis we have how many call sites saw that many types. The big bar on the left is the monomorphic call sites. It's apparent that the polymorphic optimization can help, but only for the small minority of call sites that aren't monomorphic.
00:45:25.710 Notably, one call site saw over 16,000 types! It traces back to EventMachine, which our application uses; that call site misses the cache every time a new client connection is established. Knowing that, we'd be wise to consider switching to another solution.
00:46:14.120 Finally, taking a brief look at the call sites that saw only two types, very few of them were polymorphic by design. Most of them could have been written to be monomorphic, which suggests much of the polymorphism in our code was accidental rather than deliberate.
00:47:03.090 As we begin to wrap up: applying polymorphic inline caches probably won't significantly help Ruby overall, at least judging by our findings. But it's only a failure if we learned nothing from it, and I'd like to think the effort was worthwhile for what we learned along the way.
00:48:00.900 Inline cache behavior is workload-specific, so while the patch didn't benefit our application, perhaps yours would benefit. I urge you all to test this against your own code, keeping in mind that our application may not be representative of Ruby programs in general.
00:48:38.220 If you want to explore this further, check out these branches and compare them against your own code. Please try to validate these findings! My mistake was jumping to the optimization before analyzing where the polymorphic call sites actually were in our production code.
00:49:13.990 This approach of measurement before optimization is crucial. And lastly—please use more polymorphism. Validate my findings to make this patch worthwhile!
00:49:47.109 Thank you very much for your time. I'm honored to be here at the last Mountain West. Thank you for having me!
00:50:13.439 The question was regarding the cost of polymorphic calls; how much do cache misses potentially cost? It’s a very good question! The miss itself won't likely incur great expense, as Ruby has two layers of caching. The first would be our previously discussed call cache...
00:50:48.470 and the second is a backup layer: lookups that miss the inline cache are cached in a secondary, global hash, so a miss usually doesn't pay for the full walk up the ancestor chain. You'll still see reasonable performance on a miss, but validating the actual cost would take deeper measurement.
00:51:16.480 Someone joked about writing a Dr. Seuss-like book titled 'The Cache in the Hat.' I think that's a brilliant idea, and I would gladly co-author it with my cats!
00:51:47.183 Now, regarding Active Record polymorphism: yes, we use Active Record's polymorphic associations in our application. I don't know how widely they're used elsewhere, so if you use them, please investigate!