RubyGems

Summarized using AI

Keynote

Aaron Patterson • April 21, 2015 • Atlanta, GA

In this keynote delivered at RailsConf 2015, Aaron Patterson, also known as tenderlove, explores performance improvements in Ruby on Rails and work on its testing infrastructure. The presentation, infused with humor and personal anecdotes, begins with Patterson revealing his quirky use of Windows XP for the talk. It opens with jokes about static typing and Action Cable before moving into candid insights into the Rails community and framework internals.

Key Points Discussed:
- Rails Community Engagement: Patterson emphasizes the collaborative spirit of RailsConf, where core developers gather to discuss the future of Rails.
- Performance Improvements: He shares ongoing projects aimed at enhancing performance, such as automated parallel testing, aiming for faster test execution without manual setup.
- Cached and Compiled Views: Patterson introduces ideas for compiling views at boot to reduce memory footprint and speed up response times, addressing challenges with multiple templates.
- Revisiting Testing Frameworks: The speaker proposes reimplementing controller tests on top of the integration testing framework, reasoning that integration tests should be fast enough, since the only real difference is that they go through the middleware stack and the router. This leads to a discussion of why integration tests were sluggish and how they were optimized.
- Profiling and Performance Measurement: Patterson narrates his approach to profiling both controller and integration tests using various tools, eventually uncovering performance bottlenecks. He highlights the need for continuous improvement and validation of profiling results.
- Bundler and RubyGems Performance: A significant focus is given to the performance of Bundler and RubyGems, wherein Patterson notes how requiring files scales with the number of gems, leading to linear time complexity. He presents innovative ways to optimize gem specification loading time.

In conclusion, Patterson's keynote is a humorous yet insightful reflection on the Ruby on Rails framework, highlighting the importance of performance and community collaboration in crafting a robust development experience. The talk ends with an optimistic view towards continuous enhancement of the Rails ecosystem, encouraging developers to engage and innovate.

RailsConf 2015

00:00:16 Okay, okay, so I'm doing an experiment where I am trying to see how much I can get away with before they stop inviting me to these things.
00:00:23 All right, I've asked the organizers, and if you follow me on Twitter, you know what's about to happen. If you don't follow me on Twitter, you're in for a treat. I've asked the organizers to place security at the back door so you can't leave.
00:00:38 Now, I will begin. I don't understand why people like static typing so much. It's impossible for me to program without moving my hands.
00:00:49 I heard a team of French computer scientists are working on a JIT for Ruby; they call it "Legit." The second version will be called "Too Legit." I recently went to a JavaScript conference, but all of the Node developers put me on edge.
00:01:04 It's 9:00 a.m.
00:01:06 Yes.
00:01:10 Apparently, Action Cable is all about streaming, so why didn't DHH call it Action Netflix? The problem with Action Cable is that I'm paying for more channels than I watch. I'm really excited about the new text-based browser that DHH announced yesterday, Turbo Links.
00:01:46 Where's my T? How much time do we have left? Okay, all right, I will stop punishing you for a while. Hold on a sec; let’s get this started.
00:02:01 I should have prepared one.
00:02:17 Okay, I think we're up, so you're probably wondering why I'm using Windows XP. I'll explain why I'm using Windows XP.
00:02:32 The reason is that Kent Beck is here at this conference, and he’s going to be giving a keynote. I'm a really huge fan of Kent Beck, and I went to his Wikipedia page, and it turns out that Kent Beck invented XP.
00:03:01 So, I thought I'd use XP for this presentation. Unfortunately, Windows XP has no emoji, and I really like to use emojis.
00:03:14 So instead, I have to use Wingdings. I'm sorry; this presentation will lack emojis. Anyway, welcome to RailsConf! I'm really happy all of you are here, and that you all came to watch me.
00:03:29 I really enjoy going to RailsConf. If you attend RailsConf, you get to meet a lot of the core team developers like me, since most of the core team comes to RailsConf.
00:03:49 The reason that most of us core team members have to come to RailsConf is so that we know what the next version of Rails will have, like Action Cable.
00:04:02 My name’s Aaron Patterson. If you don’t follow me on the internet, I am Tender Love. If you do not recognize me, this is what I look like on the internet.
00:04:26 Just so that’s clear, this is me. In fact, I've got to start out by saying thanks to everybody. I work for Red Hat, so thank you, Red Hat!
00:04:44 Yes, I'm using Windows. Sorry! I enjoy working at Red Hat. I'm on the ManageIQ team; we write software that manages clouds.
00:05:03 So if you have a cloud, we will manage it. I also want to say thanks to Ruby Central for having me here and for RailsConf. I'm really happy to be here.
00:05:21 I have to say thank you to a couple more people too: Eileen Uchitelle, a.k.a. eileencodes. She wrote a lot of the code that's in my presentation.
00:05:44 You should have seen her yesterday! And also Koichi; he's going to be giving a presentation right after mine today. I think his is one of the first talks. You should go see his talk; he wrote Ruby's virtual machine.
00:06:00 You probably want to see that. And then, I want to say thank you to all of you for coming. I need to have cat photos in the talk because that's what I do.
00:06:26 I’ve got cats. This is one of my cats; this is Gorbachev Puff Puff Thunderhorse. This is my other cat, Cho Cho; she helps me pair program a lot.
00:06:49 That is my keyboard; it's very convenient to have her there. I also brought stickers of my cat, so if you'd like a sticker of my cat, come say hello to me.
00:07:07 The reason I bring these to conferences is that I’m a very shy person. I don't know what to say to people, and I know people don’t know what to say to me.
00:07:28 So if we have this awkward moment, you can say, "Aaron, give me a sticker," and I will just give you one.
00:07:42 I've also been working a lot on doing extreme programming, like brushing up on extreme programming, and I'm thinking about putting out a new product that I’ve been testing out at home for extreme programming.
00:08:01 I think we need to do extreme programming a lot more safely. I think we’re doing it too dangerously these days.
00:08:12 This is what I typically wear when I’m extreme programming because I don’t want to get hurt.
00:08:30 Last year, I gave a keynote. Last year, I was a thought leader in training. I just want to recap what’s happened to me over the past year.
00:08:45 Well, I actually graduated, so that was very difficult. I got a degree in Thought Leadering from an online university.
00:08:58 Unfortunately, after I earned my degree, I found out that online university is not an accredited university and that Thought Leadering is not an actual degree.
00:09:14 But I'm going to use my Thought Leadering presentation skills anyway. Are you ready? Yes?
00:09:25 Okay, so we're going to do some Thought Leadering now. Hold on; I've got to get my Thought Leadering face on.
00:09:39 Ruby! Rails technology! Rails! Computers! Rails! Synergy! Rails! Innovation! Rails! Future!
00:09:55 Rails! These are all words that I am saying to you because I don’t know what to do with an hour on stage.
00:10:18 Anyway, now I’ve got to do some trolling on DHH because this is what I really enjoy doing.
00:10:34 I like to watch the presentation, and then...so anyway, I was really hoping that this year there would be a "TDD is dead" thing like last year because that was gold.
00:10:52 So I think this year it wasn't as extreme, so I’m just going to say it for him: SOA is dead!
00:11:09 It was really exciting to see that we announced Turbo Links 3, RJS.
00:11:27 Also, watching that presentation and learning about the new features that are supposedly coming in Rails 5, I thought to myself; I should announce something.
00:11:50 Like, just something brand new like, okay, we’re going to put this into Rails. So you know, DHH talked a lot about his backpack.
00:12:06 So, I want to announce something today. I want to announce something that's a lot more lightweight than a backpack—much more lightweight!
00:12:27 I’m working on a lightweight framework that’s going to ship inside of Rails 5, and I’m calling it Action Fanny Pack.
00:12:38 It only comes in one color: pink.
00:12:49 All right, let's talk about some tech stuff finally! I'm going to talk about performance because I love performance.
00:13:06 That is what I love working on. I apologize for the early time; I guess it is only 9:15, and we're going to talk about some very technical stuff.
00:13:19 Maybe it'll be in Rails 5; I don't know.
00:13:24 I’m not bound by your timelines and versions; I do what I want. Anyway, I’m going to talk about some of the things that I’m thinking about and working on.
00:13:44 We're going to start from very nebulous things and get more specific as we go through the presentation.
00:14:01 We'll start out with stuff that I’m just thinking about and working on proof of concepts, and there's not much here.
00:14:14 It's probably not going to ship in Rails 5; I can't say anything for sure. But I'm going to talk about the more nebulous stuff first and get more specific.
00:14:28 I have to admit, I had a very hard time putting together this presentation this year because last year I talked about Active Record and the work that I did on that.
00:14:43 The work I presented last year took me three years to do all of that work, so my presentation last year was the culmination of three years' worth of work.
00:15:01 So as I'm working on this presentation, I'm like, 'What am I going to talk about? This is going to pale in comparison to the speech that I gave last year.' This sucks.
00:15:15 But I'm going to talk about the projects that I'm working on, and I hope that we can present these, or I can present these as adequate developments in the future.
00:15:34 Adequate; hopefully, they will be fast enough. The first thing I want to talk about is automatic parallel testing.
00:15:47 Something that I really want to have in future versions of Rails, maybe Rails 5; I'm not sure.
00:16:02 What I mean by automatic parallel testing is that, well, just exactly that—I want your tests to be able to run in parallel and do it automatically.
00:16:12 I want it so that when you upgrade to Rails 5 and you do Rails test, it will just run all of your tests in parallel.
00:16:28 Your tests will just automatically be faster and you don’t need to do anything.
00:16:43 This is my dream, and the way that we're thinking about doing this is by splitting your tests across multiple test processes.
00:17:03 I think Rails 5 will introduce randomized tests, which will help run your tests in parallel eventually.
00:17:26 But the idea behind this is that we’ll have maybe one database with multiple test processes hitting that database.
00:17:45 So there's two ways that we can do this parallel testing: we can have one database and multiple processes hitting that database or we can have multiple databases.
00:18:06 This is one of the experiments that I was working on, and the idea behind this is that, well, you know our website runs in parallel.
00:18:22 We have multiple processes running and all of those are hitting a single database; clearly we can do things in parallel.
00:18:36 So why don’t we have multiple test processes running against one single database?
00:18:50 Then just each test opens a transaction and rolls everything back at the end, that way we can get parallelization.
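A minimal sketch of the one-database approach being described, assuming ActiveRecord: each test body runs inside a transaction that is always rolled back, so parallel workers sharing one database never see each other's writes. The helper name and the worker setup are illustrative, not the actual Rails implementation.

```ruby
require "active_record"

# Hypothetical helper: wrap a test body in a transaction and roll it back afterwards,
# so concurrent test processes sharing one database stay isolated from each other.
# (Assumes a database connection is already established; worker management is omitted.)
def run_isolated(test_body)
  ActiveRecord::Base.transaction(requires_new: true) do
    test_body.call
    raise ActiveRecord::Rollback # undo everything this test wrote
  end
end
```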
00:19:05 I tried this out; I actually put together a proof of concept with this and found that I could only get about a 15% speed improvement on the Active Record tests with this particular technique.
00:19:25 So, I wasn't very happy with it. The other thing I'm looking at is doing multiple databases.
00:19:39 Now, the idea behind this—and there's already a lot of gems out there that do this—is to say, okay, we're going to take our database and split it into multiple copies.
00:19:56 Then just run a set of tests against each of those databases so we can do it in parallel.
00:20:07 But unfortunately, the solutions that are out there today are very manual; you have to set up N databases and do all this stuff.
00:20:23 I want it to be completely automatic, so this is the next thing that I need to test out.
00:20:37 I said this is very nebulous; I have not tested this particular technique yet, but I think we can get much better parallelism out of this technique.
00:20:49 The important thing to me is that this has to be automatic. I don’t want anybody setting up anything to get it to work.
00:21:05 It has to be transparent, so it should work—you should be able to upgrade, and all your tests will run in parallel, and nothing will break.
00:21:22 The next thing that I’m working on is cached and compiled views.
00:21:37 Now, this is a little less nebulous; we actually have some code that works with this.
00:21:56 What I’m talking about with cached and compiled views is your actual views in your Rails application.
00:22:13 Today, we do lazy compilation, so if you have Haml or ERB templates, the way that it works in Rails today is...
00:22:29 When a request comes in, we get a lock and then we say, 'Hey, is the template compiled?'
00:22:45 If the template is compiled, we unlock it. If it isn’t compiled, then we compile it.
00:22:57 We save it into memory and then we unlock. What I mean by compiling is taking that ERB or Haml code and turning it into Ruby code.
00:23:12 So we take your template code, turn it into Ruby code, evaluate that Ruby code, and then cache it.
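A small sketch of the lazy scheme just described, with made-up class and method names rather than the actual Rails code: compile on first use, under a lock, and cache the compiled result in memory.

```ruby
require "erb"

class TemplateCache
  def initialize
    @lock     = Mutex.new
    @compiled = {}
  end

  def fetch(name, source)
    @lock.synchronize do
      # "Compiling" here means turning the template source into runnable Ruby.
      # The first request pays the compile cost; later requests hit the cache,
      # but every request still has to take the lock.
      @compiled[name] ||= ERB.new(source)
    end
  end
end

cache = TemplateCache.new
cache.fetch("index", "<p><%= 1 + 1 %></p>").result(binding) # => "<p>2</p>"
```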
00:23:28 The problem with this particular setup is that the compile-and-save step actually writes to memory.
00:23:48 The reason this sucks is because if you’re running a threaded web server like Puma or whatever, the threaded web servers have to lock.
00:24:00 So you have this locking bottleneck there. The other problem is that this is also bad for forking web servers.
00:24:14 We’re wasting memory; now this template compilation doesn’t happen until after a request comes in.
00:24:31 So when you start up your unicorn, unicorn forks off however many processes you have, and now every single one of those processes is compiling the same thing, storing it into memory.
00:24:45 Now, what would be nice is if we could compile all of those on boot, do it in the master process, and then all the child processes will share that particular memory.
00:25:02 We can save a lot of memory in that case. Now, unfortunately, there are a lot of challenges with this project. I’m going to talk about some of the challenges and how we’re thinking about getting around them.
00:25:37 Some of the challenges are: I have too many templates. I’m telling you this because we’re going to precompile all the templates as soon as you boot up.
00:25:54 Obviously, this is going to impact your boot speed because we're compiling all the templates, and maybe you have a website that has millions of templates.
00:26:16 But unfortunately, you only use like two of them, right? So now we’re compiling millions of templates, and in fact, we’re actually wasting memory.
00:26:36 You normally would have just used two of those millions of templates. I would say you need to delete those templates.
00:26:52 But we should probably have something that you can configure and say, 'I only want these particular templates to be precompiled.' Now, that’s not so much of a challenge.
00:27:07 We can do that; it's not a big deal. Without any configuration, we'll compile everything.
00:27:17 Otherwise, you can say, 'I only want these particular ones compiled in advance.' This is not a very big challenge.
00:27:37 The biggest challenge, in my opinion, is rendering with locals. When you do a render template and you specify :locals and you give it that hash, what actually happens is...
00:28:05 You’ll see it looks a little bit like this, where we say, 'Okay, we’re going to put a Prelude above your compiled templates.' We take your template, convert it into Ruby code, and then we make this little Prelude based on that hash of locals that you passed in.
00:28:36 The Prelude sets local variables equal to some particular value from the hash. Unfortunately, this is dynamic and we can only know this at runtime.
00:29:03 We only know this information as soon as that render call happens, which means we have to do it lazily.
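A hypothetical illustration of that prelude idea, not the code Rails actually generates: rendering with `locals: { name: "Aaron" }` produces a compiled method shaped roughly like this, and the assignments at the top can only be written once the locals hash is known.

```ruby
def _render_greeting(output_buffer, local_assigns)
  name = local_assigns[:name] # the prelude: one assignment per local, derived from the hash
  output_buffer << "Hello, #{name}!"
  output_buffer
end

_render_greeting(+"", { name: "Aaron" }) # => "Hello, Aaron!"
```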
00:29:17 Now, what we’re thinking of doing is looking at the source of the templates and trying to determine what locals you actually use.
00:29:32 So maybe we can come up with heuristics. We can definitely come up with heuristics for templates that can’t have locals.
00:29:47 But maybe we can come up with heuristics for ones that do have locals and only precompile sets that we know.
00:30:01 This is one of our challenges, and I'm not telling you not to use locals; please use locals. It's fine, really!
00:30:16 This is one of the biggest challenges that we have. Yes, it’s still nebulous, but we do actually have this up on GitHub.
00:30:35 The next thing I want to talk about is integration tests. Let's take a look at a controller test first.
00:30:51 OK, this is a controller test. All right, do a GET, do an assertion, and we’re done. Everybody's familiar with this; great! Let's look at an integration test now.
00:31:16 Okay, you GET, you do an assertion, and we're done. So controller tests and integration tests—our programmer brains are going, 'These look very similar!'
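For reference, the two kinds of test being compared look roughly like this; the `UsersController` and its route are assumed, and this is the general shape rather than the exact slide code.

```ruby
require "test_helper"

class UsersControllerTest < ActionController::TestCase
  def test_index
    get :index               # dispatches straight to the controller, no middleware
    assert_response :success
  end
end

class UsersIndexTest < ActionDispatch::IntegrationTest
  def test_index
    get "/users"             # goes through the middleware stack and the router
    assert_response :success
  end
end
```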
00:31:28 I thought to myself, 'Why do we write controller tests? The only reason I could come up with is that integration tests are too slow.' This is the only thing I could come up with. What's interesting is if we look at these integration and controller tests, we're testing at different levels.
00:31:50 But nothing about these tests says what assertions we can and can't make about these tests, right? Anyway, we write controller tests because integration tests are slow.
00:32:12 So something didn't make sense to me because I kept thinking about it.
00:32:30 I'm like, 'Well, if integration tests are slow, does that mean that our website is slow too?'
00:32:47 This just didn’t make sense. If the application is fast enough to be serving up requests, why is it too slow to be writing integration tests?
00:33:03 So this did not make sense to me. For Rails 5, this is less of a question mark. I want to have faster integration tests.
00:33:15 So we’ve been working to implement faster integration tests.
00:33:31 If we think about controller tests and integration tests, what’s the difference between those two?
00:33:46 Controller tests have no middleware. When you're testing a controller, you're not going through any of the middleware in your application, while integration tests do have middleware.
00:34:04 These are really the only differences between the two. Integration tests go through the middleware and hit the router, while controller tests do not.
00:34:23 You’re testing directly against the controller. So what I wanted to do is I want to delete controller tests.
00:34:41 I mean I want to delete them. Just go away!
00:34:53 Okay, you don’t need to clap; it’s fine.
00:35:03 I don’t mean I’m just going to delete them, and you’re going to upgrade and go, 'Oh my God, my controller tests don’t work anymore.'
00:35:22 What I want to do is actually delete the controller testing framework and implement it in terms of the integration testing framework.
00:35:39 So that when you upgrade, everything will work the same. It’s just that we’ll only have an integration test framework under the hood.
00:35:58 In order to do this, we need to figure out why these integration tests are so slow. Why so slow?
00:36:16 The first thing we did—and by we, I mean Eileen—she writes the code, I do the presentation.
00:36:30 Anyway, this is what a benchmark looks like for it. Now we’re doing a benchmark; this is the top one. It’s a benchmark against an integration test.
00:36:51 The other one is a controller test, and we're testing exactly the same thing. We’re testing the index.
00:37:10 So the top one’s integration, the bottom one’s controller test.
00:37:21 Then we compare the two. The thing is, I don’t actually care how fast either one of them are; I only care how fast they are in comparison to each other.
00:37:34 We want to make sure that integration tests are just as fast as controller tests.
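The benchmark has roughly this shape, assuming the two test classes sketched above; Minitest lets you instantiate a test with a method name and run it directly, and benchmark-ips does the comparison.

```ruby
require "benchmark/ips"

Benchmark.ips do |x|
  x.report("integration") { UsersIndexTest.new(:test_index).run }
  x.report("controller")  { UsersControllerTest.new(:test_index).run }
  x.compare! # only the relative numbers matter here, not the absolute times
end
```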
00:37:49 So when we ran this, we found—we did the comparison, and it was 2.4 times slower. Integration tests were 2.4 times slower than controller tests.
00:38:03 So we need to know what is slow; why is it so slow? To figure this out, we used a gem called Stack Prof.
00:38:20 This is based on profiling stuff inside of Ruby. Maybe Koichi will be talking about it later; I'm not sure.
00:38:37 But we use a gem called Stack Prof, and this is the test that we did. We said, 'Okay, we’re going to run one method against the controller test.
00:38:57 I want to know what the bottlenecks are in the controller tests,' and then I’m going to compare those to the bottlenecks in the integration tests and see what the difference is.
00:39:24 So if I can see what the bottleneck differences are, maybe I can understand what it is about integration tests that makes them so much slower than controller tests.
00:39:42 So we have a controller test where we’re doing a CPU profile on that, which means that we’re timing how long the code spends on the CPU.
00:39:56 I’m going to talk about that a little bit, and then we dump out the profile results to this file.
00:40:11 So I want to talk a little bit about CPU time versus wall time. We configured this test to use CPU time.
00:40:28 What CPU time means is: I only want to know about the time that my code is spending on the CPU.
00:40:44 So to give you a concrete example, what this means is if you sleep for 10 seconds, your code will take 10 seconds.
00:40:56 But that's not going to show up on a CPU profile. So you won’t see things like sleep, you won’t see things like IO.
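A toy example of the difference: a sleep dominates a wall-time profile but is invisible to a CPU-time profile, because the process is off the CPU while it sleeps.

```ruby
require "stackprof"

StackProf.run(mode: :wall, out: "tmp/wall.dump") { sleep 1 } # the sleep shows up here
StackProf.run(mode: :cpu,  out: "tmp/cpu.dump")  { sleep 1 } # ...but not here
```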
00:41:12 I said, 'Okay, we’re going to view the stack.' This is how you dump out the stack.
00:41:25 The stack looks like this. We ran it and I was really excited because if you look at the top, 53% of our time was spent in MiniTest inside this method.
00:41:40 We’re like, 'Wow, amazing! If we fix this, our code is going to be 53% faster!' or twice as fast or whatever.
00:41:54 So how do we make this method faster? I’m going to show you my number one favorite technique for making a method faster.
00:42:09 This is my number one favorite technique: delete it! So we change it to this; we go, 'Okay, let’s just yield. We don’t need that code!'
00:42:23 So re-run the benchmarks, look at the results, and we see, 'Okay, boom, that is no longer in our profile, 50% faster! I am extremely happy!'
00:42:37 Yes. Wingdings, it’s the worst!
00:42:51 All right, so I edited the process. This is the before time; it took about 12 seconds.
00:43:07 And then this is the after time; it also took about 12 seconds.
00:43:15 You will note that this is not 50% faster!
00:43:27 So I was sad about this. We know that this test is CPU-bound. We aren’t doing any sleeps or IO calls. What is going on here?
00:43:45 If we change the profile to use wall time rather than CPU time, we should be seeing the same stack, because we're CPU-bound, not IO-bound.
00:44:05 So we re-run it, and this is the CPU stack. If you compare it to the wall stack, you’ll see that they are completely different.
00:44:22 Maybe not completely. Let me look again—are they completely different? Different? Yes, they’re totally different.
00:44:38 This is not what we expected. So, I contacted the person who wrote the profiling tool and said, 'Hey, I got a CPU-bound test here.
00:44:56 When I do a wall profile versus a CPU profile, it's totally different.' He said, 'Well, there’s probably a bug in your code.' I said, 'I don’t think so.'
00:45:16 I showed him, and I was like, 'It's really weird,' and he was like, 'Yeah, that is weird!' I said, 'Can you fix it?' and he said, 'No, I have no idea.'
00:45:38 So then, I contacted Koichi about it, because StackProf uses Ruby's profiling tools under the hood for doing this stuff.
00:45:57 I said, 'Hey, Koichi, I got this test. It looks weird. The results are very weird to me. Can you take a look at it?'
00:46:05 He looks at it too and he’s like, 'Huh, yeah, those are weird.' I’m like, 'Okay, like so can you—'
00:46:26 He runs the test on Linux and says, 'I'm not getting the same numbers as you.' Then he runs it on OS X and gets the same numbers. It turns out there seems to be a bug in trap, but only on OS X.
00:46:41 So there is a bug, and we don’t know what it is. The point I’m trying to make here is that even profilers have bugs.
00:46:57 You need to verify the test results that you’re getting.
00:47:10 Unfortunately, we’ve walked through all this stuff, and we were talking about getting things faster and making speed improvements.
00:47:26 And we made no speed improvements, so I have wasted the last 10 minutes of your time. I am very sorry.
00:47:40 So let’s do something useful. Let’s do some integration test profiles. This time, we’re only going to be using wall time.
00:47:50 We looked at the profile of integration tests to see if we can get them as fast as controller tests.
00:48:05 This is what our stack looked like for wall, and it turned out that we were spending 27% of our time calling delegate.
00:48:19 So, calling delegate was slow, so we rewrote it a little bit; we rewrote our implementation to stop calling delegate.
00:48:30 This is the patch that we applied; this is actually fairly old, but we made this change a few months ago.
00:48:42 There are other caveats about this, so come talk to me about it; astute readers will notice the issues.
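The general shape of that kind of change, with illustrative class and method names rather than the actual Rails patch: replace the ActiveSupport `delegate` macro with hand-written forwarding methods, so the hot path is a plain method call.

```ruby
class SessionWrapper
  def initialize(session)
    @session = session
  end

  # before:
  #   delegate :get, :post, to: :@session
  #
  # after: define the forwarding methods by hand
  def get(*args)
    @session.get(*args)
  end

  def post(*args)
    @session.post(*args)
  end
end
```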
00:49:02 This sped it up, so we’ll check our progress. Before, we were 2.47 times slower.
00:49:18 We run it again, check our comparison; we are now down to 1.45 times slower.
00:49:30 If we run time against our stuff, we’ll see that before, it was 26 seconds. Now we’re down to 16 seconds.
00:49:44 We made some improvements! So, even profilers have bugs; you need to check this.
00:49:57 Don’t just use one profiler; check your numbers. Always measure the code that you’re working on and verify the results you come up with.
00:50:10 This code, this project that we did is about a few months old. Today, integration tests are approximately 10% slower than controller tests.
00:50:26 We’ve also improved the speed of both controller tests and integration tests.
00:50:36 So that when we ship Rails 5, if we delete the controller testing framework, the integration tests will be faster than the controller tests were in the previous version.
00:51:00 Our goal was to make sure that if we did delete the controller tests, they had to be at least as fast as the previous version.
00:51:17 So they should be faster than controller tests in previous versions.
00:51:49 Performance! I don’t even know what this stupid phrase means.
00:52:03 What I mean by this is, as a Rails core team member, I want to know what the performance of Rails is.
00:52:19 But I don’t want to know just from inside the framework; I want to know as soon as I hit enter, what is it doing? How fast are we going? Can we make that faster?
00:52:36 In order to study this performance, I had to study Bundler, because very frequently my command is 'bundle exec'.
00:52:51 In order to study Bundler, I needed to study bin stubs. And what I mean by bin stubs—this is extremely confusing.
00:53:07 I am not talking about the stuff that you store in bin/, like bin/rails; I'm not talking about Bundler bin stubs.
00:53:20 I’m talking about the bin stubs that RubyGems installs. So when you install a gem on your system, that bundle command is actually generated by RubyGems.
00:53:35 This bin stub deals with gem requirements and such, which is complicated. I needed to study this, and you'll notice that it has this gem line.
00:53:53 It says 'gem' in there. So I needed to study Gem, and in order to study Gem, I also had to study require.
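A RubyGems-generated binstub for `bundle` looks roughly like this (simplified; the exact contents vary by RubyGems version), and the `gem` line near the bottom is the one being discussed.

```ruby
#!/usr/bin/env ruby
require "rubygems"

version = ">= 0"

gem "bundler", version                          # activate the bundler gem
load Gem.bin_path("bundler", "bundle", version) # then run the gem's own executable
```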
00:54:08 Okay, so the question is, what does gem do?
00:54:25 I gave a talk at a conference called CUSEC, the Canadian University Software Engineering Conference.
00:54:41 It was a conference for Canadian University students, put on by University students for University students.
00:54:55 Of course, I had to make puns about John Cusack.
00:55:11 So I had tons of references to his movies and stuff. But you know they’re all 80s movies, and I asked everybody.
00:55:24 'Raise your hand if you're over 35.' Two hands went up, and I'm like...
00:55:50 The gem method finds a gemspec with a particular name and puts that gem's lib directory onto the load path.
00:56:05 It does load path manipulation, and we can see that you can pull up IRB and see this today.
00:56:21 This is an example; we’re loading the bundler gem.
00:56:34 So if you look at this, you’re duping the load path, loading the bundler gem, and then checking the load path afterward.
00:56:50 You’ll see that there’s a new path on the load path.
00:57:06 Now, what happens when a file is required? This is another interesting thing we need to know about.
00:57:24 When a file is required, it goes through require; this is so confusing, because RubyGems has its own require, which is different from Ruby's built-in require.
00:57:43 I’m talking about the require that's built into RubyGems. When you write your Ruby code, you probably don’t know which one you’re using.
00:58:05 It depends on the situation. If you’re using Bundler, you’re using Ruby's require.
00:58:19 But if you're just writing a regular Ruby script, you're using RubyGems' require.
00:58:34 This is super fun to figure out and fun for new people to understand.
00:58:49 I’m going to talk about RubyGems' require—the one that is inside of RubyGems.
00:59:03 What does it do? It says: okay, if the file is in the load path, then we load it. If it is not in the load path, then we look for it in a gemspec, add that gem to the load path, and then load the file. Make sense?
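In pseudocode, with made-up helper names (the real logic lives in RubyGems' override of Kernel#require), that algorithm is:

```ruby
def rubygems_require(path)
  if already_on_load_path?(path)
    original_require(path)            # the file is reachable: just load it
  else
    spec = find_spec_containing(path) # search installed gemspecs for the file
    spec.activate                     # add the gem's lib directory to $LOAD_PATH
    original_require(path)            # now the plain require can find it
  end
end
```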
00:59:19 Okay, good. I love these PowerPoint transitions; they are so awesome!
00:59:31 So I want to study RubyGems' performance.
00:59:47 I’ll talk a little bit about my test methodology here. The test methodology is I wanted to require one file and then increase the number of gems that are on the system.
01:00:07 So I require one file, then add more and more gems to the system, and see what the performance of require looks like as the number of gems grows.
01:00:24 I also wanted to do the same thing with activating gems: using the gem method, activate one gem and then increase the number of gems that are on the file system.
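A sketch of that methodology: with N generated gems installed, measure a single require and, separately, a single gem activation. The gem name below is a stand-in for one of the generated gems, and in practice each measurement runs in a fresh process.

```ruby
require "benchmark"

puts Benchmark.realtime { require "fake_gem_0" } # time to require one file
puts Benchmark.realtime { gem "fake_gem_0" }     # time to activate one gem
```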
01:00:39 So I did the same test. Now, I’m going to show you some graphs from it.
01:00:54 As I was testing it, I found that the time was linear.
01:01:05 We expected a linear growth. The x-axis is the number of gems that are on the file system and the y-axis is the time in seconds.
01:01:21 All these gems are generated; I generated all of these gems—they all contain one file, and all the gems have no dependencies.
01:01:43 What was interesting is since we found that it was linear time, it meant RubyGems was probably looking through an array.
01:02:00 It’s probably trying to find something in an array, which would be linear time.
01:02:16 So I found the array that it was searching through.
01:02:39 This graph represents the best and worst case scenarios. The pink line is if the file was at the very beginning of the list, and the green line is if the file is at the very end of the list.
01:02:51 So probably when you're doing a require, you're going to land somewhere in between these two lines.
01:03:09 So this is how long it took to require one file.
01:03:23 I also looked at allocations to require one file.
01:03:33 So, as you require a file, what do the object allocations look like?
01:03:46 This is also linear growth: exactly the same test, just measuring memory allocations.
01:04:01 So, best and worst case scenarios. I think it’s pretty interesting; you can see the slope here is much different than the slope on the time.
01:04:15 I think that says something about our garbage collector—that our garbage collector is good.
01:04:28 Then I wanted to look at the time taken to activate one gem.
01:04:43 So this would be doing just 'gem bundler' or whatever gem, right?
01:04:59 So I tried the same thing and saw that it was linear time. I’m like, show me the best, and show me the worst.
01:05:14 It turned out they were exactly the same. Exactly the same growth.
01:05:27 So then I looked at allocations per gem, and I only got one line here: also linear growth.
01:05:43 I only put one line because they’re exactly the same.
01:05:58 Okay, so we got require performance, we’ve got gem performance.
01:06:09 So now let’s take a look at Bundler performance.
01:06:21 Okay, so my test methodology for this—and I'm much more familiar with the RubyGems code than I am with the Bundler code.
01:06:37 So I don’t know why I can’t explain to you any of these numbers; I can explain to you my test methodology, but I can’t tell you why these numbers are the way they are.
01:06:51 So we have the test methodology here. I put one gem in the Gemfile and then I increase the number of gems that are on the file system.
01:07:01 Then I also did another test where I said, 'Okay, I want to increase the number of gems in the Gemfile but keep a constant number of gems on the file system.'
01:07:17 Now, remember, as I said earlier, all these gems have no dependencies. All these gems are local to my machine, so I’m not hitting the network in any of these tests.
01:07:35 If we look at varying the number of system gems where we only have one gem in the Gemfile, we'll see we have linear growth doing 'bundle update'.
01:07:49 So again, the x-axis is the number of gems on the system, and the y-axis is the time in seconds.
01:08:06 Then if we look at 'bundle exec', exactly the same thing—just different times, both linear growth.
01:08:23 I kind of expected this since RubyGems, under the hood, was linear growth and we’re linearly increasing the gems on the system.
01:08:37 So this was not surprising to me.
01:08:52 The next thing I did was I varied the number of gems in the Gemfile.
01:09:04 I said, 'Okay, let's keep a constant number of gems on the system. Let's increase the number of gems that are in the Gemfile.'
01:09:18 I have extremely good news for you: the good news is that the time for 'bundle exec'—if you run 'bundle exec', it is actually constant time.
01:09:35 So, we’re seeing a lot of noise in here, but it’s actually about the same time between each run all the way up to a thousand gems.
01:09:56 That’s very good! Unfortunately, that time is like one second, or not good.
01:10:11 What I think is interesting is that if you time what I found, RubyGems was loading all the gem specs on your system.
01:10:21 If we do a breakdown and time loading all the gem specs, we’d see loading all those gem specs took about 710 milliseconds.
01:10:39 We’re spending another 750 milliseconds doing I don’t know what.
01:10:47 The good news is that it's constant; it doesn't grow.
01:10:58 So all we have to do is find that other stuff and probably reduce that.
01:11:09 See if we can find a way to reduce spec loads and we’ll have constant growth that’s smaller.
01:11:24 The next test I ran was, 'Okay, I'm growing the number of gems in the Gemfile and I want to run bundle update.'
01:11:35 This is the one that surprised me the most: when you run 'bundle update' and you increase the number of gems in your Gemfile.
01:11:53 The x-axis is the number of gems in the Gemfile and the y-axis is time in seconds.
01:12:07 When I was first running this, I wanted to test with 100 gems, 1,000 gems, 10,000 gems.
01:12:21 Unfortunately, when I ran against 10,000 gems, I was sitting there for hours.
01:12:37 So I decided, 'Okay, let’s just go in 100 gem increments.'
01:12:50 Now, I wanted to say, 'Okay, we’ve done a curve fit here; we see that it is polynomial time.' How long would 10,000 gems take?
01:13:05 We can calculate this, so I was like, 'Okay, let’s plug in the numbers to this equation.'
01:13:15 I plugged it in; the time came out to something I did not believe.
01:13:29 I said, 'Okay, I don’t believe that number. Let's take 200, plug it into the equation, and make sure that that 200 matches the point on this graph.'
01:13:46 It did not; this equation is wrong, thanks Excel!
01:14:05 I was doing I was...
01:14:42 Hold on.
01:14:57 Oh my god, hold on. Okay...
01:15:16 OK, quickly, go to black screen please.
01:15:31 Gosh, why did Kent Beck invent this? This is the worst! I don’t understand.
01:15:40 Okay!
01:15:51 Sorry, back to our code here.
01:16:06 This makes me even more nervous. I’m sorry. All right!
01:16:23 So that equation was wrong, and the numbers were not correct.
01:16:38 So what I did was I decided, okay, it’s time to pull out the big guns: let’s use R.
01:16:48 So I had all of this; this is using R.
01:17:02 This is the R code used to fit that curve. I'm reading in all the times from a CSV file.
01:17:20 The second line, we’re doing a regression against a polynomial—that’s a second order polynomial.
01:17:36 Then we plot those points—the points that I got from the CSV file.
01:17:51 Then that last line is plotting the actual fitted equation.
01:18:06 So this is the graph that comes out of R. You can see those circles; those circles are the actual data points.
01:18:23 And the red line is the equation. I asked R for the coefficients and we could get the coefficients for each part of the equation.
01:18:38 Then check the R-squared to see how close we fit that curve; the R-squared is .998, so that’s very good.
01:18:55 This is the correct equation.
01:19:04 So what I did is plug this into R and define a function that just takes the coefficients from that fit and does the multiplication.
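Restated in Ruby, the prediction function just evaluates the fitted second-order polynomial at a given Gemfile size; the coefficients below are placeholders, not the values R reported.

```ruby
predict_seconds = ->(gems, a:, b:, c:) { a * gems**2 + b * gems + c }

predict_seconds.call(10_000, a: 0.0, b: 0.0, c: 0.0) # plug in the fitted coefficients here
```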
01:19:27 So if we run this and check an empty Gemfile, our empty Gemfile will take about 11 seconds, which I thought was interesting.
01:19:39 So you have 1,000 gems installed on your system, and your Gemfile is empty.
01:19:50 You run 'bundle update,' and it takes 11 seconds. Now, if you have a Gemfile that has 10,000 gems in it,...
01:20:06 remember, I couldn't finish this. I wanted to know how long it would have taken.
01:20:20 It would have taken about 6,800 seconds or so.
01:20:36 So I’m up here on stage talking about performance stuff, complaining about it, saying that it takes forever.
01:20:51 I don’t like doing that; I don’t want to get up here and be like, 'Oh, boohoo, Turbo Links is the worst!'
01:21:07 I want to talk about how to fix it! What do we do going forward, right?
01:21:20 Because who cares? It’s nice that we can see this stuff. It’s good that we can model these things.
01:21:38 But really what matters is: okay, how do we fix it? What do we do moving forward?
01:21:54 If we take a look at all the stuff we’ve been doing—like activating gems, going through the Gemfile.
01:22:05 If we look at it and think about the information that’s there, let’s look at this code and think, 'What do we know?'
01:22:17 Let’s look at these different things. We’ll look at bin stubs, we’ll look at the gem file.lock and think about what is the information that we have.
01:22:30 So if we look in the bin stub source code, we’ll see the information we have is the gem's name.
01:22:44 Now, in Gemfile.lock, if we look at the gem's entry, we’ll see the name and the version of the gem that we care about.
01:23:01 So if you go into RubyGems today, you’ll find some code that looks like this, and this is not actually the code.
01:23:16 This is just a condensed down TL;DR version.
01:23:30 Essentially what it does is it says, 'Okay, I’m going to load all of the gem specs off of your file system.
01:23:42 Then I’m going to look through all of them for one that has this particular name, say, 'Bundler,' whatever you’re looking for.'
01:23:54 So this is where our linear growth is coming from, right? We’re loading all those gem specs and then trying to find the one that has that name.
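A condensed TL;DR of that behavior (not the literal RubyGems source): load every installed gemspec, then scan the list for the matching name.

```ruby
def find_spec(name)
  all_specs = Gem::Specification.to_a          # loads every gemspec on the machine
  all_specs.find { |spec| spec.name == name }  # then scans them linearly for the one we want
end

find_spec("bundler")
```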
01:24:11 So let's look at the impact of this.
01:24:25 What happens if we have 100,000 gems on our file system? Well, 100,000 and one...
01:24:39 We’ll have 100,000 gems, plus Bundler. If we do that, how long does bundle -v take?
01:24:53 Okay, so my machine has 100,001 gems on it. I run bundle -v, it takes 50 seconds to do that.
01:25:04 I think that's pretty ridiculous; it’s too long.
01:25:17 So what I did is I said, 'You know, this doesn’t make sense; we shouldn’t load all the specs. We know we only need the Bundler spec.'
01:25:34 It says it in the code right there; it says gem 'bundler'. Why am I loading all of them?
01:25:50 So what I did was change the code to be like this: it’s a very stupid, very dumb change.
01:26:06 'Okay, instead of just getting all the gem specs, why don’t I get the ones that just start with Bundler?'
01:26:20 Revolutionary!
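The rough shape of that change, simplified rather than the actual RubyGems patch: only load the gemspec files whose filenames start with the gem's name.

```ruby
def specs_matching(name)
  pattern = File.join(Gem.dir, "specifications", "#{name}-*.gemspec")
  Dir.glob(pattern).map { |path| Gem::Specification.load(path) }
end

specs_matching("bundler").find { |spec| spec.name == "bundler" }
```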
01:26:33 I know, so if I do that and run again, we’re down to 600 milliseconds.
01:26:44 So, 50 seconds down to 600 milliseconds. Ah, thank you, thank you!
01:27:00 Now unfortunately, that code that I just showed you is still linear time.
01:27:15 It’s just that the slope of this is much smaller than the other slope.
01:27:29 I think we can actually make it O(1), and the way we can do that is we say, 'Well, in the Bundler case,
01:27:42 we know the gem's name and we also know exactly which version we want.'
01:27:58 From that, we can actually calculate the filename.
01:28:13 So we can say, 'Okay, let’s just calculate the filename for that; we’ll load that specific one.'
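A sketch of that O(1) idea: when both the name and the exact version are known, as they are in the Bundler case, the gemspec's filename can be computed instead of searched for.

```ruby
def load_spec(name, version)
  path = File.join(Gem.dir, "specifications", "#{name}-#{version}.gemspec")
  Gem::Specification.load(path)
end

load_spec("bundler", "1.9.4") # the version string here is just an example
```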