The Ruby Guide to *nix Plumbing

GoRuCo 2009

00:00:20 Right, I'm going to say something I don't often say, which is good morning. My body still actually thinks it's Thursday afternoon, and I am still in the UK. None of the beer that I've drunk since I arrived in New York has had any effect whatsoever, but I'm going to need to put these glasses on because that is a very bright light, and it's making me feel a little sick.
00:00:44 When I first put in the proposal for this presentation, I hoped to stand up here and tell you lots of exciting things, mostly about playing around with shared memory and message queues and stuff like that. The trouble is, the more time I spent writing the presentation, the more I realized that this stuff wouldn't make sense. It's such a disjunction from how people normally use Ruby. You can say, oh, well, there are a few tricks you can do, and suddenly you can have this, that, and the other. But I thought it would be more useful to put together something that doesn't quite handhold you but shows you a certain train of thought.
00:01:20 So, what we're going to have is, over here, I'm going to play a little movie. It's quite dull, but it will walk you through all the little things you can do in Ruby sequentially to play nicely on a Unix box and take advantage of low-level facilities. Then, I'm going to try and think of something genuinely intelligent to talk about while this plays, based partly on my own experiences of messing around with this stuff, and how the hell I got into something quite so mad in the first place.
00:01:55 If anybody at any point wants to interrupt and ask a question, just stick your hand up, and I will do my best to notice you. I'll hold the microphone a little closer, because it attenuates quite a lot. This isn't really a planned talk at all, but anyway, I'll start this little thing off.
00:02:04 I always include a bio in things; it helps people realize that there's a human being behind all this. About three years ago, I got hired to work on a top-level domain called .tel, which has had a little bit of press recently since it finally launched. There wasn't really a development budget; in fact, I was the only developer. They had sacked two development teams previously, and they needed somebody who could build internal demos and understood a bit about how Unix boxes work: not a lot, just a little.
00:02:30 Mostly because it's quite an unusual system: it resolves to human beings, but it doesn't resolve to machines. It's never allowed to resolve to an IP address, which isn't quite what you expect from DNS. Since I had free rein with the project because it was just internal prototypes, and because I'm very lazy, I decided the best way to go was just to use Ruby. I like Ruby; it takes no effort to code in Ruby, at least not compared to Java, for example.
00:03:13 So I started doing all these mad experiments. At first, it was just standard stuff, using basic utilities that Ruby provides, like process forking and things like that. But as time went on, I couldn't help but wonder how it was all implemented under the hood. I have a bad habit that comes from my background in embedded systems, where I’ve become used to writing everything that is under the hood.
00:03:30 So I started messing around inside MRI (Matz's Ruby Interpreter) and began doing odd little experiments using system calls. One of the things I discovered is that while most people don't think of Ruby as a systems programming language, it's actually pretty good for it. You hear a lot of complaints about how the runtime is too slow, and sure, there are things that Python will do faster. But when you're talking to an operating system to get it to do things, the limit on how fast you go isn't normally your process or your runtime; it's actually the operating system.
00:04:04 A lot of people have complained about the lack of native threading that we used to have and that we now sort of have fixed. But if you're running on a multicore box, I really don't want to think about threading. If I'm running with, say, two cores, okay, my brain can get around that, but if someone sticks a Sun Niagara in the corner of the room and I've got 64 cores, my brain can't cope with that. So I started thinking: there is a way. I don't care how fast the implementation is; I just care that it plays nicely with these features. It allows me to get at the operating system but still write all of the logic in Ruby.
00:04:54 Lo and behold, you can.
00:05:05 Now, over there it's explaining the philosophy of Unix. It's a bit hypocritical of me because, actually, I hate Unix. My love-hate affair with Unix started in 1988 when I met my first-ever Unix box, and I couldn't get past the fact that everything is sort of like two-letter commands. You think, what's the deal with that? But the thing is, if you get past what most people think of as Unix, the shell, and actually get into the internals of it, it's a very nice, light, effective operating system, revolving around the basic premise that you just shouldn't repeat yourself. Build little things, build them well; don't repeat yourself.
00:05:56 We have a similar philosophy in the Ruby world. We like to build tools that are well-designed for what they do, or as well-designed as they can be within the time constraints to get them out the door, and we like them to be as DRY as possible. There's a natural marriage; it's a good fit.
00:06:22 Sorry, this is definitely a ramble. I’m going to wake up now. What Unix consists of is, more than anything else, a very small operating system kernel. It doesn’t matter how it’s implemented—microkernels, monolithic kernels—all of this is stuff that people in the C world care about and will argue about. All that really matters to us is that it provides a very small number of facilities: file descriptors, which basically... I mean, originally the principle was that everything is a file.
00:06:55 If you want to share memory between two processes, effectively, you end up with a file handle; it's just like sharing a file. This allows you to write code that's actually very simple because the vast majority of stuff you can do with IO can be wrapped up in an IO object. Now, that's not true anymore because we've got new features like kernel-level eventing that actually mean we need to do a bit more than just call open. We have to call their own versions of open, but this makes it a very flexible operating system to muck about with.
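As a minimal sketch of that 'everything is a file descriptor' idea in plain Ruby: a raw descriptor can be wrapped in an ordinary IO object, and a pipe is just a pair of descriptors handled the same way (fd 2 is simply stderr by Unix convention):

```ruby
# Wrap an existing file descriptor in a Ruby IO object.
err = IO.for_fd(2, "w")     # fd 2 is stderr by convention on Unix
err.sync = true
err.write("written through a wrapped file descriptor\n")

# A pipe is just a pair of descriptors wrapped the same way.
reader, writer = IO.pipe
writer.puts "hello"
writer.close
puts reader.gets            # => "hello"
```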
00:07:43 I should probably give a shout-out to where I actually got this hangover from, which was a bar down on the Bowery called Liters, which seems to have the main distinguishing point of serving pale ale, something you can barely get in England anymore. If you imagine that we’ve got this lovely operating system that's just going to give us file descriptors and we've got complicated tasks that we want to solve, that we’d like to get multicore with, what's really nice about Unix is it gives us a very simple way with these file descriptors to do it.
00:08:40 Every process can effectively just be sitting there with its own handle; they can be talking to each other. But there are downsides to the way in which Unix implements all of this. Unfortunately, for one thing, it's of the opinion that all processes are descended from a single process. So when you boot up a Linux box or a Mac, it creates this init process, and Unix has this lovely idea of creating a new process by copying the whole state of an existing one into it.
00:09:10 This means that the more complicated your processes get, the bigger their child processes are as well, and 99% of the time we end up chucking all of that away because we decide to run a completely different program in the new process. Now, with Ruby, I'm going to quote a number, and it's a really strange one: on an iMac G5 using Ruby 1.8.3, the memory footprint is 1.87 megabytes just for the loaded interpreter.
00:09:59 I know this because I wrote a very bad script once, purely because I didn't know what I was doing at the time, and it decided to spawn 543 of them, which unfortunately made the Mac decide it wasn't going to do anything else. I spent ages trying to look up the particular error code, which I think was something like -61 or something, and I couldn't ever find a description of it. But I think it was basically along the lines of 'go away; the kernel doesn't know how to schedule this many processes.' But 1.87 megabytes actually isn't a lot when you bear in mind that most modern implementations of Unix actually don’t copy most of that.
00:10:40 In fact, they don't copy any of it. Nearly all of the system calls for creating a new process will, instead, use a copy-on-write mechanism because you've got virtual memory pages, and you don’t have to copy them until they get dirty. This means you can quite happily, and incidentally, that was one gig of RAM that all of those processes fitted into quite nicely. So, this means you can quite happily schedule several hundred Ruby processes, and each one of them can do whatever you feel like in them.
00:11:06 I was actually doing something really useless involving C because, unfortunately, a lot of my work at the time involved playing around with C. But for quite a lot of problems, you're basically in a position where you can look at a map-reduce style solution just by spawning off a whole pile of bare Ruby processes and then running whatever you want in them. I'm not sure that most people actually do this because my experience of deployed Ruby apps is mostly deployed Rails apps. It seems you can get about 10 Rails apps to a gigabyte.
00:11:56 I mean, there’s a lot of weight in Rails, so it's not surprising and there’s an awful lot of process state that has to be copied. But going down this process route, Ruby actually provides you with some really lovely facilities, all baked straight in. There’s kernel support for just spawning off processes that you don't care what they do. So if you’ve got a background IO job, you can just chuck it out there and forget about it.
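A minimal sketch of that fire-and-forget pattern, using nothing beyond Ruby's built-in Process module (the log path here is purely illustrative):

```ruby
# Fire-and-forget background job: fork a child, let it do its IO,
# and detach so we never have to reap it ourselves.
pid = Process.fork do
  File.open("/tmp/background.log", "a") do |log|   # illustrative path
    log.puts "child #{Process.pid} doing some slow IO at #{Time.now}"
    sleep 2
    log.puts "child #{Process.pid} done"
  end
  exit!   # skip at_exit handlers inherited from the parent
end

Process.detach(pid)   # a watcher thread reaps the child for us
puts "parent #{Process.pid} carries on immediately"
```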
00:12:27 It also gives you the facility to actually start up processes where you've got a nice pipe connecting the two of them so that you do care about the results, and you can sit there and wait on them. But where it starts to fall down a bit is where you want to actually do non-blocking IO. Now, we’ve got lots of non-blocking IO calls that have been introduced over the last couple of years, and they sort of alleviate some of the problems if you’re interested in just one file or one socket.
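Going back a step, that pipe-connected facility is IO.popen; a quick sketch, with ls picked purely for illustration:

```ruby
# Start a child process with a pipe attached to its standard output.
IO.popen("ls -l /tmp") do |child|
  child.each_line do |line|
    puts "from child: #{line}"
  end
end
puts "child exited with status #{$?.exitstatus}"
```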
00:13:01 But if you're going to write network code in Ruby, you still tend to end up basically sitting in selects where you've got to explicitly give them timeouts. The funny thing about select is that it's not technically a blocking call, because under the hood Ruby doesn't actually block. It polls, and then it polls, and then it polls some more, and eventually it comes back and says the timeout's up.
00:13:44 So it’s actually implemented with non-blocking IO, which is quite amusing. But that’s really not an efficient way of writing a server.
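For reference, the shape of that select loop in plain Ruby looks something like this sketch of a tiny echo server (port 9000 is an arbitrary choice):

```ruby
require 'socket'

server  = TCPServer.new(9000)   # arbitrary port for the sketch
clients = []

loop do
  # Wait up to one second for any of these descriptors to become readable.
  readable, = IO.select([server] + clients, nil, nil, 1.0)
  next unless readable          # timeout expired; go round again

  readable.each do |io|
    if io == server
      clients << server.accept
    elsif (line = io.gets)
      io.write(line)            # echo it straight back
    else
      clients.delete(io).close  # EOF: the client went away
    end
  end
end
```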
00:14:05 How many people here use Nginx for something? If you go to the Nginx website, and in fact there's a link at the end of this presentation for it, there's a link through to an article about how to get C to handle 10,000 simultaneous connections, the C10K problem. It's quite fascinating because all of the techniques in that article you can do in Ruby, but only up until the point where you hit kernel eventing, which you can't currently do in pure Ruby.
00:14:34 The main trick is to get away from this blocking element. Now, this is part of what I would really like to have talked about today, but the trouble is it would have involved a lot of code, and I've had bad responses in the past when my presentations have had 20 pages of code in them, because I find that writing the code is a lot more fun than the talking part.
00:15:05 You have to actually go through certain steps; for one thing, you've got to actually get down to the machine, and there are only two ways to do that in Ruby as it ships. You've got the syscall interface, which is possibly the most dumb-headed, unfriendly, useless way of making a system call imaginable, because it won't actually give you back any result except whether or not it had an error code.
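For the record, this is roughly what the syscall interface looks like. The call numbers are kernel- and architecture-specific, which is a big part of the pain; treat the two constants below as assumptions that happen to hold for x86_64 Linux:

```ruby
# Kernel#syscall takes a raw system call number plus integer/string arguments.
# These numbers are only valid for x86_64 Linux; check your own platform's
# headers (e.g. /usr/include/asm/unistd_64.h) before reusing them.
SYS_WRITE  = 1
SYS_GETPID = 39

pid = syscall(SYS_GETPID)                   # simple integer results come back fine
msg = "hello from process #{pid}\n"
syscall(SYS_WRITE, 1, msg, msg.bytesize)    # anything richer means hand-packed buffers
```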
00:15:42 And anybody who comes from C and is used to using syscalls where you just dump in a buffer and you get results back gets very frustrated very quickly. But there’s something else that standard Ruby, well standard MRI ships with that most people just don’t seem to take advantage of and that’s Ruby DL. This is just a wrapper for dynamic link libraries; it works on Windows, it works on Unix.
00:16:06 In fact, I could probably have spoken about Windows instead of Unix today on that particular point, and it’s great. Gregory mentioned Ruby FFI, and I really like where Ruby FFI is going because most languages have pretty good support for FFI. So if you want to use Fortran code from Ruby, that’s the way to go because it's just going to be clean.
00:16:35 The thing is, Ruby DL actually ships out of the box, and it's kind of ugly, but it's there. Like all ugly children, it's still quite lovely in its own strange way. Ruby DL is actually an amazing tool because it allows you to do the one thing that I always envy from C code: play with memory pointers. You can get memory pointers in and out of Ruby through various complicated ways, mostly by writing your own extensions in C and passing back the pointer as an integer or in a string or something.
00:17:06 Then you can muck about with it using Array#pack and String#unpack and all of that nonsense, but Ruby DL lets you get at it directly. It says, 'Here, have a pointer. Oh, and by the way, I'll give you a managed pointer; I'll take care of freeing it for you when garbage collection decides you're finished with it,' which I quite like.
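As a sketch of that managed-pointer idea, here it is with Fiddle, the modern standard-library successor to the 1.8-era DL (the API details differ from DL proper, but the garbage-collected pointer is the same concept):

```ruby
require 'fiddle'

# Allocate 64 bytes of raw memory. Because we pass Fiddle::RUBY_FREE,
# the garbage collector will free() the buffer once the Pointer object dies.
buf = Fiddle::Pointer.malloc(64, Fiddle::RUBY_FREE)

buf[0, 5] = "hello"                         # poke bytes straight into the buffer
puts buf[0, 5]                              # ...and peek them back out
puts "buffer lives at address 0x%x" % buf.to_i
```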
00:17:45 It’s sort of, I suppose, the one thing that I think would be nice about .NET if I could get around everything else involved in learning .NET. Because the interface is common to both Windows and Unix, you can write some very cross-platform code this way, just for doing all of those things that C programmers do all the time interacting with memory buffers, getting straight into the operating system.
00:18:18 I don't recommend doing this on Windows, for the simple reason that it doesn't have stable system call numbers. That came as a bit of a surprise the first time I wrote some code that used them, when I suddenly realized that on Windows Server 2003 it doesn't create a new process. What did you think you were doing, trying to create a new process from scratch? I mean, Unix doesn't have stable system call numbers either, in the sense that the system calls for FreeBSD and the system calls for Mac OS X aren't the same, except occasionally by coincidence.
00:18:59 And it’s quite a pain in the ass when you’re writing a lot of stuff that uses the syscall interface on Unix. If you want to support more than one Unix, you’ve got to keep big tables of all the different syscall numbers that actually map across. Ruby DL just cuts straight through that. You can just load the C runtime, and once you’ve got the C runtime, it’s great—every single C function in the runtime you can call.
00:19:39 On most Unix boxes, that means every single syscall that’s wrapped by C. Suddenly, you’ve got the whole operating system doing what you want. All you've got to do is some very basic pointer math occasionally. The most obvious example of that is if you’re using memory-mapped files: if you map a chunk of memory in, you’re going to be responsible for figuring out where you’re going to put data structures in it, and you’re going to be responsible for munging them and unmunging them.
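A minimal sketch of loading the C runtime and calling into it, again using Fiddle rather than the 1.8-era DL; the library name is a Linux assumption (it's libSystem.dylib on Mac OS X):

```ruby
require 'fiddle'

libc = Fiddle.dlopen('libc.so.6')   # 'libSystem.dylib' on Mac OS X

# Any function the runtime exports is reachable once you describe its signature.
getpid = Fiddle::Function.new(libc['getpid'], [], Fiddle::TYPE_INT)
puts "libc getpid() => #{getpid.call}"

getpagesize = Fiddle::Function.new(libc['getpagesize'], [], Fiddle::TYPE_INT)
puts "page size is #{getpagesize.call} bytes"
```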
00:20:17 But it's just a very liberating experience to know that you can write system-level code without having to go down into C and its constant pointer math. Yeah, you can very easily go from Ruby to memory-mapped files. So if Ruby can get at memory-mapped files, why is everybody using memcached?
00:20:54 I don't know. I mean, I only started looking at using memory-mapped files in Ruby about three months ago, because somebody released the localmemcache extension, written in C, which runs memcached, or at least the memcached protocol, on a shared memory-mapped file. At the time, I thought to myself, I could rewrite this in Ruby. Most of what I'm interested in on a daily basis is how to get Ruby to act more like EventMachine without having to include EventMachine.
00:21:32 A lot of my work is blue-sky research stuff; it’s not the sort of stuff people should use in production servers. However, a lot of it is about getting lots and lots of different network interactivity going with minimal weight. But when I started looking through the code for local memcached, I thought this could actually all just be done from Ruby because the only thing that requires you to use a C extension is that if you do a syscall to do an mmap.
00:22:07 It’s going to give back a pointer, but from Ruby core libraries, there’s no way you can do anything with that pointer. You can't reference into the memory space, but Ruby DL, because it actually gives you direct access to pointers and lets you use them essentially as an IO stream, it would allow you to just map in the memory file.
00:22:44 Suddenly, the whole need to have this extension goes away. It's a C extension, or I think it's actually a C++ extension; I'm not quite sure why somebody would want the C++ runtime overhead on top of everything else. But instead, you can just say, 'Okay, I'm just going to memory map that portion of shared memory in.' Shared memory itself, I mean, it's trivially easy on a Unix box to create shared memory, but it's also trivially easy to do it badly.
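Here's a rough sketch of that memory-mapping idea, again with Fiddle, assuming Linux values for the mmap constants (verify them against sys/mman.h on your own box) and a purely illustrative file path:

```ruby
require 'fiddle'

libc = Fiddle.dlopen('libc.so.6')   # 'libSystem.dylib' on Mac OS X

# void *mmap(void *addr, size_t len, int prot, int flags, int fd, off_t offset);
mmap = Fiddle::Function.new(
  libc['mmap'],
  [Fiddle::TYPE_VOIDP, Fiddle::TYPE_SIZE_T, Fiddle::TYPE_INT,
   Fiddle::TYPE_INT, Fiddle::TYPE_INT, Fiddle::TYPE_LONG],
  Fiddle::TYPE_VOIDP
)

PROT_READ  = 0x1    # values from Linux's sys/mman.h; verify locally
PROT_WRITE = 0x2
MAP_SHARED = 0x01

SIZE = 4096
file = File.open("/tmp/shared.region", File::RDWR | File::CREAT, 0644)  # illustrative path
file.truncate(SIZE)

addr   = mmap.call(nil, SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, file.fileno, 0)
region = Fiddle::Pointer.new(addr.to_i, SIZE)

region[0, 5] = "hello"    # any other process mapping the same file sees this
puts region[0, 5]
```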
00:23:16 There was a disclaimer at the start of this; I always include a disclaimer because 99% of the things I actually get paid to do are things you should never do, or that nobody knows if they should be done at all. It's indicative of a difference in attitude more than anything else. There’s a common attitude when people write Ruby extensions that if they want to get at this low-level functionality, they’ve got to turn to C.
00:23:46 The trouble is that C code, in my experience, is five to ten times more verbose and probably a hundred times more error-prone. I mean, I don't care how good a programmer you are; nobody gets pointer math right every time. I started my career writing avionics, cockpit control systems, and there was always pressure to use efficient tools like C.
00:24:32 I don't mean efficient in runtime sense here; I mean efficient in terms of the time spent developing, and the fact that an awful lot of the kit we actually built was assembler only. It was the only way you could validate everything, and the very first thing you ever do is you effectively write managed memory access so you don’t ever have to do pointer math again.
00:25:11 But there's a common attitude that if you want to get Ruby doing anything outside the norm, you turn to C. In many ways, it's part of the fact that our industry, in general, isn't adjusting very well to multicore. It’s quite fascinating because I dip in and out of the various things that come out of Intel on how to take best advantage of multicore boxes and how to use threading libraries to take advantage of multicore.
00:25:40 The main thing I think every time I read any of this stuff is: how am I not going to get this wrong? How am I going to make it so it works with other processes, and so on? The great thing about an operating system is that it's somebody else's problem to fix that.
00:26:00 There are an awful lot of anal-retentive Linux hackers out there who will spend the hours necessary to make multicore work really nicely. I don't have to do it because by myself, I'm not going to do it well. I’m going to get bored, distracted, work on something more interesting on the project. I’m not going to be able to justify to the person paying for it why we're going to go and do this, which is often a problem in low-level stuff.
00:26:41 People just will not pay for it. They say, 'Oh well, we want it; we just won’t pay for it.' You’ve got to do that on your own time. We’re not adjusting well to multicore, but we don't have to adjust to multicore if we just think in terms of processes and pipelines and multiple pipelines.
00:27:07 The proof that that's a better way to do things is what you find if you ever go and talk to anybody who works in high-performance computing, or to people who design graphics cards. They care about stream processing. They want to push single streams through these things as efficiently as possible. In fact, for the last five to ten years, everyone's been obsessed with unified shaders, unified this and that.
00:27:43 The thing is, the more that you just get pipelines, you load on the front of the pipeline, and what comes out the back of the pipeline is what it’s supposed to be. Those pipelines don’t very often have to communicate with each other, and a lot of work has actually gone into various parts of Unix to make sure that those pipelines can communicate with each other.
00:28:07 The most common example is FIFOs, or named pipes, which basically let you pretend there's a file at a point in the file system, except the data never actually goes via the file system, because that would be hideously inefficient. But you can create something that appears to live at a path that anything can access. No one's going to tell you off if 20 or 30 different processes are all accessing this one pipe.
00:28:35 Whereas if you’ve got a relationship between parent and child, you’d never have more than the two processes going. That allows you to look very differently at scalability problems for one thing. The reason that the high-performance crowd prefer to go in streams and pipelines is that you can scale ad nauseam. You can just put extra pipelines across.
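A minimal sketch of a named pipe shared between two processes; the path is an illustrative choice, and File.mkfifo only arrived in Ruby 2.3 (on older Rubies you'd shell out to mkfifo(1) instead):

```ruby
path = "/tmp/demo.fifo"                        # illustrative path
File.mkfifo(path) unless File.exist?(path)     # Ruby 2.3+; else: system("mkfifo", path)

if Process.fork
  # Parent acts as the writer; any process with permission could do this instead.
  File.open(path, "w") { |fifo| fifo.puts "hello over a named pipe" }
  Process.wait
else
  # Child acts as the reader; a completely unrelated program works just as well.
  File.open(path, "r") { |fifo| puts "reader got: #{fifo.gets}" }
  exit!
end
```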
00:29:10 If you've got 20,000 copies of the same thing that need doing but they are distinct copies, fine; 20,000 instances of a pipeline is completely feasible. It’s the sort of generalization that a lot of the time, especially if you’re working on Rails projects, like I occasionally do.
00:29:46 I'm not quite sure why; I really hate Rails—I've been told off for saying that at several Rails conferences. But I hate it because it doesn't think the way I think. The principle of least surprise for David Heinemeier Hansson is not the principle of least surprise for me. However, if you work on Rails projects, the deadline pressures are often so tight that you can only solve a little problem now and then work on another little problem.
00:30:10 That's why test-driven development works so well for Rails projects: you're biting off little bits of the cherry every single time. Thinking in this more generalized sense about what a process is often doesn't get the commercial time it needs.
00:30:40 But an awful lot of the scalability issues that we run into with large websites can equally be solved by thinking in terms of these distinct pipelines. Some of the earliest experiments I did with messing around with using the fork system call, I was working for a financial company in London at the time. It was my one time working for a financial company, and it didn’t go at all well.
00:31:05 I lasted three and a half weeks, which was long enough for me to realize that A, I really don’t care about lead generation. B, they cared more about lead generation than they did about giving me the time to actually solve problems. And C, I couldn't justify taking that much money off them for basically sitting on my ass.
00:31:45 They wanted to know, 'Well, say we use a kernel-level fork to fork off several hundred Rails processes. What’s going to happen?' The guy who was actually the CTO loved Ruby and Unix. His desk only had two books on it: a copy of Programming Ruby, second edition, and a Unix book that has a lightsaber on the front. He really did believe himself to be a kernel Jedi.
00:32:23 He lived the active lifestyle to suit that. You'd think there'd be quite a benefit to actually using a system-level call for that sort of thing. Because once you look inside Ruby's implementation of fork, you realize that, well, it does a lot of the niceties for you that you might not always want.
00:32:59 The strange thing was, forking 1,000 processes on my 300 MHz FreeBSD test box, the difference between using a vfork which does no copying of process data at all versus actually using Ruby’s fork was a fraction of a second. This is odd because in the Unix world, people tend to say that process creation is very expensive. It's not when you go and look at the Windows world. Process creation in Windows? God.
00:33:37 There's no real sense to that either, I find, because it literally creates a blank process from scratch for you when you create a process over on Windows, but it keeps a lot more meta-state, and it’s quite weird. If there’s not really that much benefit for that sort of thing where you’ve actually got it boiled into Ruby anyway, then obviously you don’t have to go and look at the kernel; you can just use the boilerplate that Ruby gives you.
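If you want to reproduce that sort of measurement on your own box, a rough sketch looks like this; the absolute numbers will obviously vary wildly by machine:

```ruby
require 'benchmark'

elapsed = Benchmark.realtime do
  pids = Array.new(1_000) { Process.fork { exit! } }   # children exit immediately
  pids.each { |pid| Process.wait(pid) }
end

puts "forked and reaped 1,000 Ruby processes in #{'%.2f' % elapsed}s"
```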
00:34:06 So there are a lot of cases where I'd say that the point of playing on a Unix box, 99% of the time, is actually just to stick to pure Ruby, because Ruby loves Unix. It lives in Unix; it's got lots and lots of support for POSIX standards and other meaningless terms like SUSv3 and X/Open.
00:34:49 I was actually going to put in a slide that explains the differences between all of the different standards in Unix, and then I realized I don't know the differences between all of the different standards in Unix. They're just sort of big names; I've been writing boilerplate code for years to work around this and that, and I've been stealing from other people's books.
00:35:26 This is probably a good point to plug someone else's book. If you want to experiment, you can't go wrong with getting yourself a copy of Advanced Unix Programming by Marc Rochkind, because he takes the pain away.
00:36:07 It’s quite funny because quite a few of the examples in there I find my Ruby code tends to end up naturally following a similar shape. But the place where I think Ruby is currently let down is that there is one gem; there is only one gem that gives me absolute must-install fever every time I'm working on a big project, and it's EventMachine.
00:36:41 I was dipping in and out of it for a couple of years on various projects. Last year, well, the tail end of 2007, I was working on an unfortunately canceled social networking site. It was nothing particularly exciting; it was just a London nightlife social networking thing.
00:37:21 The guy who thought it up liked to drink beer, and the guy he worked for didn’t want him to leave the company because he had become a coder. The company made money off of him sitting in a corner with a stack of about six monitors all up there for tracking various odd things. The company made a lot of money, about ten million pounds a year, and got on the Financial Times top 16 startups in the UK.
00:38:00 He really wanted this nightlife site, so I got drafted into work as a tech architect on it. It was kind of an odd job because it involved managing people, and I don’t think I'm ever going to make that mistake again. A lot of what they wanted to do involved having a lot of live chat running behind it. There’s a nice live chat system called Juggernaut that uses EventMachine.
00:39:02 I sort of know the guys from Juggernaut because they’re based in London as well. We've spoken at a couple of European Rails conferences, and we've talked a lot in the speakers lounge. The guy who did the low-level stuff is like 19, which meant he was 16 the first time I saw him presenting. I thought to myself, I’m sure I wasn’t that obnoxious when I was 16 that I could write a better routing mechanism than Ruby's.
00:40:01 Last year, I had a bit of envy talking to them. I had half an hour before I was supposed to give a presentation on doing scalable socket servers and stuff, which would probably all fall down in the real world. They were going to talk about various things they were doing; a lot of what they do involves push technology.
00:40:51 I thought I could do that; EventMachine is so simple to use I could write a push server in half an hour. I could shove three extra slides in full of code; I know I want to do it. My coding partner was like, 'You don’t want to do that.' But once you start playing with EventMachine, it’s really nice. You write six lines of code and suddenly you’ve got this hugely scalable socket server. But I don’t want to have to keep installing the C extension.
00:41:14 I want to just import that straight into Ruby. So far, I’ve done quite a lot of experiments doing that. I don’t really have anything that’s got any real load; it's all artificial and it would not work in the real world, but it’s really quite nice.
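For anyone who hasn't seen it, those half-dozen lines look roughly like this sketch of an EventMachine echo server (the port is arbitrary):

```ruby
require 'rubygems'       # 1.8-era habit; harmless on newer Rubies
require 'eventmachine'

# Each incoming connection gets an EM connection object with this module mixed in.
module Echo
  def receive_data(data)
    send_data(data)      # echo straight back to the client
  end
end

EM.run do
  EM.start_server('0.0.0.0', 9000, Echo)   # arbitrary port for the sketch
  puts 'echo server listening on port 9000'
end
```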
00:41:48 If there’s a single point to this and to my flying 3,000 miles, getting very drunk, meeting some very nice people, and hopefully sleeping in a while, it’s that we have the opportunity to do a lot more than we do. The only thing we can break in playing with it and figuring out if we can do it is our own boxes.
00:42:13 Actually, apart from the famous Vic-20 bug that used to make Vic-20s go bang, it's almost impossible to literally destroy your machine by poking the wrong memory location with something. All you're going to do is get a kernel crash; you're going to get a reboot.
00:42:43 We should apply the same dynamic experimental attitude to the other areas of commercial computing that we’re applying to web applications. If a single person here decides they agree with me, that more than justifies the jet lag. I think the hangover justifies itself.
00:43:26 So, if there's anyone who's got any questions, I would be quite happy to answer them. I'm really bad at answering questions, but I quite like it. So, well, that’s either stunned silence because you’re all thinking, 'God, get out of the country,' or it's stunned silence because, quite frankly, what? Yes, I'll have this up on SlideShare later.
00:43:50 As soon as I figure out how to actually get onto the network—which probably requires the help of someone who can see. It’s going to get updated later this summer; it will have new stuff in it that will be more expansive. There are areas I've not covered; I’ve not properly covered signal handling, I've not really covered kernel-level events.
00:44:24 I’ve only sort of glossed over the concept of shared memory—all of that’s going to get fleshed out. If I find the time, I might even try and write this up as a proper how-to guide. In the meantime, there are some resources on the slide before this one.
00:44:55 I recommend going and reading Beej's Guide to Unix IPC and Beej's Guide to Network Programming. They're the two best things I've ever found online for getting you past the early learning curve. And apart from that, have fun!