MountainWest RubyConf 2012

Summarized using AI

MiniTest: Write Awesome Tests

Michael J. I. Jackson • August 16, 2012 • Earth

In the video titled "MiniTest: Write Awesome Tests," presented at MountainWest RubyConf 2012, Michael J. I. Jackson discusses the importance of testing in software development, particularly with the MiniTest framework. Jackson argues that good tests are crucial for a developer's confidence in their code, and he addresses the overhead that commonly discourages effective testing.

Key points covered in the presentation include:

- The Value of Testing: Jackson initiates the talk with a discussion on public perceptions surrounding code testing, referencing a Hacker News poll that outlined various attitudes towards testing. While some developers reject testing entirely, Jackson highlights a significant portion who acknowledge the need but struggle with the associated overhead.
- Types of Overhead in Testing: He identifies four categories of overhead that deter developers from writing tests:

1. Setting up a testing environment (Rake tasks, build servers, and so on).
2. Writing tests, which may require refactoring existing code for better testability.
3. Running tests continuously as the code is developed.
4. Maintaining and refactoring tests as the code evolves.

- The Importance of Simplicity: Jackson advocates for simple testing methodologies, asserting that developers should feel proud of their code. He argues that keeping tests uncomplicated is how developers can manage growing project complexity.
- Introduction to MiniTest: MiniTest is introduced as a flexible and straightforward tool for Ruby developers. It supports traditional unit tests as well as specification-style testing, making it versatile for different coding preferences.
- Testing Methodologies: Jackson demonstrates MiniTest in practice, including its ability to run tests in random order to catch order-dependent bugs and its clear output during test runs. His examples include both unit-style and spec-style tests, and he highlights how little setup is needed beyond a couple of Rake tasks.
- Advanced Testing Considerations: He discusses practical examples of testing situations, such as stubbing methods to avoid unnecessary network calls, emphasizing the necessity for minimal complexity in testing frameworks.
- Adopting Testing Culture: In response to an audience question about integrating testing in organizations resistant to simple approaches, Jackson underscores the importance of demonstrating value through functioning code and tests, suggesting that a practical approach can often win over complex discussions.

In conclusion, Jackson encourages developers to embrace testing, highlighting the benefits of MiniTest as an effective tool to facilitate great testing practices. The mantra is clear: great tests lead to greater confidence and ultimately better code.


Help us caption & translate this video!

http://amara.org/v/FGiN/

MountainWest RubyConf 2012

00:00:14.809 Awesome! So, looks like we're ready to go. Hi, my name is Michael Jackson. I'm a California native and have lived in Utah for a few years. I'm back in California again, working for Twitter. How many people know what Twitter is? Right, a few of you—that's good! How many people use Twitter? How many people are using Twitter right now? Awesome! Yeah, it feels really good to work for Twitter because obviously, you get to work on something that people use every day. So, that's a lot of fun. I just wanted to say, first of all, a big thanks to them for sponsoring me to come out here. We are looking for Ruby enthusiasts—if the Bay Area appeals to you, come and talk to me.
00:01:04.229 A couple of days ago, I saw this question come up on Hacker News. Someone asked whether people test their code. It's kind of an ambiguous question since there are many different levels of testing. So, he broke it down and created a poll with three different levels: 1. Yes, I test my code, and I'm pretty comfortable with my tests. 2. We write some tests, but we probably should be writing more. 3. Man, who needs tests? That's the cowboy coder type. So, what do you think the results were? About nine percent of the respondents said, 'Forget tests, we don't care about them.' Twenty-three percent, almost one in four, said, 'Yeah, we've got good tests; we really like our tests.' Then there's this massive group that said, 'Yeah, we'd like to do more testing, but you know, it's just too much overhead.' So I thought, 'Hmm, overhead. What are they talking about?'
00:02:08.459 What does overhead mean when it comes to testing? Think about it for a second. Are you in that group that makes up sixty-nine percent? What is the overhead that prevents you from writing as many tests as you would like? Chances are, there are a few people in this room who fit into that category. As I thought about it, I could identify four different kinds of overhead. First, there's the initial overhead of saying, 'Okay, guys, let's test. We're going to bite the bullet, take a couple of hours, and figure this out. We'll set up a testing environment and maybe a Rake task to run our tests. If we're fancy, we'll have a build server that runs the builds, and if we're really fancy, we'll program a traffic light over here to glow red when the build fails.' You can go to all sorts of levels of complexity there. The second kind is writing your tests. Sometimes this requires you to go back through your code and change it, because you learn things when you try to test your code. You're like, 'I don't know how to get in there because it's kind of a black box,' and that's a red flag telling you your code needs to be refactored so you can actually test it. The third kind is running your tests. This isn't something that happens just once; the second and third steps happen continuously. You write some code, write some tests, run the tests, and hopefully run them before you do a commit. Maybe you run them automatically in a pre-commit hook. And the fourth kind is maintaining your tests: as your code changes, you refactor the tests too. So there is this ongoing overhead.
00:03:40.340 There's this massive group of people that say this overhead is too much for them. This leads me to a Dijkstra quote: 'I hope that ten years after I'm dead, people will say,
00:04:52.699 You are human, and you have this brain, right? Dave Brady knows a lot more about the brain than I do, but it has a limited capacity to think about things. As the complexity of your project grows, your ability to hold that complexity in your head and understand it diminishes. Unless you're just super smart—I'm sure there are a few of you here, and I hate you for it.
00:06:23.910 We should be writing awesome tests. We should feel good about our tests. One of the coolest things I've learned by attending Ruby meetups and coming to MountainWest RubyConf every year is that your code is a manifestation of you. You need to feel good about your code. If there's ever a disparity there, if you don't like it, it's not going to be good code. I call awesome tests simple tests because, unlike Gary, I am a simple man. I only write a little bit of code, and at the end of the day, I can only hold so much complexity in my head. I need everything to be as simple as possible.
00:08:36.100 The cool thing about MiniTest is that it supports a lot of different styles of testing. It's a drop-in replacement for Test::Unit, so you can write your unit tests and they will look just like your Test::Unit tests. For example, in a web app I can say 'def test_something' and then use the Rack::Test syntax: I say 'get "/"' and assert that the last response was OK. You can also write spec-style tests. You describe something: 'I'm describing this app,' or 'I'm describing a request to /, and it should be an OK response.' Then you've got this nice chained expectation: 'last_response.status.must_equal 200.' That's useful for everyone who gets confused with the order of arguments in assertions; it clears that up a lot.
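As a rough illustration of the two styles he describes (not code from the talk), here is a self-contained sketch using a throwaway Sinatra app and minitest 4-era class names; the App class and its route are assumptions:

```ruby
require 'minitest/autorun'
require 'rack/test'
require 'sinatra/base'

# Throwaway app so the example is self-contained; in the talk the app
# lives in its own file.
class App < Sinatra::Base
  get('/') { 'Welcome home.' }
end

# Classic Test::Unit-style test case.
class TestApp < MiniTest::Unit::TestCase
  include Rack::Test::Methods

  def app
    App
  end

  def test_root_is_ok
    get '/'
    assert last_response.ok?
  end
end

# The same check, spec style.
describe App do
  include Rack::Test::Methods

  def app
    App
  end

  it 'responds to GET / with a 200' do
    get '/'
    last_response.status.must_equal 200
  end
end
```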
00:10:09.410 By default, MiniTest will seed the random number generator in Ruby with some value and then use that seed to determine the order in which it runs your tests. If you're running your tests with MiniTest one day and something weird happens, like a test blows up that has never failed before, and then you run the suite again and it passes, you probably have an order-dependent test. You can take the seed that was printed and replay it to reproduce that specific failure and, hopefully, eliminate it. Those are the worst kinds of bugs! So anyway, when you run your tests, you see a dot for each test that passed, and you can see how much time the run took and how many tests and assertions ran per second. That's kind of cool! Then at the end you get a summary: you ran this many tests, made this many assertions, and skipped this many. Skipping is also great; you can skip tests that are failing right now.
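To make the order-dependence point concrete, here is a contrived example (not from the talk) plus the kind of command line you might use to replay a printed seed; the file name and seed value are made up:

```ruby
require 'minitest/autorun'

# Two tests that only pass in one order: the second depends on global
# state mutated by the first, so random ordering will eventually expose it.
class TestCounter < MiniTest::Unit::TestCase
  def test_increments_count
    $count = 1
    assert_equal 1, $count
  end

  def test_reads_count
    # Fails whenever it happens to run before test_increments_count.
    assert_equal 1, $count
  end
end

# Replaying the seed MiniTest printed, to reproduce the failing order:
#   ruby test/counter_test.rb --seed 31337
#   rake test TESTOPTS="--seed 31337"
```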
00:12:27.000 You might say, 'I want to skip this one for now.' You can also get more verbose output, so you can see how long each of your tests took, when you run with the right option. You can even do cool benchmarking things. This might not come up much in my code, but sometimes people are doing academic research and need to make sure that a certain piece of code always fits a specific performance profile.
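For reference, a hedged sketch of what skipping and benchmarking can look like; the class names, method names, and threshold are illustrative, and the benchmark helpers come from minitest/benchmark:

```ruby
require 'minitest/autorun'
require 'minitest/benchmark'

class TestParser < MiniTest::Unit::TestCase
  def test_handles_new_format
    skip 'waiting on the new parser'  # reported as a skip, not a failure
  end
end

class BenchScan < MiniTest::Unit::TestCase
  # Asserts that the block's running time grows roughly linearly with n.
  def bench_linear_scan
    assert_performance_linear 0.99 do |n|
      (0...n).each { |i| i + 1 }
    end
  end
end

# Verbose output with per-test timings:
#   ruby test/parser_test.rb -v
```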
00:13:43.200 So, here’s what my Rakefile looks like. I put this code out on GitHub so that anyone looking to get started with this can easily jump in. This app repository is just a little web app. I have two types of tests: specs and unit tests. I wanted to show both styles. In either case, my test runner is Rake. Does everybody understand what Rake is? I have these tasks where I can say, 'run some tests.' For example, if I were at the command line, I could say something like 'rake test' or 'rake spec' to run these tests.
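A Rakefile along these lines might look like the following minimal sketch; the directory layout (test/ and spec/) is an assumption, not necessarily the layout of his repository:

```ruby
require 'rake/testtask'

# `rake test` runs the unit tests.
Rake::TestTask.new(:test) do |t|
  t.libs << 'test'
  t.pattern = 'test/**/*_test.rb'
end

# `rake spec` runs the spec-style tests; they are still plain MiniTest files.
Rake::TestTask.new(:spec) do |t|
  t.libs << 'spec'
  t.pattern = 'spec/**/*_spec.rb'
end

task :default => [:test, :spec]
```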
00:15:38.500 It's literally that easy to get up and running with MiniTest in your web app. By the way, let's check out the app itself; it just says, 'Welcome home.' Kind of nice! Now, instead of writing a 'def test_...' method, let's describe this.
00:17:50.730 MiniTest will save me here; it figures out which spec class to use for that describe block. I've got this app class, and it subclasses Sinatra::Base, so we should use the Sinatra spec. Sure enough, it checks that the class subclasses Sinatra::Base and picks that spec class for our app. That's another cool feature of MiniTest: you can register your own custom spec classes for the various types of things you want to describe. In that spec class we're also including the Rack::Test methods for all that good 'get' and 'post' functionality.
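A sketch of what registering such a spec class can look like; the SinatraSpec name and the stand-in App class are assumptions for illustration, not code from his repository:

```ruby
require 'minitest/autorun'
require 'rack/test'
require 'sinatra/base'

# Stand-in app; in a real project this would be the app under test.
class App < Sinatra::Base
  get('/') { 'Welcome home.' }
end

# A spec class that mixes in Rack::Test so specs get `get`, `post`,
# `last_response`, and friends.
class SinatraSpec < MiniTest::Spec
  include Rack::Test::Methods

  # Rack::Test needs an `app` method; here it returns the stand-in App.
  def app
    App
  end
end

# Route any describe block whose subject is a Sinatra app to SinatraSpec
# instead of the default MiniTest::Spec.
MiniTest::Spec.register_spec_type(SinatraSpec) do |desc|
  desc.is_a?(Class) && desc < Sinatra::Base
end

describe App do
  it 'responds to GET / with a 200' do
    get '/'
    last_response.status.must_equal 200
  end
end
```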
00:19:45.150 I want to go to a more advanced example. I have this Fetcher thing. What does a Fetcher do? It goes and fetches URLs. It's got a complex method called 'fetch.' I'm just going to open this up so we can look at it.
00:20:47.700 So, let’s make a stub. Some of you might think, 'I know what you need—you need FakeWeb!' No! I don’t actually need that. Can I just define a stub, so I can test it locally? I can change my before block so that the Fetcher is a new instance. I can stub it out right here and use Curl, thus ignoring the complex threading logic. But that’s still not good enough! I’m still making a network request every time I run that test.
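As a rough sketch of the stubbing idea (the Fetcher class here is a stand-in, and this assumes a MiniTest version that ships Object#stub via minitest/mock):

```ruby
require 'minitest/autorun'
require 'minitest/mock'

# Stand-in for a class whose #fetch would normally hit the network.
class Fetcher
  def fetch(url)
    raise 'would make a real HTTP request'
  end
end

describe Fetcher do
  before do
    @fetcher = Fetcher.new
  end

  it 'works with a canned response instead of the network' do
    canned = '{"status":"ok"}'

    # Inside the block, #fetch on this instance returns the canned body.
    @fetcher.stub(:fetch, canned) do
      @fetcher.fetch('http://example.com/').must_include 'ok'
    end
  end
end
```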
00:21:53.450 And I can put that in my repo. I'll happily run those tests on the train, even though this thing supposedly makes network requests. The point is, you don't need the complexity of big testing frameworks. You need one way to make assertions and one way to get some readable output, something that you like! Then it's up to you to build your tests. If you ever find yourself in a situation where you think, 'I don't even know what HTTP calls I'm making; I need to use FakeWeb because something might hit the network and I don't even know how to stub that out,' then I would suggest you've got bigger problems!
00:23:44.140 Are there any questions? Yes? Audience member: How do you go about adopting testing in an organization that prefers complex testing stacks? Michael: Honestly, I can't say I've been very successful with that. The only thing I've found that works is writing code. If you're in an organization where code speaks louder than whiteboard discussions, you might be lucky. You can write code, walk into a room, and say, 'Look, I wrote some unit tests, and your stuff is broken! You know that commit you made last night while drinking?'
00:29:40.550 Well, I have the unit tests that are failing. Can we fix this? You can put up a CI server, a build server. And if your company says, 'Well, we're not going to provision that,' you can say, 'I don't need you to! I can run this on Travis CI!' You run those tests and let your colleagues know when the build is broken. I think the developers in this room probably have different opinions, but my opinion is that writing code should win in an organization.