Summarized using AI

Explicit Tests Tell a Better Tale

by Tom Ridge

In the talk titled "Explicit Tests Tell a Better Tale," presented by Tom Ridge at RubyConf AU 2016, the focus is on improving the clarity and communicative power of tests in software development, particularly within the Ruby programming language. Ridge emphasizes the importance of writing explicit tests that provide meaningful feedback about the application under test and help steer the developer towards effective refactoring.

Key Points Discussed:
- Importance of Test Clarity: Ridge argues that tests should clearly communicate the intention of the code to developers, akin to telling a good story. He stresses that unclear tests can lead to confusion and misinterpretation.
- Testing Anti-Patterns: The talk highlights several common issues in testing, which he refers to as "testing smells" or anti-patterns, particularly focusing on obscure tests that fail to document their logic well.
- Overuse of Domain-Specific Language (DSL): Ridge discusses how overusing DSL, such as 'let' and 'subject' in testing frameworks, can obscure the context and make tests harder to read. He advocates for more straightforward and explicit code even at the potential expense of some abstractions.
- Examples of Poor Test Documentation: Ridge provides examples comparing poor tests with good ones, illustrating how clarity in method names and logical flow can make tests more understandable.
- Use of Test Data and Fixtures: He points out the pitfalls of relying on test data setups that are not clearly linked to the test's intention. Instead, he suggests leaning on factories, or better yet plain objects with minimal data, rather than distant fixtures, so the context stays clear and relevant.
- Explicit Setup and Variables: He encourages developers to maintain explicit setups within tests, using intention-revealing variable names to alleviate cognitive load and enhance documentation.

Significant Examples:
- Ridge shares practical scenarios where tests enhance or obscure communication. He contrasts tests that provide adequate documentation and context with those that leave room for ambiguity, reinforcing the need for explicitness.
- Discussions on the application of factories instead of fixtures underscore his message about improving clarity in tests.

Conclusions and Takeaways:
- Developers are encouraged to embrace the mantra of being explicit in their tests.
- Good tests should serve as a guide for refactoring, providing insight into how code dependencies interact and revealing the logical flow of methods.
- Overall, simpler and clearer tests result in better feedback and a more streamlined development process.

00:00:00.680 My name is Tom Ridge and I'm from Two Red Kites in Brisbane. 2015 was a pretty awesome year for me; I got selected for my first ever conference talk. Naturally, you would have thought I would have practiced. A Star Wars movie came out, and it didn't suck. Most importantly, I became a new dad for the first time to these two beautiful little ladies.
00:00:11.160 That is exactly the reaction I was looking for, so I'm really glad you came out with that. That's Chloe on the left and Ally on the right. You might see me running after them around the conference. If you see some white stuff on my shirt, I can assure you that's Ally; she's just thrown up on me. It's okay. So, naturally, overnight, I've become a bit of an expert in telling stories and singing 'Incy Wincy Spider.' But it's the stories that have got me thinking about our tests.
00:00:31.439 Particularly, I came into Ruby from PHP and Flash, and all I could hear was 'red, green, refactor,' test-driven development, and an emphasis on those tests to drive forward our communication about what the system under test was doing and about what our code was doing. So if we place so much importance and emphasis on a test's ability to communicate with us, why do I see so many tests that look like this? Now, I showed this test to my daughter Ally, and this was her reaction. You might argue that's because she's only six months old and she doesn't understand what code is, but I like to think it's because she was confused about this test. This test doesn't actually describe itself very well, it doesn't list important feedback about the system under test, and it doesn't present very well. It communicates poorly.
00:01:22.200 So, how can you guide yourself towards that refactoring if you don't understand the story being told? Today, I want to help you write tests that tell a better tale. Tests that make way more sense than an episode of 'Home and Away.' Tests that communicate your code's intent not only to you but to your co-workers and to developers who are onboarding. I want to help you write tests that give you better feedback, particularly if you're new, so you can start to find those rough edges and patches of friction to address during your TDD process.
00:02:03.880 I can't do this every single time I see your tests, and I'm a dad with a short attention span. So the short version, the too-long-didn't-read of this talk, is: be more explicit. Now, I don't mean swear more; I have two young babies and would appreciate it if you didn't. I mean be more explicit in your tests. Provide more context, provide more meaning. Because explicit tests tell a better tale. So today, we're going to look at some testing anti-patterns, and to do that, we're going to examine some examples. That's not to say these examples are bad tests. They're out there powering applications; they've gone through that TDD cycle and are producing good quality code nine times out of ten. The tenth time is usually me.
00:02:40.320 But they don't communicate well, and that's what I'd like to address. All of these can be framed by the anti-pattern of obscure tests. Obscure tests are, by definition, tests that hide details about the system under test from the developer and that don't reveal the intention of the code very well. The first one we'll examine is the overuse of domain-specific language. Just a quick show of hands: who here uses RSpec? Okay, so to the two people using MiniTest in this room, I apologize; this is going to be an RSpec-centric talk.
00:03:05.400 Here's an example where overuse of the domain-specific language doesn't provide much clarity. RSpec is really great; it gives us all these nice tools like lets and subjects, but we can have this tendency to overuse them. It's like we have this weird desire to DRY up our test code, forgetting that test code is itself untested code. In this example, our subject references a let and before blocks, and we're reaching into a third-party gem's API, all for one test. I'd suggest a much clearer approach that provides better context and better readability, so you won't need to navigate around the file to understand how my dependencies and structure come together.
In this case, I've provided a little more clarity at the cost of not using those lets. There are definitely reasons you'd want to use a let, but I'd argue you should consider whether reaching for that abstraction is worth the loss of context. There was a discussion yesterday about complicated shared setups between tests. As we touched on before, consider whether, instead of abstracting all that shared complicated context away, you can just use plain old Ruby and provide an intention-revealing method name, being explicit about the data that setup requires.
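The contrast Ridge describes might look something like the following sketch. The class, method, and field names here are invented for illustration, and the let-based version is shown in comments so the file runs as plain Ruby without the rspec gem:

```ruby
# The "let"-heavy version scatters the setup around the file (RSpec shown
# only in comments; EmailRenderer and its fields are invented):
#
#   subject(:renderer) { described_class.new(event) }
#   let(:event)        { { starts_at: start_time } }
#   let(:start_time)   { Time.utc(2016, 2, 4, 9, 0) }
#
#   it "renders the start time" do
#     expect(renderer.header).to include("09:00")
#   end
#
# The explicit version tells the whole story inside the example:
EmailRenderer = Struct.new(:event) do
  def header
    "Starts at #{event[:starts_at].strftime('%H:%M')}"
  end
end

# Intention-revealing, inline setup: no navigating to distant lets.
start_time = Time.utc(2016, 2, 4, 9, 0)
event      = { starts_at: start_time }
renderer   = EmailRenderer.new(event)

puts renderer.header  # => Starts at 09:00
```

The explicit version is a few lines longer, but every dependency the example needs is visible at the point of use.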
So, be more explicit. Rely less on your domain-specific language in your tests. Try to reduce the number of lets, befores, and subjects you reach for if the context of that test is important to you. The next point we're going to look at is tests that fail at documentation. Good tests document well. Raise your hand if you've run this in the last week. Cool! I don't have money for it, but I'd buy you a beer if I did. I would encourage you to run this more often, because I ran it on a project recently, and this was the output for one particular method. If anyone can tell me what's going on in it, I'd also buy you a beer with my imaginary money.
00:05:01.280 The trouble with this is not only does it fail at documentation, but you can also see that there's not a lot of effective use of contexts to describe what's going on. I couldn't tell you what the logical branches in here are. Chloe couldn't either. The only accurate feedback we can get from this is that our method is probably doing a little too much. I want you to compare this with another example. In this particular example, our contexts are much clearer. We can clearly identify the logical branches in our method, and identify our return statements and what they might return.
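One way to get that branch-per-context documentation is sketched below with the stdlib Minitest spec DSL, so it runs without gems; in RSpec the shape is the same with `describe`/`context`. The Flight class and its two branches are invented for illustration:

```ruby
require "minitest/autorun"

# Invented example: a method with two clear logical branches.
Flight = Struct.new(:departs_at, :now) do
  def status
    departs_at > now ? :scheduled : :departed
  end
end

describe Flight do
  describe "#status" do
    # One context per logical branch: the documentation output then reads
    # like a table of the method's behaviour and return values.
    describe "when the departure time is in the future" do
      it "returns :scheduled" do
        flight = Flight.new(Time.utc(2016, 2, 5), Time.utc(2016, 2, 4))
        _(flight.status).must_equal :scheduled
      end
    end

    describe "when the departure time has passed" do
      it "returns :departed" do
        flight = Flight.new(Time.utc(2016, 2, 3), Time.utc(2016, 2, 4))
          _(flight.status).must_equal :departed
      end
    end
  end
end
```

When the context names mirror the branches, a failing test's description alone tells you which path broke.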
00:05:49.879 So, be more explicit. Be careful about how your tests do or don't present themselves as documentation. Keep revisiting that as you're refactoring your code to ensure it still holds up. One of the biggest contributors to obscure tests is test data that lacks context. Here we have a test where we instantiate a flight and expect the flight's carrier to be Qantas. I showed this once again to Ally, and I thought, 'That's a pretty good test.' The trouble, however, is that it's very hard to tell what the actual value of the number we pass in has to do with the return value of Qantas.
00:06:52.120 So, we provide additional context by pulling that value out into a variable, giving at least a bit more insight into the relationship between the carrier number and the flight's carrier. In regular Ruby, we'd call this the 'introduce explaining variable' refactoring. It works particularly well in a test context because you don't want to refactor any further than that. So, be more explicit. Give your data some meaningful context. Use intention-revealing variable names. This ties into a bigger family of obscure tests, and the Mystery Guest is one of the other biggest contributors to testing anti-patterns.
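A hypothetical reconstruction of that Qantas example (the real code isn't shown in the transcript, so `Flight`, `carrier`, and the magic number are invented here):

```ruby
# Invented stand-in for the code under test.
Flight = Struct.new(:carrier_number) do
  def carrier
    case carrier_number
    when 81 then "Qantas"
    else "Unknown"
    end
  end
end

# Obscure: expect(Flight.new(81).carrier).to eq "Qantas"  -- why 81?
#
# Explicit: the variable name explains the magic number, so the link
# between the input and the expected carrier is visible in the test.
qantas_carrier_number = 81
flight = Flight.new(qantas_carrier_number)

puts flight.carrier  # => Qantas
```

The extraction stops there on purpose: in a test, one well-named local variable is usually all the abstraction you want.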
00:07:53.519 In the Rails world, fixtures are the biggest contributor to obscure tests. They're implicit data setups placed far away from your test. In this particular example, it's not immediately clear whether we actually care that the task is assigned to Joe, that it has a sensitive activity, or that it was created by Frank. We have all these extra data points that are potentially irrelevant to our test, introducing fragility into our test suite. We can improve this by at least assigning the fixture to a temporary variable to give more context. But we can improve it again by using factories instead of fixtures.
00:08:50.760 Now we have a bit more clarity about the relationships happening inside the test. By taking this a step further and realizing that it's only the value of activity and task that we care about, we can further reduce the data required for the test. I've called back to this example because this RSpec DSL puzzle is kind of a 'choose your own adventure' of mystery. I have to navigate around and figure out that `described_class` actually refers to the CalEvent, and that we're passing in the event set up by the let. I barely get enough sleep; I don't have the mental load to figure that out.
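A plain-Ruby sketch of that "only the data you care about" idea, using a builder method with defaults. `Task` and its attributes are invented for illustration; FactoryBot provides the same pattern for Rails models:

```ruby
# Invented model standing in for an ActiveRecord Task.
Task = Struct.new(:activity, :assignee, :created_by, keyword_init: true)

# A minimal "factory": defaults keep irrelevant data out of the test body,
# and each test overrides only the attribute it actually exercises.
def build_task(overrides = {})
  defaults = { activity: "filing", assignee: "Joe", created_by: "Frank" }
  Task.new(**defaults.merge(overrides))
end

# Only the attribute under test appears at the call site:
task = build_task(activity: "sensitive")

puts task.activity  # => sensitive
puts task.assignee  # => Joe
```

Unlike a fixture file, the override sits right next to the assertion, so the link between setup and expectation is explicit.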
00:09:56.080 So yeah, this talk is basically a statement saying, 'Please, for my sake at this point in my life, improve your tests.' Please be more explicit. This concept of fresh fixtures, pulling the data specific to your test inside the test itself, is crucial. Wherever possible, pull that stuff in and inline that setup. Rely less on fixtures and more on factories, or better yet, plain old objects with as little data as possible. Implicit setup is another point of contention, and that's been a recurring theme today.
00:10:50.560 Remember this example? Let's improve it so that our context is considerably clearer. In this case, our return statement is much clearer: we can understand how our event comes to be, and we can finally start to infer how we get that end result rather than it being obscured. So, be more explicit. Use your tests to tell a story. Pull that setup inside; make sure your data is clear. I don't want to have to travel around the file to figure out what's going on, and I don't want to trek through your test suite every single time I look at it.
00:11:44.440 So start to love your tests and inline your test setup. Pull everything that's specific to your test inside it. Because as soon as you start down that chain out of a desire to clean things up, you more often than not find yourself creating specific overrides for those lets inside your tests, further complicating matters and increasing the cognitive load for the developer. Use intention-revealing variable names. Explaining variables are great; the refactoring works well anywhere, but in tests particularly it provides meaningful context where often there isn't any. Use your context blocks as a great way to bring clarity and improve documentation.
00:12:31.480 Run RSpec with the documentation format turned on (`--format documentation`) just to see what it looks like before you submit your pull requests. This is one of our earlier examples, showing how it reads when we run it with documentation output. We can see here that our email renderer class is quite clear about how it generates our routes. We've used a temporary variable with a route array to provide context about how we might generate that return result.
00:13:34.800 These tests have better feedback because they give you more information at hand than just a red or green dot. They inform you about your dependencies, especially if you're new and have those instincts for refactoring. They're providing solid data points to help you progress forward. The story being told is easier to read; you don't have to think as much, and you won't have to search all around for context. You've got a beginning, middle, and end. So you can start refactoring with confidence.
00:14:38.239 Be more explicit, because the best kind of stories are the ones that make you think and challenge the status quo. As developers, that's what we want to do all the time: start thinking about the code that's underneath the tests and consider our dependencies.
00:14:54.920 Thank you.