Workshop: Taming Chaotic Specs: RSpec Design Patterns

Adam Cuppy • June 09, 2016 • Earth

Summarized using AI

In the workshop titled "Taming Chaotic Specs: RSpec Design Patterns," Adam Cuppy discusses strategies to improve the efficiency and clarity of RSpec testing, focusing on refactoring complex and bloated tests into manageable formats. The workshop addresses the common issue where test suites become cumbersome and difficult to comprehend, leading to increased time spent on testing without clear benefits.

Key Points Discussed:
- Introduction to RSpec: Cuppy explains RSpec’s focus on behavior-driven development (BDD) and how it allows for natural language-like descriptions of the application's functionality.
- Problems with Current Test Practices: He highlights that many organizations treat test code as a second-class citizen, resulting in tests that are often cumbersome, repetitive, and hard to understand. For instance, he describes a user model that had a corresponding test suite of 9,000 lines, revealing inefficiencies and repetition within their testing.
- Design Patterns for Tests:
  - Minimum Valid Object (MVO): Cuppy introduces the MVO pattern, which involves establishing a valid object, making one specific change, and asserting the expected invalidity based on that mutation.
  - Permutation Tables: This method reduces redundancy by defining the full set of test cases in one place, allowing multiple variations to be represented without excessive repetition.
  - Golden Master Testing: This method aids in visually confirming expectations for complicated outputs, particularly with legacy code or complex structures like JSON, where understanding and verifying output helps maintain clarity.
- Best Practices: Cuppy emphasizes using descriptive naming for tests, extracting common expectations to avoid redundancy, and choosing flexible factory patterns over hard-coded fixtures to streamline test creation and execution.

Cuppy concludes the workshop by encouraging developers to embrace patterns and practices that reduce cognitive load, enhance clarity, and lead to more maintainable tests. By adopting these techniques, developers can significantly decrease the lines of code in their specs and improve their overall productivity in testing.


Don’t you hate when testing takes 3x as long because your specs are hard to understand? Following a few simple patterns, you can easily take a bloated spec and make it DRY and simple to extend. We will take a bloated sample spec and refactor it to something manageable, readable and concise.

Help us caption & translate this video!

http://amara.org/v/LewZ/

Rails Pacific 2016

00:00:23.750 You guys look great, all of you. Okay, so I've got 139 slides and approximately 45 minutes to do this, and this is going to be an adventure for all of us.
00:00:30.599 If you have any questions at the end, I won't have time, so come find me afterward and I will gladly answer any questions that you might have.
00:00:41.760 My name is Adam Cuppy. I am from Taiwan... No, I'm from the United States; from San Diego, California, specifically. I founded a consultancy where we build web and mobile applications.
00:00:57.570 In this talk, our spec design patterns came from dealing with a lot of different organizations. When we came into the codebase and we would run the test suite, we found that they were really large, really cumbersome, and really problematic. So this talk is specifically about that.
00:01:09.750 Now before we get too far, you can find me on the interwebs: I'm on GitHub as acuppy, and you can find me on Twitter as Adam Cuppy. Right now, everyone who has a Twitter account, I would like you to tweet at me and tell me how amazing I'm doing.
00:01:29.400 The slides are not up there yet, but they will be on Speaker Deck directly following this, so you'll be able to get all the information. Of course, it's on Confreaks — a big shout out to Confreaks, by the way! Can we hear it for Confreaks? Yes, big win! You can get tons of videos there, and you can see all of these ramblings later.
00:01:58.140 Okay, so RSpec is really what we're talking about. For those of you who don't know, it started in 2005 as an experiment. RSpec specifically focuses on behavior-driven development. This is how we describe the functions of the application.
00:02:10.530 RSpec has what's known as a declarative DSL. In other words, as I define the function of something, I am going to leave the implementation within the scope of the application itself. Being declarative, I'm going to express or explain what specifically is happening inside of the app.
00:02:30.290 Now, here's the problem: oftentimes our test suite becomes a second-class citizen. What I mean by that is that when we build our applications, the application code we write often focuses mostly on what the user is going to see or experience.
00:02:56.310 The test code, however, generally comes after the fact for most organizations, and similar to that, the test code often becomes really cumbersome. We want to ensure there's good test coverage, but when it comes to the actual performance of our application, it rarely equates to either good or bad performance directly.
00:03:13.380 So again, our test suite becomes a second-class citizen. It's not the primary focus of our other efforts, and it becomes tough really fast. How many of you write tests for the code that you write? Raise your hand if you write tests for your code.
00:03:30.840 Now keep your hand up if you're lying right now. That's interesting; normally when I ask that question, most people raise their hands because their organizations require tests for the code they write.
00:04:08.760 Oftentimes, the tests are really hard to understand. For example, one of the companies we worked with had a user model of 6,000 lines of code, and a test for that same model of 9,000-plus lines of code. When we did an analysis to determine how often they were repeating themselves, we found they were testing the same method almost a dozen times in effectively the same way.
00:04:41.860 Because the test was so large, parsing that and determining what inside of that was already being tested was very, very tough. This became a very big problem they had to solve.
00:05:06.030 So what do you do when your test is that large, cumbersome, and problematic? That's really the focus of this talk. This talk is called 'Taming Chaotic Specs' and more specifically, RSpec design patterns. What this is not about is what to test. There are some really great resources on what you should be testing in your test suite.
00:05:27.820 This is more specifically about patterns and practices that you can follow: suggestions for structuring the tests themselves. So what makes a design pattern valuable? First and foremost, it communicates expectation.
00:05:45.260 If I follow the pattern as written, I should understand what is being communicated. Similarly, if I'm looking at a test and recognize the pattern, I can probably guess what the rest of the test does. A pattern also encourages consistency.
00:06:09.669 This is hugely valuable, especially when it comes to a 9,000 line spec. When there's consistency amongst the codebase, I have a strong sense of what patterns are being used and what to expect and where to look to find and parse things.
00:06:34.120 The last benefit is that it reduces mental load. How many of you have looked at a test suite and immediately wanted to flip a desk? Absolutely! We've all wanted to flip a desk at some point. Reducing the mental load is a big thing, and we want to do this as much as possible.
00:07:03.670 I'm going to start with our first pattern: the Minimum Valid Object (MVO). If you're taking notes, now is a good time, because I'm going to walk you through this pretty quickly.
00:07:17.380 The MVO workflow looks like this: First, you start with a valid object. Inside Rails, there is the concept of a valid model, but a valid object, whether in Rails or just plain Ruby, is whatever you define inside the domain. For example, a valid object has a username and may have these attributes assigned.
00:07:41.780 The workflow starts with a valid object. The second step is to make one specific change: one mutation of an attribute on that object, and the last step is to assert that the valid object is now invalid.
00:07:59.350 So you say, I've got a valid object, I know it's valid based on certain criteria. I'm going to change specific attributes of that object and then I'm gonna assert that it's now invalid.
00:08:10.560 Let’s say inside our Rails application we have this user model. The user model has some class methods called on it but the biggest part we're going to focus on is the validations.
00:08:19.770 We're validating the presence of a first name, the length of the middle name, last name, email in a certain format, and so on. We've seen this many times before.
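A sketch of what such a model might look like; the exact attributes and bounds are assumptions pieced together from the description in the talk:

    class User < ActiveRecord::Base
      validates :first_name, presence: true, length: { in: 4..20 }
      validates :middle_name, length: { maximum: 20 }, allow_nil: true
      validates :last_name, presence: true
      validates :email, format: { with: /\A[^@\s]+@[^@\s]+\z/ }
    end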
00:08:41.650 Here’s our test suite. We’ve opened up our user spec and it looks something like this: we're going to describe the user and run our first assertion. It’s going to say it should be invalid. It's very descriptive.
00:09:01.360 We create a user, and the user has this really long first name. Since the length is supposed to be between 4 and 20 characters, the object is not valid because we're over 20 characters.
00:09:28.060 Then we check the short version. In the first case we're over 20 characters; in the second we're only three, so it's really short. We test both cases in that first example, and the test passes.
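The broken spec he is describing probably looked something like this (a reconstruction, not the actual slide):

    describe User do
      it "should be invalid" do
        # Too long: over 20 characters
        user = User.new(first_name: "A" * 21)
        expect(user.valid?).to_not eq(true)

        # Too short: under 4 characters
        user = User.new(first_name: "A" * 3)
        expect(user.valid?).to_not eq(true)
      end
    end

Note that this would still pass even if the length validation were deleted, because every other validation on the model is failing too.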
00:09:43.710 This is fantastic! However, is it actually accurate? No, it’s totally broken! This is a false positive. Oftentimes we notice we have a lot of false positives inside our code.
00:10:01.990 So what is a false positive? If we look at our factory for the user and create one where only the first name is set, and it's really long, the object is invalid. But it isn't invalid only because the first name is too long: all the other validations are failing as well.
00:10:19.960 So even though we think we're testing one specific case, the assertion can't tell that case apart from any other failing validation. We actually have a broken test.
00:10:34.230 So if we look back and say, what’s really the problem here? The problem is we are not communicating enough. We have this kind of blanket description like 'it should be invalid.' It doesn’t explain it or express any of that.
00:10:56.270 One of the objectives of your test suite should be that when you read it, it helps you rationalize and reason about the code itself. This is the whole idea of being declarative.
00:11:10.060 If I declare a certain set of criteria and functions, they should come out the other end meeting those expectations. Let's refactor this test a little bit.
00:11:27.210 We start looking at the top and say, well first and foremost, we're going to describe our user object. The next thing we're going to do is take advantage of an RSpec method called subject.
00:11:49.690 Raise your hand if you've heard of subject before in a test. What subject does is it defines what is the focal point—the object we are going to start to describe.
00:12:08.460 If you do not include this line, RSpec will effectively instantiate the class you passed to describe and use that. But I like to define it explicitly for a couple of reasons: it communicates what we're trying to test.
00:12:22.230 Most people don't realize you can pass a name as the first argument to subject; the test suite should communicate those things. So let's communicate what we are testing, and specifically that we're describing a new instance of the class.
00:12:42.460 Now let’s look down; our test has two very different assertions all being tested inside the same spot. We’re saying it should be invalid. And then we’re going to run this expectation.
00:13:01.120 We want to clean it up quite a bit. When we refactor this test and add in a couple of contexts, the first context will be when the first name is over 20 characters. It’s testing the validation where it’s too long.
00:13:20.280 Then down below we’ll add another context for the first name that is under four characters. We're not yet saying whether it passed or not; we're just describing what's going on.
00:13:40.320 Now we’re going to establish a let. This is going to be a variable, a value that we pass into our subject. It’s going to look something like this: on our subject, we’re going to say the user has a first name.
00:14:01.240 In the first context of over 20 characters, we will define what the first name looks like and then do the same in context with under four characters. We’re communicating what the expectation looks like.
00:14:20.050 We have refactored both of those contexts, which leaves us with the two expectation lines. We want to make them clearer.
00:14:32.080 Now there are a bunch of problems with this. It's tough to reason about, because what we end up with is: expect(user.valid?).to_not eq(true). In other words, we're asking "is true not true?" Does that make sense? No, it doesn't read well.
00:14:55.370 Most of you might not know this, but RSpec supports predicate magic methods. If you have a method on the user object that ends in a question mark, like user.valid?, you can write an expectation that looks like this.
00:15:13.300 Effectively, RSpec takes the predicate method name, drops the question mark, and prefixes it with be_. The expectation becomes easier to read, and we can assert without any confusion.
00:15:36.140 When we run RSpec, the failure output is properly formatted as well, so it's easy to understand as you read it. We've gone from "true is not true" to "is this true? The user is invalid." Makes sense? Fantastic.
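A quick sketch of that difference, comparing spellings of the same assertion; the be_invalid form assumes ActiveModel's invalid? method is available:

    expect(user.valid?).to_not eq(true)  # before: "is true not true?"
    expect(user).to_not be_valid         # predicate matcher: calls user.valid?
    expect(user).to be_invalid           # or, via ActiveModel's user.invalid?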
00:16:08.360 Now, let’s check: now that we've added in our expectation user must be invalid, have we eliminated those false positives? The answer is no, we haven’t yet.
00:16:24.970 This is where the Minimum Valid Object pattern comes into play. We haven’t actually implemented that yet; we are just cleaning up our tests.
00:16:45.180 If we go back, we will inject a little code at the very top of the test. At the very top, you’ll want to establish what a valid object looks like. This is essential.
00:17:06.550 As the test gets super long, that's okay. The goal is to know at the very top what is a valid object, what are the attributes that I can mutate on that object.
00:17:23.920 We run this test, and if that first expectation fails, we know that everything down below is suspect. We need to fix that first test, since it defines what a valid object is.
00:17:42.030 So we want to know that the very first expectation is valid as it sits. If that is valid, then when we make mutations down below, and they pass, we know we are actually testing whether this first name is the factor that is changing that validation.
00:18:00.800 Have we fixed the false positive? Yes, we did! There’s some magic that can happen here, which you can use or not; some love it, some hate it.
00:18:16.760 In RSpec, if you do this over and over, you can use a quick method called be_valid. Sometimes I use it, sometimes I don’t; it's totally up to you.
00:18:32.240 So the first thing we do as we build this whole thing out is create the subject by building our user object with a factory, passing in all the relevant attributes.
00:18:53.130 We assert that it's in a valid state. When we change the first name in a context, that value maps into the subject line, so that's the mutation and we're good.
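Assembled, the refactored spec might look like the sketch below, assuming FactoryGirl and the validations from earlier:

    describe User do
      subject(:user) { FactoryGirl.build(:user, first_name: first_name) }
      let(:first_name) { "Adam" }

      # The minimum valid object comes first: if this fails,
      # every assertion below it is suspect.
      it { is_expected.to be_valid }

      context "when the first name is over 20 characters" do
        let(:first_name) { "A" * 21 } # the one mutation
        it { is_expected.to_not be_valid }
      end

      context "when the first name is under 4 characters" do
        let(:first_name) { "A" * 3 }
        it { is_expected.to_not be_valid }
      end
    end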
00:19:09.860 This is a full and complete spec at this point. Pretty quickly you can see how we can truncate the spec really fast. We could take a 9,001 line spec down to far less than that.
00:19:38.850 Now, pattern number two is permutation tables. Here's what the workflow looks like: first, you define a set of data.
00:19:55.500 You define the output of each set and then you assert that the method creates the output from the data input. You input this data into the method and the output should match.
00:20:12.660 Let's look back at our user model during this example. We’ve got a method that effectively concatenates various name segments: first, middle, and last name. If any of those are nil, it strips them out and joins them.
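A plausible reconstruction of the method as described:

    class User < ActiveRecord::Base
      # Joins whichever name segments are present with a space,
      # stripping out any that are nil.
      def full_name
        [first_name, middle_name, last_name].compact.join(" ")
      end
    end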
00:20:34.490 If we were to write tests for this, it may look like this: we describe the full_name method and set the subject to the result of actually calling the method.
00:20:51.940 Most of the time, subject is reserved for the object under test; however, pointing it at a method's return value can sometimes be helpful. The key is to keep the three aspects aligned: the input data, the method, and the expected output.
00:21:09.360 Now, we build out our spec and start adding various permutations. For example, if the first name is nil, it should equal the last name.
00:21:28.990 If the last name is nil, it should equal the first name. So the question becomes: is anything missing? Raise your hand if you've seen specs like this.
00:21:53.610 The problem with this style is that we end up calling the same method over and over, testing it many times with near-identical examples.
00:22:12.240 We start seeing specs where the permutations that should be tested together are scattered, and it's unclear which ones are actually covered.
00:22:28.720 The permutation table is a pattern that prevents all of that and makes missing variations obvious. To build one, we put the cases into a hash.
00:22:48.210 Each key is a set of inputs and each value is the expected output: if the first name is nil and the last name is Johnson, full_name should equal "Johnson". Now we take all the examples we had before and plop them in.
00:23:04.320 This way we can easily see what's missing and answer the question of which combinations still need covering.
00:23:23.660 Next, we iterate through the sets and assert that each output matches what's expected.
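A sketch of the table in code; the names and context descriptions are illustrative:

    describe "#full_name" do
      {
        ["Adam", "Thomas", "Johnson"] => "Adam Thomas Johnson",
        ["Adam", nil,      "Johnson"] => "Adam Johnson",
        [nil,    nil,      "Johnson"] => "Johnson",
        ["Adam", nil,      nil      ] => "Adam"
      }.each do |(first, middle, last), expected|
        context "when the name parts are #{[first, middle, last].inspect}" do
          subject { User.new(first_name: first, middle_name: middle, last_name: last).full_name }

          it { is_expected.to eq(expected) }
        end
      end
    end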
00:23:40.370 An optional next step is shared examples, which let you define common sets of expectations. It prevents duplicating code.
00:23:56.590 Shared examples is like a mixin where you can define what common expectations should be without redundancy. We can say it behaves like a full name.
00:24:19.840 The last argument will determine the output expected. So this makes it very clear and concise for the developer.
00:24:38.020 Now when we want to add a fourth permutation, we just add a row to our table, and that's it! This expedites the entire process.
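Combining the table with a shared example might look like this sketch, where (per the talk) the last argument determines the expected output:

    RSpec.shared_examples "a full name" do |first, middle, last, expected|
      subject { User.new(first_name: first, middle_name: middle, last_name: last).full_name }

      it "returns #{expected.inspect}" do
        is_expected.to eq(expected)
      end
    end

    describe User do
      it_behaves_like "a full name", "Adam", "Thomas", "Johnson", "Adam Thomas Johnson"
      it_behaves_like "a full name", "Adam", nil,      "Johnson", "Adam Johnson"
      it_behaves_like "a full name", nil,    nil,      "Johnson", "Johnson"
      # A fourth permutation is one more row:
      it_behaves_like "a full name", nil,    nil,      nil,       ""
    end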
00:24:54.700 Now the third pattern we have is the golden master, which addresses a few specific testing problems that need to be resolved.
00:25:09.560 First, it can be a way to backfill untested legacy code. Second, it’s valuable when the output is complicated, requiring visual confirmation.
00:25:21.200 Golden master testing can particularly come in handy if you're looking at complex data types like JSON payloads.
00:25:31.680 The next reason is that when the code complexity significantly exceeds current domain knowledge, it helps a lot.
00:25:42.820 If you open your tests and see a pending scenario, that's where golden master testing can come into play.
00:26:09.370 The workflow works like this: first, you take a snapshot of the output and put it into a file. Next, you verify the snapshot manually.
00:26:35.780 You literally review the file and confirm it meets your expectations. From that point forward, you compare future versions to the master.
00:26:55.650 Now, instead of walking through every implementation of this, here's a useful gem by Katrina Owen called approvals.
00:27:07.920 The approvals gem basically implements golden master testing across the board, so you don't need to reinvent the wheel.
00:27:23.720 As we work through the approvals implementation, the first thing you'll notice is a verify method that takes a block and does the work of the expectation.
00:27:40.100 The block produces the data you expect, which is compared against the approved file. If you want to specify a particular format for the information, you have that option.
00:27:57.600 You can review the output through the approvals command line interface and manually verify snapshots.
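A sketch of what usage might look like, based on the approvals gem's README; the UserSerializer here is a hypothetical stand-in for whatever produces your payload:

    require 'approvals/rspec'

    describe "the user JSON payload" do
      let(:user) { FactoryGirl.build(:user) }

      it "matches the approved golden master" do
        # Writes the block's value to a *.received.* file and diffs
        # it against the committed *.approved.* master file.
        verify format: :json do
          UserSerializer.new(user).to_json # hypothetical serializer
        end
      end
    end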
00:28:13.790 I don't think it should be used for large applications where there may be many snapshots to verify; instead, it is great for isolated examples.
00:28:31.850 Here are a few best practices, or better practices if you prefer: ways to do things more easily.
00:28:49.530 The first is to use let instead of instance variables, which helps prevent a common class of debugging issues.
00:29:08.680 You might have an instance variable holding a full name and an expectation that fails; you dive back into the code and don't notice the spelling mistake, because a misspelled instance variable silently evaluates to nil.
00:29:29.380 This highlights the importance of utilizing let: a misspelled let helper raises an error immediately. It's much easier to debug when the failure tells you exactly what was expected.
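A small sketch of the two failure modes; the names are illustrative:

    describe User do
      let(:full_name) { "Adam Cuppy" }

      it "fails loudly when a let helper is misspelled" do
        ful_name # NameError: undefined local variable or method `ful_name'
      end

      it "fails silently when an instance variable is misspelled" do
        @full_name = "Adam Cuppy"
        expect(@ful_name).to be_nil # no error; just a confusing failure later
      end
    end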
00:29:48.770 The next point is to be descriptive in naming tests. Instead of saying 'it should be valid,' be specific about what is being tested.
00:30:08.950 Next is to extract common expectations. RSpec supports this, along with plain Ruby itself. You can use custom matchers to help keep the tests DRY.
00:30:27.860 A custom matcher could help with repetitive tasks across your test suite, increasing clarity and reducing redundancy.
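A sketch of a custom matcher that extracts a repeated expectation; the matcher name is illustrative:

    RSpec::Matchers.define :have_error_on do |attribute|
      match do |record|
        record.valid? # run validations to populate errors
        record.errors[attribute].any?
      end
    end

    # Usage:
    #   expect(user).to have_error_on(:first_name)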
00:30:46.570 Next, you should consider using factories instead of fixtures, which could help create valid objects.
00:31:00.560 Factory Girl is a gem that can help build objects quickly. The goal is to always represent valid objects.
00:31:15.770 If you build a user from its factory, it won't have invalid attributes, and you can then exercise the code from every angle.
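A sketch of such a factory; FactoryGirl was the gem's name at the time of this talk (it has since been renamed factory_bot), and the attribute values are illustrative:

    FactoryGirl.define do
      factory :user do
        first_name "Adam"
        last_name  "Cuppy"
        sequence(:email) { |n| "user#{n}@example.com" }
      end
    end

    # Build a valid user, then override only what the test cares about:
    #   FactoryGirl.build(:user, first_name: "A" * 21)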
00:31:30.020 Let’s wrap this talk up, and here are some resources you can read. I will include all of this, so don't worry!
00:31:48.750 Better Specs is a great starting point; you won't agree with every principle, but they form a good basis.
00:32:06.810 A blog post series called "Getting Testy" by Randy Coleman is also good reading on structuring tests.
00:32:20.930 Lastly, for those who haven't read the book "Practical Object-Oriented Design in Ruby" by Sandi Metz, this is required reading.
00:32:37.260 This book is crucial for writing quality code. Thank you very much for your attention today.
00:33:01.560 If you have further questions, find me on Twitter or consult me anytime. I’m available the rest of the week. Thank you!