
TDD Workshop: Mocking, Stubbing, and Faking External Services

by Harlow Ward and Adarsh Pandit

The video titled 'TDD Workshop: Mocking, Stubbing, and Faking External Services' features Adarsh Pandit and Harlow Ward discussing practical approaches to Test-Driven Development (TDD) in Ruby on Rails within the context of the RailsConf 2013 event. The workshop emphasizes understanding and implementing validations in Rails models through unit testing rather than integration testing.

Key Points Discussed:
- Introduction to the TDD Approach: The speakers express empathy for newcomers in Ruby and Rails, acknowledging the challenge of grasping TDD principles.
- Handling the 'Sad Path': The session begins with a focus on testing validations using Shoulda Matchers, a Domain Specific Language (DSL) for RSpec, which is instrumental for unit tests.
- Unit Testing with RSpec: The speakers demonstrate writing tests for model validations, highlighting how tests affirm the required code and ensure necessary validations are part of the model before they are implemented.
- Layered Validations: Pandit and Ward emphasize that data integrity is most robust when both model-level and database-level validations are used during development.
- Assertions with Shoulda Matchers: They illustrate making various assertions such as checking for presence and uniqueness of model attributes. The discussion includes best practices regarding syntax for clarity and readability in validation assertions.
- Custom Validations: They explain the option to define custom validation rules through dedicated methods, enhancing flexibility in how data integrity is enforced.
- Code Review and Maintenance: The speakers discuss the significance of a rigorous code review process to guard against accidental removals of validations and their corresponding tests, ensuring maintainability in the codebase.
- Final Engagement: Concluding the session, the speakers invite any remaining questions and announce a coupon code for workshop materials, reinforcing community support during TDD learning.

Takeaways:
- Test-Driven Development encourages developers to write tests that define required code behavior prior to implementing it.
- Utilizing tools like Shoulda Matchers helps streamline the process of validating models in Rails, making it possible to ensure both application and database-level consistency.
- Collaboration and diligent code review strategies play a crucial role in the effective maintenance of validation logic in software projects.
Overall, the workshop highlights the methods and best practices for implementing TDD and enhancing testing strategies in Ruby on Rails development.

00:00:16.400 Are you all feeling good? Soaking it in? I know it's a lot. As I mentioned to some attendees here, we had planned about seven modules, and we've only made it through one so far. A lot of the content will be available as a Markdown document on the RailsConf tutorials site, so fear not! We're both around all week to answer any fun TDD questions. We came to Ruby and Rails relatively recently ourselves, so we empathize and have a good sense of how beginners get started.
00:00:30.160 Let me throw a test in here. One of the questions we had just before the break was about handling the 'sad path'—how do we test that validations occur, and things of that nature? Instead of doing an integration test, we're going to use a library called Shoulda Matchers, which is a DSL (Domain Specific Language) that can be used in RSpec. This will allow us to write unit tests on these models to ensure that validations are indeed present.
00:01:00.399 This approach aligns with the philosophy of letting our tests assert the code we wish we had in our codebase. We won't add validations to the model until we have a failing test that indicates we need them. Just to describe what I've done here, this file is named spec/models/task_spec.rb. This follows convention over configuration: the spec for a model lives in the spec/models directory and has the same name as the model with '_spec' appended.
00:01:37.680 I've used the classic RSpec syntax to describe the task. We’re pointing to this model, and I'm using Shoulda Matchers to assert that it should validate the presence of the name. Does that look right? Yes, that looks good to me—let’s see if this fails.
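A minimal sketch of the spec being described, assuming the shoulda-matchers gem is wired into RSpec (the file contents are paraphrased from the talk, not shown verbatim, using the syntax common around the time of the workshop):

```ruby
# spec/models/task_spec.rb -- unit test asserting the validation we wish we had
require 'spec_helper'

describe Task do
  # Shoulda Matchers one-liner: builds a Task with name set to nil, calls
  # valid?, and expects a "can't be blank" error on :name
  it { should validate_presence_of(:name) }
end
```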
00:02:01.680 Great. The failure tells us that Shoulda Matchers tried to build this record without providing the name, which is set to nil. The error message says it expected the errors to include 'can't be blank' when the name is nil, but found no such error, because the validation doesn't exist yet. This is exactly what we wanted. Behind the scenes, the matcher sets up the model, calls valid? on it, and then checks the model's errors for 'can't be blank', which is the standard error output you would see on a web form next to that particular field.
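Roughly what that looks like in plain Ruby, as a simplified sketch of the matcher's behavior:

```ruby
# A simplified manual equivalent of what the presence matcher checks
task = Task.new(name: nil)
task.valid?                                    # runs the validations
task.errors[:name].include?("can't be blank")  # true once the validation exists
```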
00:02:59.120 Instead of checking the contents of the page in our integration tests, we’re unit testing the model to ensure that this validation is present. This allows us to focus on keeping our tests independent of Rails, trusting that Rails will notify of any issues while we focus on our application logic. Since our test was for validation and it failed beautifully, Harlow is now adding a validation to the model to confirm that the task name exists.
00:04:03.200 Could we look at the model again really quick? Yes, what we have here is the model in app/models/task.rb. At the top of the screen is the model, and at the bottom is the test. Harlow has added the line validates :name, presence: true, which also serves as nice documentation later. If someone wonders what validations are applied to a model, they can check the unit tests, which express in plain English what has been set up so far.
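The model, as described, presumably looks something like this (ActiveRecord::Base was the base class for Rails models at the time):

```ruby
# app/models/task.rb -- the model-level validation the failing test asked for
class Task < ActiveRecord::Base
  validates :name, presence: true
end
```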
00:05:05.120 An important point we often discuss with beginners is that there are two layers of validation in Rails. The first is model-level validation, and the second is database-level validation. When you run your migrations, you can specify similar requirements, stating that a field needs to be present; otherwise, the database will reject it.
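As an illustration (the actual migration isn't shown in the workshop), a database-level constraint could be declared in a migration like this:

```ruby
# A hypothetical migration: null: false makes the database itself
# reject rows that are missing a name, independent of the model.
class CreateTasks < ActiveRecord::Migration
  def change
    create_table :tasks do |t|
      t.string :name, null: false
      t.timestamps
    end
  end
end
```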
00:06:00.080 We typically use both validation methods to ensure our data remains clean. Shoulda Matchers provides several assertions we can make. For example, I just added an assertion for validating the uniqueness of the name, and I’ll run that assertion next. The question may arise whether we can pass multiple validation assertions in one block; we can indeed do that.
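The spec with both matchers might read roughly as follows; note that, depending on the shoulda-matchers version, the uniqueness matcher may need an existing record in the database to compare against:

```ruby
# spec/models/task_spec.rb -- one matcher per assertion
describe Task do
  it { should validate_presence_of(:name) }
  it { should validate_uniqueness_of(:name) }
end
```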
00:07:02.319 We can assert two validations in one line within the model: validates :name, presence: true, uniqueness: true. Alternatively, we can split these validations into two separate lines. I personally prefer the split syntax because it enhances readability, making each assertion clearer.
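The two model-level styles being compared, sketched side by side:

```ruby
# app/models/task.rb -- two equivalent ways to declare the same validations
class Task < ActiveRecord::Base
  # One line, multiple options:
  # validates :name, presence: true, uniqueness: true

  # Or split onto separate lines, one idea per line (the style preferred above):
  validates :name, presence: true
  validates :name, uniqueness: true
end
```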
00:08:27.279 In testing, each assertion should be treated as its own independent idea, which maps better to the separate matchers in the test file. Validations are relatively straightforward, yet we can grow more intricate with them as the project develops.
00:08:56.639 Let's pause for any final questions. Yes, the question was about whether we’re inadvertently testing Rails itself by validating its functionalities. To clarify, we're ensuring that the validation exists within our model, asserting that the intended constraints are present; if someone modifies the validations, it will trigger a test failure, ensuring we maintain the necessary checks.
00:09:13.760 Regarding custom validations, we can submit specific method names to validations, allowing for custom rules as needed. For instance, if we need to ensure that every task includes a particular keyword, we define a method to evaluate that condition, and Rails will check through this method during validation.
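A hedged example of what such a custom validation method could look like; the keyword rule and method name here are purely hypothetical, not from the workshop:

```ruby
class Task < ActiveRecord::Base
  validates :name, presence: true
  validate :name_includes_keyword  # Rails calls this method during valid?

  private

  # Adding to errors marks the record invalid.
  def name_includes_keyword
    unless name.to_s.include?('urgent')
      errors.add(:name, "must include the keyword 'urgent'")
    end
  end
end
```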
00:10:35.679 Furthermore, validation methods can be defined in the app/validators directory, where you can implement distinct unit tests for each validator, ensuring comprehensive coverage of all positive and negative cases in your validations.
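A sketch of a standalone validator class along those lines (the names are hypothetical; Rails autoloads classes placed under app/, so app/validators works without extra configuration):

```ruby
# app/validators/keyword_validator.rb -- a custom validator that can be
# unit tested on its own, covering positive and negative cases.
class KeywordValidator < ActiveModel::EachValidator
  def validate_each(record, attribute, value)
    unless value.to_s.include?('urgent')
      record.errors.add(attribute, "must include the keyword 'urgent'")
    end
  end
end

# Used from the model:
#   validates :name, keyword: true
```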
00:12:05.760 While having a test for a validation ensures that we track changes, there's always the chance that a developer could remove both the validation and its test. However, rigorous code review, pair programming, and team diligence are effective safeguards against such occurrences.
00:13:44.560 As we approach the end of our session, we recognize that we may not have adequate time to engage in the next module. We will break a few minutes early, but we will mingle in the meantime to address any remaining questions.
00:15:23.840 Before we conclude, I have an announcement: If you’re interested, there’s a coupon code for 20% off everything on our learn portal, including our subscription service, Learn Prime, which offers numerous recorded workshops. I appreciate everyone for coming and for participating in today’s session. Testing and developing in a TDD environment is a challenging endeavor, and it’s gratifying to see everyone come together to support each other.