Talks

TDD Workshop: Outward-in Development, Unit Tests, and Fixture Data

by Harlow Ward and Adarsh Pandit

The video titled 'TDD Workshop: Outward-in Development, Unit Tests, and Fixture Data' features Adarsh Pandit and Harlow Ward from thoughtbot at RailsConf 2013. The workshop focuses on Test-Driven Development (TDD), addressing the challenges developers face as software complexity increases, and providing hands-on demonstrations of TDD techniques.

Key Points Discussed:
- Introduction to TDD: Adarsh and Harlow emphasize the importance of writing tests first, which helps prevent unnecessary code and reduces bugs.
- Pair Programming: The workshop takes a hands-on approach called ping-pong pairing, in which the presenters alternate writing failing tests and the code to make them pass, illustrating how collaboration helps in learning TDD.
- Integration Testing with RSpec and Capybara: They demonstrate how to conduct integration tests, simulating user behavior and interaction within a web application using RSpec and Capybara.
- Practical Application: The session includes building a basic to-do application and implementing a new feature, marking tasks as complete. They stress how failing tests prompt and guide the code that follows.
- Error Handling and Refactoring: They navigate through error messages when tests fail, explaining how to fix issues in a structured manner while maintaining code clarity and reusable components.
- Benefits of TDD: Notable advantages include improved code understanding, documentation, and enhanced collaboration among developers. They argue that it encourages purposeful coding and minimizes confusion.
- Further Engagement: The presenters encourage audience participation and questions, reinforcing the interactive nature of the workshop.

The workshop concludes with a reflection on the TDD process and the balance it strikes between testing, development, and collaboration. With an emphasis on continuous improvement, the presenters advocate for well-structured code and a clean project history. Overall, the session serves as both an introduction and a deep dive into practical TDD techniques, promoting discipline and practice as vital to mastering the process.

00:00:12.259 Thank you! Hey everybody, welcome. I'm Adarsh Pandit, and this is Harlow Ward.
00:00:18.900 We are both from thoughtbot, and today we're going to do a giant group pairing exercise where we're going to walk through how to do Test-Driven Development (TDD).
00:00:30.960 There are some prerequisite tasks up here. If you haven't had the chance to clone the repo, please do so. If you have any trouble, please raise your hand.
00:00:43.379 We have a number of awesome TAs who are running around to help you. There are also your neighbors nearby. Many of you have paired already, and that's great!
00:00:56.940 So, please help one another out. This is a workshop setup, so we're going to do some live coding, and you can follow along. I'll talk a little bit more about that in a second.
00:01:09.840 We have a USB stick with all of the gems on it, so if you're still struggling, raise your hand, and at some point, one of our team will come over and get you set up.
00:01:21.000 We're expecting to have a great experience working together to learn TDD techniques. This is who we are: we are both developers at thoughtbot.
00:01:36.000 You can find us on email and Twitter.
00:01:41.579 Thoughtbot is a Rails consultancy and iOS development company. We have developers and designers, and we build startups for mobile and web applications. It's a lot of fun! We also have a lot of open-source gems and tools that we manage, which you can find at thoughtbot.com/community.
00:02:07.560 Now, for some housekeeping: we're going to conduct a two-part workshop. This might have been described as one half talk and one half workshop, but we've decided to take a hands-on approach and make it a giant workshop.
00:02:17.400 What that means is we'll be demonstrating code up here. Harlow and I will be pairing, and you guys can code along in the audience.
00:02:37.520 In our typical workday, we do what we call ping-pong pairing, where I write a test that fails, Harlow writes some code to make it pass, and then he writes the next failing test. We go back and forth like that, which keeps a set of eyes on the code.
00:02:54.900 This is particularly helpful during refactoring phases, as you are often refactoring someone else's code.
00:03:06.300 Many of you have different levels of experience. This workshop is probably best for students with some Rails experience, although we have found that even exposing yourself to concepts that are over your head helps you make notes on things to look up later.
00:03:38.580 Some of it may make sense to you further down the line, so don't get discouraged. There is a lot of complex material we will discuss, and if you have questions, please let us know. My main goal is to expose some techniques we have learned over the last few years.
00:04:04.280 When I first started developing in Ruby, it was a bit of a hurdle not knowing how to test something, so these are some techniques we've learned for testing certain scenarios that may seem troublesome upfront.
00:04:29.400 In terms of format, we will walk through different modules where we introduce a topic such as integration testing. We will then live TDD the feature, so that's how we plan to proceed.
00:04:43.800 In between sections, we'll stop for comments or questions. This is meant to be very interactive, so if you have questions, please let us know.
00:04:55.680 We encourage you to pair up. We're going to simulate a pairing experience up here to give you a sense of what that's like. In our opinion, sharing a computer is the best way to learn how to be a developer, or to learn Ruby, Rails, or anything else.
00:05:19.380 We have a number of TAs running around wearing thoughtbot gear. Can you guys raise your hands? Take a look around, and if you have any issues, these folks can help.
00:05:31.380 You can commit your work to the thoughtbot repo that you forked as we go, and, Wi-Fi permitting, refer to the original to see the code we've committed.
00:05:50.520 That way, you can follow along, commit to your own fork, and keep a record of what you've learned, so you can compare and contrast your progress.
00:06:03.479 The main goal is to write our tests first. The idea here is that by writing the tests first, we will get just the code necessary for this feature and hopefully nothing more.
00:06:29.400 You all have a smoke test app running locally, which is a really basic to-do application. You should be able to run the test suite, which demonstrates creating a to-do.
00:06:40.500 Our next step is to write some code to be able to complete these to-dos.
00:06:47.040 Raise your hand if you are familiar with test-driven development. Great! Keep your hand up if you practice TDD. Now, keep your hand up if you practice TDD 100% of the time.
00:07:03.539 You're in the right place! The title of our talk emphasizes discipline, and that's really what it takes.
00:07:10.380 It takes practice and discipline, and we’ll show you how we approach TDD. I think seeing it in action is a lot easier than learning it from a book.
00:07:27.840 So, wherever possible, try to work with someone who is more experienced.
00:07:33.180 Let’s skim through some of the benefits of TDD, especially since most of you are familiar with them. First, you set the expected outcome.
00:07:44.280 TDD forces you to consider the purpose of your code before you write it, which is very important. Otherwise, you can end up on a 'jazz odyssey' of coding where you meander without purpose.
00:07:54.600 TDD reduces bugs and rework, as it alerts you if something has changed or broken something else.
00:08:02.100 Furthermore, it serves as a form of documentation, living on in the repository and instructing other developers about what your code does.
00:08:12.460 Is everyone familiar with the red, green, refactor cycle? Let me touch on that briefly. Red and green refer to the color of test outputs: red means your test fails, and green signifies that it has passed.
00:08:31.679 The term 'red, green, refactor' describes our cycle: we see red until the code we add satisfies the test's assertions and turns the suite green, and then we refactor with the passing tests as a safety net.
00:08:51.360 Now, let's get started! Harlow is going to lead us off in the first section.
00:09:01.699 Part of the pairing experience is about communication and discussing problems with your pair. We will alternate who writes code and tests, so Harlow will type for a bit and then we'll swap places.
00:09:25.320 This will give you a feel for how we typically pair. Let's walk through the smoke test that is currently implemented to ensure we all understand what is happening.
00:09:39.480 For those of you who attended yesterday's RSpec and Capybara talk, this will be familiar: integration testing exercises your entire web application stack.
00:10:03.480 In the past, we wrote scripts for a person to log in and perform actions repetitively. Nowadays, we leverage automated browser testing with Capybara or Cucumber.
00:10:20.460 We’re using RSpec with Capybara for our integration tests, which many of you may have seen yesterday. This has been a great improvement.
00:10:36.839 Let's take a look at the tests you already have in your repository. The file containing these tests is located in spec/features/user_manages_tasks_spec.rb.
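As a rough sketch of what that smoke test might contain (the exact labels and names are assumptions, not necessarily the workshop repo's code):

    # spec/features/user_manages_tasks_spec.rb
    require 'spec_helper'

    feature 'User manages tasks' do
      scenario 'creates a task' do
        task_name = 'Buy groceries'

        visit root_path
        click_on 'New Task'
        fill_in 'Name', with: task_name
        click_on 'Create Task'

        # The new task should appear in the task list
        expect(page).to have_content(task_name)
      end
    end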
00:10:54.840 We're going to add a new feature. This application is straightforward. You can create tasks, read them, update them, or delete (CRUD) them.
00:11:41.520 So, let's add the ability to mark a task as complete. We probably won’t use the web browser again during this process.
00:12:02.000 While troubleshooting, you might find it more efficient to work in the terminal rather than the web browser. It may seem awkward at first, but productivity tends to improve.
00:12:25.379 For those of you familiar with Cucumber, we use similar nomenclature, such as scenarios and features. The entire feature we are focusing on is called 'user manages tasks', and our scenario is marking the task as complete.
00:12:46.560 Using Capybara's syntax, we can drive a browser to simulate user interaction. Harlow is going to define a task name now.
00:13:03.960 The Capybara DSL allows us to fill in fields and click buttons, and a cheat sheet PDF with these actions is included in the repository.
00:13:34.780 Here, we want a variable to hold the task name, visit the root path, click a link for new tasks, fill in the name field with the task name, and click the button to create the task.
00:14:02.520 You see how we mimic the actions performed in a browser and structure this test code. The goal is to write just enough code to pass our tests.
00:14:28.920 I hope to write a failing test that will prompt me to add a 'complete task' button in the interface. This shows how the test drives the design of the code that follows.
00:15:02.340 When Harlow writes this test, he looks for a specific tag indicating completion next to the task name in our HTML structure. We will write the code to add that expectation to our integration test.
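Putting those steps together, the new scenario might read roughly like this (the 'Complete' link text and the .completed CSS class are illustrative assumptions):

    scenario 'marks a task complete' do
      task_name = 'Buy groceries'

      # Create the task through the UI, just as a user would
      visit root_path
      click_on 'New Task'
      fill_in 'Name', with: task_name
      click_on 'Create Task'

      # Then complete it and expect a completion marker next to its name
      click_on 'Complete'
      expect(page).to have_css('.completed', text: task_name)
    end

You can then run just this spec file with something like rspec spec/features/user_manages_tasks_spec.rb and let each failure message tell you what to build next.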
00:15:49.580 Once we have our tests structured properly, we want to run the entire test suite to confirm we haven’t broken anything as we go.
00:16:32.420 I will initiate by running a specific test to ensure that our implementation is on track before proceeding.
00:16:53.760 Running the test yields a failure due to a missing completion link. This tells us we need to implement a link in the view to mark a task as complete.
00:17:16.780 Next, we navigate to the relevant HTML file where tasks are listed and define the action for our 'complete' button.
00:17:37.920 We'll create a simple link to the task completions path, which will later handle the completion logic.
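In the view that lists tasks, that link might look something like this (file path, label, and the nested completions route are assumptions used for illustration):

    <%# e.g. app/views/tasks/index.html.erb %>
    <%= link_to 'Complete', task_completion_path(task), method: :post %>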
00:18:06.360 Upon submitting, we'll take action to update the task status in our application.
00:18:29.040 At this point, we will need to define additional parts of our application, including necessary routes.
00:18:43.320 Running our test again shows us an undefined method error, illustrating that our routes and functionality are not complete.
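The undefined method error here is typically the missing path helper, which points at the routes file. One possible shape for the route, assuming a completion is modeled as a nested singular resource:

    # config/routes.rb (a sketch; resource names are assumptions)
    resources :tasks do
      resource :completion, only: [:create]
    end

With that in place, task_completion_path(task) becomes available to the view.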
00:19:08.820 We need to add the missing controller and method that will define how tasks get marked as completed.
00:19:35.540 In essence, we want to find the task and mark its completion. We will set up our expected path to direct a complete request correctly.
00:20:12.720 With our migration files in place and our routes defined, we will want to update our task schema to include the necessary fields.
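That migration might look roughly like this (column name and default are assumptions), generated with something like rails generate migration add_completed_to_tasks completed:boolean:

    class AddCompletedToTasks < ActiveRecord::Migration
      def change
        # Track completion state directly on the tasks table
        add_column :tasks, :completed, :boolean, default: false
      end
    end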
00:20:55.560 After creating and applying the migration, we will check to confirm our application structure is on the right path and that all changes persist.
00:21:13.060 Once our tasks are created and added to our interface, we will want to ensure that completed tasks display correctly.
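One way to render that marker so the integration test's expectation can find it (markup and class name are assumptions matching the sketch above):

    <%# In the task list view: tag completed tasks with a CSS class %>
    <li class="<%= 'completed' if task.completed? %>">
      <%= task.name %>
    </li>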
00:21:54.540 Thus, we can see the success of our test-driven approach, as inputting data through the web interface will update task statuses accordingly.
00:22:20.280 Upon executing the tests again, we confirm that our integration tests are passing.
00:22:41.040 After evaluating our code and areas for improvement, we examine our methods, commentary, and structure.
00:23:05.160 Refactoring opportunities will arise, and we try consolidating similar components into clean, reusable methods.
00:23:19.440 As we consolidate our methods, we carefully verify existing functionality remains intact while improving clarity.
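As one example of that kind of consolidation (a sketch, not necessarily the presenters' exact refactoring), the repeated Capybara setup steps in the feature specs can be pulled into a small helper:

    # A helper method keeps each scenario down to the steps that matter
    def create_task(name)
      visit root_path
      click_on 'New Task'
      fill_in 'Name', with: name
      click_on 'Create Task'
    end

Each scenario can then begin with a one-line create_task('Buy groceries') call, and the suite is rerun to confirm nothing broke.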
00:23:29.520 We also update the documentation so it clearly describes the newly established behavior in our system.
00:23:46.740 With that, we'll push our completed work to our GitHub repository, keeping a neat codebase.
00:24:02.960 Squashing our commits is essential for keeping a clean Git history as we integrate our latest feature development.
00:24:22.440 We rigorously handle our history and documentation through each round of development, maintaining clear connections to our task card for transparency in changes made.
00:24:43.410 Once our code is validated and checked for any possible failures, we will merge the branch into master and ensure continued system integrity.
00:25:24.260 After confirming accuracy, we delete the merged feature branch to keep our development process tidy.
00:25:56.740 Overall, this initial acceptance test showcases our TDD approach, starting from the outside and delivering a fully integrated feature.
00:26:08.960 As we conclude this section, recognize how crucial the error messages are, and remember to focus solely on solving one error at a time.
00:26:47.500 As you watch how we structure features, notice how the tests facilitate the process, guiding what code we write and keeping it focused.
00:27:20.140 As we venture into further topics, we open the floor for questions and discussions around using TDD in our coding environment.
00:27:47.980 Thank you for participating and please, ask away!