Surviving Growing from Zero to 15,000 Selenium Tests

Jim Holmes • April 07, 2015 • Earth

Summarized using AI

In the presentation "Surviving Growing from Zero to 15,000 Selenium Tests" by Jim Holmes, the speaker shares his journey of implementing Selenium for automated testing in an organization that originally did not value automation. The talk captures the progression from using Selenium IDE for initial tests to eventually scaling to 9,000 tests and dealing with the challenges that arose during this growth.

Key points discussed include:

- Initial Adoption: Jim describes starting with Selenium IDE to establish a proof of concept with 100 tests incorporated into the build process, which demonstrated the value of automation.

- Scaling Tests: As the number of tests grew, they transitioned to Selenium WebDriver, allowing them to eliminate issues related to Selenium RC and improve test reliability.

- Challenges with Test Execution: With 9,000 tests taking 16 hours to run, the feedback cycle was too lengthy, prompting the need for a more efficient solution.

- Test Granularity: Jim discusses the importance of breaking tests into smaller, critical pieces to allow for faster execution and feedback.

- Infrastructure and Parallel Execution: He emphasizes the need for proper infrastructure to run tests in parallel with tools such as Selenium Grid, highlighting how early scaling and distribution of tests can prevent bottlenecks.

- Treating Test Code as Production Code: The speaker encourages treating test code with the same respect as production code, advocating for regular refactoring and maintenance to avoid brittleness in tests.

- Collaboration with UI Developers: Jim underscores the value of communication with UI developers to improve testability and streamline the testing process.

- Cultural Shift Towards Automation: The significance of fostering a culture that values testing and automation is also noted, as this can lead to more substantial organizational support for testing efforts.

In conclusion, Holmes asserts that successful testing is not just about the technology but the people involved. He encourages building alliances within the organization and maintaining a focus on providing value through effective testing practices. By tackling infrastructure challenges early, treating test code seriously, and ensuring efficient feedback loops, teams can successfully implement and scale their Selenium testing efforts.


by: Jim Holmes

Selenium is a wonderful tool for automating acceptance and functional tests; however, real-world implementations bring a lot of pain. I suffered all that pain, and more, as I piloted an effort that started out with Selenium IDE, moved through RC, and ended up with WebDriver. This talk covers things like setting up baseline data, creating backing test frameworks, dealing with brittle tests, and figuring out how to appropriately manage all those incredibly slow Selenium tests so that you actually get effective, useful testing in. Learn from my pain (and successes!) so that you don’t have to suffer it in your own projects!


Rocky Mountain Ruby 2011

00:00:04.160 All right, so I'm here to talk about an experience that I had at a previous job where we grew from zero to 9,000 Selenium tests. If you're looking at the conference guide, it says zero to 15,000 Selenium tests. Sorry, I had an off-by-one error. I'm a .NET guy, and the fact that I had any tests at all is a win.
00:00:06.680 My name is Jim Holmes; if you want to follow me on Twitter, I'm @thejimholmes. I'm just one of them. So the story is really about going into an organization that didn't value automation. I had to figure out how to get some quick wins and start to establish effective testing. Then I had to deal with several issues that arose along the way.
00:00:14.040 When I started at this company, they had no QA department, no real testing, and no appreciation for automation. I went in with my eyes open: I was joining a couple of friends and already knew about the ugly corners, so none of it scared me.
00:00:20.760 At the beginning, I needed to gain some quick traction to make the case for automation and demonstrate its value. I used Selenium IDE to get about 100 test cases running and wrapped them into our build process, running three or four times a day to catch regressions, because there were plenty to catch.
00:00:31.560 As a result, people started to see the value of these tests. My QA team grew; I picked up a couple of developers, and we began writing tests in Selenium 1. However, Selenium IDE is completely inappropriate for anything larger than a proof of concept.
00:00:39.680 Eventually, we expanded to about 1,500 tests scattered over roughly 150 fixtures. By this time, we were having to deal with Selenium RC, which required running a separate server process and didn't manage page waits well.
00:00:49.440 So we moved over to Selenium 2, using WebDriver. This was a significant win because we eliminated the entire Selenium RC process—I didn't have to worry about it hanging or leaving orphaned browsers. Around that same time, another developer joined our team, and he helped us write some backing APIs.
00:01:02.440 He was familiar with the platform we were working on and assisted us in quickly creating setup data, prerequisites, and conditions. This meant we didn't have to rely on the browser to perform all these actions, which is horrifically slow. Now, we had a backing API that allowed us to write tests more effectively, clearly, and quickly.
00:01:11.920 After a couple of months of focusing on other areas that needed my attention, I returned to find that they had developed 9,000 tests. This was great because they were all high-value tests, but it created a problem.
00:01:19.239 These tests were taking forever to run: 16 hours, in fact! Instead of running several times a day, they had to wait until the weekend. With a seven-day feedback loop, the tests began losing credibility; no one was paying attention because we weren't getting any quick feedback.
00:01:34.000 Functional tests will never be as quick as unit tests or even integration tests, so you have to accept that. However, a one-week feedback cycle was just a total mess. A couple of different approaches can help solve this issue.
00:01:42.360 We decided to break out smaller validation tests, little pieces that could run much faster. We went through our 800 or 900 fixtures, identified the critical tests that needed to run regularly to prevent significant issues, and pulled those out onto a separate build cycle that took about 30 to 45 minutes, providing much better feedback.
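The talk doesn't say which test runner or tagging mechanism the team used to split out that critical subset; as a rough sketch of the idea in Python with pytest markers (my assumption, not Jim's actual stack), critical tests get a tag so the fast build runs only that subset while the full suite keeps its own schedule:

```python
# conftest.py / pytest.ini would register the custom marker, e.g.:
#   [pytest]
#   markers =
#       smoke: critical-path tests for the fast 30-45 minute build
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By


@pytest.fixture
def browser():
    # One browser per test; quit even if the test fails.
    driver = webdriver.Firefox()
    yield driver
    driver.quit()


@pytest.mark.smoke  # hypothetical marker: this test runs in the fast build cycle
def test_user_can_reply_to_a_post(browser):
    browser.get("https://app.example.test/forums/42")  # placeholder URL
    browser.find_element(By.ID, "reply-button").click()
    browser.find_element(By.ID, "reply-body").send_keys("Looks good to me!")
    browser.find_element(By.ID, "submit-reply").click()
    assert "Looks good to me!" in browser.page_source
```

Under those assumptions, the fast build would run `pytest -m smoke`, and the comprehensive run simply omits the marker filter.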
00:01:55.000 However, we still risked losing comprehensive testing. The real answer was to scale out. Scaling out means looking at tools that allow you to run multiple tests in parallel and stitching everything back together. You should address this early on, as infrastructure plays a significant role.
00:02:12.000 In many organizations, infrastructure is a battle; I needed more servers to run these tests quickly, rather than just once on the weekend. It doesn't require buying fancy servers; you can spin up virtual servers. Look to your toolset to distribute tests across those instances.
00:02:23.000 A number of build server products will help with this. If you're running Selenium, consider using Selenium Grid to distribute the tests to those agents or nodes. People sometimes craft their own solutions based on their environments, but that piece is critical. I made a mistake as a leader by waiting too long to implement this.
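The talk doesn't show code for this, but with Selenium Grid a test points at the hub roughly like the Python sketch below (the hub address and application URL are placeholders); the hub then forwards the session to whichever registered node has a free browser.

```python
from selenium import webdriver

# Ask the Grid hub for a browser session instead of launching one locally;
# the hub hands the session to any node that can satisfy the request.
options = webdriver.FirefoxOptions()
driver = webdriver.Remote(
    command_executor="http://selenium-hub.internal:4444",  # placeholder hub address
    options=options,
)
try:
    driver.get("https://app.example.test/")  # placeholder application URL
    print(driver.title)
finally:
    driver.quit()
```

Running many such sessions in parallel, one per build agent or test process, is what turns a long serial run into something that finishes in a reasonable time.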
00:02:39.960 I encourage you to look into scaling early when starting your testing efforts. Get that nailed down early so you won't have to worry about it later when your tests start taking longer. As you're adding more nodes and agents, address this early on.
00:02:46.900 Another area where I found success was in treating test code like production code because it is, indeed, production code. This applies especially when your team is made up of testers with some coding skills rather than actual developers.
00:03:00.640 In those contexts, people often overlook vital concepts like refactoring or keeping tests DRY (Don't Repeat Yourself). However, it's crucial to constantly refactor your test code as aggressively as you would your production code. Remove tests that no longer have value and maintain clean and efficient code.
00:03:18.280 Locator lookups, the IDs and elements you interact with, should be done in only one place. If you don't, the same problems that plague your production code will also affect your tests. Tests become very brittle, and UI tests are inherently fragile, so minimizing this brittleness is essential.
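The transcript doesn't name a pattern for this, but the usual way to keep locators in exactly one place is a page object. Here is a minimal Python sketch with invented element IDs and page names:

```python
from selenium.webdriver.common.by import By


class ForumPostPage:
    """The single home for this page's locators and interactions.

    When the UI changes, only these locator tuples need updating;
    the tests that call reply() stay untouched.
    """

    REPLY_BUTTON = (By.ID, "reply-button")  # hypothetical element IDs
    REPLY_BODY = (By.ID, "reply-body")
    SUBMIT_REPLY = (By.ID, "submit-reply")

    def __init__(self, driver, post_url):
        self.driver = driver
        self.post_url = post_url

    def open(self):
        self.driver.get(self.post_url)
        return self

    def reply(self, text):
        self.driver.find_element(*self.REPLY_BUTTON).click()
        self.driver.find_element(*self.REPLY_BODY).send_keys(text)
        self.driver.find_element(*self.SUBMIT_REPLY).click()
```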
00:03:38.259 Moreover, backing APIs can help streamline testing. One common test case we faced involved user B replying to a forum post created by user A. The test required creating several users and a forum, followed by the forum post itself. Going through the UI to accomplish all of those setup steps was time-consuming and added extraneous time to our tests.
00:03:52.120 Instead, we pushed all these responsibilities off to factories and baseline datasets or created a backing API to kick off these processes through internal systems. By adopting this method, we were able to reduce execution time and enhance the maintainability of our tests.
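The internal systems Jim's team called aren't described in detail, so this is only a sketch of the shape of the idea, assuming a hypothetical HTTP endpoint for seeding data: create the users and the post through the API, and leave the browser only the single step actually under test.

```python
import requests  # assumption: the backing API is reachable over HTTP
from selenium.webdriver.common.by import By

API = "https://app.example.test/internal-api"  # hypothetical endpoint


def create_user(name):
    # Seconds of API work instead of minutes of clicking through signup screens.
    resp = requests.post(f"{API}/users", json={"name": name}, timeout=10)
    resp.raise_for_status()
    return resp.json()


def create_forum_post(author_id, title, body):
    resp = requests.post(
        f"{API}/forum-posts",
        json={"author_id": author_id, "title": title, "body": body},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()


def test_user_b_can_reply(browser):  # 'browser' is a WebDriver fixture, as in the earlier sketch
    user_a = create_user("alice")
    user_b = create_user("bob")  # the replying user; browser auth is handled separately
    post = create_forum_post(user_a["id"], "Hello", "First post")

    # Only the behaviour under test touches the browser.
    browser.get(post["url"])
    browser.find_element(By.ID, "reply-body").send_keys("Replying as Bob")
    browser.find_element(By.ID, "submit-reply").click()
    assert "Replying as Bob" in browser.page_source
```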
00:04:05.360 Focusing on value is essential. Lean thinking stresses the importance of value in the systems we build, and that should apply to the testing phase as well. In our application, CRUD operations were paramount because of a complex security system with various roles.
00:04:20.239 Thus, validating intricate layouts or doing complex messaging tests just didn't make sense for us. By concentrating on valuable tests, we successfully reduced execution time.
00:04:35.039 Look at how your tests are running. You need to keep them granular. In my forum reply example, I set up all prerequisites before spinning up the browser for the actual test.
00:04:50.880 However, if I had to log in as that user first or navigate to the specific forum post to reply to, it added unnecessary time. With around 800 test fixtures needing browser interactions, this setup led to thousands of wasted seconds.
00:05:07.720 To address this, we decided to leverage cookies. By generating a cookie for an authenticated user and passing it to the Selenium browser, we bypassed the login process entirely. Login still needed to be tested, but it was covered in more granular tests.
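In Python, Selenium's cookie API makes that trick look roughly like this (the cookie name and where the session token comes from are assumptions; the talk doesn't spell those out):

```python
from selenium import webdriver


def login_via_cookie(driver, base_url, session_token):
    """Skip the login screens by injecting an already-authenticated session cookie.

    Selenium only lets you set cookies for the domain currently loaded,
    so load any page on the site first, add the cookie, then reload.
    """
    driver.get(base_url)
    driver.add_cookie({"name": "session_id", "value": session_token})  # hypothetical cookie name
    driver.get(base_url)  # the site now sees an authenticated user


# Usage sketch: the token would be minted out of band, e.g. by the backing API.
# driver = webdriver.Firefox()
# login_via_cookie(driver, "https://app.example.test", token_from_backing_api)
```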
00:05:24.080 Skipping login allowed me to focus on whether I could find the post, click reply, input text, and submit. It's crucial to be very cautious about what you test in functional testing scenarios; any unrelated actions can muddy the specific focus of your tests.
00:05:41.560 Culture is essential in fostering effective automation. There are entire conferences and books dedicated to the topic of culture. Without a culture that values testing and automation, you face significant challenges.
00:06:09.240 When I joined a company that undervalued these concepts, I focused on forming alliances with like-minded individuals. By promoting small, winnable automation successes, I began to gradually shift perspectives.
00:06:26.640 For example, I utilized Selenium IDE at the start of the project to set achievable goals. After a couple of weeks, I had tests running regularly, catching regressions, and my boss began to recognize the value, leading to further support.
00:06:37.080 The next step is examining testability. Many people have experienced working with legacy UIs, which often aren't friendly for testing. In our case, we dealt with table-driven layouts that made testing exceptionally difficult.
00:06:55.200 The platform had been around for about six years, using tables within tables, often leading to a situation where the closest ID was multiple elements away. This led to reliance on XPath, which is often a source of frustration for testers.
00:07:10.719 To address this, it's beneficial to collaborate closely with UI developers. I frequently engaged with specific UI developers to emphasize how challenging their UI was to test and requested improvements, like adding ID values to specific elements.
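As a small illustration of why those requested IDs matter (the selectors below are invented, not taken from the actual application), compare a locator forced to walk the nested tables against one that can hit an ID directly:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("https://app.example.test/forums/42")  # placeholder URL

# Without a nearby ID: a deep XPath tied to the nested-table layout.
# Any change to the surrounding markup silently breaks it.
reply_link = driver.find_element(
    By.XPATH,
    "//table[@id='content']//tr[3]/td[2]//table//tr[1]//a[text()='Reply']",
)

# After the UI developers add an id to the element itself: short and stable.
reply_link = driver.find_element(By.ID, "reply-link")

driver.quit()
```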
00:07:29.920 Despite facing obstacles, I attempted to involve UI developers in writing tests alongside me. While I didn't completely resolve the issue, it was a step in the right direction, emphasizing the importance of communication and collaboration.
00:07:42.480 In conclusion, long-running tests pose challenges that require careful management. Examine how to run them more effectively, break them into subsets of tests for quicker feedback cycles, and prioritize building your infrastructure early in the process.
00:08:01.119 Virtualization can save resources while allowing for parallel execution of tests. Treat your test code with the same respect as production code; continuously refactor, keep testing principles in mind, and always seek value.
00:08:18.480 Keep tests focused, eliminate unnecessary actions, and address the essential communication hurdles between developers and QA teams early on. To wrap things up, remember that it's not the technology; it’s the people.
00:08:37.200 Engage with your colleagues, build alliances within your organization, and maintain a passion for your work. My name is Jim Holmes, and you can reach me on Twitter @thejimholmes or at my blog frazzleddad.com.
00:08:52.760 I have a couple of minutes left and would be happy to answer any questions. No questions for the .NET guy? Thank you very much!