Quality Assurance (QA)

Summarized using AI

99 Problems of Slow Tests

Vladimir Dementyev • July 20, 2018 • Earth

In his talk titled "99 Problems of Slow Tests," Vladimir Dementyev discusses the critical issue of slow test suites that can hinder development efficiency. Recorded at the Paris.rb Conf 2018, he shares insights drawn from his experiences and introduces several approaches to optimize test performance.

Key points discussed during the talk include:

- Introduction and Context: Vladimir introduces himself, mentioning his experience with long CI build times during his work on Fontan.com, where the test suite ran for half an hour with less than 2200 tests per minute.

- Optimizing Test Suites: Emphasizing the importance of improving test performance, he presents his solution: TestProf, a toolset designed for analyzing and optimizing test suite performance.

- Common Problems and Solutions:

- Misconfigured Environments: Poor configurations in test environments can slow down tests. Tools like RubyProf and StackProf can help diagnose configuration issues.

- Database Cleaner Usage: The necessity of a database cleaner can be diminished in newer Rails versions. By analyzing test performance, he identified instances where removing the database cleaner sped up tests by 30%.

- Handling Inline Jobs: Disabling inline job processing for irrelevant tests can reduce runtime significantly.

- Monitoring Method Calls: Be aware of how libraries like PaperTrail can add overhead. Disabling unnecessary versioning in tests improved speed.

- Factory Setup Optimization: Tools like FactoryDoctor help developers spot unnecessary database writes in test data creation, while factory profiling highlights inefficiencies caused by the FactoryCascade phenomenon.

- Reducing Repetitive Setup Work: Streamlining setup processes through shared setups and utilizing tools like let to minimize repeated work can enhance performance.

- Utilizing Fixtures Effectively: Shared fixture data can reduce test durations significantly by minimizing repeated data generation.

- Continuous Performance Monitoring: Ongoing assessment of test performance aids in averting slowdowns during development, particularly in CI/CD workflows.

Conclusions: Vladimir's talk provides numerous actionable insights on how to identify and fix common pitfalls that lead to slow test performance. Key takeaways include the necessity of careful configuration of testing environments, the strategic use of tools for debugging and streamlining factory setups, and a mindful approach to assertions and data handling in tests. The overarching message emphasizes the importance of maintaining solid performance metrics to enhance development workflows.

00:00:11.490 I want to introduce you to Vladimir
00:00:14.550 Dementyev. Thank you. Check, check, check.
00:00:24.119 Do you hear me? Cool. Hey everyone, welcome
00:00:28.439 to my talk called '99 Problems of Slow Tests'. But first of all, I'd like to thank
00:00:31.110 the organizers for such a great event and for having me here. So, let me
00:00:36.780 introduce myself. Yeah, it works. So my name is Vladimir. People not from Russia often
00:00:42.180 call me Vlad except for Bozidar, who calls me Lola. I really appreciate it. So, you can find me on Twitter,
00:00:50.070 GitHub, and I have a blog that is probably much more interesting than my Twitter. I have a lot of stuff to share with our
00:00:58.440 community, so please check this out.
00:01:04.170 I actually had a really long trip because I visited a World Cup match in Samara, supporting Team Russia. Unfortunately, they lost.
00:01:13.980 So, I'm here working for a company called Evil Martians. We do product development, which is mostly boring stuff, and also non-boring stuff: we work on a lot of open source projects, and some of our most popular ones are publicly available.
00:01:27.890 We write about everything we do in our blog, so please check it out at least to see our beautiful illustrations.
00:01:31.649 Let's start with the talk. Hmm, my clicker, that's working. Yeah, it works.
00:01:36.119 So, '99 Problems of Slow Tests' is the title, and we have only 20 minutes so that's about five problems per minute.
00:01:43.740 But actually, I'm not going to leave any time for a Q&A session, so we have 25 minutes; that's much better: four problems per minute.
00:01:52.920 So I'll try not to be too fast. Let's start with a single question. Do you think your tests are slow?
00:02:01.619 Yeah, I see some nodding in the audience, and I believe you already know why we should care about test performance.
00:02:11.070 We talked about it yesterday in the opening keynote, and I don't want to repeat everything right now since we don't have enough time.
00:02:18.000 Let's talk about my personal story. When I started to care about tests, I worked on a project called Fontan.com, and we didn't have that many tests.
00:02:27.890 However, the test suite runtime was around half an hour, and the speed was less than 2200 tests per minute, which is unacceptable.
00:02:35.500 It led us to the situation where we spent a lot of time waiting for our CI builds to finish or while running local tests, which severely hurt our feedback loop.
00:02:44.420 So, I spent some time optimizing things, and we ended up in a much better situation. It was still not the best, but we reached a much higher tests-per-minute rate, which was okay.
00:02:58.690 As a result of this optimization work, a project called TestProf appeared. What is TestProf? In a couple of words, it's a toolbox that helps you analyze your test suite.
00:03:06.720 It's a collection of tools that help you analyze your test suite's performance and fix the issues you find.
00:03:16.780 There are two groups of tools: profilers and custom recipes, and I'm going to show some of them as we go through the problems.
00:03:24.670 So let's start with the first problem, a very common one that I call a misconfigured environment.
00:03:29.899 We have different environments in most applications, especially Rails applications: test environments, production environments, and development environments.
00:03:42.789 They all differ in some settings, and the way you configure your test environment could affect your tests' runtime.
00:03:56.790 I'm going to show you some examples, but let's first talk about how to diagnose such problems. The best way to do that is to use general Ruby profilers, like RubyProf, StackProf, and the new one, rbspy.
00:04:07.240 They are not so useful for analyzing the performance of tests since they are not test-specific; they're general.
00:04:12.710 But we can still gain interesting insights from them. For example, the first one is from an engineering blog that shows us we spend a lot of time on log write operations in our tests.
00:04:25.290 The question is, why are we writing to the log in our tests at all? There is no reason for that, so we can just turn it off to make our tests faster.
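As a concrete illustration of that kind of fix, here is a minimal sketch of quieting the test log in a Rails app (the exact setting depends on your Rails version and on whether anything in the suite reads the log):

```ruby
# config/environments/test.rb -- a minimal sketch
Rails.application.configure do
  # Only log warnings and above instead of every request and query.
  config.log_level = :warn

  # Or drop test logging entirely (assumes nothing in the suite reads the log):
  # config.logger = ActiveSupport::Logger.new(nil)
end
```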
00:04:36.180 Another example is a little bit more complex. Here we see bcrypt showing up in our profile, with about 20% of the time spent inside its methods.
00:04:55.900 It turns out that we had encryption settings in our authentication library that were as strong in the testing environment as in production.
00:05:10.100 This doesn't make sense, so we relaxed the setting, and our tests became more than three times faster just because of this change.
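The talk doesn't spell out the exact library here, so as a hypothetical sketch assuming Devise with bcrypt, relaxing the hashing cost for tests looks roughly like this:

```ruby
# config/initializers/devise.rb -- hypothetical, assuming Devise + bcrypt
Devise.setup do |config|
  # Expensive key stretching only matters for real user data;
  # one round is plenty for throwaway test records.
  config.stretches = Rails.env.test? ? 1 : 12
end

# Plain bcrypt usage can be relaxed the same way, e.g. in spec/rails_helper.rb:
# BCrypt::Engine.cost = BCrypt::Engine::MIN_COST
```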
00:05:17.620 These are just a few examples of how misconfigured environments can make our tests slow, so it's vital to care about your test environment.
00:05:25.990 Next problem: DatabaseCleaner. We needed it to handle multi-threaded tests, such as browser tests, especially with Rails 4 and earlier, because Active Record used a separate connection per thread, so transactional tests couldn't see the data.
00:05:35.680 That is no longer the case since Rails 5.1, thanks to changes in Active Record: the test thread and the application server can share a single connection, so if you're using Rails 5.1 or later, this is no longer an issue.
00:05:46.709 Basically, the feature exists, but many of us cannot use it yet because we're on older Rails versions. For those cases, TestProf provides an alternative patch called ActiveRecordSharedConnection.
00:06:02.430 Let’s measure the impact of having DatabaseCleaner. We can use the EventProf profiler from TestProf to examine our runtime.
00:06:11.599 With the database cleaner enabled for a number of tests, we see that we spend 30% of the time on the database cleaning.
00:06:20.710 However, with the shared connection, we spend no time cleaning, which decreased my test suite runtime by 30%.
00:06:30.950 If you're using database cleaner, you should examine if you need it, as in most cases you probably don't, and removing it will make your tests faster.
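For a Rails 5.1+ app with RSpec, a minimal sketch of relying on transactional tests instead of DatabaseCleaner could look like this (assuming rspec-rails defaults):

```ruby
# spec/rails_helper.rb -- a sketch, assuming Rails 5.1+ and rspec-rails
RSpec.configure do |config|
  # Wrap each example in a transaction that is rolled back afterwards.
  # Since Rails 5.1 the test thread and the app server can share the
  # connection, so this also covers system/feature specs.
  config.use_transactional_fixtures = true
end

# DatabaseCleaner and its before/after hooks can then usually be removed.
```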
00:06:44.929 Next up, let's talk about an important testing feature that is often overlooked: handling inline jobs.
00:06:56.350 Many people default to using Sidekiq in inline mode, where every job is executed immediately, but that's not always ideal.
00:07:03.310 In fact, most of your tests do not rely on jobs, which leads to unnecessary overhead and slower test execution.
00:07:15.750 For example, in our test suite, using Sidekiq testing in inline mode resulted in about a quarter of the execution time being spent executing jobs.
00:07:28.560 We decided to disable this mode for tests that do not rely on jobs, and the result was significantly faster tests.
00:07:38.810 That's a strong example of how tweaking default settings can positively impact performance.
00:07:47.130 The key takeaway here is that inline jobs should not be the default setting for tests unless required, so evaluate your needs.
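A sketch of that setup with Sidekiq: keep the fake (queueing) mode as the default and opt in to inline execution only for examples that actually exercise jobs (the :inline_jobs tag below is a made-up name):

```ruby
# spec/rails_helper.rb -- a sketch assuming Sidekiq
require "sidekiq/testing"

RSpec.configure do |config|
  config.before(:each) do
    # Jobs are pushed to an in-memory array instead of being executed.
    Sidekiq::Testing.fake!
  end

  # Opt in to real execution only where the spec needs it.
  config.around(:each, :inline_jobs) do |example|
    Sidekiq::Testing.inline! { example.run }
  end
end
```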
00:07:58.960 Moving forward to a new feature in TestProf: monitoring method calls.
00:08:05.800 This can be beneficial because sometimes adding new dependencies can slow down your test suite, especially if they don't provide testing utilities.
00:08:14.490 So, if you add a new library, please pay attention to how it affects the performance of your tests. One common example is 'PaperTrail'.
00:08:25.760 You might find it causes delays due to the creation of versioning records in your ActiveRecord models.
00:08:38.119 Fortunately, PaperTrail offers a way to disable versioning by default, allowing you to enable this feature only when needed.
00:08:53.460 Make sure to check this out when using PaperTrail to ensure you are not wasting time on unnecessary versioning.
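A sketch of that opt-in approach, based on the pattern PaperTrail's own test-setup docs describe (the versioning: true tag is conventional, not required):

```ruby
# spec/rails_helper.rb -- a sketch for PaperTrail
RSpec.configure do |config|
  config.before(:each) do
    # Skip writing version records unless the example asks for them.
    PaperTrail.enabled = false
  end

  config.before(:each, versioning: true) do
    PaperTrail.enabled = true
  end
end
```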
00:09:04.170 Next, I want to touch on a common topic: the old adage of 'do not use create, use build'. It's a well-known topic among developers.
00:09:19.130 I don't want to cover it in depth, but TestProf includes a profiler called 'FactoryDoctor'.
00:09:28.260 It helps detect instances where you can replace create calls with build calls, which can significantly improve performance.
00:09:40.960 While it's not foolproof and can produce some false positives or negatives, it often works quite well.
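The underlying refactoring FactoryDoctor points you at is usually as simple as this (a generic FactoryBot sketch, assuming a user factory):

```ruby
# Before: hits the database on every example, even if nothing is queried.
let(:user) { create(:user) }

# After: no INSERT at all; ids and timestamps are faked, which is enough
# when the example never reads the record back from the database.
let(:user) { build_stubbed(:user) }
```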
00:09:54.250 As an additional tool, there's a patch called 'FactoryLifters' that simplifies replacing create calls without needing manual changes.
00:10:07.590 Okay, back to EventProf. Another critical event we should analyze is the 'factory.create' event, which measures the time spent generating test data.
00:10:24.150 I've often seen that data generation in factories can take up to 60-80% of the total test runtime.
00:10:39.390 This isn’t because factories are bad; it’s typically due to poorly engineered factory designs.
00:10:45.590 One common problem is known as 'FactoryCascade', where creating a single object inadvertently creates multiple dependent objects.
00:10:55.070 This can be quite inefficient, leading to a slower test suite execution time.
00:11:04.970 To better illustrate this, we can visualize the 'FactoryStack', which is a representation of calls made during one top-level object creation.
00:11:16.830 Analyzing these stacks can help identify which factories create the largest cascades, leading to inefficiencies in the tests.
00:11:29.830 I had a real-life case where our test suite slowed down significantly after a small commit due to factory cascades.
00:11:42.130 Using EventProf, I found that just two specs generated about 90% of the excessive time spent in factories.
00:11:53.850 From there, I analyzed the factory stack and managed to streamline it. After making these changes, the performance improved significantly.
00:12:05.690 It's vital to ensure that your test factories are efficient and to avoid unnecessary cascading during the object creation process.
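A simplified illustration of a cascade and one way to break it, using generic FactoryBot code (the comment/post/user factories are hypothetical):

```ruby
# Cascading setup: every comment drags in a fresh post, user, account, ...
FactoryBot.define do
  factory :comment do
    association :post   # -> creates post -> creates user -> creates account ...
    body { "Nice!" }
  end
end

# In the spec, create the expensive parent once and reuse it explicitly,
# so a list of comments shares a single post instead of N posts:
let(:post_record) { create(:post) }
let(:comments)    { create_list(:comment, 5, post: post_record) }
```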
00:12:17.350 Let’s also discuss a design principle about minimizing repeated setup work in your test code.
00:12:30.590 If you find yourself with lengthy before hooks because of setup work across multiple test examples, consider refactoring to reduce duplication.
00:12:42.260 One suggestion is to rely on shared 'before' hooks and 'let' definitions, so the same setup can be reused across examples instead of being repeated in each one.
00:12:58.160 Reducing setup times can have a significant impact on overall test performance. Consider how many tests can share setup data.
00:13:10.700 Try to employ shared setups when possible to avoid repetitive creation of the same data.
00:13:22.920 Staying with the theme of code efficiency, utilize helper libraries where appropriate to simplify test setups.
00:13:33.650 Keep an eye on how data is generated repeatedly throughout your test suite. This is crucial for maintaining fast test suites.
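TestProf ships RSpec helpers aimed at exactly this; a sketch assuming its documented before_all / let_it_be recipes, where data is created once per example group inside a rolled-back transaction:

```ruby
# spec/rails_helper.rb
require "test_prof/recipes/rspec/before_all"
require "test_prof/recipes/rspec/let_it_be"

# spec/models/post_spec.rb -- hypothetical model and factories
RSpec.describe Post do
  # Created once for the whole group, not once per example.
  let_it_be(:user)  { create(:user) }
  let_it_be(:posts) { create_list(:post, 3, user: user) }

  it "knows its author" do
    expect(posts.first.user).to eq(user)
  end

  it "counts the user's posts" do
    expect(user.posts.count).to eq(3)
  end
end
```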
00:13:46.420 Next, let’s discuss why it’s important to have a mindful approach to assertions in your tests.
00:13:58.160 Splitting related assertions across many examples can be slow, because the same data has to be re-created for every example.
00:14:06.870 One way to respond is to use aggregate failures, so a single example can report several failed expectations while keeping execution streamlined.
00:14:19.170 If multiple assertions can check distinct elements of the same data set, consolidate their usage to avoid redundant computations.
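In RSpec this is the :aggregate_failures metadata (or the aggregate_failures block helper); a small sketch with a hypothetical post factory:

```ruby
# One example, one data setup, several expectations reported together.
it "creates a post with sensible defaults", :aggregate_failures do
  post_record = create(:post, title: "Hello")

  expect(post_record).to be_persisted
  expect(post_record.title).to eq("Hello")
  expect(post_record.comments).to be_empty
end
```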
00:14:28.990 So, another way to enhance performance is to consider shared fixture data instead of generating new data for each test.
00:14:46.480 If you correctly utilize fixtures, it can drastically cut down your test suite run times.
00:14:54.910 Something to ponder is the balance between factories and fixtures. Both have their advantages, so utilize them based on your needs.
00:15:08.220 As we proceed, we should aim to eliminate repetitive data generation, especially inside loops, and refactor our approach accordingly.
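A minimal sketch of leaning on Rails fixtures from RSpec (the users fixture and its plan attribute are hypothetical):

```ruby
# spec/rails_helper.rb -- fixtures are loaded once per test run
RSpec.configure do |config|
  config.fixture_path = "#{Rails.root}/spec/fixtures"
  config.global_fixtures = :all  # or list only the tables most specs share
end

# spec/models/user_spec.rb -- referencing a fixture record is just a lookup,
# no factory call and no extra INSERT per example.
RSpec.describe User do
  it "starts on the free plan" do
    expect(users(:basic).plan).to eq("free")
  end
end
```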
00:15:20.690 We are approaching the ninth problem. A common pitfall is that many developers simply neglect the performance of their test code.
00:15:30.190 Ensure performance testing is integral to your process throughout all stages of development, especially in CI/CD.
00:15:39.290 As a recap of the problems addressed, remember the importance of monitoring merges and how they can affect CI/CD workflows.
00:15:51.670 In summary, it's vital to maintain a solid grasp of your test suite's structure and performance metrics as part of your development process.
00:16:03.370 I believe we've covered a lot of ground today.
00:16:11.970 If you have further questions or require assistance, please feel free to reach out after the talk or during breaks.
00:16:24.310 Thank you for your attention and I hope these insights into preventing slow tests help you expedite your workflows.
00:16:36.370 Thank you very much, and if you want some of those stickers, feel free to find me.
00:16:51.780 Thank you.
00:16:54.370 Fred Amiel here, any questions?
00:16:57.560 Right, thank you.
00:17:07.790 We usually end up with slow tests, and we notice only when it's too late. Do you have any pointers to prevent that?
00:17:15.030 Thank you for the question! I'm exploring the idea of building CI helpers for TestProf to monitor your tests during builds and collect stats to alert you to any issues.
00:17:30.150 It’s quite an open-source project idea I have in mind, although time constraints are a factor. However, I'm hopeful to bring it to fruition.
00:17:45.330 Just ensure to monitor the performance of your tests continuously to prevent heavy slowdowns during integration.
00:17:56.300 Any other questions?
00:18:03.730 Alright, you can catch me at the after-party if you have questions.
00:18:05.360 Thank you!