
Stop Sorting Your Tests

When asked what makes a test a unit test, I usually skip over "it tests a small part of the code" and go straight for the "quick feedback" answer. I care more about how helpful a test is, and quick feedback is a big part of what makes it helpful.

However, when I was starting out in unit testing, that wasn’t the case. Let me sit by the fireside and tell you a story.

I remember when I was learning TDD and writing my first tests. I was working on a communication component based on queuing services. The first test I wrote was a stupid one: I sent a message into the queue and asserted that it was received. Like I said, a stupid test, but it passed, and that was the important thing.

The thing is, I was so happy the test was passing that I didn't even notice it took 5 seconds to run. Five seconds for a test doesn't qualify as "quick feedback". But at the time I wasn't even looking at the clock.
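A test like that first one can be sketched in a few lines. The original component was a real .NET queuing service; the `MessageBus` below is a hypothetical Python stand-in, with the blocking `receive` showing exactly where the seconds tend to hide:

```python
import queue
import unittest


class MessageBus:
    """Hypothetical stand-in for a queue-based communication component."""

    def __init__(self):
        self._queue = queue.Queue()

    def send(self, message):
        self._queue.put(message)

    def receive(self, timeout=5.0):
        # Blocks until a message arrives or the timeout expires --
        # exactly the kind of wait that quietly makes a test slow.
        return self._queue.get(timeout=timeout)


class TestMessageBus(unittest.TestCase):
    def test_sent_message_is_received(self):
        bus = MessageBus()
        bus.send("hello")
        self.assertEqual(bus.receive(), "hello")
```

Run it with `python -m unittest`: it passes instantly here, but against a real queuing service, that `receive` call is where the 5 seconds would go.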

As we write our first tests, we usually concentrate on testing the right things, overcoming big setups, and getting familiar with the tools. We don't notice that running the tests takes longer and longer, and by the time we do, it's too late: we've accumulated a large suite of tests, some quick and some slow.

But when we run them, we run them together. That means the slow ones win, and they drag down every complete test run.

How did we get to this point?

The first answer was already mentioned above: we don't look at the run time until we've got a big problem. But there's another, subtler issue that we usually don't discuss.

In the beginning, we usually follow some kind of test organization scheme. We decide on a team convention for where to place files, test suites, and the tests themselves. The most prevalent one is to place test projects, containing test classes and tests, in a structure that mirrors the code under test. Regardless of convention, the tests are grouped together by logic and laid out in the topology of a file system.

Yet neither logic nor file structure takes test run length into account.

Most test frameworks, however, do take file structure into account. This is where we get shortcuts like "run all tests in this assembly" or "run all tests in this test class". Note the hidden assumption here: tests that are located close to each other probably need to run together. This in turn reinforces grouping tests by logic, because it makes sense to run them together when they test the same code (which resides in the mirrored location on the other side).
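The same location-based shortcut exists in Python's built-in `unittest`, which makes for a compact illustration (the class names below are made up; the original context is .NET test frameworks, but the mechanism is the same):

```python
import unittest


class FastTests(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)


class SlowTests(unittest.TestCase):
    def test_queue_round_trip(self):
        self.assertTrue(True)  # imagine a 5-second queue round-trip here


# "Run all tests in this class" -- selection driven purely by where
# the tests live, not by how fast they run. Nothing in this API asks
# about run time.
loader = unittest.TestLoader()
suite = loader.loadTestsFromTestCase(FastTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The loader happily carves the suite by class or by module, but "carve out the slow ones" isn't an option it offers; speed is simply not part of the selection model.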

Tools should help, not restrict

With Typemock Isolator's SmartRunner, we stick to what makes testing effective. First, it runs the impacted tests by default. The SmartRunner doesn't care where the tests are located; it runs just the relevant ones, no matter where they are. Moreover, when it identifies long-running tests, it won't run them automatically as part of the impacted-tests run. If you want to run them, you can do that manually. That means the long tests that encumber the short ones are taken out of the equation.

So, you don't need to sort your tests. Gone are the days where you relocate tests just because your test runner feels like it.

You might say: "Code integrity is important. It makes sense to group the tests by their quickness". But ask yourself this: Are you getting paid to move tests around (not to mention making, and then fixing, copying mistakes along the way)? Does that kind of housekeeping carry real business value?

The answer is no, unless the tools make you work for them.

And that’s just waste.