Towards better test runners/frameworks

Test runners/frameworks have caused me a lot of grief over the years. It's part of the reason why I really appreciate how the standard #Rust test runner works. As usual with Rust, the core developers and the community at the very least avoided the most common misfeatures and made the right thing the easiest and most natural thing to do.

That's why I recently (probably too abrasively) bashed an announcement of some new Rust testing library.

I've taken some time and collected a list of feature requests and misfeature rants that I just want to share.

No DSLs

Quoting my reddit comment:

In almost every codebase I've had to work with professionally, someone had introduced yet another lame testing framework with its own DSL for something as basic as an assertion. With each language and each test framework, I now have to guess/look up the crappy DSL of the day. should.be.equal? should_be_equal? should().be_equal_to()?

My take on DSLs: a DSL has to bring some very substantial benefits to outweigh the mental burden of learning yet another one.

And all these assertion DSLs are usually really clunky, trying to mimic natural language. There's a reason why we generally don't make function calls look like url.download.all.resources(). So why do people keep reintroducing this in testing/assertion frameworks?

Just use the most standard and natural syntax/library for the given language: in the case of Rust, the assert_eq! macro etc.
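To make this concrete, here is a minimal sketch of what that looks like in plain Rust; parse_port is a made-up example function, and the only "testing API" involved is the standard assert_eq! macro, which already prints both sides on failure:

```rust
// A made-up function under test, for illustration only.
fn parse_port(s: &str) -> Option<u16> {
    s.trim().parse().ok()
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn parses_valid_port() {
        // No DSL: the standard macro is the most natural thing to reach for.
        assert_eq!(parse_port(" 8080 "), Some(8080));
    }

    #[test]
    fn rejects_garbage() {
        assert_eq!(parse_port("not-a-port"), None);
    }
}

fn main() {
    // `cargo test` runs the tests above; this just demonstrates the function.
    assert_eq!(parse_port("8080"), Some(8080));
}
```

No should().be_equal_to() to memorize, and the failure output (left vs right value) comes for free.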

Test tagging as opposed to naming

Naming tests is a difficult and not particularly useful task. A much more versatile approach is tagging your tests. It should also be possible to run a subset of all tests using filtering: --filter "user AND (account OR notification)"

I actually implemented this in a small test runner that I use for some things internally, and it works beautifully so far. It's probably the only feature that I'd like added (maybe with a library) to Rust's standard test runner.
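To give a rough idea of how little machinery such filtering needs (this is not the internal runner mentioned above, just a sketch I'm making up here), evaluating an expression like "user AND (account OR notification)" against a test's tag set is a tiny recursive-descent parser:

```rust
use std::collections::HashSet;

// Sketch: evaluate a tag-filter expression against a test's tags.
// Grammar: expr := term (OR term)* ; term := factor (AND factor)* ;
//          factor := TAG | '(' expr ')'
fn matches(filter: &str, tags: &HashSet<&str>) -> bool {
    let tokens: Vec<String> = filter
        .replace('(', " ( ")
        .replace(')', " ) ")
        .split_whitespace()
        .map(str::to_string)
        .collect();
    let (value, rest) = expr(&tokens, tags);
    assert!(rest.is_empty(), "trailing tokens in filter");
    value
}

fn expr<'a>(tokens: &'a [String], tags: &HashSet<&str>) -> (bool, &'a [String]) {
    let (mut value, mut rest) = term(tokens, tags);
    while rest.first().map(String::as_str) == Some("OR") {
        let (rhs, r) = term(&rest[1..], tags);
        value = value || rhs;
        rest = r;
    }
    (value, rest)
}

fn term<'a>(tokens: &'a [String], tags: &HashSet<&str>) -> (bool, &'a [String]) {
    let (mut value, mut rest) = factor(tokens, tags);
    while rest.first().map(String::as_str) == Some("AND") {
        let (rhs, r) = factor(&rest[1..], tags);
        value = value && rhs;
        rest = r;
    }
    (value, rest)
}

fn factor<'a>(tokens: &'a [String], tags: &HashSet<&str>) -> (bool, &'a [String]) {
    match tokens.first().map(String::as_str) {
        Some("(") => {
            let (value, rest) = expr(&tokens[1..], tags);
            assert_eq!(rest.first().map(String::as_str), Some(")"));
            (value, &rest[1..])
        }
        Some(tag) => (tags.contains(tag), &tokens[1..]),
        None => panic!("unexpected end of filter"),
    }
}

fn main() {
    let tags: HashSet<&str> = ["user", "notification"].into_iter().collect();
    assert!(matches("user AND (account OR notification)", &tags));
    assert!(!matches("user AND account", &tags));
}
```

A runner would evaluate the filter once per test against that test's tag set and skip the non-matching ones.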

Randomized execution order

Too many times I've seen (sometimes accidentally) interdependent unit tests. Test runners like Mocha, with their sequential and deterministic execution order, make it much too easy to do the wrong thing and not notice, or to get away with it.

By randomizing the execution order, we make such hidden interdependence very hard, almost impossible, to get away with.
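The one thing randomization must keep is reproducibility: print the seed, so a failing order can be re-run. A minimal sketch, assuming nothing beyond the standard library (a tiny xorshift PRNG stands in for a proper RNG crate):

```rust
// Seed-reproducible Fisher-Yates shuffle for a test list.
// Printing the seed lets you replay the exact failing order later.
fn shuffle<T>(items: &mut [T], mut seed: u64) {
    // xorshift64: a toy PRNG, good enough for shuffling test order.
    let mut next = move || {
        seed ^= seed << 13;
        seed ^= seed >> 7;
        seed ^= seed << 17;
        seed
    };
    for i in (1..items.len()).rev() {
        let j = (next() % (i as u64 + 1)) as usize;
        items.swap(i, j);
    }
}

fn main() {
    let mut tests = vec!["test_login", "test_logout", "test_signup", "test_reset"];
    let seed = 0x5DEECE66D; // a real runner would derive this from time, then print it
    println!("shuffling tests with seed {seed}");
    shuffle(&mut tests, seed);
    // Every test is still present exactly once, just in a shuffled order.
    assert_eq!(tests.len(), 4);
    for name in ["test_login", "test_logout", "test_signup", "test_reset"] {
        assert!(tests.contains(&name));
    }
}
```

With the seed logged on every run, "it only fails in this one ordering" stops being an unreproducible mystery.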

No setup functions/hooks

Mocha's before/beforeEach hooks or JUnit-like setUp methods are a bad idea.

Unit tests should be as self-contained and self-describing as possible. If you have some common initialization logic, put it in an init_test() function and call it explicitly at the beginning of the tests that need it. And frankly, if your setup requires so much work that you need utility functions, maybe you should reconsider your APIs, and what and how you're testing.
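A short sketch of what explicit setup looks like in Rust; TestDb and init_test are made-up names for illustration:

```rust
// A made-up fixture type, for illustration only.
struct TestDb {
    users: Vec<String>,
}

// All common setup lives here. Tests that need it call it explicitly,
// so reading a test tells you exactly what state it starts from.
fn init_test() -> TestDb {
    TestDb { users: vec!["alice".to_string()] }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn finds_existing_user() {
        let db = init_test(); // setup is visible, not hidden in a hook
        assert!(db.users.contains(&"alice".to_string()));
    }

    #[test]
    fn pure_logic_needs_no_setup() {
        // No init_test() call: this test visibly depends on nothing.
        assert_eq!(2 + 2, 4);
    }
}

fn main() {
    let db = init_test();
    assert_eq!(db.users.len(), 1);
}
```

The second test is the point: with hooks, it would pay the full setup cost and silently depend on shared state; with an explicit call, the absence of setup is part of the test's documentation.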

before- and setUp-like functionality tends to accumulate cruft over time, leading to over-initialization and unintentional coupling between tests. Relying on developers not to use/abuse them is wishful thinking. IMO: tools drive user behavior. If your tool (testing framework) makes abuse easy, developers will abuse it.