May 02, 2020
This post is a pointer to an article I completed recently: a comprehensive comparison of the AVA, Jest, and Mocha test runners.
Why is it on GitHub, not this blog?
Not only does the repo contain the comparison article, it also includes a Node application I built to generate the speed metrics cited in the article. The process looks like this:
- There is a `make-tests` command that generates a number of identical tests for each test runner. Together, the tests take roughly as long to run as the test suite of an average large codebase.
- A `test-all` command runs the tests and times each runner. The order of the runners is shuffled every time to remove any ordering bias.
- A report is generated at the end that shows how long each test runner took to run the test suite.
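To make the shuffling and timing steps concrete, here is a minimal sketch of how such a harness could work. The runner names and structure are assumptions for illustration, not the repo's actual implementation (which would shell out to each test runner's CLI rather than call a stub function):

```javascript
// Hypothetical benchmark harness sketch. In the real repo, `run` would
// invoke each runner's CLI (e.g. via child_process); stubs are used here.
const runners = [
  { name: 'ava', run: () => {} },
  { name: 'jest', run: () => {} },
  { name: 'mocha', run: () => {} },
];

// Fisher–Yates shuffle, so no runner consistently benefits from
// warm caches by always running last.
function shuffle(arr) {
  const a = [...arr];
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}

// Time each runner in shuffled order; return milliseconds per runner.
function benchmark(list) {
  const results = {};
  for (const runner of shuffle(list)) {
    const start = process.hrtime.bigint();
    runner.run();
    const end = process.hrtime.bigint();
    results[runner.name] = Number(end - start) / 1e6;
  }
  return results;
}

console.log(benchmark(runners));
```

Shuffling before every pass means that, averaged over many passes, each runner spends equal time in each slot, so startup and caching effects wash out of the comparison.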
*We’ll also be exploring a wrapper for Mocha.*
How does one choose the right testing framework for a given use case? What criteria should the decision be based on?
To answer these questions, we’ll explore some general principles regarding testing frameworks and testing in general. Then, after outlining the criteria for evaluating the frameworks, we can explore them in detail…
Want to contribute?
Don’t hesitate to contact me if you have feedback.
Want to add details or make a correction? This article is open-source, and your contributions are 100% welcome 💥