JavaScript speed testing tutorial with Woosh

Friend and colleague Jake Archibald has been developing Woosh, a JavaScript speed testing framework. Essentially, it's been developed for Glow because we want to make sure that Glow 2 kicks Glow 1's ass (and anyone else's who fancies a piece), but he's open-sourced the work to let everyone benefit from it.

I thought I'd run you through how to set up some basic tests and start benchmarking your own code with Woosh. Bear with me, as it's still quite new to us too.


Firstly, go and grab the latest copy of Woosh from the GitHub repo and pop it somewhere to work with it. You're just running scripts, so there's nothing to install or configure. Bear in mind that at the time of writing, Woosh isn't at its first version yet - so, not that I'm doubting Jake's work, you may find the odd bug, and if you do, I'm sure logging it in the issues tracker would be marvellous.

If you're a git user, feel free to include Woosh as a submodule of your own project.
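If you do go the submodule route, it's a couple of commands. The repo URL and target path below are illustrative - point them at wherever you've found Woosh and wherever you want it to live:

```shell
# Add Woosh as a submodule under lib/ (URL and path are illustrative)
git submodule add git://github.com/jakearchibald/woosh.git lib/woosh
# Pull down the submodule's contents
git submodule update --init
```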

Woosh is primarily designed for comparing libraries, but there's no reason why you can't use it to take a benchmark of your existing scripts and then work up optimised versions to compare. If your code can be unit tested well, it can be speed tested just as easily.

First, you need to let Woosh know about the scripts you want to test. You can add references to each of your scripts using the woosh.libs property. Just make sure each script has a unique name so you can reference it later (have a sneaky look in woosh.js to see which libraries already exist and the formats used; in fact, if you're taking your own copy of Woosh, you can edit woosh.js and add your scripts straight into that file, skipping the step of adding them in the test runner page).

Below is how your test runner HTML page should look (it's also available in the examples directory of the Woosh repo). Notice the reference to your script in the woosh include section, then beneath it the links to your individual test files. To keep things manageable, it's probably best to have one test file for each script you're comparing. Remember that Woosh looks for your scripts relative to where you've got woosh.js.

<html lang="en">
<head>
	<meta http-equiv="content-type" content="text/html; charset=utf-8">
	<title>My Tests</title>

	<!-- include woosh -->
	<script src="/where/youve/got/it/lib/woosh/woosh.js" type="text/javascript"></script>
	<script type="text/javascript">
		woosh.libs['myTestScript1'] = ['/path/to/scripts/myTestScript.js'];
	</script>

	<!-- Add any CSS you need for the test, but restrict styles to #htmlForTest -->

	<!-- Add your tests. The first will be treated as the master -->
	<script src="MyTestScript1-Tests.js" type="text/javascript"></script>
</head>
<body>
	<div id="wooshOutput"></div>
	<div id="htmlForTest">
		<!-- Put elements you want to use in your tests here. The page will be
		     refreshed for each set of tests, so don't worry about one framework
		     messing with another. -->
	</div>
</body>
</html>

The final item in the example above would be your initial test script to be benchmarked (MyTestScript1-Tests.js). This is just a JavaScript file which calls woosh.addTests (shown further down).

Now you've got a choice: you can either make minor changes and incrementally watch the improvements, possibly using the save feature, or you can create a copy of your script. I'd recommend the latter: create a copy of your script, add a reference to it with woosh.libs again, and create a file to hold the actual tests for it.
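Registering the copy is just another entry in woosh.libs - assuming (names and paths here are illustrative) your optimised copy is saved as myTestScript2.js:

```javascript
// Register the optimised copy under its own unique name
woosh.libs['myTestScript2'] = ['/path/to/scripts/myTestScript2.js'];
```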

You can add and compare as many scripts as you like, so long as their methods are directly comparable.

Creating tests

Adding tests is easy and in a way, they become an extension of your unit tests, confirming that the return values or behaviours match across the board.

Test files look like this. You can either put all your tests in one file, with a block for each script, or put each block in its own file. So, below are the contents of MyTestScript1-Tests.js; you'll need a second file for MyTestScript2-Tests.js, and so on.

woosh.addTests('myTestScript1', {
	'Test name identifier 1': new woosh.Test(1000, function() {
		return myFunc();
	}),
	'Test name identifier 2': new woosh.Test(1000, function() {
		return myOtherFunc();
	})
});
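Your second file, MyTestScript2-Tests.js, follows the same pattern. Here's a sketch, assuming the optimised functions are called myFuncOptimised and myOtherFuncOptimised (hypothetical names); the important part is that the test names and iteration counts match the first file exactly:

```javascript
woosh.addTests('myTestScript2', {
	// Same names and iteration counts as MyTestScript1-Tests.js,
	// so Woosh pairs the results row by row
	'Test name identifier 1': new woosh.Test(1000, function() {
		return myFuncOptimised();
	}),
	'Test name identifier 2': new woosh.Test(1000, function() {
		return myOtherFuncOptimised();
	})
});
```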

Things that matter about these files:

  1. The name identifier needs to match for each of the tests. Woosh isn't looking for them in order - it's matching on the names to know which should go into each row. i.e. "Test name identifier 1" should be the same in all test files for matching tests for that function.
  2. The first parameter of woosh.addTests should be the name you gave the script in woosh.libs, so Woosh can find your script.
  3. The first parameter of woosh.Test is the number of times a test is to be run. This should be the same for sets of tests for the same thing. If it's not, Woosh will flag up the test as being unfair.

The value for the iteration count is important. It's large because that helps shake out inaccuracies: Woosh runs the test the number of times specified, then divides the total by that number to give the average run time for the function. You may find that some browsers don't cope so well with very large iteration counts (uh.. IE, we're looking at you), so don't go mad with it and assume that running a test a million times will improve your accuracy. On Glow, we tend to run tests between 100 and 10,000 times.
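To illustrate what that averaging means, here's a minimal sketch of the run-N-times-and-divide approach (this is illustrative only, not Woosh's actual internals):

```javascript
// Minimal sketch of run-N-times-and-average timing; illustrative only,
// not Woosh's implementation.
function averageRunTime(fn, iterations) {
	var start = new Date().getTime();
	for (var i = 0; i < iterations; i++) {
		fn();
	}
	// Total elapsed time divided by the iteration count gives the
	// average run time per call, in milliseconds
	return (new Date().getTime() - start) / iterations;
}

// Example: time a simple string-building function over 1000 runs
var avg = averageRunTime(function() {
	var s = '';
	for (var i = 0; i < 100; i++) { s += i; }
	return s;
}, 1000);
```

A single call to a fast function takes far less than a timer tick, which is exactly why the iteration count needs to be large enough for the total to be measurable.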

Saving tests

You can save one previous set of tests by clicking the floppy-disk icon. It's just stored in a cookie, and will be over-written if you choose to save another column of tests, but it's useful if you're just doing some small changes and just want to compare before and after.

The hard work

Now, of course, it's down to your hard work. Writing the speed tests is really the easy bit, made all the more so by the simplicity of Woosh. Try your optimisations in the second script and use Woosh to benchmark the new script against the old one. All you need to do is load up the test runner page and hit start. The results will pop up as they complete and become colour-coded as they're compared. Keep an eye out for tests that error (they'll go slate grey) or test titles that turn yellow (click the titles to expand further test information). Either of these can indicate that the test isn't fair: the iteration counts don't match, the return values aren't the same, or a method has failed altogether. You should aim to have all your tests running without errors or warnings.

Another thing to note is that you'll still need to run all of these tests in all of the browsers you want to optimise for. You'll find massive variance in some cases, and it'll be up to you to decide where to keep the speed. Jake's Full Frontal presentation covers some of the things to look out for, so it's definitely worth a look over (most importantly, make sure you're not running developer tools like Firebug when running your tests, since they'll skew your results quite heavily).

Further reading

If you want to have a look at some real tests, Glow 2 has a fair few now for some of the basic modules. They're all up on github, so have a dig around or feel free to clone the repo and run the tests yourself.

The full API has been documented for Woosh too, although I believe that might be an exclusive, as I can't see a reference to it from the GitHub docs at the moment. I recommend taking a look through those docs to see about running async tests and preparing your code with setup functionality using $preTest, as well as a few other features you might find useful.
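As a taste of the setup hook, here's a hypothetical sketch of $preTest in a test file. The hook's exact signature isn't shown here and is an assumption on my part - check the API docs for the real shape before relying on it:

```javascript
woosh.addTests('myTestScript1', {
	// Hypothetical use of $preTest to reset the test HTML before each test.
	// The exact signature is documented in the Woosh API docs; this shape
	// is an assumption for illustration.
	'$preTest': function() {
		document.getElementById('htmlForTest').innerHTML = '<ul id="testList"></ul>';
	},
	'Test name identifier 1': new woosh.Test(1000, function() {
		return myFunc();
	})
});
```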

On another testing topic, Mat Hampson published an article on A-B testing on the new BBC Web Developer blog.