Unit Testing 101: Are You Testing Your JavaScript?

Christian Johansen | February 23, 2011

 

Of course you are testing your code. Nobody writes any significant amount of code and drops it into production without ever running it. What I want to challenge you on in this article is how you are doing your testing. If you are not already automating as much of your testing as possible, be prepared for a productivity and confidence boost.

A word of warning: I will be talking about unit testing and TDD in this article. If you have already reached the conclusion that this is not for you for any of the following reasons, then please read on anyway, or at least skip ahead to Why should I care? towards the end.

  • “I use a library such as jQuery, which ensures my code works properly”
  • “Testing is an advanced practice for the pros, not for me”
  • “Testing takes too much time, I'd rather just write production code”

Different goals, different tests

Testing can mean so many things, and how you best can test something is entirely dependent on the goals of a particular test. Here are some examples of tests you might run against your application:

  • Usability testing
  • Performance testing
  • Consistency/Regression testing

In this article we will concentrate on consistency and regression testing. In other words, the kind of testing that makes sure your code does what it should, and that it does not contain bugs. In most cases it is impossible to prove the absolute absence of bugs. What we can do is to take measures to effectively reduce the number of defects, and protect ourselves from having old bugs creep back into our code.

How do you find bugs?

Most programmers are regularly faced with the task of identifying and fixing bugs. In the old days, this task was most commonly carried out by sprinkling code with alert calls and refreshing the browser to inspect the values of variables, or to observe where the actual flow of a script diverged from the expected flow.

Nowadays most browsers have a powerful console built in. Those who don't can easily gain one using tools like Firebug Lite. The debugging process is pretty much the same: sprinkle code with console.log calls, refresh the browser to observe the actual behavior and manually compare to the expected behavior.

Debugging: an example

As an example of a debugging session we will look at a jQuery plugin that expects an element to have either a datetime attribute (such as the HTML5 time element), or a custom data-datetime attribute containing a date string and replaces the element's innerHTML with a human-readable difference from the current date (e.g. "3 hours ago").

jQuery.fn.differenceInWords = (function () {
    var units = {
        second: 1000,
        minute: 1000 * 60,
          hour: 1000 * 60 * 60,
           day: 1000 * 60 * 60 * 24,
          week: 1000 * 60 * 60 * 24 * 7,
         month: 1000 * 60 * 60 * 24 * 30
    };

    function format(num, type) {
        return num + " " + type + (num > 1 ? "s" : "");
    }

    return function () {
        this.each(function () {
            var datetime = this.getAttribute("datetime") ||
                             this.getAttribute("data-datetime");
            var diff = new Date(datetime) - new Date();

            if (diff > units.month) {
                this.innerHTML = "more than a month ago";
            } else if (diff > units.week) {
                this.innerHTML = format(Math.floor(diff / units.week), "week") + " ago";
            } else {
                var pieces = [], num, consider = ["day", "hour", "minute", "second"], measure;

                for (var i = 0, l = consider.length; i < l; ++i) {
                    measure = units[consider[i]];

                    if (diff > measure) {
                        num = Math.floor(diff / measure);
                        pieces.push(format(num, consider[i]));
                    }
                }

                this.innerHTML = (pieces.length == 1 ? pieces[0] :
                                  pieces.slice(0, pieces.length - 1).join(", ") + " and " +
                                  pieces[pieces.length - 1]) + " ago";
            }
        });
    };
}());

The code first handles two special cases: differences bigger than a month are represented as "more than a month" and differences bigger than a week display the number of weeks. The function then goes on to gather the exact number of days, hours, minutes and seconds in the difference. If the difference is less than a day, days are omitted, and so on.

The code looks reasonable enough, but using it immediately indicates that something is not quite right. "Humanizing" a date 8 days back results in "and undefined". Employing the console.log strategy for debugging, we might go ahead and log some of the intermediate values to determine what is going wrong. For instance, logging the initial difference will alert us to the fact that we got the order of the terms wrong. Ok, we can fix that:

var diff = new Date() - new Date(datetime);

Getting the difference right solves the problem, and instead we now get "1 week ago" which is what we expect. So we toss the plugin into production and keep happily hacking on some other part of the application.

The next day, someone gently informs us that "3 days, 80 hours, 4854 minutes and 291277 seconds" is not an acceptable timestamp representation. It turns out we failed to test differences smaller than a week. Enter console.log. Once again, we litter our code with logging statements (perhaps even re-introducing some of the ones we just cleared out) to eventually discover that the remaining difference is not being re-calculated for each term:

if (diff > measure) {
    num = Math.floor(diff / measure);
    diff = diff - (num * measure); // BUG: This was missing in our first attempt
    pieces.push(format(num, consider[i]));
}

Once we have located and fixed the bug, we strip out the console.log call(s) to avoid the code crashing in browsers that don't define the console object.

Step debuggers

Firebug and similar tools make debugging JavaScript easier than it used to be, yet many people seem to regard console.log as a vastly more advanced tool than the archaic alert. True, the console doesn't block the UI and is less likely to have you force-closing the browser, but that's about it: console.log debugging is basically as elegant or inelegant as alert debugging.

A slightly more sophisticated approach is using a step debugger like the one available in Firebug.

Using a step debugger you can possibly save some time by setting a few breakpoints and inspecting all available values rather than logging each variable you want to inspect.

The problem with console.log

console.log style debugging has a few problems. First of all, console.log has the nasty risk of introducing bugs all on its own. If you have ever forgotten to remove that last logging statement before some important demo or deployment, you know what I'm talking about. Dangling log statements will crash your code in browsers that don't support the console object, which includes Firefox when Firebug is not active. "But JavaScript is dynamic", I hear you say. "You can just define your own no-op console, and the problem goes away". Sure, you can, but that's like solving your car's rust problem with a coat of paint.
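For completeness, the "coat of paint" itself takes only a few lines. Here it is sketched as a small helper (the helper name is mine, not from any library) exercised against a plain object standing in for a console-less window:

```javascript
// Install a do-nothing console on a global object that lacks one, so
// stray log statements no longer crash. Existing consoles are left alone.
function ensureConsole(global) {
    if (!global.console) {
        global.console = { log: function () {} };
    }
    return global.console;
}

// A bare object stands in for a browser window without a console:
var consoleLessWindow = {};
ensureConsole(consoleLessWindow);
consoleLessWindow.console.log("stray debug output"); // silently ignored
```

The rust is still there, of course: the stray log statements remain in your code, doing nothing useful.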

If dangling console.log calls are not acceptable, we immediately realize the next problem: it's not repeatable. Once a debugging session has concluded, you strip out all the log statements. If (when) new problems occur in the same part of the code base, you are back to square one, reintroducing your ingenious log statements. Step debuggers are equally temporary. Ad hoc debugging is time-consuming, error-prone and non-repeatable.

Finding bugs more efficiently

Unit testing is a way to find bugs and verify correctness that does not suffer the fleetingness of debuggers and manual console.log/alert debugging. Unit testing also brings with it lots of other advantages which I will cover throughout this article.

What is a unit test?

A unit test is code that executes part of your production code with an expectation on the result. As an example, let us pretend the two bugs we previously found in jQuery.fn.differenceInWords weren't fixed, and try to find them using unit tests:

var second = 1000;
var minute = 60 * second;
var hour = 60 * minute;
var day = 24 * hour;

try {
    // Test that 8 day difference results in "1 week ago"
    var dateStr = new Date(new Date() - 8 * day).toString();
    var element = jQuery('<time datetime="' + dateStr + '">Replace me</time>');
    element.differenceInWords();

    if (element.text() != "1 week ago") {
        throw new Error("8 day difference expected\n'1 week ago' got\n'"+
                        element.text() + "'");
    }

    // Test a shorter date
    var diff = 3 * day + 2 * hour + 16 * minute + 10 * second;
    dateStr = new Date(new Date() - diff).toString();
    var element = jQuery('<time datetime="' + dateStr + '">Replace me</time>');
    element.differenceInWords();

    if (element.text() != "3 days, 2 hours, 16 minutes and 10 seconds ago") {
        throw new Error("Small date difference expected\n" +
                        "'3 days, 2 hours, 16 minutes and 10 seconds ago' " +
                        "got\n'" + element.text() + "'");
    }

    alert("All tests OK!");
} catch (e) {
    alert("Assertion failed: " + e.message);
}

The "test case" above processes some elements with known datetime attributes, and throws an error if the resulting humanized string is not the one we expect. The code can be saved in a separate file and included on a page that loads the plugin. Running the page in a browser will immediately alert us with either "All tests OK!" or a message explaining what went wrong.

Debugging your application this way might seem awkward: not only did we have to write extra code to inspect the plugin's behavior, we also had to programmatically create elements and run them through the plugin to check the generated text. However, there are quite a few advantages to this approach:

  • The tests can be re-run at any time, in any browser.
  • As long as we remember to run the tests when we change the code, it is highly unlikely that the same bugs will creep back in.
  • Properly cleaned up, these tests provide documentation of our code.
  • Tests are self-checking. No matter how many tests we add, we still have just one page to check to see if there are any errors.
  • Tests don't interfere with production code, and therefore do not carry the risk of stray alert and console.log calls being deployed as part of the production code.

Writing these tests takes slightly more initial effort, but as we only write them once, we quickly save that time the next time the same code needs debugging.

Using a unit test framework

The tests we just wrote contain quite a bit of ceremony. Luckily, there are lots of testing frameworks around to help us out. Using a test framework allows us to reduce the amount of testing logic we have to embed inside the test, which in turn reduces the chance of having bugs in the tests themselves. A framework can also give us more options for automating the tests and displaying results.

Assertions

An assertion is a special kind of function that performs a given check against its argument(s) and either flags an error (usually by throwing an AssertionError or the like) or does nothing. The simplest assertion is one that expects its argument to be truthy. Assertions also commonly accept a message to display in case of failure:

assert("Small date difference expected\n '3 days, 2 hours, 16 minutes and " +
       "10 seconds ago' got\n'" + element.text() + "'",
       element.text() == "3 days, 2 hours, 16 minutes and 10 seconds ago");

assert takes the message as the first parameter. The idea is that testing is about stating your expectations upfront, and the assertion resembles a specification with the leading message.

While a simple assert like the one above is usually all you need, most test frameworks ship with a choice of customized assertions. What we are really doing above is checking some computed value against an expected value. Most test frameworks have something along the lines of assertEquals for this specific use case:

assertEquals("3 days, 2 hours, 16 minutes and 10 seconds ago", element.text());

Note how we no longer specify an explanation. assertEquals knows that we expect the second computed value to be equal to the first, so it can generate a suitable message for us.

Test cases, setUp and tearDown

In our manual unit test we had two individual tests. When using a test framework, these are usually specified as individual functions in a test case. A test case is a collection of tests testing related functionality. To make test reports easier to scan, test cases usually have a name. The following is an example of structuring our manual unit test from before as a JsTestDriver test case:

var second = 1000;
var minute = 60 * second;
var hour = 60 * minute;
var day = 24 * hour;

TestCase("TimeDifferenceInWordsTest", {
    "test 8 day difference should result in '1 week ago'": function () {
        var dateStr = new Date(new Date() - 8 * day).toString();
        var element = jQuery('<time datetime="' + dateStr + '">Replace me</time>');
        element.differenceInWords();

        assertEquals("1 week ago", element.text());
    },

    "test should display difference with days, hours, minutes and seconds": function () {
        var diff = 3 * day + 2 * hour + 16 * minute + 10 * second;
        var dateStr = new Date(new Date() - diff).toString();
        var element = jQuery('<time datetime="' + dateStr + '">Replace me</time>');
        element.differenceInWords();

        assertEquals("3 days, 2 hours, 16 minutes and 10 seconds ago", element.text());
    }
});

The comments preceding each test were converted to test function names, and the comparisons were converted to assertions. We could even make each test slightly clearer by extracting the date object creation to a special method called setUp, which is called before each test function is executed:

TestCase("TimeDifferenceInWordsTest", {
    setUp: function () {
        this.date8DaysAgo = new Date(new Date() - 8 * day);
        var diff = 3 * day + 2 * hour + 16 * minute + 10 * second;
        this.date3DaysAgo = new Date(new Date() - diff);
    },

    "test 8 day difference should result in '1 week ago'": function () {
        var element = jQuery('<time datetime="' + this.date8DaysAgo + '">Replace me</time>');
        element.differenceInWords();

        assertEquals("1 week ago", element.text());
    },

    "test should display difference with days, hours, minutes and seconds": function () {
        var element = jQuery('<time datetime="' + this.date3DaysAgo + '">Replace me</time>');
        element.differenceInWords();

        assertEquals("3 days, 2 hours, 16 minutes and 10 seconds ago", element.text());
    }
});

The setUp method can also have a complementary tearDown method, executed after each test. This example does not require one, but you would typically add a tearDown whenever you need to clean up after each test. Imagine you are testing code that caches data in localStorage: to prevent tests from interfering with each other, you might clear any values written to localStorage after each test.
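To illustrate the mechanics, here is a hypothetical micro-runner (not JsTestDriver's actual implementation) showing how setUp runs before, and tearDown after, every single test, together with an in-memory cache standing in for localStorage:

```javascript
// Sketch of how a framework drives a test case: setUp before each test
// function, tearDown after it, recording whether each test passed.
function runTestCase(testCase) {
    var results = {};

    for (var name in testCase) {
        if (name.indexOf("test") !== 0) { continue; }

        if (typeof testCase.setUp === "function") { testCase.setUp(); }

        try {
            testCase[name]();
            results[name] = "passed";
        } catch (e) {
            results[name] = "failed: " + e.message;
        }

        if (typeof testCase.tearDown === "function") { testCase.tearDown(); }
    }

    return results;
}

// Without the tearDown below, the second test would see the first test's
// leftover value and fail.
var cache = {};

var results = runTestCase({
    setUp: function () { this.key = "user"; },
    tearDown: function () { cache = {}; },

    "test should start with an empty cache": function () {
        if (cache[this.key]) { throw new Error("cache was not clean"); }
        cache[this.key] = "christian";
    },

    "test should also start with an empty cache": function () {
        if (cache[this.key]) { throw new Error("cache was not clean"); }
    }
});
```

Comment out the tearDown line and the second test fails, which is exactly the kind of inter-test interference the hook exists to prevent.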

In addition to the code and the tests you need to specify some way of actually running the tests. Most JavaScript unit testing frameworks require a simple HTML file that loads the right files in the right order (including the test framework itself). This HTML file can then be loaded in the browser. Usually all passing tests are colored green, and the failing ones turn a menacing red.

Automate, automate, automate!

By moving from a logging based debugging routine to unit testing we have made sure that our experiments are repeatable and self-checking. Doing so takes a lot of manual labour off our backs, but there are still ways to improve. Running the HTML file containing the tests in a browser is fairly painless, but as you might have noticed, today's web developers can't simply test in a single browser and call it a day. Depending on your environment, you may have to test the 2+ most recent versions of 5+ browsers on 3+ platforms. Suddenly, running that one HTML file is a bit of work.

As mentioned, the test case object above is written for JsTestDriver, a JavaScript testing framework and test runner from Google. What sets JsTestDriver apart from the pack is the way it runs tests. Rather than the standard HTML file to load sources and tests, JsTestDriver runs a server that can help you run tests on multiple browsers all at once. The best way to understand how this works is to see it in action.

Assume that the jQuery plugin lives in src/difference_in_words.jquery.js and the test case lives in test/difference_in_words_test.js. In order to run this test we add a configuration file, jsTestDriver.conf in our project's root directory. It contains the following:

server: http://localhost:4224

load:
  - src/*.js
  - test/*.js

Now download the latest JsTestDriver jar file. You will need Java installed to use it. Then issue the following command in a shell (if you are on Windows, cmd.exe will do):

java -jar JsTestDriver-1.2.2.jar --port 4224

You have now started a JsTestDriver server on your machine. The next step is to point a browser to http://localhost:4224/capture, which will turn the browser into an idle test-running slave. Do this with all the browsers you have available. Then open a new shell, cd into the project directory and issue:

java -jar JsTestDriver-1.2.2.jar --tests all

After a short while you should see output indicating that JsTestDriver ran two tests in all available browsers, showing whether they passed. Congratulations, you have just automated testing on multiple browsers! If your machine is reachable from other devices on the network, you can also use this server to test other platforms (OS X, Windows, Linux), your iPhone, Android phone and other mobile devices. And you can verify them all with a single shell command. That is pretty exciting!

JsTestDriver is not your only choice for test automation. If you don't like its assertion framework, it can also run tests written using QUnit, YUI Test and Jasmine. Additionally, Yahoo has YETI, a similar tool built specifically for YUI Test, and Nicholas Zakas recently released YUI Test Standalone, which includes a similar runner based on Selenium Web Driver.

Testability: Using tests to improve your code

By now, you have hopefully started to realize the immense time-saver unit tests can be, particularly for JavaScript which is usually expected to run well in more than a few environments. Not only are unit tests time-saving compared to manual debugging and monkey patching, but they will also increase your confidence, happiness and productivity.

Now that you have decided to start writing unit tests you are probably wondering how to get started. The obvious answer is to write some tests for some existing code. Unfortunately, that often turns out to be really hard. This is partly because writing tests takes practice, and the first few ones are always hard to get right, or even just type out. However, there's usually another reason why writing tests for existing code is hard: code not written with tests in mind is usually not very test-friendly.

Testability by example: calculating time differences

"Testability" is a measure of how test-friendly a particular interface is. A test-friendly interface makes all its interesting pieces easily accessible from the outside, and does not require unrelated state to be established in order to test any given part of the API. In other words, testability is about good design, loose coupling and high cohesion, which is just a fancy way of saying that objects should not depend too much on other objects, and that each object/function does one thing and does it well.

As an example of testability we will return to our jQuery plugin. In our two previous unit tests, we wanted to make sure that using the plugin with a date 8 days back in time resulted in the string "1 week ago" and another date resulted in a more fine-grained string representation. Note how neither of these has anything to do with DOM elements, yet we had to create one in order to test the date difference calculation and the human-readable string representation.

The jQuery plugin is obviously harder to test than it could have been, and the main reason is that it does more than one thing: it calculates the difference between two dates, it generates a human readable representation of the difference and it fetches dates from and updates the innerHTML of DOM nodes.

To address these issues, consider the following code, which is an alternative implementation of the same plugin:

var dateUtil = {};

(function () {
    var units = {
        second: 1000,
        minute: 1000 * 60,
          hour: 1000 * 60 * 60,
           day: 1000 * 60 * 60 * 24,
          week: 1000 * 60 * 60 * 24 * 7,
         month: 1000 * 60 * 60 * 24 * 30
    };

    function format(num, type) {
        return num + " " + type + (num > 1 ? "s" : "");
    }

    dateUtil.differenceInWords = function (date) {
        // calculate and return the humanized string (same logic as before)
    };

    jQuery.fn.differenceInWords = function () {
        this.each(function () {
            var datetime = this.getAttribute("datetime") ||
                           this.getAttribute("data-datetime");
            this.innerHTML = dateUtil.differenceInWords(new Date(datetime));
        });
    };
}());

This is the same code as before, only rearranged. There are now two public functions: the jQuery plugin and the new dateUtil.differenceInWords, which takes a date and returns a human-readable string describing how long ago that date was. Still not perfect, but we have separated two concerns. Now the jQuery plugin is only in charge of replacing the element's innerHTML with a humanized string, and the new function is only in charge of calculating the right string. While the old unit tests will still pass, they would be simpler to write against this new interface:

TestCase("TimeDifferenceInWordsTest", {
    setUp: function () {
        this.date8DaysAgo = new Date(new Date() - 8 * day);
        var diff = 3 * day + 2 * hour + 16 * minute + 10 * second;
        this.date3DaysAgo = new Date(new Date() - diff);
    },

    "test 8 day difference should result in '1 week ago'": function () {
        assertEquals("1 week ago", dateUtil.differenceInWords(this.date8DaysAgo));
    },

    "test should display difference with days, hours, minutes and seconds": function () {
        assertEquals("3 days, 2 hours, 16 minutes and 10 seconds ago",
                     dateUtil.differenceInWords(this.date3DaysAgo));
    }
});

Now there are no DOM elements in our test, and we can more efficiently test the logic in generating the correct strings. Similarly, testing the jQuery plugin is a matter of making sure the text content is replaced.

Why change the code for the tests?

Every time I introduce someone to testing and explain the concept of testability, I invariably hear an argument along the lines of "not only do you want me to spend extra time writing these tests, I have to change my code for the sake of the tests, too?"

Look at the change we just made to the humanized time difference code. The change was motivated by a desire to ease testing, but would you argue that the change only benefitted the tests? Quite the contrary, our change has made the code easier to use by separating unrelated behavior. Now, if we later decide to implement e.g. a Twitter feed on our pages, we can use the differenceInWords function directly with the timestamp rather than going the clumsy route via a DOM element and the jQuery plugin.

Testability is an inherent quality of good design. You can have testability and still have bad design, sure, but you cannot have good design without testability. Think of the tests as small sample use cases - examples of using your code - if testing is hard, it means using the code is hard.

Writing tests first: Test-driven development

The biggest challenge when applying unit testing to existing code is the testability issue. In the interest of continuously improving our workflow, what can we do? It turns out that one surefire way to bake testability right into the very soul of the production code is to write the tests first.

Test-driven development (TDD) is a development process where you work in small tight intervals and each interval always starts with a test. No production code can be written until there is a failing unit test illustrating its need. TDD makes you focus on behavior rather than what code you need next.

Let's say we are told that the jQuery time difference plugin needs to be able to calculate the difference between any two dates, not just comparing to the current timestamp. How could we use TDD to tackle this problem? Well, the first extension would be to provide a second argument which is the date we are comparing to:

"test should accept date to compare to": function () {
    var compareTo = new Date(2010, 1, 3);
    var date = new Date(compareTo.getTime() - 24 * 60 * 60 * 1000);

    assertEquals("24 hours ago", dateUtil.differenceInWords(date, compareTo));
}

This test pretends that the method already accepts two arguments, and expects the resulting string to be "24 hours ago" when comparing two dates in the past that are exactly 24 hours apart. Running the test unsurprisingly tells us that this doesn't work. To pass the test, we have to add a second optional argument to the function, and at the same time make sure we don't change the function such that the existing tests fail. Here is one way of implementing it:

dateUtil.differenceInWords = function (date, compareTo) {
    compareTo = compareTo || new Date();
    var diff = compareTo - date;

    // ...
};

The tests all pass, telling us that both the new and existing requirements are satisfied.

Now that we accept two dates we might want the method to be able to describe both differences in the past and the future. Let's describe this behavior with another test:

"test should humanize differences into the future": function () {
    var compareTo = new Date();
    var date = new Date(compareTo.getTime() + 24 * 60 * 60 * 1000);

    assertEquals("in 24 hours", dateUtil.differenceInWords(date, compareTo));
}

Passing this test will require a little more work. Fortunately, we have tests covering (some of) our previous requirements. (Two unit tests hardly constitute good coverage, but imagine that we had a full suite of tests for this method already). A strong test suite enables us to fearlessly change the code, knowing that we will be warned if we break it. My implementation ended up like this:

dateUtil.differenceInWords = function (date, compareTo) {
    compareTo = compareTo || new Date();
    var diff = compareTo - date;
    var future = diff < 0;
    diff = Math.abs(diff);
    var humanized;

    if (diff > units.month) {
        humanized = "more than a month";
    } else if (diff > units.week) {
        humanized = format(Math.floor(diff / units.week), "week");
    } else {
        var pieces = [], num, consider = ["day", "hour", "minute", "second"], measure;

        for (var i = 0, l = consider.length; i < l; ++i) {
            measure = units[consider[i]];

            if (diff > measure) {
                num = Math.floor(diff / measure);
                diff = diff - (num * measure);
                pieces.push(format(num, consider[i]));
            }
        }

        humanized = (pieces.length == 1 ? pieces[0] :
                     pieces.slice(0, pieces.length - 1).join(", ") + " and " +
                     pieces[pieces.length - 1]);
    }

    return future ? "in " + humanized : humanized + " ago";
};

Notice how I didn't have to touch the jQuery plugin. Because we separated the unrelated parts, I am completely free to change and improve the way the humanized string is built without changing the way jQuery is used to put humanized strings in my website.
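Since the refactored dateUtil.differenceInWords no longer touches the DOM, the finished function can be exercised directly in any JavaScript environment. The following self-contained sketch combines the units table and format helper from the earlier listing with the implementation above, plus a few spot checks:

```javascript
var units = {
    second: 1000,
    minute: 1000 * 60,
    hour: 1000 * 60 * 60,
    day: 1000 * 60 * 60 * 24,
    week: 1000 * 60 * 60 * 24 * 7,
    month: 1000 * 60 * 60 * 24 * 30
};

function format(num, type) {
    return num + " " + type + (num > 1 ? "s" : "");
}

var dateUtil = {};

dateUtil.differenceInWords = function (date, compareTo) {
    compareTo = compareTo || new Date();
    var diff = compareTo - date;
    var future = diff < 0;
    diff = Math.abs(diff);
    var humanized;

    if (diff > units.month) {
        humanized = "more than a month";
    } else if (diff > units.week) {
        humanized = format(Math.floor(diff / units.week), "week");
    } else {
        var pieces = [], num, consider = ["day", "hour", "minute", "second"], measure;

        for (var i = 0, l = consider.length; i < l; ++i) {
            measure = units[consider[i]];

            if (diff > measure) {
                num = Math.floor(diff / measure);
                diff = diff - (num * measure);
                pieces.push(format(num, consider[i]));
            }
        }

        humanized = (pieces.length == 1 ? pieces[0] :
                     pieces.slice(0, pieces.length - 1).join(", ") + " and " +
                     pieces[pieces.length - 1]);
    }

    return future ? "in " + humanized : humanized + " ago";
};

// Spot checks against a fixed reference date:
var compareTo = new Date(2011, 1, 23);

dateUtil.differenceInWords(new Date(compareTo - 24 * units.hour), compareTo);
// → "24 hours ago"
dateUtil.differenceInWords(new Date(compareTo.getTime() + 24 * units.hour), compareTo);
// → "in 24 hours"
dateUtil.differenceInWords(new Date(compareTo - 8 * units.day), compareTo);
// → "1 week ago"
```

Passing a fixed compareTo date, as the tests do, keeps the results deterministic; omitting it falls back to the current time.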

Continuous integration

When practicing TDD, we require tight feedback. Feedback comes from our tests, which means that tests need to run effortlessly and fast. JsTestDriver already makes it both easy and fast to run tests, but as always there are limitations. The limitations come in the form of multiple browsers. While JsTestDriver can easily run tests on as many browsers as you want, doing so is inconvenient for the TDD workflow for a couple of reasons:

  • Having test reports from more than one browser at a time makes it harder to see what's happening, and takes away from the momentum TDD is supposed to provide you with.
  • Some less capable browsers - which are usually highly important to test - are slow. And I mean slow. Slow ruins the TDD flow.

One solution to this problem is continuous integration. Continuous integration is the practice of automatically and frequently controlling the quality of your project. This could include tools such as JsLint, and it most definitely should include running tests.

A continuous integration (CI) server can make sure that the combined work of all developers behaves as intended, and it can also be the one in charge of running tests in a wide selection of browsers. A "build" on a CI server is usually triggered by version control systems such as Git or Subversion, and the server will often email project members when it discovers problems.

I recently wrote a guide to setting up the Hudson CI server for JsTestDriver. Using Hudson and JsTestDriver, it's easy to build a workflow that is time-efficient and promotes quality.

Personally, I use TDD for basically anything I do, and usually I run tests locally against Firefox, as it is the browser I find to have the best error messages and stack traces. Every time I finish a feature, often a small one, I push it to the code repository. At this point, Hudson checks out the change I just committed and runs all the unit tests on a wide range of browsers. If anything fails, I get an email explaining what happened. Additionally, I can visit the Hudson server anytime to view graphs of the project's builds, see console output for individual builds and so on.

In conclusion: Why should I care?

If, after reading this article you are still not convinced that unit testing is a worthwhile practice, let us rehash some common misconceptions.

“I use a library such as jQuery, which ensures my code works properly”

Ajax libraries such as jQuery can go a long way in helping you deal with cross-browser issues. In fact, in many use cases these libraries completely abstract away all those nasty DOM bugs and even core JavaScript differences. However, these libraries do not - and cannot - protect you from faulty application logic. Unit tests will.

“Testing is an advanced practice for the pros, not for me”

My position is that however you think of your process for writing code, you are testing it, e.g. by refreshing your browser(s) to verify that the code does what it should. You are simply opting out of automating and improving your testing process, and in the long run (or even not-so-long run) you will spend more time hammering your browser's refresh button than I spend writing tests that I can run today, tomorrow and next year should I so please.

As with any other new technique, testing requires practice, but it doesn't take a "ninja" to do it. Tests consist largely of dirt simple statements that exercise your code and make assertions about it. The hard part is designing code well and making sure testing it is possible. In other words, the hard part is improving your programming skills and thinking about your code before you write it. There is no reason why anyone - be it a pro or a beginner - should not desire to improve.

“Testing takes too much time, I'd rather just write production code”

Both manual and automated testing take time. But please, don't spend an hour or two "evaluating" unit testing and/or TDD and then decide it is a waste of time. Unit testing and TDD are disciplines that need to be practiced just like any other. There is no way to get good at automated testing in a few hours. You need practice, and once you get there, you will recognize the benefits I'm describing here, and you will realize how much of a waste manual ad-hoc testing is. Besides, even if writing unit tests and testing your code rigorously takes a little more time, what would you prefer? To fail really fast, or to succeed?

Adjust to your needs

You might get the impression from this article that I feel that everybody should adopt my ways of working. I don't feel that way. But I do feel like being serious about the quality and correctness of your application is important, and I do think that unit testing is an integral part of that equation.

TDD is more of an optional layer here, but my experience tells me that TDD greatly simplifies unit testing. It helps improve the design of code and helps me implement only those things which count by forcing me to reason about my code before I implement it. Surely you can achieve these goals by other means as well, but for me, TDD is the perfect solution.

Now go practice!

 

About the Author

Originally a student in informatics, mathematics, and digital signal processing, Christian Johansen has spent his professional career specializing in web and front-end development with technologies such as JavaScript, CSS, and HTML using agile practices. A frequent open source contributor, he blogs about JavaScript, Ruby, and web development at cjohansen.no. Christian works at Gitorious.org, an open source Git hosting service.
