testing, testing

Test-Driven Development in Modern Software Design
Friday, May 8th, 2015


The core idea of "test-driven development" (TDD) is fairly simple: write tests before writing code. Slightly more broadly, Wikipedia describes it as "a software development process that relies on the repetition of a very short development cycle." That cycle runs as follows: before writing any code, a developer writes a test for a change to the program. The test is designed to fail at first. When it fails as expected, the developer writes the minimum amount of code needed to make it pass, then refactors the code to improve it, and repeats. As the suite of tests grows, it largely ensures that small changes to the program do not break its existing behavior.
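The cycle above can be sketched with a small, hypothetical example. Suppose the feature being added is a `slugify` helper for URL-friendly titles (the function and its behavior are illustrative, not from any particular project): the tests below would be written first and would fail, and the function body is the minimum code needed to make them pass.

```python
import unittest


def slugify(title):
    # The minimum implementation needed to pass the tests below;
    # a later refactoring pass might handle punctuation, accents, etc.
    return title.strip().lower().replace(" ", "-")


class TestSlugify(unittest.TestCase):
    # In TDD these tests exist (and fail) before slugify is written.
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  Trim Me  "), "trim-me")


if __name__ == "__main__":
    unittest.main()
```

Running the file before `slugify` exists produces the expected failure (a `NameError`); adding the minimal body turns the suite green, and refactoring can then proceed with the tests as a safety net.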

The benefits that come from this style of design are many. Kent Beck, credited as a key developer of TDD and one of the visionaries behind "agile development," has said that it encourages simple designs and inspires confidence. Those are two distinct points. By writing only the minimum amount of code needed to pass the tests, designs stay concise. And passing each test as the design evolves builds a deserved faith in the reliability of the code.

There are other benefits as well. Writing tests first makes the code more easily testable, because its structure is considered as it is written rather than after the fact. It also builds strong tests: because each test fails initially, the developer knows it actually exercises what it is meant to test. It ensures that most, if not all, features of the program get tested, and it leads to a deeper understanding of the design. Because tests and code are written in small, independent parts, the code is often more flexible and extensible. The tests also become an important part of the software's documentation.

TDD does not solve every problem, of course. Functional tests are still needed to check larger parts of a program. In addition, if the tests are written by the same person who writes the code, they may share the code's "blind spots." A passing test does not necessarily mean the program is working as intended, and this, along with the sheer volume of passing tests, may lead to a false sense of security. Some also believe that TDD leads to wasted time, and if an entire team is not on board, disunity can result.

When the TDD mantra is embraced, it is necessary to understand the full value of the tests themselves: they must be maintained just like the code as the software evolves. Kent Beck advises that tests should run fast and run in isolation (so they can be reordered), that they use realistic data that makes the tests easy to understand, and that each represents a single step toward the overall goal. This ensures that the tests retain their value long after they are written.
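Those qualities can be illustrated with a hypothetical example. The `Cart` class and its tests below are invented for this sketch; each test builds its own fresh fixture in `setUp`, so the tests run in isolation and in any order, and they use small, realistic data rather than opaque values.

```python
import unittest


class Cart:
    # A tiny in-memory shopping cart, used only to illustrate the tests.
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


class TestCart(unittest.TestCase):
    def setUp(self):
        # A fresh cart per test: no shared state, so order never matters.
        self.cart = Cart()

    def test_total_of_realistic_order(self):
        # Readable, realistic data makes the intent of the test obvious.
        self.cart.add("coffee", 3.50)
        self.cart.add("bagel", 2.25)
        self.assertEqual(self.cart.total(), 5.75)

    def test_empty_cart_totals_zero(self):
        # One small step of behavior per test.
        self.assertEqual(self.cart.total(), 0)


if __name__ == "__main__":
    unittest.main()
```

Because each test owns its fixture and checks one behavior, a failure points directly at the broken step, which is what keeps a suite like this valuable as the code evolves.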

While TDD doesn't solve everything, the goal of making tests a part of the design and documentation, not just verification, makes them an important partner to the code itself. They can be used to show that the software not only functions, but functions well. And if the time is taken to write the tests well, they lead to better results, and better developers.
