Test-driven Development

[Image: satirical reuse of a beer advert. Caption: "He CAN divide by zero."]

If you search Wikipedia for a list of software development philosophies, you will find well over a hundred styles to choose from. That sheer number suggests there is no single standard way of developing software, and that a successful enterprise must find the methods that best fit its environment.

Our team in Integration has recently been using a methodology with the OpenEAI framework (Open Enterprise Application Integration) to develop our software alongside a test suite. This test-driven development has dramatically changed the way we work, and I am very excited by the results. I wanted to let folks in OIT know about our approach, and I am curious to see what other applications could use this methodology.

In test-driven development, the customer writes test cases before we start developing anything. This method is applicable to nearly any type of OIT development work, and especially to instances where testing can be automated.

In this methodology, our testing begins early in the project, just after the specs are written. The tests in the suite become a focal point for managing the project.

[Image: satirical reuse of a cat photo. Caption: "This IS my happy face."]

The test cases serve as explicit, codified representations of key requirements: given this input, I expect that output. The customers either write the test cases in XML themselves or work with the XML author and approve the exact document. In this way, there are no translation errors between the narrative specifications and the actual test, which is a distinctive advantage of XML-based test suites. Much of the rework from programming bugs comes down to the interpretation of ambiguous or vague specifications.
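
To make "given this input, I expect that output" concrete, here is a minimal sketch in Python. The message shapes and the transform() hook are hypothetical stand-ins for illustration; our real test cases are XML documents run by the OpenEAI test suite:

```python
# A minimal sketch, assuming a hypothetical transform() hook and made-up
# message shapes. Our real test cases are XML documents executed by the
# OpenEAI test suite, not Python dicts like these.

test_cases = [
    {
        "name": "activate-person",
        "input": "<Person><NetId>jdoe</NetId><Status>active</Status></Person>",
        "expected": "<Account><User>jdoe</User><Enabled>true</Enabled></Account>",
    },
]

def run_suite(transform, cases=test_cases):
    """Feed each codified input to the application and compare outputs."""
    failures = []
    for case in cases:
        actual = transform(case["input"])
        if actual != case["expected"]:
            print(f"FAIL {case['name']}: expected {case['expected']!r}, got {actual!r}")
            failures.append(case["name"])
    return failures
```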

Once created, the test suite becomes part of the operational artifacts of the project and can be stored next to the production application. Anyone can therefore easily find and run the tests at any time to verify production. Ideally, no documentation would be needed to run these tests, but we are finding that some extra data-preparation steps are sometimes needed.

We use this method for regression testing (retesting after changes, from simple to complex, to make sure they have not broken anything), and we run the OpenEAI test suite on the Enterprise Service Bus (ESB). Because our tests are built into the console of the running application, anyone can run a test without having to be an application expert.
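
As a rough illustration of what a push-button regression run looks like, here is a hedged sketch. It assumes test cases are stored as paired files (name.in.xml / name.out.xml) in a directory kept next to the application; the actual OpenEAI suite is run from the application's console rather than from files like these:

```python
# Sketch only: assumes a file-based suite layout, which is an illustration,
# not the OpenEAI suite's real storage or console-driven execution.
from pathlib import Path

def run_regression(app_transform, suite_dir="tests"):
    """Run every stored case against the application and print a summary."""
    passed, failures = 0, []
    for case_in in sorted(Path(suite_dir).glob("*.in.xml")):
        # Each input file is paired with an expected-output file.
        case_out = case_in.with_name(case_in.name.replace(".in.xml", ".out.xml"))
        actual = app_transform(case_in.read_text())
        if actual == case_out.read_text():
            passed += 1
        else:
            failures.append(case_in.name)
    print(f"{passed} passed, {len(failures)} failed: {failures}")
    return failures
```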

[Image: satirical reuse of an image from "Married... With Children". Caption: "Stay in school, kids, or this could be you."]

As one might expect, there are a few drawbacks to this methodology.

  1. It does not replace the need for narrative specs (the business-logic specifications in which the customer defines actions and results; a sort of logical pseudo-code). This type of testing augments the narrative; we still need to know the business logic so we can design the program well.
  2. It does not test everything, just the basic functionality plus the cases known to be troublesome, even if rare. Programmers still need unit testing to cover all possible outcomes.
  3. It is another system to maintain. The test suites are not forgiving; a slight change in the output that is insignificant in meaning will still cause a test to fail (see the comparison sketch after this list). Time to keep the test suites up to date is therefore needed: a single change might leave 200 test cases to update with new expected results, which can be very time-consuming and tedious.
  4. It is time-consuming to build; however, I would argue that the overall time saved by avoiding rework and bug resolution more than makes up for the cost.
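
One way drawback 3 could be softened (an assumption about a possible relaxation, not what our suites do today, since the customer approves the exact document) is to canonicalize both documents before comparing, so that formatting-only differences stop failing tests. A minimal sketch using Python's standard library:

```python
# Assumption: comparing canonicalized XML instead of the exact document text.
from xml.etree.ElementTree import canonicalize

def xml_equal(expected: str, actual: str) -> bool:
    """Compare two XML documents, ignoring insignificant formatting."""
    return (canonicalize(expected, strip_text=True)
            == canonicalize(actual, strip_text=True))

# Attribute order and indentation differ here, but the meaning does not.
assert xml_equal("<a x='1' y='2'><b/></a>", '<a y="2" x="1">\n  <b/>\n</a>')
```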

Despite these shortcomings, we are enjoying this methodology because it has improved the quality of our production code. The testing is accurate and methodical, and it uncovers more errors, and uncovers them sooner.

So if you are a developer, how can you apply these concepts in your environment? We would love to hear your feedback, so please leave your comments below. I would like to know whether any other teams are doing this, or perhaps they could share their own methodologies.


Comments

One response to “Test-driven Development”

  1. Peter Day

    Julia, this is a great article with clever illustrations. The consulting firm GCA that is implementing the initial functionality of the NetIQ Identity Manager replacement for ENID uses a similar methodology. They begin with an overall design document that lists all the connectors, systems, and software that are needed, along with what the system overall is supposed to do in business terms. Once that is approved, then for each connector they create a high-level design document that details what the connector is to do in business terms. When that is approved by the customer, they create a detailed design document and an associated use case document. The detailed design specifies how they will configure the connector, including the values to be used. The use case document lists the tests that will be done to verify that the connector provides the desired functionality. Each use case says what is being tested and specifies the input and expected output.

    The testing is partially automated using a NetIQ tool called Validator, which can run through a series of scripted tests and pause to allow an examination of the result. A GCA developer uses this tool along with other tools to run each use case test and demonstrate to us that the connector passes all the functional tests. The developer also captures detailed log output for each test to document that the test was successful.

    The functional testing of the connectors is unit testing in that each connector is given synthetic inputs and the outputs are examined for correctness. Once all the functional tests are complete, integration testing will be done by letting one connector react to the operation of another, and then adding additional connectors, until we test the whole process end-to-end.
