If you search Wikipedia for a list of software development philosophies, you will find well over a hundred to choose from. That number suggests there is no single standard way of developing software, and that a successful enterprise must find the methods that best fit its environment.
Our Integration team has recently been using a methodology with the OpenEAI framework (Open Enterprise Application Integration) to develop our software within a test suite. This test-driven development has dramatically changed the way we work, and I am very excited by the results. I wanted to let folks in OIT know about our approach, and I am curious to see what other applications could use this methodology.
In test-driven development, the customer writes test cases before any development begins. This approach applies to nearly any type of OIT development work, especially where testing can be automated.
In this methodology, our testing begins early in the project, just after the specs are written. The tests in the suite become a focal point for managing the project.
The test cases serve as explicit, codified representations of key requirements: given this input, I expect that output. Customers either write the test cases in XML themselves, or they work with the XML author and approve the exact document. In this way, there are no translation errors between the narrative specifications and the actual test, which is a distinctive advantage of XML-based test suites. Much of the rework from programming bugs comes down to the interpretation of ambiguous or vague specifications.
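To make the idea concrete, here is a minimal sketch of a test case as a codified requirement. The XML format and the `normalize-netid` example are hypothetical illustrations, not the actual OpenEAI test-suite format: the point is simply that the customer's expectation is captured as an exact input/output pair that a program can check.

```python
import xml.etree.ElementTree as ET

# Hypothetical test case written (or approved) by the customer:
# given this input, expect that output.
TEST_CASE = """
<testCase name="normalize-netid">
  <input>JSmith42</input>
  <expectedOutput>jsmith42</expectedOutput>
</testCase>
"""

def application_under_test(value: str) -> str:
    # Stand-in for the real application logic being specified.
    return value.lower()

def run_test_case(xml_text: str) -> bool:
    case = ET.fromstring(xml_text)
    given = case.findtext("input")
    expected = case.findtext("expectedOutput")
    # The test passes only if the actual output matches the
    # customer's codified expectation exactly.
    return application_under_test(given) == expected

print(run_test_case(TEST_CASE))
```

Because the expected output is spelled out character for character, there is no room for a developer to interpret the requirement differently from the customer.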
Once created, the test suite becomes one of the project's operational artifacts and can be stored alongside the production application, so anyone can easily find and run the tests at any time to verify production. Ideally, no documentation would be needed to run the tests, but we are finding that some extra data-preparation steps are sometimes required.
We use this method for regression testing (re-running the tests after changes, from simple to complex, to make sure the changes haven't broken anything), and we run the OpenEAI test suite on the Enterprise Service Bus (ESB). Because the tests are built into the console of the running application, anyone can run them without being an application expert.
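The regression-testing loop described above can be sketched in a few lines. This is an illustration of the idea, not the OpenEAI console itself; the stored suite and the `application_under_test` function are made-up examples.

```python
# After any change to the application, re-run every stored case
# and report which ones broke.

def application_under_test(value: str) -> str:
    # Stand-in for the real application logic.
    return value.strip().lower()

# Stored suite of (input, expected output) pairs, accumulated
# over the life of the project.
SUITE = [
    ("JSmith42", "jsmith42"),
    ("  ADoe7 ", "adoe7"),
    ("x", "x"),
]

def run_suite(suite):
    # Collect every case whose actual output no longer matches
    # the recorded expectation.
    return [(given, expected, application_under_test(given))
            for given, expected in suite
            if application_under_test(given) != expected]

failures = run_suite(SUITE)
print(f"{len(SUITE) - len(failures)}/{len(SUITE)} cases passed")
```

Because the whole suite runs mechanically, anyone can execute it after a change and see immediately whether previously working behavior has regressed.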
As one might expect, there are a couple of drawbacks to this methodology.
- Doesn’t replace the need for narrative specs (the business-logic specs where the customer defines actions and results, a sort of logical pseudo-code). This type of testing augments the narrative; we still need to know the business logic so we can design the program well.
- Doesn’t test everything. The suite covers the basic functionality plus the cases known to be troublesome, even rare ones. Programmers still need unit tests to cover all possible outcomes.
- Another system to maintain. The test suites are not forgiving: a slight change in the output that is insignificant in meaning will still cause a test to fail, so time must be set aside to keep the suites up to date. We might have 200 test cases to update with new expected results, which can be very time-consuming and tedious.
- Time-consuming to build; however, I would argue that the overall time saved by avoiding rework and bug resolution more than makes up for it.
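The fragility mentioned in the maintenance drawback is easy to demonstrate. In this sketch (a generic example, not our actual test output), two XML documents that mean the same thing fail a strict string comparison purely because of whitespace:

```python
import xml.etree.ElementTree as ET

expected = "<person><name>Ana</name></person>"
actual = "<person>\n  <name>Ana</name>\n</person>"

# A strict string comparison fails even though the meaning is identical.
print(expected == actual)

# Comparing the parsed content instead ignores the insignificant layout.
print(ET.fromstring(expected).findtext("name") ==
      ET.fromstring(actual).findtext("name"))
```

A suite that compares raw output strings will flag harmless formatting changes as failures, which is exactly why keeping the expected results current takes ongoing effort.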
Despite these shortcomings, we are enjoying this methodology because it has improved the quality of our production code. The testing is accurate and methodical, and we uncover more errors, and uncover them sooner.
So if you are a developer, how can you apply these concepts to your environment? We would love to hear your feedback, so please leave your comments below. I would like to know whether any other teams are doing this; perhaps they could share their methodologies.