One of the new capabilities I can see coming into place in UTS, and one that will become more exciting as we move through 2013, is our growing ability to look at the application layer and treat it with a greater degree of sophistication.
Dana Haggas (Director, Enterprise Applications) has a goal this year to increase the combined ability of UTS and our functional partners to catch errors not only during unit testing (where individual components of a software package are tested), but also during regression testing (which attempts to uncover bugs caused by upgrades or changes to existing software). Both types of testing are necessary, but our capabilities for each are currently limited.
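To make the distinction concrete, here is a minimal sketch of what a unit test looks like, using JUnit (one of the open source tools mentioned later in this article). The TuitionCalculator class and its numbers are invented purely for illustration and are not taken from any of our systems.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// A minimal, hypothetical unit test: it exercises one small component in
// isolation. TuitionCalculator is invented purely for this illustration.
public class TuitionCalculatorTest {

    // The component under test (it would normally live in its own source file).
    static class TuitionCalculator {
        private final double ratePerCredit;
        TuitionCalculator(double ratePerCredit) { this.ratePerCredit = ratePerCredit; }
        double totalFor(int credits) { return ratePerCredit * credits; }
    }

    @Test
    public void chargesPerCreditHour() {
        TuitionCalculator calc = new TuitionCalculator(250.00); // hypothetical rate
        assertEquals(3000.00, calc.totalFor(12), 0.001);
    }

    @Test
    public void zeroCreditsCostNothing() {
        TuitionCalculator calc = new TuitionCalculator(250.00);
        assertEquals(0.00, calc.totalFor(0), 0.001);
    }
}
```

Regression testing, by contrast, re-runs suites of checks like these after an upgrade or change to confirm that behavior which used to work still does.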
We expect to make progress in the coming year. The new PS (student) upgrade will bring a tools upgrade with it, and those new tools will give us the ability to test functionality before everything goes live.
Kevin Chen's (Manager, Integration) team also does a degree of application development, and he has a challenge similar to Dana's. His team currently uses open source tools to bring a greater level of robustness to our testing: they are already using JMeter, JUnit, and the OpenEAI Test Suite for regression and performance tests, and they are looking at Selenium for web application testing. These tools help the Integration Development team automate testing early in the process instead of doing this work in production.
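As a rough sketch of the kind of automated web check Selenium enables, the fragment below drives a browser through a login form and verifies the result. The URL, field names, and expected page title are placeholders, not our actual applications.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

// A sketch of a Selenium WebDriver check. Everything it points at
// (URL, form field names, expected title) is a placeholder.
public class LoginPageCheck {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("https://example.edu/portal/login"); // placeholder URL

            // Fill in the login form and submit it, as a user would.
            driver.findElement(By.name("username")).sendKeys("testuser");
            driver.findElement(By.name("password")).sendKeys("testpass");
            driver.findElement(By.name("submit")).click();

            // Verify we landed on the expected page.
            if (!driver.getTitle().contains("Welcome")) {
                throw new AssertionError("Login flow did not behave as expected");
            }
            System.out.println("Login check passed");
        } finally {
            driver.quit();
        }
    }
}
```

A check like this can run on every build, which is what lets the team catch problems early rather than in production.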
In a related fashion, the Monitoring Project is looking at methods for detecting outages and performance degradation in the application layer AFTER go-live. While we have a rudimentary capability to create synthetic transactions in Zabbix, it has never really been used. The new tools have improved recording capabilities and should make it possible for us to quickly develop a rich suite of functionality checks that we can run on a regularly recurring basis.
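For readers unfamiliar with the term, a "synthetic transaction" is simply a scripted interaction that a monitoring tool replays on a schedule and alarms on when it fails or slows down. The plain-Java sketch below is only an illustration of that idea, not the tool we will use; the URL and thresholds are placeholders.

```java
import java.net.HttpURLConnection;
import java.net.URL;

// Illustration only: the essence of a synthetic transaction. A real monitoring
// tool records and replays richer interactions on a schedule; this sketch just
// hits a placeholder URL, checks the response code, and times the round trip.
public class SyntheticCheck {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://example.edu/portal/health"); // placeholder
        long start = System.currentTimeMillis();

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(5000);
        int status = conn.getResponseCode();

        long elapsed = System.currentTimeMillis() - start;
        conn.disconnect();

        // Flag the application as down or degraded (here we just print).
        if (status != 200 || elapsed > 2000) {
            System.out.println("CHECK FAILED: status=" + status + ", time=" + elapsed + "ms");
        } else {
            System.out.println("OK: " + elapsed + "ms");
        }
    }
}
```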
Some of these tools are being used in the Quick Fix (see the article about the Service Availability Dashboard), but I'd like to extend this to our entire suite of applications, such as Bb, Exchange, etc.
The final frontier for these types of testing initiatives is to be able to intelligently predict the number of errors or outages we will have at the application layer. The most mature organizations I have seen can estimate the number of errors that should be found during testing and will postpone their go-live until they believe they can manage them. Similarly, we hope modeling and experience will help us plan for unplanned outages.
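One simple way such an estimate can be made (this is a sketch of the general idea, not a description of how any particular organization does it) is to extrapolate from the defect density of past releases and the size of the upcoming change. All of the numbers below are hypothetical.

```java
// A deliberately simple sketch of defect estimation: multiply the average
// historical defect density (defects per thousand lines of changed code)
// by the size of the upcoming change. All figures are made up.
public class DefectEstimate {
    public static void main(String[] args) {
        double[] pastDensities = {4.2, 3.8, 5.1}; // defects per KLOC in past releases
        double changedKloc = 12.0;                // size of the upcoming change

        double avgDensity = 0;
        for (double d : pastDensities) avgDensity += d;
        avgDensity /= pastDensities.length;

        double expectedDefects = avgDensity * changedKloc;
        System.out.printf("Expect roughly %.0f defects during testing%n", expectedDefects);
        // A go/no-go decision could compare this estimate against the team's
        // capacity to find and fix defects before the planned go-live date.
    }
}
```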
With these new tools, we will have a greater ability to detect errors prior to go-live than ever before. I think we will see big strides in each of these efforts and I hope that next month I will be able to announce progress in the monitoring project and the tools they have selected to bring these monitoring capabilities to life.