The latest release of the middle-office system has gone live, but somehow the pricing analysis screen, the most important screen in the entire system, will not accept prices entered for equity swap positions. It is the end of the trading day, and the traders are getting heated.
Ask the developers, who are new to this system, whether they tested the screen, and they say, "Yes, but we didn't modify anything on that screen, so we didn't test it fully." Ask the business analyst whether he tested the system, and he says, "Yes, but I didn't test that screen because the developers said they didn't modify anything on it, so I just entered a few prices as a litmus test and left it at that." Ask the development manager what went wrong, and he says, "Didn't anyone bother to test the system? If I go into the system and try to enter this price, it's clear to me that it doesn't work! Are you all blind?"
Did the developers modify the screen directly? No. Is the business analyst lazy? No. Is everyone in the development manager's group blind? No. (The actual cause was a change to a pricing-validator component shared by several parts of the system, including the pricing analysis screen.) But is this breakdown of testing a common occurrence in software development? Yes. And is everyone feeling sore about it? Absolutely.
So what is to be done? In this particular environment there is no systematic procedure for testing. The development group is weeks or months away from being fully up and running with automated development and testing practices and facilities, if they can carve out the time from their regular responsibilities. Other than the business analyst, there is no budget for a dedicated QA/testing group. Yet something must be put in place quickly so that a system of shared, interconnected components can be reliably tested without the business users suffering the consequences upon release.
This is the perfect time for this group to begin a testing practice built upon a foundation of acceptance testing. Acceptance tests verify the real-world conditions that every feature and function of the system must satisfy, correctly and repeatedly.
Acceptance tests represent the common point of understanding and agreement between the business users and the technologists responsible for a system. If the system satisfies all these tests consistently, every time the tests are run, then any change to a component can be readily verified as having no detrimental effect on the system. And since acceptance tests exercise real-world conditions, the business users get some measure of the system's reliability before the system is released. *
It does take a bit of work to get started and enumerate the acceptance test cases. It also takes some work to get both the business users and the IT developers and analysts to buy into the process and see the benefits. But ask the traders above and their staff whether they would rather keep being frustrated by a malfunctioning system. Building the foundation takes less time than you may think: you can start with one simple spreadsheet listing the test cases. The point is to start somewhere. The journey of a thousand miles begins with a single step.
Once you have this foundation, your developers can branch out into Test-Driven Development and other testing practices, and your business users can become more self-sufficient in setting up test cases. There are even open-source tools that can translate spreadsheets and "natural language" test cases into actionable code (FIT, for example). Imagine business users keeping up with changing business conditions by submitting test cases in Excel on their own, without having to know XML or some cryptic language. To get the most up-to-date feedback, automate the tests so that running them is a convenience enjoyed by all, not a burdensome task carried out by a lone savior/scapegoat. You may even find your organization creating defect-free releases before too long.
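To make the spreadsheet idea concrete, here is a minimal sketch in Python of a table-driven acceptance test runner. The validator function, field names, and pricing rules are all hypothetical stand-ins invented for illustration (they are not the real system's logic, and real FIT fixtures work differently); the test cases are inlined as CSV, standing in for rows a business user would maintain in a spreadsheet.

```python
import csv
import io

# Hypothetical stand-in for the shared pricing validator from the story.
# The rule here is invented: equity swaps may carry negative mark-to-market
# prices, while ordinary equities must be priced above zero.
def accepts_price(instrument_type: str, price: float) -> bool:
    if instrument_type == "equity_swap":
        return True
    return price > 0

# Acceptance test cases as a business user might list them, one row per
# real-world condition (inlined as CSV for this sketch).
CASES = """\
instrument_type,price,expected
equity,101.25,accept
equity,-5.00,reject
equity_swap,-12.40,accept
equity_swap,37.80,accept
"""

def run_acceptance_tests(csv_text: str) -> list:
    """Run every row against the validator; return the rows that fail."""
    failures = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        got = accepts_price(row["instrument_type"], float(row["price"]))
        want = row["expected"] == "accept"
        if got != want:
            failures.append(row)
    return failures

if __name__ == "__main__":
    failed = run_acceptance_tests(CASES)
    print(f"{len(failed)} failing case(s)")
```

The key property is that the test cases live in plain tabular data, not in code: when the pricing validator changes, rerunning this suite immediately shows whether any agreed-upon real-world condition, such as accepting a negative equity-swap price, has been broken.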
And the next time the middle-office system is tested, the only things taking shots will be the components, the business processes, and the assumptions made about the features and functions - but NOT the people developing, testing, or using the system.
*BONUS FEATURE: Acceptance tests provide some very valuable documentation of the functions and features of the system. The value of documentation will be addressed in a later post.