Of the basic disciplines involved in software development and delivery – analysis, design, programming, testing, management, deployment, operations, architecture, etc. – programming is usually seen as the most technically demanding and complicated to learn. Many people look primarily to programming when they assess the effectiveness of their software delivery processes. Historically, the knee-jerk response to slow delivery has been to hire more programmers. After all, software is code, right? Therefore, if there’s a problem delivering software, it must have something to do with the way it’s coded.
After some 36 years in the IT industry, most of it in a hands-on software development role, I’ve come to the conclusion that the core discipline in software development is not programming, but rather testing. Even if programming is objectively more time-consuming to master than the other disciplines, it seems to me that testing is more critical to success.
When we find we are promoting code that has a lot of defects, and we’re spending too much time chasing down bugs and fixing them, what do we do? We add more comprehensive after-the-fact testing at multiple levels, we look for ways to break dependencies and isolate code so that we can test it more thoroughly, we try to improve the effectiveness of our testing methods, and we adopt a test-first approach to building the code in the first place.
When we find our delivery process includes wasteful back-flows from test to development, what do we do? We have programmers and testers collaborate more closely throughout the development process, and we encourage programmers to learn to think like testers.
When we find we are not delivering what the customer really needs, what do we do? We pull testing forward in the process and blend requirements specification with executable examples, adopting a behavior-driven approach and eliminating the need to match up test cases with requirements after the fact.
When we find our applications don’t support the necessary system qualities (non-functional requirements or “-ilities”), what do we do? We add test cases and learn appropriate testing methods to ensure we are aware of the state of system qualities throughout the process, to avoid unpleasant surprises late in the game.
When we find our applications exhibit unexpected behavior around edge cases, what do we do? We increase the amount of exploratory testing we do, and we look for more effective ways to perform exploratory testing.
The solutions to our delivery problems keep coming back around to testing of one kind or another: changing the methods, the timing, or the scope of testing. We don’t often change the way we work in the other disciplines, apart from adding more testing activities to them, and asking specialists in those disciplines to learn more about effective testing practices.
It might be worth mentioning that I’m not using the word “testing” in the narrow sense that software testing specialists use it. I mean it in a broader sense that includes “checking” and “monitoring” and so forth. I’m thinking of the old adage that we should begin with the end in mind, and I’m thinking about automating as much of the repetitive and routine work as possible.
Anecdotally, when I’m working in the role of programmer, I find it very useful to approach development by writing test cases that express the functionality I would like to build before writing any code; written first, the tests keep the code from wandering off on tangents or accumulating gold-plating.
(The purist will insist this isn’t really “testing,” it’s a design technique. That’s true, but it’s not the whole truth. We apply testing skills when we do test-driven development, and the result is an automated regression test suite whose value extends well beyond the initial development work.)
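As a concrete sketch of that test-first rhythm, here is a hypothetical shipping-fee example in Python: the tests are written first to express the desired behavior, and then only enough code is written to satisfy them (the names and fee rules are invented for illustration).

```python
# Step 1: write tests that express the behavior we want, before any code exists.
def test_standard_order_has_flat_fee():
    assert shipping_fee(order_total=50.00) == 5.00

def test_large_order_ships_free():
    assert shipping_fee(order_total=100.00) == 0.00

# Step 2: write only enough code to make the tests pass -- no gold-plating.
def shipping_fee(order_total: float) -> float:
    """Flat $5 fee; orders of $100 or more ship free."""
    return 0.00 if order_total >= 100.00 else 5.00

if __name__ == "__main__":
    test_standard_order_has_flat_fee()
    test_large_order_ships_free()
    print("all tests pass")
```

The tests double as a regression suite: if a later change breaks the fee rule, they fail immediately, which is the long-term value the parenthetical above alludes to.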
When working in the role of analyst, I find it very useful to think about requirements in terms of how I will assure myself they have been satisfied, in a repeatable, simple, and reliable way. Behavior-driven development is the most effective way I know of at the moment, and it’s even more effective when automated.
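Tools such as Cucumber or behave formalize this with Given/When/Then syntax; the same idea can be sketched in plain Python, where a requirement is stated as an executable example (the account scenario here is hypothetical).

```python
# Requirement, expressed as an executable example:
#   Given an account with a $100 balance
#   When the holder withdraws $30
#   Then the balance is $70

class Account:
    def __init__(self, balance: float):
        self.balance = balance

    def withdraw(self, amount: float) -> None:
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

def test_withdrawal_reduces_balance():
    account = Account(balance=100.00)   # Given
    account.withdraw(30.00)             # When
    assert account.balance == 70.00     # Then

if __name__ == "__main__":
    test_withdrawal_reduces_balance()
    print("behavior verified")
```

Because the requirement and the test are the same artifact, there is nothing to match up after the fact: the specification either passes or it doesn’t.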
When working as an architect or in any sort of infrastructure support role, I find it very useful to apply test-first and behavior-oriented concepts to work such as installing software products, configuring servers, deploying components, and more.
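A minimal sketch of what that looks like for server configuration: state the desired end state as checks, then configure until they pass. (The config path and port below are hypothetical placeholders, not references to any real system.)

```python
import os
import socket

def port_is_listening(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_server() -> list:
    """Return a list of failed expectations (empty means the end state holds)."""
    failures = []
    if not os.path.exists("/etc/myapp/app.conf"):   # hypothetical config path
        failures.append("config file missing")
    if not port_is_listening("localhost", 8080):    # hypothetical service port
        failures.append("app not listening on 8080")
    return failures

if __name__ == "__main__":
    problems = check_server()
    print("OK" if not problems else "; ".join(problems))
```

Run before configuring, the checks fail; run after, they pass, and they remain useful afterward as a smoke test for the environment.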
I’ve seen cases in which common IT functions such as ETL or batch merges and updates have benefited from defining the end state first and then using that definition to guide development.
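For a batch merge, “defining the end state first” can be as simple as writing out the expected merged records before writing the merge logic, then asserting the two match. A hypothetical customer-record example:

```python
def merge_updates(base: dict, updates: dict) -> dict:
    """Apply per-key updates to a base record set; updates win on conflict."""
    merged = dict(base)
    merged.update(updates)
    return merged

# End state, defined up front before the merge logic was written:
base     = {"cust-1": "Alice", "cust-2": "Bob"}
updates  = {"cust-2": "Robert", "cust-3": "Carol"}
expected = {"cust-1": "Alice", "cust-2": "Robert", "cust-3": "Carol"}

assert merge_updates(base, updates) == expected
print("merge produces the defined end state")
```

The expected output is the specification; the merge code is whatever makes the assertion pass.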
Infrastructure development such as the creation of an ESB benefits from this approach as well, since it lets us focus on the features needed by applications currently in the pipeline rather than waiting until every detail of the ESB is complete before providing any services. The old Big Bang approach is at the heart of many past ESB/SOA implementation failures; by beginning with a clear definition of what is needed, we can deliver support as early as possible with no negative business impact.
In an operations or support role, I find it very useful to have automated facilities in place to handle business activity monitoring and to predict imminent system failures before they turn into outages that affect customers; in a sense “testing” the ongoing production operation itself, in real time.
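One simple form of this real-time “testing” is trend-based prediction: extrapolate a resource metric forward and raise a warning before it hits capacity. A minimal sketch, with hypothetical thresholds and sample data:

```python
def predict_exhaustion(samples, capacity):
    """Given periodic usage samples, estimate how many periods remain until
    capacity is reached, by linear extrapolation; None if usage is flat or
    falling (no exhaustion predicted)."""
    if len(samples) < 2:
        return None
    rate = (samples[-1] - samples[0]) / (len(samples) - 1)  # growth per period
    if rate <= 0:
        return None
    return (capacity - samples[-1]) / rate

# Disk usage in GB, sampled hourly (hypothetical data):
usage = [60.0, 64.0, 68.0, 72.0]
hours_left = predict_exhaustion(usage, capacity=100.0)
if hours_left is not None and hours_left < 24:
    print(f"warning: disk full in ~{hours_left:.0f} hours")  # ~7 hours here
```

Real monitoring systems use far more sophisticated models, but the principle is the same: assert an expectation about production continuously, and fail loudly before the customer notices.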
In general, approaching IT work with a tester’s mindset seems to mitigate or eliminate various types of problems. How will I demonstrate the software behaves as expected? How will I know when to stop adding features to this new application? How will I know when this server is about to go off the rails? How can I explore the limits of what this system can support? How can I collect operational and usage information that will help with capacity planning, or predict an impending outage? Many of these questions can be answered through some flavor of monitoring, checking, or testing.