Don’t do anything stupid on purpose
It’s a familiar adage among engineers, often posted in work areas. Does it pertain to software development? The seemingly endless circular debates about software delivery methods lead me to think so. The latest chapter in the ongoing drama is the recent schism between Lean Kanban University’s flavor of Kanban Method and the rest of the lean/kanban community. And the paint hasn’t yet dried on the sumo match between Kanban and Scrum. A few years ago (the mid-00s, if memory serves) the same debate (except for the names of the methods) raged between proponents of Evolutionary Project Management (Evo) and Extreme Programming (XP). Prior to that, we kept ourselves entertained by debating whether RUP was Agile. Before we could do that, we had to settle the debate about the relative merits of Spiral and RUP, of course. And Spiral vs. linear SDLC processes. Tomorrow, next week, or next month, it will be something else. Important questions, all.
Yet, I can’t help noticing, as Ron Jeffries puts it, it’s all the same elephant. When I stopped arguing and started listening to the elephant, I heard it say "Don’t do anything stupid on purpose." What does the phrase mean in the context of software development and delivery? To explore the question, I think it would be helpful to define the terms stupid and on purpose for that context.
The three things
Getting down to basics, we’re primarily interested in these three things:
- Building the right thing
- Building the thing right
- Keeping the work on track
"Stupid is as stupid does." (Forrest Gump)
If we can agree that the general goal of software development is to produce a piece of working software that provides value to someone, for some definition of "value," then anything that interferes with our ability to meet that goal would fall under the heading of stupid. It could be something that causes misunderstanding about stakeholder needs; such a thing would cause us not to "build the right thing." It could be something that results in a product full of defects; such a thing would cause us not to "build the thing right." It could be something that causes delay or cost overruns, such that the business value of the product is lost due to time-to-market concerns or reduced return on investment; such a thing would cause us not to "keep the work on track."
…intentionally, advisedly, by design, calculatingly, consciously, knowingly, willfully…
We all choose to do our work in certain ways. My guess is that we make those choices based on our past experience, training, and education. When we misapprehend the effects of our choices, the outcome may not be what we had expected. Yet, we may not properly understand why it happened. When we make choices that conform with the definition above, but we do not understand the cause-and-effect relationship between our choices and our outcomes, did we cause the negative outcome on purpose? I think it would be unfair to say we did so. However, once we have been exposed to the correct information, and we have had the opportunity to choose to learn it, then we no longer have an excuse. From that point forward, when we choose to do the stupid thing we used to do innocently, we are doing it on purpose. We are no longer innocent.
"…doomed to repeat it." (George Santayana)
I think all the various process frameworks and methods that have come along since the advent of electronic computers have had the same basic purpose: To help us avoid doing anything stupid on purpose while we attempt to take care of the Three Things. What is different about them, from a very high-altitude point of view, is that successive methods have taken into account lessons learned from previous experience in developing and delivering software. Early methods were based on the approaches then thought to be best suited to the purpose. Later methods have applied lessons learned from previous efforts.
In 1963, US President John F. Kennedy challenged the nation to put a man on the moon and bring him safely home again by the end of the decade. It was, to say the least, a significant challenge. One of the enabling technologies for the journey was the inertial guidance system, developed during World War II to guide rockets carrying warheads. Such a system has three main components: Gyroscopes to sense rotation, accelerometers to sense lateral motion, and a computerized control system. World War II era systems used simple analog computers. What made inertial guidance feasible for the Apollo program was the advent of the digital computer.
Digital computers run software. That’s hardly news today, but in the 1960s effective techniques for developing software were unknown. As the team at MIT fell further and further behind in developing the software for the Apollo Guidance Computer (AGC), NASA became worried that the delay could put the whole program behind schedule. They inspected the work in progress and found it was being done in a very ad hoc manner. The assumption had been that the real engineering work went into the hardware, and the "software" (whatever that was) would somehow just happen. The code was a mass of spaghetti with considerable unnecessary duplication.
Sound engineering principles had been rigorously applied throughout the Apollo program except in this area. NASA created and imposed an engineering approach to AGC software development. Even so, they had to make do with less than had been originally envisioned. Rather than serving as the primary guidance system for the moon missions, the inertial guidance system was only used when the spacecraft was out of radio contact with Earth. Otherwise, the spacecraft were guided from ground control.
This may have been the origin of the idea that software development is a branch of engineering. For example, this report from MIT, dating from 1971, lays out an approach for software development that resembles the much-discussed "waterfall" method. Note that the famous paper by Winston Royce that is usually credited with presenting the "waterfall" model was published a year earlier, in 1970. Although Royce warned against using a strictly linear approach, the notion of software development as engineering superseded his work.
But it was all to the good: A careful, step-by-step process is certainly more effective than a purely ad hoc approach. The ad hoc approach might be seen, in hindsight, as a case of doing something stupid. Now we knew enough about software development not to do that particular thing on purpose. Well, I suppose we didn’t all learn that lesson; at least not immediately. People are still writing spaghetti code with a lot of unnecessary duplication, using no particular engineering principles and following no particular process. Most of them are very, very proud of themselves, too.
We did pretty well with the linear approach, known as the Systems Development Life Cycle or SDLC, throughout the 1970s. During this period, companies were beginning to make use of digital computers in their business operations. We were writing back-office applications like general ledger systems and accounts payable systems. These had well-known "requirements," they were quite well backed up by manual procedures, and because they did not provide any sort of competitive edge there was no time-to-market pressure to deliver them.
By the end of the decade, the competitive picture was changing. The role of computers in business was growing, and we needed effective ways to deliver business applications quickly: Rather than five to eight year lead times, we needed to get code into production in a blistering two to three year time frame.
At the same time, customers of software were demanding better alignment with their needs, and greater assurances that the code would function properly and reliably. Hughes Aircraft developed the V-Model process in 1982 to respond to an RFP that required the contractor to be able to demonstrate that everything was working according to spec. It was the first large-scale example of a "test-driven" development approach. Around the same time, in 1986, Barry Boehm published his spiral model. It was the first popular software development method that took an iterative approach to realizing the requirements.
Corporate IT departments worldwide resisted the move away from the linear SDLC model, but by the start of the 1990s there were several iterative process frameworks at play, developed and published by such luminaries as Steve McConnell, Tom Gilb, Ivar Jacobson, and Philippe Kruchten.
During the 1990s, people interested in software delivery methods began to create and apply so-called "lightweight processes." The basic idea was to focus on the activities that contribute to the creation of value-add software, and to de-emphasize supporting activities that don’t directly contribute to the software. In hindsight, the idea seems almost too obvious to bear mentioning, and yet it continues to be resisted to this day. It’s really the same concept that drove the first software engineering methods: The desire to avoid doing anything stupid on purpose.
At each painful step on the road of improvement, entrenched forces insist that "proven" methods of the past are good enough, and that new methods haven’t been vetted by academic researchers who have never delivered any real code to a real customer under real time and budget constraints. When I look at the historical pattern, though, it seems to me it’s the newer method that has had more credibility and promise than the older method it sought to replace, at each stage of evolution. Are we doing something stupid when we ignore this? Are we doing it on purpose, or do we really not know better?
IMO all of the Three Things come down to a question of communication among the people involved in producing the software, including the project sponsor, business stakeholders, the development team, and anyone else who is involved in any way. People use a variety of means to communicate with one another. These are often called channels. The trick is to choose the most appropriate channel for the type of communication needed.
Communication channels span a spectrum from personal to impersonal, from direct to indirect, from informal to formal, depending on how you look at it. A face-to-face conversation represents the most personal, direct, and informal channel. A formal document, prepared and reviewed by committee and stored in a repository where interested parties must search for it, represents the most impersonal, indirect, and formal channel. Numerous options exist between the two extremes, such as the telephone, email, formal meetings, videoconferencing, and more.
Is there a "best" communication channel for information pertaining to a software development initiative? The answer is not crystal clear.
Laura Stack of The Productivity Pro, Inc., recommends defaulting to the most impersonal and most indirect channel, and resorting to more personal and more direct channels when circumstances call for it. Scott Ambler, Alistair Cockburn, and others recommend precisely the opposite: Try to use the most personal and direct channel available, and resort to less personal and less direct channels only when circumstances make it necessary.
As there is no Single Right Answer, it’s up to us to choose the communication channel that best helps us achieve our goals. To an extent, the type of information we need to convey dictates the appropriate communication channels. Static, standard, unambiguous information can be conveyed accurately using impersonal, indirect, formal channels. For example, technical standards for calling services through an enterprise service bus can be documented in a central location. The overhead of convening meetings to discuss the same, unchanging standards over and over again would cause delay, one of the outcomes of doing stupid things. It would negatively impact Thing #3.
In contrast, details about the look and feel of a data entry form are dynamic, non-standard, and usually ambiguous information. Stakeholders generally cannot fully and accurately specify what they would like a data entry form to look like purely from their imaginations, using words that are subject to interpretation, long before they have had a chance to have a look at any results. More often, they benefit from seeing and using interim results of development, so that they can refine their concept of what they ultimately will need. Miscommunication in this area would cause misalignment with stakeholder needs, another outcome of doing stupid things. When we choose to convey this type of information using an impersonal, indirect, formal channel, we are doing something that negatively impacts Thing #1.
All the various process frameworks and methods that people enjoy debating provide for some forms of formal communication and some forms of informal communication. All of them define artifacts of some sort that capture static, long-lived information. All of them define meetings or checkpoints or ceremonies of some sort that bring people together, where they can communicate directly and informally. The problem isn’t that my method is better than your method; the problem is that people choose an inappropriate communication channel for some type of information.
We are talking specifically about software development and delivery work. It turns out that a significant proportion of the information that has to be communicated in this type of work can be described as dynamic and subject to different people’s assumptions and interpretations. In this context, we will normally want to default to personal, direct, and informal communication channels, and to resort to impersonal, indirect, and formal channels only for relatively static and standard information.
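The heuristic described above can be sketched as a simple decision rule. This is purely my own illustration; the attribute names and the function are hypothetical, not part of any method's formal definition:

```python
def recommend_channel(is_static: bool, is_standard: bool, is_ambiguous: bool) -> str:
    """Suggest a communication channel based on the nature of the information.

    Default to the personal, direct, informal end of the spectrum;
    reserve formal artifacts for static, standard, unambiguous content.
    """
    if is_static and is_standard and not is_ambiguous:
        return "formal document in a central repository"
    return "face-to-face conversation, with interim results to react to"

# A stable ESB calling standard: write it down once, centrally.
print(recommend_channel(is_static=True, is_standard=True, is_ambiguous=False))

# The look and feel of a data entry form: talk it through, show drafts.
print(recommend_channel(is_static=False, is_standard=False, is_ambiguous=True))
```

The point of the sketch is the default: anything that isn't provably static, standard, and unambiguous falls through to the direct, informal channel.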
When we depend heavily on formal communication channels during software development, we will probably have a negative effect on all Three Things. Miscommunication between stakeholders and development teams leads to a product that doesn’t align well with stakeholder needs. That’s Thing #1. Miscommunication between technical professionals leads to inconsistent coding standards, cobbled-together architectures, fragile interfaces, technical debt, and unreliable software. That’s Thing #2. The effort to recover from all the instances of miscommunication, the rework, the defect correction, the waiting for answers, causes extended lead times and cost overruns. That’s Thing #3. It’s the Trifecta.
Lightweight development methods emphasize informal, direct, immediate communication with plenty of opportunity for clarification and lots of feedback from stakeholders about interim results. The fact this is different from the linear SDLC approach is not "wrong." It is a correction based on lessons learned from past experience. We are doing the best we can, based on the collective experience of the software industry, to avoid doing anything stupid on purpose. We still do stupid things. We just haven’t yet learned why they’re stupid. Don’t worry; we’re working on that.
In a keynote talk at a recent conference, Robert C. "Uncle Bob" Martin was quoted as saying, "The purpose of Agile is to destroy hope." After I just said that lightweight methods represent a correction for past mistakes, why would I bring up such a comment? The point is that past methods tended to depend a great deal on hope. Based on bottom-up, time-based estimates of individual tasks, managers hoped they could guess how long a project would take and how much it would cost, as well as how many (ahem) "resources" they would need.
The aforementioned lightweight methods, including those touted under the "agile" banner, seek (among other things) to base project planning on empirical observation and measurement of outcomes, rather than on guesswork and measurement of busy-ness. When we wagered significant sums of other people’s money on nothing more than hope, we were doing something stupid. We meant no harm. We just didn’t know better.
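One common way to ground planning in empirical observation is to forecast from measured delivery rates rather than from bottom-up task estimates. Here is a minimal Monte Carlo sketch of that idea; the function, the throughput history, and the backlog size are all invented for illustration:

```python
import random

def forecast_weeks(backlog: int, weekly_throughput: list[int],
                   trials: int = 10_000, seed: int = 42) -> int:
    """Estimate weeks to finish `backlog` items by resampling observed
    weekly throughput: an empirical forecast, not a task-level guess."""
    rng = random.Random(seed)  # fixed seed for a reproducible forecast
    results = []
    for _ in range(trials):
        remaining, weeks = backlog, 0
        while remaining > 0:
            # Assume a future week looks like a randomly chosen past week.
            remaining -= rng.choice(weekly_throughput)
            weeks += 1
        results.append(weeks)
    results.sort()
    # Report the 85th percentile rather than the average, so the
    # forecast carries an explicit confidence level instead of hope.
    return results[int(trials * 0.85)]

# Invented history: items completed in each of the last six weeks.
print(forecast_weeks(backlog=60, weekly_throughput=[4, 7, 5, 6, 3, 8]))
```

The contrast with hope-based planning is in the inputs: nothing here depends on anyone's guess about how long an individual task will take, only on what the team has actually delivered.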
Now we do.
Things aren’t what they ain’t
Things are what they are. When we understand the nature of a thing, we can anticipate how the thing is likely to respond to various stimuli. When we want to create and deliver a software product, we have to understand three key kinds of things: resources, people, and constraints. The various elements that contribute to a software solution fall into one of these categories. When we misunderstand the category to which an element belongs, we are likely to be surprised by the outcome.
Perhaps the single most debilitating misunderstanding about the nature of things has been the assumption that people are resources. Many stupid things are done innocently, because managers and stakeholders believe people are resources. When they learn that people are not resources, they will be equipped to avoid doing these stupid things on purpose.
What is a resource? A resource is a thing that always behaves in the same way. For example, a chair is a type of resource. If a chair breaks, it can be replaced by another chair. The new chair will immediately function as a chair at 100% of its rated capacity. It will have no learning curve to discover how to be a chair. Its previous experience in other rooms will not affect its behavior in the new room. It will not have a personal style of being a chair, different from the personal styles of other chairs in the room. The inter-chair dynamics in the room will not be affected by the replacement of one of the chairs. The chair will never feel sick, bored, frustrated, tired, or excited. It will never have outside responsibilities, such as picking up its baby chair from chair day care. The performance of the new chair as a chair can be measured and predicted in exactly the same way as any other chair. These observations apply to any resource.
A human being does not have the defining characteristics of a resource. If a team member quits, he/she can be replaced by another person. The new person will not immediately function at 100% of capacity. He/she will have a learning curve to discover how to function in the organization and on the team. His/her previous work experience may be similar to that of the person he/she replaced, but it will not be identical. He/she will have different strengths and weaknesses than the previous person. He/she will have a unique personal style for doing the job, different from the last person who held the same job. The interpersonal dynamics of the team will change as a result of the new person joining; it will take some time for the team to settle in. At various times the person will feel sick, bored, frustrated, tired, or excited. He/she will have outside responsibilities, such as picking up a child from day care. The performance of a person cannot be measured and predicted in a mechanically consistent and predictable way. A person is not a resource.
Over the years, many managers and stakeholders have mistaken people for resources. They refer to people as resources when they speak. They predict the performance of people as if they were resources. They expect people to function as resources. They are at a loss to understand why this does not happen. Now, having read this, if they do it again they will be doing something stupid on purpose. They are no longer innocent.