There’s a buzz about systems thinking in the software world these days. Systems thinking isn’t new: Jerry Weinberg’s An Introduction to General Systems Thinking was first published in 1975, and Peter Senge’s The Fifth Discipline came out in 1990.

Still, we haven’t turned the corner on this thinking revolution.  That may be because the pragmatic benefits of the systems thinking approach aren’t always clear, and some people find system diagrams inaccessible. Further, it’s not always easy to see what’s a system problem and what’s not.

You probably have a system problem if:

  • You have tried repeatedly to solve a problem and it keeps coming back.
  • You replace people and the overall behavior doesn’t change.
  • Multiple factors interact to produce the result.

When you suspect a system problem, gather data that will help you understand how the system performs and how various factors interact. The following steps outline one way to “see” your system.

1) Expand the time horizon. Look back to the point where the problem may have started.  If you have historical data, look back at least two years. Notice any events that might have precipitated the problem (but don’t jump to conclusions).

2) Brainstorm factors that might be related to the problem. Choose factors that are potentially measurable. Name those variables using neutral or positive language to avoid confusing double negatives.

3) Sketch a graph of the variables to see how they move in relation to each other. Notice whether they move in parallel, in opposite directions, or seem unrelated (see the sketch after this list for one way to start).

4) Formulate a hypothesis based on the graph.  See if you can test it in a small way.
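
If your historical data lives in a spreadsheet, a few lines of code are enough for the rough picture steps 3 and 4 call for. Here’s a minimal sketch in Python using pandas and matplotlib; the column names and numbers are made-up illustrations, not a prescription:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical quarterly data; substitute whatever records you actually keep.
data = pd.DataFrame({
    "quarter": [f"Q{i}" for i in range(1, 9)],
    "budget_overrun_pct": [120, 180, 90, 250, 160, 300, 110, 210],
    "stakeholder_meetings": [12, 4, 20, 2, 8, 1, 18, 5],
    "new_technologies": [1, 3, 0, 4, 2, 5, 1, 3],
})

# Normalize each variable to a 0-1 scale so numbers of different
# magnitudes can share one set of axes.
variables = ["budget_overrun_pct", "stakeholder_meetings", "new_technologies"]
normalized = (data[variables] - data[variables].min()) / (
    data[variables].max() - data[variables].min()
)
normalized.index = data["quarter"]

# Step 3: graph the variables together and eyeball how they move:
# in parallel, in opposite directions, or seemingly unrelated.
normalized.plot(marker="o")
plt.title("Candidate variables over time (normalized)")
plt.ylabel("Normalized value")
plt.show()
```

If two lines rise and fall together, that’s a candidate relationship to turn into a testable hypothesis (step 4). Treat it as a prompt for investigation, not proof of cause.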

Here’s how one group of managers used this process to understand and improve their system.

During the year-end financial review, the executives at FinCo were displeased that most of the IT projects were at least 100 per cent over budget. Wanting to reduce budget over-runs, the executives established a bonus target tied to meeting +/- 5 per cent of the original project budget. They reasoned that project managers weren’t trying hard enough to meet budget targets, since there was no real consequence (to the project managers) for not meeting targets. The incentive would provide the will project managers needed and focus their attention on that important number.

At the first quarterly review, most of the projects appeared to be on track to meet the target. But as the year went on, it was clear that many projects were still blasting through budgets at an alarming rate. Since the incentive was clear, they reasoned, the current crop of project managers must not have the skill to deliver to budget. This time the executives replaced the project managers.

But there was no cheer at year end. Once again, every single project was over budget. Faced with another year of disappointing results, the executives decided to try another approach. The fact that the results didn’t change after they changed the people made them think that perhaps their project managers weren’t entirely at fault.

First, they expanded their time horizon and looked as far back as they had project data. They were shocked to see that out of dozens of projects, only one had spent less than 100 per cent of its original budget–and it wasn’t a project managed by one of their replacement project managers, the ones they had hoped would work miracles.

Many projects spent 200-500 per cent more than their original budget. Could it really be that, over six years, only one project manager had the will and skill to meet the budget? Based on their previous interventions, they could see that changing incentives and changing people didn’t change the result. (Though changing the incentive did bring a change in behavior: project managers managed to game the numbers, at least early in the project.)

The executives brainstormed a list of potentially measurable factors that might be related to the problem: size of project, lack of business stakeholder involvement, number of scope changes, team size, length of project, number of new technologies, staff turnover, full-time vs. part-time team assignment.

To make it easier to see relationships among variables, they used neutral or positive words to avoid confusing double negatives: for example, stakeholder involvement rather than lack of stakeholder involvement.

They decided to look at three factors: stakeholder involvement, measured by the number of meetings between the stakeholder and the project team; original project size measured in effort months; and number of new or unfamiliar technologies used on the project. They didn’t worry about having precise measurements–they were going for a rough picture that would help them form a testable hypothesis.

Here’s what they saw:

[Figure: rough graphs of budget over-run plotted against stakeholder involvement, project size, and number of new technologies]

Based on this initial sketch, they did more research and more graphing, looking for useful information about how their projects worked, and which levers were likely to reduce budget over-runs.
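
In code, that kind of graphing might look like the sketch below: one rough scatter plot per candidate factor, with budget over-run on a shared vertical axis. The numbers are invented for illustration; FinCo’s actual data isn’t shown here.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical per-project data (illustrative values only):
# over-run as a percentage of the original budget, stakeholder
# meetings held, size in effort-months, and count of new technologies.
projects = pd.DataFrame({
    "overrun_pct": [180, 350, 90, 420, 260, 150, 500, 210],
    "stakeholder_meetings": [6, 2, 22, 1, 4, 14, 0, 8],
    "effort_months": [40, 120, 18, 200, 90, 30, 260, 60],
    "new_technologies": [2, 4, 0, 5, 3, 1, 6, 2],
})

# One scatter plot per factor, sharing the y-axis, so rough
# relationships stand out at a glance.
factors = ["stakeholder_meetings", "effort_months", "new_technologies"]
fig, axes = plt.subplots(1, len(factors), figsize=(12, 4), sharey=True)
for ax, factor in zip(axes, factors):
    ax.scatter(projects[factor], projects["overrun_pct"])
    ax.set_xlabel(factor)
axes[0].set_ylabel("Budget over-run (per cent)")
fig.suptitle("Budget over-run vs. candidate factors")
plt.tight_layout()
plt.show()
```

A downward-sloping cloud in the stakeholder-meetings panel, say, would support a hypothesis like “projects with more stakeholder contact over-run less”: exactly the kind of statement the executives could then test in a small way.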

When the next project prioritization came up, the executives formulated an experiment. They chose two of the biggest projects and set a guideline that these projects use rolling budgets and deliver some piece of useful working software within three months. They directed the project sponsors and project managers of those projects to work together to identify smaller chunks of work that the project teams could build in short timeboxes, leading up to the three-month delivery. They also gained commitment from project sponsors to participate in project meetings.

At the end of three months, both projects had delivered useful software. Neither hit its budget numbers exactly–as one would expect. However, the stakeholders on both projects agreed that the new arrangement helped build confidence and trust. All agreed they had a better basis to predict costs for the next timebox than had been possible at the outset of the project.

Based on these results (and despite some grumbling about re-work and wasted work in progress), the executives revised their project portfolio. They structured projects to complete useful working software within three months–at which point they could re-evaluate based on data and the current environment. Rather than plan and budget a year ahead, they committed to adaptive planning and incremental funding.

Initially, the executives in this story believed in their budget numbers more than they believed in their project managers.  It is common for people to put faith in predictive numbers such as budgets and estimates.  That faith led the executives down the wrong path–they tried to fix the problem by fixing their project managers. It wasn’t until they looked beyond the budget numbers that they began to see their project system and understand how to improve it.

Of course budget reports can be useful: they can alert you to a problem. But they don’t give you the information you need to really improve the situation. To do that, look beyond budget reports. If you want to steer your system, you have to see it first. Using the steps described in this article can help you understand problems in your organization–whether the problem relates to projects, technical debt, or staff turnover–and see the dynamics of your system. Armed with that understanding, you’ll be better equipped to find a fix that fits.
