Archive for April 2012
Projects as networks of commitments – a paper preview
Mainstream project management standards and texts tend to focus on the tools, techniques and processes needed to manage projects. They largely ignore the fact that projects are conceived, planned, carried out and monitored by people. In a paper entitled, Towards a holding environment: building shared understanding and commitment in projects, Paul Culmsee and I present a viewpoint that puts people and their project-relevant concerns at the center of projects. This post is a summary of the main ideas of the paper which is published in the May 2012 issue of the International Journal of Managing Projects in Business.
Conventional approaches to project management tend to give short shrift to the social aspects of projects – issues such as stakeholders with differing viewpoints regarding the rationale and goals of a project. Our contention is that problems arising from stakeholder diversity are best resolved by helping stakeholders achieve a shared (or common) understanding of project goals and, based on that, a shared commitment to working towards them. Indeed, we believe this is a crucial but often overlooked facet of managing the early stages of a project.
One of the prerequisites to achieving shared understanding is open dialogue – dialogue that is free from the politics, strategic behaviours and power games that are common in organisations. Details of what constitutes such dialogue and the conditions necessary for it are described by the philosopher Juergen Habermas in his theory of communicative rationality. Our paper draws extensively on Habermas’ work and also presents a succinct summary of the main ideas of communicative rationality.
The conditions required for open dialogue in the sense described by Habermas are:
- Inclusion: all affected stakeholders should be included in the dialogue.
- Autonomy: all participants should be able to present their viewpoints and debate those of others independently.
- Empathy: participants must be willing to listen to viewpoints that may be different from theirs and make the effort to understand them.
- Power neutrality: differences in status or authority levels should not affect the discussion.
- Transparency: participants must be completely honest when presenting their views or discussing those of others.
We call an environment which fosters open dialogue a holding environment. Although a holding environment as characterised above may seem impossible to create, it turns out that an alliance-based approach to projects can approximate the conditions necessary for one. In brief, alliancing is an approach to projects in which different stakeholders agree, by contract, to work collaboratively to achieve mutually agreed goals while sharing risks and rewards in an equitable manner. There are a fair number of large projects that have been successfully delivered using such an approach (see the case studies on the Center for Collaborative Contracting web site).
Once such an approach is endorsed by all project stakeholders, most of the impediments to open dialogue are removed. In the paper we use a case study to illustrate how stakeholder differences can be resolved in such an environment. In particular we show how project-relevant issues and the diverse viewpoints on them can be captured and reconciled using the IBIS (Issue-based information system) notation (see this post for a quick introduction to IBIS). It should be noted that our concept of a holding environment does not require the use of IBIS; any means to capture issues, ideas and arguments raised in a debate will work just as well. The aim is to reach a shared understanding, and once stakeholders do this – using IBIS or any other means – they are able to make mutual commitments to action.
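The paper uses hand-drawn IBIS maps, but for readers who think in code, here is a minimal sketch of how an IBIS map might be represented as a data structure. IBIS has three node types – issues (questions), ideas (possible responses) and arguments (pros and cons attached to ideas). The project question and responses below are invented purely for illustration; they are not from the case study in the paper.

```python
# A minimal sketch of an IBIS map as a nested data structure.
# Node types: one issue (a question), several ideas (responses),
# each with supporting and opposing arguments.
ibis_map = {
    "issue": "How should the new depot site be selected?",
    "ideas": [
        {
            "idea": "Reuse the existing northern site",
            "pros": ["Lower land cost", "Known ground conditions"],
            "cons": ["Poor road access"],
        },
        {
            "idea": "Acquire a greenfield site",
            "pros": ["Room for future expansion"],
            "cons": ["Longer approval process"],
        },
    ],
}

# Walking the map gives a flat summary of the debate so far
for node in ibis_map["ideas"]:
    print(f"{node['idea']}: {len(node['pros'])} pro(s), {len(node['cons'])} con(s)")
```

As noted above, any comparable means of capturing issues, ideas and arguments would serve the same purpose; the point is that the debate is made visible and shareable.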
It should be emphasised that an alliance-based approach to projects takes a fair bit of effort and commitment from all parties to implement successfully. In general such effort is justifiable only for very large projects, typically public infrastructure projects (which is why many government agencies are interested in it). It is interesting to speculate how such an approach can be “scaled down” to smaller projects like the ones undertaken by corporate IT departments. Unfortunately such speculations are not permitted in research papers. However, we discuss some of these at length in our book, The Heretic’s Guide to Best Practices.
In their ground-breaking book on design, Terry Winograd and Fernando Flores describe organisations as networks of commitments. We believe this metaphor is appropriate for projects too. As we state in the paper, “Organisations and projects are made up of people, and it is the commitments that people make (to carry out certain actions) that make organisations or projects tick.” This metaphor – that projects are networks of commitments – lies at the heart of the perspective we propose in the paper. The focus of project management ought to be on how commitments are made and maintained through the life of a project.
The shape of things to come: an essay on probability in project estimation
Introduction
Project estimates are generally based on assumptions about future events and their outcomes. As the future is uncertain, the concept of probability is sometimes invoked in the estimation process. A good deal has been written about how probabilities can be used in developing estimates; indeed, there are a good number of articles on this blog – see this post or this one, for example. However, most of these writings focus on the practical applications of probability rather than on the concept itself – what it means and how it should be interpreted. In this article I address the latter point in a way that will (hopefully!) be of interest to those working in project management and related areas.
Uncertainty is a shape, not a number
Since the future can unfold in a number of different ways one can describe it only in terms of a range of possible outcomes. A good way to explore the implications of this statement is through a simple estimation-related example:
Assume you’ve been asked to do a particular task relating to your area of expertise. From experience you know that this task usually takes 4 days to complete. If things go right, however, it could take as little as 2 days. On the other hand, if things go wrong it could take as long as 8 days. Therefore, your range of possible finish times (outcomes) is anywhere between 2 and 8 days.
Clearly, these outcomes are not all equally likely. The most likely outcome is that you will finish the task in 4 days. Moreover, the likelihood of finishing in less than 2 days or more than 8 days is zero. If we plot the likelihood of completion against completion time, it would look something like Figure 1.
Figure 1 raises a couple of questions:
- What are the relative likelihoods of completion for all intermediate times – i.e. those between 2 and 4 days and those between 4 and 8 days?
- How can one quantify the likelihood of intermediate times? In other words, how can one get a numerical value of the likelihood for all times between 2 and 8 days? Note that we know from the earlier discussion that this must be zero for any time less than 2 or greater than 8 days.
The two questions are actually related: as we shall soon see, once we know the relative likelihood of completion at all times (compared to the maximum), we can work out its numerical value.
Since we don’t know anything about intermediate times (I’m assuming there is no historical data available, and I’ll have more to say about this later…), the simplest thing to do is to assume that the likelihood increases linearly (as a straight line) from 2 to 4 days and decreases in the same way from 4 to 8 days as shown in Figure 2. This gives us the well-known triangular distribution.
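For readers who would like to play with the triangular distribution, here is a short Python sketch that simulates task durations using the 2/4/8-day figures from the example. The library call, sample size and seed are incidental choices for illustration:

```python
import random

# Parameters from the example: optimistic, most likely and
# pessimistic durations, in days
low, mode, high = 2.0, 4.0, 8.0

# Draw simulated task durations from a triangular distribution
random.seed(42)
samples = [random.triangular(low, high, mode) for _ in range(100_000)]

mean = sum(samples) / len(samples)
print(f"mean simulated duration: {mean:.2f} days")  # close to (2+4+8)/3, about 4.67
```

Note that the mean sits above the most likely value of 4 days – the long right-hand side of the triangle drags the average out, which is worth remembering when a single-number estimate is requested.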
Note: The term distribution is simply a fancy word for a plot of likelihood vs. time.
Of course, this isn’t the only possibility; there are an infinite number of others. Figure 3 is another (admittedly weird) example.
Further, it is quite possible that the upper limit (8 days) is not a hard one. It may be that in exceptional cases the task could take much longer (say, if you call in sick for two weeks) or even not be completed at all (say, if you leave for that mythical greener pasture). Catering for the latter possibility, the shape of the likelihood might resemble Figure 4.
From the figures above, we see that uncertainties are shapes rather than single numbers, a notion popularised by Sam Savage in his book, The Flaw of Averages. Moreover, the “shape of things to come” depends on a host of factors, some of which may not even be on the radar when a future event is being estimated.
Making likelihood precise
Thus far, I have used the word “likelihood” without bothering to define it. It’s time to make the notion more precise. I’ll begin by asking the question: what common sense properties do we expect a quantitative measure of likelihood to have?
Consider the following:
- If an event is impossible, its likelihood should be zero.
- The sum of likelihoods of all possible events should equal complete certainty. That is, it should be a constant. As this constant can be anything, let us define it to be 1.
In terms of the example above, if we denote time by t and the likelihood by p(t), then:

p(t) = 0 for t < 2 and t > 8

and

Σ p(t) = 1, where the sum runs over 2 ≤ t ≤ 8

Here Σ p(t) denotes the sum of all non-zero likelihoods – i.e. those that lie between 2 and 8 days. In simple terms, this is the area enclosed by the likelihood curves and the x axis in figures 2 to 4. (Technical Note: Since t is a continuous variable, the sum should really be written as an integral rather than a simple sum, but this is a technicality that need not concern us here.)

p(t) is, in fact, what mathematicians call probability – which explains why I have used the symbol p rather than l. Now that I’ve explained what it is, I’ll use the word “probability” instead of “likelihood” in the remainder of this article.
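The requirement that the likelihoods sum to 1 can be checked numerically for the triangular case of Figure 2. The sketch below approximates the area under the likelihood curve with a simple Riemann sum; the step size is an arbitrary choice:

```python
# Numerical check of the normalisation condition for the triangular
# likelihood of Figure 2: the area under p(t) between t = 2 and t = 8
# should equal 1. A midpoint Riemann sum stands in for the integral.
a, c, b = 2.0, 4.0, 8.0   # best case, most likely, worst case (days)
h = 2 / (b - a)           # peak height, fixed by requiring the area to be 1

def p(t):
    """Triangular likelihood (probability density), zero outside [a, b]."""
    if t < a or t > b:
        return 0.0
    if t <= c:
        return h * (t - a) / (c - a)   # rising side of the triangle
    return h * (b - t) / (b - c)       # falling side of the triangle

n = 100_000
dt = (b - a) / n
total = sum(p(a + (i + 0.5) * dt) for i in range(n)) * dt
print(f"total probability = {total:.6f}")  # very close to 1.0
```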
With these assumptions in hand, we can now obtain numerical values for the probability of completion for all times between 2 and 8 days. This can be figured out by noting that the area under the probability curve (the triangle in figure 2 and the weird shape in figure 3) must equal 1. I won’t go into any further details here, but those interested in the maths for the triangular case may want to take a look at this post where the details have been worked out.
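For the triangular case, the calculation is short enough to sketch in code: the area condition fixes the height of the triangle, and the cumulative probability of finishing within any given time then follows from elementary geometry. The formulas below are the standard triangular cumulative distribution; the 2/4/8-day parameters are those of the running example:

```python
# Triangular distribution from the example: a = 2 (best case),
# c = 4 (most likely), b = 8 (worst case), all in days.
a, c, b = 2.0, 4.0, 8.0

# The area under the triangle must equal 1, which fixes its height:
# area = 0.5 * base * height = 0.5 * (b - a) * h = 1, so h = 2 / (b - a)
h = 2 / (b - a)
print(f"peak likelihood at t = {c}: {h:.3f} per day")  # 1/3 per day

def prob_finish_by(t):
    """Cumulative probability of finishing within t days (triangular CDF)."""
    if t <= a:
        return 0.0
    if t <= c:
        return (t - a) ** 2 / ((b - a) * (c - a))
    if t <= b:
        return 1 - (b - t) ** 2 / ((b - a) * (b - c))
    return 1.0

print(f"P(finish within 4 days) = {prob_finish_by(4):.3f}")  # 1/3
print(f"P(finish within 8 days) = {prob_finish_by(8):.3f}")  # 1.0
```

One mildly counter-intuitive consequence is visible here: although 4 days is the most likely single outcome, the chance of finishing within 4 days is only 1/3.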
The meaning of it all
(Note: parts of this section borrow from my post on the interpretation of probability in project management)
So now we understand that uncertainty is actually a shape corresponding to a range of possible outcomes, each with its own probability of occurrence. Moreover, we also know, in principle, how the probability can be calculated for any valid value of time (between 2 and 8 days). Nevertheless, we are still left with the question of what a numerical probability really means.
As a concrete case from the example above, what do we mean when we say that there is a 100% chance (probability = 1) of finishing within 8 days? Some possible interpretations of such a statement include:
- If the task is done many times over, it will always finish within 8 days. This is called the frequency interpretation of probability, and is the one most commonly described in maths and physics textbooks.
- It is believed that the task will definitely finish within 8 days. This is called the belief interpretation. Note that this interpretation hinges on subjective personal beliefs.
- Based on a comparison to similar tasks, the task will finish within 8 days. This is called the support interpretation.
Note that these interpretations are based on a paper by Glen Shafer. Other papers and textbooks frame these differently.
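To make the frequency interpretation concrete: if we assume the 2/4/8-day triangular distribution of Figure 2, a statement such as “there is a 5/6 chance of finishing within 6 days” is read as the fraction of a large number of hypothetical repetitions of the task that finish within 6 days. A quick simulation illustrates the idea; the distribution, trial count and seed are assumptions made for illustration only:

```python
import random

# Frequency interpretation, illustrated: probability as the fraction of
# many hypothetical repetitions of the task that finish within a deadline.
random.seed(1)
trials = [random.triangular(2.0, 8.0, 4.0) for _ in range(200_000)]
frac = sum(t <= 6.0 for t in trials) / len(trials)
print(f"fraction of runs finishing within 6 days: {frac:.3f}")  # about 5/6
```

Of course, as discussed below, real tasks cannot actually be repeated like this – which is precisely the limitation of the frequency interpretation in practice.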
The first thing to note is how different these interpretations are from each other. For example, the first one offers a seemingly objective interpretation whereas the second one is unabashedly subjective.
So, which is the best – or most correct – one?
A person trained in science or mathematics might claim that the frequency interpretation wins hands down because it lays out an objective, well-defined procedure for calculating probability: simply perform the same task many times and note the completion times.
The problem is that in real-life situations it is impossible to carry out exactly the same task over and over again. Sure, it may be possible to do almost the same task, but even straightforward tasks such as vacuuming a room or baking a cake can hold hidden surprises (vacuum cleaners do malfunction, and a friend may call when one is mixing the batter for a cake). Moreover, tasks that are complex (as is often the case in project work) tend to be unique and can never be performed in exactly the same way twice. Consequently, the frequency interpretation is great in theory but not much use in practice.
“That’s OK,” another estimator might say, “when drawing up an estimate, I compared the task to other similar ones that I have done before.”
This is essentially the support interpretation (interpretation 3 above). However, although this seems reasonable, there is a problem: tasks that are superficially similar will differ in the details, and these small differences may turn out to be significant when one is actually carrying out the task. One never knows beforehand which variables are important. For example, my ability to finish a particular task within a stated time depends not only on my skill but also on things such as my workload, stress levels and even my state of mind. There are many external factors that one might not even recognize as being significant. This is a manifestation of the reference class problem.
So where does that leave us? Is probability just a matter of subjective belief?
No, not quite: in reality, estimators will use some or all of the three interpretations to arrive at “best guess” probabilities. For example, when estimating a project task, a person will likely use one or more of the following pieces of information:
- Experience with similar tasks.
- Subjective belief regarding task complexity and potential problems. Also, their “gut feeling” of how long they think it ought to take. These factors often drive excess time or padding that people work into their estimates.
- Any relevant historical data (if available).
Clearly, depending on the situation at hand, estimators may be forced to rely on one piece of information more than others. However, when called upon to defend their estimates, estimators may use other arguments to justify their conclusions depending on who they are talking to. For example, in discussions involving managers, they may use hard data presented in a way that supports their estimates, whereas when talking to their peers they may emphasise their gut feeling based on differences between the task at hand and similar ones they have done in the past. Such contradictory representations tend to obscure the means by which the estimates were actually made.
Summing up
Estimates are invariably made in the face of uncertainty. One way to get a handle on this is by estimating the probabilities associated with possible outcomes. Probabilities can be reckoned in a number of different ways. Clearly, when using them in estimation, it is crucial to understand how probabilities have been derived and the assumptions underlying them. We have seen three ways in which probabilities are interpreted, corresponding to three different ways in which they are arrived at. In reality, estimators may use a mix of the three approaches, so it isn’t always clear how the numerical value should be interpreted. Nevertheless, an awareness of what probability is and its different interpretations may help managers ask the right questions to better understand the estimates made by their teams.