Can You Predict If A Project Is Going To Be Successful?

We all have failed projects. But what if we could predict how likely a project is to be successful? Can we?

There are certainly some factors that we would all agree are clear indicators of a project’s probability of success. Take two projects, identical in every way, except that one has all resources utilised at 200% of capacity and the other has all resources utilised at 50% of capacity. There is universal agreement that the project with over-utilised resources is less likely to be successful than the project with under-utilised resources. In this very abstract scenario, project success has an element of predictability.

But that doesn’t mean a project with more resource availability always has a higher probability of success than a project with less, even if all other things are equal. For example, is a project with resources utilised at 51% of capacity more likely to be successful than a project with resources utilised at 52% of capacity? The difference is probably negligible; both projects are equally likely to be successful. But what about a project with resources utilised at 100% of capacity compared to a project with resources utilised at 101% of capacity? The difference is the same as in the previous example (1%), but is the effect on the probability of success different?

So now we have a situation where there is a tipping point beyond which a project’s likelihood of success starts to change, and another tipping point later, after which further changes have no discernible effect (projects with 500% or 501% resource utilisation, for example, are equally likely to be successful). This would give us a success curve.
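To make the shape concrete, here is a minimal sketch of such a success curve. The tipping-point values (100% and 500% utilisation) and the linear decline between them are purely illustrative assumptions, not measurements:

```python
def success_curve(utilisation, lower_tip=1.0, upper_tip=5.0):
    """Hypothetical success curve for resource utilisation.

    Flat below the lower tipping point (no effect), declining between
    the two tipping points, and flat again above the upper tipping
    point (no further discernible effect). All values illustrative.
    """
    if utilisation <= lower_tip:
        return 1.0   # under-utilised: no penalty
    if utilisation >= upper_tip:
        return 0.0   # beyond here, more over-utilisation makes no difference
    # assumed linear decline between the tipping points
    return 1.0 - (utilisation - lower_tip) / (upper_tip - lower_tip)

print(success_curve(0.51), success_curve(0.52))  # 1.0 1.0     (51% vs 52%: no difference)
print(success_curve(1.00), success_curve(1.01))  # 1.0 0.9975  (100% vs 101%: a difference appears)
print(success_curve(5.00), success_curve(5.01))  # 0.0 0.0     (500% vs 501%: no difference)
```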

This leads to the next logical question: what are the values of the tipping points? Of course, we can never truly answer that question. You can’t set up identical projects with different values of resource availability, keep everything else equal, and then run the projects to completion to see which ones were successful and which ones failed. Maybe that means project success is not predictable? Or is there another way?

Just as we developed the argument around resource utilisation and showed how it could affect project success rates, there are other variables for which we can develop a similar argument. Keep everything else equal and alter only the amount of budget contingency; keep everything else equal and alter only the amount of slack on the critical path; keep everything else equal and alter only the amount of scope creep. All these scenarios develop along the same lines as the resource availability example. Whilst we can’t provide absolute measurements and can’t define our tipping points, we can at least develop a theoretical model, a probability of success curve, for how the probability of success alters with different values.
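As a sketch of that theoretical model, the same curve shape can be parameterised per factor. Everything here, the factor list, the tipping points, and the linear shape, is an assumption chosen for illustration:

```python
def ramp(value, good, bad):
    """Generic success curve: 1.0 at or beyond the `good` tipping point,
    0.0 at or beyond the `bad` one, linear in between. Writing it this
    way handles both factors where higher is worse and factors where
    lower is worse with a single function."""
    t = (value - good) / (bad - good)
    return 1.0 - min(max(t, 0.0), 1.0)

# Hypothetical (good, bad) tipping points per factor; placeholders only.
FACTORS = {
    "resource_utilisation": (1.0, 5.0),   # above 100% of capacity starts to hurt
    "budget_contingency":   (0.1, -0.3),  # shrinking contingency hurts
    "critical_path_slack":  (0.2, 0.0),   # vanishing slack hurts
    "scope_creep":          (0.0, 0.5),   # growing scope hurts
}

def factor_score(name, value):
    good, bad = FACTORS[name]
    return ramp(value, good, bad)

print(factor_score("budget_contingency", -0.1))  # 0.5: halfway between the tipping points
```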

Before we come on to how we can use this, we need to think about the effect of combinatorial factors. So far, in all our examples, we have only changed one factor and kept everything else equal to derive our success curves. In our real projects, there are thousands of moving parts and thousands of factors that we might want to take account of. These factors change values at the same time. What effect does that have? Does it have any effect?

If we have a project with 95% resource utilisation and -30% budget contingency, is it more, or less, likely to be successful than a project with 95% resource utilisation and 30% scope creep? Are scope creep and resource utilisation a deadly duo, producing an accelerator effect when seen in combination that makes projects even less likely to be successful? And how can we measure and validate this?

There is no doubt that combinatorial factors make the whole analysis of project success a good deal more complicated. Measurement and validation of any model, very difficult to start with, now becomes almost impossible, and our hopes of finding a model to predict project success are fading. But there are some assumptions and techniques we can use to give us a glimmer of hope.

If we were to build such a model to predict project success, what would we use it for? It turns out that an answer to this question could help us build a useful model for at least one scenario. A model that ranks project success across a range of projects, relative to each other, would help us understand which of our projects, across our whole portfolio, are least likely to be successful. Those are the projects that we might review, change, or keep a careful eye on as they progress. In this scenario an absolute ‘score’, a ‘percentage probability of success’, doesn’t matter. What matters is a comparative score. We are only interested in those projects that score low compared to the others.

Our work is simplified considerably with a comparative model. The position of our tipping points no longer needs to be exact, as the comparative differences still apply wherever the values of the tipping points are set. The probability of success ‘score’ for different points along our success curve no longer matters either.

As we are only building a comparative model, it’s the difference between the scores for different projects that matters, not the absolute scores. So now, if a project has 100% resource utilisation, it doesn’t matter what ‘success score’ is given to this point; what matters is how this score compares to the scores of other projects.

There is still complexity in combining factors, which absolutely needs to be done in any model of worth; no one would argue that project success depends entirely on one single factor. Since the ‘multiplier’ effect of different combinations cannot be safely evaluated (you can’t prove that factor A, in combination with factor B, is more likely to lead to a failed project), the simplest thing to do is to combine factors in the least aggressive way (i.e. additively, not multiplicatively) and to combine all factors in a consistent way. The model will not be perfect, but it will still be valid as a comparative tool for weighing project A’s chance of success against project B’s.
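A minimal sketch of that additive, consistent combination, using already-scored factors (every number here is invented; each per-factor score is assumed to come from curves like the ones sketched above):

```python
# Per-factor success scores between 0.0 and 1.0, as produced by
# hypothetical per-factor curves. All values are invented.
portfolio = {
    "Project A": {"resource_utilisation": 1.00, "budget_contingency": 0.00,
                  "critical_path_slack": 0.50, "scope_creep": 0.90},
    "Project B": {"resource_utilisation": 1.00, "budget_contingency": 1.00,
                  "critical_path_slack": 0.25, "scope_creep": 0.40},
    "Project C": {"resource_utilisation": 0.88, "budget_contingency": 0.75,
                  "critical_path_slack": 0.00, "scope_creep": 0.60},
}

def project_score(factor_scores):
    """Combine factors additively and consistently: the least aggressive
    combination, sidestepping unprovable multiplier effects."""
    return sum(factor_scores.values())

# Rank from least to most likely to succeed. Only the ordering is
# meaningful; the absolute numbers are not probabilities.
for name, scores in sorted(portfolio.items(), key=lambda kv: project_score(kv[1])):
    print(f"{name}: {project_score(scores):.2f}")
```

On this invented data the lowest-ranked project (Project C here) is the one to review first; swapping the additive combination for a different one would change the scores, but applied consistently the model would still only ever be read comparatively.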

So, what do we end up with? We have a very simplified model that gives us the ability to compare a group of projects against each other, showing which ones are more likely to be successful and which ones are less so. It’s not perfect, and there is still work to be done to decide which factors we should include (thousands is not practical, but do we need hundreds to have a good working model, or are tens of factors enough?). But with enough data to analyse, this problem can be solved. There are also assumptions and simplifications that we’ve had to make to get to any model at all. Despite the limitations, the model is something we can use in our evaluation of projects - another tool to help us deliver successful projects.

Any model of project success becomes even more useful when we bring human interference and irrationality into the model, since that is the environment into which a real project must actually be delivered, but that’s a blog post for another day.
