Subjectivity and Risk Scoring

One common misconception that I was guilty of subscribing to is that while qualitative risk analysis always has some degree of subjectivity, quantitative risk analysis should remain strictly objective. My thinking has since shifted: there is no such thing as truly objective data with which to conduct such an analysis because, if nothing else, even when the measurements are truly reproducible and repeatable, observer bias is injected into the mix. Rather than fighting the subjectivity, it’s much easier to accept it. I know it sounds like a terrible idea, but bear with me.

In a previous post, I described pulling marbles from a bag using a Bayesian technique to develop an estimate. I actually had a few people get in touch with me about how ridiculous it is to automatically assume equiprobability of the marbles without any prior knowledge of the bag’s contents. Lacking any prior knowledge or experience, we use the Bayesian concept of an uninformative prior, which gives us a general starting point. This concept demands indifference: until we find out otherwise, we assume equiprobability.

But what if we had prior knowledge or experience? If I spent my youth playing marbles, perhaps I could tell you that a bag of that size and weight likely contains about 20 marbles. Could we use this to our advantage? Absolutely. We can now reasonably assume that the red marble will be drawn a minimum of 5% of the time. This is what is often referred to as an informative prior. But that’s just a minimum; we could be facing a significantly higher percentage. However, an incomplete data set is what drove us towards a Bayesian technique in the first place. We now have the option of using either the informative or the uninformative prior for our first round of testing.
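To make those two starting points concrete, here is a minimal sketch in Python. The numbers are the ones from the example above: five possible colors for the uninformative prior, and an assumed bag of roughly 20 marbles containing at least one red for the informative floor.

```python
# Two candidate starting points for P(draw a red marble),
# before any marble has actually been drawn.

NUM_COLORS = 5          # Ferson's scenario: five colors are possible
ASSUMED_BAG_SIZE = 20   # informed guess from years of playing marbles

# Uninformative prior: indifference across the five colors.
uninformative_prior = 1 / NUM_COLORS      # 0.20

# Informative prior, used as a floor: we believe red is present,
# so at least 1 red marble out of ~20 total.
informative_floor = 1 / ASSUMED_BAG_SIZE  # 0.05

print(f"Uninformative starting point: {uninformative_prior:.2f}")
print(f"Informative minimum:          {informative_floor:.2f}")
```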

This is something that pains people to discuss, because we are now possibly considering a 5% likelihood as the low end of the spectrum based upon my personal experience. Before we grab the pitchforks and torches, let’s remember that expert judgment is regularly called upon throughout the planning of various aspects of a project, and that human input remains important. This is not to say that we should not debate the veracity of this estimate, because we should! But let’s not debate the source; let’s not argue solely because it was based upon a person’s opinion rather than empirical data.

At the end of the first test, when we draw the first marble and record the result, we have our first posterior. However, when we go to run our test again, we will change the equation that we use to calculate probability. We will now have a new likelihood for drawing a red marble, as the posterior from the previous experiment becomes the next experiment’s prior.
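For readers who want the mechanics behind that statement, the engine under the hood is Bayes’ theorem. The notation below is the generic form, not anything specific to marbles:

```latex
% Bayes' theorem: the posterior is the likelihood-weighted prior.
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}

% Chaining experiments: after new evidence E', the old posterior
% P(H | E) plays the role of the prior.
P(H \mid E, E') \propto P(E' \mid H)\,P(H \mid E)
```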

As further data is collected, our calculations will continue to evolve, and with this additional data we will develop refined probabilities. Herein lies the argument that people have against Bayesian methods for risk management in a predictive life-cycle project: if planning is done upfront, how can appropriate plans ever be completed when the results of risk management activities continue to change? The problem is one of attempting to conduct planning in a vacuum, not one with the methodology we are using for risk.

“When the facts change, I change my mind. What do you do, sir?” – John Maynard Keynes

It goes without any great argument that planning within a project is iterative and ongoing. In fact, project managers regularly engage in what is called progressive elaboration where a plan is regularly refined as more information becomes available. Risk management is no different. One of the processes invoked is Control Risks, where risks are supposed to be reassessed at a frequency determined within the risk management plan. This reassessment is supposed to determine if a shift in probability and/or impact has occurred since last assessed.

This type of reassessment should be intuitive for most people. I have lived in New Jersey for a number of years, and it gets some great weather – especially in the winter time. When we receive word that we are going to have a major snow storm, I try to check the weather every 3-4 hours to see if it’s still coming towards us and how much snow we are going to get. It’s a running joke that if they call for the storm of the century, we’ll get flurries; but the opposite holds true, as well. Think about how unreasonable it would be for me to watch the news once and tell the kids that they don’t have school next week!

Risk is uncertainty. The thing that hurts most projects is that we try to turn that uncertainty into certainty, which just does not work. I embrace everything in terms of likelihood of occurrence, based upon probability developed from data. There’s an 80% chance of snow? I like to tell my kids there’s a 20% chance that they’re going to school. Risks can be managed, and we can do our due diligence to gather as much data as possible.


Karl Cheney
kcheney@castlebarsolutions.com
Castlebar Solutions

Systemigrams and Complexity

I have been down in Virginia this week for some training and had some time to talk with some really smart people, something that I always enjoy. One of the topics that invariably pops up in the greater D.C. area is government contracts and the administration thereof. It all started with discussing how difficult it is to give initial estimates for work that will, hopefully, land within the tolerance of whatever the government’s independent estimators come up with.

These discussions are always fun to be a part of, especially when you are working with people from different industries. One came from a background of bidding as a prime and subbing out work, and touted his experience with Lowest Price Technically Acceptable (LPTA) contracting, while the other works in an engineering firm where the complexity of solutions runs contrary to the cost-only provisions of LPTA. Hearing the words complexity and solution in the same sentence always piques my interest, so I immediately started asking how they manage to develop accurate estimates where uncertainty reigns.

The traditional means of reducing risk is to conduct planning in a predictive life-cycle project, or to conduct a spike in an agile organization. The issue is that funding and resources may not be available to a project manager who is tasked with putting together an RFP for a procurement activity. This is where systemigrams come in. When I first heard the word systemigram, I asked them to say it again, then to spell it, and finally I admitted my ignorance on the topic. To be fair, and in my defense, even for a portmanteau it sounds like it was made up on the spot.

While a systemigram may be overwhelming to look at initially, think about how beautifully it can represent the multitude of factors that can affect cyber security on social media. This is not a concept that lends itself well to illustration. Systemigrams do an awesome job of organizing chaotic thoughts that relate to complexity.

Blair, Boardman and Sauser wrote about systemigrams as one approach for the United Kingdom’s Ministry of Defence to examine a System of Systems (SoS) in lieu of the traditional models used previously, which typically discounted, rather than made sense of, the complexity that an organization faces. The process begins with capturing the system in a narrative, and then illustrating it. This concept is hardly new, but the manner in which it is carried out makes it far more useful, as it becomes possible to examine the subject holistically. They listed the following rules for building a systemigram:

Rules for Prose
1. Address strategic intent, not procedural tactics.
2. Be well-crafted, searching the mind of reader and author.
3. Facilitation and dialogue with stakeholders (owner/originator of strategic intent) may be required to create structured text.
4. Length variable but less than 2,000 words; scope of prose must fit scope of resulting systemigram.

Rules for Graphic
1. Required entities are nodes, links, inputs, outputs, beginning, end.
2. Sized for a single page.
3. Nodes represent key concepts: noun phrases specifying people, organizations, groups, artifacts, and conditions.
4. Links represent relationships and flow between nodes: verb phrases (occasional prepositional phrases) indicating transformation, belonging, and being.
5. Nodes may contain other nodes (to indicate break-out of a document or an organizational/product/process structure).
6. For clarity, the systemigram should contain no crossover of links.
7. Based on experience, to maintain reasonable size for presentation purposes, the ratio of nodes to links should be approximately 1.5.
8. Main flow of systemigram is from top left to bottom right.
9. Geography of systemigram may be exploited to elucidate the “why,” “what,” and “how” in order to validate the transformational aspect of the systemic model.
10. Color may be used to draw attention to subfamilies of concepts and transformations.
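The graphic rules lend themselves to a simple data structure. The sketch below is my own hypothetical illustration, not anything from Blair, Boardman and Sauser: nodes are noun phrases, links are verb phrases connecting them, and the node-to-link ratio from rule 7 can be checked mechanically.

```python
# Hypothetical sketch: a systemigram as nodes (noun phrases) and
# directed links (verb phrases), with a check against rule 7's
# guidance that the nodes-to-links ratio be roughly 1.5.

nodes = ["Threat actors", "User credentials", "Social media platform",
         "Security policy", "Awareness training", "Incident response"]

# (source, verb phrase, target) -- each link carries a transformation
links = [
    ("Threat actors", "seek to harvest", "User credentials"),
    ("User credentials", "grant access to", "Social media platform"),
    ("Security policy", "mandates", "Awareness training"),
    ("Awareness training", "hardens", "User credentials"),
]

ratio = len(nodes) / len(links)
print(f"{len(nodes)} nodes / {len(links)} links = ratio {ratio:.2f}")
if not 1.0 <= ratio <= 2.0:
    print("Rule 7: consider rebalancing nodes and links toward ~1.5")
```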

Even with these rules clearly stated, people struggle with complex concepts. It turns out that the ability to create and interpret systemigrams depends upon abilities that are often called Systems Thinking Skills (STS). Dorani et al. broke these skills down into seven categories: Dynamic Thinking, System-as-Cause Thinking, Forest Thinking, Operational Thinking, Closed-Loop Thinking, Quantitative Thinking, and Scientific Thinking. These all boil down to looking at the whole as being more than just the sum of its parts.

This work expanded upon that of Richard Plate, who developed CMAST, the Cognitive Mapping Assessment of Systems Thinking. Plate used CMAST to evaluate the ability of middle-school-aged children to understand and interpret non-linear relationships in a complex system. Plate’s work is an awesome read, and one story towards the end really stood out for me. It was about a child who wanted very badly to be correct, but was unable to deviate from a linear structure to a more uncomfortable, but correct, structure that involved branching.

One of the reasons that I believe complexity is arguably the single largest threat to any project’s success is that I have met many adults, some in positions of authority, who thought like this child: in a linear manner, unwilling to deviate even though they want to, even though they know that their answer is wrong – they just can’t wrap their minds around the complexity.


Karl Cheney
kcheney@castlebarsolutions.com
Castlebar Solutions

Bayesian Risk Management

One area of project management that stumps a lot of people is how we come up with the probability and impact data for quantitative analysis. This is something that is not discussed with any depth in the PMBoK, or even in PMI’s Practice Standard for Project Risk Management. In fact, it is summed up succinctly in two paragraphs under Data Gathering and Representation Techniques as, essentially, “collect the data through interviews” and then “create a probability distribution”. For the record, I am definitely not criticizing PMI for this approach, as entire books are written, and fields of study based, upon what is described in those two paragraphs. Also, just a friendly reminder: this is ONE way to prepare estimates for probability and impact.

If you’ve never heard of Bayes, you’re in for a treat. If I had to sum up the work of Thomas Bayes in one sentence, I would say that it allows you to make inferences with incomplete data. His work has evolved into fields of study from Game Theory to Statistics. Right now, I want to concentrate solely on Bayesian Statistics.

We can all agree that the initial period of a project life-cycle is when uncertainty is highest. This uncertainty is inherent in any project that is stood up, and it invariably decreases as planning is conducted. Project Risk is negatively correlated with Planning. This is not to say that planning can eliminate all risk, because that is impossible – but we can reduce uncertainty through concerted planning.

Risk management should begin at the start of a project. When I would find myself assigned to a project, one of the first things I sought to identify was what I needed to look into. What uncertainties are out there that I must address? Of course, the identification is the easy part! Relative estimation through qualitative risk analysis is the next step, and can be a fun exercise when you rank risks using animal names. Personally, I like to use chickens, horses and elephants. But what about when we get to quantitative analysis? Now we are no longer comparing one risk against another, but trying to determine numeric values for probability and impact.
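For fun, here is a toy illustration of that animal-based relative ranking. The risks and the mapping are my own invention, not a PMI technique; the animals are just labels on an ordinal scale.

```python
# Toy illustration: qualitative, relative risk ranking by animal size.
# A chicken-sized risk < a horse-sized risk < an elephant-sized risk.

SCALE = {"chicken": 1, "horse": 2, "elephant": 3}

risks = {
    "Minor requirements churn": "chicken",
    "Vendor delivery slips":    "horse",
    "Key engineer departs":     "elephant",
}

# List the risks biggest-animal-first.
for name, animal in sorted(risks.items(), key=lambda kv: -SCALE[kv[1]]):
    print(f"{animal:>8}: {name}")
```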

Quantitative analysis can be especially difficult to do with any degree of accuracy if your organization has no historical experience in this type of work, or if the solution’s technological maturity is lacking. How can we make estimates about uncertainty, when we’re so uncertain about that with which we are uncertain? Management Reserves, per the PMBoK, are set aside for unidentified and unforeseeable risks – so it’s too late, we’ve already identified it, we own it, and it would be irresponsible to not plan for it.

Complexity and technical risk are not new challenges during quantitative analysis. I have read many papers on the topic, but I’m quite fond of a RAND Corp Working Paper by Lionel Galway which addresses the level of uncertainty inherent in complex projects, stating:

One argument against quantitative methods in advanced-technology projects is that there simply is not enough information with which to make judgments about time and cost. There may not even be enough information to specify the tasks and components.

I’m inclined to agree with Lionel that it is very difficult to make judgments with any degree of certainty when we’re lacking solid information. Risk data quality assessments are something called for by the PMBoK to test the veracity of the data we use. So how can we move forward?

Scott Ferson gives us a road map in a great article about Bayesian Methods in Risk Assessment. If you’d like to see the math side of this, please check out the article – I’m staying strictly conceptual. In this article, he used a scenario that described these concepts quite well: a bag of marbles. You have a cloth bag full of marbles. Well, you think it’s just marbles in there – but you don’t know and you can’t peek inside. Ferson is kind enough to tell us that there are five colors inside, including red – so we know red is possible but we don’t know if there is equal representation for all five colors. If we pull out one marble, what are the odds it will be red? This scenario has incomplete data, just like what most project teams have at the beginning of a project.

This comic does a great job introducing the two schools of thought for statistics that we’ll examine, and pretty quickly you will see why I am a fan of the Bayesian approach. This is not to totally discount frequentist probability, as I use it on a regular basis while conducting Six Sigma initiatives; however, it just does not work for our bag of marbles.

Determining a frequentist probability would require first establishing a key population parameter, its size: how many marbles are in the bag? Next, we would calculate a sample size based upon the population size, the desired confidence level, the precision level, and the fact that we are working with discrete data. If the p-value is too high, we can increase the sample size to increase our confidence that the results are not due to chance. Based upon the sample, we can make statistical inferences about the population, and eventually we could establish the probability of drawing a red marble.
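To show what that bookkeeping looks like in practice, here is a minimal sketch using Cochran’s sample-size formula with a finite-population correction. The confidence level and precision are illustrative assumptions, and note that it demands the very population size we are about to discover we don’t have.

```python
import math

def required_sample_size(population: int, z: float = 1.96,
                         p: float = 0.5, precision: float = 0.05) -> int:
    """Cochran's sample-size formula with finite-population correction.

    z         -- z-score for the desired confidence level (1.96 ~ 95%)
    p         -- assumed proportion (0.5 is the most conservative choice)
    precision -- acceptable margin of error
    """
    n0 = (z ** 2) * p * (1 - p) / precision ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# This only works if we know how many marbles are in the bag...
print(required_sample_size(population=20))    # tiny bag: sample all 20
print(required_sample_size(population=2000))  # bigger bag: ~323
```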

Just a couple problems here… I told you that you have a bag of marbles, but don’t forget that you’re not allowed to peek. You just have to tell me what the odds are of you drawing a red marble. But you don’t know the population size and you cannot draw a sample. The first marble you will see will be the one for which you were supposed to determine probability. Lacking any data makes frequentist probability calculations an impossibility, and having incomplete data severely inhibits its effectiveness. So let’s look at another method. Since it was developed to deal with incomplete data, Bayesian statistics allows us to approach everything very differently.

Bayesian statistics becomes more accurate as more information becomes available. The first marble will have the least accurate estimate, with the estimates getting better with every subsequent drawing. A simplified version of the formula becomes (n+1)/(N+c), where n = the number of red marbles we’ve seen so far, N = the total number of marbles sampled, and c = the number of colors possible. So for the first marble, the probability is calculated as (0+1)/(0+5) = 0.20. While it may or may not be correct, it is a starting point. For every marble sampled and returned to the bag, the formula’s inputs will change and the accuracy of future estimates will improve. Glickman and Vandyk take this further, moving into the application of multilevel models with the use of Monte Carlo analysis.
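Here is that simplified rule as a minimal sketch, assuming the five-color bag from Ferson’s scenario and a made-up sequence of draws, each marble returned to the bag:

```python
def p_red(reds_seen: int, total_drawn: int, colors: int = 5) -> float:
    """Simplified Bayesian estimate: (n + 1) / (N + c)."""
    return (reds_seen + 1) / (total_drawn + colors)

# Before any draw: (0 + 1) / (0 + 5) = 0.20
print(f"prior estimate: {p_red(0, 0):.3f}")

# Hypothetical draws (True = red), sampled with replacement.
draws = [False, True, False, False, True]
reds = 0
for i, is_red in enumerate(draws, start=1):
    reds += is_red
    print(f"after draw {i}: P(red) = {p_red(reds, i):.3f}")
```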

I can’t ignore the very human aspects of Bayesian statistics, which were captured well by Haas et al. as they described three pillars of Bayesian Risk Management: Softcore BRM, Bayesian Due Diligence, and Hardcore BRM. Softcore relies upon subjective interpretation of uncertainty – think consulting your subject matter experts. Hardcore leans on mathematical approaches and statistical inference, while Due Diligence mitigates the triplet of opacity by ensuring that facts do not override the expertise of authoritative and learned people.

The reality is that the majority of people working on projects use software to determine their quantitative estimates for specific risks that have been identified. However, I’m not a fan of answering someone’s question by stating “don’t worry, there’s software for that!” While you may never have to calculate risks in an analogue manner, it is worth knowing that if you are dealing with unfamiliar risk, a Bayesian approach makes more sense than a frequentist approach. If you have historical information available and a well-defined population, by all means, collect a sample. But keep in mind, as I often tell people when I work Six Sigma, “don’t make the data fit the test; select a test that fits the data”.


Karl Cheney
kcheney@castlebarsolutions.com
Castlebar Solutions

What is Technical Risk?

I have seen technical risk described in many ways. One definition that I am particularly fond of was written almost ten years ago. In 2008, Mike Cottmeyer wrote:

Technical risk deals with those unproven assumptions in your emerging design that might significantly impact your ability to deliver the solution. Are we planning to use any unproven technologies to build the solution? Have we exercised the significant interfaces between systems? Can we demonstrate a working skeleton of the system? What about performance? What about security? Any technical decisions not validated by working software count as technical risks.

Technical risk is, arguably, the most dangerous type of risk that a project team can face, because it is often the least understood. Identifying the risks may prove nearly impossible without concerted effort, let alone establishing sufficient reserves of time or funds. Despite the fact that so many organizations readily acknowledge the threat of technical risk on a project, few follow through by establishing a methodology that can successfully address it.

I have been asked: does technical risk only apply to software projects? The answer is no; it can apply to anything where we have gaps in our knowledge. These gaps may stem from some aspect of the project that we do not fully understand, or from a depth of complexity that we have failed to acknowledge. The Wright brothers spent four years developing the first airplane, and then a further two years improving it enough to be useful. Boeing’s Everett factory, by contrast, has a 49-day average build time for a 777.

This time lapse is awesome to watch, but I can’t help imagining the stress that the project management team is under during it. As we watch the work proceed, though, does it look like they’re having any issues figuring out what to do next? The AIDAprima was not a first, even though its $645 million price tag and two-year delivery time may lead you to believe it was the only one of its kind. This video shows you what a complicated project, effectively managed, looks like.

The difference between complicated and complex is an important one. The construction of the AIDAprima was complicated, but they knew exactly what they had to do, and when they had to do it. They could account for risks that had been experienced on previous, similar projects. They were using tried and tested technologies and techniques that were implemented by experienced practitioners. It is hardly surprising that the project made for a great time lapse.

The description of the AIDAprima in the previous paragraph is sufficient to demonstrate that technical risk was not present, at least not in any significant amount. Think about how the Wright brothers were not just building a deliverable, but also learning how the deliverable should work as they pioneered the application of aeronautics. Contrast the construction of the AIDAprima with the Wright brothers’ efforts as you consider the following questions, a high-level means of determining technical risk.

  • Has our organization ever managed a project of this type before?
  • Have the technology or the techniques to implement it been successful anywhere else previously?
  • Are we able to clearly articulate how requirements will be fulfilled?

Needless to say, if these answers are in the negative, then your project is likely facing a lot of technical risk. Addressing it effectively and developing realistic timelines and budgets are still possible, but they take a far different approach than most organizations use. There is no ‘one size fits all’, so organizations must tailor an approach to address complexity.
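If you want to make that screening explicit, here is a hypothetical sketch, my own and not a formal instrument, that turns the three questions above into a rough flag count:

```python
# Hypothetical screening: count the "no" answers to the three questions.
# Every "no" marks a gap in knowledge -- a source of technical risk.

answers = {
    "Managed a project of this type before?":          False,
    "Technology/techniques proven elsewhere?":         False,
    "Can we articulate how requirements will be met?": True,
}

flags = sum(1 for yes in answers.values() if not yes)
print(f"{flags} of {len(answers)} answers negative")
if flags >= 2:
    print("High technical risk: tailor the approach before committing "
          "to timelines and budgets.")
```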


Karl Cheney
kcheney@castlebarsolutions.com
Castlebar Solutions