Schedule Risk: Event Chain Methodology

I want you to think about time estimates for a moment. Specifically, let’s talk about something we all enjoy: our commute. The last significant commute I had was 70 miles each way through two metropolitan traffic patterns. Yikes! Does anybody want to guess how long it took me to get to work? On a highway with a speed limit of 65 mph, assuming I never dropped below that speed from my driveway to my parking space at work – just over an hour. But what about traffic signals, stop signs, and traffic congestion? How can we account for these uncertainties?

Out of the many types of risk that we discuss, I believe that schedule risk is one of the most difficult to address through reserves. There are two scheduling methods that we will discuss. Both rely upon a network diagram created through the use of a diagramming method, typically the Precedence Diagramming Method.

  • Critical Path Method (CPM): This method establishes the longest duration sequence of activities from the start to the finish of the project through the use of a forward and backward pass. The Critical Path is that path which has the least amount of float, usually zero, but possibly a positive or negative value. Typically, the individual activity estimates are calculated as a function of optimistic (best-case), most-likely, and pessimistic (worst-case) scenario estimates.
  • Critical Chain Method (CCM): This method strips activity duration down to the bare minimum of what could be considered reasonable. By forcing a sense of urgency, it counteracts a natural tendency towards Parkinson’s Law.

The methods with which we handle risk in either scheduling methodology are quite different. In CPM, we tend to use what is called the beta-PERT distribution. Since this distribution is non-normal, we cannot calculate the variance as a function of each data point’s distance from the mean; instead, we sneak in through the back door of the equation. Treating this non-normal distribution as if it were normal, we simply take the range of the data (Pessimistic – Optimistic) and divide by six to get a standard deviation, which we then apply to the central tendency. We can then calculate the variance by squaring the standard deviation. There are a few major issues with this:

  • The three estimates collected for calculations are almost always spread out in a specific manner. If my regular commute to work is 60 minutes, a severe amount of traffic may double that amount of time to 120 minutes. Could a total absence of traffic reduce the regular commute by an equal amount, all the way down to 0 minutes? Not likely. The deltas between Optimistic and Most-Likely and between Most-Likely and Pessimistic are not equal. This leads to a beta-PERT distribution with a positive skew. The central tendency tends to land near the mode (as PERT is weighted towards the Most-Likely), and the standard deviation treats the data as normal regardless of how it is truly distributed.
  • Once we calculate this faux-variance that is based upon the range of the data rather than its spread, what do we do with it? Many times, it is added in the form of a contingency reserve at the end of the project. This is no different than assuming that you might hit traffic and have a 120-minute commute, so you should always leave 120 minutes early. While this may be true depending on your risk exposure (think about a big meeting or interview!), wasting 60 minutes every morning can get old fast.
  • The original estimates were made up by someone. I’m not saying that this is a bad thing. The fact of the matter is that all estimates come from someone at some point, we just need to ensure their veracity. If it is an estimate about work that has never been done before, a Bayesian model may prove useful.
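The beta-PERT arithmetic described above can be sketched in a few lines. The 55/60/120-minute commute figures below are illustrative assumptions, not data from the post:

```python
# A minimal sketch of the PERT weighted mean and the range-based
# standard deviation discussed above. The 55/60/120 estimates are
# hypothetical commute times in minutes.

def pert_estimate(optimistic, most_likely, pessimistic):
    """Return the PERT weighted mean, standard deviation, and variance."""
    mean = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6  # range / 6, as discussed
    return mean, std_dev, std_dev ** 2

mean, sd, var = pert_estimate(55, 60, 120)
print(f"mean={mean:.1f} min, sd={sd:.1f} min, variance={var:.1f}")
```

Note how the skew shows up immediately: the Most-Likely sits only 5 minutes above the Optimistic but 60 minutes below the Pessimistic, yet the standard deviation treats both tails identically.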

The Critical Chain Method calls for a series of buffers to be placed within the schedule to account for the fact that it has no wiggle room whatsoever. The largest tends to be called the project buffer, which sits at the end of the critical chain before it hits the end of the project. To protect the critical chain, other chains have feeding buffers to account for their uncertainty prior to merging. While CCM certainly has its merits, it also has its dangers:

  • Buffers are arbitrarily established. With CPM we used some type of equation to calculate the likely variance that we could encounter, but with CCM we are not utilizing the estimates provided to us. 
  • By not using realistic activity duration, we may need to increase the amount of buffer above whatever amount would have been necessary for a schedule that accounted for realistic duration and variance.
  • Practitioners may become accustomed to the manufactured sense of urgency and begin to pad their estimates, thereby negating the positive effects of CCM. This has a secondary effect of adding risk to the schedule.
  • Since all of the buffer is at the end of the project, there is little ability for the management team to address risks as they occur throughout the project work outside of delaying subsequent tasks.
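To make the buffer discussion concrete: the post doesn’t prescribe a sizing method, but one commonly cited approach is the root-square-error (sum-of-squares) method, where the project buffer is the square root of the summed squares of the safety stripped from each task. The task durations below are invented for illustration:

```python
import math

# Hypothetical chain of tasks: (aggressive estimate, safe estimate) in days.
# The "safety" removed from each task is the difference between the two.
chain = [(5, 10), (8, 14), (3, 6), (10, 18)]

# Root-square-error sizing: buffer = sqrt(sum of squared safety removed).
project_buffer = math.sqrt(sum((safe - aggressive) ** 2
                               for aggressive, safe in chain))

aggressive_total = sum(a for a, _ in chain)
print(f"aggressive chain = {aggressive_total} days, "
      f"project buffer ≈ {project_buffer:.1f} days")
```

Compared with simply summing the removed safety (27 days here), the square-root aggregation yields a much smaller buffer, which is exactly why an arbitrary or poorly grounded sizing choice can leave the chain exposed.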

An interesting solution to these issues is the Event Chain Method (ECM). This is definitely not a ‘cure-all’ that should be applied to every project, but it should be considered for those projects where two major preconditions exist: the possibility of project failure due to schedule slippage, and an abundance of schedule risk. ECM examines activities and their uncertainties in a manner different from CPM or CCM.

The Intaver Institute does a great job describing how ECM addresses the two basic types of risk, aleatory and epistemic. When we think about how our commute is normally 60 minutes, but sometimes it is 58 minutes, or 61 minutes, that is what we would typically call common cause variation, or aleatory uncertainty. If you got a new job at a new location, how can you give a good estimate if you have never made the commute? Here we are dealing with epistemic uncertainty due to a lack of knowledge. ECM addresses these types of uncertainty through the regular use of Monte Carlo simulation and Bayesian Belief Networks.
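The aleatory side lends itself to exactly the kind of Monte Carlo simulation ECM relies on. Here is a minimal sketch using a skewed triangular distribution to stand in for common-cause commute variation; the 55/60/120-minute bounds are assumptions for illustration:

```python
import random

random.seed(7)

def commute_minutes():
    # Aleatory (common-cause) variation around the usual 60-minute trip,
    # modeled with a skewed triangular distribution: rarely much faster,
    # occasionally much slower. The 55/60/120 bounds are assumptions.
    return random.triangular(55, 120, 60)  # low, high, mode

trials = [commute_minutes() for _ in range(100_000)]
p_late = sum(t > 90 for t in trials) / len(trials)
print(f"mean commute ≈ {sum(trials)/len(trials):.1f} min; "
      f"P(more than 90 min) ≈ {p_late:.1%}")
```

Running enough trials gives a full distribution of outcomes rather than a single point estimate, which is what lets us reason about reserves as probabilities instead of fixed padding.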

Virine and Trumper wrote at ProjectDecisions.org that ECM has six principles (I will only focus on the first two):

  1. Moment of event and excitation status
  2. Event chains
  3. Event chain diagrams and state tables
  4. Monte Carlo analysis
  5. Critical event chains and event cost
  6. Project performance measurement with event and event chains

The first principle refers to what is typically called risk responses. When the environment changes, so will our actions. By assigning a ground state and various levels of excited states for different possible triggering events and related outcomes, we can visually represent the different possibilities on the same schedule model instance – typically a Gantt chart. Another consideration under this principle is when the risk may occur in relation to the timeline of an activity, even while it is underway. Too often, we look at risk as binary events that either happen or not, normally before an activity begins. I know I’m not the only one guilty of going to work and leaving my umbrella in the car because it’s not raining, but then it starts raining later and I get soaked!
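One way to picture a ground state, excited states, and an event striking partway through an activity is a small simulation. The state names, durations, and probabilities below are entirely hypothetical, not ECM tooling:

```python
import random

random.seed(42)

# Hypothetical activity states: a ground state plus excited states that a
# triggering event can switch the activity into while it is underway.
STATES = {
    "ground":     {"duration": 10},  # days, no event fires
    "rain_delay": {"duration": 14},  # event: weather
    "rework":     {"duration": 18},  # event: failed inspection
}

def simulate_activity():
    # The event may fire at any point during the activity, not just
    # before it starts; work already completed in the ground state
    # is retained when the activity is excited.
    progress = random.random()       # fraction done when an event could hit
    if random.random() < 0.3:        # assumed 30% chance an event fires
        state = random.choice(["rain_delay", "rework"])
        done = STATES["ground"]["duration"] * progress
        remaining = STATES[state]["duration"] * (1 - progress)
        return done + remaining
    return STATES["ground"]["duration"]

durations = [simulate_activity() for _ in range(50_000)]
print(f"expected duration ≈ {sum(durations)/len(durations):.1f} days")
```

The point of the sketch is the umbrella problem: the event is evaluated against the activity’s timeline, not treated as a yes/no question settled before work begins.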

The second principle is that of event chains, and what are sometimes called second- and third-order effects, or the law of unintended consequences. When there is a problem on a project, most project managers will have a knee-jerk reaction and cause more harm than good. By building in responses, in the form of the aforementioned excited states, we can take immediate action without unplanned workarounds becoming necessary.

When it comes to project scheduling, there is no one-size-fits-all. Different methods will work well in different situations. This is one that can work well in a project that is facing potential failure through a large amount of risk.


Karl Cheney
kcheney@castlebarsolutions.com
For more posts, check out our blog at:
Castlebar Solutions

Subjectivity and Risk Scoring

One common misconception that I was guilty of subscribing to is that while qualitative risk analysis always has some degree of subjectivity, quantitative risk analysis should remain strictly objective. My thinking has since shifted: there is no such thing as truly objective data with which to conduct such an analysis because, if nothing else, even when the measurements are truly reproducible and repeatable, an observer bias is injected into the mix. Rather than fighting the subjectivity, it’s much easier to just accept it. I know it sounds like a terrible idea, but bear with me.

In a previous post, I described pulling marbles from a bag using a Bayesian technique to develop an estimate. I actually had a few people get in touch with me about how ridiculous it is to just automatically assume that there is equiprobability of marbles without any prior knowledge of the contents. Lacking any prior knowledge or experience, we use the Bayesian concept of an uninformative prior, which gives us a general starting point. This concept demands indifference: until we find out otherwise, we will assume equiprobability.

But what if we had prior knowledge or experience? If I had spent my youth playing marbles, perhaps I could tell you that a bag of that size and weight likely contains about 20 marbles. Could we use this to our advantage? Absolutely. We can now reasonably assume that the red marble will be drawn a minimum of 5% of the time. This is what is often referred to as an a priori, or informative, prior. But that’s just a minimum; we could be facing a significantly higher percentage. However, an incomplete data set is what drove us towards a Bayesian technique in the first place. We now have the option of using either the informative or the uninformative prior for our first round of testing.

This is something that pains people to discuss, because we are now possibly considering 5% likelihood as the low end of the spectrum based upon my personal experience. Before we grab the pitchforks and torches, let’s remember that expert judgment is regularly called upon throughout planning various aspects of a project and that human input remains important. This is not to say that we should not debate the veracity of this estimate, because we should! But let’s not debate the source, let’s not argue solely because it was based upon a person’s opinion rather than empirical data.

At the end of the first test, when we draw the first marble and record the result, we have our first posterior. However, when we go to run our test again, we will change the equation that we use to calculate probability. We will now have a new likelihood for drawing a red marble, as the posterior from the previous experiment becomes the next experiment’s prior.
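The posterior-becomes-prior cycle can be sketched with a Beta-Binomial model for red versus not-red draws. The Beta(1, 4) starting prior (standing in for “one of five equally likely colors”) and the sequence of draws are assumptions for illustration:

```python
# A sketch of "the posterior becomes the next prior." The prior is kept
# as pseudo-counts: alpha counts red draws, beta counts non-red draws.
alpha, beta = 1, 4                         # uninformative start: P(red) = 1/5
draws = [True, False, False, True, False]  # hypothetical observed draws

for red in draws:
    # Update: today's posterior is tomorrow's prior.
    alpha, beta = alpha + red, beta + (not red)
    print(f"P(red) now estimated at {alpha / (alpha + beta):.3f}")
```

Each pass through the loop is one “experiment”: the counts carried forward are exactly the posterior of the previous draw, so the equation we use genuinely changes with every marble.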

As further data is collected, our calculations will continue to evolve, and with this additional data we will develop refined probabilities. Herein lies the argument that people make against Bayesian methods for risk management in a predictive life-cycle project: if planning is done up front, how can appropriate plans ever be completed if the results of risk management activities continue to change? The problem becomes one of attempting to conduct planning in a vacuum, rather than the methodology we are using for risk.

When the facts change, I change my mind. What do you do, sir? – John Maynard Keynes

It goes without any great argument that planning within a project is iterative and ongoing. In fact, project managers regularly engage in what is called progressive elaboration where a plan is regularly refined as more information becomes available. Risk management is no different. One of the processes invoked is Control Risks, where risks are supposed to be reassessed at a frequency determined within the risk management plan. This reassessment is supposed to determine if a shift in probability and/or impact has occurred since last assessed.

This type of reassessment should be intuitive for most people. I have lived in New Jersey for a number of years, and it gets some great weather – especially in the winter time. When we receive word that we are going to have a major snow storm, I try to check the weather every 3-4 hours to see if it’s still coming towards us and how much snow we are going to get. It’s a running joke that if they call for the storm of the century, we’ll get flurries; but the opposite holds true, as well. Think about how unreasonable it would be for me to watch the news once and tell the kids that they don’t have school next week!

Risk is uncertainty. The thing that hurts most projects is that we try and turn that uncertainty into certainty, which just does not work. I embrace everything in terms of likelihood of occurrence based upon probability developed from data. There’s an 80% chance of snow? I like to tell my kids there’s a 20% chance that they’re going to school. Risks can be managed, and we can do our due diligence to gather as much data as possible.



Systemigrams and Complexity

I have been down in Virginia this week for some training and had some time to talk with some really smart people, something that I always enjoy. One of the topics that invariably pops up in the greater D.C. area is government contracts and the administration thereof. It all started with discussing how difficult it is to give initial estimates for work that will, somehow, hopefully land within the tolerance of whatever the government’s independent estimators come up with.

These discussions are always fun to be a part of, especially when you are working with people from different industries. One came from a background of bidding as a prime to sub work, and touted his experience with LPTA contracting, while the other works in an engineering firm where the complexity of solutions runs contrary to the cost-only provisions of LPTA. Hearing the words complexity and solution in the same sentence always piques my interest, so I immediately started asking about how they manage to develop accurate estimates where uncertainty reigns.

The traditional means of reducing risk is to conduct planning in a predictive life-cycle project, or to conduct a spike in an agile organization. The issue is that funding and resources may not be available to a project manager who is tasked with putting together an RFP for a procurement activity. This is where systemigrams come in. When I first heard the word systemigram, I first asked them to say it again, then spell it, and finally I admitted my ignorance on the topic. To be fair, and in my defense, even for a portmanteau it sounds like it was made up on the spot.

While this may be overwhelming to look at initially, think about how beautifully it represents the multitude of factors that can affect cyber security on social media. This is not a concept that lends itself well to illustration. Systemigrams do an awesome job of organizing chaotic thoughts that relate to complexity.

Blair, Boardman and Sauser wrote about systemigrams as one approach for the United Kingdom’s Ministry of Defence to examine a System of Systems (SoS) in lieu of the traditional models used previously, which typically discounted rather than made sense of the complexity that an organization faces. The process begins with capturing the system in a narrative, and then illustrating it. This concept is hardly new, but the manner in which it is carried out makes it far more useful, as it becomes possible to examine the subject holistically. They listed the following rules for building a systemigram:

Rules for Prose

1. Address strategic intent, not procedural tactics.
2. Be well-crafted, searching the mind of reader and author.
3. Facilitation and dialogue with stakeholders (owner/originator of strategic intent) may be required to create structured text.
4. Length variable but less than 2000 words; scope of prose must fit scope of resulting systemigram.

Rules for Graphic

1. Required entities are nodes, links, inputs, outputs, beginning, end.
2. Sized for a single page.
3. Nodes represent key concepts, noun phrases specifying people, organizations, groups, artifacts, and conditions.
4. Links represent relationships and flow between nodes, verb phrases (occasional prepositional phrases) indicating transformation, belonging, and being.
5. Nodes may contain other nodes (to indicate break-out of a document or an organizational/product/process structure).
6. For clarity, the systemigram should contain no crossover of links.
7. Based on experience, to maintain reasonable size for presentation purposes, the ratio of nodes to links should be approximately 1.5.
8. Main flow of systemigram is from top left to bottom right.
9. Geography of systemigram may be exploited to elucidate the “why,” “what,” “how” in order to validate the Transformational aspect of the systemic model.
10. Color may be used to draw attention to subfamilies of concepts and transformations.

Even with these rules clearly stated, people struggle with complex concepts. It turns out that the ability to create and interpret systemigrams depends upon abilities that are often called Systems Thinking Skills (STS). Dorani et al. broke these skills down into seven categories: Dynamic Thinking, System-as-Cause Thinking, Forest Thinking, Operational Thinking, Closed-Loop Thinking, Quantitative Thinking, and Scientific Thinking. These all boil down to looking at the whole as being more than just the sum of its parts.

This work expanded upon that of Richard Plate, who developed CMAST, the Cognitive Mapping Assessment of Systems Thinking. Plate used CMAST to evaluate the ability of middle-school-aged children to understand and interpret non-linear relationships in a complex system. Plate’s work is an awesome read, and one story towards the end really stood out for me. It was about a child who wanted very badly to be correct, but was unable to deviate from a linear structure to a more uncomfortable, but correct, structure that involved branching.

One of the reasons that I believe complexity is arguably the single largest threat to any project’s success is that I have met many adults, some in positions of authority, who thought like this child: in a linear manner, unwilling to deviate even though they want to, even though they know that their answer is wrong. They just can’t wrap their minds around the complexity.



Bayesian Risk Management

One area of project management that stumps a lot of people is how we come up with the probability and impact data for quantitative analysis. This is something that is not discussed with any depth in the PMBoK, or even PMI’s Practice Standard for Project Risk Management. In fact, it is summed up succinctly in two paragraphs under Data Gathering and Representation techniques as, basically, “collect the data through interviews” and then “create a probability distribution”. For the record, I am definitely not criticizing PMI for this approach, as entire books have been written, and fields of study founded, upon what is described in those two paragraphs. Also, a friendly reminder: this is ONE way to prepare estimates for probability and impact.

If you’ve never heard of Bayes, you’re in for a treat. If I had to sum up the work of Thomas Bayes in one sentence, I would say that it allows you to make inferences with incomplete data. His work has evolved into fields of study from Game Theory to Statistics. Right now, I want to concentrate solely on Bayesian Statistics.

We can all agree that the initial period of a project life-cycle is when uncertainty is highest. This uncertainty is inherent in any project that is stood up, and it invariably decreases as planning is conducted. Project Risk is negatively correlated with Planning. This is not to say that planning can eliminate all risk, because that is impossible – but we can reduce uncertainty through concerted planning.

Risk management should begin at the start of a project. When I would find myself assigned to a project, one of the first things I sought to identify is what I needed to look into. What uncertainties are out there that I must address? Of course, the identification is the easy part! Relative estimation through qualitative risk analysis is the next step, and can be a fun exercise by ranking risk using animal names. Personally, I like to use chickens, horses and elephants. But what about when we got to quantitative analysis? Now we are no longer comparing one risk against another, but trying to determine numeric values for probability and impact.

Quantitative analysis can be especially difficult to do with any degree of accuracy if your organization has no historical experience in this type of work, or if the solution’s technological maturity is lacking. How can we make estimates about uncertainty, when we’re so uncertain about that with which we are uncertain? Management Reserves, per the PMBoK, are set aside for unidentified and unforeseeable risks – so it’s too late, we’ve already identified it, we own it, and it would be irresponsible to not plan for it.

Complexity and technical risk are not new challenges during quantitative analysis. I have read many papers on the topic, but I’m quite fond of a RAND Corp Working Paper by Lionel Galway which addresses the level of uncertainty inherent in complex projects, stating:

One argument against quantitative methods in advanced-technology projects is that there simply is not enough information with which to make judgments about time and cost. There may not even be enough information to specify the tasks and components.

I’m inclined to agree with Lionel that it is very difficult to make judgments with any degree of certainty when we’re lacking solid information. Risk data quality assessments are something called for by the PMBoK to test the veracity of the data we use. So how can we move forward?

Scott Ferson gives us a road map in a great article about Bayesian Methods in Risk Assessment. If you’d like to see the math side of this, please check out the article – I’m staying strictly conceptual. In this article, he used a scenario that described these concepts quite well: a bag of marbles. You have a cloth bag full of marbles. Well, you think it’s just marbles in there – but you don’t know and you can’t peek inside. Ferson is kind enough to tell us that there are five colors inside, including red – so we know red is possible but we don’t know if there is equal representation for all five colors. If we pull out one marble, what are the odds it will be red? This scenario has incomplete data, just like what most project teams have at the beginning of a project.

This comic does a great job introducing the two schools of thought for statistics that we’ll examine, and pretty quickly you will see why I am a fan of the Bayesian approach. This is not to totally discount frequentist probability, as I use it on a regular basis while conducting Six Sigma initiatives; however, it just does not work for our bag of marbles.

Determining a frequentist probability would require first establishing a key population parameter, its size: how many marbles are in the bag? Next, we would calculate a sample size based upon the population size, desired confidence level, precision level, and the fact that we are working with discrete data. If the p-value is too high, we can increase the sample size to increase our confidence that the results are not due to chance. Based upon the sample, we can make statistical inferences about the population, and eventually we could establish the probability of drawing a red marble.
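The sample-size step might look like Cochran’s formula with a finite-population correction. The 95% confidence level and 5% margin of error below are conventional defaults, not anything the bag-of-marbles scenario actually provides:

```python
import math

def sample_size(population, confidence_z=1.96, margin=0.05, p=0.5):
    """Cochran's sample-size formula with a finite-population correction.

    confidence_z=1.96 corresponds to 95% confidence; p=0.5 is the
    most conservative assumed proportion. All figures are illustrative.
    """
    n0 = (confidence_z ** 2) * p * (1 - p) / margin ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

print(sample_size(population=500))
```

Note the circularity the text goes on to describe: the very first input, the population size, is exactly what the bag never lets you observe.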

Just a couple problems here… I told you that you have a bag of marbles, but don’t forget that you’re not allowed to peek. You just have to tell me what the odds are of you drawing a red marble. But you don’t know the population size and you cannot draw a sample. The first marble you will see will be the one for which you were supposed to determine probability. Lacking any data makes frequentist probability calculations an impossibility, and having incomplete data severely inhibits its effectiveness. So let’s look at another method. Since it was developed to deal with incomplete data, Bayesian statistics allows us to approach everything very differently.

Bayesian statistics becomes more accurate as more information becomes available. The first marble will have the least accurate estimate, with the estimates getting better with every subsequent drawing. A simplified version of the formula is (n+1)/(N+c), where n = the number of red marbles we’ve seen so far, N = the total number of marbles sampled, and c = the number of colors possible. So for the first marble, the probability is calculated as (0+1)/(0+5) = 0.20. While it may or may not be correct, it is a starting point. For every marble sampled and returned to the bag, the formula’s inputs will change and the accuracy of future estimates will improve. Glickman and Vandyk extend this usage to the application of multilevel models with the use of Monte Carlo analysis.
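The simplified estimator from the text can be written directly; the 3-reds-in-10-draws example is a made-up continuation of the scenario:

```python
# The simplified Bayesian estimator from the text: (n + 1) / (N + c).
def p_red(red_seen, total_sampled, colors=5):
    """Estimated probability of drawing red after the draws seen so far."""
    return (red_seen + 1) / (total_sampled + colors)

print(p_red(0, 0))   # before any draw: (0+1)/(0+5) = 1/5
print(p_red(3, 10))  # after 3 reds in 10 draws: 4/15
```

This is the Laplace-style smoothing at work: even with zero data we get a usable 20% starting point, and every recorded draw nudges the estimate toward the observed frequency.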

I can’t ignore the very human aspects of Bayesian statistics, which were captured well by Haas et al. as they described three pillars of Bayesian Risk Management: Softcore BRM, Bayesian Due Diligence, and Hardcore BRM. Softcore relies upon subjective interpretation of uncertainty: think of consulting your subject matter experts. Hardcore leans on mathematical approaches and statistical inference, while the Due Diligence mitigates the triplet of opacity by ensuring that facts do not override the expertise of authoritative and learned people.

The reality is that the majority of people working on projects use software to determine their quantitative estimates for specific risks that have been identified. However, I’m not a fan of answering someone’s question by stating “don’t worry, there’s software for that!” While you may never have to calculate risks in an analogue manner, it is worth knowing that if you are dealing with unfamiliar risk a Bayesian approach makes more sense than a frequentist approach. If you have historical information available and a well-defined population, by all means, collect a sample. But keep in mind, as I often tell people when I work Six Sigma, “don’t make the data fit the test, select a test that fits the data”.



Effectively Managing Meetings

A few years ago, I did something weird. I fell in love with a concept. I was studying Business Process Management, and one of the key aspects is defining processes. If I had to describe this concept in three bullet points, it would be:

  1. You cannot improve a process that you cannot control
  2. You cannot control a process that you cannot map
  3. You cannot map a process that you cannot describe

W. Edwards Deming stated, “If you cannot define what you are doing as a process, you do not understand what you are doing.” I typically take it a step further, as I believe that if you cannot define what you are doing, you should probably stop doing it. I don’t know about you, but I’ve been in a lot of meetings where I didn’t know what I was doing there.

Meetings are the reason that I hated Monday mornings for most of my 20’s. Well, one of the reasons. After working for a few people that enjoyed the manufactured self-importance that comes with mandating attendance and occupying 90 minutes of my life, I developed a deep disdain for meetings. I reasoned that wasting 90 minutes, almost 4% of the time most people spend at work for an entire week, was quite counterproductive.

This rationale led me to, for years, avoid calling meetings unless absolutely necessary. However, in the years since, I think that I have come up with a pretty good system for managing meetings. It blends the PMBoK view of three basic meeting types with how Scrum ceremonies are conducted.

Information Exchange

We have all spent time in this type of meeting. The danger increases as more people are invited, because the number of participants seems, unfortunately, to have no negative correlation with the individual desire to “contribute” to the discussion. More people, more to say, longer meetings. A few ideas on how to run these meetings more effectively:

  • First order of business is to establish a time box, such as a 30-minute maximum for the meeting. The meeting may take less time than allotted, but it will not take more. Set a timer, or state a time out loud (e.g., “This meeting will end at 9:45 am”). This creates a sense of urgency, and allows people to recommend taking off-topic items to be continued afterwards or placed ‘in the parking lot’.
  • If you are running a meeting and you’re not sure why someone is in there, ask them if there is a reason for their attendance. If all they need is an update, they can get a copy of the meeting’s notes e-mailed to them afterwards – and not waste a half hour in a conference room.
  • Have a specific agenda, and stick to it. It is impossible to adhere to a time box if you allow topic drift. It isn’t necessary to use Robert’s Rules for every meeting, but order should be maintained.
  • Have someone take notes for dissemination after the meeting, and be sure to follow up on any items that were left open.

Decision Making

Meetings that result in a decision can often be difficult to manage, because people should disagree. Be wary of meetings where everyone agrees. Following the Bay of Pigs debacle, JFK stated that “Most of us thought it would work. I know there are some men now saying they were opposed from the start. I wasn’t aware of any great opposition.” While disagreement should be allowed, chaos can take over if these meetings are not well-controlled.

  • Clearly state the following at the beginning of the meeting:
    • what decision must be made
    • how the decision will be made (hand vote, private ballot, panel, etc.)
    • how much time is available for deliberation
  • If possible, assign someone as a facilitator to aid the flow of the meeting
  • Have someone document all of the points made for and against the decision, as they should be recorded as the basis of the decision
  • Remind everyone to make the effort to maintain decorum and professionalism

Brain Storming

Depending on the participants and the topic being discussed, this can be the most fun type of meeting or the most stressful. Trying to do a retrospective on a successful sprint? Sounds like a good time for some pizza and light music in the background. Trying to come up with alternatives because the schedule slipped after MLR was hung up for a whole day on two paragraphs? It may be a more somber meeting. Cleverism has some great Brainstorming tips. Some of my favorite considerations:

  • Describe the purpose of the meeting – what are we brainstorming, exactly? I like to write this at the top of a white board so we can just point to it if someone gets off topic.
  • Make the exit criteria clear: are we here for a finite period, or will we stay until we have captured every conceivable thought?
  • Quantity over quality. Someone’s bad idea may trigger someone else’s good idea. Yes, even if it’s tongue-in-cheek. Just because we record an idea doesn’t mean we have to act on it. You have to foster a safe place that is conducive to people stating ideas without fear of ridicule.
  • Make it fun; variations of hot potato are great ways of getting people to interact.
  • Allow people to write their ideas if they don’t want to state them aloud.

Meetings are necessary, but wasted time is not. Get more done with less by thinking lean. What is the least amount of time and the fewest number of participants that we need in order to fulfill whatever need for which the meeting is called? Let’s take back our Monday mornings.


Karl Cheney
kcheney@castlebarsolutions.com
Castlebar Solutions

What is Technical Risk?

I have seen technical risk described in many ways. One definition that I am particularly fond of was written almost ten years ago. In 2008, Mike Cottmeyer wrote:

Technical risk deals with those unproven assumptions in your emerging design that might significantly impact your ability to deliver the solution. Are we planning to use any unproven technologies to build the solution? Have we exercised the significant interfaces between systems? Can we demonstrate a working skeleton of the system? What about performance? What about security? Any technical decisions not validated by working software count as technical risks.

Technical risk is, arguably, the most dangerous type of risk that a project team can face, because it is often the least understood. Identifying the risks may prove nearly impossible without concerted effort, let alone establishing sufficient reserves of time or funds. Although many organizations readily acknowledge the threat technical risk poses to a project, few follow through by establishing a methodology that can successfully address it.

I have been asked, does technical risk only apply to software projects? The answer is no: it can apply to anything where we have gaps in our knowledge. These gaps may result from some aspect of the project that we do not fully understand, or from a depth of complexity we have failed to acknowledge. The Wright brothers spent four years developing the first airplane, and then a further two years improving it enough to be useful. Boeing’s Everett factory, by contrast, has a 49-day average build time for a 777.

This time lapse is awesome to watch, but I can’t help imagining the stress that the project management team is under during it. As we watch the work proceed, though, does it look like they’re having any issues figuring out what to do next? The AIDAprima was not a first, even though its $645 million price tag and two-year delivery time may lead you to believe it was one of a kind. This video shows you what a complicated project, effectively managed, looks like.

The difference between complicated and complex is an important one. The construction of the AIDAprima was complicated, but they knew exactly what they had to do, and when they had to do it. They could account for risks that had been experienced on previous, similar projects. They were using tried and tested technologies and techniques that were implemented by experienced practitioners. It is hardly surprising that the project made for a great time lapse.

The description of the AIDAprima in the previous paragraph is sufficient to demonstrate that technical risk was not present, at least not in any significant amount. The Wright brothers, by contrast, were not just building a deliverable; they were also learning how the deliverable should work as they pioneered the application of aeronautics. Keep that contrast in mind as you consider the following questions as a high-level means of gauging technical risk.

  • Has our organization ever managed a project of this type before?
  • Have the technology and the techniques used to implement it been proven successful elsewhere?
  • Are we able to clearly articulate how requirements will be fulfilled?

Needless to say, if the answers are negative, then your project is likely facing a lot of technical risk. Addressing it effectively and developing realistic timelines and budgets are still possible, but they take a far different approach than most organizations use. There is no ‘one size fits all’, so organizations must tailor an approach to address complexity.


Karl Cheney
kcheney@castlebarsolutions.com
Castlebar Solutions

Flying, or Monitoring and Controlling a Plane

Learning to fly is one of those things that I always thought about doing, but never got around to. It was definitely on the ‘to-do’ list, somewhere between taking the kids to Disney and cleaning out the garage. Luckily for me, we live in the age of Groupon. The one hour intro was just enough to get me hooked, and I couldn’t help but go back again and again.

Flying has given me a new perspective on how I describe the PMBoK process groups to people. The iterative nature is found much more naturally in flight, or even sea travel, than you could ever find on the ground. Think about driving a car: it’s almost entirely effortless and done without any thought at all. Plan where you are headed, start driving, and ensure that you stay in your lane and at a safe speed while obeying local traffic laws.

I flew around the world a half dozen times in the military, and probably logged at least a hundred thousand miles in the US alone. In all that time, I never really got over my fear of flying because I never really knew what the heck was going on. Turbulence. I knew two things about that word: it made me want another drink, and it scared the heck out of me. I’m not sure if those two were related, but what I do know is that it wasn’t until I started flying that I learned the causes of thermal turbulence, and realized that the nicest days to fly were always the bumpiest!

Driving in a straight line is relatively easy. For some people at least. Flying in a straight line can be a challenge, even in ideal conditions. Maintain level horizon, adjust trim, watch airspeed, adjust throttle . . . and just when everything is perfect, here comes a thermal layer that lifts the plane 200 feet in about 3 seconds. Time to put the nose down and reduce the throttle, regain proper altitude and level off.

That last paragraph described, in order, preventive actions taken to keep our plane on course at the speed and altitude of our choosing. Then something happened that disrupted our plan, so we had to take a corrective action to get back on track. Flying and sailing are much like Monitoring and Controlling (M&C). You often don’t have a clear reference for your progress. Can you eyeball the difference between 3,000 and 3,500 feet of altitude? Probably not. Can you tell if you’re sailing in a straight line with no land on the horizon? I spent two months at sea, and I’ll tell you it’s not that easy! These situations necessitate deliberate actions to ensure that we stay on track and make progress at the rate, and in the manner, that we had planned.

Plans change. The other day I was flying cross-country to another airport. My heading was roughly northeast, but a wind from a different direction than the METAR called for really messed my morning up. Being blown consistently off course after having calculated my wind correction with inaccurate data (think risk data quality assessment!) meant that a new plan had to be developed. A few quick calculations on some iPad apps and a new course and timeline were developed.
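The recalculation those apps perform is the classic wind-triangle solution. A minimal sketch of the math, assuming knots and degrees (the function name and units here are my own, not any particular app’s API):

```python
import math

def wind_correction(course_deg, tas_kt, wind_from_deg, wind_kt):
    """Solve the wind triangle: given the desired course over the ground,
    true airspeed, and the wind (direction it blows FROM), return the
    wind correction angle in degrees and the resulting groundspeed."""
    alpha = math.radians(wind_from_deg - course_deg)  # wind angle off the course
    crosswind = wind_kt * math.sin(alpha)             # component pushing us off track
    headwind = wind_kt * math.cos(alpha)              # component slowing us down
    wca = math.asin(crosswind / tas_kt)               # steer into the wind by this much
    groundspeed = tas_kt * math.cos(wca) - headwind
    return math.degrees(wca), groundspeed
```

A 20-knot wind 45° off the nose of a 100-knot airplane calls for roughly an 8° correction and costs about 15 knots of groundspeed. Feed the same math a bad METAR, though, and the answer is confidently wrong – garbage in, garbage out.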

Monitoring and Controlling processes happen throughout a project’s life cycle, much like how constant and deliberate effort is undertaken from taxi to take-off and landing. It’s always easy to drift off course, on a project or in a plane.


Karl Cheney
kcheney@castlebarsolutions.com
Castlebar Solutions

8 Lessons from an Army Instructor

Three big things happened for me in 2005. I finished my second tour in Iraq, I turned 21, and I was assigned as an instructor at Fort Dix, NJ. In all, I spent almost ten years as an instructor at Fort Dix. Learning to instruct was an experience I truly appreciate.

I was very lucky to be surrounded by some of the smartest people the Army had in its ranks, and I got to learn from all of their mistakes. There are a few of those lessons that I still use when I teach college courses or in professional settings. I still think about those lessons even as I write on topics in this blog.

I’m definitely not saying that I have the best presentation style you would ever see. I’m not sure there’s such a thing as a quantifiable ‘best’ style. The lessons below may offer you some insight as to why I write the way that I do. I will most likely link people to this page when they accuse me of gross oversimplification. If that’s why you’re here: I apologize and I promise that offending you was unintentional, but the simplification was totally on purpose. It’s true, I am often guilty of being a crude reductionist.

1. Start by making a deal.

For presentations that last more than an hour, I always start with a proposition. I offer them a ten-minute break. Every hour, on the hour. I tell them that this is my promise to them, that they will never have to wait more than fifty minutes to check their Instagram or update their Twitter so all their friends know how awesome the presentation is. But I ask them to please save those updates, and any other business on electronic devices, until designated break times. Approaching this as a deal rather than dictated terms has led to people being far more likely to display proper etiquette.

2. Lying is the quickest way to lose credibility.

Admit that you have gaps in your knowledge. Admit if you have a forgetful moment. Admit it if you are stumped, but you’ll check on a break. Heck, you can even admit that you are a human being. You can feel the temperature in the room change when a presenter makes up an answer and people know it. Disengagement follows. Conversely, if you acknowledge the difficulty of a question and ask if anyone knows an answer or if they’d be willing to help you find it, you create a collaborative environment. Oh, and see #1 again – they didn’t forget that promise about the break.

3. It is literally impossible to win a fight during a presentation.

If you are presenting and someone is disruptive or makes a joke at your expense, the best response is to laugh along with them and move right along. Maybe even make a quip such as, ‘okay, more about how lousy my [ presentation / hair / height / etc. ] is later, but let’s get through this presentation right now.’ William Irvine wrote a book on Stoicism and how to effectively handle situations exactly like this. I’m a big fan of redirection. It works great with my kids and works great during a presentation. You determine your level of control; don’t cede it through a lack of self-control.

4. People learn best with stories.

If you bore adults in a classroom, it’s amazing how fast etiquette goes out the window and cell phones come out of pockets. Paul Smith wrote an awesome book called Lead with a Story. His book is more about stories and their use in sales, but I believe there is a lot of crossover. People often disregard sales and negotiation skills, but if you’re conducting a presentation you want it to be interesting, so as to captivate the audience’s attention and then transfer your knowledge. This is best done with a mix of hot and cold cognitions. One example: I’d start telling a great story and, right before the climax, send everyone on a break – it’s amazing how everyone comes back on time!

5. They can read, too.

Everyone has strong opinions about PowerPoint. Mine? I love it. It guides the presentation. It gives everyone something to look at other than staring into my eyes as I scan the room. But PowerPoint doesn’t actually present – that’s my job. When I use a PowerPoint presentation, I will occasionally reference it, maybe point to a word or two on there, and maybe even look at it on occasion. I have never met anyone who appreciates having slides read to them, verbatim, by a presenter.

6. The most powerful communication is powerless.

Adam Grant wrote about powerless communication in his book Give and Take, but it turns out I had been doing it unintentionally for years. I often found myself surrounded by people with vastly more experience and a far greater depth of subject matter expertise. By approaching topics in a deferential manner and often soliciting examples from their experience, I was almost always able to maintain an environment conducive to learning.

7. Anyone can have the knowledge; you need to be able to convey it.

No one cares how many books you have read or how many articles you have written. I stopped being impressed with big words when I realized that most people using them had a word calendar on their desk or a thesaurus close by. Can you take the thoughts in your mind, put them into words, turn those words into sentences, speak them aloud, and have them interpreted by someone else as you originally intended? Did you successfully convey a thought? Often this means breaking a topic down to a sometimes gross oversimplification. That’s okay, though! Being understood matters more than verbosity ever could.

8. You should not teach to the smartest person in the room; you should teach to the most inexperienced.

How perfect a presentation would be if everyone came in already possessing a depth of knowledge on every topic we’ll be discussing. This, of course, never happens. Typically there are multiple levels of knowledge in the room, and while it is tempting to teach to the highest skill level, you run the risk of losing the lowest. This is why I swear by deliberate reductionism. People would swear I’m obsessed with mashed potatoes.

The planned work is contained within the lowest level of WBS components, which are called work packages. A work package can be used to group the activities where work is scheduled and estimated, monitored, and controlled. In the context of the WBS, work refers to work products or deliverables that are the result of activity and not to the activity itself.

This definition is from the PMBOK® Guide, 5th Edition. This is a great definition if you already know what the WBS and work packages are and how we arrive at them. When I discuss these concepts with someone who has never seen them before, I ask them to picture a nice Thanksgiving dinner and start categorizing the food on the table. We may have ‘meat’, ‘vegetables’, ‘appetizers’, ‘desserts’, ‘sides’, etc. Under the category for ‘vegetables’, we may put ‘mashed potatoes’, ‘candied yams’, ‘green bean casserole’, etc. But then, I ask, let’s look at the mashed potatoes. Can we further subdivide this dish into anything else and still have a ‘thing’ that we eat, without getting into the activities or ingredients needed to create them? No.

When you hit mashed potatoes, you’re at the bottom of your WBS.
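That Thanksgiving decomposition can be sketched as a small tree, where branches are categories and leaves are work packages. A minimal illustration (the turkey and pie are my own additions, and ‘appetizers’ and ‘sides’ are omitted for brevity):

```python
# Thanksgiving dinner as a WBS: branches are categories, leaves (None)
# are work packages -- deliverables that can't be usefully subdivided
# without dropping into activities or ingredients.
wbs = {
    "Thanksgiving dinner": {
        "meat": {"roast turkey": None},
        "vegetables": {
            "mashed potatoes": None,
            "candied yams": None,
            "green bean casserole": None,
        },
        "desserts": {"pumpkin pie": None},
    }
}

def work_packages(node):
    """Walk the tree and collect its leaves: the work packages."""
    packages = []
    for name, children in node.items():
        if children is None:
            packages.append(name)  # bottom of the WBS
        else:
            packages.extend(work_packages(children))
    return packages
```

Walking the tree returns only the dishes, never the categories, just as the work packages sit below every grouping level of a WBS.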


Karl Cheney
kcheney@castlebarsolutions.com
Castlebar Solutions