Friday, July 27, 2012

Current Blog Info:

For Latest Post - scroll below or see Blog Archive at right

Pending Articles in Draft:
  • Capitalizing on Lessons Learned  (Part 1)
  • Lessons Learned  (Part 2)
  • Motivating Without Authority
  • Contributing to the 'Project Good'
  • Process Improvement

Value of Earned Value

a case for taking the time to understand and apply Earned Value methods
 
A lot has been written in corporate control and contingency management circles about Earned Value Management (EVM) over many decades; it has been embraced by some organizations, yet ignored by many.  Is it really that unknown, or merely underappreciated?  Perhaps it is deemed of questionable value, or just considered too complicated and time-consuming.  Some may profess to have all the data needed to establish comfort in their projects' risk stance and burn rate.  But is it quantifiable?  Is it reliable?  How early can they confidently project an 'estimate at complete'?

EVM is a method of cost containment and cash flow projection thought to have first evolved in large corporations of the 1940s-60s before being developed to nearly its present form by US DoD controllers in the 1960s and 70s.  Earned Value excels at trending future risk and cash flow based on variance from planned progress, and at providing confidence in the final outcome of a project knowing only the progress to date.  One of the more convincing arguments for the use of EVM is that post-project analysis revealed that as early as 15% into project completion, the final (cost and schedule) outcome could be accurately extrapolated.  In essence, if over budget at 15% completion, the project was likely to be over budget at project end, and by a predictable, measurable amount.  Further, any subsequent cost or schedule mitigation exercises could rarely affect project outcome by more than +/- 10%.  What would you give to be able to predict your project outcome with that degree of certainty?

As a brief introduction, this posting cannot begin to be a 'how to' on implementing EVM on your projects; for a thorough account of EVM calculations and methodology, a quick online search will uncover dozens of potentially useful sources.  The basic variables and calculations are listed below:

Primary variables include:
BAC   (budget at completion)
BCWS   (budgeted cost of work scheduled - or 'Planned Value')
BCWP   (budgeted cost of work performed - this is also your 'Earned Value')
ACWP   (actual cost of work performed)

Basic cost performance measures:
CV   (cost variance) = BCWP - ACWP
CPI   (cost performance index) = BCWP / ACWP
Percent Complete = (BCWP / BAC) *100
EAC   (estimate at completion)  = BAC / CPI

Basic schedule performance measures:
SV   (schedule variance) = BCWP - BCWS
SPI   (schedule performance index) = BCWP / BCWS
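
The formulas above translate directly into a few lines of Python; here is a minimal sketch using hypothetical mid-project figures (all dollar amounts are made up for illustration):

```python
# Hypothetical mid-project figures, in dollars.
BAC = 500_000    # budget at completion
BCWS = 150_000   # budgeted cost of work scheduled (planned value)
BCWP = 140_000   # budgeted cost of work performed (earned value)
ACWP = 155_000   # actual cost of work performed

# Basic cost performance measures.
CV = BCWP - ACWP                       # cost variance: negative = over budget
CPI = BCWP / ACWP                      # cost performance index: < 1.0 = over budget
percent_complete = (BCWP / BAC) * 100  # progress earned against total budget
EAC = BAC / CPI                        # estimate at completion

# Basic schedule performance measures.
SV = BCWP - BCWS                       # schedule variance: negative = behind schedule
SPI = BCWP / BCWS                      # schedule performance index: < 1.0 = behind

print(f"CV={CV:,.0f}  CPI={CPI:.3f}  %complete={percent_complete:.1f}")
print(f"EAC={EAC:,.0f}  SV={SV:,.0f}  SPI={SPI:.3f}")
```

With these numbers the project is slightly behind schedule (SPI below 1.0) and trending over budget: the EAC projects a final cost above the original BAC.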

The starting point in applying an EVM approach on your project is matching work packages to budget dollars in a time-based manner (effectively linking scope, cost and schedule together).  The most effective approach I have found is to first quantify the number of units for each task in your work breakdown structure, then divide the total budget for each task by its number of units.  Here you are effectively equating dollars spent with work accomplished.  Then plot the planned number of units expected to be done per month from initiation to close.  When current actuals are updated monthly, these become the building blocks for calculating the primary EVM variables (above).  Like most things in life, the more you put into it the more you get out of it - and like most aspects of project planning, the vast majority of the effort is front-end weighted.

A typical monthly review would look at the SPI (schedule performance index) and CPI (cost performance index) values, derived from the primary variables above.  How far they deviate from 1.0, in simple terms, points to three broad states: are you over, under or on track?  If you find your project schedule or cost to be pretty near 1.0 at, say, 25% complete, consider yourself on a very good track.  Similarly, if you are straying far off by this point, you know you have some serious decisions to make and perhaps a chat with the project stakeholder(s).  In the example charted alongside this post, the blue line is the PV (planned value, or BCWS); it would be taking little risk for this PM to predict that the project was headed for an on-time deliverable, and potentially somewhat under budget.
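
The unit-based bookkeeping described above can be sketched in a few lines. Everything here is hypothetical (the task names, budgets, unit counts and monthly plan are invented for illustration); the point is only to show how cost-per-unit turns units completed into earned value:

```python
# A minimal sketch of unit-based EVM bookkeeping, using hypothetical tasks.
# Each task has a total budget and a number of work units; cost-per-unit
# converts completed units into earned value (BCWP).

tasks = {
    # task: (total_budget, total_units, planned_units_per_month)
    "design": (60_000, 12, [4, 4, 4, 0, 0, 0]),
    "build":  (90_000, 30, [0, 5, 10, 10, 5, 0]),
    "test":   (30_000, 10, [0, 0, 0, 2, 4, 4]),
}

def cost_per_unit(name):
    budget, units, _ = tasks[name]
    return budget / units

def planned_value(month):
    """BCWS: budgeted cost of all units scheduled through `month` (0-based)."""
    return sum(cost_per_unit(name) * sum(plan[: month + 1])
               for name, (_, _, plan) in tasks.items())

def earned_value(actual_units):
    """BCWP: budgeted cost of the units actually completed, per task."""
    return sum(cost_per_unit(name) * done for name, done in actual_units.items())

# Status at the end of month 3 (index 2): completed units and actual spend.
actual_units = {"design": 12, "build": 12, "test": 0}
ACWP = 105_000

PV, EV = planned_value(2), earned_value(actual_units)
print(f"PV={PV:,.0f}  EV={EV:,.0f}  SPI={EV / PV:.2f}  CPI={EV / ACWP:.2f}")
```

Updating `actual_units` and `ACWP` each month gives you the running SPI and CPI values the paragraph above suggests watching.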

In the end, EVM is a tool like any other, adding support and confidence to project execution; it cannot do the work for you or account for product quality.  There is great benefit to be derived from investing the required time to apply these methods, but beware becoming so immersed in tracking the numbers as to miss paying proper attention to other aspects of project execution and stakeholder satisfaction.

Thursday, March 15, 2012

Reinventing The Wheel

Who asked for a new wheel anyway?

Possibly the greatest expression of inefficiency and wasted time in any process is the reinvention of the wheel.  The wheel I am referring to in this case is any tool, template, form, process, database, log, etc. that a project team routinely requires to get the job done or to track and execute task completion.  Time and time again I see fancy new wheels spinning along the project execution cycle, certain that the existing wheel would have achieved the same thing with less time, effort and confusion.  And how then, if it proves 'road-worthy', will this new wheel find its rightful place amongst the tools available to all other PMs?  This failure to seek out and use best practices and tools is to the ultimate detriment of a project, the project team, and future project efforts.

Not that new ideas should not be welcomed or encouraged - in fact new ideas are vital to progress and efficiency, and are a basic element of the CAPA (corrective and preventive actions) system every PM organization should have in place.  But starting from scratch is rarely less effort than the few steps it takes to source the latest platform and customize it, within reason, toward your new vision of things.  It is worth noting that virtually all project teams are made up of functional resources moving from project to project within a similar environment.  When similar, recognizable tools are shared between projects, the learning curve is shallower at the start of a task and in the event of inevitable team member turnover.

Except in a highly regulated environment or a very narrow range of application (such as a pre-flight checklist or nuclear waste containment protocol, perhaps?), templates and tools are made to be a guide which can then be modified and customized to fit current needs.  Templates typically represent a tried and true standard for execution, with the bugs already worked out.  If, however, a template in practice shows inefficiency or error, that is what feedback loops and Lessons Learned meetings are for (see post elsewhere in this blog), along with a system to flag the latest established version.  So, your shiny new wheel could be the best thing to happen in your industry, but if it is not systematically shared and fed back into the 'wheel pool', future teams cannot benefit.  Likewise, fail to use a perceived 'old wheel', and you may be missing out on a past team's hard-earned knowledge.

Besides the missed lessons or potential misalignment between team members and other project teams, an extension of this reinvention problem occurs where you risk invalidating your output or procedure by deviating from an accepted norm or industry standard.  Take the case of pooled project data, or downstream application of a project output, where an upstream team reinvented how a variable is captured, timed or defined.  When the output of multiple projects is later pooled and the variables do not match, what additional problems have been created that a standardized approach would have prevented?

Is there ever any reason to reinvent the wheel?  Of course!  If the old wheel contributes to known problems or proves to be dated and unreliable, throw it away.  When cultural, regulatory or economic shifts portend the birth of a new reality, it's time to take a fresh perspective.  If the new tasks are so unique that the old wheel does not conform sufficiently, establish a new one and make it available for current and future project teams to use.  And finally, replace the wheel if, through sharing of Lessons Learned, its shortcomings are shown to outweigh its value.  It should be noted, however, that few projects conducted in the same basic environment are truly significantly different from recent project activities.

Best practices are not always the best practices, but they should at least serve as a starting point for project and functional leaders, and at times may be the de facto requirement, perfect or not.  The key is in obtaining consensus from past and future end-users, aligning with standards, implementing a Lessons Learned or feedback loop, and having a process for updating and centrally locating all tools, templates, etc. (essentially your best practices) where they are known to all and easily accessible.


Tuesday, February 7, 2012

Scope Change vs Scope Creep (Part 1)

Change is inevitable and not necessarily bad, but problematic when it creeps

The subject of scope creep is one most project team members know and talk about, yet few seem able to define exactly what it is, or to acknowledge what to do about it - or when.  Change is a natural occurrence and will happen to virtually every project of long enough duration and detailed enough scope.  Most project-savvy organizations have a change-control process, with proper identification, approval and documentation, to address change and all its resulting implications.  Scope creep is more than just change; it may linger undetected until too late, and it involves processes which may or may not be adequately covered by your change-control process.

Scope creep is essentially OOS (out of scope) work, but the 'creep' part adds several elements beyond typical scope change.  Creep tends to accumulate slowly, sometimes without the knowledge of the PM, team or stakeholders.  Creep has many sources, but it also comes from what you as PM allow and do not allow (knowingly or unknowingly).  It can manifest in several ways: from failing to properly identify what is in scope and allowing OOS work to proceed, or by snowballing from one or two passing allowances into items of bigger and bigger cost, schedule or resource implications.  "We allowed this and this, why not that?"  Scope creep, unfortunately, can also result from a "lost in translation" effect, when the requesting party and the performing party each make divergent assumptions about some defined element of scope.  You know..., 'ass - u - me'.

Change is a somewhat conscious act, whether planned or unplanned.  Creep is virtually always unplanned, often unconscious, and has frequently already occurred by the time you realize it, forcing you into mitigation (stop it, allow it, take the hit, or re-negotiate the SOW, budget, timeline, etc).

Staying on top of scope creep begins first and foremost with an adequately defined and approved scope in the form of a written SOW, linked to budget, time and resources - you cannot identify what's OOS or creeping if you don't know the agreed starting point.  A close second is acknowledging clearly and in advance with stakeholders what is (and in some cases is Not) in the agreed scope, and what scope change control process is in place.

One critical technique for effectively managing scope creep is adequate and frequent communication.  Many requestors of scope change do not know that what they are requesting is potentially scope creep, and/or may not have the proper authority to make the change request.  Ensure you have a method to flag this while the request is being made and before agreeing to or proceeding with the action (relying on the standardized change control process you established at the start of the project).  Many organizations will also have a process in place to prevent work without prior written agreement, so keep an adequate allowance for the delays associated with this too.

Monday, February 6, 2012

Scope Change vs Scope Creep (Part 2)

Expect it and plan for it and your project won't be adversely affected by it

Another method to help differentiate real change from scope creep is to track project changes with a log: all changes, big or small, creep or not, actionable or not.  Flag each according to a scale meaningful for your project environment; the critical part is to identify what the consequences could be, which items to act upon, and how and when.
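
A change log of this kind needs very little structure to be useful. Here is a minimal sketch in Python; the severity scale, field names and sample entries are all hypothetical stand-ins for whatever fits your project environment (in practice this would more likely live in a spreadsheet or tracking tool):

```python
# A minimal change-log sketch with a hypothetical severity scale and
# invented sample entries, for illustration only.
from dataclasses import dataclass
from datetime import date

@dataclass
class ChangeEntry:
    logged: date
    requestor: str
    description: str
    severity: str          # e.g. "note", "minor", "major" - whatever scale fits
    in_scope: bool         # covered by the approved SOW?
    action: str = "open"   # open / absorbed / change-order / rejected

log = []
log.append(ChangeEntry(date(2012, 2, 6), "sponsor",
                       "extra summary print-out", "minor", True, "absorbed"))
log.append(ChangeEntry(date(2012, 2, 6), "stakeholder",
                       "add sub-tasks to Activity Y", "major", False))

# Open out-of-scope requests are the candidates to bundle into one change order.
bundle = [e for e in log if not e.in_scope and e.action == "open"]
print(len(bundle), "open out-of-scope request(s)")
```

Even this much gives you the two things the paragraph above calls for: a record of every request, big or small, and a flag that separates the ones you can absorb from the ones that need a change order.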

Project sponsors and stakeholders don't want to be 'nickel and dimed', and project managers are not effective when repeatedly going 'back to the well'.  If a small, simple request can be accommodated with a few keystrokes or an extra print-out, I sometimes choose to avoid the drip-drip of change requests and just do it.  Keep the project goal in mind and aim for satisfaction, but beware the slippery slope of agreeing to too much, too often.  It can sometimes help to bundle a few smaller OOS change requests into one change order.  The change log can also serve as leverage for moving more quickly to authorize a big scope change, in light of the several smaller changes requested earlier without revising the original scope.  Remember to take the time to explain the 'triple constraint' between cost, time and quality - at least one of these three major project elements is going to have to give for whatever change in scope is ultimately approved.

Talk about the process in advance with the project sponsor/stakeholders so it does not come as a surprise.  Not every undocumented or OOS request is creep.  Change is not necessarily a bad thing either, although it is almost always associated with risk and consequence.  In the end, the sponsor and stakeholders should be delivered the deliverables they want - your job isn't to talk them into or out of deliverables, but to provide all necessary information (i.e., constraints) to allow them to make the best decision possible.  As a project is executed you naturally know more than you did when it was originally defined (look up the 'cone of uncertainty').  But creep will lead to mistrust, dissatisfaction or a blown budget/timeline if not handled properly.

PRACTICAL EXAMPLE: A Form of 'Hidden' Scope Creep
It should be noted that scope creep may not be 'new' or 'added' work, but rather the result of miscommunication, misunderstanding and/or lack of specificity.  Consider a not-unusual situation in which, partway through execution, a sponsor/customer/stakeholder is under the impression that the budget and schedule they approved for "Activity Y" included sub-tasks 88-100.  The minimum industry standard is, let's say, sub-tasks 1-87; 88-100 are perhaps desirable but beyond whatever regulatory hurdle exists.
Upon re-checking, the signed project agreement states something like 'Activity Y will be performed to meet industry standard'.  While this can be considered reasonably specific, it is also sufficiently ambiguous to lead the service-provider to believe 88-100 are not included, and the customer to believe that they are.  The customer feels these sub-tasks are obvious and assumed they were included; all the project PM can reply is that the intended SOW is what was budgeted, unless the customer wants to initiate a change order.

Although the service-provider has a contractual justification to continue to exclude the added project scope, the customer is not happy doing without, or paying more and perhaps waiting longer.  Either way represents a blow to trust and to satisfaction.  Now, we can't anticipate every assumption, and past a certain level of detail the effort becomes counter-productive - but like many elements of project management, prevention starts at the beginning: with lessons learned, best practices, proper scope definition, a scope change control plan, and frequent communication.