+. 01-JUN-2000
+( Boyd cycle

Date: Sat, 17 Jul 1999 11:45:51 -0400
From: Tony Rizzo
To: "CM SIG List"
Subject: [cmsig] Re: what is Boyd cycle

> Forgive me for my ignorance - but what is the Boyd cycle?

It's also known as the O.O.D.A. loop. It stands for Observe, Orient, Decide, Act. Pilots use it to kick ass in a dogfight. They OBSERVE their battle situation, ORIENT themselves with respect to the horizon and their opponent, DECIDE what maneuver to execute next, and they ACT, i.e., they do the maneuver. The pilot who can execute the Boyd cycle a little faster than the opponent gains a small, cumulative time advantage over the opponent and takes control of the battle situation. Usually, that's the pilot that gets to fly home. The other, well....

The Boyd cycle is applicable in product development. The TOC Multi-Project Management Method is an absolute requirement for achieving and improving an organization's Boyd Cycle. In fact, an organization can have an Effective Boyd Cycle that is far shorter than the typically measured "cycle time." The latter is just the average duration of the organization's projects. The former is the interval between releases of successive product iterations. Look at the figure, below:

traditional
|<-- cycle time --->|<-- * -->|
|                   |         |
p1p1p1p1p1p1p1p1p1p1
    p2p2p2p2p2p2p2p2p2p2p2p2p
        p3p3p3p3p3p3p3p3p3p3p3p3p3p3
            p4p4p4p4p4p4p4p4p4p4p4p4p4p

(*) denotes the Effective Boyd Cycle Time.

The traditional cycle time is measured as the average interval between project start and project end. The Effective Boyd Cycle time (a term that I've defined) is the interval between the introduction of one product release and the introduction of the successive product release.

If the organization does a good job of communicating with customers, and if the organization can achieve and maintain rapid product releases, each with new features, then the organization gains a small time advantage with each new release. That time advantage is cumulative. Therefore, the competitors fall behind more and more with each iteration of the organization's Effective Boyd Cycle. Before too many cycles, the competitors find themselves developing features that the organization is already making obsolete in its own product line.

Any organization that adopts the TOC Multi-Project Management Method today and exploits it with effective product management will gain market share at incredible rates, because the competitors won't be able to keep up with the rapid product releases. In fact, the competitors will kill themselves trying to achieve faster cycle times by pushing their traditional system harder and harder. Therefore, the organization will gain market share not only because its products reach the market faster but also because the competitors screw up their own operations trying to keep up with it, and as a result they delay their own product releases and damage the quality of their offerings. At that point, resistance is futile!

The TOC Multi-Project Management Method is a critically important necessary condition for exploiting this operational method, which in the military is known as Maneuver Warfare. Do you know of any organization that wants to become not an industry leader but the entire industry?

Date: Sun, 18 Jul 1999 00:45:14 -0400
From: Tony Rizzo

Amir Sultan wrote:
> This is quite an interesting/revolutionary principle. I would like to know
> more on Boyd Cycle. From your explanation, OODA appears to be a paradigm
> applicable to an individual like a pilot or a closed loop automated system.
> Here the process of Observe, Orient, Decide & Act can be well integrated. Also, at an
> industry level we have seen OODA manifesting at the macro-level, like the IT
> industry or the entertainment electronics in Japan in the '80s, and in
> automobiles during transitions/crossroads in the buying patterns.

I've been told that Parametric Technologies Corporation worked to maintain a six-month release cycle in the late 1980s, while the industry leader for that market (McDonnell Douglas, Unigraphics CADD software) could barely do one release per year. The Parametric Technologies product, Pro Engineer, is now the industry standard.

The Japanese regularly apply Maneuver Warfare in their product development and marketing efforts. I've read accounts of Japanese companies creating multiple versions of a product and testing all of them simultaneously, seeming to compete with themselves. In fact, they are not competing with themselves. They are collecting reconnaissance, market information. Each version of the product provides feedback to the company's developers. They, in turn, abandon the versions that customers don't favor and develop in quantity the one or two versions that customers favor. Think of it as sending out many small patrols, to probe the enemy's weak points. When you find a weak point you focus your efforts there. This is much more effective than confronting the enemy's strength with your strength (attrition warfare).

3-Com Corporation is using an effective Boyd Cycle right now, with its Palm Pilot product. The first release was the Palm Pilot, then there was the Palm II, which was followed quickly by the Palm III. Now there is the Palm V. The Palm VII is already being featured in the magazines. I'm willing to guess that the Palm X is in development at this time. Do you see 3-Com's competitors keeping pace? I don't.

Another classic example is Intel. It started with the 8080 chip. That was followed by the 8086 chip. Then came the 80186, and the 80286, and the 80386, and the 80486, and the Pentium, and the Pentium II,.... Motorola, with its 6800 chip and its 68000 chip, couldn't keep pace. Neither could the other chip producers. Of course, the entertainment electronics industry is also a good example, as you point out.

> Are there cases where OODA can be applied as a methodology in companies? Are there
> safeguards to ensure that the small cycles / tasks of the OODA approach do not
> act counter sometimes & result in self-inflicted damage.

The only safeguards of which I am aware are provided by effective strategy and effective management. Without these, well....

+( Buffer Estimation

Check Tony Rizzo in the "CCPM" record of this file and in his tutorial at www.pdinstitute.com. Check Rob Newbold in his book.

From: "Jim Bowles"
Subject: [cmsig] Re: how calculate the buffer -Reply -Reply
Date: Thu, 23 Sep 1999 21:15:33 +0100

Mark said RE: Buffers: I seem to recall that one of the books, probably The Goal, suggested half the processing time as the default buffer to set up. Sound familiar to anyone? Mark Fountain

This is not what Dr G has said. In his recent satellite broadcast he said that, for starters, the sum of the time buffers should be 50% of the current lead time. Some DBR experts may question this but it should be good enough to establish what is happening.
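A minimal sketch of that starter rule, with hypothetical numbers. The 50% figure is the rule quoted above; the split between shipping and CCR buffers shown here is an assumption for illustration only, not part of the rule.

```python
# "For starters": the sum of the time buffers = 50% of the current lead time.
# The shipping/CCR split below is an assumed illustration, not part of the rule.

def starter_buffers(current_lead_time_days, shipping_share=0.6):
    total_buffer = 0.5 * current_lead_time_days       # the 50% rule of thumb
    shipping_buffer = shipping_share * total_buffer   # assumed split
    ccr_buffer = total_buffer - shipping_buffer
    return shipping_buffer, ccr_buffer

ship, ccr = starter_buffers(current_lead_time_days=40)
print(f"shipping buffer: {ship:.1f} days, CCR buffer: {ccr:.1f} days")  # 12.0 and 8.0
```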
======================
From: "Tony Rizzo"
Date: Wed, 29 Jun 2005 14:26:12 -0400
Subject: RE: [tocleaders] Buffer Size and 3 Sigma Limits

For example, if we have the variation plotted for the time it takes to complete tasks of a similar nature (say, laying bricks as an example) and we have the 3 sigma limits calculated for the task durations, is there any way to use that data to estimate a suitable feeding chain or project buffer?

Yes, there is. Let L = the control limit, as defined in "Understanding Variation" by Wheeler. Let M = mean task duration. Let i denote the ith task of n tasks in the sequence for which we seek an estimate of variation.

The variation in the duration of the ith task is

  Di = Li - Mi

The VARIANCE of the sequence is

  V = Sum(Di^2)

Let E denote the variation in the duration of the sequence of tasks:

  E = SQRT(V)

If we use the tasks of a component sequence (feeding chain), then we have the variation in the duration of that sequence, caused by variation in the tasks of the sequence alone and estimated to the same confidence level for which we have the control limits of the task-level variation data.

The following cautions are important. First, the control limit calculation presented by Wheeler uses constants determined from empirical work done by Shewhart. As such, the calculation does not require prior knowledge of the distribution of the data. This is useful. However, the control limits do not coincide with the 3-sigma limits of any assumed distribution, except by rare coincidence. Consequently, the 3-sigma limits based on some assumed distribution often underestimate the degree of variation.

Second, this project tolerance (or component tolerance) calculation is based solely upon the task-level variation of the tasks in a single sequence, such as the critical sequence. As such, the calculated tolerance value is an underestimate. It excludes a number of significant causes of variation, some external to the project and some internal to the project. Two of these additional and quite significant causes of variation are:

1) The grossly inefficient management process of virtually every enterprise, which causes extensive multitasking -- The effect of this disastrous and ubiquitous management process is to multiply project duration by a factor of 2 to 3. It also wastes a proportional degree of capacity and, therefore, of the development payroll. But it does maximize the organization's workforce utilization measurement, which the leadership and the management team use as their recreational drug of choice.

2) The significant effect of the interaction between intra-sequence variation and the number of parallel sequences -- This latter effect exists to an extensive degree at every obvious integration event. But it also exists at the start of nearly every task of a project, because the start of every task is really the integration of a resource and one or more inputs, all of which (including the resource) are the products of earlier parallel sequences. Therefore, this interaction effect is both concentrated at the obvious integration events and also dispersed throughout the project. The resulting impact on the variation in project duration is massive and unaddressed by any of the calculation methods.

---
> I am looking at the way in which variation is tackled in the different
> scheduling systems, therefore also at TOC.
>
> I however still have a problem in understanding how TOC really
> handles (calculates?)
> its buffers to cope with variation:
>
> In Rob Newbold's standard, Project Management in the Fast Lane, p. 94:
>
>   Buffer = 2*sigma = 2*SQRT(((w1-a1)/2)^2 + ((w2-a2)/2)^2 + ... + ((wn-an)/2)^2)
>
> where wi = worst-case duration of a task and ai = average duration of that task.
>
> At the beginning of the same page he states a rule of thumb: Buffer = 50% of unpadded Critical Chain duration.
>
> Ed D. Walker II: An introduction to Critical Chain Project Management http://www2.gasou.edu./facstaff/edwalker/ccpm.PDF points to literature which elaborates on buffers.
>
> When Goldratt, on page 238 of the Haystack Syndrome, splits the buffer into a "fixed portion" (pure Murphy) and a "non-instant availability (of non-bottleneck resources)", i.e. variable portion, he states: "The system has to determine this variable portion"

============================

Watch the following example:

A: x
B: xxx
C: x
D: xxx
E: x
F: xxx
G: x
H: xxx
I: xxxbbb

The likelihood for A, C, E and G is (so defined) each 50%. In the above cascade, this means that the likelihood of the whole cascade to finish is 0.5^4, which is only 6.25%. So you can almost be sure the buffer (bbb) will be eaten and thus the project buffer will be affected. If you do:

A: xb
B: xxx
C: xb
D: xxx
E: xb
F: xxx
G: xb
H: xxx
I: xxxbbb

the likelihood of the whole cascade to complete on time is still about 50%. Why take a risk that is *very* high? I think it is better to either do it as above or have a loooong buffer at "I", because you can be sure a fair part of it will be eaten.

---
From: "Larry Leach"
Date: Sat, 9 Mar 2002 09:23:43 -0600

> From: "Tony Rizzo"
> The SSQ accounts for common-cause variation and not for special-
> cause variation. So what?! This doesn't mean that it's incorrect. It
> means only that we might be trying to use a perfectly valid mathematical
> model for a situation for which the model was neither designed nor intended
> to be used. The problem isn't with the SSQ method. The problem is
> with us, for being so ignorant regarding special cause variation and
> common cause variation. The solution isn't to discard a perfectly
> useful model. The solution is for us to smarten up!!!!!

We are in complete agreement, but I did not notice anyone suggest that SSQ is incorrect, or to discard it. It is mathematically correct, as explained in my book. My point is that SSQ is insufficient for even modestly large (i.e. critical chain greater than about 10 tasks) projects. You need to size (at least the project buffer) using Buffer = SSQ + Bias. ProChain allows for this. I do not know if Scitor does.

Let me illustrate why I say "for larger projects." Consider a chain of tasks, where the sum of the average duration estimates is one. Assume that however many (equal size) tasks you divide the chain into, the low-risk durations are twice the average durations. If you divide the chain into n tasks, the SSQ buffer is the square root of 1/n. (Homework: please verify this.) Thus, you get the following:

  n     Buffer (% of Chain)
  4     50%
  10    33%
  16    25%
  100   10%

Many of the potential sources of bias can cause schedule (and cost) over-runs that affect performance by several tens of percent. Thus, as the number of tasks increases, the protection provided by a SSQ buffer alone is not enough. BTW, ALL sources of project performance data I have found (Standish Chaos study for IT projects the best documented) demonstrate that average over-runs go UP with larger project size, not down. I have attached the table from my paper, not the entire paper, as I am still working on it.
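As a quick check of the two formulas quoted above (Newbold's 2*SQRT(...) buffer and Leach's 1/sqrt(n) homework), here is a minimal sketch with hypothetical task data; it reproduces the table up to rounding.

```python
from math import sqrt

def ssq_buffer(tasks):
    """Newbold-style buffer: 2 * sqrt(sum(((worst - avg)/2)^2)) over the chain."""
    return 2 * sqrt(sum(((w - a) / 2) ** 2 for a, w in tasks))

# Hypothetical chain: (average, worst-case) durations in days.
chain = [(4, 8), (6, 10), (3, 7), (5, 9)]
print("SSQ buffer:", round(ssq_buffer(chain), 1), "days")

# Leach's homework: n equal tasks, total average duration = 1, and each
# low-risk (worst-case) duration = 2x its average. The buffer shrinks as 1/sqrt(n).
for n in (4, 10, 16, 100):
    tasks = [(1 / n, 2 / n)] * n
    print(n, "tasks -> buffer =", round(100 * ssq_buffer(tasks)), "% of chain")
```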
I do not know how to sum these special causes. I would appreciate thoughts on this. I do know how to use the actual data in an ongoing multi-project environment to determine the actual bias and needed correction. It takes some explanation, though...so I will save it for another time.

+( buffer slack

To:
From: "Jack Vinson"
Subject: RE: [Yahoo cmsig] Buffer Slack

Santiago-

Eli Schragenheim went into more detail on S-DBR at TOC ICO, and I wrote up my impressions at http://blog.jackvinson.com/archives/2006/11/09/toc_ico_simplified_drumbufferrope.html.

The last question first. If you know a) your CCR and its capacity, b) what orders you have accepted into the system, and c) your lead time (both quoted and production), you can estimate when each order will cross the CCR. In most cases, since the time in production is primarily queue time, the recommendation is that the order will cross the CCR at about 1/2-way through the lead time. From this, you should be able to build a view of the planned load on the CCR that looks like a wave of orders. As this executes, you should see a high activation of the CCR in the near term. At some point, the planned orders on the CCR will drop off. You can plan the next order to run through the CCR at this point. The date quoted to customers is the greater of your quoted lead time OR the location of that front plus 50% of the lead time (in case you are accepting more orders than your system can handle in the standard lead time. Also a signal for improvement.) Orders are then released onto the floor via the shipping buffer.

Note that in the first case (the front is < 1/2 QLT), you should add the difference to the shipping buffer for that order. This ensures the order runs across the CCR at the planned time AND gives it the correct priority mechanism (the shipping date). The concern about work released too early is mitigated by the assumption that the plant isn't the true constraint of the system, and that you want to do a good enough job of controlling the flow of work across the CCR.

Maybe this last concept (changing the size of the shipping buffer) is buffer slack?

Jack Vinson

________________________________
From: cmsig@yahoogroups.com [mailto:cmsig@yahoogroups.com] On Behalf Of Santiago Velásquez Martínez
Sent: Saturday, December 16, 2006 7:55 PM
To: CMSIG; cmsig@yahoogroups.com
Subject: [Yahoo cmsig] Buffer Slack

Hello to all,

I heard there is a new concept in S-DBR called buffer slack. Also there is a new way of releasing orders to the shop floor based on the planned load and the next available capacity on the CCR. Does anyone have any insights about this? Maybe any TOC ICO attendee who listened to such concepts?

---
To:
From: "Jack Vinson"
Subject: RE: [Yahoo cmsig] Buffer Slack

If the "buffer slack" is the excess between the time you expect the order to be done and the time it is really due, then we are talking about the same thing. And, yes, it is a great way to monitor for when you will need to enhance capacity at the CCR.

An example... Your quoted lead time is 10 weeks, and your production lead time is seven. The current load of work is such that the CCR is loaded for the next 5 weeks. If a new order comes in, the soonest it could be scheduled on the CCR is 5 weeks.
Under normal circumstances this order would finish around 8.5 weeks from now (5 weeks + 1/2 the production lead time), but the customer is given a quoted date of 10 weeks AND the buffer for this order is set to 8.5 weeks to ensure the correct prioritization of other work coming through the system. This will release the work onto the floor in about 1 1/2 weeks from now, which will get it to the CCR around 5 weeks.

On the other hand, if the planned load for the CCR is showing 8 weeks to be the nearest available time, then we must provide a longer-than-normal quotation to our customers: 8 weeks + 3.5 weeks = 11.5 weeks. In this case the shipping buffer stays at 7 weeks, and the work will be released at 2.5 weeks from now to get to the CCR around 8 weeks.

With Rapid Response, a portion of the CCR capacity is reserved for rapid response orders, but the calculations are the same. (Of course, the production lead time / quoted lead time ratio is probably smaller by the time you are doing rapid response offers.)

Jack

________________________________
From: cmsig@yahoogroups.com [mailto:cmsig@yahoogroups.com] On Behalf Of Santiago Velásquez Martínez
Sent: Monday, December 18, 2006 10:55 AM
To: cmsig@yahoogroups.com
Subject: Re: [Yahoo cmsig] Buffer Slack

Jack,

Would you mind sharing a numerical example of: "Note that in the first case (the front is < 1/2 QLT), you should add the difference to the shipping buffer for that order. This ensures the order runs across the CCR at the planned time AND gives it the correct priority mechanism (the shipping date). The concern about work released too early is mitigated by the assumption that the plant isn't the true constraint of the system, and that you want to do a good enough job of controlling the flow of work across the CCR."

As I understand it, buffer slack is: Quoted Lead Time - (the next available position on the CCR + 1/2 shipping buffer). As this gap closes and nears zero, you can anticipate that you will need additional capacity. It is a mechanism so you can open capacity before you run out of it!

+( building a project plan in reverse

Date: Fri, 28 Jul 2000 01:07:29 -0400
From: Tony Rizzo

After mentioning my name, Gene Kania says, "...a few years ago when I tried to use this approach with some of the Lucent teams that I work with, it was so tedious and counter-intuitive that this approach was quickly rejected and abandoned."

First, Gene, I've never taught you PDI's project planning process. I am most certain that the process which you tried and discounted a few years ago isn't even close to the process that PDI brings to client organizations today. PDI's project planning process is very different from earlier attempts to adapt the Prerequisite Tree to project planning. Consequently, you've just given everyone a rather wrong impression.

Second, yes, planning in reverse is somewhat tedious, even with PDI's project planning process. In fact, I go well out of my way to make it even more tedious, for a very practical reason. One of the biggest problems that exist today in product development is the inability of project teams to create useful project plans. By using the work-flow method, project teams often include many unnecessary steps in projects, with "danglers," i.e., outputs that go to no one. Worse! Project teams often fail to identify important segments of the project, which they discover only too late, when there is insufficient time and there are insufficient resources and insufficient money with which to do the surprise work.
Therefore, it is well worthwhile to undertake even a tedious project planning process, so long as it yields a useful, complete project plan. Building a project plan in reverse, i.e., beginning with the final deliverables and working backward, causes the project teams to identify a more complete set of intermediate deliverables. This, in turn, helps the team avoid unnecessary work, while ensuring that the right work is included in the project plan. This reverse planning approach is at the heart of creating a deliverables-oriented project plan. PDI's project planning process takes this process one important step further, by explicitly including the subject matter expertise of the developers.

I'm proud to quote a recent client, with respect to PDI's project planning process: "Your process is tedious. But it's good." If we're going to take shortcuts, we shouldn't take them when we're thinking.

Kania, Eugene (Gene) wrote:
>
> Tony's approach is valid, and probably to his chagrin, I lump it under the AGI
> or "traditional" Goldratt approach which seems rooted in the logic of the
> TOC Thinking Process.
>
> Using Tony's own words, the approach is "tedious". In fact, a few years ago
> when I tried to use this approach with some of the Lucent teams that I work
> with, it was so tedious and counter-intuitive (you are asked to construct
> the network backwards!) that this approach was quickly rejected and
> abandoned. In my opinion, it has more negatives for the project planning
> process than positives. Today, I no longer use or recommend this approach
> with any of the Lucent teams I work with.
>
> Over time, I have settled on a simple, common sense approach for building
> networks (PERT charts) which synthesizes good practices that I have learned
> from competent people and teams inside and outside of Lucent. The core of
> the approach I have to give credit to the folks at ProChain Solutions who
> have worked extensively with Lucent over the years and continue to support
> our efforts.
>
> I suggest you contact them to learn more about how they approach building
> networks (PERT charts).

+( CCPM

Invented by E. Goldratt, described in: Goldratt's Critical Chain, www.chesapeake.com; CCPM improves Project Performance, Larry Leach/Quality Systems.

1) Insights from project management:
- Project steps follow an asymmetric, right-skewed probability curve (not a normal distribution!) - i.e., only a few project steps finish before the shortest possible time, and very many take longer.
- Projects suffer from the student syndrome: in the first 2/3 of the time only 1/3 of the work gets done - in the last third, 2/3 of the work gets done.
- Multitasking lengthens the average lead time (DLZ) of an activity as if it were "God-given".
- We have a culture that punishes lateness, so everybody builds in safety. These safety margins per project step are added up linearly, although according to the mathematics/statistics of aggregated probabilities this would not be necessary.

2) CCPM philosophy:
- Only 50% of the estimated time is used as the duration of an activity. One accepts that 50% of the activities will overrun their time (in effect, the hidden reserves are stripped out).
- Project paths that are not on the Critical Chain are subordinated to it and protected with buffers.
- In this way the project plan is built not only from the logical sequence of steps but also takes the resources into account (only the critical ones).
- No multitasking is allowed, i.e., resources may only ever work on one project step at a time - at 100%.
- A buffer is placed at the end of the project.
- Buffer times are planned into the Critical Chain for critical resources. Non-critical paths also feed into the chain with a buffer, so that they do not endanger it by being late.
- CCPM uses late-start scheduling (activities are scheduled as late as possible, working back from the project end)!
- The project manager steers the project by watching the buffers. If the first third of a buffer is consumed, there is no reaction yet; in the second third, countermeasure plans are worked out; and in the last third they are put into action.

==================================================
Date: Mon, 26 Apr 1999 17:25:36 -0400
From: Tony Rizzo
To: "CM SIG List"
Subject: [cmsig] RE: Why the three zone rule doesn't work for project buffers

Actually, the statistically valid way to model a single chain of tasks is by representing the individual tasks with the expected value of duration. This expected value is approximated best with the average, if/when data are available. The expected value of the chain (a sum) equals the sum of the individual expected values. Don't take my word for it. Look in any introductory text on probability and statistics. Also, as Eli S. noted, for such a simple chain of independent tasks (independence is assumed, really), the variance of the sum equals the sum of the variances. This is the theoretical basis for using the square root of the sum of the squares as a means for calculating buffer size. It is important to note that, as Eli S. observed, the independence assumption at times is only a crude approximation. Usually, it is the best approximation that we can make.

When there are parallel feeding chains involved, such as we find at assembly tasks, the model becomes rather nonlinear. However, if the feeding buffers are used and sized properly, then the linear model mentioned above still holds, subject to the above mentioned assumptions. The feeding buffers serve to decouple the feeding chains from the critical chain. If the independence assumption is not violated too badly, and if the feeding buffers are used and sized adequately, and if the critical chain consists of at least 6 to 12 tasks that don't vary too greatly in duration (Zultner's example of a 10-day task and 10 one-day tasks is one where the tasks vary too greatly), then the distribution of completion times of the critical chain approaches a normal distribution, for which the expected value is also the median, i.e., the average is also the 50% probability time. To verify this statement, look up the Central Limit Theorem in that introductory text on probability and statistics. Do not take my word for it!

So, if the above assumptions are fairly valid, then on average the projects should finish without any buffer consumption. But, this means that approximately half the projects should consume some of the project buffer. Approximately half should consume no project buffer. One benefit of this approach is that unless something goes wrong in a big way, the management team has to do virtually nothing to cause the projects to be on time. Once in a while, the management team really has to intervene, to maintain on-time performance well above 90%.
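A small Monte Carlo sketch of the argument above, using made-up, right-skewed task durations: the chain's expected duration is the sum of the task means, and roughly half of the simulated runs exceed that sum, i.e., eat into the project buffer.

```python
# Hedged illustration with hypothetical numbers: for a chain of independent
# tasks, the expected chain duration is the sum of the task means, and with
# enough tasks the total is roughly normal, so about half of the runs finish
# past the sum of the means (consuming some project buffer).
import random

random.seed(1)
TASK_MEANS = [4, 6, 3, 5, 7, 4, 6, 5]   # hypothetical 50% (mean) estimates, days

def task_duration(mean):
    # Right-skewed draw scaled so its expected value equals the mean:
    # E[lognormal(mu, sigma)] = exp(mu + sigma^2/2) = 1 for mu=-0.125, sigma=0.5.
    return mean * random.lognormvariate(-0.125, 0.5)

runs = [sum(task_duration(m) for m in TASK_MEANS) for _ in range(20_000)]

chain_mean = sum(TASK_MEANS)
print("expected chain duration :", chain_mean)
print("simulated average       :", round(sum(runs) / len(runs), 1))
print("share of runs needing some project buffer:",
      round(sum(r > chain_mean for r in runs) / len(runs), 2))   # roughly half
```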
Of course, this assumes that everybody is behaving appropriately, which is a big assumption, right? Yes, it is a big assumption. This is why weekly buffer management meetings with the decision-maker are important. That's when the decision-maker has her opportunity to communicate to her managers that it's important to her and to them to save the project buffers, i.e., to prevent the complete consumption of the project buffers. When the managers finally believe that it's important to them to save the project buffers, then behaviors begin to improve in a noticeable way. The organization's performance, magically, improves as well. The realization that comes over them is a thing to behold: "Hey! This stuff is really working. We're finishing projects much faster than we did in the past."

Tony Rizzo

---
Date: Fri, 17 Aug 2001 15:42:24 -0400
From: Frank Patrick
Subject: [cmsig] Re: Definition of Critical Chain

J Caspari asked . . .
>Assuming that the critical chain of a project has been properly
>buffered, does the term 'critical chain' include the project buffer?

. . . to which Tony answered very quickly and emphatically . . .
>Yes!

I thought that a properly buffered critical chain (with PB among other components) constituted a "critical chain schedule," not a "critical chain." The PB is consumed or replenished by variation in the performance of the CC. They're two different things. Just to check myself, I went back to my FAQ list on the subject in which I wrote . . .

02 - What is a critical chain?
The critical chain of a project is the set of dependent tasks that define the expected lower limit of a project's possible lead time. Dependencies used to determine the critical chain include both logical hand-off dependencies (where the output of the predecessor task is required to start the successor), and resource dependencies (where a task has to wait for a resource to finish work on another task). The identification of the critical chain uses a network of tasks with "aggressive but achievable" estimates, that is first "resource leveled" against a finite set of resources. In traditional project management language, the structure of a critical chain is similar to that of a "resource constrained critical path."

Buffers aren't tasks, so IMHO, I consider the CC and PB as separate entities. Now I'm not trying to set myself up for a disagreement with Tony based only on my own interpretation and definition, so I checked out a couple of other sources. Given Eli G's annoying habit of not including an index or glossary in his books, it took me a while to find, in CRITICAL CHAIN, the following passage . . .

" . . . we'd better straighten out the terminology. Let's leave critical path to be what everyone else calls a critical path, the longest path. But we know what counts is the constraint, and the constraint is the longest chain of dependent events. Since we acknowledge that dependency can be the result of a resource, we better provide another name for the chain of steps that are the constraint. "Why not 'critical chain'?" Brian suggests. "Sounds good."

(This happened well after the introduction and discussion of buffers in the book, but this is not really conclusive on the question at hand.)

From Newbold's glossary in PROJECT MANAGEMENT IN THE FAST LANE . . .

Critical Chain. The Critical Chain is that set of tasks which determines the overall duration of a project. Usually it requires taking resource capacity into account. It is typically regarded as the constraint or leverage point of a project.
and

Project Buffer. A project buffer is placed after the final task of a project in order to protect the completion date from delays, especially delays along the Critical Chain. This is the most important buffer to place and to monitor.

hmmm . . . the CC is a set of tasks and the buffer is placed after the final task. Buffers aren't really tasks, are they? The fact that the CC determines the overall duration doesn't mean it in and of itself constitutes that duration. The CC plus the PB = a reasonable promise for that duration.

I wouldn't want to leave out Larry Leach's CRITICAL CHAIN PROJECT MANAGEMENT . . .

Critical Chain. The longest set of dependent activities, with explicit consideration of resource availability, to achieve a project goal.

. . . and

Project Buffer. Time placed at the end of the critical chain in a project schedule to protect the overall schedule.

hmmm . . . I thought that would be more helpful. (By "at the end," does Larry mean "as the last component of" or "after?" Ah, but again, when we're talking buffers, we're talking chunks of time. When we're talking chains, we're talking tasks. Two different things.)

One more source -- I knew I'd find someone to agree with me. I've got a copy of an old AGI-copyrighted workshop entitled "Critical Chain in R&D - The One-Project Solution," authored by a former Lucent employee whose name escapes me for the moment (grin), in which, in the summary page on "A Critical Chain Schedule," it says:

A Critical Chain Schedule consists of the following:
- A feasible and complete dependency structure (a tough call).
- A Critical Chain.
- A Project Buffer.
- Feeding Buffers.
- Resource Buffers.
- A start-date for the Critical Chain.
- Start-dates for the non-critical chains.
- A project due-date.

OK. That's clear enough for me. We've got a Critical Chain and we've got a Project Buffer. No matter the source, I still think of the Critical Chain and the Project Buffer as two distinct entities. The critical chain is made up of the tasks and is identified based on their dependencies and on estimates that reflect significant safety removed from them. The project buffer is derived from the critical chain and reflects the anticipated variation of its component tasks, and is used to aid in the protection of the promised due date from that variation in the performance of the critical chain. The overall schedule is primarily based on the combination of the CC plus the PB, with occasional additional time for feeding buffer effects.

---
Date: Fri, 17 Aug 2001 21:19:25 -0400
From: Frank Patrick
Subject: [cmsig] Re: Definition of Critical Chain

When I mentioned "occasional additional time for feeding buffer effects," I was referring to those times when the schedule contains gaps in the Critical Chain due to feeding chains that are anchored at two ends by CC tasks and parallel CC task time that is close (less than a feeding buffer's difference) to the length of the chain. (I believe that there have been recent discussions on this situation and alternatives to accepting the CC gaps, so I'll let it go at saying that plain vanilla CC scheduling accepts those gaps.) There are also those times when the earliest tasks of a project schedule could be non-critical (feeding) chain tasks that are pushed earlier than the first CC task by the insertion of a feeding buffer.

---
Date: Sat, 18 Aug 2001 09:37:56 +0300
From: Eli Schragenheim

I think John is right in looking for a second look.
Let's go back to the way Frank's remark was understood, meaning the PB has to cover for delays through the CC, and for delays through non-critical chains that consume all the feeding buffers. That's not all. Resource contention can impact the CC (especially contention between different projects) and it might easily consume feeding buffers.

So, how should we calculate the size of the buffer? Should we try to improve the estimation by looking at how many long non-critical chains are integrated into the PB? How much resource contention do we have in reality? Remember, within the scope of a single project, the planning resolved all resource contention. Only on paper, of course. In reality, even within the single project we'll experience resource contention. Then, for multi-projects we resolve the contention of the drum resource only. So, many more resource contention cases will appear. Would you consider this in the calculation of your next project buffer? On the other hand, the higher the number of tasks in the chain, the less the impact of the fluctuations. Would you take the number of tasks into account as a parameter?

I don't have a practical methodology for that. In some cases I might consider the above for my initial guess. In the next project I'd like to learn from experience. Of course, I'd like to learn the RIGHT lesson. I can look at the final consumption of the previous project buffer. I can also look at the actual consumption of the feeding buffers. And I have to consider the possibility that some of the parameters have changed since the last project. I still have to make a guess, but I have some inputs regarding what happened to my previous guess.

Now, I'd like to consider John's question.

>>What corrective action would be triggered if a two year (on the average) and three year (sometimes) project (that is, two years of median task times and a one year project buffer) were to be completed in 23 months?>>

I see two distinct questions here. One is, what result would be significant enough to suggest a change next time? The second is, shouldn't we learn from "too early" completion as well?

Here comes the term I try to introduce: the reasonable range. First let us estimate what the reasonable completion times of a project are, once it has been planned according to CC. Certainly we expect the project not to finish LATER than the project buffer. But, do we really expect the project to finish before the start of the buffer? I truly don't think so - not because mathematically it cannot happen. Given 50% estimations, you should have a good chance of finishing early. But, I think one of those "non-parametric" interventions actually rules it out: we did not eliminate Parkinson's Law. We've reduced its impact, but not chased it out.

Let me tell you, John, that running the PMSim (the project management simulator) so many times, with CC planning and reduced impact of Parkinson's Law, shows almost nil occurrences of completion before the buffer. The many times where projects finish AT THE VERY END of the buffer show Parkinson's Law to be active. Hence, I tend to estimate that the project buffer is the reasonable range for completion of projects.

Now, the insight of buffer management is: do not look only for cases where the result was beyond the control limits - accumulate data for future learning whenever the result is pretty close to the limits. [This means inquiring about the tail of the non-parametric distribution function.] For me, I'd look for cases where the result is 5% or less from the limits.
In John's example - a two-year-long critical chain with another one year of project buffer - if the project were to complete in 23 months or even 25 months, I'd assume that I'm getting task estimations that are too long. If the project were to complete in 35 months, I'd be tempted to ask the project team "how come?", and if they convinced me that it wasn't due to Parkinson, then I'd consider using somewhat longer buffers.

Now, John, Tony, Brian and everybody else, are you really interested in where the 50% confidence point is for the project as a whole? (It is NOT at the start of the buffer, but it should be somewhere within the buffer.) And knowing that the 50% confidence point (the median) is not the average - are you also interested in where the average project duration is? And please, if you think it is an important piece of information, can you show me the impact on the decisions that we need to make?

---
From: "Holt, Steven C"
Subject: [cmsig] RE: Parkinson's Law
Date: Tue, 28 Aug 2001 19:01:35 -0700

Last year I read a paper from the Bios Group (partially a spin-off of the Santa Fe Institute) on the use of computer-based models to determine optimal organization size. Two of the major variables were the number of interactions within the group and the number of connections external to the group. (See: Optimal Organizational Size in a Stochastic Environment with Externalities (1999) by Bennett Levitan, José Lobo, Stuart Kauffman, Richard Schuler.) The paper used agent-based models to predict the "best" group size based on a number of variables.

This got me searching for examples of small, fast project teams. Since I've always liked airplanes, I went looking for airplane projects other than Lockheed's Skunk Works. Two cases from the 1940s are good examples of fast design projects in which many of the "harmful" human factors of project management do not appear to be present.

The first is the P-51 Mustang, which many consider to be the finest airplane to have come out of WWII. It was designed and built by North American Aviation in 1940 in fewer than 120 days. It was also the very first fighter they'd ever designed, so they had no previous experience to draw from. There is information available on what the design environment was like (including transcripts of interviews from the Smithsonian's web site). It seems to have been a small group acting under a commercial deadline. (Basically the Royal Air Force asked them to license-build the Curtiss P-40 fighter. NAA didn't want to build someone else's plane, and the RAF said that if they could come up with a new one in the same amount of time, 120 days, that it would take them to retool to build P-40s, then they'd get the contract.) People in on the project refer to the work pace as "relaxed." Note that this was before the US entered the war and this was primarily a commercial project, not one taken on out of a sense of wartime pressure. Further, NAA was essentially being actively discouraged from the fighter business at the time by the US government.

The second example was a German jet fighter from 1944, the He-162. On September 6, 1944 the German Air Ministry issued a request for a contract proposal for a single-seat jet fighter. They offered very few requirements. They awarded the contract to Heinkel on September 24, 1944 after reviewing a nearly completed mockup (after only 18 days!). The first flight of the plane was December 6, 1944, a mere 69 days after granting the contract and only 88 days after the Request for Proposal.
This was a new technology developed under a time of intense pressure. The first prototype did suffer design problems, but the airplane went into production. Both design teams seem to have been very small and both built upon the skill of relatively few experts. Accounts suggest that both companies had a history of sometimes keeping design projects from their respective governments so that they could try out truly innovative ideas out of the eyes of any outside "help". In the words of the paper, they kept their "externalities" to a minimum.

---
Date: Sat, 7 Jul 2001 10:18:51 -0400
From: Frank Patrick

elan.g@hical.soft.net asked:
>Could any one suggest me web sites regarding project management and also
>some good books available.

To heck with false modesty . . . For Critical Chain-based Project Management, I've gotten a fair number of compliments on the project management section of the Focused Performance website, which you can find at . . .

For general project management, I particularly enjoy the iconoclastic (and non-profit) NewGrange site at . . .

For discussion on general project management topics, you might want to check out Gantthead at . . .

Regarding books, check out two by Tom DeMarco . . . SLACK and THE DEADLINE. SLACK is not really a project management book per se, but talks all the right stuff on resource management. THE DEADLINE is, like Eli G's best books, a "business novel" about a portfolio of software projects, with a bullet-list of "lessons learned" at the end of each fictional chapter. It blew me away how many of those lessons were in sync with CC and TOC concepts.

Of the Critical Chain books out there, once you get beyond Eli's basic intro (CRITICAL CHAIN), I find Larry Leach's CRITICAL CHAIN PROJECT MANAGEMENT to be the best out there today (it's more comprehensive and up-to-date on the thinking around multi-project management than the Newbold book). You do have to wade through quite a lot of introductory TOC stuff that is probably superfluous and might turn off people looking for focused info on CC, but there is good PM stuff there if you carry through with it.

---
Date: Tue, 23 Nov 1999 21:44:24 -0600
From: Kathy Austin
Subject: [cmsig] Re: CCPM feeding buffers

I am _very_ comfortable predicting that any organization that only implements the "scheduling" portion, or even the "scheduling and managing" portion, of CCPM will not see the expected results. To put Single Project CCPM in the context of the Five Focusing Steps:

1. Identify the constraint: identify the critical chain of the project (the longest chain of dependent events, with task, path and resource dependencies all considered).

2. Exploit the constraint: protect the due date of the project by placing a project buffer at the project completion end of the critical chain, with its size determined by the critical chain tasks' time estimates (as a starting point). The purpose of the Project Buffer is to protect the Critical Chain from variation in task completion times. Another area of exploitation is to place resource buffers before every Critical Chain task that is using a resource non-consecutively. The purpose of the resource buffer is to ensure that the resource is available to perform critical chain work when the work from the preceding task is available (to eliminate late starts on critical chain tasks due to resource non-availability).
The resource buffer is not a time buffer (like feeding buffers and the project buffer); rather, it is a countdown/alert/notification signal to the resource, to ensure it is going to be ready to start the critical chain task work when the work is ready.

3. Subordinate to the constraint: Feeding buffers are part of the subordination process (subordinating the non-critical chains to the critical chain (constraint) of the project). Their purpose is to protect the critical chain, where paths merge or feed into it, from variation in the non-critical chain tasks; feeding buffers also have to compensate for resource non-availability (late start due to resource unavailability).

4. Elevation, if required: If, at the end of step 3, the project due date does not meet a required due date, elevation is required to bring the due date in to the required date. There are many elevation options, including re-looking at the size of the various buffers, re-looking at the project network to determine if additional detail is required, and looking at the resource availability to determine if additional resources are available at tasks that would allow you to pull the due date in. Please note that if you do add additional resources you have actually moved to step 5 (broken the original constraint of the project -- its critical chain) and you need to start over (remove all buffers, identify the new critical chain and so on). Thank goodness software makes it very easy to do these what-ifs.

Is this some of the data you were looking for (is it information?)?

Best regards,
Kathy Austin

==========================================
Date: Wed, 24 Nov 1999 19:13:46 +0200
From: Eli Schragenheim
To: "CM SIG List"
Subject: [cmsig] Re: TOC and variability

The way dynamic buffering was implemented in the "Disaster" software (that was developed at about the time Eli Goldratt wrote 'The Haystack') is as follows:

First, the buffer durations (I prefer to use the term 'duration' instead of 'size' - these are time buffers) are estimated to include mainly Murphy, some processing time and fairly short wait time. These are supposed to be shorter buffers than in the case where the buffers also need to consider temporary peaks of load. The 'dynamic part' is targeted at the peaks on non-constraints, so when there is none, the buffers are significantly shorter.

The schedule starts with developing the finite capacity schedule for the capacity constraint (CCR). Then the program schedules all the non-constraint operations, starting from the due dates and going through the upstream operations - until it reaches the constraint; then it continues for the upstream operations of the constraint. When a buffer is crossed, the time goes backwards by the duration of the buffer. The capacity for each operation is calculated and accumulated for that day for that resource. Whenever the calculated capacity for a certain day for a specific non-constraint resource is more than the available capacity for the day, the time goes backwards by one day. Thus, the release of materials may go backwards by a number of days when one or more resources seem to be pretty loaded.

This may sound complicated, and one needs to consider the cases when it is not possible to go backwards in time (like requiring the release of the materials yesterday). But the basic logic is to establish a full schedule - also for the non-constraints. BUT - THIS SCHEDULE IS NOT PRODUCED - it is just a feasibility check, to identify peaks of load on non-constraints and then enlarge the buffers accordingly.
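A minimal sketch of that backwards, finite-loading pass. This is an illustration under stated assumptions, not the Disaster / The Goal System implementation: all data and resource names are hypothetical, and it covers only the non-constraint pass, not the CCR schedule that precedes it in the real algorithm.

```python
# Each order is walked backwards from its due date: crossing the (shipping)
# buffer jumps the clock back by the buffer duration, each non-constraint
# operation's load is accumulated per resource per day, and an overloaded day
# pushes the operation (and hence the material release) one day earlier --
# effectively enlarging the buffer exactly where a load peak sits.
from collections import defaultdict

DAILY_CAPACITY = {"mill": 8.0, "paint": 8.0}    # hours per day (hypothetical)
SHIPPING_BUFFER_DAYS = 3                         # base time buffer (hypothetical)

# Operations listed from the one closest to shipping back towards release.
orders = [
    {"id": "A", "due_day": 20, "ops": [("paint", 3.0), ("mill", 6.0)]},
    {"id": "B", "due_day": 20, "ops": [("paint", 4.0), ("mill", 7.0)]},
    {"id": "C", "due_day": 21, "ops": [("paint", 5.0), ("mill", 6.0)]},
]

load = defaultdict(float)                        # (resource, day) -> booked hours

def release_day(order):
    day = order["due_day"] - SHIPPING_BUFFER_DAYS        # crossing the buffer
    for resource, hours in order["ops"]:
        # A peak of load on a non-constraint: go backwards one day at a time.
        while load[(resource, day)] + hours > DAILY_CAPACITY[resource] and day > 0:
            day -= 1
        load[(resource, day)] += hours
        day -= 1                                         # next upstream operation
    return day + 1                                       # suggested material release

for order in orders:
    print(f"order {order['id']}: release material on day {release_day(order)}")
```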
The non-constraint schedule does NOT reach the user! This kind of scheduling can be done only by computerized programs. I know 'Disaster' (later 'The Goal System') did that. I assume Resonance does the same, but I'm not familiar with the details. Somewhere in the Goldratt Institute there is a version of the simulators (entitled 'ordsim') with an animation that explains the algorithm. I don't think they use it anymore, after AGI abandoned the Disaster software. Anyway, I don't see any way to actually implement 'dynamic buffering' without such an algorithm.

From another angle, in most written materials about finite capacity scheduling it is assumed that "backwards scheduling" has to be "infinite capacity loading" and only "forward scheduling" can be "finite capacity loading". The above algorithm shows that it is possible to do "finite loading" while going order by order from the due date backwards to the release. This is just an example of the power of inertia in software development.

I hope the above can be understood. If you struggle with the concept I can come up with a numerical example.

Eli Schragenheim

Dynamic buffering is the process described in Chapter 36 of The Haystack Syndrome.

==========================================
From: "Jim Bowles"
To: "CM SIG List"
Subject: [cmsig] Re: Task Estimation using PreRequisiteTree
Date: Tue, 20 Jul 1999 12:59:34 +0100

Govindkrishna

You may be suffering from having too much knowledge of the Jonah tools. Although the project plan can legitimately be referred to as a prerequisite tree, it is better to think of it as a network of logical and resource dependencies. It will look like a PRT on completion but the method of construction is somewhat different.

The expression "The Ends reflect the Means" is probably apt, in that you start with the Objective or Outcome that you want to achieve. It's similar to a backward and forward scheduling exercise. You ask the question, "What do I need before I can have that?" Once you get to a point that is close to your starting point you can then check the logic by going forward. Now you can use the PRT rules to check the logic and missing steps. In order to have this milestone.... I must achieve that milestone....

As an example of its use, I recently helped a project manager with a Millennium bug project. They were getting close to the migration day; this had been moved back several times and was in danger of doing so again. My contact had been assigned to sort out the problem of roll-out and had been given a month to do so. He had brought everyone together to produce a plan. The outcome of the meeting had left him frustrated, under a pile of papers with lots of detail. This is not what I wanted, but it's what I've got.

He started by showing me the list of tasks that had to be done to meet the deadline. I suggested that he put those to one side. Let's start by writing a statement of what you would like to have by the 30 June. Using Post-It notes and by asking questions, the logical tree began to form quickly. Two hours later we entered the network into MS Project, ran through the Critical Chain steps and we had the origins of a CCPM plan. His face was a picture - That's what I wanted - a proper plan. I love it.

During the next few days he discussed the outline plan with the people who would do the work. The number of boxes and the required detail was added until they had about 160 tasks. They completed the project 4 hours later than planned but still on the day required.
They did not disrupt production or the commercial departments. His last comments to me: "Everything's working fine, the sticking plaster is holding."

Hope this helps you see the wood for the trees.

regards
Jim Bowles

The PreRequisite Tree is used to prepare the CC Project Plan. This is what Robert Newbold's book says. In the software / R&D industry, forget emphasising the use of 50% estimates instead of 90% estimates. Estimating to within an order of magnitude is the first step. Of course, the PreRequisite Tree will force me to put down my worries, and having verbalised my obstacles I can brainstorm and come up with better task durations.

1. Can anybody share a real-world PRT used for this purpose?
2. Can anybody share experience on how using a PRT has helped in streamlining the estimation process?

From: "Kania, Eugene (Gene)"
To: "CM SIG List"
Subject: [cmsig] RE: CCPM
Date: Tue, 3 Aug 1999 14:43:41 -0500

Hi!

Having been on this list for some time now, I am surprised that no one has yet responded to your inquiry. After all, 5 days have passed. I can only think of 2 reasons why I haven't seen a response:
1. People are responding privately to you.
2. Your questions are of a nature that a typical e-mail response would not do justice to them.

For the time being, I fall into category 2. Nevertheless, let me take a stab at a few comments for you to digest:

What obstacles do you have implementing projects using CCPM?
The biggest one is properly engaging the leadership team of the organization. Usually middle managers want to implement in their area of control and may not have authority over important things like measurements. At Lucent, we try to find a decision maker who does control a logical product development system and can change the measurements of that system to fit the CCPM paradigm. There are other implementation obstacles but we have found most of those to be surmountable.

Any successes?
Yes, where we have properly engaged the leadership team of the organization.

What do you think CCPM has over conventional PM?
CCPM provides an operational measurement scheme (Buffer Management) which ties local decisions to the global performance of the system. It provides a tremendous focus and leveraging opportunity to maximize the effectiveness of the product development organization using it. Other differences between the approaches are too involved to mention here, but I'd like to refer you to two recent articles from TOC practitioners who get into the details: the June article in PM Journal by Larry Leach and the April article in PM Network by Frank Patrick. Can anyone provide Steve with more details on how to access these 2 articles?

What works and what doesn't?
Man, I could fill pages, but for brevity let me just highlight one point. CCPM doesn't work when it is considered simply as a new way of scheduling projects. CCPM does work when it is recognized as a paradigm change which requires new policies, measurements and behaviors to be successful.

Got any tips?
Engage the leadership team. Invest in plenty of training. If you want to get up the learning curve fast, invest in a good "guide" (some people call them consultants). How do you know they are good? I think that's the subject of a whole other e-mail.

+( ccpm game

From: Jean-Claude Miremont
Date: Wed, 20 Mar 2002 12:34:26 +0100
Subject: FW: [tocexperts] a game to teach critical chain

I agree with Tony. Like many of us, I have used this game and it works well. I have a magnetic board and blocks and a home-made spinner. I have to add a comment though.
The random durations (events) that help visualize the distribution curve are fine. I have checked the figures; the mean and the standard deviation are correct. However, the events on the spinner are discrete events. The probability of drawing a duration less than the mean is therefore much higher than 50%. Be prepared to handle a situation where, for one single run, the results are very optimistic, to the point that sometimes none of the 9 (for this particular network) scheduled tasks gets a duration higher than the mean, which is already an optimistic one! Note: the statistical approach is valid for a large number of runs.....

Also, if resources can work on more than one project, the bead experiment is a must.

Thanks, Tony, for this appreciated contribution to the CC community.

Jean-Claude Miremont

----- Original Message -----
From: "Tony Rizzo"
Sent: Wednesday, March 20, 2002 5:52 AM
Subject: Re: [tocexperts] a game to teach critical chain

> My old game with colored blocks and a spinner worked well.
> I'll see what I can scrounge up from the cobwebs.

+( CCPM in IT business

From: "Tony Rizzo"
Subject: [cmsig] Re: DBR application in an IS environment
Date: Wed, 9 May 2001 11:26:34 -0400

The problem in IS functions is that there are many customers, all with needs that they, the customers, perceive as urgent. Add to this the complete absence of any prioritization method, and you have the recipe for massive amounts of multitasking, also known as the extreme dilution of resource capacity across too many projects. The symptoms that you can expect to observe are the following:

1) All projects are late, consistently.
2) When one project appears on the radar screen of an executive, that project becomes the focus for the entire IS department. At that time, the one project makes great progress, at the expense of the other projects.
3) Due-dates are set not on the basis of any understanding of real capacity but on the basis of wishful thinking and politics.
4) Due-dates are always unrealistic.
5) Due-dates are set with the assumption that the projects have dedicated resources, which they never have.
6) The managers of the IS department never feel that they have the authority or clout with which to say, "Not now!"
7) Resources tend to work extensive overtime, yet they feel that they achieve only a fraction of what they need to achieve.

Need I go on? Or is this a sufficiently close explanation of your environment?

A solution is possible. But it requires the very active support of the decision-maker of the IS department AND the sincere cooperation of the decision-makers of the many customer organizations of the IS department. It's a tough political nut to crack. Another approach is to cause the customers to prioritize their own projects. There are techniques with which to achieve this. But, again, it requires the very active support of the decision-maker of the IS department.

By the way, if you just try to implement DBR, it'll crash and burn. You'll never be able to achieve the behavior change in the developers until you first make that behavior change safe for them. The key is in making it safe for the developers and managers of the IS department to behave in a way that benefits the overall business to a greater degree.
+( common cause and special cause variation From: "larry leach" Subject: [cmsig] Develpment Cost Importance Date: Mon, 11 Mar 2002 08:14:58 -0600 For the case in point, the theories we are comparing are a)the theory that schedule is most important compared to b) the extant mental model that says development cost is most important. Neither theory addresses the variables you mention. The empirical data presented leads to questioning b. The empirical data has no variables either; but it may be sortable. The variables you mention are probably confounded in the data; but if you check it out, I would be interested in hearing what you find. I do have a more complete mental model. It starts with the assertion that any project worth doing must have an ROI that exceeds the cost. Investment (negative cash flow) starts the day the project starts, and my return does not start (usually) until the project is complete. Thus, for fixed investment, I improve ROI/time by completing any project ASAP. My motto is, "Any project worth doing is worth doing fast." This result is robust despite all the variables you mention. In addition, for NPD, the market share of first-to-market can be 80%+, compared to less than 10% for late commers. Thus, if the ROI (for the development) is X for a late commer, it is 8-10X for first to market. So for me, the empirical data just confirms my mental model. Much more value (learning) is added by creative development. If I had the concerns you do, I would put together a comprehensive model, and put it to the test...which includes putting it out for criticism. ----- Original Message ----- From: "Roggenkamp, David (D.B.)" Sent: Tuesday, March 12, 2002 10:20 AM Subject: [tocexperts] Multitasking - common or special cause? > I can't help but throw in my 2 cents worth to this discussion: > > I had a similar discussion with a boss several years ago. At the time, I was arguing that our process was out of control because of high variability (common and special). He asked me if I could predict, at the start of a project, what the outcome would be. I said, "Yes, all projects run about x months late to our generic timing model." He said, "Then how can you say we have high variability? The outcome is entirely predictable". I said, "But the causes are different for each project." He said, "But the outcome is the same and consistent over time - therefore there is minimal variation to the overall project, we just need to acknowledge this time 'increase' as a part of our process, inherent in our method of execution even if not included in the written plans." In other words, at a portfolio level we could choose to acknowledge the effect of multitasking (and other sources of process variation in place at the time) and achieve fairly stable plans in spite of it. Inefficient, perhaps, but I came to agree with his viewpoint. > > Which (in part) leads me to suggest... > > How about multitasking being both common and special depending on the status of CC-MPM implementation? > > Scenario 1 - Company running 'business as usual' (with bad multitasking) > Multitasking is Common Cause. It IS inherent in the companies implementation of the process due to practices / policies that are in place at the time. Revision to the process (i.e. changing to CC-MPM) will greatly reduce the variation (part of a definition of 'common cause'). > > Scenario 2 - Company running CC-MPM > Multitasking is Special Cause. 'Bad' Multitasking has been largely (if not entirely) eliminated from the PD process of the firm. 
Occurrences of multitasking then can be treated as Rob suggests below. The project and/or resource buffers serve as a control signal when a disruption occurs and the event can be treated as a special cause variation. > > > -----Original Message----- > From: CZago/RRoy [mailto:eclozion@colba.net] > Sent: Tuesday, March 12, 2002 9:53 AM > Subject: RE: [tocexperts] Re: Critical Chain and PS8 > > How about saying that an event causing multitasking is a special cause when > not in control? The objective is to minimize its effect until a better > system allows its probability of occurrence to become remote to the point > where part of the buffers will absorb the inevitable but controlled > multitasking. > > Each event causing multitasking can be included in a risk evaluation matrix. > This matrix is based on Failure Mode and Effects Analysis (FMEA). > > In this matrix 3 variables could be evaluated using a ranking chart > > 1) Probability of occurrence (1: remote to 10: very high) > 2) Severity of the effect (1: none to 10: hazardous without warning) > 3) Detection capability (1: almost certain to 10: absolute uncertainty) > > 1 * 2 * 3 = risk priority number > > What is the risk priority number of multitasking in your business? > > High = special cause > Low = common cause > > In some companies multitasking detection capability is a 1, in others a 5, > probably none are at 10. However the goal will always be to eliminate > multitasking that reduces overall T. It's a special cause when a major > unpredictable event throws the business into a multitasking spiral. It's common > cause when it is generally under control and buffers are enough to absorb > them 90% of the time. > > -----Original Message----- > From: Tony Rizzo [mailto:tocguy@pdinstitute.com] > Sent: Tuesday, March 12, 2002 8:58 AM > Subject: Re: [tocexperts] Re: Critical Chain and PS8 > > > > Special causes of variation are from outside the normal process, and > > they come and go. They are unpredictable as to when they will come > > and go. > > So, a resource is diverted to work on this week's high priority project, > instead of being permitted to work on an earlier project. This sort of > event comes unpredictably, so far as anyone is concerned. How > is this different from special cause variation in the duration of projects? > Does it not show up on a control chart? If you tracked project duration > on a control chart, this would show up as large, irregularly recurring > variation. Yes, the effect tends to be mostly one-sided. It tends to > only add to project duration. But I don't recall the definition of > special cause variation containing any language about the sign of > that variation. > > The variation exists, it is not inherent to the process, it causes the > process of performing projects to be out of statistical control, and > it has an assignable cause, multitasking. What am I missing? +( customize MS Project for CCPM Hi Ravi, I don't know whether it's ok, but I customise MS-Project for TOC/ CCPM as follows...... Once we evaluate and determine the feeding buffers (their locations & sizes), I treat these buffers as individual/ separate tasks in MSP. I enter them. The nomenclature is like - "Buffer for HLD Rework UID -4976". This kind of nomenclature is helpful to understand what non-critical link the buffer relates to. (UID is Unique ID, to prevent the confusion of changed IDs if any task is added to or deleted from the schedule.) The colour for the complete task record and also for the bar of the task is changed. 
I colour it Green. Further I make use of custom flag field. Customise it as Buffer. All the Buffer Tasks (Feeding buffers as well as Project Buffer) are marked 'Yes' in this field. Why to do all this - this will be clear from the attached image. I have created custom filter named as 'Buffer Milestone'. Application of the filter tells me the milestones in the project and buffer tasks. And finally comes the actual use of buffers. I control it thro' reducing or increasing the duration of the buffer tasks depending on the project progress. Treating the buffers as 'Savings Bank Account' - depositing & withdrawing the 'time'! This goes for buffer management. As far as advance alarming to the resources is concerned, seperate tasks are entered under a seperate Summary Task - "Advance Notifications". Each subtasks under this is a milestone. It is linked with the respective project task with appropriate "lead" timing in predecessor.. (e.g. 17ss-3d, wherein 17 is the project task for a resource. 3 days is advance notification time required for that resource to leave everything else and start working on task 17). Hope this would be helpful Note: Scitor has recently announced it's product "PS-suite" which is supposed to based upon the CCPM. I myself have not yet explored this one. However, I am finding the above arrangement quite useful. Thanks Dhananjay +( flush I also am unable to find, or construct, any compelling examples where using flush clearly gives better insight to a financial decision (than existing payback, NPV, and IRR measures). Does anyone have, or know of, any examples that really show the power of flush? Regards, Richard ................ Dr. E. M Goldratt's concept of Flush suffers not from a lack of validity but from a name that associates the concept with a certain porcelain fixture. Mathematically, the concept is sound. Once the assumptions that underlie the concept's derivation are known, the soundness of the concept becomes evident, as does the questionable nature of the interest-models in use today. The reasoning behind the concept of Flush speaks to the finite quantities of two commodities available to all investors: These are time and money. Investments utilize both commodities, not just money. Therefore, one would like to evaluate competing investment by considering each investment's contribution to both commodities. However, while we certainly can and do accumulate money, the nature of our universe prevents us from accumulating time. Since we cannot evaluate the degree to which competing investments impact our two, limited commodities individually, Goldratt suggests that we evaluate the effects of competing investments on a composite function (E. M. Goldratt, personal communication, March, 1996.) The composite function, F, is a measure of the maximum, initial investment-potential available to an investor. This initial investment-potential is expressed as follows: F = (a*n)*(b*m) (=) 1*Time*1*Money where, a and b are weighting functions. n and m are the initial quantities of time and money available to the investor, respectively. The weighting functions, a and b, are dimensionless. They represent the degree to which an investor might value each of the two investable quantities, time and money. For example, a very young investor might value time to a lesser degree than the investor valued money. 
For such a young investor, the appropriate composite function might be expressed as follows: F = (0.1*n)*(1.0*m) An older investor nearing retirement might value time to a greater degree than the investor valued money. For the older investor, the appropriate composite function might be the following: F = (1.0*n)*(0.1*m) However, in the absence of due justification for tailoring the composite function to specific investors, which justification can come only from the specific investors, the general form of Goldratt's model correctly assigns equal values to the two weighing functions, rather than assuming arbitrarily that one of the two investable quantities can be considered less valuable than the other. To evaluate competing investments, one first must calculate the maximum, initial investment-potential available to an investor at the time of the decision. Second, one must calculate the maximum, final investment-potential that each of the competing investments is likely to create for the investor, at the end of the corresponding investment-horizon. THE DIFFERENCE, between the maximum, final investment-potential and the maximum, initial investment-potential, IS A COMPARATIVE MEASURE OF GOODNESS for the corresponding investment. It is a mistake, to interpret this calculation as anything other than a comparative measure of goodness for competing investments. The term, Flush, describes the rare event, where an investment returns one's investment-potential to its initial state. This ends the explanation of Dr. E. M Goldratt's contribution to the proper evaluation of competing investments, a.k.a. Flush. Regarding another subject, information-based planning for projects, which may be of more immediate interest to some, please follow the link below. http://www.eventbrite.com/event/477546354 Tony Rizzo CriticalChain@yahoogroups.com Nov 8th, 2009 --- Great discussion, I have been following this one with great interest. I think this thread is trying to answer a Strategic Question based upon the behavior of the Local implementation. I think we need to look at the larger picture to see if we can answer the questions at the holistic system level. TOC has been really good at the three main questions: 1 What to Change? 2. What to change to? 3. How to cause the change? These questions are used to create solutions to seemingly unsolvable problems in industry. Most times TOC solutions are applied independently of corporate strategy; i.e. Engineering using CCPM, Manufacturing using DBR, Supply Chain using TOC Distribution and Replenishment. Even in these solutions there are gaps, for the solutions are focused upon the negative behavior of the previous system. My guess is there is an unspoken assumption by the inventors of the solutions that if you improve the performance of the system to such a degree, worrying about some of the less defined aspects is something you can do once to overriding problem is overcome. For example, in DBR what do you do with the non-constraint work centers? How do you measure their performance? That is the negative branch that TDD and IDD were meant to overcome. Not doing too much, and not wasting T. Even with Subordination, you still have gaps to fill for the individual non-constraint aspects of the system, or you will loose the Road Runner Effect, as they will disengage. People want to feel their efforts are valued, how valued do you feel if someone tells you to read the newspaper until work shows up, then drop the paper and go like hell? 
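Since Rick leans on TDD and IDD throughout, here is a minimal sketch of how those two measurements are usually computed (these are the commonly quoted textbook definitions, assumed here rather than taken from the post, and the figures are invented):

    from datetime import date

    def throughput_dollar_days(orders, today):
        """TDD: for every late order, its throughput value times the days it is late.
        A growing TDD says the system is failing on commitments it has made."""
        return sum(o["value"] * (today - o["due"]).days
                   for o in orders if today > o["due"])

    def inventory_dollar_days(stock, today):
        """IDD: for everything held, its value times the days it has been sitting.
        A growing IDD says the system is doing work it does not need yet."""
        return sum(s["value"] * (today - s["arrived"]).days for s in stock)

    today = date(2009, 11, 8)
    orders = [{"value": 10_000, "due": date(2009, 11, 1)},    # 7 days late
              {"value": 5_000, "due": date(2009, 11, 20)}]    # not yet due, contributes 0
    stock = [{"value": 2_000, "arrived": date(2009, 10, 9)}]  # held for 30 days

    print(throughput_dollar_days(orders, today))  # 70000
    print(inventory_dollar_days(stock, today))    # 60000

The two measures deliberately pull in opposite directions: TDD punishes lateness on what was promised, IDD punishes piling up work that was not needed yet, which is one way to judge non-constraint areas without reverting to "keep everyone busy."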
TDD and IDD, like a number of aspects in the literature, are written in isolation to overcome the negative effects of the cost world system. The cost world has well-written rules and behaviors to support the old paradigm. That same resource knows they have to keep busy or they get trimmed or yelled at, for example. However, the TOC measurements do not have the same distinction; the resource is left with questions no one feels are important enough to address. Like Rob said, if you have good buffer management and buffer statistics you'll know the issues in the plant, and these measurements may be overkill. However, if you move up to a multi-plant system, with multiple distribution nodes, the measurements are quite helpful; I have been to this point several times, and TDD and IDD are good for these larger systems. They are difficult to get without some serious data manipulation, for many IT systems do not produce the measurements easily, but that is a different story. As for Flush, if the implementation is a local implementation, it might not hold the meaning intended, just like TDD and IDD on the plant level. Flush may also look like an afterthought, but that is the nature of negative branches. Now if we look at a larger system in a company that has: 1. Multi-plant DBR, or S-DBR manufacturing, 2. a supply chain with TOC distribution and replenishment in multiple locations, 3. sales and marketing using unrefusable offers, 4. A measurement and decision support system a.k.a. an Accounting method. Note: I do not refer to Throughput Accounting, for I do not believe that all the negative branches have been overcome in TA. I would say Constraints Accounting, by John and Pamela Caspari, is a much more fully developed accounting methodology. Now, how does an organization that has made the change to decision making the TOC way decide how to invest money? When I first read about Flush, this is the context in which I believe the method was meant to be used. Not an entirely local focus, but a global focus. Since there are not a great number of companies out there that have fully made that switch, I see it as something yet to be proven true or false. This falls under the category of the fourth question that TOC practitioners need to answer: 4. How to operate now that we changed? This is where and when the question of Flush can be answered. Just my thoughts, Rick Denison in RE: [CriticalChain] Dollar-Days and Flush are these measurements meant to be used in isolation? --- +( Follow Up on Project Status From: "Potter, Brian (James B.)" To: "CM SIG List" Subject: [cmsig] RE: Project Status Reporting! Date: Wed, 9 Feb 2000 08:39:00 -0500 Regarding project status, one normally has three questions ... 1- Will it finish on time? 2- Will it deliver its intended content? 3- Will it finish within budget? When a completed project will produce revenues, the first two considerations often completely dominate the third. Lifting other notions from Goldratt's "Critical Chain" and Newbold's "Project Management in the Fast Lane," addressing the first question reduces to ... A- Do the project's buffers satisfactorily protect the planned completion date? B- If not, where should the organization concentrate extra resources to protect the completion date? To address "A" ... a- Have each task team report the expected time that will elapse before they finish their task. b- Treat the remaining incomplete (and not yet begun) tasks as a project. c- Determine the buffers needed to protect the remainder of the project in "b." 
d- Compare the buffers estimated in "c" to the actual unconsumed buffers in the actual project (give primary attention to the project buffer). If the actual buffers protect the completion date at least as well as the buffers from "c," the project should finish on time. If not, address question "B" by noting which tasks (and their successors) cause the deepest penetrations into the project buffer. Focus the organization's "extra" resources and other expediting efforts on those tasks. If the original budget includes "money buffers" (funds allocated in the original project plan to cover unexpected project expenses), you can address funding issues like scheduling issues. Perhaps, an original plan could include "contingent content" which would increase project value but which may be eliminated without endangering the project. "Contingent content" could act as a buffer by removing it from a project with an exhausted project (time or funding) buffer. Such an approach WOULD complicate project planning, but it might be a good way to approach nice "extra" features (e.g., things which you might include in an enhanced version or a field upgrade kit). -----Original Message----- From: dhananjayg [mailto:dhananjayg@sohm.soft.net] I need help. We are trying out to implement a project status reporting procedure in our organisation. This shall mainly put forward the ratings for the project. Primafacie we are looking at 4 such ratings as A: OK -- OK B: OK -- Not OK C: Not OK -- OK D: Not OK -- Not OK It is read as A: The project status is OK at present and it shall be ok in future. Now the question is that how do we really say the project is OK or Not OK at present and or in future. Combination of following criteria are selected to start with 1. % Duration (lead time) complete (consumed). 2. % Efforts (work) spent (completed). 3. % Tasks complete. 4. Duration Variance 5. Effort (work) Variance. 6. Cost Variance. Apart from this Qualitative Comments of Project Manager shall also play an important role. --- Date: Wed, 23 Aug 2000 17:20:16 +0300 From: Eli Schragenheim Subject: [cmsig] Re: Buffer Management First things first. Dividing the buffer into three belongs mainly to the manufacturing enviornment. It is quite effective there, because in manufacturing the ratio between the net processing time and the actual production lead-time is very small. Hence, how much of the actual chain has been completed is not particularly important. All that truly matters is how much time is left. Project management is different than manufacturing because the net processing time is quite close to the lead-time. IT IS NOT EQUAL, by the way. Multi-tasking is quite a killer here. In project management you are right that a ratio between the percentage of buffer consumption and the percentage of the chain completed is a much better indication. We fully agree on this point. What I definitely disagree is that a ratio of 1, like when half of the buffer was consumed and half of the critical chain is completed, is a good indication. As a matter of fact, it tells you that the probability of finishing on time is WORSE than when you started the project. Of course, if 99% of the project has been completed and 99% of the buffer has been consumed, you might argue that even if you are late (high probability of being late) it is not going to be very serious because how much the last 1% of the project can take? Well, I know of occasions where is can take VERY long, but at that instance you cannot do much about it. 
But, at the midst of the project, you can still take some actions to better ensure on time completion. Why a linear 1:1 correlation between the project (the chain feeding the buffer) and the buffer is not good enough? Because of the same reason why we try to accumulate the safety and not distribute it. A buffer of 50 days protecting a chain of 100 days may be good enough. Suppose that "good enough" means 90% confidence the project will finish before or on time. With the same amount of uncertainty applied to 20 days project - a buffer of 10 days might NOT be good enough (significantly less than 90% confidence). You see, your assumption that each task will take the estimated time duration PLUS its part of the buffer, is contrary to the basic logic that initiated the placement of the buffers in CCPM. If this happens, either the time estimation (for 50% confidence) of the tasks were too optimistic, OR that Parkinson Law is still there and every task manager target to finish at "the planned time plus the appopriate share of the buffer". The nature of buffers, like insurance, is that in most cases we don't consume all of it. While we expect some of the buffer to be consumed (both because of tasks that took more time and because of the impact of dependencies like integration and resource contention), we should look on a fully consumed buffer as a warning. It is either one of those statistical fluctuations that because of it we've put a buffer OR as a sign that something went wrong with the implementation. So, what is a "good ratio" that indicates that the project goes on well? Certainly the judgment changes along the way. When only 10% of the project is completed, a ratio of 1, or even 2, is not good, but much too early to panic. After 30% of the project I'll be worried if the ratio is above 0.8. I recommend to collect the data on the final status of the buffers of completed projects. If more than 20% of the buffers were fully consumed, I'd check the feasibility of the planning and whether the new culture is fully accepted. Eli Schragenheim DandSKing@aol.com wrote: > > I need some clarification, please. > > Many articles about Critical Chain present the technique of Buffer Management > as first dividing the buffer into thirds. If buffer consumption is less than > 1/3 then the project or feeding branch is okay and the PM doesn't have to > worry. If buffer consumption is 1/3-2/3, the PM should watch and plan, if > > 2/3 the PM should act. So far, so good. > > However, if a project is proceeding along "normally," that is averaging the > consumption of a proportionate amount of buffer for each task, than when 1/2 > of the tasks are finished, 1/2 of the buffer will be consumed. This would > not seem to be a cause for Warning and Planning, yet, the buffer would be in > the yellow zone. Likewise, if the feeding branch or project were 3/4 done, > the buffer would also be 3/4 consumed and would be in the red zone. Again, > logically this should be OK. Instead, I'm told by the Buffer Management > scheme that I should act. --- Date: Wed, 23 Aug 2000 17:28:41 -0400 From: Frank Patrick Subject: [cmsig] Re: Buffer Management Another 2 cents on buffer management for projects, partially from an old post, updated to reflect my recent thinking on the topic... Let's look at a 10-task chain of 20 day ("90% confidence -- safe" estimates). Assume the "50% agressive" estimate are half the length -- 10 days. (I know, I know -- variability is variable, but stick with me for a bit.) 
The usual method for sizing the buffer (half of the safety removed) would result in a 100-day chain and a 50-day buffer. The "square-root-of-the-sum-of-the-squares" (SRSS) method would suggest that a workable approximation of a statistically correct buffer to provide 90% confidence of the chain-plus-buffer date is about 32 (31.6) days. Let's assume the project progresses as the schedule expected. As each task is complete, we could, if we wanted to, derive a new view of how much buffer is really needed to provide 90% confidence of completing the remaining tasks, using either approach. (This would really be the ideal for managing buffers, IMHO... How much buffer is left could then be compared to the amount needed to protect the promise from the remaining work and its expected variability. Actually, in advising my clients, I suggest that they treat any flag -- either a crossing into a "yellow zone," an undesirable trend in project buffer consumption vs critical chain completion, or whatever -- as the hint to ask the question "Does this mean anything?" It means something, and suggests some planning/action is prudent if there is insufficient buffer to protect against the remaining work.) The two methods would result in the following buffers (forgive the decimal points, they are only included to show the subtle changes in the early going in the SRSS method).

Remaining Buffer Sizes
Tasks/Chain   Half-of-safety   SRSS
===========   ==============   ====
  10/100            50         31.6
   9/90             45         30.0
   8/80             40         28.3
   7/70             35         26.4
   6/60             30         24.5
   5/50             25         22.4
   4/40             20         20.0
   3/30             15         17.3
   2/20             10         14.1
   1/10              5         10.0

Notice that for very short chains, in terms of number of tasks across which we can spread the protection, the indicated buffers are larger than the "half-of-safety removed", and vice versa for longer chains. If the numbers in the estimates really mean something (and we have to assume they do), the "statistically valid" calculation of a 90% confidence buffer results in a smaller buffer per task for longer chains and a larger buffer per task for shorter chains. The reason that buffers work as well as they do is that most projects string together chains of tasks that exceed 8 or 9, and therefore result in "half-of-safety" buffers that far exceed 90% confidence for the chain. This is good. This gives us our comfort for letting the "green zone" deplete. This makes sense, if you look at the extreme case of 1 task. The buffer required to provide a 90% confidence level, plus the 50% "aggressive estimate", equals the 90% "safe estimate." For very short chains (in number of tasks), there is less chance to spread and absorb variability that could jeopardize the promise. The half-the-safety approach for those short chains yields buffers that are too small and will therefore trigger more "yellow-zone" concern and planning. You don't need more buffer as the chain is completed, but you do probably need more buffer per amount of chain remaining. The absolute size decreases but the needed proportion increases, as the SRSS numbers suggest...

31.6/100 = .316
30.0/90  = .33
28.3/80  = .35
26.4/70  = .38
24.5/60  = .40
22.4/50  = .45
20.0/40  = .50
17.3/30  = .58
14.1/20  = .71
10.0/10  = 1.00

I like a combination of the two approaches, where the SRSS calculation is used to define the buffer associated with the yellow zone for the remaining tasks. It's interesting that for longer chains (greater than 8 tasks, which is common), the "green zone" of a "half-of-safety-removed" buffer is less than or equal to the difference between the two calculations. 
For longer chains, the extra time provided by the "half-of-safety" approach is true green zone. (Note the 45-day buffer vs the 30-day buffer for 9 tasks -- looks like a 1/3 of buffer for green to me). The proportion of chain to buffer that we concern ourselves with really can't be linear. Thinking that way is a result of thinking the "half-of-safety" approach yields anything resembling statistically valid numbers. It doesn't and it wasn't designed to. It was designed to provide a "good enough" approximation that could be supported by prudent buffer management. The issue is that prudent buffer management needs some help to assess whether we're really in a "yellow zone" or not. So trying to fine tune a ratio of buffer to chain doesn't help either, whether it's 1:1, .8:1 or 2:1 or whatever. I like to advise that the SRSS calculation, performed periodically on the tasks remaining to be done, provides guidance to the first question that we ask when a flag is raised -- "Is this worth worrying about?" If the remaining buffer is less than the SRSS calculation, we have less than a 90% chance of keeping our promises (again, I admit, assuming our estimates are anywhere near valid -- big assumption). That situation, to me, bears watching. If it stays below that for several reporting periods, or trends further in the wrong direction, or whatever, then recovery planning is probably warranted. This approach can also deal with varying variability, as the required buffer for remaining work will take into account more or less variable tasks appropriately. If this could be done automatically, it would serve as a very nice flag. This kind of calculation is something that software should be able to do really easily (If programmed to do so... -- Are you listening, Scitor, Speed to Market, and ProChain?), and therefore provide a quite valid flag for attention when buffers start to be consumed. +( implementation steps From: "Lattner, Greg" Date: Thu, 29 Mar 2001 09:48:07 -0700 Here are some steps that I identified that could be taken with CCPM manually until software can be implemented. Am I missing some? Is this unrealistic? Can this be done with Microsoft Project? Can this be done on a white board? Step 1. identify the critical chain constraint resource. Step 2. exploit the time of these people more by a. eliminating what Constraint Management calls "bad multitasking", b. staggering the projects so each project is focused on intensly and close to completion, before becoming distracted and starting with the next project. c. buffering the uncertainty in the time estimates with a big buffer at the end of the whole project rather than trying to waste brain cells thinking about and building in complicated little buffers on every single task time estimate. It looks lovely in the end but wastes brain cells. Having one big buffer at the end creates urgency on the total project completion for the best case scenario, rather than thinking about each task completion, which is another terrible distraction! So many distractions are dysfunctional. d. monitoring the buffer for the project and how much of the buffer is consumed e. always figuring out how to prevent consumption of the project buffer to finish very quickly with the total project, and then move on to do the same on the next project, rather than tolerating distractions. Constraint Management Project Management hates distractions with a passion! The above tasks could be done manually without software. 
More exploiting, focusing techniques are best done with Constraint Management Project Management software Step 3. subordinate existing internal resources to helping on the constraint to free up existing technical expertise capacity- which should be treated as gold time. This step should not be asked however until step 2, the most powerful focusing step is answered sufficiently. Don't skip step 2 to start on step 3! Step 4. Then, only after the above focused questions are anwered sufficiently, add additonal capacity, which would elevate the current capacity constraint to a new level, in a way that does not distract!, or skip!, steps 2 and 3. Step 5. go back to step 1 know what the new constraint is now after this constraint is elevated. +( Late Work First From: "Tony Rizzo" Date: Thu, 17 May 2001 23:04:25 -0400 Late Work First During my days with Bell Labs, first at AT&T and then as part of Lucent, I had occasion to chat with people who had really designed the nation's telephone system. One such individual shared a little secret with me, during one of these conversations. He started with the phrase, "Fresh work first!" When I asked him what it meant, he explained that the normal run rule for a telephone switch is first-in-first-out. 99.9% of the time, the first call to get to a switch gets placed first. However, from time to time shift happens, like the shift of a tectonic plate under California. Such shifts tend to generate huge amounts of telephone traffic, which overwhelms the telephone switches. When a telephone switch is overwhelmed by too much demand, in other words, when the capacity of the switch is exceeded, the switch changes its run rule. Here's the reasoning. If the switch continued to use the first-in-first-out rule during periods of excessive demand, then all calls would be delayed, perhaps for hours. It would take the switch many hours to clean up the huge backlog of calls. As a consequence, all calls would be very late for a very long time. Instead of sticking to the first-in-first-out run rule, during periods of peak capacity, a telephone switch changes its run rule to "fresh work first." This means that the switch deliberately ignores old calls and instead responds to the newest call first. Thus, new calls get the attention of the telephone switch and are put through, while old calls are "left to rot." By changing the run rule, the switch at least ensures that some customers receive timely service, rather than ensuring that all customers receive very late service during emergencies. What does this have to do with multi-project management? Everything. The current run rule for multi-project organizations is, you guessed it, "fresh work first." As a result, new projects get the attention of resources, and old projects are left to rot. Now, consider the effect of changing the run rule from "fresh work first" to "late work first." With the "late work first" rule, those projects that are delayed the most get the most attention from resources. In fact, this is precisely what we do with buffer management. We use buffer data to identify which projects are in greatest jeopardy, and we focus resources on these. Interestingly, when an organization gets its mammaries in the proverbial ringer, this is precisely the change in run rule that takes place. Once a customer begins to make serious threats, the managers of the organization move the resources to that customer's project, and that project speeds along. "Late work first!" 
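A minimal sketch of what the "late work first" run rule might look like as a dispatch decision, using project-buffer penetration as the lateness signal that buffer management supplies (the data layout and field names are my own illustration, not anyone's tool):

    def buffer_penetration(project):
        """Fraction of the project buffer already consumed."""
        return project["buffer_consumed"] / project["buffer_total"]

    def next_assignment(projects, resource):
        """Among projects with an open task for this resource, pick the one whose
        buffer is most endangered -- the opposite of 'fresh work first'."""
        candidates = [p for p in projects if resource in p["open_tasks"]]
        return max(candidates, key=buffer_penetration) if candidates else None

    projects = [
        {"name": "fresh project", "buffer_total": 30, "buffer_consumed": 2,
         "open_tasks": {"test lab"}},
        {"name": "neglected project", "buffer_total": 20, "buffer_consumed": 15,
         "open_tasks": {"test lab", "design"}},
    ]
    print(next_assignment(projects, "test lab")["name"])   # neglected project

The fresh project can wait; the project that has already eaten three quarters of its buffer gets the resource.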
So, if you really want to achieve speed in your multi-project organization, all you really have to do is to change the run rule, from "fresh work first" to "late work first." It's that simple. Of course, buffer management is a very handy way to do this, isn't it? :-) +( maximum sizes for projects and ressources From: "Livio G. Perla" Date: Fri, 1 Dec 2000 19:06:34 -0500 The background is a multi-project environment where the projects were complex design to build systems. When the basics of CCPM were applied, they were found lacking for a number of reasons. The result was the need to develop additional metrics, processes, and applications to make CCPM work. For example, Goldratt once said that a project's tasks should never exceed 400 in any given activity network. That's okay for a high-level approximation, but for a multi-project CCPM system to deliver the advertised benefits, it must be able to drive resource application. When you have 30-50+ types of resource, all with lots of individual activities, the limit of 400 tasks goes out the window quickly. The only way for the 400-task network to work is for each/most of those 400 to actually be a rolled-up network itself. Goldratt said (in Tampa last March) "Of course, but TOC does not need to worry about the details." That smacks of the old cartoon where two scientists in front of a blackboard full of complex calculations complete the equations with: " ... and then a miracle occurs." The problem with such rolled-up networks is that someone must eventually define the details so that resource application can occur properly. Many expansions of basic CCPM were developed and applied. Some were considered by senior management to be breakthrough and couple were even described as elegant. (Contrary to the the claims of the "leaders" of the TOC world, few components of TOC are even remotely elegant.) It was interesting to note the response of the $1000/hour consultants. They couldn't be bothered. Some didn't understand the new items, while all didn't want to bother because they were not "simple and elegant" and would therefore be difficult to sell to harried exectives -- especially by $1000/hour consultants who could not fully understand the depth of the material. Translation: "There's no money in it." You might also look to my posts here a few months ago about the need for rigor. How much protest they generated that I was making things too complex!! I can't count the number of "TOC experts" who pedanticly lectured me about clouds and thinking processes and "simple and elegant." They were oh so quick to recommend books to read (often theirs) while not really bothering to objectively look at what I presented. God help us if that's the state of objective "analysis" in the TOC community! These same people who were so quick to lecture about the "right and simple and elegant" method never actually undertook an implementation of the complexity and magnitude I am speaking of. As you might surmise, I have a significant amount of disdain for those "experts" and "leaders" who have never actually produced in the type of "rubber-meets-the-road" environment I refer to. If they were as interested in advancing the boundaries of TOC/CCPM as they were in lining their pockets with the status quo, we might actually make some progress. 
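Looking back at Frank Patrick's post in the Buffer Management record above, his periodic SRSS check is also easy to mechanize: recompute the buffer that the remaining tasks would need and compare it with the buffer actually left. A minimal sketch under his 50%/90% estimate convention (the code and numbers are mine, not his):

    from math import sqrt

    def srss_buffer(remaining_tasks):
        """Square root of the sum of the squares of each remaining task's removed
        safety (safe estimate minus aggressive estimate)."""
        return sqrt(sum((safe - aggressive) ** 2 for aggressive, safe in remaining_tasks))

    def buffer_flag(remaining_tasks, buffer_left):
        """Is the buffer we still have at least the buffer we still need?"""
        needed = srss_buffer(remaining_tasks)
        return ("worth watching" if buffer_left < needed else "ok"), round(needed, 1)

    tasks = [(10, 20)] * 10                 # ten tasks, aggressive 10 days / safe 20 days
    print(round(srss_buffer(tasks), 1))     # 31.6 days needed at the start
    print(buffer_flag(tasks[6:], 18.0))     # 4 tasks left need 20 days -> ('worth watching', 20.0)

If the remaining buffer stays below the recomputed requirement for several reporting periods, that is the trend Frank suggests treating as a call for recovery planning.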
+( milestones in projects From: Julie Robitaille [mailto:Julierobitaille@sympatico.ca] Sent: Thursday, May 18, 2000 07:45 To: CM SIG List Subject: [cmsig] Using a milestone or a fixed date in a plan I am using CCPM for the first time to plan a project. I am putting my plan as per the " rules " such as: using the " real duration ", start as late as possible, solving resource contention, asking resource to work in a " roadrunner " style. But I have tasks that have to be done on a certain date mainly because it is a design review that involve many people and I have to fix a date. I understand that ideally I should let " float " that task but practically I cannot tell the people 1 or 2 days in advance when the meeting will be held. I need to fix it now and I know that the resource that will have to prepare for that review will have no incentive to work on it " ASAP " since if he finishes early, the next activity cannot start because it is a milestone. Do you have any suggestions? Or it is just that I have to live with a real life situation that theory doesn't always cover and since it is a small portion of the plan that is fixed, the rest of it will be able to work as per the rules of CCPM. --- From: "Potter, Brian (James B.)" To: "CM SIG List" Subject: [cmsig] Using a milestone or a fixed date in a plan Date: Thu, 18 May 2000 08:56:10 -0400 I see several major choices ... - Treat each "mile stone" as an invariant "stake in the ground" and build a distinct CC/PM plan for the work between "mile stones." Under this scheme, you will protect each "mile stone" with its own "mile stone buffer." Advantage: the VIPs participating in the review can have their fixed schedule. Disadvantage: by dividing your project buffer into a number of smaller pieces to protect the "mile stones" from variation, you will require more total buffer time and a larger lead time for your total project. If you can sell this disadvantage to your VIP reviewers, you may have the opportunity to deploy one of the following more flexible approaches ... - Ignore the exact "mile stone" timing in your plan, and treat the "mile stone" reviewers like any other resource. Notify reviewers when you expect the "mile stone" event to occur with resource buffer rules (perhaps, beginning months before the event). When the project is ready for the "mile stone" event, the VIPs will not be ready (maybe, they are on different continents) and everyone will be shocked because you are early or on time. Let them know that the whole project is waiting for their review. Perhaps, they will authorize post "mile stone" work pending the review. The surprise created by reaching a "mile stone" without one or more postponements may create interest in the project and encourage a prompt review. Advantage: potential for minimum adverse impact on the project created by "stake in the ground" extra-project reviews. Disadvantage: potential for confusing and alienating VIPs. - Set the external "mile stone" timing as though it were protected with a "mile stone buffer" (thus, you expect to be ready on time or early), but ignore the "mile stones" for all project planning purposes. When you reach (or may reach) a "mile stone" ahead of original external timing, send out early completion announcements according to "resource buffer" rules. The VIPS may be so shocked by a project running ahead of schedule that some will pull their reviews forward. If that happens, you may secure approval for post "mile stone" effort before the complete "mile stone event." 
Advantage: potential for minimum adverse impact on the project created by "stake in the ground" extra-project reviews. Disadvantage: potential for substantial delay while the total project waits for external VIP reviewers at a milestone reached ahead of schedule. - Set external "mile stone" timing on a contingency basis. Perhaps, you schedule one tentative "mile stone event" each week over the interval between the "expected time" to reach the "mile stone" and the time that the "mile stone" would happen if it were fully protected by a "mile stone buffer." As you approach the "mile stone" event, start sending resource buffer signals indicating the most likely ACTUAL timing of the event. Advantage: minimal impact from extra-project reviews. Disadvantage: difficulty gaining buy-in for multiple tentative timings for the reviews in VIPs' calendars. +( Monte Carlo Simulation From: Prasad Velaga Subject: [cmsig] Monte Carlo Simulations of Multi-Project Management To: "Constraints Management SIG" Hi, I never saw any discussion among TOC people on Monte Carlo simulations of multi-project management. If we use triangular or any other valid statistical distributions for task durations and carry out thousands of simulations of resource-constrained multi-project management, we can obtain 3-sigma limits for the start time of each task (for any specific priority rules). We can take actions on the system based on 3-sigma limits on the start times of tasks that require critical resources, as we do in control chart implementation; that is, 3-sigma limits on task start times (and durations) can be used to monitor and control projects. The lower limit can be considered as the target for buffer management. If the statistical distributions used in simulation are appropriate, then this approach can give better control of project progress. I know many people may have reservations about Monte Carlo simulation due to the amount of time required for simulations. But I am optimistic about it, since the computing power available even on PCs is increasing rapidly. For example, our job shop scheduling tool schedules 1500 operations (of 200 jobs with different routings) on about 100 resources with individual weekly resource calendars in 0.05 seconds on a 2.8 GHz PC. At this speed, the tool can carry out 10,000 simulations of this size within 9 minutes. If simulation tools with high speed are available at a low price, isn't Monte Carlo simulation practically effective for buffer management in some production and project management cases? +( multitasking From: "Potter, Brian (AAI)" To: "CM SIG List" Subject: [cmsig] RE: Let's talk about it. Date: Thu, 29 Jul 1999 13:49:41 -0400 Clarke Ching [SMTP:CCHING@esat.ie] wrote ...
> They give the example of a study done by Michael Lawrence and Ross
> Jeffery of the University of New South Wales, based on 103 projects, which
> gave the following results:
>
>                                              Average number of
> Effort estimate prepared by   Productivity*      projects
> ---------------------------   -------------   -----------------
> programmer alone                   8.0             19
> supervisor alone                   6.6             23
> Programmer & supervisor            7.8             16
> Systems analyst                    9.5             21
> No estimate                       12.0             24
>
> *Productivity was measured based on an estimate of the size of the project
> using a technique similar to Boehm's COCOMO. (I have no more detail)
>
> ...
>
> The last line, where no estimate was given, is most interesting ... and
> perhaps a validation of the road-runner technique. Is there any way we can
> prepare a critical chain plan without getting estimates? 
> > Cheers, > Clarke Ching > a kiwi in Dublin, Ireland > > Clarke, Perhaps, one could ... 1) Have analysts do the estimating 2) Build the project plan from analysts' estimates 3) In the publicly published project plan, show ... a) start times for gating tasks b) precedence relationships 4) Have the scheduling software maintain (and share) timing and buffer penetration information on a "need to know basis." NBR: This approach could leave folks feeling manipulated, controlled, and "kept in the dark." +( padding of task duration Date: Thu, 23 Mar 2000 10:31:53 -0500 From: Tony Rizzo To: "CM SIG List" Subject: [cmsig] Re: Paradigm shifts > I would be interested to hear from people who have implemented Critical > Chain Project Management (CCPM), and especially how did you address the > paradigm shifts involved, when people cannot protect themselves with safety > time etc. etc. > What were the reactions to the new change and how did you go about > introducing it and create buy-in? > I have a reasonable amount of theoretical knowledge in the area but I'm very > interested in hearing from people who are actually working with the > implementation. Developers are smart people. They'll continue to cover their assets with padding, until they SEE that management is behaving reasonably. This is as it should be. The real behavioral change happens AFTER management makes the measurement changes, i.e., after developers see that a few people actually overran their initial estimates and nothing happened to those people. I encourage developers to record (for their own use only) their initial estimates of task duration and the actual task duration. With this data, individual developers enable themselves to correct their estimates, so that their estimates more accurately represent what the developers can really do. Then, the burden is upon the managers, to change the environment, policies, and measurements, so as to enable the developers' actual task duration intervals to become shorter, by shielding the developers from unnecessary interruptions and by prioritizing the work of the developers more effectively, through buffer management. With this model, the organization is able to improve continually, from the very beginning. More importantly, the improvement and the change process become safe for all concerned. Further, the whole process does not take very long. The time constant for the change is of the order of a few tasks, not of a few projects. Recall, developers are asked to update their estimates of the remaining work throughout the course of a project. So, even the first TOC project is likely to see significant benefit. +( performance measurement Subject: [cmsig] Re: What should be the local measurement for resources in CCPM To: "CM SIG List" How about the "flush" criterion? With this metric as expressed below, larger positive numbers are better. At task end, compute: Project_Value * ( days_early_at_task_end - days_early_at_task_start ) For tasks in progress, compute: Project_Value * ( days_early_now - days_early_at_task_start ) Sum the products over all tasks assigned to the resource during the interesting time period such that activity at the resource caused a change in buffer penetration (at the project buffer for sure, probably at the constraint resource buffer for a multi-project system, and possibly at the feeding buffers). :-) Brian Potter In a message dated 11/14/99 11:54:16 AM, shakey2000@usa.net writes: CCPM suggest we should measure buffer penetration to know how a project is progressing. 
But this is at a project level. If I as a resource want to know how I am performing, how should I know? I am already given a 50% confidence estimate. If I am measured on my on-time performance I will inflate the time estimate. If I am not measured on my on-time performance I may keep on polishing my work. ("Tell me how you measure me and I will tell you how I behave") Date: Wed, 05 Jan 2000 09:49:24 -0500 From: Tony Rizzo Subject: [cmsig] Re: Re: Critical Chain implementation CE900320 wrote: > > Tony Rizzo > I agree that these behaviours are UDE's of the traditional systems, and that > a TOC orientated project manager probably stands little chance of > implementing CCPM in a traditional environment, because it must be > implemented top-down. However, that still leaves my initial questions > unanswered: > What measurement systems/rewards/penalties should be introduced in the > organisation to prevent inertia and induce trust in the new system? > Morten 1) Stop measuring individual performance! 2) Stop measuring cycle time. 3) Stop measuring work hours spent on non-project work. 4) Install an effective system for selecting projects, to prevent the organization from shooting blanks. 5) Install an effective system for prioritizing projects. 6) Install an effective process for planning projects. 7) Instrument the project plans with buffers. 8) Stagger the project plans, with respect to the time line, to prevent the over-commitment of resources across projects as well as within projects. 9) Measure and report the buffers, particularly the project buffers, at least weekly. 10) Manage so as to preserve all the project buffers as much as possible. These steps will cause a phenomenal amount of improvement in cycle time and in on-time performance. You still face the problem of making the change a permanent one. To prevent any back-sliding, make a bonus available to everybody in the organization. Tie the amount of the bonus to the magnitude of the improvement in the organization's financial performance, with the pre-TOC year as the reference. You need not worry about the "cost" of the bonus. By the time that the bonus is required, your business will be more than sufficiently profitable to cover the bonus and a great deal more. You won't pay out a bonus until the improvement really happens. However, by putting such a bonus in place, you create an interesting condition _after_ the change to TOC has taken place. You associate personal financial pain with back-sliding. Further, you associate personal financial gain with additional performance improvements. Once such a bonus is in place, the managers of the organization will be well motivated to sustain the organization's performance and even to improve it further. +( PERT model From: Caspari@aol.com Date: Tue, 8 Aug 2006 14:33:58 EDT Subject: Re: [Yahoo cmsig] Buffer size In a message dated 8/8/2006 10:33:44 AM Eastern Daylight Time, bpotter@lcscorp.com writes: This fallacy arises from the same deterministic thinking that makes the ... PERT model look better than it really is. Hi Brian - As I recall, the PERT model had provision for probabilistic estimates from the earliest days. It used the formula, [(o+4m+p)/6], where o = optimistic, m = most likely, and p = pessimistic, estimates of the time that would be required for each task. 
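For readers who want the arithmetic spelled out, this is the textbook form of the PERT three-point estimate (the standard-deviation companion, (p - o)/6, is the usual approximation and is my addition, not part of the message above):

    def pert_estimate(o, m, p):
        """Expected duration and standard deviation from optimistic (o),
        most likely (m) and pessimistic (p) estimates."""
        expected = (o + 4 * m + p) / 6
        std_dev = (p - o) / 6
        return expected, std_dev

    print(pert_estimate(4, 10, 28))   # (12.0, 4.0) -- a task skewed toward overrun

With o = 4, m = 10 and p = 28 days, the expected duration works out to 12 days, noticeably above the most likely value, because the long tail of the distribution sits on the pessimistic side.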
+( program management cloud From: "Tony Rizzo" Subject: [cmsig] Re: Dollar Days Metric Date: Fri, 31 Aug 2001 13:49:03 -0400 If (10) for nearly all PD organizations today, cross-project schedules are determined by powerful external stakeholders, and if (20) virtually all external stakeholders are fully convinced that their interests are served best when their projects are begun immediately, then (30) nearly every PD project defined by a powerful external stakeholder is begun immediately. (40) Every PD organization has finite capacity to perform projects. (45) Nearly all PD organizations today already have enough project work to keep their constraint resources busy. If (30) nearly every PD project defined by a powerful external stakeholder is begun immediately, and if (40) every PD organization has finite capacity to perform projects, and if (45) nearly all PD organizations today already have enough project work to keep their constraint resources busy, then (50) the finite capacity of nearly all PD organizations is overwhelmed by projects that are begun immediately. (60) Once the project of a powerful stakeholder, external or internal, is begun, that stakeholder expects the resource managers of the PD organization to show progress on his/her project. (70) Most of the time, most resource managers fear adverse consequences should they not meet the expectations of powerful stakeholders. If (50) the finite capacity of nearly all PD organizations is overwhelmed by projects that are begun immediately, and if (60) once the project of a powerful stakeholder, external or internal, is begun, that stakeholder expects the resource managers of the PD organization to show progress on his/her project, and if (70) most of the time most resource managers fear adverse consequences should they not meet the expectations of powerful stakeholders, then (80) the resource managers of nearly all PD organizations...show progress on all their projects. This is what causes multitasking. Is not the impact of multitasking, on the variation in task duration and in project duration, an assignable cause? I call on the Six-Sigma folks here to help me out. Answer this, please. Does it make any sense to even speak about common cause variation, if the variation due to assignable causes is so large as to completely overwhelm the common cause variation? Also, so long as a system continues to exhibit significant variation that is attributable largely to assignable causes, can we consider the system to be in a state of statistical control? What should we do first, with such a system? +( project planning from LLeach 2002-03-30 : Michael N. Carroll > For obvious reasons the schedule is slipping. Any suggestions on how to bring things back on track quickly? Yes. I can give you the answer that ALWAYS works when used. Unfortunately, prior experience with managers in your situation predicts few managers take this advice. (Please surprise me.) STOP the project. Create an effective plan to finish the project (this is a Project Plan, not just a schedule). Then restart the project to the new plan, only after you have the plan endorsed by all project stakeholders. The Project Plan must specify the business result, the scope of the project (using a deliverables oriented Work Breakdown Structure), and a responsibility assignment to the WBS. It must include a change control process. It must be agreed to by the project customer and all resource providers. 
It should identify an effective project communication plan, and must include a risk management plan. It should identify your QA process, including the acceptance criteria for all deliverables... etc. See PMBOK. The new schedule may not need to be more detailed than the current schedule (can't recommend without seeing what you have). But it must: 1. Have resources assigned (i.e. be resource loaded), 2. Have the resources leveled (which is an automatic part of Critical Chain.), 3. Have adequate contingency reserve (Project Buffer if you use Critical Chain), and 4. Have a clearly identified critical path or chain. Finally, you need to track performance to the new schedule weekly and take action when you exceed action thresholds (e.g. Buffer Management). +( relay race From: "Tony Rizzo" Date: Fri, 19 Aug 2005 13:43:20 -0400 Subject: [tocleaders] One team, many races, and no purse for the also rans. I'm appreciating the relay race analogy more and more. Consider that single team of relay racers tasked to run not one race but many races throughout the track and field day. Imagine the confusion that all team members would experience and the pitiful performance that the team would achieve, if no one told the team members which race was about to start or which race was underway at the time. Their confusion would increase exponentially as the scheduled races became increasingly overlapped. Now, imagine not the traditional relay races, which are run on the usual oval track and are short and fast, but multiple relay races that extend for miles and wind their way through a variety of terrains, such as forested areas, city streets, hills, etc. How many races would the team win, if the team didn't have a plan for each of its scheduled races? Project planning is the process that identifies for us the necessary interactions among team members. It also identifies for us the REQUIRED SEQUENCE of interactions. Without a useful, rapid planning process that yields complete plans, well, those races simply can't be won. :-) +( reporting From: "Tony Rizzo" Date: Mon, 25 Feb 2002 08:51:45 -0500 Subject: Re: [tocexperts] TOC results, anybody? Actually, no, the traditional reporting approach needs to be modified, for a very practical reason. The traditional way is to report what's been completed. But this is a look backwards. A project plan needs to be first and foremost a predictive model of the work that remains, at all times. Tracking what's been done does nothing to improve the predictive aspects of the plan. Instead, we need to have people tell us how much effort remains in the rest of their active tasks and, if the plan becomes less than completely useful, we need them to update the remainder of the plan. Here is something that a client found useful: The Difference between the Plan and Reality There is a common misconception held by managers and resources alike in many product development organizations, about the relationship between the project plan and what is generally referred to as "reality." This misconception is the source of much discomfort and counterproductive behavior on the part of many people in product development organizations. The nature of the misconception is that if the project plan doesn't match "reality" exactly, then the plan is of little or no value, and therefore maintaining and updating it is a waste of time. 
In such organizations, the goal of maintaining a project plan tends to revolve around accurately representing what has happened to date, accompanied by analysis to quantify the discrepancy between what was planned and what actually happened. Very little if any attention is given to proactively anticipating the future and preparing for it. Someone once said, "No model is perfect, but some models are more useful than others." I prefer to make rather forcefully the distinction between the model and the real system of resources for which the plan is a model. The plan is only a representation of the real system. It is an image, a depiction, a mere description that illustrates but a few interesting aspects of the real system. Real people achieve the actual performance, by doing the real work in a focused, event-driven manner. The plan, in turn, is nothing more than a model of that real system, which we use to arrive at unbiased estimates of the dates of a few key events, like the completion of the project, the release of the drum resource, the engagement of strategic resources (like a wind tunnel), etc. Of course, the plan also reflects the outcome of an effective planning process. The output of such a process is a nearly optimum logistical network of exchanges among the resources. As such, the project plan is both a model of the project and a guide for the resources. With it, the resources have a better chance of doing the right thing at the right time. The probability of doing the right thing at the right time decreases as the project progresses, and the plan ages. This aging of the plan is caused by the inevitable statistical shifts. The aging is cumulative, and its effects on the information content of the plan increase with time. Ultimately, the plan could become obsolete, meaning that its content of useful information could approach zero, unless we refresh the plan continually with infusions of fresh, clean information about the remaining work of the project. The primary measure of the value of incorporating current status therefore is the degree to which it reinvigorates the plan for the remaining work. All other measures of value are secondary to this measure. Given the primary measure of value, it is clear that the right frequency for updating the plan can be determined by figuring out how long it takes for the plan to lose its predictive capabilities in a given environment. Note, too, how important it is to use a deliverables oriented planning process for the initial plan. Such a process enriches the information content of the original plan, giving the original plan a much longer shelf life than it would have otherwise. -----Original Message----- From: Tony Rizzo [mailto:tocguy@pdinstitute.com] Sent: Sunday, February 24, 2002 4:06 PM To: tocexperts@yahoogroups.com Subject: Re: [tocexperts] TOC results, anybody? > Tony, if you have something like this to bridge the gap > between Larry's great book and the practical aspects of > actually running a set of projects within a corporate > setting, it would be great to learn about it. OK! Here's something that works, as a basis for discussion: 1) Prioritize all projects, 1 through n. 2) Convey to everyone this message: Until better information is made available by the buffer report, everyone is to prioritize task-level work in a way that is entirely consistent with the project-level prioritization. As soon as everyone begins working their task-level work in this manner you'll see an initial burst of speed. But don't stop there, please.
3) From the list of prioritized projects, choose the first one that has some flexibility in the due-date. Use this one as the first project with which your people learn and begin to apply the critical chain method in its entirety. The flexible due-date is important, because you need to provide for them a learning experience that lets them succeed the very first time that they try it. Their initial success, in turn, provides you with the opportunity to flaunt their success and use it to drive more of the same behavior throughout the organization. 4) Use the cutover project to install an effective project planning process in the organization, and make sure that YOU make project planning a necessary and desirable activity for all concerned. The Deliverables Oriented Planning process that we teach our customers is rather effective. Pam Greca, of Confluence Technologies, writes this about our planning process: "Confluence project managers now have a great new tool for creating realistic projects. On every occasion that we have employed your process, we have had positive feedback from the project teams involved and a noticeable improvement in the quality of the project plans. Your planning process has helped our project teams to clarify their processes and rid projects of unplanned work - once a costly source of frustration for us. Project plans are now complete, logically connected, and agreed upon by our teams." 5) Once enough projects are planned with the CC method, select a drum resource and start scheduling the projects according to the availability of the drum resource. 6) As soon as your people are able to generate timely, credible buffer reports, start doing buffer management. This means using the buffer report for all active projects, to reassign resources in real time and to account for the inevitable shifts that happen. 7) Look at marketing or production policy and measurements as the next constraint area. Oh what a fool am I. Larry Leach's book discusses the multi-project method: "Critical Chain Project Management" by Larry Leach, ISBN: 1-58053-074-5. Tony Rizzo +( Scheduling and Planning The schedule (the sequence of product production) is given by the customers' orders. The customers order both in quantity and due date. So in a make-to-order situation you schedule the release of material into the factory according to the drum's availability and when the drum needs the material to be able to meet the due date. The buffer (time) you want to take care of Murphy determines how far in advance the release of material is ordered. So there is scheduling, but it is given by the customer. In a make-to-stock situation it is very similar, as then your warehouse in effect places the orders. In a situation I have been working with, the business must produce to stock; short runs are very costly in capacity and material, so they have defined a product wheel - the order in which they produce products (a sort of colour wheel). The rest (quantity) is determined by the customer, as the plant only replenishes exactly what has been consumed out of inventory (to the closest batch/package size). So here the schedule is determined by the plant's need for a product wheel and the customer's need for quantity. Planning is also necessary (as I am sure you know). The plant has only limited capacity, and as demand approaches capacity it must decide how to manage this demand so that it does not exceed capacity. This is something many businesses don't do - they want the maximum and therefore accept all orders.
Soon they cannot deliver and the market punishes them (this is planning to be punished!). There is a cloud in this which every business must solve for themselves. In many plants not all products go through the constraint. These are the 'free products'. The more you sell, the more Throughput you get. But wait - don't they also use resources that other products (those that do go through the bottleneck) use? Usually the answer is 'of course'. Here a business must plan to sell only as much of the 'free' product as just avoids causing an interactive constraints situation. This is part of planning. Inventories are also part of the planning process. As we approach capacity our protective capacity declines. It takes us longer to recover from spikes in demand. A business can plan to increase inventories to cover (at least for a time) demand spikes. In the end demand management will be the only way to manage through a capacity bottleneck. Hope this clarifies the difference between scheduling and planning. Maybe we need to invent a new word in German for scheduling - wouldn't 'disponieren' be the right word according to the above? Rudi Burkhard My understanding of DBR is that production is triggered by consumption by customers and material release is triggered by the consumption of the constraint resource. No need to do scheduling or planning for that (IMHO). +( software Oded Cohen at the EDM program 10-JUL-2001 : - ProChain is mainly single project management - Concerto is multi project management I'm aware of two software products that support the TOC Multi-Project Management Method and the one-project Critical Chain Method. They are: > 1) ProChain and ProChain Plus, from Prochain Solutions, Inc. > http://www.prochain.com/ > > 2) Concerto, from the now altered Thru-Put Technologies. I've been > told that the company is now called Speed To Market Engines. But > this is second-hand information. > http://www.speedtomarket.com > > I have had no direct experience with Concerto. Therefore I cannot > comment on its effectiveness. > > I have had much direct experience with ProChain and ProChain Plus. > The products work. I've also found the company's support to be > quite responsive. > > Tony Rizzo From: "Graaf, Menno (Menno)" To: "CM SIG List" Subject: [cmsig] Re: TOC & CCPM "enabling" products? Software etc? Date: Tue, 21 Dec 1999 13:33:48 +0100 My experience with Concerto vs ProChain is the following: Concerto is a client-server structure with a central place for storing, administering, and tracking all the projects. It is very much designed from the concept of multi project management within, as Hanan named it, heavy matrix organizations, where you want to have all projects available and controlled in a central location. Because of the multi-project concept, it misses some of the "sub-optimization" capabilities that ProChain has. These are things like only being allowed to make FS relations, not being allowed to assign people at anything other than 100%, not being allowed to specify task durations in non-integer days. Not taking into account early finishes when there is no buffer penetration. Not supporting the square root buffer calculation. Though I had lots of problems with these restrictions initially (coming from ProChain), I found them eventually very liberating and I now see them as a competitive edge for Concerto. It forces you to focus on the global project plan without giving you the opportunities to divert into sub-optimizations.
ProChain requires much more self-restraint to keep focus on what is really important. Concerto forces you completely through the paradigm shift from the very start, and that can be a bit painful. One of the big advantages of Concerto is the central reporting structure and the ability for managers to make their own updates for tasks assigned to them. Again, I find this useful as I am part of a "heavy matrix organization". For other organizations this may be less of an advantage or selling point. I think both tools have their own right to exist. You have to look at the sort of organization and see what they need for their application. --- From: "Larry Leach" Date: Sat, 29 Sep 2001 09:48:07 -0500 I am aware of four Critical Chain software vendors: 1. ProChain.com 2. Scitor.com 3. Speedtomarket.com 4. And, I had some contact with one from AGI Scotland, I think; but do not have a link. I have used the first two, and they are fine. I have had a presentation on the third (the software is named Concerto), and it looks excellent also, but is significantly more expensive. It seems to have a little bit of a production mindset to it, but I do not know enough about it to offer a recommendation. I keep hearing Primavera is going to do it, but have seen no evidence. +( task duration From: "larry leach" Subject: [cmsig] Task Duration Estimating Date: Fri, 12 Apr 2002 15:44:54 -0500 ---Tom stated: >When you 'arbitrarily' cut time out of a schedule, you are: 1) Saying that the team is incompetent (or at least hasn't got a clue as to how long it will take them to do THEIR work). 2) Saying that the team is dishonest, padding the time so they can goldbrick later. 3) If you don't believe either 1 or 2 applies, then you agree their time estimates ARE valid and you are faced with specifying which you are willing to allow to happen: A) Quality will be allowed to suffer, or B) Costs can rise, either through paying overtime/bonuses to get the work out on schedule, or you may have to pay late penalties when the product/service can't be delivered by the "promised" schedule. Tom: I suggest to you that the above set of assertions demonstrates a fundamental misunderstanding of reality; especially of variation and estimates. I spend a lot of time with people getting them to understand and agree that all estimates are probabilistic. Any one number they give has a probability of zero. If they are claiming to do it in some time or less, then that estimate has some probability associated with it. I then ask them what probability is management asking for by their actions? Do you have buffers on your projects? (Almost universally No.) How many tasks on the critical path need be late to make a project late? (All agree it is one.) Does management like half their projects over-running cost and/or schedule? So, what must the probability be that management is asking you for on each task? All agree it is high...something you can commit to. Like 90% plus. Then, we do some exercises to illustrate the variance in real distributions. It is large...plus or minus tens or hundreds of percent for the simplest of tasks. Tasks much simpler than their real project tasks. Then they agree that the mean is much less than what they have been asked to estimate. We next go through illustrations to show that it is much better to aggregate the uncertainty at the end of chains (the insurance principle). I never have anyone complain about task activity cuts in that light.
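To make the aggregation argument concrete, here is a small numeric sketch (a hypothetical illustration, not Larry's classroom exercise; the task figures are invented). It compares giving every task its own protection with scheduling the chain at the mean estimates and pooling the protection in one buffer, sized either by the half-the-chain rule or by the square root of the sum of squares (SSQ) that comes up later in this thread:

    # A minimal sketch of the "insurance principle": per-task protection
    # versus pooled protection at the end of the chain.  Data are invented.
    import math

    # (mean, high-confidence) duration estimates in days for one chain of tasks.
    # The high-confidence number is what people usually commit to task-by-task.
    tasks = [(4, 8), (6, 12), (3, 7), (5, 10), (7, 13)]

    means = [m for m, _ in tasks]
    safe = [s for _, s in tasks]

    # Plan A: every task carries its own protection.
    plan_a = sum(safe)

    # Plan B: schedule the chain at the mean estimates and pool the protection,
    # using two common buffer-sizing rules discussed in this thread.
    chain = sum(means)
    buffer_half = 0.5 * chain
    buffer_ssq = math.sqrt(sum((s - m) ** 2 for m, s in tasks))

    print(f"Plan A (per-task protection): {plan_a} days")
    print(f"Plan B, half-the-chain rule : {chain + buffer_half:.1f} days "
          f"({chain} chain + {buffer_half:.1f} buffer)")
    print(f"Plan B, SSQ rule            : {chain + buffer_ssq:.1f} days "
          f"({chain} chain + {buffer_ssq:.1f} buffer)")

With these made-up numbers the pooled plans come to roughly 36 to 38 days instead of 50; the single buffer absorbs the same variation with far less total committed time, which is the point of the insurance principle.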
They know the cuts are not arbitrary; it is simply a way to reallocate uncertainty to the buffer. We always use their estimates as the starting point. Their estimate is valid for what it is...a high probability (i.e. low risk) estimate. It requires changing the mindset first. I tell them that the word 'padding' is evidence they do not yet understand. There is no such thing. There are only estimates with associated probabilities of getting done in that time or less. Not bad or good; just reality. BTW, one of the other things I do is first create a resource leveled MS Project 'critical path' plan to show management what we did. Whenever anyone even hints at cutting the project buffer, I tell them 'that means we are back to the critical path plan.' It is always longer, so they go away. I think this helps project team security. (PS: I put 'critical path' in quotes because when you resource level, there is (in general) no longer a critical path. All paths have gaps (slack or float).) +( task priority From: "Tony Rizzo" Date: Mon, 20 Jun 2005 14:05:08 -0400 Subject: RE: [tocleaders] Task Priority Larry, I understand your concerns regarding the SSQ method of calculating the project tolerance and the component tolerances. But you too make an assumption, this time regarding my reasons for preferring it. Indeed, initially I was motivated by the mathematics. It didn't seem useful that the cut-and-paste method would assign a one-year "buffer" for a two-year project. However, a number of painful experiences gave me a much more powerful reason for teaching the SSQ method, and that reason has nothing to do with math or with accuracy. The cut-and-paste method described in the Critical Chain book and supported by all the CC software packages (including our own) creates the perception that it is acceptable for project managers to cut in half the estimates of developers and that we, consultants, advocate this practice. The practice, in fact, destroys trust, and it causes the developers to protect themselves further, by providing estimates of duration that are increasingly large. The project managers (and even the resource managers) in turn cut more deeply with each iteration. As a result, the entire enterprise embarks upon a process of ongoing degradation rather than one of ongoing improvement. From the perspective of a change agent whose goal is to change the behavior of the many, this effect is devastating. Rather than improving buy-in and gaining the support of the very people whose behavior we absolutely need to change, the cut-and-paste method alienates these important people. It provides them with unquestionable evidence that supports their most skeptical suspicions. And it significantly increases the number of front-line people who resist the right changes rather than embracing them. Given your own quest for making the right changes, that is, the changes that cause the right behaviors, I would hope that you too would see the cut-and-paste method from the more global, longer-term perspective. The SSQ method reverses the counterproductive effect created by the cut-and-paste method. When an enterprise uses the SSQ method, the people who perform the work are asked to provide THEIR inputs. They become the sources of the data with which the project tolerances and the component tolerances are calculated.
Thus, rather than seeing their estimates cut in half by management, the workers whose behaviors we must change see a management team that solicits their inputs, relies upon their experience, and trusts their expertise. From the perspective of "protection against variation," quite frankly, all the accepted methods provide tolerance estimates that are vastly insufficient. To confirm this, just look at a control chart of project duration, normalized by the respective planned duration. You'll see that the upper bound of variation (the upper control limit) is often 5 or 6 times greater than the planned duration. Given the huge degree of variation in project duration that virtually all enterprises experience (by the way, where are all the six sigma people when it really counts) we might as well use and teach a tolerance calculation method that brings workers into the fold by providing them with opportunities to contribute, rather than making them feel yet again that another improvement program is being inflicted upon them. As you say, it's really all about changing behaviors. But the behavior change doesn't begin in the trenches. It begins at the top, and it ends at the trenches. Tony -----Original Message----- From: tocleaders@yahoogroups.com [mailto:tocleaders@yahoogroups.com] On Behalf Of Larry Sent: Monday, June 20, 2005 12:54 PM To: tocleaders@yahoogroups.com Subject: [tocleaders] Task Priority Hi, Tony The trigger to think about it (again) is that one CC software vendor recently introduced the idea of the flow index. That caused me to want to rethink the subject. I think we should revisit every such topic every so often anyway, to see if what we learned indicates a need to improve. I don't have a problem with more robust tools, where justified. I used to be an advocate of the SSQ method. As you know, it contains assumptions and also contains simplifications. The problem with more detailed models that do not make these assumptions and simplifications clear is that people forget that they are there, and thus miss when the model may no longer be valid. IMHO, some also come to rely too much on the model, and turn off their brains. The biggest issue I have with SSQ (addressed in my PM Journal paper) is that it does not describe the reality of the trend of over-runs from small projects to large. The SSQ method yields a smaller and smaller relative buffer (% of chain) as chains (and thus projects) get longer (larger), while reality is in the opposite direction: larger projects over-run schedule by larger percentages. I initially proposed correcting this with a bias correction (in the paper), but subsequently have found that Goldratt's initial suggestion of half the chain (applied to the variable tasks in the chain) works very well over a wide range of conditions. This is a minor effect compared to the above, but SSQ makes an assumption of the independence of task durations. Sometimes true, sometimes not. One can use Monte-Carlo tools to overcome some of the issues with SSQ (specifically, the resource interactions and task correlations), but I haven't found that to be any help at all (I am working with one unconvinced customer right now doing Monte Carlo). Indeed, people begin to believe the model more and more, and lose track of the fact that if they don't change behavior (input to reality), they don't change output. So far, I have found that the more detailed analysis does not lead to better performance. 
When that is true, I am an advocate of Occam's razor (See http://en.wikipedia.org/wiki/Occam%27s_razor : the original TOC thinker, I now suspect). Larry Leach, PMP http://www.Advanced-Projects.com 208-830-7860 Date: Sun, 19 Jun 2005 14:30:45 -0400 From: "Tony Rizzo" Subject: RE: Digest Number 1065 > ...in the spirit of TOC, I am looking for a simple rule. The simplicity was desirable and necessary when computers were feeble, expensive, and difficult to use. In those early days the models needed to be updated manually quite often. Today, the simplicity is no longer necessary. I maintain that blind adherence to this concept of simplicity is inappropriate and even counterproductive today. We have moved away from it in some areas. I don't understand why there is so much insistence that we keep things simple, even when simplicity comes at the expense of effectiveness. This thinking simply prevents us from adopting new, more powerful tools in a timely manner. Had this insistence upon simplicity been heeded, say, when MRP software became available, production managers might have insisted that the software tools limit the instances of net requirement to one per month. But this would have precluded the benefits of the speed made available by computers. Simplicity is useful when the more complex method is infeasible. Simplicity is useful also when the audience is inappropriately educated. But neither of these conditions exists any longer for the vast majority of organizations. So why should we not take advantage of the more powerful tools that are available today? +( teaching project management From: Edmundwschuster@aol.com Date: Wed, 15 Mar 2000 10:16:42 EST Subject: [cmsig] Re: Training Game! To: "CM SIG List" This email is a general reply to the question about project management software and education. If you need some good information on teaching project management contact Jeff Pinto at Penn State Erie, jkp4@psu.edu. Jeff is an outstanding classroom instructor and has published a number of books and papers on project management. I encourage CM-SIG members to take a look at his list of publications on project management: http://www.personal.psu.edu/faculty/j/k/jkp4/cv.htm Also, there is a new company in Cambridge, MA that applies system dynamics (commonly known in APICS as the beer game) to project management. The name of the company is Strategic Simulation Systems, Inc. (SSSi). It is a small company and they are in the process of commercializing their software; however, I am very impressed with the approach that they take and they certainly know a great deal about project management. The parent company is Pugh-Roberts Associates, who have over 35 years of experience in large scale project management assignments. In my opinion, the ideas SSSi put forth will save large amounts of money on complex projects. Their thinking is innovative and CM-SIG members will find the approach interesting. Their web site contains some benchmarking data as well as some papers on project management. Both SSSi and Pugh-Roberts have strong links to MIT. Strategic Simulation Systems is at: http://www.strategicsimulation.com/ +( The Germ Theory of Project Management From: "Richard E. Zultner" To: "CM SIG List" Subject: [cmsig] The Germ Theory of Project Management Date: Thu, 4 Nov 1999 12:37:56 -0500 ...............
A recent discussion about TOC Critical Chain and evangelism got me thinking about two related issues: why is there such resistance to Critical Chain project management among a significant number of experienced project managers? (I mean, when I get a post I don't like, or think is silly, I just delete it and forget about it. To be upset over a critical chain post suggests that it is seen as some sort of ... threat. And, on the other hand, why are the adherents of Critical Chain so ... intense in their promotion? Why do they have this almost religious fervor, such that they have acquired the nickname of "chainers"?) There is an interesting parallel to this -- from a previous century, and from a different field -- which may illuminate the "threat" that ToC's critical chain approach may represent to some experienced project managers. And perhaps offer an explanation as to why those who are adherents of critical chain are so annoyingly persistent about it... WARNING: The material which follows is rated "PS" for Paradigm Shift. It may be considered provocative, or even inflammatory, but that is NOT the intent! For mature and open-minded project managers only. Think of it as a story, a fable perhaps. Or even science fiction... ======================== The Germ Theory of Project Management (An adaptation by Richard Zultner of an article by Myron Tribus) A recent review of contemporary project management practices stated: "Medicine had been 'successfully' practiced for centuries without knowledge of germs. In a pre-germ theory paradigm, some patients got better, some got worse and some stayed the same; in each case, some rationale could be used to explain the outcome..." Doctors administered to the needs of their patients according to what they learned in school and in their training. They also learned by experience. They could only apply what they knew and believed. They had no other choice. They could not apply what they did not know or what they disbelieved. What they did was always interpreted in terms of what they understood was "the way things worked". As professionals, they found it difficult to stray too far from the common knowledge and understanding of their profession. They were under pressure to follow "accepted practice". In this regard, the doctors were no better and no worse than the rest of us. We are all prisoners of our paradigms, our training, and the knowledge of our teachers, mentors, and fellow practitioners. Today we smile when we read that after sewing up a wound with silk thread, the surgeons of 150 years ago recommended leaving a length of the thread outside the wound. This was done to draw off the pus that was sure to follow the insertion of unsterilized thread by unwashed hands using an unsterilized needle. The Challenge: Changing People's Beliefs Imagine that it is the year 1869. Pasteur has only recently demonstrated that fermentation is caused by organisms which are carried in the air. Only a few months ago Lister tried out the first antiseptic, carbolic acid, and found that it worked to prevent inflammation and pus after surgery. In 1869 the spread of medical information was much slower than today. Imagine you are a young researcher in an American medical school. The American Civil War is over, and you are trying to develop your own career after your Army service. You are a serious young doctor who tries to learn the latest developments in the medical profession.
Suppose that you have just read about Pasteur's and Lister's work, and that you have been invited to speak before a group of distinguished physicians, many of them having come to fame for their heroic service as surgeons during the Civil War. Unfortunately, what you now understand from your readings is that these famous physicians are actually killing their patients. Your responsibility is to explain to them, if you can, that because they do not wash their hands or sterilize their instruments, they sew death into every wound. Your assignment is to persuade them to forget most of what they have been taught, to abandon much of the wisdom they have accumulated over their distinguished careers, and to rebuild their understanding of the practice of medicine around the new theory of germs. Do you think you could do it? Do you think you could convince them? Do you think they will be glad to hear you? Now, imagine that instead of being the speaker, you are a member of the audience. You are one of the good doctors who have earned respect and prestige in your village. You have a nice house on the hill, a nice spouse, a nice carriage, some fine horses, and a few servants. You are part of the elite of your society. How will you feel if someone starts spreading the word that your treatments are a menace, that the theories you hold are bunk, and that your habit of moving from one patient to another, laying unwashed hands on each, guarantees the spread of disease to all who are so unfortunate as to become your patients? What do you think will happen to your practice if this kind of word gets bandied about? How would you be likely to greet the messenger? The Origin of the Germ Theory of Project Management In 1865 Pasteur was in the south of France to investigate what was killing the silkworms in the silk industry of France. He not only isolated the bacilli of two distinct diseases, he also developed a method to prevent contagion. Lord Lister applied this knowledge in medicine in the same year. Thus was born the germ theory of medicine. In the 1990s Eli Goldratt was asked by a company facing financial ruin what they could do to save their organization. This company did projects for their customers as their basic business. They were no strangers to the "best current practices" in project management. They were as good as anyone else in their industry at short, expensive, technically demanding projects. But they were failing, and were facing massive layoffs -- which were politically impossible. They couldn't take the easy way out and just give up. They had to try something. Anything. Even Theory of Constraints. So they did. And the result was the first proof that applying Theory of Constraints to projects could produce company-saving results. Today this firm is the leader in their industry, and enormously profitable. Just like Pasteur's germs, "Murphy" is everywhere. Variation (or "risk") cannot be seen with the naked eye. This virus of variability attacks all projects in all companies. But what can be done? Projects have always suffered from Murphy... Germs are controlled by pasteurization. Goldratt showed how to control the virus of variability, how to reduce it, and how to manage it. In short, Goldratt invented the equivalent of pasteurization for projects: critical chain project management. In the beginning people thought that Goldratt's Theory of Constraints approach was only a "theory", and only suited to manufacturing.
Just as Lister understood the broader significance of Pasteur's work to the practice of medicine, so too did Dee Jacob, Tony Rizzo, and others understand the significance of Goldratt's work to project management. Goldratt's investigations thus laid the foundations for the "germ theory of project management". For most people, the virus of "Murphy" is invisible. Sometimes its cost is invisible too. It takes special methods to find viruses. You have to know how to look. Doctors had a theory of how malaria was spread. They called it "mal-aria" to emphasize that it was the bad air, the unhealthy vapors in the night, that caused the disease. Their theory of medicine caused them to look in the wrong places for the wrong answers to their most pressing problems. Today, project managers may be doing the same. When they are up against increasing demands for schedule reduction, they look to increases in management commitment, in resources available, in team communication, in software tools -- everywhere except in their own understanding of what makes a project take so long. Today we are facing a new paradigm of project management. The differences are almost as great as the shift from thinking the Earth is flat to understanding it is round. What is at issue is a substantial redefinition of the project manager's job. It is a new world and project managers need to learn how to navigate properly in it. If they think the world is flat, they will be continually worried about falling over the edge. They will be forever bound to staying very close to home, afraid to venture into new territory. Innovation will be stifled. THE PROJECT MANAGER'S JOB HAS BEEN REDEFINED The germ theory of project management requires managers to pay much more attention than before to the entire system of projects, and how the project they are responsible for impacts that system, in order to provide greater benefits to the organization. What the doctors were taught was not good enough. Some things they did were downright dangerous and harmful. But in time, they learned. So, in time, will today's project managers. But a lot of people had to suffer along the way. Doctors, after all, could bury their mistakes. Unfortunately, some enterprises may have to go bankrupt before we develop a new generation of project managers. The causes will be buried in the dead files of the bankruptcy courts. We cannot fully immunize all projects against "Murphy". No one knows, however, just how much can be done. Until Goldratt applied Theory of Constraints to projects, in order to save a desperate firm, and the results were seen on a large scale, it was not appreciated that in some instances schedules could be cut by as much as 50% without affecting scope, resources, risk, or quality. Such results have now been seen in both large and small projects across many industries. The Task is Re-Education The readers of this article, of course, are different. You are independent thinkers who disdain to run with the herd. You are obviously enlightened people. Surely you will not behave as the doctors a century ago behaved when they were told they should see that their operating rooms were sterile. They fought it tooth and nail. "What, stop to wash my hands? Don't be silly. I have important things to do." It was a lot of work for them to change. They had to admit they had a lot to learn. They were human. They resented the need to change and hoped in their heart of hearts that it would all blow over.
In the first place, changing the practices and procedures in the operating room was not something they could do alone. They needed nurses and orderlies to help them. They had to begin by first understanding the germ theory of disease themselves. It is one thing to learn a new theory when you are a young student in medical school; it is another when you are busy supporting your family through your practice of medicine. After they learned the theory themselves they had to teach the nurses and orderlies how to sterilize instruments and medical facilities. They could not just leave these things to chance. They had to institute practices and procedures and train people to follow them. They had to influence the training and education of nurses so that these nurses would do the right things without having to be told. Such changes could not come about overnight. Today I meet project managers who do not want to learn. They are busy with initiating yet another project, and with demanding more resources for their existing projects. They are busy beseeching their management to do something about their scope, their resources, their tools -- all the while asking to be left alone so they can just "get on" with their projects. With their flawed images of how a project ought to be managed, they make heroic demands on their team members and thereby they provide job security only for recruiters. The task of re-education is so vast that it is difficult to see where to begin. One is reminded of the recipe for eating an elephant: One bite at a time. Or one project manager at a time? =========================== Of course, to us experienced project managers, as we gaze at our screen saver at the end of the day, this is just a story. A fable perhaps. It couldn't possibly have anything to do with us, today, could it? Nah... Copyright 1999 by Richard Zultner Richard@Zultner.com ............... Comments: > ... I don't want to throw the baby out with the bath water but > as most of us (and or respective organizations) have had a few > successful projects and while I'm always willing to do better I > have absolutely no tolerance for evangelical, "there's only one > way into heaven" thinking. Similar, perhaps, to what a respected, experienced doctor might have said at the conclusion of the imagined talk on "The Theory of Germs", as described above? "We've cared for our patients for over one hundred years in our traditional way. And now you say it's wrong? And that we do harm despite our good intentions? Such impudence! Have you no respect for your elders? For the traditions of medicine?" "With this 'germ theory' (and it's only a 'theory', right?) you just go too far! And being so demanding (EVERYTHING has to be sterile?) and uncompromising (DOCTORS are causing disease by spreading germs?) about it just won't be tolerated!" And yet those who believed in the Germ Theory could not just stand idly by and watch innocent patients die unnecessarily. Their knowledge placed a moral obligation on them to act. And so they did... Paradigm shifts are painful, and not everyone can make the shift. How long will it take project managers to make theirs? +( Throughput Rate in Project Environments Date: Thu, 01 Jun 2000 19:42:37 -0400 From: Tony Rizzo To: "CM SIG List" Subject: [cmsig] Re: Throughput Rates and Tony's System If the drum moves around, as it does for some organizations, then T per unit time of the drum resource becomes more difficult to use. Still, something close to it can be achieved.
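For the simpler case in which the drum does not move around, the measurement mentioned above, Throughput generated per unit of drum time consumed, can be sketched roughly as follows (a hypothetical illustration: the project names, T figures, and drum-day estimates are invented, and the code shows only the ranking step, not the staggering procedure laid out in the next paragraph):

    # A minimal sketch of ranking projects by Throughput per unit of drum time.
    # Project names, T figures, and drum-day estimates are invented.
    projects = [
        # (name, expected T over the near term, days of drum time needed)
        ("Alpha", 900_000, 30),
        ("Bravo", 600_000, 15),
        ("Charlie", 1_200_000, 60),
    ]

    # Rank by T generated per day of the drum resource consumed.
    ranked = sorted(projects, key=lambda p: p[1] / p[2], reverse=True)

    for name, t, drum_days in ranked:
        print(f"{name:8s}  T/drum-day = {t / drum_days:,.0f}")

The procedure that follows swaps drum time for the stagger between successive project end dates, but the selection logic appears to be the same: schedule next the project that buys the most T per unit of the scarce quantity.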
Given a set of n projects, each of which is expected to generate a specific cash flow upon completion, and none of which is a technology platform development project (little or no T for these in the near term), an appropriate sequence for the set of n projects can be determined with the following algorithm: 1) Select one project at random from the set, and schedule it. 2) Determine the effective stagger between this project's end date and the end date of the previously scheduled project. 3) Calculate T for the project, for an appropriate short-term period, such as a year. 4) Calculate T per unit time of stagger for the project, using the T calculated in step 3. 5) Repeat steps 1 through 4 for all projects in the set. 6) Schedule first the project for which T per unit time of stagger is highest. 7) Repeat steps 1 through 6 with the remaining projects in the set, until all the projects are scheduled. This approach is not the best possible method. It does not yield the absolute maximum cash flow picture. But it is one that can be done manually, with a little effort. Achieving the absolute optimum sequence is a difficult optimization problem, which requires more sophisticated software than most executive teams are able to utilize. Given n projects, I think that there are n-factorial combinations available. Over 40,000 combinations are possible with only 8 projects. However, your friend and mine always tends to follow a very useful policy. First, do what's simple. This takes care of most of the cases that we're likely to encounter. The rest of the benefit is probably in the noise anyway. After all, how precisely can anybody predict the cash flow generated by any project? Regarding the drum being the most heavily scheduled resource, this would be a nice thing to have. However, just about every product development organization that exists today, with the exception of the few that have already adopted the TOC Multi-Project Management Method, has such screwed up development operations (logistics and scheduling) that it is virtually impossible to find the most heavily loaded resource. It's wise to try to identify the most heavily loaded resource. But it's tough to do initially. Even when we can find that most heavily loaded resource, is it an internal constraint? No! The constraint is a policy, which causes multitasking, lethargic cycle times, delayed and greatly reduced cash flow, and pitiful on-time performance. +( Uncertainty of number of iterations within a CCR operation From: "Tony Rizzo" Date: Thu, 10 May 2001 19:20:09 -0400 Prioritize the flow into the CCR on the basis of the number of times that a work package has passed through the CCR. The highest priority is given to work packages that have passed through it the most. Work packages that have not passed through the CCR at all are given the lowest priority. How you implement this depends on the work downstream of the CCR. If there is considerable work done to the work packages downstream of the CCR, then you might want to schedule fresh work, but leave holes in the schedule of the fresh work. The holes are filled by work items that have passed through the CCR earlier. In the event that there are no such older work items to fill the holes, simply pull in the schedules for fresh work items. In the project world, we call this the "place holder" method. ----- Original Message ----- From: "Juan A. Cisneros M."
Sent: Thursday, May 10, 2001 6:15 PM Subject: [cmsig] Uncertainty of number of iterations within a CCR operation > Dear CMSIG users, > > We have a factory operation whose CCR feeds itself through other resources, > we have an understanding of how to schedule the fragmented order blocks using > the concept of the rods stated in the Haystack Syndrome chapter 33, the > problem we can't deal with and are discussing is the uncertainty about > how many times an order is required to pass through the CCR operation, by > the way this CCR processes leather and right now it is impossible to develop > a schedule that doesn't need to be broken to get a satisfactory color of the > leather. > Can someone advise how to schedule the CCR with these two peculiarities? +( virtual drum A virtual drum is a best-guess interval for spacing the prescribed reference-point of one project relative to the comparable reference-points of adjacent projects, on the timeline. The virtual drum achieves the same effect as the schedule of a physical resource that is designated as a drum: It creates spacing between projects, albeit an identical, prescribed amount of spacing between each pair of projects. The prescribed degree of spacing must be at least large enough to prevent the over-commitment of resources across projects; it can be considerably larger. With the former condition met, the virtual drum enables the same degree of performance as do similar techniques - performance denotes the average rate at which projects are completed. This outcome can be observed whenever the behavior of resources, i.e. the mechanism by which knowledge-work is completed, is uncoupled from any so-called schedule, such that the resources work not only without multitasking but also in an event-driven (pull) manner. When a knowledge-work engine is running at peak output and uncoupled from schedules, as I've described, all so-called schedules become nothing more than barely adequate, predictive models of the engine's performance. The barely adequate (and often inadequate) models serve merely as tools for shaping expectations. So long as the schedules are not permitted to interfere with the mechanism by which knowledge-work is completed, the schedules remain irrelevant to performance. One risk, which the use of the virtual drum technique creates, is that the prescribed degree of spacing might be insufficient - after all, it is a guess. When this happens, uninformed executives and managers are likely to interfere once more with the mechanism by which knowledge-work is completed. By interfering, the uninformed decision-makers create new waves of multitasking throughout the enterprise and cripple performance again. Tony +1.908.322.1840 desk +1.908.930.0411 mobile tony.rizzo@pdinstitute.com From: CriticalChain@yahoogroups.com [mailto:CriticalChain@yahoogroups.com] On Behalf Of lawrence_leach Sent: Monday, January 11, 2010 10:27 PM To: CriticalChain@yahoogroups.com Subject: [CriticalChain] Re: Virtual Drum Again Hi, Jack You seem to be about where I am on it. It appears to be a sticky idea that somehow causes people to want to do it without really working out if they have an actual constraint or not. I did see an explanation like you give somewhere, but I have a current case that does not match that situation, where one of the best CC firms in the land is suggesting using a virtual drum...I haven't yet connected the right person to ask why, but thought I'd check here while waiting to see if I am missing something.
Regards, Larry --- In CriticalChain@yahoogroups.com, "Jack Vinson" wrote: > > Larry and All- > > Caveat: I haven't used them either. But I am curious. > > I get the impression that the Virtual Drum is a representation of a > collective constraint, where the organization does not wish to detail out > the specifics of that collective. If this is correct, then it might > represent something like "we have the capacity to do 5 XYZ projects that use > space, and require input from 10 different skill sets." > > I agree that the questions of how to Squeeze, Align and Elevate in this > situation are interesting. Again, if I understand, the elevation would > require being able to get more out of all the resources that impinge upon > the XYZ projects. > > It is supposed to offer a way to simplify projects as well, so that you > needn't get down into the overly-detailed plans that make assumptions about > handoffs that aren't as clean as we like to believe. At the Realization > Project Flow conference, several large organizations mentioned using them > (Boeing, ABB). But I didn't get a good feel for HOW they are used. > > Jack Vinson > > > -----Original Message----- > From: CriticalChain@yahoogroups.com [mailto:CriticalChain@yahoogroups.com] > On Behalf Of lawrence_leach > Sent: Sunday, January 10, 2010 9:05 PM > To: CriticalChain@yahoogroups.com > Subject: [CriticalChain] Virtual Drum Again > > Hi, All > > Those of you who have been here for a while know I am allergic to the > virtual drum idea, but being a glutton for certain kinds of punishment, > thought I'd bring it up again. > > For those of you who are new, a virtual drum, as I understand it, is setting an > arbitrary limit on the number of projects in work at one time; and perhaps > staggering the start of projects by some arbitrary amount. > > I am open to correction on that definition, because it isn't something I > use. > > Can anyone say how it aligns with the TOC focusing steps? > > That includes, how do you "elevate" a virtual drum? +( work breakdown structure I've had training from Tony, but I haven't had a chance to apply his exact approach on anything sizable yet. What I learned in a nutshell: A plan is a MODEL of a project. The heart of the model consists of a network of tasks and the resources allocated to them. To build this network, WORK BACKWARDS. A plan is a bridge between an end state and the current state. Start from the end result and work backwards asking the following five questions repeatedly: - What is this item? - Who provides it? - What is the name of the process that creates it? - What tangible inputs does the process need? - Are these inputs enough? Some tips: - Define your exit criteria in detail. Be pedantic. - Assign resources by name, not by role, so you know they exist and what skills and capacity they have. - The project manager creates the plan, but draws on subject-matter experts when necessary. - It's a good idea to put in more detail than necessary to start with, and then consolidate to make it tighter. If you don't put in the detail you may miss some exchanges that could be sequenced better. - The right level of detail lets you identify the interactions between team members. - The process is somewhat tedious, but that's part of the job. - Once you have a model you can play with it to improve it. New product development is discovery, so plan an iterative approach up-front.
Usually three iterations are enough, so design from the start to have three mini projects in a row - three critical chains and one project buffer. The first iteration should produce interfaces that mimic the working software. These can be used to accelerate the discovery process. Within each iteration, use the model for planning the next iteration as soon as you have the information. Think of it like a trip through uncharted territory. Plan to the horizon. David Paterson
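A toy sketch of the backward pass described above (an illustration only; the deliverables, process names, and resource names are invented, and this is not Tony's worksheet). Each deliverable records the answers to the five questions, and the "Are these inputs enough?" check falls out as a simple set difference:

    # A toy record of the backward-planning answers, with a check for inputs
    # that nobody has yet planned to produce.  All names are invented.
    from dataclasses import dataclass, field

    @dataclass
    class Deliverable:
        item: str                 # What is this item?
        provider: str             # Who provides it?
        process: str              # What is the name of the process that creates it?
        inputs: list = field(default_factory=list)  # What tangible inputs does the process need?

    # Work backwards from the end result.
    report = Deliverable("Validation report", "Dana", "System test",
                         inputs=["Integrated build", "Test plan"])
    build = Deliverable("Integrated build", "Lee", "Integration",
                        inputs=["Module A", "Module B"])
    plan = Deliverable("Test plan", "Dana", "Test design", inputs=["Requirements"])

    network = {d.item: d for d in (report, build, plan)}

    # "Are these inputs enough?" - anything named as an input but never planned
    # as a deliverable is a hole in the plan.
    planned = set(network)
    needed = {i for d in network.values() for i in d.inputs}
    print("Unplanned inputs:", sorted(needed - planned))

Here the check flags Module A, Module B, and Requirements as inputs that nobody has yet planned to produce, which is exactly the kind of missing exchange the five questions are meant to surface.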