+. 01-JUL-2000

+( 0 process of using ToC
by Jim Bowles, Aug 2004

There is a hierarchy of steps to using TOC, but unfortunately it often doesn't come across that way. Maybe it goes like this:

Ensure that the Goal is clearly defined. Ensure that the metrics used are those that are appropriate to the Goal.

1) Apply the 5 Focusing Steps. [Answering the three questions: What to change?, To what to change?, and How to make the change happen?] Does the problem fall into one of the generic application categories (DBR, Critical Chain, Distribution, etc.)? If yes, then apply the appropriate generic application. {Where necessary, adapt the generic to the specific, and if you don't have good change management processes, use the TP.} If no, go to step 2.

2) Conduct a Thinking Process analysis to determine the core conflict/cause, develop the solution, [and the implementation plans] etc. Do not use this step unless you have to, since it is a lot more effort than is required in many or most cases. [What sort of analysis, preparation and planning processes would you use here, especially in those cases where you are likely to encounter high resistance to change?]

---
To: , ...snip... CriticalChain@yahoogroups.com>
From: "Nicos Leon"
Subject: [Yahoo cmsig] RE: Thought experiment: common sense - behavior - manipulation

Jack, and all,

I had a quite similar thought. The following statement is a basic building block of TOC: "The Goal of any for-profit organization is to make money now and in the future". We also know that any organization operates as a system and that "Any system is composed of interrelated elements that interact with each other".

The necessary conditions for an organization to be successful are to be in a market big enough, to have a realistic goal, to use all its resources in an effective and productive way, and to minimize the risks.

The living environment of any organization, its ecosystem, contains the organization and its stakeholders:
1. The direct customers and the resellers
2. The suppliers
3. The employees and unions
4. The government agencies
5. The community (directly or through interest groups and the media)
6. The capital lenders (the owners/shareholders (or stockholders) and the other creditors (banks, etc.))

The Goal of all these stakeholders is to gain benefits from their relationship to the organization. We have to distinguish two kinds of benefits, i.e. monetary benefits and non-monetary ones. The customers who buy the company's products and services exchange money for the benefit of owning the product or using the services (that is the organization's revenue). The community benefits from the social contributions of the organization and from the increase in their quality of life because of the existence of the organization.

All the rest of the stakeholders will split between themselves the monetary benefit, the revenue of the organization - the money customers pay to the organization. The split of the monetary pie is a matter of a power game between the stakeholders. It is natural that the behavior of each stakeholder serves his or her own goal. Everybody wants to get more of the pie for himself. The main thinking is "what's in it for me?"

As long as the goal of each one conflicts with the goals of the other stakeholders, they will pursue their actual behavior. They may change their behavior if and only if they all agree that they will benefit more by changing behavior. A common goal can be achieved if and only if all stakeholders consider that their part of the pie is fair.

Let's see then who gets what. The sales revenue (what the customer pays in exchange for the benefit of the product or service) is split between the 4 remaining stakeholders: the suppliers, the employees, the government, and the capital lenders (the owners/shareholders and the other creditors).
The total Revenue is split among the 4 stakeholders as follows:

No | Stakeholder         | Part of Revenue of each Stakeholder                | Split according to Throughput Accounting
1  | The suppliers       | Raw materials, other direct costs                  | Variable Costs
2  | The employees       | The wages                                          | Throughput as per TA (TOC): employees' revenue
3  | The government      | Taxes and duties                                   | EBITDA
4  | The capital lenders | Interest (the various creditors, banks etc.);      |
   |                     | Net Profit (the owners/shareholders)               |

Given that the money to be paid for taxes and duties is an external legal obligation, and the interest to the various creditors is based on contractual agreements, it comes out that the remaining amount is split between the owners and the employees. Today's practice in the Western world is that the employees get their salaries based on their contracts (legal agreements), and whatever remains is the Net Profit for the owners/shareholders. Incentives also exist in the form of bonuses, commissions, fringe benefits, etc. Depending on the country and the local law, the termination of the contract is easy or difficult. A slightly modified system exists in Japan with lifetime employment.

The ongoing discussion on the problem of behavior and manipulation ignited by Santiago's question can be formulated as follows: under today's common practice, the employees' behavior is not aligned with the goal of the organization, and any attempt by the owners and/or their representatives (top managers) to use "carrot and stick" methods is seen as manipulation. Eli Goldratt has identified that the root cause of non-alignment is that the employees resent that they are not treated with "respect".

"The goal of the company is to make money now as well as in the future". This statement is common sense to the owners of the company. It is common sense to the top management (their role is to make profits a reality and their bonuses depend on their ability to achieve that).
But how does this statement sound to an employee lower in the hierarchy, whose goal is to bring more money home? I claim that even though the above statement is in line with the owners' and/or top managers' common sense, as long as the employees do not consider that their revenues are fair, and feel that they are not treated respectfully by the owners and/or the managers, they will not accept the statement and will not change their behavior. As long as we have not arrived at a consensus, we cannot realistically speak of VV.

Nicos Leon
Senior Business Consultant
Athens, Greece
Skype ID: leonnl

+( 10 steps to ToC

The Decalogue: ten steps to improvement

What is a system?

If we want to achieve continuous improvement, we have to understand that an organization is a system. We can define a system as a set of interdependent components that work together to achieve the goal of the system. Production seen as a system (W. Edwards Deming, The New Economics for Industry, Government, Education).

When an organization is managed systemically, managers have very precise responsibilities. There is no system without a goal. Managers must therefore communicate the goal of their organization both internally, to staff, and externally, to customers and competitors. Communicating the goal is not sufficient, however. Managers have to create a common vision. Managers have to be able to govern the present and the future, not just put out fires. The goal allows managers to create a long-term management policy. On a day-to-day basis, managers need to be able to manage the interdependencies among the components of the system. That means solving conflicts and removing barriers that prevent or complicate cooperation between individuals and functions.

Measuring the system

We have to optimize our system and be able to measure how much the system is achieving its goal. To do this we need to establish appropriate units of measurement.
We also need a set of measurements to assess the impact every local decision has on the goal. In a system, results do not come from the sum of individual efforts, but from coordinated activities. The only efforts that make sense are those that help achieve the goal. As this is the case, it makes no sense to reward people for their local, individual efforts, but only for how much they have contributed to achieving the goal of the organization. Otherwise we only undermine the common and shared vision we established before.

What measurements do we need? We can measure a system very simply in terms of input, output, and what we spend to make the system work. In other words:

Throughput (T): the pace at which the system generates units of the goal
Inventory/Investment (I): all the money the system invests to purchase goods
Operating Expense (OE): all the money the system spends to transform Inventory into Throughput

The more Throughput we generate, the closer we are to the goal of our system. Clearly, investment and operating expenses should tend toward a minimum, as money is a scarce resource. Our system is a network of interdependent processes.

What is a process?

Our system is made up of processes. A process is a set of inputs and a set of outputs. Inputs can come from components of the system or, for example, from the material the process is acting on. If we want to understand more about the interactions in our organization, we must design the processes that constitute our system. Once we have designed a process, we know:

who does what
what gets done and when
which decisions have to be made
what the possible consequences of every decision are

Tools for improving the system

The simplest, most effective method for this is the Deployment Flowchart. This tool provides us with a representation of a sequence of events, activities, steps and decisions that transform inputs in a system or process into outputs.
The knowledge we gain from a Deployment Flowchart is what we need to be able to make the right decisions throughout the process in order to improve it. Moreover, if people work together to design the system, they can see how they contribute to the goal of the process. This fosters continuous improvement, as people are able to understand the best way to do their job. When we are able to see how and where various processes interact, we can identify internal customers and suppliers. This helps break down barriers between individuals and functions.

When we examine the process we have designed and find inconsistencies in relation to the goal of the system, we have to redesign the process so it conforms with that goal. Redesigning our organization can also have negative implications. People tend to resist change, and managers need an effective means of dealing with fears and conflicts among staff. The TOC Thinking Processes tools to help with this are the Conflict Cloud and the Negative Branch Reservation.

Summary

In Step Two we have designed and redefined the processes of our organization so that they are consistent with the goal of the system. Having a clear and constant view of the system allows us to study results and decide how to behave. That means being able to improve the system.

The behavior of a system

We know that our system is a network of interdependent processes. Now we need to be able to answer these questions:

How do processes behave?
How can the behavior of a process influence our system's behavior?
What actions should we take to continuously improve our system?

The essential requirement for managing a system is the ability to predict. However, systems are unstable by their very nature, i.e. they produce unpredictable results. If we want to have control over our actions, then our system must be stable.

Control charts and reducing variation

Control charts were devised to measure and improve the variation of a system.
A control chart describes the way a process behaves. It allows us to measure the degree of stability of the process. We need to know if a process is in control or not, i.e. what kind of variation is affecting it. Variation in a process can be of two kinds:

Controlled variation, which is stable and consistent over time, due to common causes, i.e. causes intrinsic to the process, and therefore predictable.
Uncontrolled variation, which is not consistent over time, due to special causes, i.e. causes external to the process itself. It seriously undermines a manager's ability to predict, and therefore his capacity to manage.

Failing to identify the source of variation, special or common, leads to taking inappropriate actions on the system. Deming called this tampering with the system. We can measure and improve the stability of our system by applying control charts to the main processes that we have previously flowcharted. We achieve Quality and the continuous improvement of our system's processes by constantly reducing the sources of variation that undermine the predictability of our processes.

It is not sufficient to satisfy customer specifications. This alone does not allow us to understand the reliability and repeatability of our processes. We can give the customer 100% of what he wants while operating a process which is unstable. A state of control is not a natural state for a process, and entropy does exist. If we cannot predict the outcome of a process, we cannot manage it.

Summary

Maximum Quality is the result of minimum variation in processes. The processes which make up a system or organization are interdependent. If we do not understand the variation of processes, we cannot know what impact our efforts to improve one performance might have elsewhere. For this reason the goal of Step Three is to achieve a stable system. Only when we achieve stability can we truly focus on improving the system's performance and increasing its Throughput.
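The distinction between common-cause and special-cause variation can be made operational with an individuals (XmR) control chart. The sketch below is a minimal illustration, not part of the Decalogue text; the constant 2.66 is the standard XmR scaling factor for limits computed from the average moving range.

```python
def xmr_limits(samples):
    """Control limits for an individuals (XmR) chart.

    Limits are the mean plus/minus 2.66 times the average moving range
    (2.66 = 3 / 1.128, the standard constant for a moving range of two).
    """
    mean = sum(samples) / len(samples)
    moving_ranges = [abs(b - a) for a, b in zip(samples, samples[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

def out_of_control(samples):
    """Points outside the limits signal special-cause (uncontrolled) variation."""
    lo, hi = xmr_limits(samples)
    return [x for x in samples if x < lo or x > hi]
```

A point outside the computed limits signals a special cause worth investigating; points inside the limits are routine common-cause variation, and "correcting" them one by one is the tampering Deming warned against.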
At this point we understand our system and how it works. How can we get the maximum out of it?

What is a constraint?

Every system, by its very nature, has a 'bottleneck', or constraint. Let's imagine an organization as a tube, with raw material going in one end, and finished products coming out the other. In an ideal world, that tube has the same diameter all along it, so what goes in comes out in a perfectly smooth flow. But in the real world, somewhere that tube gets narrower: there is a 'bottleneck'. This is a part of the process that holds production up (because it's slower, or the machinery doesn't work properly, or any other reason). So no matter how much material goes in, the amount of finished products coming out of the tube will only be what gets past the bottleneck, or constraint.

We might wish for a perfect world with no constraints. But there will always be one. A constraint does not have to be a negative. If we are able to manage our system through the constraint, we can optimize the output of our system.

The Five Focusing Steps

Eli Goldratt developed the model of the Five Focusing Steps for managing systems through the constraint:

Focusing Step 1: Identify the constraint of the system. Generally, we can identify a constraint in a system on the basis of the interval of variability of its processes, and by observing how they interact.

Focusing Step 2: Decide how to exploit the constraint. As the constraint is what limits the system's Throughput, we have to make it work to the maximum.

Focusing Step 3: Subordinate everything else to the decision taken regarding the constraint. All the other components of the system must work so as to guarantee full-speed functioning of the constraint. At this stage the constraint is producing to the maximum. What can we do to further increase Throughput?

Focusing Step 4: Elevate the constraint. The only thing left to do is increase the capacity of the constraint, e.g. add a machine, or resources.
If the constraint is now no longer the constraint, we go to Focusing Step 5.

Focusing Step 5: Go back to Focusing Step 1. A new constraint will take over and we have to start the cycle all over again.

Drum - Buffer - Rope

In the TOC application for production, the plan for exploiting the constraint is known as the Drum. Material is released according to the pace at which the Drum consumes it. When to release material is determined by the length of time needed for the journey from material release to the Drum. This is called the Buffer time. The control of this operation is known as the Rope.

Summary

Our organization is a system. If all the processes making up the system are stable, then it has a constraint. If we manage our system through the constraint, following the Five Focusing Steps, we can maximize our Throughput. Remember, you can ignore the constraint, but it will not ignore you.

Implementing buffer management

A buffer is essentially a form of protection against variation and a control over the performance of the system. We have decided to manage our system through the constraint. Our system remains, however, a network of interrelated processes. These are all affected by variation. This variation will inevitably affect the performance of the constraint and therefore the Throughput of the system. We can protect the constraint's performance by implementing buffer management.

The unit of measurement of the buffer is time. How much time do we need to complete something? How many finished goods should be ready by a certain time? The buffer protects the constraint from the variation of the processes that feed it and that could "starve" it. So we establish the length of the buffer on the basis of the variation of the processes which impact it. The greater the variability of the processes, the longer the buffer, and vice versa. In order to identify problems and initiate actions to overcome them, we can divide the buffer into three zones.
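The three-zone scheme can be sketched as a simple penetration check. The equal one-third split below is an assumption for illustration; in practice the zone boundaries are tuned to the variation of the processes feeding the constraint.

```python
def buffer_zone(time_consumed, buffer_length):
    """Classify buffer penetration into the three zones of buffer management.

    An equal one-third split is assumed here for illustration.
    Zone 3: normal variation, no action; zone 2: take notice and prepare;
    zone 1: act at all costs to protect Throughput.
    """
    penetration = time_consumed / buffer_length
    if penetration <= 1 / 3:
        return 3  # green: do not interfere
    if penetration <= 2 / 3:
        return 2  # yellow: investigate and plan a recovery
    return 1      # red: expedite now
```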
When we find a disruption to the flow of a process in zone 3, we do not need to interfere. In zone 2 we begin to take notice, and in zone 1 we must take action at all costs to protect Throughput.

Using buffers in project management

A project is a set of interdependent activities that have to be completed respecting certain requirements. Just like an organization, a project is a system. This means we can apply the Five Focusing Steps we saw in Step Four to managing a project. The major obstacle to completing projects successfully is the variation in the interdependent processes involved. Unless we have made our system stable, with control mechanisms in place, we cannot apply the Five Focusing Steps:

1) Identify the constraint: the constraint of a project is the longest chain of dependent events (not only in a temporal sense but also considering the use of common resources) on which the duration of the project depends. We call this the Critical Chain.

2) We exploit the Critical Chain by protecting the total time of the project with the project buffer. This protects the project conclusion date from fluctuations along the Critical Chain. The size of the buffer depends on the variation of the interdependent processes. The resource buffer guarantees that the resources we have allocated along the Critical Chain are present when needed.

3) We subordinate everything to the Critical Chain schedule. This means that all 'feeding' activities (the non-critical parts of the project) are completed before the planned start of the critical activity. We guarantee this by installing feeding buffers.

4) We elevate the Critical Chain when the total planned duration of the project is too long and unacceptable for the customer.

5) We go back to Focusing Step 1. In project management this means moving from planning to execution.

By using buffer management, project managers can anticipate problems and so ensure the project runs smoothly. But they have to go further than this.
Their job is to change people's attitude to completing tasks. Projects have to be run like relay races; the winning team passes the baton on to the next runner as soon as possible. Otherwise you will have a new constraint to tackle - inertia.

Summary

Buffer management is the control mechanism that can protect the constraint and indicate areas of the system that are not in control. To get the maximum out of buffer management, we must make sure that all the main processes impacting the constraint are stable, with an interval of variability that allows them to be managed.

Reducing the variability of the constraint

The power of integrating Deming's philosophy with Goldratt's Theory of Constraints lies in the strength of focus we gain. We know that variation is our number one enemy in achieving continuous improvement. If we try to reduce variation in every single one of our processes, we embark on an arduous and exhausting task. This is not wrong, but how can we fuel the constancy of purpose required for such an effort? We do so by increasing Throughput. By identifying and subordinating to the constraint of our organization, we focus on continuously improving our limiting factor and the processes most closely connected to it. This allows us to maximize the results of our efforts.

We can use buffer management to identify the constraints we need to work on. These can be:

Capacity constraints
Policy and measurement constraints
Authority constraints
Sales constraints
Human relationship constraints

When our processes gain in stability, we can reduce buffer time. This means reducing manufacturing lead time and increasing Throughput. Once we have achieved a stable system, reducing variation is even more demanding; the forces acting on our system have achieved an intrinsic balance. The way out of inertia is a rigorous application of Deming's PDSA cycle.
This must be coupled with a way of overcoming people's resistance to change as we progressively intervene in what were once familiar processes.

Dealing with change

Every solution is a change. If we want to implement our solutions successfully, people must understand:

what the change is
why they are doing it
how they have to adapt their behavior

This means communicating the solution to people so they can understand it and help to build it. Goldratt has identified six layers of resistance to change. There is a Thinking Process tool that corresponds with each layer and helps to overcome it:

Layer One: Disagreement about the problem (it's not my problem). Tool: Core Conflict Cloud; Current Reality Tree.
Layer Two: Disagreement about the direction of the solution. Tool: Injection (solution that breaks the core conflict).
Layer Three: Lack of faith in the completeness of the solution. Tool: Future Reality Tree.
Layer Four: Fear of negative consequences generated by the solution. Tool: Negative Branch Reservation.
Layer Five: Too many obstacles along the road that leads to the change. Tool: Intermediate Objectives/Prerequisite Tree.
Layer Six: Not knowing what to do. Tool: all tools as necessary to overcome personal obstacles.

The conflict: hierarchy versus system

There is an underlying conflict between the hierarchical and the systemic vision of organizations. Why do we want a systemic company structure? Because in order to manage our organization effectively, we must be able to see the interdependencies of the system. Therefore, we must manage our organization according to the systemic model. On the other hand, in order to manage effectively we must have control over the organization, and therefore we must manage it according to the hierarchical model. Why? Because if we divide up our organization into so many boxes, we can exercise control over each one and over the whole thing.
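The hierarchy-versus-system conflict has the shape of a classic TOC conflict cloud: a common objective, two legitimate needs, two mutually exclusive wants, and the assumptions holding them in place. A minimal sketch (the field names are illustrative, not a standard notation):

```python
from dataclasses import dataclass, field

@dataclass
class ConflictCloud:
    """Minimal representation of a TOC conflict (evaporating) cloud."""
    objective: str       # A: the common objective
    need_b: str          # B: the requirement behind want D
    need_c: str          # C: the requirement behind want D'
    want_d: str          # D: one side of the conflict
    want_d_prime: str    # D': the opposing side
    assumptions: list = field(default_factory=list)  # surfacing these breaks the cloud

hierarchy_vs_system = ConflictCloud(
    objective="Manage the organization effectively",
    need_b="See and manage the interdependencies of the system",
    need_c="Have control over the organization",
    want_d="Manage according to the systemic model",
    want_d_prime="Manage according to the hierarchical model",
    assumptions=["Control has to be exercised everywhere in the organization"],
)
```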
The new managerial structure emerges from the resolution of the conflict between hierarchy and system. As with any conflict, we solve it by surfacing the assumptions. The assumption underlying the hierarchical vision is that control has to be exercised everywhere in an organization.

Managing a stable system

In a stable system, the only part which needs to be controlled is the constraint. The mechanism that allows us to do that is buffer management. Accordingly, any effective form of control can only stem from the control of the constraint and the buffer protecting it. It is not possible to outline in general terms what a control structure for a systemic organization should be. This depends on the organization. A suitable structure for a hospital would be very different from one for a manufacturing company or a software house. However, it is clear that whatever governing mechanism we design, it must favor effective management of the buffer.

If we want to implement a systemic, Deming-Goldratt based management approach that allows us to achieve results and long-term improvement, we have to create a suitable managerial structure. This means challenging our beliefs about how and where control should be exercised, as well as how we measure, motivate and reward people in our system. All this requires great energy and courage. However, by this stage in our transformation process, we have achieved concrete results to spur us on. We also have the Thinking Processes tools we need to bring about a change of this magnitude.

The external constraint

At this stage in our transformation process, we should not be surprised if our constraint has moved from inside the organization to outside: the willingness of the market to buy what we produce. If the market does not absorb all our products, we should reduce production. This means people have less work to do. If our thinking is limited to the cost-cutting paradigm, we will decide to optimize our improved system by cutting back on people (the very people who contributed to improving the company). If our focus is on increasing Throughput, then we understand the importance of excess capacity. Using the Thinking Processes, we can find the most effective way of selling our excess capacity, thus increasing Throughput.

Understanding what the customer wants

We can greatly increase people's perception of our product's value by making them see that it solves a problem for them. If we identify our current and potential customers' problems, or Undesirable Effects (UDEs) in TOC terminology, we can find the core problem that generates them. If we can tackle this core problem, it will be the key to making a successful offer.

In order to derive the core problem we have to stratify the data we have on our customers. We do this on the basis of the commonality of their UDEs. When we group organizations according to their common main UDEs, we are able to develop common core conflicts. The core problem of a prospect consists of the assumptions lying behind his or her core conflict. These assumptions are what prevent him or her from buying what we offer. If we use the Thinking Processes tools in full, we will be able to properly exploit the knowledge provided by a correct stratification, fully understand customers' needs and greatly increase our chance of satisfying the customer.

Using the tools, we can develop an offer that is a solution to our customers' core problem. Moreover, it is a solution that provides benefits without creating negative effects for us or the customer. Our constraint has become sales. Once we have constructed the offer, our sales people will have to learn how to sell it, and how to continuously improve their ability to sell.

Summary

When our constraint moves from inside to outside the organization, it becomes sales. Using the Thinking Processes tools we can construct unrefusable offers to sell our excess capacity successfully.
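The stratification step described above can be pictured as grouping prospects by the UDEs they share. A minimal sketch, assuming exact matches on UDE sets (real stratification would cluster on the main common UDEs, not require identical lists):

```python
from collections import defaultdict

def stratify_by_udes(prospects):
    """Group prospects that report the same set of Undesirable Effects.

    `prospects` maps a prospect name to its observed UDEs. Using a
    frozenset as the grouping key is a simplification for illustration.
    """
    groups = defaultdict(list)
    for name, udes in prospects.items():
        groups[frozenset(udes)].append(name)
    return dict(groups)
```

Two retailers who both report stockouts and overstocks land in the same group, and a common core conflict can then be developed for that group.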
Types of constraint and generic solutions

By this stage the organization has gained the experience and the benefits of addressing most of the constraints one can expect to find in any organization. Generic solutions have been developed for some of the major types of constraint. They have been implemented since 1975 and have successfully produced results. Naturally, a generic solution needs to be customized for every specific environment, but the fact that a generic solution exists saves us a great deal of work. The main types of constraint are:

Resource and capacity constraints: Drum-Buffer-Rope together with Buffer Management allows effective exploitation of the constraint and control over the performance of the system.

Time constraints: Critical Chain, synchronization of strategic resources (the drum) and buffer management control and elevate time constraints.

Policy constraints: The Thinking Processes provide a systematic analysis of what is wrong with the current policy and what to replace it with, without generating negative effects.

Sales constraints: The Thinking Processes have to be used to redirect the mindset of salespeople toward focusing on the objective of selling and demonstrating their commitment to the goal of the company.

Marketing constraints: This is addressed through constructing an offer the market cannot refuse. (See Step Eight.)

Organization (structure) constraints: These arise when the developing business is held back by practices, functions and authorities that no longer make sense. The Thinking Processes must be used to develop a new structure suitable for the growing business.

The human behavior constraint: This has been addressed throughout the Decalogue in setting the goal, defining measurements, designing and making the system stable, and controlling variation. The major cause of variation and of the existence of the constraints is human behavior. Perhaps the deepest constraint in an organization is people's resistance to addressing the constraint.
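The pairing of constraint types with generic solutions amounts to a lookup table. A sketch (the key names and the fallback text are illustrative wording, not official TOC terminology):

```python
# Constraint type -> generic TOC application, paraphrasing the list above.
GENERIC_SOLUTIONS = {
    "resource/capacity": "Drum-Buffer-Rope with Buffer Management",
    "time": "Critical Chain with drum synchronization and buffer management",
    "policy": "Thinking Processes analysis of the current policy",
    "sales": "Thinking Processes to redirect the sales mindset",
    "marketing": "Construct an offer the market cannot refuse",
    "organization/structure": "Thinking Processes to design a new structure",
    "human behavior": "Addressed throughout: goal, measurements, stability",
}

def generic_solution(constraint_type):
    """Return the generic application for a constraint type, if one exists."""
    return GENERIC_SOLUTIONS.get(constraint_type, "Full Thinking Process analysis")
```

This mirrors the decision in the "process of using ToC" note at the top of the document: use a generic application when one fits, and fall back to a full Thinking Process analysis when none does.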
Strategic choice of constraint

Once we have gone through the steps of the Decalogue, we have become accustomed to the concept of constraints and we know how to address them. We know that a constraint is normal and healthy, as long as we have the tools to deal with it. Given that there must be a constraint, is it better to have it internally or externally? An internal constraint is something under our direct control; an external constraint is not. Given the choice, the simplest and easiest to manage is the capacity constraint. Ideally, we should have one resource or one department which is the strategic constraint. Continuous improvement should be done in controlled steps of increasing capacity while increasing demand.

Continuous learning

Organizations are open systems. This means they are continuously interacting with the surrounding environment, and this exposes them to change. To deal with this constant change, an organization has to be able to consistently generate and update the knowledge needed to manage the system. This is the only way an organization can achieve its goal of improvement. This ability can be fostered by the continuous learning cycle that Deming called PDSA (Plan, Do, Study, Act). We can underpin this cycle with a tool from the Theory of Constraints.

The Future Reality Tree allows us to plan out the actions for continuous improvement, transforming the undesirable effects we experience into desirable ones. It also allows us to check that our actions are effective, and that new, undesirable effects don't crop up. The Future Reality Tree allows us to identify the changes necessary to bring about improvement through the interaction of two fundamental logical elements: Necessity and Sufficiency. The systemic nature of the Future Reality Tree can be seen in the fact that it contains a feedback mechanism (the equivalent of the Act step in Deming's cycle).
This mechanism consists of the combination of the Future Reality Tree with the Negative Branch Reservation. It should prevent the occurrence of unwanted effects in our process of continuous improvement.

The Process of Ongoing Improvement

We have to remember that knowledge increases continuously. We can visually represent how we know, learn and undertake the actions necessary for improvement in the form of the 'knowledge tree'. A continuous learning program not only provides management with all it needs for improving the system's performance, but it guarantees the future of the organization. This, in turn, fuels the energy and confidence needed to undertake new actions. We can be sure of achieving continuous improvement because we know the direction we have to go in: the goal of the system we established at the beginning of our journey.

+( 8 categories of legitimate reservations

Date: Wed, 21 Jun 2000 16:42:56 -0400
From: Tony Rizzo

1. Entity Existence
2. Causality Existence
3. Clarity
4. Additional Cause
5. Cause Insufficiency
6. Predicted Effect
7. Cause and Effect Reversal
8. Tautology

+( accounting

From: Greg Lattner
Subject: RE: Making good decisions on Contribution to and impact on Profit
Date: Wed, 13 Oct 1999 11:29:35 -0600

Jim, here are some comments below from a practicing TOC Management Accountant with TOC success stories in large and small companies.

--
From: Jim Bowles [SMTP:jmb01@globalnet.co.uk]
Reply To: cmsig@lists.apics.org
Sent: Wednesday, October 13, 1999 9:33 AM
To: CM SIG List
Cc: Eli Schragenheim; POOGIforum@aol.com; John Andrew Bowles; ProChain Users
Subject: [cmsig] Making good decisions on Contribution to and impact on Profit

Hi everyone. This is a how-to question related to determining what to charge for a service, product, or project.
Comments: What to charge (the selling price) should be based on a good Market Analysis with the Thinking Process and compared to the competition and the customer's perception of value, not a safe, internal, introspective, introverted view. This means you need to engage the Marketing people in the TP on selling price decisions. It may need to start there, rather than with the accountants. If the Marketing people don't have sufficient intuition on their market (sometimes common), you need to engage the Sales Reps. The Sales Reps may need to learn what some call the "Constraint Sales Skill": to listen to find out where the customer's constraints and UDEs are. Then the team needs to build the unrefusable and valid offer. The Product Cost may be 2% of sales or 70% of sales, but the Selling Price is not caused by the Product Cost or any other internal perception of Value. Most Financial Directors that I meet today know that using margins and product costing as a basis for decision making is flawed. They want to know the impact that a given contribution will make to their profit, or just to decide whether it is a good job to take on. But even here there is a problem, in that people use the term contribution in different ways depending on whether they regard people as a fixed or variable cost. Even this is flawed, in that the original definition of variable meant variable with the number of products produced by a person, as it was when associated with piecework payments. Today people will still claim that labour is a variable cost because they relate it to capacity rather than the number of products produced. And they cannot see that this is flawed. Comments: That's because they may be in the "Cost World" view that costs should be reduced, reduced, reduced, rather than aimed at the value found in good marketing and protecting Throughput.
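The contrast being drawn here between product-cost margins and protecting Throughput can be made concrete with a small sketch. All numbers and function names below are hypothetical, not from the post: two jobs, where the one with the better cost-accounting margin is the worse choice once scarce constraint time is considered.

```python
# Illustrative sketch (hypothetical numbers): judging two jobs by a
# cost-based margin versus throughput per constraint minute (the TOC view).

def margin_pct(price, product_cost):
    """Conventional margin as a fraction of the selling price."""
    return (price - product_cost) / price

def throughput_rate(price, truly_variable_cost, constraint_minutes):
    """Throughput (price minus truly variable cost) per constraint minute."""
    return (price - truly_variable_cost) / constraint_minutes

# Job A looks better on margin, but hogs the constraint.
# Job B has a lower margin, but uses little constraint time.
print(margin_pct(100.0, 60.0))             # Job A margin: 0.40
print(margin_pct(80.0, 56.0))              # Job B margin: 0.30
print(throughput_rate(100.0, 30.0, 20.0))  # Job A: 3.5 per constraint minute
print(throughput_rate(80.0, 40.0, 5.0))    # Job B: 8.0 per constraint minute
```

With the margin view, Job A wins; with constraint time scarce, Job B generates more than twice the throughput per minute of the constraint, which is exactly the kind of reversal the "Cost World" measurements hide.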
Many people have turned to ABC (looking at cost drivers to form the basis of allocating costs) rather than TOC accounting (Direct Costing) to get their answers. However, I have observed that most Management Accountants will tell you that it isn't enough to do this, and they are still looking for the software that will allow them to do it better. Comments: Some have said Activity Based Costing "Costs" a lot of money. More sophisticated software will only add to the complexity. Most ABC experts now suggest finding a balance. And Kaplan, the grandfather of ABC, has introduced the "Balanced Scorecard" to finally bring some attention back to the customer and the top line. Even so, the ultimate solution will never be found unless they check their basic assumptions. Comments: Right. Most people will acknowledge the fact that they have to go on what the market will pay rather than use their cost base. But they get uneasy at using Throughput Accounting because it requires them to work out a procedure (have a proven solution). Comments: They have to have a valid Marketing Plan based on the Thinking Process. Then they can look at the value they add compared to the competitive product/service in each market sector. Most of them are not capable of convincing themselves, let alone their bosses, that this is the way to go. Comments: They may be unwilling, uninspired, unled (not incapable) to read enough books on TOC. In some cases, this may or may not be because of the same reason they went into accounting. It was once a safe, slow-changing place to work a few decades ago, but now computers have fast-forwarded everyone into cyber-world. Keeping up with the new computers is hard enough, much less learning new paradigms. That's one of many reasons why Accounting may be the weakest link in productivity today.
But more importantly, since they are now following a set of standardised procedures that they take for granted, they cannot generate a good procedure even if they know one. Comments: Again, it takes a "decision" to invest in learning or to remain in ignorance. Each accountant has made that decision. It's unavoidable. So the question is: has anyone worked out a good procedure for allowing these types of decision to be made? Comments: Of course, there are many, but often successful companies will not allow accountants to publish articles because they do not want others to know they are using TOC. Or do I have to wait for Debra Smith's book later this year? Comments: Debra's book will no doubt be good and will probably shed light on the ECs present in Management Accounting today. But that is no excuse to wait to read the plethora of information already available.

===============================================================================

From: Greg Lattner Subject: [cmsig] The Battle of Constraints and Costs Date: Tue, 28 Dec 1999 13:09:06 -0700

"The Battle of Constraints and Costs" Which is better? To focus as a company, team, organization on CONSTRAINTS to Value/The Goal, or to focus on COSTS? Which is more proactive? Which will leave you better prepared? More agile? Which takes longer to get around to proving where the opportunities lie? Let's see. In the COST World, you calculate Efficiency Variances, Volume Variances, Purchase Price Variances, Cost Driver Rate and Consumption Variances (in ABC), etc. That takes some number of days into the next month to add up these variances. By the time these variances are all reported to management, it's history. Or archaeology. Decisions are sunk. Nobody can even remember what happened to make the data show a big variance. And this can then encourage a finger-pointing atmosphere as the accountants catch people doing things wrong. Obviously a leading, visionary focus on the up-and-coming CONSTRAINTS is better.
So why do companies focus on COSTS instead of the CONSTRAINTS, the leading issues, the visionary thinking? Why do companies find themselves in the trap of focusing on their COSTS? Let's try to list the reasons: 1. Communication. Functional Silos can stop communicating with each other. But why? 2. Fantasy. Sometimes people who don't like change may fantasize that companies don't need to change very much to survive. Some companies that once took risk and built so much momentum can actually survive in this fantasy for a while, until change is forced on them. During their ride down a death spiral they can continue to cut costs rather than confront the need to grow sales. But why? 3. Finger pointing. Companies can get into finger-pointing syndromes where manufacturing blames marketing for a bad forecast (a poor method anyway) and marketing blames manufacturing for poor service (on-time delivery). Then accounting comes along with their variance analysis and shows how everyone did a bad job and why, on a cost-per-part basis, which makes it look like everyone is lazy and negative. If all they do is point fingers and top management doesn't straighten them out, sales won't grow. Then there's only one thing to do: cut costs, or spin off/bankruptcy, because investors will not tolerate a misuse of their assets. Investors do see the global picture. But why does this happen? 4. Management. The top management may be lacking in vision or leadership. Top management may have gotten to where they are by following the rules and never taking risk. Top management may be afraid. (e.g. I once saw a sculpture in Berlin that was forbidden during communist times. It depicted top Kremlin people, showing them as big leaders who were full of fears: really tiny leaders hiding behind a big posture. It was an interesting work of art. You can see why it was forbidden. After the fall of the Berlin Wall this artist was allowed to exhibit the sculpture.
By the way, from my travels, I know an accountant who worked in a communist country. Communism was the paragon of Cost World dysfunction). ... But why? Now we are getting close to the real public enemy #1. 5. Accounting Measurements. The core opportunity (sorry, I tend to avoid the word "core problem" - sorry Eli G.) seems to be the Accounting Measurements and proper accountability for management decisions. Accounting measurements that suboptimize rather than show the global T, I, and OE can create all of the above. 6. Are there other reasons? Whatever the reasons are for staying in the cost world, it seems that none of them should be desired. Perhaps the most dangerous of all is the Accounting Measurements, because they are institutionalized into a company's culture. To remove the old Accounting measurements and replace them with global T, I, and OE measures requires a significant amount of vision and courage in the first place. You are fighting people who have forgotten how to have vision, courage, faith, etc., and may end up resorting to finger pointing and communication breakdowns at times. Activity Based Costing was an attempt to answer valid criticisms of traditional Labor Based Costing. But it still was just tacking on "VISION" at the end, through "Activity Based Management" and the ABM interview process. Vision was the tail. The Balanced Scorecard may turn out to be just something that was easy to research for academics, but not a real breakthrough. TOC puts VISION right at the front of the line. You can't do TOC without vision. If you haven't got vision, or don't want it, maybe you shouldn't even try TOC. TOC doesn't waste all that time calculating the costs. But TOC does require confronting the constraints, and that, again, takes vision, courage, diplomacy, good communication skills, relationship skills, an organic style of management, valuing people, etc.
I've enjoyed this discussion on Value and agree the best Value is in the Marketing Solution of TOC, because it is always ongoing as a company grows. Are there any other thoughts people have on the reasons companies choose the Cost World rather than Constraints? Greg Lattner, CMA

==========================

From: Greg Lattner To: "CM SIG List" Subject: [cmsig] Re: The Battle of Constraints and Costs Date: Wed, 29 Dec 1999 13:27:28 -0700

Please allow me to add some more thoughts to this post. 1. Let's consider the fact that Accountability is an important element, because people, by nature and left to themselves, can be selfish and could use resources of time and assets for themselves rather than for the employer who pays them. 2. Let's add into the discussion that TOC prioritizes Accountability as 1. T up, 2. I down, 3. OE down, and does not leave out OE from Accountability. Prioritized Accountability is better than just OE, OE, OE as all 3 dominant priorities. TOC also emphasizes that THE GOAL is to Make Money (Sales - Cost) both Now and In the Future. 3. Let's also add in that important jobs of management are Planning and Control. What this seems to add up to is that TOC supports Cost Control and Cost Justification as Necessary Conditions, but not the GOAL. TOC makes extremely clear and even dogmatic distinctions between the GOAL and Necessary Conditions. Often these entities are confused in people's minds. Often people think it doesn't matter which is the GOAL. TOC says it does matter. T can go up to infinity. I and OE can only go down to zero. So the big emphasis in TOC is Constraints to T. Rarely do you bother to talk about Constraints to reducing OE, because there is more POTENTIAL in T if you pursue it proactively. So in TOC the Accountability, Planning, and Control (of costs and process) is placed on managing the Constraints to Value and Profits in a Global, Prioritized, Proactive, Forward-thinking way. TOC does not throw out Costs (OE).
It just puts them in perspective. How do we explain exactly what TOC is and does on a day-to-day basis to the person in the Cost World who is afraid to let go of using costs to steer, and go for the gusto of TOC's accountability for proactively confronting constraints and growth? Maybe we need to reassure them that there are times that Costs do Steer temporarily in TOC. For example, when people fantasize or act selfishly and start spending more than they should. Then TOC says OE is supposed to go down unless T goes up more. Get OE under Control relative to Sales or you won't Make Money. Somehow it seems we have to get beyond talking past each other to understanding each other. We, who have experienced the benefits of TOC and TIOE, need to learn to articulate exactly what it is, where it's the same as the old, where it departs from the old, and why it is better. But how do you articulate the above in a clean, concise way? It seems like some TOC books just point at things in the cost world and say change is necessary. Then those in the cost world defend themselves, and we in TOC don't give them credit for when they are right. And worse, we in TOC don't learn from the experience on how to articulate exactly what TOC brings that is different and what it values from this terrible Cost World. This seems to be our challenge: to communicate exactly what the Cost World is and is not. What then, is the Cost World to a TOC person? 1. A dominant focus on protecting cost reduction rather than protecting Throughput 2. A lack of emphasis on growth and identification of constraints as the key to growth 3. Especially an emphasis on cost per unit, standard cost, or cost divided by drivers, for costs that are really fixed for the time frame under consideration, relative to decisions and performance evaluation. 4. Lack of accurate and valid segregation of Fixed and Variable costs 5. Lack of use of mathematical models that apply to Fixed and Variable costs 6.
Misuse of mathematical models that violate the Axiom of Division for costs per part 7. Others? What is good and valid and part of TOC common sense that could be confused with the "terrible cost world" by those who don't understand TOC? 1. Cost (OE) Control 2. Accountability for decisions that affect financial performance 3. Internal Control of Assets 4. Cost Reduction where T is not threatened 5. Others? These latter items can create defensiveness on the part of accountants who know in their hearts that some parts of accountability are necessary conditions. Can others add to these two lists to better clarify the distinctions and segregations of: 1. What is the Cost World to TOC people, 2. What is the intersection of TOC and what could be confused with the cost world, and 3. What is pure TOC, and not found in the traditional world (generally the opposite of #1)? Thanks if you can add to this discussion to promote learning. Greg Lattner, CMA

---

From: "J Caspari" Subject: [cmsig] Cost based pricing (was Productivity vs Efficiency) Date: Wed, 20 Sep 2000 23:21:33 -0400

Hi Bill - You wrote: << Productivity may give management some indication of how well you're doing internally, identifying areas that might need work. But it strikes me as risky to base external pricing on it. However, it would be interesting to hear from John Caspari on this. What say ye, John? >> OK, Bill, once again into the fray of product cost accounting. One wonders what discussions like this are doing on this constraints management discussion list. But the darn cost-based pricing issue keeps coming back. I agree with Tom Johnson, the Relevance Lost/Relevance Regained guy, who wrote, "Basing prices on your costs is a gamble at best, and a fool's game at worst." I am left to conclude that the existing constraint theory literature, and perhaps the extant theory itself, is inadequate in this area.
For example, in the APICS book on the TOC literature (The World of the Theory of Constraints: A Review of the International Literature, Mabin and Balderstone) that you cited a couple of weeks ago in response to my combined financial variable query, "price" and "pricing" are not sufficiently important to warrant inclusion in the keyword index. The authors do observe, however, that a cluster of more than twenty papers "compares TOC with other accounting methods, particularly Activity-Based Costing. There appears to be a consensus emerging finally ...." Unfortunately, the authors do not tell us in what direction their perceived consensus lies. I will have to guess that their consensus is in the direction of combining TOC with ABC. They do suggest that TOC "purists" may not approve. [I am not sure what a TOC purist is, but I think that I may be one.] So what we have is a consensus by those who are not TOC purists [those who do something other than TOC?] that many decisions, including most pricing decisions, should be based on ABC in order to overcome the "significant limitations" of TOC. The authors do note that "Dr. Goldratt himself [is he a purist?] is reputed to ... not approve of ... the TOC and ABC combination ...." My point is that those who are advocating cost-based pricing for use with TOC have a legitimate, even if erroneous, position. By 'legitimate' I mean in the current predominant paradigm. A couple of years ago, Tony Rizzo suggested to the discussion group that "it would be better to slaughter a farm animal and read its entrails than to use activity based costing" for decision-making. In response to that posting I wrote a short paper, "Product Costing and Pricing Decisions," that discusses the traditional and legitimate use of full-cost-based pricing. This is a paper about pricing, not about constraints or TOC. However, it applies to reality. Whether TOC is used or not, reality is reality.
This paper is available in the "Thoughts and Research" section of the Constraint Accounting Measurements website at http://members.home.net/casparija and I will not repeat it here. In areas other than financial management (for example, production scheduling, distribution, and project management), when the existing paradigm has been found to be lacking, TOC has provided concrete guidance. The same has not happened for the finance function. Rather, the traditional direct costing model has been suggested as a TOC replacement. For example, see the posting by Jim Bowles in response to Hayden Johnson's TOC accounting question earlier this week, where Jim says, << TOC costing is a form of "Direct Costing" ... >> The truly variable cost of a product may provide a floor on a price when an apparent internal physical constraint does not exist. However, the problem with respect to pricing is that truly variable cost does not provide any guidance whatsoever as to how far above the floor an asking price should be. Another place that the general TOC literature addresses the pricing decision is in an exploitation step involving product-mix trade-offs at an apparent internal physical constraint. Here the lost throughput of the product eliminated provides an opportunity cost measure which, when combined with the raw materials cost of the product being added, provides a floor for the asking price. Once again, this floor provides no guidance whatsoever as to how far above the floor the asking price should be. As you might guess, I contend that an appropriate pricing model for use with a profit-oriented TOC implementation should meet the three requirements of Constraints Accounting. In particular, the elements of decoupling throughput from operational expense, and explicit consideration of the role of constraints, are relevant.
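The two price floors described above reduce to a few lines of arithmetic. The sketch below uses made-up numbers and function names of my own; and, as the post stresses, neither floor says anything about how far above it the asking price should go.

```python
# Sketch of the two pricing floors discussed above (hypothetical numbers).

def floor_no_constraint(truly_variable_cost):
    # With no apparent internal physical constraint, the truly variable
    # cost of the product is the only floor on price.
    return truly_variable_cost

def floor_at_constraint(raw_material_cost, displaced_t_per_minute, minutes_used):
    # At an internal constraint, the floor adds the opportunity cost: the
    # throughput forgone on the product displaced from the constraint,
    # i.e. its throughput per constraint minute times the minutes consumed.
    return raw_material_cost + displaced_t_per_minute * minutes_used

print(floor_no_constraint(42.0))             # 42.0
print(floor_at_constraint(42.0, 3.5, 10.0))  # 42.0 + 35.0 = 77.0
```

Either way the floor is a lower bound only; the asking price itself still has to come from the market's perception of value, not from these numbers.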
The only place that I have seen these issues addressed in the TOC literature is in the notes that I am currently preparing for the POOGI Bonus Seminar in October (see the Constraint Accounting Measurements website for details). Perhaps a larger question we need to ask is: "Why are we afraid of addressing the power of constraints?" As observed above, the APICS CM Mabin/Balderstone book appears to separate the practitioners of TOC into two camps: those in the consensus who hold the current predominant paradigm, and another group, perhaps on the fanatic fringe, known as the purists. What is happening here? Are we voting on the nature of reality? Earlier I asked the CMSIG group if anyone would, or could, explain the "combined financial variable." I received three replies. One asked for the context in which the term was used, one asked me to share the answer with the list if I found it, and you referred me to Professor Mabin. I inquired of Professor Mabin, but did not receive a response. I also looked more closely at the APICS CM Mabin/Balderstone book, and I think that I see what comprises the "combined financial variable." I describe this below, but I still have no idea about how to interpret it as a useful or meaningful statistic. Mabin and Balderstone tabulate the reported and quantified operating results of a number of cases that they found into six categories. Two of these categories involve financial measurements. I have some reservations about the authors' use of parametric, rather than non-parametric, statistics, as well as their unsupported decision to eliminate an outlier from the reported results. Additionally, I was unable to determine exactly what cases and amounts were counted in each category. Nevertheless, I think that the following analysis is at least approximately correct. The authors reviewed 82 cases. About 23 of these cases reported a quantified revenue or throughput change associated with a TOC implementation.
The simple average (mean) increase in revenue or throughput of twenty-two cases is 63 percent. One observation of a 600 percent increase was excluded as an "outlier," but no justification beyond calling it an outlier is given for its exclusion. They report this as: "Revenue/Throughput: Mean Increase 63% (outlier exclusive)" The authors did not report profitability as a separate item. Eight cases, including three that were included in the 23 above, had a quantified change in profitability. These eight cases have a 120 percent mean increase in profitability. This statistic is not something that the authors reported, but rather is an amount that I calculated from their data on pages 22 - 30. They might have reported: "Profitability: Mean Increase 120%," but they did not. Rather than report this profitability statistic, they combined the data from both groups into a single statistic, which they then termed the combined financial variable, and which included 30 observations. This statistic, which is weighted about three times in favor of the revenue or throughput percentage increase versus the profit portion, and still excludes the outlier, is then reported as: "Combined Financial Variable: Mean Increase 76%" It seems to me that the average profit increase of 120 percent is a lot more interesting than the 76 percent combination. The only reason that I have been able to think of for not reporting the profit statistic is that the authors, as well as the reviewers within the APICS CM SIG, are afraid that real results, reflecting the positive power of constraints, simply will not be believed. The same reasoning would apply to the exclusion of the 600 percent observation that is dismissed as an outlier. The authors, then, have been extremely conservative in their reporting. This understatement is followed by the comment that, the "vast majority of cases reported only partial applications of TOC. 
We are left to wonder whether improvements would have been even greater had more of the methodology been applied." Well, Bill, I apologize for the length of this post. In response to your question, what I say is, "Come on out to the POOGI Bonus Seminar next month and learn how to fill the missing link in the constraint theory and harness the power of constraints management in a holistic approach to TOC implementation." Best regards, John Caspari, PhD, CMA - Original Message - From: "Bill Dettmer" Sent: Wednesday, September 20, 2000 10:41 AM Subject: [cmsig] Re: Productivity vs Eficiency > - Original Message - > From: Bob George > Sent: Wednesday, September 20, 2000 3:58 AM > Subject: [cmsig] Re: Productivity vs Eficiency > > > Jithesh, > > This does have an impact on the customer. Depending > > on what measure you use will determine the costing of > > the product. If you base your overhead figure on the > > amount of total available pieces you could produce as > > opposed to the actual number of parts you did produce, > > your overhead number will go down. Hence, the cost of > > the product will be reduced, hence, the selling price > > could also go down. This, then, could allow you to > > get into price competitive markets based on these > > calculations, when in all actuality, you may be losing > > money in the proposition. > > [Then it would seem that, somewhere in the business, someone is fooling > themselves. If you have two (or more) different ways of determining your > price, and it's a matter of management choice as to which one you choose, > then one of them (or more) is likely to be inaccurate owing to "wrong" > costs. > > The question is, "Which one of them is RIGHT?" TOC would suggest that none > of them is, if overhead is allocated to units of product manufactured, > potentially manufactured (available hours), or sold. Productivity may give > management some indication of how well you're doing internally, identifying > areas that might need work. 
But it strikes me as risky to base external pricing on it. However, it would be interesting to hear from John Caspari on this. What say ye, John?]

===============================================================================

From: "Jim Bowles" Date: Mon, 18 Sep 2000 12:48:54 +0100

TOC costing is a form of "Direct Costing" (based on the original assumptions stated circa 1945, not necessarily those used in more modern texts), or Marginal Costing as it is sometimes called in the UK. The challenge is to Absorption Costing, Product Costing and adding value at each stage of the process. In reality, value is only added at the end of the chain or pipeline. Hence transfer pricing is also challenged. All these "measures" lead to people protecting their local optima, whereas TOC is concerned with maximising the global optimum. I recommend that you obtain a copy of Debra Smith's book, "The Measurement Nightmare". Another alternative is an earlier work entitled "The Theory of Constraints and its Implications for Management Accounting". TOC can be viewed as the flip side of ABC: instead of Activity Based Costing, which can lead to erroneous decisions, we advocate decisions about performance which might be termed Constraint Based Activity. If you need a good example of how this works, try the P&Q exercise in the book "The Haystack Syndrome" by Eli Goldratt.

+( Argyris - communicating theories

From: "J Caspari" Subject: [cmsig] Re: Undiscussables Date: Thu, 9 May 2002 10:28:09 -0400

Frank - Everything that Larry mentions in his post is discussed in the article that I mentioned earlier (Argyris, "Good Communication ...," Harvard Business Review, July-August, 1994; available in PDF download for USD 7.00 at the HBR web site). The first thing that impressed me about this article was how firmly rooted in the Cost World it is. It is a great deal about cost reduction and efficiency. Argyris clearly had not crossed to the far side of the complexity divide in 1994.
The second thing I noticed was that, while Larry is (I think) citing Argyris to support the position that the evaporating cloud is a poor place to start an analysis of reality, Argyris would probably embrace the cloud as a wonderful tool if he were familiar with the use of the technique. Argyris says, "I know of only one way to get at these inconsistencies, and that is to focus on them." What is a better tool for focusing on inconsistencies than the cloud? It provides a mechanical means of getting to exactly where Argyris suggests one should be. ----- Original Message ----- From: "larry leach" Sent: Thursday, May 09, 2002 9:13 AM > From: Frank Patrick > > >with full disclosure and dissection of the > mentionables, their impact is either touched on or at least minimized. > > >Regarding the seeking of clarity... But what are they? What other > >"unmentionable unmentionables" are out there? > > Frank, the undiscussables and undiscussable undiscussables I have been > working to draw attention to are the consistent set identified by Chris > Argyris in his research, and published in his many books. Unfortunately, all > of my books are now packed away for the trip to Idaho, so you will have to > put up with my poor recollection of them. I strongly suggest reading one or > more of his books, especially Organizational Defense Mechanisms. > > Argyris found that ALL managers (across a wide range of cultures and > companies) show a similar pattern of 'espoused theory', that is, how they say > they work, and 'theory in use', that is, what they really do. For example, > the espoused theory says they are open and honest. The theory in use is to > not say things that might upset the other party. He has them write down a > real case they participated in with a 'left hand column' of what they said, > and a 'right hand column' of what they were thinking. They always differ > substantially when contentious issues are on the table (or hiding under it).
> > The point is that their actual behavior prevents learning. It prevents > identifying the assumptions and beliefs each side holds, and prevents > attacking the real issues. > > The specifics of what they aren't talking about vary from case to case. > But they are always there.

+( axiom of division

From: Greg Lattner To: "CM SIG List" Subject: [cmsig] Using TIOE based ratios violates the Axiom of Division Date: Mon, 25 Oct 1999 09:13:32 -0600

ALL numbers derived through the AXIOM OF DIVISION are only valid if they do not violate the given conditions of that AXIOM. The AXIOM OF DIVISION is one of the most abused mathematical axioms in business. The accountants are the worst at it. Ratios are just one area where it is abused. Do you know that the famous "Cost per Part" or "Product Cost" or "Activity Based Costing" uses the Axiom of Division? And it violates the rules given to use the Axiom of Division. T/OE and T/I also use the Axiom of Division and can lead to conclusions that violate it. WHY??? The Axiom of Division says: (um, let's see if I can remember now) A = B/C if and ONLY ONLY ONLY if: 1. C <> 0, and 2. A*C = B. The second rule means NO FIXED COSTS!!!!!! IN THE DENOMINATOR. ONLY TRULY DIRECTLY VARIABLE!! Otherwise you won't get back to B again, when you use A for decisions. T/OE has fixed OE in the denominator. What does this cause??? If you graph T/OE performance monthly (I have) you will find, for example:

prior T = 100, prior OE = 25, prior T/OE = 4.000, prior profit = 100 - 25 = $75
current T = 150 (50% increase), current OE = 50 (100% increase), current T/OE = 3.000 (25% DECREASE??!!), current profit = 150 - 50 = $100 (33% increase)

This shows the case where the Axiom of Division is violated: when T goes up a slower percent than OE, but they are both going up, and T is greater than OE. If they go up at different SLOPES, they are NOT directly variable to each other. There are 6 combinations of possibilities of T and OE going up or down.
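The numbers in the example above can be replayed in a couple of lines to show the divergence being described (a minimal sketch; the figures are the ones given in the post):

```python
# Replaying the post's numbers: the T/OE ratio falls while profit rises,
# because the fixed OE in the denominator breaks the A*C == B condition.

prior_t, prior_oe = 100.0, 25.0
curr_t, curr_oe = 150.0, 50.0

prior_ratio = prior_t / prior_oe    # 4.0
curr_ratio = curr_t / curr_oe       # 3.0
prior_profit = prior_t - prior_oe   # 75.0
curr_profit = curr_t - curr_oe      # 100.0

assert curr_ratio < prior_ratio     # the ratio "got worse" ...
assert curr_profit > prior_profit   # ... yet the company made more money
```

The subtraction T - OE is a real dollar amount; the ratio T/OE is only a valid decision number when every item in the denominator is truly variable with the numerator.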
This case is the one that suffers from the misuse of the AXIOM OF DIVISION. Your ratio of T/OE went from 4.000 to 3.000. That's bad. But the profits went up. The ratio is a bad model of reality in this case. I still think the ratio may be worth putting on your monthly TIOE graph, but give the T - OE contribution-to-profit number (a real number) the most prominence and warn people of this case. Just like unit costs, you should not divide until you've thought about it a lot. Division is the root of many evils. It's the foundation of many things in the Cost World. Division has divided up our companies into functional silos and "Divisions"! It divides up costs for transfer prices too. More dysfunction there.

+( Axioms

From: Peter Evans Date: Mon, 19 Jun 2000 15:05:45 +1000

Tony Rizzo suggests a basic axiom: For any real system, there exists one independent variable for which the response of the system per unit change in the variable is greater than the response of the system per unit change in all other independent variables. Bill Dettmer has a list at the next level: I would suggest that there are four basic assumptions upon which the effective application of TOC, and the Five Focusing Steps, rest. I believe these four basic assumptions to be: 1. Every system has a goal and a set of necessary conditions that must be satisfied in order to maximize achievement of the goal. 2. Systems are analogous to chains. The performance of a system is limited by very few "links" (constraints) at any given time, maybe only one. 3. The system optimum IS NOT the sum of the local optima. 4. There are valid cause-and-effect relationships behind any effect within an organization. Could we add as 2nd level axioms?
* As people are measured, so they will behave
* Systems that are not on a POOGI will eventually disappear, i.e. if energy is taken out of a system at a rate greater than energy is added, the system will eventually run out of energy

System (Level 1): Systems are defined by the fact that their elements have a common purpose and behave in a common way, precisely because they are interrelated toward that purpose.

Could each message add to/improve the list of axioms, and identify at which level each one sits? Level 1 axioms are basic, Level 2 are derived from Level 1, and so on.

---

From: Peter Evans
Subject: [cmsig] RE: The science of TOC.
Date: Thu, 22 Jun 2000 16:54:21 +1000

Tony,

Are you saying every complex system has a constraint? Is my restatement valid?

For every complex system [a large number (n) of interacting variables with a common purpose] there exists a subset (m) of variables, where (m) << (n), for which the response of the system per unit change in the (m) variables is greater than the response of the system per unit change in all other variables.

Peter Evans

>-----Original Message-----
>From: Tony Rizzo [mailto:TOCguy@PDInstitute.com]
>Sent: Thursday, 22 June 2000 13:44
>
>OK. Here's a start, again.
>
>The following statement is based upon my observations of systems during the last 17 years. If you accept the statement, please let us know. If you do not accept it, then let us know too, and please explain why you don't accept it.
>
>"For every complex system subject to a set of m input variables, there exists a subset of n input variables, with n << m, such that the response of the system to a unit change in any element of the subset is measurable."
>
>The word complex means that m is large, very large.
>
>The term input variables refers to all those things that might be changed in an effort to affect the response of the system.
>
>The term response of the system is the objective function of interest.
>A system may have more than one objective function, i.e., it may have more than one response of interest.

---

Date: Thu, 22 Jun 2000 17:54:03 -0400
From: Tony Rizzo

Ah, Richard! I just love your shorter messages. :-)

You call it M1. I call it the input variable that has the biggest impact, or the largest main effect. It's all the same thing. Lee Perla says that the statement is true even for systems that one might not consider large or complex. He's right, of course. So, here's a re-write, in an attempt to soothe you and Lee:

"For every system subject to a set of m input variables, there exists a subset of n input variables, with n << m, such that the response of the system to a unit change in any element of the subset is measurable."

> Do you mean, one variable affects the system more than any other?
> > "For every complex system, subject to a set of m input variables, there exists a subset of n input variables, with n << m, such that the response of the system to a unit change in any element of the subset is measurable."
>
> Since you can define your variables any way you want (standard scientific practice), why not just define variable M1 to be "the variable that impacts the system the most"? (For example, if you did a factor analysis, M1 is the first factor.) Then no one can disagree with you... and we can get to the real question, which is how to find M1 for a given system nonquantitatively.
>
> Is M1 the "leverage point" of the system? Or the constraint? Does that which limits the system most have to be the variable which impacts the system the most?

---

Date: Fri, 23 Jun 2000 11:28:59 -0400
From: Tony Rizzo

Corey,

We need another stone in the foundation, before we can continue: "For every system subject to a set of n input variables, such that the effect on the system of each input variable in the set is measurable, there exists one input variable in the set for which the effect on the system is greater than the effects of the remaining input variables in the set."

How do you feel about this stone?

---

Date: Sat, 24 Jun 2000 18:16:26 -0400
From: Tony Rizzo

Yes, let's do that. So far, to the best of my memory, we have: "For every organizational system subject to a set of m input variables, there exists a subset of n input variables, with n << m, such that the response of the system to a unit change in any element of the subset is measurable."

---

Subject: [cmsig] RE: Systems
Date: Sun, 26 Aug 2001 18:15:02 -0700

Perceptive observation... (see comments below)

-----Original Message-----
Sent: Saturday, August 25, 2001 10:18 AM
Subject: [cmsig] Systems

We pick systems that are generally sub-systems of a greater system and then apply TOC thinking. The 5 steps at least seem to refer to a system as though it is a closed system - although step 5 is really an exhortation to expand your scope (e.g. the constraint has moved to the market).
[One could always say that there is a larger system, because any system (that most humans can conceive) exists within an external environment that might (or might not) be easily defined as a system.]

Expanding the scope is fine, BUT with any system we care to look at there are countervailing forces out there that are working against us, which we ignore because they are small or insignificant. However, as we change our system through the 5 steps, these outside factors (outside our consideration) can start to become more and more important - in fact they can be devastating. How, in the scheme of things, does an organisation recognise an exterior factor that is going to be a constraint on the organisation? In this case the photo industry did not even originate the exterior factor. However, our actions can be the start of strengthening the impact of something else that at first we are blissfully unaware of.

The subsistence farmer who measures his wealth in the number of cows he has will make the logical connection that more cows = greater wealth and comfort etc. by selling their milk. So he will buy more of them. But these animals need grass! At first he will not notice any problem - there is plenty of grass - but soon the cows are eating more than his land can produce. Disaster - milk production goes down (which he might try to compensate for by buying more cows).

Is there a warning in the 5 focusing steps (or should there be one) that cautions people to make sure they are considering the correct (sub)system?

[Eli Schragenheim has suggested the best way that I know to make this determination, and it's not Step 5. It's the SECOND iteration of Steps 1 and 2, performed BEFORE you execute Step 4 (Elevation). He suggests that you mentally or hypothetically apply Steps 1 and 2 based on the assumption that whatever you're contemplating doing in Step 4 WILL break the constraint.
In other words, it's kind of like predicting where the constraint will go and what kind (and magnitude) of efforts will be required to deal with it (exploit it) there afterward. This may influence your decision to elevate or not in the first place. Or it might affect your choice of elevation options. In any event, I think I'd treat the contemplated elevation as an injection and construct a negative branch to determine where the next constraint might be and what kind of branch-trimming injection (exploitation/subordination) would be needed to handle it there.]

+( Bayesian Belief Networks

From: "Binayak Banerjee"
To: "CM SIG List"
Date: Thu, 01 Jun 2000 09:09:49 GMT

>From: "Maggard, Mark A."
>Date: Tue, 30 May 2000 12:48:52 -0500
>Are/How are TOC and Bayesian Belief Networks related?
>Attached is a link discussing Bayesian Belief Networks.....
>http://www.hugin.dk/hugintro/index.html

Bayesian belief networks are related to TOC sufficiency trees.
1. Both Bayesian Belief Network (BBN) and TOC Sufficiency Tree (TOC) nodes are boolean statements about the state of the system.
2. The edges in both BBN and TOC represent a causal link.
3. The edges in a BBN are labelled with the probability that the source situation brings about the target situation. In contrast, the probabilities of the source bringing about the target in TOC are always taken as 1.
4. TOC permits the AND relationship between source nodes (i.e. if A and B and C then D). BBN does not.
5. BBNs are DAGs (Directed Acyclic Graphs). TOC graphs can have loops.
6. TOC has the Categories of Legitimate Reservations that allow the graph to be validated.

I'm sure that there are other similarities and differences, but I can't think of them off-hand. To my mind, the TOC tree represents a qualitative model of the system. The BBN represents a quantitative view of the system. TOC models are easier to create, and easier for the layperson to understand.
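The contrast in points 3 and 4 of Binayak's list can be sketched in a toy model (my own illustration; the node names and the 0.8 probability are invented, not from the post):

```python
# Sketch: a TOC sufficiency tree treats causality as certain (probability 1)
# and allows AND-ed causes; a BBN attaches a probability to each edge.

# TOC-style structure: "if A and B then D", with implied probability 1.
toc_tree = {"D": {"and": ["A", "B"]}}

def toc_holds(node, facts, tree):
    """An effect holds iff it is a given fact or ALL of its AND-ed causes hold."""
    if node in facts:
        return True
    causes = tree.get(node, {}).get("and", [])
    return bool(causes) and all(toc_holds(c, facts, tree) for c in causes)

assert toc_holds("D", {"A", "B"}, toc_tree)       # both causes present -> D
assert not toc_holds("D", {"A"}, toc_tree)        # AND means A alone is not enough

# BBN-style: the edge carries P(effect | causes) < 1, so D is only probable.
p_d_given_a_and_b = 0.8   # hypothetical number for illustration
```

The deterministic `toc_holds` check is what makes the TOC tree easy for a layperson to audit; the BBN's conditional probabilities are what make it quantitative.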
The BBN model has utility when the system is completely understood and must be quantitatively modeled.

+( bottleneck - moving

From: "Mark Woeppel"
Subject: [cmsig] shifting bottlenecks
Date: Mon, 22 Oct 2001 12:59:14 -0700

There is a concern that ToC is difficult to operate with when product mix changes and the bottleneck (constraint?) moves. Have a look at The Haystack Syndrome for a treatment of the wandering bottleneck effect. Bottom line, there is a difference between a capacity-constrained resource and the constraint. The bottleneck might be the constraint (rarely), but not all constraints are bottlenecks. The fact that the bottleneck moves around is a symptom of a problem in material release and batching, not shifting product mix. Eli G. often states that if your bottleneck is moving, you have a policy constraint. So true!! We will have resources that have temporary overloads, but they are not the constraint.

-----Original Message-----
From: Rick Denison [mailto:rdmax@uswest.net]
Sent: Monday, October 22, 2001 7:12 PM

The Problem with TOC: TOC, on the other hand, focuses on eliminating constraints that impede the flow of work through the shop (throughput). The problem here is that constraints or bottlenecks are moving targets in a job shop environment. Bottlenecks are created and disappear hourly with changes in customer priorities and schedules. An operator fails to show up for work - a bottleneck is created. A customer changes a specification - an existing bottleneck may disappear, and a new one may or may not be created. The progress of an order through a shop often has any number of starts and stops that have nothing to do with constraints. The drum-buffer-rope technique, central to the theory of constraints, is not particularly useful in the dynamic world of the job shop vs. a more stable volume production environment where a constraint is more likely to stay put.

Sounds like the author doesn't have a good understanding of TOC.
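The load arithmetic behind a "wandering bottleneck" is easy to see in a sketch (all resource names, times, and quantities below are invented for illustration): with fixed capacities, a change in product mix changes which resource is most heavily loaded, even though nothing about the resources themselves changed.

```python
# Sketch: which resource looks like the bottleneck depends on the product mix.
minutes_per_unit = {            # resource -> minutes needed per unit of product
    "lathe": {"P": 10, "Q": 2},
    "mill":  {"P": 3,  "Q": 9},
}
capacity = {"lathe": 2400, "mill": 2400}   # minutes available per week

def busiest_resource(mix):
    """Return the resource with the highest load ratio for a weekly mix."""
    ratios = {}
    for res, mins in minutes_per_unit.items():
        needed = sum(mins[p] * qty for p, qty in mix.items())
        ratios[res] = needed / capacity[res]
    return max(ratios, key=ratios.get)

assert busiest_resource({"P": 200, "Q": 50}) == "lathe"   # P-heavy mix
assert busiest_resource({"P": 50, "Q": 200}) == "mill"    # Q-heavy mix
```

This only shows the apparent bottleneck; Mark's point stands that a resource with a temporary overload is not necessarily the constraint of the system.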
+( Boyd Cycle - Maneuver Warfare vs. Agile?

From: "Tony Rizzo"
Date: Fri, 11 Apr 2003 10:59:55 -0400
Subject: [cmsig] RE: Maneuver Warfare vs. Agile?

From: "Richard Zultner"
To: "CM SIG List"
Sent: Friday, April 11, 2003 10:50 AM
Subject: [cmsig] RE: Maneuver Warfare vs. Agile?

> "Tony Rizzo" tocguy@pdinstitute.com wrote:
> I'm not arguing for a protectionist model. I favor a maneuver warfare model, whereby the manufacturers that are currently threatened by China's cheap labor operate in a different market segment, one where the Chinese suppliers are incapable of existing.
>
> [REZ] Tony, can you provide a brief explanation of what the maneuver warfare model is, and just how it differs from the currently trendy "agile" approaches? [Any good references on maneuver warfare?]

O = Observe
O = Orient
D = Decide
A = Act

Observe: I gather information about my customers, current and potential. I also gather information about the capabilities of my competitors, and about related technologies.

Orient: I synthesize the information about my customers, competitors, and related technologies. TOC provides a useful tool for this. It's called the Current Reality Tree.

Decide: I define a course of action for myself, one which renders my competitors' strengths useless and which maximizes my own, e.g., I figure out how I can maximize the value that I bring to current and potential customers with the things that I can do right now. The course of action must be actionable within a relatively short interval, or it's the wrong one for now. I also consider what I can develop for future use. TOC provides a second useful tool for this. It's called the Future Reality Tree. Then I plan my effort, using Lean Project Planning.

Act: I execute my project, all the while collecting information for my next pass through the O.O.D.A. loop. And I go through the O.O.D.A. loop forevermore, as fast as I can.
---

Date: Mon, 3 Jan 2005 14:48:41 -0500
From: "Potter, Brian (James B.)"

Binayak,

Maneuver Warfare is somewhat nonprescriptive. View the following crude explanation in the light of the famous Zen koan, "What is the sound of one hand clapping?" The O-O-D-A Cycle illustrates the approach without explaining how to do it. One must practice.

Observe: Collect information about the environment.
1. If some potentially lethal threat exists, take some immediate action (skip to "Act") which may eliminate the threat or at least create a situation where survival is possible. Do not waste time or resources by orienting or deciding (in Nike terms, just do it). This is a strategically undesirable reactive mode, but any other course leads to immediate failure.
2. If an urgent situation exists, decide how best to deal with it using the incomplete and partly integrated information currently available (skip directly to "Decide").
3. While continuing to "Observe," "Orient" with the environment.

Orient: While continuing to observe, organize information about the environment (in a business context: the markets where one's organization competes [for sales, for supplies, for employees, for investors, ...], one's competitors, product technologies, production technologies, logistics technologies, adjacent markets [of all kinds], one's potential competitors, potential new customers, potential new products, one's own capabilities, one's potential capabilities, competitors' current capabilities, competitors' potential capabilities, potential market needs one might meet with current or potential capabilities, ...).
1. Whenever environmental awareness suggests an opportunity ...
a. While continuing to "Observe" and "Orient," move rapidly to exploit any transient opportunity (a customer with an urgent need, a new market with no powerful competition established, new technology which no competitor has yet leveraged, ...).
b.
While continuing to "Observe" and "Orient," develop (advance to "Decide") potential ways to exploit the potential opportunity.
2. When information provides a poor or confusing guide ...
a. Focus "Observe" activity in areas which may better illuminate the situation.
b. Focus on environmental areas where changes might force customers, potential customers, or competitors to reveal useful information (advance to "Decide" or "Act").

Decide: In light of environmental awareness, develop a plan for a future action.
1. Select an "Act" which will exploit an actual opportunity.
2. Select an "Act" likely to exploit a potential opportunity (or transform a potential opportunity into a current opportunity).
3. Select an "Act" which will change the environment so that competitors or customers must change their behaviors in response to the "Act."
4. Select an "Act" which will obstruct, thwart, or complicate competitors' normal business methods without harming one's own.
5. Select an "Act" which will benefit suppliers, customers, employees, and other such partners in the markets where one competes without harming one's own interests.
6. Generally, select a way to modify the competitive environment first to one's own advantage, second to the advantage of one's allies, third to no harm to oneself or one's allies.

Act: Execute necessary, urgent, or planned actions as determined via the "Observe," "Orient," or "Decide" phase activity. Feed the fact of the "Act" forward into the "Observe" and "Orient" phase activity so that those processes will benefit from awareness of both the change and the reactions which the change drives, both in the market and in one's competitors.

Note that this is an explicitly parallel process. Even though a "main loop" continually flows through an "Observe->Orient->Decide->Act->Observe" cycle, numerous feedback and feed-forward signals either short circuit the cycle or redirect it.
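One pass through the loop, including the lethal-threat short circuit from "Observe" straight to "Act," can be sketched as a function (my structure, not Boyd's; the callback names and return values are invented for illustration):

```python
# Sketch of a single O-O-D-A pass with the Observe -> Act short circuit.
def ooda_step(observation, orient, decide, act):
    """Run one O-O-D-A pass; a lethal threat skips Orient and Decide."""
    if observation.get("lethal_threat"):
        return act("do something NOW")   # reflex: any action beats pausing
    picture = orient(observation)        # synthesize the environment
    plan = decide(picture)               # choose an exploitable action
    return act(plan)                     # execute, then observe again

# Hypothetical pass-through callbacks, just to exercise the control flow:
def identity(x):
    return x

assert ooda_step({"lethal_threat": True}, identity, identity, identity) == "do something NOW"
assert ooda_step({"opportunity": "new market"}, identity, identity, identity) == {"opportunity": "new market"}
```

A real organization would run many such loops concurrently at different frequencies, each one's output feeding the others' "Orient" phase, as described below.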
An O-O-D-A organization has multiple O-O-D-A cycles operating at very different frequencies (each feeding the "Orient" phase of all the others). One high-frequency cycle in each operating area handles daily operations and process improvements. Intermediate-frequency cycles address logistics planning, sales planning, purchasing planning, financing, investing, and the like. Lower-frequency cycles focus on marketing, capital planning, product development, research, competitive intelligence, and the like. How this might apply to any particular organization rests in the hands of the organization. Typically, only very small (e.g., "start up") organizations can pull it off successfully.

Boyd originally developed the concept for one-against-one combat between pilots of two adversarial fighter aircraft. Only later did he extend the concepts (via Sun Tzu's thoughts on strategy and tactics) to organizations. As far as I know, some U.S. Marine Corps units may be the only large organizations successfully deploying O-O-D-A methods. Senior U.S. military commanders apparently have not yet accepted Maneuver Warfare as a standard principle (indeed, command decisions in both Gulf wars undermined the successes of units acting on Maneuver Warfare principles). Even though people like Rumsfeld and Cheney understand Maneuver Warfare quite well (and relied upon it in both Gulf wars), the total (civilian and military) command structure neither understands nor trusts it.

The key thought here follows the military "Hit 'em where they ain't" philosophy. Rather than engaging competitors directly (offer similar products to similar markets under similar terms), attack tangentially (offer slightly [but significantly] different products to adjacent [or overlapping] markets with different terms) so that your sales will face limited direct competition while siphoning some sales off from competitors.
Komatsu's successful envelopment attack on Caterpillar offers one dramatic example of such a plan at work. Some Maneuver Warfare experts believe that Toyota and Honda have done this to the European and North American automobile companies. They may be right, but neither Honda nor Toyota really seems fluid enough to be an O-O-D-A organization. Their competitors just make them look that way. :-)

Brian

-----Original Message-----
From: Binayak Banerjee [mailto:binayak.banerjee@gmail.com]
Sent: Friday, December 31, 2004 1:53 AM
To: Constraints Management SIG
Subject: Re: [cmsig] Where is the Constraint Role?

On Wed, 22 Dec 2004 14:36:54 -0500, Potter, Brian (James B.) wrote: [SNIP...]
> As far as I can tell, a minimum of four responses to variation exist:
> Maneuver Warfare is the only approach which knowingly treats variation (an all but certain reality) as a potential ally. When possible, this approach will surf on the waves of variation rather than merely absorbing variation on wide beaches (buffers) or attempting to wall out the ocean (variation reduction).

Based on similar statements in this group, I actually spent $$ on the Boyd book in the expectation of finding the holy grail of managing (variation). Don't waste your money, as the book is a hagiographic paean to Boyd and his current disciples Cheney and Rummy. Current events in Iraq should at least cause some people to question where their heads are at. I suggest that, at the very least, you read the following article: http://www.theatlantic.com/doc/200401/fallows which provides some indication of where and how Maneuver Warfare by the aforementioned gentlemen went askew.

Also, Brian, I request clarity on how the OODA cycle allows one to "...surf on the waves of variation rather than merely absorbing variation on wide beaches (buffers) or attempting to wall out the ocean (variation reduction)."
Thanks,
-- Binayak

---

Date: Tue, 04 Jan 2005 00:52:38 -0500
From: Brian Potter
Subject: [cmsig] Where is the Constraint Role?

Steve,

The short-circuit link from "Observe" to "Act" handles reflex actions. See an enemy fighter on your six---DO SOMETHING! NOW! Exactly what you do is probably much less important than doing the "right thing" would be. With a little luck, doing something will buy enough time to do something better considered with the second maneuver. Pausing---even for a millisecond---to orient and decide what to do gets rewarded with incoming bullets or missiles. In nonlethal organizational environments very few situations (perhaps none at all) carry this particular combination of urgency and importance.

Regarding connections among O-O-D-A, ToC, 6-S, TQM, Profound Knowledge, and that ilk ...
- The ToC TP tools fit nicely in the "Orient" and "Decide" phases, as does the Profound Knowledge "Plan" phase.
- The Profound Knowledge "Check" and "Act" phases overlap the O-O phases.
- The Profound Knowledge "Do" phase corresponds with the O-O-D-A "Act" phase.
- 6-S and TQM tend toward a tactical view, leaving the cyclic nature of improvement to some "master plan" specifying where one should best deploy improvement resources (or to the quite wasteful "go improve everything" approach), but generally the DMAI part of the DMAIC sequence corresponds with one cycle of O-O-D-A (but no short circuits allowed).
- ToC generic solutions, specific solutions developed via TP, and specific TP-adjusted generic solution applications all fill in the "Act" and "Observe" bits of the O-O-D-A cycle.
- The ToC POOGI cycle also corresponds with the O-O-D-A cycle, but the former is less general (though quite useful in many organizational contexts).

Note that ToC, Deming's Profound Knowledge System, and methods closely connected with either share a cyclic world view with O-O-D-A.
Most other approaches encourage thinking which is both linear and tactical---trusting that multiple local improvements will eventually yield useful strategic results. However, O-O-D-A stands nearly alone in officially "allowing" both "short circuit" feed-forward activities and explicit feedback activities (without repeating the full cycle). In practice, I suspect that experienced ToC or Profound Knowledge implementers actually "skip steps" and "repeat" prior steps when their system tells them that running through the full formal cycle makes no sense. Similarly, good 6-S and TQM people probably actually operate in a cyclic manner rather than the linear manner laid out in the training and "official" publications. Unfortunately, 6-S, TQM, and similar methods lack useful strategic direction without the "Orient" phase, Profound Knowledge, constraint awareness, or some similar guiding principle. "Satisfy the customer" does indeed provide a good direction, but knowing just what that means in a strategic sense (and thus what sane tactical actions support the strategy) for product development, purchasing planning, or inbound logistics might prove a bit on the thorny side in practice. O-O-D-A simply offers a more general approach, and that generality supports exploiting variation (even deliberately causing it for one's own benefit) rather than suppressing it or absorbing it. True, Honda and Toyota need not be all that fast to win against most current competitors. Yet Subaru, the Koreans, many tier-one suppliers, and surviving fragments of the sleeping giants all offer environments where a genuine O-O-D-A approach may take root in a business organization. The Toyota Way is so deeply (and so by design) embedded in Toyota corporate culture that they might actually become a helpless beached whale were they faced with an actual O-O-D-A organization as a competitor. Honda might fare better against such a (now only hypothetical) adversary.
+( Braess paradox

From: "Opps, Harvey"
Date: Thu, 5 Jul 2001 10:09:39 -0400

Dietrich Braess is a professor in Germany, a very bright guy and rather humble in OUR experience. I think it was about 25 years ago that he discovered a paradox regarding electrical/electronic/communications networks: THE MORE CAPACITY YOU ADD TO THE SYSTEM, THE SLOWER THE AVERAGE SYSTEM SPEED. As communication networks were starting to really spread out, electronic systems people started to discover the same thing. Within a few years, people in the highway/transportation infrastructure business started discovering the same thing. A professor at the Rockefeller Institute in Manhattan (about 6 years ago) was building a model of a cross-Manhattan (upper west side) highway. The model indicated that the average speed of traffic in Manhattan with the highway would be slower than without it. I am sure that most of you have noticed similar systems: there is still traffic, and it seems to be worse than ever, even after they built a new highway. The average speed of traffic in Manhattan when I went to school there in the mid 1960's was the same as by horse-drawn streetcars in the 1890's. Anyone notice how much faster the WWW (WORLD WIDE WAIT) is these days, with all the additional capacity added to the system - thousands of miles of fiber optic cable, Pentium 4 PCs, high-speed Linux/Apache servers, etc.?

The basic paradox is this: for unmoderated systems, as the capacity expands, the demand for the capacity expands at a faster rate. An unmoderated system is one where the customer is allowed to order WHENEVER s/he wants. For example, "rush hour": everybody shows up at the bridge or tunnel (the constraint) at the same time. There is no rush hour at two in the morning.
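The traffic version of the paradox can be checked numerically with the textbook Braess network (my sketch; these are the standard illustrative numbers, not figures from Harvey's post): 4000 drivers choose between two symmetric routes, each made of one load-dependent link costing x/100 minutes and one fixed 45-minute link. Adding a free shortcut between the routes makes the selfish equilibrium worse for everyone.

```python
# Standard Braess example: capacity is added, yet average speed drops.
N = 4000                       # drivers

# Before the shortcut: traffic splits evenly over the two symmetric routes,
# so each driver pays (N/2)/100 on the variable link plus 45 on the fixed one.
before = (N / 2) / 100 + 45    # 20 + 45 = 65 minutes per driver

# After a zero-cost shortcut joins the two variable links, every selfish
# driver takes variable -> shortcut -> variable, loading both links to N.
after = N / 100 + 0 + N / 100  # 40 + 0 + 40 = 80 minutes per driver

assert after > before          # more capacity, slower system
```

Removing the shortcut (i.e., moderating the system) restores the 65-minute equilibrium, which is exactly the ramp-metering point made below.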
In Tony's earlier discussion of the algorithms used by the phone companies when there was too much demand for a dial tone, the approach WAS TO MODERATE THE SYSTEM BY KICKING OFF THE OLDEST ORDERS TO GIVE THE NEWEST ORDERS THE FASTEST AVERAGE RESPONSE. In the manufacturing world this would be like having a three-day lead time: any customer order older than three days where production has not started is cancelled and re-entered by the system as a new order with a new order date and delivery date.

In the highway/transportation world this happens when there is a traffic light system at the entrance ramp onto the highway. The cars wait; they are moderated by the system. When the system senses that there is an opening or that the traffic is getting lighter, it allows a car on. The result is that the average speed (lead time) of the system (traffic speed) does not decrease. Some of the newer highways that compete with older highways for traffic often start out as toll roads, as a way of moderating traffic.

I have collected many papers on this subject over the years; they are in paper copy only. YES, THERE IS A DIRECT CONNECTION BETWEEN BRAESS'S PARADOX AND THE EFFECTS WE SEE IN OPERATIONS IN TOC. The MPCC approach is a way to moderate the rate of incoming demand into the system. In its own way, Braess's Paradox has had an impact on systems comparable to Moore's Law; it's just that almost no one knew about it till now. There is a possibility that this will change with optical cable, if the capacity in that part of the system increases in a short time by 100-fold or more (as some of the technical writers are suggesting). Glass photons vs. copper electrons are a different ball game. However, two issues will validate Braess again.
Someone will invent a Napster for videos and other movies, and the demand for capacity will increase exponentially (or some other cause will produce a major increase in demand or load); and everyone will want graphical HTML email, so the load will increase geometrically. The constraint in the WWW will move to a small and insignificant part of the system with no technical workaround for years (MURPHY LIVES).

+( BSC balanced scorecard

Date: Sat, 01 May 1999 16:28:05 -0500
From: Scott Surbrook
To: "CM SIG List"
Subject: [cmsig] Re: cmsig digest: April 30, 1999

Tim,

My response below isn't meant to be critical, but I have to admit that your comments about the BSC are in conflict with what I've learned through an in-depth study of it during this last semester...

CM SIG List digest wrote:
> Subject: Re: Why is Balanced Scorecard incompatible with ToC?
> From: "Flewelling, Tim (JUS)" Tim.Flewelling@gov.nb.ca
>
> BSC is strategic. ToC is tactical.

No, that's not quite correct. Practically, though, that is a pretty good summation of the BSC/TOC relationship...

> BSC does not provide the predicted-effect logic of ToC or SD to explain how benefits in one change can result in positive or negative effects in other measures.

It does not provide it at the depth of TOC, but it DOES provide it...

> Because BSC does not use cause-effect-cause logic or predicted effects, BSC does not provide a methodology of organizing change to achieve the maximum benefits from minimum change as provided through the focusing methodology of ToC or the simulations of SD.

I haven't had the time to follow this discussion for very long (due to finals at school), but I'm wondering if you are talking about the "Balanced Scorecard" by Kaplan and Norton? If you are, I'm wondering how you missed the authors' emphasis on the REQUIREMENT that the BSC strategies have a clearly defined cause-and-effect relationship. From a strategic perspective, this is one of their more significant contributions.
While their requirements of cause and effect are not as structured or thorough as those of TOC, the relationship is still mandatory. > BSC does not provide a communication process to counter all the been-there-done-that yeah-buts. BSC relies on a leadership principle that suggests if enough leaders propose to believe in something, then it will come about. Again, the BSC book CLEARLY spells out that the BSC itself is a formal communication process and device that can be used to communicate the organization's strategies throughout all functional areas. In fact, Kaplan and Norton explicitly state that, as a communication medium, it is an excellent tool to use to drive organizational change. Also, your assertion about the BSC relying solely on a leadership principle (the leader's perspective of what will work or not) is not correct. The BSC also has an explicit feedback loop built in that allows for feedback to the executives from all levels of employment, from line employees on up. According to Kaplan and Norton, this feedback loop is mandatory. > BSC does state that there are more variables in an equation leading to continuing success than just profit. These other variables include human resources, customers and innovation. This is essentially correct, but falls far short of the full impact and coverage that is inherent in the BSC. The four perspectives of the BSC (Financial, Customer, Internal Business Process, and Learning and Growth) do indicate there is much more to a successful enterprise than just short-term profit. > I like BSC. People who are open to BSC and measurement can be more open to suggestions that adding additional processes to BSC may result in a greater likelihood of success. I agree with you wholeheartedly here! Personally, I believe there might be a significant synergy between the BSC and TOC.
The BSC addresses structure and strategy at a level that the TOC currently does not, while the TOC addresses a structured method of tactical problem solving that the BSC acknowledges is needed but makes no attempt at. One other very important point that the BSC makes is that there needs to be an explicit cause-and-effect link between the performance measures chosen and the strategies. The chosen performance measures must be supportive of the strategies, unlike typical financial measures, which often support decisions that are detrimental to the strategies and health of the organization. In addition, touching on the HR component of the BSC, these performance measures must also be connected to a reward system for their achievement, to align all areas/aspects of the organization with the organization's strategy. The BSC offers a management and strategic structure that the TOC does not. Rather than viewing the two as opposing philosophies, I prefer to think of them as synergistic. They both offer strengths that create a whole that is stronger than the component parts...

_________________________________________________________

From: "Potter, Brian (AAI)"
To: "CM SIG List"
Subject: [cmsig] Re: Why is Balanced Scorecard incompatible with ToC?
Date: Mon, 24 May 1999 15:23:30 -0400

Danilo, Chan, Scott, et al, Perhaps (from my memory of the GSP session in question), the issue is the number of CONTROL POINTS. If a BSC with many measures has several (maybe quite a few) measures (leading, real-time, lagging, varying precision, varying sensitivity, etc.) for each control point, possibly the multiple measures may offer improved insight into the appropriate control response. Holding that thought ... - If each control point for the BSC model would also be a control point (an internal CCR or a market segment) under a ToC model, and each ToC control point is also a BSC control point ... 1.
If for every system state, each model "approximately agrees" with the control response suggested by the other, the ToC model and the BSC model support each other (all who like both BSC and ToC, please take a bow). ... and ... 2. If for any system state the BSC and ToC models suggest contradictory control responses, the two models conflict with each other (make sure we keep the baby when we dump the bath water). - If the BSC control points differ from the ToC control points ... 3. The ToC model has a control point absent from the BSC model: ToC conflicts with the BSC model (by suggesting a control response at the control point absent from the BSC model when the BSC model recommends no action on the control point it ignores). The BSC model fails to control something which probably needs attention. ... or ... 4. The BSC model has a control point absent from the ToC model: the BSC has redundant control points, and Ockham's Razor applies (at minimum) to the redundant "control point(s)" and their corresponding measures. In the single case of the four (#1 above) where BSC and ToC harmonize, one may dispense with BSC altogether or use the BSC model to communicate the ToC measurement and control-response system. For cases #2 and #3, the BSC model probably causes or allows undesirable decisions. In case #4, the BSC model probably causes no active harm, BUT it could wastefully divert attention from issues that matter to inconsequential redundant "control points." The effort spent "managing" the redundant "control points" represents a lost opportunity to employ that effort constructively. The BSC model may lack tools for identifying control points and determining appropriate responses to measured performance. The ToC "thinking process" model offers powerful generic tools designed for exactly this function.
Given such circumstances, a ToC model may often deliver superior performance in the critical areas of control point identification, control point performance measurement, and control response determination. If the above reasoning stands under review (have at it, gentle readers), BSC offers (at best) a mechanism one might use to communicate ToC performance measures and appropriate control responses to those performance measures. > -Original Message- > From: cstevens@ctlaerospace.com > Sent: Monday, May 24, 1999 9:04 AM > To: CM SIG List > Subject: [cmsig] Re: Why is Balanced Scorecard incompatible with ToC? > I think the Goldratt version of simple versus complex systems could be summarized as: > The traditional belief is that the 10-25 measurement system, with a set of cause and effect diagrams that make it look like a spaghetti factory is ACTUALLY THE MORE SIMPLE SYSTEM according to physics and the other sciences. In that system, you could change the behavior of the entire system by implementing change on one component of the system. It's tougher to understand this type of system, but simpler to manage once you've identified the relationships. > The traditional belief that the 4 disconnected entities is simpler doesn't hold in physics. In order to effect a change in the system, you would have to initiate change in each entity independently. > I suspect if we did a tree on the issues with BSC, we would probably find causes in the neighborhood of 1) failure to properly map out the relationships between variables (measurements and departmental performance) and the expected performance of the system, and 2) The hope that a small number of "key" indicators can eliminate the need to monitor the entire system. I would describe 1) as the business equivalent of the grand unification theory... --- > On Fri, 21 May 1999, Ward, Scott wrote: > > I'd like to throw another perspective on BSC. > > > > One issue I have with the program, ... 
> > Go to the 3rd layer of management and it gets worse.
> >
> > No wonder very little gets accomplished on our (non-TOC?) strategies--no matter how well we analyze the cause-effect relationships.
> >
> > How does Kaplan, et al., overcome this phenomenon?
>
> This problem is mentioned in the book. They suggest...: "When the scorecard is viewed as the manifestation of one strategy, the number of measures on the Balanced Scorecard becomes irrelevant, for the multiple measures on the scorecard are linked together in a cause and effect network that describes the business unit's strategy." p. 162.
>
> Why is this interesting? In one of the satellite sessions, Eli talked about two models. One is 4 disconnected "entities." The other model is about 10 entities, maybe more, where all of them are connected with cause-and-effect arrows. [... to a SINGLE CONTROL POINT ... (Potter)] Then he asked which model is more complex. The answer: the one with more entities. [... from a nonscientific perspective, because it has more entities to control. From a scientific perspective the system with fewer entities is more complex because it has a control point for each disjoint (independent) entity, and the system with many entities responding to a single control point is simpler ... (Potter) ...] The reason: the number of entry points needed to affect the second system--the degrees of freedom--is lower in the second case. I see a clear connection between the two arguments. Do you?
>
> Danilo Sirias, Ph.D.
> Christopher Newport University

---

From: Norm Henry
Subject: [cmsig] RE: Balanced Scorecard
Date: Mon, 4 Dec 2000 15:18:20 -0800

Debra Smith addresses this, although briefly, in her book The Measurement Nightmare. She says, "the balanced scorecard, as currently being implemented, also ignores the implications of a scarce resource."
"The reason the balanced scorecard will not solve our dilemmas is because organizations do not have a model of the environment that can predict the effect on the entire organization. They do not understand or agree as to how to best manage and prioritize the transfer of work and the allocation of resources between the functional areas." Balanced scorecards are typically "balanced" but not considered in relation to the organizational constraint or scarce resource. Therefore, while the idea sounds good, in practice one ends up with local measurements which, first, conflict with each other and, second, are not identified by their effect on the constraint. So one gets a good score in one area, but that may not be the score which is needed for that area to have the best effect on the overall organization. If an area needs to subordinate to another area for the good of the organization, the "balanced" scorecard may prevent this from happening. The best "balanced" scorecard considers what happens to 1) Throughput, 2) Investment, and then 3) Operating Expense.

---

From: "Potter, Brian (James B.)"
Subject: [cmsig] Balanced Scorecard
Date: Tue, 5 Dec 2000 17:05:11 -0500

My thinking on BSC has (unlike my title and e-mail address) not changed significantly since the posting (below) Hans dredged up from May 1999. In brief:
- Under ideal circumstances, BSC may be a useful tool which communicates a list of constraints (leverage points, controls), reports their current state, and offers some clues about desired responses to system-state changes.
- Any BSC which differs from the list of an organization's constraints may misdirect the organization by either (or both) ...
  - leaving one or more controls unmanaged
  - diverting attention away from the system constraints to "manage" things which have no significant influence on the total system
- BSC lacks the tools needed to identify constraints and decide how the organization should manage those constraints.
- Why add the "extra layer" BSC represents when other methods provide all benefits claimed for BSC without the extra layer? +( buffer equals rope ? From: Kelvyn Youngman Date: Tue, 24 Jan 2006 13:48:37 +1300 Subject: [tocleaders] Buffer Equals Rope? I received a reply which was off-list but was pertinent to the previous discussion on ropes and buffers. Let me paste an anonymous extract because it fairly well prompts an issue that I couldn't articulate the other day; "First I agree that the use of the word rope has diminished. My theory is because most people understand and use the term as almost synonymous with buffer. Briefly, I believe that in counterbalance to the buffer which ensures you release the materials "early" so they'll have a very high probability of being processed on time, the rope prevents you from releasing materials too soon. It is meant to block the notion that the earlier we start the better. The rope has no length, though. It is a "mechanism" that can take many different forms. Essentially it synchronizes the release of materials with the flow at the point that is being buffered. It is truly the PULL signal from the constraint to material release." I know that I didn't leave much room for ambivalence when I said that "to me ropes and buffers are distinctly different." So I guess that I can't wriggle out for the moment, and in fact I am comfortable with this. The mental model that I currently have is an old-fashioned brass curtain track up-turned on a table as my rope, and individual curtain runners/hangers - the work - are released at different spacings (start times) determined by the drum and move individually - at different rates but always FIFO - towards the other end where they accumulate - queue - at the buffer origin. What I couldn't articulate the other day was the case where different proportions of mice and elephants are in the mix . 
We decided that these two types of work could have different ropes (two different curtain tracks feeding a common buffer origin). In fact I invented a common rope for both elephants and mice of 4 weeks (it used to be 8!). This was my fudge to avoid having to address the problem that I couldn't articulate. Someone raised exactly the same problem the other day; they wanted not two but five ropes - or at least that was the hypothetical question. So let's stick to just two: elephants at 2 weeks (8-hour days this time and 5 days a week), mice at 1 week (same conditions). If I only do whatever it is that we are doing to mice (our mix is 100% mice) then our WIP will be 40 hours of mice (one mouse takes, let's say, 4 hours of constraint time, so this means 10 mice as WIP up to the drum). If I only do whatever it is that we are doing to elephants (our mix is 100% elephants) then our WIP will be 80 hours of elephants (one elephant takes, let's say, 2 hours of constraint time, so this means 40 elephants as WIP up to the drum). Note there is no throughput assigned, so we can't even guess the relative value of this silly example. Now as I see it we have two ropes, but my buffer (to a common constraint) is going to vary. It can be as little as 40 hours at one extreme of the mix (assuming we are externally constrained) and as much as 80 hours at the other extreme. It has to be this because we can't release work in the past (well, O.K., I know that we can be as late as 1/3rd of the buffer duration, but that is a rule for an exception, not a rule for a rule). Pieces, for those still interested, could range from 10 to 40 pieces (and the space required will differ by orders of magnitude). Intermediate mixes will produce a buffer duration of an intermediate nature. Rope lengths are defined and invariable, but the resulting buffer varies. This at least helps me to understand: (1) why most people choose the longer rope as a unitary buffer.
The result is that work from the short rope will often arrive too soon, but this is manageable. The opposite, choosing the shorter rope as a unitary buffer, will often mean that the buffer is somewhat empty and work is apparently late. This is impossible to manage. (2) why individual job buffer status a la Schragenheim (especially under mixed make-to-order/make-to-stock) is preferable when more than one significant rope length is involved. Let's take a 50:50 mix (5 mice, 20 elephants - we are talking time, not pieces): the buffer should contain half of 40 hours for mice = 20 hours of mice, and half of 80 hours of elephants = 40 hours of elephants = 60 hours in total (for 2 different ropes of 1 week and 2 weeks duration). Does this make sense? I hope so. And maybe it does mean that the rope really is only the gating offset. But hopefully it also means that the rope length (which I take to be real) and the buffer duration are not synonymous. Pieces make no sense without reference to time consumed on the constraint; the buffer is not equal to either of the ropes but ranges somewhere in between. Now I guess that someone will say that this proves that we need computers and that we need "dynamic buffering." To which I would reply that all we need to know is the rope length for the particular material and the date that it is scheduled forward on the drum (which gives me the release date and the zone 1 penetration date, which are needed for buffer management a la individual job buffer status). If ropes are no more than the gating offset then once again they must be a foreshortened offset, not one that is too small and not one that is so long that it works without active management (intervention) by exception on a significant few.

---

From: Brian Potter
Subject: [tocleaders] Buffer Equals Rope?

Justin, Yes, I was serious. In an automated environment, one might let a sufficiently clever ERP system compute "accurate and precise" rope lengths for each SKU.
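Kelvyn's elephants-and-mice arithmetic above can be checked with a few lines of code. This is only a sketch of his worked example; the function names and the mix-by-constraint-time convention are my own.

```python
# Rope lengths from the example: mice 1 week = 40 h, elephants 2 weeks = 80 h.
# Constraint time per piece: 4 h per mouse, 2 h per elephant.
MOUSE_ROPE, ELEPHANT_ROPE = 40, 80
MOUSE_HOURS, ELEPHANT_HOURS = 4, 2

def buffer_hours(mix_mice):
    """Expected buffer duration (hours of constraint time) for a given
    fraction of mice in the mix, where the mix is measured in time."""
    return mix_mice * MOUSE_ROPE + (1 - mix_mice) * ELEPHANT_ROPE

def wip_pieces(mix_mice):
    """Pieces of WIP up to the drum at a given mix."""
    return (mix_mice * MOUSE_ROPE / MOUSE_HOURS
            + (1 - mix_mice) * ELEPHANT_ROPE / ELEPHANT_HOURS)
```

At the extremes this gives the 40-hour (10 mice) and 80-hour (40 elephants) buffers of the post, and the 50:50 mix reproduces the 60 hours (5 mice plus 20 elephants) computed above.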
Since normal system variation destroys such "accuracy and precision," you do not need it. Something close will do. The Fibonacci numbers [hence, F.(n)] have two useful properties . . . - They are easy to calculate: F.(n) = F.(n-1) + F.(n-2) where: F.(0) = F.(1) = 1 - They are approximately geometrically spaced each being about 1.6 times its predecessor Thus, they automatically offer a "rope length menu" so that one of the "standard rope lengths" will not be too far from an "ideal" rope length for any particular SKU. Just pick one near the expected lead time from release to one buffer zone before scheduled drum start time. You always have two choices---one a little short and one a little long. If the shop is nearly empty or the SKU is easy or lots of active SKUs need similar setups (easy order joining all along the routing), pick the short one---the order will sail through; no need to start early. If the SKU is new and Murphy might be off scale, pick the long one---the order may need the extra time an early material release will provide. As F.(n) gets large, so does the "gap" of "unavailable" rope lengths between F.(n) and F.(n+1). However as need for longer ropes increases, so does variation. Thus, the greater tolerance for imprecision and inaccuracy in rope lengths also happens for long ropes corresponding with large values of F.(n). Variation in production will smear all the rope lengths away from their initial values, and the drum buffer will absorb variations introduced by both production processes and "non-ideal" rope lengths. A sane guess is good enough. In a manual system, a menu (like the Fibonacci sequence) might help someone focus on something that matters rather than squandering time/effort trying to pick the "ideal" rope length for each order before releasing it into the system. Naturally (even in a manual system), if you already know a "good" rope length for a SKU, use that length, again. 
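The Fibonacci "rope length menu" idea above can be sketched as follows. The function names and the short-vs-long pick rule are my own illustration of the paragraph, not Potter's actual procedure.

```python
def fib_menu(max_len):
    """Fibonacci 'rope length menu': 1, 2, 3, 5, 8, 13, ... up to max_len,
    each entry roughly 1.6 times its predecessor."""
    menu, a, b = [], 1, 1
    while b <= max_len:
        if not menu or b != menu[-1]:
            menu.append(b)   # skip the duplicate leading 1
        a, b = b, a + b
    return menu

def pick_rope(expected_lead, max_len=1000, prefer_long=False):
    """Pick the menu entry just below or just above the expected lead
    time: short when the shop is light or the SKU is easy, long when
    the SKU is new and Murphy might be off scale."""
    menu = fib_menu(max_len)
    longer = next((x for x in menu if x >= expected_lead), menu[-1])
    shorter = max((x for x in menu if x <= expected_lead), default=menu[0])
    return longer if prefer_long else shorter
```

For an expected lead time of 10 days the two choices are 8 (a little short) and 13 (a little long), exactly the "one short, one long" pair the post describes.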
However, do not make a big production of rope length computation. If an order is a one-off, it won't matter all that much---you won't see the order again. Thus, you cannot learn from experience to pick a better length next time. If an order will (essentially) repeat several (or many) times, then for each repeated order you can use prior experience to pick a "better" rope length than last time. In both cases, the initial rope length choice just does not matter all that much as long as the guess is somewhat sane. Variation will happen, and the drum buffer (or expediting) will absorb the variation.

---

From: "Jim Bowles"
Subject: RE: [tocleaders] Buffer Equals Rope?

Jack, It was the "A" plant that led some smart person / people to develop the Critical Chain application. If you examine the flow that you have described, we would see that there are convergence points along the routing. Assembly A1 (10hr) has a shorter chain than Assembly A2 (40hr), and then they converge at the apex, where you say the plant's constraint resides. When we talk buffers we are of course talking "time buffers". So if we use the production solution we would probably have:
- a route synchronisation buffer or assembly buffer [this might also be called a space buffer if the items are large] in the shorter lead-time leg (feeder buffer in CC)
- an assembly buffer (capacity buffer in CCPM)
- a shipping buffer (project buffer in CC)
We would start with the due date and back off in time (tie a rope) to schedule the CCR (shipping buffer). Then we would back off in time to the assembly buffer (tie a rope). Then we would back off in time again from the assembly buffer (tie the ropes) to the material release schedules. Since we are talking about a synchronised schedule, we would want to protect the longer lead-time assembly from delays in the shorter lead-time assembly.
The real beauty of this approach is that it provides us with some very clear control points in the system: release schedule, feeder buffer, assembly buffer, CCR schedule and shipping buffer. Apply good buffer management methods and you have the means to provide a highly effective control system and a mechanism for focusing your improvement efforts. For those people coming from a JIT system, which uses physical buffers, it can be hard to get their heads around the fact that the buffers we are talking about equal time. Of course on the floor itself these will translate into physical items, but the volume of these (or lack of them) is an indicator of whether we are doing the right things and doing them right. That's my take on the subject.

---

From: Brian Potter
Subject: [tocleaders] Buffer Equals Rope?

Kelvyn, I believe that the assertion "Buffer = Rope" is generally true only in the single-SKU case. Consider the following four SKUs (view in a monospaced font for best visual impact):

SKU   Mean Time to Drum   Std. Deviation of Time to Drum
A     m.A                 s.A
B     10 * m.A            s.A
C     m.A                 10 * s.A
D     10 * m.A            10 * s.A

Clearly, using a single rope length with "Buffer = Rope" for these four SKUs will either tie up excess cash in WiP (by having too much WiP for A, B, and C) or drive too much expediting (or even late orders) for D (and maybe B or C as well) by failing to release material soon enough. So, how long should the assorted Ropes and Buffers be? Assuming we want 3-sigma protection at the Drum, the total buffer lengths for SKUs A and B should be 3 * s.A, and 30 * s.A should work nicely for SKUs C and D. Part of the rope length should be some portion of this buffer length. If the processes upstream of the Drum were a "black box," we'd want the full 3-sigma time interval as part of our lead time---part of the Rope length. Since those processes are not only visible but under our control, less will usually do.
Assuming that the lead time has a somewhat normal distribution (not a big stretch), about 84% of all orders will reach the Drum before the average lead time plus 1-sigma has elapsed. Without intervention, the other 16% will miss the average time by more than 1-sigma. BUT since we have control, we need not let those orders just be late. We can intervene! Periodically (more often in shops with short lead times and less often in shops with long lead times), we can ask ourselves, "Assuming average progress through the rest of the system, when will each order reach the Drum?" For orders that will probably arrive early or less than 1-sigma behind average (the normal 84%), do nothing---they will probably be fine. Orders that seem likely to arrive late by more than 1-sigma but less than 1.5-sigma (about 9%) to 2-sigma (about 13%) get tagged with a watch notice. These orders may "get lucky" and arrive on time without intervention, but they may need future intervention. Orders late by more than 1.5-sigma (7%) to 2-sigma (2%) will almost certainly be late unless we intervene. Expedite these orders immediately. There is nothing sacred about the particular break points mentioned above. The three zone sizes (green or neglect, yellow or watch, and red or expedite) should adjust according to one's capacity for expediting. The expedite zone should be long enough that some expediting happens, but not so long that many orders need expediting. The watch zone should be wide enough that the need to expedite an order should rarely (if ever) come as a surprise (longer than the interval between buffer reviews---which will become more frequent as the system improves). The green zone (nominally the time before mean lead time plus 1-sigma) should be wide enough that the large majority of orders reach the Drum inside the green zone.
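The sigma break points above might be coded roughly like this. The exact thresholds are illustrative assumptions (the post itself says nothing is sacred about them and they should be tuned to expediting capacity).

```python
def zone(projected_lateness, sigma):
    """Classify an order by its projected arrival relative to the
    average lead time, in units of the lead-time sigma:
    within +1 sigma -> green (neglect),
    +1 to +2 sigma  -> yellow (watch),
    beyond +2 sigma -> red (expedite immediately)."""
    if projected_lateness <= sigma:
        return "green"
    if projected_lateness <= 2 * sigma:
        return "yellow"
    return "red"
```

With sigma = 2 days, an order projected to run 3 days behind average lands in the watch zone; one 5 days behind triggers expediting.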
Thus for A, B, C, and D---our original four SKUs---and for an arbitrary SKU (X) we get the following Buffer and Rope sizes (again, view in a monospaced font for best visual impact):

SKU   Mean Time to Drum   Std. Deviation of Time to Drum   Buffer      Rope
A     m.A                 s.A                              3 * s.A     m.A + s.A
B     10 * m.A            s.A                              3 * s.A     10 * m.A + s.A
C     m.A                 10 * s.A                         30 * s.A    m.A + 10 * s.A
D     10 * m.A            10 * s.A                         30 * s.A    10 * m.A + 10 * s.A
X     m.X                 s.X                              3 * s.X     m.X + s.X

As we attack major variation sources, both the mean lead times and the variations in lead times will shrink. Thus, as we improve, both the buffers and the ropes will shrink, but the protection they provide against late orders or lost Drum time will remain the same. Leading into the next topic, note that for all SKUs, both m.X and s.X are usually guesses. The system has probably improved since the last order for whatever you are making today. Thus, historic data about m.X and s.X is probably not all that helpful when what you want is m.X and s.X for the SKU in TODAY's system. Thus, the formulae above may have academic interest and some value in a static system, but DBR is NOT STATIC. DBR systems improve as their operators attack the damaging variation sources, guided by expediting causes and quality-failure causes. Fibonacci Numbers as Guides to Rope Length: I cannot prove anything special about Fibonacci Numbers as "ideal" selections for a "Rope Length Menu." They just tend to crop up "naturally" in many places for no obvious reason. Why not here? I'd guess that the factor in a geometric series for a "Rope Length Menu" should be less than two (or "gaps" might get too wide too quickly) and enough more than one so that adjacent entries are not too much alike. As far as I can tell, the square root of two, the golden ratio 1.6180339887... (the limiting ratio of the Fibonacci sequence), and the square root of three (and any factor between the extremes) all look like sane candidates.
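The table's rows all reduce to one rule, which can be stated as a two-line function (a direct restatement of the table; the function name is mine):

```python
def buffer_and_rope(mean_lead, sd_lead):
    """Per the table above: Buffer = 3 sigma of the lead time to the
    Drum, Rope = mean lead time + 1 sigma."""
    return {"buffer": 3 * sd_lead, "rope": mean_lead + sd_lead}
```

For SKU X with m.X = 10 and s.X = 2 this yields a 6-unit Buffer and a 12-unit Rope, matching the X row of the table.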
As I said before, ease of computation contributes to my recommendation for F.(n) as the Nth entry in a "Rope Length Menu." --- From: Brian Potter Subject: Re: [tocleaders] Buffer Equals Rope? Kelvyn, I believe that my carefully contrived example exhibits a "need" for two buffers and four ropes while clearly indicating that Buffer <> Rope. As a practical matter, the example may be too contrived. Yet, the general principle that Buffer duration is proportional to variation and Rope length is proportional to average lead time remains sound. Thus, in the general case, Buffer and Rope are not identical. However, as a pragmatic matter, they may be close enough to the same that we may ignore the differences. One aspect of the hypothetical "Buffer = Rope" identity bothers me. If Buffer = Rope, that identity implies some non-zero probability (say, half of what lies beyond 3-sigma, about 0.0015) that a lead time of zero between material release, upstream processing, and arrival for Drum processing can happen. It seems sane that Rope > Buffer (meaning that for practical purposes, material will have a zero probability of reaching the Drum in zero time after upstream processing). Per my earlier comments about Fibonacci numbers for Rope length menus (probably works for buffer sizes, too), similar duration Ropes and Buffers might harmlessly lump together in a manual system if doing so simplifies the order tracking process and buffer management reporting. Since the Buffer is a time interval during which we expect the order to reach the Drum, buffer management reporting amounts to pretty much the same thing as buffer penetration reporting in a CC/PM or CC/MPM environment. --- From: Tim Sullivan Subject: RE: [tocleaders] Buffer Equals Rope? Apparently the interest in this topic has waned. I have continued to think about why there is not clarity on this subject, so I offer this one last post on my thoughts. 
Ok, I drop my objection to the notion of "rope length." And I claim that the rope length is always equal to the length of the time buffer to which it corresponds. Putting that in the context of the metaphorical troop of marching soldiers in "The Race", the rope becomes taut and stops the lead soldier from marching any farther when the WIP is equal to the buffer duration. If the time buffer is 4 hours and there are 4 hours of ground (WIP) between the lead soldier and Herbie, then the rope becomes taut, which is the SIGNAL that lets that soldier know s/he needs to stop. Similarly, when Herbie has covered more ground so there is less than 4 hours of ground (WIP) between Herbie and the lead soldier, the lack of tension on the rope SIGNALS the lead soldier to start moving again. How far does the soldier go? Not to some predetermined point, but only until there are 4 hours' worth of ground (calculated at the rate at which Herbie hikes) between the lead soldier and Herbie. When that happens, the rope becomes taut and SIGNALS the lead soldier to stop. So the terminology of "tying the rope" means: create the mechanism that will signal the appropriate control points to start and stop processing according to the rate at which the CCR and/or market or replenishment buffer is processing/consuming/being consumed. The rope(s) is (are) the feature(s) that SYNCHRONIZES the production system. In traditional DBR the shipping rope synchronizes the active constraint with the market, and the constraint rope synchronizes the gateways with the active constraint. Note, of course, that there is no rope to the soldiers between the lead marcher and Herbie. They all do Roadrunner: they march as fast as they are able whenever there is ground between them and the marcher immediately in front. When they catch up, they stop. When the marcher in front moves again, exposing ground (creating WIP), they immediately begin to march (if they are able).
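Tim's taut-rope signal can be mimicked in a toy step simulation (entirely my own construction, under the assumption that one "tick" releases at most one hour of work):

```python
def march(herbie_rate, steps, buffer_hours=4.0):
    """Each tick: if the rope is slack (WIP between the lead soldier and
    Herbie is below the buffer), the gateway releases one hour of work;
    then Herbie consumes `herbie_rate` hours of WIP.  Returns total
    hours released and the final WIP level."""
    wip, released = 0.0, 0
    for _ in range(steps):
        if wip < buffer_hours:             # rope slack: lead soldier moves
            wip += 1.0
            released += 1
        wip = max(0.0, wip - herbie_rate)  # Herbie covers ground
    return released, wip
```

Once the buffer fills, the release rate settles to Herbie's pace (half the ticks for a rate of 0.5), and WIP hovers at the buffer duration, which is exactly the synchronization the rope provides.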
Another note regarding the frequency at which synchronization should take place... in my opinion, that depends on the buffer duration, or rope length (since they are equal). If the buffer is only 30 minutes, synchronization must be real time. On the other hand, if the buffer is 2 weeks, then daily synchronization is probably sufficient. --- From: Brian Potter Subject: [tocleaders] Buffer Equals Rope? Jim, Kelvyn, Tim, _et al,_ Points to ponder . . . Where's Herbie: The "Rope"---tying the material release rate to Herbie's pace---merely prevents releasing too much WiP into the system. Introducing work into the system faster than Herbie can do it merely converts CASH (a versatile, liquid asset) into WiP (with little or no market value until it is sold [very low liquidity] and little if any capability for conversion into more than one or a handful of salable items [very limited versatility]). Herbie can be anywhere in the line of march as long as a "Rope" connects Herbie to the leader. When Herbie leads, the Rope length is VERY short---perhaps only a few minutes before Herbie begins the setup for his next operation. If Herbie is the last operation (perhaps, packaging or sales), the Rope might be quite long (days or months). Buffer = Rope?: In the general case, the Rope length will be longer than the Buffer. The Rope length is the total expected lead time from material release until the Drum begins processing the order PLUS some allowance for variation. One might use the average (arithmetic mean) lead time, the median lead time, or the modal lead time to estimate the expected lead time. The variation estimator might be based on the lead time standard deviation, lead time range, or lead time moving range. Fortunately, DBR is so robust that one may pick a semi-arbitrary Rope length somewhat near the "right" Rope length and later "tune" the Rope length based on actual system performance. The Buffer is usually a shorter time interval than the Rope. 
This is one of the virtues of DBR. One need not carefully watch every order as it flows (perhaps along many converging routings) toward the Drum and the shipping point. One need watch ONLY the orders which SHOULD have reached the Drum or which SHOULD be within one Buffer time (at expected processing rates) of reaching the Drum. The Drum schedule plus the Buffer length identifies those orders for you. Thus, DBR shop floor management consists primarily of checking all the Drum schedules (often, only one or two) to identify the orders with scheduled Drum start times from now to one Buffer time into the future. Locate those orders. Usually, finding them is not hard because the time and materials system will tell you where they were the last time someone/something worked on them. Each order "in the Buffer" should be either where it was last worked on or waiting for its next operation. Note that orders with convergent routings add a complication in that they have one "location" for each converging routing. Orders that---when processed at a normal pace---will reach the Drum before the scheduled Drum start time by a comfortable margin (or have already reached the Drum) are "green;" pay them no particular attention for now. Orders that---at normal processing rates---will reach the Drum before scheduled Drum start time with some "cushion" are "yellow;" watch them for signs that they are falling behind. Orders that---again, at normal processing rates---will reach the Drum AFTER the scheduled Drum start time (or before that time by an uncomfortably thin margin) are "red;" begin expediting those orders immediately to avoid late arrival at the Drum. Such late arrivals at the Drum will disrupt the Drum schedule and perhaps put the order's on-time delivery in jeopardy. If one has the data for the computation---after a few production cycles under DBR, one will---one might set Rope lengths to average lead time plus one standard deviation of that same lead time. 
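The green/yellow/red classification above can be sketched like this. The structure follows the post; the specific margin thresholds are illustrative assumptions, not values the post prescribes:

```python
def zone(expected_arrival: float, drum_start: float,
         comfortable_margin: float = 8.0, thin_margin: float = 2.0) -> str:
    """Classify an order by comparing its expected Drum arrival (at normal
    processing rates) with its scheduled Drum start time. Times in hours;
    the margin thresholds are illustrative, not canonical."""
    slack = drum_start - expected_arrival
    if slack >= comfortable_margin:
        return "green"   # comfortable margin: no particular attention now
    if slack >= thin_margin:
        return "yellow"  # some cushion: watch for signs of falling behind
    return "red"         # late, or an uncomfortably thin margin: expedite

print(zone(expected_arrival=10.0, drum_start=30.0))  # green
print(zone(expected_arrival=25.0, drum_start=30.0))  # yellow
print(zone(expected_arrival=29.5, drum_start=30.0))  # red
```

Only the red (and, loosely, yellow) orders consume management attention; that focusing effect is the virtue the post describes.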
The Buffer length (the amount of time before the scheduled Drum start time that one actively watches order locations and advancement rates) might be about three standard deviations of the lead time from material release until scheduled Drum start time. Note that these times need not be calculated with high accuracy and precision. Actual performance will surface errors in the Rope length and Buffer length estimates. Too much expediting (more than 5% to 10% of all orders or more than the expediters' capacity) indicates a need for a longer Rope. Too little expediting (less than 5% to 10% of all orders) indicates that the lead time is inflated and the Rope should be shorter. Large numbers of "yellow" or "red" orders which nonetheless reach the Drum before their appointed times probably indicate an oversize Buffer (shop floor logistics management is wasting effort tracking orders that need no attention). Orders arriving late (or barely on time) without raising a "yellow" or "red" signal early enough to leave sufficient time to expedite probably indicate a Buffer that is too short (unpleasant surprises indicate that shop floor logistics management is not tracking enough orders). With an automated system, one might track EVERY order's status relative to the Drum schedule. Then, one could do a CC/MPM-like buffer penetration calculation and expedite every order likely to reach the Drum late or "nearly late." This might make managers and executives feel good about on-time capability, but would offer little real benefit. Expediting early in an order's production chain ignores the reality that later processes may make up the "lost" time or put the order "behind" again. Normal variation caused by upstream resource conflicts and routing convergence points will temporarily set one order "behind" and another "ahead." Because the pre-Drum resources all have more capacity than the Drum, such variations are a tempest in a teapot. Ignore them. 
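The Rope-tuning rule above can be sketched as a simple feedback check. The 5% to 10% band comes from the post; the function and parameter names are assumptions:

```python
def tune_rope(expedited: int, total: int,
              low: float = 0.05, high: float = 0.10) -> str:
    """Suggest a Rope adjustment from the observed expediting rate.
    Too much expediting means the Rope is too short (lengthen it);
    too little means the lead time estimate is inflated (shorten it)."""
    rate = expedited / total
    if rate > high:
        return "lengthen rope"
    if rate < low:
        return "shorten rope"
    return "rope ok"

print(tune_rope(expedited=18, total=100))  # lengthen rope
print(tune_rope(expedited=2, total=100))   # shorten rope
print(tune_rope(expedited=7, total=100))   # rope ok
```

An analogous check on Buffer length would compare the count of yellow/red orders that arrived comfortably early (oversize Buffer) against late arrivals that never raised a signal (Buffer too short).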
Expedite ONLY the handful of orders that threaten late Drum arrival as they approach the Drum (while they are "inside" the Buffer). The "extra" capacity in the resources upstream of the Drum provides the "sprint" capability necessary for successful expediting. Note that less "excess" capacity implies a need for a longer Buffer and more sprint capacity among the non-constraints allows a shorter Buffer. The Buffer is usually shorter than the Rope because DBR trades a small investment in WiP (about one standard deviation of the lead time) for robustness which allows one to exploit the potential for canceling random processing time variations over each order's processing routing. Note that in a JiT operation with negligible raw/purchased material inventory in house, the Buffer may extend outside the organization into the logistics network or even into suppliers' shop floors. In such operations, internal "material release" merely indicates a location transfer from material handling to some production operation. Such material release is not a meaningful control point. The actual material release that matters in the DBR logistics management probably happened when the JiT system triggered a production work order at a supplier. Thus, in JiT shops both the Rope and the Buffer often reach into the incoming materials supply network. In these operations, DBR will help the inbound logistics management team (whose work is often critical to stable in house operations) focus their attention on orders that are "in trouble" or that might soon be "in trouble" without paying much attention to the majority of orders that will flow smoothly to the right place at the right time. --- +( buffer penetration From: Kelvyn Youngman Date: Thu, 15 Dec 2005 21:51:01 +1300 Subject: RE: [tocleaders] Process Measurement & Improvement Questions Santiago, What happened, did your computer develop some fuzzy logic? 
I received your e-mail and I was going to e-mail and say that I would try and reply this evening however I see that Brian had done so already (although I didn't see any evidence that the original e-mail went through TOC Leaders). Never mind, I too think that these are the types of questions that need greater circulation. I wanted to answer these questions from a slightly different angle to Brian. Maybe from the point of view of a less automated environment. And I also wanted to expand on some of the themes. (1) You asked about buffer penetration information. Let's make this more generic, let's apply this to both make-to-order and make-to-stock. Firstly if you have a red zone penetration (this is called zone 1 of the buffer regardless of what the TOC ICO dictionary says - I have made a submission (again) why zone 1 is zone 1, but TOC ICO are clever they now have 3 committees to consider this application) then we should be talking about an exception, not a commonality. A red zone buffer penetration should not be common, the suggested numbers that float around are 5-10% of all jobs. It is exception reporting. It is a mechanism to make the ropes shorter and still maintain adequate subordination to the area that it protects. So what happens when I have a buffer penetration? All I have is an incidence of a job which has consumed 2/3rds of its buffer time. I expected to see it physically located at the buffer origin at this date and it is not there. So I have to go and look for it. It might still get to the buffer origin by the due date (let's make this a drum and call it as John Caspari does the Drum Due Date), it might not. If not we will have to "help" it along. That is the first job of buffer management, note the incident and investigate it. The second job (if it is necessary - we don't want to create a new bureaucracy) is to record the time that the job actually does arrive at the buffer origin. This is the delay that our incident incurred. 
We can accumulate this data and make use of it. Robert Stein in both of his books (the mid-1990s ones) gives histograms of both frequency vs. location and also value (throughput $ days) vs. location. It is this second one that is most valuable. It tells us important information about small jobs that are very late and large jobs that are slightly late. This is the measure that we should use. If you look at John Caspari's more recent treatment in Management Dynamics he shows how to use the frequency data alone in conjunction with statistical process control to make determinations. I would (and have) argued that throughput-dollar-days is more valid than frequency alone. So zone 1 penetrations will record only the place at which the job was located when 2/3rds of the buffer was consumed. This may not be the actual place that caused the problem. Who cares? No one; at worst it is still a damn good indication of the rough whereabouts of the problem (if it is indeed a persistent problem), and at best it is much, much better than how most systems collect data now. To "deepen the statistic" we should use zone 2 buffer penetrations - called the tracking zone in Haystack - which now reminds me why you are asking this. My answers to these questions are on the next page from drum-buffer-rope. It is called "implementation details." http://dbrmfg.co.nz/Production%20Implementation%20Details.htm If you look on that page you will find this already drawn and explained under the heading Local Performance Measures - Lateness, so now I don't need to repeat any more of that. The place that you find the job is the place that you record. And maybe an eyeball check is the best way to do this. (2) Now I can answer this one by saying check on the same page for First-In First-Out (FIFO) Discipline, and I can quote Stein and Woeppel as the authorities. 
My own rationale for this is that given the old bad habit of putting most urgent first (because in the old days everything eventually becomes urgent) we must break this. By putting most urgent first, we give ourselves permission to work slower later on less urgent jobs. This ruins Roadrunner. A less urgent job in front of a more urgent job will be pushed through the system by the more urgent job. That is what we want. It is only once we have a red zone penetration that this rule should be overturned. I argue that it should only be the buffer manager (a planner for that line) who gives this permission. Otherwise everyone will develop their own logic for doing so and soon you have no logic at all. Some people say that you should let the foreman work from the drum due dates - BUT if you have different ropes, a job with a long rope and a significant buffer consumption can be delayed for a job with a short rope and little consumption (because the due date is sooner). Things that go through heat-treat and things that don't go through heat-treat in the same line spring to mind. FIFO and buffer consumption are the determinants. (3) T$D is a negative measure - a measure of a problem; I$D also. However, you know the schedule for the planned completion of jobs and the value to the system, so graph this in Throughput$ vs. working day, cumulative for the week or the month, and graph actual completion. This is positive (sure, you might get a hockey stick on actual, but that just tells you what sort of untapped potential you still have - and planners know how to stack "good" work in make-to-stock near the end of the month). Better still, put a bonus on increased value; this is more positive. It works for executives, why wouldn't it work for labour? (Bonus based upon additional Throughput$ earned compared to an historic base rate - it is reward for output that wouldn't have been earned otherwise = increase in productivity). 
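The dispatching rule described above (plain FIFO, overturned only for red-zone penetrations authorized by the buffer manager) can be sketched like this. The job records and field names are hypothetical illustrations:

```python
RED_ZONE = 2 / 3  # red zone begins at two-thirds buffer consumption

def next_job(queue: list[dict]) -> dict:
    """FIFO discipline: work the oldest arrival first. Only jobs flagged
    as red-zone penetrations (>= 2/3 of their buffer consumed) jump the
    queue, deepest penetration first."""
    reds = [j for j in queue if j["buffer_consumed"] >= RED_ZONE]
    if reds:
        return max(reds, key=lambda j: j["buffer_consumed"])
    return min(queue, key=lambda j: j["arrival"])  # plain FIFO

queue = [
    {"name": "A", "arrival": 1, "buffer_consumed": 0.40},
    {"name": "B", "arrival": 2, "buffer_consumed": 0.75},  # red zone
    {"name": "C", "arrival": 3, "buffer_consumed": 0.10},
]
print(next_job(queue)["name"])  # B
```

Note that buffer consumption, not due date, drives the override, which is exactly why mixed rope lengths (the heat-treat example) do not confuse this rule.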
In fact you have to graph the system's output and circulate it to everyone. Some people will feign indifference, but they are not indifferent to it. Actually let's hammer this. If you are doing a good job people will believe they are doing less; you need the output graphs to show that they are not. The only other thing that I wanted to mention is that since becoming aware of Eli Schragenheim's buffer status approach, I believe that there are two valid notions for the buffer. (1) There is the old, oops, original notion that I will call the viewpoint of the constraint. This works well in non-computerised systems. It also tends to cause ropes to be the same length, which isn't really a big problem if you use the longest rope for all jobs. Some jobs (job types, not job sizes) arrive early. They wait. Really all you are doing is checking the drum schedule against reality. (2) If the system is computerised then you can use buffer status for individual jobs (in fact you can do it graphically on job cards for a non-computerised system too). But to my thinking this is now the viewpoint of the job. Each job has its own buffer. I find this interesting. It is the same thing viewed from two different angles. Bigger oops. I had on the DBR page that 1/6th of the original work is sitting in front of the constraint. This is incorrect; it should be more like 3/12ths (zone 1 full plus zone 2 partially full). Santiago, hope that this helps. Clear the bug out of your e-mail, but keep those questions coming. Cheers Kelvyn -----Original Message----- From: tocleaders@yahoogroups.com [mailto:tocleaders@yahoogroups.com] On Behalf Of Potter, Brian (James B.) Sent: Thursday, 15 December 2005 10:46 To: tocleaders@yahoogroups.com Cc: Santiago Velásquez Martínez Subject: [tocleaders] Process Measurement & Improvement Questions Santiago, These are good questions. They deserve on-list treatment. 
1: I recommend using Throughput-Money-Days of delay as the metric: Compute this metric as Throughput from the sale multiplied by days of delay (increased buffer penetration) caused at the resource (and each resource upstream from the resource holding the penetrating order). Resources with consistent, high values for this metric deserve attention regarding improving subordination or treatment as an internal constraint. 2: Your production plan tells you when you expect each order to reach the shipping dock and each constraint it uses. Your average lead times tell you how long it will take an order to move from its current location to the next buffer it will reach. Your time worked and materials consumed system will tell you where and when each order was last in process (presumably, it is queued at the next resource[s] it will require on the production routing). If it will help, you can add pseudo-resources (e.g., Transferred from Layout and Cutting to Fabrication, meaning the order with this as its last activity is queued in Fabrication) to help you locate orders without searching the shop floor. For each order, add lead time along the routing from current location to the next buffer, and you have your buffer penetration for every order. You can probably do this automatically with each transaction in your time worked and materials consumed system (with a periodic adjustment [perhaps, daily or once per shift] for each order that remains static for a full buffer reporting cycle). If an order has multiple paths converging at a buffer, use the path with the maximum buffer penetration. 3: Use #2. Remember, in a system with serial dependencies, there is no such thing as individual success or failure. Either the whole system wins or everybody loses. The system performance metric identifies processes and resources that contribute unfavorable variation. 
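The Throughput-Money-Days metric from point 1 above can be sketched as an accumulation per resource. The order records and field names are hypothetical illustrations:

```python
def throughput_dollar_days(orders: list[dict]) -> dict:
    """Accumulate Throughput-dollar-days of delay per resource:
    the Throughput of the sale multiplied by the days of delay
    charged to the resource holding the penetrating order."""
    tdd: dict[str, float] = {}
    for o in orders:
        charge = o["throughput"] * o["days_delayed"]
        tdd[o["resource"]] = tdd.get(o["resource"], 0.0) + charge
    return tdd

orders = [
    {"resource": "Mill",  "throughput": 500.0, "days_delayed": 3},
    {"resource": "Mill",  "throughput": 200.0, "days_delayed": 1},
    {"resource": "Paint", "throughput": 800.0, "days_delayed": 0.5},
]
print(throughput_dollar_days(orders))  # {'Mill': 1700.0, 'Paint': 400.0}
```

Because the charge multiplies value by delay, a small job that is very late and a large job that is slightly late can carry similar weight, which is the property Kelvyn argues for above.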
Policies, historic practice, procedures, process design, product design, materials, and tooling typically drive unfavorable variation much more powerfully than individual behaviors. The metrics will identify symptoms (unwanted lead time increases and delayed revenue) and locations (resources with high TDD). Identifying and eliminating causes will require investigation and process improvement (as in SPC, TQM, 6-Sigma, process reengineering, and the like) action. :-) Brian The question is not whether a business is successful, but why? and why was it not more successful? ---W. Edwards Deming, Out of the Crisis, p129 -----Original Message----- From: Santiago Velásquez Martínez [mailto:ifsavel@gmail.com] Sent: Wednesday, December 14, 2005 3:59 PM To: Kelvyn Youngman Subject: Questions Hello I hope you don't mind.... I have these questions; I don't want to send them to the list as I believe they have been discussed before. I would appreciate it if you would give me any help! 1. In a MTS environment...if I have a red zone penetration of any SKU...I understand I have to jot down the resource where the material in process is stuck at...do I register this as an incidence, or should I count also the number of time units (days) the material stays stuck at this resource? Do I have to jot down also the other subsequent resources and jot them down as "buffer penetrators" as well, or should I only limit myself to the first resource that caused the penetration? 2. In any environment, under DBR...how do you manage the priorities of orders in the plant? I know it is with buffer status. But how do you physically do it? Do you need someone to physically go and update all the current orders in the plant and change the buffer status? Would a FIFO or Earliest Due Date mechanism work? I am looking for the real, implementation details :) 3. How can you measure people in the plant? I know TDD and IDD are the recommended measures...but how do you physically do it? 
Or are there other measures to see if a worker is performing well or not? --- From: "Potter, Brian (James B.)" Date: Thu, 22 Dec 2005 17:16:49 -0500 Subject: [tocleaders] Process Measurement & Improvement Questions Kelvyn, I see no intrinsic flaws with the IDD and TDD metrics described in Haystack chapter 24. Undesirable values are symptoms. In a shop with generous computation support and frequent updates (daily or with each order location change) to time and materials systems, TDD can spot buffer penetration risks very quickly. Lacking computer aided calculation and a corresponding automatic buffer penetration report, a manual buffer management system can deliver the same information with less immediacy. For each hole in each buffer do the following: - Locate the corresponding order at its current position (positions if an integration point happens between the current locations and the buffer). - Compute the candidate order's TDD for each resource from the current position(s) backwards to material release. Include both end points in the calculation. - Compute the differences in TDD (How much did TDD change from entry to exit at each resource along the routing?) for each resource. - Log the dTDD (signed TDD change values) for each resource. - As long as the dTDD values for a resource have a pattern indicating low risk for large, cumulative unfavorable dTDD contributions, you have no strategic process improvement action items. - Any resource which averages unfavorable dTDD values might deserve local attention (either in terms of local performance improvements or in terms of identifying upstream resources most likely to pass delayed orders to the resource). Resource managers can handle these situations without strategic intervention. - Any resource which averages statistically significant unfavorable dTDD values or which (no matter its average) frequently creates statistically significant unfavorable dTDD events, deserves strategic attention. 
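The per-resource dTDD logging in the steps above can be sketched like this. It is a hypothetical illustration: it assumes each order carries a TDD snapshot at entry and exit of every resource on its routing, and the record field names are invented:

```python
def dtdd_by_resource(order_snapshots: list[dict]) -> dict:
    """Log signed TDD changes (dTDD) per resource: how much the order's
    Throughput-dollar-days figure changed between entering and leaving
    each resource along its routing, from material release onward."""
    log: dict[str, list[float]] = {}
    for snap in order_snapshots:
        dtdd = snap["tdd_exit"] - snap["tdd_entry"]
        log.setdefault(snap["resource"], []).append(dtdd)
    return log

snapshots = [
    {"resource": "Cutting",     "tdd_entry": 0.0,  "tdd_exit": 50.0},
    {"resource": "Fabrication", "tdd_entry": 50.0, "tdd_exit": 30.0},
    {"resource": "Welding",     "tdd_entry": 30.0, "tdd_exit": 130.0},
]
print(dtdd_by_resource(snapshots))
# Cutting added 50, Fabrication recovered 20, Welding added 100
```

The accumulated per-resource lists are the raw material for the SPC screening Brian describes: stable, small dTDD patterns get ignored, while resources with significant unfavorable dTDD deserve attention.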
Either that resource or something upstream from it has trouble subordinating to the drum. - Note that this is an SPC application finding the exceptions among the exceptions in the buffer report. As always with good SPC work, you find the exceptional symptoms and run down their causes (usually systemic rather than individual) so that you can apply a corrective action which eliminates the cause from the system. - Note also that rare high unfavorable dTDD events and small, stable unfavorable dTDD averages get ignored (first, because they seldom trigger in the buffer report and second, because the SPC analysis on buffer exceptions will usually indicate that such cases are pocket change not requiring one's strategic attention). - Third note: This local measurement approach is NOT measuring people or the equipment they operate. It measures the interaction between them and the total system. Sound actions based on the analysis focus on improving the interaction between the dTDD exceptions and the total system. :-) Brian -----Original Message----- From: tocleaders@yahoogroups.com [mailto:tocleaders@yahoogroups.com] On Behalf Of Kelvyn Youngman Sent: Thursday, December 22, 2005 2:43 AM To: tocleaders@yahoogroups.com Subject: RE: [tocleaders] Process Measurement & Improvement Questions John, Excuse my delay in replying. Thanks for asking the question, because in doing so I believe that I already see the problem (with my poor expression). I see throughput-dollar-days late and inventory-dollar-days wait as system measures. They are operational measures that measure the degree to which we are "not doing what we are supposed to be doing" and "doing what we should not be doing." They are relevant as local operational measures only to the section of the system that is controlled by the rope or the buffer that they apply to. I don't believe that they can be subdivided or assigned to anything smaller than the section under consideration. 
So, for S-DBR they can only be applied locally to the whole system! So where does the confusion come from? How about from the Haystack Syndrome? In the Haystack Syndrome, for instance, the second half, the half that confuses everyone, deals with a clever solution to interactive constraints - except nowadays most people wouldn't worry about interactive constraints, they would try to get rid of one or both of them. Moreover, my understanding is that OPT, the predecessor of DBR, was designed to handle up to 5 constraints (per chain?) in its computations. I rationalise this as evidence of the fact that most implementations of OPT were in true cost world environments where everything was indeed operated optimally (or at least sprint capacity was well hidden, which any good operator/manager worth his/her salt is skilled at doing). This multisectionalization of a chain (as a hang-over from OPT) is where I believe this "localisation" of throughput-dollar-days late and inventory-dollar-days wait comes from. This is what we see in HS (for throughput-dollar-days late) around pp 147-155. This is for buffer penetrations across a department (not between centres within a department). Personally, I don't see how anyone will own the "half-baked" "hot potato" (even at a department level) that was already caused by another department to be late - we all know whose fault that was, it was theirs, not ours (OK, that is unfair, most people aren't that hard-nosed; but put blame on them for someone else's "error" and you will get that response). It's not the avoidance of the negative (accumulating TDD) that is important; it is the desire to attain the positive - to improve - that is important. Throughput-dollar-days gives us the measure that we need to improve; one which is entirely consistent with the overall aim of Theory of Constraints. 
Throughput-dollar-days late and inventory-dollar-days wait are (to my mind) very important operational feedback loops for the section of the system that they pertain to. Throughput-dollar-days applies to the buffer. Inventory-dollar-days applies to the rope. I did have a nice example of where I would have given anything to have these two measures in place (but they weren't). I'll leave that for the moment. I hope that I have addressed this at a generic level rather than specifics. Tell me that the disagreement is over the word "local" and "assign" and I will say that there is no disagreement at all. The measures are operational (feedback), global, and systemic. Oops, you asked do they apply to the 90% who do not handle the product. Of course - these people are part of the system aren't they? Cheers Kelvyn P.S. I just wondered how consistent I was in drawing this. I am not consistent at all. I have unit-days-late as per buffers. BUT I see that I put unit-days-wait as per department (rather than ropes). Darn, darn, darn. I have some work to do making changes - sorry improvements! Thank you for making me see that. -----Original Message----- From: tocleaders@yahoogroups.com [mailto:tocleaders@yahoogroups.com] On Behalf Of Caspari@aol.com Sent: Sunday, 18 December 2005 16:24 To: tocleaders@yahoogroups.com Subject: Re: [tocleaders] Process Measurement & Improvement Questions In a message dated 12/15/05 3:59:15 AM Eastern Standard Time, youngman@dbrmfg.co.nz writes: Robert Stein in both of his books (the mid 1990's ones) gives histograms of both frequency vs. location and also value (throughput $ days) vs. location. It is this second one that is most valuable. It tells us important information about small jobs that are very late and large jobs that are slightly late. This is the measure that we should use. 
If you look at John Caspari's more recent treatment in Management Dynamics he shows how to use the frequency data alone in conjunction with statistical process control to make determinations. I would (and have) argued that throughput-dollar-days is more valid than frequency alone. Hi Kelvyn- I agree that we do not agree on this point and I would like to understand the reason(s) for the differences in our opinions better. (1) Could you provide an example of either an Evaporating Cloud and Injection, or the entities near an injection, that "Throughput-Dollar-Days-late (TDDl) is used as a measure of local performance" and/or "Inventory-Dollar-Days-wait (IDDw) is used as a measure of local performance"? (2) Do the TDDl and IDDw metrics apply to everyone in the organization or just the approximately 10% of the organization that works directly on the product? +( Buffer Sizing in Production From: "Potter, Brian (AAI)" To: "CM SIG List" Subject: [cmsig] RE: Buffer Sizing Date: Tue, 7 Dec 1999 11:45:05 -0500 TIA, The intuition in your first paragraph looks good to me. I would not be overly concerned about "different buffer residence times" for parts with different routings and lead times before the constraint. Remember, the "buffer residence time" is really only a "safety factor" in your material release time. Your lead time suggests material release at time "drum_start - lead_time", but you actually release at time "drum_start - 1.5 * lead_time". Using the longest lead time along any routing leading to a drum operation would surely inflate your WiP inventory without offering much extra protection. Why? Worse, depending upon your other resources, the earlier than desirable release of some material might cause temporary overloads or confused priorities at some non-constraint resources. Why? To manage the buffer you must know two things ... 1. What is "missing?" 
What parts will the drum expect within 50% of a lead time (for those parts) from NOW which are not currently ready for the drum? 2. How much time do you have to get missing parts to the drum? What fraction of a lead time (for the "missing" parts) remains before the drum will need material which has not yet arrived? How do different lead times cause a headache? After you answer the two above questions; you apply a three zone rule, take any needed expediting actions, and keep records about your expediting (to help you discover any protective capacity erosion gracefully). One possible handling scheme: 1. Keep a record of jobs scheduled for the drum. 2. When parts for a job reach the drum, so indicate in that job's drum record. 3. To manage the drum buffer, inspect the drum records in the order you plan the drum to work the jobs. 4. For each job, check the time you expect any "missing" parts to arrive (scheduled drum start time minus 50% of that part's lead time) against the current time and "bin" the part according to your "three zone rule." 5. Use the results of step 4 to drive your expediting and CCR management activities. If your shop has a computer doing its scheduling, you need only "tell" that computer when parts arrive at the drum. With that information, your IS folks can program your computer to perform steps 1-4 above and spit out your buffer management action list. If every resource "tells" the computer when material arrives (or maybe when material leaves or maybe both; I once implemented this by adding a "job completed at this resource" operation code to the operation codes operators used for time reporting within the managerial accounting system), the computer can also deliver an "expedite" list to each resource based on the analysis in steps 1-4. Any other scheme that notes part arrivals (or absences) at the drum, creates appropriate "three zone rule" focus, and logs information supporting CCR management will do as well (or better). 
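Steps 1-4 of the handling scheme above can be sketched as follows. This is a hypothetical illustration: the 50%-of-lead-time expected arrival comes from the post, while the record structure and the inner zone boundary are assumptions:

```python
def buffer_action_list(jobs: list[dict], now: float) -> list[tuple]:
    """Scan drum records in planned drum sequence; for each 'missing'
    part compare its expected arrival time (scheduled drum start minus
    50% of that part's lead time) with NOW, and bin it by a three-zone
    rule on the remaining slack."""
    actions = []
    for job in sorted(jobs, key=lambda j: j["drum_start"]):
        for part in job["parts"]:
            if part["at_drum"]:
                continue  # part already logged at the drum; nothing to do
            expected = job["drum_start"] - 0.5 * part["lead_time"]
            slack = expected - now
            if slack <= 0:
                bin_ = "expedite"  # overdue at the drum buffer
            elif slack <= 0.25 * part["lead_time"]:  # illustrative boundary
                bin_ = "watch"     # due soon; keep an eye on it
            else:
                bin_ = "ok"
            actions.append((job["id"], part["id"], bin_))
    return actions

jobs = [{"id": "J1", "drum_start": 40.0,
         "parts": [{"id": "P1", "lead_time": 20.0, "at_drum": False},
                   {"id": "P2", "lead_time": 80.0, "at_drum": True}]}]
print(buffer_action_list(jobs, now=32.0))  # [('J1', 'P1', 'expedite')]
```

The output of step 4 is exactly the expediting and CCR-management action list of step 5; logging how often each part lands in the "expedite" bin supports the graceful detection of protective capacity erosion the post mentions.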
How can you fit the concept to your shop's material and information flows? +( buffers and scheduling From: "Murphy, Mark" Subject: [cmsig] RE: batch sizing... Date: Tue, 3 Apr 2001 18:32:31 -0400 I have been reading your posts about batch sizing (old posts now - I am behind in my email!) Is your plant (for the purposes of the discussion you were having with Mark Woeppel) capacity constrained? How long are your lead times versus how long is your process time for one unit? The way we do it here is this (and we are strong ToC believers!): we have safety stock levels; when stock falls below safety it creates demand. We also have make orders that create demand. We run on a 2 day lead time. Batch sizing for make orders (items not generally held in stock) is based on actual order content - we have no way of knowing if they will order more of the product they are ordering now before the product would expire if we made extra now. So we make exactly what they order. Batch sizing for stock product is based on Safety Stock minus Actual Stock level, prioritized by those items that are furthest below safety stock (on a percentage basis, not on a unit basis). If safety stock is 2000 and we have 1443 on hand (about 28% below stocking level) we make 557 units. We have had very few (if any) stock-outs of safety stocked goods in over a year. There are no limits to this system - if safety is 2000 and we have 1999 on hand we make one unit. If safety is 2000 and we have zero we make 2000. *However* - our prioritization scheme (we schedule in order of lowest percentage of safety stock first) ensures that we almost never run out, and we almost never make ultra-small runs. Under this scenario all you have to do is keep an eye on how often an item goes below safety and how far below it goes. If we find ourselves scheduling a product every day, we raise the stocking level. If we are only scheduling a product once a month, we lower it. 
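The replenishment and prioritization scheme above can be sketched like this. The SKU records and field names are hypothetical illustrations; the logic (make safety minus on-hand, schedule the largest percentage shortfall first) is from the post:

```python
def replenishment_plan(skus: list[dict]) -> list[tuple]:
    """Make (safety - on_hand) units for each SKU below its safety
    stock, scheduled in order of largest percentage shortfall first
    (a percentage basis, not a unit basis, as in the post)."""
    plan = []
    for s in skus:
        shortfall = s["safety"] - s["on_hand"]
        if shortfall > 0:
            pct_below = shortfall / s["safety"]
            plan.append((s["sku"], shortfall, pct_below))
    return sorted(plan, key=lambda x: x[2], reverse=True)

skus = [
    {"sku": "A", "safety": 2000, "on_hand": 1443},  # 557 short, ~28%
    {"sku": "B", "safety": 500,  "on_hand": 100},   # 400 short, 80%
    {"sku": "C", "safety": 300,  "on_hand": 300},   # at safety: skip
]
for sku, qty, pct in replenishment_plan(skus):
    print(sku, qty, round(pct, 2))  # B first: furthest below on a % basis
```

The percentage-basis ordering is what keeps the scheme from running out of fast movers while it dribbles one-unit runs: the deepest relative holes always get filled first.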
We target about a one week turn (production cycle is very small per unit - 3000 or more units produced per day). This stock turn number is not picked arbitrarily by management - it is not driven entirely by batch size - it is not based on expiration dates. It IS based on maximizing T (throughput) while also minimizing I (investment, in this case inventory) and OE (operating expense - in this case, buffered capacity in the form of labor and machinery). We strike a balance that allows us to meet our 2 day lead time (usually 24 hour shipments) while carrying a relatively small inventory (a week or so) yet still allowing reasonable batch sizes. Finding a win-win like this, IMO, only comes when you have discarded the false metrics and focused on T, I, and OE. I am not sure the solution I use here will work for you, but I am sure that if you focus on these three things, and believe that ToC applied correctly really does work, you can find the win-win in your plant.

-----Original Message-----
> From: Jean-Daniel Cusin [mailto:jdcusin@cybernostic.com]
> Sent: Tuesday, March 27, 2001 7:12 PM
>
> I'm quite familiar with the TOC references you suggested and how TOC deals with buffers.
>
> After reading your comments, I think the basic disagreement is regarding whether or not the establishment of lot-sizes is something one can leave to the last minute (i.e., something that you do as you schedule production) or if it is something that needs to be planned.
>
> My position is that lot-sizes can and should be planned because they impact stock turns, efficiency, lead-times (based on the lot-sizes and the number of items in the product mix) and they impact costs.
>
> Take the shelf life issue. You say it has to do with an inventory policy. Well, not really. If your batch size generates 10 months of stock and you have a 3 month shelf life, the problem is driven by your batch size. There is a direct cause and effect here. I think this "cause" needs to be managed before it generates the surplus or old stocks that need to be ditched because of some "inventory policy".
>
> You say stock turns are an inventory policy issue as well. Well, where do these stocks come from? They come from batch sizes and safety stocks, and these safety stocks are a function of demand variability over lead-time, and one important component of that lead-time is driven by batch sizes and the size of the product mix (the time it takes to cycle through the various items in the product mix). It's all integrated.
>
> Yes, most of the time, we will be using forecasts to determine lot-sizes because we need to give the ERP appropriate lot-sizing parameters. ERP uses forecasts as well, to estimate loading and plan capacity etc. We need such parameters to know that we can process our product mix in a given number of weeks. This allows us the possibility to ensure appropriate stocks, based on the lead-times that are inherently based on the batching we plan on doing.
>
> If batching was only a scheduling issue, none of the above would be possible or would have any bearing. And I'm not convinced that most companies, if any, can do away with that level of planning, TOC or not.
>
> Jean-Daniel Cusin

Sent: Tuesday, March 27, 2001 10:16 AM

Thanks for entering the debate. I follow your logic, but don't agree with it. Batch sizing isn't just a capacity/scheduling issue simply because batch sizing impacts the ability of the work center to produce a bit of everything every day, and therefore, it impacts lead-times, and any stock you maintain on account of lead-times (order points, safety stocks) or the make-to-order equivalent, the back-log. There are other issues besides market demand, bottleneck capacity and transfer batch size. For instance, shelf life limits may constrain batch sizes, forcing additional set-ups.
Where set-ups are performed by specialized workers, the bottleneck may be constrained to a number of set-ups that are smaller than the number that would otherwise be available simply based on available capacity. True, but shelf life issues are usually addressed in inventory policies, not process batch size (yes, I know there are exceptions). My point was, that if you are capacity constrained, then the first issue to address is satisfying the market demand by finding more capacity, which may include larger process batches. However, larger process batches result in larger amounts of inventory and may delay other orders and create secondary bottlenecks. This cannot be known ahead of the time you evaluate your resource availability and order book. This is what I mean when I say that batch size is dependent on capacity availability. If you are working to a forecast, then you are basing your logic on anticipated demand and anticipated capacity. If specialized workers are required to perform setups, and that is affecting my exploitation strategy, wouldn't that make them the constraints of the system? Then I would be focusing on their availability and my batch size would be dependent on their capacity. Management may have stock turn objectives as well. Yes, they do. That however, is a function of inventory policy and capacity availability, not process batch size. If I have a great deal of excess capacity, I can perform many setups with small batches. The opposite is true as well. If my plant is full, then larger batches are required to squeeze more product out of it. The relationship is between capacity and inventory. The greater the amount of extra (protective) capacity, the less inventory I need. The lesser amount of protective capacity requires more inventory to buffer against variation in the process. 
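The inverse relationship argued above - more protective capacity means less buffer inventory is needed - can be illustrated with a toy simulation. This is only a sketch with made-up numbers (the function name, capacities, and downtime probability are all mine, not from the discussion): a feeder station occasionally goes down, and the more surplus capacity it has over the constraint's demand, the faster it refills the buffer after an outage, so the same buffer protects throughput better.

```python
import random

def starved_days(feeder_capacity, buffer_size, days=2000,
                 demand=10, p_down=0.1, seed=7):
    """Toy feeder -> buffer -> constraint line (illustrative numbers only).

    Each day the constraint tries to pull `demand` units from the buffer.
    The feeder normally produces `feeder_capacity` units/day but is down
    (produces nothing) with probability `p_down`.  Returns the number of
    days the constraint starved, i.e. lost throughput.
    """
    rng = random.Random(seed)
    level = buffer_size                    # start with a full buffer
    starved = 0
    for _ in range(days):
        produced = 0 if rng.random() < p_down else feeder_capacity
        level = min(buffer_size, level + produced)
        if level >= demand:
            level -= demand
        else:
            starved += 1                   # constraint starved this day
            level = 0
    return starved

# Identical disruption pattern (same seed); only protective capacity differs.
print(starved_days(feeder_capacity=12, buffer_size=30))  # little protection
print(starved_days(feeder_capacity=16, buffer_size=30))  # more protection
```

With the same buffer and the same outage days, the higher-capacity feeder can never starve the constraint on more days than the lower-capacity one, which is the point being made: the less protective capacity you have, the more inventory you need to buffer against variation.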
But the more significant constraint is often that of make-to-order lead-times, where batches are consolidated based on booked orders, and one needs to identify the right amount of backlog that will allow a level of consolidation so that the number of resulting set-ups doesn't exceed bottleneck capacity. Correct. That is why batch sizing (consolidation) is done at the time of scheduling the plant. So the optimization can have several objectives, depending on the context. That's why I started with the objective. The ToC philosophy is basically stating that Throughput (generating money) short and long term is the prime optimization objective. All others are secondary, and more of a necessary condition than an optimization objective. To say that batch sizing is only a capacity issue ignores the fact that batches cause inventories and lead-times, and these are significant issues of strategic importance. Yes, the amount of inventory in work in process has a direct impact on lead times. Yes, they are important to the competitive edge of the company. What I've tried to show is that there is a direct relationship between batch size and capacity. The batch size is derived from the capacity utilization strategy. Moreover, batch sizing is not just a tactical issue, something that one does when scheduling next week's production, simply because of the stock and lead-time impacts of batching. That is why batching needs to be looked at from a more strategic vantage (by fine-tuning those batching assumptions we put into ERP systems as order policies) and by adjusting the other ERP planning parameters (lead-times and stock trigger points) appropriately, based on the consequence of the chosen batch sizes. Yes, the batching parameters within ERP are a problem, and they need to be modified, but there is no connection between process batch size and inventory policy (reorder points, etc.). The result of an item falling below an inventory level is to create a demand on the plant.
These kinds of demands have the same impact that orders do; they create load on resources. One cannot plan batch size without taking into account the load on resources. This can't be known ahead of time. However, you can establish policies and targets for inventory turns (buffer sizing), but again, this is based on the amount of protective capacity the plant has and the variability the plant experiences, internally and from the market. All this must happen before any thought of scheduling, which is the domain of OPT. I can see where OPT can optimize the scheduling, but OPT needs to operate within a planning context where the production constraints and resulting stock or lead-time buffers have already been worked out (that's where Lance-Lot comes in), so that the schedule can unfold without constant disruptions due to unbuffered emergencies. I suggest you read up a little on buffers and how they are established. The buffer is established for one reason: to protect plant performance (throughput, delivery performance) from normal variation. When a plant has protective capacity, not much buffer is required, because the non-constraint resources can compensate for variation. When the plant has little protective capacity, more buffer is required, because it takes longer for the non-constraints to compensate for variation. The best ToC packages like OPT and Thru-Put have a process to dynamically size these buffers, based on management's tolerance for risk. Those who do not have such packages make simplifying assumptions about batch sizes and thus must maintain a higher level of protection (higher inventories and longer lead times). If they are managing the buffers correctly, they can validate the buffer sizes and make adjustments (of capacity or batch policy) based on those results. Very few companies really know how much variation exists in the process or market demand until they do this. Jean-Daniel, have you read The Haystack Syndrome?
What Is This Thing Called The Theory of Constraints? These will give you more insight into the batch size dilemma and the ToC approach to solving it. --- From: "Jean-Daniel Cusin" Subject: [cmsig] RE: Drum and Capacity Buffers Date: Fri, 4 May 2001 14:07:45 -0400 What I did in one instance was to establish buffers with three clear levels. When the buffer was filled to the top third level, a red light came on and that was a signal to move a pre-established number of persons from the feeding work center to the downstream work center. When the buffer reduced to the middle third level (yellow light), the people moved back to their original department. When it hit the lower third level, (green light) personnel from the downstream department went to work for the feeding department until we moved back into the yellow zone. With time, the total size of the buffer was reduced as the inherent variability between the two processes reduced (although a lot of it was caused by the model-mix). So, in this situation, the buffer was used to absorb "normal" variability, but when the variability exceeded the capacity of the buffer to absorb, capacity was mobilized, triggered by the buffer, to preserve throughput through the "new" bottleneck. -----Original Message----- On Behalf Of Fenbert, Jeffrey A Sent: Friday, May 04, 2001 13:55 I would like to understand the experience people have had with the use of capacity and drum buffers. First, have people set up systems which contain both capacity and drum buffers? If not, how did you evaluate the need for one versus the other? Also, if you choose to use only capacity buffers, is there a typical system response that would tell you that the drum buffers should be added? 
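Jean-Daniel's three-light scheme above amounts to a simple rule mapping the buffer's fill level to a labor move. A minimal sketch (the function name and the exact thresholds at thirds are my reading of his description):

```python
def buffer_signal(level, capacity):
    """Three-zone buffer signal, per the post above.

    red    (top third)    -> move helpers from the feeding work center
                             to the downstream work center
    yellow (middle third) -> everyone works in their own department
    green  (bottom third) -> downstream people help the feeding work center
    """
    fill = level / capacity
    if fill > 2 / 3:
        return "red"
    if fill > 1 / 3:
        return "yellow"
    return "green"

print(buffer_signal(90, 100))   # -> "red": buffer filling, help downstream
print(buffer_signal(50, 100))   # -> "yellow": normal operation
print(buffer_signal(10, 100))   # -> "green": buffer draining, help the feeder
```

Note that his actual scheme has hysteresis (the green-zone move lasts "until we moved back into the yellow zone"), which a stateless lookup like this ignores; a real implementation would track the previous state to avoid oscillating at the zone boundaries.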
--- From: "Potter, Brian \(James B.\)" To: "Constraints Management SIG" Cc: Santiago Velásquez Martínez X-OriginalArrivalTime: 25 Jan 2005 14:57:30.0484 (UTC) FILETIME=[31704340:01C502EE] X-MW-BTID: 100025000020050255379400004 X-MW-CTIME: 1106664994 X-MW-SENDING-MTA: 136.1.7.9 HOP-COUNT: 1 X-MAILWATCH-INSTANCEID: 010200084859fd0a-eaa5-4571-9e53-89ff8a508c52 List-Unsubscribe: Reply-To: cmsig@lists.apics.org Santiago, In either DBR or S-DBR, every order has a planned arrival time at the shipping point (usually, at or a little before the promised shipping time). Orders which actually reach the shipping point by or before that planned arrival time are in the "green zone." Also in the "green zone" are orders which currently have EXPECTED arrival times by or before the planned arrival time. "Yellow zone" orders are those which currently have EXPECTED arrival times after the planned arrival time BUT normal lead time variation is large enough so that these orders might "catch up" as a consequence of normal processing variation. These orders go on a "watch list" so that if variation drives them further behind (rather than helping them catch up), operations management may apply expediting methods to process them more quickly. "Red zone" orders have EXPECTED arrival times at the shipping point so far behind the planned arrival time that it is most unlikely (say, a 5% to 10% or less chance) that they will arrive on time. Operations management begins expediting these orders immediately. The DBR drum buffer works the same way. Notice that the WiP "in the buffer" is (mostly) not physically at the shipping point (or constraint). Buffer management actually comprises making predictions (based on what one knows of historic lead times and variations in same) of when each order will reach each control point (constraint buffer or shipping buffer) in its routing. When calculations show (for example) a better than 50% chance of on-time, the order is "green." 
10% to 50% chance of on-time is "yellow," and less than 10% chance of on-time is "red." Depending upon one's prediction accuracy, capacity for watching yellow orders, and capacity for expediting red orders, one might adjust the percentages to suit one's own operation. Further note that POOGI exploits information from the buffer management. Resources frequently hosting "red" orders (make a Pareto List) come under investigation because they may have insufficient protective capacity or for some other reason (long setups, poor local management, process problems, material problems, upstream logistics problems, ..., you will discover when you "go to the spot") have difficulty effectively subordinating to the constraint. When you eliminate the bulk of your "red zone" drivers (which will include quality improvements [because fixing quality problems is one cause driving delays] as well as direct lead time compressions), you can shorten your lead times, reduce your process variation expectations, and keep right on going with more improvements. Since your lead time compression activities will include shortening setup times, you will have flexibility to set up more often (except at a known constraint). Thus, you may reduce transfer batch sizes and further shorten lead times correspondingly. :-) Brian -----Original Message----- From: Santiago Velásquez Martínez [mailto:ifsavel@gmail.com] Sent: Monday, January 24, 2005 6:20 PM To: Potter, Brian (James B.) Subject: Doubt Hello Brian Sorry to bother you, but I am confused about something. After reading several books on DBR, I have seen that people use Shipping Buffer with different definitions. What I mean is: Some treat Shipping Buffer as the time between the release of raw materials and the shipping point. Others treat Shipping Buffer as the time between the CCR (if one exists) and the shipping point. What is the proper definition? I know concepts are what counts, but I wanted to have this definition clear.
As I understand it: Shipping Buffer is the time between the CCR and the shipping point. If no CCR is active (simplified DBR), then the shipping buffer is between release of materials and the shipping point.

+( Cadillac and ToC
From: "Bill Dettmer" Subject: [cmsig] RE: General Motors Case Study Date: Thu, 22 Mar 2001 07:56:05 -0800 Several people have contacted me off-list for a copy of the SUCCESS Magazine article (Feb 1995) that has the story about Cadillac and Eli Goldratt in it. Because SUCCESS sells reprints and single back issues, they have an interest in protecting their copyright. I won't violate that right. The URL for their archive is: http://www.successmagazine.com/Archivesnew.html If you contact them directly (contact information on the web site), I'm sure they'll send you a copy of a back issue if they have it. If they don't have one, the only recourse I can think of is to contact the Goldratt Institute. They had a large volume of reprints of an earlier SUCCESS Magazine article about Goldratt. I can't imagine that they wouldn't have this one, too.

+( calculation of savings
From: OutPutter@aol.com Date: Mon, 5 Jun 2000 09:22:29 EDT Subject: [cmsig] Re: Depreciation, a TOC topic - clarity request In a message dated 6/3/00 11:21:17 AM CST, caspari@iserv.net writes: > Hi Jim - > > You raise an interesting point that I have not previously addressed on the > list. But first, I need a little clarity to be sure that I have my > definitions right. > > > S = savings per year = 1000 > > How are these calculated? Do they represent the estimated differential cash > flow? Or are they the estimated change in operating income as calculated by > the accounting system? I didn't specify because I didn't want to start a debate over whether the savings were real or not. To make the savings real, you would assume S represents differential cash flow.
> > Payback period = (CN-R)/S IF (B-R)/CN < .1 > > > Payback period = (CN-R)/(S+B) IF (B-R)/CN > .1 > > I don't understand the reasoning behind combining the annual savings (S) > with the lump-sum amount of the net book value (B) in the denominator of the > second equation. Ooops, correction: The second equation should read Payback period = (CN-R)/(S-B) IF (B-R)/CN > .1 That's my way of saying the person calculating the savings when there is a "large" book value probably inflated the savings. I counter that inflation by subtracting the book value. An attempt to inflate beyond that point would likely be obvious. > In the first equation, the result is the number of years required to recover > the incremental investment (the payback), but a similar statement cannot be > made for the second. If the assumption about the estimated savings in the second equation is true, inflated savings that is, then the two equations will reduce to the same equation. > > Maybe we should have two different methods. One for constraint > > decisions and one for the others. > > > Constraint Payback Period = (C-R)/S > > Is C the same as CN above? Ooops again, yes. > > Non-constraint Payback Period = (C-R+CO)/(S-B) > > Is this equation analogous to the second payback period equation above? > (Note the sign of the expression in the denominator.) > > Is the purpose of this formula to block investments at non-constraints? > > Best, > John It's analogous to the second payback period equation above with the added penalty of a larger numerator by the amount of the old machine. Hence the question, is it too harsh? I know there isn't a theoretical basis for such a calculation yet; I'm just trying to deal with a real-world dilemma. Too many times in my world decisions are made regarding non-constraint equipment and without a TOC perspective, it's impossible to understand the futility of such decisions.
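Jim's heuristic, with the corrected (S-B) denominator stated above, can be written out and checked with his own numbers (CN = 1500, R = 500, S = 1000); this is only a sketch of his proposal, not an established formula:

```python
def payback_period(CN, R, S, B, threshold=0.1):
    """Jim's payback heuristic with the corrected denominator.

    CN: cost of the new machine     R: residual value of the old machine
    S:  claimed savings per year    B: remaining book value (net of depreciation)

    When the undepreciated book value B is large relative to CN, the claimed
    savings are presumed inflated, so B is subtracted from the denominator
    as a counterweight.
    """
    if (B - R) / CN < threshold:
        return (CN - R) / S
    return (CN - R) / (S - B)

# One year in: book value 800 -> (B-R)/CN = 0.2, penalty branch applies.
print(payback_period(CN=1500, R=500, S=1000, B=800))  # -> 5.0 years
# Fully depreciated: book value 0 -> plain payback.
print(payback_period(CN=1500, R=500, S=1000, B=0))    # -> 1.0 year
```

As Jim himself says, there is no theoretical basis for the penalty branch; note also that it breaks down when B >= S, where the denominator goes to zero or negative.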
--------------- At 02:31 PM 06/02/00 EDT, Jim Fuller wrote: >John, > >Did you consider the unstated fact that your P&L will be charged $300 (800 >book value less 500 recovery) in Case 1 and nothing in Case 2? (Add three >zeros first.) If you're the manager and it's one year after you spent a >considerable sum on the older machine, would you even look at the new >machine? Let me take a stab at another formula for payback. > >CN = cost of new machine = 1500 >CO = cost of old machine = 1000 >S = savings per year = 1000 >R = residual value of older machine = 500 >B = book value of machine net of depreciation > >Payback period = (CN-R)/S IF (B-R)/CN < .1 >Payback period = (CN-R)/(S+B) IF (B-R)/CN > .1 > >That means the person who calculated the savings probably inflated the >savings if the decision is considered after year one. After five years, the >calculation is probably closer to normal. The reasons are all CYA >psychological type reasons I'm sure. Most of the time, since inflated >savings are less and less likely to pass close scrutiny, the option is never >considered. That's OK because we had to assume away all the negative effects >of doing the project when it's not the constraint anyway. > >Maybe we should have two different methods. One for constraint decisions and >one for the others. > >Constraint Payback Period = (C-R)/S >Non-constraint Payback Period = (C-R+CO)/(S-B) > >Is that too harsh? > --- >In a message dated 6/1/00 1:28:48 PM CST, caspari@iserv.net writes: > >> John had written: << I would make the decision based on the estimated >> future differential cash flows and elevating a constraint or satisfying >> necessary conditions. >> >> >> Jim asked: >> >> << Consider the following two situations. >> >> Given - I bought a machine that is being depreciated over five years that >> cost $1000. I've got no internal constraints. The new machine can only be >> justified with cost savings (assume them to actually exist). Ignore tax >> effects for now.
>> >> Case 1: >> One year later I consider a new machine. >> Accumulated depreciation = 200 >> Scrap value of old machine = 500 >> New machine costs = 1500 >> Savings over next two years = 1000 >> >> Case 2: >> Five years later I consider a new machine. >> Accumulated depreciation = 1000 >> Scrap value = 500 >> New machine costs = 1500 >> Savings over next two years = 1000 >> >> Could you please illustrate the decision making process you would use? No >> trick question - I'm honestly just confused how you do it without >> depreciation affecting the decision. >> >> >> Sure, but first, my assumptions: >> (1) the savings do not involve labor cost savings, >> (2) the savings are real cash savings, >> (3) the other items are also cash items, >> (4) throughput is unaffected, >> (5) case 1 and case 2 are mutually exclusive and >> (6) at the end of two years there are no further anticipated savings and >> the company would be in the same shape that it would have been without the >> new machine. >> >> My financial analysis follows (note that both cases would have exactly the >> same analysis because the cash flow patterns are the same): >> >> The initial cash investment required = (new machine cost of $1,500 less the >> residual value of the existing machine of $500) = $1,000 >> >> Future annual cash flows: $1,000 for each of two years. >> >> Payback period: 1 year, internal rate of return approximately 60% for the >> two year period. >> >> This investment is nothing spectacular, but go ahead and make it (subject to >> the things that you mentioned previously about flexibility, etc.).

+( Cash Flow
From: "Michael Seifert" To: "CM SIG List" Subject: [cmsig] RE: Calculation of Inventory/Investment Date: Sun, 5 Dec 1999 19:53:02 -0500 Please correct me if I am wrong but RM incremental to a sale would not be I or OE. It is covered through T (same as a sale commission which is incremental).
T is equal to the cash coming in (sale$) minus the truly incremental cost such as the RM used, commissions directly tied to the sale, and even freight if directly incremental. I is the investment (which includes inventory held "in the belly of the beast") which we have intent to consume. Cash flow is T - OE - (delta)I. The truly variable element of the sale is covered through T. I is such things as the dollars tied up in building, machines, RM, WIP, and FG currently not being converted to cash. Some examples: We buy a machine. The purchase cost becomes I. As we depreciate it, time based over 10 years, the depreciation is OE and we reduce the value of I. When we buy RM, approximately 180,000 lbs at a time, it goes into I. As we sell an item we take the sales$ minus the cost of the RM to drive T and reduce I by the RM amount.

+( Caspari John
From: Caspari@aol.com Date: Mon, 20 Dec 2004 11:36:58 EST Subject: Re: [tocleaders] What's it take to learn new paradigms In a message dated 12/20/04 8:50:44 AM Eastern Standard Time, j_m-bowles@tiscali.co.uk writes: The question that this raises is how do we best teach someone a different paradigm without the use of a simulator? Hi Jim - My background is as a cost accountant. I am also the revision author of the third (1980) and fourth (1992) Management Accountants' Handbook section on Overhead Costs--Distribution and Application (as well as the original author of the fourth edition supplement (1993) section on the Theory of Constraints). I mention this to show the depth of my Cost World paradigm. I must admit that my paradigms began to change strictly as a result of exposure to the Schragenheim simulator in an Executive Decision Making (EDM) seminar led by Bob Fox. Once the initial shift took place, further shifts were much easier and did not need simulation. (Nevertheless, I have found, and still find, simulator solutions to be enlightening and useful in understanding the interrelationships in a situation.)
In spite of such a bold admission, I must also admit that it took me another seven years to completely shift to the paradigm of the simplicity of the Throughput World that lies on the far side of complexity. Looking back, there seem to have been two things that were instrumental in my making the paradigm shift(s). First, I needed a strong reason to pursue the new paradigm. Second, I needed to understand why I could not stay in the old paradigm, thereby eliminating the risk of leaving that comfort zone. As to the first, a weak motivation was provided by the general dissatisfaction with product costing methodologies as control mechanisms evidenced in the early 1980s. This dissatisfaction was verbalized by Johnson and Kaplan in *Relevance Lost* and Goldratt in his *Cost Accounting: Public Enemy No. One of Productivity* presentations to APICS and later to the Institute of Management Accountants (1985). These verbalizations were confirmed by what I was hearing from clients in my consulting practice. A stronger motivation was provided three years later in the EDM. Here I took a "Plant 10" that I had learned to run effectively and turned it into a disaster by implementing efficiency and batch sizing policies that I had been teaching for twenty years. The stronger motivation seemed to come from the "emotion of the inventor" provided by the EDM simulation. So, how to teach a new paradigm without a simulator? We believe that it can be done in real time utilizing something like a POOGI (Process of OnGoing Improvement) Bonus which is paid when global constraints are elevated and which is clearly tied to that elevation. The possibility of such a bonus provides a weak motivation ("OK, we'll wait and see"). When a substantial bonus actually has been paid three or four times, the new paradigm is confirmed and the perceived risk of leaving the old paradigm is reduced, providing a strong motivation ("OK, let's go for it"). 
John Caspari Constraint Accounting Measurements (616) 940-6075 http://casparija.home.comcast.net/

+( categories of legitimate reservation
From: "Mark Woeppel" Date: Tue, 2 Oct 2001 11:16:37 -0500 Reply-To: tocexperts@yahoogroups.com Subject: RE: [tocexperts] Categories of Legitimate Reservation They came out of the development process of the thinking processes. As we worked to develop processes to check our logic, we came up with the "rules" for challenging each other's work product. We called them legitimate reservations. My memory is quite dim on this, let's see: 1. Entity existence - what you are stating does not exist in reality 2. Causality - this doesn't cause that effect - must be stated with another cause 3. Sufficiency - I agree that this does cause that effect, but is insufficient by itself to create the effect - another contributing cause must be stated 4. Clarity - I don't see how this causes that - the "long arrow" --- From: "Scott D. Button" Date: Tue, 02 Oct 2001 09:49:30 -0700 Subject: [tocexperts] Categories of Legitimate Reservation (CLR's) I don't know where the CLR's came from, but in response to Mark's note, here is the full listing of the CLR's. The Categories of Legitimate Reservation (CLR) are simply rules for validating logic. They can be applied to clouds, trees, and also (with great care) in discussions. The proper use of the trees and clouds requires adherence to the Categories of Legitimate Reservation. The CLR are also known as manners in the TOC for Education field. Level One Reservations Source: "Thinking for a Change" Clarity: What do you mean by the words in the box? What do you mean by the arrow that connects the boxes? Entity Existence: Do the entities in the boxes really exist in our current reality? Causality Existence: Does the cause really make the effect happen? Level Two Reservations Source: "Thinking for a Change" Additional Cause: Couldn't the effect be explained by some other (new cause)?
Additional cause means the effect could be completely explained by some other cause. Insufficient Cause: That cause alone could not create that effect; there must be something else, such as (other cause). The other cause is "anded" in using the elliptical "and" connector. Predicted Effect: If that cause were there, I'd expect to see (new effect). Cause Reversal: The arrow is going the wrong way. Tautology: Circular logic. The effect is justification for the cause. Also, don't forget Dogbert's CLR's: 1. Faulty Cause and Effect: Example: On the basis of my observations, wearing huge pants makes you fat. 2. The Few are the Same as the Whole: Example: Some Elbonians are animal rights activists. Some Elbonians wear fur coats. Therefore, Elbonians are hypocrites. 3. Total Logical Disconnect: Example: I enjoy pasta because my house is made of bricks. 4. Circular Reasoning: Example: I'm correct because I'm smarter than you. And I must be smarter than you because I'm correct. 5. Incompleteness as proof of defect: Example: Your theory of gravity does not address the question of why there are no Unicorns, so it must be wrong. 6. Reaching Bizarre Conclusions Without Any Information: Example: The car won't start. I'm certain the spark plugs have been stolen by rogue clowns. 7. Faulty pattern recognition: Example: Her last six husbands died mysteriously. I'm hoping to be husband number seven. 8. Inability to recognize additional causality: Example: The Beatles were popular for one reason only: They were good singers. From "The Joy of Work" by Scott Adams.

+( change
don't try to change the direction of the wind - set your sails according to the given wind
Date: Fri, 01 Sep 2000 13:27:03 -0400 From: "MARK FOUNTAIN" Subject: Re: [cmsig] Re: Continuous Improvement (POOGI) For Organizational Change and Development the following 8 points will probably be the drivers to implement permanent useful change: 1. Create a sense of Urgency 2. Put together a strong enough team 3.
Create an appropriate vision 4. Communicate the new vision 5. Empower employees to act on the vision 6. Produce sufficient short-term results 7. Build momentum and use it on tougher problems 8. Anchor the new behavior in the corporate culture

+( change
Tony Rizzo: I recall a brief story that I read about Sun Tsu. I paraphrase. In the story, the king tasked him with getting the army in shape. Sun Tsu decided to illustrate a few principles using the king's concubines. He selected the king's two favorites, told them that they were the officers, and gave orders to the two to have the rest perform some activity. The two looked at him, smiled, and ignored him. Sun Tsu explained to the king that his own orders may have been unclear. Therefore, Sun Tsu's first obligation was to provide clarity to the two concubines in a management position. He did, and then he repeated his orders. Upon being ignored a second time, Sun Tsu stated that once clarity had been achieved, the problem could no longer rest with the leadership. Now it was clear that the managers were the problem. He promptly had the two concubines killed, despite the king's objections. When he assigned two new concubines to the management positions, Sun Tsu was no longer ignored. --- Tony - to me your e-mail below typifies the reaction to behavior change I've seen in the workplace. Let me explain. In the book I keep referring to, the author states that consequences are 4 times more effective at getting permanent behavior change than antecedents. In your e-mail below, you take great pains in defining what I think of as antecedents, but spend much less effort explaining consequences. I'm not saying that clearly communicating isn't important; I'm saying that everyone I've ever worked with who was trying to get people to change behavior always worked the antecedents, but largely ignored the consequences. Exactly the opposite of the appropriate approach. To me we've come full circle.
This e-mail thread started with a proposal to list positive consequences managers could use to elicit appropriate behavior change. If their hands are tied in terms of salary and bonuses, you've got to be creative. This is where I get stuck. Terry -----Original Message----- From: Tony Rizzo [mailto:tocguy@pdinstitute.com] Sent: Thursday, March 28, 2002 9:13 AM To: tocexperts@yahoogroups.com Subject: Re: [tocexperts] Digest Number 355 Your last sentence says it all: "... people ... don't permanently change their behaviors very easily, at least using the methods I've seen in the places I've worked." And there's the rub. You already have the tools with which to identify the conditions that cause permanent changes in behavior. I assume that you have a group of engineers that continue to do that which they have always done, despite our respected, executive friend's wishes to the contrary (say hello to A for me). First, make sure that clarity has been achieved. Ask our friend to hold an open meeting with all the affected parties, during which he can achieve clarity. Second, confirm that clarity has been achieved. Ask, no, require everyone to acknowledge with a signature that he/she fully and clearly understands the exec's instructions. Anyone who refuses to sign gets referred to a psychologist for help. There's no reason to not sign. The signature signifies only that the person understands the message. When you have all the signatures, you have supporting evidence that everyone understands what's being asked of them. Third, at the same meeting, require everyone to sign a letter of intent. They either intend to comply, or they intend to not comply. No one need be punished or even threatened with punishment. The purpose of this is simply to identify those who really are the resources of the organization. There's no point in assigning work to people who fully expect to not comply with the exec's instructions. The work won't ever get done anyway.
It's better to plan the projects using only the real resources of the organization. But to do that, you have to identify the real resources of the organization. These steps give you compliance, which is the first condition required for permanent behavior change. Compliance affords management the initial opportunity to create the positive reinforcements that make the behavior change permanent. To make the behavior change permanent, management must take every opportunity to positively reinforce the right behaviors, starting with the very first instance of any behavior that even comes close to being correct, like the compliance step above. From then on, management should lavish rewards upon those who behave correctly and completely ignore (particularly at performance review) all the rest. Engineers are not motivated by money alone. Some are not motivated at all by money. Juicy work assignments and acknowledgement work for many engineers. Now, I suppose that some lurker is going to point at some low-probability event, like, "what if they all refuse to sign?" and try to shoot holes in this. If you're such a lurker, wake up. It's pointless to make policy aimed at precluding low-probability events. Policies must be designed to influence what most people do most of the time. ----- Original Message ----- From: "Tsuchiyama, Terry K" To: Sent: Thursday, March 28, 2002 11:48 AM Subject: RE: [tocexperts] Digest Number 355 > OK - so where do we go from here? The book I referenced in my first e-mail > was written by a behavior scientist. Avoid pain may be a fundamental human > characteristic (I believe it is), but there's a lot more involved if you > want to permanently change behavior. Unfortunately we're not working with > robots; people are very complex intellectual machines and many (most?) don't > permanently change their behaviors very easily. At least using the methods > I've seen in the places I've worked.
> > ----- Original Message ----- > From: "Tsuchiyama, Terry K" > To: > Sent: Thursday, March 28, 2002 11:25 AM > Subject: RE: [tocexperts] Digest Number 355 > > > > Tony - I think part of the difference (maybe most) between what you and I > > wrote has to do with the assumptions. So let me state them here. > > > > I'm assuming that management has prioritized the projects and that they are > > not sending conflicting messages. Meaning, they are not asking for progress > > on several projects at the same time. To summarize, management has told the > > workforce that they want them to work to the priorities and has told the > > workforce what the priorities are. But, for reasons I have yet to figure > > out, the workforce more or less continues to behave in the same way they > > always did. > > Another way of stating the above paragraph is: one of my fundamental > > assumptions is that if management is really serious about implementing > > critical chain, they must behave appropriately. This means they must > > prioritize the projects and tell everyone to work to those priorities (as > > I said in the first paragraph). > > My experience at Boeing mirrors Larry's poll results. > > Based on what you wrote in the e-mail below and what you've written previously, > > I think your experience doesn't agree with Larry's poll. I've pasted the > > results of Larry's poll below: > > > > Our poll on multitasking is a bit anemic; but the results to date > > (Which do you have most difficulty with) are: > > > > 1 Listing Priority > > 3 Organizing Around Priority > > 9 Discipline to execute to priority > > > > So before we move any further in this discussion, please let me know if I've > > correctly captured your thoughts or not. I'm not trying to put words in > > your mouth, but to just understand the assumptions behind your e-mail.
> > > > -----Original Message----- > > From: Tony Rizzo [mailto:tocguy@pdinstitute.com] > > Sent: Thursday, March 28, 2002 7:52 AM > > To: tocexperts@yahoogroups.com > > Subject: Re: [tocexperts] Digest Number 355 > > > > Sorry! I beg to differ. > > > > Certainly, consequences do shape behavior. But it's not just a matter of > > creating positive reinforcements. The negative consequences that are > > avoided by multitasking also must be eliminated. In fact, these must be > > eliminated first, or no set of positive consequences, no matter how > > positive, will have any significant effect. > > > > To eliminate the negative consequences that managers and resources > > avoid with their multitasking, the leadership must prioritize the > > organization's projects. Unless the projects are prioritized, nothing changes! > > Then, we can begin discussing additional positive consequences with which to > > reinforce the right behaviors. > > > > Why do I say this? I observe that nearly all people nearly all the time > > behave according to the following rules: > > > > 1) AVOID PAIN. AVOID PAIN. AVOID PAIN. AVOID PAIN. > > > > 2) Do what feels good. +( chronic conflicts From "Stefan van Aalst" To "Constraints Management SIG" Subject [cmsig] Re Strategy Tree Date Tue, 9 Nov 2004 10:26:22 +0100 Anything that is chronic can't be 'resolved' in the way, for instance, a specific cloud is being addressed. One of the characteristics of something that is 'chronic' is that it expresses itself in different ways.
Unless we're dealing with a physical law (and even this is questioned by some), the cause of the 'chronic' lies in our minds. Not so much on the surface, but deeply embedded in our core values, norms, axioms, etc., and thus in a real sense it forms part of our own identity and of how we (re)act in specific situations. When something is 'chronic', a firm decision is necessary but for sure not sufficient. First note that the decision is not related to a specific dilemma/conflict/situation ...the decision is aimed at changing a core value, norm, axiom, etc. and thus must result in changing our identity and behavior in every similar situation that fits the 'chronic cloud'. Old habits die hard, and this makes it necessary to venture on a process of outgrowing the old behavior. Initially the focus must be on changing the behavior rather than the results. And in the change process itself lies an interesting pitfall. Basically the question is: can we be a Baron Munchausen? Can we pull ourselves out of the swamp by our own hair? Deming, Einstein, etc. all had their explicit views on it. Quite often, that which causes us an 'allergy' is exactly what needs to come into place. The allergy stems from our 'strong point(s)'. When venturing on a road of change with all its unknowns, it is very tempting to rely on those strengths, and back we fall into the trap. An interesting thing is the assumption of a 'common objective'. Formulated in this way it means that either we both have it, or nobody has. In my experience, when one dives a bit deeper into it, it is more often an 'objective in common' that is mistaken for a 'common objective'. The difference being: I (believe that I) don't (necessarily) need the other to achieve the objective. Bill Dettmer and others go even further and have pointed out that a conflict can easily exist without a common objective or even an objective in common. Not only in huge-scale situations, but also all the way down to the individual.
It is a pattern that seems embedded in us humans. Applying one of the application principles of TOC (don't try to manage the unmanageable ...when possible create a buffer to protect oneself against it), a solution appears. Instead of trying to focus on resolving the conflict (friction with others) or dilemma (friction within oneself/the group), create a situation where nobody needs to change (much) and the conflict/dilemma isn't relevant any more. A standard direction for a solution in TRIZ is to separate in time and/or space. Now, instead of venturing into a difficult and complex situation, two Viable Visions can be created almost independently of one another ...as long as the separation is honored and strictly applied. Later in time the situation might have changed sufficiently that instead of two Viable Visions, one more optimized single Viable Vision can be created and implemented. Of course factions within factions will give a lot of headache. But a process of development is seldom one of full consensus; for this, leaders are required who are willing to subordinate to the overall goal by making decisions that nicely exploit the current (strategic) constraint and subordinate all the non-constraints. Of course the leader needs to have a Viable Vision in mind for this. Very colored by an interpretation of the Focusing Steps, but are there alternatives? Stefan -----Original Message----- From Jim Bowles [mailto:j_m-bowles@Tiscali.co.uk] Sent Monday, November 08, 2004 2:23 PM To Constraints Management SIG Subject [cmsig] Re Strategy Tree Hi C.J. Eli Goldratt will tell you that there are chronic conflicts that we can resolve and chronic conflicts that we cannot. The tools could provide the means to facilitate the needed change, but both sides would have to be willing to agree that they have a common objective. Two strongly opposed "paradigms" would make this extremely difficult. But wouldn't it be great if the opposing factions could produce a Viable Vision?
From my own experience of that region the hardest part would be to decide which are the two sides. There are so many divisions even within each community. J B Expanding?!? How far can this go? Who believes that Goldratt will solve the Middle-East conflict? C. J. On Sat, 2004-11-06 at 22:55, ConsultSKI@aol.com wrote > Santiago > I cannot tell from your short query. I have heard Goldratt is expanding the strategy and tactics arena of TOC. I personally prefer Dettmer's approach. Bill has used Boyd's OODA loop and TOC's Logical Thinking Processes to create "Strategic Navigation." > > http://www.applyingcommonsense.com/SN_Review.html > > or go to the source www.goalsys.com > > -ski +( clouds = +( trees From: "Larry" Date: Tue, 31 May 2005 08:20:36 -0600 Subject: [tocleaders] Using the EC Hi, Michael Very nice summary and example. I agree that a group does not have to know how to use the cloud when facilitated. However, I have found that some people have a huge problem learning how to build a cloud on their own. Perhaps it is in the wording I use when teaching it (I have used the wording in the AGI MSW package), but some people can't seem to keep the needs (requirements) and wants (prerequisites) separate in their minds, or consistently reverse them. Others just seem completely unable to use the model to verbalize the cloud, "In order to have (A,B,C) I must first have (B,C; D; D') because of (assumption)." No matter how many times I read it for them that way, and coach them to read it that way, they stumble. I think that is tied up in their inability to decompose the problem into a need and a want. I would add to your requirements: the problem (or conflict) is reducible to a binary situation. Although I have seen people attempt approaches to use the cloud to decide amongst multiple options, I do not find it useful for that purpose.
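Larry's verbalization template can be made concrete with a short sketch. This is a minimal, hypothetical rendering of an Evaporating Cloud; the `Cloud` class and all entity texts are invented for illustration and are not from any TOC software:

```python
# Minimal sketch of an Evaporating Cloud and its standard verbalization.
# All entity texts below are invented examples, not from the original post.

class Cloud:
    def __init__(self, a, b, c, d, d_prime):
        self.a, self.b, self.c = a, b, c          # objective and requirements
        self.d, self.d_prime = d, d_prime         # conflicting prerequisites

    def readings(self):
        """Render the 'In order to have X, I must have Y' sentences."""
        return [
            f"In order to have {self.a}, I must have {self.b}.",
            f"In order to have {self.a}, I must have {self.c}.",
            f"In order to have {self.b}, I must have {self.d}.",
            f"In order to have {self.c}, I must have {self.d_prime}.",
            f"But {self.d} and {self.d_prime} are in conflict.",
        ]

cloud = Cloud(
    a="a successful project",
    b="predictable delivery",
    c="fast delivery",
    d="large task-level safety margins",
    d_prime="aggressive task estimates",
)
for sentence in cloud.readings():
    print(sentence)
```

Reading the five sentences aloud, in order, reproduces exactly the binary decomposition Larry describes: one objective, two requirements, and two prerequisites that collide.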
For multiple options, I prefer one of DeBono's tools, or, for more substantial, complex cases, robust decision making (see www.Robustdecisions.com). Sometimes the multiple option/criteria decision tools can lead you to a residual binary decision, where the cloud can help. The cloud is better when you can use it because people like to ascribe a single reason to their decisions. Multi-attribute decision tools often leave a sense of uneasiness, because all you can say is, "the weighted criteria evaluation favored this solution." Often, those that do not buy in have different weightings. The Accord tool helps you work through that. Could you elaborate on your statement, "Success results from...You find that no common objective exists?" Regards, Larry Leach Date: Tue, 31 May 2005 01:20:32 -0600 From: "Michael" Subject: RE: The Cloud Time and time again the "Cloud" has helped me to resolve difficult conflicts. But it cannot solve every kind of problem. For the cloud to work we need to have the following: (1) A mutual objective. (2) An understanding of what you want. (3) An understanding of what your employee / lieutenant / spouse / child / system wants. (4) An understanding of why you want what you want. (5) An understanding of why your employee / lieutenant / spouse / child / system wants what they want. (6) An opportunity to present the cloud and surface and challenge the assumptions surrounding the conflict. (The tug of war over the matter) a. This can be tricky. Sometimes the cloud is visually presented and other times it is presented verbally. b. I like the visual method as it keeps the discussion focused and stops the other member of the discussion from changing the topic in rapid succession in order to divert attention away from the conflict at hand. (7) Success results from the following: a. You find that no common objective exists. b. You find one or more erroneous assumptions that cause the cloud to collapse. c.
You find a common solution that meets the needs of both sides of the conflict, again causing the cloud to collapse. I find that no explanation is needed to use the cloud as most people get it. The first time I ever used the cloud was when I walked in on one of my technology clients in an all-out verbal brawl over how a software module was to be implemented. Petrified as all eyes focused on me, I pulled out a blank piece of paper and drew the cloud, asking each side in the battle what they wanted, why they wanted it, and what their common objective was. I then proceeded to surface the assumptions each party was making by using the key word "because". Soon one of the parties conceded that one of his assumptions was wrong as he verbalized it out loud. The brawl was over in less than 5 minutes. Whew. +( clouds - strategy and tactics tree see also +( strategy and tactics To: cmsig@yahoogroups.com From: "gothevole" Subject: [Yahoo cmsig] Re: S&T trees Well, since the last reply didn't seem to work, let me try again. S&T trees, or Strategy and Tactic trees, are a big subject, needing a good chapter or two. But here are the basics, as I understand them, after attending the TOC-ICO down in Miami last week. Eli Goldratt developed them to help implement his Viable Vision (VV) process. He found he needed a tree to show a logical breakdown of how all the steps in a VV were integrated together. He also found that his past Buy In process (Creating UDE's, tying them together in a CRT, exposing the root cause, using Clouds to resolve the conflict around the root cause, creating an FRT and a TrT) was creating some confusion in the VV process. He realized that this process, which he termed a "Minus Minus" Buy In process, was not the correct one for VV. MM processes were centered on removing a bad thing, like "eliminating waste" in Lean, or "resolving the core conflict" in TOC.
Rather, since the advantages of VV were so large, he needed to use a Plus process to get buy in from his clients. The S&T trees form the structure of that process. The method is to move through the levels, talking about the strategy and tactics of each one, and getting buy-in at each step. This is more of a leading process than a Socratic process, but checking with the client at each step verifies that your solution is in the right direction and brings out possible obstacles. When used, the consultant starts at the top of the tree with Level 1, the Viable Vision. Usually, this is "having your current Revenue be your Net Profit in four years." The consultant checks with the client, "Certainly this is something you want to take on, is it not?" There may be discussion for clarity, but we are looking for agreement or disagreement. Agreement means we go on, disagreement means we look for clarity or understanding. Disagreement may also mean we stop, if we have convinced ourselves that the tactic is wrong, or that the client faces an obstacle that is too big to allow implementation. Then we look at the assumptions (called parallel assumptions) this statement draws out, such as: "For the company to realize the VV its T must grow (and continue to grow) much faster than OE." "Exhausting the company's resources and/or taking too high risks severely endangers the chance of reaching the VV." Again discussion, clarity, and agreement. Then the tactic: "Build a decisive competitive edge and the capabilities to capitalize on it, on big enough markets without exhausting the company's resources and without taking real risks." The wording is critical, and the consultant spends a lot of time making sure there is clarity here. We haven't revealed how to do this yet, but the consultant makes sure the client agrees that if this tactic could be achieved, the strategy (the VV on Level 1) would be achieved.
Then we look at the assumption (Necessary assumption) that links this level with the level below: "The way to have a decisive competitive edge is to satisfy a client's significant need to an extent that no significant competitor can." Again, clarity and agreement are achieved, and then the consultant summarizes, goes back to the overall tree, and then down a level. This breakdown continues until it's clear to management what has to happen to reach the VV, but not so far as to tell everyone (engineers, supervisors) how to do things they already know how to do. That's about as brief as I can make it, and I'm sure there are a lot of questions, but that's how I understand the basics. You could use S&T trees to tailor a strategy for your company or client, but I doubt what I have presented here will help do that. A book that goes into the "how to" method has been promised, however. +( clouds - Efrat's Cloud To "Constraints Management SIG" Subject [cmsig] Re Efrat's Cloud So you already have Efrat's cloud. You should know it well!?
A: Be happy now and in the future.
B: In order to have A, we must have security (protect ourselves and our emotions).
D: In order to have security, we must comply with "Not Changing" (resist change to the unknown).
C: In order to have A, we must have satisfaction (improve our life).
D': In order to have (more) satisfaction, we must have change.
This is an interesting contrast to the work of Maslow which looked at these as a hierarchy. This cloud shows something different. Sometimes (often) people will disregard security for the sake of satisfaction.
Jim Bowles +( clouds : process of breaking a cloud From "Stefan van Aalst" To "Constraints Management SIG" Subject [cmsig] RE cmsig digest July 15, 2004 Date Fri, 16 Jul 2004 09:19:27 +0200 Donovan Please e-mail me your steps for how to construct a cloud and a good enough worked-out cloud of a real case. You can do this off-list if you want. For breaking specific clouds the following technique is practical and has a good success rate (not only in breaking but, rather more importantly, in getting the results): - Identify the preferred action (usually D). Since D (otherwise your E) should block or prevent the necessary condition in C, brainstorm on: If I/we do D then it is impossible or hard to get C because ... For each answer, ask the questions aloud (very important): Is this true? Am I/are we sure this is true? Does it need to be true? Am I/are we sure it needs to be true? (Don't underestimate the need to ask for verification ...and all this needs to be done aloud.) Usually this triggers an opening on which you can build further. But not always. In that case, given the knowledge and experience available, no way out seems possible (which doesn't mean that there isn't one). As facilitator I tend to look at similar situations in totally different industries (e.g. "because our products are too expensive we can't sell more" might seem true, but since there are Ladas and Mercedes sold, I don't buy this, and after pressing a bit more as facilitator it is not unlikely that the true reason is "our customers see our products as no different from those of our competitors"; does this need to be true? No.
What can one do about it? Train the customers to see the difference, help them to decide on ...your marketing) But when you're stuck with your preferred action and can't find an opening, then a rather more difficult conclusion must be drawn: okay, my/our preferred action can't be made to work; let's embrace that fact and put the preferred action behind us. Close the door, as Oded Cohen once said. Now go to the alternative action and repeat the process. An alternative way that also works very well is to describe in what situation D and E (which can also be a full negation of D) are in conflict. The question that must then be answered with brainstorming is: in what situation are D and E in conflict? Followed, per item named, by: is this true? ...etc. For sure this should be tried first when dealing with projects or any other investment decisions that fight for resources ...usually the only reason why they are in conflict is that two or more projects or investment decisions are perceived to be needed now or as soon as possible. Usually this is not true or can be falsified; staggering the projects or investment decisions takes away most of the headache. There are other ways as well, all the way to trying to negate the objective/goal in order to get rid of the dilemma. But the frequency at which this is needed is very, very low. Some background info on the process: - Aloud (preferably even backed up by writing it down). This is important. Usually our thoughts are very flexible, and this is good. But this also allows for crooked thinking, and too easily we assume things to be true when they are not or don't need to be. So in order to trigger the grey cells a bit more, auditory and visual input helps a lot. - The repetition of: Is this true? Am I/are we sure this is true? Does it need to be true? Am I/are we sure that this needs to be true? It is almost like a mantra.
There are several places where I and others use something similar with great success (IO-mapping or PFD+ for instance). Initially it seems to get boring, with the obvious being asked too often. It is not. First of all, the repetition of the same short phrases quickly helps people to concentrate on the content rather than the questions being asked. It is far easier for the facilitator to remember them and concentrate on the process rather than try to be 'not dull'. And most of all, quite often when the answers are yes, yes, yes, yes, then on going to the next brainstormed item we get "wait a minute..." - When the facilitator is open to non-verbal input, it usually pays off quite well to probe further (even to high irritation levels, especially when there is irritation) when the answer comes too quickly, the answer takes too long (compared to what a normal answer would be), the person tries to evade the issue, their eyes move away from where they were first, or they start to become a bit uncomfortable. - Verbal input is also important to pay attention to: trying to 'walk away' from the question asked, changing the subject, etc. - The two above have to do with the assumption that if it were easy to 'find' a solution, they/you would already have implemented it. The reason why it is not done is overlooking the 'obvious'. Many reasons can explain why this happens; the issue is to break through them, and breaking is usually accompanied by resistance or a desire not to address it. Hence the signals one can zoom into. - The use of different environments with similar situations but with different 'solutions'. Thinking out of the box is quite often a necessity. Trying to do this with a specific and often quite important/urgent issue is very difficult, all the more because a lot of emotion is involved. High levels of emotion cloud clear vision. Therefore moving to a similar situation in a different environment removes the blockages of the emotion following from the attachment to the issue.
This requires very quick creative thinking by the facilitator on the spot, or allowing yourself time to discuss and think it over with others. In a way it looks very much like finding best practices in different industries. +( clouds with more than 2 branches From: Brian Potter Date: Wed, 28 Feb 2007 01:37:58 -0500 Subject: [Yahoo cmsig] "Super Clouds" with More than Two Entities in Conflict Santiago, Logically, there is no reason why you cannot have clouds with one goal, N requirements and N conflicting prerequisites for those N requirements. However, N-clouds are not really necessary. Consider the following "three-cloud:"

         +--> B1: Requirement 1 --> D1: Prerequisite 1
        /
A: Goal +--> B2: Requirement 2 --> D2: Prerequisite 2
        \
         +--> B3: Requirement 3 --> D3: Prerequisite 3

Where: D1 implies NOT D2, D1 implies NOT D3, and D2 implies NOT D3. This "three-cloud" is logically equivalent to the following three ordinary clouds happening concurrently in the same system:

D1: Prerequisite 1 <-- B1: Requirement 1 <-- A: Goal --> B2: Requirement 2 --> D2: Prerequisite 2
D1: Prerequisite 1 <-- B1: Requirement 1 <-- A: Goal --> B3: Requirement 3 --> D3: Prerequisite 3
D2: Prerequisite 2 <-- B2: Requirement 2 <-- A: Goal --> B3: Requirement 3 --> D3: Prerequisite 3

In the general case, an N-cloud is equivalent to K ordinary clouds where . . .

K = N C 2 (combinations of N things taken two at a time)
  = N! / [(N - 2)! * 2!]
  = N * (N - 1) / 2

When an organization has several (three or more but not a whole lot more) interacting conflicts, which conflict is most important will usually be apparent. If not, each ordinary cloud you resolve does two things . . . 1. It reduces the situation from an N-cloud to an (N-1)-cloud by eliminating one of the N conflicting prerequisites from further consideration (because the remaining prerequisite in the resolved simple cloud dominates the eliminated prerequisite). 2.
It surfaces assumptions about the surviving necessary condition and prerequisite that may help break one or more of the remaining N-1 clouds that one must consider to fully resolve ALL of the mutually conflicting prerequisites. If you see a real organization facing an N-cloud where N is more than 2, 3, or 4, document the situation. This much core conflict should (according to ToC hypothesis) cause an organization to self-destruct (or mutate very quickly into something with less internal conflict). Since ToC predicts that organizations continually facing many fundamental internal conflicts should not long continue in such a state, any such organization would be either (1) a counterexample voiding ToC (or restricting it to a more narrow applicable domain than the domain we believe it covers) or (2) a VERY interesting organization. --- From: Brian Potter Date: Tue, 06 Mar 2007 10:25:57 -0500 Subject: [Yahoo cmsig] "Super Clouds" with More than Two Entities in Conflict Humberto, We have a notation confusion. You did not hear what I intended to say. The N Cloud method you mentioned observes multiple instances of conflicts between two entities within a single organization. We represent EACH of these N conflicts with a single 2-cloud (an ordinary cloud with one goal, two necessary conditions, and two [one for each necessary condition] conflicting prerequisites). I heard Santiago inquire about an instance where there is one goal, 3 (or more) necessary conditions, and 3 (or more, one for each necessary condition) MUTUALLY CONFLICTING prerequisites. Responding to that question, I invented the notation "N-cloud" (distinct and different from "N Cloud") to label this beast with one goal, many necessary conditions, and many mutually conflicting prerequisites. An organization with an N-cloud for N larger than three or four probably faces multiple core problems (perhaps, as many as N-1). I tried building a case that there are two reasons that we do not need N-clouds: 1.
If we actually encounter an N-cloud in a real situation, we can decompose the N-cloud into ( N * ( N - 1 ) ) / 2 ordinary clouds (many more; e.g., 21 when N is 7) and actually resolve the N-conflict by breaking only N of those ordinary clouds. 2. An organization that faces so many mutually conflicting prerequisites for so many necessary conditions to its single goal will (my hypothesis) have so many core problems that it will either self-destruct before we can notice it or it will mutate into something with less internal self-contradiction as a survival response. Apparently (from your post and Santiago's response), all I did was confuse Santiago and you without actually doing anything interesting. In the eternal words of secret agent Maxwell Smart, "Sorry about that." :~\ Brian -------- Original Message -------- Subject: Re: [Yahoo cmsig] "Super Clouds" with More than Two Entities in Conflict Date: Fri, 02 Mar 2007 16:38:30 -0300 From: "Humberto R. Baptista " When you say: If you see a real organization facing an N-cloud where N is more than 2, 3, or 4, document the situation. This much core conflict should (according to ToC hypothesis) cause an organization to self-destruct (or mutate very quickly into something with less internal conflict). Since ToC predicts that organizations continually facing many fundamental internal conflicts should not long continue in such a state, any such organization would be either (1) a counterexample voiding ToC (or restricting it to a more narrow applicable domain than the domain we believe it covers) or (2) a VERY interesting organization. We should note that facing N-clouds is very common. What is uncommon is facing N core clouds (the mere name sounds strange). The point is what you mention above on fundamental conflicts. When we have UDEs, each one is held in place by a cloud, but by a CRT or a 3-cloud method we can converge on the core conflict or core cause and then proceed to evaporate it.
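The pairwise decomposition of an N-cloud into K = N * (N - 1) / 2 ordinary clouds is just the set of two-element combinations of the conflicting prerequisites. A minimal sketch (the prerequisite labels D1..D7 are placeholders, not from any real analysis):

```python
from itertools import combinations

def decompose_n_cloud(prerequisites):
    """Split an N-cloud (one goal, N mutually conflicting prerequisites)
    into its ordinary two-party clouds, one per conflicting pair.
    Returns K = N * (N - 1) / 2 pairs."""
    return list(combinations(prerequisites, 2))

prereqs = [f"D{i}" for i in range(1, 8)]  # N = 7
pairs = decompose_n_cloud(prereqs)
print(len(pairs))  # N * (N - 1) / 2 = 21 when N = 7
```

Each pair in the output stands for one ordinary cloud sharing the single goal A; resolving a pair eliminates one prerequisite and shrinks the N-cloud to an (N-1)-cloud, as described in the thread.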
I cannot say whether there is an organization with many different fundamental conflicts. Different in this respect means with little or no cause and effect relationship among themselves. If we say we're dealing with moderately complex systems (in the sense of having many cause and effect relationships within them), the chances of encountering such a system are slim to none. --- From: Brian Potter Date: Wed, 28 Feb 2007 08:28:09 -0500 Subject: [Yahoo cmsig] Multiple Conflicts, Multiple Goals, and Necessary Conditions Justin, As anyone who has spent much effort attempting to optimize multiple objective functions over the same set of constraint equations under general conditions can tell you, "multiple goals" can easily exemplify conflict. One approach to such situations amounts to converting the multiple "objective functions" into a single objective function by effectively prioritizing the multiple "objective functions" in (for example) a weighted average. This is a compromise which (though it abandons any hope of really "winning" with respect to ANY of the original objectives) sometimes has its charms and uses in terms of finding a "good enough" answer. Applying this model as a means to manage a business might actually lead to destroying the enterprise by failing to meet one or more necessary conditions. We often hear one popular name for this approach in the business world: Balanced Scorecard. Sounds nice; two good, understandable words; but the concept is fraught with peril. Scorecard: a single place to check the score. What could be wrong with that? Balanced: isn't balance good? Great name; great marketing; lousy idea. So what's the problem? Building a "balanced scorecard" requires selecting operational metrics and objective levels for those operational metrics. If you know your constraint(s), you do not need a balanced scorecard (just manage your constraint[s] in the usual ToC ways).
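The weighted-average compromise Brian describes can be sketched in a few lines. This is a minimal illustration of scalarizing several objectives into one number; the objective scores and weights are invented for illustration:

```python
# Sketch of weighted-average scalarization: several objective scores
# collapse into one "balanced" number, giving up the chance to win
# outright on any single objective. Scores and weights are hypothetical.

def weighted_scalarize(scores, weights):
    """Combine per-objective scores into a single weighted average."""
    assert len(scores) == len(weights) and sum(weights) > 0
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Hypothetical scores for profit, due-date performance, and quality:
score = weighted_scalarize([0.8, 0.6, 0.9], [0.5, 0.3, 0.2])
print(round(score, 2))  # 0.76
```

Note how the single output hides which objective is suffering; that opacity is exactly the peril Brian attributes to balanced-scorecard thinking, and why ToC instead keeps one goal and treats the rest as thresholded necessary conditions.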
If you do not know your constraints, you do not have the information needed to build a balanced scorecard (or any other useful management system, actually) because your constraint(s) run the business (and the management team) rather than the other way around. ToC chooses a different tactic. All objectives except one (effectively) get transformed into constraints. These goal-constraints are special in that they must be large (or small) enough to satisfy effective system behavior (e.g., rather than the objective "minimize late deliveries" [attempt to get them to zero] we might have a necessary condition like "have 90% fewer late deliveries than the most reliable competitor AND work with customers to minimize adverse effects on customers caused by our late deliveries.") There is no such thing as "merely a necessary condition." Necessary conditions are certainly vital to organizational success (often, vital to "mere" survival). Worse, we often lack "hard numbers" that give us firm knowledge about how much is enough (In the late delivery example above "50% fewer late deliveries" might have been sufficient.) Even worse, necessary condition thresholds probably shift according to market conditions (A "provide employees with satisfying pay, benefits, and working conditions" necessary condition might change radically if a business in a different market suddenly creates a high demand for talent critical to our business. Further, we probably did not have that firm a grasp on just where "satisfactory pay, benefits, and working conditions" were in the first place.) TWO things are special about a GOAL: More (less) is unquestionably better than less (more). If the level of a GOAL slips below (above) some threshold for a sufficiently long time, the effects of being on the wrong side of the threshold will kill the organization. With necessary conditions, we have the convenient property that being on the "right side" of the threshold suffices. We can have "enough" of a necessary condition.
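The "one goal, everything else a threshold" tactic can be sketched as a filter-then-maximize step. The plans, metrics, and threshold numbers below are hypothetical illustrations, not from the post:

```python
# Hypothetical plan evaluations: one goal metric (profit) plus
# necessary-condition levels.
plans = {
    "A": {"profit": 120, "on_time_pct": 85, "satisfaction": 7.0},
    "B": {"profit": 100, "on_time_pct": 96, "satisfaction": 8.5},
    "C": {"profit": 140, "on_time_pct": 97, "satisfaction": 5.0},
}

# Necessary conditions: thresholds to clear, not quantities to maximize.
thresholds = {"on_time_pct": 95, "satisfaction": 6.0}

def feasible(plan):
    return all(plan[nc] >= level for nc, level in thresholds.items())

# The goal is the one metric where more is always better, optimized only
# over plans that satisfy every necessary condition.
best = max((name for name in plans if feasible(plans[name])),
           key=lambda name: plans[name]["profit"])
print(best)  # "B": the only plan clearing both thresholds
```

Note that plan C has the highest profit but fails a necessary condition; treating satisfaction as a threshold rather than a weighted term in a scorecard is exactly what keeps C out.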
"Improving" any necessary condition beyond its threshold does not improve the business. Consider a non-constraint resource. If it frequently fails to subordinate to a constraint, it should be improved (increase its performance, change how we plan its use, SOMETHING) so that it rarely (or even never) fails to subordinate to a constraint. Past some (often definable) threshold, the non-constraint threatens a constraint so seldom that further "improvements" are wasted effort because the "improvements" create no net benefit to the total organization (Buffers at the constraint reliably absorb introduced variations with no threat to the constraint.) Things labeled "necessary conditions" are like that, but often the threshold location is less well known (maybe, less knowable; maybe, different for different markets, customers, employees, etc.). We can say (if not always understand) many things about necessary conditions, BUT the "necessary" in the name means that they are NEVER "merely." I realize you know this, but others could too easily misconstrue your post to conclude that necessary conditions are in some way "less important" than the goal. "Too little customer satisfaction" will kill an organization as dead as too many consecutive quarters of losses (rather than profits); indeed "too little customer satisfaction" (or failure to satisfy any other necessary condition) can easily be a CAUSE for "too many consecutive quarters of losses" (and the resulting organizational death).

-------- Prior Post -------- Subject: [Yahoo cmsig] "Super Clouds" with More than Two Entities in Conflict Date: Wed, 28 Feb 2007 16:41:15 +1000 From: "Justin Roff-Marsh" Very interesting. I've got a hunch that this parallels the argument against multiple goals (with finite resources one goal must take primacy over others -- which then naturally become necessary conditions).
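The "improving a non-constraint past its threshold buys nothing" point can be shown with a deliberately simple serial-line model (my own toy example; it ignores the buffers and variation that, as the post notes, set the real threshold):

```python
def throughput(rates):
    # In a simple serial line, system output is limited by the
    # slowest resource -- the constraint.
    return min(rates.values())

line = {"cut": 12, "weld": 8, "paint": 10}   # hypothetical units/hour
print(throughput(line))                      # 8, set by "weld"

# "Improving" the non-constraint "paint" changes nothing at system level:
line["paint"] = 25
print(throughput(line))                      # still 8

# Only elevating the constraint itself moves the system:
line["weld"] = 11
print(throughput(line))                      # 11
```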
In fact maybe 'multiple goals' is just another way of saying 'conflict' -- in which case the injection will, in effect, relegate one of those goals to a mere necessary condition?

+( CLOUDS and CAUSE-EFFECT-CAUSE relationships

1 basics

The generic cloud:

GOAL <---> NEED <---> PREREQUISITE

A (objective): what is the objective achieved by having both B and C?
B (need): what need is being satisfied by the action of D?
D (prerequisite): what action of yourself do you find yourself complaining about?
C (need): what need is being satisfied by the action in D'?
D' (prerequisite): what is the desired opposite action of D?

In order to have A we must have both B and C; B requires D; C requires D'; D and D' are in CONFLICT.

The fire-fighting lieutenant example:

GOAL <---> NEED <---> PREREQUISITE

5 (objective of the system): what is the lowest common objective both needs are trying to satisfy?
1 (need, under the lieutenant's responsibility): what need of the system is going to be jeopardized by the fire?
3 (prerequisite, the lieutenant's action): the lieutenant breaks the rule
4 (need, that the rule is protecting): what need of the system is protected by the rule?
2 (prerequisite, the system rule that limits the lieutenant's authority): what rule prevents the lieutenant from putting out the fire?

5 requires both 1 and 4; 1 requires 3; 4 requires 2; 3 and 2 are in CONFLICT.

The same conflict for the manager:

GOAL <---> NEED <---> PREREQUISITE

Objective: to manage well
Necessary condition: get the job done <-- Interfere with the lieutenant's work
Necessary condition: empower your lieutenants <-- Do not interfere with the lieutenant's work

The two prerequisites are in CONFLICT.

Writing cause-effect-cause diagrams (Current Reality Trees = CRT)
1) Write concisely the negative ramifications that you see (UDEs).
2) Write concisely the part of the idea that triggers the negative ramification. Connect 1) and 2) with a "transatlantic arrow".
3) Complete the sentence: IF (the bottom of the arrow) THEN (the top of the arrow) BECAUSE (the cause x, y, z ...).
4) Ask if these statements currently exist: if "no" then this belongs inside the "transatlantic arrow". If "yes" it is a side branch.

--- From: "Jim Bowles" Date: Wed, 1 Nov 2000 09:34:13 -0000

Prerequisites
=============
An ability to draw trees using If-then logic.
A desire to surface hidden assumptions.
Creativity to produce Injections to trim Negative Branches.
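The IF ... THEN ... BECAUSE reading and the "does this statement currently exist?" check can be mechanized in a few lines. A minimal sketch (the entity texts and function names are my own illustrations):

```python
def read_link(cause, effect, becauses):
    # Complete the sentence IF ... THEN ... BECAUSE ...
    return f"IF {cause} THEN {effect} BECAUSE {' and '.join(becauses)}"

def classify(becauses, currently_exists):
    # A "because" that does not currently exist belongs inside the
    # arrow (a hidden assumption); one that already exists is a
    # side branch.
    return {
        b: "side branch" if currently_exists[b] else "inside the arrow"
        for b in becauses
    }

becauses = ["the floor cannot process orders faster than they arrive"]
print(read_link("we release all orders to the floor immediately",
                "work-in-process inventory grows", becauses))
print(classify(becauses, {becauses[0]: True}))
```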
Phase 1 Construction
====================
Start by making a list of Positive Outcomes if the ideas are implemented. Then make a list of Negative Outcomes to the idea currently on the table. Note: Phase 2 is about communication, so please be careful not to do this until you understand the mechanics of how to do it. Once you have the list of negatives, determine which is the worst one that you can envisage. Now take a sheet of paper and write in a box close to the bottom of it the proposed idea. Then draw a further box about 3/4 of the way up the sheet and write in the worst outcome that you have listed. Now check what you have written using If .... Then: If we do this....... then we can expect this....... Now look at your list of negatives and determine whether these can be placed at some point between the two entities, or whether they are parallel outcomes that would be manifested. You should be able to build up a small current reality tree of your concern. Note the word concern, not concerns. This is important for the next step of communication. Check each link using the words If.... Then.... Because.......... The becauses should reveal hidden assumptions that will strengthen the logic and point the direction to an Injection to trim the branch. At this stage your gut reaction is telling you that the ideas do not have the buy-in from corporate. If you can verbalise all your concerns in this way it is possible to develop a powerful means of communication. If you would like to try this step and post your work to me I will scrutinise your efforts and advise you on the next step. When you read the tree make sure that you read it aloud to yourself. The ears have a different method of checking than do the eyes.

--- From: "Jerry Keslensky" To: "CM SIG List" Subject: [cmsig] RE: Use of TP trees with unfamiliar people Date: Thu, 9 Dec 1999 15:10:59 -0500 Dirk, Yes, I have had considerable experience doing this. Here are the steps that have evolved successfully for me. 1.
Establish a calendar of work sessions for the group (duration 2-3 hours each) with specific session objectives stated in advance. 2. The first session should be devoted to introducing the methodology, tools and showing case examples of application of the tools. The big picture. This is very important and requires considerable preparation of materials for presentation. 3. The next session should be devoted to group engagement to gather observed undesirable effects. 4. The next session should be to present a multi-cloud approach to developing a CRT as well as an initial CRT (you should prepare these prior to the session) 5. The next session should be devoted to detailed review of the CRT with introduction of legitimate reservations and let the group expand the tree. 6. Keep expanding the CRT and reinforcing the categories of legitimate reservation as a guide to validation and discovery. 7. Once the CRT is fully developed introduce the usage of clouds to drive injections. Let the team build and validate the second to Nth clouds, you do the first one as an example, but let them refine it. 8. By this point they will be ready to work as a group building the FRT. You just need to facilitate. 9. The same is true for PTs and TTs. This process will take about 15 sessions. Depending on the group you may find they are real good at reviewing and validating but not at developing trees. If so do the development work between sessions and just use the sessions to review, validate and enhance the trees. The key to success is engagement. The more you can engage the participation of the group the better the results. Engagement is most effective when it revolves around discussion driven by good questions and lots of patience. The mechanics of building trees is boring to most groups so be prepared to gather ideas in the sessions and do the grunt work yourself between sessions. Engagement also comes from reviewing each iteration of the trees. 
===========================================================================
From: Greg Lattner To: "CM SIG List" Subject: [cmsig] Re: Use of TP trees with unfamiliar people Date: Fri, 10 Dec 1999 08:35:07 -0700 Kathy, thank you for your response. I enjoyed your people-oriented common sense. I guess building trees is one of the most enjoyable things I've done in groups because you get to know and help people in their lives and their confidence to communicate. Kathy's people-oriented insights below are a good explanation of the challenges and fun involved in bringing people together in their thoughts so they can become a team. I don't think I've done anything as much fun in business as the TOC Trees. The closest thing to it is camp counselling with a bunch of kids when we really gelled together as a cabin. The best part is when people move to succeed together. It doesn't matter that profits increase. The part you remember is the relationships you build with the people. Sometimes people have a barrier to communicate with each other. That is sad. It's very enjoyable to be part of creating the vision for people to move over the barriers. It seems that the Trees and Reservations (CLRs) tend to make it look easier. I'd recommend plenty of practice on your own first so you learn how to get a feel for where things are going and where they need to go so you can steer the group. And I also recommend getting to know the participants and where they're coming from before beginning the session. I'd think of it as a people process, not a logical process, as much as possible. The logic should be subordinated to the people side. Others have added good comments too and I'd just be redundant to repeat their already good comments on creating a timeline, schedule, preparation meeting, doing some things off line, etc.
---------- From: Kathy Austin[SMTP:kaustin@aptconcepts.com] Sent: Thursday, December 09, 1999 4:38 PM Subject: [cmsig] Re: Use of TP trees with unfamiliar people >Has anyone had success creating any thinking process trees in a group >setting where only the facilitator is familiar with the methods? > >I am assembling a team of intelligent, openminded, loudmouthed people to >examine our product development system, and I wonder if I will shoot myself >in the foot by trying to apply the graphical tools in the meetings. > >How about using the meetings as a complaint session, and assembling the >trees myself, later on? I've used the process both ways (creating with a group and getting data, then going off to create the trees and only then coming back to the group to present and get scrutiny). To me, it's one of those "it depends" situations. - It depends on your skill in building trees of someone else's situation. - It depends on whether or not you are a member of this group or are serving as a facilitator. (If you are trying to build the trees in a group and you are not able to divorce yourself from the situation, you are in an impossible situation. I don't recommend the group situation at all then.) - It depends on whether or not you are able to stay on the process of converting the discussion to trees (you provide/diagram the boxes and arrows, they provide the content of the boxes and the arrangement of the arrows), instead of turning the session into one of teaching them to build trees and building the trees at the same time - that's a very easy trap to fall into -- they may want to learn how to do what you do, but that's not the purpose of this meeting, is it? - It depends on your ability to listen to them, hear the logic, capture it in writing, and scrutinize yourself along the way so that you can ask for clarity, etc., all almost simultaneously. Other comments: - you are talking about "our product development system". 
If you are part of the system, it may make sense for you to do the tree from your perspective and then present it to others for scrutiny/validation. - complaint sessions to gather data are sometimes difficult to keep focused on the topic, and to derive the necessary data for building the trees later. - for me, doing clouds and NBRs is easier with almost any size group (than doing trees) because they are shorter and it is easier to keep people on topic. My preference when doing trees is to do them off-line and then present for scrutiny/validation. Just my thoughts and preferences, Kathy Austin

=======================================================================
From: "Button, Scott D" To: "CM SIG List" Subject: [cmsig] Re: Fwd: TP tools with a new team-Three Cloud Date: Tue, 4 Jan 2000 06:47:47 -0800 The Three Cloud Method on the web: http://www.vancouver.wsu.edu/fac/holt/em526/ppt.htm http://www.thedecalogue.com/Tools/crt.htm#identify and http://www.cmg-toc.com/html/the_jonah_process.html -------------- Examples: http://www.goldratt.com/academy/emba.htm http://www.vancouver.wsu.edu/fac/holt/em526/ecexamples.htm -------------- In the literature: Deming and Goldratt: The Theory of Constraints and the System of Profound Knowledge, LePore, Domenico and Cohen, Oded, The North River Press, 1999; and "Genesis of a Communication Current Reality Tree", Button, S.D., APICS Constraints Management Symposium Proceedings, March 22-23, 1999.

--- Date: Fri, 09 Feb 2001 03:33:34 -0500 From: Brian Potter Superficially, this looks like a variation on the "fire fighting cloud" in which tactical crises consume so much energy that one cannot focus on the long term plans and systemic improvements which would prevent the "fires" from happening. How well might that view match your circumstance? How much of the time do the short term issues which reach your attention rest upon a conflict between two (among three or more) things you and your people "must do" to run your operation?
Examples: - Sales must promise short lead time to close the sale AND Sales must quote a longer lead time for production to meet the delivery date. - Purchasing must buy in large lots to get a good price AND Purchasing must buy in small lots to avoid ... ... high inventory investment ... obsolescence expenses ... scrap caused by spoilage ... excessive inventory management expense - ... the possibilities seem innumerable ... The generic solution rests on finding "core problems," "root causes," or "core conflicts" which produce the visible "undesirable effects" (the crises of the moment) as symptoms of the deeper conflicts. The ToC TP tools offer two primary approaches: - Construct the clouds for three (or so) specific surface conflicts (fires, problems, ...). - Use similarities among the clouds to expose a deeper conflict driving all the surface conflicts. - Construct the cloud for the deeper conflict. - Break the cloud for the deeper conflict and simultaneously solve ALL the "problems" which were symptoms of the deeper conflict. ... OR ... - Build a CRT which exposes the root cause of many UDEs (symptomatic "problems") - Develop an injection (perhaps by evaporating a cloud) which either removes the root cause or prevents it from spawning its current UDEs - Build an FRT including the injection, any UDEs (negative branches or NBRs) the injection may cause, and the necessary additional injections needed to contain the NBRs. - Implement the FRT (if necessary, building a PRT and a TT to plan the implementation). Successful application of either approach lets you put the short term issues permanently to bed by modifying the system which spawns the "problems." The modified system produces the stuff you want to produce WITHOUT also producing the unwanted "problems." Sorry about the abstract response, but abstract queries tend to yield abstract responses (or long, circular discussions ...).
-------- Original Message -------- Subject: [cmsig] Cloud Date: Thu, 8 Feb 2001 18:51:51 -0500 From: "Murphy, Mark" To: "CM SIG List" D: Do the things that allow long term improvement B: Plan for the long term A: Make more money now/future C: Handle short term issues D': Do the things that keep things running daily

--- From: "Michael Carroll" Date: Wed, 10 Apr 2002 11:40:43 -0700 Subject: RE: [tocexperts] RE: three cloud method When constructing a Current Reality Tree (CRT) the three cloud process is a well thought out way to start your analysis. The three cloud process is a relatively new way to do this compared with the examples that you might find in Bill Dettmer's books. The process was derived from another type of tree you may have also heard of, "the communication tree." The "C3" cloud process simplifies the creation of the CRT and allows you to drive a stake in the ground to give your CRT the strength to grow correctly. 1. Frame your analysis. If you were painting a picture what do you want to fit on the canvas? Will it be a portrait of your best friend or is it a landscape picture of Washington DC? a. The mental clue here is to ask yourself "what story am I trying to tell?" 2. Collect at least 10 UDE's (remember the rules for UDE's) 3. Select 3 diverse UDE's from your list 4. From those 3 diverse UDE's construct three separate clouds a. The mental clue here is that you really have to feel the conflict around the selected UDE. For example let's take a UDE and see the conflict: A couple of weeks ago Oracle announced that it was no longer supporting customer modifications to software. So the UDE that I will choose for this example is: UDE: Our mission critical software is not supported. So I ask myself what is the struggle over this UDE? On one hand I need to have the software match my company procedures and so I make modifications. Yet, on the other hand I need to get support from my software vendor so I modify my procedures to fit the software.
Do you feel the tug of war between both sides? Now let's write the cloud. D: Make modifications to our software D': Do not make modifications to our software What is the need of D? What is D trying to protect? So let's write B: B: Automate our company functions and procedures Let's read it: In order to (B) "Automate our company functions and procedures" we must (D) "Make modifications to our software." On the other hand, what is the need of D'? What is D' trying to protect? So let's write C: C: Maintain Software support from our Vendor Let's read it: In order to (C) "Maintain Software support from our Vendor" we must (D') "Do not make modifications to our software." OK, is this making sense? Now we need the common objective. Why are we struggling with both issues? In this case I will state (A) as "To run our company" Mental Clue -- Do not try for perfection; "Good enough" is the key phrase. Ask yourself: Does this cloud adequately describe the tug-o-war that is going on around this selected UDE? Now back to the steps: 5. Arrange all three clouds that you have created. I like to stack them vertically on top of each other with the UDE for each cloud listed to the right and slightly above each cloud. 6. Now create a blank cloud at the bottom and begin to build the generic "C3" cloud. You can start with any entity here. a. Mental Clue - You are "boiling cabbage down" here. Your bottom cloud will most definitely be philosophical. If you put all of your (D) statements in a bucket and boiled them down what would be the result? Repeat this process for (D'), (B), (C), and (A). b. Please be aware that some of your clouds may be flipped backwards. c. Yes, it feels weird. 7. Read the resulting "C3" cloud and apply the categories of legitimate reservations. 8. Copy the "C3" cloud to a new page and rotate the cloud so that (A) is at the bottom and (d) and (d') are at the top. 9. Change the direction of the arrows for sufficiency logic.
So that we can read the cloud in "effect cause effect" language, for example: "if (A) then (B)" as compared to the necessity logic used in building clouds, "in order to have (A) we must (B)". 10. To keep our new "C3" cloud with sufficiency logic from falling over we need to fill in the supporting legs by saying the trigger word "Because". a. First read the "C3" cloud with necessity language. In order to have (A) we must have (B) because ... The answer here is your sufficiency leg. Now test it. If (A) and if (new entity from the answer to the "because" trigger) then (B). b. Repeat step a. for all entities of your generic cloud until you have sufficiency. Remember to apply the categories of legitimate reservations. 11. Add your 10 UDE's to the page above your now completed C3 cloud. 12. Using a dotted line connect UDE's using sufficiency logic. 13. Again, using a dotted line connect (D) and (D') to your UDE's at the top of the page. 14. Add in cause and effect entities until you have connected (D) and (D') to all of your UDE's. a. Mental Clue - Select one line and focus, then move to the next line. Do not stop to explain, just build. Just a few more notes and then I will stop. - Remember that with the introduction of the "C3" cloud the construction of the future reality tree has changed slightly also. I will save that for a future post. - Remember the phrase "good enough"; the purpose of a CRT is to help you to build a new reality using the FRT. Do you understand what is causing your UDE's? Then move on. On the other hand if you are struggling with your FRT then you may need to go back to your CRT and flesh out some more logic. - The process is simple. Finding the time and focusing on all of the steps is like drinking a slurpee too fast -- Brain Freeze - it hurts until you get the hang of it.
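The two readings Michael describes (necessity language while building the cloud, sufficiency language after rotating it) can be generated mechanically. A minimal sketch using the entities from his Oracle example (the data structure and function names are my own):

```python
cloud = {
    "A":  "run our company",
    "B":  "automate our company functions and procedures",
    "C":  "maintain software support from our vendor",
    "D":  "make modifications to our software",
    "D'": "do not make modifications to our software",
}

def necessity(need, prereq):
    # Reading used while building the cloud.
    return f"In order to {cloud[need]} we must {cloud[prereq]}."

def sufficiency(cause, effect):
    # Reading used after rotating the cloud (steps 8-9).
    return f"If we {cloud[cause]} then we {cloud[effect]}."

for need, prereq in [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D'")]:
    print(necessity(need, prereq))
print(sufficiency("A", "B"))
```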
--- From: "J Caspari" Subject: [cmsig] Re: Three Cloud Method Date: Fri, 12 Apr 2002 11:17:43 -0400 Larry's provocative posting happened to arrive as I was writing the following description. I offer it as a response to Larry. My personal opinion is that the three-cloud (or n-cloud) approach is a wonderful improvement for creating CRTs (current reality trees). The difference between the "old way" and the 3-cloud approach is that the old way starts with UDEs (undesirable effects -- the top of the tree) and we dive down until we find a reasonable core problem. Unfortunately, we don't know where we are going, so we end up exploring a lot of places and, when we get there, we are not quite certain that we are at the right place. In the three-cloud approach we have the same top of the tree (the UDEs), but we also know the conflict at the bottom of the CRT. Now we know where we want to go (from the conflict to the UDEs) and we know what waypoints are along the way (the policies that have been established to allow the organization to live with the conflict, the measurements of adherence to the policies, and the behaviors that result from chasing the measurements). We fill these in, banana in (combine as a conjunct) the Oxygens (assumptions about the existing environment), identify the loops, and *voila*, the CRT. The process starts by identifying a number of UDEs about a subject matter and selecting three that are representative of the subject matter. Write the storyline for each of the 3 selected UDEs. [Only then] create a cloud for each of the three selected UDEs based on the storyline. The CRT is less important in that it is not needed to identify the core conflict. Of course, the CRT is also not necessary to create the cloud because the cloud already exists.
Nevertheless, completing the CRT still provides confirmation that the core conflict identified is deep enough, provides a path for the FRT (future reality tree), still provides the verbalization of intuition, and still provides the communication CRT for buy-in. It also clearly notes the specific existing policies and resultant measurements leading to each of the UDEs. Larry says that the logic of CRTs created with the three-cloud approach tends to be weak when compared with CRTs created in the "old way." I do not see how the logic of the three-cloud approach could be weaker than the "old way" if it has survived scrutiny in accordance with the CLRs (categories of legitimate reservation). After all, the CLRs are the same for both techniques.

--- From: "Ronald A. Gustafson" Subject: [cmsig] RE: Three Cloud Method Date: Sat, 13 Apr 2002 21:12:06 -0600 Larry - In the e-mail below you made two statements in relation to Karl Popper that I'd like to discuss. Those statements are: 1) The 3-Cloud Method is an INDUCTIVE process. 2) The core problem development process described by Dettmer is a DEDUCTIVE process. As I thought about those statements, I arrived at the opposite conclusions. It seems to me that the 3-Cloud Method is a DEDUCTIVE process and the 'original' core problem methodology that Dettmer and others (e.g., Lisa Scheinkopf) describe is an INDUCTIVE process. Let me describe my thinking. In the 'original' process, UDEs are logically connected (using cause-effect thinking) and the process continues downward (or backwards, in Dettmer's words) until a core problem is found (i.e., an entity that accounts for 70-80% of the UDEs). This process goes from observed effects (UDEs or symptoms) to what is considered to be a root cause (core problem) leading to the observed problems (UDEs). I'm sure that Karl Popper would have considered this to be an INDUCTIVE line of thinking. That is, one of deriving theories based on observation (in this case, symptoms).
I'm interested if you see this differently. On the other hand, the 3-Cloud Method has one speculate (develop a theory) about what underlying cause leads to the UDEs. That theory is then critically examined by using cause-effect logic to see if it leads to the UDEs. This fits what I would expect Karl Popper to view as a DEDUCTIVE process. That is, formulating a theory and then examining to what extent that theory explains what happens. He also would not refer to this as the 'scientific method,' for he was quite serious about the 'non-existence of the scientific method' as discussed in the preface of his 1983 book. I'd also like to push this discussion a bit further. Let's apply Karl Popper's thinking about demarcation (i.e., differentiating between science and metaphysics) to TOC. To me, key underlying assumptions of TOC are inherently not 'falsifiable.' If that is the case, then TOC falls in the category of a metaphysical speculation, much as Karl Popper viewed Freud's theories or astrology.

-----Original Message----- [mailto:bounce-cmsig-26195@lists.apics.org] On Behalf Of larry leach Sent: Thursday, April 11, 2002 11:04 AM Hypothesis: The Three Cloud (3-Cloud) method is not technically supportable. Basis for Hypothesis: 1. The 3-Cloud Method is an inductive process. It makes a giant stride from arbitrarily picking three of the UDEs in the problem space to an assertion of a Core Conflict. Karl Popper demonstrates soundly that inductive processes are logically flawed. That is, there really is no such thing. 2. The core problem development process described by Dettmer is a deductive process. Popper demonstrates how such a process can effectively lead to determining a preferred alternative. The rest of the process, i.e. scrutiny, fits well with Popper's development of the scientific method. 3. The deductive process requires building down the tree to get a core problem that affects (at least) 2/3 of the UDEs. There is no such development in the 3-cloud process.
The fact that people get to all the UDEs from the Core Conflict asserted by the 3-Cloud process is evidence that people can invent connections between nearly anything. It is not evidence that the process works to describe reality. Although the 3-Cloud method gets to a solution faster, there is no reported testing to prove that it works. People liking it and agreeing with the result does not mean it works. (Indeed, it may make it less likely that it works... because people are most likely to agree with things that confirm their present thinking.) People agree with lots of illogical things that do not work. Only turning UDEs into DEs in the organization can prove that it works. Of course, there are many confounding factors when evaluating the overall process. Chris Argyris' research demonstrates common problems in organizations that make it unlikely that the 3-Cloud method could work. Specifically, people are systematically unaware of their real behavior. That is, their actions (theory in use) do not match what they claim them to be (espoused theory). While the deductive approach does not necessarily avoid Argyris' problem, it helps avoid denial. The inductive leap of the 3-Cloud process is more likely to leave out the undiscussables, and nearly certain to leave out the undiscussable undiscussables.

From: "Jim Bowles" To: "CM SIG List" Subject: [cmsig] Re: Overview of discussion on Three Cloud Method Date: Fri, 24 May 2002 18:24:23 +0100 X-Mailer: Microsoft Outlook Express 6.00.2600.0000 Reply-To: cmsig@lists.apics.org Hi Philip, Thank you for pulling all the discussion together. Having read it through a couple of times it prompted me to clarify my own thoughts on this topic. The main arguments presented against the 3 UDE method are 1. That it is INDUCTIVE rather than DEDUCTIVE. 2. That it cannot be subjected to the same degree of scrutiny as a CRT when using the CLRs. 3. That there is a problem with the "timing" between cause and effect. 4.
Questions on whether the system is stable or not. 5. That there is a better way? I remember that there was a debate between Eli and his partners for almost 2 years about whether to make TP generally available or not. On the one hand it was argued that people wouldn't be able to use Newton's scientific E-C-E methods. The main reason being that they would not have the know-how to identify and validate the cause. The premise here was that the "cause" is not generally known and therefore anyone using the methods would need the appropriate tools and wherewithal to be able to identify the cause. In the case of truly scientific exploration this is valid. But once this reservation had been stated it didn't take long to "invalidate" it. This came on the grounds that when we are dealing with organisations and people we are not dealing with "unknowns" in the way that Newton or a scientist would be; we are dealing with things that are readily known or available, if not to everyone. Using the following definitions: Inductive [a) A principle of reasoning to a conclusion about all members of a class from examination of particular members of the class. Broadly, reasoning from the particular to the general. b) A conclusion reached by this method.] Deductive [To reach a conclusion by reasoning from the general to the particular.] I have struggled to identify whether the TOC-TP processes are Inductive or Deductive. In some respects I don't care what labels you give them, and for some time I just considered this part of the thread as navel contemplation or an "academic" exercise that didn't take me forward. Based on the statement above about dealing with known causes it seems to me that the common interpretation of INDUCTIVE is not appropriate to what we do. [But I am willing to listen to those that can see something that I can't.] So let us consider the TP processes themselves. Most of the longer serving members of this list will have been brought up through the UDE to CRT to Evaporating Cloud and then to FRT.
This was well honed over a two-year period in the early 90s until the TOC Roadmap was considered good enough to be used to train Jonahs. Later the PRT and TrT processes were added and refined. Having produced a dry CRT, I am sure that many people struggled as I did to extract the elements of the cloud from it, even though they are there once you know how to look for them. Once I had achieved a higher level of proficiency with TP, the cloud and the tree became interchangeable. I got the impression that this was true for most of the Network Associates when we started to improve the CRT communication process and extended our use and practise with clouds. For me now the CRT is just the elaboration of all the elements and assumptions contained in a cloud, and the cloud might be described as a more "cryptic" or condensed form of CRT. The main difference is that we read and validate them with a different syntax. In my view, learning which words to use was one of the hardest parts of developing the processes and making them easier to use and more teachable. In both instances we use visual and audio logic-checking channels. Having seen these relationships I had no problem seeing the value of the 3-UDE method. Since it is still necessary to produce a CCRT to show how the core conflict gives rise to the UDEs that we are concerned about, I cannot understand the argument about scrutiny with CLRs. They are still relevant as far as I can see. I am still having difficulty understanding the objections to the use of UDEs on the basis of time lag. The criteria for selecting "good" UDEs [an oxymoron perhaps?] are well documented. My checklist is: Is it a proper statement? Does it exist? Is it negative? Do I care about it? If the answer to all is yes, include it on the list. But that doesn't mean that I will use it to start my analysis. The most UDEs I have been asked to help people connect into a CRT is 64, but as we all know this is far more than we need.
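Bowles' four-question screen for candidate UDEs can be sketched as a simple filter. This is only an illustration with invented names: three of the four questions are human judgment calls, recorded below as flags supplied by the analyst rather than computed.

```python
# Sketch of the UDE screening checklist described above (hypothetical names).
# "Is it a proper statement? Does it exist? Is it negative? Do I care about it?"
# The judgment calls are recorded as boolean flags supplied by the analyst.

from dataclasses import dataclass

@dataclass
class UDE:
    statement: str
    exists: bool        # analyst confirms the effect is actually observed
    is_negative: bool   # analyst confirms it is undesirable
    i_care: bool        # analyst confirms it matters to them

def is_proper_statement(text: str) -> bool:
    # Crude proxy: a complete statement of more than one word.
    return len(text.strip().split()) > 1

def screen(udes: list[UDE]) -> list[UDE]:
    """Keep a UDE on the list only if the answer to all four questions is yes."""
    return [u for u in udes if is_proper_statement(u.statement)
            and u.exists and u.is_negative and u.i_care]

candidates = [
    UDE("Only 60% of orders ship on time", True, True, True),
    UDE("Overtime", True, True, True),               # not a proper statement
    UDE("Demand is increasing", True, False, True),  # not negative in itself
]
kept = screen(candidates)
```

As the post notes, passing the screen only puts a UDE on the list; it does not mean the analysis starts from it.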
The real key is to include ones that give us a sufficiently wide view of the system so that we can "drive" down to a core problem area that will connect to as many of our undesirable effects as possible; at least 70% of our UDEs.

On the question of timing I will try an example. Last year I was asked to help a company that had a problem with meeting due dates. They were also under pressure to expand their capacity to meet additional requirements from their aerospace customers. In June when I visited them they told me that only about 60% of their orders were being delivered on time. This had got progressively worse during the year as the demand increased. A tour of the plant and a few questions quickly confirmed that they were using a very inadequate method of control. They hadn't a clue as to the impact of the "constraint" or what it was. I was able to sketch out the CRT and suggest a direction for a solution before I left. At that time they were too busy and didn't have time for the meeting with the managers that I proposed. By August the situation was much worse and an internal war had broken out between the different factions. By now only 40% of the orders were going out on time. They couldn't understand it: they had recruited more people and were working more overtime, yet their arrears were growing.

Was the system stable? This is an interesting question. There were many aspects of their system that were constant. For example: the way they released work and the rules that they used to move it through the plant; the way they reacted to workloads and the way they handled overloads on some resources. But stable in a "statistical sense"? No way. The arrears got bigger, the delays got longer, and the number of interventions (expediting) increased. And they couldn't see that this was taking them the wrong way. Why? Eventually they took the time to hold the meeting that I had proposed.
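The "at least 70% of our UDEs" heuristic mentioned above can be expressed as a small coverage check. All names and data here are hypothetical; in practice the mapping from a candidate core problem to the UDEs it explains comes from a human-built CRT, not from code.

```python
# Sketch of the "core problem should connect to at least 70% of the UDEs"
# selection heuristic. All names and data are illustrative; the mapping of
# candidate core problems to the UDEs they explain comes from a human-built CRT.

def coverage(candidate: str, explains: dict[str, set[str]], udes: list[str]) -> float:
    """Fraction of the listed UDEs that the candidate core problem connects to."""
    if not udes:
        return 0.0
    reached = explains.get(candidate, set())
    return len(reached & set(udes)) / len(udes)

udes = ["late orders", "growing arrears", "excess overtime",
        "internal conflict", "lost sales"]
explains = {
    "inadequate release control": {"late orders", "growing arrears",
                                   "excess overtime", "internal conflict"},
    "poor sales forecasting": {"lost sales"},
}

# Keep only candidates that meet the 70% threshold.
core_candidates = [c for c in explains if coverage(c, explains, udes) >= 0.7]
```

In this illustrative data, "inadequate release control" covers 4 of 5 UDEs (80%) and so qualifies as a core problem area, while "poor sales forecasting" does not.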
As part of my presentation, which was primarily a discussion of their key issues, I developed the cloud and CRT with them. My small CRT showed the relationship between their method of working and the effects that they were complaining about. The CRT showed why the due dates were missed and why this was getting worse over time. By the end of September their due-date performance had improved to 80+%, and things were looking even better by the end of October. By November the effects of 11 September came into being and the workload dropped to the extent that they had excess capacity everywhere. Their DDP went up to 100% and they were looking for more work to fill the capacity that they now had. In my view the CRT and FRT are not time-based. They merely show that every time we do this we can expect a given result, even though it might be some time ahead before we see the results in real time. This concludes my exploration of this subject for the time being. Perhaps someone who has good cause to question the use of the 3-UDE method can tell me what I am missing.

---

From: "larry leach"
Subject: [cmsig] Three Cloud Method
Date: Thu, 11 Apr 2002 12:03:30 -0500

Hypothesis: The Three Cloud (3-Cloud) method is not technically supportable.

Basis for Hypothesis:
1. The 3-Cloud Method is an inductive process. It makes a giant stride from arbitrarily picking three of the UDEs in the problem space to an assertion of a Core Conflict. Karl Popper demonstrates soundly that inductive processes are logically flawed. That is, there really is no such thing.
2. The core problem development process described by Dettmer is a deductive process. Popper demonstrates how such a process can effectively lead to determining a preferred alternative. The rest of the process (i.e., scrutiny) fits well with Popper's development of the scientific method.
3. The deductive process requires building down the tree to get a core problem that affects (at least) 2/3 of the UDEs.
There is no such development in the 3-cloud process. The fact that people get to all the UDEs from the Core Conflict asserted by the 3-Cloud process is evidence that people can invent connections between nearly anything. It is not evidence that the process works to describe reality. Although the 3-Cloud method gets to a solution faster, there is no reported testing to prove that it works. People liking it and agreeing with the result does not mean it works. (Indeed, it may make it less likely that it works... because people are most likely to agree with things that confirm their present thinking.) People agree with lots of illogical things that do not work. Only turning UDEs into DEs in the organization can prove that it works. Of course, there are many confounding factors when evaluating the overall process. Chris Argyris' research demonstrates common problems in organizations that make it unlikely that the 3-Cloud method could work. Specifically, people are systematically unaware of their real behavior. That is, their actions (theory in use) do not match what they claim them to be (espoused theory). While the deductive approach does not necessarily avoid Argyris' problem, it helps avoid denial. The inductive leap of the 3-Cloud process is more likely to leave out the undiscussables, and nearly certain to leave out the undiscussable undiscussables. So, onward to critical discussion of this hypothesis!

---

Date: Tue, 07 May 2002 11:27:39 -0400
Subject: [cmsig] Re: N-Cloud vs. UDE Heap CRT Construction Procedures
From: Frank Patrick

On 5/7/02 2:13 AM, "Rudolf (Rudi) G. Burkhard" wrote:
> The 3rd one seems to be an important conflict - shouldn't it
> be tested, do the 2 methods come up with different results?
> If they do why? Does it matter?

The assumption raised by Brian that bothered me in his cloud was...
> A-C: The N-Cloud method risks identifying a conflict other
> than a CORE CONFLICT as a core conflict (i.e., the N-Cloud
> method is not effective.)
While I may need clarity on Brian's distinction between "conflict" and "CORE CONFLICT" in this context, I'll comment anyhow. By definition, if the results of an N-cloud process can be connected to the original UDEs via a CRT, it is "A" core conflict associated with the problem space defined by the initial UDEs. Is the concern that it may not be "THE" core conflict? If that is the case, I need to express either an "entity existence" reservation on "THE" core conflict, or perhaps a "so what" reservation.

Brian also wrote...
> The traditional approach offers an inherent risk that
> essentially independent UDEs will yield a CRT with two or
> more core problems.

And that's a problem because? Is there anything really wrong with building an FRT with two starting injections? That being said, I was intrigued by Brian's suggestion for combining the two approaches. Intrigued because his approach of starting with 3 clouds and then seeing which UDEs are left over is pretty much the opposite of what I like to do. (It's also an approach for when someone asks which UDEs to use to start the n-cloud process.)
1. Collect 5-10 initial UDEs associated with the problem at hand.
2. Do a high-level, long-arrow CRT (a UDE map, so to speak) connecting reasonably obvious causalities between the initial UDEs.
3. Choose, from the entry UDEs of these CRT/UDE-map fragments, UDEs to use in the n-cloud process.
4. Connect the resulting core conflict cloud (CCC) to the entry UDEs of the fragments.
5. Flesh out and scrutinize the logic, using CLRs, PMBs, reinforcing loops, etc., also adding in additional UDEs if and when they are identified.
6. Reselect UDEs.
7. Get on with what really matters... the solution of the developed CCC and the FRT.
When starting with the usually recommended 5-10 UDEs, I've yet to see a situation that needs more than 3 entry points to be subjected to the 3-cloud process. (Now this assumes that the objective is to address the problem space defined by the UDEs.
If the UDEs can be made to go away, and/or be turned into DEs, and set me up for moving on to more strategic objectives, that's good enough for me. Whether it's good enough for Stefan and his concern about starting with UDEs, I really don't know, and I'm trying to figure out why I should care.)

+( CLOUDS and CAUSE-EFFECT-CAUSE relationships 2 examples

From: "Christopher Mularoni"
Date: Wed, 19 Sep 2001 19:49:55 -0400

At the TOC for Education Conference this summer I understood Goldratt to say the following.
A - is the common goal or objective.
B & C - are actual, indisputable (even if situational or personal) needs required to obtain A.
D & D' - are "wants" assumed necessary to achieve B & C respectively.
The cloud may be broken between D & D', B & D, or C & D'. If you break it elsewhere, your cloud was not properly constructed to start (in most cases the mistake being that one assumed the wants were needs). However, the possibility exists that I misunderstood or, English being his second language, Goldratt miscommunicated.

---

From: "Potter, Brian (AAI)"
Subject: [cmsig] RE: Software development clouds
Date: Tue, 23 Nov 1999 10:34:03 -0500

D: Do not add additional capabilities to the project content.
B: Complete one's task and hand off as quickly as prudently possible.
A: Deliver a good product on time within budget.
C: Exploit convenient opportunities to improve the product which team members discover during development.
D': Add additional capabilities to the project content.

Or this one...
D: Use the newest technology available.
B: Exploit the most powerful tools available (typically, the newest hardware, the latest software development system, ..., the bleeding edge).
A: Deliver a good product on time within budget.
C: Avoid unnecessary risks.
D': Use only established technologies with understood methods and ample existing "talent pools."

Or this one...
D: Pay a (possibly large) premium to hire one or more "super programmers" (people 10 to 100 times more productive than normal "good" programmers).
B: Rapidly create high quality product.
A: Deliver a good product on time within budget.
C: Control operating expenses.
D': Do not pay the premium salaries needed to attract "super programmers."

D: Allow the product to consume substantial memory and processing time resources.
B: Offer many features and capabilities.
A: Deliver a product which performs well for the end user.
C: Offer a product which loads quickly, minimally impacts other processes, and operates on older computers as well as on the newest ones.
D': Severely restrict memory and processing time requirements.

---

Date: Tue, 27 Mar 2001 18:27:49 +0200
From: Jean-Claude Miremont

Here is the common cloud:
D. Subordinate production to the market.
B. Serve the market.
A. Prosper.
C. Subordinate to the constraint of the system.
D'. Subordinate the market to the (internal) constraint.
In the throughput-world message we say that the market is the one to satisfy. Therefore, not very many people are willing to recognize an internal constraint as a fact. When it is identified, it is pointed at as something to get rid of immediately... and the marketing policy is almost never adjusted accordingly.

+( CLOUDS and CAUSE-EFFECT-CAUSE relationships 3 evident statements on which to build trees

From: "Jerry Keslensky"
Date: Fri, 16 Jun 2000 01:20:28 -0400

The following statements are widely accepted facts; I might even go so far as to say they are business "laws of gravity". Hopefully the appropriate logic to use these facts to render a conclusion is self-evident.
1. Money is a worldwide standard used for the measuring of business success.
2. Money is the predominant worldwide medium of exchange used to purchase goods and services. (Food, shelter, health care etc.)
3.
Money is significantly important to people who own businesses or sell their goods and services to businesses, which includes every employee who expects to be paid for the services they sell to their employer. (Their time, sweat, skill, experience etc.)
4. Customers must purchase goods and services from a business for that business to earn money.
5. Customers purchase goods and services that they perceive satisfy their needs; otherwise they don't spend their money, or they spend it elsewhere.
6. If a business takes in more money than it pays out then it accumulates money (simplistically called "making money", no matter how convoluted your accounting and tax avoidance methods); otherwise the business must borrow money or attract money from investors until it becomes self-sustaining. (Negative cash flow is like energy usage in a mechanical system: eventually the machine stops unless additional energy is added to at least counter-balance the energy usage.)
7. No one invests money in a business or loans money to a business unless they expect and believe they will be paid back in a reasonable time with an acceptable return, ultimately in the form of (you guessed it) money. (If the expectation or belief is that an acceptable return will not be received in a reasonable time, the inflow of money from these sources stops.)
8. In the history of mankind there is one constant trend associated with money: as time goes on the price, the amount of the medium of exchange required to purchase the majority of goods and services, continues to rise. We often refer to this trend as inflation. (Don't point to pocket calculators as an example of deflation; we're talking the majority of cases, not occasional exceptions. You don't own or drive an automobile, or live in a house or apartment, or feed yourself or your family on those exceptions; the basics of life and business keep costing more and more as time passes.)
9.
If only to counter-balance the trend of inflation, and for no other reason, a business cannot be self-sustaining unless it makes MORE money now and in the future than it made in the past. It is not greed; it's gravity.

These statements can be used as the basis to build any proof (logic tree or cloud) you choose to substantiate the objective and necessary conditions for ideal business performance. Reword them to fit your taste and to satisfy the categories of legitimate reservation. You have to take care of your customers' needs, you have to take care of your employees' needs, and you have to make more money now and in the future to reach your business potential. (Notice I used the word potential, not to be confused with survival; survival can be achieved for a lifetime without ever achieving or approaching potential. Achieving potential requires significantly more than just the bare minimum of achieving survival.) By the way, for something to be equated with the "law of gravity" it should be universally observable, much like when you throw a large rock straight up into the air and observe it fall back to earth, hitting you on your head. ;-)

+( CLOUDS and CAUSE-EFFECT-CAUSE relationships 4 work instructions

1) Make a list of ~5 day-to-day problems.
2) Write up a story of one recent problem, i.e. make an incident report.
3) Brainstorm and create a list of possible choices to overcome the incident/problem/conflict. Classify this list into those which would be preferred and those which are imposed on you (expected from you or forced on you).
4) Pick the conflicting options out of this list: the most preferred action and the most forced action.
5) Create a cloud starting with D = FORCED and D' = PREFERRED.
6) Work out the assumptions underlying the arrows B-D, C-D' and D-D' by asking yourself the question "In order to have (tip of arrow) I must (tail of arrow) BECAUSE..."
7) Pick the two strongest assumptions of B-D, C-D' and D-D'.
8) Negate these assumptions (and treat the negations as injections).
9) From the injections pick your preferred choice and check it as WIN-WIN against B and C.

---

1. Establish a calendar of work sessions for the group (duration 2-3 hours each) with specific session objectives stated in advance.
2. The first session should be devoted to introducing the methodology and tools and showing case examples of application of the tools: the big picture. This is very important and requires considerable preparation of materials for presentation.
3. The next session should be devoted to group engagement to gather observed undesirable effects.
4. The next session should present a multi-cloud approach to developing a CRT as well as an initial CRT (you should prepare these prior to the session).
5. The next session should be devoted to a detailed review of the CRT, with introduction of legitimate reservations, and let the group expand the tree.
6. Keep expanding the CRT and reinforcing the categories of legitimate reservation as a guide to validation and discovery.
7. Once the CRT is fully developed, introduce the usage of clouds to drive injections. Let the team build and validate the second to Nth clouds; you do the first one as an example, but let them refine it.
8. By this point they will be ready to work as a group building the FRT. You just need to facilitate.
9. The same is true for PRTs and TrTs.
This process will take about 15 sessions. Depending on the group you may find they are really good at reviewing and validating but not at developing trees. If so, do the development work between sessions and just use the sessions to review, validate and enhance the trees. The key to success is engagement. The more you can engage the participation of the group, the better the results. Engagement is most effective when it revolves around discussion driven by good questions and lots of patience.
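The cloud-building work instructions above (D = FORCED, D' = PREFERRED, surface the assumptions behind each arrow, negate them into injections) can be sketched as a minimal data structure. Everything here, from the class names to the example assumption, is illustrative only; negating an assumption into a workable injection remains a human judgment.

```python
# Minimal sketch of the cloud exercise from the work instructions above.
# All names and the example content are illustrative; negating an assumption
# into a workable injection is a human judgment, recorded here as plain text.

from dataclasses import dataclass, field

@dataclass
class Arrow:
    tail: str                 # what I must have/do (e.g. the D action)
    tip: str                  # what it is needed for (e.g. the B need)
    assumptions: list[str] = field(default_factory=list)

@dataclass
class Cloud:
    A: str        # common objective
    B: str        # need protected by D
    C: str        # need protected by D'
    D: str        # forced action
    D_prime: str  # preferred action
    arrows: dict[str, Arrow] = field(default_factory=dict)

    def read(self, key: str) -> str:
        """Verbalize an arrow: 'In order to have <tip> I must have <tail> because ...'."""
        a = self.arrows[key]
        because = "; ".join(a.assumptions) if a.assumptions else "..."
        return f"In order to have {a.tip}, I must have {a.tail}, because {because}."

def injections(cloud: Cloud) -> list[str]:
    # Step 8: negate each surfaced assumption and treat the negation as an injection.
    return [f"Injection: it is NOT true that {s}"
            for arrow in cloud.arrows.values() for s in arrow.assumptions]

# Example with hypothetical software-development content.
cloud = Cloud(
    A="Deliver a good product on time within budget",
    B="Complete tasks and hand off quickly",
    C="Exploit opportunities to improve the product",
    D="Do not add capabilities to the project content",
    D_prime="Add capabilities to the project content",
)
cloud.arrows["B-D"] = Arrow(
    tail=cloud.D, tip=cloud.B,
    assumptions=["every addition delays the hand-off"],
)
```

Reading an arrow back with `cloud.read("B-D")` reproduces the step-6 verbalization, and `injections(cloud)` lists the step-8 negations for the WIN-WIN check against B and C.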
The mechanics of building trees is boring to most groups, so be prepared to gather ideas in the sessions and do the grunt work yourself between sessions. Engagement also comes from reviewing each iteration of the trees.

Thanks, Jerry
mailto:Jerry.Keslensky@connectedconcepts.net

==============================================================================

I've used the process both ways (creating with a group and getting data, then going off to create the trees and only then coming back to the group to present and get scrutiny). To me, it's one of those "it depends" situations.
- It depends on your skill in building trees of someone else's situation.
- It depends on whether or not you are a member of this group or are serving as a facilitator. (If you are trying to build the trees in a group and you are not able to divorce yourself from the situation, you are in an impossible situation. I don't recommend the group situation at all then.)
- It depends on whether or not you are able to stay on the process of converting the discussion to trees (you provide/diagram the boxes and arrows, they provide the content of the boxes and the arrangement of the arrows), instead of turning the session into one of teaching them to build trees and building the trees at the same time. That's a very easy trap to fall into; they may want to learn how to do what you do, but that's not the purpose of this meeting, is it?
- It depends on your ability to listen to them, hear the logic, capture it in writing, and scrutinize yourself along the way so that you can ask for clarity, etc., all almost simultaneously.
Other comments:
- You are talking about "our product development system". If you are part of the system, it may make sense for you to do the tree from your perspective and then present it to others for scrutiny/validation.
- Complaint sessions to gather data are sometimes difficult to keep focused on the topic, and to derive the necessary data for building the trees later.
- For me, doing clouds and NBRs is easier with almost any size group (than doing trees) because they are shorter and it is easier to keep people on topic. My preference when doing trees is to do them off-line and then present them for scrutiny/validation.

Kathy Austin

+( CLOUDS and CAUSE-EFFECT-CAUSE relationships 5 analysing a core conflict

Quickly Understanding the Organization - Finding the Core Conflict

Some of you are very nervous about your class project. "How can I understand the organization? I've only been here x months! I've always been stuck in y department!" Now you can build a quick Evaporating Cloud; this is pretty easy. Here are some techniques I use:
1. Visit with three (or more) individuals in different parts of the firm.
2. Find out their UDEs (more on that below).
3. Create one Evaporating Cloud from one UDE while still with them. You don't need to solve it. The cloud is to show yourself you understand their environment. And the cloud helps them see their own systemic conflict (it puts you on their side and begins a trusting relationship; they will be willing to open up to you).
4. After the three (or more) separate interviews, make a stab at combining the three (or more) clouds into one generic cloud.
5. If you can't get the Generic Cloud, try connecting as many of the UDEs as you can. This will build your intuition. Then create some cloud (even a poor one) as a Generic Conflict.
6. Revisit the same people as before, showing the generic conflict. If they don't agree that their conflicts are subsumed (specific examples of the generic conflict), then listen and learn. Modify the generic conflict if necessary.
7. This should give you a lot of intuition and knowledge of the firm. You may have to go back to step 4 in order to get the Generic Conflict.
Don't worry about offending. When you find WHAT TO CHANGE, everyone will love you.

Hints on finding a person's UDEs: Many times the people you interview won't give you their UDEs. Sometimes they can't verbalize them.
Sometimes you are viewed as an outsider. Maybe they don't want to discuss 'dirty laundry'. Maybe they don't want you (a lower level in the organization) to know about the problems at their level. Whatever. I find the following approach works quite nicely (and if said with real concern it has few negative branches).
1. "What area are you responsible for?"
2. "What is the purpose/goal/mission/product/measure that you try to achieve?" (It doesn't matter if they say parts per day or $/part or profit or reduced cost or flow time or other. Just get them to express in some terms what they are trying to do.)
3. "What level of performance have you achieved? / What was the profit last year? / How many did you produce? / What was the flow time? / ..." Get some measurement of their progress towards their goal.
4. "Is that all? / That long? / That few? / That many? / ..." Make some statement of disappointment. Your disappointment is not condemning, just a bit shocked.
5. "I would think with the quality of this organization and the excellent people and processes you would have much more (or less)."
6. "What stops you from doing better?"
7. WRITE FAST! The UDEs will flow fast! In your mind, think of the opposites of the UDEs (the Desired Effects) that the firm is missing. Look for ANY connection between any one UDE and any other UDE.
8. Be empathetic / understanding. "Oh, now I see why it is so hard. This UDE prevents this DE. And not only that, this UDE also causes the other UDE which prevents this other DE! Wow, thanks for sharing that with me. I've learned a lot about your organization."
Well, now you have my secret formula. Keep thinking!

Dr Holt

+( CLOUDS and CAUSE-EFFECT-CAUSE relationships 6 CRT <--> ISHIKAWA

From: "Jim Bowles"
Subject: [cmsig] Re: CRT and ISHIKAWA
Date: Mon, 28 May 2001 15:49:18 +0100

Hi HP

You wrote:
>First level is OBSERVATION : you just watch things like the rise or fall of
>the day.
FOR ME THE FIRST LEVEL IS CLASSIFICATION, BUT OF COURSE THIS STARTS WITH OBSERVATION. E.g. the table of elements.

>Second level is CLASSIFICATION : you have observed that there are stones,
>plants and animals and now you start to classify them into
>trees, flowers, grasses, fish, birds, 4-legged etc.

FOR ME THE SECOND LEVEL IS CORRELATION.

>Third level is EFFECT-CAUSE-EFFECT : you observed and classified an effect,
>you hypothesize a cause and predict other effects of your
>hypothesis.

FOR ME THIS MEANS THAT YOU SPECULATED A CAUSE AND THEN VALIDATED IT BY PREDICTING AND VALIDATING ANOTHER EFFECT. E.g. apples fall from trees. Hypothesis: due to the force known as gravity. Proof: the orbits of the planets around the sun. (20 years later)

+( CLOUDS and CAUSE-EFFECT-CAUSE relationships 7 applied on ToC

From: "Philip Bakker"
Date: Thu, 21 Jun 2001 01:27:51 +0200

After the contributions of Pepijn van de Vorst, Bill Dettmer, Allen Simen, Rick Dennison etc., I collected some Undesirable Effects (UDEs) from earlier discussions associated with the propagation of TOC. I think many of these UDEs are pretty common:
- More people with reasonable analytical skills should be available in order to be able to apply TOC.
- Few people have heard about TOC.
- Few people are motivated to learn more about TOC.
- Few people are willing to give TOC a thorough try.
- Most people are too busy doing the things they're doing.
- People seem only comfortable with bottlenecks, conflicts, and cause-and-effect diagrams when phrased in terms of Deming, TQM, JIT etc. (which they've all heard of).
- TOC propagates too slowly.
- Sales people are too often spoken to in TOC terminology instead of their own language.
- TOC people have little experience in applying the concepts to sales due to a lack of deep knowledge of the business-to-business sales process.
- TOC spreads only at the pace which suits the "owners of TOC" in their strategy to maximize THEIR revenue.
- Only a few TOC books (The Goal) are widely available in bookshops.
- TOC hardly receives coverage in business programs on TV or in the main business magazines.
- TOC people are often perceived as zealots, dogmatics or religious fanatics.
- TOC people too little practice TOC according to its scientific roots.
- TOC people often ignore or belittle efforts of other management approaches or scientific fields.
- The spread of TOC is very much dependent on Eli Goldratt.
- TOC people are often complaining about the resistance they meet.
- TOC people tend to fight accepted methods (cost accounting).
- The TOC community has a high internal focus.
- TOC concepts have hardly been subjected to scientific research or proof.
- TOC training is very expensive.
- TOC education is too much focused on the Goldratt Satellite Program (video and CD-ROM).
- Marketing efforts for TOC have limited impact.
- Successful TOC references receive little promotion.
- The name of TOC puts too much focus on theory.
- The definition of a constraint is unclear.
- The TOC community operates as an isolated island in a networking world.
- Many consulting companies still favor pricing based on an hourly rate.
- TOC is unappealing to senior management due to its treatment of costs.
- Throughput Accounting seems 'not of this world' to many people.
- TOC measurements are too different from what people are familiar with.
- TOC offers little support for improvement of non-constraints, unlike TQM, Lean etc.
- The TOC toolbox is too much focused on Thinking Processes, DBR, CCPM etc.
- Shifts in constraints can cause too drastic changes in management focus.
- TOC people focus too much on the TOC hammer.
- TOC is mainly associated with production in traditional factories.
- TOC often comes across as being too rational.
- TOC improvements are hard to start with a bottom-up approach.
- TOC people often ignore the human resistance to change.
- TOC people too easily impose their solutions on disciplines such as marketing and sales.
- Explaining TOC takes too much time.
- TOC people too little practice their own TOC tools.
- Only a limited number of TOC implementations survive.
- There is a perception that there are substantial risks involved with implementing TOC.

I'm willing to put in time to work towards an understanding of our own constraint that can be managed. Who has a suggestion to just do it? Three-cloud approach etc. Let's Nike it.

+( CMSIG list server

From: "J Caspari"
Subject: [cmsig] Re: Accessing CMSIG list archive
Date: Tue, 23 Jan 2001 13:13:42 -0500

I have not had very good results with this because of the limitations on the number of messages. You will recall that this was one of the original reasons given for moving the Goldratt list to the APICS server. I discontinued the general archive effort that I was doing at that time because of the promise of conveniently searchable archives. Anyway, the directions are below.

Searchable Archives
===================
If you are a member of a mailing list, you can search past messages of the mailing list, and the messages that match your search specification will be sent back to you via email. Send an email message to our List Server Address (lyris@lists.apics.org), with the following command in the body of the message: search listname [search words]. Lyris will search for messages which contain any of the given search words. If multiple messages match your search, the search results will be organized so that the messages that matched more of your search terms appear at the top, and messages with equivalent search scores are organized with the newest messages first.

+( Collins : Built To Last, goals of a company

Date: Tue, 26 Feb 2002 07:00:01 +0000 (GMT)
From: Mandar Salunkhe
Subject: [cmsig] Re: Jim Collins article

In my mail I had asked "Has anyone on this list read Built to Last and had the same questions in his/her mind?".
Except for a few, it seems not many had read this book, which is as good and thought-provoking as The Goal. Yes, Ronald, I shall surely read "Good to Great". I have added here another extract from a Jim Collins article which on reading gives you the same feeling that you get on reading The Goal: that the answer is always right in front of us, and with a little bit of common sense and the "right consistent approach" (a combination rarely found in the upper strata of management), greatness is not far away.

"The Timeless Physics of Great Companies (published as 'Perspectives: Don't Rewrite the Rules of the Road')
By Jim Collins
Copyright 2000 by Jim Collins. This article first appeared in Business Week, August 28, 2000.

I had just finished sharing the results of 10 years of rigorous research into what makes enduring, great companies with a gathering of Internet executives when a hand shot up. 'How do you respond to the idea that nothing from your research applies in the New Economy?' asked the exec attached to it. He was challenging the whole idea of learning from the past. If this truly is a New Economy, he wondered, don't we need to throw out all the old concepts and start from scratch? Well, yes and no. Yes, the specific methods of building companies in coming years will be dramatically different than in the past. But that does not mean we should toss aside the timeless principles that made great companies great. What's the difference? Think of it like this: While the practices of engineering continually evolve, the laws of physics remain relatively fixed. The immutable laws of management physics include some simple yet important concepts: Do only those things that you can be the best in the world at, those things you can be passionate about, things that make simple economic sense. Take the axiom that you need to 'put the right people on the bus.' The best executives have always focused first on getting people who share their values and standards.
They understood that vision and strategy cannot compensate for having the wrong people. Once you have the right folks in place, it's much easier to steer the bus as conditions change. That's exactly the idea that Bill Hewlett and David Packard had in mind when the two young engineers met to form their company in 1937. The minutes of that meeting begin by stating their intention to manufacture innovations in the general field of electronics, but then go on to say, 'The question of what to manufacture was postponed.' In fact, the whole founding concept of the company was not so much what, but who. They were best friends in graduate school and simply wanted to work together and create a company with people who shared their values and standards. As Hewlett and Packard scaled up, they stayed true to this guiding principle. After World War II, they hired a whole batch of fabulous people streaming out of government labs, without anything specific in mind for them to do. Packard grasped the subtle truth that a great company will always generate more opportunity than it can handle, and that growth is ultimately constrained only by the ability to get enough of the right people. At the same time, if he picked the wrong person (someone misaligned with the company's values or unable to deliver results), Packard would throw him off the bus, and in a hurry. Yes, the Internet requires significant changes in the way we manage and lead. But if you don't have the right people, it doesn't matter what you do with the Internet; you still won't have a great company. If, for example, Value America had spent less on advertising ($69 million in 1999 on a revenue base of $183 million) and invested even half that in assembling an army of the best possible people, then perhaps it would have avoided the distinction of becoming the consummate dot-com implosion.
Iacocca-style advertising and a snazzy Web site are all fine and good, but Packard's Law still holds, even in the Internet economy: Growth in revenues cannot exceed growth in people who can execute and sustain that growth. In fact, our bigger problem today lies not in the fact that we live in a time of change. Rather, like people in the 1500s groping to understand the natural world, we have only limited understanding of the physics of great companies. Worse, we inconsistently apply what we do understand. === The real path to greatness, it turns out, requires simplicity and diligence. It requires clarity, not instant illumination. It demands that each of us focus on what is vital and eliminate all of the extraneous distractions." - Jim Collins

+( complex systems From: "Jim Bowles" Date: Thu, 3 Mar 2005 00:56:33 -0000 Subject: Re: [tocleaders] Inherent Simplicity

Here is what Eli Goldratt said in an interview he gave about the Viable Vision: "When I was still in University I said that the way you look at things in physics, biology or chemistry is so different from the social sciences. In physics there is no statistics. There is Cause and Effect, and they are the tools for predicting the outcome. You understand that everything is always connected and things converge. Newton, for example, set out his three laws of motion and was finished with all the mess of the objects in the world. One can assume at the beginning that in reality there is an inherent simplicity, and that things converge." "The more complicated the system is, the more interdependency there is. That means that the degree of freedom is very limited. And that makes it a simple system. Even in the most complex system there is inherent simplicity. If we find the points that affect the entire system, its constraints, and know how they affect it, it is a much simpler system than we had imagined." It also appears in one of his slides during the Viable Vision when he introduces the Five Focusing Steps.
--- From: "Jim Bowles" Date: Fri, 9 Sep 2005 09:54:04 +0100 Subject: Re: Where conflicts hang out; was: RE: [tocleaders] A question

For those who are new to TOC it might be useful to use Eli's actual words. Day 1 - 20/5/98 An overall view of TOC. Objective: To transfer the world of material science to the world of people - the human-based world. There are two problems: 1. The definition of science[i]. Can't be measured. 19th century Popper[ii] - the ability to predict. 2. The difference between hard science and soft science. Complexity. System A & system B (with arrows). See diagram A and diagram B below: Definition of complexity. - The common definition is the amount of data needed to be supplied. - In science, the degree of freedom the system has; in common language, the number of points needed to be touched in order to influence the whole system. [Pyramid of command]. Arrows of cause and effect. => B is simpler. Science is based on two beliefs: - Any system in reality is extremely simple. There cannot be a complex system in reality. Example: Four forces - the search for a unified force that explains all four. Transfer this to human behaviour. A sense of control demands simplicity. A scientist will not accept system A. It is incomplete. In order to improve it we need to find the common cause through cause and effect. Question: Which approach is more effective? It depends on the scientific approach, the work, the time it takes. In the steel industry example, tons per hour was shown to be a problem. Measurement? - People are not stupid. We need to change the definition of the word "problem". Measurement cannot be defined as the core problem. Science defines a problem as a conflict between two necessary conditions. At the bottom there must be a conflict. The second belief of science: conflicts cannot exist in reality. Example: We measure the height of a building. We get two answers. The scientific approach: we surface the underlying assumptions and remove them.
There must always be a way to solve problems. Conventionally - compromise. Solving a problem is taking actions to change reality so that the conflict evaporates. TOC is about finding a no-compromise solution. A Win-Win. Jim Bowles

+( Consignment Stock - Mafia Offer From: Rudolf Burkhard [mailto:RudiBurkhard@compuserve.com] Sent: Wednesday, December 22, 1999 11:07 PM To: CM SIG List Subject: [cmsig] Distribution Solution/Mafia Offer

In one of Eli's books (It's Not Luck?) he suggests the Distribution Solution in combination with consignment inventories as a Mafia Offer. The Distribution Solution makes it possible to do this because much less inventory is needed. I think in the same story he also suggests that the 'free' inventory can be traded for shorter payment terms - in a way that both supplier and customer win. The effect is possible because the supplier values his inventory on consignment at raw material cost while the customer must value the same inventory at his purchase price - usually a fairly high multiple of material cost. The benefit to the customer is therefore enormous and he should be willing to give some back by paying on shorter terms. If you play with such numbers (and European payment terms of, say, 90 days (France or Italy)), then in the customer's books the effect can be seen to be small - as his payables also go way down, e.g. the reduction in inventory is offset by a reduction in payables. (The offsetting effect is actually there for any level of receivables.) The reduction in working capital is then very small.

============================================================================ There is benefit when this solution enables the customer to turn the inventory and receive cash to use for payment before the payment is due. In typical cases where the customer must purchase a large volume of inventory at a time, the payment is due before all of the inventory is sold. The benefit is recognized through improved cash flow.
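The payables-offset effect described above can be sketched with a rough balance-sheet calculation. All figures, including the purchase volumes and the 80% consignment share, are hypothetical illustrations, not data from the posts:

```python
# Rough sketch of the payables-offset effect under consignment (all
# figures hypothetical). With long payment terms, the customer's lower
# inventory is matched by lower accounts payable, so net working
# capital barely moves - the point Burkhard makes above.

def working_capital(inventory, payment_terms_days, annual_purchases):
    """Net working capital tied up: inventory minus accounts payable."""
    payables = annual_purchases * payment_terms_days / 365.0
    return inventory - payables

annual_purchases = 1_200_000   # customer buys $1.2M/year from this supplier
terms_days = 90                # typical French/Italian payment terms

before = working_capital(inventory=300_000,
                         payment_terms_days=terms_days,
                         annual_purchases=annual_purchases)

# Under consignment: purchased inventory drops sharply, but so does the
# purchase volume flowing through payables (consigned stock is not yet
# invoiced), so payables shrink almost as much as inventory did.
after = working_capital(inventory=60_000,
                        payment_terms_days=terms_days,
                        annual_purchases=annual_purchases * 0.2)

print(f"working capital before: {before:,.0f}")
print(f"working capital after:  {after:,.0f}")
print(f"inventory cut by 240,000; working capital cut by {before - after:,.0f}")
```

With 90-day terms, a $240,000 inventory reduction shrinks working capital by only a few thousand dollars, which is why the post argues the customer's real benefit shows up in cash flow and service, not in the working-capital line.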
Another benefit is when one looks at return on assets. The extra inventory means slow turns and this means low return on assets. And it does not matter that the invoice is not yet due for the inventory, unless one is looking at Return on Net Assets. Norman Henry

=========================================================================== Bill, there is a Jonah workshop video, October 1996, abstract below, that sets out a TOC solution. I have worked with an office supplies company that developed almost the same solution, but over a considerable timescale, by trial and error, and by the formidable intuition of an individual manager. Using TP, could that time not have been reduced, and could ordinary sales people have been able to sell it? Also, from the tape, it was clear that the sales people needed to re-develop the solution for themselves under guidance. Regards Peter Evans

JCS-5 A Mafia Offer isn't Enough: Obstacles, Negative Branches, Academic Help & New Ideas Patrick Hoefsmit TIM VOOR KANTOOR Patrick is the Managing Director of TIM VOOR KANTOOR. TIM stands for Time is Money. This 100-year-old company sells office supplies. They sell $18 million in a $1.2 billion market. Prior to TOC, they were box pushers. Now they are a service organization. Patrick's goal was to implement a Mafia Offer to the market. UDEs of the market (reference: office supplies): Office supplies are not available when needed. Everyone takes them and hoards them. When needed, last-minute ordering is very expensive. Mafia Offer Solution: Put a TIM storage cabinet in the client's office and keep it stocked. They receive an itemized list of supplies used weekly - the Never Ending Store. Implementation issues: Installed buffer management on fast movers and slow movers. Some clients are far away from TIM so they needed a franchise organization to deliver and restock. They needed to learn to sell the Mafia offer.
The decision maker to accept the Mafia offer is higher up in the organization than the purchasing agent. It is difficult to raise the issue of office supplies high enough to merit high-level attention. It is hard to get franchises started with few clients using the Never Ending Store. Some clients have people who stock supplies. Solutions: Hired an internal marketing person to teach franchises. Created a TT to sell the Never Ending Store. Results: Clients doubled or tripled use of supplies (started purchasing other things from TIM that they previously obtained elsewhere). Started ordering copiers and fax machines because TIM's stock person was in the office weekly and was an easy contact. Pilferage went from 10-20% to near zero (before, sales of magic tape went up in December and binders up in September). Now with the reduction in quantities, there is a maximum that can be taken. Also, accounting is good. One manager told the office, "We have issued 1.2 pairs of scissors per person this last month and I still don't have any. I don't know where they are but if they are not back by tomorrow, there will be trouble." They mysteriously returned.

=========================================================================== Straight from Steve Melnyk's mouth (he was my OM prof during my MBA studies) were also concepts like ... - Minimum requirements to compete - Desirable (but not required) product content - Content which DELIGHTS the customer - Content (or lack thereof) which destroys the deal - All the above applied to transaction terms I think the notion of a "mafia" offer goes to determining and meeting customers' requirements AND finding enough things that DELIGHT the customers so that (once they understand), no customer will decline the offer. These notions apply best (perhaps, only) in segmented (or segmentable) markets. Successfully segmenting commodity markets where the buyer is also the (retail) consumer is a GOOD trick. It is not impossible.
My better 51% pays a premium for Morton brand table salt rather than buying the store brand ...

+( constraint 1 From: "HP Staber" Sent: Saturday, April 14, 2001 3:54 AM

The classical categories of constraints are: - market - vendor - resource and - policy constraint, as proposed by Goldratt

> At the Decalogue homepage, in the link to "10 steps to improvement", there are 7 constraints and methods to resolve them mentioned: > --- > Resource and capacity constraints: Drum-Buffer-Rope together with Buffer Management allows effective exploitation of the constraint and control over the performance of the system. > Time constraints: Critical Chain, synchronization of strategic resources (drum) and buffer management control and elevate time constraints. > Policy constraints: The Thinking Processes provide a systematic analysis of what is wrong with current policy and what to replace it with, without generating negative effects. > Sales constraints: The Thinking Processes have to be used to redirect the mindset of salespeople to focusing on the objective of selling and demonstrating their commitment to the goal of the company. > Marketing constraints: This is addressed through constructing an offer the market cannot refuse. (See Step Eight.) > Organization (structure) constraints: These arise when the developing business is held back by practices, functions and authorities that do not make sense anymore. The Thinking Processes must be used to develop a new structure suitable for the growing business. > The human behavior constraint: This has been addressed throughout the Decalogue in setting the goal, defining measurements, designing and making the system stable, and controlling variation. > --- > I tend to think that you are able to regroup them into the 4 types, however. The last two fit well into "policy constraints". The time constraints are usually either resource or policy constraints ... > But then I'm not authoritative in TOC.
What I say is just what I learned here and from reading the books.

From: "Bill Dettmer" Date: Sat, 14 Apr 2001 07:40:44 -0700

[Okay... Material, financial and knowledge/competence. Material is different from resource, which has been used in the "capacity" context long enough that I don't think it can bear wrapping anything else with it. By a material constraint, I'm referring to a shortage of material (volume) or enough material at the quality level required to complete a mission. For example, about 20 years ago there was a world-wide shortage of chromium. That had nothing to do with the reliability of suppliers, or steel producers' capacity to do something with it once they got it. It just wasn't available. I suppose that the current high level (projected to go higher) of gasoline prices in the U.S. could be considered a material shortage stemming from OPEC's decision to curtail production, with no alternative source readily available. Financial would be a cash-flow constraint. It's not a budget issue. It's a situation in which the company needs to be paid for work it has already done before it has the funds to go ahead with work it could do in the near future--but can't, because it doesn't have either enough cash in the bank or enough credit. Bob Fox has often said that net profit and ROI are nice, but "if you don't have enough cash, nothing else matters." I tend to agree with that. Cash CAN be a constraint, yet it's not explicitly addressed in any of the other categories. Knowledge and competence are somewhat related to one another. Maybe a company that wants to expand its business COULD do so (the market is there, and it has the capacity), but can't because to do so would require knowledge or understanding of technology that it doesn't currently have. E.g., a capability to effectively do solid-state electronic production, micro-optical work, or plasma physics.
It's not that this kind of knowledge doesn't exist somewhere, but if the company doesn't have that knowledge resident within the organization, then I submit they have a knowledge constraint. Similarly, not having a work force that can apply such technology might be a competence constraint, so I tend to consider them related. Like quality, I haven't seen these explicitly stated as being included in any of the other categories. There may be others as well (at least, I admit to that possibility--otherwise, I'd be guilty of a "thinking constraint").]

+( constraint 2 in the market Date: Fri, 09 Feb 2001 11:40:27 +0200 From: Eli Schragenheim Subject: [cmsig] RE: IS the constraint always in the Market? Or should it always be internal?

The basic assumption I try to challenge is 'either you have the constraint internally OR you have the constraint in the market'. To my mind we can, and in most cases should, strive to have an internal constraint while still recognizing the market as the major constraint, hence keeping market trends under close inspection and trying to exploit both constraints. I too see two clear categories: either the constraint is in the market, or we have both an internal constraint and the market. While this is a different verbalization than the one Eli Goldratt uses, I believe it leads to the same guidelines. Mark Woeppel describes Goldratt's approach of keeping an internal constraint while maintaining a presence in many unrelated segments to protect against the damage of market decline in one segment. I interpret this as guidelines for proper exploitation of both constraints: the limited capacity in the organization (focused on just one internal resource) and the stochastic nature of markets. So, the recommendation is: do not fully exploit your internal constraint by sticking only to the most lucrative markets - go to somewhat less lucrative markets as well. This is a core strategic approach.
For the shorter term I have another guideline: leave SOME protective capacity on the internal constraint to protect the service level to your market. If you try to load your internal constraint to 100%, then your lead time may be too long for your clients, and any Murphy hitting directly at your constraint has an impact on all your customers. Note, if I have knowingly chosen to load my capacity-constrained resource to only 90% of its capacity in order to preserve the level of service to the market, then the internal constraint IS STILL A CONSTRAINT! I refuse to take more orders that are available and would generate more T, because I lack the constraint's capacity to do more while keeping the level of service. So, my approach is that the focus of management should be to assess and closely monitor the balance between the market opportunities and requirements and the limited capacity/capability of the strategic internal constraint. I see no conflict between this view and Goldratt's advice to Mark Murphy. Challenging the assumption that the constraint is either internal or external simply evaporated the cloud.

Mark Woeppel wrote: > > What a juicy question! > > The idea of having a constraint externally is that you have "just enough" > protective capacity internally to absorb internal and external fluctuations, > thus maintaining a high service level and protecting future throughput. > Maybe we can get Eli S. to comment further on this - I know he lurks on the > list. > > The idea of keeping the constraint internally will maximize control, and > protects you, to a certain degree, from major downturns in your primary > market. The thinking is that if the constraint is internal, and the market > undergoes a decline, you will not suffer too much, because you have > implemented a segmented market strategy that will allow you to shift your > efforts to the market that is not declining and have not sought to take the > entire market segment (what a mouthful!).
> > So one side says that we are to keep the constraint in the market by keeping > excess capacity, but using an internal control point (drum) to synchronize > the system. This protects long-term throughput. > > The other side says keep the constraint internally to protect the health of > the organization, and manipulate the market segmentation strategy to keep > the plant full. This maximizes profitability. > > Which is right? How much risk do you want to take? How well can you > segment your market? How much market share do you have?

--- From: Billcrs@aol.com Date: Fri, 9 Feb 2001 09:18:57 EST

Be careful here, because I doubt that this is what Eli G. really means. An internal constraint (as in a bottleneck) means it is more easily controlled, but the key question is what are the implications for the future. Irritating customers in the short term has great potential for causing them to go away in the long term, and you may not know until it is too late. I'm sure that what he means is to manage your operations by keeping the constraint in the same place and scheduling it accordingly, but DO NOT force it to be a bottleneck. A sustained bottleneck greatly increases the probability of challenges in maximizing Throughput in the long term.

In a message dated 2/8/2001 4:46:52 PM Eastern Standard Time, Fountme@jmusa.com writes: > The constraint in the market feels true to me as a polite way of saying "We > have not yet found all possible ideas and customers for our product." > > As for keeping the constraint internal as an easier way of focusing, from a > management standpoint does this not make sense? For example, in computer > companies, the easiest way to grow something is to just add another > computer doing the same thing. You probably know as well as, or better > than, the rest of us the expense and problems of any process change. > > Is the reality, though, that it is really hard to sell buying more > non-constraint capacity to keep the business management easy?
If we free > up the bottleneck, it will move. > > >>> "Murphy, Mark" 02/08/01 10:42AM >>> > > > I have a question for the list: > > I have read Eli S's book Management Dilemmas - in this book he professes a > belief that your true constraint is ALWAYS in the market. > > However, from personal conversations with Eli G, I know his personal belief > is that a business should manage itself in such a way that its constraint > is always internal. His reasoning is that an internal constraint can be much > more easily managed for maximum profit. (This does not preclude growth in > any way - but determines that you will grow as fast as possible while > keeping your constraint in the same place.)

--- From: Billcrs@aol.com Date: Fri, 9 Feb 2001 14:50:18 EST

In a message dated 2/8/2001 10:09:26 PM Eastern Standard Time, m.woeppel@gte.net writes: > What a juicy question! >.... {{I would suggest that there is a flawed assumption inherent in what you have said, Mark. You claim that you can "shift your efforts to the market that is not declining and have not sought to take the entire market segment". It has been my experience that no business in today's world can gain significant market share in any market without really concentrating on that market for a long period of time. This belief, that market share can be gained simply through faster delivery, is simply not defensible. It's not that you can't gain some sales; it's just that as your competitors see their share declining they will respond by matching your delivery commitments, no matter what they have to do, and your gains will stop. Business customers do not readily switch suppliers unless they have been really unhappy for a long period of time. It takes more than claims of faster delivery to get them to switch, and when they do switch it certainly takes longer than a few weeks. If a business is to gain market share they are going to have to do it by helping the market produce better ongoing operating results.
To do this requires a concentration of effort that is full time. Part time won't do it. Thus I would argue for not allowing the constraint to be a bottleneck for any significant length of time.}} > So one side says that we are to keep the constraint in the market by keeping > excess capacity, but using an internal control point (drum) to synchronize > the system. This protects long-term throughput. > > The other side says keep the constraint internally to protect the health of > the organization, and manipulate the market segmentation strategy to keep > the plant full. {{Keeping the constraint as a true bottleneck can only protect the short term health (maximize short term profitability) of the organization. In fact, I would argue that you are mortgaging and very possibly sacrificing your future in order to maximize short term Throughput with this method of management. Internal bottlenecks irritate the heck out of most customers. Do it long enough and they will go elsewhere and you may not even know that they have left until it is too late. In my comments above I've said that customers have to be really unhappy with delivery for a long period of time to change. Having a policy of keeping the constraint internal runs the risk of doing just that.}} > Which is right? How much risk do you want to take? How well can you > segment your market? How much market share do you have? > > The great consultant's answer: It depends. {{My answer would be that there are far more long term risks associated with a policy of keeping the constraint internal. I think it would be extremely difficult to make a business case for keeping the constraint inside the business as being a good decision IF the business wants and/or needs to grow.}} +( constraint 3 - policy constraints From: David J Anderson [mailto:netherby_uk@yahoo.co.uk] Sent: 07 December 2004 15:50 To: agilemanagement@yahoogroups.com Subject: [agilemanagement] Policy Constraints are Dead A change of subject this morning... 
Policy Constraints are Dead! So it's been a while since I was at the TOC ICO conference in Miami. I've been away on personal business. I'm just catching up with some of the news from back then. Regular web log posts at the website will restart soon. One big piece of news from Miami was Eli Goldratt's announcement that "there is only one kind of constraint - a bottleneck". "Policy constraints were one of the stupidest mistakes of my life." So there you have it - the term "policy constraints" is dead. All policies formerly known as constraints are in fact "subordination decisions" (which is why I used that term in a post yesterday). If you weren't in Miami - you heard it here first ;-) David

+( constraint 4 : correlation to variation Date: Thu, 21 Jun 2001 23:50:28 -0400 From: Brian Potter Subject: [cmsig] Correlation as Indicator of Constraint Location?

The proposal sounds like shooting model boats with 16" 50-caliber guns, but it will identify SOME resource(s). If you do not know, any sane guess is as good as any other sane guess ... so why not? As Tony mentioned, you may learn something useful as a side effect. IF all you want is the constraint location, there is an easier way ... 1- Make a sane guess (e.g., apply your statistical model, or just write everyone's favorite suspect on a paper slip (opportunities for an office pool, here), dump all the slips in a container, draw one slip from the container "at random," and call the resource named thereon THE CONSTRAINT). A saner guess is better, but if you do not know, you do not know. Go with a "best guess" reached by some reasonable process. You need not be too concerned about making a mistake, because you will correct any mistake without suffering more than you now suffer. It would be quite challenging to make a choice worse than what you are probably doing now, without explicit awareness of your constraint and its true capacity.
Since you are making a good-faith effort in a completely different direction, getting worse (even temporarily) is a VERY unlikely outcome. Go to step 2. 2- Run as though the resource you picked is THE CONSTRAINT. Go to step 3. 3- If it is ACTUALLY THE CONSTRAINT, you will know within a few job lead times (or sooner) that your guess was a good one, because the rest of your shop can keep up, WiP melts away, and all the good DBR blessings happen to you. Go to step 5. 4- If THE CONSTRAINT ends up with idle time while one or more other resources cannot clear their queues, the REAL constraint(s) is (are) one or more of those resources. Things are better, but you still have some chaos focused on the resource(s) which cannot keep up. Call one of the resources which cannot clear its queue THE CONSTRAINT and return to step 2. 5- Having identified your internal physical constraint (or maybe even learned that you do not have one), begin doing the "process of ongoing improvement" things as though this exercise were the simple one-liner, "IDENTIFY the constraint." While you are at it, you may wish to consider the "strategic constraint" issue. Which resource would BE the constraint if you could choose? Well? Why not choose? Who does the capacity planning? Why not start with picking a strategic constraint? Would you have known how to pick a strategic constraint up front (before you learned the lessons of steps 2-4)? The simplistic iterative constraint-finding procedure above may sound reckless. In fact, it may be more disciplined than the way many organizations manage their shop floors. Limiting material release to the capacity of SOME resource will reduce the WiP clutter that happens when material release happens as soon as SOME resource can use the material. This iterative process WILL converge, and surprisingly quickly. Each step through the cycle will adjust material release toward a better approximation of actual capacity.
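The iterative guess-and-check procedure above (steps 1-5) can be sketched as a toy simulation. The station names and capacities are hypothetical, and "running the shop" is reduced to one comparison per station:

```python
# Toy sketch of the iterative constraint search described above.
# We "run the shop" by releasing material at the guessed drum's rate
# (step 2), then check whether any other station fails to keep up
# (step 4); if so, one of those stations becomes the next guess, and
# we rerun. Convergence is guaranteed: each new guess has strictly
# lower capacity, and the number of stations is finite.

capacity = {"saw": 12, "mill": 9, "drill": 7, "paint": 10}  # units/day

def find_constraint(capacity, first_guess):
    guess = first_guess
    while True:
        release_rate = capacity[guess]       # rope tied to the guessed drum
        # Stations slower than the release rate cannot clear their queues.
        overloaded = [s for s, c in capacity.items() if c < release_rate]
        if not overloaded:
            return guess                     # guess holds: the rest keep up
        # Step 4 says "call one of the resources which cannot clear its
        # queue THE CONSTRAINT"; picking the slowest converges fastest.
        guess = min(overloaded, key=capacity.get)

print(find_constraint(capacity, first_guess="saw"))  # → drill
```

Even a bad first guess ("saw", the fastest station) converges in two passes here, mirroring the claim above that the estimate always improves and the choices are finite.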
Since the estimate ALWAYS IMPROVES and the number of choices is finite (please, tell me you do not have an INFINITE number of resources; that's a bit much even for a dot-mil), sooner or later one picks the ACTUAL constraint and stops trying alternatives. Upon picking the actual constraint(s), one actually REACHES a state where material release EXACTLY MATCHES real capacity. Yes, you COULD have more than one constraint (up to a maximum of one constraint per product), and more than one active constraint will indeed provide an excellent complexity source. If you try the little algorithm above, you may conclude that you have more than one constraint. If that happens, try picking (from among the likely candidates) as THE CONSTRAINT the resource which has the hardest time subordinating itself to other activities (e.g., it has a long setup time, so that breaking a setup for subordination to demand from another resource wastes too much capacity) and treating the other potential constraints as resources with severely limited protective capacity. If that treatment fails, then fall back on the multiple-constraint view (pick the smallest number of constraints possible; Ockham's Razor applies).

>>>>>>>> Original Message <<<<<<<< Subject: [cmsig] Correlation as Indicator of Constraint Location? Date: Thu, 21 Jun 2001 15:06:52 -0700 From: Laney Nancy K PSNS

See what you think of this idea.... If the constraint drives overall performance, then wouldn't variations in the constraint's performance appear magnified/diminished in overall performance? And if that is so, couldn't you use multi-variable linear regression to find (quantitatively rather than qualitatively) your Herbie? The reason why I ask is that our organization/work is complex and no one is sure what the limiting factor is. All we have is suspicions. Comments? Nancy Laney

--- From: "Tony Rizzo" Subject: [cmsig] Re: Correlation as Indicator of Constraint Location?
Date: Thu, 21 Jun 2001 18:28:20 -0400

This certainly works for physical systems. I've discussed this approach with statisticians who used it to identify the major causes of energy loss from homes, although I don't know precisely what they were measuring or correlating. However, to find Herbie with this method would require that we measure performance at all stations. If we do this, then our very attempt at measuring performance is more than enough to corrupt the measurement of interest. In other words, just by measuring it, we affect it and confound our results. Finally, there is the issue of policy constraints. How does one include policy constraints in such correlation studies?

+( constraint 5 : where is the constraint Subject: [cmsig] Where is the constraint????? Date: Wed, 15 Dec 2004 18:58:19 -0500 From: "Potter, Brian (James B.)"

There is actually a pretty effective and easy empirical method. The constraint is where buffer management tells you it is. Switch production to S-DBR (like DBR but simpler: buffer only the shipping point, which you treat as the only constraint [meet delivery promise dates], and tie the rope from the shipping point to material release). If your buffer management never identifies any particular resource(s) as being a high-frequency cause of shipping-buffer penetration (threats to on-time delivery), you have no capacity constraint. This is actually a fairly likely result. If buffer management exposes a small number of resources as penetrating the shipping buffer too deeply much more often than other resources, one or more of those resources (or some resource[s] upstream from them) may be an internal capacity constraint. Check those resources for policies (e.g., large transfer batches, infrequent setups, high local-efficiency requirements, making much FGI to stock, ...) which may artificially make any such resource present highly variable supply to their downstream demands (or receive highly variable supply from their upstream suppliers).
Absent reasons to suspect poor subordination (formerly called "policy constraints") to the shipping point (or after correcting any poor subordination policies), try treating the worst offender(s) as the DBR drum(s) by switching from S-DBR to full DBR. After the full DBR system begins working, the constraint buffer at each drum will once in a while identify resources which drive constraint buffer penetrations. If any particular resource appears too often in this way, consider shifting the drum from the current drum to the resource causing problems for the current drum. Again, look for poor subordination decisions (and fix them) before electing to reconfigure the drum in the DBR system. Do not pick a new drum based on intermittent buffer penetrations. Choose as a drum a resource that, even with reliably available inputs, frequently leaves either the shipping point or the current drum starved for inputs. Do not pick a drum without first checking carefully for subordination choices which prevented the resource from meeting demands which it might meet without the obstructive subordination choice (e.g., be efficient taking higher priority than serving downstream demand). Yes, this is entirely empirical (but very practical and effective, especially for shops with short order-to-delivery times [which will get even shorter]). You probably cannot calculate a constraint (or several) in advance. Besides, S-DBR or DBR will probably surface so much capacity beyond what you think you have that the calculation would be wrong within a few production lead time cycles. Yes, this seems "too easy," but it will work in any environment where either DBR or S-DBR will work (most discrete item production environments). Automatic constraint discovery via buffer management comes as part of the DBR package. 
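Brian's empirical procedure (tally which resources repeatedly cause deep shipping-buffer penetrations, then vet the worst offenders for subordination-policy problems before naming a drum) can be sketched as follows. The event log, resource names, and the one-third "deep penetration" threshold are illustrative assumptions, not part of the post:

```python
from collections import Counter

# Each record: (resource blamed for the delay, fraction of the shipping
# buffer consumed when the order finally arrived). Illustrative data only.
penetrations = [
    ("mill", 0.8), ("lathe", 0.4), ("mill", 0.9), ("paint", 0.2),
    ("mill", 0.7), ("lathe", 0.35), ("mill", 0.95), ("mill", 0.6),
]

DEEP = 1.0 / 3.0  # assumed boundary: anything deeper than a third is "deep"

# Tally only the deep penetrations; shallow ones are treated as noise.
deep_causes = Counter(r for r, frac in penetrations if frac > DEEP)

# Pareto view: resources sorted by how often they threaten due dates.
for resource, count in deep_causes.most_common():
    print(resource, count)

# A resource that dominates the tally is only a *candidate* constraint;
# per the post, first rule out poor subordination policies (large transfer
# batches, local efficiency targets, ...) before treating it as the drum.
suspects = [r for r, c in deep_causes.most_common() if c >= 3]
print(suspects)
```

A real implementation would feed this from buffer-management records over many order cycles; as the post notes, SPC limits (rather than a fixed count threshold) help avoid a false shift to a resource that merely has low protective capacity.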
That elegance derives from the applied statistical process control methods (usually, a Pareto chart and some "common sense" will suffice, but SPC will help avoid a false shift from a real constraint to a resource with low protective capacity) involved in buffer management. DBR automatically shifts inventory to the place(s) where it will protect due dates from Murphy with very nearly the minimum required WiP. BTW, a concurrent switch to rapid replenishment distribution will help you cut FGI substantially while improving customer service and presenting your plant with lower variance demands. The time outbound logistics folks spend selling down excess FGI may help operations folks learn the new methods with less pressure on meeting current demands. -----Original Message----- From: Spyros Bon [mailto:markaseos@yahoo.com] Sent: Wednesday, December 15, 2004 3:53 PM To: Constraints Management SIG Subject: [cmsig] Where is the constraint ????? Hi everybody, I am opening this discussion for a very simple reason. In TOC, whatever you do, the first step - always - is to identify the constraint. Let's assume that we do not have a policy constraint (TP can handle this) and that we have only physical constraints. What are the methodologies to find it? Let's not take the easiest case of walking around and looking for WIP; let's take a more complicated environment - production is not the same every day (many SKUs), demand is not the same every week, and there are several production lines sharing common resources. What would be the steps that you would follow to identify the constraints? With your comments you could make it really complicated - so we can learn different approaches from each other. --- From: "Stefan van Aalst" Subject: RE: [cmsig] Where is the constraint ????? Date: Thu, 16 Dec 2004 09:18:25 +0100 One very important decision is around the question of where the strategic constraint must be in order to manage the company in the way one wants to manage it.
Some argue that it is important to have an internal constraint. This always leaves you with the possibility of interacting constraints, for there is always the market constraint (the promises sales made act as a constraint), and an internal constraint might, for whatever reason, pop up. This is undesirable from a TOC point of view ...but also from other philosophies: interacting constraints = chaos = unpredictable = in direct violation of what management needs, namely predictability (within certain limits). If you don't like TOC, then take Deming; same story. So once the strategic constraint has been identified, it must be decided how to exploit it and what needs to be done to subordinate the rest of the organization to it. The first question that arises: if the strategic constraint is at delivery only, how do we make sure that the flow is not constrained? Many techniques exist for this, and simulation is one of them (in whatever form). If the simulation demonstrates a weak spot, action must be taken to make it strong enough ...somehow. If the above is not done, chaos will quickly come into play. The above is only possible when the system is somehow capable of dealing with fluctuations. There are only two mechanisms that can do this: input control (refusing or adding extra work if actual demand is a mismatch with the available system capacity) or capacity control (adding/removing capacity if actual demand is a mismatch with the available system capacity). If one can't play enough with capacity, typically in capital-intensive operations, DBR as it is mostly known is fully applicable. If one can play nicely with capacity, typically in ready-available-skill, human-intensive operations, then DBR is still applicable (see paragraph below) if in addition the simulation or another solution is used to identify which station needs more/less capacity. Why is DBR still required, IMHO? Forget about TOC; think Deming, Shewhart, and Pareto.
Next to good planning and control, continuous improvement is important. Process improvement can only come from listening to the voice of the process (not from doing simulations; only from actual facts). Pareto diagrams are not only straightforward, they are also very pragmatic and proven (including theoretically proven). An important business driver is lead time: the faster, the better for both the organization and the client. Buffer Management can be used to provide good trigger points for collecting the data for the Pareto. So in short, what you describe doing follows the Focusing Steps pretty precisely, and in doing so you use principles similar to those on which TOC is built. Can it be done even better? Assuming that Deming et al. are right, yes you can. Your operations can be improved with more focus if you use a Pareto that draws its information out of Buffer Management (this doesn't mean you must manage your system through Buffer Management, only that you collect the data based on the triggers of Buffer Management). I wouldn't be surprised if you are doing something very similar already ...in any case, call it what you want, but if it quacks like a duck, looks like a duck, waddles like a duck ...I'm tempted to call it a duck. ----Original Message---- From: Cirunay, Cesario N [mailto:cesario.n.cirunay@boeing.com] Sent: Wednesday, December 15, 2004 11:40 PM To: Constraints Management SIG Subject: RE: [cmsig] Where is the constraint ????? We manage our business through Risk Management and Simulation upfront, during the planning phase, to identify possible constraints. We adopt all the best practices in the industry that we have ever known and discovered. Simply, our approach is not to wait for the constraint to appear. Whether we like it or not, constraints will happen along the way; so, proactively, we prevent them from happening. We belong to a make-to-order company.
During the proposal phase, we already know how long it will take to build the products and what resources are required, both technical and physical. Before signing the contract, the rate of production is already determined at an executable level. We use a push system in planning and pull in execution. For the last decade, we have operated at approximately 90-95% utilization of our capacity on a semi-continuous flow production with no significant constraint in the whole operation. Our business unit P/E is getting better every year through elimination of wastes, or Lean. Sorry guys, I have to hurt your feelings - technically, as of now, TOC has no place in our operation. --- Subject: RE: [cmsig] Internal or External? [was: Thank you (Segment 3 of 3)] Date: Mon, 9 May 2005 09:42:01 +1000 From: "Justin Roff-Marsh" As I suggested previously, I strongly suspect there is a connection between Porter's three competitive strategies and the optimal constraint location (I do agree with John that the constraint should always be internal):
- Cost leader (operational efficiency)
- Technology leader (continual innovation)
- Niche marketer (customisation)
My suspicion is that the constraint, in each instance, should be determined by the strategy:
- Cost leader: production
- Technology leader: new product development
- Niche marketer: the sales process
Now, I know my position on this has been disputed, but I don't see a flaw in my logic, and I can think of lots of examples to support it.
I'm thinking that the idea of having an internal constraint is less concerning if the constraint is maintained in the appropriate location:
- Disney or J&J presumably want to keep their NPD functions fully utilised
- Toyota and Dell would not be able to maintain their cost advantage if their production processes were not fully utilised
- A niche marketer cannot afford to have the constraint anywhere other than the sales process -- but clients are more likely to be prepared to wait for service (we maintain a two-month wait-list)
I'm guessing, where Toyota and Dell are concerned, that their sheer size enables them to aggregate variation and, consequently, that a production constraint would result in relatively minor variations in lead time (but that's just a guess). +( constraint 6 : setup optimization example From: Prasad Velaga [mailto:prasad_velaga2003@yahoo.com] Sent: Wednesday, March 30, 2005 5:03 PM To: Constraints Management SIG Subject: Popularity of TOC The following is a numerical example that I have created without waiting for any help from Christopher. There are 10 products, 1, 2, ..., 10, that a company can produce; the setup time, run time per unit, throughput per unit, and upper limit on production level are denoted by S, R, T, and UL, respectively. The time unit is a minute. What is your production strategy if you have 4800 minutes of time for production?

#, S, R, T, UL
1, 230, 2.5, 60, 250
2, 520, 1.6, 45, 400
3, 340, 3.1, 55, 240
4, 210, 1.2, 32, 360
5, 420, 2.2, 50, 300
6, 150, 3.0, 55, 420
7, 520, 1.2, 38, 310
8, 310, 1.8, 48, 260
9, 450, 2.1, 67, 340
10, 500, 1.5, 50, 410

S = setup time for a product
R = run/process time per unit of product
T = throughput per unit of product
UL = upper limit on quantity of product, which should not be exceeded.
If you have an option to select and produce several products having different values of S, R, T, and UL within a specific constraint time, what is the product mix that maximizes the total throughput during that constraint time? This is a very diluted version of some real production problems involving a single constraint resource. Sorry for introducing new symbols as part of a numerical example. --- From: "Stefan van Aalst" Interesting challenge. If I got it correctly, the question is asked of the production manager. A quick calculation shows that the actual market demand exceeds twice the available capacity (2.12x) if the total demand is to be satisfied. This requires a business decision and not so much a production strategy: - how likely is it that this situation will continue, such that it makes business sense to double the total capacity? (a quick calculation shows that 2x capacity increases T by 1.78 times; the question is what OE and I will then do) - if this doesn't continue long enough to justify investing, then it is a matter of choosing the right customers (which ones will be profitable over the long run). This last point shouldn't be underestimated. I've done some business with a large copper company and was lucky to talk to the president as well. The problem they faced then was how to increase capacity: the order books were more full than production could handle (6-9 months). Unfortunately I wasn't able to make clear the problem we foresaw: when the customer perception changes so that there is no shortage worldwide, even 'firm' orders will leave the books. Even more unfortunately, within 6 months after this talk the company faced firing people rather than figuring out how to increase capacity. But sticking to the original question and given the available information, my answer would be: since set-up times are significant in relation to the production run (from 37% up to 140%), I included them in T/Cu.
Given the described situation, I assume that only one setup is needed and that more setups would neither improve the flow (as would usually be the case) nor have an impact on delivery reliability and inventory. In any case, as a production manager I will be in trouble. Sales, assuming they work on a commission basis, will continuously be all over me, for they can earn a lot more if I were to produce more. --- From: "Stefan van Aalst" Subject: RE: [cmsig] Popularity of TOC - Bad Solutions Date: Fri, 1 Apr 2005 19:40:43 +0200 You only mentioned the algorithm, not the rule that goes with it for selecting the 'last' product. The rule is: if the total capacity required (S + UL*R) > remaining capacity, then adjust the UL for that product to what is feasible, and then do T*UL/(R*UL+S) to calculate the T/Cu. Now choose the product with the highest T/Cu AND reduce the remaining capacity. This will result in:

#  S    R  T   UL   T/Cu  Capacity (cum.)  Throughput (cum.)
3  75   5  25  150  4.55  825              3750
4  250  3  15  700  4.47  3175             14250
1  300  4  20  300  4.00  4675             20250
2  200  2  10  340  3.86  5555             23650

Note that once products #3 and #4 are chosen, the remaining capacity is not enough to fulfill either all of #1 or all of #2; this results in the next table:

#  S    R  T   UL (adjusted)  T/Cu
1  300  4  20  131            3.180
2  200  2  10  312            3.786

This results in the final table (starting from a remaining capacity of 4000):

#  S    R  T   UL   T/Cu   Remaining capacity  Throughput (cum.)
3  75   5  25  150  4.545  3175                3750
4  250  3  15  700  4.468  825                 14250
2  200  2  10  312  3.786  1                   17370
1  300  4  20  0    3.180  1                   17370

This, I believe, is in line with the answer you're looking for, isn't it, Prasad? Still, I'd like to point out that in this example the demand for capacity is 5555 and the available capacity is 4000; more than 39% extra capacity is being asked for. This is a non-viable situation; focusing on how to achieve the highest T will pretty soon cause this organization to go under. Any calculation, however smart, is based on logic.
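Stefan's selection rule (rank products by T*UL/(R*UL+S), and when a product no longer fits, shrink its UL to what the remaining capacity allows before re-ranking) can be sketched in code. This is an illustrative reimplementation, not Stefan's own spreadsheet, run on Prasad's 4-product counterexample with 4000 minutes of constraint time:

```python
def greedy_mix(products, capacity):
    """Greedy T/Cu heuristic. products: list of (name, S, R, T, UL).
    Returns (total throughput, [(name, units scheduled), ...])."""
    remaining = capacity
    pool = list(products)
    schedule = []
    total_T = 0
    while pool and remaining > 0:
        candidates = []
        for p in pool:
            name, S, R, T, UL = p
            # Stefan's rule: if the full batch (setup + run time) no longer
            # fits, shrink UL to what is feasible before computing T/Cu.
            if S + R * UL > remaining:
                UL = int((remaining - S) // R) if remaining > S else 0
            if UL > 0:
                candidates.append((T * UL / (R * UL + S), name, S, R, T, UL, p))
        if not candidates:
            break
        candidates.sort(reverse=True)          # highest T/Cu first
        _, name, S, R, T, UL, original = candidates[0]
        remaining -= S + R * UL
        total_T += T * UL
        schedule.append((name, UL))
        pool.remove(original)
    return total_T, schedule

# Prasad's 4-product counterexample: (name, S, R, T, UL).
data = [(1, 300, 4, 20, 300), (2, 200, 2, 10, 340),
        (3, 75, 5, 25, 150), (4, 250, 3, 15, 700)]
print(greedy_mix(data, 4000))  # reproduces Stefan's total of 17370
```

As Prasad goes on to note, this is a heuristic: it reproduces the 17370 total above, but the ranking rule is not guaranteed to find the true optimum, and sequence-dependent setups make the exact problem much harder.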
Before diving into finding a number, one needs to ask the following: who does/should dictate the real constraint of the organisation: - production; or, - sales/marketing? In the case of the former, the above exercise needs to be done only once to create a permanent and rigid schedule. Any changes in the schedule will cause additional fluctuations and therefore loss of T. In the case of the latter, production needs to have enough protective capacity to deal with the fluctuations. In this case the buffers in (S-)DBR will do the trick, and there is NO need to do the above exercise. Therefore, in line with the conclusions of MTRD: the speed at which an organization can refocus itself is the ultimate constraint. Therefore choose a viable strategic constraint and manage the organization accordingly. The implication is that from time to time additional T will be missed for the sake of having the highest possible T over a longer period. Chasing today's highest T is suboptimization and will result in a situation that is too complex and dynamic to yield consistently good T. Although these are my words and my view, I'd like to point out Deming's first point: constancy of purpose; the role of a company is not so much making money as staying in business and creating jobs. Forget the 'creating jobs' if it is asking too much at this stage; focus on 'staying in business'. Please don't be offended if I spell it out very simply: focusing on making money does NOT guarantee that you stay in business. Yes, I know what Eli G and many others say. But I believe this quest for the ultimate T at any given time is doomed to fail. You don't need to win every battle to win the war. On the contrary, you have to know which battles are important to win and which are not. Try to win them all and you'll fail ...this goes back to Sun Tzu.
Stefan From: Prasad Velaga [mailto:prasad_velaga2003@yahoo.com] Sent: Friday, April 01, 2005 5:05 PM To: Constraints Management SIG Subject: Re: [cmsig] Popularity of TOC - Bad Solutions Jim, I think it is time to conclude the challenge with a few observations. I noticed that Rick Dilbert, Stefan and Fred got the best solution. I don't know which method Rick used. Stefan and Fred used the same rule: select products in decreasing order of T*UL / (R*UL + S). This rule indeed provided the best solution for my numerical example. It is a fairly good rule. As an operations researcher, I investigated the nature of this rule and constructed the following counterexample with 4 products and 4000 minutes of available constraint time. One may notice that the rule fails to yield the best solution (if I did not make numerical errors).

#, S, R, T, UL
1, 300, 4, 20, 300
2, 200, 2, 10, 340
3, 75, 5, 25, 150
4, 250, 3, 15, 700

If my arithmetic is correct, then the total throughput will be less by $500 over 4000 units of constraint time if we adopt this rule. When setup times are sequence-dependent, as they quite often are, this kind of problem becomes very nasty and we have to get into higher gear. All this discussion holds when there really is a single predominant constraint which already exists or is deliberately created for planning purposes. --- From: "Fred Wiersma" Subject: RE: [cmsig] T/Cu Date: Fri, 1 Apr 2005 10:23:14 +0200 In the 'Prasad Challenge' (10 products, with setup) I computed T/cu as follows: 1) compute setup time per unit (SU) as setup time / max product demand, 2) compute T/cu as product T / (R + SU), where R is the constraint time per unit of product. Example: for product X: T is 50, R is 1.5, demand is 410 and setup time is 500.
Then SU = 500/410 = 1.22, and T/cu = 50/(1.5+1.22) = 18.4 --- From: "Gilbert, Rick" To: "Constraints Management SIG" Date: 02 Apr 2005 First of all, despite the fact that I have a full-head Dilbert mask that I wear at Halloween, my last name is Gilbert, not Dilbert. No offense taken. Just want to get the record straight. I sent my responses off-list to Prasad so as not to interfere with others' thinking about the solution. Although I use math programming in my work, I simply used Excel as a calculator and selected the order of products by T/Cu including setup time. Like Stefan, I noted that there are "edge effects" to be taken into account. That is, once the remaining time is insufficient to produce full market demand for a product, the relevant T/Cu measure has to account for how many units of product can be made in the remaining time. I didn't get caught by the "trap" set by Prasad in his second data set. As I told Prasad in my private communication, I use TOC as the framework to guide my use of math programming. Properly framed math programming will get the same answers as a TOC approach. Sometimes it's simpler to think the problem through using a calculator (even a spreadsheet-as-calculator) and TOC principles. Other problems will be too complicated to see the answer without a well-structured computer program. Prasad mentions that the order of scheduling can affect setup times and complicate the "optimization." I agree. That kind of complication also crops up in multistage operations where one intermediate product can be used in more than one finished product. (Do I make enough "A" components for both products under one setup, or do I need the machine to make "B" components to meet this week's promised shipments?) Another complication occurs when one makes multiple products for a single customer.
It may make sense to make a "lower profit" item because one also makes certain "higher profit" items for the same customer. This is when system-wide thinking is critical. The two products involved may be made in different plants, and may even be in different businesses within the company. Making silo decisions about which products to make on a plant-by-plant basis may jeopardize very profitable relationships. The analysis must take the entire relationship into account. None of these complications invalidates either math programming or TOC. They do invalidate any approach that would require blindly following some strict procedural algorithm. Understand the system. Figure out the behavioral rules for the system, and be careful, because the environment can change and thereby change the rules of behavior. I don't quite agree with Stefan that a company that chooses not to find a way to meet all the market demand will go under. There are likely other companies out there that will step in and fill the demand that "my" company cannot. I just want to ensure that I satisfy that part of the total market that makes my company as profitable as it can be. An example: there was an auto repair shop that I relied on when I lived in Georgia. Their backlog was such that it normally took a week to get an appointment. But they were extremely competent and fairly priced. They were a local "institution" when I moved into the area in 1990 and were still held in high regard when I moved away in 2003. Had they expanded capacity, they might have lost something in quality (due to availability of competent mechanics), or they might have encountered slack periods that would have caused them to raise prices to cover under-utilized capacity. Instead, they remained about the same size and maintained a reputation as the premier auto repair shop in the area. Sometimes I couldn't wait for them and had to find someone else to do the work. But I always checked to see whether or not they were available.
--- +( cost see also files cost.txt and variance.txt --- From: "Potter, Brian (James B.)" Date: Tue, 9 Jan 2001 13:30:45 -0500 Well, here are a few potential drivers behind the observed reality that we have retained full absorption cost accounting 40 to 60 years beyond the end of its usefulness ... - The speed and flexibility with which large organizations (in this case, accounting professionals, their professional organizations, associated education programs {e.g., the Harvard Business School}, users of accounting services, associated regulatory agencies, applicable standards {e.g., GAAP}, ...) adapt to external change. - Reluctance to abandon a false opinion without having first accepted a suitable replacement. Kaplan, et al (the ABC crowd and its derivatives) have long acknowledged the intrinsic flaws in full absorption cost accounting. Kaplan has even acknowledged the flaws in ABC (his own concept, good man ...). "I do not know," is (fortunately) not an answer near and dear to the hearts of our accounting colleagues, but that means some people will prefer a wrong answer to "I do not know." - Unwillingness to accept labor as an integral part of the total enterprise. Some people like to believe that they can increase profits at the expense of labor. - Some people LIKE an "us against them" approach to things. Worse, in some cases, folks on the "labor side" encourage such folly among leadership by holding such attitudes themselves. - Unwillingness to accept the higher "break even point" implied by the assumption that labor is a fixed operating expense. The assumption does not change the reality (or even the number), but moving labor from "variable expense" to "fixed OE" dashes the false hope that one can lower that break even volume by cutting "labor expense." - Full absorption cost accounting is enshrined in U. S. tax laws at the federal, state, and local levels. Some of these proposed drivers may be VERY weak (or plain wrong). Other drivers no doubt exist. 
Nominations? Comments? :-) Brian The real question is ... "why managers believe they can manage by manipulating pay." - Peter Evans -----Original Message----- From: Murphy, Mark [mailto:mmurphy@andovercoated.com] Sent: Tuesday, January 09, 2001 11:07 AM ---- However, in the days of the $5 day at Ford, people were literally hired on a daily basis to work ONE DAY for that $5. Daily hiring levels were based on planned production volumes. Perhaps, in that not-so-distant past, labor was an expense which (like material) varied with output levels. Also, maybe not; having a line boss for a pal also influenced daily hiring decisions. ---- This was also true in the meat processing plants at the turn of the century. There was a pool of labor that would show up at the front door of one of those plants every day hoping to find work for the day. Some people could walk right in (those with relationships with the line bosses), but a large percentage of labor was hired daily on an as-needed basis. And the labor supply was in no way a constraint - the pool of immigrant workers was seen as inexhaustible. This may be just the reason that Direct Labor was originally put into the COGS (or totally variable) line.... but in 100 years almost no one has changed the rules, even though most have recognized that direct labor is not really direct - workforces don't change daily or weekly or even monthly - just look at layoffs. You don't just lay off a large chunk of a workforce - there are many ramifications to that. You also don't hire large numbers of people quickly without a strategic decision to do so. Labor, even direct labor, is treated more like an investment now (in management decision making - the real world). Yet still, cost accounting remains the same.... --- Date: Tue, 09 Jan 2001 10:13:45 -0800 From: Norm Rogers I am sorry Mark, your logic (along with that of the rest of the majority on this web site) still defies my comprehension (along with that of most of the rest of the capitalistic world).
It seems so unreasonable to view the cost of a manufactured part as not including the labor required to produce it. There is a direct correlation between an increase in production and an increase in labor. There is also a direct correlation with the amount of capital assets you have, but that relationship is not always as variable. Sales orders fluctuate constantly in manufacturing companies. The more you pay for labor to stand around not producing, the more cost you have to recover in your sales price. If you need to pick up an additional order which requires a temporary increase of short-term productivity, you need to either hire temps or work overtime. This is a direct variable labor cost. You need to know if your margins on the new order are sufficient to cover the additional variable expenses so you can decide if you want the order or not. As long as companies focus on the bottom line (profit) rather than the top line (sales), I do not anticipate the costing world ever changing, other than further attempts to improve upon the accuracy of costing methods. (ABC costing was just such an attempt.) Assume you employ 10 people at a monthly cost of $30,000 (including benefits) who can produce 10 parts per day, and you normally sell 300 parts a month. On day 1 you get an order for 4 parts which you can produce and ship the same day. Do you build 10 parts or 4 parts? --- From: "Tony Rizzo" Date: Wed, 20 Jun 2001 16:40:23 -0400 First, let's distinguish between cost and price. Price is what you ask the customer to pay. Cost is what you really spend to do a job or to make a widget. Now, let's distinguish between cost of existence and fully variable cost. The cost of existence, also known as operating expense (OE), is the money that the company spends just to exist, without doing any business transactions of any kind.
The fully variable cost associated with and attributable to a job is the additional cost (above and beyond the cost of existence) that the company incurs to do the job. Now, let's talk about your cutter. If you don't change the cutter, the cost of doing the job (fully variable cost) equals the additional money that the company has to spend, above and beyond the cost of existence, to get the job done. This usually is just the cost of materials. If you do change the cutter, then the cost of doing the job (fully variable cost) still equals the material costs. What about the money that you spend to get and install the new cutter? That's an investment for the company. You make the investment so that you can make more parts faster and SELL more parts faster, thus earning more money faster than before. Think in terms of flow rates. With the current cutter, the company can make money at X dollars per day. The cost of a given job equals the cost of raw materials for the job. With the new cutter, the company can make money at the rate of X+delta dollars per day. The cost of a given job still equals the cost of raw materials for the job. Should you make the investment? If "delta" dollars per day is real and is large enough to give you a rate of return that exceeds bank interest rates or anything else that the company can do with the money, then you should make the investment. However, please note that I qualify my statement. I say, "if delta IS REAL." That's the catch. If with the new cutter the company gains additional capacity but then cannot sell that additional capacity, then DELTA IS NOT REAL, because the cutter isn't the constraint of the business. Something else is the constraint, possibly the market. For delta to be real, there must be more business waiting to be had, and the current cutter's speed must be the limiting factor. Finally, usually, if delta is real, then it is also quite large, so that a comparison with bank interest rates becomes ludicrous.
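Tony's flow-rate comparison (invest only if delta is real, i.e., sellable, and its rate of return beats the alternatives) can be put into numbers. All figures below are hypothetical, chosen purely to illustrate the arithmetic; none appear in the original post:

```python
# Hypothetical figures (assumptions, not from the post):
cutter_cost = 40_000.0        # money to buy and install the faster cutter
delta_T_per_day = 800.0       # extra throughput per day, IF the market buys it
working_days_per_year = 250

# Annual return on the investment, in Tony's flow-rate terms.
annual_return = delta_T_per_day * working_days_per_year / cutter_cost
payback_days = cutter_cost / delta_T_per_day

print(f"annual return: {annual_return:.0%}")   # 500%
print(f"payback: {payback_days:.0f} working days")  # 50 days

# Tony's caveat: if the extra capacity cannot be sold (the market, not the
# cutter, is the constraint), delta_T_per_day is effectively 0 and the
# "investment" buys nothing.
```

With delta this large, the comparison to bank interest is, as Tony says, ludicrous; the whole decision hinges on whether delta is real.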
----- Original Message ----- From: "Greg Gamble" Sent: Wednesday, June 20, 2001 12:10 PM > Tony, > Excuse me for being dense, I'm really trying to follow this, but what about > the cost? Do we not care how much it costs to elevate the constraint? > This job is run, womb to tomb, on one machine. No set up, pallet machine > with dedicated tools. The constraint isn't going to another job or machine. > So, if I understand right (and it doesn't seem so, because of cost), then we > should invest in cutter technology that will make the parts quicker, even if > it costs more? Which in turn will open capacity to new jobs...which we can > sell. Isn't this sacrificing the first job to obtain the second?? > The cutter is the constraint, but how much do you spend on it to elevate > it...there has to be a limit... > > I understand how TOC works, but this doesn't seem right. Our throughput > doesn't change, except for the ability to do more work thru another job. But > losing money on one job, to get another job doesn't seem right... > -----Original Message----- > From: Potter, Brian (James B.) [mailto:jpotter5@ford.com] > Sent: Wednesday, June 20, 2001 6:58 AM > > One small addition to Tony's comments: With your cutting operation running > at 6x current performance, there is some risk that you will shift your > constraint. Thus, you may want to think about one or more of the following: > > - If your constraint is now where you want it, you may wish to verify that > it will stay put (or that necessary elevation at current nonconstraints is > feasible) before you switch cutters. If your next lowest capacity resource > has (for example) 15% protective capacity, the new cutters will shift the > constraint to that resource and you will get only(!?) a 15% throughput boost > rather than the 500% improvement some might expect.
> > - Perhaps, you will want the faster cutters only on specific operations > until the rest of your shop is ready for the much higher throughput at your > 5 axis horizontal mill. > > - If your constraint is NOT now where you want it, this may be an opportunity > to both break your constraint AND move it to the strategically desired > resource. That possibility may deserve some attention and planning. > > Depending upon your situation, other similar considerations may apply. Your > attention to detail before doing the obvious thing has potential to pay off > handsomely. > > Brian > > ... mechanics spent three-quarters of their time waiting in line for parts. > - W. E. Deming > > The job of management is not supervision, but leadership ... > - W. E. Deming > > > -----Original Message----- > From: Tony Rizzo [mailto:tocguy@pdinstitute.com] > Sent: Wednesday, June 20, 2001 9:08 AM > > Then, the cutter IS a constraint, and improving the speed of the cutter is > worth the value of that additional work that today you are not doing, > because the cutter can't er... cut it. Go for the faster tool. > > However, before you do anything, please take a financial snapshot of the > operation as it is today. Then, after that additional work starts coming in, > take a second snapshot and compare the two. > > ----- Original Message ----- > From: "Greg Gamble" > Sent: Friday, June 15, 2001 7:36 PM > > I think so... > It affects the speed of the whole job, which means we have a machine, 5 axis > horz., that is dedicated to this one job. What makes it a constraint...I > think, is that we have work that we could be bidding on if we could get more > capacity from this particular machine. +( COST CONTROL = BUFFER PENETRATION From: Peter Evans To: "CM SIG List" Subject: [cmsig] RE: How to link CCPM/MPM Project Planning Process to Financial Budgeting I have some (untested) views on the matter.
The following is "draft":

Estimates
---------
Managers use the same padding techniques for money as project members do for task times. Each department's safety needs to be pulled apart. The technique for adjusting downwards is similar, but easier. Ask the manager to set out the money that _must_ be spent in the next period. This is rent, wages, etc., and is pretty much what is being spent now - the underlying expenditure. Next ask for the possible additional expenditures (related as much as possible to additional OE tied to revenue gains). The possible additional expenditure is the money buffer for that particular department (assume each department is a feeding chain to the company critical chain). The department manager can spend this money without getting approval, unless in the yellow zone, but needs to be able to show that delta T - delta OE = a positive amount (or, more to the point, explain why the equation does not give a positive amount). All other possible expenditure, including major capex, is in a single company (business unit) financial buffer that _all_ departments (feeding chains) draw on for expenditure which is either to overcome major Murphy, e.g. a plant burns down, or to exploit major business opportunities.

Measures
--------
1. Stop micromanaging by budget line items. Department managers want to know where money is being spent (even though most of the _line item_ variances are noise), but this is NOT reported to top manager level. Accountants do not have the power to authorise/approve any expenditure, but flag when budgets are in yellow.
2. Manage only buffer penetration.
3. When there are significant (need to define significant, probably outside control chart limits) changes in the underlying expenditure for a department, change the budget. This will require a _huge_ policy change.
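Peter's "manage only buffer penetration" measure can be sketched as a simple zone check. The one-third zone boundaries below are the usual TOC time-buffer convention, assumed here rather than stated in his post:

```python
def budget_zone(spent, money_buffer):
    """Classify a department's discretionary spending against its money
    buffer. Green: spend without approval; yellow: accountants flag it;
    red: escalate. The one-third zones are an assumed convention."""
    penetration = spent / money_buffer
    if penetration < 1 / 3:
        return "green"
    if penetration < 2 / 3:
        return "yellow"
    return "red"

# Hypothetical department with a $30K money buffer for the period:
for spent in (5_000, 15_000, 27_000):
    print(spent, budget_zone(spent, 30_000))  # green, yellow, red
```

Line-item variances stay invisible at the top; only the zone of the buffer is reported, mirroring how project buffers are managed.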
==========================================================================
Budget contingencies can be calculated in the same way as schedule contingencies/buffers. A simple example. An evaluation by the project team indicates the following variability in the project task costs:

Task   Optimistic ($K)   Most Likely ($K)   Pessimistic ($K)
 1           2                  4                  6
 2           4                  6                  8
 3           2                  6                 10

The budget contingency is:

* Statistical formula:
  Budget contingency = sqrt( Sum (Pess - Most Likely)^2 )
                     = sqrt( 4 + 4 + 16 ) = $4.9K

* Goldratt approximation:
  Budget contingency = 1/2 ( Sum (Pess - Most Likely) )
                     = 1/2 ( 8 ) = $4.0K

Budget request = Most Likely + Budget contingency = $16.0K + $4.9K = $20.9K

+( CRT of steel industry

[The original entry was an ASCII-art current reality tree, garbled in transcription. Its entities, translated from German, with 100/101 at the base of the tree; the connecting arrows are not recoverable:]

100: [tons/h] has long been the most important metric in the steel industry.
101: Most people behave according to the metric by which they are measured.
111: Departments try to maximize their output in [tons/h].
200: In most departments, some parts need less time per ton than other articles.
220: Every additional setup reduces output in [tons/h].
230: No production leads to zero [tons/h] output.
300: In V-plants, parts pass through many divergence points.
332: Departments try to produce "fast" products first and push "slow" ones out.
333: Departments try to produce to stock and to increase batch sizes.
334: Departments try to pull orders forward.
444: Departments tend to "steal" in order to maximize their output in [tons/h].

10-JUL-2001

+( Cynefin Model

From: "Tony Rizzo" Subject: [cmsig] Re: Complexity Date: Sat, 8 Jun 2002 09:10:09 -0400 ----- Original
Message ----- From: Michael Seifert Sent: Saturday, June 08, 2002 7:58 AM Subject: [cmsig] Complexity

Thought a few might enjoy this. I just attended a two-day seminar this past week in Washington DC on scenario planning and the complexity model. By chance or on purpose, we had a view the entire time of the Pentagon and the section that was attacked on Sept 11 (an example of an asymmetrical threat). This was put on by a group of folks from across industries who were funded by the government under DARPA. They have two basic research purposes: 1) They originally started when Admiral Poindexter (forgive the spelling if I am wrong) asked them to develop the best way to brief the President in 15 min on what possibly could be decades of knowledge and analysis, and 2) How do you create more robust scenario planning in a complex world on which to act. Out of this research project on scenario planning came their Cynefin Model, which breaks the world into 4 quadrants. Also, there is a difference between a complicated system and a complex system. A complicated system is like an airplane: a high number of parts and interactions, but all can be readily known and categorized, with cause and effect well documented. A complex system is more like understanding human behavior and the degree to which we apply contextual complexity. For example: I am a manager, engineer, husband, father, etc. Depending on which role I am acting in at the time (or possible combinations of roles), I often act differently than when I am acting in a different role. I am giving the Reader's Digest version here and omitting a whole lot, but the basics are: The Cynefin Model is based on the following principles: 1) Descriptive Self-Awareness 2) Deliberate Use of Paradox 3) Diverse Response 4) Contextual Complexity 5) Entrainment 6) Asymmetrical Elasticity (when it breaks, it is catastrophic; i.e. 9/11 when viewing asymmetrical threats to the system).
The model is comprised of 4 quadrants: 1) Quad 1: Cause and effect is readily known. We sense, then categorize, then respond. 2) Quad 2: Cause and effect is knowable. We sense, then analyze, then respond. The scientists present propose this is the only legitimate domain for systems thinking. 3) Quad 3: Complexity. Cause and effect exists, but it is interconnected to generate patterns. We probe, then sense, then respond. The probe develops the emergent patterns. Of key note, the only good model in this quad is the original model itself. 4) Quad 4: Chaos. Continuous turbulence. Here we act, then sense, then respond. We act to generate some response on which to sense.

OK, so all that said, here is a good little problem they presented. Maybe some of you have seen it before. Maybe not. Think about this and what is wrong here. Using basic algebra, I state that in reality (and let's assume this is 100% correct in reality) a = b. Now multiply both sides by "a"; then we have a^2 = ab. Now subtract b^2 from both sides; then we have a^2 - b^2 = ab - b^2. Then we can restate (a-b)(a+b) = b(a-b). Divide both sides by (a-b): a+b = b. Substitute for a from our original equation of a = b: b+b = b, then 2b = b. We know this cannot exist in reality. So what is wrong with this? Take this to your local HS or university math professor and see if he/she can get it. This demonstrates that the models we humans build (and yes, math is a model we have constructed) are only applicable within certain boundaries. The key is to know where the boundaries lie and how to shift the models. So what does this say of the TP, other TOC solutions, Lean, 6 Sigma, TQM, Newton's Laws, and Quantum Mechanics, to name a few? This is what I am currently pondering based on the session. Hope you enjoyed this tidbit. Mike Seifert

---

When we divide by zero, we can get just about any result that we like. Since a = b, (a - b) = 0. Division by zero is not defined in math. It is an impossibility in this reality.
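Laid out step by step, with the illegal operation flagged, Mike's "proof" reads (this is just a restatement of the steps above):

```latex
\begin{align*}
a &= b                &&\text{given}\\
a^2 &= ab             &&\text{multiply both sides by } a\\
a^2 - b^2 &= ab - b^2 &&\text{subtract } b^2\\
(a-b)(a+b) &= b(a-b)  &&\text{factor}\\
a + b &= b            &&\text{divide by } (a-b)\text{: invalid, since } a = b \Rightarrow a-b = 0\\
2b &= b               &&\text{substitute } a = b
\end{align*}
```

Every step is legitimate except the fifth: as Tony notes, dividing by (a - b) is dividing by zero, which is exactly where the model's boundary lies.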
What does this mean w.r.t. the currently known improvement paradigms? They are all based on models. And every model is at best a partial representation of a real system. At worst, a model is a wrong representation of some real system. Here's the rub. Most people most of the time don't know which model is the basis of the theory that they are using. Witness the investment analysts, institutional investors, executives, and accountants who base many investment decisions upon theories that are themselves based upon an invalid model of the operations of businesses. What is that invalid model? It is the mathematical model defined in the GAAP. GAAP is a negotiated and legislated mathematical model that is designed not to predict but to determine how to divide profits. Yes, taxes are nothing more than the government's legislated share of a company's profits. The problems occur when investors use the GAAP model for purposes other than sharing existing profits. Why is this a problem? It is a problem because investors need to predict future profits. For this, they need a predictive model that faithfully represents the operations of a business as well as the market size of the business. Unfortunately for investors, such models have not been created and are not available. Therefore, investors look to the GAAP model and use that highly dysfunctional representation as if it were a predictive model. Of course, it is not a predictive model. It is merely a negotiated and legislated model that has zero capability to predict anything. Tony Rizzo

+( DBR - a scientific approach

From: Eli Schragenheim Date: Mon, 24 Oct 2005 13:55:32 +0200 Subject: Re: [tocleaders] Re: DBR or not?

Prasad claims: "It is not easy to quickly construct a good numerical example involving uncertainty." Not true. It is easy and straightforward. The difficulty is to show how DBR does not achieve the best possible results. Eli Goldratt talked a lot about the components of uncertainty.
Especially he talked about the impact of the tail of the uncertain element. As I developed, under the guidance of Goldratt himself, many simulations to show the real impact of uncertainty, let me say that we have analyzed them very well. Moreover, Goldratt said many times to use buffer management to identify the components that should be the focus for efforts to reduce the uncertainty. I assume some hear and read only what they want. The really nasty part is to say the following: "If DBR was developed in a scientific manner, there would have been a good discussion on the components, the nature and the magnitude of the variation in the system and their effect on DBR performance. Then, it gives more confidence about DBR to people like me with scientific temperament. In the development of OR methodology, the developer is forced to make such analysis." Let me say this: the rules behind DBR were developed as a science. There was a hypothesis and prediction of effects. The magnitude of Murphy on real manufacturing systems was closely checked across many real-life environments. Remember that Goldratt was the one behind the OPT software system, and he and his people were involved in so many actual implementations. Then we all did quite a lot in DBR. Just the question of sizing the buffers already involves estimation of the actual impact of Murphy, checking the various sources of the uncertainty. Goldratt has a Ph.D. in physics. To my mind this means he is a scientist. I've no idea whether Prasad is a scientist. I'm curious whether "scientific temperament" is a scientific definition. Certainly these kinds of remarks are mere manipulation. When you have no logical claim, you cover it by claiming it is not "scientific".
DBR, which has to be coupled with buffer management, a fact Prasad always forgets, has a clear description of its objective and a set of measurements of its effectiveness: exploiting the constraint, delivering on time, and doing so reliably and continuously. So, one can challenge DBR/buffer management by describing a system with more demand than DBR can handle and showing that another method would be able to deliver it. Then, we could test the challenge - including the probability of meeting the deliveries. But, the real difference between TOC and other approaches lies in two other areas: TOC looks at the whole system - not just at the production floor. It is not a coincidence that Prasad did not mention the uncertainty in the market demand, how it is impacted by delivery dates that are truly reliable, and the impact of very short response times. Hence the TOC measurements are focused on the bottom line, not on any efficiency measurements. All the other approaches I know dissect the system into independent subsystems. The second area is the interpretation of what we really know about our reality. Can we really meet all the axioms required by the mathematicians for their OR models? Can we really describe how Murphy behaves? And, more important than all: do we really know what we need to produce? In other words, do we have a good MPS to start with? I think we are living in a dynamic, but also very fuzzy, reality. TOC guides us how to live well in such a system. Another area is the distinction between planning and execution that TOC emphasizes. I mentioned it in a previous post. An example of mathematical manipulation: a "scientific" paper dealt with cost functions. The author assumed the function is continuous and differentiable infinitely many times. Well, I cannot see how any cost function can be continuous. To my mind all cost functions are discrete.

---

From: Eli Schragenheim Date: Tue, 25 Oct 2005 12:47:52 +0200 Subject: Re: [tocleaders] Re: DBR or not?
I would like to answer the valid questions that still remain. When is DBR applicable? How do we compare DBR with another possible method? I'm doing this in a hurry and risk being less well verbalized and accurate. First, about the definition of scientific methods. Stefan brought a definition of scientific methods to show that TOC was not developed as a science. Sorry, that definition shows very clearly that Eli Goldratt did employ a scientific method in developing DBR. There are clear observations (assumptions about reality), reasoning, hypotheses, and claims that can be challenged. In itself a scientific method does not mean the theory is "right". It can be wrong all the way through, or it can be flawed - meaning the basic assumptions (the underlying observations) or the logic (reasoning) should be updated. The simulations I did for Dr. Goldratt and for myself were not documented because there was no real desire to do so. Dr. Goldratt predicted the results very accurately. All he wanted were tools to teach others the principles. I wanted to do experiments to validate some of the assertions. I was surprised to see how accurate he was. It took quite a lot of time to be able to do so myself. If the academic world wants to replicate the experiments, it is pretty straightforward to do so. But, you must know what claim you are going after. You have to start with a clear hypothesis and check, via simulation in this case, whether the predictions come true or not. My main manufacturing simulation is almost in the public domain. It is part of three books: Carol Ptak's book on ERP, Prof. Shtub's book (yes, an academic book) on ERP, and my own (with Bill Dettmer). This is a simulation with a fluctuating market, unreliable suppliers, fluctuating setups, and downtime. As you can impact the market demand by changing the price, you can simulate the emergence of new constraints, new product mix, etc. The simulator is not DBR-oriented. You can decide to do whatever you want.
You can enter a sequence for any resource. Tell me what you miss in order to demonstrate that another method would yield better results. I'm writing this email in order to clarify certain elements. TOC was clearly developed in a scientific way. But, our objective is to spread the knowledge not in the "scientific community", but in the community of decision-makers. There is a big difference. Most of Prasad's questions about my simulations are not coming from a scientific approach, but from the ACADEMIC world. Let me just point out that so much of today's science is done under secrecy. The defense and security organizations encourage a lot of science to be developed that is not public. I also have a lot of doubts whether academia in the field of management is using scientific methods as defined in Stefan's mail. The right questions should be directed to what the observations (basic assumptions) behind DBR are, the hypotheses, and how to construct a scenario that could invalidate those hypotheses. As I would like, later in this post, to define some environments where DBR would not be applicable, let me state some (not all) of the observations: Manufacturing organizations need to commit timely deliveries to their clients. It is critical for the organization to be able to meet the commitments. There is some fuzziness regarding what we mean by meeting the commitment, but I take it we understand the general observation. There are many different resources involved. There are several different sources of uncertainty. The resulting amount of uncertainty (including the fluctuations in demand and supply, which are external to the organization) is significant. The potential market is much larger than the available capacity. This last one is probably necessary for showing a superior scheme, because if the demand is not high enough, certainly DBR would deliver everything on time, and maybe another method would do it as well (Lean), but then we do not have a superior scheme.
Under these assumptions about reality, DBR claims: The capacity limitation of just one resource is enough to determine the amount of commitments that can be safely taken within a period of time. The less excess capacity the other resources have, the more safety time/safety stock is needed to ensure on-time delivery. I can translate these claims into the equivalent claims: A: If even one resource is loaded more than 100%, then the commitments cannot be met. B: If more than one resource is loaded close to 100%, then many times some of the commitments will not be met. In other words, there is no way to get good due-date performance in this case. Comment: in the real world, overtime and outsourcing are used to temporarily increase capacity and deliver on time. In such a case, the term "close to 100%" does not describe the reality. The DBR prediction is: when more than one resource is close to 100% of its regular capacity, the need for overtime will go up sharply. So, the DBR objectives are: To determine the largest set of market demand commitments that can be safely taken, under the current level of capacity and with proper expectations regarding overtime and inventory. Then direct the shop to fully deliver all the commitments, using as little overtime and inventory as possible. Are these not the objectives of any production planning and control system? How do you measure? Certainly by due-date performance. Given more than one method: can one method promise more commitments and safely deliver them than the other? Can one method do the same as the other using less inventory (investment)? Can one of them deliver the same with less overtime/outsourcing? There are several cases where the above claims (denoted as A and B) are not valid: When more than one resource is very highly loaded but the resources are not interactive, meaning they do not feed each other, then due-date performance can still be good.
This is recognized by DBR, and the regular scheme has no problem defining several non-interactive CCRs. So, the updated understanding is that INTERACTIVE CCRs significantly reduce the reliability of the commitments. When a mistake in the chosen sequence of a non-constraint could cause a miss of a timely commitment. This means that in spite of having a significant amount of excess capacity, a non-constraint sometimes cannot simply catch up within the required time frame. This means DBR won't work properly. This could happen only when the actual touch time of that resource is relatively long. More on it later. When two or more interactive CCRs (capacity constraint resource - a resource that is loaded very high relative to its capacity) can be nicely scheduled in a safe way. This is where Prasad would like to find a good exemplary case. Is such a case possible? Take the last option. Here is a simple net where this occurrence could happen. Product P1 requires first the MA machine, then the MB machine. The demand is infinite. Of the two, it is MB that limits the output. Product P2 requires only MA, and its demand is also unlimited. Simple and straightforward. No need to add complications. There are two valid options: We only make P2, utilizing all of MA's capacity for that. Let's assume this would yield less profit, so we go to the other option. We commit to deliver P1 according to the maximum MB can provide. MB is supposed to work on P1 all the time, and based on MB's average pace the deliveries could be promised. MA needs to provide enough WIP for MB while MA is working on P2. And P2 customers also demand to know exactly when they would get their order. If you now add some uncertainty, like downtime of both MA and MB, you can easily see that both lose capacity, not just because of the downtime, but also because of the interaction.
MB will lose if and when MA is down too long, and also when MA finds itself in a conflict: deliver a commitment of P2, or hurry back to do P1 because MB does not have enough P1 to work on. So, two CCRs are theoretically possible, but any solution would end up losing some available capacity in exchange for more reliable delivery. A theoretical solution could be to lengthen the delivery time and then take on more commitments. After all, if I promise delivery next year, I certainly can exploit both MA and MB better. Cases like the above are analyzed in The Haystack Syndrome, a book anybody who wants to challenge DBR has to read, especially the third part. Here is a semi-complicated DBR solution to a problem of interactive constraints, which should be dealt with by management: add capacity to one of the two CCRs and get much more out of the system and also be more reliable. When you understand the negative impact of 2 interactive constraints, you can deduce what 5 interactive constraints would look like. In a world where competition causes the clients to expect reliability, having that many capacity constraints is a recipe for disaster. Now, if you show me such a living organization, I'll look for the hidden excess capacity that is not showing in the database. This is simple, straightforward logical reasoning - an important element in science. The other case is having a resource with significant excess capacity still delaying the delivery. When can that happen? It can happen when the total touch time is close to the required manufacturing lead-time. THIS IS TYPICAL OF PROJECT MANAGEMENT. Steven Holt already referred to it. Danny Walsh and I wrote a paper about it. If you want, this is a definition of the necessary conditions for DBR to apply, because DBR is not a good method for multi-project environments. I believe Danny put the article (published in The Performance Advantage, I cannot remember when) on his web site (www.vectorstrategies.com).
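Eli's MA/MB example lends itself to a small Monte Carlo sketch. Everything here is an illustrative assumption (the 90% uptime, the hour-by-hour model, the unlimited demand); it only demonstrates the interaction effect, not any particular DBR result:

```python
import random

def simulate(hours=100_000, uptime=0.9, seed=1):
    """MA feeds P1 work-in-process to MB; each is up 90% of hours at random.
    MB loses capacity twice: to its own downtime, and to starvation when
    MA's downtime has emptied the WIP buffer in front of MB."""
    random.seed(seed)
    wip = 0          # P1 units waiting between MA and MB
    output = 0       # P1 units finished by MB
    starved = 0      # hours MB was up but had nothing to work on
    for _ in range(hours):
        if random.random() < uptime:   # MA up this hour: feed one unit of P1
            wip += 1
        if random.random() < uptime:   # MB up this hour: work if WIP available
            if wip > 0:
                wip -= 1
                output += 1
            else:
                starved += 1
    return output, starved

output, starved = simulate()
print(output, starved)
```

Standalone, MB could finish roughly 90% of the hours; in the coupled line its output falls short of that by the starved hours, which is exactly the capacity lost to the interaction Eli describes.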
In most cases in manufacturing the actual touch time is much shorter than the manufacturing lead-time. The main DBR claim is: It is enough to have a priority system, as given by buffer management, to ensure that all the non-CCRs will not delay any order beyond the buffer. What other complications would cause different methods to outperform DBR? You certainly still need to ensure that every single resource is not loaded to more than 100%. So, what is left is to show a way to operate with a shorter response time than the time buffers of regular DBR implementations. You can use stocks if you want. But, don't start a simulation with all the stocks already in place. Build the stock within the simulation scenario. You'll need excess capacity for that. Make your own method of making to stock as a way to enlarge the critical time frame. Remember that buffer management is part of the TOC solution for manufacturing. When you (anybody who wants to challenge the DBR method) design a simulation, add buffer management to the execution rules of the simulation. Then show that another method, probably more sophisticated scheduling, could safely deliver more.

---

From: "Eli Schragenheim" Date: Fri, 4 Nov 2005 07:52:31 +0200 Subject: Re: [tocleaders] Re: DBR or not?

Well, it seems we are carrying on a dialogue between deaf people. I don't think Prasad really understood what I meant, and I have to assume that I don't have the faintest idea what he is after. But, beneath all the misunderstanding there are issues that most people on the list are interested in. So, let me respond to what I did understand. Eli Goldratt claims the following: The more complex the problem, the simpler the solution has to be. The rationale behind this clever (to my own mind) saying is that no human system (organization) can afford to have chaotic performance.
A chaotic performance means that the customers of the organization do not know when, if at all, they will get what they asked for, or something close to it. So, the basic assumption is that most of the orders are delivered at just about the time and quantity promised. How can that be, when manufacturing organizations are so complex, with all the variables Prasad has given us? The only rational answer is: most of the resources have a considerable amount of excess capacity! How come managers, who hate paying for resources with excess, do not get rid of the excess? Well, some of them understand the real need for excess capacity. The others are fooled by their employees, who manage to seem busy all the time. This is also the cause of the truly low accuracy of the data, as there is no interest in revealing how much excess there really is. So, how complex can any organization be? In a batch of claims accumulated by Prasad he gives a list of variables that complicate manufacturing organizations. But, the real question is not how to cope with all these complexities. The people in these environments mostly succeed in doing so, because if you have a lot of excess capacity, even when it is mostly well hidden, you can still perform adequately. It does not mean they perform really well. The point is they perform well enough to survive. Survival is a major point. In the service sector the need for excess capacity is much easier to observe. Think why the owners of gas stations have so many pumps when the utilization of the pumps is so low. So, the truly complex manufacturing companies actually have much more excess capacity than simple organizations. Otherwise, their performance would be chaotic. Prasad did not understand me when I said that practical solutions CANNOT be SENSITIVE solutions. A sensitive solution is one where a single imprecise input is enough to cause a considerable shift in the optimal solution. Regarding science or not science:
I'm saying it again: there is no requirement in science to be public. Certainly we in TOC don't have a need to prove we are "scientific". And OR, Operations Research, is NOT a science! Actually, mathematics as such is not a science. It is an important tool for science, but in itself it does not predict anything in nature. By the way, my B.Sc. is in mathematics. But, as the DBR principles are public in the sense that they appear in books, like The Haystack Syndrome, people who want to challenge DBR can do so. The direct questions Prasad has on what DBR can and cannot do show a lack of knowledge of DBR, and especially of what subordination is about. Here are some brief answers: DBR is fully applicable when no CCR is active. It is normal that even when the CCR is active some products do not pass through the CCR. I'm not going to state here how. It is all written in several books. Please, Prasad, read a book first. DBR has no limitation in considering sequence-dependent setups. The problem for sequence-dependent systems is not the schedule itself; that is easy. It is how to promise due-dates to the customers that are both reliable and acceptable to them. As I claim that the market demand is always a constraint, we always have to subordinate to the market, not just to the CCR. When a resource is loaded to 33%, there is NO PROBLEM for it to subordinate even when a specific process time is long, certainly longer than the processing time of the CCR (so what?). In manufacturing the process time is negligible in comparison with the lead-time and/or buffers. This is common APICS stuff. Yes, DBR can cope with various types of Murphy. Actually, this is its very objective.

---

From: Eli Schragenheim Date: Tue, 18 Oct 2005 16:07:05 +0200 Subject: Re: [tocleaders] DBR in batch process industries

I would like to comment on Prasad's notions of TOC and his reservations because I find them quite common, coming especially from people with a quantitative background.
Apart from the discussion itself about cause and effect in reality and how to compare very simple solutions with more complex ones, Prasad expressed some assertions about TOC which are common, but quite wrong. TOC does NOT claim that capacity constraints do not move with a change of the product mix. But TOC does claim that this cannot be very frequent. I'll clarify this later in this post. We are dealing with manufacturing systems. The number of variables is quite large, and the dependencies, or semi-dependencies, are significant. So, from the perspective of any mathematical model, it is already quite a complex problem to manage, even before we add the impact of uncertainty. What are the sources of uncertainty? The major ones are external: the market demand and the material supply. The difference is that one has much better control over the supply than over the demand. I believe the fluctuations in demand are, by far, the most critical factor. The popular remedy is to maintain finished and semi-finished goods stock. There is a critical difference between make-to-order and make-to-stock. When you make only to order, you are bound by the willingness of the customers to wait. This creates a time frame over which capacity constraints should be evaluated. Take the cashiers at the supermarket. As long as people are willing to wait for 30 minutes in the queue, a 25-minute queue does not make the cashiers a constraint. Remember the definition: a constraint is anything that limits the performance of the organization versus its goal! In make-to-stock we significantly enlarge the time frame according to which we judge whether a resource is a constraint or not. That time frame is, at least, the time it takes to sell all the currently available stock. We are not interested in whether a certain resource was slow during part of the time, causing the stock level of some of the products to go down, as long as it could still catch up before the stocks are wholly depleted.
In make-to-stock we make some products that are not needed very soon. This creates the impression of being fully loaded, but it does not mean that we have a capacity constraint. You could use spare capacity to build more stock. By definition we have a bottleneck only when the market demand requires more capacity than the resource has. In a make-to-stock environment you cannot have a bottleneck - because all the stocks would go down. If you expect a peak in sales, you had better prepare (using the excess capacity of your weakest link during the off-peak) enough stock so your weakest link will NOT become a constraint during the peak! The constraint of any manufacturing system lies in the market. Still, within the production system you have "weak links" that could hamper your due-date performance, or might create shortages in your finished goods stocks. I assume we MUST commit to the market a certain level of performance: either promised delivery dates or availability of products in stock. What we have to do is to commit only to what we can safely perform. Because of the uncertainty in the demand we need to buffer it, even against lack of capacity at our "weakest link", so the performance to the market will be within the tolerance of the market. So, we need to have some excess/protective capacity at the "weakest link" within the appropriate time frame. Hence, in make-to-stock we won't produce some products in order to keep the existing products at acceptable availability at all times. Thus, we'll state a delivery time for make-to-order products that provides safety. And when this safety time is beyond the tolerance time of the customer, we'll need to give up some products to ensure some excess capacity even on the weakest link. Regarding subordination: we certainly have to subordinate to the MARKET DEMAND!!!
Yes, when we do have a capacity constraint it could require some subordination to the capacity constraint as well (but not instead of subordinating to the market). Can we have more than one “weakest link”? First, if the resources do not interact then it is certainly possible. If they are interacting (one feeds the other) then having two resources that impact our commitments to the market has a huge price. One needs to provide much more excess capacity to the two links in order to still maintain stable performance to the market. Let’s consider now the possibility of using a sophisticated mathematical algorithm that would allow the exploitation of two, or more, resources for a given market. One assumption has to be: the initial Master Production Schedule (MPS) is exactly what the market asks for, and during the schedule horizon the market won’t change. The other assumption is that our production data is very good. Both assumptions are usually invalid, but let’s suppose they are valid for now. Suppose that, by considering the finite capacity of two or more resources, that solution succeeds in fulfilling more of the initial MPS than the simple TOC technique, which looks only at one capacity constraint resource and buffers of time. I claim that the resulting schedule is much more sensitive to uncertainty and to mistakes (either in the data or human mistakes) than the TOC schedule. Prasad says that the processing time, in many environments, is not subject to much uncertainty. I agree that relatively this is right, BUT is it so small relative to the sensitivity of the planning? And what about sudden changes in the market, which happen every single day? What about quality issues? Note also that in many environments there is another type of buffer: overtime. So, many of the failures of the sophisticated schedules are covered by the use of overtime. The TOC buffers do take this option (or should take this option) into consideration. More about the TOC technique. 
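The sensitivity claim can be illustrated with a deliberately crude model. Assuming, purely for illustration, normally distributed total lead times (not a TOC formula, just a sketch of why buffered commitments are robust and tight ones are fragile):

```python
import math

def on_time_probability(mean_lead: float, sigma: float,
                        quoted: float) -> float:
    """P(actual lead time <= quoted) under a normal model.
    A rough illustration only; real lead-time distributions are
    skewed, but the direction of the effect is the same."""
    z = (quoted - mean_lead) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Quoting the bare mean (10 days) gives ~50% on-time performance;
# quoting mean + 2 sigma (14 days) gives ~97.7%.
```

A tightly optimized multi-resource schedule effectively quotes something close to the mean everywhere; the TOC time buffer deliberately quotes mean plus protection at the few points that matter.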
There is no rule in TOC that dictates ONE buffer. Products, or orders, with very different routes should get different buffer sizes. But the main point is that TOC has a different concept regarding the distinction between planning and execution. Planning means making decisions ahead of time. In dynamic situations it means we have very partial information. So, one way is to plan based on predictions, and for that we could use the most sophisticated algorithms. The other way is to minimize the planning to the decisions you MUST make NOW. This means concentrating on your commitments and buffering them, and possibly also on medium-term capacity limitations. Then you leave enough degrees of freedom to the execution phase, when the real information surfaces, and then you follow a PRIORITY mechanism to set the online working rules in order to meet the commitments. The TOC way is based on coupling DBR, a minimal planning algorithm, with buffer management, a priority system for the execution phase. Please don’t speak about DBR as a stand-alone technique. It has to be coupled with buffer management. It is a critical part of the whole approach. DBR in itself does not start with the capacity constraint. It starts with the MPS. Most of the troubles I see in manufacturing lie, first of all, with a flawed MPS. The most critical decisions are: what should we commit to? Both in quantity and time. The concept of the buffers arises already here. Then one should also check whether, taking into account the time issues of our commitments, we have enough capacity. Checking just one weakest link for enough capacity is, in all the systems I’ve seen, more than enough. 
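The PRIORITY mechanism for the execution phase is what buffer management supplies. A minimal sketch, using the classic green/yellow/red division at one-third and two-thirds of the buffer (the thresholds and data are illustrative, not a canonical specification):

```python
# Buffer-management priority sketch: rank open orders by how deeply
# each has penetrated its time buffer. Thresholds and orders invented.

def buffer_penetration(elapsed_hours: float, buffer_hours: float) -> float:
    """Fraction of an order's time buffer already consumed."""
    return elapsed_hours / buffer_hours

def zone(penetration: float) -> str:
    if penetration < 1 / 3:
        return "green"   # leave it alone
    if penetration < 2 / 3:
        return "yellow"  # watch it
    return "red"         # expedite

# (order, hours elapsed since release, buffer hours)
orders = [("A", 10, 48), ("B", 40, 48), ("C", 20, 48)]
ranked = sorted(orders,
                key=lambda o: buffer_penetration(o[1], o[2]),
                reverse=True)
# Order B (red zone) is worked first, then C, then A.
```

Note that no re-scheduling happens here: the plan stays minimal, and the real-time information simply reorders the work queue.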
In my book with Bill Dettmer, Manufacturing at Warp Speed, and in Carol Ptak’s book, ERP Tools, Techniques, and Applications for Integrating the Supply Chains, you can find the MICSS simulation of semi-complex manufacturing systems: highly fluctuating market demand, unreliable suppliers and some uncertainty within the shop. The routings are built so that a change in the mix could easily move the “weakest link”. If you mess with the selling prices, or with the quoted lead-times, you surely can change the product mix. One can experience the meaning of time-frames in judging where the capacity constraint lies and whether it moved. Then my claim that a constraint CANNOT FREQUENTLY move would become clearer. +( DBR - the acid test From: Kelvyn Youngman Date: Sun, 08 Jan 2006 22:56:58 +1300 Subject: RE: [tocleaders] RE: DBR? How do you know? I saw this nice question several days ago and wanted to leave it alone. But I can't. As a mercenary, I would be asking the following two things: (1) Where in this line (chain) is the drum? There has to be a unique answer to this; at the end, or somewhere else. (And then I would come back and see if it is working when no one else is looking - don't trust data - and then I would come back a third time too. In an externally constrained environment it may indeed not be working all the time.) If they have a drum then here comes the second question: (2) How long are the rope(s)? The answer to this in make-to-order (or make-to-stock with dummy due dates) has to be a time; weeks, days, hours. The answer can't be in pieces or batches or currency - or you know that they are not talking the right language. You said excellent plants, so I don't know whether to go any further than that. My assumption is that an excellent plant would be well subordinated (as would a Toyota-like plant). 
However, to check if they were walking the talk I would then ask to see the hours of WIP at the drum rate for some sequential snapshots in time (start, middle, and end of the month spring to mind). If it ever exceeds the rope duration, this is a good indication that there is a gating problem and a "little bit" of push scheduling is going on. Asking these questions you don't get bogged down in buffer issues. Having just said that, you are suggesting some sort of repetitive manufacture (Dell), rather than batch, so ask what protection there is for downstream stoppages after the drum (if it isn't at the end). You are looking for some space buffer (an empty hole) to handle the drum output when the line goes down. What else? (All this is less important; the above two do it for me, and even DBR plants will work with the following.) As Brian has pointed out, ask for a job card (batch) and see if you can find intermediate dates on it. If so, then MRP (and local optimisation) hasn't been chased out completely. Ask to look for people's time/job cards - this shows local efficiency (cost) measures are still in effect. Ask how often they load against the schedule; if it is not every couple of days (or daily or less) then we are in trouble. If they are bulk-loading against a group of machines, ask to see the detailed schedule at the location of those machines. If you can't find one for the next several days in advance, put together by the foreman, then once again we are in trouble. Finally, I'd ask to see evidence of the (rapid) reduction in WIP or lead-time that occurred with this initiative and/or the (rapid) increase in output. If the WIP/lead-time reduction is missing then this is not DBR (the safety aggregation did not occur). In one of your posts today you said: "I just wanted to know if they were doing DBR, not ToC. Do the people in a plant need to know more than their bottleneck to do DBR?" The short and colloquial answer to that is abso-bloody-lutely! 
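Kelvyn's WIP check can be sketched in a few lines (the function names and figures are invented; only the logic follows the post):

```python
# Acid test sketch: express WIP in hours of work at the drum's rate
# and compare against the rope length. Names and numbers invented.

def wip_hours_at_drum(wip_units: float,
                      drum_rate_units_per_hour: float) -> float:
    """Hours the drum needs to consume the WIP currently on the floor."""
    return wip_units / drum_rate_units_per_hour

def gating_problem(wip_units: float, drum_rate: float,
                   rope_hours: float) -> bool:
    """True suggests material is released ahead of the rope (push)."""
    return wip_hours_at_drum(wip_units, drum_rate) > rope_hours

# 600 units of WIP at 10 units/hour is 60 hours of work; against a
# 40-hour rope, that signals a "little bit" of push scheduling.
```

Taking the snapshot at several points in the month, as the post suggests, guards against a single flattering reading.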
TOC and DBR are as much (more) about what the non-constraints are not doing as about what the constraint is doing (and so too in Toyota-like plants, which is why I used two key questions about the drum and rope, rather than buffers, to establish whether this hypothetical plant is DBR or some other logistical system). Cheers Kelvyn P.S. World-class generally isn't. +( DBR in job shops, repetitive manufacturing and process manufacturing From: JmBPotter@aol.com Date: Mon, 28 Feb 2000 02:19:27 EST Subject: [cmsig] how to implement TOC/DBR? I am going to choose a plant to implement TOC/DBR. There are some questions: 1) First, what is the difference between these process types: job shop, repetitive, and process? Can you give an example of each of them? {Job Shop: a facility able to produce a wide variety of distinct products (usually in relatively small volumes, from a single prototype to a few thousand units). Likely examples include general purpose machine shops, a specialty chemicals synthesis laboratory selling to universities and research laboratories, a hospital STAT laboratory. {Repetitive: a facility able to produce a relatively narrow range of discrete products in high volumes, often organized around an assembly line. Likely examples: automobile assembly, consumer electronics assembly, high volume commercial medical reference laboratory. {Continuous process: a facility which produces outputs either in a continuous flow or in "batches." Rather than yielding discrete objects, the facility usually produces large amounts of material, typically measured in units of mass, volume, or length (with cross section implied by product). Likely examples: steel mill, oil refinery, bulk plastics synthesis, commercial chemical synthesis, paper mill} 2) Which process type is the best fit for implementing TOC/DBR? {ToC is highly applicable to all three. DBR also has potential applications to all three. 
The benefit a DBR application might bring to a particular enterprise will depend strongly on its current state. A chemical plant with a single product may be so constrained by things like reaction vessel capacities and chemical laws that DBR will offer little if any improvement. A job shop with a huge backlog may be so busy keeping each resource "efficient" that DBR can offer substantial improvements. {Perhaps, you should share descriptions of a few candidates which seem most likely to you. Focus on facilities with theoretical capacities well in excess of current output and an order backlog. Your global constraint is more likely to reside in such a facility than one producing at or near its theoretical capacity or one without an order backlog.} 3) Is there any "proved path" in implementing TOC/DBR? {Experienced consultants may have some sound notions based on their broad experience base. Generally, each application has its own circumstances. Applying cause and effect logic (e.g., the ToC thinking process tools) to anticipate the likely impact of changes before making each change may offer the best "implementation path." Goldratt's continuous improvement focusing process steps* will often provide a good framework for the sound cause and effect logic in this context. {Note that an organization with more than one plant may face a single organization wide constraint in exactly one of its plants. In such a case, you should apply the improvement process first to the plant constraining the total organization. Improvements elsewhere will have no significant effect. {You should consider BEFORE you start how you will redeploy people whose efforts your improved organization may no longer "need." Successful improvement efforts require cooperation throughout the organization. How much cooperation can you expect people to extend if they learn that cooperation with improvement efforts causes them to lose their jobs? 
Failure to resolve this dilemma before you make your initial improvements can lead to the following event chain: -Improve -Dismiss "surplus" people -Improve (zero or more [but not many] additional opportunities) -Dismiss "surplus" people (one for each additional improvement) -People learn that improvements cause job losses -All future improvement efforts fail because people do not cooperate or because people actively sabotage improvement efforts} {* One statement of the "focusing steps" follows: 0- Define the system. 1- Define the goal of the system. 2- Identify the system's constraint. 3- Decide how to exploit the constraint. 4- Subordinate every other decision and action to the constraint exploitation decision. 5- Go back to step 2 and repeat forever.} +( DBR, S-DBR for MRP systems From: "Brian Potter" Subject: [cmsig] KIS: Probably an Important DBR or S-DBR Adv - Danger of winging it Date: Mon, 15 May 2006 13:19:22 -0400 Rick, Jim, Nicos, Grant, John, John, Dave, Rob, et al, Adding your thoughts to my initial notions, stirring, and fermenting yields the following . . . So, what do you think about this way to fake out an ERP system with an FCS so that it will do DBR or S-DBR in spite of itself? 
Do the following: --- For S-DBR --- - Ignore routings in the ERP (for scheduling purposes, every order has one operation at the shipping point, but it has a "shipping point setup time" sufficient to produce the product most of the time) - Mark capacities for all resources as infinite (or do what one must to suppress scheduling resources other than the shipping point) - Set the setup time for the shipping point equal to the Rope length (the expected lead time plus the protective time to allow for Murphy---probably, lead time plus one sigma) - Have the FCS schedule the shipping point and material release accordingly - Have upstream resources work orders reaching their locations in scheduled shipping order (prevents excessive delays at downstream routing integration points) according to the Roadrunner Rule unless otherwise dictated by the expediting process - Add a buffer report picking up order location from time posted at various resources that will use routing information to estimate shipping point arrival based on the current (or recent past) location(s) and lead times from said location(s) to the shipping point - Manage the orders on the floor via the usual DBR and S-DBR buffer management practices - Keep statistics on buffer hole sources to drive continuous improvement and to watch for an internal Archimedean constraint - For convergent routings, each initial operation might have its own material release to allow JiT material purchasing and receiving if desired (helps minimize investment in WiP); one might fake this by splitting an order into a distinct order (e.g., same-A, same-B, ..., same-Z) for each initial operation, all having the same shipping date - The buffer management reporting process will need actual routings and some estimate of the time each operation consumes to estimate when each order will reach the next constraint on its route; this may mean alias names (one name for the path through constraint[s] and the ship point and another for the complete 
routing) for products or routings outside the ERP, for buffer management reporting only. --- For DBR --- - Same as S-DBR, but add the following . . . - For orders routed through a known constraint resource, split the lead time into a "setup time" for the constraint (likely, lead time from material release to the constraint plus one sigma) and a "setup time" for the shipping point (probably, lead time from the constraint to the shipping point plus one sigma) - Add a buffer report and buffer management for each constraint - Resources feeding a constraint (or more than one constraint) process orders via the Roadrunner Rule in "needed soonest at a constraint" order unless otherwise dictated by the expediting process --- From: "Brian Potter" Subject: [cmsig] KIS: Probably an Important DBR or S-DBR Adv - Danger of winging it Date: Tue, 16 May 2006 10:08:21 -0400 Jean-Claude & Prasad, To Prasad's point, DBR (S-DBR if the shipping point has an actual or pragmatically set capacity limit) uses its "Rope" piece to time order inductions into the system (soon enough to reach the shipping point and any Archimedean constraint in a timely manner, but not so soon as to encourage excessive investment in WiP). Prasad, a few hours spent with Goldratt's _The Race_ really will help you avoid seeming ill informed about DBR in these dialogues. Even if you continue to disagree, you can do so from a well informed position so that others will find your observations and suggestions in context rather than noise of some form. If you decide to learn what DBR is (and is not), I suggest that you read _The Race_ in the following very structured way: - Work the exercises in the back on 4-10 sheets of letter size (A4) paper (NOT in the book). Skip exercise five if you like; it's about project management (mostly not germane to production) and has its flaws. Do not consult the answers further back in the book. - Spend no more than 5-15 minutes on any exercise. 
They are short and use numbers so small that pencil and paper will suffice for the necessary arithmetic. You will not need calculus, mathematical programming, or any other higher math/stat for any exercise. High school "business math" will suffice. Yes, you can use LP to solve one or more of the exercises, but that is not the point. - Set the exercise answers aside. Now, you have invested 21-76 minutes. - Read _The Race._ This will probably take you about an hour. Budget 60-120 minutes. - Without consulting either the "official" answers or your first effort, do the exercises again. 4-10 more sheets of paper and another 20-75 minutes invested. - Now, compare the "official" answers and your two answer sets with one another. This should take about another 20 minutes. After you've spent 2-4.6 hours and $20 (or less) for _The Race_ and some paper, you will have a fundamental understanding of what DBR is really about (but probably NOT a full comprehension of what buffer management brings to the table). To really grasp buffer management, you'll need to tackle _The Haystack Syndrome_ (where you WILL find some discussion of scheduling, too) and some more recent literature on the buffer management topic. Without understanding buffer management, one cannot fully grasp the power of DBR and S-DBR. Naturally, without understanding their power, one can offer neither useful criticisms nor meaningful improvement suggestions. Go forth and learn WHY so many ToC advocates see a powerful FCS as a nice but not terribly important tool. Most of us KNOW the strengths, burdens, and flaws that come bundled with a powerful scheduler, and we also understand the context in which the scheduler must contribute to effective operations. One needs both. --- From: "Brian Potter" Subject: [cmsig] KIS: Probably an Important DBR or S-DBR Adv - Danger of winging it Date: Tue, 23 May 2006 14:25:54 -0400 Jim and John, I think I have what I should have written sorted out. 
Thanks for the probing questions that helped me get on track. Now, it is MUCH simpler than what I wrote earlier. To fake out a DBR/S-DBR hostile ERP with FCS . . . . . . Provide triplicate definitions for the shipping point "operation" and each constraint operation: A- One definition with . . . . . . a pseudo-setup time equal to the mean lead time (plus 1-sigma of that lead time) from the shipping point to the nearest upstream control point (either a constraint or material release) . . . and . . . . . . processing time per unit suitable for scheduling B- Another definition with accurate setup time (add a fudge factor to guesstimate processing time if one must set the unit processing time to zero to fake out the FCS), zero processing time (if needed to fake out the FCS; otherwise, use something "real" that provides useful buffer management reporting data), and infinite capacity (to have the FCS ignore the non-constraint) C- A third pseudo-operation with zero setup time, zero processing time, and infinite capacity (all operations at each constraint may use the same [C] pseudo-operation) . . . For each non-constraint resource operation, provide a single definition with sane setup time information but zero cycle-time estimates, or an infinite capacity indication with sane cycle time estimates (to suppress FCS operations on these nonconstraining operations). If suppressing the FCS requires a zero cycle-time, add a fudge factor to the setup time to approximate the processing time. In the routing information, indicate one routing from release through both copies [A] and [C] of each constraint and copies [A] and [C] of the shipping point. Also, indicate a parallel routing through all the non-constraint resources AND through copies [B] and [C] of the constraint(s) and the shipping point. 
Now, let the FCS schedule the constraint(s) (if any) and material release (using routings through the [A] and [C] constraint/shipping point operations) subject to the additional constraints of shipping everything on time and maintaining a stable schedule for orders with material already released. One might allow the FCS to combine new orders with previously scheduled orders using the same constraint setup if and only if material from the newer orders reaches the constraint buffer before the constraint completes the operation on the older orders. The constraint operator can probably handle such order joining manually when it is feasible and won't hurt the shipping schedule. Use the routings through the nonconstraining resources, along with the [B] and [C] constraining operations, to develop the buffer management reports. Labor time posted on nonconstraining operations and [B] constraining operations identifies each order's current location(s). Timings through pending nonconstraints and pending [B] constraining operations yield expected arrival times at each constraint buffer (constraining [B] operation) and at the shipping buffer (the [B] shipping operation). Compute buffer penetrations based upon the expected buffer arrival times and the scheduled times for shipping and for starting the constraint operation. Expedite and process-improve in the usual DBR or S-DBR way using information from the buffer report. The [C] operations may be redundant, but I suspect that they tie the duplicate control point [A] and [B] operations together so that the [B] operation must have the same finish time as the corresponding [A] operation. This tie should prevent overly ambitious buffer arrival time estimates for buffers downstream of a control point. It certainly works that way in my mind. Note that we need only one [C] operation for each constraint and for the shipping buffer. 
Thus, as the number of SKUs and routes gets very large, in information maintenance terms we really have only one duplicate for each constraint operation and for each shipping operation. Note the following: - Processing rates and lead times (coded as setup times) for the "real" constraining operations and shipping point operations should be reasonably accurate and precise to get good results out of the FCS. Errors here could cause a bad schedule, improper material release, or both. - Setup times for the nonconstraining operations model both the setup time and the processing time (because we set the processing time to zero and capacity to infinity to fake out the FCS). Individually, they need not be all that accurate or precise as long as, in the aggregate, summing the setup time and processing time offers a fair guess as to when an order following the route will reach the next control point (constraint or shipping) buffer. From an information management view, we have added an extra [B] pseudo-operation for each constraint operation, and for the shipping point we have (in the worst case) an extra [B] pseudo-operation for each SKU/route combination. However, the only data which must match reality closely are the lead times and processing rates for the constraint [A] operation(s) (if any) and the [A] shipping point operation. As long as we have made no grievous errors in the non-constraint operation setup times (coarse estimates of setup time and process time combined) and the setup time for the constraint pseudo-operations, the buffer report will be close enough that we will expedite, or not, as appropriate. Similarly, even with inaccurate, imprecise (e.g., real world) data for the nonconstraint operations, the buffer hole information should suffice for improvement activity planning. 
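The buffer report described in these posts boils down to two small calculations: estimate when an order will reach the next control-point buffer, then express that against the scheduled time as a buffer penetration. A hypothetical sketch (the data shapes and numbers are invented; only the arithmetic follows the posts):

```python
# Buffer-report sketch: expected arrival at the next control point,
# and buffer penetration against the scheduled time. Data invented.

def expected_arrival(now: float,
                     remaining_ops: list[tuple[float, float]]) -> float:
    """remaining_ops: (setup_hours, run_hours) for each pending
    non-constraint operation on the route to the control point."""
    return now + sum(setup + run for setup, run in remaining_ops)

def penetration(scheduled: float, arrival: float,
                buffer_hours: float) -> float:
    """Fraction of the time buffer consumed; > 1.0 is a buffer hole."""
    slack = scheduled - arrival
    return (buffer_hours - slack) / buffer_hours

# Order at hour 100 with two pending operations: arrival at hour 112.
eta = expected_arrival(100.0, [(2.0, 6.0), (1.0, 3.0)])
# Against a 24-hour buffer and a constraint start scheduled at hour
# 120, penetration is 2/3: deep enough to watch, perhaps to expedite.
```

As the posts note, the setup/run figures for non-constraints need only be roughly right in aggregate for this report to steer expediting correctly.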
--- From: "Jim Fuller" Subject: RE: [cmsig] [Yahoo cmsig] KIS: Probably an Important DBR or S-DBR Adv - Danger of winging it Date: Wed, 3 May 2006 10:51:02 -0500 Hi Rick, 1) The constraint(s) in our system are determined at the manufacturing plant level because we have many different product lines with different routings. So far, I haven’t convinced anyone that we need to consider where our constraint(s) should be in the future. So, I guess you would have to say that the old decisions about how much capacity to buy and which machines to buy have determined where the constraints are. The FCS and our policies just have to be able to acknowledge those constraints. 2) I’m going to assume you mean do we use “Rapid Replenishment”? Since we’re a combination of MTS and MTO, naturally we only replenish the MTS products, and we do that with a different algorithm for each person making the replenishment decision. HaHa. Remember, I said production was TOC compliant, not distribution or the company as a whole. Jim From: Rick Denison [mailto:rick.denison@gmail.com] Sent: Wednesday, May 03, 2006 10:21 AM To: Constraints Management SIG Subject: Re: [cmsig] [Yahoo cmsig] KIS: Probably an Important DBR or S-DBR Adv - Danger of winging it Jim, Nice post; looks like a lot of thought, effort, and learning from the school of hard knocks. Two questions I have: 1. Do you choose the constraint in your system, or do you let the FCS determine that? 2. What model of distribution and replenishment are you using? --- From: "John Maher" To: "Constraints Management SIG" Subject: RE: [cmsig] [Yahoo cmsig] KIS: Probably an Important DBR or S-DBR Adv - Danger of winging it Nicos, I suspected this is what you meant by “theoretical”, so just consider my earlier comments as reinforcement of your statement. In answer to your question, I have yet to see a true pull environment put in place with an out-of-the-box MRP system. 
The environments where I have seen a pull system using MRP typically have a couple of developers pounding out custom code to manipulate the system to support their goals. Or, they have a team of planners putting in Herculean efforts trying to keep the supply in alignment with the demand-pull. Either way, both of these approaches will lead to a point where the software system becomes the major restriction in the company’s process of on-going improvement, forcing them to stagnate at a higher level. In addition, I have seen both the scenario where a company first implements the business process changes and then the system that supports them (not MRP), and the scenario where a company implements the business process changes in conjunction with a software system that supports them (not MRP). My observations have been that the company that implements the business process changes in conjunction with a software system that supports them will have a higher success rate, show results faster, and show better results than a company that implements the business process changes and then the software that supports them. So, it really comes down to where the company wants to go, how high a probability they want of reaching their destination, and how soon they want to get there. If I were Brian and my company was in the process of purchasing an ERP solution that only had MRP or APS as the manufacturing solutions, I would avoid purchasing those modules and look for a piece of software that supports the pull environment that I am trying to create. If I were Brian and had only MRP to work with, I would be careful to understand the true costs and business risks that I am putting on my company by manipulating MRP to achieve a “quasi” pull system. 
The key to a pull system, other than having a pure pull signal from the market, is understanding cycle time (the time from release of the order to the shop floor until completion of the order at the shipping dock) and relentlessly driving cycle time towards zero. Cycle time does not translate well into an MRP environment, whether it be the fact that MRP works in daily buckets, the dissemination of demand and supply with work orders for each individual make item on the BOM, the multitude of parameters to manipulate cycle time at the operation level hoping it will equate to the proper aggregate number, or (and sorry for running on) the ability to know not only what to work on, but what is actually available to work on. These are just not things that MRP was designed to focus on or deal with. So, I think that some elements of pull can be achieved by manipulating MRP; however, I believe it is impossible to reach a true pull environment, and I believe Brian would be leaving money on the table and putting his business at risk by doing this. In addition, I might look for a different company to work for if I were responsible for making MRP perform in this manner. I once met a person at a TOC company that initially tried MRP, then implemented a well-known APS solution. In the end, he was responsible for scheduling the system based on Excel. Two interesting things happened by doing this: 1) the company wanted to promote him to plant manager, but could not free up enough of his time from the Excel scheduling - so he was shopping for a new system just so the company could finally promote him - and 2) the company was continuing to grow to the extent that the Excel scheduling was becoming a serious constraint of the organization. 
On the flip side of this, I have seen companies implement software systems that enable a true pull system, where virtually all of the planners are freed from the daily grind of MRP expedite-and-slide reports and e-mail expedite alerts so that they can spend their time doing work that actually adds value. There will certainly be cases that do not support my statements, but I believe the overwhelming majority do. Another note: I am typically not talking about consumer packaged goods or pure process environments, for two reasons. 1) Typically their issues have more to do with replenishment (and line design in the case of pure process environments) than with scheduling and execution. 2) While I have been in these types of environments, they are not my areas of expertise. --- From: "John Maher" Subject: RE: [cmsig] [Yahoo cmsig] KIS: Probably an Important DBR or S-DBR Adv - Danger of winging it Date: Wed, 3 May 2006 17:59:11 -0500 Dave, Not sure I fully understand your statement on China, but I will comment a little on it anyway. I think you are saying that lead time doesn’t matter on parts coming from China. If it did, why would so many manufacturers that are not in China be buying parts or setting up operations in China, given that there is a very long lead-time associated with them? Obviously the price for the product is compelling enough to these manufacturers to overcome the increased lead-time and increased inventory associated with sourcing from China. Here I am curious whether all factors are coming into play, such as the cost of carrying the inventory, the cost of obsolescence (any insight from the group is more than welcome), cost of transportation, cost of quality, cost of initially establishing the source in China, cost of communication with the source, cost of finding a new source if the previous one failed, etc. 
Beyond this, I wonder why, in a time when everything seems to be moving to China or some other low-cost country, companies such as Toyota are not joining in: http://kanban.blogspot.com/2005/10/toyota-bucks-trend-by-not-outsourcing.html For China and other low-cost producing countries, there will always end up being someplace cheaper to produce product from a labor standpoint (I understand that lower environmental standards, piracy of software, etc. can play a part as well). Whether that be right now, or after the money injected into the economy changes the standard of living and drives wages up in the future, a country and a company must compete on things other than just cost or eventually they will lose. Can you provide me some examples / articles about consumer electronics being the opposite? Also, do you believe that this is the way they should be, or is it just the way that industry has always been? This seems to be an industry where product life-cycles are short, which tends to drive companies to want to be first to market. In addition, excess inventories are costly. They either need to be scrapped, or introduction of the new product must be held back, or (and I believe this is the norm) they need to slash the prices on the current models to move the inventory in order to make room for the new model. In which case I often wonder how much money they are leaving on the table through the discounts and also through the cannibalization of sales that they would have had on the new model had the old model not been such a bargain. I agree with your statement on the process industry. I think their problems are normally more on the order of line design and replenishment than scheduling and execution. From: Dave Tootill [mailto:dtootill@mweb.co.za] Sent: Tuesday, May 02, 2006 6:05 PM To: Constraints Management SIG Subject: RE: [cmsig] [Yahoo cmsig] KIS: Probably an Important DBR or S-DBR Adv - Danger of winging it John Very neat explanation. 
If lead time doesn’t matter… well, because of China I suppose, you can’t get a yellow earthmoving machine for a year or two, so what’s the odd month? Consumer electronics are well reported as being the opposite. And if you’re in a process industry, material-centric views & work orders aren’t of much interest. +( decision making - NUTS AND BERRIES From: "Kent Johnson" Date: Thu, 4 Nov 1999 00:22:46 -0700 I cannot seem to locate the original posting someone sent in with the title "Some Nuts are Bad For Business". So I will repost the original text, renamed because it now includes a sequel. NUTS AND BERRIES Mad scientist and science fiction stories often give us reason to wonder about technology as it whirls forward at a dizzying pace. Technology has created an opportunity for some to become obsessed with numbers and analysis that can lead to equally fictitious results. On the job at one time or another most of us have experienced a fearful fleeting thought about this. Perhaps in a way that the proprietor of this short order establishment experienced. When Cecil the accountant came into the restaurant for his morning coffee, he saw the new rack of peanuts by the cash register. "Sam," he yelled at the owner, "do you realize what that peanut rack is costing you?" Sam said, "It's not gonna cost. The rack is only $25 and I get ten cents a bag for the peanuts that only cost me six cents. I think I will sell about 50 bags a week to start. In 12 ½ weeks the rack is paid for and I make four cents a bag from then on." Cecil shook his head sadly, "Wrong, Sam. Those peanuts are part of your operation now and must carry a share of the overhead. You know Sam, the rent, heat, lights, salaries for your waitress, cook--," Sam broke in, "The cook? What's he got to do with it? He don't even know I got peanuts!" Cecil began writing on a napkin. "Sam, just quickly, your peanut operation is going to have to pay $1,278 a year toward general overhead costs. 
Well, maybe a little more like $1,313 when you consider window washing, soap for the washroom, etc." Sam held up his hand, "The peanut salesman said all I got to do is put 'em on the counter and every bag I sell is four cents more profit." Cecil sniffed with contempt. "Sam, he is not an accountant. Do you know what that space on your counter is worth?" "Hey, it aint worth nothing", Sam said, "There's no stool there." "Sam, you have 60 square feet of counter and you gross $15,000 a year. That space is worth $250 per year." "Ya mean I gotta add $250 a year to the peanuts?" "Right, Sam. Add that to your operating costs and that comes to $1,563 per year or 66 cents per bag. If you sell them for ten cents, you will be losing 56 cents on each bag you sell." "Ok, you're so smart, what do I do?" "You have to cut operating expenses," said Cecil, "move where the rent is lower, cut salaries, take the soap out of the washroom. If you can cut operating expenses by 50% you can cut your cost of each bag of peanuts to 36 cents. In order to make four cents on each bag, raise your price from ten cents to forty cents." Sam said, "Forget it. I'll just throw the damn nuts out. All I lose is 25 bucks for the lousy rack and three bucks worth of peanuts." Cecil shook his head. "It's not that easy Sam. You are in the peanut business. If you throw them away you add $1,563 annual overhead on to the rest of the operation - you can't afford that." Sam looked toward the ceiling, "Last week I made money; now I am in trouble because I wanted to make a few extra bucks on peanuts." Cecil smiled, "That's right Sam, you can't avoid making these poor decisions unless you consult with me first." After Cecil left, one of Sam's regular customers who had been sitting in the corner booth came up and paid his tab. As he was making change Sam asked, "Jonah, did you hear all that? What do you think?" 
Jonah looked down at the cigar he was fidgeting with and said, "Food I can get anywhere, but here I get good food and service. The money you make depends on how much you can sell. The building is here. Not much you can do with that. And no matter how good your cooking is, my stomach can only hold so much. But sometimes when I come in, there's no room and I am not a patient man so it's over to Tony's I go. If you can get your customers in and out faster, you can take their money and mine too. If you get them to buy more, they would only stay sitting here longer to eat it and that prevents you from taking my money. It's not a lack of customers or food that keeps you from making more money, it's a lack of capacity. But the peanuts, that's a different story. The peanuts they take with them as they leave, so you still serve as many customers but now they buy a meal and a little something for later. You have found a way to sell more to your customers and still serve as many of them as before. Same expenses but more sales. That's how you make money." "But what about all that overhead stuff Cecil was talking about?" Sam asked. Jonah smiled and said, "Don't make life more complicated than it needs to be. To sell peanuts or not to sell peanuts, that is the question. You can sprinkle a little overhead on your peanuts if you like. But you will have to scrape it off from all the other things you serve first. They will look more profitable but the peanuts are the real source of that extra profit. The overhead your accountant speaks of? You will have that whichever you choose. Only three things will change if you choose to sell peanuts: sales, peanuts, and the rack. The analysis you gave him was correct." Sam replied, "Sometimes the problem aint capacity. Right now you're the only customer in here and I already fed you. What do you do when you aint got no customers?" 
Jonah held up a water glass that had been on the counter and said, "What do you see here, a glass that is half full or one that is half empty? What your accountant friend would say to you is that you have too much glass. Is it not better to fill it up than to cut it in half?" "So how do I fill it up?" "Look outside at all those people, many of them are hungry right now. What can you do to help them?" "Some are in such an all-fired hurry they can't sit still long enough to eat." Then after a moment's thought on the subject he said, "I could fix something they could take with them." "Exactly! They have lots of money but not time to spend. How about that man over there sitting on the dust bin? He looks like he hasn't had a good meal all week. What would it cost you to serve him?" "I don't think he could afford my prices." With some impatience in his voice Jonah replied, "That is because you not only charge your customers for your service when they are here, but for having the capacity available even when they are not here. What you must do is go back to making a choice between two things. If you do not feed him, things are as they are. Now what will change if you fix him some of your famous blackberry pie? And remember, you are already paying your cook to play canasta with your waitress in the kitchen right now. How will your expenses change to serve that man, right now?" "Just a few more berries, some pie crust and a little higher utility bill to bake it", Sam said. "I could cut the price of a piece of pie by half which he could afford and still make a nice profit, just by using unused capacity." "My guess is that he has friends that would join him. But you must only do this thing when you have capacity you are not using. Your lunchtime crowd will still pay a premium price for your pie when they are in a hurry to get back to their jobs. The key to making money is to use every opportunity and eliminate waste. 
And the biggest waste for many businesses is both wasted demand and wasted capacity. Soap in the washroom is just peanuts." "Say, what sort of business are you in anyhow, Jonah?" "I like to think I help people make money. And for the conversation we just had I have been paid as much as $10,000. But this morning I am between flights and had a small amount of capacity available. It is yours for the price of a piece of blackberry pie to go. And no peanuts, please!" +( definition : drum vs bottleneck From: "Tony Rizzo" Date: Thu, 20 Dec 2001 15:18:28 -0500 Subject: Re: [tocexperts] Drum vs Bottleneck "A bottleneck is any resource the capacity of which is less than the demand placed upon it." A drum resource is any resource the schedule of which is used to stagger the projects along the timeline. In a perfect world, we would have one bottleneck, we would know what it was, and we would use that resource's schedule to stagger the projects. In the current reality, and due to the complete lack of a sane scheduling policy, every resource is a bottleneck. Under these circumstances we have to take an educated guess as to the most heavily loaded bottleneck, and we use the schedule of that resource as the pace setter for the entire system. To identify the most heavily loaded bottleneck resource one needs to do a Herbie hunt. Aggregate the days of effort for each resource, across all projects. Plot the data in a Pareto chart, and look for the tallest column. Of course, this presumes that you have project plans. +( definition of constraint From: "Bill Dettmer" Date: Thu, 12 Apr 2001 20:49:35 -0700 [According to the guy who conceived the Theory of Constraints, a constraint is anything that limits the system's attainment of its goal. Based on that definition, does poor quality qualify as a constraint? Can it prevent a system from achieving its goal? If you answer in the affirmative, then quality CAN be and IS a system constraint. 
Whether there are root causes at a lower level than quality is not germane to the question. There's no law that says a system's constraint HAS to be a root cause. I've seen plenty of them in the middle (or at least several levels of cause-and-effect up from policies that might be considered root causes).] --- From: "Bill Dettmer" To: "CM SIG List" Subject: [cmsig] RE: Quality Costs!? (A quality zombie rises once again) Date: Thu, 12 Apr 2001 19:22:09 -0700 Reply-To: cmsig@lists.apics.org ----- Original Message ----- From: "HP Staber" Sent: Thursday, April 12, 2001 12:55 PM > I do not think that there is a quality constraint. If you have bad quality > products/services then you will not find customers - this then is a > market constraint, isn't it ? [I would have to agree with Mark on this. There absolutely CAN be a quality constraint. If you follow the logic of the thinking process, then a constraint is generally at a root cause level. It just about HAS to be. If it's not, then something else is the constraint. Poor quality alone doesn't necessarily cost you customers (meaning customers leave you for a competitor because of your product quality). Poor quality can cost you opportunities to increase Throughput if your quality is so bad that rework and scrap (starting over) consume so much of your production capacity that you can't take on new work. +( definition of system From: "Tony Rizzo" Date: Wed, 7 Mar 2001 21:59:09 -0500 Dr. Russell L. Ackoff noted, a system is defined not by its components but by the interactions between those components. By their very nature, systems are interactive, not additive. Hence, it is mathematically impossible to allocate any objective function to a component of a system. To allocate T to any piece of a system is to make the same conceptual mistake upon which activity based costing is founded. You might as well go looking for that farm animal and sharpen your knife. 
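The "Herbie hunt" Tony Rizzo describes above (aggregate the days of effort for each resource across all projects, then look for the tallest column of the Pareto chart) can be sketched in a few lines. This is only an illustration; the project and resource names below are invented:

```python
from collections import defaultdict

# Hypothetical plan data: (project, resource, days of effort) triples.
plans = [
    ("ProjA", "Design", 30), ("ProjA", "Test", 45),
    ("ProjB", "Design", 25), ("ProjB", "Test", 50),
    ("ProjC", "Integration", 20), ("ProjC", "Test", 40),
]

# Aggregate days of effort for each resource, across all projects.
load = defaultdict(int)
for _project, resource, days in plans:
    load[resource] += days

# The tallest column of the Pareto chart is the most heavily loaded
# resource: the candidate drum.
drum = max(load, key=load.get)
for resource, days in sorted(load.items(), key=lambda kv: -kv[1]):
    print(f"{resource:12s} {days:4d} days")
print("Candidate drum:", drum)
```

As Rizzo notes, this presumes you have project plans to aggregate in the first place.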
- Original Message - From: "Bill Dettmer" Sent: Wednesday, March 07, 2001 12:56 PM > - Original Message - > From: Brian Potter > Sent: Wednesday, March 07, 2001 3:37 AM > > True enough, but non-constraint operations can influence (for good or ill) > constraint performance. Since the "standard" ToC recommendation for > measuring non-constraint operations is Throughput Money-Time, it seems > appropriate to measure support functions on the same metric. If facilities > does something (maybe completing maintenance in less than scheduled time) > that increases throughput, they deserve to "earn" the Throughput Dollar-Days > they contributed. If the organization loses constraint time because HR can't > hire a key person, HR deserves the Throughput Dollar-Day penalty. If support > functions do well, their "score" will usually be zero with an occasional > small plus for bringing something in early. Consistent TDD losses caused by > support functions would be a sure indicator of wholesale failure to > subordinate to the constraint. > > [How is attributing a piece of "T" to a component of an organization any > different than allocating a piece of fixed cost to the same component? The > original comment was: "...and because they are support they are indirect > and you ain't a gonna be able to measure their (direct) impact on T." > > ...And I agree with this. Especially the "direct" part, for the reason > originally offered. Moreover, why bother? What is gained by crediting the > facilities operation with "earning" some portion of T (other than making > them "feel good"?)] --- From: "Tony Rizzo" Date: Fri, 10 Aug 2001 23:54:26 -0400 ADDITIVE and INTERACTIVE SYSTEMS Multitasking has this sort of beneficial effect when the output of an organizational system equals the sum of individual contributions, as is the case for defense contractors that perform typical cost-plus contracts. 
This is also true of, say, organizations that perform piece work, such as farms that use migrant workers. Under such circumstances, keeping everybody busy is entirely consistent with maximizing the output of the system, because the system's output is additive, rather than being interactive. However, most organizations that perform projects for new-product introduction or for IT purposes are described best not as additive systems but as interactive systems. In fact, for many, the output is attributable entirely to interactions. For these, multitasking is a total disaster. ----- Original Message ----- From: John Curran Sent: Friday, August 10, 2001 8:18 PM The subject has been the topic of a number of sessions in this list. Intuitively we all know it is not the best thing to do, yet most supervisors/managers/owners seem to think that it is a legitimate means of squeezing just a bit more from limited resources. In a former lifetime, when I was called in to straighten a mess out, the first thing I did was get rid of the multi-tasking. Once done, the projects quickly got back on course. I'm a devotee of non-multi-tasking. Those who drive and use cell phones might take a lesson from this article, below. Sorry I can't say more but I've not had the time to pursue it further. John C. Excerpted from the August 7, 2001 issue of the Roanoke Times, which in turn got it from the Hartford Courant. Scientific studies have shown that multi-tasking is a waste of time. David Meyer, professor of mathematical psychology at University of Michigan is coauthor of a study published in the Journal of Experimental Psychology: Human Perception and Performance. He believes people believe they have abilities or limitations which don't apply to real performance. His conclusion: rapidly switching between tasks generally wastes time - a lot of it. In some cases multi-tasking added 50 percent to the time required to do chores. 
Moreover, tasks were not just done more slowly, they were done more poorly (even when cash rewards were offered). Not just speed of task and accuracy were affected, but fluency of task (gracefulness) was negatively influenced by an overload of multi-tasking. Similarly, scientists at Carnegie Mellon, using brain imaging technology, found the level of brain activity devoted to a task decreases when two tasks are performed at the same time, dragging down performance. They reported in the journal NeuroImage that people performing two activities at once do neither one as well as when they do one at a time. Multi-tasking is a problem all society faces and the idea that it simplifies life is negatively seductive. Studies were funded by the FAA and the U.S. Navy. --- From: "Tony Rizzo" Date: Sun, 12 Aug 2001 00:43:58 -0400 I'd be happy to explain. If the output of the system equals the sum of the outputs of individuals, then the system is entirely additive. No interactions exist between components of the system (individuals). This sort of system includes all hand labor where each individual produces the end-product. Examples include picking farm produce, sewing simple garments, most craft work, and billing clients for hours spent on their jobs. The latter is often done by lawyers and defense contractors. With such additive systems, the output of the system depends entirely on how many hours each individual works. Utilization does become an important measurement, because the system's output (profitability) is linearly proportional to the utilization of workers. This is why the CFOs of defense contractors monitor very closely the "overhead hours" of their people. Of course, there are defense contractors that don't perform cost-plus contracts. There are others that derive more profit from follow-on production contracts than they do from the development efforts. For these, the overhead measurement is irrelevant, although they continue to behave as if it were relevant. 
I've met managers who used to work for defense contractors but now work for commercial enterprises, and they still strive to maintain low overhead hours with their people. It's tough to break old habits. +( definition of ToC 1 From: "Jim Bowles" Subject: [cmsig] What is TOC? - Clarity on current debate Date: Mon, 22 Jan 2001 22:23:49 -0000 I feel that the current debate about TOC and TP may be misleading for newcomers to the list. So here is my attempt to add clarity. "It is not TOC or the TP that limits its use - It is the person(s) using it that imposes the limits." Having been at the Conference when Eli Goldratt agreed to use a three letter acronym for his work he said that: TOC = Thinking Process plus Applications (he claims to have used the Thinking Process to derive the applications.) A fuller version of what was said is given below: The Theory of Constraints is a body of knowledge that has been accumulating for about 20 - 25 years. It comprises two parts, a set of applications for improving different functions of an organisation, and a set of scientifically based tools for resolving different types of problem or for creating new applications. These are known as the Thinking Processes. The best known application is for Production/operations (described in the form of a novel called The Goal, 1984). Others include Finance and measures, Project management/ Engineering, Distribution, Management of People, Marketing and Sales and the all-embracing Strategic and Tactic approach. On their own each application provides the means to change the performance of a function by several orders of magnitude. But the most powerful of all is to use the combination of applications and the Thinking Processes in a strategic way in what can be called a Process Of On-Going Improvement. We do this to set the direction and best practice for developing the organisation as a whole. 
The Thinking Processes (which have their origins firmly rooted in the hard science of physics) are a set of tools/processes or procedures that allow you to "break out of the box". They can be used singly or in combination (known as the TP Road Map). Each tool addresses a different problem depending on what it is that blocks you from moving forward towards your chosen Goal. In essence they help you address three questions: What to change, What to change to, and How to cause the change. Of course these questions only have relevance if you know what you want to achieve (A Goal) and know how to measure your progress towards that Goal. In TOC the analogy of a chain is used to focus attention on how we view an organisation. On the one hand we can view it as a set of independent links. This is the most common view in all types of organisation and it encourages people to do things that are good for their link alone. We call this focusing on the local optima. On the other hand we can consider the chain as a whole and then we need to consider its weakest link(s). We call this focusing on the Global Optimum. Identifying the weakest link(s) is synonymous with finding the system's "constraint". For those more familiar with the TQM philosophy or Deming this is our way of "Using the few to control the many". The most commonly encountered problem is the differences in the way that people view and measure their chain or part of it. By focusing on the "weight" or "cost" we come to one set of decisions, actions and conclusions. But by focusing on its strength or its ability to deliver results we come to an entirely different set of actions and solutions. This has led TOC practitioners to a consciousness of two opposing paradigms. One we call the "Cost world" the other we call the "Throughput World". Any questions? +( definition of ToC 2 From: "Jim Bowles" To: "CM SIG List" Subject: [cmsig] Re: A really stupid question? 
Date: Sat, 17 Jul 1999 11:39:51 +0100 The Theory of Constraints is a body of knowledge that has been accumulating for about 20 - 25 years. It comprises two parts, a set of applications for improving different functions of an organisation, and a set of scientifically based tools for resolving different types of problem or for creating new applications. These are known as the Thinking Processes. The best known application is for Production/operations (described in the form of a novel called The Goal, 1984). Others include Finance and measures, Project management/ Engineering, Distribution, Management of People, Marketing and Sales and the all-embracing Strategic and Tactic approach. On their own each application provides the means to change the performance of a function by several orders of magnitude. But the most powerful of all is to use the combination of applications and the Thinking Processes in a strategic way in what can be called a Process Of On-Going Improvement. We do this to set the direction and best practice for developing the organisation as a whole. The Thinking Processes (which have their origins firmly rooted in the hard science of physics) are a set of tools/processes or procedures that allow you to "break out of the box". They can be used singly or in combination (known as the TP Road Map). Each tool addresses a different problem depending on what it is that blocks you from moving forward towards your chosen Goal. In essence they help you address three questions; What to change, What to change to, and How to cause the change. Of course these questions only have relevance if you know what you want to achieve (A Goal) and know how to measure your progress towards that Goal. In TOC the analogy of a chain is used to focus attention on how we view an organisation. On the one hand we can view it as a set of independent links. 
This is the most common view in all types of organisation and it encourages people to do things that are good for their link alone. We call this focusing on the local optima. On the other hand we can consider the chain as a whole and then we need to consider its weakest link(s). We call this focusing on the Global Optimum. Identifying the weakest link(s) is synonymous with finding the system's "constraint". For those more familiar with the TQM philosophy or Deming this is our way of "Using the few to control the many". The most commonly encountered problem is the differences in the way that people view and measure their chain or part of it. By focusing on the "weight" or "cost" we come to one set of decisions, actions and conclusions. But by focusing on its strength or its ability to deliver results we come to an entirely different set of actions and solutions. This has led TOC practitioners to a consciousness of two opposing paradigms. One we call the "Cost world" the other we call the "Throughput World". Any questions? All productive editing gratefully received. Jim Bowles A Certified Associate of the Goldratt Institute ________________________________________________________ Date: Sat, 3 Jul 1999 06:52:45 -0700 (PDT) From: Tim Sullivan Subject: [cmsig] Re: TOC and Throughput Accounting. To: "CM SIG List" Here are a few www sites: http://www.rogo.com/cac/ (includes excellent list of TOC books, etc.) http://users.aol.com/caspari0/toc/MAIN.HTM (constraint accounting site) http://www.ciras.iastate.edu/toc/ (lots of general info on TOC) ________________________________________________________ Date: Mon, 21 Jun 1999 19:56:57 -0400 From: Anita Ehrenfried or Brian Potter AGI's website for ordering GSP tapes: http://www.eligoldratt.com/ Prochain's website: http://www.prochain.com/ This is Newbold's company. They have distributors outside the US. 
________________________________________________________ From: Greg Lattner To: "CM SIG List" Subject: [cmsig] Re: TOC and multi-national conglomerates Date: Mon, 19 Jul 1999 12:20:01 -0600 The Cloud Dr. Goldratt describes is very common. The fuel to get people to buy TOC is often specific failure rather than broad failure. When newcomers to TOC have UDEs of failure, they recognize they need something more valuable than mainstream methods taught by Harvard and other prestigious universities. That is the time to present the TOC solution, when your internal customer has failure and wants more valuable methods. Remember the old TOC sales cloud with (D) Present your solution vs (D') Don't present your solution. It is hard for top management to accept they have broad failure. It's hard for Harvard University professors to admit they are teaching methods that cause dysfunction and broad failure. There are emotional forces, political forces, and forces of greed that are obstacles to all of this. To break through these constraints defies human nature. People, left to themselves, without the help and vision of a higher power are selfish. That is universally true all over the world. The question is how can you use these human weaknesses to force people to learn TOC at the top and smash it on down through all the levels of management? Often those at the top say they don't have time to learn TOC, or that they don't need to learn anything at all because they are so proud. Often those at the top are really followers and wait for others to benchmark and for Harvard (the MBA leader) to give their royal blessing on new methods. That way they cover their butt. But when big companies have success with TOC they freeze up and won't share what they're doing for two reasons: 1. Those that like TOC see it as a competitive advantage 2. Those that don't like TOC are too embarrassed to admit they didn't think of it first (it's the pride thing again). 
So back to the question of how to use these human weaknesses to force those at the top to smash it down and fire people who won't really learn it. Or is that the solution? If TOC ever is done only for lip-service (that is to say Yes Boss, we're doing it) but they don't do it in their hearts, then TOC will start to have a list of TOC failures, whereas most TOC implementations are successes. Top management somehow has to realize that in TOC all the rules are changed. The roles change completely. And the roles and functional silos that have been taught by Harvard and other prestigious universities are wrong! I'm especially hard on Harvard because they are often viewed as the leaders of business academia. If they are the leaders it's time for them to LEAD!!!!!!!!! Get off their duff and start moving forward. Break down the walls of traditional roles and functional silos, not with a minor tweaking from Reengineering, which tends to introduce computer software and ERP as the new age solution to all of these functional silos. But instead TOC. Instead Harvard has been dragging their feet and pandering to the old mindsets of Cost Accounting, which are the core problem and #1 Constraint to Productivity in the World. Shame on you Harvard! Some day I need to put these loose thoughts into a coherent Current Reality Tree. Hope this helps identify some of the entities. Greg Lattner -- From: Jim Bowles[SMTP:jmb01@globalnet.co.uk] Reply To: cmsig@lists.apics.org Sent: Monday, July 19, 1999 11:30 AM To: CM SIG List Subject: [cmsig] Re: TOC and multi-national conglomerates Hi Dieter In a recent letter to the poogiforum Dr Goldratt addressed this very issue: He said: Now our conflict is clear - can you picture the cloud? The objective (A) is to enable TOC to spread much faster. To do it properly, we have to (B) increase the rate of new implementations and (C) ensure long term successful implementations. 
However, in order to (B) increase the rate of new implementations we should (D) start with the TOC application that is desired and needed by the organization. But in order to (C) ensure long term successful implementations we had better (D') not start with the implementation of any specific applications but rather concentrate first on persuading all top managers to fully embrace TOC. I'm interested in learning how many of you experience the above conflict, so please take the time and answer the following question: Does this match your experience? Jim Bowles +( definition of ToC 3 From: "Potter, Brian (James B.)" Subject: [cmsig] Definition of TOC Date: Wed, 28 Mar 2001 17:41:07 -0500 A personal opinion (not an "official" definition for an "authoritative source"): ToC is a continuous improvement management philosophy (...unifying the ideas of W. E. Deming, P. F. Drucker, T. Ohno, E. F. Schumacher, P. Senge, S. Shingo, and others by...) focusing on system constraints using logical analysis techniques and quantitative tools rooted in the philosophy of scientific enquiry. Key concepts: - continuous improvement: always get better - management philosophy: this is (when properly done) strategic thinking - focus: only a few critical control points need top leadership attention - system constraint: the critical control points are the system constraints - logical: cause and effect based logical thought offers valuable insights - scientific: the "theory" in "Theory of Constraints" is in the scientific sense of a MODEL which "explains" observations about the behavior of a system, organizations in this case. We are NOT talking about something which might be "true" or "false" pending further investigation. We ARE discussing an approach to MODELING human organizations which has surprisingly good PREDICTIVE power. 
The partial list of respected thinkers (parenthetically mentioned within the above personal definition) sniffing around the same domain gives us clues that this is important and that Goldratt was neither the first nor the only one to appreciate that importance. It is even possible that he was not fully aware of the prior contributions by some of the others. From my view ToC is a "unified theory of management." Under its umbrella, other management theories (narrower than ToC, but sound within their own domains) interact with one another and contribute to fuller understanding of the whole. +( definition Throughput Measurement From: Norm Henry Date: Fri, 5 Jan 2001 11:07:27 -0800 Throughput: The rate at which the system generates money through sales. Throughput is typically determined as sales revenue less directly variable expenses. Throughput Dollar Days: The dollar value assigned to missing orders equal to the selling* price multiplied by the number of days the shipment is already late. Inventory Dollar Days: The dollar value assigned to inventory equal to the selling* price multiplied by the number of days anticipated until shipped (sold). * The above come from reading Eli Goldratt's Essays on the Theory of Constraints. I would be interested whether, in the definitions of Throughput-Dollar-Days and Inventory-Dollar-Days, which are not directly quoted from Dr. Goldratt, the word "selling" is correct or whether this would be better stated using the word "throughput." My thinking would be to base this on throughput value and not actually on the sales price. Any thoughts or answers from anyone would be appreciated. From: Norm Henry Date: Fri, 5 Jan 2001 14:35:08 -0800 Yes, throughput is defined as a rate. Throughput is often calculated as an amount on a product but is actually the rate over a period of time at which money is generated rather than an amount per product. Directly variable expense is what it says: directly variable. 
Burdens are not directly variable and are not included in throughput. As regards direct and indirect, it depends on what you mean by those terms. Often what some refer to as direct is still not directly variable and would not be part of throughput. Only the directly variable portion of direct expenses is included. Indirect expenses would likely not (depending on your definition) be variable and would also not be included in throughput. What is typically variable are raw material costs, sub-contract work, sometimes some freight charges, and sometimes sales commissions. You ask about profit margins? Profit margins on what? Sales less directly variable expenses = Throughput. Throughput in this sense is the margin on sales; accountants have referred to this as contribution margin. Throughput less Operating Expenses = Net Profit. Net Profit is the net profit margin.

---

Tom Schilling:

There are 2 types of constraints:
- system constraints (= affecting the whole company)
- process constraints (= affecting one process)

There can only be one system bottleneck at a time, but there may be many process bottlenecks at the same time.

Throughput Rate (TPR): TPR = (sales - consumption) / labour
- labour may be hours or the cost of people and/or machines
- the ratio itself is meaningless; the trend is important
- if you define it cleverly you can derive a P&L from TPR

________________________________________________________

Definition of THROUGHPUT

From: Norm Henry
Subject: [cmsig] RE: T I and OE Definitions
Date: Fri, 30 Jul 1999 08:17:06 -0700

Yes, you missed something. Sorry. This is a point of confusion within TOC for many people who are exposed to TOC without reading the actual writings. And this is damaging to the introduction and acceptance of TOC. Often in magazine articles in accounting publications there will be references to TOC with Throughput considering only raw materials. This is generally from people writing about ABC and pointing out the problems with TOC.
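The arithmetic spelled out above (Sales less directly variable expenses = Throughput; Throughput less Operating Expenses = Net Profit) can be sketched directly. All figures here are illustrative, not from any of the posts.

```python
# Throughput accounting arithmetic as described above.
# Only directly variable expenses are subtracted from sales;
# burdens and labor (unless paid by piece rate) stay in Operating
# Expense. All figures are illustrative.

sales = 100_000.0
directly_variable = {
    "raw_material": 35_000.0,
    "subcontract_work": 5_000.0,
    "freight": 2_000.0,
    "sales_commission": 3_000.0,
}
operating_expense = 40_000.0  # everything else, including most labor

throughput = sales - sum(directly_variable.values())
net_profit = throughput - operating_expense

print(f"Throughput: {throughput}")   # Throughput: 55000.0
print(f"Net Profit: {net_profit}")   # Net Profit: 15000.0
```

Note that under conventional contribution-margin analysis, direct labor would often be moved into the variable bucket, shrinking the reported "throughput"; the whole point of the post is that it belongs in operating expense unless it truly varies with each unit sold.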
But, read nearly any writings on TOC from people who have actually studied or been trained in TOC. Read the actual writings of Eli Goldratt from 10 or more years ago (The Haystack Syndrome, for example). In the definition of Throughput they will all refer to directly variable costs and not only to raw materials. Raw material is used as a primary example because it is typically the largest directly variable cost and sometimes the only directly variable cost. Also, raw material is used as the primary example because when one refers to variable costs, many people think this includes direct labor as traditionally included in contribution margin analysis. But as you would recognize from TOC, labor is generally not directly variable unless one pays by piece rate. When evaluating Throughput, one should generally include all directly variable costs such as outside processing, freight, sales commission (if directly variable), etc. Otherwise, one does not really see the Throughput that is actually generated. This is consistent with TOC teachings for many years. Norman Henry

-----Original Message-----
From: Kent Johnson [mailto:phred53@mail.xmission.com]
Sent: Thursday, July 22, 1999 4:42 PM
To: CM SIG List
Subject: [cmsig] T I and OE Definitions

Alright. I understand about active and passive inventory. But in reading some of the responses, I noticed that one individual used a definition for Throughput that implied:

Throughput = Sales Revenue - all variable costs

This I would have called Marginal Contribution! I had always understood Throughput to be:

Throughput = Sales Revenue - purchase price of raw materials

---

From: "Mark Woeppel"
Subject: [cmsig] RE: Measuring ThroughPut
Date: Sat, 24 Mar 2001 13:47:26 -0600

You should be very careful measuring local parts of your business on throughput, especially those that have internal customers or are not the constraint resources.
You could end up with local efficiencies and all the associated problems - over-activation, emphasis on reducing setups to "save" capacity, etc. If you're trying to assess their impact on the company, look at buffer management as an alternative and measure buffer penetrations. Of course, those cells that are entirely self-contained can and should be measured on throughput. Just watch for changes in product mix; they could have an impact on throughput even though the cell is productive.

---

From: "David G. Himrich"
Subject: [cmsig] RE: Measuring ThroughPut
Date: Sat, 24 Mar 2001 21:00:41 -0600

I would encourage you to limit Throughput measurements to those cells that actually sell things to external customers. One of the most powerful insights that Theory of Constraints provides is that little subsystems in a manufacturing organization don't have Throughput; the business as a whole creates Throughput. Those subsystems need to be configured to work together to increase Throughput. Local optimization as defined by local measures of efficiency is what almost everybody has been doing for a hundred years, and it's not the best way. In fact, it often works counter to maximizing Throughput.

+( delivery dates - sell and promise the impossible

Date: Wed, 13 Sep 2000 18:08:21 -0400
From: Tony Rizzo
Subject: [cmsig] Re: Delivery Dates

The problem is in the measurements. If the sales people are measured and rewarded on the basis of the throughput that is actually generated, when it is generated (as opposed to when it is negotiated), then the sales people become very concerned with the speed of the system. They become reluctant to do anything that slows the system, because slowing the system means delaying their own bonuses.

> Doorenbos Steve wrote:
>
> In our company, the sales department takes a very active role in
> promising delivery dates to our customers. They are supposed to base
> their commitments on the data we provide them about current capacity
> and requirements.
> Since their goal is to maximize sales:
>
> * they have a tendency to overbook.
> * they also make commitments to customers based on the date we will
>   receive the actual order, promising them delivery in "X" number
>   of weeks, sometimes without considering the requirements. (We
>   are a make-to-order industry)
>
> Is this standard industry practice, or do other companies only allow
> production control to promise delivery dates?
>
> sdoorenbos@viracon.com

---

Date: Wed, 13 Sep 2000 16:37:42 -0700
From: Norm Rogers

Damn the delivery dates, sales full speed ahead! This is true in every manufacturing company I have worked in, and I believe it is true in all successful companies. Management of course cannot acknowledge such activities publicly. When faced with telling the truth to the customer about a realistic delivery time, you stand a good chance of losing the sale. By telling him that you cannot deliver in a timely manner, you are telling him that he needs to give the order to your competition. By being honest you have lost the sale and the customer, since he is now doing business with your competitor. The only chance you have of getting him back is your competitor messing up. By giving an unrealistic ship date, you get the sale and keep the customer. As the anticipated date approaches you then warn the customer of the late shipment. This will not make him happy, but you at least have the sale and the customer. You have a better chance of getting his repeat business in the second scenario than you do in the first. Furthermore, by taking the order you give manufacturing/purchasing the chance to increase capacity in order to ship on time. There remains a chance, even if only a small one. By telling your customer to go elsewhere, there is no chance at all. The customer may or may not remember you as an honest person who couldn't deliver, but he will remember who supplied him last time for sure.
+( Dollar Days

Dollar Days

We recently received inquiries regarding the use of the Dollar Days concepts in projects. What is written below is a summary of the concept. In Project Management, Dollar Days has two components – Throughput Dollar Days and Investment Dollar Days. The concept of Dollar Days in projects was first developed in 1990 in the creation of the Flush final judge for projects. The idea behind Flush (referenced by Goldratt at the end of his book Critical Chain) was that there was not a good measurement to judge the impact of changing timing in a project in conjunction with changing dollars in a project. These items have different base units. Net Present Value captures the cost of tying money up as a matter of inflation and interest. But we all know that while money is tied up, or when projects take longer than planned, opportunity is also lost. This is where Investment Dollar Days come in. For each day we have each dollar invested, we accumulate an additional Investment Dollar Day (IDD). As we invest more and more each day, we accumulate IDDs not only for the dollars we just spent, but also for the dollars that remain tied up. As we earn money from our project, we begin to reduce the dollars outstanding and slow the rate of accumulating IDD. At the point when, in traditional terms, all the money has been paid back, we have accumulated the most IDD. After all the actual money we spent is paid back (also considering interest and inflation), Throughput begins to accumulate. We gain Throughput not only for the day we receive the money, but also for each day of opportunity it brings. In other words, after we slow the rate of IDD accumulation, we begin to accumulate Throughput Dollar Days to "pay off" our Investment Dollar Days. When all of the Investment Dollar Days have been paid back, the project Flushes – we really have returned the investment in both money and opportunity lost. Different project assumptions can generate different flush curves – some with more risk than others.
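A rough sketch of how IDD and TDD accumulate over a project's cash-flow profile, under the simple reading of the description above. The daily-granularity model and all cash-flow figures are illustrative assumptions, not the original Flush procedure.

```python
# Sketch of Investment Dollar Days (IDD) and Throughput Dollar Days
# (TDD), under a simple reading of the description above: each day,
# every dollar still tied up adds one IDD; once the investment is
# recovered, every surplus dollar adds one TDD per day. The project
# "flushes" when TDD has paid back the accumulated IDD.
# Cash flows are illustrative (negative = spend, positive = earn).

def flush_day(daily_cash_flows):
    invested = 0.0   # dollars currently tied up
    idd = 0.0        # accumulated investment dollar days
    tdd = 0.0        # accumulated throughput dollar days
    for day, flow in enumerate(daily_cash_flows, start=1):
        invested -= flow          # spending (negative flow) ties up dollars; income frees them
        if invested > 0:
            idd += invested       # still in the hole: accumulate IDD
        else:
            tdd += -invested      # surplus: accumulate TDD
        if tdd >= idd and idd > 0:
            return day            # the project has flushed
    return None                   # never flushed within the horizon

# Spend 100/day for 5 days, then earn 60/day:
flows = [-100] * 5 + [60] * 40
print(flush_day(flows))           # 24
```

The interesting property this captures is that the flush point comes well after the traditional payback point (day 14 in this example), because the opportunity cost of the days the money sat idle also has to be repaid.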
Flush can be used as a means of deciding which projects have higher precedence or which options have less risk of flushing. The assumption underlying the Flush and dollar days concepts is that cash is a constraint. We can use Flush in not-for-profit environments where Throughput is not generated in monetary terms: we use the Investment times days and Throughput times days of the constraint that limits or gates the amount of work we take on.

+( Drum Buffer Rope 1 Definition

From: "Richard L. Franks"
Subject: [cmsig] RE: DBR or CCPM?
Date: Wed, 9 Feb 2000 10:58:48 -0800

1. What is the difference between DBR and CCPM/CCMPM?

DBR was designed for improving production. In its basic form, it
a. identifies the constraint (called the Drum) in the production process (typically a resource in production or the market for the product)
b. determines the schedule for when each production batch should reach the constraint (typically the constraint resource or the shipping dock)
c. releases materials to the shop floor a fixed lead time (called the Buffer Time) ahead of that scheduled time. This buffer time may be different for each product line, but in the basic form is not different for every production batch. I.e., typically there are only a few Buffer Times to choose among when starting a batch.
d. In execution, you check the status of the buffer and (only) take action if it is required.

There is more to it, but that is the basic idea.

CCPM (Critical Chain Project Management) was designed for improving the way projects are managed. In its basic form,
a. Each project is individually planned, clearly indicating the tasks to be done, the resources each will use, the times involved for each task and the dependencies between all the tasks (and many people explicitly include the inputs and outputs of each task)
b.
Each project is then scheduled using average task times for the task durations and constructing time buffers to allow for the variability of tasks on each dependent chain of events. There is a project buffer at the end of the project, plus feeding buffers where each sub-chain impacts the critical chain.
c. If there are multiple projects being done by the same resource pool, then CCMPM (Critical Chain Multi Project Management) is used. The most important difference is that project start times are staggered to reduce resource contention between projects done by the resource pool.
d. Changes in the way projects are executed are a big part of CCPM and CCMPM. There are several major pieces:
i. Resources do NOT multitask,
ii. There are NO intermediate milestones for individuals,
iii. Individuals are expected to work full speed on a task as soon as it is ready to start and keep at it until they hand it to the next person,
iv. The progress of the project is actively tracked by regularly checking the state of the buffers and taking action (only) if required.

There is more to it, but that is the basic idea.

2. How do I decide if I should use DBR or CCPM/CCMPM?

If both DBR and CCMPM are candidates for use in some environment, notice the differences which should affect the choice:
a. With CCMPM you typically plan each "project", which takes a fair amount of effort. With DBR, you typically decide that this "order" is of product type "x" and use the buffer time associated with that. DBR would take much less planning time per project/order.
b. Typically, CCMPM is used when there is a lot of variability between the projects. Sometimes each one is very different. It is also common for the projects run under CCMPM to take much longer than the jobs under DBR (for example, months versus days).
c. Typically, the buffer tracking in DBR is pretty simple. Usually the question is whether the unit has reached the constraint. If you are in the first 1/3 of the buffer, you don't care much.
In the second 1/3 of the buffer, you determine why it isn't there and plan action in case it is required. In the last 1/3 of the buffer, you take the action you planned.
d. Typically, CCMPM buffer management is a lot like DBR buffer management, but you have both Project Buffers and Feeding Buffers to manage. It is common for projects to have one Project Buffer and 10 or 20 Feeding Buffers. Many people track both the buffer status and the amount of Critical Chain left.

3. Why can I not use DBR for project management?

a. You can, if your projects are fairly short and follow about the same plan each time. For example, DBR might work fine for a developer of houses within a tract where all the plans are of only a couple of types and houses can be churned out fairly quickly. A developer of high-end custom homes, each of which has a different architect, would probably be a lot better off using CCMPM.

4. What methodology should be used in a product development organization? Why?

a. In most places, product development has a lot of variability in the plans as well as in the time it takes to do the individual tasks in the plans. Every Product Development organization I've worked with has chosen CCMPM due to its greater flexibility and control in complex situations. For each of them, DBR would have been a very poor choice.

---

Date: Mon, 25 Sep 2000 07:46:13 +0200
From: Eli Schragenheim
Subject: [cmsig] RE: DBR and Capacity Planning

Jean-Daniel Cusin wrote:
>
> The drum-buffer-rope technique assumes that the variations in demand can be
> absorbed within the tolerances of available, and that there will be a
> sufficient buffer (stock, lead-time or capacity) that can be mobilized to
> absorb excessive (or very low) demand at an acceptable level of cost. The
> drum-buffer-rope technique is at the level of execution; it is not a
> planning tool.

DBR does NOT assume that the variations in demand can be absorbed within the internal buffers.
When DBR is used for scheduling, some immediate means of adding capacity can be considered (overtime, for instance). But at that point the options may be limited, and eventually excess load is pushed into the future. The buffers within DBR are used for the internal variations, not for the external ones (demand and supply). This does not mean that the DBR methodology ignores long-term capacity planning. The concept of the drum should be used also for planning for the longer term. The focus on the CCR (capacity constraint resource) makes it pretty straightforward to ask the question: do we face future demand that is more than what the CCR can handle within the tolerance of the buffers? If so, we either add capacity (first to the CCR, then inquire whether an interactive constraint may emerge and add capacity to that resource as well) or impact the demand. In cases of peak and off-peak periods, one should certainly start loading the CCR in the off-peak period for the peak period itself. There is no need to go into sophisticated forecasting. All we need is to forecast the minimum quantities we almost certainly need at the peak, and add what is necessary at the peak period itself. Having a capacity constraint does not mean treating the market as a non-constraint. On the contrary: the market demand should be treated as a constraint. The drum should exploit both the market requirements and the CCR capacity. All these issues are discussed in a forthcoming book by Bill Dettmer and me, along with a simulation of a whole year in the life of a manufacturing organization. Regarding the CRP: due to the infinite capacity loading, the assumption that transfer batch equals process batch, and the arbitrary use of lead-times, the actual timing the CRP considers is way off. The RCCP is more useful because it does not pretend to be accurate. But the real question is the reliability of the forecast. The DBR logic leads us to better answers.
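Schragenheim's planning question ("do we face future demand that is more than what the CCR can handle?") can be sketched as a simple forward load check on the CCR. All resource names and figures below are illustrative.

```python
# Simple long-term load check on the CCR, in the spirit of the post
# above: compare forecast demand (in CCR-hours) against available CCR
# capacity per period. A sustained overload means we must either
# elevate the CCR or shape demand. Figures are illustrative.

ccr_hours_available_per_week = 100.0

# Forecast CCR load (hours) per week, with a peak in weeks 4-6:
forecast_load = [80, 90, 95, 130, 140, 120, 85]

carryover = 0.0  # load that cannot fit and spills into later weeks
for week, load in enumerate(forecast_load, start=1):
    total = load + carryover
    carryover = max(total - ccr_hours_available_per_week, 0.0)
    status = "OVERLOADED" if carryover > 0 else "ok"
    print(f"week {week}: load {total:.0f}h, {status}")

if carryover > 0:
    print(f"unresolved CCR overload: {carryover:.0f}h -> elevate or pre-build")
```

This naive check spills excess load forward; the post's actual prescription is the reverse of spilling, namely pulling the almost-certain minimum peak quantities back into the off-peak weeks, where the check shows slack (20, 10, and 5 hours in weeks 1-3 here).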
---

From: Norm Rogers [mailto:nrogers@n-c.com]
Sent: Wednesday, September 27, 2000 11:25 AM

You can either tell the customer that his delivery time is anticipated to be x number of months, and if that is not acceptable then he can go to your competition with his order, or you can take the order and hope for the best. If you take the order then you need to try to increase your capacity in the constrained area. You can do this by 1) overtime, 2) additional workforce and equipment, or 3) outsourcing. Depending upon the severity and anticipated length of the sales surge, you need to weigh which alternative (or mixture) will best suit your needs.

From: "Potter, Brian (James B.)"
Date: Wed, 27 Sep 2000 14:13:08 -0400

Please accept a few additions to your list ...

4) changing policies to reduce scheduled downtime (e.g., hire extra operators or cross-train operators who normally run other equipment so the constraint can run during lunch, break, shift change, and on weekends; perhaps your PM downtime includes some activities {e.g., visual inspection or lubrication} which you can actually perform during production if maintenance crews and operations crews cooperate)
5) changing processes to reduce scrap caused by constraint processing
6) changing processes to reduce scrap AFTER the constraint operation
7) improving preventive maintenance (to reduce unscheduled downtime) on the constraint resource's equipment
8) creating alternate routings which allow off-loading the constraint to a different resource able to perform the same operation (though perhaps with more "labor costs" per unit, a higher scrap rate, or otherwise in a "less desirable" way)
9) joining batches (Pulling some orders forward to exploit a setup already in place for an earlier order. This trick will free up extra constraint processing time equal to the time for one setup for each batch pulled forward. The trick costs extra investment in inventory AND delays in orders pushed back to pull the other(s) forward.
A good DBR scheduler will identify opportunities to play this game when it will help.) Mike's problem is an opportunity to get creative. In some environments with demand spikes, it may make sense to inventory (above and beyond the inventory "batch joining" creates) outputs from a constraint to level the load on your constraint(s). If your unit expenses for material and storage are small relative to unit throughput, extra WIP of constraint output might be a quite attractive load-leveling tactic.

---

From: "Richard E. Zultner"
Subject: [cmsig] RE: Common Cause and Mule Kicks Too?
Date: Sun, 8 Apr 2001 16:52:49 -0400

Dirk J. Roorda wrote:

Adam wrote: "So let's start at the purpose of a buffer -- to make sure some point in the system doesn't deplete to zero due to common cause variation." This is not really germane to the discussion, but let me ask the question: Are buffers only for COMMON CAUSE variation, or are they for the "out of control" "mule-kicks" to the system as well?

REZ> As buffers are typically created without regard to whether the system is in control (experiencing only common cause variation) or out of control (experiencing special cause variation as well), the buffers had better absorb ANY variation that the process experiences. If you size the buffers by a rule of thumb (e.g., one-third) then you must hope the buffer is sufficient to absorb all the variability you encounter: common or special in cause. [As I prefer action to prayer, I use the second approach:] If you size the buffer by a calculation (e.g., the sum-of-squares "definition of variance") then you should explicitly add time for extra-task risks (because they are NOT included in your calculations). Any risk that is a low-probability event for any one task (so the estimator does not take it into account) but is a high-probability event over the course of a set of tasks (or the project) is not included in a calculated buffer.
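The calculated-buffer approach described here can be sketched as follows. The sum-of-squares combination of per-task safety margins and the illness allowance are illustrative assumptions in the spirit of the post, not Zultner's exact procedure.

```python
import math

# Sketch of sizing a project buffer by calculation, per the approach
# described above: combine per-task variability by sum of squares,
# then explicitly add time for extra-task risks (e.g., illness) that
# no single task estimate includes. All figures are illustrative.

def calculated_buffer(task_safety_margins, extra_task_risk_days=0.0):
    """Sum-of-squares combination of per-task safety margins (days),
    plus an explicit allowance for extra-task risks (days)."""
    variability = math.sqrt(sum(m * m for m in task_safety_margins))
    return variability + extra_task_risk_days

# Five tasks on a chain, each with a safety margin (days) between its
# aggressive and safe estimates, plus 4 days expected lost to illness:
margins = [3.0, 4.0, 2.0, 6.0, 5.0]
print(round(calculated_buffer(margins, extra_task_risk_days=4.0), 1))  # 13.5
```

The sum-of-squares term alone gives about 9.5 days here, versus 20 days if the margins were simply added, which is why the extra-task allowance has to be added explicitly rather than assumed to hide somewhere in the statistical aggregation.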
Whether this extra-task risk is "common" or "special" in causation is beside the point -- "does the buffer have sufficient protection against it?" is the point. For example, illness: on a one-year project, a project team will have multiple person-days lost to illness -- but no individual task-estimator factors in the likelihood of getting sick themselves on a task that is only a few weeks in length. Most organizations have good records on illness, so it is easy to estimate the necessary addition to the buffer for illness. The primary reason I do add a "buffer segment" for illness, however, is not to know how many days to expect people to be absent, but to have a place to record actuals against. Although I may not be able to do much to reduce illness, for many other extra-task risks there are steps I can take to reduce the risk, or its impact, on the project. Having a "buffer bucket" for recording each extra-task risk supports Risk Management and risk reduction in a powerful but simple way.

---

From: "Ward, Scott"
Subject: [cmsig] RE: DBR Implementation
Date: Thu, 16 Aug 2001 07:58:54 -0500

We have instituted DBR without additional software. Our CCR (capacity constraint resource) communicates the schedule date that they're working on. All other workcenters can work on jobs (work orders) scheduled in their areas up to the same date. Our MRP/ERP system has calculated the schedule dates in each area already. That's the Drum. We've set queue times before the CCR, the Buffer, and an additional buffer at the end as a Shipping buffer. The Rope is set by the communicated date and by how we've changed parameters in the system for scheduling parallel operations, i.e. operations that we allow to start before the previous one is entirely completed. This might be because of independence or effective reduction of lot size close to n=1.
That is, the system is not scheduling operation 2 to start after operation 1 has completed 30 pcs., but is scheduling it to start somewhere in the middle (after 15 pcs., for example). By working with some of the system parameters we can change the length of the rope. All of this is using our standard MRP/ERP system. We have used the information in a way that gives us DBR. In the last 2 years, we've changed the CCR designation once, based on elevating the old CCR's capacity. The new CCR is where we want it because its capability is more difficult to duplicate (elevate) and it has lengthy cycle time parameters.

+( Drum Buffer Rope 2 Visual scheduling/KANBAN

From: Greg Lattner
Subject: [cmsig] Re: FW: Re: Visual Scheduling/kanban
Date: Tue, 13 Jul 1999 09:28:59 -0600

Since we've been discussing this post I've had a few off-list discussions with others who like DFT, and I am very thankful they educated me further on DFT. It has confirmed what I'd heard from others. DFT is similar to JIT. DBR is better. Here is what I've told some of those who have contacted me off-list. DBR is indeed better because:

1. It is tied in with TIOE measurements. They are inseparable. JIT tries to ignore Cost Accounting and runs into dilemmas that lock your current reality in place.
2. You can manage Murphy's Law better with DBR because when a non-bottleneck experiences Murphy it can catch up. But with JIT the whole line may shut down, including the all-important bottleneck. That means less cash flow! MRP largely ignores Murphy. JIT tries to eliminate it (impossible). DBR manages Murphy in predictable, proactive ways, while minimizing it as much as possible. But DBR never tries to imagine it should or can totally eradicate Murphy (Variation).
3. Inventory can be lower with DBR because you only have stock buffers at 2 or 3 places, rather than 5-10 like with JIT/Kanban and TAKT. That means DBR will have better cash flow!
4. DBR identifies the bottleneck and manages around it.
5.
Are there more that others on the list can add?

--
From: Michael Cuellar [SMTP:mcuellar@earthlink.net]
Sent: Monday, July 12, 1999 7:27 AM

Greg, I don't understand your point that DBR can take you farther. I have very little knowledge of DBR, but the comparisons that I've seen between TOC implementations and DFT implementations show that DFT provides greater improvement, as I mentioned in my post. I agree that the costing function, and more generally the metrics used, often prevent/militate against the improvements promised by DBR and Flow. That's why it's important to change the metrics so that we don't get hung up on counterproductive activities.

> --
> From: Michael Cuellar [SMTP:mcuellar@earthlink.net]
> Sent: Thursday, July 08, 1999 7:44 AM
>
> To address a couple of questions about Synchronous Manufacturing with a
> real long answer: I have never seen a really good definition of Synchronous
> Manufacturing, so I offer this one:
>
> . . . a holistic approach to the manufacturing enterprise that seeks to
> provide competitive advantage by reducing the time and cost of
> manufacture while increasing the flexibility of the enterprise to
> adapt to marketplace demands by use of synchronized focused
> production lines, J-I-T material provision, demand based work
> signalling/material replenishment, and a flexible empowered
> workforce operating under a continuous improvement philosophy
>
> Some points to emphasize:
>
> - Holistic - It affects all of the enterprise, not just the plant floor but
> also sales, order fulfillment, engineering, purchasing.
> - Objectives are competitive advantage improvement by cost and cycle time
> reduction and production flexibility improvement
> - The plant floor is redesigned to create focused lines capable of
> producing any product in a family of products, as opposed to a functional
> layout
> - JIT material provision.
> While a SM plant may initially use traditional
> methods of material procurement, eventually they should migrate to
> frequent shipments of raw materials based on committed purchase volumes
> - Demand based work signalling. Don't make anything until it's needed.
> - Flexible, empowered work force. Your floor people have brains - use
> them.
> - Continuous Improvement - It's never "good enough" - find ways to reduce
> TPc/t, eliminate more non-value-added time (movement, queue, setup), etc.
> - It is revolutionary vs. evolutionary - simple improvement will not get you
> to world class - adopt a new paradigm to get competitive advantage.
>
> From this definition you can see the answers to a couple of questions:
>
> 1. In synchronous manufacturing you don't focus on the constraint in your
> existing plant, because you're going to redesign your lines.
> Now, Costanza never considers constraint management in the planning of those
> lines in his book. I think all on this list would agree that,
> in planning the lines, simulation of expected production demand through the
> line to reveal the bottlenecks, so that they can be reduced and
> eliminated, would be required. It would seem that a set of iterations of
> potential lines through a simulation would be necessary to arrive
> at the plant layout with the highest velocity. It seems that there may be
> a difference in attitude toward the line - highest velocity vs. managing
> a bottleneck. Do you agree?
>
> 2. I don't think Costanza would use the term balanced plant (is that a
> pejorative in the TOC world?). I think he would say a focused or
> linear plant, i.e. a plant that is focused on producing low-cost,
> high-quality products to customer demand as quickly as possible.
>
> At Flow Forum, held in Atlanta in May, more than 10 companies reported the
> results of their implementation of Flow.
They claimed:
> Cycle time reductions between 67 and 90%
> Floorspace reductions of 10, 25 and 59%
> Customer service improvements of over 20%
> WIP turns improving by factors of 3-4x and
> reaching numbers such as 48.
>
> I haven't verified any of the numbers, and it seems to strain credibility,
> but there it is.
>
> I fully recognize that this is the Constraints Management SIG list and
> therefore have no intention of engaging in a flame war of TOC vs. DFT nor
> of evangelizing Synchronous Manufacturing. So unless I am asked specific
> questions about SM I will go back to lurking and soak up an understanding
> of TOC from the guys who are actually doing it!
>
> Greg Lattner wrote:
>
> > I have also heard that Demand Flow Technology is the old stuff. John
> > Costanza has his office right here in Englewood, CO, near where I live, and
> > the Denver Post or Rocky Mountain News recently did an article on his
> > business. As I recall from the article, he was working for Hewlett Packard
> > a number of years ago and studied the latest and greatest ideas back then.
> > Then he started his own consulting business. It sounds like he used a lot
> > of the ideas of JIT, which were leading edge at that time.
> >
> > Although I've never read his books, the core problem that I've heard
> > repeatedly about his methodologies is they don't put "Step 1: Identify
> > the Constraint" at the unavoidable forefront of everything. This core
> > problem is true of JIT, MRP, etc., etc. I've heard other methods that will
> > attach Step 1 to their methods if you please, but it is not the mandatory,
> > driving, starting point. For this reason they can't possibly do
> > TOC/CM/Synchronous Mgmt correctly. Everything needs to center around the
> > clear verbalization and identity of the Constraint or bottleneck.
> > > > I would urge you instead to just buy the books "Synchronous Management > > Volume I and II" by Srikanth and Umble and go through 1 chapter per week as > > a team over a brown bag luncheon. Discuss what the chapter means to your > > plant and how to set up your Buffer Content Profile on a simple white > > board. Don't go out and buy a fancy software program. Start with a > > visible white board for $50 that can be moved on wheels to where the > > Constraint is, until any chaos stabilizes into order. Put your Buffer > > Content Profile on the white board. And as the book Synchronous Management > > will also tell you, put the 5 Step Process of Focus at the forefront, not > > as a lame attachment after the fact. > > > > -- > > From: Bill Dettmer[SMTP:gsi@goalsys.com] > > Sent: Wednesday, July 07, 1999 7:46 AM > > -Original Message- > > From: Michael Cuellar > > Date: Tuesday, July 06, 1999 4:26 PM > > > > >You should read: The Quantum Leap in Speed to Market by John Costanza. > > In it Costanza outlines what he calls demand flow technology (we call it > > Synchronous Manufacturing since he copyrighted DFT!). As I'm sure you know, > > companies using DFT have made dramatic improvement in inventory turns, cycle > > time, customer performance etc. He is not very specific about the > > implementation details because he (as well as we) is interested in getting > > consulting revenue to help you do that. > > > > [In conversation with a production supervisor of a plant that was enamored > > with DFT, I was told that one of its essential principles involved > > balancing plant capacity. I can't confirm this personally, since I have > > never read anything about DFT. But if true, it is NOT the same > > as TOC, DBR, etc. > > Anybody know for sure whether DFT prescribes balanced plants or not?] --- From: "Jim Bowles" Subject: [cmsig] Re: Shop Order Priorities Date: Fri, 27 Apr 2001 10:30:47 +0100 Aaron, you said: This may stem some conversation... 
We are a make to order/make to stock company. We use DBR to run our production system and a TOC replenishment system to run our distribution system. We commonly struggle with what priorities to place on orders to be run on our constraint operation. There are three types of orders that we deal with and they are as follows: 1. Shop order issued for a sold customer order 2. Shop order issued to cover a sold customer order and to replenish a stocking buffer 3. Shop order issued to replenish a stocking buffer Of course, a shop order issued for a sold customer order is cash once it is done. An order issued to replenish our stocking buffer is just that, buffer (some may see it as inventory and not buffer, therefore a waste..."lean thinking"). However, if we keep pushing sold customer orders ahead of replenishment orders, we will eventually be stocked out of all of our stocking buffers and every little demand fluctuation will ripple through the production flow. I just wanted to bounce this issue off the list to possibly invalidate some of my assumptions. Aaron M. Keopple I have been in this "movie". So I will offer some guidance on how my clients came to terms with their problems. Some of the difficulties come from a "mind set" problem to do with the notion of "making for stock". Unfortunately this can mean different things to different people. So in the first place clarify what it means and ensure that everyone is clear on the meaning. There are four different uses that I know of: 1. "Let's make some extra and put them into stock." (Be efficient, save set ups etc.) 2. "Let's make something for stock as we have nothing else to do." (Look busy; avoid having idle people, we're paying them anyway.) 3. "Let's replace those things that have gone "Out of Stock"" 4. "Let's build up the stocks so that we can meet the anticipated demand from our next marketing campaign." No. 1 wastes capacity and builds up unwanted stock. No. 2 wastes capacity and builds up unwanted stock. 
No. 3: if done arbitrarily, you finish up constantly chasing your tail; if done to the rules of replenishment and buffer management, you end up with higher service levels. No. 4: sometimes it is essential to do this, but it needs buffer management for good control. Where next? What is your "Drum"? If you supplied solely from stock then your "Drum" would be the "Replenishment" of holes in the buffer (stock). This is the closest you can get to customer demand. But you also make to order. So which has priority? You know the due date on those to order but this is where it also becomes important to think of "holes in the buffer stock" as an order. If a hole becomes a "stockout" then you know that you are likely to lose a sale (assuming that those customers will not wait and will go elsewhere for their goods). So in a sense you can also determine a due date for these as well. But if making for stock refers to 1 or 2 above, my advice would be to think again about your priorities. In short don't do it, it's wasteful. Replenishment and DBR are good INJECTIONS to this problem but you will need several more INJECTIONS (up to 15 maybe) to resolve the smaller clouds that come from past practices, beliefs and ways of dealing with the underlying conflict. You may have started to deal with the Cost World vs Throughput World conflict but it will take time to deal with all the smaller and often hidden compromises that this led to. There were two big ones that the production planner asked for my help with. A. "Schedule to replenish Stock" (Keep the management happy) Versus "Schedule to make bonus" (Keep the shopfloor employees off my back). B. "Give priority to replenish Stockouts" (Always keep chasing things that are late.) Versus "Give priority to our agreed plans" (Produce the things that we know will prevent us from being late.) --- Subject: [cmsig] Re: DBR software From: "Juan C. 
Callieri" Date: Fri, 27 Apr 2001 11:24:04 -0400 To all those that are interested, herewith I am copying the response to the issue I got from the Brazilian representative of the Goldratt Institute. I saw one of the software packages in tests, and it did make a great impression. ...... ""Out of the TOC methodologies developed by the Avraham Y. Goldratt Institute - AGI - three are supported by professional software. They are DBR, Critical Chain and Throughput Accounting for the managerial Decision Process. Note: AGI has recently decided that DBR should extrapolate beyond the plant and provide synchronized logistics of materials/inventory flows at least with the neighboring links - suppliers and clients/distributors - using the logics of "supply to order" and "supply to stock" (Replenishment/Supermarket). AGI is, then, renaming the DBR application to Supply Chain. The software has to/will have to perform algorithms for Replenishment, which means scheduling to replenish per actual consumption (not per the sales forecast). Buffer Mgmt has to/will have to be provided for this case, too. The software must also allow measurements of the materials/inventory flows performance using Throughput dollar-days to measure responsiveness and Inventory dollar-days to measure economizing. Currently, in Brazil, the following software packages performing TOC logic are available: 1) DBR: ThruPut (by Mapics); Drummer (by Linter - Brazilian Co.); ST-Point (STG, represented here by Produttare). Note that the Israeli SWS (by Telly), which was represented here by Datasul, is no longer available because the software did not perform as expected in 7 of 8 implementations. For this reason, Datasul has cancelled the contract with Telly and has signed with Linter. So, Datasul will not only replace the existing SWS with Drummer but will use Drummer in future implementations. 
2) Throughput Accounting for the managerial Decision Process: Bússola (by Linter) 3) Critical Chain: ProChain (by ProChain Solutions, represented here by 2 AGI Licensees) Juan, we are introducing, here, an "AGI-B certification" aiming to label those software packages, present in the Brazilian market, that actually and fully support the TOC logic. The intent is to protect the clients, the producers of real TOC-based software and the AGI product image against confusing messages. As an example, I attach the requirements that we are using for Supply Chain (DBR + Replenishment) software (unfortunately it's in Portuguese; I may translate it to English, if you wish). Best regards, Celso ................. "" BILL DETTMER : If all you're interested in is "something to 'play with' to demonstrate TOC's viability," and not a full-capability scheduling system, look into the Management Interactive Case Study Simulator (MICSS) from MBE Simulations, Ltd. (http://www.mbe-simulations.com/). This software was specifically designed to help people see the differences between traditional methods of accounting / production management and the TOC approaches. The software itself is "neutral" to the methodology. It reacts logically to whatever policies the user puts into it. MICSS is considerably more complex and realistic than a simple table-top exercise, such as the dice game or the P-Q problem, but it's not as complex as "reality." And it incorporates some interdependence between purchasing, operations, sales/marketing, and finance, which no other simulation programs I've seen do (or do as well). You can test and evaluate manufacturing strategies to a limited extent, but you can't schedule the real world with it. If this floats your boat, go visit the web site... 
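Celso's two flow measurements above (Throughput dollar-days for responsiveness, Inventory dollar-days for economizing) are simple weighted sums. A minimal sketch in Python; the order and stock figures are invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Order:
    value: float     # throughput value of the order, in dollars
    days_late: int   # days past the promised due date (0 if on time)

@dataclass
class StockItem:
    value: float     # dollars tied up in the item
    days_held: int   # days the item has sat in the system

def throughput_dollar_days(orders):
    """Responsiveness penalty: each late order's value times its days late."""
    return sum(o.value * o.days_late for o in orders if o.days_late > 0)

def inventory_dollar_days(stock):
    """Economy penalty: each inventory item's value times its days held."""
    return sum(s.value * s.days_held for s in stock)

orders = [Order(1000, 0), Order(500, 3), Order(2000, 1)]
stock = [StockItem(300, 10), StockItem(700, 2)]
print(throughput_dollar_days(orders))  # 3500
print(inventory_dollar_days(stock))    # 4400
```

Both measures drive toward zero: zero Throughput dollar-days means every order ships on time, and lower Inventory dollar-days means less money sitting idle for less time.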
+( Drum Buffer Rope 3 - DBR, variation, finite capacity scheduling From: Edmundwschuster@aol.com Date: Tue, 4 Apr 2006 11:26:06 EDT Subject: Re: [cmsig] Putting the Optimisers on the right track I do not think there is anything wrong with optimizing and taking a methodical approach, as long as common sense prevails. I can understand the frustration with certain presentations of DBR. -----Original Message----- From: On Behalf Of Jim Bowles Sent: Tuesday, March 28, 2006 8:47 AM Subject: [cmsig] Putting the Optimisers on the right track This is a post I picked up via Nicos Leon. You may recognise the tone. My question is how do we inform their 1066 members that the argument is fundamentally flawed. Can you spot the Professor's erroneous thinking: Professor Shahruck Irani, moderator of the Job Shop Lean yahoo group [JSLean], posted the following in JSLean. TOC in a jobshop: Fundamentally wrong from a scheduling perspective? For quite some time, I have admired the simplicity and effectiveness of TOC, and how it focuses attention on what matters. But, as soon as due dates feature as a top priority for a jobshop, all three of the objectives set by Goldratt - maximize throughput, minimize WIP and minimize cost - become subordinated to OTHER measures that relate to lateness/earliness and on time delivery! If one studies the literature on scheduling, even for the case of a single machine, the manner in which we load and sequence jobs to optimize the performance measures set forth using TOC is completely different from the manner in which we load and sequence jobs based on their due date and routing complexity! So, as much as TOC would claim to be different from what Lean would profess (in high-mix assembly manufacturing), it appears to be simply a variation on the same theme!! 
That is, it works for production systems where due dates are NOT significant at all; instead it obeys a simple rule like Little's Law (reduce Time in System and you reduce WIP, reduce WIP and you reduce the costs incurred with making and managing that WIP). Yes, TOC and Lean do help to firm up the foundations for improvements in a jobshop, but they are based on what appear to be totally wrong principles, at least from a jobshop scheduling perspective. So, is some improvement better than no improvement if it really does not ensure you long-term improvement at all? I mean, you will lay in place a system to suit TOC but it would not mesh with what you really need, a system that works for due date compliance PLUS the TOC-biased performance measures. As I noticed that he did not know the subject well, I replied as follows The measurements in TOC are set only after the goal is established. Due Date performance is a prime concern in TOC, and the tools for achieving that are DBR and buffer management. And here is the answer I received [SAI] I disagree. DBR and Buffer Management have no specific methods to do Work Order Release to meet due dates. TOC has good management-level strategies to offer, thereafter it ought to be replaced by Finite Capacity Scheduling. TOC and FCS operate at two different levels, neither can replace the other. DBR and Buffer Management both lack specific methodology to achieve due dates. --- From: "Brian Potter" Subject: [cmsig] Putting the Optimisers on the right track Date: Tue, 4 Apr 2006 13:16:51 -0400 Ed, You might consider rereading Jim's posting a tad more receptively. Before you go there, please, consider the rather long post, below. Let's consider what "as long as common sense prevails" might mean. For purposes of discussion, let's consider a smallish job shop. The outfit buys some parts, makes other parts, keeps some raw material in inventory, and sometimes they do a little assembly work. 
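As an aside, Little's Law, which the professor invokes above, is plain arithmetic: average WIP equals throughput rate times average time in system. A one-line sanity check (the numbers are invented):

```python
def average_wip(throughput_per_day, days_in_system):
    """Little's Law: average WIP = throughput rate * average time in system."""
    return throughput_per_day * days_in_system

# A shop completing 12 jobs/day, each spending 5 days in the system,
# carries 60 jobs of WIP on average; halve the time and you halve the WIP.
print(average_wip(12, 5))    # 60
print(average_wip(12, 2.5))  # 30.0
```

Note that the law says nothing about WHICH jobs are in the system, which is exactly why it cannot, by itself, settle a due-date argument either way.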
Even orders without assembly operations have an integration point where the completed goods get packaged, shipped, and invoiced. They probably use bar code or RFID to track materials and verify shipments. OK? Now, each item purchased has an associated lead time. That number is a guess. The real lead times have a mean, some distribution, and some standard deviation. Knowing those three statistical parameters, we can probably estimate L.min and L.max such that ... 1) The probability that ordered goods will arrive before L.min < alpha/2 2) The probability that ordered goods will arrive after L.max < alpha/2 3) The probability that ordered goods will arrive after L.min AND before L.max = 1 - alpha Similarly, the ERP system has time estimates for all setup times and all processing times. Each instance of those timed events is a single sample from a population with characteristics not unlike those of the lead time discussed above. The ERP probably has point estimators for all these times. Thus, because variation gets ignored, every estimate in the ERP is wrong. For highly automated processes (NC milling for example), assuming away the variation in production cycle time probably does not matter. For manual operations and for set-up times, ignoring variation may suffice to reduce any optimization model computation to hash. As if that were not bad enough, when real shop floor variations happen (and they will), three bad things ride along. 1) If something happens early, the downstream process will not be ready (because it is running [at best] to the ERP schedule). 2) If something happens too late, it will delay all successor processes. 3) If #2 (above) happens, the delay will propagate to all orders running after the delayed order on all processes downstream of the delayed process. In most shops it will not be long before one delay (perhaps, even a brief one) has turned the entire production schedule into a fairy tale (a Grimm one, most likely). To a point one can compensate. 
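Brian's L.min/L.max construction is a two-sided interval on the lead-time distribution. Assuming, purely for illustration, that lead times are roughly normal (real lead-time distributions are usually skewed right), the Python standard library can compute it:

```python
from statistics import NormalDist

def lead_time_interval(mean, stdev, alpha=0.05):
    """Return (L_min, L_max) such that, under a normal lead-time model,
    P(arrival before L_min) < alpha/2, P(arrival after L_max) < alpha/2,
    and therefore P(L_min <= arrival <= L_max) = 1 - alpha."""
    dist = NormalDist(mean, stdev)
    return dist.inv_cdf(alpha / 2), dist.inv_cdf(1 - alpha / 2)

# Mean lead time 10 days, standard deviation 2 days, alpha = 0.05:
lo, hi = lead_time_interval(10.0, 2.0)
print(round(lo, 2), round(hi, 2))  # 6.08 13.92
```

The width of that interval (here nearly 8 days on a 10-day mean) is Brian's point: a single "lead time = 10" cell in the ERP hides most of what actually happens.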
By knowingly setting all times a little bit too long, one may "buffer" every activity so that only big delays will lead to chaos as described in #3 (above). The price one pays for stability: operating below capacity (perhaps, a LOT below capacity; I've estimated that this approach causes no less than a 25% capacity loss on most long assembly lines [e.g., automobile assembly]). And this is only a risk reducer, not a cure. Well, what about computing with time interval estimates rather than point estimators? Computers can handle it, and we'd only need twice as many numbers in the ERP system and four to sixteen times as much CPU power (usually, not a problem these days). Then, the schedule would be stochastic with an order sequence and estimated start/finish times for each order. The further the schedule goes into the future, the less certain all the times become. As the shop works, the scheduler would get more and more refined estimates of finish times. However, based on personal experience with interval arithmetic, I'd bet that you'd discover that the scheduled timing estimates would be so wide that the "plan" would be all but meaningless. But, let's assume that the interval estimate trick works OK. Now, we've worked on the first schedule for a while and we must add some new orders to the mix. It's nice that the sales team has done such a good job and our customers appreciate our work so much. Add the new orders to the mix and rerun the scheduler. LOOK! One of the new orders must start before an order with materials already ordered and on the way. LOOK! Some of the jobs in process have been resequenced. After we finish the current operations, we'll need to park the WiP somewhere and start working on different orders. Where's the material for those orders? This morning, it was not even scheduled for release! Do we REALLY want to do that to the shop floor? The above discussion assumes that the ERP timing data are accurate. What are the odds on that? 
I'll take the "there exists one or more errors in the ERP timing data" side of that bet. Bad data in the ERP system means that we can add just plain incorrect schedules to the all but certain chaos described above. DBR gains via simplicity. 1) Schedule only known (empirically discovered) constraining resources, material release, and the shipping point. This significantly reduces the data in the ERP. Thus, the chance to get the data right (or at minimum, discover and fix any mistakes) becomes MUCH improved. 2) Monitor order flow (That's what the "B" in DBR is all about.) and expedite any order that may reach a constraint ("D" for Drum) or shipping point late. If the buffer report says nothing needs expediting too much of the time, shrink the lead times ("R" for rope). If expediting is getting out of hand, look for causes (right, start at the buffer report). Focus attention on upstream resources causing delays or quality concerns. Improving there will reduce expediting. If necessary, release materials sooner (longer Ropes implying longer lead times to the customer) pending successful improvements. 3) Keep the order sequence stable once scheduled. Thus, new orders will create chaos neither in logistics nor on the shop floor. Since the buffers are global (rather than local to each operation, as in the optimize-everything case above), favorable variations at places that matter (constraints) will improve global performance and we'll capture the good fortune. Delays will also usually matter only at the constraining resources, but they are few and buffered. Thus, we can usually manage a few resources well enough to anticipate or avoid delays. We can do this without undercommitting the entire system (as when fudging all times a little longer, above). 4) Allow expediting to either recover planned timing for any delayed order or to escort a rush order through the shop with minimum chaos on other operations. DBR is all about practical (and not so common) common sense. 
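Brian's point 2 (watch the buffer, expedite only what is in danger) is commonly operationalized as buffer penetration with three zones. A minimal sketch; the one-third/two-thirds thresholds are the conventional ones from buffer-management practice, not something prescribed in the post:

```python
def buffer_status(elapsed_days, buffer_days):
    """Classify an order by how deeply it has penetrated its time buffer.
    green: leave it alone; yellow: plan a recovery; red: expedite now."""
    penetration = elapsed_days / buffer_days
    if penetration < 1 / 3:
        return "green"
    if penetration < 2 / 3:
        return "yellow"
    return "red"

def orders_to_expedite(orders):
    """orders: iterable of (order_id, elapsed_days, buffer_days) tuples."""
    return [oid for oid, elapsed, buf in orders
            if buffer_status(elapsed, buf) == "red"]

orders = [("A-101", 2, 9), ("A-102", 5, 9), ("A-103", 8, 9)]
print(orders_to_expedite(orders))  # ['A-103']
```

This is also where Brian's feedback loops attach: too few red orders for too long means the ropes (lead times) can shrink; too many means look upstream for the cause.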
Sensitivity to variation is a huge production planning issue. Most optimization models sweep variation (and sensitivity thereto) under an ergonomic/safety floor mat when no one is watching. Do you know one that does not either assume variation away or fudge event durations to reduce exposure? DBR addresses variation squarely, honestly, and openly. It can even give one the chance to benefit from lucky favorable variations. From today's on the shelf options, what could be better? +( earned value management - EVM From: "Jim Bowles" Subject: [cmsig] Let's talk EVMS - second attempt to send files Part 1 Date: Thu, 21 Dec 2000 22:35:14 -0000 EVMS was an Injection to resolve a problem with payment for progress on projects: Previously the practice had been to pay by date, i.e. divide the project value by the number of days the project was expected to take and make stage payments as time progressed. This was of course unsatisfactory for the client when the project failed to deliver. Usually the DoD or MoD. Item 1 In 1967 the Department of Defense (DoD) established the Cost/Schedule Control Systems Criteria (C/SCSC) to standardize contractor requirements for reporting cost and schedule performance on major contracts. A basic tenet of C/SCSC is the concept of Earned Value Management. Earned Value Management is a methodology for determining cost and schedule performance of a project by comparing "planned" work with "accomplished" work in terms of the dollar value assigned to the work. [1] Work is planned, budgeted and scheduled in time-phased increments utilizing a Work Breakdown Structure (WBS) to define tasks and to assign costs to those tasks. As work is accomplished, value is "earned" on the same basis it was planned. 
Comparison of this earned value with the planned value for a specific time period provides an indication of task progress: if more value is planned than is earned for a specified period, then the project is in danger of not meeting its required schedule, unless action is taken to recapture the unaccomplished work. Similarly, comparison of the earned value for a task or group of tasks with the "actual" costs required to accomplish the same task(s) provides an indication of task cost performance: if actual costs are greater than planned costs for the accomplished task(s), then the project is experiencing a cost overrun situation. [1] This incremental method for comparing planned, earned and actual value for a task provides an early indication of project cost and schedule performance and can provide early insight into problem areas which might not otherwise be detected until later in the program. Therefore, by providing early problem identification, the use of Earned Value Management can serve as a key element of a project's risk management program. This paper will begin with an introduction to Earned Value Management, will continue with an overview of Earned Value Management's role in risk management and culminate with a discussion on the methodology for using Earned Value Management in a project's risk management program. Also included is a discussion on how software inspections can be utilized in conjunction with Earned Value Management cost performance measurement techniques. Item 2 ASC - Role of the Government Cost Account Manager Overview C/SCSC, may it rest in peace, was always intended to be a program management tool, but the program managers were all too happy in the past to delegate this onerous task. Earned Value Management Systems (EVMS) is the remedy to this situation. Ownership of the program baseline by our program managers is at the heart of the EVMS evolution. 
Gary Christle, the Deputy Director for Performance Management (OUSD(A) API/PM), described this in his vision of a model program back in 1994. He envisioned that the focus of EVMS should be on government and contractor technical managers, acting together in concert. This represents a significant change from how we have done business in the past. But how can we accomplish this ownership? Contractors, of course, have always had cost account managers (CAM). These individuals were charged with managing a work group, and bringing in their part of the contract within cost and schedule. CAMs were assigned at the cost account level (according to basic Cost/Schedule Control Systems Criteria (C/SCSC)) and then given the accountability and authority to make it happen (according to basic management theory). CAMs embodied the term "empowerment" before the term even existed. On the government side, however, program offices have traditionally been segregated into functional disciplines (engineering, manufacturing, logistics, finance, etc.). Anything to do with cost performance reports, C/SCSC, etc., was immediately assigned to the financial group. (Didn't it have cost in the title?) Cost Performance Report (CPR) analysis was usually seen as reactionary, looking back on where we had been, instead of being used as a tool to manage and control the program. Unfortunately, the same also held true for detailed schedule planning and analysis. Certainly the technical managers were little involved with C/SCSC. Occasionally the C/S analysts would bring in a few technical managers to help with audits or with analysis if significant problems arose, but they were never the key players. Technical managers solved the technical problems, but had no real responsibility for cost and schedule. As a result, the contractor CAMs and government technical managers, although usually viewed as equals, had very different roles and responsibilities. 
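The planned/earned/actual comparisons described in Item 1 reduce to a handful of standard earned-value figures; a minimal sketch (the dollar amounts are invented):

```python
def earned_value_metrics(pv, ev, ac):
    """pv: planned value (budgeted cost of work scheduled)
    ev: earned value (budgeted cost of work performed)
    ac: actual cost of work performed"""
    return {
        "schedule_variance": ev - pv,  # negative means behind schedule
        "cost_variance": ev - ac,      # negative means over budget
        "spi": ev / pv,                # schedule performance index; <1 is behind
        "cpi": ev / ac,                # cost performance index; <1 is over budget
    }

# Work planned at $10,000 to date, only $8,000 of it earned, at an
# actual cost of $9,000: the task is behind schedule AND over budget.
m = earned_value_metrics(pv=10_000, ev=8_000, ac=9_000)
print(m["schedule_variance"], m["cost_variance"])  # -2000 -1000
print(round(m["spi"], 2), round(m["cpi"], 2))      # 0.8 0.89
```

It is this ability to flag a slip in dollar terms early, rather than at the final delivery date, that makes earned value the risk-management hook the paper describes.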
+( efficiency or slack From: "Opps, Harvey" Subject: [cmsig] protective capacity and subordination Date: Tue, 12 Jun 2001 09:18:19 -0500 The following is an excerpt of a book review from the FatBrain website. Tom DeMarco has a background in structured systems analysis and is one of the founding gurus. Interesting how, from his background in project management, he has discovered the need for "protective capacity" - what he calls slack. He also describes what surely is subordination, and implies that there is a difference between efficiency and effectiveness. This looks like an interesting read. Perhaps another way to communicate TOC principles and gain some allies in other areas of your company. The site's URL is at the bottom of this message: ---------------------------------------------------------------------------- Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency By Tom DeMarco Online Price: $18.40 224 Pages Published by Broadway Books Date Published: 04/2001 Product#: 076790768x Summary To most companies, efficiency means profits and growth. But what if your "efficient" company, the one with the reduced headcount and the "stretch" goals, is actually slowing down and losing money? What if your employees are burning out doing the work of two or more people, leaving them no time for planning, prioritizing, or even lunch? What if you're losing employees faster than you can hire them? What if your superefficient company is suddenly falling behind? Tom DeMarco, a leading management consultant to both Fortune 500 and up-and-coming companies, has discovered a counterintuitive principle that explains why efficiency improvement can sometimes make a company slow. If your real organizational goal is to become fast (responsive and agile), then he proposes that what you need is not more efficiency, but more slack. What is "slack"? Slack is the degree of freedom in a company that allows it to change. 
It could be something as simple as adding an assistant to a department, letting high-priced talent spend less time at the photocopier and more time making key decisions. Slack could also appear in the way a company treats employees: instead of loading them up with overwork, a company designed with slack allows its people room to breathe, increase effectiveness, and reinvent themselves. In thirty-three short chapters filled with creative learning tools and charts, you and your company can learn how to: Make sense of the Efficiency/Flexibility quandary Run directly toward risk instead of away from it Strengthen the creative role of middle management Make change and growth work together for even greater profits An innovative approach that works for new- and old-economy companies alike, this revolutionary handbook will debunk commonly held assumptions about real-world management, and give you and your company a brand-new model for achieving and maintaining true effectiveness, and a healthier bottom line. http://www1.fatbrain.com/asp/bookinfo/bookinfo.asp?theisbn=076790768x&vm=b +( Empowerment http://www.goldratt.com/empower.htm The Lieutenants cloud When delegating something prepare the delegation by trying to explain the assignment by asking WHY 3 times: WHY is the action needed WHY do you need to take action (what is the expected objective/result) WHY will the action satisfy the need to the extent that we will get the desired objective. --- http://www.goldratt.com/empower.htm Sun, 23 Dec 2001 09:59:35 Avraham Y. Goldratt Institute - [Empowerment] Empowerment Misalignments between responsibility and authority. by Dr. Eliyahu Goldratt Suppose that you have some responsibilities - a safe assumption no matter who you are or what you do. Now imagine that when you try to do your job (deliver on your responsibilities) you realize that certain actions, which are absolutely necessary, are not under your authority. 
You are not allowed to do them without asking for somebody else's approval, which s/he may or may not give you. Do you think it's fair to ask you to be responsible for things that are not under your authority? Can you be empowered to take on more responsibility, if it is not perfectly matched with the required authority? Now you see the base for my claim that misalignments between responsibility and authority are the core problem blocking effective empowerment. But my claim can be substantiated only if such misalignments are prevalent; if for almost any person in an organization there is at least one or more misalignments between his responsibility and his authority. Because, if misalignments are sporadic (as disturbing and unpleasant as they may be), it's hard to accept that they can be one of the main reasons blocking empowerment, that they can be considered a core problem. So are misalignments between responsibility and authority prevalent or rare? Why don't we go and ask? Well it's not so simple. It's not so simple because it turns out that the answer depends on what we ask. Ask any person if s/he personally suffers from such misalignments and almost always you will get a "yes," accompanied by more than one example. To verify that this is the case, notice that, in a way, you were presented with this question when you read the first paragraph. Did examples of misalignments that you suffer from pop into your mind? Since almost any person claims that s/he suffers from misalignments, our conclusion should be that misalignments are very prevalent. But if you ask the same person whether or not people who report to him/her suffer from misalignments between authority and responsibility, their sincere answer is: "Rarely." So what is the answer? Maybe rather than interviewing people we should look at cases where there is interaction between an employee and his/her boss on a specific, detailed issue. 
A generic family of such interactions is called "fires": a person comes to his boss and demands an immediate action or decision. Not a rare event. As a matter of fact, most managers claim it is prevalent to the extent that more than half of their time is devoted to "fighting fires." Now let's ask ourselves two very different questions. First, why is the person demanding an immediate action or decision? The only plausible answer is that the "fire" is under this person's responsibility, and therefore the person seeks a resolution - or at least a cover for his rear parts. The second question is, why did the person come to his boss? Do you really think that it's because the person believes that his boss is a genius? Probably the only prudent answer is that the person came to his boss because the action needed is not under this person's accepted authority. It is now hard to escape the conclusion: probably every time a person comes to his boss with a fire it is a clear indication that, for this person, on the subject involving the fire, there is a misalignment between the person's responsibility and authority. Maybe not the formal authority given to the person, but the authority that counts, the authority that the person has assumed. And since fires are so prevalent, we must conclude that misalignments between authority and responsibility are much more common than we suspect. This observation opens a myriad of interesting questions. Like how come there are so many misalignments? Is it because of negligence or because of a more fundamental reason, reluctance to share power for example? How come, even though misalignments are so prevalent, most managers are under the impression that misalignments underneath them are rare? Or another important question: are there other causes preventing or blocking empowerment to the extent that misalignments do - are there additional core problems? 
But I think that, being practical, the question we should address first is how to pin down the misalignments. We concluded that: "probably every time a person comes to his boss with a fire it is a clear indication that, for this person, on the subject involving the fire, there is a misalignment between the person's responsibility and authority." But how do we find out exactly what is the misalignment; what is the specific responsibility that is not matched with which specific authority? Pinpointing the misalignment. Through an example let me demonstrate a simple, yet generic, way to do it. About four years ago, at the time that this know-how was still under rapid development, I explained all the above to a friend of mine, who is in charge of a small plant in Israel. He agreed with the logic of each stage (of course, not without a fight - we are both Israeli). He agreed that misalignment between responsibility and authority will definitely block empowerment. He agreed that "fires" are an excellent indication of such misalignments. Without hesitation he admitted that fires are prevalent in his plant; "Hell, sometimes I think that we run this place by the seat of our pants." But then he insisted that his lieutenants do not suffer from any misalignments; "What you say is definitely true for the large organizations you are working with, Eli, but in my little place I've made sure that everybody has all the authority they need. Actually, if anything, they have too much authority and not enough responsibility." Going over the logical chain of cause and effect again did not help. He continued to maintain that in his plant there are no misalignments. I was about to lose my temper and leave, but he is a good old friend. So I tried another tack. "When was the last time any of your people came to you with a fire?" I asked. "About five minutes before you arrived," was his answer. "But it was not really a fire. Just a question." "Tell me about it," I insisted. 
After some prodding, I had the full story. Uri, the person in charge of shipping, had a small problem. That day a shipment was supposed to go to a specific client. Everything was ready and at the shipping dock, but the client forgot to specify which of his warehouses the goods were supposed to be shipped to - this client has one warehouse in Haifa and another in Ashdod. Yosi, the account manager for this client, had been unreachable for three days, so it's no wonder that Uri came to my friend and demanded to know what to do. "No big deal," my friend concluded. "I told Uri to wait another day, and tomorrow Yosi will be at the plant." Then he summarized, "I told you it has nothing to do with misalignments, it was just sloppiness on the part of the client." That was not my conclusion. I thought that this was a clear case of misalignment between Uri's responsibility and his authority. "What need of the system is going to be jeopardized by the fire; what's the damage Uri's concerned about?" I asked. After some more explanations the reluctant answer was: "Clients' orders being shipped on time." "That's Uri's responsibility?" "Yes. Provided, of course, that the goods are on the shipping dock." That was our case, so I shifted to looking for the missing authority. "What rule of the system prevents Uri from putting out the fire by himself?" I asked my patient friend. "No rule," was the laconic answer. Well, we wasted some time until we agreed that there are rules of the system that are not written anywhere, but they are very abiding rules. At last I squeezed out the answer: "Only the account manager is allowed to call the client." I know many companies with the same rule so I was not surprised. What did surprise me was that my friend still didn't see the misalignment. "What would happen if Uri disobeyed the rule and called the client? Would he have been able to ship the goods today to the right location without involving you?" "Yes, certainly."
"So the rule represents Uri's lack of authority," I concluded and wrote down the following diagram, summarizing his answers, just so I wouldn't have to go over it again and again: My friend finally agreed, but he was apparently still disturbed. I thought I knew why, so, making sure that no trace of sarcasm entered my voice, I asked, "What do you think about this rule that prevents Uri from doing his work without bugging you for help?" "It's not as stupid as it sounds," was my friend's defensive answer. I helped him verbalize it by asking, "What need of the system is protected by this rule?" "The need to provide the client with one contact point. You can't imagine the chaos we had before we instituted that rule. No, I'm not going to give up on it. It's a good rule." "I agree," I said, and added it to our diagram. Then I continued, "What is the lowest common objective both needs are trying to satisfy?" "What needs?" "The need to have clients' orders shipped on time and the need to provide the client with one contact point," I clarified. "We must have them both in order to have good customer service. Isn't it obvious?" "So it seems," I said. And completing the diagram I urged my friend to realize what we exposed. The misalignment was not a result of negligence, and certainly not because my friend is reluctant to share authority. The misalignment was a direct derivative of a conflict embedded in the fabric of the company. The objective was legitimate, the two needs were very real and so was the resulting conflict. Examine the diagram to see for yourself. "Is it always the case?" he asked. "Is every misalignment the result of a conflict embedded in the fabric of the organization?" At the time I didn't yet have enough experience, so my response was, "Let me write for you the five generic questions that I used. Whenever one of your lieutenants comes to you with a fire, of course first take care of the fire, but then take the time and answer these five questions.
Do it in the sequence indicated by the numbers, please. Do it for enough cases and you'll know if any misalignment is the result of a conflict between two legitimate needs of the system." And I scribbled the following diagram: He just glanced at it, nodded and returned to examining the conflict of Uri. Which, I must say, irritated me a little bit. Here I am handing him the generic process to reveal the conflict hiding behind each misalignment and he is stuck on one particular, not-so-important, fire. "What are you thinking about?" I asked him. Without looking up, he answered, "I'm thinking about how I handled this particular fire, and for that matter, any other fire, and..." "And what?" I said impatiently. "And in each case," he continued, talking very slowly. "In each case I dealt with the fire by one ad hoc compromise or another. I never tried to deal with the conflict itself." I kept quiet and after a short while he continued. "Is it possible that most of the fires I'm constantly dealing with stem from a handful of conflicts?" Eventually, it turned out that his speculation was not so far off the mark. Now we have enough experience to know that per person there are about three to seven misalignments between their responsibility and authority. These misalignments are a constant source of fires. My friend has five people reporting directly to him. As a result, much of my friend's time was spent fighting fires stemming from about two dozen misalignments. But as I said, at the time I didn't have the experience, so I replied with a noncommittal grunt. "It's even more disturbing," he continued. "These inherent conflicts can lead to worse things than just fires that I can handle. I wonder how much damage, not to mention tension and tugs-of-war, these conflicts are causing. But, that's life. I don't see how I can prevent it." "How come?" I was surprised. "Look," he told me in a tone that indicated our discussion was reaching its end.
"I'm not going to change the rule. Now, as well as in the future, only the account manager is going to talk with the clients. It's too messy otherwise." When he saw that I didn't agree he added, "Besides, even if I do change it, do you think that Uri will be overjoyed to take on more authority?" I knew what he was talking about - not everybody wants more authority. As an Israeli I spent more than a month each year in the reserves. As a private. And lying in the shadow of a tree watching the officers running around like chickens without heads I always wondered what motivated people to take on more authority. It didn't preclude me from doing just that in my civilian life, but at least I don't take it for granted that all people want to be empowered. "Why don't we call Uri in and ask him?" I suggested. "Ask him what?" "Let me handle it," I said. My friend has known me for a long time and for some reason or another trusts me. He picked up the phone and called Uri in. Removing a misalignment. While we were waiting for Uri, he asked, "Why do you think that Uri will find a solution? He is not the brightest guy and I doubt he sees the global picture." "Because solving this conflict is much more important to him than it is to you," I answered bluntly. "And as for seeing the global picture, you'll explain it to him, but only when I ask you to, okay?" I had to emphasize it because in such matters the sequence is of utmost importance - you expose an issue in one way and you may raise strong resistance, while unfolding the same issue in another way can gain you enthusiastic collaboration. When Uri sat down I took the diagram of his conflict and started. "The objective is to have good customer service," I declared. Uri didn't respond, if you dismiss the shrug that probably indicated, "Here is another smart aleck, with baloney slogans." I calmly continued, "In order to have good customer service, you, as a company, must make sure that customer orders are shipped on time."
Uri still didn't say a word, but I sensed him suddenly tense up. No wonder, I touched on his area of responsibility. Now it was time to really wake him up. "Uri," I said, "this morning you didn't know whether to ship to Haifa or Ashdod. Why did you bother the plant manager? Why didn't you simply pick up the phone and call the client yourself?" "Why? You really want to know why?" And turning to my friend he poured out his opinion of the "rule". But he didn't call it a rule, he used much more vivid language (it doesn't take much to provoke an Israeli). I achieved what I wanted. Now it wasn't a conflict of the system, something that could be dismissed as "that's life." Now Uri took it as his conflict. And judging by his emotions, quite a disturbing one. So calmly I turned to my friend and suggested that he explain why the rule makes sense, explain the "global picture." The explanation was not what I expected. Being an outsider, I viewed "providing a client with one contact point" as something that helps the client. But that was not my friend's explanation. "Look Uri," he said. "If we allow everybody to talk to the client, you know what will happen. Everyone tells the client different things, and then the confused client picks something that one person told him, combines it with something else that somebody else told him and we find ourselves in a real mess." Uri didn't seem impressed. He cut him off impatiently, "What are you talking about? Who is going to tell the client anything? I just need to ask the client something, not tell him anything. How can I confuse him by asking? If I was allowed to do it we could have shipped today, on time. Now we'll be late, and I'm telling you, knowing Yosi he won't be here tomorrow either." My friend and I exchanged glances. Uri had a point. Slowly my friend said, "Let me understand.
What you're suggesting is that whenever you are missing information that you can't get any other way, when you need it, then you should be able to call the client and ask? Just ask for the missing information, not to tell them anything?" "Yes, that's all. What's the big deal?" My friend is an experienced manager, so he replied, "Let me think about it." Uri left muttering, "What is there to think about?" Turning back to my friend, I asked, "Do you expect any problems with the account manager?" "With Yosi?" he laughed. "When we have a late shipment, who do you think gets the phone call from the furious client? I will not have any problem getting Yosi's agreement. But I have to get it from him before I make this exception to the rule, not after." "Good," I concluded. And getting to my feet to leave, I summarized, "You removed the misalignment, and from now on you will not have to deal with this type of fire. Why don't you do it systematically? Do it every time one of your lieutenants comes to you demanding a decision or action." "I wish I could," he sighed. I sat back down. "Why can't you?" "Because I don't have you here all the time." "What does it have to do with me?" I was genuinely surprised. "Come on," he said. "I won't say that you manipulated Uri to come up with a solution, but you definitely played him like a violin. I don't know how to do that." "Do you want to learn?" "Frankly, no." And standing up he added, "I know my limitations." "No, you don't." I stared at him until he sat down again. "One of your lieutenants comes to you with a fire," I started to explain the generic process. "You now know that s/he came because of a misalignment between his responsibility and authority. You already know how to construct the diagram that exposes the conflict that caused the misalignment." "Sort of," he said. Pointing to the diagram outlining the five questions, I asked, "What do you mean, sort of?" "I guess I need more practice.
One example is not usually sufficient." I accepted and continued, "Generically, the result of answering these questions will be..." And I wrote down the answers under the questions: "Once you construct this diagram," I continued to explain, "don't try to find a way to remove the conflict. You are too used to handling these conflicts by ad hoc compromises." Seeing that he didn't agree, I laid it on him. "You accept the compromises to the extent that just half an hour ago you claimed that none of your lieutenants suffer from any misalignment. They are the ones who don't accept them as satisfactory compromises. So, if you are not on an ego trip, call your lieutenant and start to expose the diagram." He nodded agreement, so I continued. "Always start from the objective," I pointed to the diagram. "And then move clockwise. When you reach the third box refer to the specific fire that triggered the whole thing, and ask the lieutenant why he didn't put out the fire himself. If you noticed, that is exactly how I provoked Uri." "You sneak," he smiled. "Of course the response will be to blame the rule." "Remember," I cautioned him, "the lieutenant always knows the rule that makes his life miserable, but that doesn't mean that he knows the reason for the rule. As a matter of fact he usually doesn't. Which, once you explain it, allows him to look at the reason with a fresh view." "Unbiased, you mean?" "Not unbiased, not at all," I beamed. "He hates the rule. But he is not conditioned, like you are, to accept the reason for it. The combination of a fresh view and strong emotion is powerful." "I see," he said thoughtfully. "Do you think that it's powerful enough to always come up with reasonable suggestions to remove the conflict?" "What do you have to lose? Try it." Since then, literally thousands of managers have tried it, and they claim it always works. Frankly, my expectations of four years ago have been surpassed. But we have not finished yet.
There is still a very important question that we haven't answered: Are there other causes preventing or blocking empowerment to the extent that misalignments do? Or in other words, are there additional core problems? If there are, and we neglect to address them, empowerment will improve but we will not get the breakthrough we hope for. The second core problem. To find out whether or not there are additional core problems we must approach the subject more systematically. It's not enough to propose a hypothesis (like misalignments are a core problem) and validate it. We have to dive deeper, to the place that will enable us to deduce such hypotheses in a systematic manner. If we assume (as we do) that empowerment is a desirable thing to have, a good starting point is the conflict that empowerment forces on every manager. The objective of a manager is, of course, to manage well. In order to do that, two different necessary conditions must be fulfilled. The first one was always there: in order to manage well a manager must make sure that, no matter what, the job gets done. The recognition of the desirability of empowerment brought forth the second necessary condition: In order to manage well a manager must empower his lieutenants. So what is the conflict? Well, in order to empower your people you must not interfere with their work. Alas, sometimes, in order to make sure that the job gets done, you don't have any choice but to interfere. The following diagram is a concise presentation of the conflict, with the arrows representing necessary conditions: As long as the conflict exists we don't have any choice but to dance between the drops. Nobody is really happy, neither you nor your lieutenants. Empowerment is in danger of becoming no more than lip service. So let's concentrate on the place that blocks empowerment, on the valid observation that: "In order to make sure that the job is done you must interfere with the lieutenant's work."
What is an underlying assumption of this logical connection? Or in other words, why must you interfere? Because the assumption (which too often is very valid) is that they cannot do the job by themselves. Therefore, we must conclude that the only way to reach effective empowerment is to make sure that they are able to do the work by themselves. If that's the case then the key question becomes: Why can't they do it by themselves; what is the nature of the obstacle standing in their way? There are two different valid answers to this question. The first is that they don't have all the required authority. The second is that they don't possess all the required know-how. The first answer leads us to the core problem we dealt with above - the misalignment between responsibility and authority. The second answer reveals that, as we suspected, there is a second, not-less-important core problem. "They don't possess all the required know-how." But we do do something about it. Actually, we do a lot about it. Many organizations invest tremendously in formal training, and in every organization, in each department, there is a lot of on-the-job training. So why is "they don't possess all the required know-how" so prevalent? Maybe it is because of us? Maybe our training lacks an essential ingredient? Maybe when we try to transfer the required know-how to our lieutenants we are missing something vital? Or, to put it bluntly, maybe we simply don't know how to give clear instructions! We don't know how to give clear instructions. Let me demonstrate what I am alluding to by an honest-to-God true story. I live in a small town in the suburbs of Tel Aviv. Our neighbors have a few apple trees which had a problem. In the spring, when the apples begin to form, there is a bug which lays its eggs inside the baby apples. Worms then hatch inside the growing apple, with plenty of fresh produce to fuel their voracious appetites.
There is a solution which, although time-consuming, is supposed to be effective. Paper bags are tied around the immature fruit, which then grows and ripens inside the bags, protected from the bugs. Well, our neighbors decided to try this solution, and they told their daughter, "Put the apples into paper bags." They gave her the proper equipment, and left her to enjoy her task in the spring sunshine. They returned to find all the tiny apples sitting in paper bags, on the ground. To this day everyone in the neighborhood torments the now-grown woman with the apple-bag story. If we want to empower people, it's important to not just tell them what to do, "Put the apples into paper bags." It is as important to tell them why. And here is exactly where we go astray. Most of us don't even notice that the why contains much more than one element. An essential part of the why is explaining why the action is needed - to prevent the bug from laying her eggs in the apple. Notice that this explanation would still not have prevented the absurd result. So, another why has to be explained as well: why we take the action. In other words, what is the expected objective of taking the action - not to find half a worm when we bite into a ripe apple. But that's not enough. Irritating as it is, we have to answer a third why. Why we claim that the action will satisfy the need to the extent that we will get the desired objective - the paper bag is sufficient to keep out the bugs. Explaining this would have prevented using bags with holes. If you think that you are doing a fine job explaining to your people all the relevant why's, do the following check. Search for a case where you gave meticulous, clear instructions. You even wrote them down, step-by-step. Examine the instructions that you gave. What do you see? You see that you have detailed what should be done, and how it should be done. You have detailed the actions. What about the why's?
Yes, you probably wrote the objective, and maybe the need for the entire procedure, but did you detail all the relevant why's for each and every action? If you did, you are a startling exception. Why do I stress this point so much? For a few reasons. One is that when we give instructions detailing the actions but not the why's, the chance is very high that, lacking the why's, a lieutenant will flounder. Our reaction then is to detail the instructions even more. Have you noticed that the more detailed the instructions, the less the empowerment? Whereas if we do give the why's, the actions are much less important; the lieutenant is free to improvise his actions as long as the why's are satisfied. True empowerment flourishes. But there is another reason. We do provide the why's, but you know when? When the lieutenant messes up, only then do we explain the why. We call it on-the-job training. No wonder it takes so long; the lieutenant has to make many mistakes until he squeezes out all the why's. What blocks us from giving all the relevant why's? It's not maliciousness or hidden agendas; it's simply the fact that we are not trained to do it. We are not used to verbalizing through meticulous cause-and-effect. Is it hard to learn? No. Start with any written procedure that exists in your department. For each step of the procedure insert all three why's. Then, between each two steps of the procedure insert an additional why - why the latter step must follow the earlier one. This work will bring you some major benefits. First: probably, in the effort to explain the procedure (inserting the why's) you'll significantly modify it. The many managers who have done it report that they found out that at least 50% of the procedures they explained previously contained major errors or inefficiencies. Second: Using the procedures that outline all the why's shrinks the time required for on-the-job training to less than 10%.
But most importantly, as you gain some experience, you'll get the third benefit: Whenever you discuss things with your people, naturally the why's start to take center stage, leading more and more to true empowerment. Conclusion At last, empowerment is recognized as one of the necessary conditions for an effective organization. Alas, as I have tried to prove in this article, the core problems blocking or impeding empowerment are not widely recognized. As a result, organizations do not employ simple, effective techniques to remove the obstacles preventing empowerment. Unfortunately, the same is true for two other, not-less-important issues: communication and team-work. The core problems are not recognized and the techniques to overcome them are not devised. Rather, most efforts are still aimed at winning a war that is already won - at stressing the importance of empowerment, communication and team-work. I am afraid that if the core problems are not widely recognized and the techniques to overcome them widely used, empowerment, communication and team-work will first turn into lip service and then into a decaying fashion. ---------- Management Skills Workshop program description & schedule ---------- Copyright 1998 Avraham Y Goldratt Institute +( general introduction to a presentation From: "Jim Bowles" To: "Constraints Management SIG" Subject: Re: [cmsig] Popularity of TOC - Bad Solutions Date: Tue, 19 Apr 2005 12:44:00 +0100 I recall the first TOC workshops that I went to. At that time, 1987 onward, there were lots of companies failing. Survival was the name of the game for many. The Quality movement was gathering pace. New Technology was being installed as the way forward to better results. And JIT was being explored by many here in the UK. The penalty for stagnating performance - level profits for many years - meant death to the organisation and still does, as shown by MG Rover last week. Heads were rolling everywhere and still do.
JIT, Quality systems and many other options were being placed on the table to help people improve. So Eli's challenge to those who were having difficulties was - so you want to survive - and to do this you want to make more money now and more in the future? Are you with me? And you think that you know how to do so, right? Well let's see how good your decision making processes are. Oh, and you also want continuous improvement, eh? And at present every time you improve something (the focus being on manufacturing with programmes such as JIT) you lay off people. And the first time you lay off people you show that there is a link between improvement and people losing their jobs. Good strategy? So where do you start? What you are saying is that your customers are also demanding better products, better quality, better service, at lower cost. And you don't know how to give them these things? Am I right? And you are also saying that your masters want better returns on their investment. [Or they will put their money elsewhere.] What!? And your employees are telling you they want something in it for them too. Security and satisfaction. And your customers want more from you too, better products, better services. Quite a juggling act you have on your plate, eh? OK, so let's start to take a systems view of the problem. Which one are you going to start with? Today everybody says that the customer is King. So is he the number one priority? Without them where would you be? But you need more from your employees too, more training, more attention to products and processes. Should they be your number one priority? Oh, and you need to raise money to do these things so that you have a future and can survive into next year and beyond? Am I right? So is it a question of balance? No, it's a question of doing the right things and doing them right. But how? And that's the question that the TOC body of knowledge has been directed towards for almost 30 years.
+( goal From: "J Caspari" Subject: [cmsig] RE: Goal(s) again... Date: Sat, 8 Jun 2002 10:12:51 -0400 Richard asked << Can anyone tell me the exact book and page number (I have all of them), or the tape and minute (of the Goldratt Satellite Program), where Eli offers a definition of 'goal' and 'necessary condition'? [Not that we are limited to Eli's definition, but why not start the discussion with the words of the Founder, and discern his original intent first?] >> (1) Haystack, pp. 10-13, 49? (2) The title of the book, *The GOAL: a Process of Ongoing Improvement*, second and second revised editions? --- From: TOCreview@aol.com Date: Sat, 8 Jun 2002 10:33:14 EDT > Can anyone tell me the exact book and page number > (I have all of them), or the tape and minute (of > the Goldratt Satellite Program), where Eli offers a definition > of 'goal' and 'necessary condition'? rez, as you know (if you have them all) eli is apparently opposed to indices. i NEVER buy a book without an index. of course with one obvious exception. try "the haystack syndrome" bottom of page 11 through 13 to understand his position on goals. and a hint of necessary conditions confused with goals. or consider the more thorough treatment in "leading the way to competitive excellence" mid page 67 through the diagram on top of page 68. note the reference to goldratt on page 74. but why? marketing guru al ries states in "bottom up marketing" which, imnsho, is brilliant, that it is tough to inject a new idea into a small mind. better to leverage the concepts already in the mind of the person to be influenced. in other words, goals are whatever my client thinks they are. part of any good process at the start of an assignment [or would-be assignment] is simply called "level setting." get the client's definition. note that they might actually prefer objective to goal. or strategic direction, etc. then build your solution around their terminology. introduce TOC terms and concepts only when necessary.
life is too short. --- From: "Tony Rizzo" Sent: 06-Sep-05 5:50:32 AM To: "tocleaders@yahoogroups.com" Cc: Subject: RE: [tocleaders] Digest Number 1148 Larry, The whole discussion of necessary conditions and goal often misses the point, particularly the necessary conditions part. For a moment, let's talk purely in mathematical jargon, and let's discuss optimization. In any problem of mathematical optimization the definition of the problem almost never involves only the objective function. An equally important part of the problem, which is indispensable for achieving an acceptable solution, consists of the boundary constraints imposed upon the solution to the problem. Nearly always, we necessarily constrain the solution to a region of its multidimensional space where the solution is practical, or at least physically possible. For example, for all their mathematical elegance, imaginary solutions (which involve the square root of -1) are not all that practical. So we restrict the region within which the optimization algorithm of choice searches for a solution. The optimization algorithm is never allowed to run amok. Should the algorithm ever cross a constraining boundary, it is instructed immediately to cease all progress in that unacceptable direction and to seek a direction that allows it maximum progress toward the greatest _acceptable_ optimum, which by definition is always within the region defined by the constraining boundaries. Notice that the constraining boundary conditions override the directive to optimize the objective function. They do so, because we, who define the problem and the acceptable solution space, perceive no value in solutions external to the boundaries. At the time that we define the optimization problem at hand, we value staying within the region of acceptable solutions infinitely more than we value any solution external to that region.
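Rizzo's optimization picture - an objective that is pursued only inside a region of acceptable solutions, with the boundary conditions overriding the objective - can be illustrated with a few lines of code. This toy sketch is my own, not from the post; the objective function, the boundary at x = 3, and all the numbers are hypothetical:

```python
# Toy illustration of boundary constraints overriding an objective function.
# The unconstrained optimum of the objective lies OUTSIDE the acceptable
# region, so the constrained search settles on the boundary instead.

def objective(x):
    # Unconstrained maximum at x = 5
    return -(x - 5.0) ** 2

def feasible(x):
    # Boundary constraint: anything beyond x = 3 is unacceptable,
    # no matter how attractive the objective value looks out there.
    return x <= 3.0

def constrained_search(lo=0.0, hi=10.0, steps=10001):
    """Grid search that checks the constraint BEFORE the objective:
    infeasible candidates are never even evaluated."""
    best_x, best_val = None, float("-inf")
    for i in range(steps):
        x = lo + (hi - lo) * i / (steps - 1)
        if not feasible(x):        # the constraint overrides the objective
            continue
        val = objective(x)
        if val > best_val:
            best_x, best_val = x, val
    return best_x

if __name__ == "__main__":
    # The unconstrained optimum (x = 5) is infeasible, so the search
    # returns the best acceptable point, which sits on the boundary.
    print(constrained_search())
```

Even though the objective keeps improving past x = 3, the search never considers that territory - exactly the "never allowed to run amok" behavior the post describes.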
In other words, the conditions that define the constraining boundaries are infinitely more important than the objective function. The same is true of enterprises. The conditions that define the constraining boundaries within which any enterprise seeks to optimize its own objective function are infinitely more important than the objective function of the enterprise. As such, the search algorithm of the enterprise (which is implemented by the management team) should never be allowed to cross any one of those boundaries. So what's the problem with many enterprises today? The problem is that the CEO often does not define the constraining boundaries for everyone else in the enterprise. When this omission (deliberate or otherwise) leads to such a boundary being crossed, it is up to the greater system to knock some sense into that CEO and to wake up the enterprise. Unfortunately, the folks in charge of the greater system often lack the means with which to detect when an enterprise is approaching a constraining boundary at breakneck speed. They should not need such means. The CEO of each enterprise bears the responsibility of ensuring that his/her charge remains within its region of acceptable solutions at all times. This is what responsible leadership is about. --- From: "Tony Rizzo" Date: Tue, 6 Sep 2005 09:27:33 -0400 Subject: RE: [tocleaders] Digest Number 1148 Our actions can be either random or premeditated. This discussion is moot if random actions are acceptable. So I'll focus on premeditated actions. We take premeditated actions either to avoid anticipated problems (maintaining the status quo) or to make changes (ideally in search of improvements). If our actions are to be premeditated, then we need to evaluate the effects of our actions before we take those actions. Without a criterion, no evaluation is possible. Evaluation requires a criterion.
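The decision rule running through Rizzo's two posts - necessary conditions act as hard filters that override the criterion, while a single goal measurement ranks whatever survives the filter - can be sketched as a toy evaluator of candidate actions. This is my own construction, not from the posts; the action names, the cash condition, and the throughput figures are all hypothetical:

```python
# Toy sketch: necessary conditions are hard filters on candidate actions;
# the goal measurement is the single criterion that ranks the survivors.
# All names and numbers below are invented for illustration.

candidate_actions = [
    {"name": "big promotion", "throughput": 900, "cash": -50},  # best goal value, but...
    {"name": "steady offer",  "throughput": 600, "cash": 120},
    {"name": "do nothing",    "throughput": 400, "cash": 200},
]

# Necessary conditions: predicates that must hold no matter how attractive
# the goal measurement looks. Violating one disqualifies the action outright.
necessary_conditions = [
    lambda a: a["cash"] >= 0,   # e.g. never jeopardize the cash position
]

def goal_measurement(action):
    # The one measurement used to rank acceptable actions.
    return action["throughput"]

def choose(actions, conditions, goal):
    """Filter first (conditions override the goal), then optimize the goal."""
    acceptable = [a for a in actions if all(c(a) for c in conditions)]
    if not acceptable:
        return None  # no acceptable premeditated action exists
    return max(acceptable, key=goal)

best = choose(candidate_actions, necessary_conditions, goal_measurement)
print(best["name"])  # the "big promotion" is rejected despite its throughput
```

The goal measurement never gets a vote on an action that jeopardizes a necessary condition; it only decides among the actions that remain - which is the sense in which the boundaries are "infinitely more important" than the objective.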
The criterion that we use to evaluate the effects of contemplated future actions is the goal measurement (see the note below for a brief digression). Can we optimize more than one goal measurement simultaneously? No, we cannot. At best we can define a composite measurement from two or more functions. But once we have defined the composite measurement, again, we have a single objective function, a single goal measurement. What's the difference between the goal measurement and our so-called necessary conditions? It is this. The goal measurement is the one measurement that we always compromise in favor of any necessary condition, whenever such a necessary condition is in jeopardy. When no necessary condition is in jeopardy, the goal measurement continues to be the criterion with which we convert candidate actions to premeditated actions. Tony [A digression: A contemplated future action is not premeditated until the likely effects of the action are evaluated. In the absence of any evaluation, actions are random. In the absence of a proper criterion, the evaluation of contemplated actions is random at best. More often the evaluation is erroneous and misleading. Most managers in most corporations lack the proper criterion with which to evaluate contemplated actions. You take the next leap.] +( goal - make MORE money now and in the future Subject: Re: _Make "MORE" money_ is a dangerous DISTORTION of Goldratt's words From: Tim Sullivan Date: Thu, 15 Jun 2000 14:45:36 -0700 (PDT) For anyone who is interested, there is a video tape I got from the Goldratt Institute called "TOC Strategic Planning/Introduction to Marketing & Sales" from Oct. 18-21, 1993. In it, Eli discusses the goal and necessary conditions. He says that in the book "The Goal" he wrote that 'the goal of a for-profit company is to make more money now and in the future.' Later, as he is introducing the necessary conditions he restates the goal as: "To make money now as well as in the future.
If you notice, I have dropped the word more." I don't know what Eli's current position is, but in 1993 he went on record as removing the word more from the definition of the goal. For what it's worth, tim -- Subject: The concept "dangerous DISTORTION of Goldratt's words" is a dangerous notion. From: Tony Rizzo Date: Thu, 15 Jun 2000 21:14:44 -0400 Here are some facts with which you will have to live. TOC is the first serious attempt to apply scientific principles to the management of human systems. Like all sciences, TOC is built upon a few underlying axioms that we all believe to be valid within our reality. The rest of TOC can be proven logically, once we accept the underlying axioms. This is the stuff of science. However, even within the sciences there is room for opinion and preference. For example, I accept the laws of physics, and I can derive the solutions to any number of problems, logically, beginning with the underlying laws of physics. But I can also express my preference with respect to any situation, and I can express my opinion with respect to any situation. I can do this at will, as can Eli Goldratt, so long as I, he, you, and anyone else who expresses his/her opinion clearly labels that expression as opinion. Here's a thought. If we want TOC to spread, then we must prevent the perception that TOC is a religion rather than a management science. Why is this necessary? It is necessary because those who today look to TOC with a curious eye also have a skeptical eye right alongside the curious one. This seemingly unending argument about the goal of an organization, and Eli's use or non-use of the word "more" in his statement of _his_opinion_ of the goal of a for-profit organization, has had the tone of a religious battle since it began. Eli Goldratt started the Theory of Constraints. He expressed some of the underlying axioms upon which TOC is built. He is the first scientist to derive some of the solutions in the TOC solution set.
May God bless him for doing so. However, Eli Goldratt will be the first to tell you that Eli Goldratt is not the founder of a religion. Every scientist who ever began a new body of knowledge will tell you that he/she is not the founder of a religion. What's the difference between science and religion? Science tolerates personal preference and personal opinion. Eli Goldratt, since he is intelligent and a scientist, and not the founder of a religion, tolerates the personal opinions of others and the personal preferences of others. He even tolerates my opinions. He may not agree with them, and he tells me as much when that's the case. But he embraces the reality that his opinions and those of others are incongruent. I embrace the same reality. Your opinions and mine are incongruent. I can live with this. You are free to state your opinion on any subject. I am even eager to live with this. But everything else that you state and to which you attach the label "TOC science" requires a formal proof and a formal peer review process. With this scientific process you must live, so long as you seek to further TOC science. +( goal by Collins, Porras Date: Fri, 22 Feb 2002 06:20:15 +0000 (GMT) From: Mandar Salunkhe Subject: [cmsig] Jim Collins article I am an Eli Goldratt fan and have gone through the requisite books on TOC: The Goal, It's Not Luck, Critical Chain, et al. Until now I was an ardent believer in "The goal for a profit-making business has been defined as making more money both now and in the future." Then I came across Built to Last: Successful Habits of Visionary Companies by James C. Collins, Jerry I. Porras. Book info: http://www.amazon.com/exec/obidos/ASIN/0887307396/002-8920976-2962462 This analysis of what makes great companies great has been hailed everywhere as an instant classic and one of the best business titles since In Search of Excellence. The authors, James C. Collins and Jerry I.
Porras, spent six years in research, and they freely admit that their own preconceptions about business success were devastated by their actual findings--along with the preconceptions of virtually everyone else. Built to Last identifies 18 "visionary" companies and sets out to determine what's special about them. To get on the list, a company had to be world famous, have a stellar brand image, and be at least 50 years old. We're talking about companies that even a layperson knows to be, well, different: the Disneys, the Wal-Marts, the Mercks. A really fascinating book; it gives the same experience one has when one reads The Goal for the first time. In this book, Collins makes a statement which will strike us TOCites as a bit different. MYTH 3: The most successful companies exist first and foremost to maximize profits. Reality: Contrary to business-school doctrine, 'maximizing shareholder wealth' or 'profit maximization' has not been the dominant driving force or primary objective through the history of the visionary companies. Visionary companies pursue a cluster of objectives, of which making money is only one - and not necessarily the primary one. Yes, they seek profits, but they're equally guided by a core ideology - core values and sense of purpose beyond just making money. Yet, paradoxically, the visionary companies make more money than the purely profit-driven comparison companies. A detailed pair-by-pair analysis shows that the visionary companies have generally been more ideologically driven and less purely profit-driven than the comparison companies in 17 out of 18 pairs. This is one of the clearest differences we found between the visionary and comparison companies." Also in another article by Collins, he refers to Peter Drucker and talks about the same. "The Classics. The complete guide to the best business and management books ever written. By Jim Collins Copyright 1996 by Jim Collins. This article first appeared in Inc., December 1996.
" "Drucker stands as the most significant management thinker of the 20th century. Enlightened and, above all, effective management is to him the central skill needed in all parts of a free society. Effective management dispersed throughout society - in business, in nonprofits, in education, in local government - made the triumph of the free world and the end of the Cold War possible and is the only workable alternative to a resurgence of tyranny or dictatorship. Drucker's goal is to make society more productive and more humane. He strives to lift us to a higher standard, not merely to help us be successful or amass wealth. As I culled through Drucker's prolific writings, a few timeless themes emerged. The primary function of business management isn't making a profit; it's making human strength productive and human weakness irrelevant. 'The rhetoric of profit maximization and profit motive are not only antisocial,' he writes. 'They are immoral.' Channel your energies into building on strength, not into remedying weaknesses. Give people freedom and responsibility within the context of well-defined objectives; 'enable your people to work!' Authority must be grounded in competence, not in position or status. An organization must have built-in mechanisms for self-induced change, or else it rots. A business enterprise is not strictly a private institution; it is a social institution that must exercise social responsibility in exchange for its freedom from societal control. Yet social consciousness does not excuse poor performance or incompetence; the foundation for doing good is doing well. " +( Goals by Eli Goldratt Subject: [cmsig] Re: _Make "MORE" money_ is a dangerous DISTORTION of Golratt's words That sounds about right to me. I've posted earlier messages describing my quest for a "unified field theory" that describes success for both for-profit and not-for-profit organizations. 
It goes something like this: 1) An organization exists for a purpose, defined by its creators and modified by their successors. 2) To achieve that purpose (the real goal), the organization must satisfy at least three necessary conditions: a) Satisfy (delight) those who are supposed to benefit from its products or services (customers). b) Satisfy (delight) those who provide the necessary physical resources (investors). c) Satisfy (delight) those who provide the necessary human resources (employees, or volunteers). If I look at it this way, I can apply the same rules to both IBM and the local Methodist church. Now if any one group of stakeholders gains disproportionate power and seeks its own short-term self-interest rather than the long-term self-interest of the organization, the life of the organization, and its ability to meet its purpose, is threatened. An overly powerful union or greedy investors can get a bunch of short-term golden eggs at the expense of the goose. Then it's on to find another goose. Could this be what is killing some of our corporations? Rick Gilbert rick.gilbert@weyerhaeuser.com --- Date: Fri, 16 Jun 2000 07:09:12 -0700 (PDT) From: Tim Sullivan Allow me to give those who are interested a little more detail of what Eli said at that 1993 Jonah conference. He was responding to a question about which method of strategic planning is the best model to use. In the course of his response he introduces the "goal" and the two necessary conditions. He goes on to say... "If you do treat it right, take any one of the three as a goal, provided that you DEEPLY understand that the other two are ABSOLUTE necessary conditions, then there is no quarrel between the three. None whatsoever." (The capital letters represent Eli's emphasis, not mine.) (Paraphrasing...) Why did he choose the first (make money) as the goal for the book? Because of a "technical problem": measurement. Long ago some genius invented money, which allows you to compare a cow and an apple.
To date no one has invented a universal measurement system for satisfaction or security. He knew he was going to tangle with the cost world - a measurement-based approach - so he chose for the goal the element that had a solid measurement system. Eli then says, just before going into the 3-step approach to strategy... "The real challenge is, how do you as a top manager, working in conditions that by definition theoretically can not be predicted in the details, how can you guarantee that all 3 elements will be constantly satisfied. That's the real question." --- From: "Richard E. Zultner" Subject: [cmsig] One More Time: The Goal of a Company is ... Date: Thu, 22 Jun 2000 00:02:20 -0400 I raised the question about the GOAL of a company awhile back -- that is, why it is NOT "make money now and in the future". There were many responses, but none of them directly addressed the basic questions. I'll try again, one more time. First, some givens: A. It is accepted that there are Three Necessary Conditions: NC1. Make money now, as well as in the future [for the owners] NC2. Provide a secure and satisfying environment to employees now, as well as in the future NC3. Provide satisfaction to customers now, as well as in the future We will leave for later the question of whether there are MORE necessary conditions than just these three (like, minimize impact on the environment, etc.), and exactly why it is that these are "necessary". B. It is accepted that you must not fall below some minimum level on any of these, or you fail. (That's what 'necessary condition' means, right? That you must SATISFY all three necessary conditions -- that is, you must at least not cause them to deteriorate below some "necessary" point. Note that this is different from 'maximize' or 'cause to increase'.) This means that if necessary conditions are not deficient, then you don't work to increase them (as you would a goal). C.
It is accepted that the owner(s) of the business can choose any damn goal(s) they please. That's not the question. The question here is, "what SHOULD the Goal(s) be?" -- for a Jonah-CEO? Note: I raised the question not out of orneriness, but because I think that ToC may be over-emphasizing NC1, and ignoring NC2 and NC3 (inertia, perhaps?). For one example, in the Lean vs. Drum-Buffer-Rope debate, I haven't seen any ToCers credit Lean for doing better than DBR on NC2 for satisfaction. [Because everyone makes improvements everywhere in Lean, all workers learn how to improve their work, improve it, and then they can take pride in that. Where is the satisfaction in improving their own work for NON-constraint workers in DBR? Not only do they not learn to improve, they're told that if they did improve, it wouldn't matter. Sure, they get security, but what of satisfaction?] Here are the Three Pernicious Questions, again: 1. Is NC1 really a 'goal' (so that we try to maximize it) or just a 'necessary condition' (so we satisfice it, or at least avoid decreasing it)? In other words, once we reach the satisficing level of the NCs, what are we willing to trade off for more of THE GOAL? If NC1 is THE GOAL, then we should be willing to trade off ANYTHING else to get more of it (yes, subject to not taking the other NCs below their "necessary" level). Are we? Case in Point: If we think our defective automobile fuel tank design may kill a few customers, but the cost of fixing it is much more than the expected losses from lawsuits, then we should NOT make the change -- IF NC1 is THE goal. Is that ToC's position? Historically, this WAS the position of the American "robber baron" capitalists in the late 19th century. (And perhaps in Russia today?) At the time, it was "common sense" for successful founder-owners of companies to act this way. Should we return to those days? Peter Drucker makes this case: breathing air is a necessary condition for each of us to survive.
Yet none of us claim our "goal" in life is simply to breathe. (Just "breathing" would be a tragic goal for a person.) Similarly, companies need profits to survive (it is their "financial air"). Profit is a necessary condition for companies -- no question about it. But companies are made up of people. Just as your goal is not "breathing", the goal of a company is not "profit". It would be sad and unworthy for a group of people (a company) to have such a low aim as only to "make money". Society did not grant legal immortality to corporations in order for them to engage in self-aggrandizement and just "make money" for their owners. No one responded to Drucker's challenge in previous discussions. Peter Drucker says NC1 is a necessary condition, but not a goal. Is Peter wrong? 2. Why only ONE goal? Most strategy experts start with a position that there are MULTIPLE goals for a company (and multiple stakeholders who advocate for multiple and different goals), and then argue about how to prioritize them, and trade them off. Are they all wrong? I accept all three NCs as at least necessary conditions, but why can't NC2 and NC3 be more than that? Why can't they both be goals? (And possibly, companies can have additional goals besides NC2 and NC3, yes?) Why can't a system have multiple purposes? My Swiss Army knife certainly does... 3. Whether or not NC2 and NC3 are goals -- aren't some companies below the necessary condition level here? Can't a company be "OK" on NC1, and NOT OK on NC2 and/or NC3? So where are the ToC methods for increasing employee satisfaction to the "necessary" level? Or for raising the satisfaction of customers to the "necessary" level? And dare I ask about measurement? And how much is "necessary"? [And hasn't there been some research done on both employee satisfaction AND customer satisfaction? Or do we ignore that because it wasn't done by ToC believers?]
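Zultner's distinction between satisficing a necessary condition and maximizing THE GOAL can be made concrete as a small decision rule: reject any action whose predicted effects take an NC below its floor, then pick the best remaining action on the goal measurement. The sketch below is hypothetical -- the floors, action names, and all numbers are invented, loosely echoing the fuel-tank case in point above.

```python
# Hypothetical decision rule: necessary conditions are pass/fail thresholds
# (satisfice), while the goal measurement is maximized only among the
# actions that pass every threshold.

NC_FLOORS = {
    "money_now": 0.0,               # don't go cash-negative
    "employee_satisfaction": 6.0,   # invented 0-10 survey floor
    "customer_satisfaction": 7.0,
}

def passes_ncs(effects):
    """True when every necessary condition stays at or above its floor."""
    return all(effects[nc] >= floor for nc, floor in NC_FLOORS.items())

def choose(actions):
    """actions: name -> predicted effects, including 'goal' (profit delta)."""
    viable = {name: e for name, e in actions.items() if passes_ncs(e)}
    if not viable:
        return None  # no acceptable action: fix a necessary condition first
    return max(viable, key=lambda name: viable[name]["goal"])

candidates = {
    # Highest profit, but predicted to push customer satisfaction below floor
    "skip_fuel_tank_fix": {"goal": 900, "money_now": 5.0,
                           "employee_satisfaction": 7.0, "customer_satisfaction": 3.0},
    "fix_fuel_tank":      {"goal": 400, "money_now": 2.0,
                           "employee_satisfaction": 7.5, "customer_satisfaction": 8.5},
    "do_nothing":         {"goal": 0,   "money_now": 1.0,
                           "employee_satisfaction": 6.5, "customer_satisfaction": 7.2},
}

print(choose(candidates))  # -> fix_fuel_tank: best goal among NC-passing actions
```

Under this rule the most profitable action loses because it violates an NC floor; if NC1 were THE GOAL with no floors, `skip_fuel_tank_fix` would win -- which is exactly the trade-off Zultner is probing.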
--- davesimp@ca.ibm.com Date: Thu, 22 Jun 2000 09:48:13 -0400 Subject: [cmsig] Re: One More Time: The Goal of a Company is ... Richard, My belief is that this is the next step in Tony's new thread about the scientific basis of TOC. Once we agree on some fundamental definitions of a system, we again need to confront the "output function" question in a more scientific fashion. As to the number of Necessary Conditions, my belief is that there should be one for each stakeholder in the organization. A list of stakeholders would include: 1) owners - make money now and in the future 2) employees - provide a secure and satisfying environment to employees now, as well as in the future 3) customers - provide satisfaction to customers now, as well as in the future 4) public - make effective use of limited resources (i.e., full product life cycle environmental responsibility) now, as well as in the future 5) bondholders - meet your financial obligations now, as well as in the future 6) government - obey all laws now, as well as in the future, pay taxes? 7) competition - compete ethically now, as well as in the future I certainly welcome improvements to this list. I also disagree with your given B. Just because a necessary condition has not yet fallen below the minimum acceptable level (the failure point) doesn't mean that I won't do something to prevent it from getting down to that level now, as well as in the future. This is a form of management "insurance" called pain avoidance - nobody wants bad PR, for example, so some companies do a lot to avoid environmental mishaps. It doesn't mean that the necessary condition has become a goal. Dave Simpson, Supply Chain Specialist --- From: "Potter, Brian (James B.)" Date: Fri, 23 Jun 2000 16:36:42 -0400 Steve, Certainly "making 'more' money now and in the future" IS a necessary condition (especially for a for-profit organization). No debate there, right? Now, imagine a for-profit organization which chooses as its GOAL the "satisfy the market better now and in the future" necessary condition. How will that organization behave?
- It will deliver high quality products
- It will be socially responsible
- It will obey laws
- It will engage in continuous improvement of its products and services (to better satisfy the market)
- It will engage in continuous improvement of its internal systems (to better improve its products and services)
- It will satisfy its employees (so they will join the efforts to improve market satisfaction)
- It will expect (and get) payments from the market which will fund all the above AND offer the owners sufficient return on their investment so that the owners will continue the organization's existence (rather than closing the organization and using their cash to start a better money making machine)
Failure to satisfy the owners prevents satisfying the market. Failure to satisfy the market prevents satisfying the owners. Failure to satisfy employees prevents satisfying the market. Failure to satisfy the market prevents satisfying the employees. Failure to satisfy employees prevents satisfying the owners. Failure to satisfy the owners prevents satisfying the employees. All three necessary conditions are NECESSARY. What is wrong with picking continuing market satisfaction (or continuing employee satisfaction) as THE GOAL? Both lead to "more money now and in the future" (or else the owners close the organization, cash out, and invest in a better money making machine [which defeats any possible goal]). Drucker would probably say that "satisfy the market better now and in the future" is THE GOAL. He would probably also assert that employees were part of the market to satisfy. Union leaders might urge "satisfy the employees better now and in the future" as THE GOAL. Union leaders who think seriously about the future would work very hard to focus union members on satisfying owners and the market. This whole GOAL debate is a tempest in a teapot. Our choices are all three or nothing.
Pick the one which helps your organization focus best on doing all three as THE GOAL, NEVER forget the other two, and move on. Why pick only one? Maximizing two or more things will create conflicts (real or imaginary chases after local optima) which will disrupt the focus on improving the organization and its products to the benefit of the owners, the market, and the employees. Focus on a single GOAL will force proper attention to the other necessary conditions (when advancing THE GOAL requires advancing another necessary condition) without diverting focus away from continually improving the total system. If ANY necessary condition drops below its "sufficiency level," the entire organization may (perhaps, catastrophically) fall to mediocrity or even failure. Might (perhaps subliminal) awareness of such risk encourage some organizations to remain mediocre? --- From: "Potter, Brian (James B.)" Subject: [cmsig] Multiple Goals: Boring? (So what's the answer?) Date: Mon, 26 Jun 2000 15:16:55 -0400 Perhaps SOME folks find the debate boring (well, tedious anyhow) because it really does not matter (your multiple choice answer #1). The "necessary conditions" are so strongly intertwined that you CANNOT pick them apart. As soon as you pick any one necessary condition for a goal, you find yourself committed to improving ALL of them. Lip service certainly will not suffice, and the more I look, the more I suspect that the interrelationships will always drive continuing improvements to all three necessary conditions. Perhaps the condition distinguished as THE GOAL will run a little ahead of the others, but only as leader of a three-horse team pulling the organization toward "better." Consider these four prominent "management philosophers" ...
- Deming approached organizations as a statistician inquiring into ways to produce "higher quality" products.
- Drucker approached corporations as a "social ecologist" inquiring into how the corporation best fits as an element of society.
- Goldratt approached organizations (perhaps) as a physicist (maybe, generic scientist) inquiring into how the organization might maximize its ('financial' [depends upon what one chooses to label as THE GOAL]) performance.
- Senge writes of "learning organizations" (I'm still reading his stuff, and anything I say is quite tentative).
All four gurus seem to have their hands on different reasons for (and aspects of) running organizations the SAME way. The unity (and indivisibility) of the three ToC necessary conditions is one significant contribution of the ToC philosophy. Deming offered similar organizational behavior notions, and he was right. Outfits trying ONLY the "cost saving" bit of the "Deming philosophy" often found that it quickly "quit working" or failed immediately. Looking at the situation from the ToC view makes it easier to understand WHY. Deming attacked a key point of the production system with down-to-earth tools and inferred the "social behavior" required of leadership before an organization could benefit from those tools. His countrymen mostly ignore him (lip service excepted), but many Japanese firms have brought his ideas vividly to life. In many ways, Deming missed only the notion of the constraint, which can focus the organization on the "best" improvement first (and help identify "improvements" which amount only to fictions in the managerial accounting system). In fairness, when he began his work, product quality may have BEEN the constraint in many (if not most) organizations. Drucker may be even more widely admired while also being observed mostly in the breach (to date, Drucker's ability to handle "trans-Atlantic arrows" without needing the intermediate entities and relationships astounds me most, but that ability may leave too many of his readers in the dust). The whole notion of "How a corporation should behave to 'fit in' as a responsible member of a 'social ecology'" must sound like pie in the sky to most folks.
At root, it comes down to, "satisfy all three of Goldratt's 'necessary conditions.'" Keep the market (including governments and neighbors), your employees, AND the investors happy and you will win. Goldratt has offered the tightest coupling (with Deming a not too distant second) between "what to do" and "how to do it." He has also expanded the formal analysis tools into more "business functions" while offering the key notion of "the constraint." Goldratt's production and development notions harmonize so well with Deming's that the only major difference is the issue of the constraint. Deming would probably have understood and acknowledged its importance very quickly. Failure to recognize (and properly manage) constraints may well have caused a significant fraction of the failures of Deming's ideas. Senge seems to build bridges between Drucker and Goldratt. He even hit on the notion of the constraint, but missed its universality. Many of Senge's "systems thinking" and "organizational learning" ideas fit very nicely with Deming's continuous improvement and Goldratt's POOGI. Like Goldratt, Senge recognizes that "cause" and "effect" may happen widely separated by time, organizational function, distance, or other barriers to human perception. Also like Goldratt, he points out that we are often our own worst enemy (just another way to say "policy constraint"). One way or another the gurus are telling us that it is time for systems thinking in organizational management. Which guru one chooses to follow may mean a great deal less than how well one thinks systemically. :-) Brian -----Original Message----- From: Richard E. Zultner [mailto:Richard@Zultner.com] Sent: Sunday, June 25, 2000 2:58 PM Subject: [cmsig] Multiple Goals: Boring? (So what's the answer?) In recent discussions, several people have commented: "I've found the goal discussion and the NC discussion trying too. 
It's old, and the navel lint is now categorized, cataloged, examined, analyzed, tagged, and still it remains something to pick at. I'm just as bored with it as you are." Please help out here. Why is this boring? 0. It's boring because in ToC we are taking "the only goal of a business is to make money now and in the future" as an axiom. We are assuming this is the case, and if you challenge that axiom, the discussion is not ToC -- so it should go on a different list. [Just as non-Euclidean geometry starts with different axioms than Euclidean geometry, we would get a non-Goldrattian ToC?] 1. It's boring because it doesn't make any difference what the answer is. [Well, can someone explain to me where the following analysis is wrong:] 1.a. The question of "multiple goals" matters because: If there are multiple goals (that is, making money now and in the future is not the ONLY goal of a business), then it seems possible that there are multiple constraints. That is, that what constrains the business from making [more] money is not necessarily what constrains it from satisfying [more] customers, or from providing [more] secure and satisfying jobs to employees. This raises the possibility that we may need to address multiple courses of action to improve a business. They might even conflict. Note that multiple constraints also apply in the case where an organization is below the "necessary" level on any of its necessary conditions. [This is boring?] 1.b. Appealing to your intuition, which company would you rather work for: one where top management states that their only goal is "to make money now and in the future", or one where top management states that their goal is to "create customers" (Drucker consulted with them)? Doesn't the goal of the company you work for matter to you? Intuitively, the difference matters. [This is boring?] 2. It's boring because I know the answer already. I know the answer because: 2.a. Eli Goldratt told me so. [Thank you very much.
I will file further discussion under "inertia" or "ToC as a Religion".] 2.b. We have previously discussed and analyzed this, and reached a conclusion. [Can you send me the analysis? Or point me to it? This would make a great paper for somebody! There are some very respected management theorists -- like Peter Drucker -- who argue persuasively that "making money now and in the future" CANNOT be the goal of a business. (He does agree it is a necessary condition, but necessary conditions are satisfied, not optimized. If ToC has the refutation of his analysis, I'd like to see it!) The goal of a business is an issue of fundamental importance. By the same token, if ToC does NOT have a Drucker-busting rationale, then how do we justify using ToC to Drucker-reading top managers?] 3. Something else. [What am I missing? Please educate me.] --- From Peter Senge in SMR, "The Leader's New Work - Building a Learning Organization" (a Sloan Management Review classic): Johnson & Johnson's credo, created in the 1940s:
- service to customers comes first
- service to employees and management comes second
- service to community comes third
- service to stockholders comes last
--- Date: Wed, 05 Jul 2000 18:10:59 -0400 From: Tony Rizzo Interestingly, Ackoff has changed my views on what constitutes a system. First, a goal is not a necessary condition for a system. We can change the goal of the system. Changing the goal doesn't necessarily change the system. The system stands on its own. We may choose to redesign the system, so that it might achieve the new goal more efficiently. But we can have the original system pursue the new goal, efficiently or inefficiently. The system and the goal are distinct and separate. Further, Ackoff makes some good points about systems. I'll provide a brief summary of his categories: 1) Deterministic systems are the ones that are goal-seeking. The behavior of these is determined completely from external inputs. 2) Animated systems are purposeful (Ackoff's word).
He makes a distinction between goal and purpose. To me, that's a fuzzy distinction. But, I can live with it. People are purposeful, animated systems, unless I misread his stuff. Think of animated systems as self-directing systems. 3) Social systems may have as subsystems both animated and deterministic systems. Social systems may also be nested. For example, in corporations we have groups, departments, etc. In society we have nations, which have corporations as social subsystems. If I read Ackoff correctly, social systems have purpose. That purpose is to advance the development of the components of the social system. Interesting! 4) Finally, he describes ecological systems. These have no externally imposed purpose. In fact, unless I read too fast, they pursue no purpose. They just exist. Ackoff makes some powerful points about systems, with which I agree wholeheartedly. Specifically, he states that a system is DEFINED BY THE INTERACTIONS BETWEEN ITS COMPONENTS. Take that, ABC! Great stuff, those interactions. He gives a number of examples to make this powerful point. The definition of a system makes no mention of organizational goals. In fact, even my small system extends beyond my rather narrow organizational boundary and goals. I rely on external partners to survive and thrive. By the way, I haven't abandoned the system discussion. I've only paused it, so that I have time to read these new books. I read slowly. :-) --- From: "Bill Dettmer" Date: Wed, 14 Feb 2001 19:23:20 -0800 ----- Original Message ----- From: Christopher Mularoni To: CM SIG List Sent: Wednesday, February 14, 2001 8:41 AM Subject: [cmsig] Re: The GOAL of any company > Factory Physics by Hopp & Spearman presents the following as the fundamental objective for almost any manufacturing firm: "Increase the well-being of the stakeholders (stockholders, employees, and customers) over the long term."
> > I believe this could serve as the goal for almost any organization if you > exclude the pseudo-scientific eco-terrorists/environmentalists. In the interest of starting a new discussion thread into a heretofore uncharted territory... Goldratt has suggested that the goal of a for-profit company is to make more money, now and in the future. This implies that the goal is a RATE phenomenon, not some arbitrary fixed value. He has further suggested that there are necessary conditions for achieving this goal, and that they include customer satisfaction and employee satisfaction/security. Most people seem to agree that necessary conditions (NC) are minimum requirements: zero-or-one values that are either met or not. If not, the goal is not achievable. But what if they are all met? Does the sum of the NCs produce the goal? No, because the NCs are necessary but not sufficient (sounds like a catchy title for a book...). Something else is missing. Since the goal is a rate phenomenon and the NCs are not, nor are they sufficient alone or together to result in the goal, some critical "X" factor is missing. And this critical factor is the element that imparts a rate phenomenon to a goal. DISCUSSION QUESTION: What IS that factor? --- From: "J Caspari" Date: Thu, 15 Feb 2001 19:50:56 -0500 But, they ARE sufficient. If you have satisfied the COMPLETE set (sum of the NCs) of necessary conditions they will be sufficient. If you do not get the desired effect, then there is something missing - another necessary condition. Is it possible that Goldratt led us astray when he suggested that it might be the case that necessary conditions could be interchangeable with the global goal of an organization? What is the real relationship of necessary conditions to the Constraints Management philosophy? Try this one on. All active constraints are currently unsatisfied necessary conditions.
We distinguish constraints whose elevation has a positive throughput effect from those whose satisfaction prevents a negative effect. The latter are typically called necessary conditions. We made this distinction because the analytical technique associated with each differs slightly. +( Goldratt - Deming - Drucker - Senge convergence From: JmBPotter@aol.com Date: Tue, 1 Aug 2000 17:56:45 EDT Rick, The convergence you notice may exist on a much broader scope. Deming, his elves, and their followers have developed continuous improvement processes which harmonize well with DBR and other ToC notions. About the only important piece of ToC thought they do not actively embrace is the notion that the constraint is the same as the throttle (choke point) and both are the same as a leverage point (an effective control for the entire system). Kaizen, TQMP, and Six-Sigma are all variations on the Deming theme. The only significant flaw I find in Deming is his assertion that one needs "profound knowledge" to know what to change and the direction of change. Most of us probably lack Deming's "profound knowledge," but some awareness of a constraint will offer a good shot at a substitute. Drucker's idea of the corporation (organization) existing as a member of a human-made ecology harmonizes almost perfectly with Goldratt's "three necessary conditions." You might say that Drucker makes it even "simpler" by treating "employees" as part of "the market" (along with government, customers, suppliers {other than employees}, neighbors, the natural environment, and other stakeholders). Drucker says that leaders need to know what questions to ask, and they need information (not data) which answers those questions. How well does that fit with Goldratt's notions from _The Haystack Syndrome_? How sharply might some constraint awareness focus a leader's ability to ask the right questions?
Senge shines a light on systems thinking from another angle and talks about "learning organizations." He even hits upon the critical nature of the constraint in one scenario he calls "Limits to Growth." If he had realized that a constraint always (rather than just in some special cases) limits every organization, he might have duplicated much of Goldratt's work in HBR sanctioned venues. Deming, Drucker, and Senge ALL reach conclusions VERY SIMILAR to Goldratt's regarding appropriate behavior for an organization's formal leaders. Similarly, all stress the need for continuing improvement over the current state. The ToC procedures fill a large portion of the gap between the "what" Deming, Drucker, and Senge advocate and the "how-to" many of us need. Brian In a message dated 2000/08/01 10:01:05, rick.gilbert@weyerhaeuser.com writes: While looking for another paper, I found a really interesting article by David and Sarah Kerridge on the importance of (and some guidelines for) formulating the right question. Prior to reading this, I saw no conflict between Deming and Goldratt, but talk about reinforcement! As I read the article, I kept saying to myself: "Yes, that aligns with TOC... Yes, figuring out what to change is the essence... Yes, describe the system with the minimum number of assumptions... Yes, that is simply common sense..." It also fit the way that I try to look at the problems/questions that people bring to me in my job. One of my colleagues "complains" that my first response to any question is always another question. If interested, take a look at the following link on the "Deming Electronic Network." I've been a lurker on that group's mailing list for some time now. Title: What is the Right Question? at http://deming.eng.clemson.edu/pub/den/files/question.txt +( Goldratt activities From: "Mark Woeppel" Subject: RE: [cmsig] TOC Groups Date: Thu, 7 Apr 2005 18:03:36 -0500 AGI is not connected to any of them. 
TOC for Education is also independent, but was initially funded by Eli. ToC-ICO is an independent organization that certifies practitioners. The Goldratt Group - Eli's company - comprises the following two: Goldratt Marketing - the group responsible for disseminating ToC results - and Goldratt Consulting - the group responsible for developing a process to implement ToC. +( Goldratt network in Europe The TOC names in Alex Meshar's territory, which includes Holland, Belgium, Germany, Austria and Switzerland, are below. I don't know all of them. 3 other key addresses are: Alex Meshar: alex.meshar@goldratt.nl Stefan van Aalst: stefan.vanAalst@goldratt.nl and Marco Zuiderwijk: marco.zuijderwijk@goldratt.nl The phone number in Holland is 0031 70 345 45 99 - the location of the office is in The Hague. "Raibstein" "Bengt Nilsson" , "Charlotte Hartelius Klaar" , "Chris-Hans Dirks" , "Christian Kuper" , "Eline Dekker" , "Husken, Bert" , "Jean-Claude Miremont" , "Juergen Bremer" , "Lars Peterson" , "Leonard, George" , "Maj-Inger Johansson" , "Mark Govers" , "Patrick Hoefsmit" , Rudolf G Burkhard/EUR/DuPont, "Stefan Dubois" , "Thomas Hummel" , "Tove Janzon Rusch" , "Ulli Cremers" , "van der Zeeuw, Otto" The two key names in the UK are Oded Cohen and Martin Powell. The phone number there is 0044 1628 674 468. Their emails should be (there might have been a change) oded@toc.co.uk and m.powell@toc.co.uk Another interesting one for you would be John Tripp, the associate in Scotland. He is starting remote learning via the internet and has a simulation - or at least a problem - at his site which you can solve - and maybe use in October. I have not had the time to see it yet. He charges a small amount for access.
His email is: j.tripp@toc.co.uk
+( Goldratt resources
From: Goldratt Info [mailto:goldrattinfo@btconnect.com] Sent: Tuesday, June 21, 2005 4:37 PM To: hpstaber@fciconnect.com Subject: Goldratt News
Dear Hans Peter
A number of people ask us for advice on pursuing their interest in the Theory of Constraints (TOC). We have therefore compiled the following as a guide and hope that you will find it helpful:
Operations
Read - The Goal and The Race - http://www.goldratt.co.uk/bkshp/index.htm
Read - Managing Operations - http://www.goldratt.co.uk/bkshp/index.htm
Read - Manufacturers Guide to Implementing TOC - http://www.goldratt.co.uk/bkshp/index.htm
Study the TOC Self Learning Program on CD-ROM Session 1 - http://www.goldratt.co.uk/oa/slpintro.html
Study the Production Self Learning Kit - http://www.goldratt.co.uk/bkshp/bk-prd1.htm
Read these articles - http://www.goldratt.co.uk/lib/lib-dbr.htm
Read these success stories - http://www.goldratt.co.uk/succ/index.htm#dbr
Attend our 1 day Introduction to Managing Constraints workshop - http://www.goldratt.co.uk/oa/educ/introtomc.htm
Finance & Measurements
Read - The Measurement Nightmare and Throughput Accounting - http://www.goldratt.co.uk/bkshp/index.htm
Read - Measurements for Effective Decision Making - http://www.goldratt.co.uk/bkshp/index.htm
Study the TOC Self Learning Program on CD-ROM Session 2 - http://www.goldratt.co.uk/oa/slpintro.html
Read these articles - http://www.goldratt.co.uk/lib/lib-fin.htm
Project Management and Engineering
Read - Critical Chain and Project Management in the Fast Lane - http://www.goldratt.co.uk/bkshp/index.htm
Study the TOC SLP on CD-ROM Session 3 - http://www.goldratt.co.uk/oa/slpintro.html
Read these articles - http://www.goldratt.co.uk/lib/lib-cc.htm
Read these success stories - http://www.goldratt.co.uk/succ/index.htm#ccpm
Attend our 2 day workshop on Critical Chain (£500.00 + VAT) - http://www.goldratt.co.uk/oa/educ/ccpm2day.htm
Distribution and Supply Chain
Read - It's Not Luck - http://www.goldratt.co.uk/bkshp/index.htm
Study the TOC SLP on CD-ROM Session 4 - http://www.goldratt.co.uk/oa/slpintro.html
Read these articles - http://www.goldratt.co.uk/lib/lib-replen.htm
Read these success stories - http://www.goldratt.co.uk/succ/index.htm#replen
Marketing
Read - It's Not Luck - http://www.goldratt.co.uk/bkshp/index.htm
Study the TOC SLP on CD-ROM Session 5 - http://www.goldratt.co.uk/oa/slpintro.html
Read these articles - http://www.goldratt.co.uk/lib/lib-sales.htm
Read these success stories - http://www.goldratt.co.uk/succ/index.htm#sales
Sales
Read - It's Not Luck - http://www.goldratt.co.uk/bkshp/index.htm
Read - The Cash Machine - http://www.goldratt.co.uk/bkshp/index.htm
Read - Unblock the Power of Your Sales Force - http://www.goldratt.co.uk/bkshp/index.htm
Study the TOC SLP on CD-ROM Session 6 - http://www.goldratt.co.uk/oa/slpintro.html
Read these articles - http://www.goldratt.co.uk/lib/lib-sales.htm
Read these success stories - http://www.goldratt.co.uk/succ/index.htm#sales
Attend our 3 day Solutions for Sale workshop - http://www.goldratt.co.uk/oa/educ/sfsale.htm
Managing People
Read - The Measurements Nightmare and It's Not Luck - http://www.goldratt.co.uk/bkshp/index.htm
Read - Great Boss, Dead Boss - http://www.goldratt.co.uk/bkshp/index.htm
Study the TOC SLP on CD-ROM Session 7 - http://www.goldratt.co.uk/oa/slpintro.html
Read these articles - http://www.goldratt.co.uk/lib/lib-hr.htm
Strategy
Read - Viable Vision - http://www.goldratt.co.uk/bkshp/index.htm
Study the TOC SLP on CD-ROM Session 8 - http://www.goldratt.co.uk/oa/slpintro.html
Read these articles - http://www.goldratt.co.uk/lib/lib-strat.htm
Thinking Skills and non-TOC titles
Read - It's Not Luck - http://www.goldratt.co.uk/bkshp/index.htm
Read - Thinking for a Change - http://www.goldratt.co.uk/bkshp/index.htm
Read - Creating a Thinking Organization - http://www.goldratt.co.uk/bkshp/index.htm
Read - Accelerated Learning - http://www.goldratt.co.uk/bkshp/index.htm
Read - BrainSell - http://www.goldratt.co.uk/bkshp/index.htm
Read - Master it Faster - http://www.goldratt.co.uk/bkshp/index.htm
Study the CD-ROM - Yani's Goal - http://www.goldratt.co.uk/bkshp/index.htm
Generally we recommend that people who are interested in TOC attend our 1 day Introduction to Managing Constraints (£200+VAT) - http://www.goldratt.co.uk/oa/educ/introtomc.htm - as an overview of the core of TOC thinking. In order to learn the formal TOC Thinking Processes we suggest starting with the 4 day Effective Thinking Processes programme - Module 1. We also recommend that you visit the TOC Reference Bank, the leading source of people and organisations who have achieved phenomenal results through the common sense logic of Dr. Goldratt's Theory of Constraints. You can view documented success stories from around the world and/or search a specific company, industry or TOC application. It can be found at www.eligoldratt.com, under the TOC Reference Bank heading.
+( HOW MUCH WOOD COULD A WOODCHUCK CHUCK? Multitasking
How many projects are too many projects? When schedules start slipping and the red flags start popping up in your project management software, a common thing your staff will blame is too many projects. "We're juggling too many priorities and everything is late," they will exclaim, faces turning ruddy. When our children do poorly in school, we throw down the yellow flag and yell "A.D.D." When they tell us they can't focus on their work, we often go to the pharmacy. Unfortunately, coffee and Jolt cola are the closest things to medication for engineering productivity. The study addressing engineering workload that gets the most attention is the one conducted by Steven Wheelwright and Kim Clark (Revolutionizing Product Development, The Free Press, 1992, p. 91). Their findings indicate that 2 projects are the most an engineer can work on concurrently and remain effective. If even one project is added, productivity takes a sharp decline.
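The productivity cliff that Wheelwright and Clark describe can be sketched numerically. The model below is illustrative only: the 20% per-project switching cost is an assumed parameter, not a figure from their study. Each concurrent project beyond the first taxes an engineer's week with task-switching overhead, so the time available per project collapses quickly.

```python
# Toy model of multitasking overhead. ASSUMPTION: each project beyond the
# first costs a fixed fraction of the week in task switching (re-immersion,
# status meetings, context reload). The 0.20 figure is invented.

def effective_fraction(projects: int, switch_cost: float = 0.20) -> float:
    """Fraction of an engineer's week that produces real project work."""
    if projects < 1:
        return 0.0
    overhead = switch_cost * (projects - 1)
    return max(0.0, 1.0 - overhead)

for n in range(1, 6):
    share = effective_fraction(n) / n  # productive time left per project
    print(f"{n} project(s): {effective_fraction(n):.0%} productive overall, "
          f"{share:.0%} per project")
```

With these assumed numbers, two projects leave 40% of the week per project, while five leave almost nothing per project: a steeper version of the decline the study reports.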
Preston Smith and Don Reinertsen spend an entire chapter addressing "overload" in Developing Products in Half the Time (2nd Edition, John Wiley, 1998, Chapter 11). They agree with Wheelwright and Clark, but add that if speed is the project objective, resources must be dedicated to only that project to accomplish a necessarily fast cycle. This is the perspective I took with me when I recently attended MRT's conference on "Applying Constraints Management to Product Development" in Chicago. But during the extended Q&A session with TOC creator Dr. Eli Goldratt, I heard him say: "When bad multitasking is eliminated and projects are managed according to the critical chain, an engineer should have the capability of contributing effectively to as many as 12 projects!" Twelve projects? Where is Dr. Goldratt coming from, and why does his view differ so greatly from the others? The obvious answer is that Wheelwright, Clark, Smith and Reinertsen assume the constrained project management system, whereas Goldratt advocates opening up the flow fundamentally. One assumes a reality of constrained capacity; the other attacks the basic idea of capacity. Of course, the freely flowing, critical-chain oriented system is much rarer than the fire-fighting environment that the majority of us live in. Bad multitasking abounds. Most places face self-fulfilling overload as projects get jammed into the front end of the pipe while much less goes out the other end. The problem with changing this situation is that the remedies almost universally suggest that the road to profitability begins with doing less. This is counterintuitive to most and will often be immediately dismissed as impractical. This is similar to what happens with lean manufacturing. When shop floor managers are told that machine setup times need to be a fraction of what they are, you can almost guarantee a revolt - and that you will be presented with a list of reasons why it is impossible.
Many books about lean are filled with stories of Japanese consultants who show up on the floor unannounced and start breaking down machines with crowbars. Manufacturing personnel are thereby shocked into submission. When Steve Jobs chose an engineer he wanted to work on the original Macintosh, so the story goes, he literally pulled the plug on what the guy was working on and carried his computer to a new cube. The prescription for change is simple; it's just not very easy to accomplish. When the desired result makes sense to you but does not fit your current reality, then change the reality. Just make sure to hedge your bets. +( idle resources must exist EliG on NBNS, Fft, 10 September 2001. Logic of the talk: 0) A flow model to verify that a bottleneck must exist - otherwise throughput could rise to infinity. 1) Model the company as a chain of DEPENDENT resources. 2) We assume that some work center "X" is to be loaded to 100%. 3) Because of "Murphy" we need a safety buffer in front of work center "X". 4) All work centers upstream of "X" need at least the same capacity as "X", plus reserve capacity to refill the safety buffer when trouble strikes. ==> Machines that are not fully loaded must exist, and this is not "waste". +( Implementation Guidelines Date: Thu, 28 Oct 1999 13:02:10 +0000 From: "Rudolf Burkhard" To: Hans Peter Staber 1. Determine where your business constraint is - if your boss is introducing TA accounting then the system will be bigger - including all the plants. 2. Develop all the Throughput per Constraint Hour numbers for every customer/product combination. You can do this by product if you sell only at one price and/or you have a single customer for each product. This will rank your product line by products that are most profitable at the constraint - the conclusions are obvious. 3. Develop your operating expense for the business by category (salaries, energy, depreciation etc.) 4.
Develop your investment statement for the business. 5. Develop a Throughput statement which shows the market demand by product and the actual sales plan of the organisation. Based on the market demand you can develop the optimal Throughput for the business (where you have no constraints like having to deliver a low-Throughput article in order to get the profitable business). You can then develop the planned Throughput and the difference between the two Throughputs - so you can see the impact of your actual practices on the earnings of your business - especially if you have a bottleneck. The same table has a column which cumulates the usage of the constraint, so that you can see at what sales volume you have to start turning away business. 6. Develop your P&L statement (T-OE) and (T-OE)/I from the above numbers - again for the optimal column and the sub-optimal column. The business team can then start to work on the directions in which it wants to go - how to use the constraint optimally given other things you must do (serve the Lopez disciples). If you have excess capacity it also allows you to determine which products - if you are able to sell more - will improve your business the most. You may have excess capacity, but not too much - so you want to fill the plant with products that use the constraint efficiently. Some products are free - they don't go through the constraint. Be careful with these that you do not increase volume too much and cause an interactive constraint that can very quickly destroy the capability of your plant. I would also recommend you use some of the SPC techniques to monitor your parameters - most of the fluctuations you deal with are within the capability of the process - you should only react to points that indicate a real change has occurred, or if you don't like the capability of your process in the first place. This helps keep the organisation focused on what is really important!
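Steps 2 and 5 above can be sketched in a few lines. The product names, prices, and constraint times below are invented for illustration; the point is the mechanics: compute Throughput (price minus totally variable cost) per constraint minute, rank products by it, then consume constraint capacity in rank order to see where you must start turning away business.

```python
# Sketch of Throughput-per-Constraint-Hour ranking. All figures are
# invented example data, not from the original email.

products = [
    # (name, selling price, totally variable cost,
    #  constraint minutes per unit, weekly market demand)
    ("P", 90.00, 45.00, 15, 100),
    ("Q", 100.00, 40.00, 30, 50),
    ("R", 70.00, 20.00, 10, 80),
]
CONSTRAINT_MINUTES = 2400  # assumed available constraint time per week

def t_per_minute(p):
    """Throughput generated per minute of constraint time."""
    _, price, tvc, minutes, _ = p
    return (price - tvc) / minutes

# Rank: the product earning the most per constraint minute comes first.
ranked = sorted(products, key=t_per_minute, reverse=True)

plan, used = [], 0
for name, price, tvc, minutes, demand in ranked:
    # Sell as much as demand allows until the constraint runs out.
    take = max(0, min(demand, (CONSTRAINT_MINUTES - used) // minutes))
    used += take * minutes
    plan.append((name, take))

print(plan)
```

Running this sells all of R (5.00/min) and P (3.00/min) but only 3 units of Q (2.00/min); the cumulative constraint-usage figure shows exactly where demand exceeds the constraint, which is step 5's "turn away business" point.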
A good place to start is Thomas Corbett's book called 'Throughput Accounting'. What I have said above, and more on how to use the info, is in there. Once you have this in place (it is not difficult - most accounting systems can supply what you need), then you can start on the two measures Throughput Dollar Days and Inventory Dollar Days. You will find Corbett's book a bit limited - it does not look at multisite TA accounting enough - so your people will have to do some in-depth thinking about how you want to deal with transfer prices from within Framatome and the like. Corbett's book is available from North River Press, ISBN 0-88427-158-7. I got it from Amazon US. If it is available from Amazon De then that is much cheaper (mailing!). Good Luck! Keep me posted - would like to work with you to implement the above and other parts of TOC. Rudi Rudi Burkhard Essex House CH-1187 St. Oyens/VD Switzerland Tel: 0041 21 818 4051 or 0041 21 828 3491 email: rudiburkhard@compuserve.com =========================================================================== From: "Graaf, Menno (Menno)" To: "CM SIG List" Subject: [cmsig] Implementing TOC/CC in a cost driven environment Date: Mon, 10 Jan 2000 11:42:24 +0100 I'd like to get an idea if any of you have seen examples of partial TOC/CC implementations in companies which, as a whole, are and stay cost driven. I am especially interested in any negative branches and how you handled these (if at all). I know from Eli G. that he has experienced this as a major problem (see below). Though, on a theoretical level, I understand what he is saying, I'd like to get a feel for some real-life examples. In a big company I do not necessarily see major negative branches. People that become "redundant" may be able to get a new job within the company. There is work enough! And if the business unit implementing TOC is independent of other areas, then we also do not have the issue of other areas being "choked" by the increased output.
So, what other scenarios are we talking about? To be clear, I am not looking for names. I just am looking for real-life scenarios. Thanks, Menno Graaf p.s.: In his POOGI Forum letter nr 2 Eli says the following: - The improvements in the section that implements TOC lead to a negative impact on the performance of the company. Examine for example a company where Production is feeding a large distribution network. If distribution is not adopting TOC, it is likely that improvement in production lead time will not be translated into a drastic cut in the desired levels of inventory in distribution. The increased throughput in production will result in inflated inventories in distribution - sometimes to the extent that the company might suffer from cash shortage. - The ongoing improvements in the section that implements TOC do not, after a while, contribute to the bottom line. Simply, after a while, the constraint of the system is no longer in that section. Then the impact of this section's superior subordination is wasted, since the other section does not know how to manage the constraint. - The most frequent case: The TOC implementation is squashed. This happens when the head of the section is promoted and a new "cost world" manager takes over. - The most devastating case: The constraint is no longer in the section that implemented TOC. The section continues to improve. The outcome is not an increase in throughput, but an increase in excess man-power. Then corporate decides to trim the excess. People are punished for doing the right things, and as a result the section folds. =========================================================================== From: "Richard L. Franks" To: "CM SIG List" Subject: [cmsig] Re: Shooting Ourselves in the Foot? Date: Thu, 13 Apr 2000 17:22:21 -0700 Warning: This is fairly long.
It also assumes a fair amount of understanding about the realities of implementing TOC Project Management. I gave a 3 hour session on Single Project Management at the Management Roundtable conference. In it I listed 5 examples of mistakes people make in trying to implement TOC PM and the problems that result. The point was to warn them away from mistakes others have made. I included the dangers of applying TOC Single Project Management in a multi project environment. Most of the experts I know who successfully facilitate implementation of TOC PM would say the same thing. I do want TOC PM to spread, but I certainly don't want someone to start an implementation which is highly likely to fail. After all, some project overruns are disastrous (for the client, the company, and the people involved). The basic issue is that TOC PM planning and execution are based on some tasks completing in less than the time estimate and some taking more than the time estimate. Buffer sizing and buffer management both depend on this. But it won't happen unless there is a behavioral change by both the workers and management. Consider a very common case in multi project organizations: The people on your project are multi tasking on other projects where they are held to task milestones. If this is true, then 1. they can't road run on yours (because they are multi tasking), 2. they will tend to give your tasks lower priority than their other tasks (because they are held to milestones on the other tasks and no milestones on yours). If you followed the guidelines for task sizing, your estimates will be about 1/2 the usual, based on the expectation that people will be working differently on your project. But they won't be. This guarantees that your project tasks will take longer (perhaps much longer) than planned. If you sized the buffer according to the usual guidelines you'll probably overrun it and be late, possibly quite late. This is very likely to happen.
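The arithmetic behind the "about 1/2 the usual" estimates can be made concrete. Below is a sketch of one common critical chain sizing convention, the 50% cut-and-paste rule; the task durations are invented, and real implementations often use other methods (such as the square root of the sum of squares of the cut amounts):

```python
# Sketch of critical chain estimate-halving and project buffer sizing.
# ASSUMPTION: the "50% rule" variant - cut each padded estimate in half,
# put half of the removed safety into a project buffer.

def critical_chain_plan(padded_estimates):
    """Return (aggressive task estimates, project buffer size)."""
    aggressive = [e / 2 for e in padded_estimates]        # strip padding
    removed_safety = sum(padded_estimates) - sum(aggressive)
    project_buffer = removed_safety / 2                   # pool half of it
    return aggressive, project_buffer

tasks = [10, 8, 12, 6]  # days, padded "safe" estimates (invented)
aggr, buf = critical_chain_plan(tasks)
print(aggr, buf)
```

The chain shrinks from 36 days of padded estimates to 18 days of aggressive ones, protected by a 9-day project buffer. The failure mode described above is exactly this plan executed by people who are still multitasking: the halved estimates are missed one by one, and the buffer is blown through.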
Of course you could estimate the project differently, size the buffer differently, have people multi task, and even have task milestones on your project. But if you do that, you aren't doing TOC PM or anything like it. You also won't get the speed and capacity increases. In the 3 hour session, I did tell people how they could succeed in doing TOC PM in a multiple project environment. The problem is that it is difficult to set up the conditions. Here are two different conditions which would work. 1. Get agreement from management that your project is top priority and that people should work your tasks to completion before doing other project tasks. That would solve the problem, but is unlikely to occur. Also, if it does occur, not many people will be impressed with your success. Under those conditions, lots of people will believe they could succeed with almost any project management method. 2. Or, get most of the people on the project dedicated to it. For the few people who are shared, either get an agreement that they won't multi task, or inflate their average times to account for it. If they are going to multi task, watch them like a hawk to make sure your project gets its share of their time. If you don't keep up the pressure, the task milestones on other projects will drive people to spend more time on those projects than yours, possibly much more. The second set of conditions is more likely to occur. But that basically says that you are "almost" in a single project environment. I've seen very few environments which were pure single project environments. However, a good enough approximation is good enough. Dick Richard L. Franks Oak Hill Consulting, A Certified Associate of the Goldratt Institute rlfranks@oakhill.net 530-347-4907 +( improvement through introduction of technology If you introduce new technology you have to change the rules and procedures!
BELIEF 1) The technology can bring benefits if and only if it diminishes a limitation. 2) Long before the availability of a technology, we developed modes of behaviour, policies, measurements and rules to help us accommodate the limitation. THE PROCESS OF SEARCHING FOR NEW TECHNOLOGY Ask the following questions in the order they appear (e.g., the new technology is an MRP system): a) What is the main power of the technology? A: fast calculation of net requirements. b) What limitations does it diminish? A: the time to calculate net requirements. c) What rules helped us to accommodate the limitation? A: calculation of net requirements is done monthly. d) What rules should we use now? A: calculation of net requirements is done weekly. --- TOC's Six Necessary & Sufficient Questions relating to Technology (N&S) identify: the power of a new technology, the limitation it addresses, the rules created to cope with the previous limitation that must be changed, the new rules needed to exploit the power of the technology, the resulting required changes in the technology itself, and finally how the technology provider, integrator and user can work together to enable the implementation of the required changes on a win/win basis. +( Information Cycle and Business Units +c Rizzo BUSINESS FRAGMENTS ARE NOT BUSINESS UNITS The Information Cycle And The Theory Of Constraints Tony Rizzo tocguy@pdinstitute.com The Product Development Institute, Inc. http://www.pdinstitute.com/ ************************************************************ *** *** *** A business fragment doesn't become a business unit *** *** simply because we call it a business unit. *** *** *** ************************************************************ Are the phrases "cost center" and "profit center" used with frequency by your colleagues? If they are, then your new-product introduction company probably consists of business fragments that are being called business units.
Your company's bottom line is surely suffering the consequences. In an effort to improve the financial performance of their clients, business consultants have preached the business unit model to those clients. The concept of the business unit, of course, is useful and valid. But the _application_ of the business unit model has been anything but valid. To understand the problems caused by the incorrect application of the business unit model, we need to understand what constitutes a true business unit. For this, we need a valid and useful abstraction of a new-product introduction business. We begin by defining such an abstraction. We'll call it the Information Cycle. THE INFORMATION CYCLE To begin to understand the Information Cycle model, consider this. The customers of your business are the beginning and end of the Information Cycle of your own business. The information cycle begins when your customers provide input information in its most raw state. This input information arrives as a list of customer needs, wants, or complaints. We call this information state 1. If your new-product introduction business is like most, then your marketing people process this raw information input and convert it to information at state 2. This is a list of features and project deliverables. It is, in other words, a description of the product and value proposition that your new-product introduction system intends to create for your customers. But, in a very real sense, it is still information about your customers' needs. Your product development people transform this same information from state 2 to state 3. At state 3, the same customer-needs information becomes a project plan. This is a logistics network, a resource plan, a budget, and a schedule with which your development engineers expect to create not only the design of your company's new product but also the production line with which to produce it.
The prototype, which is the first physical copy of your company's new product, contains all the same information that your customers provided initially. The difference is that now the information is much refined and matches the original customer needs much more closely. The prototype represents information at state 4. What is information state 5? It is the production line. The production line contains exactly the same information about the same customer needs. But, at state 5 that information takes the form of machines, machine instructions, worker instructions, bills of materials, purchasing schedules, and production schedules. The production line, of course, imparts its information content onto the raw materials and purchased components that flow through it. These are transformed from materials devoid of information to the product itself, which is information state 6. In fact, we could think of the product as the anti-need. Now, think of an anti-need (product) colliding with a customer need and annihilating it, creating an explosion of customer pleasure and an ensuing shower of monetary particles. This is information state 7. This is the most complete state of information possible. This information state exists only when a product has been purchased and used successfully. For your company, information state 7 is also the most useful information state. This is when your people know the degree to which their efforts are successful. This is also when they learn of your customers' next set of needs, which, when captured, enter your new-product development cycle as information at state 1. Thus, the Information Cycle continues. Is the Information Cycle a useful model? It is useful only if it helps us to understand and improve the performance of our real organizations. But, before we even attempt to use this model, we had better verify its validity.
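The seven states, and the wrap-around from state 7 back to state 1, can be captured in a small data structure. A sketch (the state names are paraphrases of the article's descriptions, not terms the author uses):

```python
# The Information Cycle's seven states as an enumeration.
from enum import IntEnum

class InfoState(IntEnum):
    RAW_NEEDS = 1        # customer needs, wants, complaints
    SPECIFICATION = 2    # features, deliverables, value proposition
    PROJECT_PLAN = 3     # logistics network, resources, budget, schedule
    PROTOTYPE = 4        # first physical copy of the product
    PRODUCTION_LINE = 5  # machines, instructions, BOMs, schedules
    PRODUCT = 6          # the "anti-need" itself
    SATISFIED_NEED = 7   # purchased and used successfully; new needs emerge

def next_state(s: InfoState) -> InfoState:
    """Advance one step around the cycle; state 7 feeds state 1."""
    return InfoState(s % 7 + 1)

print(next_state(InfoState.SATISFIED_NEED).name)
```

The modular step in `next_state` encodes the essential point of the passage: state 7 is not an end but the input to the next pass of the cycle.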
To verify the validity of this model, we compare what the model suggests with things we already know to be true. What does the Information Cycle model suggest? It suggests that two things are vitally important in the new-product introduction business: speed AND quality. The speed part is clear. A distinct lack of speed has been demonstrated to cause loss of market share and profitability. Thus, speed is vital, and we know this. But the Information Cycle model also suggests that quality is every bit as important as speed. The quality of the information that flows through the cycle ensures that our efforts to increase the speed of the cycle have a positive impact on the bottom line. In the absence of information quality, which ultimately becomes product quality, our rapid-fire Information Cycle systems shoot blanks, i.e., they deliver products that nobody wants. Thus, quality is important. Quality of inputs, quality of product specifications, quality of project plans, quality of prototypes, quality of production operations, quality of products, and even quality of delivery and service, all measured relative to customer needs and expectations, are important. Is this implication, that information quality and ultimately product quality are necessary conditions, consistent with our experience? Most certainly, it is. The message of Dr. W. Edwards Deming is still fresh in the minds of millions, and the conclusions that our Information Cycle model suggests are entirely consistent with that message. Quality counts! Ironically, the frequently perceived conflict between speed and quality in product development is really no conflict at all in light of the Information Cycle model. Both are vitally important.

BUT, WE'RE ALREADY USING THE INFORMATION CYCLE

Well, yes, your company already does use the Information Cycle described in the previous section. It has no choice but to use it.
Each product line that your company brings to market is the result of one Information Cycle. If your company has ten product lines, then your company has ten Information Cycles. So, indeed, you are already using the Information Cycle. This is not the question of interest here. Rather, the question of real interest is: how well does your company use its Information Cycles? Before we can answer this question, we need to understand the basics of the Theory of Constraints (TOC). We'll use the Information Cycle model itself to gain this understanding. Each of your company's many Information Cycles is just that, a cycle. It begins and ends with customers, and then it starts all over again. Flowing throughout this cycle there is a special working fluid, not unlike the working fluid of a closed-loop power cycle. In these cycles, however, the working fluid is not water but information about the needs of your company's customers. When your product development system works well, information flows through your cycles at a fast pace, and it is information of high quality, i.e., information that describes your customers' more important needs. Now, in your mind's representation of your company's many Information Cycles, picture the cycles moving at an increasingly fast pace, so that with each pass through the various functions the overall speed of the information flows increases. Can this acceleration continue indefinitely? Can your cycles reach infinite speed? Of course not. Something is certain to restrict the speed with which information flows through your company's Information Cycles. Perhaps it is the speed with which the engineering subsystems complete their projects. Or perhaps it is the ability of your marketing people to capture the real needs of your customers. Whatever it happens to be, there is always something that restricts the flows.
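The claim above, that something always limits the cycle's speed, is the familiar bottleneck principle: the cycle can move no faster than its slowest stage. A minimal sketch (the stage names and rates are invented for illustration):

```python
def constraint(stage_rates: dict) -> tuple:
    """Return (name, rate) of the slowest stage: the constraint that sets
    the pace of the whole Information Cycle."""
    name = min(stage_rates, key=stage_rates.get)
    return name, stage_rates[name]

# Hypothetical cycles per year achievable by each function of one Information Cycle.
rates = {"marketing": 12.0, "engineering": 4.0, "production": 9.0, "sales": 10.0}
```

With these numbers, `constraint(rates)` identifies engineering at 4 cycles per year; speeding up any other stage leaves the cycle's overall pace unchanged, which is exactly why TOC says to focus on the constraint.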
There is always something that constrains the performance of your Information Cycles and, consequently, the profitability of your entire company. This is the underlying principle of the Theory of Constraints. Logical, isn't it? The Theory of Constraints suggests that we focus our profit improvement efforts upon whatever happens to be restricting each of a company's Information Cycles. This focused effort should improve the profitability of any company's business units. It should improve the profitability of your business unit as well. Will it? That depends. The Theory of Constraints is certain to increase your business unit's profitability significantly, if your business unit encompasses a complete Information Cycle. But do you have a real business unit, or do you have only a business unit segment, a shard, a fragment? Let's see. If your company is like most large companies that exist today, then it is very likely that the business unit model has been applied to it incorrectly. The company's Information Cycles have been segmented by function. Very probably, your financial people treat each function as a profit center. They track the cost of each function, and they calculate a profit quarter by quarter, forcing each function to charge sister functions for goods and services, even when no money flows into the overall company. Worse, each function manager thinks that he has a business unit, with profit and loss responsibility. Therefore, each function manager is in constant conflict with every other function manager. Arguments over transfer prices are frequent and extremely counterproductive, even damaging. The constant struggle, of course, causes severe degradation of the affected Information Cycles. Inevitably, profitability suffers. So, do you have profit and loss responsibility for such a business unit fragment? Are you responsible for the entire company's profitability? No matter.
If the business unit model has been applied as I've just described, then profitability eludes you. It will continue to elude you so long as this false business unit model continues to constrain your company's Information Cycles.

SO, WHAT DOES CONSTITUTE A BUSINESS UNIT?

By now you must be dying to ask one specific question: what is a business unit? This is an easy question to answer. In fact, Dr. E. M. Goldratt, the founder of the Theory of Constraints, has already provided the answer. A business unit is the collection of all functions needed to bring a product line to market. As a minimum, a business unit includes a Marketing Subsystem, an Engineering Subsystem, a Production Subsystem, and a Sales/Distribution Subsystem. These define the Information Cycle of the business unit. They are the front-line functions of the business. In addition, a business unit requires a finance function, a purchasing function, a human resources function, and a management function. These, at times, can be shared by several business units. Some of these, in fact, can be outsourced, given the right circumstances. But the core of a business unit is the set of all functions that define its Information Cycle. Manage these as a system, and you'll achieve extreme profitability. Permit these to be fragmented, as with the false business unit model, and profitability becomes a distant memory. That's it for this brief article. In a later article we'll discuss the damage inflicted upon the stockholders of large corporations by the functionally organized business group model. You will see why these throwbacks can never achieve speed in new-product introduction. For now, look for companies that adopt a true business unit model, and adopt a buy-and-hold strategy.

--- intercompany sales (remark by E. Goldratt at the end of video 4, DISTRIBUTION)

Date: Tue, 13 Jun 2000 23:09:17 +0100 From: Hans Peter Staber

The ongoing trend to globalization and to M&A results in increasing problems with intercompany sales.
I'm aware that E. Goldratt's proposal to solve the problem of intercompany sales is to account the total sales in the books of everybody who contributed to the production or to the sale. I was pondering a bit about this proposal and would like to know more about the reasoning behind it. Any thoughts or pointers?

From: "Gilbert, Rick" Date: Thu, 15 Jun 2000 07:14:35 -0700

To Hans and Brian and others,

1) I seem to recall a CMSIG presentation or published work detailing the multiple flaws of transfer price, along with recommending the approach that Hans references below. Any idea where I might have seen that?

2) In partial answer to Hans: I think the proposal was to credit everyone involved with the Throughput, not just the Sales. This gives credit to the team for producing results for the overall business.

Example: I supply an intermediate material that another plant in my company can process into a product to sell to the market. By the way, I also have an external market for my intermediate product. The company's goal is met when both I and my sister plant make decisions that maximize real benefit to the business. Consider the following:

Case A) I make intermediate i and sell it for price P(i), incurring a variable cost of VC(i). My sister plant buys the intermediate externally at a cost C(ext) and incurs an additional variable cost VC(p) in making product p, which it sells to the market for price P(p). How much money does the company make?

Total Throughput = P(p) - VC(p) - C(ext) + P(i) - VC(i)

Case B) I supply intermediate i to my sister plant. My sister plant processes it at the same variable cost as before into product p, which is sold at price P(p). How much money does the company make?

Total Throughput = P(p) - VC(p) - VC(i)

The difference in throughput between the two cases is -C(ext) + P(i), or more conventionally, P(i) - C(ext). Should I sell i to the outside, or ship it to the sister plant?
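Rick's two cases reduce to a single comparison. A sketch of the arithmetic (variable names follow his notation; the sample numbers are invented):

```python
def throughput_case_a(P_p, VC_p, C_ext, P_i, VC_i):
    """Case A: sell intermediate i externally; sister plant buys i externally."""
    return (P_p - VC_p - C_ext) + (P_i - VC_i)

def throughput_case_b(P_p, VC_p, VC_i):
    """Case B: supply intermediate i internally to the sister plant."""
    return P_p - VC_p - VC_i

def prefer_external_sale(P_i, C_ext):
    """Case A beats Case B exactly when P(i) - C(ext) > 0."""
    return P_i - C_ext > 0
```

For any inputs, `throughput_case_a` minus `throughput_case_b` equals P(i) - C(ext), so the external-market price versus the external sourcing cost is the whole decision, just as the post concludes.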
If the price we receive for selling i on the market exceeds the cost of buying it for use in the sister plant, it makes sense to sell i to the market. Otherwise, I should supply i internally. If I get credit for the Throughput that i contributes, whether it is sold directly or incorporated into p, then my decisions will be guided in a way that maximizes the benefit to the company. If, on the other hand, I am measured on my plant's "profit" using some artificial transfer price (all transfer prices are artificial), I may spend my time unproductively negotiating for higher transfer prices, an exercise that has no value to the parent company.

My reservation has to do with answering the question, "Why would the sister plant want to buy from the outside, rather than from me?" Assuming that my variable costs for making "i" are always lower than the market price -- and I hope they always would be -- the sister plant would always want to get its intermediate from me in order to maximize its throughput. How does the company make the sister plant happy about the decision to sell i directly to the market and source the sister plant from external producers? My first response is that when the internal source of "i" is not available, they will know that it is for the benefit of the company throughput and will feel OK about securing an external source. Somehow, that response isn't entirely satisfactory, so I'd welcome others' comments.

3) To Brian Potter: No, you can't roll up these throughputs to compute the throughput for the company. That is not the purpose of this measure. Nor do you need transfer prices to compute total company profit. Total company sales (external) less total company expenses (variable and fixed) equals total company profit. After all, what happens to the profits shown by transfer price when they are rolled up to the company report? They drop out. Transfer price received by one unit is offset by transfer price paid by the receiving unit.
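The cancellation Rick describes is easy to sketch with a toy ledger (names and numbers invented): each internal sale appears as equal and opposite entries in two units' books, so the company roll-up depends only on external sales and total expenses, whatever transfer price is chosen.

```python
def unit_profit(external_revenue, transfer_received, transfer_paid, expenses):
    """Local 'profit' of one unit, as conventional transfer pricing reports it."""
    return external_revenue + transfer_received - transfer_paid - expenses

def company_profit(units):
    """Roll-up: every transfer received by one unit is a transfer paid by
    another, so the transfer price cancels out of the total."""
    return sum(unit_profit(*u) for u in units)

# Supplier sells an intermediate internally for 50; the sister unit sells
# the finished product externally for 120.
supplier = (0.0, 50.0, 0.0, 30.0)   # no external revenue, expenses 30
seller = (120.0, 0.0, 50.0, 40.0)   # pays 50 internally, expenses 40
```

Here the roll-up is (0 + 50 - 0 - 30) + (120 + 0 - 50 - 40) = 50, which is just external sales (120) minus total expenses (70); doubling the transfer price changes each unit's reported "profit" but not the total.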
If one has to use transfer price for tax or regulatory reporting reasons (I'm not an accountant, so I don't know), hide it in a dark closet and keep it out of the unit performance measures. Rick Gilbert rick.gilbert@weyerhaeuser.com

> From: Potter, Brian (James B.) [SMTP:jpotter5@ford.com] > Sent: Wednesday, June 14, 2000 10:09 AM > > I have two clarity requests: > > - What do you mean by "M&A"? Perhaps I am wearing blinders this morning. > > - What do you mean by "intercompany"? So many native English speakers confuse the prefixes "inter-" (together, mutually, mutual, between, or occurring between) and "intra-" (within or inside of) that I have come to rely on context to ferret out intended meanings. Technically, "intercompany" sales should be external sales to a different company, but your context leaves me unsure.

From: "Potter, Brian (James B.)" Date: Thu, 15 Jun 2000 11:30:43 -0400

Rick, You are (as far as I can tell) absolutely right about the global impacts. I believe that the Goldratt Satellite Program from last year is one source for this thought chain. Significant issues surface when the components of the global organization are, in their own right, legal entities responsible for paying taxes and submitting financial reports to the Securities and Exchange Commission (or its counterparts outside the US). Consider relationships like General Motors and its international holdings, with their assorted governments and stockholders ... Each of those entities may have legal obligations to report profits while paying taxes on profits, property, payroll, ... If the total throughput for the parent organization appears on the financial books of EVERY subsidiary touching every sale dependent upon internal sales, very strange and unwanted things will happen. "Below market" transfer prices may even violate laws or international treaties when internal sales cross national boundaries.
Of course, we COULD show the global throughput (and the transfer price equal to unit variable expense) ONLY in the managerial accounting world. That takes care of the decision making. What transfer prices should the cooperating units report on their FINANCIAL results to show a profit (or loss) for each unit? Even where the units are not "complete business units" in Goldratt's sense of containing all the critical functions, they will often have (as "legal persons") those external reporting and tax-paying obligations. For those external purposes, the internally insignificant transfer prices come to life with a vengeance. To the extent that an artificial throughput number hits the FINANCIAL books, a similar issue will rear its ugly head.

From: "Danley, John" Date: Thu, 15 Jun 2000 13:09:37 -0500 Eli Schragenheim's book "Management Dilemmas: The Theory of Constraints Approach to Problem Identification and Solutions" has an interesting case study on the impact of transfer prices as it relates to "decentralizing" an organization.

From: "Bill Dettmer" Date: Thu, 15 Jun 2000 11:34:13 -0700 ----- Original Message ----- From: Ted Cadien Sent: Thursday, June 15, 2000 8:22 AM > There is an on-line white paper dealing with transfer pricing and inter/intra-company/department sales/profit at: http://emedia.sap.com/usa/html/8572/body1.htm > N.B. The paper has a SAP flavour to it but is also a general concept. It leads to three views of data... financial, operational, and tax/legal entity. > There are also references to the use of internal profits as a management practice to help drive the business: > "Responsibility Accounting by Profit Centers: Responsibility Accounting helps you monitor internal profit and financial performance measures for profit centers. Profit Center Accounting provides you with an internal profit and loss statement as well as the relevant balance sheet information so that you can calculate needed financial key performance measures (e.g. economic value added, cash flow statements, ...).
> Transfer Pricing: Transfer Pricing helps you manage decentralized organizational units (profit centers) as distinct companies acting independently on the market. You can evaluate internal sales and exchanges of goods and services with internal revenues and costs and make this information available to your profit center managers as additional control information. Moreover, you can also evaluate goods and services from the viewpoint of the entire group and use this for overall group control." [Damn! Scary, isn't it...?]

From: "Jim Bowles" Date: Thu, 15 Jun 2000 22:25:41 +0100 Rick, you asked: 1) I seem to recall a CMSIG presentation or published work detailing the multiple flaws of transfer price, along with recommending the approach that Hans references below. Any idea where I might have seen that? Perhaps you are thinking of Eli's Late Night Discussions. There is one devoted to this topic. It can now be found in the book of essays. Jim Bowles

From: "Ching, Clarke" Date: Fri, 16 Jun 2000 10:44:26 +0100 Or look on the web: http://www.goldratt.com/lnd3.htm

---

From: "Brinton, Russ" Date: Mon, 19 Jun 2000 12:41:30 -0700 What is the value added at each step? What is included? What is "left out"? How do you calculate "value added" when labor is considered (in TOC) a fixed expense, i.e. part of Operating Expense? What you propose is currently the situation we have in the industry, and it results in the current conflict/tension among operating divisions which SHOULD be working together! The "supplier" division tries to "get the best price", i.e. "the most", so they look good, and the "customer" division argues for a "lower price" to make their metrics look "better". Hmmm, sounds like a cloud!! (which has been done) The other problem with this form of transfer pricing: the supplier division must include that part of the corporate overhead that is being allocated to them,
yet when their work is off-loaded (to save money), that corporate overhead gets reallocated (it doesn't go away!!) to the other divisions, immediately making their products "more expensive". This is the situation that leads to all those "good business cases" for off-loading!! "Look how much it costs to have XYZ division make this part!!! Acme Mfg. down the street can do it for 20% less!!"

Consider this conflict: a division that is both a customer and a supplier succeeds in establishing the lowest prices for the "stuff" coming in and the highest prices for the "stuff" going out. If it were operating alone, it would be considered wildly successful. However, as part of a series of steps in the corporation, let's look at the result. Value added = "price" going out minus "price" coming in. By being "successful" at getting the "best" price at both ends, this division makes its products APPEAR VERY EXPENSIVE!! Are they really? Any division that does this can count on getting closed down very quickly, as the corporate heads will definitely find someone outside the company to "do it cheaper", and the division's work will be off-loaded. If it has high prices coming in and low prices going out, it doesn't appear very valuable ("anyone can do it"), and if it doesn't add much value, how can the corporation charge a higher price? And why is the division paying so much for its facilities, workers, etc.? (The division still has to "pay" for all the operating expenses.) Once again it comes under fire. Hmmmm. Unless I'm missing something, this appears to be a situation that doesn't lend itself to making good decisions toward the "Goal".

---

From: "Potter, Brian (James B.)" Date: Thu, 15 Jun 2000 08:14:30 -0400 X-Message-Number: 1 > Hans, > I seem to be your only taker so far. Greg Latner and I had a big off-list discussion (still unresolved in my mind) regarding transfer prices, to which this topic relates.
A glossary follows my close for terms tagged with an asterisk (*), below.

> If you simply do accounting for internal sales* in the usual way appropriate to external sales*, the transfer prices* will influence the reported profit (or loss) for both the buying and selling subsidiaries. At the level of the parent organization, the transfer price cancels out to zero. The seller's revenue and the buyer's expense are equal and opposite when the results of the two subsidiaries "roll up" into the parent organization's books.

> Goldratt has proposed:
> - Use unit variable expense as the transfer price for internal sales.
> - When goods or services sell externally, give (for managerial accounting purposes only) each subsidiary contributing to the delivered goods or services "credit" for the TOTAL throughput (Hans, the distinction between "throughput" and "sales revenue" is IMPORTANT and different from what I read in your original post) received by the parent organization.

> "Sharing" external sales revenue "fairly" among subsidiaries contributing to those external sales raises intractable issues similar to those created by absorption cost accounting. How much revenue did each subsidiary "earn"? How can you tell? "Allocating revenue" looks very much like the other face of the "cost allocation" coin. Both are inconvenient fictions. As I understand the situation (this should attract some commentary), those two notions (when [and ONLY when] taken together) create the following effects:

> - Subsidiaries waste no resources negotiating transfer prices, which have no net impact on the global results in any case. Benefit: sales, purchasing, and executive talent at each subsidiary can focus on issues that might make a difference rather than squabbling over nothing.
> - The transfer price(s) no longer divide(s) the throughput from the ultimate external sale among the subsidiaries which cooperated to create that external sale, BUT traditional accounting methods will "credit" the full throughput to the unit selling the "end product" to the external customer. Negative branch: one unit gets ALL the throughput, and other units which "touched" the product along the way receive no throughput to offset the operating expenses committed to their effort in creating the external sale. Inspecting only the local managerial accounting records, the transaction LOOKS like a loser (OE committed and NO Throughput earned).

> - Injection: Have EVERY subsidiary record the TOTAL throughput AT THE PARENT ORGANIZATION for each external sale as its throughput earned by internal sales leading to the external sale. Now (Goldratt and some others assert), every subsidiary will evaluate its product mix "correctly" to maximize throughput for the PARENT ORGANIZATION.

> Personally, I have a problem with this approach because "rolling up" the managerial accounting results to the parent organization seems to inflate the throughput by including it once for each subsidiary which "touched" the product along the way to the final sale. I admit that (for the cases I have tried) this combination of transfer price and throughput reporting leads to "correct action" by each member of the value chain as that member faces the following critical decisions:

> - Acquiring inputs: make, buy externally, or buy from another subsidiary?
> - Selling outputs: sell externally or sell to another subsidiary?
> - Product mix at the constraint: how much of which products to make?

> I admit that I have not seen a counterexample to the assertion that these behaviors maximize performance for the parent, and that maximizing performance for the parent is THE proper obligation of each subsidiary. Nonetheless, the odd behavior of the managerial accounting "roll up" STILL troubles me ...
> Yours, > Brian Potter, Warranty Analyst and FAB Site Coordinator, AutoAlliance International, Inc. > Voice: 734/783-8326 Fax: 734-7838/333 e-mail: jpotter5@ford.com (work) JmBPotter@aol.com (home) > If winning is not important, then tell me, Commander, why keep score? - Worf

> Glossary:
> External sales: sales which bring revenue into the parent organization rather than moving cash from one subsidiary to another
> Internal sales: sales to another piece of the same parent organization (even if the buying and selling components are distinct legal entities in different countries)
> Transfer price: the price one component of an organization pays another component of the same organization for goods or services "sold" internal to the parent organization

-----Original Message----- From: Hans Peter Staber [mailto:hpstaber@compuserve.com] Sent: Wednesday, June 14, 2000 15:29 To: CM SIG List Subject: [cmsig] Re: intercompany sales

> Potter, Brian (James B.) wrote: > I have two clarity requests: > - What do you mean by "M&A"? Perhaps, I am wearing blinders this morning.
> American terminology here in Europe: mergers and acquisitions :)
> - What do you mean by "intercompany"?
> Sales between business units or sites belonging to the same corporation. It may be within the same country (different legal entities) or in different countries (different legal entities).

-----Original Message----- From: Hans Peter Staber [mailto:hpstaber@compuserve.com] Sent: Tuesday, June 13, 2000 18:09 To: CM SIG List Subject: [cmsig] intercompany sales

> The ongoing trend to globalization and to M&A results in increasing problems with intercompany sales. > I'm aware that E. Goldratt's proposal to solve the problem of intercompany sales is to account the total sales in the books of everybody who contributed to the production or to the sale. > I was pondering a bit about this proposal and would like to know more about the reasoning behind it. > Any thoughts or pointers?
> HP Staber/Salzburg

----------------------------------------------------------------------
Subject: CC/MPM & the Sydney Olympic Games From: "Potter, Brian (James B.)" Date: Thu, 15 Jun 2000 09:30:00 -0400 X-Message-Number: 2

> Fellow List Denizens, > Andrew Fenwick sends his thanks. As before, I shall forward any other postings on this thread to him. > :-) > Brian > ... mechanics spent three-quarters of their time waiting in line for parts. - W. E. Deming

-----Original Message----- From: Fenwick, Andrew [mailto:Andrew.Fenwick@team.telstra.com] Sent: Thursday, June 15, 2000 06:27 To: 'Potter, Brian (James B.)' Cc: 'Peter.Evans@au.uu.net' Subject: RE: OFF-List: [cmsig] CC/MPM & the Sydney Olympic Games

> Hi Brian & Peter, > Thank you for your comments. I like the idea on resource managers. You are right too! Bloody Telstra. Resources to burn - but getting less & less. > I have a first TOC workshop running tomorrow & I will float the RM suggestion. > Thanks again guys, and Peter, if you have any trouble with your phones, drop my name - it can't make it any worse! > Andrew

-----Original Message----- From: Potter, Brian (James B.) [SMTP:jpotter5@ford.com] Sent: Friday, 9 June 2000 10:24 To: 'Andrew.Fenwick@team.telstra.com' Cc: 'Peter.Evans@au.uu.net' Subject: OFF-List: [cmsig] CC/MPM & the Sydney Olympic Games

> Andrew, > This suggestion comes from a respected CMSIG contributor living in your neck of the planet ... I hope you are not getting more advice than you bargained for ... > :-) > Brian Potter > We have met the enemy and he is us. - Pogo

-----Original Message----- From: Peter Evans [mailto:Peter.Evans@au.uu.net] Sent: Thursday, June 08, 2000 19:45 To: CM SIG List Subject: [cmsig] RE: CC/MPM & the Sydney Olympic Games

> Well, I live in Sydney, not too far from the Olympic site, and am a victim (sorry, customer) of Telstra. Unlike most of the people I know, I will probably stay here for the Olympic period.
Given the short period of time that Andrew has left, I suggest the only real action with short-term effect is to vigorously attack multi-tasking. > Can this be done without implementing MPM? How about the following as a start: > 1. For each resource group, shift one of the managers (Telstra has never been under-managed, so should be no lack of bodies) into a resource manager (RM) role. > 2. The RM is the task gatekeeper to the people under hes control. > 3. The RM assistant has the role of checking that each task hitting the group is ready for work, and keeping track of task completion, and staff availability. > 4. With some simple rules about not starting a task until told to do so. > Without changing anything else, this will improve the workflow. > Regards Peter Evans > ---------------------------------------------------------------------- > Subject: RE: intercompany sales From: "Gilbert, Rick" Date: Thu, 15 Jun 2000 07:14:35 -0700 X-Message-Number: 3 > To Hans and Brian and others, > 1) I seem to recall a CMSIG presentation or published work detailing the multiple flaws of transfer price, along with recommending the approach that Hans references below. Any idea where I might have seen that? > 2) In partial answer to Hans: I think the proposal was to credit everyone involved with the Throughput, not just the Sales. This gives credit to the team for producing results for the overall business. > Example: I supply an intermediate material that another plant in my company can process into a product to sell to the market. By the way, I also have an external market for my intermediate product. The company's goal is met when both I and my sister plant make decisions that maximize real benefit to the business. Consider the following: > Case A) I make intermediate, i and sell it for price, P(i) and incur a variable cost of VC(i). 
My sister plant buys the intermediate externally at a cost C(ext) and incurs an additional variable cost, VC(p) in making product p that it sells to the market for price P(p). > How much money does the company make? Total Throughput = P(p) - VC(p) - C(ext) + P(i) - VC(i) > Case B) I supply intermediate i to my sister plant. My sister plant processes it at the same variable cost as before into product p. P is sold at price P(p). How much money does the company make? Total Throughput = P(p) - VC(p) - VC(i) > The difference in throughput between the two cases is -C(ext) + P(i), or more conventionally, P(i) - C(ext). > Should I sell i to the outside, or ship it to the sister plant? If the> price we receive for selling i on the market exceeds the cost of buying it for use in the sister plant, it makes sense to sell i to the market. Otherwise, I should supply i internally. If I get credit for the Throughput that i contributes, whether it is sold directly or incorporated into p, then my decisions will be guided in a way that maximizes the benefit to the company. > If, on the other hand, I am measured on my plant's "profit" using some artificial transfer price (all transfer prices are artificial), I may spend my time unproductively negotiating for higher transfer prices, an exercise that has no value to the parent company. > My reservation has to do with answering the question, "Why would the sister plant want to buy from the outside, rather than from me?" Assuming that my variable costs for making "i" are always lower than the market price -- and I hope they always would be, then the sister plant would always want to get its intermediate from me in order to maximize its throughput. How does the company make the sister plant happy about the decision to sell i directly to the market and source the sister plant from external producers? 
My first response is that when the internal source of "i" is not available, they will know that it is for the benefit of the company throughput and will feel OK about securing an external source. Somehow, that response isn't entirely satisfactory, so I'd welcome others' comments. > 3) To Brian Potter: No you can't roll up these throughputs to compute the throughput for the company. That is not the purpose of this measure. Nor do you have to have transfer price to compute total company profit. Total company sales (external) less total company expenses (variable and fixed) equals total company profit. After all, what happens to the profits shown by transfer price when they are rolled up to the company report? -- They drop out. Transfer price received by one unit is offset by transfer price paid by the receiving unit. If one has to use transfer price for tax or regulatory reporting reasons (I'm not an accountant, so I don't know), hide it in a dark closet and keep it out of the unit performance measures. > Rick Gilbert rick.gilbert@weyerhaeuser.com > When all else fails, read the directions. If that fails, read ALL the directions. If that fails, FOLLOW the directions. - Gilbert's guide to hardware and software installation. ---------- From: Potter, Brian (James B.)[SMTP:jpotter5@ford.com] Reply To: cmsig@lists.apics.org Sent: Wednesday, June 14, 2000 10:09 AM To: CM SIG List Subject: [cmsig] intercompany sales > Hans, > I have two clarity requests: > - What do you mean by "M&A"? Perhaps, I am wearing blinders this morning. > - What do you mean by "intercompany?" So many native English speakers confuse the prefixes "inter-" (together, mutually, mutual, between, or occurring between) and "intra-" (within or inside of) that I have come to rely on context to ferret out intended meanings. Technically, "intercompany" sales should be external sales to a different company, but your context leaves me unsure. > Thanks, > Brian > The first step ... 
is to let go of the notion that cause and effect are close in time and space. - Peter Senge > > -----Original Message----- From: Hans Peter Staber [mailto:hpstaber@compuserve.com] Sent: Tuesday, June 13, 2000 18:09 To: CM SIG List Subject: [cmsig] intercompany sales > > The ongoing trend to globalization and to M&A results in increasing problems with intercompany sales. > I'm aware that E. Goldratt's proposal to solve the problem of intercompany sales is to account the total sales in the books of everybody who contributed to the production or to the sale. > I was pondering a bit about this proposal and would like to know more about the reasoning behind it. > Any thoughts or pointers? > TIA > HP Staber/Salzburg > --- > > ---------------------------------------------------------------------- > Subject: Re: intercompany sales From: Ted Cadien Date: Thu, 15 Jun 2000 11:22:05 -0400 X-Message-Number: 4 > There is an on-line white paper dealing with transfer pricing and inter/intra-company/department sales/profit at: http://emedia.sap.com/usa/html/8572/body1.htm > N.B. The paper has a SAP flavour to it but is also a general concept. It leads to three views of data... financial, operational, and tax/legal entity. > There are also references to the use of internal profits as a management practice to help drive the business: > "Responsibility Accounting by Profit Centers: Responsibility Accounting helps you monitor internal profit and financial performance measures for profit centers. Profit Center Accounting provides you with an internal profit and loss statement as well as the relevant balance sheet information so that you can calculate needed financial key performance measures (e.g. economic value added, cash flow statements, ...). > Transfer Pricing: Transfer Pricing helps you manage decentralized organizational units (profit centers) as distinct companies acting independently on the market.
You can evaluate internal sales and exchanges of goods and services with internal revenues and costs and make this information available to your profit center managers as additional control information. Moreover, you can also evaluate goods and services from the viewpoint of the entire group and use this for overall group control." > > > > -----Original Message----- From: Hans Peter Staber [mailto:hpstaber@compuserve.com] Sent: Wednesday, June 14, 2000 3:29 PM To: CM SIG List Subject: [cmsig] Re: intercompany sales > > Potter, Brian James B. wrote: > I have two clarity requests: > - What do you mean by "M&A"? Perhaps, I am wearing blinders this morning. > American terminology here in Europe: mergers and acquisitions :) > - What do you mean by "intercompany?" > Sales between business units or sites belonging to the same corporation. It may be within the same country (different legal entities) or in different countries (different legal entities). > ---------------------------------------------------------------------- > Subject: intercompany sales From: "Potter, Brian (James B.)" Date: Thu, 15 Jun 2000 11:30:43 -0400 > You are (as far as I can tell) absolutely right about the global impacts. I believe that the Goldratt Satellite Program from last year is one source for this thought chain.
> Significant issues surface when the components of the global organization are in their own right legal entities responsible for paying taxes and submitting financial reports to the Securities and Exchange Commission (or its counterparts outside the US). Consider relationships like General Motors and its international holdings, along with the assorted governments and stockholders ... Each of those entities may have legal obligations to report profits while paying taxes on profits, property, payroll, ... If the total throughput for the parent organization appears on the financial books of EVERY subsidiary touching every sale dependent upon internal sales, very strange and unwanted things will happen. "Below market" transfer prices may even violate laws or international treaties when internal sales cross national boundaries. > Of course, we COULD show the global throughput (and the transfer price equal to unit variable expense) ONLY in the managerial accounting world. That takes care of the decision making. What transfer prices should the cooperating units report on their FINANCIAL results to show profit (or loss) for each unit? Even where the units are not "complete business units" in Goldratt's sense of containing all the critical functions, they will often have (as "legal persons") those external reporting and tax paying obligations. For those external purposes, the internally insignificant transfer prices come to life with a vengeance. To the extent that an artificial throughput number hits the FINANCIAL books, a similar issue will rear its ugly head. > :-) > Brian > > -----Original Message----- From: Gilbert, Rick [mailto:rick.gilbert@weyerhaeuser.com] Sent: Thursday, June 15, 2000 10:15 To: CM SIG List Subject: [cmsig] RE: intercompany sales > > To Hans and Brian and others, > ... > 3) To Brian Potter: No, you can't roll up these throughputs to compute the throughput for the company. That is not the purpose of this measure.
Nor do you have to have a transfer price to compute total company profit. Total company sales (external) less total company expenses (variable and fixed) equals total company profit. After all, what happens to the profits shown by transfer price when they are rolled up to the company report? -- They drop out. Transfer price received by one unit is offset by transfer price paid by the receiving unit. If one has to use transfer price for tax or regulatory reporting reasons (I'm not an accountant, so I don't know), hide it in a dark closet and keep it out of the unit performance measures. > Rick Gilbert rick.gilbert@weyerhaeuser.com > When all else fails, read the directions. If that fails, read ALL the directions. If that fails, FOLLOW the directions. - Gilbert's guide to hardware and software installation. > ---------------------------------------------------------------------- > Subject: RE: intercompany sales From: "Danley, John" Date: Thu, 15 Jun 2000 13:09:37 -0500 X-Message-Number: 6 > Eli Schragenheim's book "Management Dilemmas: The Theory of Constraints Approach to Problem Identification and Solutions" has an interesting case study on the impact of transfer prices as it relates to "decentralizing" an organization. > -----Original Message----- From: Gilbert, Rick [mailto:rick.gilbert@weyerhaeuser.com] Sent: Thursday, June 15, 2000 9:15 AM To: CM SIG List Subject: [cmsig] RE: intercompany sales > > To Hans and Brian and others, > 1) I seem to recall a CMSIG presentation or published work detailing the multiple flaws of transfer price, along with recommending the approach that Hans references below. Any idea where I might have seen that? > 2) In partial answer to Hans: I think the proposal was to credit everyone involved with the Throughput, not just the Sales. This gives credit to the team for producing results for the overall business. > Example: I supply an intermediate material that another plant in my company can process into a product to sell to the market.
By the way, I also have an external market for my intermediate product. The company's goal is met when both I and my sister plant make decisions that maximize real benefit to the business. Consider the following: > Case A) I make intermediate i and sell it for price P(i), incurring a variable cost of VC(i). My sister plant buys the intermediate externally at a cost C(ext) and incurs an additional variable cost VC(p) in making product p, which it sells to the market for price P(p). > How much money does the company make? Total Throughput = P(p) - VC(p) - C(ext) + P(i) - VC(i) > Case B) I supply intermediate i to my sister plant. My sister plant processes it at the same variable cost as before into product p. Product p is sold at price P(p). How much money does the company make? Total Throughput = P(p) - VC(p) - VC(i) > The difference in throughput between the two cases is -C(ext) + P(i), or more conventionally, P(i) - C(ext). > Should I sell i to the outside, or ship it to the sister plant? If the price we receive for selling i on the market exceeds the cost of buying it for use in the sister plant, it makes sense to sell i to the market. Otherwise, I should supply i internally. If I get credit for the Throughput that i contributes, whether it is sold directly or incorporated into p, then my decisions will be guided in a way that maximizes the benefit to the company. > If, on the other hand, I am measured on my plant's "profit" using some artificial transfer price (all transfer prices are artificial), I may spend my time unproductively negotiating for higher transfer prices, an exercise that has no value to the parent company. > My reservation has to do with answering the question, "Why would the sister plant want to buy from the outside, rather than from me?"
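Gilbert's Case A/B arithmetic is easy to sanity-check in code. The sketch below is only an illustration: the prices and costs are invented numbers, not figures from the post, and the variable names mirror his notation.

```python
# Sketch of Gilbert's two sourcing cases (illustrative numbers, not from the post).
# P_i:   market price of intermediate i    VC_i: variable cost of making i
# C_ext: cost of buying i externally       P_p:  price of final product p
# VC_p:  additional variable cost of making p

P_i, VC_i, C_ext = 40.0, 25.0, 35.0
P_p, VC_p = 100.0, 30.0

# Case A: sell i to the market; the sister plant sources externally.
throughput_A = (P_p - VC_p - C_ext) + (P_i - VC_i)

# Case B: supply i internally at the same variable cost.
throughput_B = P_p - VC_p - VC_i

# The difference reduces to P(i) - C(ext), exactly as in the post.
assert abs((throughput_A - throughput_B) - (P_i - C_ext)) < 1e-9

# Decision rule: sell i externally only when its market price exceeds
# the external sourcing cost faced by the sister plant.
sell_externally = P_i > C_ext
print(throughput_A, throughput_B, sell_externally)  # 50.0 45.0 True
```

With these made-up numbers the market price of i ($40) exceeds the external sourcing cost ($35), so selling i to the market adds $5 of company throughput; flip the two figures and the rule reverses.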
Assuming that my variable costs for making "i" are always lower than the market price -- and I hope they always would be -- the sister plant would always want to get its intermediate from me in order to maximize its throughput. How does the company make the sister plant happy about the decision to sell i directly to the market and source the sister plant from external producers? My first response is that when the internal source of "i" is not available, they will know that it is for the benefit of the company throughput and will feel OK about securing an external source. Somehow, that response isn't entirely satisfactory, so I'd welcome others' comments. > 3) To Brian Potter: No, you can't roll up these throughputs to compute the throughput for the company. That is not the purpose of this measure. Nor do you have to have a transfer price to compute total company profit. Total company sales (external) less total company expenses (variable and fixed) equals total company profit. After all, what happens to the profits shown by transfer price when they are rolled up to the company report? -- They drop out. Transfer price received by one unit is offset by transfer price paid by the receiving unit. If one has to use transfer price for tax or regulatory reporting reasons (I'm not an accountant, so I don't know), hide it in a dark closet and keep it out of the unit performance measures. > Rick Gilbert rick.gilbert@weyerhaeuser.com > When all else fails, read the directions. If that fails, read ALL the directions. If that fails, FOLLOW the directions. - Gilbert's guide to hardware and software installation. --- From: Rudi Burkhard [mailto:rudiburkhard@compuserve.com] Sent: Thursday, March 15, 2001 10:54 AM Subject: [cmsig] Transfer Prices, Taxes and Customs Duties >> >> Here is a problem that ought to stir up a bunch of TOC experts! >> 1. We operate in many countries. 2. All countries want to collect taxes. 3. Dealings between companies are supposed to be arm's length. 4. We usually make most of our profits (assuming the product is profitable) in the country of origin. 5. In other countries, where the product is distributed, there is a range of margin that is acceptable to the tax people (say 3-5%). 6. We set transfer prices between our companies (legal entities) in a way that, based on budgets and forecasts (volume and price), we should end up on target. 7. Forecasts are never correct. 8. If we are over or under in our local earnings we have to make year-end adjustments to keep the taxman happy (the wolf from the door!). 9.
If we are below target on earnings we are under-invoicing and therefore not paying the right amount of customs duties - we are open to penalties. 10. If we are too high in local earnings we are losing money due to overpaying duties - something that cannot be recuperated. +( introduction to ToC From: "Clarke Ching" Date: Sun, 15 Oct 2000 18:23:26 +0100 I'd recommend that your first stop is to look at Chapter 11 of Dr Goldratt's book "Critical Chain", which is available online at http://www.goldratt.com/chpt11.htm Then, for a bit more depth, look at the following article written by Bill Dettmer, a TOC author and active member of this list: http://www.goalsys.com/HTMLobj-308/ConstraintManagement.PDF I am assuming you have read Dr Goldratt's novels, but if you haven't, track down "The Goal", "It's Not Luck", and "Critical Chain". You'll also find other books at amazon.com if you search under "Theory of Constraints" or "Goldratt". Two of my favourites are "Breaking the Constraints to World-Class Performance", by H. William (Bill) Dettmer, and "Management Dilemmas: The Theory of Constraints Approach to Problem Identification and Solutions" by Eli Schragenheim. --- In addition you should check out www.goldratt.com www.rogo.com/cac/ and www.pdinstitute.com from where you will get other links. +( inventory dollar days - IDD From: "John Maher" Subject: [cmsig] Inventory Dollar Days Date: Wed, 29 Aug 2001 16:08:28 -0500 With regard to inventory dollar days, I would like to make sure that I am correct in how to calculate it. To me there seem to be three cases where I want to calculate inventory dollar days: 1) Raw Material 2) WIP (work-in-process) 3) Finished Goods (I believe the calculation will be the same with semi-finished goods that are stocked) Calculations For the calculations I will assume we are speaking of a manufacturing environment. 1) Raw Material - This calculation seems to be straightforward.
The number of days the item has been in inventory multiplied by the purchase price. IDD = Days since receipt * Purchase Price 2) WIP - When raw material is released into the production system I feel as though the clock should begin again. Meaning, the calculation for the work in process should be the number of days since the material was released to the floor multiplied by the purchase price of the raw materials. I don't think that this should be the number of days since raw material receipt (from vendor) multiplied by the purchase price. IDD = Days since release to production * raw material purchase price Now, if out-plant operations are performed on the raw material after release, should this be included in the inventory dollar days? And if it is, should it be multiplied by the number of days since the item was received back in from outplant? Maybe a bit too complex? IDD = (Days since release to production * raw material purchase price) + (Days since receipt from outplant * outplant cost) 3) Finished Goods - I believe the clock would begin again. So finished goods would be the number of days the finished item has been in inventory multiplied by the purchase price of all raw material required to make the item. Here I also think it will be important to take into account the outplant costs. IDD = Days finished good has been in inventory * (raw material costs + outplant) The other option here would be to calculate it based on the number of days each raw material has been in house (includes the days it sat as raw material and the days it was in WIP) multiplied by the raw material cost of each raw material. It probably would also need to take into account outplant (from the date the item was received back from the outplant operation).
IDD = SUM(Days since raw material receipt * purchase price) + SUM(Days since outplant received back in * outplant cost) The concern I have with starting the clock over at each point is that if I am being measured by my supplier, I believe he will want to know the total inventory dollar days for his parts from date of receipt to date of shipment to my customer. From an internal (within my company) perspective, I may wish to start the clock again at each phase. I say this because upper management would use IDD to determine where there was a clog in the system. If I release a raw material into the system that is worth $10 that has been in inventory for 100 days, it might look like a rat going through a snake once released into production (if only looking at IDD). Since it is to be a pointer of where the clog is, I would think I would want to know if there was too much being purchased, or a clog in the production system, or too much being built and placed in finished goods. This is why I think of the clock renewing at each step. John Maher --- Date: Thu, 30 Aug 2001 00:28:22 -0400 From: Brian Potter Subject: [cmsig] Inventory Dollar Days You have the basic IDD concept down. In most environments, one of two ways will be best (depending upon the computing or accounting system data organization): 1. IDD = sum over all states of (beginning_inventory_state_value X time_in_state) 2. IDD = sum over all states of (inventory_value_increment_on_state_entry X time_from_state_entry_until_shipment) As long as you add up all the pieces without skipping either time on the calendar or a state change that modifies inventory value, the distributive law of addition and multiplication will assure that you get where you want to go.
Esoteric computer arithmetic point: If the numbers you add together differ in scale by a factor of more than about ten million AND you use floating point arithmetic (rather than integer arithmetic) AND you have MILLIONS of terms to sum, you may wish to consult a numerical analyst to avoid unfortunate errors in the arithmetic. In most business situations, it will not matter because any arithmetic errors will be so small compared to the IDD total that the error will not modify any decisions that matter. If you make a "wrong" choice because of an arithmetic error, USUALLY the choice will have been between two essentially equal choices, AND the difference will probably be less than the normal variation in your operations. IF your operation is so tightly controlled that processes within 7-sigma yield GOOD (better than merely acceptable) results, AND you have MANY transactions, call a numerical analyst. Mortals need not worry. --- From: "John Maher" Subject: [cmsig] Re: Inventory Dollar Days Date: Thu, 30 Aug 2001 11:07:38 -0500 The original scope of this question is not how I should value inventory. There has been sufficient discussion on this and I am not interested in what is best for the financial books. I am interested in inventory dollar days as the operational measurement for effectiveness. From Eli Goldratt's discussion of supply chain management at TOC World 2001 I refer to the IDD and TDD measurements. As managers, we wish to know what is not done properly. We want to measure based on: Reliability - Things that should have been done and were not. Effectiveness - Things that should not have been done, but nevertheless were. Eli Goldratt proposed that TDD, which is the Sum((sales dollars) * (days of delay)), should be used as the measurement of reliability. And Eli proposed that IDD, which is the Sum((Inventory Dollars) * (Days on hand)), should be used as the measurement of effectiveness.
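The IDD and TDD definitions above can be sketched in a few lines. This is a minimal illustration under assumed data shapes: each inventory record is a (value, state-entry date) pair in the spirit of Potter's second summation form, each order is a (sale dollars, due date) pair, and all the figures are invented.

```python
from datetime import date

# Inventory-dollar-days: Sum((inventory dollars) * (days on hand)).
# Each record carries the value added at state entry and the date that
# state was entered (RM receipt, release to WIP, booking to FG, ...).
def idd(records, today):
    return sum(value * (today - entered).days for value, entered in records)

# Throughput-dollar-days: Sum((sales dollars) * (days of delay));
# an order contributes nothing until it is past due.
def tdd(orders, today):
    return sum(dollars * max((today - due).days, 0) for dollars, due in orders)

today = date(2001, 8, 29)
raw = [(10.0, date(2001, 5, 21))]   # $10 of raw material, 100 days on hand
wip = [(25.0, date(2001, 8, 19))]   # $25 of material in WIP for 10 days

print(idd(raw + wip, today))                      # 10*100 + 25*10 = 1250.0
print(tdd([(500.0, date(2001, 8, 22))], today))   # $500 order, 7 days late = 3500.0
```

Note how the $10 item that sat 100 days dominates the IDD total, which is exactly Maher's "rat through a snake" concern: restarting the clock at each state change keeps the measure pointing at the state where the clog actually is.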
----- Original Message ----- From: "Potter, Brian (James B.)" Sent: Thursday, August 30, 2001 7:59 AM Subject: [cmsig] Inventory Dollar Days > Clare, > > It is surely true that IDD (like any other aggregate) loses detail (MUCH > detail) when computed for all inventory. TDD has a similar impact when > computed for all sales. What happens when you compute IDD and TDD by > purchased entity and apply SPC techniques in search of extreme cases? > > Who cares what the "value of inventory" "is" as long as we manage the > inventory for global organization performance? Let the accountants and tax > lawyers wrangle over "value" while we focus on managing inventory. If > (relative to the rest of the organization) cash tied up in inventory is > small AND turns are high AND inventory subordinates to operations and sales, > inventory "value" will be a pimple on a gnat in the consolidated financial > statements. > > -----Original Message----- > From: Clarence Maday [mailto:cmaday@nc.rr.com] > Sent: Thursday, August 30, 2001 8:32 AM > Subject: [cmsig] Re: Inventory Dollar Days > > The evaluation of Inventory is one of the sticky issues that gets us into > > 1) a TOC vs. ABC Accounting debate, > 2) measures, > 3) cash flow analysis, and > 4) calculations (with units, that is). In Newtonian mechanics we use three > numbers to locate a particle, and three more numbers to describe its > velocity. Rigid bodies are even more interesting with twice the numbers. > > The mechanical engineers and civil engineers will remember from solid > mechanics > that stress at a point is described by 9 numbers in the stress tensor. > Since > the stress tensor is symmetric, however, there are only 6 independent > numbers and this does not include dynamical considerations. > > Similarly, I believe that a single number is not sufficient to describe > inventory, let alone inventory days. 
Issues include: > 1) raw material or purchased part costs (money which crossed the > corporate boundary) > 2) amount I can get for it if I have to sell it today > 3) money spent on processing the material (euphemistically called "value > added"). If you want to get into an interesting discussion, just tell a > manufacturing plant manager that manufacturing actually removes value > from the organization. Only sales brings in true value. > 4) money lost if the inventory is not available when needed > > I'm sure you can identify other issues. The above listing is not meant to > be comprehensive. But the bottom line is still the bottom line. How does > Inventory affect cash flow across system boundaries? And watch those units. Date: Mon, 10 Sep 2001 20:20:39 +0100 From: HP Staber On the occasion of the NBNS event in Frankfurt today I had the chance to follow up on IDD measurements : > From: "John Maher" > Subject: [cmsig] Inventory Dollar Days > Date: Wed, 29 Aug 2001 16:08:28 -0500 > With regard to inventory dollar days, I would like to make sure that I am > correct in how to calculate it. To me there seems to be three cases where I > want to calculate inventory dollar days: > 1) Raw Material > 2) WIP (work-in-process) > 3) Finished Goods (I believe the calculation will be the same with > semi-finished goods that are stocked) > Calculations > For the calculations I will assume we are speaking of a manufacturing > environment. 
1) for all types of inventory just take the material and subcontracting cost as you will find it in your BOM 2) in order to calculate the days you have to look up the date in your MRP/ERP system when you booked this part number and subtract that from today's date: RM: the date when you receive it into your inventory (until it gets booked to WIP) WIP: the date when you book it from RM into your WIP (until it gets booked to FG) FG: the date when you book it from WIP to FG (until it is shipped) 3) IDD = SUM_i (COST_i * DAYS_i) over all part numbers "i" +( inventory evaluation and GAAP From: "Aaron M. Keopple" Subject: [cmsig] Reporting Inventory to the IRS Date: Wed, 1 Nov 2000 14:37:34 -0800 In pursuit of becoming a total TOC company, we have made a decision to eliminate all labor and overhead absorption from our accounting system. As a result, as material is processed through our shop, it does not magically absorb anything. It remains valued at raw material cost until it is sold. This decision has proved to be very beneficial for our internal purposes but has caused us some grief in terms of trying to provide inventory absorption values to the IRS. We know our total labor & overhead expenses for each period. We also know our starting raw material inventory value, the amount of raw material added and our ending raw material value. Given those variables, how do we put together our financial statements to comply with GAAP? To put it another way, how do we value the inventory that remains on hand at the end of the period vs. what can be expensed because it was sold? --- From: Norm Henry Date: Wed, 1 Nov 2000 13:16:00 -0800 You seem to be asking two different questions. First you express concern with values for the IRS. Later you express concern for GAAP. These are different. For GAAP: You can find how to do this in many accounting texts. Another excellent source is to read Chapter 8 of the book The Measurement Nightmare by Debra Smith.
For the IRS: Adjust to GAAP and then using this, adjust further for IRS valuations. Your accountant should be able to handle all of this without problems. The total process is actually much easier than if you valued inventory per GAAP throughout the year. The little bit of work at the end of the year certainly is easier than all the work throughout the year for GAAP. --- From: OutPutter@aol.com Date: Wed, 1 Nov 2000 16:58:13 EST You may get a more detailed answer from some of the accounting experts on the list but I'll throw out a brief outline on the chance it'll help. 1) At the end of the year, decide how many days of inventory you have. 2) Decide how much labor expense you had for the year and divide by the number of days in the year to get a daily labor cost. 3) Decide how much operating expense you had for the year and divide by the number of days in the year to get a daily operating cost. 4) Multiply the days of inventory by the two numbers you calculated above and add it to the inventory value you already carry. Hope that helps you get a rough idea. To be sure though, you probably need an accountant to take a look at the end of the year. Jim Fuller --- From: Norm Henry Date: Wed, 1 Nov 2000 14:21:17 -0800 Jim gives one workable way to value the inventory. The important point to recognize here is that you do not need to value the inventory by item for GAAP or for the IRS. You only need to have a total inventory value for all items combined. This is what makes the process so much easier. +( inventory valuation - effect on P&L In a message dated 12/09/1999 22:01:13, John Caspari caspari@iserv.net writes: The following is from my current revision of the TOC section of the Management Accountants' Handbook: << INVENTORY. As originally described in *The GOAL: Excellence in Manufacturing* (The 1994 edition of The GOAL), the symbol "I" (for *inventory*) included all of the assets of the organization. 
Work in process and finished goods inventories are valued at raw materials cost only--that is, no "value-added" costs are recognized as a part of I. No distinction is drawn between current and non-current assets. The objective here is to eliminate the generation, and smoothing, of apparent profits through a cost allocation process. The concept reflects the cash-flow emphasis of TOC and emphasizes that firm sales must be made in order to have profits. As the application of the TOC has been expanded into the service and not-for-profit sectors, the definition of "I" has become somewhat confused in practice. It appears that the term is now used in three ways within the TOC. 1. *Total assets*, the traditional TOC definition, provides a measure of the capital invested in the organization. This metric is used for return-on-investment (ROI) calculations of relative global profitability. 2. *Incremental inventory* represents a change in cash investment that is made. This may result from a capital expenditure or a change in work in process levels. Incremental inventory is reflected in the change in total assets. This concept is used for local decision making. 3. A notion of *"what's in the pipe"* represents the work (including finished goods) that is currently being done to create throughput. This may or may not have a cost measurement associated with it. We will use the term Inventory/Investment (I) to refer collectively to these possible meanings. >> I note that Goldratt (in the GSP Tapes) now uses the term "Inventory/Investment" for "I" also. TA is nothing more than a form of direct costing, used by management accountants since the 1930's (introduced into the accounting literature in 1936).
============================================================================== Absorption costing, full costing, direct costing, variable costing From: Norm Henry To: "CM SIG List" Subject: [cmsig] RE: Making good decisions on Contribution to and impact on Profit Date: Thu, 14 Oct 1999 17:46:53 -0700 The costs which are then assigned to products are included in the inventory value. Any costs which remain in inventory are costs which are not included in the cost of goods sold on the income statement. If the inventory is sold in the period in which it is made, there is no difference. However, if the inventory is not sold until a later period, then the inventory value of the product is not charged against cost of goods sold on the income statement until the period in which it is sold. Therefore, if the inventory is held, and if the inventory value includes some fixed costs, these fixed costs are not reflected on the income statement until the period in which the product is sold. This is why one can improve income by making extra inventory. The extra inventory includes fixed costs in its valuation. The fixed costs are not expensed when the product is made. This makes the income look good. On the other hand, when the product is sold, the fixed costs included in the inventory valuation are charged against cost of goods sold. So if the inventory is reduced, there is less fixed cost sitting in the inventory valuation and the income is then lower. So making excess inventory improves profits. Reducing inventory hurts profits. This, of course, is under absorption costing only and is the opposite of reality. This is just what the income statement reflects under absorption accounting. Direct costing and TOC accounting will charge the fixed expenses in the period in which they are incurred without regard to how much product is made.
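Henry's point can be checked with a toy income statement. All numbers below are invented for illustration; the sketch only shows the mechanism he describes, with fixed cost spread per unit made under absorption costing and expensed in full under direct (TOC) costing.

```python
# Toy illustration of Henry's point: building inventory defers fixed cost
# into the inventory valuation under absorption costing, flattering income.
fixed_cost = 1000.0    # period fixed overhead (invented)
var_cost = 5.0         # variable cost per unit (invented)
price = 10.0           # selling price per unit (invented)
made, sold = 200, 100  # build twice what we sell this period

# Absorption costing: fixed cost is spread over units MADE, so the 100
# unsold units carry half the fixed cost into inventory, not the P&L.
fixed_per_unit = fixed_cost / made
absorption_income = sold * (price - var_cost - fixed_per_unit)

# Direct / TOC costing: the whole fixed cost hits the period it is incurred.
direct_income = sold * (price - var_cost) - fixed_cost

print(absorption_income)  # 0.0    -> looks break-even
print(direct_income)      # -500.0 -> the period's real result
```

The $500 gap is exactly the fixed cost parked in the 100 unsold units; it reappears as a charge against income in whatever later period those units are sold, which is the smoothing effect Henry warns about.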
So if one makes excess inventory there is no advantage to the income and actually there is likely a detriment to the income due to the other issues which relate to excess inventory. And it certainly hurts the return on the investment. Norman Henry, CMA ============================================================================= From: JmBPotter@aol.com Sent: Sunday, December 05, 1999 8:13 AM To: CM SIG List Subject: [cmsig] Calculation of Inventory/Investment Gary, The choice between classifying an expenditure as "OE" or as "I" can be a close one. The topic has produced several (often heated) discussions in this forum (no doubt, in other places, too). The general guidelines follow ... - If you would have spent the money whether you made anything or not, it is "OE". Some examples: payroll (except sales commissions), rent, property taxes, equipment leases, payroll taxes, benefits expenses - If you spend a small increment more for each unit you sell, it is "I". Some examples: PURCHASED MATERIAL (raw material or purchased parts), COMMISSIONS paid to sales team members (possibly, payroll taxes on commissions, too; check with an actual accountant), OUTBOUND LOGISTICS (this one could fall either way, "OE" if you pay a flat fee [say monthly] or "I" if you pay per individual customer shipment [or per ton-mile or ...]; it can be in both places if you operate your own distribution system), WEAR on TOOLING or EQUIPMENT (a charge for reduction in the useful life of equipment. For example, a die you bought for $10,000 will yield 10,000,000 parts before it is too worn to produce good parts. Each part you produce with the die has an "I" contribution to WiP of $0.001. Each part you scrap [e.g., during setup, or downstream] has an "OE" contribution of $0.001. Note that this is NOT the same as the "depreciation expense" on the die. The depreciation expense is for external reporting and tax paying.
It falls into neither "I" nor "OE", and it is NOT part of your "throughput accounting" managerial accounting system.) - Materiality: If it is "very small" relative to both "I" and "OE", do not worry too much; stick it either place in an unbiased fashion (do not stick ALL of them in "OE" to make "RoI" look good, and do not stick all of them in "I" to make "OE" look good). Naturally, be consistent about how you classify expenditures of the same kind. Ask your accountant how small something must be to qualify as "immaterial" because it is "very small." If you plan to implement a throughput accounting system for managerial accounting, you may want to consult one of the references on the subject or speak with a consultant who has detailed knowledge in that area. I have prejudices, but I do not know enough to offer concrete suggestions for either references or accountants. ============================================ From: Norm Henry To: "CM SIG List" Subject: [cmsig] RE: Calculation of Inventory/Investment Date: Mon, 6 Dec 1999 08:27:11 -0800 TOC is not going to teach accounting. Let your accountant do this for you. Your accountant will know how. But one can look at the GAAP balance sheet to get the assets and then adjust the inventory valuation according to TOC by taking the overhead OE out of the valuation. Do not overlook that I (Inventory/Investment) includes more than inventory and equipment. Again, look at your balance sheet and you will see the assets which are investment. Specifically relating to leased equipment, the accountant will consider whether it is an operating lease and therefore the lease payments are OE, or whether it is a capital lease and therefore it is capitalized the same as purchased equipment. For equipment you own, use the net book value. Again, the balance sheet has all of this information. You do not have to re-invent everything just because you want to use TOC.
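The tooling-wear guideline in the post above (a $10,000 die good for roughly 10,000,000 parts, i.e. $0.001 per part) lends itself to a short sketch. The die figures come from the post; the part counts and the function name are hypothetical:

```python
# Split tooling wear between "I" and "OE" as the guideline above suggests.
# NOTE: this is deliberately NOT the GAAP depreciation expense on the die.

def tooling_wear_per_part(tool_cost: float, useful_life_parts: int) -> float:
    """Per-part charge for consuming the tool's useful life."""
    return tool_cost / useful_life_parts

wear = tooling_wear_per_part(10_000, 10_000_000)  # $0.001 per part

good_parts = 9_000    # hypothetical parts that become sellable WiP
scrapped_parts = 120  # hypothetical parts scrapped during setup

i_contribution = good_parts * wear       # wear embodied in sellable parts -> "I"
oe_contribution = scrapped_parts * wear  # wear wasted on scrap -> "OE"

print(round(wear, 4))             # 0.001
print(round(i_contribution, 3))   # 9.0
print(round(oe_contribution, 3))  # 0.12
```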
If you want to learn more yourself, however, read nearly any accounting textbook which deals with accounting theory for balance sheet preparation and then simply use TOC for the inventory valuation piece of the assets. That is about all TOC changes. ============================================== From: JmBPotter@aol.com Date: Tue, 7 Dec 1999 23:29:17 EST Subject: [cmsig] RE: Calculation of Inventory/Investment To: "CM SIG List" Michael, Gary, et al, Thank you, Michael, I think you just helped me square away some of my confusion regarding "I". Would "I" sometimes be "inventory" in the sense of "stuff we intend to sell, soon?" Would "I" also sometimes be "investment", as in the things one needs to convert inventory into something customers will buy -- the things more traditionally called "assets"? Does using assets until they (gradually) wear out convert "assets-type-I" into "inventory-type-I"? In the classical financial accounting world, I think the denominator of "((T-OE)/I)" would be the "assets-type-I" plus the "inventory-type-I" (including "overhead applied" to finished goods and WiP). "I" in the throughput accounting world would be similar. The "I" in the denominator would STILL include "I" of both types, BUT the sum would be smaller for similar operations with the same sales revenue because ... - Throughput accounting would not load overhead onto "I". - A firm using throughput accounting will not have much inventory beyond the minimum needed to operate. They use other ToC principles, too. They do not waste their cash on inventory they do not need. --- From: "Scott D. Button" Date: Mon, 15 Oct 2001 06:18:19 -0700 Subject: Re: [tocexperts] CLARIFICATION ON THE TERM "INVENTORY" AS USED IN TOC We need to make a distinction between work in process inventory and raw material inventory. In "The Race", Goldratt and Fox discuss six competitive edge issues associated with reduced work in process inventory: 1: The link between inventory level and quality.
Suppose you manufacture a 1,000-piece order, and the product is damaged in the first operation of the process. A common situation is that this damage is not detected until the product is finished or nearly finished. In the low inventory environment, when the damage is detected at the last operation, the first operation is still producing. Fewer parts are scrapped, and root cause analysis is enabled. 2: New Product Introduction: In a high inventory environment, it is difficult to introduce new products to market rapidly. The new products must either wait behind the backlog of old products, or the old products or inventory must be scrapped. 3: Lead Time and overtime: Higher inventory means longer lead time. Overtime is often needed to meet delivery dates. 4: Purchase of excess capacity: Higher inventory means longer lead time. Inability to handle peak loads driven by waves of inventory can result in a perceived need to purchase additional capacity. 5: Due date performance: With high inventory in the system, it is difficult to predict when orders will be completed. Due date performance is unreliable. 6: Shorter quoted lead times: Lower inventory can mean shorter lead times. Shorter lead times can often elevate a market constraint. You also wrote: "To further elaborate this idea, the ideal would be to have the constraint as near as possible to the end of the chain, in front of all other resources is the optimum (see scout paradigm in the Goal)." In "The Goal", Alex placed Herbie at the front of the scout troop. All the other scouts were forced to travel at Herbie's pace, thus subordinating to Herbie. The production analogy was that the trail was raw material, and the scout troop processed the trail starting with Herbie. Once Alex (last in line) walked the trail, it was finished goods. This means the ideal is to put the constraint at the front of the chain, not at the end. Of course, it depends on how you define the front.
My definition is that the front of the chain is the first operation, the one that gates in the raw materials. Date: Mon, 15 Oct 2001 20:36:37 +0100 From: HP Staber Leon Nicos wrote: > > As nothing changes, then if the resources situated before the constraint have > idle time, why is it harmful, if they produce? Regarding inventory you are right. If you apply throughput accounting then working on inventory will not change overall inventory. But ... 1) Inventory is everything that can be turned into throughput. If you work on inventory and convert it into inventory which cannot be converted into throughput, then you waste your efforts: deltaI = 0, and deltaT = 0 as well. 2) However, since you consume resources for doing something which will not lead to throughput: deltaOE > 0. Therefore all you did is spend operational expenses without gaining throughput (even though you did not change I). As GenMgr you should be aware of it. If you judge a situation you will always have to look at the system as a whole and at ALL the equations which describe the system - not just one variable. Date: Mon, 15 Oct 2001 22:20:48 +0100 From: HP Staber Norm Henry wrote: > > You will increase Inventory if the consumption of raw material triggers the > replenishment of the material in order to protect a raw material buffer. Or > you will increase inventory if the material is used to make extra of product > A so that now you do not have the material to use for product B and > therefore buy more material. Correct. You do this to overcome the restriction of not being able to create throughput out of the already "used" inventory. +( investment and/or inventory From: "J Caspari" Subject: [cmsig] Inventory/Investment (Was: How are "shares" classified ...) Date: Thu, 20 Dec 2001 16:45:50 -0500 In the Constraints Accounting Seminar "I" is initially described as: << Inventory/Investment.
[Some cash may be] expended to acquire the resources necessary to establish the operating capability to carry out the organization's business strategy. Goldratt and Cox (*The Goal: a Process of Ongoing Improvement*) call these expenditures inventory or "I". This includes property, plant and equipment as well as intangible rights such as patents, trademarks, and computer software, in addition to raw materials and work-in-process and finished goods product inventories. In a broad sense, accountants refer to such costs as assets. Since inventory has a well-established meaning in the accounting literature, the name is expanded to Inventory/Investment. Goldratt and Cox describe the inventory/investment as "all the money that the system invests in purchasing things the system intends to sell." This would include the capabilities of the system as well as raw materials and purchased parts. >> You are not alone in being confused about the meaning of "I". Later in the course, we note: << Inventory/Investment (I) has been defined as all the money the system invests in purchasing things the system intends to sell. As originally described in 1984, the symbol "I", for inventory, included all of the assets of the organization and did not distinguish between current assets and fixed assets. As the application of the TOC has been expanded into the service and not-for-profit sectors, the definition of I has become somewhat confused in practice. As with Throughput, it appears that the term is now used in several ways within the TOC community. (1) Total assets, the traditional TOC accounting definition. (Goldratt; Noreen et al) (2) Capital, the "owner's current value of the investment in the organization to keep it going." (Schragenheim) (3) Incremental inventory represents a change in cash investment that is made. This may result from a capital expenditure or a change in work-in-process and other current position levels. 
(4) A notion of what's in the pipe represents the work (including finished goods) that is currently being done to create throughput. This may or may not have a cost measurement associated with it. (Schragenheim) (5) Raw materials cost (or truly variable production cost) is the monetary valuation assigned to stocks of raw materials and product inventories. The term, Inventory/Investment (I), is used to refer collectively to these possible meanings. >> With respect to accounting for common stock issued and treasury stock transactions I believe that the accounting transaction recording under Throughput Accounting (as a form of what the accountants call Direct or Variable Costing) would be handled exactly as it is using Generally Accepted Accounting Principles (GAAP). Although I haven't tested the concept yet, I suspect that the same would apply for the paper you referenced. The Liability section of the Balance Sheet (monetary amounts owed to external entities), however, presents some interesting issues in a cash flow (as opposed to accrual accounting) reporting environment. This is a new topic. Finally, the use of the word "liability" within the TOC is not completely clear. I cringe when I hear advocates of TOC saying that "inventory is not an asset, it is a liability." (This undoubtedly comes from Chapter 33 of *The Goal*) But, inventory is not a liability in the accounting sense, and the accounting analysis in *The Goal* does not appear--to me--to be correct in calling this a liability, even figuratively, in the accounting sense. The distinction that Lou (a character in *The Goal*) should have drawn was between assets (costs that are expected to benefit the future) and expenses (costs whose value has expired), rather than assets and liabilities. Then the discussion makes sense. Of course, the way in which this is treated from an accounting point-of-view (asset or expense) makes a difference in the amount of profit that is reported for the period.
Previous discussion by the group on inventory and investment will be found under the topics "I, OE, and Depreciation", "Impact of Inventory on Bottom Line", "TOC and Public Service", and "T, I, and OE" in the TOC Discussion Archives section at http://casparija.home.attbi.com/ --- From: "Norm Henry" Sent: Monday, February 11, 2002 8:24 PM Subject: [cmsig] RE: The I word > I am certain that this is just a weak definition for this "I" word. On page > 35 of the Viewer Notebook the other, more recently adopted "I" word is used > with a different definition which would be more inclusive to include > non-current assets. This other "I" word is the term Dr. Goldratt now uses > regularly in order to avoid some of the earlier confusion which was limiting > the scope of what makes up "I". This now fits much better with terminology > which accountants are accustomed to using as well as avoiding the narrow > scope which manufacturing persons may apply. +( Investment Dollar Days - Flush From: JmBPotter@aol.com Date: Sun, 30 Jan 2000 19:43:10 EST Subject: [cmsig] Time Value of Money To: "CM SIG List" Dirk, Perhaps you recall mentions of the "flush" method used to compare investment alternatives. At minimum, Goldratt mentions "flush" toward the end of "Critical Chain" and in "The Haystack Syndrome." The matter also received attention in the satellite program last winter and spring. The way I understood Goldratt's assertion, the fact that "flush" considers both time (like "pay back period") and money (like "NPV") makes it superior to either for ranking investments. When he spoke in the satellite program, I understood Goldratt to say that present value effects (though real) had a much smaller influence than the impact from considering investments in terms of currency-time. By implication, an organization (similar risk) considering alternatives with similar cash requirements over similar durations may ignore effects from present value.
With significantly different risk, investment level, or planning horizons, perhaps one should ALSO consider present value impacts (especially among projects with small differences in rank relative to their scales). For a SHORT response to the original post, STOP HERE. If this really excites you, or you'd like some "thought questions," read on ... In a nutshell, to compare two or more investment alternatives: 1. Subtract each investment from its alternative's current gross return at the time of the investment. 2. Add any revenue to its alternative's current gross return at the time of the revenue. 3. Calculate the "area" in units of currency-time (e.g., "US$ times months" or "Euros times days") between the "gross return curve" and the "money equals zero" axis for each alternative, using the same units for all alternatives. Note that areas on opposite sides of the axis cancel each other. 4. Select a feasible (maximum investment does not overwhelm available funding) collection of alternatives with the maximum (largest in the "return" direction) value in currency-time units. Contrived Example: A project requires investments of US$100,000 at the beginning of each month for twelve months.
During month 12, it begins to cover its own operating expenses and produces a return stream (returns at the beginning of each month starting with month 13) for one year according to the following table (please, view the tables with a "monospaced font" like "Courier" for best viewing):

Month   Return
 13    $100,000
 14     100,000
 15     200,000
 16     300,000
 17     500,000
 18     700,000
 19     600,000
 20     500,000
 21     400,000
 22     300,000
 23     200,000
 24     100,000

Now, when the project begins to generate revenue at the beginning of the thirteenth month, the investment level will be 1.2 mega-$ (12 months times 100 K$ per month) and the investment in "$-months" will be 7.8 mega-$-months (12 * $100,000 for the first month's investment plus 11 * $100,000 for the second month's investment plus 10 * $100,000 for the third month's investment ... plus 1 * $100,000 for the twelfth month's investment). The table below shows the "flush" value for the investment over its second year (positive numbers show investment commitments and negative numbers show returns resulting from the investment):

        Beginning    Ending       Beginning    Ending
        Investment   Investment   Investment   Investment
Month   Level ($)    Level ($)    in $-months  in $-months
 13     $1,200,000   $1,100,000   $7,800,000   $8,900,000
 14      1,100,000    1,000,000    8,900,000    9,900,000
 15      1,000,000      800,000    9,900,000   10,700,000
 16        800,000      500,000   10,700,000   11,200,000
 17        500,000         -0-    11,200,000   11,200,000
 18           -0-      -700,000   11,200,000   10,500,000
 19       -700,000   -1,300,000   10,500,000    9,200,000
 20     -1,300,000   -1,800,000    9,200,000    7,400,000
 21     -1,800,000   -2,200,000    7,400,000    5,200,000
 22     -2,200,000   -2,500,000    5,200,000    2,700,000
 23     -2,500,000   -2,700,000    2,700,000         -0-
 24     -2,700,000   -2,800,000         -0-   -2,800,000

The example has a "payback period" of sixteen months and a day (if you prefer, seventeen months) and a "net present value" of $2,302,128.31 (at 10% per year compounded monthly).
The "payback period" is short enough so that the project has a good chance to "pay for itself" before major economic changes alter the rules. The project's NPV is slightly over twice the NPV of the required investment (better than a 100% return over two years). By both these conventional criteria, the project looks pretty good. How does the project look under the "flush" rules? The investment peaks at 11.2 mega-$-months. The project's net return after 24 months is 2.8 mega-$-months (a 25% return over the project's two-year life). On the down side, the 24-month project does not reach its break-even point in $-months until the twenty-third month. Might this extreme delay before the project becomes profitable foreshadow vulnerability to Murphy? Applying present values to the cash flows before beginning the "flush" calculation makes the project look much worse. The return drops to almost 410K$-months on an investment of nearly 11 mega-$-months, showing a two-year return on $-months invested of less than 4%. Under this scheme, the project returns its investment and delivers its modest returns all during the last month. One observation favoring "flush": At the same interest rate, two investments with the same $-time values will earn the same (uncompounded) interest return. For example, consider $100 invested for a year at 10% per year, $1,200 invested for a month at 10% per year, and $300 invested for 4 months at 10% per year. That observation implies that (considering time-value of money) ... - All cash flow streams with the same net "flush value" have the same investment value - Investments with larger returns according to the "flush" rules "pay better" than investments with smaller returns according to the "flush" rules. - BUT, how do the "flush" rules with nominal cash flows treat "risk"? - What impact would "flush" rules with cash flows discounted at a "risk adjusted cost of capital" rate have on the calculation?
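The dollar-month ledger in the worked example above can be reproduced with a short script. The cash-flow figures are taken directly from the example; the variable names are mine:

```python
# Reproduce the "flush" (investment-dollar-months) ledger from the example:
# $100,000 invested at the start of months 1-12, returns received at the
# start of months 13-24 per the return table above.

investments = [100_000] * 12
returns = [100_000, 100_000, 200_000, 300_000, 500_000, 700_000,
           600_000, 500_000, 400_000, 300_000, 200_000, 100_000]

level = 0          # net cash currently invested (negative = net cash returned)
dollar_months = 0  # accumulated investment in $-months (the "flush" value)
peak = 0           # worst-case exposure in $-months

for amount in investments:
    level += amount         # investment committed at the start of the month
    dollar_months += level  # this level is carried for the whole month
    peak = max(peak, dollar_months)

for amount in returns:
    level -= amount         # return received at the start of the month
    dollar_months += level
    peak = max(peak, dollar_months)

print(level)          # -2800000: returns exceed cash invested by 2.8 mega-$
print(peak)           # 11200000: peak exposure of 11.2 mega-$-months
print(dollar_months)  # -2800000: net "flush" return of 2.8 mega-$-months
```

The final figures match the table: peak exposure of 11.2 mega-$-months during months 16-17, and a net flush return of 2.8 mega-$-months at the end of month 24.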
+( ISHIKAWA versus CRTs From: "Opps, Harvey" Date: Mon, 21 May 2001 13:41:45 -0500 Let's remember that the TOC approach is the system-wide scientific one of Effect-Cause-Effect. I have an EFFECT, I hypothesize a CAUSE, I check to see if a predicted EFFECT exists or doesn't exist. I go through processes of validation and verification. This is vastly different from a single cause producing a single effect, which is an engineering approach found in Fishbone (ISHIKAWA) diagrams. The major difference between the two is: Fishbone = multiple causes for one effect (= multiple actions and projects) vs. TOC = one cause for multiple effects. This is why TOC is so much more efficient than other processes. It forces you to look at a bigger description of the system, with a bigger payback, because you have found a more valuable cause which in total takes much less action. --- From: "Bill Dettmer" Subject: [cmsig] Re: CRT and ISHIKAWA Date: Sat, 26 May 2001 08:27:05 -0700 Well, for one, I'm not aware that the Ishikawa diagram includes anything comparable to the Categories of Legitimate Reservation to test the validity of presumed causes. From the applications I've seen, it does a great job of listing all the possible causes of a single outcome (identifying potential additional causes), but it doesn't seem to me to finish the job--i.e., determine which of the possible causes is actually active and which is not. From: "Wilson Kaser" Date: Sat, 26 May 2001 10:47:36 -0700 The Fishbone diagram is a dissection tool: you dissect the problem into a whole bunch of potential causes, none of which are logically linked, i.e. cause and effect. The "root" cause is determined by voting or guesswork. The CRT is a dissection and integration tool: you dissect the problem via cause and effect and at the same time integrate the causes down to determine the root cause. The root cause is determined by logic.
The advantage of the Fishbone is that it is quicker, but you will probably fix a bunch of things that are not the root cause; the advantage of the CRT is that you get the right answer, but it takes more work to do properly. From: "Jim Bowles" Subject: [cmsig] Re: CRT and ISHIKAWA Date: Sat, 26 May 2001 22:26:25 +0100 They are indeed problem solving tools. The main difference is that Ishikawa's Fishbone Diagram uses correlation between effect and probable root cause(s) (the second level of science.) The CRT uses E-C-E (the third level of science.) The two logics are different. I see the Fishbone diagram as a tool to get people to understand the sources of probable variation and root causes. The CRT gives a deeper understanding of the core problem. From: "Ward, Scott" Subject: [cmsig] Re: CRT and ISHIKAWA Date: Tue, 29 May 2001 16:57:43 -0500 The biggest difference I've experienced between Current Reality Tree (CRT) and Ishikawa diagram (fishbone) is the analysis of multiple problems (aka UDE's) on the CRT whereas only one is analyzed at a time on the fishbone. The power of the CRT is then evident in focusing efforts on a few causes (single, perhaps) rather than all potential problems from a fishbone--and still not addressing other UDE's. +( Jonah Program On-Line From: "James R. Holt, WSU-V" Subject: [cmsig] Jonah Program On-Line Date: Wed, 17 Jul 2002 01:12:00 -0700 Good News! Now you can earn a Jonah Certificate on-line. Pass the word! The TOC Center http://www.tocc.com/ certified Washington State University's Engineering Management course EM 526 Constraints Management http://www.vancouver.wsu.edu/fac/holt/em526/ and will issue Jonah Certificates to students successfully completing the course. Registration is now open at http://www.cea.wsu.edu/engrmgt/ for the Fall 2002 EM 526 Constraints Management class starting August 26th. This on-line course has some self-study with weekly lectures over the Internet.
If you are interested in viewing an on-line class, I suggest you visit the old class website http://www.vancouver.wsu.edu/fac/holt/em526/ to check out your ability to accept on-line classes at your location. EM 526 Constraints Management is intended for graduate engineering management students. Other qualified graduate students may apply. Qualified undergraduates can now apply for EM 426 Constraints Management and also earn the Jonah Certificate (with a few fewer responsibilities). Over 350 students have completed EM 526 in the past seven years. This is a well-established and challenging course in the Theory of Constraints Thinking Processes. I hope you will consider enrollment. The enrollment in EM 526 Constraints Management will be limited. The registration process takes a while, so it is best to get started right away. If you have questions about the enrollment / registration process, please contact Patti Elshafei (pelshafei@wsu.edu) at 590-335-0125. The class Syllabus is at: http://www.vancouver.wsu.edu/fac/holt/em526/em526syl.htm From: "James R. Holt, WSU-V" Date: Thu, 18 Jul 2002 08:01:37 -0700 A couple of points. First, The Jonah Certificate offered in WSU's EM 426/526 Constraints Management class is awarded by The TOC Center. AGI awarded the Jonah Certificate to our students for many years but was less comfortable with the on-line format and decided not to continue. While there are many excellent instructors and companies out there who teach the Jonah Program material, AGI and The TOC Center are the only official groups who I believe can award a Jonah Certificate. I was a Certified Associate when The TOC Center spun off from AGI. Part of the separation agreement was for The TOC Center to also have ownership of the Jonah Material and the ability to offer the Jonah Certificate as originally developed by Eli Goldratt. While the TOC Center and AGI grew in different directions during the mid-1990s, I think they have once again come very close to each other.
I'm pleased to work with both of these organizations and recommend them for people who really need to solve their core conflicts/problems right away. WSU is an educational institution. We deliver an excellent education, especially in Engineering Management. The Theory of Constraints blends into many of our courses. We educate. We are not full-time consultants who can be dedicated to your specific problems. The EM 526 Constraints Management (Jonah Program) is excellent. Students learn the structured way to think. (We all think all the time, but somehow, we have little rigor on our thought direction or effectiveness--just remember how many times the same thought/worry comes back into your head.) EM 526 can help you find your core conflict and verify it's the thing to change. It forces you to prove the solution will in fact work and not create negative side effects. And, you will create the step-by-step action plan to overcome all the obstacles to the implementation of your solution. But, you do not have to implement the solution to succeed in EM 526. If you really want to solve your problems, you will have to implement after class on your own. While my students continue to get advice later, I can't give the support and long-term assistance available through a good consultant. In addition, if you have an urgent problem causing $100,000 per week of pain (lost potential), why would you take a 15-18 week program to become a Jonah? You need the solution now! Or in two weeks (from AGI, The TOCC or other reputable provider) at whatever the cost is! WSU's Engineering Management Program is a self-sustaining program. That means the cost of classes reflects the costs of delivery and operation. Internet classes are not as easy as they might seem. The cost is $675 per semester credit hour, or $2025 for the three-credit class, for either EM 426 or 526. Oh, and there was some other confusion about certificates.
The Jonah Certificate issued by The TOC Center for successfully completing EM 526 is just that: a Jonah Certificate. WSU offers another certificate from the University called the Constraints Management Certificate. This certificate requires a broader understanding of the Theory of Constraints in several contexts. You can learn more about this certificate at: http://www.engrmgt.wsu.edu/Certificate%20Program.htm. +( Criticism of ToC From: "Shahrukh A. Irani" Date: Tue, 3 Apr 2001 11:06:07 -0400 Subject: NWLM: Re: Marriage of Value Stream Mapping and TOC In the spirit of discussion, I would like to rebut your posting. Please keep in mind that I do not believe in "flavor of the week" concepts, and that my goal is ultimately to graduate IE's who are well-equipped to work in companies such as yours. > Both processes offer a "vision" of the operation that is usually lost in > the daily focus of manufacturing. However TOC is only a set of general > steps that does not really offer a comprehensive approach to enhancing the > entire "value-stream". The TOC approach is simply a set of steps to > realize the "bottleneck" in the operation. Operations that use this > approach normally default to constant fire-fighting and chasing the "moving > bottlenecks" using traditional techniques (TQM, IE), without making > significant "value-stream" improvements. The solutions of "Elevate" and > "Subordinate" is really sloganism and not really an improvement TOOL or > strategy. ==> To the non-IE, TOC would be sloganeering. Sure, I agree with you! But, to the IE, it is pointing him/her to focus their improvement efforts (5S, Setup Reduction, SPC, DOE, Sequence-dependent setups, Capacity constrained Order Release, etc.) on that particular process in the Value Stream (which is simply the physical flow path of a key product!) that is constraining the throughput of the stream!
When you "elevate" the constraint, you develop formal strategies for "reclaiming" capacity that ought to be available on the bottleneck! And when you "subordinate", you simply do not release raw material into the system that would queue up at the bottleneck. There is an entire science for scheduling multiple jobs on a single machine that TOC makes one bring to bear on the shopfloor! The VSM thought process would make one assume that every facility in the world is punching out 2-3 similar gizmos since time immemorial. What they call the "Pacemaker Process" is the "Bottleneck" in the TOC world. And, when you start to "elevate the constraint", how else would one do it but by applying the various problem-solving tools of TQM such as Fishbone Diagrams, Pareto Charts, 5 Why's, etc.? TOC is sloganism if studied in a non-technical manner. Take away the hype, look at the substance, and what you have is a Finite Capacity Scheduling approach that requires layout changes, line balancing, kanban signals linking shipping, receiving and the bottleneck process, and much more. With TOC, one allows multiple value streams for different products to be "sewn together" because they utilize the bottleneck. Whereas, in VSM, the product family is defined "almost by magic"! Try applying VSM to a fabrication jobshop that makes 600 different cabinets -- quite a challenge! Instead, visit the same situation treating Welding and Post-Weld Grinding as a Bottleneck, and the operations could be improved. Just my viewpoint. > Value Stream Mapping is a more comprehensive and advanced approach to > visualize the entire value-stream (material and information flow) and > provide targeted improvements that improve the TOTAL value of an operation. > VSM has many other features that TOC cannot approach. > VSM has an integrated tool (Pull Systems) in the future state map that > allows a facility to focus on Lead Time reduction. VSM also allows an > operation to dig into deeper problems (i.e.
setups, manpower, maintenance, > downtimes, etc.) and see how they impact the entire stream. VSMs are also > versatile in many operations from Admin, to Engineering, Design, > Distribution, both Global and Local, at multiple levels. Plants that use > VSM, and really understand the power of the information, have a strategic > advantage in developing continuous improvement activities that positively > impact Flow, Customers, and the Financial Statements. ==> WHATEVER you do/drive via VSM, I could institute using TOC, especially in non-repetitive and high-variety environments. Why so? Because, when you fishbone diagram reasons why the bottleneck in a product flow stream (or value stream) is not being utilized, issues of scrap, scheduling, setup, labor absenteeism, machine breakdown, etc. WILL get considered, EXACTLY as the VSM methodology achieves. I have read numerous case studies in trade journals, even done it at a local jobshop, where the technical validity of the TOC method is well-validated by IE concepts! > In conclusion, TOC is Theory, but VSM is a Visualization, Strategy, and > Application Tool. ==> I would submit that TOC could use the "total picture mapping" capabilities of VSM, but that TOC would give a crucial finite capacity scheduling and inventory control/lead time management capability to VSM, ESPECIALLY when a facility does not have demand that could be expressed as a takt time (this would be the "drum beat" of the bottleneck in the TOC world). Now I have a question: When we calculate takt time to drive the generation of the Future State Map in VSM, how does one compute the available capacity on the bottleneck without first applying the IE methods to assess what is the current and best possible available capacity that could be got from the bottleneck? Which comes first? Improving the bottleneck or designing a line for a variable takt time? Please educate me.
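The closing question above turns on two simple quantities. As a hedged sketch (the shift length, break time, demand, and bottleneck cycle time below are all hypothetical), takt time is customarily computed as available production time divided by customer demand, and comparing it against the bottleneck's current cycle time is what decides whether exploit/elevate work must come first:

```python
# Hypothetical one-shift example comparing takt time with the cycle time
# currently achievable at the bottleneck (the "pacemaker" in VSM terms).

available_seconds = 8 * 3600 - 2 * 15 * 60  # 8-hour shift minus two 15-minute breaks
daily_demand = 400                          # units the market pulls per day

takt_time = available_seconds / daily_demand  # seconds available per unit
bottleneck_cycle_time = 75.0                  # seconds per unit at the bottleneck today

print(takt_time)  # 67.5
if bottleneck_cycle_time > takt_time:
    # The bottleneck cannot keep pace with demand: reclaim capacity there
    # (setup reduction, scrap reduction, offloading) before balancing the line.
    print("exploit/elevate the bottleneck first")
```

This mirrors Irani's point: the takt-driven future state presupposes a bottleneck capacity figure that only the exploit/elevate analysis can supply.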
---
Date: Thu, 5 Apr 2001 07:20:00 +0100 From: HP Staber
Michel Baudin wrote:
> > Regarding TOC, let's go back to Eli Goldratt's own statement of what it is:
> 1.. Identify the system's constraints.
> 2.. Decide how to exploit the system's constraints.
> 3.. Subordinate everything to the above decision.
> 4.. Elevate the system's constraints.
> 5.. If in previous steps a constraint has been broken, go back to Step 1,
> but do not allow inertia to cause a system constraint.
>
> It's so general that it doesn't give much to agree or disagree with.
> What Goldratt's other work makes clear is that the constraints he sees
> on the shop floor are mostly machines. So you focus on the
> capacity-constraint machine and you don't pay much attention to the
> rest. This perpetuates an equipment-centric bias that is common among
> managers in manufacturing companies but 180 degrees from lean
> manufacturing.
Your citation is correct but not complete. Later on a step 0, "draw the system", was added in order to emphasise that first you have to identify what you are going to analyse. The system cannot be a machine or a group of people. A system always includes the collection of the business processes such as marketing & sales, new product development, order fulfillment and operations. Goldratt in his works does not see the constraint in the shop floor. He sees the following 4 constraints: a) resources (the "classical" constraints) b) vendors (e.g. monopolists) c) the market (e.g. if it is saturated) d) policies and procedures. The most devastating constraint is the last. Therefore your view of ToC is not correct.
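The five focusing steps quoted above read naturally as a control loop. Below is a toy, runnable sketch of that loop: the "system" is just a dict of stage capacities, steps 2-3 (exploit/subordinate) are not modeled because they would change scheduling rather than capacity, and "elevate" simply buys 10% more capacity. All stage names and numbers are illustrative assumptions:

```python
# Toy sketch of the five focusing steps on a four-stage line.
# Illustrative model only: throughput is assumed to equal the
# capacity of the slowest stage.

def identify_constraint(capacities: dict) -> str:
    """Step 1: the constraint is the stage with the least capacity."""
    return min(capacities, key=capacities.get)

def five_focusing_steps(capacities: dict, demand: float,
                        max_rounds: int = 20) -> list:
    """Returns the sequence of constraints found, round by round."""
    history = []
    for _ in range(max_rounds):
        bottleneck = identify_constraint(capacities)   # step 1
        history.append(bottleneck)
        # Steps 2-3 (exploit, subordinate) would alter scheduling,
        # not capacity; they are outside this toy model.
        if capacities[bottleneck] >= demand:
            break                                      # market is now the constraint
        capacities[bottleneck] *= 1.10                 # step 4: elevate
        # Step 5: loop back -- the constraint may have moved,
        # and yesterday's rules (inertia) must not persist.
    return history

stages = {"cut": 120.0, "weld": 80.0, "grind": 95.0, "paint": 140.0}
print(five_focusing_steps(stages, demand=100.0))
```

Note how the constraint migrates from welding to grinding and back as each is elevated, which is the behavior step 5 warns about.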
Another thing which needs to be pointed out, and was addressed by Goldratt in his book "Theory of Constraints", is: TQM and JIT as "synonyms" of recent management philosophies stand for the following:
TQM: do all the things right
JIT: don't do what is not needed (no waste) - everywhere
The missing FOCUS of both approaches leads to a "waste" of resources on change activities in areas where no immediate bottom-line effect will be identified. This is critical in the context of scarce engineering resources (or qualified personnel in general) and impatient shareholders. The other downside is poor positive feedback to front-line personnel, which killed a lot of TQM and JIT change projects. In GATEWAY2LEAN there was a post this week from somebody who did VSM and gemba kaizen, achieved a 25% lead time reduction, and ended with a big "sigh" because there is no cost improvement to be seen. ToC offers tools to identify the hot spot for improvement, which will give quick results and immediate feedback to the troopers. As usual I do not advocate any of the "religions" JIT, TQM or ToC. I pick whatever fits my current needs.
+( labour cost
From: Norm Henry Subject: [cmsig] RE: Ignore labor costs? Date: Tue, 31 Oct 2000 08:05:58 -0800
Operating Expenses should be taken into account when they are subject to change. And then the change should be compared with the change in Throughput to see that there is an increase in T greater than the increase in OE. Operating Expenses should also be taken into account when they start to differ from what is expected and it cannot be established that the change in OE positively affects T. Short-term gains can be achieved by focusing on a reduction in OE. Long-term gains must focus on T in excess of OE. Reductions in OE which harm future T must be avoided. Of course, you know all of this. What it gets down to is that managers have a difficult time shaking the cost world. They think that they cannot ignore OE.
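The comparison rule described above (accept a change only if the increase in T exceeds the increase in OE, judged ultimately against ROI = (T - OE) / I) can be sketched in a few lines. All figures are invented for illustration:

```python
# Sketch of the throughput-accounting comparison: dT vs dOE, and ROI.
# Invented monthly figures; not data from the discussion.

def delta_profit(delta_T: float, delta_OE: float) -> float:
    """Profit = T - OE, so the profit change is dT - dOE."""
    return delta_T - delta_OE

def roi(T: float, OE: float, I: float) -> float:
    """Return on investment in throughput-accounting terms: (T - OE) / I."""
    return (T - OE) / I

# Hypothetical proposal: add a weekend shift on the constraint.
dT, dOE = 40_000.0, 25_000.0    # assumed monthly changes
print("accept" if delta_profit(dT, dOE) > 0 else "reject")

before = roi(T=300_000, OE=220_000, I=1_000_000)
after  = roi(T=340_000, OE=245_000, I=1_000_000)
print(f"ROI: {before:.1%} -> {after:.1%}")
```

Here OE rises, yet the proposal is accepted because T rises by more and ROI improves, which is exactly the point being made: OE is not ignored, it is weighed against T.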
What makes this challenging to explain is that it is true that OE is important, and saying that it is not just gets people to stop listening. I think that they must develop an understanding of the relationship between OE and T and why, while both are important, there are interdependencies, and T is what must have the greatest "respect." While OE is important, what is really important is (T-OE)/I. All must be considered together. None can be ignored. It is just that emphasis on T is the greatest way to improve (T-OE)/I.
-----Original Message-----
From: Tony Rizzo [mailto:TOCguy@PDInstitute.com] Sent: Monday, October 30, 2000 6:56 PM
Here's another situation. An organization is seriously trying to make sense of TOC. The managers of the organization keep hearing that the focus should be on Throughput and not on labor cost. They are told that in the equation Profit = Throughput - (Operating Expenses) the OE term should be considered constant. Yet, the managers know well that if they ignore labor costs, their business won't last very long. The poor managers are confused, and rightly so. When and how should operating expenses be taken into account?
+( literature about ToC
From: Nicolas Hauduc Date: Mon, 5 Jun 2000 11:19:32 +0100
In addition to Goldratt's relevant books on the subject ("The Goal", "The Race", "The Haystack Syndrome"), you may want to check these articles (given here in no particular order):
Goldratt, Eliyahu M. (1988) "Computerized shop floor scheduling", Int. J. Prod. Res., Vol. 26, No. 3, pp. 443-455
Spencer, Michael S. and Cox, James F. (1995) "Master Production Scheduling Development in a Theory of Constraints Environment", Production and Inventory Management Journal, First Quarter 1995
Fawcett, Stanley E. and Pearson, John N.
(1991) "Understanding and applying constraint management in today's manufacturing environment", Production and Inventory Management Journal, Third Quarter 1991
Rahman, Shams-ur (1998) "Theory of Constraints: a review of the philosophy and its application", Int. J. Operations & Production Management, Vol. 18, No. 4, pp. 336-355
Rodrigues, Luis Henrique and Mackness, John Robert (1998) "Teaching the meaning of manufacturing synchronisation using simple simulation models", Int. J. Operations & Production Management, Vol. 18, No. 3, pp. 246-259
Chakravorty, Satya S. and Atwater, Brian J. (1996) "A comparative study of line design approach for serial production systems", Int. J. Operations & Production Management, Vol. 16, No. 6, pp. 91-108
+( logic behind trees
a) in order to ... we must ... because
b) if ... then ... because
Instead of (or in addition to) "If A then B", try some of these:
- A causes B
- B exists as a result of A
- Why does B exist? Because A does.
- B is an unavoidable consequence of A
- The reason B exists is that A exists
From: Dennis Marshall Date: Wed, 03 Oct 2001 19:39:41 -0400 Subject: Re: [tocexperts] CLRs
J Caspari wrote:
> I believe that the existing CLRs are robust with respect to sufficiency
> trees. But, I have some difficulty applying them as a sufficient set to
> PreRequisite Trees (and I.O. Maps) and Evaporating Clouds. The cloud in
> particular seems to be problematic.
John, you are right on. The reason for this is: CRTs, FRTs, NBRs and TRTs are logic structures of sufficiency. They logically present EFFECT-CAUSE-EFFECT. However, PreRequisite Trees (PRTs) and Evaporating Clouds are logic structures of NECESSARY CONDITIONS. Therefore some of the CLRs do not apply (e.g. causality, insufficiency, additional cause, predicted effect). Although it is not commonly done, using the CLARITY reservation (asking for understanding) with PRTs and Clouds can be very productive thinking.
Checking the logic of PRTs: read the PRT from the top down: "In order to have (the top I/O) we must have (the bottom I/O) to overcome (the obstacle of the bottom I/O)." If the sentence makes sense, then the bottom I/O is truly a NECESSARY CONDITION for the top I/O.
Checking the logic of clouds: the cloud structure has 5 arrows. Therefore it presents 5 NECESSARY CONDITIONS. Each can be checked by reading:
1. In order to have "A" we must have "B"
2. In order to have "B" we must have "D"
3. In order to have "A" we must have "C"
4. In order to have "C" we must have "D'"
5. "D" is in direct conflict with "D'"
If each sentence makes sense, the cloud is a precise problem description. If not, you know where to look for the logic flaw.
+( logic trees - effect-cause-effect relationships
From: lawrence_leach@hotmail.com Date: Fri, 05 Oct 2001 02:07:04 -0000 Subject: [tocexperts] Re: CLR Clarity
> John Heiman and others presented arguments in favor of the necessity of CLRs.
Works for me. So, here is the other shoe dropping: how about sufficiency? I have three concerns.
1. The first, and by far the major one, is that the whole CRT could be a group-think fantasy. That is, it could be logically correct, but not a description of reality. It may be a group (think) perception of reality, but that does not prove it is reality. If you follow Popper's thinking on the Scientific Method, the CRT is only a hypothesis. After critical review, you have to test it to see if it works, and if it does, you can use it. That does not prove it is reality... just that it works, hopefully better than the other theories you may be considering... including none at all. Further, I have never had a CRT I brought to scrutiny seriously challenged as being totally off the point. People nit-pick using the CLRs (because I explain them to them), but do not question the overall content. This means to me that there is a fundamental flaw in that type of review. (de Bono explains why.) They are not thinking independently. I know I am not that perceptive. Also, as I learn new things (e.g. more about psychology and behaviorism), I am able to add them to trees in a way that they are the primary influence. No one questions that either. This relates to the second...
2. The second is that we do not have a way to check overall sufficiency.
I am pretty sure if we had several groups do a CRT of the same problem, but one group were Engineers, another group Psychologists, and a third group Ecologists, we would get very different trees, all of which would pass the CLRs. For example, earlier trees did not have the Policies, Behaviors and Measures check (should this check be a CLR?). What else are we missing? How about Argyris' 'theory-in-use' vs. 'espoused theory'? (The CRT is the espoused theory... you won't get them to agree with the theory-in-use until you video them and make them watch themselves in action... at least this is Argyris' experience with many, many groups of managers.) Gareth Morgan provides an enlightening look at this in 'Images of Organization'. He describes organization behavior from various frames, e.g. as:
- Mechanism
- Ecosystem
- Learning and Self-organizing
- Cultures
- Interests, Conflicts, and Power
- Plato's Cave (Orgs as Psychic Prisons)... I like this one!
Each of them would lead to different CRTs and FRTs.
3. The third concern is more nit-picking. Logic texts contain long lists of logical fallacies. One I had, but did not keep, had several hundred (I was depressed by it, so took it to the used book store). One I did keep has several dozen, which I will be happy to list later, but not now, as the first two items are much more important... and I fear if I list those, that is where the discussion will go... as happens with CRT scrutiny. More detailed review of logical fallacies does not solve the first two concerns. While there is elegance in simplicity, if you believe one predicted effect is enough to disprove an entity, then you should believe one undetected fallacy can kill a tree.
+( LP linear programming and ToC
From: Norm Henry Subject: [cmsig] RE: linear programming Date: Mon, 1 Oct 2001 09:39:22 -0700
Monte Swain and Jan Bell authored a small booklet titled The Theory of Constraints and Throughput Accounting as part of a modular series for Management Accounting published by McGraw-Hill in 1999.
In this the authors do address linear programming and what they say are critical limitations. These authors say, "When comparing LP and TOC models, you should note that LP is strictly a mathematical formulation of a business setting. It is important to understand, however, that TOC (with the support of throughput accounting) is a comprehensive management tool that addresses many realities of process management that are difficult to incorporate within a strict mathematical model. For instance, the realities involved in applying constraint management include the following: 1. The dependence of operations on one another. 2. The potential for idle times on both constrained and unconstrained resources. 3. The need for buffer inventory stocks to manage dependencies and idle times within the system." The authors continue by using examples where the LP solution is invalid because of the above exclusions from the LP model. I would suppose that perhaps LP can be programmed along with TOC guidelines to have a more comprehensive model. If LP is left by itself, its value may be far less and could lead to poor conclusions.
-----Original Message-----
From: Mark Woeppel [mailto:m.woeppel@gte.net] Sent: Monday, October 01, 2001 9:03 AM
A friend of mine is creating an LP product to optimize the enterprise. On the face of it, it looks very much like a detailed ToC analysis with lots and lots of data input. Is there any logical flaw in the approach that would generate a "wrong" answer or strategy?
---
From: "Potter, Brian (James B.)" Date: Mon, 1 Oct 2001 13:18:59 -0400
Generally, LP will never do better than a ToC model. Under some circumstances, LP analyses might report a superior "optimum" result, but LP results can easily wreck on the rocks of variation, nonlinearity, or instability. A ToC approach can adapt and continue delivering "good enough" solutions under circumstances where an LP approach would require reformulating the model nearly from scratch.
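The TOC product-mix heuristic this thread contrasts with LP, ranking products by throughput per constraint unit (T/Cu) and loading the constraint in that order, can be sketched as a simple greedy procedure. The product data below is invented for illustration:

```python
# Sketch of the T/Cu product-mix heuristic. Each product is
# (name, throughput per unit, constraint minutes per unit, max demand);
# all figures are invented for illustration.

def toc_product_mix(products, constraint_minutes):
    """Greedy mix: sort by throughput per constraint-minute, descending,
    then fill the constraint's available minutes in that order."""
    mix, remaining = {}, constraint_minutes
    ranked = sorted(products, key=lambda p: p[1] / p[2], reverse=True)
    for name, t_per_unit, cu_per_unit, demand in ranked:
        units = min(demand, int(remaining // cu_per_unit))
        if units > 0:
            mix[name] = units
            remaining -= units * cu_per_unit
    return mix

products = [
    ("P", 45.0, 15.0, 100),   # T/Cu = 3.0
    ("Q", 60.0, 30.0, 50),    # T/Cu = 2.0
    ("R", 50.0, 10.0, 60),    # T/Cu = 5.0
]
print(toc_product_mix(products, constraint_minutes=2400))
```

Note that this greedy ranking assumes a single binding constraint and divisible demand; with multiple binding constraints or integrality effects, an LP can beat it, which is precisely the trade-off the thread is debating.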
LP can deliver excellent business results in specialized circumstances where the requirements it demands harmonize with the real business situation (e.g., best product mix for a refinery given available inputs, predicted near-term demands, and refinery chemical dynamics; however, note that a small error in the demand prediction may void the entire calculation). In the general case, collecting enough information, quantifying the information well enough, constructing the model, and applying the solution will create a decision cycle which is either too slow, too reliant on obsolete data, too vulnerable to changes in the reality being modeled, or combinations of the three. One can reduce an LP model to a rigid procedure (a potential advantage), but if circumstances change in ways which undermine the LP model, following the outputs blindly may lead to unwise actions.
+( market segmentation and price politics
Goldratt Session 5 on Marketing: see image c:\pim\db\markt.dat
Strategic rule: segment the market, not the resources.
[Diagram: number of clients plotted against perceived value/price. At the low end sit the ever-unhappy, bitching-and-moaning customers with zero loyalty; you never reach these customers. The middle of the curve is usually the bulk of your customers. At the high end are the customers who will buy.]
What is the job of marketing: to raise the clients' perception of value. This will break the marketing cloud. Do it in a way that the competition has no possibility to imitate you. You have to give both low-end and high-end customers justification that they pay the right price for the same product. Umbrella contract: discount based on rate of sales and not on large order quantities. Increase inventory in favour of reducing payables.
Use consignment stock to monitor customers. Different customers have different needs and therefore different perceptions of needs/value. Find out what these needs are; they may be:
- need to use a product rather than have a product (CD's, cars...)
- need for a cheap immediate solution: allow field upgrades of a downgraded product. This way you are actually carrying only "one" product
- the generic approach of offering only the capabilities of a product
+( market segmentation proposed by Bill Hodgdon
From: Lisa Scheinkopf To: "CM SIG List" Subject: [cmsig] Market constraint question Date: Mon, 3 May 1999 12:47:40 -0400
My name is Bill Hodgdon and I'm with Chesapeake Consulting. Lisa Scheinkopf forwarded your message to our consultants for evaluation and a possible response. I'm glad you raised this issue because it gives me an opportunity to express my own views (at least my current ones) on TOC thinking in general. My expertise is marketing strategy and sales tactics. I worked with Eli for a part of "It's Not Luck". I want to give you my perspective of your situation from a marketing standpoint rather than a manufacturing standpoint. It is not likely to be a popular position within the TOC community. In general, manufacturing strategy does not and should not drive marketing strategy. It needs to be the other way around. There is broad acceptance within the TOC community that simply improving manufacturing performance by reducing inventory and improving due date performance and lead times will drive new business to the plant. Sometimes it does and sometimes it doesn't. There is also a belief that by pricing based on Throughput per constraint unit, manufacturing can drive new business into the plant simply by offering lower prices on high T/Cu products. Sometimes it does and sometimes it doesn't. What is the problem with all of this? Almost all of the focus is inside, looking at ourselves!
TOC people, because almost all of them come from an operations background, tend not to focus much on the market. For example, let's look at the way you've phrased your problem. Specifically the line that says "else I would have chosen to produce and sell all I could make of the product to other customers". Manufacturing does not get to make this choice. Sales must make this choice. Sales is the group that must find the customers and help them to choose to buy the product. Just because manufacturing says they want to make more of a product doesn't mean that sales can or will sell it. And the increase in sales rarely occurs quickly, especially if your products are being sold to businesses. There are huge implications, both positive and negative, to your business of changing the pricing strategy or dropping products or changing lead times, etc. It is the role of marketing and sales to make these decisions because they are the ones out there talking to customers. The role of manufacturing is to make what customers want. They must do it fast and they must do it profitably. The TOC Principles help a manufacturing plant to be flexible enough to respond to market needs fast and helps them make money fast. Marketing must use these huge improvements to develop new strategies to attack weaker competitors and they must be planning their attack while manufacturing is improving. If you improve in a vacuum, you will have excess capacity and you will have a problem keeping your people. Even The Goal expressed this problem, but it gave the impression that by simply offering lower prices (because they could make lots of T due to having excess capacity) they could quickly drive new business in. This is almost never true in the real world and even when it is for the short term it usually isn't for the long term. Now I'm certain you've received all sorts of answers to your specific question about whether you are internally constrained or not. Here's mine. 
You are internally constrained if your plant cannot produce the orders it has in hand in the timeframe that the customers want their orders delivered. Your plant is internally constrained, period. The fact that you could make more Throughput dollars by making different products is irrelevant. If you cannot meet current market demand, then you are internally constrained. It may be wise to make the higher T products you have orders for but it cannot be your decision. It is unlikely that you know the long term implications of arbitrarily deciding which of your in hand orders to make first. Like it or not (and having spent years in manufacturing myself I'm sure you don't), you must have guidance from sales. The goal of a business is to make more money now and in the future. Your efforts to optimize T today could very easily reduce future T and you would have no way of knowing this. It is a rare TOC implementation that includes marketing and sales from the very beginning. This is a huge strategic mistake because if marketing knew, well in advance, how much your results would improve they could proactively prepare to develop the additional business you will eventually have to have to continue to make more money in the future. Lastly, you mention the term "elevating the constraint". At Chesapeake this means you have spent money to buy more capacity for the constraint resource. All other efforts to squeeze more out of your current constraint can be lumped under the "exploit" and "subordinate" steps of the 5 step process. _______________________________________________________ From: "Richard E. Zultner" To: "CM SIG List" Subject: [cmsig] RE: segmentation, and other basic marketing concepts, from the Master Date: Mon, 19 Apr 1999 11:31:46 -0400 > -Original Message- > Ana Mercedes Uribe R. wrote: > > Would anybody please clarify me the "segmentation" concept? > > Is there any literature about it besides the one in It's not luck? 
"Market segmentation is the subdividing of a market into distinct subsets of customers, where any subset may conceivably be selected as a target market to be reached with a distinct marketing mix" --Philip Kotler (the "Eli Goldratt of Marketing"), from the chapter on segmentation in his Marketing Management: Planning, Analysis, and Control -- the widely recognized, definitive "one book on marketing to have if you only have one". Revised every four years; buy the latest one. Very readable and well organized. Highly, highly recommended.
_________________________________________________________
Date: Wed, 12 May 1999 23:00:44 -0400 From: Tony Rizzo To: "CM SIG List" Subject: [cmsig] Re: TOC Pricing and Fixed Costs
This is really a question of market segmentation. At this time, due to a drastic reduction in demand for current products, you have excess capacity. To recover some profitability, you would like to sell off that excess capacity, even if it's at prices that are lower than the prices that you could get from your normal customers. But, as some folks have already cautioned, you risk ending up with all your capacity going for the lower price. So, the real question is, how can you sell excess capacity now for a lower price, without locking yourself into the lower price permanently, which would ruin your business? It's a market segmentation issue, really. Here's one idea that will probably raise the hair on the necks of the quality folks. One way to segment your market is by differentiating with quality. You could purposely reduce the quality of some products, to justify the lower price. Thus, you could sell lower priced product to new customers who can tolerate the lower quality. By the way, an easy way to reduce quality without making any physical changes to the product is by reducing the degree to which you guarantee the product's quality. For example, a water heater with a 5-year warranty costs less than the same water heater with a 10-year warranty.
Another way to segment your market is with lead time. Typical orders are priced at the normal price. Rush orders get a premium price. And long lead time orders get a lower price. A third way to segment your market is with color. A fourth way to segment your market is with texture. A fifth way to segment your market is with physical packaging. A sixth way to segment your market is by adjusting the terms of the deal, such as offering a lower unit price for bulk orders. The important thing is that you associate a specific set of conditions with the lower priced product, which in the minds of your customers justifies the lower price and which is acceptable only to a specific set of customers, preferably not your usual customers, who are accustomed to paying the higher prices. Let me know if I'm missing the boat. Tony Rizzo tocguy@lucent.com
David G. Himrich wrote:
> I have an issue in my work that has reached near emergency proportions, and I'd like some help from the list in addressing it.
> I am the QA Manager at a medium-sized iron foundry. In recent years, a large proportion of our sales have been to large agricultural equipment manufacturers. This market has declined precipitously during the past several months, to the point that we are now producing castings only 4 days per week, and could probably get by with 3 days.
>
> Our Controller and I are students of TOC. We have achieved significant improvement in on-time deliveries and reduction in overtime by employing the rudiments of DBR scheduling. We did this really without explicitly implementing TOC. Now the time has come to address an external constraint, and our use of TOC principles is more obvious.
> We have traditionally used an Activity-Based Costing system, which our boss (CEO) understands reasonably well. Recently, we have had an opportunity to take on some new work, but our costing practices identify these jobs as marginally profitable.
We have demonstrated how to calculate Throughput contribution of a job, but not everybody really "believes" these numbers. We can point out that some money is better than no money, but the objection we are now facing is this: what happens if we fill up our capacity with these kinds of jobs, and then the total Throughput is not sufficient to cover all our fixed costs? We don't have a satisfactory answer.
> We have a good idea where our initial internal constraint would lie, and which labor resource we would have to ramp up as demand increases. We can exploit that first internal constraint by operating more than 5 days per week, or with some other efforts. After that, we will likely hit a second internal constraint that would require significant capital expenditure (probably a new facility) to expand.
> In the new work that we have an opportunity to quote, should we have one eye on the future, and only bring work in at a price that will cover the necessary fraction of all our fixed costs? That is, should the Throughput contribution of the job at present be sufficient to cover all our fixed costs if that job took up all our capacity? Or, should we attempt to price new work according to the market rate, and worry about being at capacity when the day comes? This last alternative seems difficult to explain to the Powers That Be, who are comfortable with our current costing system.
> If you read this far, thank you, and I would appreciate your input. I think some livelihoods depend on figuring this out.
> Dave Himrich
+( marketing - Eli Schragenheim
Date: Sat, 07 Jul 2001 17:30:59 +0300 From: Eli Schragenheim Subject: [cmsig]: TOC application for marketing - choosing the potential clients
I think that only in very rare cases are we able to directly impact our client's core problem. Suppose I have a shop for men's shirts. How many of my clients find a full remedy to their core problem by wearing a shirt? Some might, but I suspect these are the absolute minority.
Still, wearing very nice shirts is valuable to many. Lucent makes fiber optic cables. Are those cables (in any offering you like to think of) always the core problem of the client's business? Even a great unrefusable offer only seldom touches a real core problem. If it succeeds in doing that, the value gained is so huge that the seller can get a VERY high price indeed. Others are still ready to pay a good price for much less than eliminating their core problem. It is enough to eliminate or relieve a UDE (undesired effect) of the client. If you can relieve SEVERAL of your client's UDEs, you give higher value, which makes the sale easier and generates better pricing opportunities. Relieving several UDEs may still be far from eliminating the core problem. From a different perspective, helping to subordinate better to the right constraint has definite value. The main point I would like to discuss here is the choice of the potential customers - who do we hope will buy our products? Should we start to develop the marketing approach by analyzing the clients? Or by analyzing our own capabilities? And note, I mean CAPABILITIES, not just CAPACITY. Capacity is, to my mind, only one particular factor within the capabilities of a system. Good marketing has to address the two variables: the perception of value of the clients and our capability to meet that perception. This is a two-variable equation. The client's perception of value is also dependent on the other alternatives the client has (like competing products) and the alternatives that are likely to emerge in the future. Naturally we consider those clients to whom we assume we have the capability of delivering value. We all choose our potential clients! This is a critical decision to make. The analysis has to highlight the link between the internal capabilities and the clients' UDEs that may be relieved by those capabilities. One of the common causes for a severe market constraint is the paradigm that we must stay within our present client base.
Certainly in many cases we don't need to look any further, just analyze their needs better and develop unrefusable offers. However, this is not always the case. Sometimes we need to look for new markets and break the basic paradigm that ties us to our current market. Companies who specialize in specific niches frequently face the situation where finding new clients is necessary for survival. Many other companies should re-assess their current choice of customers. I agree with Rudi's claim that "certainly all the solutions I have seen are oriented towards changing the way we do business - but not changing the product." Changing the product and even changing the clients is, nevertheless, something we can develop the appropriate process and supporting tools for. I think that verbalizing the organization's internal capabilities can lead people to new ideas that MIGHT have value to others. Then the analysis can identify the probable UDEs of the clients that are positively impacted by the new ideas. This analysis merges current-reality-tree (CRT) analysis of the client with a future-reality-tree (FRT) based on the new offering. Only then can we develop the specs for a new product for the market that is new to us.
+( Material Requirement Constraint
From: Edmundwschuster@aol.com Date: Fri, 7 Apr 2000 11:31:40 EDT Subject: [cpim] Re: Material Constrained Planning (Information Resources) To: "Certified in Production and Inventory Management"
A General Comment on Constrained Material Planning: There are some approaches to the material constrained planning problem that utilize sophisticated mathematical modeling. For example, in the chemical industry, there are "take or pay" arrangements. In such cases, a manufacturing plant is under contract to take a certain amount of raw material each day. If customer demand fluctuations result in more or less requirement for the material than originally contracted, a substantial cost penalty must be paid.
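The take-or-pay arrangement just described can be made concrete with a toy sketch. The contract quantity, price, and penalty below are invented, and extra spot purchases for demand above the contract are ignored for simplicity:

```python
# Toy sketch of a take-or-pay contract: the plant pays for the
# contracted quantity regardless, plus a per-unit penalty on any
# deviation (shortfall or excess). All contract terms are invented;
# spot purchases to cover excess demand are not modeled.

def material_cost(actual_need: float,
                  contracted: float,
                  unit_price: float,
                  penalty_per_unit: float) -> float:
    """Daily material cost under a take-or-pay contract."""
    deviation = abs(actual_need - contracted)
    return contracted * unit_price + deviation * penalty_per_unit

# Demand swinging around an assumed 100-ton/day contract at
# $40/ton with a $15/ton deviation penalty:
for need in (80.0, 100.0, 130.0):
    cost = material_cost(need, contracted=100.0,
                         unit_price=40.0, penalty_per_unit=15.0)
    print(f"need {need:5.1f} t -> cost ${cost:,.0f}")
```

The asymmetry this creates, where any deviation from the contracted rate costs extra, is what motivates the buffer-building decisions and the mathematical-programming approaches the post goes on to describe.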
In some cases there is no extra raw material available in times of great demand, so planners must decide when to run the plant at full capacity (in times of low demand) in order to build a buffer to cover the surges. Stuart Allen published a nice paper in the Production and Inventory Management Journal (published by APICS) about ten years ago on a model he developed to optimize cost under conditions of constrained materials. The model utilized nonlinear programming and was applied by a major chemical company. The paper notes substantial, verified savings. Several of the finite capacity vendors that serve the chemical industry have similar models to plan under conditions of constrained materials. I have also noticed some work in this area by semiconductor firms. Usually any type of industry that has a V-shaped bill of material must face the possibility that a material shortage will devastate operations. These types of firms have developed unique methods to make the best decision. From my experience, this type of problem requires some form of mathematical programming to find an acceptable solution. In the process industries, we find many companies need to include such variables as pollution output and limited raw material supplies when doing production planning. You may also find some good approaches by searching the INFORMS database. +( maxager software Date: Mon, 14 May 2001 10:59:05 -0500 From: "Rick Denison" Subject: [cmsig] Re: Pricing-Maxager Pricing John Caspari asked, "Is anyone using Maxager to set prices?" At this time NO. US Steel is going to try to, but there are issues that may prevent them from making use of it. I am no longer with Maxager, but I worked on 5 projects during the 2 years I was with them. Maxager very much desired that the customers would use Maxager for Pricing, but in reality most customers were more interested in increasing output, or maximizing their control point.
Many times I have wanted to discuss the Maxager experience with the list, but I was so involved with trying to deal with it that I had little time to discuss it. Maxager was under the delusion that if you showed the customers the reports, they would make the necessary changes to their operation to increase output, and to strategically change their pricing strategies. There are some really good reports that come from Maxager, but in reality the customer still has the same issues as all manufacturing companies: they don't know what to change, what to change it to, and least of all how to cause the change. Maxager used to employ a fair number of consultants to help customers increase output and to work with the customer on strategic pricing, but this had to be done through the use of the software. However, Maxager doesn't really have the necessary capabilities to accomplish that. If it were to be combined with The Goal System, Synchrono, ManuSync, Thru-put, or some other TOC scheduling system, then you would have something really powerful. You would need to remove some of the Cost World thought that Maxager maintains, but there would be real potential for a system like that. In the end, it took a consulting effort to change the standard business practices. Only once that was accomplished was there any change at the customer. This was done through the use of a Maxagerized Executive Decision Making course. There is a lot to be learned from the Maxager experience, for there is much to be answered concerning Control Points, Constraints and Organizational Improvement. At some point, we former Maxager people will have to take the time to record what has been learned, but at this point I am more concerned with trying to find a new job. --- I recently looked at a new plant floor coordination software package. I avoid the use of the term "scheduler", because it doesn't do that.
We can all agree that maintaining a schedule electronically requires a tremendous amount of data, usually killing the effectiveness of the scheduling algorithm. This latest package - Synchrono - instead uses TOC-DBR concepts to provide current priorities for each workstation, based on the completion of previous steps in the process. It's a lot simpler to use - I only need to indicate that I'm done one job to get my list of next jobs. The list is colour coded, based on shipping buffer penetration, and is arranged by due date into the buffer for items of otherwise equal ranking. Dave Simpson, Solution Sales, Global Toyota Team IBM Canada 2/F06/4601/BURN Phone 604-297-2606 +( measures 1 From: "Gilbert, Rick" Date: Wed, 4 Oct 2000 10:54:26 -0700 John, Srikanth and Robertson, 1995, Measurements_for_Effective_Decision_Making, describe a plant-level performance report that goes something like this: 1) Customer satisfaction (incl measures for % on-time delivery, returns as % of sales, and lead time) 2) Operational performance (Throughput, T/I - annualized, T/OE) 3) Constraint measures (uptime and yield measures at the internal constraint) T/OE is reported both as a monthly and rolling average measure (rolling avg T / rolling avg OE). Somewhere in there the authors talk about using something like a 6-month rolling average. T/I is computed on a month-end and rolling-average basis. The authors argue that T/OE is a measure of productivity - money earned for each $ expended. T/I is an indicator of velocity - the rate at which money tied up in inventory is converted to throughput. I won't necessarily defend them to the death, but the reasoning seemed OK to me.
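The monthly and rolling-average ratios in that report can be sketched in a few lines; all the figures below are invented for illustration:

```python
# A sketch of the plant-level report described above (after Srikanth &
# Robertson): monthly T/OE plus 6-month rolling-average T/OE and T/I.
# Since the window lengths match, rolling avg T / rolling avg OE
# reduces to sum(T over window) / sum(OE over window).
T_by_month  = [500, 520, 480, 510, 530, 525, 540]   # throughput, $k
OE_by_month = [400, 405, 398, 402, 410, 404, 408]   # operating expense, $k
I_by_month  = [900, 880, 870, 860, 865, 850, 845]   # inventory/investment, $k

def rolling_ratio(num, den, window=6):
    """Rolling-average ratio over a trailing window (shorter at the start)."""
    out = []
    for i in range(len(num)):
        lo = max(0, i - window + 1)
        out.append(sum(num[lo:i + 1]) / sum(den[lo:i + 1]))
    return out

monthly_T_OE = [t / oe for t, oe in zip(T_by_month, OE_by_month)]
rolling_T_OE = rolling_ratio(T_by_month, OE_by_month)
rolling_T_I  = rolling_ratio(T_by_month, I_by_month)
print(f"latest month T/OE = {monthly_T_OE[-1]:.2f}, "
      f"6-mo rolling T/OE = {rolling_T_OE[-1]:.2f}, "
      f"6-mo rolling T/I = {rolling_T_I[-1]:.2f}")
```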
Rick Gilbert rick.gilbert@weyerhaeuser.com ---------- From: J Caspari[SMTP:casparija@home.com] Sent: Wednesday, October 04, 2000 10:42 AM Subject: [cmsig] T/OE Ratio (Was: Measurements question) (Note to readers: in previous postings Mike had suggested the use of the T/OE ratio as being a very powerful measurement, and John had responded that he could see no use for the ratio and that it probably contained significant arbitrary cost allocations.) > John: [ Hi Mike, ] > John: [ My comments are embedded in << your text >> below in [brackets]. ] Mike: << Since I actually read your very informative writings, I am surprised at your email. Part of the Holy Grail of TOC says that we want throughput to go up and OE to go down or at least stabilize. In our bidding and financial reporting we attempt to "decouple" the OE from the Financial math as we want to focus on Throughput, sound familiar? >> John: [ I do not know what the "Financial math" is, but, yes, I would want to decouple OE from Throughput. ] Mike had previously written: << We . . . have a target . . . of 1.66 >> [for the T/OE ratio] John: [ This is an excellent example of a measurement that links, or couples, throughput (T) with operational expense (OE). ] Mike: << Since the term "allocation" is a firing offense here, then maybe this blip below will help it make sense. >> Mike: << The OE is based on last 3 months average, >> John: [If I understand your procedures correctly, that average is actually an allocation. For example: historical OE in February, $509,329; in March, $514,237; in April, $519,379. Average (mean) of February - April = $514,315 = an allocated amount "per month". Fire that guy. ] Mike: << divided by the number of work days in current month >> John: [ Another allocation: allocating the monthly cost to days. $514,315 / 20 working days in April this year = $25,715.75 per April working day.
Of course, if this were June, instead of April, then there would have been 22 working days and the OE per day would have been about 10% less ($23,377.95). On the other hand, had it been July, with 19 working days, the OE per day would have been about 5% higher ($27,069.21). A 15 percent difference overall. Fire him also.] > Mike: << and then that number becomes the baseline. Since the OE is fairly constant. (plus or minus 1%), >> John: [ If this is the daily OE used for a particular month, then, given the calculations described above, I would think that it should be absolutely constant for each day of the month. On the other hand, if this OE is the total OE for each month and the 1% refers to the month-to-month variation, then that is remarkably constant. Particularly since it contains the payroll costs and the number of working days varies from month to month. Perhaps these costs are either estimated costs, or costs, such as property taxes, which come in lump sums once a year, but which have been allocated across the months. I don't know if anyone needs to be fired here or not.] Mike: << and the Throughput is based on real time shipped invoices, it keeps us focused on Throughput, not OE. >> John: [ Okay, if the OE is about constant, then you are saying that everyone just ignores the OE and focuses on throughput, which accounts for approximately 100% of the difference in the measurement. Then I am with Peter Evans, who wrote, << This just begs the question: why not just use T? >> ] Mike: << Now as far as a decision making tool for say adding our OE by $100 and our T by $120, it [the T/OE ratio] is not used for that purpose. >> John: [ Well, what DO you use the T/OE ratio for? ] Mike: << We actually have other spreadsheets that we employ that take and compare past OE with any future addition or reduction of OE and compare it to any proposed subsequent increase OR decrease in Throughput.
(Got that out of a TOC research project) >> John: [ That sounds sensible, but doesn't have anything to do with the T/OE ratio. ] Mike: << I would like to take credit for this but after reading Debra Smith's works, Tom Corbett's, many others, yours and a well-known TOC consulting firm's (all outstanding), we changed the way we measure ourselves.>> John: [ I don't know what your consulting firm recommended, but I know that I haven't recommended the use of the T/OE ratio and I couldn't find it in either Corbett's Throughput Accounting book or Smith's Measurement Nightmare book either. Can you point me to it? ] Mike: << In fact we ditched all our standard cost accounting reporting. We employ a solid TOC F/S sheet, which we will convert to the beloved GAAP at year's end. In short we are VERY focused on throughput... >> John: [ Again, that sounds sensible, but doesn't have anything to do with the T/OE ratio. ] Mike: << By the way I hope I have made it clear in the past how impressed I was with your writings, they had a significant impact on many of my decisions. >> John: [ Thank you, I'm pleased if I have helped in some way. ] ----- Original Message ----- From: "J Caspari" Sent: Monday, October 02, 2000 2:54 PM > | Darned if I can see how this ratio would be of any use whatsoever. | | Surely, that daily OE is a powerfully allocated number--probably even | allocations of allocations. Those allocations are both arbitrary and | incorrigible, that is, incapable of redemption. Even if you in fact buy your OE on a daily basis, let's see how we would use it for decision making. Assume that we have the opportunity to undertake a proposal that will increase our OE by $100 and our T by $120. We would not want to do that because it would reduce our T/OE ratio from its current value of 1.34. | ----- Original Message ----- | From: "Mike Cahoon" | Sent: Monday, October 02, 2000 1:53 PM Cindy, we post a daily hand graph of the "T/OE". Since this develops a ratio it is very powerful.
For example, if it is below 1.00 then it is below break even. We use green ink over the 1.00 line and red below on 3x5 graph paper. We also have a target line of 1.66 and usually hit around 1.34. This works for us and keeps the dollars out. I hope this helps. | ----- Original Message ----- | From: "Cindy L. Van Wyhe" | Sent: Monday, October 02, 2000 12:50 PM The company I work for is going to institute T, I, and OE measurements. It is our intent to post charts for the employees to see. However, it is also a policy to not post $ figures. In the 'old' measurement system (cost world) the $ values were converted to a scale and posted. This is currently the plan with the T, I and OE measurements. Does anyone have any experience with posting T, I and OE measurements without including $ amounts? --- Date: Wed, 04 Oct 2000 18:55:37 -0400 From: Tony Rizzo Subject: [cmsig] RE: T/OE Ratio (Was: Measurements question) One problem with the T/OE measurement is shared by all ratio measurements. I haven't kept up with this discussion, but this caught my eye. I can't speak for John. But, the problem with ALL ratio measurements is that when, from time to time, it becomes very difficult to improve the numerator, the ratio creates organizations full of denominator managers. These are managers who find it easier to squeeze the denominator, rather than focusing on improving the numerator. This sort of ratio measurement caused some very damaging behaviors among managers and executives, in the early '90s. At that time, the ratio measurement of choice was EVA (economic value added). When the numerator of the measurement could not be increased, due to adverse economic conditions at that time, many managers and executives trashed valuable assets just to get them off their books, even though the assets were fully paid for at the time. Can you say stupid?! But, denominator management is precisely the sort of behavior that we risk with ratio measurements. 
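Tony's denominator-management trap can be shown with a tiny worked example. The numbers are invented (loosely echoing Mike's 1.34 ratio): a cost cut that also damages throughput can make the ratio look better while absolute profit falls.

```python
# Invented illustration of "denominator management": slashing OE can
# raise the T/OE ratio even while net profit (T - OE) shrinks, if the
# cuts also hurt throughput.
before = {"T": 1340.0, "OE": 1000.0}   # T/OE = 1.34, NP = 340
after  = {"T": 1040.0, "OE":  750.0}   # OE cut 25%, but T fell too

ratio_before = before["T"] / before["OE"]   # 1.34
ratio_after  = after["T"]  / after["OE"]    # ~1.39 -- the ratio "improved"
np_before    = before["T"] - before["OE"]   # 340
np_after     = after["T"]  - after["OE"]    # 290 -- profit fell

print(f"T/OE: {ratio_before:.2f} -> {ratio_after:.2f}")
print(f"NP:   {np_before:.0f} -> {np_after:.0f}")
```

The denominator manager is rewarded by the ratio while the system makes less money, which is the behavior the EVA episode above illustrates.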
Mark Woeppel wrote: > John, > I am very surprised to see you come out against T/OE. I have to agree with Mike that it is a very important measurement of how effectively he is managing his expenses to produce throughput. The basic idea is to measure the global productivity of his dollars spent. If he maintains his ratio of 1.66, and it is above his break even, he will be profitable. To me, the reason for measuring that ratio is to evaluate productivity over time. The idea is to focus attention on the things that matter, and to ask the question, "If I spend this money, will throughput increase?" The ratio, in global terms, shows how well that question is being asked. You say that this ratio is a bad thing, but you haven't explained the negative effects. Could you elaborate on the possible distortion that this measure introduces? --- From: "Gilbert, Rick" Subject: [cmsig] RE: T/OE Ratio (Was: Measurements question) Date: Thu, 5 Oct 2000 10:55:38 -0700 I'm having a little problem understanding the reservations. After all, can we name a single measure that one could follow blindly and not ultimately risk business suicide? The discussion here is treating T/OE that way (not just Tony's remarks) as though it were the only measure considered. I don't recall anyone saying that they only computed T/OE and never looked at anything else. I think we shall always have to look at multiple measurements. My own interpretation of the tenets of throughput accounting is approximately as follows: 1) Actions that increase T while reducing I and OE are almost always good things to do (have to consider Now and In the Future). 2) Focus first on increasing T, then on reducing I, then on reducing OE. 3) Never take any action without considering the effect on all three measures. If I use the ratios T/OE and T/I, I COULD act stupidly and fire some folks or decree "no buffers," but I could INSTEAD use these measures as part of my prudent decision making.
Is it wrong to find ways to reduce OE without affecting T? Is it wrong to reduce I if I can do so without affecting T? What would happen to these ratios if I did so? Aren't there some businesses where I would not be comfortable with a small T/OE ratio? OE is generally a structural expense that I incur regularly as a function of time, but sales are not always that predictable. If I operate close enough to T/OE = 1 in a somewhat fickle market, I may find myself unable to pay all the bills. Could someone explain to me how the T/OE ratio is subject to abuse in ways that the individual measures are not? I believe that some character will always be looking at combining the measures (e.g., NP = T - OE), and rightfully so. The business does have to offset its OE to the degree required to satisfy its investors. If T is hard to improve, does the fact that this person is not calculating a ratio reduce the temptation to slash OE to increase NP? A last bit about the ratios. I can use the ratios in cases where the numerator and the denominator are not in the same units, as long as I am consistent and the units make sense for the items measured. I've proposed this before in this group as a way of measuring performance in non-profits, whose goal units usually are not $. --- Date: Thu, 05 Oct 2000 15:52:32 -0400 From: Dennis Marshall John, First let me say Hi! It has been some time since we have talked. Secondly, I do recommend using the T/OE ratio as one important measurement in a T, I & OE based measurement system. So I want to understand your thinking on the subject better. Please share your thinking on two questions. 1. What is the purpose of a measurement system? 2. Do you agree there are only two financial decision making rules in a for-profit business? - If the increase in "T" > the increase in "I" and "OE" - If the decrease in "T" < the decrease in "I" and "OE" If the answer is yes, do it. In all other situations, don't do it. I totally agree with your reservation on ensuring that the "T", "I" & "OE" we are working with do not contain any hint of the "Allocation" disease. --- Date: Thu, 5 Oct 2000 17:15:03 -0700 Identifying problems using the ratio T/OE, Peter Evans says "As an operational measure, it is open to manipulation, and encourages negative behavior, eg if I spend money in this quarter to make sales next quarter I get punished this quarter." True.
So if one does not use the T/OE ratio and instead relies on T alone, then one could spend money in this quarter, never make money, and never be punished. The "problem" which Peter brings out is true whether one uses T, T-OE, or (T-OE)/I. Regardless of the measurement one must recognize that the goal is to make money now AND in the future. I am not so sure that the problem is with the measurement so much as with the managers who enforce the measurement over the short term rather than taking a long-term view. While T/OE may have fluctuations in the short term, it seems that using T/OE to stay on course north or south while using T/I to stay on course east or west will keep one centered on course overall. If both ratios are adjusted appropriately, the end objective of (T-OE)/I, which is ROI, will come out where one wants to be. The T/OE or T/I may help one to see how to stabilize the direction. Are these ratios necessary to use? No. One can use T, I, and OE. One can use T-OE and (T-OE)/I, or one can use the combination of T/OE and T/I. The point is they all stem from T, I, and OE. Decisions must still be made based on the effect on T, I, and OE. The measurement approach selected should be what works for balancing these three measurements with the proper weight to reach the goal of making money now and in the future. Norman Henry --- Date: Fri, 06 Oct 2000 09:17:32 -0400 From: Tony Rizzo Boy, Mike Cahoon really started a tempest with his use of the T/OE measurement. However, as usual, the tempest is caused by poor communication among all of us, rather than the measurement itself. First, let's discuss what's good about the measurement, and there is much that is good about it. Here it is. If taken properly, T/OE is a SYSTEM efficiency measurement. As the owner of the overall system, I would want to know not necessarily if my system were efficient in absolute terms but, rather, if a recent change really did improve the system's overall efficiency.
There's nothing wrong with this. In fact, as performance measurements go, this is one that at least tells the truth. From Mike Cahoon's position (system owner), he needs to see if a change had a positive effect or a negative effect. And, clearly, his use of this measurement is effective. He's making money, and he anticipates making more money in the future. BUT, BUT, BUT, BUT, BUT, BUT, BUT, BUT, BUT, BUT, BUT.... Please note that T/OE is a PERFORMANCE measurement, not an OPERATIONAL measurement in the same sense that the critical constraint buffer is an operational measurement or the project buffer of a project is an operational measurement. As such, the T/OE measurement is not a control parameter. It is merely an observable parameter. That is, it is a parameter that can and should be monitored, but it should not be used by managers to make day-to-day operational decisions. Now, what's wrong with the measurement? Nothing is wrong with the measurement. That which is wrong and damaging is the inappropriate use by managers who fail to distinguish between performance measurements (observables) and operational measurements (controllables). If a manager makes an operational decision based on this OR ANY PERFORMANCE MEASUREMENT, then the manager risks making the wrong OPERATIONAL decision. Here's an example. Suppose the shipment for a set of books for one of my workshops is late, and it is late only because my client provided the required shipping information to me late. Were I to use overnight shipping for the books, my T/OE measurement for my entire operation as well as for that event would decrease. Were I to base my decision on this measurement, I'd be tempted to choose a less expensive means of shipping the books, and I'd risk having to teach a workshop without books. This is one example where the measurement would get me in trouble. Why? Because it is a performance measurement, and I've used it as an operational measurement.
Are most managers too stupid to know what they should really do in such cases? No. Most managers are more than smart enough to know that they should ship for reliability in such cases, not for improved T/OE. However, this inappropriate use of this and other performance measurements creates conflicts for managers, which are not always resolved in favor of the system. The mistake, again, is that a performance measurement is being used as an operational measurement. There's more. Look for it in the next article on this subject, where we'll talk about the use of system-level measurements for the evaluation of subsystem performance. +( measures 2 From: Greg Lattner To: "CM SIG List" Subject: [cmsig] RE: TOC Production Date: Tue, 20 Jul 1999 17:27:05 -0600 Interesting comment Bill. Thanks for contributing. Here's another good reason to identify the Long Term Constraint. More specifically, you want to identify the "Long Term Physical Constraint" (not policy, but related to physics, physical). Why? Because you measure T carefully on that Long Term Physical Constraint. You measure T, T/Hr and the Effectiveness of that Long Term Physical Constraint. Justification of new equipment is made based on T/Hr per product on the Long Term Physical Constraint. It becomes the long run focal point even though you may have a temporary policy or temporary capacity constraint. Having the long run perspective, vision, horizon, helps to understand the temporary constraints and where you are heading, like rowing a boat. Maybe the ultimate Long Term Constraint is the market. But it helps to keep a horizon on the Internal Long Term Physical Constraint. Effectiveness is measured like the traditional Industrial Engineering measure, Overall Equipment Effectiveness (OEE), but since it is only on the Long Term Physical Constraint, rather than every machine, it can be called the Constraint Equipment Effectiveness.
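A minimal sketch of such a constraint-effectiveness calculation, with invented figures (Greg gives the formula itself next: utilization against the full 24x7 calendar, times yield, times performance, taken only at the constraint):

```python
# Invented illustration of a Constraint Equipment Effectiveness (CEE)
# calculation: utilization x yield x performance, measured only at the
# long-term physical constraint.
hours_run      = 120.0          # constraint actually running, per week
calendar_hours = 24.0 * 7      # 168: CEE uses the full calendar, not shifts
good_units     = 950
total_units    = 1000
std_hours      = 110.0          # standard time for the work completed
actual_hours   = 120.0

utilization = hours_run / calendar_hours    # ~71.4%
yield_pct   = good_units / total_units      # 95.0%
performance = std_hours / actual_hours      # ~91.7%

cee = utilization * yield_pct * performance
print(f"CEE = {cee:.1%}")
```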
The formula is % Utilization (24 hrs x 7 days a week) x % Yield x % Performance (actual vs standard time) = CEE, only on the Long Term Physical Constraint. I'm curious what people have experienced for CEEs. What does this group consider a good CEE? 99%? 50%? What factors are involved? Or, what is your CEE? 20%? 10%? Hope this helps generate some more good discussion. Greg Lattner -- From: Bill Dettmer[SMTP:gsi@goalsys.com] Reply To: cmsig@lists.apics.org Sent: Tuesday, July 20, 1999 2:54 PM To: CM SIG List Subject: [cmsig] RE: TOC Production -Original Message- From: Andrew Tsang To: CM SIG List Date: Tuesday, July 13, 1999 3:27 PM Subject: [cmsig] RE: TOC Production > >In TOC production, on one hand we look for the constraint (at this point, >assume a machine) at the PRESENT TIME to be the drum, and the plant "marches >according to it". > >On the other hand, Goldratt suggested that we should determine a constraint >for the LONG RUN as the drum so that the plant does not need to keep >re-aligning itself once the constraint changes to another machine. > >Do the above contradict each other, or am I confused? > >My situation is that for the "long run" I chose a group of machines as my >constraint; however, the "current" constraint is not them. It is some >other machines downstream from the "long run constraint" [After extensive discussion and examination of philosophy (we're talking DAYS here, not hours), Eli Schragenheim has persuaded me that the "long-run" constraint is ALWAYS the market and its demand for your products or services. Any internal resource constraint is bound to be a temporary constraint that could be considered interactive with the market. If this is a valid point of view (and I am convinced that it is), one must always act in a way that exploits the market (maximize future Throughput) and subordinates internal resources to that constraint. In the short term, however, market demand may exceed your internal capability.
In this case, it makes sense to take actions to relieve the internal constraint (free up internal capacity at the capacity-constrained resource) so that you can subordinate properly to the market. There are ways to do this (reduce demand on an internal capacity-constrained resource) and make more money at the same time. I used to think that given a choice, I would rather have the constraint INSIDE a system than OUTSIDE, based on the rationale that it should be easier to control. I am now convinced that the opposite is more generally true--and you can make more money that way...] BILL DETTMER Goal Systems International "Constructing and Communicating Common Sense" www.goalsys.com gsi@goalsys.com (360) 683-7034 Date: Mon, 31 May 1999 12:12:55 +1000 From: Peter Evans To: "CM SIG List" Subject: [cmsig] Measures I am working my way through the operational and performance measures issue. Below is a summary that I am putting together. Additions and corrections requested. The following list is a start only; help with filling the gaps is appreciated.
OPERATIONAL | PERFORMANCE
General Measures
* What to do | What happened
* Minute, hour, day timescales | Week, month, year
* Frame as questions | Frame as results
* Must be able to take a decision that affects the value of the measure | Learn lessons from what happened
* Feedback and guidance | Appraisal and evaluation
People Measures
* Terminations | LTO
* Individual pay levels | Salary cost
* Behaviours (what people are doing) | Results achieved
Finance Measures
* Cash flow | EBIT
* - | Net Profit
* T rate per constraint hour | T
* Will planned spend increase T, decrease OE or I? | OE
* Scrap $ | Scrap % of sales
Distribution Measures
* Time between order receipt and shipping | On-time performance
* Inventory age per SKU | Stock turns
* Shipping buffer | -
* Missing items per order | Order fill rate
* Order backlog | -
Supplier Measures
* Stock availability | ? same as distribution?
Customer Satisfaction
* Complaints | -
* Returns | Net sales
* Lost customers | -
* Progressive sales per customer per order period | Sales per month, year
Project Measures
* Buffer availability | Buffer used
Questions 1. Can there be operational measures without performance measures? The answer appears to be no; otherwise you will not be able to determine whether the operational decisions/actions had an effect. 2. Are performance measures without operational measures of any use? The answer appears to be no, because if the performance measured does not have the potential to teach operational lessons, then why bother. Thanks to those who help us all progress. Peter Evans Date: Tue, 23 Nov 1999 09:49:35 -0500 From: Tony Rizzo To: "CM SIG List" Subject: [cmsig] RE: Focus and Synchronization - branched off of Structured For Speed You have to be very careful when playing with measurements, because you get that which you measure. Specifically, backorder frequency is not an operational measurement. Having this measurement does little to help people fill current orders. It is only a performance measurement. Inventory turns also is a performance measurement. This measurement does not provide information that can be used to make day-to-day decisions. The same is true of the relative size of inventory. For optimum performance we need to use the operational measurements. We also need to avoid other measurements. As soon as we begin to record a new measurement like inventory turns, the new measurement becomes important in the minds of managers. At that point, the new measurement imposes a limitation on the performance of the system. The system's performance will be optimized, but only to the point that continued optimization of system performance doesn't threaten the new measurement. This is precisely why the cost-centric measurements are so devastating. Managers often know what to do.
They do it, but only so far as the (perceived) more important cost-centric measurements are not compromised by their decisions. A much better approach is to measure only the operational measurements and to report only those to the managers. All other things that you choose to measure, to evaluate your own success in designing and running your system, are observable quantities only. They are not control (operational) measurements in the sense that the buffers are control measurements. Instead of reporting these observable quantities, it is best to simply let them settle at whatever level the system needs them to be. It is appropriate to monitor them, so as to be able to spot sudden, drastic changes in performance. Such changes are indicative of problems, and should be investigated. But it is necessary to avoid reporting the observables. Tony Rizzo tocguy@lucent.com Jerry Keslensky wrote: > > The discussion on competing constraints is lucid and appropriate. And in > this case the inventory is certainly the correct candidate for > subordination. We certainly could, in this case, apply the following > subordinated measurements to inventory: > 1. Backorder frequency: How often are ordered items not available in stock? > (availability) > 2. Inventory turns: How frequently is the inventory being recycled? > (velocity) > 3. Relative size of inventory: What is the size of the inventory in relation > to throughput? (efficiency) > > So to our list of steps we could add the following modification: > > For operational measurements we are looking for buffers (possibly time > and/or resources), determining which describes and protects the primary > constraint and subordinating the others to that primary constraint. > > We continue making progress, now what is the next step? Are we working from > the top or the bottom? Should we think about identifying where we want to > locate our primary constraint? 
And if so, do we have a choice, in this case, > other than "picking and shipping"? Any suggestions? This is an open group > discussion, but staying on the topic will prove most productive and is > greatly appreciated. > > Jerry > mailto:Jerry.Keslensky@connectedconcepts.net > > -Original Message- > From: bounce-cmsig-1111@lists.apics.org > [mailto:bounce-cmsig-1111@lists.apics.org]On Behalf Of Tony Rizzo > Sent: Monday, November 22, 1999 9:22 AM > To: CM SIG List > Subject: [cmsig] RE: Focus and Synchronization - branched off of Structured > For Speed > > Excellent summary! > > In the last message I created the possibility for a disaster. I suggested > that > we'd have to keep track of time buffers and inventory buffers. I'm > surprised > that Eli S. hasn't mentioned anything yet. He must not be reading these. > In any event, if we follow through as we started initially, we create the > very real possibility of having interacting constraints. These would be > the folks filling orders and the inventory system itself. > > Interacting constraints are disastrous. If P1 is the probability that > constraint 1 is available, and if P2 is the probability that constraint > 2 is available, then the probability that both are available at one time > is the product, P1*P2. This is much less than either P1 or P2. In > other words, when we have interacting constraints, then we really kill > throughput. Therefore, it makes sense to subordinate one constraining > function to our plan to exploit the other. In this case, I would suggest > that we subordinate the inventory system to the order filling people. > This would mean that we would purposely carry as much inventory as needed, > to avoid starving the order filling people at least 95% of the time. > With this change, we need to base scheduling decisions only on our > time buffers. > > Note that with the change we make the inventory system a supporting > function rather than an operational function.
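The availability arithmetic behind Tony's interacting-constraints point can be checked numerically. A minimal sketch (the 90% figures are invented for illustration, not from the thread):

```python
# Joint availability of two interacting constraints.
# If constraint 1 is available with probability p1 and constraint 2
# with probability p2 (independently), both are available at the same
# time with probability p1 * p2 -- always less than either one alone.

def joint_availability(p1: float, p2: float) -> float:
    """Probability that two independent constraints are free at once."""
    return p1 * p2

p1, p2 = 0.90, 0.90            # each constraint available 90% of the time
both = joint_availability(p1, p2)
print(f"{both:.2f}")           # 0.81 -- throughput suffers
assert both < p1 and both < p2
```

This is why subordinating one constraining function to the other pays off: with a single constraint the system runs at that constraint's own availability, not at the product.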
> > There is an opportunity here to address a vitally important topic that > affects scheduling, performance, quality, and consequently throughput. > It has to do with a slightly different way of thinking about scheduling. > I have to thank my friend, Mike Dinham, for triggering the thought. > > Tony Rizzo > tocguy@lucent.com > > Jerry Keslensky wrote: > > > > For clarity of this discussion I am going to try to recap the steps suggested > > so far toward developing local measurements that will focus and synchronize > > actions toward the achievement of the global objective, the goal. > > > > First we clearly define the system. We establish the internal and external > > boundaries. Our concern is goal alignment within the system. > > > > The system goal is clearly defined. > > > > We need to understand the necessary conditions that must be observed > > constantly. > > > > At this point we want to define the business process or value lane that is > > involved. And identify each function as operational or supporting. > > > > Next we want to define an operational model and identify those functions that > > directly interact with the customer to implement the operational model. > > > > For operational measurements we are looking for buffers (both for time and > > resources). > > > > The measurements for the supporting functions must be subordinated to > > operational measurements. > > > > If these are the appropriate steps so far, then where do we go next to > > ensure that the entire system is totally aligned on our global goal? Is our > > approach top down or bottom up? > > > > Jerry > > mailto:Jerry.Keslensky@connectedconcepts.net > > > > Tony, > > I'll need some time to digest what you have presented. On a first pass it > > appears to be an excellent start toward defining a generic approach. And I > > truly appreciate the input, time and thought from all who are contributing > > particularly Gene and Tony.
I invite others who have actually implemented > > measurement alignment programs in TOC solutions to join the discussion. > > > > On a sidebar, I have the following thoughts: > > I'll gladly accept Tony's definition of the system for this example. > > Although it is the classical approach to defining a business system and > > works in many cases, one can lose valuable insight and performance by > > establishing a supply chain that treats the beginning or ending partners as > > totally external. It certainly simplifies the analysis but does not, as in > > this case, allow for alignment of the measurements of the field service > > organization customers, whose business success is an extension of the > > distribution company's. They can actually be viewed as a component of the > > value lanes, similar to a series of downstream plants or company-owned retail > > outlets. The final customers, the people whose equipment is being repaired, are > > certainly external to the system. Throughput to the field service > > organization customers is a strong measure of performance, but their > > technician productivity, equipment repair time, size of their parts buffers, > > and size and condition of the installed equipment base are some examples of > > measurements that assist in focusing and synchronizing performance. What I'm > > saying is that in the collaborative world of supply chain thinking, business > > ownership boundaries are not necessarily system boundaries. By the way, > > Tony, in your system definition, where do you place the parts suppliers > > (inside or outside the system)?
> > > > Jerry > > mailto:Jerry.Keslensky@connectedconcepts.net > > > > -Original Message- > > From: bounce-cmsig-1111@lists.apics.org > > [mailto:bounce-cmsig-1111@lists.apics.org]On Behalf Of Tony Rizzo > > Sent: Friday, November 19, 1999 10:38 PM > > To: CM SIG List > > Subject: [cmsig] RE: Focus and Synchronization - branched off of Structured > > For Speed > > > > Jerry, > > > > First, despite earlier discussions on what constitutes a system, let's > > recognize that the field service customers are, indeed, customers. They > > are external to the system of functional organizations that you list. > > With this boundary drawn and understood (no arguments from the peanut > > gallery - I'm defining this system, and you are required to use my > > definition for this discussion), the next step is to define an operational > > model. By this, I mean a method by which the orders are filled. The > > following questions need answers: > > > > 1) Which functions are directly involved in the filling of orders? > > > > 2) How are the orders filled? > > > > 3) What information does each of these functions need, to fill each > > order successfully and rapidly? > > > > Let's begin by listing the subset of functions, from your list, which > > deal directly with the customers. I identify the following: > > > > Customer Service - responsible for answering questions > > Order processing - responsible for taking parts orders > > Picking and Shipping - responsible for filling orders > > Parts kitting - responsible for assembling multiple suppliers' parts into > > service kits (value-added distribution) > > Traffic - responsible for managing freight carriers, damage claims and > > transportation expenses > > Finance and Accounting - responsible for invoicing, credits, supplier > > payments, and financial reporting > > > > Of these, Customer Service, Order Processing, Picking & Shipping, and > > Parts Kitting affect customers directly during the filling of orders.
> > The functions that these provide in the direct service of customers are > > clear. Now let's talk about the other two. > > > > You say that Traffic is responsible for managing freight carriers > > (presumably in the delivery of customer parts), damage claims > > (presumably from customers), and transportation expenses. Already I > > see much opportunity for problems. This department is really filling > > three functions, two of which involve customers. Those two are the > > management of freight carriers and the handling of damage claims. > > The third function, managing transportation expenses, is not a > > function that aids in the fulfillment of customer orders, even if > > it is important to the bottom line. So, there's a potential for > > conflict here. Let's just keep this in mind for now. > > > > Finally, we have Finance and Accounting. These are really two > > distinct functions. Accounting is responsible for invoicing, > > credits, and supplier payments. Finance, at least in my mind, > > would be responsible for financial reporting. Let's take > > Finance out of this game for now. It doesn't serve the customer > > directly. > > > > So, we have the following revised list: > > > > Customer Service - responsible for answering questions > > Order processing - responsible for taking parts orders > > Picking and Shipping - responsible for filling orders > > Parts kitting - responsible for assembling multiple suppliers' parts into > > service kits (value-added distribution) > > Traffic - responsible for managing freight carriers, damage claims and > > transportation expenses > > Accounting - responsible for invoicing, credits, and supplier payments > > > > Let's step through what might be a typical order fulfillment process: > > > > 1) The customer calls for information, prior to placing an order. > > 2) The order processing people take the customer's order. > > 3) Parts Kitting assembles the kits that the customer needs.
> > 4) Picking & Shipping take the kits and the remaining parts, pack > > them, and hand the packages of parts to Traffic, for shipping > > to the customer. > > 5) Accounting sends out an invoice. > > > > So far we've answered two of the three questions. Recall: the > > third question is, what information does each of these functions > > need, to fill each customer order successfully and rapidly? > > > > Order processing needs the following pieces of information: > > > > 1) customer identity, > > 2) the customer's desired arrival date for the parts, > > 3) the destination for the parts, > > 4) the earliest time that the customer can expect the parts > > under normal operations, > > 5) the earliest time that the customer can expect the parts > > with expediting (for a higher price) > > > > Notice that I said nothing about the parts being in stock. > > This piece of information is implied in items 4 and 5. > > > > In short, the order processing people need information > > about the customer's needs and information about the system's > > ability to meet those needs. This latter component of > > information must necessarily include the due-dates and the > > status of all prior orders. Otherwise, the order processing > > person can't know how soon this new order can be filled. > > So, what operational measurements (indicators) does this > > department need? How about the remaining buffer size > > for each of the prior orders. In the event that this new > > order is an urgent request, with this information the > > order processing person can determine if this new order > > can be inserted ahead of the prior orders. In the event > > that the order does not require expediting, the order > > processing person can simply put it at the end of the > > current queue of orders. > > > > Now let's talk about the stock levels.
Just as the > > order processing person needs to know how much of the > > system's capacity is already allocated to previous customers, > > the individual also needs to know how much of the system's > > inventory is already allocated to prior customers. In > > other words, the required information must include the > > state of the inventory and the impact on prior orders, > > should this one be inserted at the head of the queue. > > > > With this information, the order processing person can > > make a commitment that the system can meet. More > > importantly, the order processing person can avoid > > making commitments that the system can't meet, except > > by violating earlier commitments to other customers. > > So, the projected buffer levels are the right operational > > measurements for the order processing department. These > > must include time buffers and stock buffers. Of course, > > implicit in this discussion is the goal: maximizing > > throughput, while meeting all customer needs and > > expectations. > > > > I see that this discussion could become quite long. So, > > I'm going to stop here for now. Let me say this. Once > > we determine the goal, the few necessary conditions that > > really must be observed constantly, and the operational > > model, we can then identify the operational measurements > > that the various functions require for successful ongoing > > operations. > > > > We can easily apply a similar argument to each of the > > remaining operational departments. The other departments, > > which provide support services of one form or another, > > are a different story. For now I'd like to point out > > that ALL the measurements of the support departments > > must be made subordinate to the operational measurements > > of the system. > > > > Tony Rizzo > > tocguy@lucent.com > > > > Jerry Keslensky wrote: > > > > > > Suppose we take a more concrete example and develop a generic approach. 
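Tony's projected-buffer logic for order processing can be sketched as a simple feasibility check. The order queue, the buffer units (days), and the one-day delay figure below are hypothetical illustrations, not from the discussion:

```python
# Sketch: can an urgent order jump the queue without breaking
# commitments already made to other customers?
# An order's remaining time buffer is its slack, in days, before
# the promised due date is at risk.

def can_insert_urgent(prior_buffers_days, delay_days):
    """True if inserting a new order at the head of the queue, delaying
    every prior order by delay_days, still leaves each buffer positive."""
    return all(b - delay_days > 0 for b in prior_buffers_days)

queue = [5.0, 3.0, 1.5]     # remaining buffers of the prior orders, in days
print(can_insert_urgent(queue, 1.0))   # True: the tightest buffer keeps 0.5 days
print(can_insert_urgent(queue, 2.0))   # False: the 1.5-day buffer goes negative
```

The same check extends naturally to stock buffers: before committing, verify that both the projected time buffers and the projected inventory buffers of prior orders survive the insertion.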
> > > Although alignment through measurements is stressed as critical to achieving > > > a common goal, and the concept of clearly communicating the impact of local > > > actions on global performance is a key objective of any good TOC solution, > > > there seems to be very little in the way of a roadmap for achieving this. > > > (A gap in the literature?) It may be the missing tool from the Thinking > > > Process. We need a Measurements Alignment Tree. > > > > > > Here is a simple example of a functional organization. > > > > > > STORY LINE: A distribution company provides parts gathered from many source > > > suppliers into a centralized redistribution model. The parts are used in the > > > field repair of equipment. The field service organizations are independent > > > contractors who choose to utilize the parts distributor as opposed to > > > sourcing their own repair parts. > > > > > > The parts distributor has the following functional groups: > > > Marketing and Sales - responsible for maintaining and growing the field > > > service customer base (people who use repair parts) > > > Order processing and Customer Service - responsible for taking parts orders > > > and answering questions > > > Procurement and Inventory Management - responsible for low parts costs and > > > inventory size > > > Product receiving - responsible for getting parts physically into stock > > > Inventory control - responsible for protecting assets > > > Picking and Shipping - responsible for filling orders > > > Parts kitting - responsible for assembling multiple suppliers' parts into > > > service kits (value-added distribution) > > > Returns - responsible for handling returned parts and their restocking, > > > exchange, repair, or disposition > > > Systems and Operations - responsible for maintaining support IS systems > > > Traffic - responsible for managing freight carriers, damage claims and > > > transportation expenses > > > Finance and Accounting - responsible for invoicing, credits,
supplier > > > payments, and financial reporting > > > Executive Management - responsible for the bottom line. > > > > > > As our scenario begins we find the distribution company functional groups > > > all misaligned as their measurements are oriented toward functional > > > optimization. In most cases, as expected, there is significant friction > > > between groups, and the distribution company's performance is suffering. TOC > > > suggests that everyone align toward a common goal. Yet, each group has a > > > restricted local focus. Additionally, we have a supply chain of field > > > service organizations and parts suppliers and transportation carriers to > > > align as well. > > > > > > The distribution company's goal is to "Make more money now and in the future > > > by assisting independent field service organizations in being more > > > successful with their repair operations through providing low cost, fast, > > > consistent, reliable parts order fulfillment which maximizes equipment > > > uptime and technician productivity and minimizes repair organizations' costs > > > of operations". > > > > > > How do we logically develop, communicate, and obtain buy-in for a working set > > > of alignment measures that ensure that local actions are understood in terms > > > of global objectives (the goal)? > > > > > > Jerry > > > mailto:Jerry.Keslensky@connectedconcepts.net > > > Hello Jerry, > > > > > > I'd like to begin by describing the currently popular structure, > > > which clearly does not please everyone. For most large > > > corporations, it is a highly functional structure, maintained to > > > absurdly high levels. Specifically, a large corporation like, > > > say, Toyota, is divided into very tall functional silos. There > > > may be the marketing silo, the product development silo, the > > > manufacturing silo, the sales silo, the support silo, etc.
> > > > > > Let's assume that the product development silo of a corporation > > > spans all three business units of the corporation. There is > > > always some VP in charge of such a silo, and that VP usually > > > reports to the CEO. There are also other VPs who are responsible > > > for the various business units. They, too, report to the CEO. > > > In such functionally divided corporations, this command structure > > > exists with every functional silo and with every business unit. > > > > > > So, what's the problem with this structure? Imagine that you are in > > > charge of one of the three business units. You're after a specific > > > market, and you want your projects and resources to be focused on the > > > goal of penetrating that market. This is a typically laudable goal > > > for anyone in such a position. Are the managers and worker bees who > > > typically work your projects aligned with this goal? Well, they > > > probably have some sort of dotted line relationship to you, on the > > > corporation's organizational chart. But they have solid line > > > relationships with the VP of their silo. This means that you can > > > hope to merely influence their behavior. The one with the real > > > control is the VP in charge of the silo. That person will subject > > > the people in her silo to a set of measurements, which are very > > > likely to cause behaviors that are entirely inconsistent with your > > > objective. > > > > > > The managers in her silo are in constant conflict. They have > > > to survive, and to do that, they have to optimize their silo > > > measurement. Often this means that they have to make decisions > > > which are certain to delay your projects. At the same time, > > > the managers have to look like they're being responsive to you. > > > To appear to be responsive to you, they have to show progress > > > on your projects. But they also have to show progress on the > > > projects of the VPs in charge of the other two business units. 
> > > After all, the same managers have dotted line relationships with > > > those VPs too. > > > > > > Well, the managers do show progress on your projects and on the > > > projects of the other two VPs. They do so usually by spreading > > > their people very thinly across all the projects. As a result, > > > you can kiss your objective good-bye. The silo measurement wins. > > > > > > With this structure, you don't control the resources that your > > > projects require. You don't have the opportunity to apply any > > > one set of measurements to those resources. In fact, it is > > > very likely that the silo VPs regularly impose measurements > > > designed to minimize costs or, at least, to keep each cost > > > within its budgeted level. When this happens, of course, > > > you can kiss your objective good-bye again. > > > > > > Companies that are structured this way can't get out of their > > > own way. They are barges, and the market is their ocean. When > > > a swell comes by, they go up. When the swell passes, they go > > > down. Frequently, the response of such a barge, to a swell, is > > > entirely out of phase; the barge finds itself ill prepared > > > to exploit any significant opportunity. Of course, proactive > > > attempts to position the barge strategically for future swells > > > are virtually impossible. Barges don't move fast enough for that. > > > > > > An alternative is to have a real business unit structure. Then, > > > the silo VPs are put out to pasture, and the managers who > > > formerly reported to them now report directly to you, or they > > > report directly to each of the other two business unit VPs. > > > > > > Is this structure better? Well, let's see. Are you able to > > > apply a single set of measurements to all the managers and > > > resources who work your projects? Certainly you are. Is > > > there any more interference from the measurements imposed by > > > the silo VPs? No, that's gone.
Now, you have a much faster > > > system for Command, Control, Coordination, and Information. > > > Synchronizing the efforts of all those people becomes a > > > distinct possibility. Now, instead of being in charge of > > > a barge, you're in charge of a battleship. > > > > > > With TOC, your battleship is equipped with radar, sonar, > > > a high-speed data network, and even a global positioning > > > system. You no longer worry about the barges that may > > > share your ocean. You can outmaneuver them at will. If > > > you so choose, you can even blow them out of the water. > > > > > > But, you have one obstacle left. You have to identify one > > > goal-centric measurement with which you and your managers can > > > constantly make real-time corrections and, consequently, > > > keep your entire operation completely focused on your goal. > > > A few years ago, I would have told you that such a measurement > > > didn't exist. Happily, that's no longer the case. Such a > > > goal-centric measurement does exist for product development > > > organizations. You'll find it described in some detail > > > in the next issue of PM-Network magazine, in an article > > > by a familiar author. PM-Network is a publication of the > > > Project Management Institute (PMI). > > > > > > Tony Rizzo > > > tocguy@lucent.com --- From: "Richard E. Zultner" To: "CM SIG List" Subject: [cmsig] RE: Individual Measurements -- But what are the Goals? Date: Tue, 13 Jul 1999 02:55:19 -0400 >Subject: RE: Individual Measurements >Walter Wasilewski wrote: >I would like to submit that measurements of individual performance for >direct resources in a TOC environment are relatively straightforward, i.e. >measure the performance against T, I and OE. Are T, I, and OE sufficient for non-financial concerns? As It's Not Luck states very well, we have three goals (alright, one goal and two necessary conditions): 1.
Satisfy Stockholders (make money now and in the future) 2. Satisfy Customers (...now and in the future) 3. Satisfy Employees (provide secure and satisfying jobs now and in the future) I can see how T, I, and OE work nicely for #1. But what about #2 and #3? If we have two people, one (Alpha) who improved T by one stockholder goal unit, and another (Bravo) who improved customer satisfaction by ten customer goal units -- how could we tell? Who do you reward? And what about manager Charlie who improved employee satisfaction by 100 employee goal units? Surely you don't "meet" necessary conditions #2 and #3 without planning and action? And that requires measurement? So how do we do that? And taking a "strategic view" of the business would have to address all three goals, yes? +( measures 3 operational and performance Date: Sat, 03 Jun 2000 00:59:21 -0400 From: Tony Rizzo To: "CM SIG List" Subject: [cmsig] Re: Depreciation confusion. Let's put depreciation aside for now. Here are the givens for this discussion: 1) We have a system. 2) We have a clearly defined goal for the system. That goal is to make more money, now and in the future. 3) Throughput is the surrogate measurement that we optimize, in cases where the decision at hand does not involve changes to the system. 4) We are considering only decisions regarding the short-term use of the system's existing resources. I agree that managers make many decisions that cause direct changes to the system. But, we are not considering these decisions for this discussion. This is not to say that these are unimportant. It is to say, simply, that they are in a different category. In other words, we are considering only operational decisions, not system design decisions. In fact, since you are in the air force, let's put it in terms of airplanes. If the system were an airplane, then operational decisions would be the ones made by the pilot, as he/she flies the plane. Should T be considered when making operational decisions?
Absolutely! Should OE be considered? Absolutely! Should I be considered? Yes, but only the raw material component of I should be considered, and of this, only the variable component of that raw material need be considered. Should we consider depreciation, when making operational decisions of this sort? Well, no! Taking depreciation into account, with operational decisions, is tantamount to a pilot considering the depreciation of a new avionics system as the pilot flies the plane. Depreciation is as irrelevant to a manager facing an operational decision, such as a resource assignment, as it is irrelevant to the pilot flying the plane. When should depreciation be taken into account? It should be taken into account when we are making system design decisions. These are decisions that alter the system in some way, often through the acquisition of some sort of equipment. John seems to think that by using this definition of operational decisions we have defined ourselves into a tight corner, implying that there's nothing to be gained by further discussion. I respectfully disagree with this position. In my opinion, it is the complete failure of nearly all managers to make this distinction, between operational decisions and system design decisions, that has caused the measurements nightmare. Managers often try to use performance measurements as operational measurements. Performance measurements are necessary, and they are useful when evaluating an earlier change to the system. But, they are useless in operational situations. Again, let me put it in terms of an airplane, for the sake of clarity. While a pilot is actually flying a plane, the craft's real-time airspeed is of great interest to the pilot, particularly when taking off or landing. After the flight is over, the average air speed becomes of interest, because this is an indicator of the performance of the craft.
Now, imagine the pilot's confusion if, as he tries to land the aircraft, his airspeed indicator displays not instantaneous airspeed but the average airspeed over the last fifty miles of flight. Managers who face operational decisions today and who have available to them only performance measurements face the same degree of confusion as our poor pilot. Those managers need throughput, or an appropriate surrogate, as a key operational indicator. In fact, while throughput is a good operational measurement, it would appear that it is a poor measurement with which to evaluate system design options. For system design considerations, profit is the better measurement with which to evaluate candidate designs. John made precisely this point, with his depreciation example. However, once the system design decision has been made, then throughput again becomes the measurement of choice. Recognize, too, that often we can substitute surrogates for throughput, such as project buffers, or dollar-days, or.... However, these surrogate measurements are useful only in the context of an operational model. For example, only once the organization has adopted the TOC Multi-Project Management Method do the project buffers become useful, surrogate operational measurements. --- From: Billcrs@aol.com Date: Mon, 5 Jun 2000 16:23:11 EDT Subject: [cmsig] Re: Depreciation confusion. My claim would be that there can be no such thing as a "strategic" measure. A superior strategy will cause superior operational performance IN THE FUTURE. Therefore, I would have said that any measure must necessarily be an operational measure or there would be no need for the measure. This presumes of course that the purpose of a measure is to monitor performance and make decisions to improve those measures if they need improvement. This is why I claim DBR cannot be used to manage marketing capacity.
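Tony's givens for operational decisions (count T and OE, include only the variable raw-material component of I, and leave depreciation out entirely) reduce to a few lines of arithmetic. A sketch, with invented dollar figures:

```python
# Sketch: evaluating a short-term operational decision the TOC way.
# Only cash effects count: added throughput, added operating expense,
# and the variable raw-material cost. Depreciation belongs to system
# design decisions and is deliberately excluded here.

def operational_delta(delta_revenue, delta_raw_material, delta_oe):
    """Net short-term effect of an operational decision.
    delta_T = delta_revenue - delta_raw_material (truly variable cost);
    the result is delta_T - delta_OE."""
    delta_t = delta_revenue - delta_raw_material
    return delta_t - delta_oe

# Accept a rush order: $10,000 revenue, $4,000 materials, $1,500 overtime.
print(operational_delta(10_000, 4_000, 1_500))   # 4500 -> take the order
```

Note that a machine's depreciation never enters the function, just as the pilot's landing decision never consults the avionics depreciation schedule; it would appear only when comparing candidate system designs, where profit is the better yardstick.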
The important part of marketing, segments to serve and offerings to serve them with, is strategic in nature and future oriented. If marketing's strategy is sound then both sales and operations should be productive in the future. DBR can be used to manage both the sales capacity and operations capacity because both of these departments must produce short term results. Interestingly, many TOC people will say that they focus on the operating measure of Throughput to manage their business, but that's not really true, is it? We don't focus on T when managing the plant; we focus on adhering to the Drum, adhering to the Rope, and managing the Buffers. It is the management of the Buffers and our response to problems the Buffers have identified that determines how successful the plant will be in producing Throughput on a day to day basis. So clearly operational measures have a priority in terms of their use in making decisions to improve short term results. +( measures 4 on efficiency Date: Tue, 19 Sep 2000 10:06:11 -0700 From: Norm Rogers Subject: [cmsig] Re: Productivity vs Eficiency I am in agreement with Bob. Efficiency is more of a measure of the time it is supposed to take to produce a part versus the actual time it took to make the part. Productivity is the proportion of the available time the item spends producing. Both can be measured against an employee, machine or work center. Both numbers can be manipulated, however, and therefore I believe a measurement based on both should be used. If an employee spends 4 hours making parts that should normally take 5 hours, in an 8-hour day, and spends the rest of the time cleaning, in meetings, or training, then his efficiency rating is 125% (5/4) but his productivity is only 50% (4/8). But if the employee decides to charge all 8 hours to making the 5 hours worth of parts, his productivity climbs to 100% (8/8) but his efficiency drops to 63% (5/8). The constant struggle to increase efficiency and productivity is what leads to the "lean" environment.
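Norm's worked numbers are easy to reproduce. A sketch of the two ratios as he defines them, standard hours over hours charged for efficiency, producing hours over total shift hours for productivity:

```python
# Efficiency = standard hours / hours charged to the work.
# Productivity = hours spent producing / total hours in the day.

def efficiency(standard_hours, charged_hours):
    return standard_hours / charged_hours

def productivity(producing_hours, total_hours):
    return producing_hours / total_hours

# 4 hours spent on parts with a 5-hour standard, in an 8-hour day:
print(efficiency(5, 4))      # 1.25  -> 125%
print(productivity(4, 8))    # 0.5   -> 50%

# Same day, but all 8 hours charged to the job:
print(productivity(8, 8))    # 1.0   -> 100%
print(efficiency(5, 8))      # 0.625 -> ~63%
```

The two gaming strategies pull the ratios in opposite directions, which is exactly Norm's argument for watching both at once.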
Would you agree with this BOB? Building inventory just for the sake of keeping up productivity and efficiency instead of building to customer orders (both existing and anticipated) is never an option. If you are in a sales downturn THEN - IF you have plenty of cash reserves AND marketing/sales says the sales slump is only temporary then you can continue producing inventory. The slight increase in carrying costs is more than offset by "throughput" ability once sales resume. The customer is much happier to hear you say "I have that in stock and will ship it now" instead of "I will ship it as soon as I build it for you". - Original Message - From: "Tom Turton" Sent: Tuesday, September 19, 2000 7:36 AM Subject: [cmsig] Re: Productivity vs Eficiency > Bob, > > At first blush, your definitions/examples make perfect sense, and I > understand the goal to remove "subjective" downtime, etc from the > performance (productivity) measurement. > > However, having said that, I still feel some bounds have to be placed on > it. If a given machine just plain and simply requires X hours of > downtime for maintenance, then it cannot possibly be used during that > time to produce and so it should not factor in. Likewise, if you are > operating under a constraint that dictates only one 8-hour shift, then > you can only count that 8 hours...at least until that time when you can > remove the constraint and "allow" extended/multiple shifts. Although > having said that, I can see another side of the argument: > > Productivity should account for ALL time that the particular item > is potentially available. > > I.e. if a machine requires 1 hour (off shift) maintenance a day, and is > used for 8 hours a day, then Efficiency would use 8 hours in the > denominator; Productivity would use 23 hours in the denominator (or > using your example: Eff = 700 pieces/ 8*100 pieces = 87.5%, Productivity > = 700 pieces / 23*100 pieces = 30.4%).
> > But if current labor contracts or other facility limitations constrain > us to 1 shift only, then "worker" efficiency/productivity would be > limited by the 8 hours. So a given worker capable of making 10 parts > an hour makes 70 parts during a shift, but spends 30 minutes each day > in a Coordination Meeting (prescribed), and loses another 30 minutes > during startup and shutdown "slop", Efficiency = 70 / 10*(8-1 > startup&shutdown and Coordination time) = 100%; Productivity = 70 / > 10*(8-0.5 Coordination) = 93.3% > > Agree/disagree? > > ---Tom Turton > > Bob George wrote: > > > > Ignacio, > > Efficiency measures the time the machine or process > > has been designated to run, typically removing the > > downtime. This number is then used to calculate the > > amount of pieces that should be produced in that > > amount of time. > > > > Productivity, on the other hand, takes ALL time into > > consideration, regardless of scheduled time or down > > time when calculating the amount of pieces produced. > > > > For instance, let's say you have a single 8-hour shift > > operation. You can produce 100 pcs. an hour. You > > should be able to produce 800 pieces in 8 hours. Let's > > say that you were able to produce 700 pieces during > > that period. To calculate the efficiency, you would > > divide 700 by 800 to determine you have an 87.5% > > efficiency rating on that process. > > > > Using the same output, 700 pcs., your productivity > > percentage would be calculated using 24 hours as the > > multiplier (instead of 8 that is used for efficiency). > > So your productivity index would be 700 divided by > > 2400 (29.2% productivity). Productivity encompasses > > the total time the asset is available to run.
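Bob's distinction is simply a choice of denominator: scheduled hours for efficiency, total available hours for productivity. His 700-piece example can be sketched directly (function names are mine, not from the thread):

```python
def efficiency(pieces, rate_per_hour, scheduled_hours):
    """Output versus what the scheduled time should have yielded."""
    return pieces / (rate_per_hour * scheduled_hours)

def productivity(pieces, rate_per_hour, total_hours=24):
    """Output versus what the asset's total available time could have yielded."""
    return pieces / (rate_per_hour * total_hours)

# 700 pieces at 100 pcs/hour on a single 8-hour shift:
print(efficiency(700, 100, 8))   # 0.875  -> 87.5% efficiency
print(productivity(700, 100))    # ~0.292 -> 29.2% productivity
```

Tom's 23-hour variant (deducting 1 hour of off-shift maintenance) is just `productivity(700, 100, total_hours=23)`.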
+( measures 5 the right one From: "Sheets, Floyd W" Date: Tue, 19 Jun 2001 10:22:08 -0700 Reply-To: nwlean@yahoogroups.com Subject: RE: NWLM: Commercial organization Advice in two LEAN sentences: * Form should follow function; function should enable efficient flow along the critical path of delivering the goods and services that maximize the wealth of your customers. * Measure progress on impact to Throughput (Revenue less Direct Costs, NOT volume of product); Delivery Date Adherence (right part/product/service; right place, right time, right quantity, right quality); and Inventory Turn rates. Any other measures will lead you astray. +( measures 6 productivity Date: Wed, 20 Jun 2001 22:44:58 +0100 From: HP Staber Tony Rizzo wrote: > Are we assuming that utilization at non constraints is a > useful measurement? At times, in fact, often, it is necessary > to take action at non constraints, which are aimed at exploiting > or elevating the constraint. Measurements are the indicators > that tell us the effects of our actions, at non constraints and > at constraints. 1) I just read the section about Little's Law in Factory Physics which includes a clear statement that sometimes it makes sense to take action at non-constraints in order to reduce variability in the total system. Variability is a core problem in operations and is poison for a constraint. 2) I have an example of a "good" local measurement: consider quality performance, which is usually measured as "ppm". I was able to reduce the ppm-level in my plant by orders of magnitude. However, the number of customer complaints per reporting period remained at the same level - it increased a bit (probably due to our growth). So it seems ppm is not a good measurement. I'm sure that my customers are angry about the incidents in the first place (issuing a report, follow up, reporting ...) and only in the second place do they care about the gravity of the complaint (=ppm level).
So we have decided to switch to a different quality measurement which is known as "nbr of days free of customer complaints". If you use this measure and apply it to the total system (=plant), then you will end up at very low values and your people will not identify themselves with this value or even think about corrective actions. If you apply it to the individual work cell, the "nbr of days free of customer complaints" will become bigger and you will achieve buy-in at the front line. It will be the front line's pride to take care to grow the nbr for the line for which they are working/responsible. Eventually this will affect the bottom line also - keeping in mind that scrap, error and rework account for something like 10% to 30% of your P&L. +( measures 7 - TA versus lean From: Brian Potter Date: Tue, 06 Mar 2007 09:44:15 -0500 Subject: [Yahoo cmsig] TA versus Lean metric to drive down transaction costs For any constraint resource, the LEAN efficiency metrics are OK. Unlike most other resources, a constraint resource ... (1) should be working 100% of planned availability (2) should as near as possible produce outputs (each an all but certain sale) at the highest rate possible while in operation (3) will yield a global improvement for every successful local improvement For ALL resources (this will include constraints), the accounting system should track the number and severity of subordination failures (red zone holes in the shipping buffer, red zone holes in any constraint buffer, late deliveries, and late material arrivals at a constraint causing lost constraint production time). Good metrics for these events include ... (1) Mean time between subordination failure events (document with SPC chart on time between events and log of each event).
This one has been with us a long time in the maintenance area where they call it "mean time between failures" (aka, MTBF) and often apply SPC as mentioned here to confirm that maintenance practices are keeping equipment in stable, good condition. (2) Severity of events as indicated by an event log (describing what happened, when, and where) plus one of . . . 2.a. Preferred: $-days of damage (sales revenue from each delayed order multiplied by the lateness in days). Document with an SPC chart on $-days of damage for each event. 2.b. Maybe even better but less well documented (because I think that I made it up on the fly answering this post): Use an SPC chart on "$-days of damage divided by days since the last event" (combining MTBF information with $-day information in the same metric). This metric has an interesting characteristic: its dimensional units are just $ (plain old scalar money) which should help many people understand it. 2.c. $ in damage for each event (sales revenue delayed without considering time) and use an SPC chart on revenue delayed as both documentation and a decision tool. 2.d. Build an SPC chart on "$ of revenue damage divided by days since the last subordination failure event". . . . where the SPC run charts illuminate the situation in ways that suggest when something odd is happening and when "normal" events may indicate that a time has come to alter the system to create a better normal. The combined chart described in 2.b along with the log may tell enough of the story, but there may also be some virtue in a separate chart on time between events. Frequent small disruptions (a warning of big trouble waiting to happen) might "run under the radar" with approach 2.b, alone. If it is not too much burden, I think I'd do #1 (the MTBF run chart) and #2.b.
(the $-day/day run chart) along with a log recording raw data (what resources, what time, what operations, what materials [including supplier{s} with batch identification], what setup, hypotheses about why, ...) about each delay event. Experience might show that either #1 or #2.b is actually redundant. If so, drop the one you find less useful or more burdensome. It is not clear to me whether or not all the data necessary for these calculations will exist in a managerial accounting system. If so, you're off and running. If not, either altering the accounting system or creating the metrics outside the existing accounting system is probably worth the effort. Remember, a little over a century ago, managerial accounting was unusual and just coming into vogue. Organizations began collecting the data required for managerial accounting computations (by hand and later by mechanical means) and reporting. If different data, metrics, computations, and reporting serve a business's decision-making needs now, there is no reason that today (with computing and data collection both costing so much less than they did a century ago) an organization cannot engage in an information revolution even more profound than the one that Fred Taylor and the early cost accountants launched as the 19th Century ended and the 20th Century began. +( momentum From: anselmo garcia Date: Wed, 16 Mar 2005 03:20:07 -0800 (PST) Subject: NWLEAN: Momentum Hi all, I have received a lot of emails asking about Momentum and I would like to clarify the issue. Momentum is just an indicator that combines quantitative information with qualitative information regarding service level. Its formula for a specific order line is Moline = units pending x days of delay. The global momentum of the company is the sum of all the individual Moline. MGlobal = Moline1+Moline2+Moline3... Following the formulation it is expressed in unit-days.
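The two Momentum formulas above are simple enough to sketch directly. The backlog data here is invented for illustration:

```python
def momentum_line(units_pending, days_of_delay):
    """Momentum of one order line: quantity late weighted by how late it is."""
    return units_pending * days_of_delay

def momentum_global(order_lines):
    """Global momentum: the sum of the momentum of every pending order line."""
    return sum(momentum_line(units, days) for units, days in order_lines)

# (units pending, days of delay) per order line - illustrative numbers only.
backlog = [(10, 3), (5, 1), (20, 0)]
print(momentum_global(backlog))  # 35 unit-days
```

Replacing `units_pending` with the gross margin pending per line, as the post goes on to suggest, turns the same sum into a cash-flow view of the backlog.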
If instead of using pending units to be delivered, you use the gross margin (the one that corresponds to the order line) pending to be served, you can have an interesting way of looking at your backlog and the impact of your service level on your P&L, measured in cash flow not achieved. The original idea comes from physics, where the analogous quantity (force times distance) is expressed in newtons times metres. The tricky thing as usual comes from the way of implementing it. You have to drive your team to arrive at the conclusion that they have invented Momentum. It is they who have invented the indicator, it is they who have to find the way of combining both quantitative and qualitative measures, it is they who have to find the appropriate report (I will forward our Excel spreadsheet to all of you who have asked for it in the next few days). You have to drive them to arrive at this solution. It is the same as the rest of the lean tools we have implemented. The solution will be a U-shaped line, but the project will be more powerful, people will be more committed, and the implementation will be faster if they invent (properly guided) a U-shaped line. Then, after having it implemented, you can expand the whole organization into the momentum philosophy. Again, more marketing applied in the different departments, rather than the tool itself, is what works. +( Nature's Designs & Organizations by Tony Rizzo Nature's Designs & Organizations Tony Rizzo tocguy@lucent.com When faced with tough design problems, engineers often have looked to the animal kingdom for innovative solutions. For example, the design of an aircraft's wing has a great deal in common with the wing design of a bird. The shape is basically the same, with a rounded leading edge, a gentle curvature, and a tapered trailing edge. The skeletal structure of a bird, with its hollow bones (very efficient structural members), also has been copied by airplane designers.
The structures of small aircraft consist of hollow tubes. Indeed, evolution has provided many effective technical solutions. At times engineers have had to do little more than observe and copy nature's carefully developed designs. Given our success in adapting nature's solutions for our own purposes, might we find effective organizational solutions in the animal kingdom? Let's assume that we want to design an organization for endurance. Are there animals that possess tremendous endurance? Well, yes, there are. The caribou is one example. It can maintain a marathon pace indefinitely. Tests have shown that the animal can maintain an even faster pace for hours at a time. In fact, most herd animals are capable of running at a relatively fast pace almost indefinitely. Such herd animals possess some interesting features. First, they tend to not accumulate large stores of fat. If they did, they would be lethargic, slow, and cumbersome. Second, herd animals are also capable of significantly greater speeds, if only for brief periods. Their ability to sprint is invaluable in the presence of hungry predators. How might this design be adapted to organizations? Well, let's see. Do we need organizations that can sustain a comfortable level of performance indefinitely? If we ask stockholders, we're likely to hear that we do. Should such organizations be permitted to bloat themselves unnecessarily? Again, the answer is clear. Significant amounts of excess weight, such as warehouses full of unsold, finished goods inventory, do not promote performance and good health for an organization. What might we say about the ability to sprint? Would that be useful in an organization? Let's see. Does an organization ever face not a hungry predator but a hungry competitor? Yes, it most certainly does. The ability to achieve great speed, if even in short bursts, is invaluable. It can mean the difference between business life and business death at times. 
However, if we look at many organizations today, particularly the larger organizations, we don't see the design features that nature has bestowed upon the caribou and other endurance runners. Instead, we see organizations designed to minimize costs. This is suggested by the functional silos that we observe in many corporations. For example, some large corporations group all their sales people in the same functional organization. Similarly, all the manufacturing operations are grouped in the corporation's single manufacturing silo. The functional structure is designed to achieve economies of scale. Each functional silo wields extensive clout with its suppliers, often forcing suppliers to negotiate very low prices. In such corporations, a business unit finds itself partitioned, segmented, divided by the functional barriers. Such segmentation greatly impedes the flows of information and the decision-making processes of the businesses. Synchronization toward a common goal becomes a distant dream. Still, such functional organizations are able to minimize costs. They are very efficient. Therefore, they tend to survive, if at a reduced pace. Has nature created a similar design? Specifically, does an animal exist which is designed for minimum cost, i.e., which can survive with a minimum expenditure of energy? Not surprisingly, nature has filled this niche as well. There does exist one animal the design of which ensures that the animal's expenditures of energy are always at a minimum. Nature's highly effective design for a minimum cost operation is the sloth. IF YOU CARE, THEN SHARE. +( negative branches out of YES-BUT If somebody gives you a Yes-But and you are not absolutely clear about what he means: don't just accept it, but say that you want to think about it. However, you need more info to commence your homework. Therefore ask in a friendly manner: - So what? - What is it that you don't like? ... until you understand the mechanism behind his Yes-But.
+( NPROI net profit and ROI Date: Thu, 2 Aug 2001 14:08:27 +0100 From: HP Staber Subject: Re: [cmsig] Interdivisional/Supply Chain Transfer Pricing Aaron M. Keopple wrote: > The question is: How do you determine how much Throughput each contributor > should receive from the end sale? Should it be based upon the percentage of > variable expense each put into the product? Should it be based on labor > minutes invested? Should it be based on Constraint time invested? As Jim Bowles proposed, it will need negotiation and agreement of all chain segments involved. I think that a Supply Chain is only a loose conglomerate of individual systems (legal entities) rather than the grouping of subsystems. Every strategy will have to take this "individuality" into account. The two fundamental profit equations apply to all of the chain elements: 1) NP = T - OE 2) ROI = NP / I = (NP/sales)/(I/sales) The chains work in different conditions, however. At the beginning of the supply chain you will most likely have process industries requiring machinery and equipment (=> higher %I of sales, high working capital) while at the end of the supply chain you will find assembly and distribution functions (=> less %I of sales, higher sales turns, very often negative working capital !). If all chain segments want to provide (the same level of) return to their shareholders they will require different NP or T contributions. If you analyse financial data of supply chains (e.g. Automotive Industry) you will find high NP at the beginning of the supply chain and low NP at the OEM or dealers. Their ROIs are more or less at the same level however. The same holds true for intercompany relationships. So bring everybody involved together and try to find a win-win for all (not an easy task, I know). --- From: "Christopher Mularoni" Subject: [cmsig] Re: Clarification on I Date: Tue, 14 Aug 2001 19:49:03 -0400 It has been a while since I watched the Satellite Series.
However, I am presently rereading The Goal and am taking notes. The following two paragraphs are the excerpts from my notes that seem most relevant. I hope this helps. There is more than one way to express the goal. Three measurements that express the goal are throughput, inventory, and operational expense. Throughput is the rate at which the system generates money through sales. Inventory is all the money that the system has invested in purchasing things that it intends to sell. Operational expense is all the money the system spends in order to turn inventory into throughput. These definitions should be considered as a group. If you want to change one of them, you will have to change at least one of the others as well. The depreciation on a machine is operational expense. The portion of the investment remaining in the machine, which could be sold, is inventory. Likewise buildings are inventory. Lubricating oil for machines is operational expense, as it is not sold to the customer. Likewise scrap is an operational expense; however, the portion of the investment that can be recovered when sold to the scrap dealer is inventory. Carrying costs are an operational expense. Money for knowledge which yields new manufacturing processes, something that turns inventory into throughput, is an operational expense. However, if you intend to sell the knowledge, as in the case of a patent or a technology license, then it's inventory. But if the knowledge pertains to a product that the company itself will build, it's like a machine - an investment to make money that will depreciate in value as time goes on. And, again, the investment which can be sold is inventory; the depreciation is operational expense. Secretaries, chauffeurs, foremen, managers, etc. are generally operational expense. --- From: "Becky Morgan" >Date: Tue, 14 Aug 2001 08:58:06 -0400 > >In watching the Satellite Series, Goldratt talks about I as Investment, >including Inventory, Plant, Equipment, Land, etc.
In much of what I read, >I is described as Inventory only. > >I am presuming that the conversation centers on the Inventory part of I >because it is the most flexible. In addition, if we assume that the plant, >equipment, land are necessary to creating throughput, they cannot be >reduced, while Inventory can. --- From: "Opps, Harvey" Date: Tue, 14 Aug 2001 14:31:59 -0400 However, when this was first developed, and I don't know if ELI G. has changed his thinking on this matter, * Inventory is all of the money that the system has invested in purchasing things which it intends to SELL. This included plant, property and equipment. Companies always sell off equipment and property; they even sell the business units, e.g. outsourcing, where you sell off the assets and trade off the employees. All assets were fair game for selling. +( OPT and DBR From: Stephen Franks Date: Mon, 19 Mar 2001 12:28:19 -0000 Oded, Avraham, Stephen, Paul Zainea, Mark, Jack Goodstein I was passed a chain of emails and asked to comment - please note I am not a member of cmsig. I hope the information is of value; if not, there is a handy button on the left > DELETE. [cmsig] Re: OPTvs. DISASTER - IF YOU DON'T LIKE TO READ PRODUCT NAMES THEN THIS EMAIL IS NOT FOR YOU. (I HAVE NO PRODUCT AND NO AXE TO GRIND) I have been involved in TOC since February 1985, originally working for Creative Output, mostly with the OPT philosophy not the software. The discussion on which is best or which came first is largely irrelevant. But what is absolutely clear is that DBR is not implemented perfectly by any of the systems available today. We have to be clear that DBR is a methodology aided by software - the big problem is that all the systems people think that it is only with software (very expensive software) that DBR can be implemented. This is almost completely wrong. DBR can be implemented in all but the most complicated environments without the aid of commercial software.
I would go as far as saying that software is a liability in the early stages of implementing DBR. To make it sustainable long term, software is sometimes needed. Be smart: implement manually first, get the benefits, and make all the software people work hard for their very fat cake. If they understand DBR they will be able to justify their price. OPT predated DBR, and DBR is not an explanation of how OPT schedules - see Oded Cohen's 1984 scheduling lecture, which brilliantly compares the scheduling techniques of MRP, finite, infinite backward, kanban and the finite/infinite backward method employed in opt/serve. What has to be said is that it is undeniable that OPT was part of Eli's early development of the whole of TOC, and that included DBR. What concerns me about the chain of emails is that it obscures the real issues. (1) DBR is not Software; software is only there as a tool to help. (2) The legitimate software vendors are few. (3) There is a huge market, with only one or two main suppliers - why fight each other. (4) Very few buyers understand what constitutes a DBR computer aid - so we are all being ripped off by the many software companies claiming they have constraint management tools implemented in their packages. All the software vendors are viewed badly and all the support (TOC Experts!) are given a much harder job. If this discussion is anything other than an 'academic' exercise, what we need is a simple procedure and test models to check each vendor's claims against. There is a little discussion on what I understand about OPT, Disaster and Resonance below. OPT is a very, very good scheduling system. OPT included, the last time I looked in 1988, many of the aspects of DBR. It identifies potential constraints, it correctly only schedules the prime constraints and the Critical Constraint Resources, not every bottleneck. It includes buffers to protect the critical aspects of the schedule.
It subordinates the other resources to the needs of the constraint schedule, although the word subordination did not exist in OPT; it was called SERVE. One difference between DBR and OPT originally (I don't know now what has been upgraded, maybe Stephen can tell us) is that the serve module was an infinite backward scheduler, so there was no dynamic buffering (resolution of temporary overloads) in the SERVE module - overloads were clearly identified but not resolved. It needed a post-run intervention and maybe another schedule run or two to resolve these issues. Another key difference is protective capacity, which is a fundamental aspect of DBR; how this is handled in OPT I am not sure. OPT also included many unique features - the OPT modelling language, the ability to make its own decisions within parameters on batch size, to split and overlap batches in serve; using a brilliant application of logical kanbans it produces far better finite forward schedules than exist almost anywhere else (and we should note that DBR is setting the Constraint schedules backward from the due date, not finitely forward). Disaster is a much simpler tool than OPT but equally effective. It was a dramatic advance in scheduling technology and many companies that installed it in the early nineties are still using it today - as with OPT there is a great pedigree. Disaster is a full finite backward scheduler which follows almost all of the steps of DBR and has a brilliant interface with the person actually scheduling. During the schedule run problems are identified and resolved. Data checking, Protective Capacity, Tailored buffering to fit the component and or Job, Drum identification, Drum Scheduling, Unique set up and overtime options, and Subordination are all interactively controlled in one blisteringly fast scheduling run. Its biggest weakness was its poor interface to the host system and its output file formats, which caused most IT departments to give up.
Resonance, or now Thru-put, is also a good system; it again has almost all of the features needed to support a DBR implementation. This is a sophisticated system with many interesting features, too many to describe here. Three of these features are (1) the waterfall for buffer analysis, (2) the earliest start addition to the Drum schedule to allow for pre-Drum operations and (3) the claimed ability to take material constraints as well as resource constraints into account. -----Original Message----- From: Jack & Sara Goodstein [mailto:goodstein@home.com] Sent: 09 March 2001 07:00 Subject: [cmsig] Re: OPTvs. DISASTER One advantage of working for a mature, high technology manufacturing company is having access to world class library and reference facilities. I acquired my copy of the article about ten years ago. I have no idea if it is on the web. I suspect that university libraries would have it in their microfilm archives. Anyway, I don't want to take sides. This may be one for King Solomon. I'll quote selected lines with page reference. Since everything below except [things in brackets] and my typographical errors are quotations, I am omitting the quote marks except at beginning and end. Jack Goodstein From an article by Dr Goldratt, "Computerized Shop Floor Scheduling," Int. J. Prod. Res., 1988, vol 26, no 3, 443-455: " [p. 443 from the summary paragraph at the beginning of the article] . This article describes that evolutionary process from basically a computerized Kanban to an attempted computerization of the Drum-Buffer-Rope technique. . . . . . . the real key lies mainly in the conceptual framework under which we run our organizations. [p. 447] 4. Computerized kanban . . . In retrospect (since at that time we had no knowledge of the Japanese scheduling method), the scheduling logic that OPT[registered trademark] used in its early stage (1978) was basically automated Kanban. . . . [p.448] 5. The HALT concept exposes inherent idleness . . .
The concept of HALT was introduced in late 1980. . . . The recognition of the contradiction between balanced flow and balanced capacity in an environment which has statistical fluctuations and dependent resources started to become clarified. The OPT[registered trademark] rules started to be formulated with the growing understanding that the software's superiority stemmed not from its algorithm but mainly from these underlying concepts. . . . . [p.450] Bottleneck/non-bottleneck scheduling . . . In 1982 . . . . The separation between the OPT[registered trademark] BRAIN module (forward scheduling of bottlenecks) and the SERVE module (backward scheduling of non-bottlenecks) was introduced . . . [p.451] This step of splitting between bottleneck schedules and non-bottleneck schedules represents a drastic conceptual departure from the Kanban scheduling logic . . . [p.452] . . . Only in 1985 did it become clear that inventory and time are not two separate protective mechanisms, but actually one. The TIME-BUFFER concept was developed . . . At this stage only a fraction of our efforts were geared toward shop floor scheduling and much more effort was devoted to finding conceptual replacements for the cost procedures . . . The 'Drum-Buffer-Rope' approach was formulated and explained in that book [The Race, which came out in 1986]. [p.454] . . . Thus scheduling it [a capacity constraint resource] forward in time means the creation of excess inventory. We were not totally oblivious to this problem and thus the OPT BRAIN[registered trademark] contains a crude mechanism to prevent excess build ahead (order permission). But the really important conclusion . . . was not drawn. . . . The driving force should not be time but the exploitation of the constraint. . . . " ----- Original Message ----- From: "Avraham Mordoch" To: "CM SIG List" Sent: Thursday, March 08, 2001 1:09 AM Subject: [cmsig] Re: OPTvs. 
DISASTER > Hi Jack, > > Since many on the list don't have access to the mentioned article by Dr. > Goldratt (is it on the web somewhere?), it'll be interesting if you can give us, > very briefly, his view on whether he considered OPT a DBR system. > > Thanks, > > Avraham Mordoch > > Jack & Sara Goodstein wrote: > > > For those who don't already know, here's a reference to an article Dr > > Goldratt published: > > "Computerized Shop Floor Scheduling," Int. J. Prod. Res., 1988, vol 26, no > > 3, 443-455 > > > > The article includes his summary of the concepts of OPT. It is particularly > > interesting to consider the timing of its publication, about two years > > before the Haystack Syndrome, which includes the algorithm for Disaster. > > It seemed appropriate to post this reference since others seem to be engaged > > in sharing early history. --- From: "Mark Woeppel" Date: Wed, 7 Mar 2001 08:48:12 -0600 OPT is the way?! Really. I thought Jesus was the way. (someone break out the kool-aid!) I was there, too. My understanding of both OPT and DISASTER lines up with Avraham's explanation, not yours. DBR was invented to explain the way that OPT scheduled, and the only reason it does it the way it does was to overcome a processing speed limitation with hardware back in the '80s. DISASTER was designed from the ground up as a DBR solution (read the Haystack Syndrome). The scheduling algorithms of OPT and DISASTER are very different. Your commercial disguised as a response to Avraham deliberately misleads the people on the list. -----Original Message----- On Behalf Of Stephen Franks Sent: Wednesday, March 07, 2001 6:28 AM Subject: [cmsig] Re: Goldratt's involvement with i2 I was disturbed, as many of my colleagues were, to see this note from Avraham on the list. We observe and participate at times, always attempting to keep product references out of the communications.
However, the misleading thrust of Avraham's comments left us with no other route other than to respond as follows to put the record straight. Please excuse us for referencing our technology directly...won't do it again Dr. Stephen Franks STG a wholly owned subsidiary of Manugistics ------------------------------------------ "OPT(r) technology paved the Way"! Really. OPT(r) technology is the way. We do not need to get too theological about this, but to cast the OPT(r) approach as John the Baptist and Disaster as the Messiah really takes the biscuit. Come off it Avraham. We were there too! The fact that OPT(r) technology has been delivering sustainable DBR solutions for nearly 20 years is often deliberately obscured by more recent vendors. They use words like "modern" or "new" to disguise the lack of functionality in their systems when compared to "old" OPT(r) software. DBR is OPT(r) technology and OPT(r) technology is DBR. There is no disguising that. The truth is that OPT(r) technology, the original DBR solution created by the founders of Constraint based thinking, has been in continuous development over these 20 years. There is no substitute for length of experience when it comes to delivering depth and breadth of capability. So it should come as no surprise that it out-performs all other packages in the TOC field. So it should surprise no-one that OPT(r) technology continues when others fall by the wayside. OPT(r) technology today, available on a wide variety of hardware and software platforms, is a fully functioning DBR solution that meets the needs of the real manufacturing world. Whether process, assembly or configuration plants - or any combination thereof, manufacturers need powerful tools to implement concepts and policy and to sustain those implementations and build upon them. Today's OPT(r) Solution Suite, like its predecessor versions, provides Step Change and Continuous Improvement through DBR architecture and extensive modelling capabilities.
And as Eli is always reminding us, there is so much to do. Anyway, it's nice to hear from you again Avraham and to find you still involved. But don't knock the OPT(r) approach. It was very good in your day and it's even better today. Drop by and see us some time. -----Original Message----- From: Avraham Mordoch [mailto:mordoch@inter.net.il] Sent: 23 February 2001 17:38 I'm sorry to read that you were disappointed with Disaster (the Goal System). I hope that I was not the guy who presented it to you back in the beginning of 1990. Anyway, give Disaster the credit that it was the first Drum-Buffer-Rope system ever developed in the world (if you don't count OPT, which was not really DBR). It paved the way. Also, up until today, 14 years after the development of Disaster started, there are very few, too few, reasonable Drum-Buffer-Rope systems available in the market place. With all due respect to i2 and their huge achievements, i2 is not one of those. Saying that you use TOC-related methodologies is one thing, but really building them into the software and making them the user's reality is a different thing. Avraham Mordoch STOCT & Associates Pampas2@aol.com wrote: > Intellection (now i2) was started, as I know it, by Sanjiv Sidhu, an ex-IT manager > who had to develop a system to run their production systems. He used operations > research techniques and of course TOC philosophies to develop their software. > > I ran into them in 1990-1 when I was looking for a replacement to our own > home-grown finite scheduler and found myself very disappointed with the early > versions of Disaster. --- Date: Thu, 08 Mar 2001 08:41:46 +0200 From: Avraham Mordoch A few notes for the benefit of the list's members who are not familiar with the history of OPT / DBR / TOC / Disaster etc., just to put the record straight: 1. I don't have any interest, as may be implied in your e-mail to the list, in Disaster or in its successor - The Goal System.
I have been out of the scheduling software business for many years. 2. I was a part of the development of OPT, and I believe that I was a major part, when OPT was originated and developed in the early '80s. I know OPT and the OPT concepts very well. I don't need others to tell me what OPT is all about. I was there. 3. STG probably improved OPT since they got it under their wings, and I don't know the product today. In the '80s, as long as OPT was under the direct influence of Dr. Eli Goldratt, OPT was not a DBR system. 4. Since I led the development of Disaster, and since I made the first sales of Disaster, and since I led the first implementations of Disaster, we can safely assume that I also know the product, the concepts it was based on and its business environment. Disaster was the first DBR system. As the first such product it is not good enough; other, better TOC / DBR based products exist. 5. It may even be that "OPT© Solution Suite" today is an excellent DBR product. I'm not familiar with its capabilities today. If it is just a new envelope around the old stuff, it probably is not. If you kept upgrading it conceptually and technically, it may very well be. 6. I'm, most probably, the most experienced person in the world in developing and implementing both TOC-based software and TOC methodologies. Among the many things I learned, I know, and you know it as well, that it is not the software that will do the work for the customer. Good software is nice to have, but you need some other things as well to provide customers a good return on their investments. Sometimes the software is even just an obstacle in helping your customers to achieve the aggressive growth, in both revenues and profit, that they are looking for. 7. My last point is that I suggest you go back and read what I wrote about OPT. Don't tell me that I knocked OPT. As a matter of fact I respect it very much. It paved the way for you and me and many of the other participants of the list.
It just paved the way. Don't forget to whom you owe it. Stephen Franks wrote: > I was disturbed as many of my colleagues were to see this note from Avraham > on the list. > > ...snip... +( organizational silos see also section "Nature's Designs & Organizations" of this database -----Original Message----- From: bounce-cmsig-1111@lists.apics.org [mailto:bounce-cmsig-1111@lists.apics.org]On Behalf Of Tony Rizzo Sent: Friday, November 19, 1999 10:38 PM To: CM SIG List Subject: [cmsig] RE: Focus and Synchronization - branched off of Structured For Speed > > Hello Jerry, > I'd like to begin by describing the currently popular structure, which clearly does not please everyone. For most large corporations, it is a highly functional structure, maintained to absurdly high levels. Specifically, a large corporation like, say, Toyota, is divided into very tall functional silos. There may be the marketing silo, the product development silo, the manufacturing silo, the sales silo, the support silo, etc. > Let's assume that the product development silo of a corporation spans all three business units of the corporation. There is always some VP in charge of such a silo, and that VP usually reports to the CEO. There are also other VPs who are responsible for the various business units. They, too, report to the CEO. In such functionally divided corporations, this command structure exists with every functional silo and with every business unit. > So, what's the problem with this structure?
Imagine that you are in charge of one of the three business units. You're after a specific market, and you want your projects and resources to be focused on the goal of penetrating that market. This is a typically laudable goal for anyone in such a position. Are the managers and worker bees who typically work your projects aligned with this goal? Well, they probably have some sort of dotted line relationship to you, on the corporation's organizational chart. But they have solid line relationships with the VP of their silo. This means that you can hope to merely influence their behavior. The one with the real control is the VP in charge of the silo. That person will subject the people in her silo to a set of measurements, which are very likely to cause behaviors that are entirely inconsistent with your objective. > The managers in her silo are in constant conflict. They have to survive, and to do that, they have to optimize their silo measurement. Often this means that they have to make decisions which are certain to delay your projects. At the same time, the managers have to look like they're being responsive to you. To appear to be responsive to you, they have to show progress on your projects. But they also have to show progress on the projects of the VPs in charge of the other two business units. After all, the same managers have dotted line relationships with those VPs too. > Well, the managers do show progress on your projects and on the projects of the other two VPs. They do so usually by spreading their people very thinly across all the projects. As a result, you can kiss your objective good-bye. The silo measurement wins. > With this structure, you don't control the resources that your projects require. You don't have the opportunity to apply any one set of measurements to those resources. In fact, it is very likely that the silo VPs regularly impose measurements designed to minimize costs or, at least, to keep each cost within its budgeted level. 
When this happens, of course, you can kiss your objective good-bye again. > Companies that are structured this way can't get out of their own way. They are barges, and the market is their ocean. When a swell comes by, they go up. When the swell passes, they go down. Frequently, the response of such a barge to a swell is entirely out of phase; the barge finds itself ill prepared to exploit any significant opportunity. Of course, proactive attempts to position the barge strategically for future swells are virtually impossible. Barges don't move fast enough for that. > An alternative is to have a real business unit structure. Then, the silo VPs are put out to pasture, and the managers who formerly reported to them now report directly to you, or they report directly to each of the other two business unit VPs. > Is this structure better? Well, let's see. Are you able to apply a single set of measurements to all the managers and resources who work your projects? Certainly you are. Is there any more interference from the measurements imposed by the silo VPs? No, that's gone. Now, you have a much faster system for Command, Control, Coordination, and Information. Synchronizing the efforts of all those people becomes a distinct possibility. Now, instead of being in charge of a barge, you're in charge of a battleship. > With TOC, your battleship is equipped with radar, sonar, a high-speed data network, and even a global positioning system. You no longer worry about the barges that may share your ocean. You can outmaneuver them at will. If you so choose, you can even blow them out of the water. > But, you have one obstacle left. You have to identify one goal-centric measurement with which you and your managers can constantly make real-time corrections and, consequently, keep your entire operation completely focused on your goal. A few years ago, I would have told you that such a measurement didn't exist. Happily, that's no longer the case.
Such a goal-centric measurement does exist for product development organizations. You'll find it described in some detail in the next issue of PM-Network magazine, in an article by a familiar author. PM-Network is a publication of the Project Management Institute (PMI). > Tony Rizzo tocguy@lucent.com --- From: Billcrs@aol.com Date: Mon, 1 Jan 2001 22:48:44 EST In a message dated 12/21/2000 8:37:09 AM Eastern Standard Time, OutPutter@aol.com writes: << Greetings List, I've been tossing around the question of organizational structure recently and hope to get the group's opinion on the subject. What organizational structure is the most suited for a ToC company? In a typical mid-sized company, how would the divisions be divided into VPs? Who and what function would report to what VP? Who would control the inventory and inventory policies? Any and all ideas are welcome. I hope you all have the fulfilling holiday of your choosing! Jim Fuller >> Organizational structure is not a function of TOC or any other business management tool. There is a proper structure for virtually any business, and it has been exhaustively researched and documented by Dr. Ichak Adizes. The book that explains it in the most detail is How To Solve The Mismanagement Crisis and the audio tapes are called Adizes Analysis Of Management. In short, some departments in a business are naturally future-oriented and some departments are present-oriented. Long-term-view and short-term-view departments should never be under one head unless it is the General Manager, Managing Partner, COO, or CEO. For example, Marketing's primary role is to look to the future and figure out what markets they should be serving in the future and what products and services to serve those markets with. The primary role of the Sales department is to make sales NOW.
When there is a VP of sales and marketing, the long term is almost always sacrificed for the short term because short-term results are always perceived to be more important. Therefore in this organizational structure marketing spends the majority of its time and budget on sales support activities rather than future-oriented activities. The same holds true for production and engineering. Engineering should be deciding what is needed for the future while Production has to produce results NOW. When engineering and production are under the same head, engineering ends up doing maintenance. When a business is not large enough to have a head of each department, then departments should be combined by time horizon: short-term with short-term and long-term with long-term. Thus we want a VP of sales and production and a VP of marketing and engineering rather than the more common combination. Organizational structure is just one facet of his work, but if you are truly interested in a deep understanding of why businesses grow and die (and what to do about it), organizational structure, why people succeed or fail as managers, and the science of management, then I would urge you to begin learning more about this man's work. He has been called the greatest management thinker in the world today. Enjoy and Best Regards, Bill --- From: Billcrs@aol.com Date: Mon, 1 Jan 2001 23:04:55 EST Subject: [cmsig] RE: ToC Organizational Structure To: "CM SIG List" Reply-To: cmsig@lists.apics.org In a message dated 12/22/2000 10:41:42 AM Eastern Standard Time, m.woeppel@gte.net writes: << Brian, I think you are incorrect in your statement that functional silos are damaging. Most organizations are organized around functions. Most organizations do not experience severe difficulties. Since that is the case, functional organizations are not that damaging. I think you need to look for another cause. How about this? 1.
Most functions within organizations are measured on the performance of their part of the accounting equation (revenue for sales, expenses and productivity for other departments). 2. The measurement practices in many organizations do not give clear information about the results of local decisions. 3. Some organizations are experiencing difficulty when different functions work together. Let's check for the resulting effect: those organizations that are experiencing difficulty are being measured on the wrong things. I suspect they are. The question of what is an appropriate organizational strategy/structure (in line with TOC?) should be asked in light of every other question we ask about a decision in the organization: "Does the result of our decision move us closer to the goal of the organization?" Since each organization is different, with different competencies and market requirements, I don't think there is a "right" structure, only one that works to advance the organization to its goal. I'll get off my soapbox now. Blessings to all in this holiday season. Mark Mail to: mwoeppel@mfgexcellence.com Pinnacle Manufacturing Consulting We wrote the book on implementing Constraint Management. http://www.mfgexcellence.com/ >> You ask this question in your text above: "Does the result of our decision move us closer to the goal of the organization?" If we pretend that the goal of a business is to make money now and in the future, then what measures do you offer to show that decisions being made today to position our company to make more money in the future are good decisions? In other words, if I do not have a short-term result to measure, such as production or sales or some other measure of day-to-day productivity, how can I really know that I'm making good decisions? I'll be even more specific. How should Marketing be measured on a day-to-day basis? Not the sales support part of marketing that generates leads, but the strategy part of marketing that helps ensure a profitable future?
TOC provides excellent answers for functions that have to produce short-term results, but I don't think it can offer us much for functions whose primary role is to ensure the future. Best Regards, Bill --- From: "Jean-Daniel Cusin" Subject: [cmsig] Re: ToC Organizational Structure: The Right People ain't enough... Date: Mon, 8 Jan 2001 13:25:56 -0500 The devil hides in the details. Trying to solve the "more inventory vs less inventory" problem by "having the right people in the right positions and having them figure it out" is like giving the devil a signed blank cheque. These "right people" will presumably do what is the "Right Thing to do", i.e. look at the constraints of the system, subordinate the non-constraints, elevate the constraints, etc. Or they will do the other "Right Thing", which is to buy an ERP system to link demand with production, etc. Unfortunately, they will be no better off than most other "Right People" in the field in establishing how much inventory is enough. In constraints management, under the chapter "subordinate the non-constraints", nowhere is it explained how this relates to inventory management and mix management. Heads-up! And in the ERP world, same gargantuan problem: when ERP creates planned orders (basically, the input to the schedule that creates the inventory), nowhere does it ever consider or optimize the size of these planned orders based on the mix that must be handled by the production constraint. Having the "Right People" apply the "Right Thing to do" will NOT generate the Right inventory configuration if this means doing more of the same "best practices". We need to extend the ERP and CM body of knowledge to get there. Here are a few deductions about that: 1.1 Average inventory is based on lot-size (half of it, to be precise) plus safety stock. 1.2 Safety stock is based on demand variability over the replenishment lead-time.
1.3 Replenishment lead-time is based on how much time the bottleneck work center requires to cycle through the various items in its product mix. 1.4 The time the bottleneck work center requires to cycle through the various items in its product mix is dependent on the number of items in the product mix and the lot-sizes used for each one of these items. Conclusion A: The predominant factor in establishing average inventory as well as lead-times is lot-sizing. 2.1 When ERP systems are implemented, it is extremely rare that anyone looks at the lot-sizing parameter for each item to make sure it makes sense. (In fact, they are usually transferred from the previous system, or a mass update is made to some arbitrary number.) 2.2 Some ERP systems offer some support for lot-sizing: EOQ, POQ, Lot for Lot, etc. 2.3 All of these lot-sizing approaches consider the item in isolation from the rest. EOQ and POQ only consider a couple of cost factors. None of these lot-sizing approaches considers bottleneck constraints and management objectives in terms of stock turns and lead-times. 2.4 All ERP systems require the planner to provide what lot-sizing or order policy the ERP is to use in creating planned orders. Conclusion B: ERP systems provide no help in establishing appropriate lot-sizing, given shop constraints and management objectives. Conclusion C: ERP systems are generally not very efficient at improving stock turns and lead-times. (This correlates to the high failure rate of ERP implementations.) 3.1 The focus in TOC is to maximize throughput $. 3.2 In the establishment of the "buffer" in the Drum-Buffer-Rope technique, there is no discussion of how lot-sizing impacts buffer size and how to establish lot-sizes accordingly. Conclusion D: The TOC body of knowledge does not address the appropriate buffer issue (i.e., the Right Inventory). Conclusion E: This is a big opportunity waiting to happen!
Because the two predominant models of thought about production and inventory management have not licked that inventory monster, and because this is a fundamental issue to all production and inventory planners. The Big Opportunity is about: a) The establishment of appropriate lot-sizes that respect production constraints, implement management objectives such as stock turns and lead-times, and integrate item-level lot-sizing constraints such as minimum and maximum lot sizes and shelf-life, while minimizing all the costs involved. b) The establishment of lead-time and safety stock parameters that are synchronized with the lot-sizes so that the production and stocking strategies are aligned with one another. The value of the Big Opportunity: an improvement of 20 to 35% (and often much more) in lead-times and in stock levels with no loss in efficiency. Yes, we have a solution to sell that fits with an ERP, TOC and/or Kanban environment. We have a solution to sell only because this opportunity exists and there is such a big payback potential, and it is quite difficult to do manually. If you would like more information, please contact me directly. I'd like to tell you about Lance-Lot(tm). -----Original Message----- In a message dated 1/2/2001 10:04:42 PM Eastern Standard Time, OutPutter@aol.com writes: << Thanks Bill, I'll read the book. One short question. As you know, ToC has a generic solution for production and inventory control that addresses the conflict of holding more inventory vs. holding less inventory. How would you say Adizes would address control of the inventory, especially of finished goods? Jim Fuller >> He would not address it. He would have the right organizational structure and the right people in the right positions and then let them figure it out. +( outsourcing From: "Bill Dettmer" Subject: [cmsig] RE: Also graph T/headcount (=productivity) or T/OE over time.
Date: Fri, 4 May 2001 10:01:33 -0700 ----- Original Message ----- From: "HP Staber" To: "CM SIG List" Sent: Thursday, May 03, 2001 12:43 PM Subject: [cmsig] Also graph T/headcount (=productivity) or T/OE over time. > Jeff Schueller wrote: > > > > How do you quantify T/headcount when the company has undertaken an > > aggressive out-sourcing program? > > > > Throughput could remain the same or even increase under out-sourcing while > > head count would be nose-diving. > > This is an assumption which I would like to challenge. Subcontracting > decreases your T as it is a variable expense. In reality only T will > nose dive after a subcontracting decision while managers are reluctant > to lay off the people. [This looks like a classic case for applying the "meta-level" decision criterion: delta "T" minus delta "OE." A subcontracting decision, i.e., moving work out-of-house (notice I didn't say "to the outhouse"!), is usually justified on the basis of a reduction in fixed OE. So if an internal function is completely dissolved with the expectation that outsourcing will save money, the assumption is usually made that the variable cost increase will be more than outweighed by the OE saving. Is this true? The only way to know for sure is to calculate the increase in variable cost over some extended period--say a year or more--and recalculate "T" for the same period based on that new variable cost. Then recalculate OE for the same period, subtract, and compare the answers with the original T and OE for the same period to determine whether dT-minus-dOE is positive or negative. Even if it turns out to be positive, the value of that difference should be weighed against the change in internal capability (a non-financial issue) to determine whether the benefit-to-cost saving ratio is worth the administrative hassle, the pain inflicted both on those "down-sized" and those remaining, and the impact on capability to generate "T" in the future.
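The dT-minus-dOE check described above is simple arithmetic, and a short sketch can make the mechanics concrete. The figures below are invented purely for illustration (the thread supplies no real numbers); the function itself is just the stated criterion:

```python
def delta_t_minus_delta_oe(t_before, oe_before, t_after, oe_after):
    """Meta-level decision criterion: dT - dOE.

    T  = throughput (sales minus totally variable cost) for the period.
    OE = operating expense for the same period.
    A positive result means the change gains more in throughput than it
    adds in operating expense (or saves more OE than it costs in T).
    """
    d_t = t_after - t_before
    d_oe = oe_after - oe_before
    return d_t - d_oe

# Hypothetical yearly figures: outsourcing raises variable cost, so T
# drops by $120k, while dissolving the internal function cuts OE by $150k.
result = delta_t_minus_delta_oe(
    t_before=1_000_000, oe_before=700_000,
    t_after=880_000, oe_after=550_000,
)
print(result)  # prints 30000: positive, so the move pays off financially
```

A positive number only settles the financial half of the decision; as the post stresses, the loss of internal capability and the human cost still have to be weighed separately.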
Clearly, this has deeper ramifications (and more system interdependencies) than just a bean-counting decision (though very few companies will see it that way).] +( P-Q example see also games.db Goldratt: The Haystack Syndrome p 72 1) Describe the system and capture the base data: sales price per unit, purchase cost per unit, capacities, resource consumption per product, market volume 2) Calculate the throughput per product 3) Calculate the constraint (bottleneck) capacities 4) Calculate the throughput per constraint hour for each product = the throughput contribution per product 5) Priority in the mix goes to the product with the highest throughput contribution 6) Calculate the threshold level = the minimum throughput we are getting today per constraint minute (Haystack Syndrome page 95) Date: Thu, 04 Nov 1999 08:27:42 -0500 From: Ed Walker I use the "classic" P-Q problem (from The Haystack Syndrome) to introduce TOC to my students. In the base scenario, traditional measures DO indicate that the "wrong" product should be produced. However, in various alternate scenarios that I use, traditional and TIOE measures are in agreement. I specifically use those examples to point out that traditional measures are not ALWAYS wrong, but I do stress that I have not seen an example (academic or real-world) where TIOE measures lead to incorrect (less profitable) decision making. I believe this point is critical when "converting" the CPAs in my classes. If you'd like to try this, take the base P-Q problem and add another "B" machine (add $300 to weekly OE, and forget about the purchase price of the new "B" machine for now). The market becomes the constraint, and traditional and TIOE measures correctly indicate that Q is the preferred product. (Actually, TIOE would imply that both markets should be completely satisfied -- if you have the resource and material capacity, orders should never be shipped late.) OR Take the base P-Q problem (with one "B" machine) and evaluate the impact of alternative material sourcing.
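The base-case ranking can be sketched in code using the figures commonly quoted for the P-Q problem (P: $90 price, $45 materials, 15 minutes on constraint B, demand 100/week; Q: $100, $40, 30 minutes, 50/week; 2,400 B-minutes and $6,000 OE per week). Treat those numbers as an assumption to check against The Haystack Syndrome; the logic follows the six steps listed above:

```python
# Classic P-Q product-mix exercise: rank products by throughput per
# constraint minute, load constraint B in that order, and compare the
# result with the traditional margin-based ranking.
products = {
    # name: (price $, raw material $, minutes on constraint B, weekly demand)
    "P": (90, 45, 15, 100),
    "Q": (100, 40, 30, 50),
}
CONSTRAINT_MINUTES = 2400   # one B machine, available minutes per week
OPERATING_EXPENSE = 6000    # weekly OE

def weekly_profit(priority):
    """Load constraint B in the given product order; return net profit."""
    minutes_left = CONSTRAINT_MINUTES
    profit = -OPERATING_EXPENSE
    for name in priority:
        price, material, minutes, demand = products[name]
        units = min(demand, minutes_left // minutes)
        profit += units * (price - material)   # throughput T = price - TVC
        minutes_left -= units * minutes
    return profit

# Steps 4 and 5: throughput per constraint minute decides the mix priority.
t_per_min = {n: (p - m) / mins for n, (p, m, mins, d) in products.items()}
toc_order = sorted(products, key=lambda n: -t_per_min[n])  # P: $3/min, Q: $2/min
margin_order = ["Q", "P"]   # traditional view: Q has the higher unit margin

print(toc_order, weekly_profit(toc_order))        # ['P', 'Q'] 300
print(margin_order, weekly_profit(margin_order))  # ['Q', 'P'] -300
```

With these figures, prioritizing P nets $300/week while the margin-based mix loses $300/week, which is the base-scenario punchline: traditional measures pick the "wrong" product.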
Let's say that your purchasing manager has found an alternate vendor for raw material #2 (the material common to both P and Q). If you sign an agreement such that the new vendor is the EXCLUSIVE vendor for raw material #2, that vendor can sell the material to you for 1/2 the current price (new price = $10/unit) but only in quantities of 100 per week. Now the material supply is the constraint, and again, both traditional and TIOE measures indicate that Q is the preferred product. [You end up making less money by going with the new vendor.] The point is that TIOE measures indicate either P or Q as the preferred product depending upon the constraint, but traditional measures ALWAYS indicate Q. Ed D. Walker II, Ph.D. Office (912) 681-5085 Asst. Professor of Management Fax (912) 681-0710 College of Business Administration P.O. Box 8152 Georgia Southern University Statesboro, GA 30460 +( paradigm definition Joel Barker (1992) in his book "Future Edge": A paradigm is a set of rules and regulations (written or unwritten) that does two things: 1) it establishes or defines boundaries 2) it tells you how to behave inside the boundaries in order to be successful +( paradigm shift From: "Greg Altman" Date: Mon, 20 Dec 2004 23:29:51 -0500 Subject: RE: [tocleaders] What's it take to learn new paradigms This brings to mind something I've seen from several sources over the last 18 months. Bill Dettmer's book Strategic Navigation, several books on Neuro-Linguistic Programming (NLP) and Donald Rumsfeld make reference to the Johari Window (not all necessarily refer to it by that name) in describing how we learn.
Paraphrasing this concept, there is a two-dimensional array concerning understanding and awareness:

                          Awareness
                      Conscious   Unconscious
Understanding             3            4
Not understanding         2            1

In learning we tend to go from 1 (unconscious / not understanding: ignorance) to 2 (conscious / not understanding: confusion) to 3 (conscious / understanding: competence) to 4 (unconscious / understanding: instinct / second nature). If at some point you want to change your understanding or beliefs, you've got to "regress" from 4 to 3 to 2, then go back to 3 and finally to 4 as you learn and adopt the new understanding. In shifting paradigms this seems particularly important. An existing paradigm typically has become second nature or instinctive. First we have to become aware of it. Then we have to become aware of the new concept that we don't yet understand and accept that the old paradigm is perhaps inadequate. We have to give up our sense of understanding. (These steps may, in many cases, be difficult.) We then need to get to a new level of understanding where the new knowledge has been integrated with or replaced the old understanding. Finally in adopting the new paradigm, it needs to become instinctive or second nature. Frequently these are hard steps to process, particularly without expert help. In NLP I've also seen reference to a 5th level that is important for training or leading change: consciousness of unconscious understanding. Many times you need someone to be operating at a higher (or meta) level to lead you through these changes. I think we've all had professors who had an outstanding understanding of a subject, but were poorly equipped to transfer their understanding to their students. We've also experienced people with the knack of communicating at the proper level to lead us to understanding. They've got to function at two levels: 1) understanding the subject and 2) understanding the state of the people they are dealing with.
Some of this may help in understanding why it's hard to shift paradigms and how this might be addressed more effectively. Greg -----Original Message----- From: Jim Bowles [mailto:j_m-bowles@tiscali.co.uk] Sent: Monday, December 20, 2004 8:49 AM To: tocleaders@yahoogroups.com Subject: [tocleaders] What's it take to learn new paradigms Hi Tony I think that Prasad's questions trigger many responses on this subject, just as they have done, along with Ed's, on the cmsig list. Only recently have I observed how much he is locked into the Cost World with his thinking and offerings. For me this presents a challenge in how to address the needed paradigm shift from a teaching point of view. The challenge I think is "how to get them to want to learn what we have learned". I was seeing the need in the light of a recent experience I had with riding a bicycle. I managed to get my wife to go on a bike ride after trying for several decades to do so. She agreed when I told her that they had three-wheeled machines. She did very well and rode the bike straight from the bike sheds and we covered about 7 miles without any difficulty [except when we came to a rest area and I asked if she wanted to sit down at one of the picnic tables. She said, "No but I do want to stand up!"] I then tried to ride the trike; it was very difficult, it went everywhere but where I wanted to go. I couldn't ride it in a straight line. It turns out that having learned how to balance a bike, the brain and body try to do the same thing even when given the self-balancing machine. My question is this: Can the TOC paradigms be likened to having to learn how not to use something that has become intuitive, such as "balance"? It would have taken quite some time with the new machine to learn how to switch off that intuitive ability; it was far easier to get back on my bike and ride back to base.
Now on a different tack, maybe you would like to comment on some questions I have managed to verbalise this week.

To whom it may be of interest. Over the past few days I have been putting into words a concern that I have regarding simulation. I accept that such methods have a place and that they do help people prepare themselves prior to implementation. This is what I have written:

PS I had a conversation with Karl Buckridge; he managed to help me verbalise something that has bugged me for some time. When we talk about CCPM we speak of managing uncertainty. But what is this thing called uncertainty? At one level we have statistical variation in how long it takes to get the task completed. At another we have statistical variation in the estimates of the tasks and their duration. But commonly in projects we have something else: uncertainty in what we have to do to get the result we are looking for. Many times we have to do something before we can even start to see what it is that we have to do or how long it may take. Fortunately, we can apply statistical principles to "guesstimates" and by so doing improve the results we obtain. To me this is about accepting the world of "probability", not "certainty" = "deterministic". During my conversation with Karl we recalled a time that we spent with Oded. We were using the TOC production simulator. We were using the buffer management screen. And we were having a hell of a time interpreting what the information was telling us. We were going round and round in circles, getting nowhere. We kept running the simulation over and over again. But we couldn't get it. Oded eventually came and turned the computer off. He simply said: if you continue to repeat the programme, it just becomes a game. But remember this session when you are teaching others. Did I learn the power of buffer management from that session? I don't think so, but for sure it made me want to know. I can remember the next struggle came when we had the Self Learning Kit.
It took quite some time to visualise the method from the illustrations. I didn't like what was in the book, so I used several different methods to get the illustration to work for me. The question that this raises is: how do we best teach someone a different paradigm without the use of a simulator? And for those of you who are willing to respond, how does simulation deal with my third type of uncertainty, not knowing what we don't know?

+( policy

From: "John Curran"
Subject: [cmsig] company policy
Date: Sat, 10 Mar 2001 16:29:13 -0500

Start with a cage containing five monkeys. In the cage, hang a banana on a string and put a set of stairs under it. Before long, a monkey will go to the stairs and start to climb towards the banana. As soon as he touches the stairs, spray all of the monkeys with cold water. After a while, another monkey makes an attempt with the same result: all the monkeys are sprayed with cold water. Pretty soon, when another monkey tries to climb the stairs, the other monkeys will try to prevent it. Now turn off the cold water. Remove one monkey from the cage and replace it with a new one. The new monkey sees the banana and wants to climb the stairs. To his horror, all of the other monkeys attack him. After another attempt and attack, he knows that if he tries to climb the stairs, he will be assaulted. Next, remove another of the original five monkeys and replace it with a new one. The newcomer goes to the stairs and is attacked. The previous newcomer takes part in the punishment with enthusiasm. Again, replace a third original monkey with a new one. The new one makes it to the stairs and is attacked as well. Two of the four monkeys that beat him have no idea why they were not permitted to climb the stairs, or why they are participating in the beating of the newest monkey. After replacing the fourth and fifth original monkeys, all the monkeys which have been sprayed with cold water have been replaced.
Nevertheless, no monkey ever again approaches the stairs. Why not? Because that's the way it's always been around here. And that's how company policy begins....

+( Prioritization Algorithms

From: "Jim Fuller"
Subject: Re: [cmsig] Prioritization Algorithms
Date: Tue, 5 Apr 2005 07:48:52 -0500

I've had trouble explaining my thoughts on this before, so I'll leave out the analogies. I see three possible scenarios.

Scenario 1 - The company has enough capacity that if/when a seasonal peak occurs, there is no lateness or other ill effect on customer service.
Scenario 2 - The company has less capacity than needed during a seasonal peak, but enough capacity over a longer term (say, a year).
Scenario 3 - The company has less capacity than needed at all times.

I see a different inventory strategy in each scenario.

Scenario 1 - The company can choose to build to order, because inventory is not necessary.
Scenario 2 - The company must build some or all products to stock to avoid customer service issues during the peak.
Scenario 3 - The company has no capacity to build inventory.

We are talking about a company in Scenario 2 for this discussion. If I read you right, you suggest a strategy that would build some quantity of all products to stock to some level. I would differ only in that I would build to stock only some of the products. The reason is that in my experience, there are always some products with less predictable demand than others. If I build something that doesn't sell in the next peak, I've wasted current capacity producing it. I want to avoid some capacity needs during the peak by building to stock during the valley. So I will build only the products that have a highly predictable level of demand. That means the probability that my inventory will free up capacity during the peak will be as high as possible.
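The build-most-predictable-first idea can be sketched in a few lines of Python. This is a minimal illustration, not from the original post: the SKU names, demand histories, capacity figure, and the use of the coefficient of variation as the predictability measure are all assumptions made for the sketch.

```python
# Illustrative sketch: fill valley capacity with the most predictable SKUs
# first, capping each SKU at its expected peak sales (the rule of thumb
# discussed in the thread). All data below is hypothetical.
from statistics import mean, stdev

def coefficient_of_variation(history):
    """Lower CV = more predictable demand (assumed predictability proxy)."""
    return stdev(history) / mean(history)

def build_to_stock_plan(skus, valley_capacity):
    """Rank SKUs by predictability, then allocate valley capacity in order,
    never building more of a SKU than its expected peak sales."""
    ranked = sorted(skus, key=lambda s: coefficient_of_variation(s["history"]))
    plan, remaining = {}, valley_capacity
    for sku in ranked:
        if remaining <= 0:
            break
        qty = min(round(mean(sku["history"])), remaining)  # cap at expected sales
        plan[sku["name"]] = qty
        remaining -= qty
    return plan

skus = [
    {"name": "A", "history": [100, 105, 98, 102]},  # very stable demand
    {"name": "B", "history": [80, 120, 60, 140]},   # volatile demand
    {"name": "C", "history": [50, 52, 49, 51]},     # very stable demand
]
print(build_to_stock_plan(skus, valley_capacity=160))
# → {'C': 50, 'A': 101, 'B': 9}
```

The stable SKUs (C, A) are built to their full expected peak sales; the volatile SKU (B) only soaks up whatever valley capacity is left, which mirrors the "next most highly predictable products, and so on" ordering described in the post.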
If I still need more inventory after making the highly predictable products, I'll go to the next most highly predictable products, and so on. I agree that there needs to be a limit on how much I build during the valley. I've always used the rule of thumb that the limit is equal to expected sales during the peak, since I'm building the most predictable products. If I build less predictable products, I build less than the expected sales, as you suggest. We sure don't want a year's supply of pet rocks.

----- Original Message -----
From: "Potter, Brian (James B.)"
To: "Constraints Management SIG"
Sent: Monday, April 04, 2005 2:34 PM
Subject: [cmsig] Prioritization Algorithms

You wrote, "Then you would build inventory in the gaps left from the due date sorting." I assume that (before each expected demand peak) you would build FGI to levels LESS than expected peak sales (say, expected peak sales minus one sigma to expected peak sales minus three sigma, depending upon how large the variation in your projections is and how much the peak will exceed your short-run capacity) for all SKUs, so that production during the peaks can absorb the difference between actual peak demand and FGI. Thus, the post-peak period will not leave you with a ten-year "Pet Rock" FGI if demand for some SKUs comes in surprisingly lower than expected.

+( questions/socratic method

The Socratic Method: Teaching by Asking Instead of by Telling
by Rick Garlikov

The following is a transcript of a teaching experiment, using the Socratic method, with a regular third grade class in a suburban elementary school. I present my perspective and views on the session, and on the Socratic method as a teaching tool, following the transcript. The class was conducted on a Friday afternoon beginning at 1:30, late in May, with about two weeks left in the school year. This time was purposely chosen as one of the most difficult times to entice and hold these children's concentration about a somewhat complex intellectual matter.
The point was to demonstrate the power of the Socratic method for both teaching and for getting students involved and excited about the material being taught. There were 22 students in the class. I was told ahead of time by two different teachers (not the classroom teacher) that only a couple of students would be able to understand and follow what I would be presenting. When the class period ended, the classroom teacher and I believed that at least 19 of the 22 students had fully and excitedly participated and absorbed the entire material. The three other students' eyes were glazed over from the very beginning, and they did not seem to be involved in the class at all. The students' answers below are in capital letters. The experiment was to see whether I could teach these students binary arithmetic (arithmetic using only two numbers, 0 and 1) only by asking them questions. None of them had been introduced to binary arithmetic before. Though the ostensible subject matter was binary arithmetic, my primary interest was to give a demonstration to the teacher of the power and benefit of the Socratic method where it is applicable. That is my interest here as well. I chose binary arithmetic as the vehicle for that because it is something very difficult for children, or anyone, to understand when it is taught normally; and I believe that a demonstration of a method that can teach such a difficult subject easily to children and also capture their enthusiasm about that subject is a very convincing demonstration of the value of the method. (As you will see below, understanding binary arithmetic is also about understanding "place-value" in general. For those who seek a much more detailed explanation about place-value, visit the long paper on The Concept and Teaching of Place-Value.)
This was to be the Socratic method in what I consider its purest form, where questions (and only questions) are used to arouse curiosity and at the same time serve as a logical, incremental, step-wise guide that enables students to figure out a complex topic or issue with their own thinking and insights. In a less pure form, which is normally the way it occurs, students tend to get stuck at some point and need a teacher's explanation of some aspect, or the teacher gets stuck and cannot figure out a question that will get the kind of answer or point desired, or it just becomes more efficient to "tell" what you want to get across. If "telling" does occur, hopefully by that time the students have been aroused by the questions to a state of curious receptivity to absorb an explanation that might otherwise have been meaningless to them. Many of the questions are decided before the class; but depending on what answers are given, some questions have to be thought up extemporaneously. Sometimes this is very difficult to do, depending on how far from what is anticipated or expected some of the students' answers are. This particular attempt went better than my best possible expectation, and I had much higher expectations than any of the teachers I discussed it with prior to doing it. I had one prior relationship with this class. About two weeks earlier I had shown three of the third grade classes together how to throw a boomerang and had let each student try it once. They had really enjoyed that. One girl and one boy from the 65 to 70 students had each actually caught their returning boomerang on their throws. That seemed to add to everyone's enjoyment. I had therefore already established a certain rapport with the students, rapport being something that I feel is important for getting them to comfortably and enthusiastically participate in an intellectually uninhibited manner in class, without being psychologically paralyzed by fear of "messing up".
When I got to the classroom for the binary math experiment, students were giving reports on famous people and were dressed up like the people they were describing. The student I came in on was reporting on John Glenn, but he had not mentioned the dramatic and scary problem of that first American trip in orbit. I asked whether anyone knew what really scary thing had happened on John Glenn's flight, and whether they knew what the flight was. Many said a trip to the moon, one thought Mars. I told them it was the first full earth orbit in space for an American. Then someone remembered hearing about something wrong with the heat shield, but didn't remember what. By now they were listening intently. I explained about how a light had come on that indicated the heat shield was loose or defective and that if so, Glenn would be incinerated coming back to earth. But he could not stay up there alive forever and they had nothing to send up to get him with. The engineers finally determined, or hoped, the problem was not with the heat shield, but with the warning light. They thought it was what was defective. Glenn came down. The shield was ok; it had been just the light. They thought that was neat. "But what I am really here for today is to try an experiment with you. I am the subject of the experiment, not you. I want to see whether I can teach you a whole new kind of arithmetic only by asking you questions. I won't be allowed to tell you anything about it, just ask you things. When you think you know an answer, just call it out. You won't need to raise your hands and wait for me to call on you; that takes too long." [This took them a while to adapt to. They kept raising their hands; though after a while they simply called out the answers while raising their hands.] Here we go. 1) "How many is this?" [I held up ten fingers.] TEN 2) "Who can write that on the board?" [virtually all hands up; I toss the chalk to one kid and indicate for her to come up and do it]. 
She writes 10 3) Who can write ten another way? [They hesitate, then some hands go up. I toss the chalk to another kid.] 4) Another way? 5) Another way? 2 x 5 [inspired by the last idea] 6) That's very good, but there are lots of things that equal ten, right? [student nods agreement], so I'd rather not get into combinations that equal ten, but just things that represent or sort of mean ten. That will keep us from having a whole bunch of the same kind of thing. Anybody else? TEN 7) One more? X [Roman numeral] 8) [I point to the word "ten"]. What is this? THE WORD TEN 9) What are written words made up of? LETTERS 10) How many letters are there in the English alphabet? 26 11) How many words can you make out of them? ZILLIONS 12) [Pointing to the number "10"] What is this way of writing numbers made up of? NUMERALS 13) How many numerals are there? NINE / TEN 14) Which, nine or ten? TEN 15) Starting with zero, what are they? [They call out, I write them in the following way.] 0 1 2 3 4 5 6 7 8 9 16) How many numbers can you make out of these numerals? MEGA-ZILLIONS, INFINITE, LOTS 17) How come we have ten numerals? Could it be because we have 10 fingers? COULD BE 18) What if we were aliens with only two fingers? How many numerals might we have? 2 19) How many numbers could we write out of 2 numerals? NOT MANY / [one kid:] THERE WOULD BE A PROBLEM 20) What problem? THEY COULDN'T DO THIS [he holds up seven fingers] 21) [This strikes me as a very quick, intelligent insight I did not expect so suddenly.] But how can you do fifty-five? [he flashes five fingers for an instant and then flashes them again] 22) How does someone know that is not ten? [I am not really happy with my question here, but I don't want to get side-tracked by how to logically try to sign numbers without an established convention. I like that he sees the problem and has announced it, though he did it with fingers instead of words, which complicates the issue in a way.
When he ponders my question for a second with a "hmmm", I think he sees the problem and I move on, saying...] 23) Well, let's see what they could do. Here's the numerals you wrote down [pointing to the column from 0 to 9] for our ten numerals. If we only have two numerals and do it like this, what numerals would we have. 0, 1 24) Okay, what can we write as we count? [I write as they call out answers.] 0 ZERO 1 ONE [silence] 25) Is that it? What do we do on this planet when we run out of numerals at 9? WRITE DOWN "ONE, ZERO" 26) Why? [almost in unison] I DON'T KNOW; THAT'S JUST THE WAY YOU WRITE "TEN" 27) You have more than one numeral here and you have already used these numerals; how can you use them again? WE PUT THE 1 IN A DIFFERENT COLUMN 28) What do you call that column you put it in? TENS 29) Why do you call it that? DON'T KNOW 30) Well, what does this 1 and this 0 mean when written in these columns? 1 TEN AND NO ONES 31) But why is this a ten? Why is this [pointing] the ten's column? DON'T KNOW; IT JUST IS! 32) I'll bet there's a reason. What was the first number that needed a new column for you to be able to write it? TEN 33) Could that be why it is called the ten's column?! What is the first number that needs the next column? 100 34) And what column is that? HUNDREDS 35) After you write 19, what do you have to change to write down 20? 9 to a 0 and 1 to a 2 36) Meaning then 2 tens and no ones, right, because 2 tens are ___? TWENTY 37) First number that needs a fourth column? ONE THOUSAND 38) What column is that? THOUSANDS 39) Okay, let's go back to our two-fingered aliens arithmetic. We have 0 zero 1 one. What would we do to write "two" if we did the same thing we do over here [tens] to write the next number after you run out of numerals? START ANOTHER COLUMN 40) What should we call it? TWO'S COLUMN? 41) Right! Because the first number we need it for is ___? TWO 42) So what do we put in the two's column? How many two's are there in two? 
1 43) And how many one's extra? ZERO 44) So then two looks like this: [pointing to "10"], right? RIGHT, BUT THAT SURE LOOKS LIKE TEN. 45) No, only to you guys, because you were taught it wrong [grin] -- to the aliens it is two. They learn it that way in pre-school just as you learn to call one, zero [pointing to "10"] "ten". But it's not really ten, right? It's two -- if you only had two fingers. How long does it take a little kid in pre-school to learn to read numbers, especially numbers with more than one numeral or column? TAKES A WHILE 46) Is there anything obvious about calling "one, zero" "ten" or do you have to be taught to call it "ten" instead of "one, zero"? HAVE TO BE TAUGHT IT 47) Ok, I'm teaching you different. What is "1, 0" here? TWO 48) Hard to see it that way, though, right? RIGHT 49) Try to get used to it; the alien children do. What number comes next? THREE 50) How do we write it with our numerals? We need one "TWO" and a "ONE" [I write down 11 for them] So we have 0 zero 1 one 10 two 11 three 51) Uh oh, now we're out of numerals again. How do we get to four? START A NEW COLUMN! 52) Call it what? THE FOUR'S COLUMN 53) Call it out to me; what do I write? ONE, ZERO, ZERO [I write "100 four" under the other numbers] 54) Next? ONE, ZERO, ONE I write "101 five" 55) Now let's add one more to it to get six. But be careful. [I point to the 1 in the one's column and ask] If we add 1 to 1, we can't write "2", we can only write zero in this column, so we need to carry ____? ONE 56) And we get? ONE, ONE, ZERO 57) Why is this six? What is it made of? [I point to columns, which I had been labeling at the top with the word "one", "two", and "four" as they had called out the names of them.] a "FOUR" and a "TWO" 58) Which is ____? SIX 59) Next? Seven? ONE, ONE, ONE I write "111 seven" 60) Out of numerals again. Eight? 
NEW COLUMN; ONE, ZERO, ZERO, ZERO I write "1000 eight" [We do a couple more and I continue to write them one under the other with the word next to each number, so we have:] 0 zero 1 one 10 two 11 three 100 four 101 five 110 six 111 seven 1000 eight 1001 nine 1010 ten 61) So now, how many numbers do you think you can write with a one and a zero? MEGA-ZILLIONS ALSO/ ALL OF THEM 62) Now, let's look at something. [Point to Roman numeral X that one kid had written on the board.] Could you easily multiply Roman numerals? Like MCXVII times LXXV? NO 63) Let's see what happens if we try to multiply in alien here. Let's try two times three and you multiply just like you do in tens [in the "traditional" American style of writing out multiplication]. 10 two x 11 times three They call out the "one, zero" for just below the line, and "one, zero, zero" for just below that and so I write: 10 two x 11 times three 10 100 110 64) Ok, look on the list of numbers, up here [pointing to the "chart" where I have written down the numbers in numeral and word form] what is 110? SIX 65) And how much is two times three in real life? SIX 66) So alien arithmetic works just as well as your arithmetic, huh? LOOKS LIKE IT 67) Even easier, right, because you just have to multiply or add zeroes and ones, which is easy, right? YES! 68) There, now you know how to do it. Of course, until you get used to reading numbers this way, you need your chart, because it is hard to read something like "10011001011" in alien, right? RIGHT 69) So who uses this stuff? NOBODY/ ALIENS 70) No, I think you guys use this stuff every day. When do you use it? NO WE DON'T 71) Yes you do. Any ideas where? NO 72) [I walk over to the light switch and, pointing to it, ask:] What is this? A SWITCH 73) [I flip it off and on a few times.] How many positions does it have? TWO 74) What could you call these positions? ON AND OFF/ UP AND DOWN 75) If you were going to give them numbers what would you call them? 
ONE AND TWO/ [one student] OH!! ZERO AND ONE! [other kids then:] OH, YEAH! 76) You got that right. I am going to end my experiment part here and just tell you this last part. Computers and calculators have lots of circuits that are essentially on/off switches, where one way represents 0 and the other way, 1. Electricity can go through these switches really fast and flip them on or off, depending on the calculation you are doing. Then, at the end, it translates the strings of zeroes and ones back into numbers or letters, so we humans, who can't read long strings of zeroes and ones very well, can know what the answers are. [At this point one of the kids in the back yelled out, OH! NEEEAT!!] I don't know exactly how these circuits work; so if your teacher ever gets some electronics engineer to come in to talk to you, I want you to ask him what kind of circuit makes multiplication or alphabetical order, and so on. And I want you to invite me to sit in on the class with you. Now, I have to tell you guys, I think you were leading me on about not knowing any of this stuff. You knew it all before we started, because I didn't tell you anything about this -- which, by the way, is called "binary arithmetic", "bi" meaning two as in "bicycle". I just asked you questions and you knew all the answers. You've studied this before, haven't you? NO, WE HAVEN'T. REALLY. Then how did you do this? You must be amazing. By the way, some of you may want to try it with other sets of numerals. You might try three numerals: 0, 1, and 2. Or five numerals. Or you might even try twelve: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ~, and ^ -- see, you have to make up two new numerals to do twelve, because we are used to only ten. Then you can check your system by doing multiplication or addition, etc. Good luck. After the part about John Glenn, the whole class took only 25 minutes. Their teacher told me later that after I left the children talked about it until it was time to go home. . . . . . . .
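For anyone who wants to check the chart and the "alien" multiplication the class worked out, the arithmetic can be reproduced in a few lines of Python. This sketch is an addition of mine, not part of Garlikov's article; the function name `to_binary` is my own.

```python
# Reproduce the class's binary counting chart and the 2 x 3 multiplication.

def to_binary(n):
    """Write n using only the numerals 0 and 1, the way the class built
    new columns: the first number that needs a column names it
    (two's column, four's column, eight's column, ...)."""
    if n == 0:
        return "0"
    digits = ""
    while n > 0:
        digits = str(n % 2) + digits  # remainder goes in the current column
        n //= 2
    return digits

# The chart the class built, zero through ten:
for n in range(11):
    print(f"{to_binary(n):>4}  {n}")

# "Alien" multiplication: two times three should come out as 110 (six).
product = int(to_binary(2), 2) * int(to_binary(3), 2)
print(to_binary(product))  # → 110, which the chart shows is six
```

Running it prints the same 0/zero through 1010/ten chart written on the board, and confirms that the multiplication in question 63 lands on 110, i.e. six.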
My Views About This Whole Episode Students do not get bored or lose concentration if they are actively participating. Almost all of these children participated the whole time; often calling out in unison or one after another. If necessary, I could have asked if anyone thought some answer might be wrong, or if anyone agreed with a particular answer. You get extra mileage out of a given question that way. I did not have to do that here. Their answers were almost all immediate and very good. If necessary, you can also call on particular students; if they don't know, other students will bail them out. Calling on someone in a non-threatening way tends to activate others who might otherwise remain silent. That was not a problem with these kids. Remember, this was not a "gifted" class. It was a normal suburban third grade of whom two teachers had said only a few students would be able to understand the ideas. The topic was "twos", but I think they learned just as much about the "tens" they had been using and not really understanding. This method takes a lot of energy and concentration when you are doing it fast, the way I like to do it when beginning a new topic. A teacher cannot do this for every topic or all day long, at least not the first time one teaches particular topics this way. It takes a lot of preparation, and a lot of thought. When it goes well, as this did, it is so exciting for both the students and the teacher that it is difficult to stay at that peak and pace or to change gears or topics. When it does not go as well, it is very taxing trying to figure out what you need to modify or what you need to say. I practiced this particular sequence of questioning a little bit one time with a first grade teacher. I found a flaw in my sequence of questions. I had to figure out how to correct that. I had time to prepare this particular lesson; I am not a teacher but a volunteer; and I am not a mathematician. I came to the school just to do this topic that one period. 
I did this fast. I personally like to do new topics fast originally and then re-visit them periodically at a more leisurely pace as you get to other ideas or circumstances that apply to, or make use of, them. As you re-visit, you fine-tune. The chief benefits of this method are that it excites students' curiosity and arouses their thinking, rather than stifling it. It also makes teaching more interesting, because most of the time you learn more from the students -- or from what they make you think of -- than what you knew going into the class. Each group of students is just different enough that it makes it stimulating. It is a very efficient teaching method, because the first time through tends to cover the topic very thoroughly, in terms of their understanding it. It is more efficient for their learning than lecturing to them is, though, of course, a teacher can lecture in less time. It gives constant feedback and thus allows monitoring of the students' understanding as you go. So you know what problems and misunderstandings or lack of understanding you need to address as you are presenting the material. You do not need to wait to give a quiz or exam; the whole thing is one big quiz as you go, though a quiz whose point is teaching, not grading. Though, to repeat, this is teaching by stimulating students' thinking in certain focused areas, in order to draw ideas out of them; it is not "teaching" by pushing ideas into students that they may or may not be able to absorb or assimilate. Further, by quizzing and monitoring their understanding as you go along, you have the time and opportunity to correct misunderstandings or someone's being lost at the immediate time, not at the end of six weeks when it is usually too late to try to "go back" over the material. And in some cases their ideas will jump ahead to new material so that you can meaningfully talk about some of it "out of (your!) order" (but in an order relevant to them).
Or you can tell them you will get to exactly that in a little while, and will answer their question then. Or suggest they might want to think about it between now and then to see whether they can figure it out for themselves first. There are all kinds of options, but at least you know the material is "live" for them, which it is not always when you are lecturing or just telling them things, or when they are passively and dutifully reading or doing worksheets or listening without thinking. If you can get the right questions in the right sequence, kids across the whole intellectual spectrum in a normal class can go at about the same pace without being bored; and they can "feed off" each other's answers. Gifted kids may have additional insights they may or may not share at the time, but will tend to reflect on later. This brings up the issue of teacher expectations. From what I have read about the supposed sin of tracking, one of the main complaints is that the students who are not in the "top" group have lower expectations of themselves, and they get teachers who expect little of them and who teach them in boring ways because of it. So tracking becomes a self-fulfilling prophecy about a kid's educability; it becomes dooming. That is a problem, not with tracking as such, but with teacher expectations of students (and their ability to teach). These kids were not tracked, and yet they would never have been exposed to anything like this by most of the teachers in that school, because most felt the way the two did whose expectations I reported. Most felt the kids would not be capable enough, and certainly not in the afternoon, on a Friday, near the end of the school year. One of the problems with not tracking is that many teachers have almost as low expectations of, and plans for, students grouped heterogeneously as they do with non-high-end tracked students. The point is to try to stimulate and challenge all students as much as possible.
The Socratic method is an excellent way to do that. It works for any topics or any parts of topics that have any logical natures at all. It does not work for unrelated facts or for explaining conventions, such as the sounds of letters or the capitals of states whose capitals are more the result of historical accident than logical selection. Of course, you will notice these questions are very specific, and as logically leading as possible. That is part of the point of the method. Not just any question will do, particularly not broad, very open ended questions, like "What is arithmetic?" or "How would you design an arithmetic with only two numbers?" (or if you are trying to teach them about why tall trees do not fall over when the wind blows "what is a tree?"). Students have nothing in particular to focus on when you ask such questions, and few come up with any sort of interesting answer. And it forces the teacher to think about the logic of a topic, and how to make it most easily assimilated. In tandem with that, the teacher has to try to understand at what level the students are, and what prior knowledge they may have that will help them assimilate what the teacher wants them to learn. It emphasizes student understanding, rather than teacher presentation; student intake, interpretation, and "construction", rather than teacher output. And the point of education is that the students are helped most efficiently to learn by a teacher, not that a teacher make the finest apparent presentation, regardless of what students might be learning, or not learning. I was fortunate in this class that students already understood the difference between numbers and numerals, or I would have had to teach that by questions also. And it was an added help that they had already learned Roman numerals. 
It was also most fortunate that these students did not take very many, if any, wrong turns or have any firmly entrenched erroneous ideas that would have taken much effort to show to be mistaken. I took a shortcut in question 15 although I did not have to; but I did it because I thought their answers to questions 13 and 14 showed an understanding that "0" was a numeral, and I didn't want to spend time in this particular lesson trying to get them to see where "0" best fit with regard to order. If they had said there were only nine numerals and said they were 1-9, then you could ask how they could write ten numerically using only those nine, and they would quickly come to see they needed to add "0" to their list of numerals. These are the four critical points about the questions: 1) they must be interesting or intriguing to the students; they must lead by 2) incremental and 3) logical steps (from the students' prior knowledge or understanding) in order to be readily answered and, at some point, seen to be evidence toward a conclusion, not just individual, isolated points; and 4) they must be designed to get the student to see particular points. You are essentially trying to get students to use their own logic and therefore see, by their own reflections on your questions, either the good new ideas or the obviously erroneous ideas that are the consequences of their established ideas, knowledge, or beliefs. Therefore you have to know or to be able to find out what the students' ideas and beliefs are. You cannot ask just any question or start just anywhere. It is crucial to understand the difference between "logically" leading questions and "psychologically" leading questions. Logically leading questions require understanding of the concepts and principles involved in order to be answered correctly; psychologically leading questions can be answered by students' keying in on clues other than the logic of the content. 
Question 39 above is psychologically leading, since I did not want to cover in this lesson the concept of value-representation but just wanted to use "columnar-place" value, so I psychologically led them into saying "Start another column" rather than getting them to see the reasoning behind columnar-place as merely one form of value representation. I wanted them to see how to use columnar-place value logically without trying here to get them to totally understand its logic. (A common form of value-representation that is not "place" value is color value in poker chips, where colors determine the value of the individual chips in ways similar to how columnar place does it in writing. For example, if white chips are worth "one" unit and blue chips are worth "ten" units, 4 blue chips and 3 white chips have the same value as a "4" written in the "tens" column and a "3" written in the "ones" column, for almost the same reasons.) For the Socratic method to work as a teaching tool, and not just as a magic trick to get kids to give right answers with no real understanding, it is crucial that the important questions in the sequence be logically leading rather than psychologically leading. There is no magic formula for doing this, but one of the tests for determining whether you have likely done it is to see whether leaving out some key steps still allows people to give correct answers to things they are not likely to really understand. Further, in the case of binary numbers, I found that when you used this sequence of questions with impatient or math-phobic adults who didn't want to have to think but just wanted you to "get to the point", they could not correctly answer very far into even the above sequence. That leads me to believe that answering most of these questions correctly requires understanding of the topic rather than picking up some "external" sorts of clues in order to just guess correctly.
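The equivalence between columnar-place value and poker-chip color value can be made concrete with a small sketch. This is a hypothetical illustration of the arithmetic described above (the function names are mine, not from the original discussion): both representations are weighted sums; only the way the weight is attached to a digit or chip differs.

```python
# Two ways to represent the quantity forty-three: columnar-place value
# (a digit's position determines its weight) and color value (a chip's
# color determines its weight). Both compute the same weighted sum.

def value_from_columns(digits):
    """Digits listed left to right, e.g. [4, 3] means '43' in base ten."""
    total = 0
    for d in digits:
        total = total * 10 + d  # shift previous digits one column left
    return total

def value_from_chips(chips, worth):
    """chips: count of each color; worth: the value each color stands for."""
    return sum(count * worth[color] for color, count in chips.items())

# A "4" in the tens column and a "3" in the ones column...
print(value_from_columns([4, 3]))                   # 43
# ...is the same value as 4 blue chips (worth ten each)
# and 3 white chips (worth one each).
print(value_from_chips({"blue": 4, "white": 3},
                       {"blue": 10, "white": 1}))   # 43
```

The difference the lesson exploits is visible in the code: columns get their weight implicitly from order, while chips carry their weight explicitly by color, which is why color value is value-representation but not "place" value.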
Plus, generally when one uses the Socratic method, it tends to become pretty clear when people get lost and are either mistaken or just guessing. Their demeanor tends to change when they are guessing, and they answer with a questioning tone in their voice. Further, when they are logically understanding as they go, they tend to say out loud insights they have or reasons they have for their answers. When they are just guessing, they tend to just give short answers with almost no comment or enthusiasm. They don't tend to want to sustain the activity. Finally, two of the interesting, perhaps side, benefits of using the Socratic method are that it gives the students a chance to experience the attendant joy and excitement of discovering (often complex) ideas on their own. And it gives teachers a chance to learn how much more inventive and bright a great many more students are than they usually appear to be when they are primarily passive. --- From: "Jim Bowles" Date: Tue, 20 Jul 1999 15:06:27 +0100 Now what will you have to have in place for that to happen? Will there be just that or will there be more? Now what will you need to have to achieve these? Once it begins to take shape the questions get easier. Ideally you need someone to help probe your intuition of the project. At first I find that I cannot offer any real inputs, but as it takes shape I find it easier and easier to say things like, "Will doing that be sufficient to get from that to that?" Or, alternatively, "Will you have to do that in series, or can it be done in parallel?" If you have been a facilitator then you will understand how this works. Later, once you start to add the resources and times, it's relatively easy to see the dependencies. Using the PERT chart method is important when entering the data into MS Project. Otherwise you cannot easily validate your logical network if you enter via the Gantt chart. --- Date: Mon, 26 Jul 1999 15:21:01 -0400 From: Tony Rizzo Subject: [cmsig] RE: Let's talk about it.
You're surfacing an obstacle to successful implementation. There are four earlier steps that we have to discuss first. 1) Do you agree with the problem? 2) Do you agree with the direction of the solution? 3) Do you agree that the solution, if implemented, solves the problem? 4) Do you see any real negatives, if the solution is implemented? (= the first 4 of the 5 steps of overcoming resistance). If the answers to these four questions are "yes, yes, yes, and yes," then it's time to talk about the obstacles to implementation. --- Date: Fri, 16 Jun 2000 07:04:33 -0500 From: "Piénsalo Colombia Ltda." Clarke: There is a process for knowledge acquisition. 1. I do not know that I do not know 2. I know that I do not know 3. I know what I know 4. I do not know what I know For each step there are questions and answers. Wisdom means being at step 4. In order to move from step 2 to step 3 you need to learn. In order to recognize step 2 from step 1 you need to be honest and humble. There is another issue: how to guide someone else, through questions, to find a truth. I think it depends on the level of resistance, and for each level there are questions. Alejandro --- Date: Fri, 16 Jun 2000 07:58:09 -0400 From: "MARK FOUNTAIN" The Socratic method is good, but from the sounds of your discussion you may want something a little more specific. A couple of tools for getting to the root issues are the CRT and FRT mentioned in this forum. I have also found TRIZ particularly helpful for problem solving; try p. 56-58 of Systematic Innovation http://www.amazon.com/exec/obidos/ASIN/1574441116/qid=961156621/sr=1-2/002-0357230-2671204 which can be adopted as a general model. --- From: "Richard E. Zultner" Subject: [cmsig] RE: Dean L. Gano, the Apollo method, and efficient improvement with ToC Date: Fri, 29 Dec 2000 11:58:23 -0500 Jon Noble wrote: ...Dean Gano has described what he calls the "Cause and Effect Principle".
There are four principles to his idea, but the one I wanted to discuss is the following, "Each effect has at least two causes in the form of actions and conditions". ... REZ> Is this a 'principle' (supported by some underlying reasoning) or merely an observation? For example, in the case of radioactivity, what are the two causes? A piece of radioactive material can be radioactive for centuries, even floating in deep space. (Or perhaps radiation isn't an effect?) REZ>His web site describes the Apollo method of root cause analysis http://www.apollo-as.com/methods.htm but not with the detail to answer this question. His book is described at http://www.swbooks.com/books/book-rcan.htm where it mentions, "This is a life-changing book that will enable you and your organization to effectively communicate without the usual conflicts." And "the fact that every time we ask -why?- we should find at least two causes". Even if there was some "principle" telling us that there are at least two causes, would that really help us? Wouldn't people just invent the "missing" cause? (I've seen people invent the most amazing "causes" and present them with straight faces... and no supporting facts!) What we need is not just a required number of plausible causes, but correct causes. And if we intend to act on the effect, then we need to change a sufficient number of causes with the strength (or "magnitude of effect" to be precise) to produce the desired results in the effect. And for practical improvement purposes, the challenge is to do so efficiently: how do we get the improvement in results we want, with the least investment of effort? Does Dean L. Gano offer any new insights into this question? Can anyone tell me if my $19.95 for his book would be well spent? In Theory of Constraints we do efficient improvement NOT by focusing on causes, but by focusing on THE constraint -- that which most limits the performance of the system. And the constraint does not have to be a cause, does it? 
P.S. Would you agree that the constraint of a system is usually NOT a cause of the effect? --- Date: Fri, 29 Dec 2000 16:37:51 -0500 From: Frank Patrick At 11:58 AM -0500 12/29/00, Richard E. Zultner wrote: >Would you agree that the constraint of a system is usually NOT a cause >of the effect? No. I would not agree. The existence of a particular constraint IS usually ONE OF the causes related to most UDEs felt by the organization. The nature of a particular type of constraint will definitely have an impact on the types of UDEs faced and therefore on what "improvement" means. As a cause, the constraint is typically accompanied by the response to it -- the manner in which it is being managed (or mismanaged). The policies and practices that result from its existence and the resulting dilemmas (and UDEs) that ensue when that responding management approach comes in conflict with various necessary conditions of success of the organization will connect with the constraint as a cause of the UDEs. Where I think Richard may be going with the question, however, is that the existence of the constraint is probably not the lowest level cause. The perpetuation of that existence is usually driven by deeper conflicts of necessary conditions, which in turn are kept alive by questionable assumptions. The constraint might actually be understood to be the combination of the actual limitation on throughput and the systemic conflict that perpetuates it. Erroneous assumptions are the cause of systemic conflicts, which are the cause of the perpetuation of a particular constraint, which cause some policy responses, which cause dilemmas and UDEs. The constraint is part of the chain/tree of causes that lead to the need for improvement -- the UDEs with which the system suffers. (As an aside, I once wrote briefly about the fractal nature of a constraint and its conflict -- on how the nature of the constraint flavors the whole system. 
The piece can be found at , but probably needs updating to reflect the idea of the generic conflict, which came together at about the same time I wrote it. The self-similarity of the conflict associated with the constraint with the conflicts associated with the UDEs is a far better fractal metaphor.) +( resistance to change NOKIA CEO assessment in 2011 The letter from Nokia CEO to the employees!!! Hello there, There is a pertinent story about a man who was working on an oil platform in the North Sea. He woke up one night from a loud explosion, which suddenly set his entire oil platform on fire. In mere moments, he was surrounded by flames. Through the smoke and heat, he barely made his way out of the chaos to the platform's edge. When he looked down over the edge, all he could see were the dark, cold, foreboding Atlantic waters. As the fire approached him, the man had mere seconds to react. He could stand on the platform, and inevitably be consumed by the burning flames. Or, he could plunge 30 meters into the freezing waters. The man was standing upon a "burning platform," and he needed to make a choice. He decided to jump. It was unexpected. In ordinary circumstances, the man would never consider plunging into icy waters. But these were not ordinary times - his platform was on fire. The man survived the fall and the waters. After he was rescued, he noted that a "burning platform" caused a radical change in his behaviour. We, too, are standing on a "burning platform," and we must decide how we are going to change our behaviour. Over the past few months, I've shared with you what I've heard from our shareholders, operators, developers, suppliers and from you. Today, I'm going to share what I've learned and what I have come to believe. I have learned that we are standing on a burning platform. And, we have more than one explosion - we have multiple points of scorching heat that are fuelling a blazing fire around us.
For example, there is intense heat coming from our competitors, more rapidly than we ever expected. Apple disrupted the market by redefining the smartphone and attracting developers to a closed, but very powerful ecosystem. In 2008, Apple's market share in the $300+ price range was 25 percent; by 2010 it escalated to 61 percent. They are enjoying a tremendous growth trajectory with a 78 percent earnings growth year over year in Q4 2010. Apple demonstrated that if designed well, consumers would buy a high-priced phone with a great experience and developers would build applications. They changed the game, and today, Apple owns the high-end range. And then, there is Android. In about two years, Android created a platform that attracts application developers, service providers and hardware manufacturers. Android came in at the high-end, they are now winning the mid-range, and quickly they are going downstream to phones under €100. Google has become a gravitational force, drawing much of the industry's innovation to its core. Let's not forget about the low-end price range. In 2008, MediaTek supplied complete reference designs for phone chipsets, which enabled manufacturers in the Shenzhen region of China to produce phones at an unbelievable pace. By some accounts, this ecosystem now produces more than one third of the phones sold globally - taking share from us in emerging markets. While competitors poured flames on our market share, what happened at Nokia? We fell behind, we missed big trends, and we lost time. At that time, we thought we were making the right decisions; but, with the benefit of hindsight, we now find ourselves years behind. The first iPhone shipped in 2007, and we still don't have a product that is close to their experience. Android came on the scene just over 2 years ago, and this week they took our leadership position in smartphone volumes. Unbelievable.
We have some brilliant sources of innovation inside Nokia, but we are not bringing it to market fast enough. We thought MeeGo would be a platform for winning high-end smartphones. However, at this rate, by the end of 2011, we might have only one MeeGo product in the market. At the midrange, we have Symbian. It has proven to be non-competitive in leading markets like North America. Additionally, Symbian is proving to be an increasingly difficult environment in which to develop to meet the continuously expanding consumer requirements, leading to slowness in product development and also creating a disadvantage when we seek to take advantage of new hardware platforms. As a result, if we continue like before, we will get further and further behind, while our competitors advance further and further ahead. At the lower-end price range, Chinese OEMs are cranking out a device much faster than, as one Nokia employee said only partially in jest, "the time that it takes us to polish a PowerPoint presentation." They are fast, they are cheap, and they are challenging us. And the truly perplexing aspect is that we're not even fighting with the right weapons. We are still too often trying to approach each price range on a device-to-device basis. The battle of devices has now become a war of ecosystems, where ecosystems include not only the hardware and software of the device, but developers, applications, ecommerce, advertising, search, social applications, location-based services, unified communications and many other things. Our competitors aren't taking our market share with devices; they are taking our market share with an entire ecosystem. This means we're going to have to decide how we either build, catalyse or join an ecosystem. This is one of the decisions we need to make. In the meantime, we've lost market share, we've lost mind share and we've lost time. 
On Tuesday, Standard & Poor's informed that they will put our A long term and A-1 short term ratings on negative credit watch. This is a similar rating action to the one that Moody's took last week. Basically it means that during the next few weeks they will make an analysis of Nokia, and decide on a possible credit rating downgrade. Why are these credit agencies contemplating these changes? Because they are concerned about our competitiveness. Consumer preference for Nokia declined worldwide. In the UK, our brand preference has slipped to 20 percent, which is 8 percent lower than last year. That means only 1 out of 5 people in the UK prefer Nokia to other brands. It's also down in the other markets, which are traditionally our strongholds: Russia, Germany, Indonesia, UAE, and on and on and on. How did we get to this point? Why did we fall behind when the world around us evolved? This is what I have been trying to understand. I believe at least some of it has been due to our attitude inside Nokia. We poured gasoline on our own burning platform. I believe we have lacked accountability and leadership to align and direct the company through these disruptive times. We had a series of misses. We haven't been delivering innovation fast enough. We're not collaborating internally. Nokia, our platform is burning. We are working on a path forward -- a path to rebuild our market leadership. When we share the new strategy on February 11, it will be a huge effort to transform our company. But, I believe that together, we can face the challenges ahead of us. Together, we can choose to define our future. The burning platform, upon which the man found himself, caused the man to shift his behaviour, and take a bold and brave step into an uncertain future. He was able to tell his story. Now, we have a great opportunity to do the same. Stephen. 
--- see also in satelite.db How does one change the mind of a supervisor, for example, that has been with the company for over 45 years, has 2 years left to work, and will only say "It won't work!!!!". Here are a couple of real-world solutions, remembering that free advice is worth every penny you paid for it: 1. You may have to wait 2 years. 2. Offer this person early retirement; the payback could be large. 3. Shunt this person off into a "strategic planning" function and get on with going lean. 4. Sit down with this person and show him/her that this attitude is putting all the other people in the plant at risk by not allowing the company to be more competitive. 5. Make the risk of maintaining the status quo greater than the risk of being a change agent. People have been known to lose some or all of their retirement by not being a team player. 6. Finally, if you cannot effect change, maybe this is the wrong place for you to be working. ----------------------- From: KBurns185@aol.com Date: Fri, 2 Jul 2004 12:39:43 EDT Subject: [cmsig] RE: Progress Payments In a message dated 7/2/2004 11:20:03 AM Central Standard Time, cesario.n.cirunay@boeing.com writes: My suggestion was just presented to have some common ground to take off. RC: Remember Newton's Laws of Motion? 1. A body will remain at rest or keep moving in a straight line at constant speed unless acted upon by a force. 2. The rate of change of velocity of a body is directly proportional to the force acting upon it. 3. The action and reaction of two bodies on each other are always equal and opposite. It requires a very small adjustment of language, and absolutely no leap of intuition, to make them apply to the phenomenon of change in organizations. Law #1. An organization's behavior will not change unless acted on by an outside force. Law #2. The amount of behavioral change will be directly proportional to the amount of effort put into it. Law #3.
The resistance of an organization to change will be equal and opposite to the amount of effort put into changing it. "Unfortunately, we're back to people again. Since the visible actions of senior managers are perceived as the embodiment of the organization's management paradigm, change will, at best, be very slow if there are not significant changes in their ranks. Even the most trusting believe you can't teach many old dogs new tricks. Consequently who goes, and who replaces them, is the single most significant set of "events" in determining the amount, direction and speed of cultural change." Mike Davidson, Change: The Challenge of Transformation, page 126. Adding my own quote... You either change the people, or you change the people. --- Date: Fri, 26 Jan 2001 10:04:09 -0500 From: Tony Rizzo Layer 1: "Stop wasting my time. We've got real problems." Countermeasure: Get him to acknowledge the symptoms. Tie the symptoms to the root cause(s). Logic works at times. Simulations work more consistently, so long as they are valid and are believed to be valid. Evidence of success: The head nods repeatedly in agreement with you. Layer 2: "That's just another flavor of the month. It's a waste of time." Countermeasure: Social proof. Show them how similar organizations with similar problems found their solutions along the lines that you suggest. Evidence of success: "OK! Let's take a look." Layer 3: "No, that may have worked for them. But it won't solve our problems. We're different." Countermeasure: Use a simulation to make them live both their current situation, the transition period with intermediate policies, and the solution. Evidence of success: They begin talking about who else in the organization "...has to see this." Layer 4: "I can't do that." Interpretation: "You're telling me to jump on my sword." Countermeasure: Find out what that sword looks like, and transform it into a comfortable sofa. Evidence of success: "No problem!"
Layer 5: "We'll never be able to roll this out without his/her/their buy-in." Countermeasure: Create a plan with them, designed to get the required buy-in. Evidence of success: "How soon can you be available for the meeting?" Layer 6: "This is a big change. What if it doesn't work?" Countermeasure: "What are you doing now that's working better?" Countermeasure: "Complete failure means that you end up doing that which you are already doing. Where's the risk?" Countermeasure: "See those folks who look just like you? Look at what they've achieved already!" Evidence of success: You have funding for your program. --- From: "Philip Bakker" Subject: [cmsig] Nine layers of resistance to change Date: Sun, 28 Jan 2001 19:24:24 +0100 The layers of resistance to change have expanded from 5 to 9 layers, especially because of the work of Efrat Goldratt. In August 2000 I heard a speech from Rami Goldratt, during a conference for TOC for Education (www.tocforeducation.com), about these improved layers of resistance to change. On my computer I also found a text about these 9 layers. Although I'm not sure about the origin and status of this text, I think it's a fairly good summary of Rami Goldratt's presentation. The summary is aimed at people with knowledge of the thinking tools. -------------------------------- Resistance to change In order to achieve POOGI (a Process Of On-Going Improvement) we have to learn how to deal with resistance to change. Overcoming resistance to change requires an effective communication process. In this communication process two things are important: 1) Background conditions - enough time - no chronic conflict between the presenter and his/her audience - other psychological issues ... 2) A good persuasion process Many of the models about the communication process emphasize the background conditions. Without underestimating these, there are also important issues regarding the persuasion process that are neglected by many.
When you don't consider the persuasion process, many times you will run into 'ping-pong' conversations. People will change their objections and it will be extremely hard to make any real progress. The problem is that even when you answer one objection you do not make any progress, since they keep jumping to another objection. How can we have focus during this process of overcoming resistance to change? You can have it when you build the persuasion process according to a model called "Layers of resistance to change". Resistance to change has some aspects that are rational. Up to now nine layers of resistance have been distinguished. (Much of this work is done by Efrat Goldratt, based on the original 5 layers of Eli Goldratt.) This model is like an onion, because you have to peel each layer before you can peel another. (Some of you might know the thinking tool "Prerequisite Tree". Think of overcoming the layers by achieving the intermediate objectives (I.O.), in which the layers of resistance are the obstacles.) If the obstacle exists (this means a person has a certain layer of resistance), you must overcome it before you can overcome other ones. The layers are: 1. There is no problem 2. I think the problem is different 3. The problem is not under my control 4. I have a different direction for a solution 5. The solution does not address the whole problem 6. Yes, but the solution has negative outcomes 7. Yes, but the solution cannot be implemented 8. It is not exactly clear how to implement the solution 9. Undefined / fear A person doesn't have to have all the layers. But if s/he has a layer, and this layer is prior to other ones, you have to deal with that particular layer before continuing the persuasion process by trying to overcome the other layers. The problem is that it is very hard to know ahead of time what layers of resistance your audience has.
More than that, even when a person raises an objection, and it fits a certain layer of resistance, it doesn't mean that he or she doesn't have prior layers (with which you have to deal first). For example, even when the first objection a person raises is that there is not enough money to implement the change (layer 7), it doesn't mean that he agrees on what is the problem that should be addressed (layer 2). This means that almost any persuasion process should be directed according to the sequence of all the layers of resistance to change. Here is a summary of each layer: Layer 1: There is no problem Common remarks: - There is no problem. - The current situation is good enough. This layer can exist when there are different conceptions of the goal. In these situations one person can think that there is a problem and that the system does not achieve its goal, while another person, with a different perception of the goal, might think that there is no problem and that the system does in fact achieve its goal. Another situation that may cause this layer is when the person who has this layer has a fear of finger-pointing. In other words, he or she might think: "If I will not be blamed for the problem, then there is no problem". How do we overcome this first layer? Step 1: Getting an agreement that there is a problem. We need to get agreement: - that there is a problem - that the current situation is not good enough And we have to do that without finger-pointing at the audience. In more detail: - Show Undesired Effects (UDEs) - Use a cloud for getting a UDE without finger-pointing = Make a UDE-cloud Layer 2: I think the problem is different This layer often exists when you have to persuade a group of people. Each person, because of his/her personality and position in the system, might emphasize a different problem as the main problem that should be addressed. How do we overcome this second layer? Step 2: Getting an agreement on the problem.
A good tool would be the three-cloud approach. This means taking three different problems from three disciplines or areas of the organization at hand and turning each into a cloud. After creating three clouds, these clouds can be transformed into a generic cloud which comprises all three separate conflict clouds. Layer 3: The problem is not under my control How do we overcome this third layer? Step 3: Getting an agreement that we can impact the problem. In order to achieve this we have to: 1) Clarify the problem 2) Surface assumptions of the generic cloud Each person has influence, because some of his/her assumptions are not correct. Clarifying is important since often we have different perceptions of a conflict, even when we use the same words. Also, we act upon beliefs which are so normal to us that we don't consider them anymore. Since many times we are not aware of our assumptions, we have to state them and clarify them before we can proceed with the next step. Layer 4: I have a different direction for a solution How do we overcome this fourth layer? Step 4: Getting an agreement on a direction for the solution. In order to achieve this we have to: 1) Understand the history of compromises on the conflict arrow 2) Understand how we are polishing up the existing compromises 3) Make the paradigm shift from compromising solutions to win-win solutions Layer 5: The solution does not address the whole problem How do we overcome this fifth layer? Step 5: Getting an agreement that the solution will overcome the problem In order to achieve this we have to: 1) Build a Future Reality Tree (FRT), to see whether our injection will deliver the desired effects; or 2) Show how the direction of a solution is composed of specific injections to all the UDE clouds Layer 6: Yes, but the solution has negative outcomes How do we overcome this sixth layer?
Step 6: Getting an agreement that the solution will not have serious negative outcomes In order to achieve this we have to: 1) Trim the negative branches Sometimes we should try to invoke this layer in order to improve our idea. More than that, we should really try to make the people who raise the negative branches come up with the changes needed to improve the idea. This way we make them "part of the solution". Layer 7: Yes, but the solution cannot be implemented How do we overcome this seventh layer? Step 7: Getting an agreement on a strategic plan, with intermediate objectives to implement the solution 1) Build a Prerequisite Tree (PRT) Layer 8: It is not exactly clear how to implement the solution How do we overcome this eighth layer? Step 8: Getting an agreement on the tactics needed to implement the strategic plan 1) Build a Transition Tree (TT) that specifies who is responsible for the implementation of each part of the solution 2) Schedule the time to perform each task (preferably based on a critical chain schedule) Layer 9: Undefined / fear How do we overcome this ninth layer? Step 9: Overcome the fear of uncertainty A suggestion: do whatever you feel is necessary to surface the hidden obstacles in order to achieve full buy-in. --- from Richard Zultner 9 Layers of Resistance: WHAT to Change? (What is the Problem?) 1. "I don't have that problem" We agree on the goal (implement CC), we agree the current situation is not good enough, and we agree the problem of resistance exists. So do we also agree that resistance is NOT the resisters' fault? That they are NOT stupid? [Perhaps just ignorant?] Where are the single-UDE clouds for specific problem(s) of resistance? 2. "My problem is different" We agree their problem is caused by a generic conflict -- but what IS that conflict? Could there be several conflicts we must address? Perhaps the local vs. global one (for CC itself) and one for formal project management (for those who don't currently)?
What IS the generic conflict behind the resistance? [Is there only one kind of resistance?] 3. "The problem is not under my control" This is a new layer, which emerged in working with students. Students often argue that their teachers and parents control their environment, so there is nothing they can do to change things, so there is no point in even discussing such changes. [Clarity on the assumptions of the generic cloud is critical for this layer.] Do we agree we can impact the problem of resistance? What are the assumptions behind the generic cloud of Resistance? WHAT TO Change TO? (What is the Solution?) 4. "I have a different direction for a solution" This is where I hear many voices: Should we try the Layers of Resistance? SPIN selling? Something else? What direction should we go? [If we had an agreed-on and annotated cloud, we could test directions with it...] What is the best way to deal with resistance? Can we remove the UDEs we now face without compromising CC? [And what represents an "unacceptable" compromise of CC?] 5. "The solution does not address the whole problem" I'm hearing that there are still some undesirable effects with the Layers of Resistance, as applied. What are they, precisely? What is the model missing? What needs to be added? 6. "The solution has negative outcomes" So far, I have NOT heard that there are negative outcomes from using the Layers of Resistance. Are there? HOW TO Change? (How to implement the Solution?) 7. "There are obstacles to implementing the solution" OK, so people are not going ahead with CC implementations as often as we would like. What's blocking them? Obstacles? 8. "I'm not clear how to implement the solution" Or lack of a clear plan? Is there a clear implementation plan? Does it have sufficient detail to satisfy those asked to implement it? Do they think anything is missing? 9. "Now we have to change what we're used to..." Is there a solid business case to management, to implement CC?
Is there management agreement to implement the CC implementation plan? Are there names and dates on the CC implementation plan?

---

From: "Jim Bowles"
Subject: [cmsig] Re: Some thoughts on TOC implementation
Date: Tue, 17 Apr 2001 10:59:55 +0100

You point out that there is a problem in changing the behaviour of bosses (to get them to adopt TOC). I'll start by addressing your 4 UDEs. (Using A. Does it exist, B. Is it negative, C. Do I care about it.)

1) Bosses are behaving in remarkably similar ways no matter where they are found. (A, B, C)
2) Efforts to go through people's/bosses' minds to get them to change their behaviour seem to be very difficult - some seem to think it is a failure, and certainly it seems to take an inordinate amount of time. (A, B, C)
3) There seems to be little understanding of how systems behave at whatever level one looks in an organisation. (A, B, C)
4) AGI/schools/teachers (except in grammar school) have historically used the method of working on people's heads in hopes of changing their behaviour. (I need more clarity on this, I'm not sure that is what we do. In my experience building a CRT can also be a very deep emotional experience.)

Now for the three questions:
1. WHAT TO CHANGE? "We must seek to change behaviour (our own first) in order to get to our own/other people's thinking." (From what to what?)
2. WHAT TO CHANGE TO? "We perhaps may not be taking full advantage of an alternative viewpoint." (From what to what?)
3. HOW TO CAUSE THE CHANGE?

---

To: nwlean@yahoogroups.com
From: Michel Baudin
Date: Sat, 02 Jun 2001 16:13:36 -0700
Subject: NWLM: Re: Investigating Lean

In your message, you are making several assumptions that may not be true:
1. You seem to think that you need a consensus inside the management team in order to get started.
2. You also seem to think that you can convince the opposition by words and data.
In most organizations, neither of these is true. What you need to get started is the following:
1.
The backing of top management, which you seem to have.
2. One production supervisor who sees this as the great opportunity that it is.
3. A couple of engineers eager to be involved. If the company is small, this may be you.

The engineers can be trained to do the technical work. The supervisor can provide the leadership on the shop floor, particularly concerning the involvement of operators, and top management can clear the road blocks set up by the opposition.

The reason you can't achieve consensus is that the opposition is not based on a rational assessment of the subject. A telltale sign of this is its immediate expression. If you can barely finish a sentence before your counterparts start listing 10 reasons why it won't work, as usually occurs, it's a knee-jerk reaction, based on fear, not facts and data, which is why reasoning won't affect it. You can argue forever, give them classroom training, and take them on factory tours; it will waste your time and accomplish nothing.

If your organization is like most, your opposition is comprised of a majority of fence sitters and a minority of antagonists. Your first priority should be to get something implemented successfully as soon as possible. Work with the people who want to work with you and stay away from the others until you are ready to face them with accomplishments, not words. The fence sitters will get off the fence and join you when they see that what you do works for the company and helps the careers of those involved. The antagonists then become an isolated minority, whose members eventually either defect to your side or leave. This is the outcome you want, but getting it will probably be for you the most difficult part of the whole implementation process.

---

Date: Tue, 05 Jun 2001 21:12:06 +0000
Subject: [cmsig] A Better Way
From: Rudi Burkhard

I came across the following through a Mac-mad friend in relation to a Mac's superiority to do things. It seems to apply to many of us selling TOC too!
****************************************************************************

A BETTER WAY

As I was walking down the street the other day, I noticed a man working on his house. He seemed to be having a lot of trouble. As I came closer, I saw that he was trying to pound a nail into a board by a window -- with his forehead. He seemed to be in a great deal of pain. This made me feel very bad, watching him suffer so much just to fix his window pane. I thought, "Here is an opportunity to make someone very happy simply by showing him a better way of doing things." Seeing him happy would make me happy too.

So I said, "Excuse me sir, there is a better way to do that." He stopped pounding his head on the nail and with blood streaming down his face said, "W h a t?" I said, "There is a better way to pound that nail. You can use a hammer." He said, "What?" I said, "A hammer. It's a heavy piece of metal on a stick. You can use it to pound the nail. It's faster and it doesn't hurt when you use it." "A hammer, huh?" "That's right. If you get one I can show you how to use it and you'll be amazed how much easier it will make your job."

Somewhat bewildered he said, "I think I have seen hammers, but I thought they were just toys for kids." "Well, I suppose kids could play with hammers, but I think what you saw were brightly coloured plastic hammers. They look a bit like real hammers, but they are much cheaper and don't really do anything," I explained. "Oh," he said. Then he went on, "But hammers are more expensive than using my forehead. I don't want to spend the money for a hammer." Now somewhat frustrated I said, "But in the long run the hammer would pay for itself, because you would spend more time pounding nails and less time treating head wounds." "Oh," he said. "But I can't do as much with a hammer as I can with my forehead," he said with conviction. Exasperated I went on.
"Well, I'm not quite sure what else you've been using your forehead for, but hammers are marvellously useful tools. You can pound nails, pull nails, pry apart boards; in fact every day people like you seem to be finding new ways to use hammers. And I'm sure a hammer would do all those things much better than your forehead."

"But why should I start using a hammer? All my friends pound nails with their foreheads too. If there were a better way to do it, I'm sure one of them would have told me," he countered. Now he had caught me off guard. "Perhaps they are all thinking the same thing," I suggested. "You could be the first one to discover this new way to do things," I said with enthusiasm.

With a sceptical look in his bloodstained eye he said, "Look, some of my friends are professional carpenters. You can't tell me they don't know the best way to pound nails." "Well, even professionals become set in their ways and resist change." Then in a frustrated yell I continued, "I mean, come on! You can't just sit there and try to convince me that using your forehead to pound nails is better than using a hammer!"

Now quite angry he yelled back, "Hey listen buddy, I've been pounding nails with my forehead for many years now. Sure, it's painful at first, but now it's second nature to me. Besides, all my friends do it this way, and the only people I've ever seen using 'hammers' were little kids. So take your stupid little children's toys and get the hell off my property!"

Stunned, I started to step back. I nearly tripped over a large box of head bandages. I noticed a very expensive price tag on the box and a cost-world company logo on the price tag. I had seen all I needed to see. This man had somehow been brainwashed, probably by the expensive bandage company, and was beyond help. Hell, let him bleed, I thought. People like that deserve to bleed to death. I walked along, happy that I owned not one, but three hammers at home.
I used them every day at school and I use them now every day at work and I love them. A sharp pain hit my stomach as I recalled the days before I used hammers, but I reconciled myself with the thought that tonight at the hammer users club meeting I could talk to all my friends about their hammers. We will make jokes about all the idiots we know that don't have hammers and discuss whether we should spend all of our money buying new fancy hammers that just came out. Then when I get home, like every night, I will sit up and use one of my hammers until very late, when I finally fall asleep. In the morning I will wake up ready to go out into the world proclaiming to all non-hammer users how they too could become an expert hammer user like me.

Slightly modified by Rudi Burkhard from the original by Stephen Kroese. The only change is underlined.

---

Date: Wed, 08 May 2002 10:31:17 -0400
From: Richard Zultner
Subject: [cmsig] RE: Where do trees get used? (The Nine Layers of Resistance)

Where do trees get used? In discussing "why dry trees don't persuade better" -- or "how to persuade better with dry trees" -- the question came up of "what" trees are used "when" and "why". The most refined approach to Buy-In I know of is that recommended by Eli, the Nine Layers of Resistance. I first heard about this in 2000 in Monterrey. Here is a summary:

Content compiled from: Rami Goldratt, "Overcoming the Nine Layers of Resistance to Change", 4th International Conference on TOC for Education, Monterrey, Mexico, 10 August 2000. The presentation summarized the work done by Efrat Goldratt in her doctoral research on resistance to change... presented here in the "Ambitious Targets" format:

POOGI step n. Obstacle (the Layer of Resistance) / Intermediate Objective(s) / action(s)

WHAT to Change? What is the Problem?

1.
"I don't have that problem", That problem does not exist here
Agree on the goal; agree current situation is not good enough; agree problem exists, not their fault
show they suffer from UDEs, show conflict underlying their problem, do single-UDE cloud

2. "My problem is different", We have other, more serious problems
Agree their problem is caused by generic conflict; agree on core problem
do generic cloud from single-UDE cloud

3. "The problem is not under my control", So there is no point to even discussing solutions
Agree they can impact the problem
clarify problem, do [communication] CRT, surface assumptions of generic cloud, identify assumptions they make, they can change

WHAT to Change TO? What is the Solution?

4. "I have a different direction for a solution", You are offering me more of the same old things
Agree on the direction for a solution; agree on need for a win-win solution
explore their solution (or compromise), show how injection removes UDEs, and does not compromise the objectives

5. "The solution does not address the whole problem", There are still some undesirable effects
Agree the solution addresses the whole problem
show how solution addresses single-UDE clouds, do Future Reality Tree

6. "The solution has negative outcomes", Yes, BUT...
Agree tailored solution will not have negative outcomes
tailor solution, trim negative branches, do Negative Branch Reservations

HOW to Change? How to implement the Solution?

7. "There are obstacles to implementing the solution", Yes, BUT...
Agree on strategy to implement solution
identify Obstacles blocking solution, and Intermediate Objectives to overcome them, do PreRequisite Tree

8. "I'm not clear how to implement the solution", So how do we proceed?
Agree on tactics to implement solution
do Transition Tree

9. "Now we have to change what we're used to.", I'm afraid to start the change.
Agree to implement solution
overcome fear of uncertainty, overcome fear of going first

Copyright © 2000 by Richard Zultner, Jonah

If anyone has any corrections or additions to this summary of the Nine Layers of Resistance, please let me know (offline would be fine). The wording is mine, so I'm open to suggestions for improvement... Note that this "latest and greatest" formulation of Buy-In ties the POOGI steps directly to all the Thinking Processes. [Also note step 3, which is very common in ToC for Education, but may not show up in for-profit organizations.] So ALL the trees get used, and several steps are critically dependent on trees to move the audience to the next layer. So, why don't dry trees persuade better? Or, how can dry trees persuade better? --Richard Zultner, Software Jonah

+( resistance to change : behavior

From: "Larry Leach"
Date: Tue, 12 Mar 2002 20:04:12 -0600

You may want to explain to the others what this is about. The information is from Leslie Wilk Barksick, Unlock Behavior, Unleash Profits. There are four ways consequences affect behavior:
1. Positive Reinforcement: Provide desirable consequences following a behavior.
2. Negative Reinforcement: Take away negative consequences.
3. Punishment: Provide an undesirable (negative) consequence following behavior.
4. Extinction: Remove positive reinforcement.

So, imagine a four-block matrix. Let the vertical scale be Provide, Remove. The horizontal be Desirable Consequence, Undesirable (Negative). Then you get (hope this holds together):

            <--------------- Consequences --------------->
            Desirable (+)             Undesirable (-)
Provide     Positive Reinforcement    Punishment
Remove      Extinction                Negative Reinforcement

From: "Tony Rizzo"
Date: Tue, 12 Mar 2002 22:14:05 -0500

Larry is referring to a behaviorist's approach to changing behavior. The reasoning is that executives who choose to not prioritize do so for a reason.
Something in the environment, which can be described by one of the cells in the matrix, continues to reinforce the behavior of choosing to not prioritize. My hypothesis is that by choosing to not prioritize the execs are able to remove an undesirable consequence, such as being chewed out by a superior or a customer. Per the matrix, if we want to replace the non-prioritizing behavior with a prioritizing behavior, then we have to remove that which reinforces the non-prioritizing behavior and install something that positively reinforces the prioritizing behavior. We might even associate a punishment with choosing to not prioritize.

---

Subject: [leansig] RE: leansig digest 22 September
Date: Thu, 23 Sep 2004 17:01:09 -0400

I seem to remember from a labor negotiations class in college (a long time ago). President Eisenhower became involved in the US steel strike of 1959 when it seemed to come to an impasse. My feeble memory recalls he invited leaders of labor and the steel companies to the White House, put them in a room, fed them peanut butter and jelly sandwiches and coffee... and said, "Don't come out until it's settled." It worked then. Why? Respect for Eisenhower? Realization this was important to the country? Fear? Answering a call to greatness strikes me as a good motivator. The question is leadership. Someone must make the call.

Dave Velzy

-----Original Message-----
From: Donald N. Frank [mailto:dfrankasso@optonline.net]
Sent: Thursday, September 23, 2004 4:23 PM
To: Lean SIG
Subject: [leansig] leansig digest 22 September

Donovan's problem of different groups not working together can be solved, provided there is top management commitment, by the following process, called bladder management: Put all the contending parties in a room with lots of soft drinks, coffee, tea, soda, etc., lock the door, and tell them that no one goes to the bathroom until there is an acceptable plan for working together to achieve the targeted ROS (return on sales) goal.
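The four-block consequence matrix from the Leach/Rizzo exchange earlier in this section can be captured in a small sketch. The quadrant names come from the posts; the function and dictionary are illustrative assumptions, not anything from the original book:

```python
# Sketch of the four-block consequence matrix (assumed representation).
# Rows: whether a consequence is provided or removed following a behavior.
# Columns: whether that consequence is desirable or undesirable.
CONSEQUENCE_MATRIX = {
    ("provide", "desirable"):   "Positive Reinforcement",
    ("provide", "undesirable"): "Punishment",
    ("remove",  "desirable"):   "Extinction",
    ("remove",  "undesirable"): "Negative Reinforcement",
}

def classify(action: str, consequence: str) -> str:
    """Name the behavioral effect of providing/removing a (un)desirable consequence."""
    return CONSEQUENCE_MATRIX[(action, consequence)]

# Rizzo's hypothesis, restated in matrix terms: by not prioritizing, the execs
# remove an undesirable consequence (being chewed out), so the non-prioritizing
# behavior is negatively reinforced.
print(classify("remove", "undesirable"))  # Negative Reinforcement
```

Changing the behavior then amounts to moving between cells: remove the negative reinforcement that sustains the old behavior, and add positive reinforcement (or even punishment of the alternative) for the new one.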
+( resistance to change : Kotter

Kotter's Process for Creating Change

In his 1996 book, Leading Change, Kotter outlines a common-sense, eight-step process with which a business leader can cause significant, lasting cultural change within his/her corporation. To those of us whose career objective is to cause the spread of TOC, Kotter's change process is a Godsend. Had I been aware of Kotter's work earlier in my consulting career, my own success rate with TOC implementations might easily have bettered that of my peers. Here is a summary of Kotter's 8-step process for really changing an enterprise:

1. Create a sense of urgency
Read this as, "kill complacency". Unless the members of your team and their people perceive a need for real change, no change will take place.

2. Create the guiding coalition
This step puts long-lasting authority behind the change effort. It also creates a good deal of momentum in the right circles.

3. Develop the vision and strategy
This is the Promised Land. You, the members of your executive team, and your people must all see the same Promised Land, or none of you will make the journey successfully.

4. Communicate the change vision
How vital this is! So very often, the vision is not communicated, not shared, not reinforced, and not achieved.

5. Empower broad-based action
In other words, make the requisite policy changes, measurement changes, and organizational changes. Also, fund the effort, and free your people to act on your vision.

6. Generate short-term wins
If your people and others have to wait years before they see the benefits of your better vision, then they are very likely to lose interest. If they do lose interest, your change process simply fizzles. Short-term gains generate even greater interest in your vision. They get people excited, and they build momentum for your change process.

7.
Consolidate gains and produce more change
With this step, you exploit the momentum achieved with the short-term wins to launch additional change projects and to generate more support. Grow the scope of your improvement effort beyond the first enterprise, beyond the pilot effort.

8. Anchor the new approach in the culture
This is the step that results in the new "how we do things around here".

Action Items

Kotter's process is a generic one. It can be used to cause any desirable organizational change. However, my interest, and possibly yours, is to cause a particular cultural change. Specifically, we want to cause an entire enterprise to adopt the TOC approach to generating and improving profitability forevermore, at least until we learn of a more effective approach. To that end, I suggest the following set of specific actions with which to execute Kotter's change process.

Create a sense of urgency:
* Stop the happy talk, particularly within all the internal publications.
* Highlight the recent successes of your competitors.
* Transform a problem into a crisis and rattle your team into an appropriate state of discomfort. If you are fortunate enough to be suffering from a downturn in your market, capitalize on the opportunity to effect real change now.
* Focus your team and your people on that which can be achieved with a much more effective operation.
* Put numbers on the value of the missed opportunities.
* Make sure that the members of your team know how much they are not earning as a consequence of those missed opportunities.

Create the guiding coalition:
* Build your team. Take your team of executives to a weeklong meeting, off site. The meeting should begin with an appropriate workshop in TOC. Two to three days of training, followed by a two-day discussion session during which your team members fit the TOC approach to your company, will be most effective.
* Above all, make the investment of time.
Unless you demonstrate with your own actions that change is important to you, it will not be important to your people.
* Identify the market segments served by the various enterprises in your corporation or division.
* Earmark each enterprise and the resources of the same, using the Information Cycle diagram as a guide.
* Reach agreement that each set of resources will be focused on a specific enterprise.
* Decide which enterprise should be the first to make the transition to TOC.
* Force a decision. Don't permit any toe-in-the-water nonsense. Drive them to a decision, yes or no.
* Burn the boats! That is, if the answer is yes, make sure that everyone understands that there's no turning back.
* Identify the members of the implementation team.
* Determine a timeline that allows zero procrastination for the change process.
* Generate a list of action items and a clear assignment of responsibility for each item.
* Schedule regular meetings with your entire team of executives, the purpose of which is for you and your team to review the progress of the implementation.
* Be at every one of those meetings, and review the progress of your implementation.

Develop the vision and strategy:
* Work with your team to envision the better operation, and capture that vision with a statement that creates excitement. The following vision statement appeals to me: To develop and sell products of such exceptional value and with such speed that our customers have no choice but to buy from us (vision), with the use of TOC and Leverage Point Management (strategy). Of course, you and your team may want to develop your own statement of vision and strategy, but please feel free to adopt this one.

Communicate the new vision:
* Hold a half-day meeting with your entire organization. Make it a telecast meeting, if necessary. And tell them all what you plan to achieve and how you plan to achieve it. Use whatever consultants you have available, but use them only as a source of material.
You must be the one who makes the announcement.
* Instruct members of your executive team to hold similar meetings with their people. Your team members, in turn, should present the vision and discuss the change process as they apply to their specific operations.
* Have your team members respond to the questions of their people about both your new vision and your strategy for implementation. Remember, nothing causes learning more effectively than the need to teach others.
* Make sure that the communication process floods its way to the lowest levels of the corporation, with managers presenting and explaining the new vision and the implementation strategy to their direct reports at each level.
* Create an internal web site. The site should begin with the vision statement and should show your implementation strategy and progress. Above all, make sure that the site doesn't become stale.
* Train everybody! A brief training session, two days at most, is the most effective way to convey the TOC model and the appropriate TOC solution throughout the organization. It's also the most effective way of achieving the requisite level of buy-in. Believe me. You need buy-in more than you can imagine. Widespread training has been an important part of every successful implementation that I've seen. It was missing from many of the failures.
* State your vision again and again and again, at every opportunity, at every meeting, as a toast to every occasion. Go so far as to have bumper stickers with your vision statement glued in the restrooms of your facility. If you do, the humour of such an outrageous act will help to communicate your vision throughout your corporation.

Empower broad-based action:
This is another area where the pitfalls are severe. Just remember this observation from the most memorable Yogi Berra: "If you do what you always did, you're going to get what you always got."
Your organization's performance improvement comes from changing the behaviours of the people of your organization. Unless you enable those behaviours to change, even the most enthusiastic supporters within your enterprise will find it difficult to carry out your vision. Enabling behaviours to change means that you and your team make it safe for your people to behave as you need them to behave. It also means that it becomes safe for your people to stop doing those things that don't make sense to them and don't support your change vision. Therefore,

* Let your people tell you which of the current, so-called best practices are really creating obstacles to change, and annihilate those obstacles.
* Listen to your people when they tell you which of the current measurements are really creating obstacles to change, and eliminate those measurements.
* Empower your people to say no, even to you, if that which they are being asked to do is inconsistent with your change vision.
* If any sort of reorganization is necessary, then reorganize and be done with it, rather than expecting your people to overcome the deficiencies of a structure that is inconsistent with your change vision.
* Identify everyone who might be hurt in any way by your change process, and make absolutely sure they are not hurt in the least. Find some way to overcompensate those individuals for their perceived losses. Do this, and you create converts and supporters out of those who would resist your change effort.
* Install a web-based discussion group within your intranet, where your people can complain about the things that are getting in the way of your change process, and read the discussions. Respond to those discussions after you've taken action to eliminate their obstacles.
* Provide whatever funding is required to support training and the acquisition of necessary tools. Your people will be scrutinising your behaviour very closely.
Their scepticism indicators will red-line if they see you pinching pennies with training and tools.

Generate short-term wins:
* Rather than trying to convert the entire corporation or the entire division at once, divide and conquer. Identify the enterprises of, say, your division, and begin the process with the one enterprise that gives you the greatest probability of success.
* Begin the enterprise-wide implementation of TOC by addressing the constraint of this enterprise.
* Measure the performance of your enterprise, before TOC and after TOC, with time histories.

Consolidate gains and produce more change:
* Publicise the performance improvement and use it to light a fire under the team that runs one of your other enterprises.
* Publicise the second round of improvements that that first enterprise anticipates achieving. This will add fuel to those fires.

Anchor the new approach in the culture:
* Review the progress of the implementation weekly, with your direct reports.
* As soon as the appropriate report of operational measurements is available (the buffer report, for example), review this report with your people, weekly.
* Have the report compiled by a person who reports directly to you. If you don't, the report will be filtered heavily; it won't be nearly as effective.
* Make it unequivocally clear to everyone in the enterprise that they are required to provide the necessary data inputs for the report in a timely manner, voluntarily. In fact, make it clear that it is their responsibility to communicate their data to your assistant.
* If you don't have a personal assistant who can help you with implementation and can generate this and other reports for you, hire one!
* Have the report distributed widely, two days before the weekly meetings during which you review the report.
* Include in the report a measurement of the progress of the implementation.
For a multi-project implementation, for example, it is useful to track the percentage of active projects that have been converted to the TOC format.
* Ask questions. Upon seeing items in the report which give you reason to pause, ask, "What's going on? What are we doing to fix it? What are we doing to avoid this problem in the future?" Only you can ask these questions and get the correct responses.
* Tie the financial reward of your team and your people directly to the profit improvement that they achieve for their enterprise.
* When you begin noticing that your people are taking home huge amounts of money, DON'T LIMIT THEIR REWARD! Rather, be thrilled about it, because their take-home pay is now an indicator of the profitability of your business. If their take-home pay is huge, then so is your profitability.
* Publicise the fact that your team and the people of that first enterprise are taking home huge amounts of money. The news will fuel the fires under the posteriors of those in the remaining enterprises.
* Finally, talk to those expensive consultants. Use them as beacons, to ensure that you and your team are implementing the right model in the right way.

+( roadrunner

From: "J Caspari"
Subject: [cmsig] Re: Roadrunner and Multi-tasking
Date: Fri, 18 Jan 2002 10:38:29 -0500

1 CASPARI: I think that you and I may have very different views of what the roadrunner work ethic behavior represents. I view multi-tasking and roadrunner behavior as two entirely separate issues.

LEACH: Please offer your definitions. Perhaps Bill can use them for his BOK.

2 CASPARI: I would agree with Rob Newbold, who says that the two rules (1) << When you have scheduled work, do it as quickly as possible ... >> (2) << When you don't have work, don't pretend you do ... >> << ... are sometimes called the *roadrunner* mentality. >> except that I would change the word, scheduled, in the first rule to assigned.
(Source: *Project Management in the Fast Lane: Applying the Theory of Constraints*, p. 194.)

LEACH: I see Roadrunner behavior as inclusive of eliminating bad multi-tasking (the primary effect). Roadrunner has a couple of other considerations; i.e. starting as soon as you get the input (eliminating student syndrome) and finishing the previous task, and passing on the result as soon as done.

2 CASPARI: I can see how one might include the multi-tasking issue as a part of the mentality of the first rule. Multi-tasking certainly has a significant effect on the overall time required for the accomplishment of an assigned task even without the efficiency loss (associated with multiple setups for the same task?) that you mentioned previously. Add an efficiency loss effect, and the effect is HUGE. The combined effect must be so large that some projects are never completed at all.

2 CASPARI: Nevertheless, I think that it is useful to maintain a distinction between the two issues (roadrunner and multi-tasking) because, in my view, the issues apply to different groups of employees. The roadrunner mentality issue results in a specific behavior of a person *doing* a task. The multi-tasking issue results in a specific behavior of a person *assigning* a task (or a person in a position of higher authority seeking control information about a task).

(Note for Jeff Fenbert: Welcome to the Constraint Management discussions. I hope that this posting provides an adequate response for your clarity reservation.)

---

From: "Larry Leach"
Subject: [cmsig] Roadrunner Behavior
Date: Sat, 19 Jan 2002 10:09:12 -0600

LPL Reply: We have had many discussions on this. The negative image is from the cartoons, which some interpret as the Roadrunner being kind of scatterbrained, popping from here to there with no apparent goal. That was not, of course, the intent. For these people, the use of 'Relay Runner' may work better.
It conveys the idea that I need the input to start, I put out my max focused effort to get my job done, and I immediately hand off the result to the next in line.

John, although I do not have a problem with your clarification for management behavior, I have a sufficiency reservation with Rob's definition. It does not seem to deal with the problem that most people face; i.e. they have multiple tasks in front of them, often from different sources, and need to know that they best serve the system by working one at a time (i.e., avoiding bad multi-tasking). Most people are driven to multi-task because the system rewards them for doing so; they keep the most people happy by showing progress on multiple tasks at once, even though they all take longer that way.

In many of today's project organizations, tasks do not all come from "an individual's manager"; they come from the various projects. No one manager assigns tasks to an individual. Professional individuals manage their own work. Even with a full multi-project implementation, resources frequently see multiple tasks assigned to them and ready to start (due to the fact that we do not level resources across all projects, and due to statistical fluctuations). They need to know that management's expectation is that they use the buffer report to select which one, and only one, of those to do next.

Tony's clarification helps. We do need effective operational definitions for these terms we throw around. Bill's BOK can go towards this. I was going to refer you to the definitions in the glossary of my book, but see I did not include Roadrunner. Nor did I find it in my TOC dictionary (http://advanced-projects.com/TOC/Defs.html). I will make my contribution by adding it there.
Here is what I will add as the first cut, and I would appreciate feedback on it: Roadrunner Behavior: The work ethic expected of each individual working on Critical Chain projects, in which they start a task as soon as they are available and have all of the inputs, work on only that one task applying 100% of their work effort, and pass on the result of their work as soon as they complete it. If presented with multiple tasks to work on, resources engaging in Roadrunner Behavior use the Buffer report and buffer management rules to decide which task to work on next. (PS to Bill Dettmer: You might use my and other glossaries as a check list for the glossary in your BOK. The glossary of the PMBOK is one of its most useful elements). --- Date: Sun, 20 Jan 2002 21:33:57 -0800 (PST) From: Tim Sullivan Subject: Re: [cmsig] Re: Roadrunner and Multi-tasking HP, here is how I teach the RoadRunner work ethic. I use the acrostic: PRIDE of the RoadRunner.

P = proper sequence
R = regular speed
I = immediate start (and don't stop till finished)
D = defect free
E = expeditious transfer

Comments: Proper sequence should be determined by the system, not every individual. It is often first-in-first-out, or earliest due date. But there must be a way for a worker who has more than one thing in queue to know which to start. Regular speed means that you never slow down and stretch the work to fit the available time (Parkinson's law); but you also don't work at breakneck speed and cut corners. Regular speed must be sustainable for a long period of time. Immediate start is, hopefully, obvious. Even if there is not enough work in your queue to keep you busy all day, start NOW! This is also where the multi-tasking question comes in. If the rule for "Proper sequence" means a task that just arrived should interrupt your current task (e.g.
it is on the critical chain and the task you are doing now is not -- then drop what you are doing and start the critical task immediately [that would be good multi-tasking]). Otherwise complete your current task. Defect Free is also, hopefully, obvious. Expeditious transfer means you must pass the work along immediately. The next resource can't start immediately if it is not delivered. That is my understanding of the RoadRunner work ethic. It is fairly simple, but hard to implement because it is counter to the culture in most organizations. --- > > -----Original Message----- > > From: Steve Holt [mailto:scholt4@attbi.com] > > Sent: Thursday, December 06, 2001 1:51 AM > > > > I'll jump in on Scott's clarification with another: > > The Roadrunner (the cartoon character again) is very fast, but it's a sustainable pace. He can run that fast all day long. Sometimes when we use the Roadrunner as an analogy people mistakenly think that we're talking about sprinting, that is, going very fast for a short time. The essence of Roadrunner behavior is two modes: 1) full (sustainable) speed and 2) stopped. Or, as Scott said, work at a fast sustainable pace when you have work, don't work when you don't have work. (It's ironic that we have to specifically state the last part, but the concept of "everyone always has to have something to do" is a strong paradigm.) > > > > Nope, that's not what I meant by roadrunner concept. I picked this up from Goldratt's Satellite Series. It refers to two speeds for any worker: flat out or stopped--a reference to the Looney Tunes (Warner Bros.) character, Roadrunner, whom Wile E. Coyote (sic?) was forever trying to trap/capture. For international list members, this reference may be obscure. > > > > The concept is: when they have work, they work as hard as possible on it. When they don't have work, they stop: read a magazine, sit in the lunchroom.
In this way, WIP is controlled, materials aren't released prematurely, work centers aren't flooded early. +( roadrunner behaviour From: "Potter, Brian (AAI)" To: "CM SIG List" Subject: [cmsig] CCPM in a subcontracting environment Date: Mon, 29 Nov 1999 08:35:33 -0500 René, In his novel "Critical Chain," Goldratt offers one approach. In essence, offer the supplier the following in exchange for fast response (roadrunner rule behavior) and "resource buffer" signals back to the buying organization as the supplier nears task completion after the supplier receives all inputs ...

- Advance notice of timing at increasingly more frequent intervals (that is, give the supplier the same "resource buffer" signals an internal supplier would receive, but conform to the supplier's requirements).
- Assurance that performance is relative to the date the supplier has ALL materials and information needed to start in hand.
- Premium payment for performance of ...
  o "Roadrunner Rule" behavior
  o "Resource buffer" signaling

Goldratt implies that one must be prepared to negotiate the CCPM contractor/subcontractor relationship on a case by case basis considering the operational needs of both organizations. ------------------------------------------------------------------------------- Date: Wed, 15 Dec 1999 07:17:05 +1100 From: Peter Evans To: "CM SIG List" Subject: [cmsig] Roadrunner ethic - ideas I have found the car race pit crew a useful analogy for critical chain behaviour. This is briefly set out in Newbold's book. The following is how I present the concept. It works well - at an intellectual level - but people very quickly say something like "but I don't know if I can accept people (and me) being _idle_". The explanation I use goes like: Project work is/should be like the way a pit crew works.
Each person on the project team:

~ knows the purpose of the project
~ knows who is the customer of the project
~ knows exactly what they are required to do
~ is trained to do what is required
~ can do all required tasks
~ knows exactly where their work fits into the picture, why they are doing what they are doing
~ knows that nothing is more important than being ready to start the next project task
~ works as an integral part of a team
~ is ready to start when required, but does not know the exact starting time
~ works as quickly as possible (safety etc preserved)
~ knows when they are finished
~ always finishes the task without multi-tasking other tasks
~ knows that in between project tasks they must be ready to spring into action, and only work on tasks that prepare for the next task (but do not start it)
~ makes no excuses, but learns from every task how to do it quicker next time

Q1. Any suggestions for improvement?
Q2. Are there other suitable but different analogies? A similar one might be turning around bombers on the ground in wartime. A surgical operating theatre should operate this way also. Do they always know when they are finished (other than a dead patient)?
Q3. Strategies/tactics for selling the solution (understanding the problem is not an issue)?

Thanks Peter Evans +( RoI Return on Investment From: "J Caspari" Subject: [cmsig] ROI formula, the real problem Date: Sun, 25 Feb 2001 10:24:23 -0500 Okay, Brian, but does Dr. Maday's formulation capture the essence of the formula that he is restating (on page 28 of *Deming and Goldratt* by Lepore and Cohen)? ---------- Dr. Maday wrote: << On p28 of Deming and Goldratt by Lepore and Cohen, the authors present a formula for the change in ROI for some additional investment. There is a glitch here because the formula contains only Delta(I) in the denominator. Thus, for no additional investment the change in ROI would be infinite.
The correct formula should be Delta(ROI) = (I*Delta(T - OE) - (T - OE)*Delta(I))/(I**2) >> ---------- The original from Lepore and Cohen on page 28 was: deltaROI = (deltaT - deltaOE) / deltaI where: deltaT = change in Throughput deltaI = change in investment deltaOE = change in operating expenses deltaROI << is the return on investment for the specific investment, and is equal to the ratio of the additional net profit [deltaT - deltaOE] with the additional investment required. >> The purpose of this metric is to compare a proposed investment decision with other channels for investing money. ---------- Using the parameters:

Element   Original   New    Delta
T           1000     1100    100
OE           800      850     50
I           2000     2025     25
T - OE       200      250     50

And substituting into Lepore and Cohen's formula I obtain deltaROI = (100-50) / 25 deltaROI = 2 = 200 percent which appears to be about two orders of magnitude from 2.3.... percent. The formula presented by Lepore and Cohen is, of course, the reciprocal of the traditional payback period formula. This formula places an upper limit on the internal rate of return of the investment proposal and approximates the internal rate of return when the economic life of the proposal is at least twice the payback period and the rate of return is high (e.g., greater than 50%). What is the rate of return on a proposal that offers $100 of T for each of three years and requires no additional investment or operating expense (and which does not preclude reducing I or OE)? -------- Original Message -------- Subject: [cmsig] Re: Deming and Goldratt Date: Sun, 25 Feb 2001 From: Brian Potter Executive summary for those who wish to skip the mathematics and numerical analysis: Numerically computing changes in RoI is an "intrinsically unstable numerical process." The results of the computation may conceal significant events within the time interval over which one calculates RoI.
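Caspari's comparison above is easy to replay. A minimal sketch using his numbers, contrasting the Lepore and Cohen formula with the actual before/after change in RoI:

```python
# Caspari's example: the Lepore & Cohen formula versus the actual change
# in RoI computed directly from the before/after values.
T0, OE0, I0 = 1000, 800, 2000   # Original
T1, OE1, I1 = 1100, 850, 2025   # New

lc = ((T1 - T0) - (OE1 - OE0)) / (I1 - I0)   # (deltaT - deltaOE) / deltaI
actual = (T1 - OE1) / I1 - (T0 - OE0) / I0   # RoI_new - RoI_original

print(f"Lepore & Cohen: {lc:.0%}")    # 200%
print(f"Actual change:  {actual:.3%}")  # 2.346%
```

The two results differ by roughly two orders of magnitude, as the post says: the formula is a payback-style return on the incremental investment alone, not a change in the organization's RoI.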
The calculation may also amplify either random fluctuations or artifacts of the number system used by the calculator into apparent (but not real) significance. On the balance, the best approach probably involves picking a reasonably wide time interval (one long enough so that it will not be sensitive to "jumps" created by each individual financial transaction; one or more months probably works well for most organizations) and observing that over that time interval RoI changed from "RoI-start" to "RoI-end" and the difference between the two was "RoI-end - RoI-start" which is "an improvement," "a degradation," or "a statistically insignificant fluctuation." Understanding the differences among the three is one step along the path from "fire fighting" to "managing." Naturally, "Delta RoI" estimates also have a place in considering alternative future actions. Grasping the business meaning of "statistically insignificant fluctuation" may be part of what Deming meant by "profound knowledge." Clare, John, et al, Start with ... RoI = ( T - OE ) / I Differentiate (use the "quotient rule" or treat "/ I" as "* I ^ -1" and use the "product rule") with respect to time (t) yielding ... d RoI / d t = ( d ( T - OE ) / d t ) / I - ( ( T - OE ) * d I / d t ) / ( I ^ 2 ) Switch from differential notation to difference notation ... Delta( RoI ) = Delta( T - OE ) / I - ( ( T - OE ) * Delta( I ) ) / ( I ^ 2 ) ... which matches the formula Clare cited from _Deming and Goldratt_. A real mathematician in the list can probably offer a more rigorous derivation, but this approach works for the computer scientist cum engineer in me. Applying John's example (expanded below by adding the "T - OE" and "RoI" lines) ...
Element   Original   New       Delta
T           1000     1100       100
OE           800      850        50
I           2000     2025        25
T - OE       200      250        50
RoI          10%      12.346%     2.346%

For future comparison purposes when one carries the computation to absurd precision extremes, the "New RoI" is actually 12.34567901234568% and the "Delta( RoI )" is actually 2.34567901234568%. Jamming in the numbers from John's example (using "Original" values for "T - OE" and "I") ... Delta( RoI ) = 50 / 2000 - 200 * 25 / 2000 ^ 2 = 50 / 2000 - 5000 / 4000000 = 0.025 - 0.00125 = 0.02375 = 2.375% Switching to "New" values for "T - OE" and "I" ... Delta( RoI ) = 50 / 2025 - 250 * 25 / 2025 ^ 2 = 50 / 2025 - 6250 / 4100625 = 0.02469135802 - 0.001524158 = 0.023167200121932 = 2.317% The average (arithmetic mean) of these two estimates for "Delta RoI" is 2.3458600060966%. Their geometric mean is 2.345679012345647% (a very close approximation for the "actual" value ... The difference may fall within the rounding and truncation errors introduced by the calculator I used. The difference would almost certainly be overwhelmed by the "statistical noise" in ANY actual transaction stream.). Switching to "average" values ( [ Original + New ] / 2 ) for "T - OE" and "I" ... Delta( RoI ) = 50 / 2012.5 - 225 * 25 / 2012.5 ^ 2 = 50 / 2012.5 - 5625 / 4050156.25 = 0.023455885189614 = 2.346% Note that the difference formula for "Delta RoI" (derived above and quoted from _Deming and Goldratt_) is sensitive to the time base one chooses for "I" and "T - OE". In John's example, NONE of the three "Delta RoI"s calculated from the formula exactly equaled the "actual" "Delta RoI" calculated by subtracting the "Original RoI" from the "New RoI". In this instance, the computation using "average values" for "T - OE" and "I" missed by less than 0.01%. Also note that one estimate was high and the other estimate was low. This was not an accident. If money were a continuous quantity the two estimators would converge to one another and the "actual value" for smaller changes in time.
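Potter's three finite-difference estimates can be checked directly. This sketch reuses his figures, including the geometric-mean observation:

```python
# Difference formula Delta(RoI) = Delta(T-OE)/I - (T-OE)*Delta(I)/I**2,
# evaluated with "Original", "New", and "average" values of (T-OE) and I.
def delta_roi_estimate(net, inv, d_net=50, d_inv=25):
    return d_net / inv - net * d_inv / inv**2

est_orig = delta_roi_estimate(200, 2000)    # ~2.375% (high)
est_new = delta_roi_estimate(250, 2025)     # ~2.317% (low)
est_avg = delta_roi_estimate(225, 2012.5)   # ~2.346%
actual = 250 / 2025 - 200 / 2000            # exact change in RoI

# Potter's note: the geometric mean of the high and low estimates lands
# essentially exactly on the actual Delta(RoI) in this example.
geo_mean = (est_orig * est_new) ** 0.5
print(f"{est_orig:.4%} {est_new:.4%} {est_avg:.4%} {actual:.4%} {geo_mean:.4%}")
```

One estimate brackets the actual value from above and one from below, which is the behavior the post describes for this kind of numeric differentiation.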
Unfortunately, computing "Delta RoI" amounts to "numeric differentiation," a process which numerical analysts will warn us is intrinsically risky. Picking large time intervals ignores changes to RoI between the two end points. Picking small time intervals exposes the calculation to statistical variation in the data and to artifacts of the floating point number system in our calculators and computers (potentially yielding a result based more on "noise" than on "signal"). --- Date: Mon, 05 Mar 2001 13:00:23 -0500 From: "Dr. Clarence J. Maday" Subject: [cmsig] Delta(ROI) Revisited Let's revisit Delta(ROI). Manufacturing at Warp Speed, page 242, by Schragenheim and Dettmer shows the same formula for Delta(ROI) as does Deming and Goldratt by Lepore and Cohen. In my earlier discussion I suggested a formula based on the differential of a quotient. That formula is OK for relatively small changes in I (about 20% or less). Brian Potter then gave a very lucid explanation, tying the derivative and differential together in a really neat way. Differentials can be very useful even if we don't go to the limit to form a derivative. We don't have to worry about continuity of the various quantities. The calculations can be made without any formulas by going back to the beginning (as I learned in Advanced Calculus). Start by stating what you want. We want Delta(ROI) for some additional investment, Delta(I). Then construct the difference between the final situation and the initial situation. In this case, we don't have to go any further. There is no restriction on the size of Delta(I). Let's use numbers to illustrate. We consider an example where T = 5, OE = 3, and I = 10. Then (ROI)1 = (T - OE)/I = 0.2. (1) Now, let Delta(I) = 5, Delta(T) = 2 and Delta(OE) = 1. Then (ROI)2 = (T + 2 - (OE + 1))/(I + 5) = 3/15 = 0.2 (2) or no change from the initial situation (1), so that Delta(ROI) = 0. (3) The additional investment brings in the same return as the original investment. ROI is unchanged.
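Maday's arithmetic can be replayed as a quick check; the last line computes what the published formula yields, anticipating his point that the two disagree:

```python
# Maday's counterexample: the additional investment earns the same 20%
# as the base investment, so RoI does not change at all, yet the
# published formula reports a "Delta(ROI)" of 0.2.
T, OE, I = 5, 3, 10
dT, dOE, dI = 2, 1, 5

roi_before = (T - OE) / I                      # (1): 2/10 = 0.2
roi_after = (T + dT - (OE + dOE)) / (I + dI)   # (2): 3/15 = 0.2
formula = (dT - dOE) / dI                      # published formula: 1/5

print(roi_after - roi_before, formula)
```

The direct difference is zero while the formula gives 0.2, which is exactly the contradiction Maday goes on to state.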
When the formula suggested in the references cited above is used, however, we get Delta(ROI) = (Delta(T) - Delta(OE))/Delta(I) = 0.2 (4) or a 100% increase in ROI over the initial case (1)! This is not correct. There is no increase in ROI, as demonstrated by (3), and this establishes a contradiction in (4). Other than this concern, I recommend Manufacturing at Warp Speed especially for its treatment of Simplified DBR. S-DBR parallels some current work that questions the uses of Critical Chain feeding buffers that increase the length of the project but may not actually be used in the execution of the project. Of course, feeding buffers that do not increase the duration of the project are entirely appropriate. --- Date: Mon, 26 Feb 2001 02:09:41 -0500 From: Brian Potter Oh, now I see. Clare and I both assumed the domain of discourse was changes in RoI with respect to TIME. From what you wrote and from a more careful rereading of Clare's original post, Lepore and Cohen were discussing comparative RoI values for different investment alternatives. As Clare points out, there IS a zero denominator when there is both no new investment and no reduction in current investment. But (since we did not change the investment mix) there is also a zero numerator (because we are not considering changes caused by factors other than the investment mix). We need not resort to calculus and take the limit of "( Delta T - Delta OE ) / Delta I" as "'Delta I' goes to zero." Since that situation implicitly considers a "no changes" investment alternative, one may consider the RoI of the "no changes investment alternative" to be either 0% or whatever return one gets by investing in (say, T-bills) while one ponders other alternatives at greater length. Basically, the "Delta I = 0" case is not interesting because the domain of discourse is CHANGING the investment mix.
This decision making approach (calculating the expected future RoI values for possible investment alternatives) can have some merit. For it to be REALLY useful, one should do some homework, first. The alternatives under consideration should be independent in that each (by its presence or absence) will have no significant impact on the others. The alternatives should be feasible both individually and in collections considered for simultaneous implementation (e.g., none individually or in combination should overload a constraint or over commit other available resources). The alternatives under consideration must honor the organization's "necessary conditions" (see [for example] the recent "The GOAL of any company" thread). Given that the homework is done so that all the alternatives (individually and in combinations under consideration) ...

- satisfy necessary conditions
  + satisfy the market (including customers, governments, geographic neighbors, the environment, etc.)
  + satisfy employees (perhaps, a special case of "satisfy the market")
  + satisfy the owners (perhaps, also a special case of "satisfy the market")
- recognize constraints by one or more of ...
  + improving exploitation of ...
  + improving subordination to ...
  + elevating ...
  ... one or more constraints
- recognize constraints by NOT ...
  + obstructing exploitation of ...
  + obstructing subordination to ...
  + decreasing capacity of ...
  ... any constraint

... the collection of investment alternatives with the largest expected future RoI would be the "best" new investment mix. Mostly, I would expect strategic concerns to dominate the decision process. The RoI issue should surface only as a "tiebreaker" when two or more mutually exclusive alternatives are essentially equal at meeting organizational strategic goals while recognizing the organization's constraints.
Thus, I think this issue is a tempest in a teapot because it becomes an issue only in close calls where all the alternatives probably have substantial merit. Since RoI of a FUTURE investment is something of a guess in any case, this boils down to a pseudo-scientific way to GUESS which investment(s) will be "best." Leadership teams who did their homework will have nothing on the table but "good" choices; they CANNOT guess wrong. Leadership teams who did NOT do their homework have no clue about the quality of their alternatives. Picking the choice(s) with the "best" RoI is neither better nor worse than any other arbitrary choice among alternatives of unknown quality (rolling dice, flipping coins, and other methods avoiding bias favoring opinions of someone who makes particularly bad choices might serve as well [there's a controversial research topic!], and biases favoring someone who makes particularly good choices might work better). +( Sales&Marketing and compensation What Prevents Sales From Growing Faster? The Constraint Sales Skill What can be done today to make sales grow faster tomorrow? While there is rarely a simple answer to this question, the process for finding the answer does not have to be as difficult as you may think. The first question to be answered is which part of your business is constraining sales today: Operations or Sales? If you are unable to produce the products or services being sold in a timely manner, then Operations is the constraint. For this situation, use Eli Goldratt's Theory Of Constraints (published in "The Goal") to rapidly increase Operation's capacity. If you are able to produce many of your orders, with reasonable delivery to your customers, then Sales is the constraint. Solutions to stimulate a growth in sales, in less than 12 months, must be a Sales solution. A Marketing solution (new products, new services, new policies, new software, new markets) will generally take longer than one year.
Marketing solutions will be discussed briefly at the end of this paper. When Sales Productivity is the constraint to growth There are only two components to winning a sale: knocking on the right door (Sales Activity) and how the salesperson performs once the door opens (Sales Skill). Both are required to win a sale, but only one can be the constraint at a single point in time. When Sales Activity is the constraint, the solution is relatively simple. Take an action that will cause your salespeople to knock on more doors and get in front of more people. In essence, force your sales force to work harder: do more of what they already know how to do. Analyze most management actions to raise sales productivity and you will find that the effect of almost every single one is to stimulate an increase in Sales Activity. The most common actions are incentives, establishing activity goals, adding sales support, cellular phones, time and territory management, advertising, and a change in the compensation plan. Every one of these initiatives is designed to get salespeople knocking on more doors. When Sales Skill is the constraint to Sales Productivity For sales forces in the business to business selling environment, Sales Skill is almost always the constraint to winning more sales. When Sales Skill is the constraint, the solution is not nearly as simple as consultants and senior sales management would like to believe. Winning a sale is a business process just as manufacturing a product is a business process. A business process, by definition, must have a slowest step. The Theory Of Constraints has shown that improvements in the manufacturing process will not work (which means they are not really improvements) unless they are focused on the slowest (constraint) step. An "improvement" in manufacturing only counts if the plant produces more good product that is sold, or reduces inventory, or reduces operating expenses.
Since winning a sale is also a business process there must be, by definition, a constraint as well. If Sales Activity is not the constraint, then there must be a constraint Sales Skill. But how many skills does an excellent salesperson have? The list is almost endless. It could be 30 or 50 or even 100 different skills. But which of all these necessary Sales Skills is the constraint skill for your sales force at the present time? If the constraint is the sales force's skill, then the only improvements that will cause sales to increase would be those focused on the constraint Sales Skill! This is the problem with most sales training initiatives. The purpose of sales training is to improve Sales Skill. But which skills need to be improved? No distinction is made. In other words, sales training gives salespeople a little bit of information on many different skills. Salespeople tend to pick one or two ideas to try, but the skills they choose to focus on are rarely their constraint. They may improve in the one area, but an improvement only counts if they make more good sales faster!

Good sales can be defined and measured as:
* Increasing the average margin of sales won

More sales can be defined and measured as:
* Increasing the hit ratio (% won) of proposals submitted
* Increasing the average dollar value of proposals won
* Increasing the number of proposals submitted (holding the hit ratio constant)

Faster sales can be defined and measured as:
* Reducing the length of the Selling Cycle. This is the average time to win a sale as measured by the date of the first contact to the date the customer submits the order.

Analyzing your current situation The first step a company must take before deciding on any initiative to raise sales productivity is to determine where the constraint lies. If the constraint lies in Operations, then sales productivity will not increase until Operations exploits or elevates their constraint.
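The good/more/faster measures defined above are straightforward to compute from proposal records. A sketch with made-up data and illustrative field names (none of this comes from the article itself):

```python
# Hypothetical proposal log; field names are illustrative assumptions.
from datetime import date

proposals = [
    {"won": True,  "value": 120_000, "margin": 0.30,
     "first_contact": date(2001, 1, 10), "order": date(2001, 4, 10)},
    {"won": True,  "value": 80_000,  "margin": 0.25,
     "first_contact": date(2001, 2, 1),  "order": date(2001, 5, 2)},
    {"won": False, "value": 150_000, "margin": 0.28,
     "first_contact": date(2001, 1, 20), "order": None},
]

wins = [p for p in proposals if p["won"]]
hit_ratio = len(wins) / len(proposals)               # "more" sales
avg_value_won = sum(p["value"] for p in wins) / len(wins)    # "more"
avg_margin_won = sum(p["margin"] for p in wins) / len(wins)  # "good"
avg_cycle_days = sum(                                 # "faster"
    (p["order"] - p["first_contact"]).days for p in wins
) / len(wins)

print(hit_ratio, avg_value_won, avg_margin_won, avg_cycle_days)
```

The point of tracking all four together is the article's own: an "improvement" only counts if it shows up as more good sales, faster.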
Once Operations is successful the constraint usually shifts to Sales. This can often occur quickly using Theory Of Constraints principles, so the Sales Department must be ready for the constraint shift. The next step is to find the current constraint to sales growth within the sales force. This analysis begins with Sales Activity. The key question is not how many calls they are making; it is how many active Sales Goals they are pursuing. A Sales Goal is something the account pays money for. A salesperson can have several Sales Goals being pursued at certain key accounts. Sales Activity is a measure of the number of right doors being knocked on (or phones being called). A productive salesperson has a higher % of their contacts focused on winning specific Sales Goals. Most salespeople knock on a reasonable number of doors, but the mediocre ones are just showing up without a specific Sales Goal in mind. They are, in essence, waiting for the account to decide that they want something and then the salesperson will vigorously pursue it. If your analysis shows a sufficient amount of Sales Activity, the next step is to focus on Sales Skill. This is a far more difficult analysis because Sales Skill is the salesperson's behavior face to face and voice to voice with a customer. At the end of a month, it's easy to count Sales Activity, but what is your measure of the Sales Skill each salesperson exhibited during all of those contacts? A Definition of Sales Skill The dictionary definition (ability, proficiency, trade or craft) is no help so one had to be developed that could be easily measured. A useful definition of skill is "a set of behaviors that are expected to produce a result". In a one call-selling environment, the expected result of a skillful salesperson is a sale. In the multiple call sale, what should be the expected result of one sales call? Most sales training companies and consultants say the expected result is an Action Commitment by the customer.
The customer agrees to take some action (an example would be setting up a meeting with another buying influence) that indicates progress towards ultimately winning the sale. This result is useful, but it is not the best measure of Sales Skill. The Best Measure Of Sales Skill The Business Results Premise Businesses buy products and services to produce their own products and services. They make their buying decisions for one and only one reason: to produce better business results. This is the Business Results Premise. It can always be argued that individuals within a business will make buying decisions for a host of reasons above and beyond a specific business result. But the business, as a business, develops budgets and justifies buying decisions based on some expected improvement in business results over time. An improved business results statement must begin with "reduce" or "increase" a measure that the customer wants to improve. Understanding this most basic principle (that businesses buy to produce better business results) is the key to measuring Sales Skill. The best measure of Sales Skill is what the salesperson learned about the customer's business results. The specific learning would be the current business results being produced and the desired business results the customer expects to produce after the solution is implemented. Identifying The Constraint Sales Skill If the Business Results Premise is true, then the constraint Sales Skill must be that skill or narrow set of skills that is preventing this learning! When sales management is asked this question: What prevents your salespeople from learning their prospect's and customer's current and desired business results? the answer is almost always that "nothing" prevents this learning. So why don't salespeople have this vital knowledge? The answer is embarrassingly simple: they've never been asked to learn it!
If good salespeople are trained to learn their prospect's business results they will often begin to get the learning immediately. Therefore, simply identifying the constraint Sales Skill is sometimes enough to eliminate it as the constraint, at least for your most effective salespeople. Eliminating the constraint Sales Skill will cause sales to increase rapidly until the constraint to sales growth shifts; either to a different Sales Skill or back to Operations or Engineering. The business process of rapidly growing sales begins with identifying and elevating the current constraint. But at the same time the constraint is being elevated, the next constraint must be predicted and immediately acted upon. Rapidly growing sales is a continuous process of elevating the constraint today and, at the same time, preparing to elevate the constraint of tomorrow. A Marketing Solution versus A Sales Solution Elevating the constraint of tomorrow (when the market becomes the constraint) is a Marketing responsibility. The actions a company decides to take today to rapidly grow sales are usually taken with a short-term focus: sales results in less than 12 months. This is the focus and responsibility of the senior Sales executive. Senior management and specifically, the senior Marketing executive, have an even greater responsibility: to decide the actions to take today to ensure rapid sales growth next year and beyond. Companies who continuously have problems growing sales in the short term usually have a Marketing productivity problem rather than a Sales productivity problem. It turns out that the constraint Marketing Skill is the same as the constraint Sales Skill, but for a market segment rather than a specific account. Senior management must determine a direction for the company. Marketing must then identify targeted market segments and learn the current and desired business results of representative accounts that are included in each segment.
They must then make decisions to change the products &/or the capabilities and services of the company so that the targeted market segments' business results will significantly improve. A key measure of Marketing skill and productivity is that new products, new services, new policies, or whatever has been changed by Marketing is presented and justified in the context of improving specific and measurable market segment business results. When Marketing is productive, Sales will be productive. If Marketing is unproductive (they are not introducing innovative new products, new services, or new market segment direction) Sales can still be productive, but the job is far more difficult. In the long run, an unproductive Marketing group will always eventually result in Sales Productivity declining over time! That is why business strategy is always more crucial to long term rapid and profitable sales growth than business tactics. Conclusion If you are in Sales, focus on learning key accounts' current and desired business results. Then be seen by those key accounts, after they buy, as helping them produce their desired business results, fast! If you are in senior management or Marketing, first determine the key market segments to focus on. Then learn the current and desired business results of each market segment. This learning can be used to develop better products, better services, better policies, and better advertising for each market segment. The learning can also be used to train the sales force on what to learn when they are pursuing individual Sales Goals at key accounts within the key market segments. --- compensation --- From: "peter evans" Subject: [cmsig] Re: Measurements for/of sales people.
Date: Tue, 17 Oct 2000 07:40:23 +1100

From: Tony Rizzo
> Those of you who have read the first 4 issues of Product
> Development Strategies w/ TOC have seen the argument against
> measuring and rewarding individuals in the sales function
> of new-product introduction organizations on the basis of
> the monthly revenue that they generate. I won't elaborate
> on all the undesirable effects that this particular
> measurement creates for both Type-A organizations and
> Type-B organizations. Issues 3 and 4 of the newsletter
> cover these well. Still, two questions remain.
>
> First, what operational measurement should these people
> have available to them?
>
> Second, on the basis of which measurement, if any, should
> these people be rewarded, and by what algorithm?

Tony

We need to define:

1. The types of sales people. They can sort of be defined by a 3-sided matrix (I assume we are talking about B2B selling):

Product *   - Simple ................. Complex
Sales Cycle - Short .................. Long
Aggression  - Account management ..... New Business

* By product I mean the whole offering, including the customer complexity.

2. Where those sales people fit into the business system.

Having said that, Bill H has a persuasive paper which suggests that the operational measure is the sales funnel. The funnel varies depending on the sales role. Ultimately the performance measure has to be Throughput. The questions:
- how much T
- when do we measure it
- how do we measure it (in the sense of list price vs sales price)
- who did we receive T from
- how do we measure lost T
- how much of it goes to the sales force
- when do we pay for it (i.e. when do we count revenue: contract .... payment ... end of warranty period)

---

Date: Sat, 28 Oct 2000 14:42:04 -0400
From: Tony Rizzo
> in the list you mentioned that J. Caspari showed you some NBR's
> regarding use of NP as measurement for compensation.
Here they are: "What is the effect of having a worker rewarded on the basis of increased throughput while his boss is rewarded on the basis of reduced cost." "What happens in your proposed Throughput reward system when a large necessary condition expenditure is required (major jumps in health care, oil embargos, lawsuit judgments, etc.)."

---

From: Billcrs@aol.com
Date: Tue, 17 Oct 2000 13:15:34 EDT

Since you (Tony) are defining a Type-B organization as one that develops the products before they have been sold, I can only assume that a Type-A is an organization for which the Offering is sold and then developed for the specific needs of the customer. Is that right? You appear to be asking one primary question right now, and that is, "Operational measurements are the real-time indicators that the sales people can use to continually optimize system performance. What should they be?" Peter was correct when he mentioned my Sales Funnel as the tool, and you have asked for clarity on that concept. Here it is for business to business sales, regardless of whether the organization is Type-A or Type-B. We need to determine 3 performance measures before we can determine the one operational measure. The 3 performance measures are: (1) The desired rate of sales per month, (2) The expected Win Ratio, which is the number of proposals won divided by the number of proposals submitted, and (3) The average length of the winning sales cycle. The first two performance measures are usually known and the third can be estimated. The Sales Funnel includes the normal steps that most winning sales go through on their path to a win. If you like, we could call it a PreRequisite Tree (PRT) because it is a list of a few Intermediate Objectives (IO) that generally must be achieved to win.
While there are usually many activities that must occur in order to win a sale, any business to business sales process can be boiled down to a relatively small number of IOs that can be the basis for tracking progress through the Sales Funnel to a win. A generic Sales Funnel would have the following IOs: (1) A location where the Offering will be implemented and used has been identified within the customer's business, (2) The primary desired operating result has been learned (either an increase in T or a reduction in OE), (3) The customer has identified the primary features that are most important in their evaluation of competing Offerings, (4) The first proposal has been submitted, (5) The sale has been won or lost, (6) The customer is consistently producing their desired operating results. Now we are ready for the operational measure of performance. There is a specific dollar value of sales that must be constantly pursued by the sales channel(s). This is the operational measure of sales performance. It is determined by taking the desired monthly rate of sales, dividing it by the Win Ratio, and then multiplying the result by the average length of winning sales cycles. Here are some numbers as an example. If the desired rate of sales is $10,000,000 per month, the Win Ratio is 33%, and the average length of the winning sales cycle is 6 months, then the sales channel needs to be actively pursuing roughly $180,000,000 in potential sales at all times during the year in order to meet their $10,000,000 per month revenue goal. That's ($10,000,000/.33) x 6 months, rounded. The dollar value of the Sales Funnel is the operational measure that salespeople must be focused on when managing their day to day business. But what about the skills of the sales force? How do we account for that? Well, that is the role of the sales manager - to increase the productivity of his or her sales team.
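The funnel arithmetic above is simple enough to script. Here is a minimal sketch; the function and variable names are mine, not from the post, which only defines the arithmetic:

```python
# Sales Funnel operational measure as described in the post:
# funnel size = (desired monthly sales / Win Ratio) * average winning cycle in months.

def required_funnel_size(desired_monthly_sales: float,
                         win_ratio: float,
                         avg_cycle_months: float) -> float:
    """Dollar value of potential sales the channel must pursue at all times."""
    if not 0 < win_ratio <= 1:
        raise ValueError("win_ratio must be a fraction between 0 and 1")
    return (desired_monthly_sales / win_ratio) * avg_cycle_months

# The worked example: $10M/month goal, 33% Win Ratio, 6-month winning cycle.
size = required_funnel_size(10_000_000, 0.33, 6)
print(f"${size:,.0f}")  # about $181,818,182, which the post rounds to $180M
```

With a Win Ratio of exactly one in three the figure is exactly $180,000,000; using .33 gives the slightly larger number above.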
And the measure of the skill of a salesperson or a sales team is the 3 performance measures that determine the size of the Sales Funnel. As sales skill increases, the Win Ratio increases, and/or the average length of the winning sales cycle reduces, and/or the average monthly rate of sales increases! In other words, as sales skill increases, the operational measure of Sales Funnel size can be reduced over time (just like a buffer in a plant where variability has been reduced), or it can be held constant to ensure monthly revenue increases over time. Now my impression is that you were positioning this exercise as one for the sale of new products being launched into the market. If that is the case, then we would simply build a new product Sales Funnel. I'm inclined to want the dollar value to be expressed as Throughput rather than sales revenue or gross margin, but ultimately that becomes the choice of the customer. If they are a TOC company, presumably they would be using Throughput. For now I'm going to pass on the compensation discussion because you only wanted to deal with the operational measure first. But if you want to talk about compensation, just let me know when you're ready, because it is obviously an important part of this subject. Interestingly, after all of this discussion I will claim that we are focusing on the wrong department. Oh, it's not that the sales department isn't crucial to success. It's just that the marketing strategy that resulted in the new products being developed in the first place will have a much bigger impact on the success of the business and the sales channels than simply focusing on how to directly manage the sales channels. So if marketing or the product manager really wants their new product to sell, they are going to have to provide direction for the activities of the salespeople. If they do it right, even mediocre salespeople will be highly productive.
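To make the buffer analogy concrete, here is a hypothetical before/after comparison using the funnel formula from the post; the "improved" Win Ratio and cycle length are invented for illustration:

```python
# As sales skill improves (higher Win Ratio, shorter winning cycle),
# the funnel needed to hit the same monthly goal shrinks -- like reducing
# a plant buffer after variability drops. Numbers below are hypothetical.

def required_funnel_size(monthly_sales, win_ratio, cycle_months):
    return (monthly_sales / win_ratio) * cycle_months

baseline = required_funnel_size(10_000_000, 0.33, 6)  # less-skilled channel
improved = required_funnel_size(10_000_000, 0.50, 4)  # more-skilled channel

print(f"baseline funnel: ${baseline:,.0f}")
print(f"improved funnel: ${improved:,.0f}")  # $80,000,000 -- under half the baseline
```

The same skill gains could instead be banked as revenue growth by holding the funnel size constant, which is the choice the post describes.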
There are five questions that need to be answered by marketing BEFORE the product is introduced to the sales force to be launched into the market. Each one provides answers to questions that the salespeople are or should be asking. There's a separate file with these questions, but it won't fit so I'll just tell you the salespeople need to know the following: (1) The market segments the new Offering is targeting, (2) Clear criteria on which accounts they should focus on first to sell this new Offering, (3) The actual operating results (how much T or OE) currently being produced by these targeted accounts that should improve, (4) The expected improvement in these accounts' operating results once they buy and implement the new Offering, (5) How frequently they need to be at these accounts to have an overwhelming edge against the targeted competitors. The product manager who provides these answers will find his or her new Offering selling at a greater rate than expected. The product manager who cannot provide these answers will tend to blame the sales channels for their lack of success! There are three components to corporate strength and an edge in any of them will impact the Sales Funnel operational measurement. They are Market Segment share, Offerings that produce better operating results, and Touches (contacts on accounts within the market segments). Focus the Touches of the sales channel on accounts where we have an edge in account share and/or an Offering that will clearly produce more T or less OE and I guarantee their Win Ratio will be higher, the average length of the winning sales cycle will be shorter, and the monthly rate of sales won will be higher. This means the Sales Funnel could be continuously reduced in size to meet the desired rate of monthly sales, but in reality what we would likely want is increasing monthly sales up to the point where we drive the constraint temporarily back into the business. 
A superior marketing strategy will ALWAYS have a more dramatic impact on sales productivity than sales activity, sales skill, or even sales compensation. Tactics are important, but the strategy is more important. Therefore, it is a marketing responsibility to provide the answers to the 5 marketing questions. If they cannot or will not do it, then they will have no one to blame but themselves. It is only after the 5 questions are answered that they should get down to the sales support part of their responsibilities, which is marketing communications to cause customers to call or want to talk to our salespeople. And finally, you made a comment about Type-A and Type-B organizations being different systems. I won't argue that, but what I do argue is that you should determine the goal of the system before you determine the system, because the goal may change the boundaries of the system necessary to produce it. For example, a better goal for a business that sells to businesses is to improve a core operating result of their customers' businesses rather than just to increase their own Throughput. But that goal usually requires parts of the customers' organizations to be a part of the system to produce that improvement. So an important part of the new product launch and the marketing communications to support it is a clear statement of our goal for that market segment. Is it to reduce the cash cost to produce a barrel of oil to below $5.00 - how will this new Offering help? Is it to reduce customers' product losses to under .5% - how will this new Offering help? Developing and driving a goal of improved operating results for customers and market segments is a crucial component of a successful marketing strategy and will go a long way toward developing superior Offerings and causing superior sales productivity. Looking at the size of this post I'm wondering whether or not it will go through even without the attachment! Let's see.
Bill Hodgdon

---

From: Billcrs@aol.com
Date: Tue, 17 Oct 2000 13:57:21 EDT

It is certainly true that customers know the quality of salespeople, and it's interesting how infrequently they are asked for their opinion of salespeople - or consultants, for that matter! It is also true that a reduction in account share (your leaky bucket analogy) may be an indication of a deteriorating relationship with the salesperson or the company behind the salesperson. But a decline in account share isn't necessarily (and in fact usually is not) an indicator of an ineffective salesperson. A decline in account share is usually the result of a competitor's salesperson either being there more frequently, or selling an Offering that is perceived to produce a better business result than the salesperson's Offering, or simply being more skillful. To guard against all of these possible outcomes, the superior sales manager has certain minimum expectations of their salespeople. They are expected to spend a majority of their time at a relatively small number of key accounts, they are expected to have relationships with people from many different departments, and they are expected to have learned a certain minimum amount of information from these people. That minimum information is the following: Who are these people - meaning what is their position/title in the company? What are their personal measures of business performance? What are their two most important business objectives for this year? And what business results are their departments producing today that are impacted by the salesperson's products and services? A skillful salesperson and/or one with great relationships will be able to get this information. Salespeople that cannot get this information need help from the sales manager.
I can guarantee that the salesperson who knows this information about the key people in the key departments will not have a "leaky bucket" problem, as long as the products and services they are selling are perceived by the account to be relatively equal with competitive Offerings. However, a skilled salesperson can have all of this information and still lose account share to a competitor with a superior strategy that comes in with Offerings that produce better operating results. That's why in the previous post I said that marketing strategy will always be much more important than sales tactics. But since this is a sales tactics conversation, I would simply ask the salesperson what business results their existing satisfied customers are producing in part due to the salesperson's products and services. If they do not know, then help them to set a goal to find out. If I didn't want to ask the various people within the customer account directly, then I would gauge the customer's perspective by the salesperson's ability to get this information.

Bill Hodgdon

---

From: Billcrs@aol.com
Date: Wed, 18 Oct 2000 10:42:27 EDT

In a message dated 10/18/2000 9:00:30 AM Eastern Daylight Time, TOCguy@PDInstitute.com writes:

<< OK. We can use the chain analogy, if you like, even though I prefer the Information Cycle model. Yes, T is generated at the end of the chain. But any one link in the chain can take actions that damage T. In many product development organizations, it has been the people in sales that have either choked the system (Type-A) or robbed the system of the value that it might have derived from sales (Type-B via discounts). So, the real question isn't how do we measure the contributions of the various links of the chain. The real question is, how do we measure people in each link so as to motivate them to optimize Throughput for the entire chain?
By the way, attempting to measure the contribution of any individual link is a completely pointless and counterproductive exercise. Recall Ackoff's definition of a system. A system is defined not by its parts but by the interactions between its parts. Interactions cannot be attributed to separate links. They can be attributed only to the combined effort of multiple links. Consequently, it is mathematically impossible to attribute any portion of Throughput to any individual link in this chain. What does this say? It says that individual performance measurements are the wrong things to use. This is not the same as saying that we should not reward individuals. We must reward individuals. But to reward them on the basis of some so-called measure of their individual contribution is ludicrous. That's impossible.

Tony Rizzo >>

I cannot believe what I'm reading! You're telling us that the failure of new products to sell is the fault of the Sales department!? If so, you are sadly mistaken. Like it or not, Tony, in business to business selling it will be the sales channels that will drive sales growth. That means that Marketing and the product development organization must SELL the sales channels on the value of the new product. If sales isn't selling it, then the Marketing department failed - period! It appears that what you want is some sort of measure that will get salespeople to behave "properly". Sales will generally sell those products that are the easiest to sell and/or that make them the most money. Marketing's responsibility is to make the Offerings easy to sell. If they have chosen the right market segments and developed the right Offerings, I can guarantee you Sales will sell them, because they will be relatively easy to sell.
Now, I don't argue that businesses may have the wrong sales channels for some segments, or the wrong salespeople in some areas, or even the wrong compensation program, so in that sense sales can get in the way. But the failure of a new product to sell cannot be laid at the door of Sales. It must be laid at the door of Marketing. But since you were the one that raised this question, "how do we measure people in each link so as to motivate them to optimize Throughput for the entire chain?", I would like to know what measures the product development people can show that ensure the new product will sell. What are the correct measures of the product development department's work that show that they have developed a good product? Measures that are valid BEFORE the product has been launched. And just so you don't fall into this trap, a feature-to-feature comparison is not the correct answer. The measure of a better product, in the business to business selling environment, is not better features; it is something else. I've seen products that were more durable, more accurate, more aesthetically pleasing, and had a whole bunch of other "superior" features including faster delivery, that failed to sell. Better features do not necessarily mean the product will sell. So what measures do you offer for product development? Another area of interest to me is this statement: "By the way, attempting to measure the contribution of any individual link is a completely pointless and counterproductive exercise." If it is pointless, then why do TOC companies do it? In "The Goal" the system was defined as the plant that produced products that were sold. Isn't that measuring just one link in the chain? Don't we define the end of the product development system as when the product is launched, or some minimum level of sales in some timeframe? Isn't that just measuring one link in the chain? Don't we measure Sales on products and services that have been sold? Isn't that just measuring one link in the chain?
If it is counterproductive, then why is it working so well in those companies who are successful? If we don't have valid measures of contribution, then how do the effective managers rise to the top where they belong? The Business Development System begins with marketing strategy that answers two questions: What market segments to serve? and What Offerings to serve those segments with? Answering these questions provides direction to Sales on where to focus their efforts to win sales, and it provides direction to Manufacturing on what they need to do to prepare for the sales that will be won. Given the strategic direction of the business, the Sales department goes out and sells the Offerings given to them by Marketing to sell. The orders generated by the Sales channels drive the work of Manufacturing. And if the goal of the Business Development System is to improve core operating results of market segments (rather than just maximize Throughput), then once Manufacturing has produced the Offering there is a group, which I will just broadly label Service, that ensures the customers who receive the Offerings are consistently producing their desired operating results. There is nothing wrong with each of these broad departments having measures of performance. The only question is what those measures should be. We know what the correct performance and operational measures are for Manufacturing. In a previous post I've presented what I believe to be the correct performance and operational measures for Sales (I've not yet heard commentary, but hopefully we will hear what you and others think about them in time). So the questions that remain are: What should the performance and operational measures for Marketing be? Or should Marketing even have performance and operational measures? And finally, I come back to your second sentence, which reads, "But any one link in the chain can take actions that damage T".
While this is a true statement, there is something you and others need to understand. The farther upstream we go in the Business Development System, the greater the potential damage to long term T of a mistaken action. A single Manufacturing mistake does little to no damage to long term T, and a single Sales mistake does little to no damage to long term T, but a single Marketing mistake can have a huge impact on long term T. Develop the wrong products, target the wrong market segments, lead with the wrong pricing, or make some other mistake, and the most productive Sales and Manufacturing departments in the world will have trouble compensating for it. If you want to have a small impact on the Business Development System, focus on Sales and/or Manufacturing. If you want to have a big impact on the Business Development System, focus on Marketing - that's where the big money is made!

---

From: Billcrs@aol.com
Date: Wed, 18 Oct 2000 21:46:50 EDT
Subject: [cmsig] Re: Measurements for/of sales people. (A test)

OK, lots to deal with here so I'll just insert my comments under yours.

In a message dated 10/18/2000 12:16:59 PM Eastern Daylight Time, TOCguy@PDInstitute.com writes:

<< > Who makes these decisions? Marketing recommends and senior management
> approves the market segments to serve and the Offerings to serve those market
> segments with. The responsibility of Sales is to sell what Marketing has
> given them to sell. Within the constraints of their existing product line
> they can sell whatever they want to sell - period! The limits to their
> decision making are which of their current products and/or services to sell
> and to which accounts.

Ah! Here we begin to part company. It can be rather damaging for the sales people to sell whatever they choose, without paying attention to the constraint in production. Also, here the distinction between Type-A systems and Type-B systems becomes vitally important.
In a Type-B system, the sales people must have information about the load on the constraint, if production is constrained internally. In a Type-A system, the sales people must have information about the constraint (Drum) in the development function, since the sales people are selling a development project as well as a finished product. For the sales people to ignore the constraint, in either case, is to invite some rather damaging effects. If the system (Type-A or Type-B) is constrained by the market, then the sales people should sell freely whatever the market is willing to buy, so long as the system can deliver it within the lead time expectations of the customers. However, if the system is constrained internally, as very many Type-A systems are constrained today, then the sales people must give emphasis to optimizing Throughput now, without compromising Throughput in the future.

My comments: This sounds great in theory, but falls apart in the real world. You are not going to be able to rein in sales so easily. They are paid to sell and by God they will sell. You can give them longer lead times which they must comply with, but you cannot jerk them around with different pay schemes every time you want them to do something different, and you cannot jerk them around by not allowing them to sell certain products. There's no doubt that the internal constraint is a crucial component of maximizing short term T, but don't expect Sales to be very sympathetic or helpful when you have an internal constraint. It's just not in their nature. Remember, some of the very qualities that make them good salespeople make them awfully difficult to deal with when internal people screw up! And believe me, an internal constraint is viewed as a screw up by Sales! What does this mean? Internal people need indicators that the constraint is moving inside. The indicator I offer is the Sales Funnel operational measure of sales performance. This was explained in an earlier post.
It can give early warning signals that orders are increasing at a rate that will outstrip the capacity of the constraint within a certain period of time. The minimum warning it gives is the average length of the winning sales cycle, although they should be able to figure it out before that occurs. I don't know if there is an accepted amount of excess capacity in the constraint within the TOC community, but presumably at some point around 80% full we have to be looking for additional capacity, to try to avoid the kinds of problems that occur if the constraint moves inside.

> The responsibility of Manufacturing is to produce what Sales has sold.
> Within the constraints of existing orders in hand and the delivery
> commitments that have been made they can produce in any order they deem
> effective - period. In other words, they do not or should not have the right
> to make a decision to make or not make something that has already been sold.

Again, we don't see eye to eye here. The responsibility of manufacturing and sales, together, is to maximize Throughput for the business. They are not likely to do this, if they attempt to operate in isolation from each other. They are much more likely to optimize Throughput, if they work together and create positive interactions. This, of course, requires that they exchange the right information at the right time.

My comments: Once again I will agree with you in theory, but in practice sales is generally forced to sell what the market wants. They tend to have certain customers that account for the majority of their sales, and they may have been working on those customers for months to make a particular sale. They cannot, at the last minute, tell this important customer that they can no longer sell this product right now. It is highly irritating to the customer and to the salesperson. Manufacturing understands that they cannot increase capacity overnight. Well, guess what, Sales cannot change what they are selling overnight either.
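The early-warning idea in this exchange can be sketched as a simple check. The capacity figure, the function name, and the example numbers below are my assumptions; the thread itself only suggests "around 80% full" as the trigger:

```python
# Hypothetical early-warning check: flag when projected orders would load
# the internal constraint past a utilization threshold (~80% per the post),
# signaling it is time to look for additional capacity before the
# constraint moves inside.

def constraint_overload_warning(projected_monthly_orders: float,
                                constraint_monthly_capacity: float,
                                threshold: float = 0.80) -> bool:
    """True when projected load reaches the threshold share of capacity."""
    return projected_monthly_orders / constraint_monthly_capacity >= threshold

# Invented example figures: $8.5M of projected orders, $10M of capacity.
print(constraint_overload_warning(8_500_000, 10_000_000))  # True
```

In practice the projected order figure would come from the Sales Funnel itself, discounted by the Win Ratio, which is what makes the funnel useful as a leading indicator.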
In fact, a good rule of thumb might be (I've not checked it out, but I like it because it rings true from experience) that the amount of time it takes to increase capacity probably equals the amount of time it takes for Sales to change what they are selling. And if this were really true, then there would be no need for Sales to change, because Manufacturing would have shifted the constraint back to the market by the time Sales had made the necessary changes!

> The work of Manufacturing is driven by Sales...

Yes, but the sales function should not operate without knowledge of manufacturing.

> and the work of Sales is driven by the current needs of customers.

Yes, but the manufacturing function should be taken into account!

My comments: Tony, you have got to understand the world of generating sales. In the business to business sale they cannot change what they have been doing to generate sales very quickly. Manufacturing simply cannot be taken into account, because the salesperson often does not know if they are going to win the sale today, next month, next quarter, or next year! So they will continue to sell, because they do not have much control over when the customer says yes. But if they do stop selling they know they will likely lose the sale no matter what. This they cannot afford to do. Manufacturing has got to be organized to deal with short term capacity problems without relying on Sales to help. I'm sorry, but that is just the nature of business to business selling. Perhaps we can figure out how they can help, but there are many potential long term negative implications to this approach.

> But for the long term, the work of Sales
> will be driven by Marketing whose job it is to position the company for
> success in the future so that when the future arrives Sales will have the
> right Offerings and Manufacturing will have the right resources so that the
> company continues to grow successfully.
In the long term, the work of the entire system is driven by marketing, which determines what comes into the pipeline.

> So obviously, in my opinion discussions about Manufacturing versus Sales are
> fruitless discussions because they are way down stream in the Business
> Development Process. All these departments can do is respond within the
> constraints of their responsibilities. We've got to go upstream to the
> business strategy to really have a proactive impact on the future of the
> business.

Yes, the future of a business is determined by the marketing step. But the downstream functions can do a great deal to screw up all that marketing tried to achieve with past marketing efforts. Strategy is clearly important. But, as one other individual said in a much earlier message, operational excellence is a requirement for the successful implementation of any strategy. Very many corporations today are screwing up royally in the operations arena.

My comments: That posting was dealing specifically with the execution of the strategy. This does not necessarily mean that Sales and Operations were not operationally excellent. In fact they probably were under the old strategy. It means the CEO was unable to cause the necessary changes within the business to quickly implement the strategy. These screw ups are usually not the result of operational incompetence; they are the result of management or leadership incompetence. There is a big difference.

Bill Hodgdon

---

From: Billcrs@aol.com
Date: Thu, 26 Oct 2000 18:02:52 EDT

Geez! There's a lot of stuff here. I didn't realize the posting was so long. Let me just answer your first question, Peter. Most senior sales executives (the people who have the power to develop and/or change the comp plan) are trying to manage sales behavior with the comp plan. That's the number one mistake made with comp plans, as noted in my original comments.
The purpose of the comp plan is to pay for efficient (Sales Activity) and effective (Sales Skill) sales behavior, not to manage it. If they could just realize that one simple fact, comp plans would get a lot better. Many senior sales executives (or more likely their bosses) feel the sales force, or at least some salespeople, are making too much money, so they try to minimize the payout. Any change to a comp plan is scrutinized for this, and if it is the true reason for the changes it is discovered almost immediately. Poof goes any trust that may have been there! Since I claim there is no "right" comp plan and most senior managers have not thought deeply about the purpose of comp plans versus the purpose of management, there is a tendency to revert back to what people know (others have done it this way so we will too). That's why I believe comp plans continue to be written as they are. If you study an industry you will tend to find that most of the competing sales forces have similar compensation plans. One reason is that the best salespeople go where the best money is, so when one competitor comes up with something salespeople like, it tends eventually to be copied by the others. Just like in new product development! And hey, by the way, I'm very OK with the Trees as long as they are developed by people with excellent insight into the subject. Although admittedly I continue to struggle somewhat with Clouds. But if we can develop Trees that are a roadmap to building trust between any people or any groups within an organization, and they can be used to actually cause an increase in trust, then it would be a great accomplishment. However, an objective definition of trust and the process to achieve it seems mighty ambitious to me. We all know it when we feel it and we can even describe it, but the process of getting there is tough, to say the least, in light of the incredible opportunity for any one event over the course of years to make it evaporate immediately.
--- In a message dated 10/25/2000 6:37:19 PM Eastern Daylight Time, peterrevans@optushome.com.au writes: << Bill sets out all the problems with sales comp plans. Sounds like one of my discussions with sales managers. All of these are acknowledged by sales managers in discussion, and yet they continue to write the comp plans they do. Why? Perhaps we can write a PRT for this (I know you do not like them, Bill). But first the clouds (sorry Bill). We can extract the UDE stories from Bill's message. 1. Sales comp plans are often way too complex. Different commissions on different products sold to different accounts or different markets, etc. And the more complex they are, the more likely they are to be changed all too frequently, thus further confusing the salespeople. 2. Management tries to use the compensation plan to manage the behavior of salespeople. Guess what? That's the role of management, NOT the role of the compensation plan. Remember, the purpose of the plan is to pay the right amount of money to the right people in the right timeframe. A reward, if you will, for having performed the right behaviors skillfully. I would bet money that the majority of the discussion around this issue in this forum is about how to use a compensation plan to manage sales behavior, and that is simply the wrong thing to be talking about. 3. In the world of business to business selling, straight commission always has been and always will be a mistake regardless of whether the commission is paid on revenue, gross profit, operating profit, or Throughput generated. Why? Because straight commission encourages lots of Sales Activity, and in 13 years of analyzing businesses I've never once seen the best salespeople making the most calls. It is Sales Skill that is the key to Sales Productivity increases in the business to business sales force, and the compensation plan won't help here.
It will be up to management to help here, and that is the only role of sales management - to increase the productivity of their sales team. 4. The bottom line, folks, is that there is no "right" compensation plan for any sales force. I've seen dozens of them and there are clear and obvious flaws in every single one. The flaws are there because there are always inequities over the course of a year in a territory or a business that have a big impact on the sales numbers and potential sales compensation. (See below for examples) 5. Annual changes that appear to be arbitrary to the sales force (thus reducing trust) are also bad. 6. [the following happens infrequently] So with regard to the compensation issue there is only one thing that must be in place for the plan (virtually any plan) to work. TRUST! Management and salespeople must trust that they are EACH working for the LONG TERM interest of each other. Since compensation plans will always have inequities due to the nature of selling, there must be trust that, in the long run, the salespeople will be made whole and the business will be made whole. 7. I don't believe there is such a thing as a best plan. [Is this the FRT objective?] It has to be malleable to change with the changing times, and without the trust component all plans will eventually fail under the weight of these changes. !!!!!!!!!!!!!!!!!!!!!! Now, who is going to pick three clouds to do the analysis and give us the CCRT? Bill has done the first bit. !!!!!!!!!!!!!!!!! Bill has given us some PRT Intermediate Objectives: The plans I've seen that seem to work best have a base salary component based on the salesperson's past experience and performance, a bonus component based on how the business performs, and a personal component based on how the salesperson performs.
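The three-component structure Bill describes (a base salary, a bonus on business performance, and a personal component) can be sketched as a simple calculation. This is only an illustration of the structure; the function name, weights, and dollar figures below are hypothetical assumptions, not numbers from any of these posts.

```python
# Illustrative sketch of the three-component comp plan described above:
# base salary + a bonus tied to business performance + a personal component.
# All figures and targets here are hypothetical.

def total_comp(base_salary: float,
               business_result_pct: float,   # business performance vs. plan, e.g. 1.05 = 105%
               personal_result_pct: float,   # salesperson performance vs. personal goals
               business_bonus_target: float,
               personal_bonus_target: float) -> float:
    """Pay for performance; the plan rewards results rather than trying to manage behavior."""
    business_bonus = business_bonus_target * business_result_pct
    personal_bonus = personal_bonus_target * personal_result_pct
    return base_salary + business_bonus + personal_bonus

pay = total_comp(base_salary=60_000,
                 business_result_pct=1.10,   # business beat plan by 10%
                 personal_result_pct=0.95,   # salesperson slightly under goal
                 business_bonus_target=15_000,
                 personal_bonus_target=25_000)
# pay is approximately 100,250 under these assumed figures
```

The point of the split is visible in the structure itself: the business component ties the salesperson to company results, while the personal component rewards individual skill, with the base salary absorbing the territory inequities Bill mentions.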
I also have recent experience with one that had a very interesting component to encourage growth (the same sales this year would generate a lower amount of compensation this year than it generated last year). [and there is the trust building set out in 6 above] !!!!!!!!!!! Will someone jump right in and do the PRT? !!!!!!!!!! Do we need an FRT? Peter >> --- From: Billcrs@aol.com Date: Mon, 12 Feb 2001 13:46:46 EST The steps you have outlined above should be done, and you can obviously start wherever you want, but, in general, you should not start with what they cannot do. Start with what they can do and learn the results that are being produced today by doing their work in that way. Then you can move to learning the UDE's or problems they feel they have by doing their work in that way. Knowing the UDE's or problems is usually not nearly enough, though. Here is some basic learning that your "go to market" strategy should capture and document BEFORE the launch. How does the market do their work today because they do not have your solution? This should be a flow chart of what exactly is done at each step and measures of performance at each step and, most importantly, the system measure of performance that the market tends to use. What problems (UDEs) are they experiencing because of their current method of doing their work? What are the actual business results being produced that will improve once they buy and implement your solution? Specifically, I'm looking for a system measure of performance. Knowing customer problems is easy; knowing current business results is much more difficult. If your Offering increases throughput, you should first know what the actual throughput is today. If your Offering reduces rejects, you should first know what the actual rejects are today. If your Offering reduces travel time, you should first know the current travel time.
In other words, for each UDE or problem the market is experiencing, your marketing strategy should be identifying the numbers that are the measure of the problems. But most importantly, your Offering should be improving a system measure of performance that is important to the targeted market segment. What improvement can the market segment expect to see once they buy and implement your solution? These are the market segment's desired business results or expected business results. How can you learn this with a new product? Beta Sites. Unfortunately, most new product development groups think the purpose of a Beta Site is to prove that the solution will work the way it is supposed to work. But that is only one purpose and, since they often do not realize this, they forget to do two things during the Beta testing. First, they do not document the actual business results being produced before the Beta Site begins. Second, once the solution finally works the way it is supposed to work, they do not document the actual improvement in business results being produced. The above is obviously not everything that must be learned, but it is the learning that most new product development organizations do not have enough of. They do not have the numbers that are the measure of their targeted customer's current business results and desired (or expected) business results. If you do not have this information, then learning it will, almost certainly, change some of the software features and corporate capabilities you may choose to offer the market. So what are you willing or able to share about the targeted markets for your software?
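The two measurements Bill says Beta Sites usually skip - documenting the system measure of performance before the Beta begins, and again once the solution works as intended - can be captured in a minimal record. A sketch, with hypothetical field names and figures:

```python
# Minimal sketch of the two measurements described above:
# document the system measure of performance BEFORE the Beta Site
# begins, and AFTER the solution works as intended.
# The field names and figures are hypothetical.
from dataclasses import dataclass

@dataclass
class BetaSiteResult:
    measure: str          # the system measure of performance, e.g. throughput
    baseline: float       # documented before the Beta Site begins
    after: float          # documented once the solution works as intended

    def improvement_pct(self) -> float:
        """The actual improvement in business results, as a percentage."""
        return (self.after - self.baseline) / self.baseline * 100

beta = BetaSiteResult(measure="weekly throughput (units)",
                      baseline=1_000.0, after=1_150.0)
# beta.improvement_pct() -> 15.0
```

Without the `baseline` field there is nothing to compute the improvement against - which is exactly the omission the post is pointing at.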
+( scheduling see Haystack Syndrome page 180 ff, summary in 184 a good schedule must be - realistic and achievable - immunize against variation (Murphy) - support the goal of the system +( Self Learning Kit = Satellite Program Date: Tue, 11 Dec 2001 15:01:18 +0100 From: "Hans-Peter Staber" Subject: Fwd: GMG - Registration Info Date: Tue, 11 Dec 2001 08:47:44 -0500 (EST) To: hpstaber@fciconnect.com Subject: GMG - Registration Info From: webmaster@eligoldratt.com Dear Staber: Thank you for registering in GMG's Website. Below you will find your registration information. For any modifications please go to: http://www.eligoldratt.com/newdesign/update.php3. ------------------------------ Name: Staber Lastname: Hans Peter Login name: FCIAUSTRIA Password: winter01 E-mail: hpstaber@fciconnect.com Address: Salzburger Strasse 4 City: Mattighofen Country: Austria Office Phone: +43 7742 4851 Do you want to receive our Newsletter?: NO ------------------------------ Goldratt's Marketing Group http://www.eligoldratt.com +( selling From: "bhodgdon836" Date: Thu, 09 Mar 2006 13:35:53 -0000 Subject: [tocleaders] Re: FW: Stop talking and listen Sorry Jack, To the professional sales person working for a manufacturer who sells to businesses, this is of no value whatsoever. This is a truism (it is so obvious that if you say it, people in the business won't respect it). I've been reading paragraphs like the following for twenty years. "To me the best approach to take is not one of selling but one of educating and providing solutions for expressed problems or challenges.
It is amazing the things you can learn if you just take the time to ask some questions. Remember, everyone is tuned into the same radio station, WIIFM (what's in it for me); if you can relate your pitch to what is important to your customer, your results will increase dramatically." While some of the above is true, it is the kind of thing that brand-new sales people need to hear in context, and that's about it. Obviously you have to ask questions, but look at some of the key words in this guy's statement: educate, provide solutions, pitch, and your own results (rather than the customer's)! These are the words of a presenter - not a learner. Everyone knows sales people are supposed to be selling solutions to problems. Thus, everyone believes that you have to learn about customers' problems, and virtually all producing sales people ALREADY DO THIS. The tactical mistake that almost all sales people make, when face to face with a customer, is assuming that learning about a problem is enough. IT IS NOT! They have to go deeper and learn the measure of the problem, the other problems that are being caused, the measures of those problems, and where those measures need to be in the future for the customer to be happy. Not one sales person in a hundred has that information about any active sales opportunity. IMHO this is the problem with the CRT Phase of developing a URO - it lists UDEs or problems but usually does not focus them on any particular segment of the market and rarely includes numbers that are the measure of the UDE in the mind of the market. "Stop talking and listen" is not enough advice because it does not explicitly state what to learn. There are many things you can learn about an account and still not increase your chance of winning the sale.
Thus, you give sales people specific learning objectives that do increase it, and they are: the results the account is producing today that they want to improve, where those results need to be in the future for the account to be happy, and the features the account believes will produce the best results. Learn this information from the right people (different people will have different answers because they have different measures of performance), document it in the cover letter of the proposal to the customer, and your sales will increase - period! So my obvious conclusion is that it is not an apt description for figuring out how to sell unless you are selling door to door or consumer to consumer. Sorry about that, Jack, but it is important for TOC people to understand the difference between sales basics and real professional selling skills. If not, they will deliver training that turns sales people off rather than turning them on. --- From: "bhodgdon836" Subject: [tocleaders] Re: Variability Reduction with ToC? How? --- In tocleaders@yahoogroups.com, "Richard" wrote: > "bhodgdon836" wrote: > The purpose of this project is to reduce process variability within > your starch ......snip... >... your starch holds an average of 4% water content. The > allowance is 6%. At your current volumes, each 1/2% increase in water > content adds $100,000 of annual revenue for starch produced. Your > goal is to increase the water content to an average of 5%, through > superior process control, thus generating additional revenues of > $200,000 over the next year. > > [REZ] And you do this with ToC? How? Can you share the direction of your > solution? [I ask because this is a textbook example of a Six Sigma type of > project for variability reduction.] What tools and techniques would you > apply here? > Regards, Richard Zultner Sorry, Richard. This had nothing to do with TOC.
It was presented in response to a request from this list for an example of a cover letter documenting the customer's desired business results that had been learned during the pursuit of the sale. The interesting thing about this case was that my client was focusing primarily on variability reduction. They weren't going the extra step to ask about the impact on the customer's business results of solving the variability problem. They just assumed the customer would do that themselves. That is the common problem with many sales pursuits. They assume if the customer says they have a big problem and want to solve it, then that is enough to win. I'm just telling the salespeople to learn the business case for spending the money BEFORE proposing their solution. This forces the learning about current and desired results and can have a big impact on the actual price quoted to the customer. It's the same with developing and launching a new offering - be it a new product, a new service, or a URO. What segment of the market will we focus on winning FIRST to prove the offering will sell? Find the ideal applications and then see how quickly salespeople can succeed. If they have trouble selling the new offering in the easiest accounts, there is no way they are going to sell it to the rest of the market. That gives insight into problems before it's too late. The reality is that I expect the product development team to have these answers long before the new offering is launched. The questions are what criteria define accounts within the targeted segments, what is the size of the targeted market, what business results are these accounts producing now that they want to improve, what results do they need to produce with the new offering that will make them happy in the future. If my client had learned this information for the process variability offerings, they would have had far more success, far more quickly. 
--- From: "bhodgdon836" Date: Fri, 10 Mar 2006 13:06:52 -0000 Subject: [tocleaders] Re: FW: Stop talking and listen Hi Santiago, Here is an example. This particular proposal was from one of my clients, who manufactures control valves and control systems for processing plants. The plant in question produces starch. Notice in the paragraph below that we are documenting the customer's desired results. The client could not guarantee these results because there were factors that could impact the final results that were out of their control. Many people think that by stating what the customer wants, they are guaranteeing the results, but that is not the case. You are simply documenting what the customer said they wanted, and the balance of the proposal is a description of the features and capabilities the customer stated were most important in their decision on which supplier to choose. This is just a paragraph on the cover letter of the proposal and is my method for proving a cause-and-effect relationship between my training on learning about results and increased sales. The proposals that have this information have a higher Hit Ratio and a higher level of profitability than those that don't. Here's the example: The purpose of this project is to reduce process variability within your starch line. Our understanding is that due to process control problems, your starch holds an average of 4% water content. The allowance is 6%. At your current volumes, each 1/2% increase in water content adds $100,000 of annual revenue for starch produced. Your goal is to increase the water content to an average of 5%, through superior process control, thus generating additional revenues of $200,000 over the next year. The purpose of this proposal is to list the features of our products and the capabilities of our company to show how we can help you produce your desired business results. The goal is for the project to be up and running within three months from receipt of order.
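The arithmetic in that cover-letter paragraph can be checked directly: moving average water content from 4% to the 5% goal is two 1/2% steps at $100,000 each. A small sketch restating the proposal's own numbers:

```python
# Check of the cover letter's numbers: each 1/2% increase in water
# content is worth $100,000/year; moving from the 4% baseline to the
# 5% goal is two such steps.
baseline_pct = 4.0
goal_pct = 5.0
revenue_per_half_point = 100_000
steps = (goal_pct - baseline_pct) / 0.5
additional_revenue = steps * revenue_per_half_point
# additional_revenue -> 200000.0, matching the $200,000 in the proposal
```

This is the "measure of the problem" idea in miniature: the cover letter works because every claim in it reduces to a number the customer supplied.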
--- From: "bhodgdon836" Date: Thu, 09 Mar 2006 16:05:20 -0000 Subject: [tocleaders] Re: The sales area --- In tocleaders@yahoogroups.com, "Justin Roff-Marsh" wrote: > > On my sales process list, I've proposed the following as the core > conflict: > > D: Give Salespeople Autonomy > B: Maximise Conversion Rate > A: Maximise T > C: Exert Management Control > D': Integrate Salespeople into the Team > > The Ballistix method challenges the AB assumption (you maximise T > by maximising the conversion rate). Our injection is that you > maximise T by: maximising opportunity flow; while maintaining > conversion rate within an optimal range. > > The typical approach to salesperson pay (part base; part > commission) is just one example of the inevitable compromise between > D and D'. > > Justin Hi Justin, This is just an open message to the rest of the group to position our relative approaches to maximizing T. I think it is your turn to respond to my last e-mail from a while back, and I'm open to continuing that discussion. However, I now feel it is important for the rest of the group to see where we disagree. I continue to believe it is because we are generally working with different kinds of businesses, which is why you have evolved into your ideas and I have evolved into mine. The key question is which approach is better for the target companies for the TOC community. OK, here goes. Justin's method for increasing "opportunity flow" is to add resources to make the outside salesperson make far more sales calls. I do not agree with this approach for the business to business sales force in general, and specifically for the kinds of companies that TOC people traditionally work with. I'm thinking of manufacturers who make products that are sold to businesses that use those products in their own products (OEMs) or use those products to make their own products or deliver their own services (End Users).
The short term impact of Justin's approach is to minimize the amount of learning a sales person can get by forcing him to make 4 to 5 sales calls per day. The predicted long term impact of his approach for the businesses I described above is to drive off the most skilled sales people. Justin obviously disagrees with these claims, but I believe it is because his clients tend not to be the ones I'm talking about, and for his clients the approach probably works just fine. The ones mentioned on his Web Site are financial services, pharmaceuticals, gym memberships, and real estate developers. These are not TOC target companies. There is no point in going over all of the things we have discussed off line again because it is going to be far too detailed for this list. Suffice it to say that the first thing to look at for any business wanting to grow is its strategy. However, the strategy ultimately gets down to implementation, which is where the sales process discussion becomes relevant. Here is the common Cloud for the companies I'm talking about. D: Reduce number of calls (learning about results is my measure of performing well, and this usually increases the length of sales calls and reduces the number) B: Perform well during the call (matches Justin's B and is one cause of sales productivity) A: Maximize Sales (T if a TOC company) C: Make sales calls (matches Justin's C and is the other cause of sales productivity) D': Increase number of calls Thus, the obvious conflict between D and D'. The problem with this analysis is that I've NEVER met a sales executive who thought their sales people were making enough calls. They ALL think their people should make more calls, and most default to forcing more calls. Unfortunately, none of them have a valid measure of "enough calls" other than their own anecdotal experience. This is why Justin's approach appeals to the typical sales executive.
They look at sales, and if they aren't high enough, they tell their sales people to make more calls. But the sale is the "effect" a productive sales person produces - it is not a cause. And conversion ratio is also an effect, not a cause. Without a measure for each of the two causes, the sales executive doesn't really know why the numbers are off. In my experience, it is almost always because the sales person's territory (and the company's targeted markets) are way too big for the sales person to effectively manage. This is why you must always start by analyzing the business' current strategy rather than the current sales process. Justin begins with the premise that the business needs its sales people to make more calls. I begin with the premise that the business is targeting markets that are too large. I may eventually find that the sales people need to make more calls, but I won't know that until I understand and evaluate the strategy first. Furthermore, the measure of making enough calls is NOT number of calls; it is the dollar value of the opportunities they are pursuing. Neil Rackham, the author of SPIN Selling, wrote another book called Managing Major Sales where he explained the devastating impact of forcing sales people in the business to business sale to indiscriminately make more calls. Justin's ideas of removing activities from the sales person that can be performed by lesser-skilled and lower-paid people are sound. And the effect of this will be to free up more sales time, so the outside sales people will be able to make more calls to pursue more opportunities, and that is good. Most of the manufacturers I work with already have the people that Justin talks about adding, but it is always good to take a look at that. But here's the problem for the TOC community. Justin and I have both thought long and hard and deeply about the sales process. Either one of us can convince almost anyone without lots of sales experience that our respective methods are right.
That doesn't mean that we are right. This is why I always begin by describing the kinds of companies I'm talking about. Nobody's advice is right for every business. Obviously, I believe my advice is the right advice for most manufacturers other than ones selling direct to consumers. Best Regards, Bill Hodgdon --- From: "Justin Roff-Marsh" Date: Wed, 8 Mar 2006 07:43:51 +1000 Subject: RE: [tocleaders] Viable Vision Constraint? This raises some interesting questions: Why is it that salespeople's activities are determined by their 'preference'? Do you allow a process worker's choice of work to process to be determined by his preference? If salespeople can only handle 19 concurrent opportunities, what do they do with the other 99.96% of their capacity? If 19 concurrent opportunities consume 3 appointments each (on average), with a cycle time of 60 work days, then this means salespeople are performing .74 appointments a week (I would expect their capacity to be at least 20). Why is opportunity cycle-time 'many months'? Is it because these companies are selling nuclear submarines? Or is it, perhaps, a function of salespeople's availability? Why do we want salespeople to 'track progress', 'schedule meetings', *and* 'control' the opportunity management process? They are either salespeople or they are clerks, PAs, and master schedulers. The odds of their being good at all these functions are slim. Furthermore, what's the opportunity cost of their performing these non-selling activities? Of the time spent with existing customers, I wonder what percentage is securing orders that couldn't have been secured (in full, or in part) by a telephone-based account-management team? Doesn't sound to me like a training problem. I wonder if others on this list, if confronted with these UDE's in a manufacturing environment, would conclude that process workers need training?
Justin -----Original Message----- From: tocleaders@yahoogroups.com [mailto:tocleaders@yahoogroups.com] On Behalf Of Jim Bowles Sent: Tuesday, 7 March 2006 9:22 PM To: tocleaders@yahoogroups.com Subject: RE: [tocleaders] Viable Vision Constraint? Bill, There is a lot of material to go through to extract the fundamentals of the VV approach. But this is an extract from what Dr Goldratt said in Barcelona. It refers to the implementation of the Zycon template. Most client companies are dealing with 1 or 2 new prospects at a time. The hit ratios are extremely low, much < 5%. To get a good prospect they must have 19 rejections for each qualified prospect. So the sales people prefer to spend their time with existing clients. However, if we go with the Viable Vision market template offers to prospects, then this time 19 say they are interested. Now the sales people are stuck; they have no clue how to manage 19 simultaneous prospects. So they try to deal with the one or two and ignore the rest, or they try to deal with all of them and end up losing them all. So a vital part of the Viable Vision implementation is to build the sales mechanism and processes to enable the sales people to manage, track and control the process of taking a qualified prospect through the several months' process to a sale. We need to help them track the progress, schedule the meetings, know where in the buffer each is, and make sure the follow-up is done promptly. The first buy-in is to the top management about the penalties, the next is to the sales people to even go out with the offer. So we must synchronise the implementation to ensure that operations is ready and confident in delivering not just one express order but up to 1/3 as express. We must hold back the sales people until operations is in place and the sales organisation has been trained both to conduct the offer and to manage a vast increase in prospects – even to plan the roll out to customers.
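The hit-ratio arithmetic Jim cites works out directly: 19 rejections for each qualified prospect means 1 win in 20 contacts, i.e. the 5% ratio he mentions. A one-line sketch of that figure:

```python
# Hit-ratio arithmetic from the extract above: 19 rejections for each
# qualified prospect means 1 success per 20 contacts - and with the
# Viable Vision offer, the same 20 contacts can yield 19 interested
# prospects, which is the overload problem being described.
rejections_per_win = 19
contacts_per_win = rejections_per_win + 1
hit_ratio = 1 / contacts_per_win
# hit_ratio -> 0.05, i.e. 5%
```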
In early rapid response implementations, the protective capacity comes from varying the lead times on the regular orders. But once you move into selling the emergency supplier offer, rapid response may increase to 50% of sales, and they must build protective capacity. Hope this helps to point you in the right direction. Jim Bowles I saw this quote on a Blog at the following site: http://www.agilemanagement.net/Articles/Weblog/Archives/November2005.html He was talking about his view of the TOCICO conference in Barcelona. "The current constraint appears to be the ability of existing sales teams to sell these unrefusable offers (UROs as they are known in TOC-speak). Hence, Goldratt Consulting are currently hiring sales force specialists who teach the clients' sales teams how to sell the offers." Out of curiosity, is this true? If so, does anyone know the logic that drove this conclusion? Thanks, Bill +( shareholder value You Can't Spot Serious Shareholder Value? Check Your Paradigms! By Rudolf G. Burkhard Summary: Executives are under too much pressure to find the time to look for and develop new and better solutions for running their business. They are aware of the need to manage their business as a system but on the whole do not do so, because they are lacking the tools to do so. Goldratt's five focusing steps are a way to supply this missing capability by focusing on the very few constraints any (business) system can have. Policies (the way things are done) are key constraints to better profits and improved SVA, and many need to be changed. Some examples show how policies from the past are blocking businesses from earning much better SVAs. *********************** There is an explanation (attributed to Dilbert) going around the World Wide Web of why top executives make so much money. It begins with two statements that we all know and believe. They are "Time is Money;" and "Knowledge is Power".
The engineering formula that "Power equals Work divided by Time" completes the premises. So, when we substitute Power with Knowledge and Time with Money we get the new equation that "Knowledge equals Work divided by Money". Solving this for Money we get: "Money equals Work divided by Knowledge". So the conclusion is that - as Knowledge approaches zero, Money goes to infinity - explaining executive salaries! Not very flattering if you happen to be a senior manager! However before we dismiss this Dilbert impudence we should consider that behind humour lurk real problems. (Russians are famous for innumerable jokes critical of their political system and politicians and with good reason.) Dilbert often puts his finger on some painful truths, so maybe we should look for the real problem(s) hiding behind this story. Maybe understanding it will lead us to some very significant shareholder value. Over at least the past thirty years there have been a number of studies trying to understand how busy executives manage their jobs. No one was really surprised to learn that most top-level managers address a large number of issues every day, and influence or make many decisions. We also learned that these people do not solve every problem or issue from scratch - they use their vast experience to respond quickly to almost any situation. They use paradigms - guidelines and rules developed from experience. Could this be bad? Relying on paradigms inevitably prevents (or at least slows down) a person from developing and applying new knowledge. Maybe this inertia, sticking to paradigms, is why managers' knowledge seemingly approaches zero over time. Every management problem actually requires two decisions - the decision that solves the problem itself, and the decision of how to solve the problem. Should a manager solve the problem from scratch or use paradigms he is used to and has been successful with? 
This second decision is what interests us here, because it defines whether or not an executive wants to (or can) improve. Should he invest his valuable time in developing better (more competitive) solutions or should he use his paradigms to arrive at a quick solution - and move on? All executives are acutely aware of the need to look for better and better solutions. If they do not, they are putting their business in danger. If they do look for better solutions they could also be putting their business at risk, because some important problems or issues might not get addressed or should not be entrusted to others. Executives always face this contradiction. Should they look for and try new and better ways, or should they stick to what they know? The dilemma is - to question, or not to question (paradigms). Niccolò Machiavelli describes it very well in 'The Prince': "It follows that an acceleration in the rate of change will result in an increasing need for reorganisation. Reorganisation is usually feared, because it means disturbance of the status quo, a threat to people's vested interests in their jobs, and an upset to established ways of doing things. For these reasons, needed reorganisation is often deferred, with a resulting loss in effectiveness and an increase in costs." Reorganising our paradigms, our thinking, is no different from reorganising an organisation and just as difficult to achieve. E.M. Goldratt claims that every contradiction or dilemma can be 'evaporated' - that there can be no contradiction in reality. Just as in the physical sciences, we should examine our contradictions to find the flaw that will eliminate them. He demonstrates that behind all conflicts, dilemmas and contradictions there is a series of assumptions underpinning both sides. All it takes is to find an invalid assumption, or have the ability to invalidate an assumption, and the contradiction disappears - it "evaporates".
In the above dilemma executives assume (implicitly) that their business is very complex and only they have the 'big picture'. So only they have the vision and capability to attend to the many different problems facing their business. Goldratt contends that businesses are really quite simple. There are, in fact, almost never more than one or two things blocking a business from achieving more of its goal - the goal to make (more) money (SVA) now and in the future. This claim, if true, greatly simplifies an executive's job. Suddenly he can focus a high percentage of his attention on just two things. What a revelation; what a relief; and what a simplification. Now he can really concentrate on new ways to address those most important issues. With executive focus things do get done. He will spot and achieve serious shareholder value. However, why should the claim be correct? Is the whole world wrong in its view of business? Well, much of the business world has embraced the idea of 'Systems Thinking'. We all know a business is a system of interdependent functions and that a business should be managed in a holistic way. So, we can look at a business as a chain - with many interdependent links. All a businessman has to do is to look for the constraining function of his business - the weakest link. There will always be one. Rarely if ever will there be more than two. Too many constraining factors lead to chaos in a system, making it very difficult to control. Therefore, to spot serious SVA, executives have to first find the constraint(s) of the business. This is the first of Goldratt's Five Focusing Steps of continual improvement. There are four different types of constraints - physical constraints within the business, a supply constraint, a market constraint (the market will not buy all we can make) and policy constraints. Policy constraints, the way things are done, are probably the most important and least understood. 
What is the most frequent response to a physical constraint? Often it is to invest in more capacity. Is this the correct response? Of course not! The direction must be (and this is Goldratt's second focusing step) "what decision(s) must I take to exploit the constraint?" This decision will, if implemented, ensure that the output of the system is maximised. Nothing less will do. If a business invests before it knows how much can be wrung from its constraint, it could very easily spend money unnecessarily - hurting SVA. To implement the exploit decision we must now subordinate everything else to the above decision (Goldratt's third step). This is a huge paradigm shift for many managers. How many times have you seen a sales director subordinate to manufacturing, or vice versa? How often does the constraining resource determine what will be done? However, for really serious SVA every executive must get the most from his constraints - which necessarily means subordination by the rest of the organisation. If the constraint is still in the same place after it has been fully exploited, it is time to elevate capacity by investing money (the fourth step). Only now do you know you are investing in the right place. When a business adds capacity and/or breaks the constraint, the whole situation changes - everything it knows about itself needs to be re-evaluated! The last of Goldratt's five focusing steps is simply: if during any of the above steps the constraint is broken, go back to step one. BUT do not let your inertia (your paradigms about your business system) become the system's constraint! This caution is extremely important. We must re-evaluate all our assumptions about our system - or we will make grave mistakes and leave really serious SVA on the table. It appears that questioning our paradigms should be very high on every manager's agenda. Goldratt has given us an approach to address a business' constraints - at least for the first three types. 
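The weakest-link idea behind the first focusing step can be reduced to a few lines of code. A minimal sketch (the function names and capacity figures are invented for illustration, not taken from the article): model the business as a chain of interdependent functions and pick the one with the least capacity.

```python
# Toy model of a business as a chain of functions. All names and
# capacities below are assumed for illustration.

def find_constraint(capacities):
    """Return the function with the least capacity - the weakest link."""
    return min(capacities, key=capacities.get)

capacities = {          # units per week each function could handle alone
    "purchasing": 120,
    "fabrication": 80,  # the weakest link
    "assembly": 95,
    "sales": 110,
}

constraint = find_constraint(capacities)
system_throughput = capacities[constraint]  # the chain can do no more than this
print(constraint, system_throughput)  # fabrication 80
```

However much the other links are elevated, system output stays at 80 per week until fabrication itself is exploited or elevated - which is why focusing on the one or two constraints pays and local improvements elsewhere do not.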
Policy constraints are different. If you find one - get rid of it, or make it appropriate for your current situation. The other four steps are not valid for policy constraints. Many policy constraints make our life difficult. Many of these policies are not even written policies - they are just 'the way things are done around here' or simple behaviours we have all become used to. No matter, they are hurting our SVA, our earnings. We must change them. Examples of Policies that Damage Profitability: Management by Objectives: Managing by objectives has been around for a long time. It is true that if people are given objectives they usually try to meet them; they want to do a good job. But let's look at the situation in project management (a large proportion of business activity is projects) - a function that is famous for completion delays, budget overruns and project promises not met. Let us look at how managing by objectives delays projects! Imagine you are one of the people doing one or several of the tasks in a project (use the drawing to follow). You have many years of experience and you know very well that Murphy's Law applies to tasks in projects. You also want to do a good job. So, you estimate the time of your task so that you have something like a 90% certainty of completing it on time. You provide your 90% estimate (plus a little bit more for management to cut). In the end you are committed to a task time which gives you what you hope is enough safety time. Moreover, everyone else in the project has done the same thing. What have we done? Every task has an enormous amount of safety in it. This is because the probability distribution of a task's completion time is skewed - with a long tail to the right. In many cases the 90% estimate can be double or triple the median (the time where you have a 50:50 chance of being early or late). The picture above describes the situation. 
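The double-or-triple claim is easy to check numerically. The sketch below assumes a lognormal task-time distribution (the article names no distribution; lognormal is simply a common right-skewed choice) with an assumed median of 10 days:

```python
import math

mu, sigma = math.log(10.0), 0.8   # assumed: median task time of 10 days
median = math.exp(mu)             # the 50:50 point
z90 = 1.2816                      # standard normal 90th-percentile z-score
p90 = math.exp(mu + z90 * sigma)  # the "safe" 90% estimate

print(round(median, 1), round(p90, 1), round(p90 / median, 2))  # 10.0 27.9 2.79
```

With this (not extreme) spread the 90% estimate is nearly three times the median - exactly the kind of hidden safety described above.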
This means that there is a lot of safety in a project. In fact, because the sum of a series of tasks varies relatively less than any individual task, the project as a whole actually has much more safety than just 90%. So why are projects so often late? Usually most people, being very busy and knowing they have a lot of safety in their task time estimate, will not start work straight away. When they do finally start they have frittered away a lot of their original safety; they have wasted it. Most of the time they finish on or near their original estimate - but those with bad luck finish late - sometimes very late. So with most people finishing on time (meeting their objectives) and only a few finishing very late, the project is inevitably late. Managing individuals by objectives is not a good idea in projects - what we want is everyone working to one objective, the project due date - task due dates are in fact meaningless. Maybe it would be better to cut task time estimates in half and, since statistics help us, put only half of what we have cut into a project buffer. This should eliminate 'student syndrome' (starting at the last minute) and with better communication task hand-offs will happen with minimum delay - especially if everyone is now measured on project performance. Will it work? Probably, but I recommend the reader studies the book 'Critical Chain' before attempting to make such a change. Total Quality Management (TQM) Everywhere: TQM is a great tool for improvement, but what happens when a business tries to implement such a programme everywhere? Will it achieve a major improvement to the bottom line quickly? In the current revival of TQM there is a recognition that results must come quickly. Management will not wait years to see bottom line results; they want them now. For this reason consultants selling TQM insist on gathering the 'low-hanging fruit' to demonstrate early success. 
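Returning to the suggestion above of halving estimates and pooling half the cut: a quick Monte Carlo (again assuming lognormal task times - my assumption, not the book's) shows that the pooled plan is much shorter yet still protects the due date:

```python
import math, random

random.seed(1)
N_TASKS, N_RUNS = 10, 20_000
mu, sigma = math.log(10.0), 0.8      # assumed task-time distribution
p90 = math.exp(mu + 1.2816 * sigma)  # each task's padded "safe" estimate

padded_plan = N_TASKS * p90                                   # private safety
pooled_plan = N_TASKS * (p90 / 2) + N_TASKS * (p90 / 2) / 2   # halve, buffer half the cut

on_time = sum(
    sum(random.lognormvariate(mu, sigma) for _ in range(N_TASKS)) <= pooled_plan
    for _ in range(N_RUNS)
)
print(round(padded_plan), round(pooled_plan), round(on_time / N_RUNS, 2))
```

The pooled plan is 25% shorter than the sum of padded estimates, yet the simulated projects still finish within it the large majority of the time: aggregation makes a shared buffer far more efficient than private, per-task safety.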
The problem with most TQM programmes is that they are not focused on the constraints of the business. Improvements will be made, but either they will be small or their impact will not translate into bottom line results. Only those companies that are lucky enough to attack the right problem will see big improvements. Usually the rest will give up on their TQM programme - as happened in the past. What sort of quality improvement do we really want? Of course, those that bring breakthrough results to the bottom line. Improvements with no impact on the bottom line are valueless. In fact they probably cost us SVA by causing delays (resource availability) in those projects that do bring money to the bottom line. So where do we focus our efforts? Of course, we focus on the constraint(s). TQM must be subordinated to the constraints just like all other functions. Measurements: Do our measurements cause the right behaviour in our people? What sort of behaviour would we like to see? It is easy to verbalise - we want our people to behave in a way that helps the system (the business) meet its goal. Do we actually measure performance in this way? Of course not. It is just too difficult to measure a function based on its impact on the business. This is why many companies have metrics to measure each of their functions - manufacturing, sales, R&D etc. In some enterprises 'functional excellence' is even considered a best practice. Yield (or its inverse, scrap rate) is a popular measure used in manufacturing. If it becomes the prime measure - the one the plant manager's bonus really depends on - then it will be focused on, so much so that the business will suffer. For instance, a plant manager was told to improve yield from about 78-80% to at least 85% - or else someone would be found who could. He did it. He did it by sacrificing machine speed to get his desired result. 
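The damage in such a trade is simple arithmetic. A sketch with assumed numbers (the article gives only the yield figures; the speed cut and volume are mine): at a capacity-constrained plant, trading 15% of machine speed for five points of yield reduces good output.

```python
# All figures assumed for illustration.
speed = 1000                      # units/week at full machine speed
good_before = speed * 0.80        # 80% yield at full speed
good_after = speed * 0.85 * 0.85  # 85% yield after a 15% speed cut
print(good_before, good_after)    # 800.0 722.5
```

Yield goes up, the bonus metric looks better, and good output - which at a constrained plant is throughput - goes down.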
Unfortunately the plant was already out of capacity and the yield improvement was far less than the lost capacity from slowing down the machines. The business lost money - SVA went down. In another similar case the plant manager simply out-sourced all the small volume products that had poor yields to a subcontractor with more appropriately sized equipment for these products. Yield went up. At the same time the plant manager's machines often stood idle along with their operators. The fee from the toll manufacturer far outweighed the (yield) savings. There is a lot of SVA in choosing the right measurements to get all functions to behave for the good of the business and not just the function. Full Absorption Costing: This means product or service costing in a way that all overheads (burden) are allocated to products or services. The intent is to know whether or not a product or service is profitable - should we keep it or not? This practice has led to a lot of mistakes in businesses - for example, the out-sourcing case given above. Another example is a service organisation, one that executes projects for its clients. In this company project managers are evaluated on the profitability of their projects. They must buy internal resources at a fully allocated cost - with all the overheads. In a discussion about resource availability one project manager stated he had no problems with this. He does not even try to use internal resources, since this just leads to conflicts. He always hires subcontractors. In this way he has no resource problems and he makes greater profits - because internal resources cost him more than the same kind of people from subcontractors. What has he done? He has optimised his (local) performance and probably he has hurt company performance (if internal resources are under-utilised). What drives him to do this? Performance measures and full absorption costing. Full absorption costing (no matter how it is done) has another problem. 
It gives a profitability ranking to be used by sales and marketing which does not take into account how we are using our constraint. In other words it often happens that those products that use our constraint most efficiently are not very far up the profitability ranking. What happens? Sales sells the products with the highest margins on a full cost basis and profits go down. For serious SVA, DO NOT use any sort of full absorption accounting - not even ABC (Activity Based Costing). Use a system which tells you which products generate the most variable margin (sales - raw materials costs). There is yet another way full absorption costing hurts a business. Many companies value their inventories at 'fully absorbed cost' - with all expenses allocated 'appropriately' to all products. This practice, and the way it is implemented, results in some counterproductive effects. You ask your people to reduce their inventories. Good. They do it and because of this their profits decline in the short term - and they are punished. You want them to increase earnings. No problem, they will just increase inventories and 'hide' some costs there. These are the sorts of games managers will play to optimise their personal performance. The easiest thing in the world is to keep another set of books - the books by which you will manage. (Wall Street is not a reason not to do it. Financial analysts understand very well the problems of full absorption and the games companies play with it.) Full absorption costing of products and inventories helps no one make good decisions. In fact it often leads to decisions that hurt your company. On top of that, 'good' product costing takes so much time and effort, yields so little, and is obsolete almost immediately. If a business stops doing it a lot of talent is freed to concentrate on increasing profits. Efficiency - We Must Use Our Resources Efficiently: Using resources efficiently is a doctrine everywhere. 
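The ranking distortion described above can be seen with two hypothetical products (all figures invented): ranked by fully absorbed margin, product A wins; ranked by variable margin per constraint minute - what matters when the constraint is loaded - product B wins.

```python
# Hypothetical products and figures, for illustration only.
products = {  # price, raw materials, allocated overhead, constraint minutes/unit
    "A": dict(price=100, rm=40, overhead=35, c_min=10),
    "B": dict(price=80,  rm=50, overhead=10, c_min=2),
}

full_margin, t_per_c_min = {}, {}
for name, p in products.items():
    full_margin[name] = p["price"] - p["rm"] - p["overhead"]  # absorption view
    t_per_c_min[name] = (p["price"] - p["rm"]) / p["c_min"]   # constraint view

print(full_margin)   # {'A': 25, 'B': 20} -> sales pushes A
print(t_per_c_min)   # {'A': 6.0, 'B': 15.0} -> each constraint minute earns more on B
```

A sales force paid on full-cost margin will push A; every constraint minute spent on A instead of B forgoes variable margin, and profits fall even as the reported "profitability" of the mix improves.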
Management wants to see everyone and every machine working all of the time, producing. Not only management: as soon as a person has nothing to do for a while he becomes extremely nervous. He wonders whether he will be the next one out the door. So we all make sure that we are always busy (or look busy) - no matter what! Let us see what this causes in a multi-project environment - one where many projects are worked on at the same time and where workers are usually assigned to more than one project at a time. Should all the employees in such an environment be working as hard as they can all of the time? Every project environment has a resource (or set of resources) that is overloaded. These people are the constraint of this project system. The other people are definitely not the constraint. However, these other people have a need to look busy and their managers must be able to report high efficiency. What usually happens? The constraint resource is continually complaining and asking for more capacity. It is working overtime. It has a mountain of work waiting to be done, with no clear priorities on what should be worked on first (every project manager's project is the first priority). The constraint resource multitasks between many tasks and projects (depending on where the squeakiest wheel is) - losing time every time he re-starts a task. It is a vicious circle. These constraint resources are blamed for the poor performance of the organisation. I would not be surprised to find a lot of frustration here - and a high level of manpower turnover. What about the other resources? They are looking for enough work to keep themselves busy, to be 'efficient'. What does this do? It loads even more work into the system so that the constraint resource gets an even bigger backlog. The vicious circle gets much worse. Projects are delayed, SVA is lost. What is the solution? Identify the constraint. Decide how to exploit the constraint. 
Subordinate everything else to the constraint. Etc. We already know who the constraint is - so how do we exploit his (her, their) capacity? We make sure that the constraint resource works on one task at a time, to the end - never interrupting a task. Will this help? Of course it will: the priority project (yes, you need to set priorities) gets done first and much sooner, and all the rest also finish sooner. What about subordination? Easy, the constraint resource dictates the rate at which new projects are introduced into the environment. Management's job is to prioritise the projects. Conclusion: To find serious shareholder value add, executives face the difficulty of finding the time to concentrate on the problem. The direction of the solution proposed here is that, since a business is a system, they need only focus on the very few things that are blocking it from making more and more money. {Clearly buying businesses in markets that are much more attractive is also a route to SVA, but both these acquisitions and the businesses being shed will benefit from focusing on the business constraint}. Once an executive has found the way to delegate most of his work to focus on the constraint of his business, he needs to start thinking about the policies and paradigms driving the behaviour of his organisation. One of the first steps he must take is to define the measures for his organisation: measures that will drive his people to the common goal of making more money now and in the future. With the help of some examples of business paradigms or behaviours, it becomes clear that the approach suggested here is a powerful way to spot and achieve really serious SVA! Acknowledgements: Almost all I have written in this article I owe to Dr. Eliyahu Moshe Goldratt - business thinker and educator. Goldratt is the source of the 'Theory of Constraints'. 
Much of Goldratt's thinking can be found in his three business novels The Goal, It's Not Luck and Critical Chain, which I can wholeheartedly recommend. Another source of Goldratt's thinking is his 'Satellite Tapes', a series of lectures by Goldratt on the subject of applying his theory to production, finance, project management, distribution, marketing, sales, managing people and strategy. These sources of information have been a tremendous influence on my thinking and I hope I have not betrayed Goldratt in what I have written here and that I have helped a little to get his message to more people. There are many others active in the arena of the Theory of Constraints. To the many of them with whom I have discussed this subject - thank you for your thoughts and support. +( Six Sigma From: "Potter, Brian (AAI)" To: "CM SIG List" Subject: [cmsig] RE: Six Sigma Date: Thu, 14 Oct 1999 10:12:20 -0400 - Six Sigma is a frontal assault on Murphy: Find every source of variation and pound the deviations from the targets down to such a low level that essentially all outputs from processes will satisfy downstream requirements. - ToC is an accommodation to Murphy: Manage systems so that variation will not kill you. Observe where variation costs you the most. Attack the most expensive variation until (policy choice here) (a) the variation is negligible or (b) some other variation is more costly. ToC makes a GREAT "gun sight" which can convert Six Sigma from a shotgun (wasting many pellets on unimportant issues and demoralizing people who see their efforts delivering few [if any] useful impacts) into a rifle always centered on the most valuable target. The two programs can work hand in glove. When a ToC shop makes choice "(a)" above, Six Sigma offers excellent tools for (in practical terms) eliminating the targeted variation. Date: Sun, 05 Mar 2000 21:08:04 +0100 Subject: [cmsig] Re: Six Sigma and TOC From: "Rudolf Burkhard" We (Du Pont) have embraced 6 sigma. 
What Frank Patrick says below is absolutely true. The biggest problem I have with 6 sigma is that it is not focused (see Bill Dettmer's CRT on why TQM (or continual improvement) fails) on the constraint of the business, and to a very large extent it is focused on cost and not on revenue growth. I suspect that most business environments would be in a similar situation should they adopt a 6-sigma programme. I have black belt training and I am a Jonah. Using my knowledge from both camps has allowed me to propose revenue generation projects such as improving our new product development using Critical Chain. My proposal has so far caused a little stir because of the opportunity for 'savings' (= revenue growth) I claim. I am expecting a few interesting weeks ahead.
> Norm Rogers wrote:
>> Would the principles of Six Sigma apply to a manufacturing company and would
>> they complement TOC?
> Absolutely.
> The quality tools and statistical process control concepts at the
> core of Six Sigma are perfect tools to use to develop ways to exploit
> your CCRs and to reduce variability in performance of non-CCRs that
> are creating holes in your buffer.
+( strategic constraint Date: Thu, 12 Oct 2000 15:44:54 -0400 From: Tony Rizzo A strategic constraint is a resource that the company chooses to maintain as the constraint of the entire system, for the following reasons:
1) The resource represents a source of significant strength in the company's industry.
2) The resource is in very limited supply, in the world!
3) The resource cannot be elevated except with exceptional investment, if it can be elevated at all.
The strategy is that the one resource that limits throughput is also the one resource that gives the company the greatest competitive edge in its industry. Such a company turns out products as fast as its most valuable resource is capable of doing. 
In the meantime, competitors are limited not by their most valuable resource but by resources that could be elevated easily but are not elevated because of policies. To apply this strategy, the leadership selects its strategic constraint resource. Then, the leadership REALLY applies the first three of the five focusing steps. Should any elevation of the resource take place, such as might happen if the company hires a similar resource away from a competitor, then other resources are immediately elevated as well, so as to ensure that the strategic resource remains the constraint of the system. --- From: "Bill Dettmer" Subject: [cmsig] RE: Strategic constraint Date: Fri, 11 Oct 2002 07:59:39 -0700 Avraham... I don't know that anyone has ever addressed in publication the issue of "strategic" characteristics when defining a constraint. However, it seems to me that the answer to your question lies in the goal of the organization. Had I been in your position when the question was asked, I think I might have answered with another question (that's called "socratizing the sucker"). My question would have been, "What's the nature of the goal of your organization? Short term, or long term?" If the goal is "to [FILL IN THE BLANK], now and in the future," then I'd be inclined to call "strategic" any constraint that blocks or forestalls achievement of that goal in the future. Anything that blocks the goal today could be considered a tactical constraint, but even that's not hard and fast. What if the constraint you face today is one that is likely to take a long time to overcome? That could be strategic as well as tactical. The fiber optic industry, for example, is facing such a constraint now. So are the airlines. And buggy whip manufacturers. Which brings up another question ("socratizing," again)... How far ahead constitutes the future? Next week? Next quarter? Next year? 2-5 years? More? Seems to me that it could be a value judgment on the part of the organization. 
In the final analysis, one could adopt Supreme Court Justice Potter Stewart's position: "I may not be able to define it, but I know it when I see it." --- From: "Potter, Brian \(James B.\)" Santiago, Archimedean constraint: A more precise name for a constraint which takes the form of a resource with less long-run capacity than the long-run demand it faces. Strategic constraint: A resource chosen for planning purposes as the focus point for the buffer management process. This resource may be an actual constraint or it may have a capacity artificially restricted to the known capacity of a known actual constraint elsewhere in the production system. When a strategic constraint differs from the actual constraint, there is often an intent to break the constraints with less capacity (and thus shift the actual constraint to the strategic constraint). The strategic constraint model permits leadership to break successive Archimedean constraints without creating multiple focus shifts during the process of moving the system constraint from its original physical location to its desired physical location. Yes, buffer management still works when you lie to yourself this way. As long as material release timing actually buffers the physical constraint well enough, the resource used as the computation base matters not. +( strategy and tactics see also +( clouds - strategy and tactics tree Dr. Goldratt realized that this required the words “strategy” and “tactic” to be defined more clearly than before. His new definitions were inherently simple, yet powerful. He decided to define “Strategy” as simply the answer to the question “What for?” (the objective of a proposed change) and “Tactic” as simply the answer to the question “How to?” (the details of a proposed change). From these definitions, it is clear that every Strategy (What for) should have an associated Tactic (How to); therefore Strategy and Tactic must always exist as “pairs” and must exist at every level of the organization. 
An S&T can therefore be viewed as simply a logical tree of the proposed changes that should be both necessary and sufficient to ensure the synchronized achievement of more Goal units for the organization. However, any logical tree is only as valid as the assumptions on which it is based. Therefore, it is the responsibility of managers at every level in the organization not only to contribute to defining and communicating the Strategy and Tactic for each proposed change, but also to define and communicate the logic of the proposed change: why the change is really necessary to achieve the higher-level objective and ultimately the goal of the company; why they claim it is possible to achieve the objective (strategy) of the change (especially considering it has probably never been achieved before); why they claim their proposed change (tactic) is the best or even the only way of achieving the strategy; and finally, what advice/warning they would give to their subordinates to ensure sufficiency of implementation. Each node in the S&T is therefore simply a proposed change that should answer:
1. Why is the change needed? (Necessary Assumption)
2. What is the specific measurable objective of the change? (Strategy)
3. Why do you claim the Strategy is possible, and what must be considered when selecting from the alternative ways of achieving it? (Parallel Assumptions linking Strategy with Tactic)
4. What specific change(s) to process, policy or measurement are being proposed? (Tactic)
5. What advice/warning should be given to subordinates, which if ignored, will likely jeopardize the sufficiency of the steps they would take to implement this tactic? 
(Sufficiency Assumption) +( supplier customer conflict The cloud:
A: avoid failure
B: be a reliable supplier
C: avoid losing an important customer now
D: do not overcommit the capacity of my organization
D': overcommit the capacity of my organization
Notice that D is a direct threat to C. Similarly, D' is a direct threat to B. An executive who finds himself in this conflict is locked in the C-D' side of the cloud. Nearly always, the choice is to avoid pain now, in the hope that somehow a miracle will resolve the conflict in the future. Reality, of course, is that the miracle never happens, and the organization becomes and remains a very unreliable supplier. Tony Rizzo: If this is the conflict, then there is no acceptable solution AFTER the conflict occurs. But there is a TRIZ solution: "Do it ahead of time." The solution is to pre-condition the system, so that the conflict is highly unlikely to occur. The key assumption is between C and D'. In order to avoid losing an important customer now, I must absolutely overcommit the capacity of my organization now, because the only time when I can allocate the capacity of my organization to a client is WHEN THE CLIENT REQUESTS IT. There is a technique, which I call the Placeholder method, that can be used to avoid the conflict entirely. With the placeholder, one simply allocates capacity to a client BEFORE the client shows up with an emergency project. Then, when the client really does show up, the client's emergency project enters the system immediately. In the event that the client's emergency project does not show up, another customer's project enters the system in place of the placeholder. The placeholder, of course, never enters the system. It either takes one step back in the queue continually, or it is displaced by the client's emergency project, in which case the placeholder goes back to the end of the queue. 
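The placeholder rotation described above can be sketched as a small queue discipline. This is my reading of the description, not Rizzo's code; the names and the exact rotation rule are assumptions.

```python
def next_project(queue, emergency_arrived):
    """Pop the next project to enter the system, rotating the placeholder."""
    if emergency_arrived:
        queue.remove("PLACEHOLDER")      # emergency takes the reserved slot
        queue.append("PLACEHOLDER")      # a fresh placeholder joins the rear
        return "EMERGENCY"
    head = queue.pop(0)
    if head == "PLACEHOLDER":            # the placeholder never enters the system;
        nxt = queue.pop(0)               # it steps back one position and the
        queue.insert(0, head)            # next real project enters instead
        return nxt
    return head

q = ["PLACEHOLDER", "P1", "P2"]
print(next_project(q, False))  # P1 (the placeholder steps back)
print(next_project(q, True))   # EMERGENCY (takes the reserved capacity at once)
```

Because the reserved slot always exists, an emergency project never waits behind the backlog, and ordinary projects are only ever delayed by one placeholder position.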
With this method, the system appears to the client with an emergency project as if the system worked exclusively for him/her. To the other clients, the system continues to appear as reliable as it ever was, which for typical TOC organizations in new product introduction means 90% to 95% on-time performance. From: Frank Patrick I don't see this as "hopeless." I also don't see a lock between C and D (there's nothing inherent about "not overcommitting" that will lead to losing a customer), but rather between B and D'. Assumptions that may be worth considering: C-D' -- Additional capacity is not available, even from temporary sources, or from changes in management approaches and/or methodologies. D-D' -- Capacity is limited to internal resources of "my organization." A-C --- Losing a current important customer will lead to failure, even if we are so busy that capacity to deliver is an issue. +( systems - additive or interactive From: "Tony Rizzo" Subject: [cmsig] Re: Implement TOC to Job-shop Plant Date: Fri, 7 Dec 2001 03:17:09 -0500 At the time when Frederick W. Taylor was doing his studies of efficiency, there existed very many enterprises for which global efficiency and local efficiency were synonymous. The profit generated by these enterprises equaled exactly the sum of profit contributions generated by the many component operations within them, because each of those component operations generated the end products. The production of clothing items by hourly workers slaving over sewing machines is one example of this kind of additive business model. Further, even though some enterprises did not operate as additive systems, like assembly operations, the level of organization available at that time was probably rather low. Taylor's work made a real contribution even in such cases, due to the low level of organization that existed then. 
In other words, even when local efficiency and system efficiency were not synonymous, Taylor's ideas provided better system efficiency, because local efficiency was already exceedingly poor in many cases. Finally, for many decades the government propagated the same additive model in the defense industry. The vehicle for this was the now unlawful practice of issuing cost-plus-percentage-fee contracts. For the many decades during which this type of contract was used by the government, the route to maximum profitability for the defense contractors was in fact to maximize the utilization of everyone who worked such contracts, because, again, the profit of the contractor equaled the sum of individual profit contributions - an additive system. Now, we have systems that are not additive but, rather, highly interactive. A clear example of this is provided by modern product development organizations. Unfortunately, the notion of local efficiency being synonymous with system efficiency is now so deeply embedded within our culture that we find it nearly impossible to prevent that now erroneous notion from influencing our organizational designs and our operational decisions. Often, we experience a cost-culture shock when finally we get it. +( The Goal THE GOAL (Notes taken) By Eliyahu M. Goldratt and Jeff Cox Ch 4 If your inventories haven't gone down... and your employee expense was not reduced... and if your company isn't selling more products - which obviously it can't, if you're not shipping more of them - then you can't tell me these robots increased your plant's productivity. Ch 5 The goal of a manufacturing organization is to make money. If the goal is to make money, then an action that moves us toward making money is productive. And an action that takes away from making money is non-productive. Ch 6 Three measurements which are central to knowing if the company is making money: net profit, ROI, and cash flow. 
I would want to see increases in net profit and return on investment and cash flow - all three of them. And I would want to see all three of them increase all the time. From the above, the goal: To make money by increasing net profit, while simultaneously increasing return on investment, and simultaneously increasing cash flow. Ch 8 There is more than one way to express the goal. The goal to increase net profit, while simultaneously increasing both ROI and cash flow, is equal to the goal of making money. Three measurements which express the goal of making money: Throughput, Inventory, Operational Expense. Throughput - The rate at which the system generates money through sales. Inventory - All the money that the system has invested in purchasing things which it intends to sell. Operational Expense - The money the system spends in order to turn inventory into throughput. Ch 9 Another goal derivation: Increase throughput while simultaneously reducing both inventory and operating expense. Ch 10 Each one of those definitions contains the word money. Throughput is the money coming in. Inventory is the money currently inside the system. Operational expense is the money we have to pay out to make throughput happen. One measurement for the incoming money, one for the money still stuck inside, and one for the money going out. All employee time - whether it's direct or indirect, idle time or operating time - is operational expense. The market determines the value of the product. In order for the corporation to make money, the value of the product - and the price we're charging - has to be greater than the combination of the investment in inventory and the total operational expense per unit of what we sell. Ch 11 There is a mathematical proof which could clearly show that when capacity is trimmed exactly to marketing demands, no more and no less, throughput goes down, while inventory goes through the roof.
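The Ch 8-10 definitions above tie straight back to the Ch 6 financials. A minimal sketch in Python (the function name and figures are invented for illustration; the book gives no worked numbers here):

```python
def toc_financials(throughput, inventory, operating_expense):
    """Derive two of the Ch 6 financials from the three TOC measurements.

    throughput        - money the system generates through sales, per period
    inventory         - money invested in things the system intends to sell
    operating_expense - money spent turning inventory into throughput
    """
    net_profit = throughput - operating_expense
    roi = net_profit / inventory   # return on the money still stuck inside
    return net_profit, roi

# Invented example figures, per year:
net_profit, roi = toc_financials(throughput=1_000_000,
                                 inventory=400_000,
                                 operating_expense=850_000)
print(net_profit, roi)   # 150000 0.375
```

Cash flow needs period-by-period timing information, so it is left out of this one-line sketch.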
Because inventory goes up, the carrying cost (an operational expense) goes up. A manufacturing line experiences dependent events and statistical fluctuations. Dependent: One step depends on the previous one(s). Statistical Fluctuations: Items that cannot be precisely determined. Most of the factors critical to running your plant successfully cannot be determined precisely ahead of time. Ch 13 If we're all walking at about the same pace, why is the distance between Ron, at the front of the line, and me, at the end of the line, increasing? As long as each of us is maintaining a normal, moderate pace like Ron, the length of the column is increasing. Except between Herbie and the kid in front of him. Every time Herbie gets a step behind, he runs for an extra step. The catch: I see Davey slow down for a few seconds. He's adjusting his packstraps. In front of him, Ron continues onward, oblivious. A gap of ten... fifteen... twenty feet opens up. Which means the entire line has grown by 20 feet. There are limits: I can only go as fast as I am capable, and/or as fast as the person/event ahead of me. I can go as slow as I want. Ch 14 This chapter displays the 'matches' production line. With five pans lined up on a table (manufacturing stations), a die is rolled at one station at a time, and the number rolled is the number of matches processed by that station (which can then be moved to the next station). As the dice are rolled, matches start to accumulate, and actual production falls short of the expected average of 3.5 matches per round (coming in at about 2.0 matches per round). The charting of the 'work in process' depicts a tumbling line for the stations at the end of the production line. Ch 15 Manufacturing always makes the end of the line run to catch up - never slow the front of the line down. So one operation can produce faster than the rest - so what - does this produce more product? No. Only enough product can be produced by the slowest operation.
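The Ch 14 dice game is easy to reproduce in code. A sketch under the rules summarized above - five stations, one die per station per round, and a station can pass on no more than the work-in-process sitting in front of it (the function name, round count, and fixed seed are my own choices):

```python
import random

def dice_game(stations=5, rounds=1000, seed=1):
    """Simulate the matches-and-dice line from The Goal, Ch 14."""
    rng = random.Random(seed)
    wip = [0] * stations               # matches waiting in front of stations 2..n
    shipped = 0
    for _ in range(rounds):
        carry = rng.randint(1, 6)      # station 1 draws from unlimited raw material
        for i in range(1, stations):
            wip[i] += carry            # hand-off from the upstream station
            roll = rng.randint(1, 6)
            carry = min(roll, wip[i])  # a station can't process what isn't there
            wip[i] -= carry
        shipped += carry               # the last station's output is shipped
    return shipped / rounds

print(dice_game())  # noticeably below the 3.5 average of a single die
```

Dependent events plus statistical fluctuations: each station averages 3.5 on its own, yet the line as a whole ships less, and work in process piles up in front of the later stations.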
Ch 17 If operation #1 can process a variable amount of product in a set amount of time - and operation #2 can only process a SET amount of product in a set amount of time - then operation #1 can only usefully produce the amount of product in a time frame that operation #2 can process. "The maximum deviation of a preceding operation will become the starting point of a subsequent operation." Ch 18 We have to change the way we think about production capacity. We cannot measure the capacity of a resource in isolation. Its true productive capacity depends upon where it is in the plant. And trying to level capacity with demand to minimize expenses has really screwed us up. What we know now is that we shouldn't be looking at each local area and trying to trim it. We should be trying to optimize the whole system. Some resources have to have more capacity than others. The ones at the end of the line should have more than the ones at the beginning - sometimes a lot more. A bottleneck is any resource whose capacity is equal to or less than the demand placed upon it. And a non-bottleneck is any resource whose capacity is greater than the demand placed on it. You should not balance capacity with demand. What you need to do instead is balance the flow of product through the plant with demand from the market. This is the first of nine rules that express the relationships between bottlenecks and non-bottlenecks and how you should manage your plant: Balance Flow, Not Capacity. Bottlenecks are not necessarily bad or good - they are simply a reality. Where they exist, you must use them to control the flow through the system and into the market. To find the bottleneck: First, have to know the market demand for the product. Second, have to know how much time each resource has to contribute toward filling the demand.
If the number of available hours for production (discounting maintenance time for machines, lunch and breaks for people, and so on) for a resource is equal to or less than the hours demanded of it - that resource is the bottleneck. Processes can be divided into 'work centers' - groups of people and/or machines with the same resources. Searching the numbers is one way to find a bottleneck - if the numbers are good. A second method is to talk to the people on the floor and see where they think the backup is taking place. Bottlenecks probably have huge amounts of work in process in front of them. Ch 19 Bottlenecks stay bottlenecks - just find enough capacity for the bottlenecks to become more equal to demand. On a non-bottleneck resource some idle time is acceptable - even expected. On a bottleneck resource it is quite the opposite. The throughput of the system depends on the throughput of this, the slowest part of the system. An hour of lost production on this resource is an hour lost for the whole system. Do all of the parts going to a bottleneck actually have to go to a bottleneck? Some parts waiting for the bottleneck are already rejects - so weed them out before the bottleneck wastes processing time on them. Recheck whether some of the parts need to be processed by the bottleneck at all - maybe they simply don't. Don't have bottlenecks work on parts to go into 'inventory' when they could be working on parts to go into sales. The operational cost of letting a bottleneck resource sit idle for one hour is the sum of the costs of every resource in the system for that hour. Ch 25 Activating a non-bottleneck resource in excess of a bottleneck resource just ends up with excess inventory of the non-bottleneck part. Rule number two: Activating a resource and utilizing it are not synonymous. Ch 26 Cut batch sizes in half for non-bottleneck resources and more money will be made. With batch sizes cut in half, there will be half the work in process on the floor.
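The Ch 18 bottleneck test described above - capacity equal to or less than the demand placed upon it - is mechanical enough to sketch. The work-center names and hour figures below are invented for illustration:

```python
# Hypothetical monthly figures: hours available vs. hours demanded.
work_centers = {
    "milling":    {"available": 160, "demanded": 130},
    "heat_treat": {"available": 160, "demanded": 175},
    "assembly":   {"available": 320, "demanded": 240},
}

def find_bottlenecks(centers):
    """A resource is a bottleneck when its capacity (available hours)
    is equal to or less than the demand placed upon it."""
    return [name for name, c in centers.items()
            if c["available"] <= c["demanded"]]

print(find_bottlenecks(work_centers))  # ['heat_treat']
```

As the text notes, this search is only as good as the numbers; talking to the people on the floor and looking for piles of work in process is the cross-check.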
Even cutting batch sizes of delivered items from vendors helps. In all, less money will be tied up in inventory - better cash flow. From the time a piece of material enters the plant to the time it leaves the plant, it goes through four 'elements.' Setup - The time it waits for the resource, while the resource prepares to work on the part. Process Time - The amount of time spent being modified into a new form. Queue Time - The amount of time it waits for a resource which is working on a different part. Wait Time - The amount of time it waits, not for a resource, but for another part, so that it can be assembled. Setup and Process are small amounts of time. Queue and Wait are immense. Queue is dominant at bottleneck resources. Wait is dominant at non-bottleneck resources. Epilogue A person/company should never be satisfied. There should always be change - change for improvement. Everybody needs to know the constraints. Every person's move should be made in consideration of the constraints. The first step is to remove the resistance to change. from http://www.jimwilliamson.net/hls/sayings.htm +( the perception of goodness Subject: [cmsig] The Perception of Goodness challenge Date: Sun, 5 Dec 2004 16:06:48 -0800 From: "Holt, Steven C" A lot of the challenge behind our discussions of TOC and other approaches, including FCS, Dynamic Modeling, Lean, 6 Sigma, etc., has to do with our own experiences and our perceptions. Thus, over the last few weeks we have had logical arguments that come across as one person saying "That will never work" and another saying, "I've done it and it worked." They often appear to be circular and even dip into "I've never seen it, therefore it can't be true." It's an interesting thought experiment to figure out whether there is, in fact, ANY means by which all of the parties in such a discussion can be satisfied with a final answer.
We're likely to get into a mode now where even if evidence of success is brought forward, other people will discount it as "not applicable" or "not the same thing at all," or find a number of other reasons why it's actually not valid evidence of success. Consider a TOC practitioner in a predominantly Lean company, for instance. Lean is the official mandate; it's what the CEO talks about, it's in all the company newsletters, and it's what employees are expected to do. TOC has none of that; it has no mandate, it has no budget, it has no communication. If an individual decides to try a TOC solution, they are essentially committing a rebellious act. Consequently, they're likely to operate as much under the radar screen as possible. Unfortunately, TOC has a habit of working. If it does, then the evidence of success is often unavoidable and difficult to hide. I once talked to a group who were living in fear that their senior manager would see how much their performance had improved due to TOC, because they'd likely be asked to expand their implementation faster than they thought they could handle. But this means that when TOC is used and works, the news gets out that it works. Let's consider the opposite case: suppose TOC doesn't work in a given situation (and there are a number of great reasons why it might run into trouble). Since it was an unsanctioned, unofficial implementation, evidence of its use is simply swept under the rug. "TOC? Never heard of it. We ran into resistance to change/a market downturn/problems with software/etc." TOC never gets the blame. Consider the opposite--what happens to the officially sanctioned methods? Well, they don't always work, either. But because they are officially sanctioned and out in the open, people hear about the failures of the official methods but not about the failures of the unofficial methods. Lean is a great case in point. Womack has had a number of discussions on cases where Lean has failed.
Likewise, Jerrold Solomon says in the foreword to his book "Who's Counting?" that he wrote it to overcome what he believes to be the key reason for Lean failures. Along this same line, I recently found an interesting article online. It's an interview from the Association for Manufacturing Excellence with Clifford F. Ransom II, Vice President, State Street Research, Boston. Ransom is introduced as one of the few bankers to understand Lean and apply it to the evaluation of companies. The full interview is available several places online; I found it at http://www.superfactory.com/articles/lean_fat_cash_flow.aspx Here's the interviewer's question and Ransom's response that most intrigued me: --------------------------------------- Q: Do you track many lean manufacturers? A: No. Very few companies have advanced with lean manufacturing until you can see the results financially --- perhaps one or two percent at best. Another two-three percent are "getting there" --- OK but not outstanding. Another 10-15 percent mostly "just talk lean." The majority, 80 percent or so, don't even have the buzz words straight. Unless I see three pieces of evidence, I do not consider a management to be serious about lean manufacturing. 1) They must proclaim that they are becoming lean. They can call it whatever they want, but intentions must be boldly stated in a vision that everyone can understand. 2) They must tie compensation to lean systems. You are not becoming lean if you reward people for doing unlean things. 3) They have to drive the company with lean metrics --- time and inventory measures. You have to persist to see results. You won't see much change in the financials for 12 to 18 months, sometimes longer. Clearly, confirming the sustainability of superior performance takes much longer --- years. Most managements waffle around, make only a half-hearted attempt, and never get rid of the inconsistencies in their own leadership.
--------------------------------------- So, another challenge for us to throw into the pot in all of this is: just what is it that makes us believe that many of the methods considered to be successful actually are successful? Steve Holt +( the rod Date: Tue, 27 Mar 2001 07:18:56 +0200 From: Avraham Mordoch Subject: [cmsig] RE: Another question about buffer The concept of the Rod was developed at the Goldratt Institute while developing Disaster, and it is well described in chapter 33 of Goldratt's "The Haystack Syndrome". The Rod is applicable when you have a constraint feeding another one. There are two such situations: (a) when the two constraints are not the same resource, and (b) when it is the same resource and the material, after being processed in the constraint resource, goes to another operation on the same resource, probably with some other operations in between. The Rod represents the minimum time you allocate in your schedule as the distance between the two constraint operations. This time is needed to protect the second constraint. The protection is against any disturbances or delays in the process leading to the constraint (drum). You almost always need such protection. This situation of a constraint feeding itself is common in electronic wafer industries, where the scrap ratio is relatively high and you want to protect yourself against it. If you think about your buffer in terms of time, as you should, and not in terms of material, you will see that the Rod is the means to describe the buffer size when you have a constraint feeding a constraint. In "The Haystack Syndrome" you also have a discussion of how long the Rod should be, meaning how big your buffer is. > >The purpose of a buffer is to protect the constraint from being starved. If > >the same material is going through the constraint a second time, you do not > >need to buffer it again since once it is through the first time it is ready > >and waiting to be worked on again.
+( The Way Geese Fly +c Rizzo Have you ever given it any thought, really? I'm talking about the way that geese fly. Each autumn countless flocks of geese grace the sky with willowy V-formations, as they strive to reach regions of the globe with better weather and more abundant feeding. But have you ever wondered why they fly in such lovely coordination? It has to do with local optimization. That's what it's all about, local optimization. When we see an airborne V, we are observing the behavior of a successful system. But to understand the reason for that system's success, we need to look at the smallest component of the system. We need to look at the single, solitary goose. We also need to understand some very basic aerodynamics. Let's get past the aerodynamics first, then we'll be able to better understand the behavior of the individual goose. It's a fact of life that anything with wings creates spirals of flowing air as it flies. These wisps of wind, called tip vortices, trail from the wingtips of whatever flies. The vortices circulate in opposite directions. The one that trails behind the left wing circulates clockwise. The tip vortex that trails from the right wing circulates counterclockwise. If you live near an airport, you may have seen such vortices trailing from the wingtips of landing aircraft on cool, humid days. They are quite fascinating to watch. But if you were a goose, then your perception of tip vortices would be based on a completely different criterion. In fact, tip vortices would influence your flight position relative to the goose in front of you. Consider this. If tip vortices spin in opposite directions, with the left vortex spinning clockwise and the right vortex spinning counterclockwise, then the air directly behind a flying goose has a net motion downward. Directly behind each goose there exists a downwash. Imagine trying to flap your way south for a few thousand miles, with a downwash constantly trying to push you toward the ground. 
If you were a goose, and if you could find a more favorable position in which to fly, wouldn't you be very likely to fly there? Well, there would be two such favorable positions relative to the goose ahead of you, if you were a flying goose. These would be slightly behind and to either side of the goose ahead. Remember those tip vortices? To the left or to the right of a flying goose, the tip vortices cause the air to have a net updraft. That's right. While the air directly behind a flying goose is moving downward, the air behind and on either side of a flying goose has a slight upward movement. Now, I'm the first to admit that neither geese nor ganders know diddly about aerodynamics. But they all can feel the difference. If a goose is tired, and if the goose finds it easier to fly in a particular spot relative to the goose ahead of it, then the tired goose flies in that particular spot. That's all there is to it. But what about that local optimization? Well, we've explained the local optimization issue, haven't we? Each goose in the graceful V-shaped formation is squarely in the mode of local optimization. Geese don't fly in such formations because they have a sense of esthetics. Geese have no sense of esthetics. They don't fly in such formations because they follow some policy. Geese don't have policies. They have only instincts, and one of these is the instinct for self-preservation. By flying behind and on either side of another goose, each flying goose is making life easier for itself and optimizing its likelihood of survival. We, who gape at them with such wonder and with mouths often open, see the behavior of the system of which the single solitary goose is a component. We see the result of wide-spread, successful, local optimization. Now let's talk about that instinct for self-preservation. I can't prove it yet.
But I have a strong suspicion that every complex organism (every animal) that has ever lacked the instinct for self-preservation has vanished from the face of the earth largely because it lacked that very instinct. If this is so, then I should expect every complex organism to behave in a somewhat predictable manner. For example, I should expect a goose to avoid things that are damaging to it, such as flying directly behind another goose. I should also expect a goose to do things that favor its survival, such as flying behind and to one side of another goose. Further, I should expect most geese and other complex organisms to be indifferent to things that are neither damaging to them nor favor their survival. As I said, I can't prove it yet. But I'm working on it, with the help of some very capable brains. Oh! Did I mention that people are complex organisms? People are very complex organisms. They also have a strong instinct for self-preservation. And the behavior of the organizations that people form (we call these companies) is the result of wide-spread, successful, local optimization. When we observe a company that experiences smashing success, we see the behavior of a system within which successful, local optimization is rampant. When we observe a company that exhibits numbing mediocrity, again, we see the behavior of a system within which successful, local optimization is rampant. Whenever we observe any company, we can conclude with confidence that in that company there exists wide-spread, successful local optimization. So, what's the difference between companies that are smashingly successful and companies that are massively mediocre? There is a difference. The difference is in the rules that exist within the companies. These rules (policies and measurements) are the physics of the system.
Just as the laws of aerodynamics cause flocks of geese to fly in graceful V-shaped formations, the policies and measurements within a company cause the overall behavior of the organizational system that is the company. They do so by causing individuals to choose a specific set of actions that, within the context of the organization's internal physics, result in either the greatest gain or the least damage to individuals. During our excursion into the realm of constraints and clear thinking, many of us might have drawn the conclusion that local optimization is a very bad thing, to be avoided at all costs (forgive me for the pun). But it is neither a bad thing nor a good thing. It is simply a fact of life. Every living thing today is constantly in the mode of local optimization. It has to be, simply to continue to survive in many cases. The tragedy isn't that local optimization exists. It is that we don't understand it nearly so well as a scientist understands the physics of the universe. If we did understand this organizational physics only half as well as a scientist understands, say, aerodynamics, then we might begin to harness the vast energy of the people that make up our organizations. Perhaps, this is the most persuasive argument in favor of the Thinking Processes. They are tools for the discovery of the organizational physics that we desperately need to understand, if we are to design our organizational systems effectively. (C) Tony Rizzo, 1996. tocguy@PDInstitute.com This article may be reproduced only in its entirety. Any reproduction must include the author's name. This article may be published in formal publications, either in print or in electronic form, without written permission from the author. +( thinking processes From: "Tony Rizzo" Date: Wed, 17 Aug 2005 14:14:00 -0400 Subject: RE: [tocleaders] Assumptions Learning is about challenging or testing assumptions. 
The thinking process tools are useful in describing and communicating the assumptions under test. And the categories of legitimate reservations (Lord, how I hate that label) are useful in creating accurate descriptions of the assumptions under test. But ultimately the test comes from reality, data, and experience. Without these, the assumptions never become anything useful. One of my difficulties with the thinking process tools is that they are verbal. They create verbal models of our reality, accurate or otherwise. Only a fraction of the population thinks and interprets the world through verbal means. I suspect that Eli Goldratt is among these. He admitted at one time that he struggled with sketching cartoons. But a greater fraction of the population consists of people who think visually. I am overwhelmingly visual in my thinking. Other people, a non-trivial fraction of the population, learn and understand primarily by hearing. The rest, I am told, are kinesthetic. They must feel and experience, to learn and understand. The thinking process tools address the learning, thinking, and communication mechanism of perhaps 1/4 of the population. In addition, many of us are highly experienced in the science of designing, building, and using mathematical models and computer models of physical systems. I, for example, spent 7 years in college and graduate school and an additional 12 years in a technical position where I developed necessarily accurate models of physical systems. Even to this day I regularly develop computer models. Only today the models are of organizations. For those of us who have this training, the verbal approach to describing reality and to understanding systems is, well, slow and ineffective to say the least. We have more effective modeling techniques available to us. Finally, using logic alone, any of us can begin with a handful of entities and merge these in all possible combinations, to derive the set of all logically integrated effects.
Most of us then can eliminate from the set of all logically integrated effects those that are obviously inconsistent with reality. But the only people who can whittle the set down to the small number of effects and causal relationships that really have an impact are the few among us who POSSESS KNOWLEDGE of the situation. Knowledge comes from experience, tests, data, and observation. I have never seen anyone build a CRT and then verify its entities and its causal relationships with tests, data, and documented observation. Consequently, I am forced to conclude that most of the CRTs in use are based upon nothing more than speculation. The few that are accurate are so only because they were built by someone who already possessed enough knowledge to see the problem and to see what solutions might work and what solutions might not work. Does this mean that the thinking process tools are without utility? Of course not! The thinking process tools can be highly useful in a number of situations and in the hands of many people with the right experience in the problem areas at hand. But the thinking process tools are not the cure-all that some suggested initially. In short, the thinking process tools were presented initially with much hype. Once the hype was stripped away, that which was left was much less than the hype suggested. That which is left is still useful and valuable. It is just not all that was promised initially. Nor is it all that some continue to promise. --- From: "Tony Rizzo" Date: Mon, 15 Aug 2005 11:46:31 -0400 Subject: RE: [tocleaders] I. In Search of the Thinking Process In this list and certainly in others we often fall into disagreement, because we fail to define the subject of discussion (the system) before we begin. So that we might avoid this trap perhaps one time, let's begin by defining the system to which the current discussion applies. 
The system or the subject of this discussion is the set of tools known as the Thinking Processes, developed by Lisa Scheinkopf, Eli Goldratt, and others, who as a team attempted to formalize the process by which organizational problems might be understood and solved. The set of tools consists of the following: 1) The Current Reality Tree - intended to capture the effect-cause-effect relationships that describe an organization's current drivers of decisions and actions, which in turn determine the organization's performance. 2) The Evaporating Cloud - a conflict diagram intended to highlight the predominant conflicts felt by people within the organization and to help surface the underlying assumptions that create the conflict, be those assumptions real or imagined. 3) The Future Reality Tree - intended to reflect the effect-cause-effect relationships that describe the desired future state of the organization. 4) The Prerequisite Tree - intended to show the obstacles to creating the desired future and to help identify the intermediate objectives with which to overcome those obstacles and create the future state of the organization. 5) The Transition Tree - intended to be a plan that lists the specific actions with which to overcome the obstacles and the logic explaining why the listed actions are sufficient to overcome each obstacle. The Transition Tree was intended to be the organization's project plan. Since the initial definition of these five tools, a number of derivations have been offered, usually by Dr. Goldratt and/or his children. These include A) The Communication Current Reality Tree (CCRT). B) The Three-Cloud method. C) The Three-UDE Cloud. I'm sure that there are others. I just don't remember them all, and after losing interest in this tool set, I no longer kept track of all the derivations. Anyone who wishes to add to the defined list of the Thinking Processes should feel free to do so.
So, the subject of discussion appears to be the utility of the tool set known as the Thinking Processes. This begs the question: utility to whom? Keeping in mind this "whom" part is an important aspect of assessing utility. The tool set that I find most useful may be quite different from the tool set that, say, Mark finds most useful. The reasons for the different perceptions of utility can be a number of things, such as educational background, or job experience, or simply personal preference. A second important point deals with how to assess utility. Specifically, utility suggests progress toward some goal. Therefore, in addition to keeping in mind that utility refers to a specific person or category of people, we must keep in mind also that utility is a statement of progress toward a goal. Consequently, before going much further with this discussion, let's see if we can agree on the goal with which the Thinking Process tools were intended to help. Perhaps Jim can help us here. Further, the subject of discussion is not the need for managers, executives, workers, and consultants to think in terms of effect-cause-effect. We all agree on the need to think in terms of effect-cause-effect. The lack of agreement is with respect to the specific set of tools and their utility. That is, the degree to which they help some as yet unidentified persons make progress toward some as yet unstated goal. The lack of agreement is not with the need for effect-cause-effect thinking. --- From: "J Caspari" Subject: [cmsig] Could be reservation Date: Sun, 5 May 2002 14:10:22 -0400 [NOTE: for newcomers to the list, some previous discussions of the Categories of Legitimate Reservation (CLRs) on this list will be found at http://casparija.home.attbi.com/dweb/l49.htm . A summary of the CLRs is available at http://www.connectedconcepts.net/CLR.htm . For an example of the three-cloud technique in building a CRT, see Dr.
Holt's PowerPoint Presentations at http://www.vancouver.wsu.edu/fac/holt/em526/ppt.htm] Richard Zultner (REZ) wrote, in part, as a part of the 3 Cloud Unsoundness thread, REZ: << ... the existing checking procedures [CLRs - categories of legitimate reservation] are inadequate, but for a slightly different reason. To put it simply, people untrained in ToC do not use "legitimate" reservations. The one I see most often is the "could be" reservation. [As a separate topic I would argue that this is NOT properly captured in the existing CLR categories...] Show them a tree, take them through the logic, and at several key steps their reaction is often, "yes, that could be" (they accept the possibility of the conclusion, but they have little or no evidence for it, and so they have little confidence in the conclusion). They are not objecting to the conclusion. They are not saying the conclusion is wrong. They are simply saying they wouldn't "bet on" the conclusion. This is why a tight, dry tree can still fail to have any persuasive power -- there is no force to the conclusion, no compulsion to act on it. Take the very example laid out so nicely at length on this list. Here we have a tree with, what, six levels? Even with perfect, dry logic, if I have only(!) an 80% confidence in the logical conclusion at each level, what is my confidence in the argument as a whole? Not much. Would I bet the ranch (or change my behavior) based on it? No. >> CASPARI: [I am assuming that the "dry tree" to which you refer is the one that I posted yesterday.] If you have only an 80% confidence in the logical conclusion at each level, then I would suggest that the tree is not "dry" enough for the purpose for which it is being used. CASPARI: So let's put the potential "could be" reservation to the test to see if the existing CLRs are sufficient to dry the tree further to the point that you are highly confident (enough to bet the ranch or change your behavior based on it).
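Zultner's compounding point can be made concrete. With six levels and an 80% confidence at each (treating the levels as independent, which is my simplifying assumption), the confidence in the final conclusion is:

```python
# Confidence in a chained logical argument, level by level.
levels = 6
per_level = 0.80
overall = per_level ** levels   # assumes the levels are independent
print(round(overall, 3))        # 0.262 - about one chance in four
```

Which is exactly why a "could be" at each step, individually harmless, leaves no compulsion to act on the tree as a whole.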
If they are, then we will have shown that another reservation is not needed, but rather that the observation of a "could be" response is a signal for the presenter to surface the specific CLR from the audience. CASPARI: The entry point to the tree as presented was three entities. (30) [CONFLICT ENTITY D] There is pressure to take actions based on the indications of the accounting measurements. (305) [DEFINITION] Costs are in control if they bear a reasonable relationship to revenues. (2-1020) [OXYGEN] Revenues are linked to costs--in the minds of the vast majority of people in the society--in terms of a desirable relationship. CASPARI: Do you understand each of these three entities (30, 305, and 2-1020), or is there something confusing about one or more of them? In other words, do you have a CLARITY reservation? --- From: "J Caspari" Subject: [cmsig] Re: Three Cloud Method Date: Sat, 4 May 2002 15:24:03 -0400 Larry Leach wrote (Subj: 3 Cloud Unsoundness), in parts, LEACH: << A logical argument scrutiny only checks that the deduced results track from the premises. It does not check the reality of the premises. >> CASPARI: One of the categories of legitimate reservation (CLR) is ENTITY EXISTENCE. This asks about the truth value (reality) of the existential premises. (Conditional implication premises of the form, if p then q, are represented by sufficiency arrows.) Note that applying this reservation to the generic cloud that results from the n-cloud approach ALWAYS (that is a strong term, and I use it intentionally) results in a re-wording of some or all of the entities contained in the generic cloud when converting it to the base of the CRT, where all entities must be true (exist in reality). This is because the argument represented by an Evaporating Cloud (cloud) is always inconsistent (false, invalid). The cloud is absurd on the face of it because the D and D' entities represent 'p' and 'not p'; 'p' cannot be both true and false at the same time (reductio ad absurdum).
This is why clouds always can be "evaporated"; they cannot exist in reality. CASPARI: For example, in the generic cloud that I used as the starting place for a treatise on Constraints Accounting, the B, D and D' entities are: (B) We exercise good budgetary control. (D) We take actions based on the indications of the accounting measurements. (D') We take actions that are in direct opposition to the indications of the accounting measurements. But in all cases at least one (or more) of these statements (entities) is false, and thus fails the entity existence reservation. Therefore, in the CRT (current reality tree) base these three entities are re-worded as: (20) We need to exercise good budgetary control. (30) There is pressure to take actions based on the indications of the accounting measurements. (50) There is pressure to take actions that are in direct opposition to the indications of the accounting measurements. Whereas the Evaporating Cloud entities, D and D', cannot both be true, the re-worded CRT conflict entities (30) and (50) can both be true. CASPARI: Now you get to evaluate entities (30) and (50) [and the logic contained in the remainder of the cloud base that leads to them] as to being either true or false for your organization. That is, apply the entity existence reservation to each of the entities and apply the other CLRs to the arrows and bananas (implications and conjuncts) in the cloud base. LEACH: << Thus, the 3 Clouds can (consciously or unconsciously) be used as a shell game, dazzling the 'user' with a method that hides flawed or missing underlying assumptions (or premises). >> CASPARI: But the CLRs check for exactly these things. If the 'user' does not apply the CLRs, then that is his or her problem (possibly caused by a problem with his or her educational system). Caveat emptor. Note that the CLRs are used with the CRT, not the cloud.
In this case, the CAUSE INSUFFICIENCY reservation searches for the missing "oxygen" (something taken for granted), and both the ENTITY EXISTENCE and ADDITIONAL CAUSE reservations search for the flawed assumption. LEACH: << Psychological research demonstrates that if you can get people to say yes to a little thing, it is much easier to get them to agree with a big thing. Thus, by leading them along a step at a time, you can get them to agree to a CRT that is utter nonsense; or at least completely misses the high leverage core problem or conflict. >> CASPARI: Shame on you. This is an education problem, not a logic problem. Why would you hire an Enron auditor or a telemarketer as a consultant for the Thinking Processes of the TOC? Again, caveat emptor. Hire a consultant (internal or external) who lacks integrity and you are likely to get what you pay for. LEACH: << My assertion is that the previous method causes you to think harder and reach further, possibly helping you find your way out of some of these mental blocks. >> CASPARI: I repeat my earlier comment from my April 12 posting: "Larry says that the logic of CRTs created with the three-cloud approach tends to be weak when compared with CRTs created in the "old way." I do not see how the logic of the three-cloud approach could be weaker than the "old way" if it has survived scrutiny in accordance with the CLRs (categories of legitimate reservation). After all, the CLRs are the same for both techniques." LEACH: << For example, another 'recent' innovation in CRTs was assuring that we asked about Policies, Behaviors, and Measures. I think that is a great idea. I do not know if it is codified anywhere (e.g., I don't know if Dettmer's or Lisa's books have this point, and can't check...they are in boxes). 
>> CASPARI: Lisa covers conversion of the cloud to a CCRT (Communication CRT) in Chapter 12 (of *Thinking for a Change*), but does not specify the Policies, Measurements and Behaviors (PMB) entities pathway specifically. I may have done that first in my April 12 posting (I, of course, got it from Dr. E. M. Goldratt originally, and the concept has been around since at least the early 1990s when Dale Houle mentioned it to me). CASPARI: Continuing my CRT from above, a portion of it is drawn from the situation Frank Patrick described in 1995 (archived with comments at http://casparija.home.attbi.com/aweb/RL1.HTM ), but what follows as an example contains two PMB pathways from the conflict to the UDEs in a portion of a CRT that I drew in 1998:

IF (30) [CONFLICT ENTITY D] There is pressure to take actions based on the indications of the accounting measurements
AND (305) [DEFINITION] Costs are in control if they bear a reasonable relationship to revenues
AND (2-1020) [OXYGEN] Revenues are linked to costs--in the minds of the vast majority of people in the society--in terms of a desirable relationship
THEN (310) [NORM] The price of a product should be a function of the cost of the product.

IF (50) [CONFLICT ENTITY D'] There is pressure to take actions which are in direct opposition to the indications of the accounting measurements
THEN (315) Products are introduced based on the justification of perceived market demand.

IF (315) Products are introduced based on the justification of perceived market demand
THEN (320) The firm has a large number of products with many variations.

IF (320) The firm has a large number of products with many variations
AND (317) [OXYGEN] Different product variations consume different amounts of resources
AND (310) [NORM] The price of a product should be a function of the cost of the product
THEN (325) Prices should be different for each product variation.

IF (325) Prices should be different for each product variation
AND (310) [NORM] The price of a product should be a function of the cost of the product
THEN (330) [POLICY] Target prices are based on product costs.

IF (135) [OXYGEN] The cost accounting system (traditional, activity based, modern, or direct) attaches unit costs to cost objects
AND (330) [POLICY] Target prices are based on product costs
THEN (335) [MEASUREMENT] Target prices are a function of a complex cost allocation scheme.

IF (335) [MEASUREMENT] Target prices are a function of a complex cost allocation scheme
THEN (340) [BEHAVIOR] Marketing cannot distinguish, for customers, why one product is worth $5 more or less than another.

IF (340) [BEHAVIOR] Marketing cannot distinguish, for customers, why one product is worth $5 more or less than another
THEN (345) Customers become confused and skeptical.

IF (345) Customers become confused and skeptical
THEN (430) [UDE] Sales are lost to competitors.

IF (320) The firm has a large number of products with many variations
AND (30) [CONFLICT ENTITY D] There is pressure to take actions based on the indications of the accounting measurements
THEN (420) [POLICY] Management wishes to sell only profitable products.

IF (135) [OXYGEN] The cost accounting system (traditional, activity based, modern, or direct) attaches unit costs to cost objects
AND (420) [POLICY] Management wishes to sell only profitable products
THEN (410) [MEASUREMENT] Products are ranked according to sales price less some measure of product cost (Gross Margin).

IF (320) The firm has a large number of products with many variations
AND (410) [MEASUREMENT] Products are ranked according to sales price less some measure of product cost (Gross Margin)
THEN (415) Some products have relatively low gross margins.

IF (415) Some products have relatively low gross margins
AND (420) [POLICY] Management wishes to sell only profitable products
THEN (425) [BEHAVIOR] Products with low gross margin are discontinued.

IF (425) [BEHAVIOR] Products with low gross margin are discontinued
AND (405) [OXYGEN] Many customers will buy from other suppliers if the firm does not supply the products that they have been buying
THEN (430) [UDE] Sales are lost to competitors.

IF (340) [BEHAVIOR] Marketing cannot distinguish, for customers, why one product is worth $5 more or less than another
THEN (350) [UDE] Marketing people become frustrated.

CASPARI: So, there is an example to scrutinize in accordance with the CLRs and to determine what defects exist in the example that cannot be identified through the appropriate use of the CLRs. LEACH: << I think we might have to go further with these types of questions. For example (AND THIS IS ONLY ONE EXAMPLE), what are the rewards (in reality) that people get for specific behaviors? How are these rewards delivered, in terms of behaviorism; i.e., are they Immediate or Future, Negative or Positive, Certain or Uncertain? This kind of thinking may help us discover the real current reality; e.g., why is it that companies have such a hard time maintaining integrity around project priorities. >> CASPARI: For sure, questions such as these are relevant. In fact, scrutiny of my CRT analysis, of which the foregoing example is a small portion, using the CLRs led me into these very areas and resulted directly in expanding the treatise to *Constraint Management: Using Constraints Accounting to Lock in a Process of Ongoing Improvement* and resulted in the specification of the necessary and sufficient conditions for successful Constraint Management that I posted to the list last year. These sorts of subjects must be in every CRT that has some aspect of measurements as the core problem or conflict. CASPARI: However, the CRT is just a part of a larger process leading through a FRT (future reality tree), negative branch reservations, and PRT (prerequisite tree or Intermediate Objective map) to, ultimately, a TrT (transition tree).
Last summer I corresponded with Ramush Goldratt as follows: Ramush wrote: The purpose of the [transition] tree is to present a solid logical argument that the way to reach the desired objective(s) is to take each one of the recommended actions. Therefore the motivation is an "oxygen" - we don't have to add it in each one of the links. Caspari response: Motivation, which has to do with individual persons, would be oxygen only in the case in which the DE [desired effect] was a personal DE. That is, the TrT is a personal TrT. If the TrT is an organizational TrT, then an unstated oxygen in your argument is that there is a dynamic congruence between the goals of the individual expected to take the action and the DE. LEACH << I know that Bill Dettmer argues that the CLRs go after this by requiring you prove entity and causality existence, and check predicted effects. I assert that these do not do the whole job, because: 1. The CLR approach only checks what is there (i.e. does not force inquiry into areas such as the above, that may be hidden from the practitioner's own consciousness), and >> CASPARI: As I state immediately above, my experience is that the "drying" process (complete scrutiny according to the CLRs) does force one into those other areas. LEACH << 2. The self-sealing nature of the espoused theory belief systems described by Argyris. >> CASPARI: Yes, outside scrutiny from a different paradigm is far more valuable than the small amount that consultants charge for such services. CASPARI: Hopefully, this posting taken in conjunction with my posting of April 12 and comments that might arise from the members of the list will provide the codification of the n-cloud approach to the CRT with PMB pathways that you were seeking. +( Throughput Accounting T = sales minus totally-variable expenses, including RM immediately consumed. OE = all other expenses we would pay even if no sale was made, including depreciation & labor.
I = balance sheet figures representing money tied-up in the system to protect throughput (equipment, RM, FG, WIP, possibly other capitalized goods). ============================================================================== Brian, You ask if using assets until they gradually wear out converts assets-type-I into inventory-type-I. No. The reduction in value of the equipment is represented in depreciation. The depreciation is part of OE. Nothing changes into inventory. TA does not include the depreciation expense as part of inventory. You recognize that we do not want to apply overhead to inventory. Depreciation expense is part of that overhead. So there is no conversion from the equipment I to an inventory I. There is only a conversion from the equipment I to OE (through the form of depreciation expense). In regard to materials purchased in the form of raw materials, I would like to attempt to clear some confusion I read in an earlier posting. It was said that this is part of T. No. This is I. Raw materials are part of Inventory/Investment. When the material is used and is in the form of WIP or in Finished Goods, it is still part of Inventory/Investment. Only when the end product is sold does it become part of Throughput. So raw material is still "I" regardless of how fast products are made and sold. The only factor is that it may not stay as I very long and will go into T. But it is still I until it does become T. It certainly is not T when it is purchased. One other point is that there is more to "I" than materials and equipment. Look on your balance sheet and you will see many more assets. These assets are part of the investment and need to be considered in "I" Inventory/Investment. Norman Henry +( throughput based decision support Throughput Based Decision Support By Eli Schragenheim Establishing the Need Every novice in TOC is aware of its condemnation of cost accounting, including activity-based costing.
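Norman's points (depreciation flows to OE rather than I; raw material remains I until the end product is sold) can be sketched with invented period figures:

```python
# Hedged sketch of basic Throughput Accounting arithmetic.
# All figures are invented for illustration.
sales = 10_000.0
raw_material_consumed = 4_000.0   # totally-variable cost of the units sold
labor_and_overhead = 3_000.0      # paid even if no sale were made
depreciation = 500.0              # wear of equipment: part of OE, never I

T = sales - raw_material_consumed       # throughput of the period
OE = labor_and_overhead + depreciation  # all other expenses
net_profit = T - OE

print(T, OE, net_profit)  # 6000.0 3500.0 2500.0
```

Unsold raw material, WIP, and finished goods would stay on the books as I; only the consumed material of the sold units enters the T calculation.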
As a matter of fact, TOC defies the basic concept of product costing — no matter what the method is. If the product cost is always wrong then decisions based on it are wrong as well. Hence, it is essential that a better way to make such decisions be clearly defined. TOC points to such a way. It centers on the concepts of throughput and the system constraints. I assume the reader has already acquired good knowledge regarding the definitions of those key terms and how they are used to estimate the economic desirability of many daily decisions. However, what fits regular daily decisions may be wrong to apply as is for some larger decisions. Suppose a new large client wishes to buy large quantities of your company's products, but with a substantial reduction in price. What do I mean by "large" decision? For the sake of this example let’s suppose the volume of the proposed deal is 12% of the current total sales. And, by the way, we cannot accept half of the deal — either we accept the whole deal or reject it. Does the size of the deal pose any difficulty? The standard TOC way is to consider the T/CU of the deal (throughput per constraint’s unit) and compare it to the T/CU of the least profitable product. Unfortunately this way might mislead the user and thus cause a wrong decision. And this is not only because some long-term marketing considerations might override the short-term financial considerations. Large decisions that were supported by the T/CU priority scheme may be wrong even in terms of pure short-term financial outcomes. Here are some cases where using the straightforward T/CU comparison might lead to the wrong decision: 1. Suppose the T/CU of the deal is better than the lowest T/CU of the current products. But, the volume of the deal is significantly higher than the sales of the least T/CU product, so we’d need to reduce other products' sales as well. Overall, the deal might generate less T than the T lost. 2. Suppose the lesson of the above case is learned.
Not just the T/CU of the least profitable product is examined, but also the volume. When we consider how much of the constraint’s capacity we need to free in order to accommodate the deal we find out we need to reduce the sales of a product that has a better T/CU. Does it mean we reject the deal? Not necessarily. 3. Suppose there is no capacity constraint prior to the deal. The deal certainly yields positive throughput. Does it mean we accept the deal? A volume of 12% more sales might turn a non-constraint into a bottleneck. Note that the deal might require much more capacity of a particular resource than 12%. Taking the deal means giving up some current sales, even though no capacity constraint is currently active. 4. If you are smart enough to note the new constraint, then you might find out that the least T/CU of the current sales is better than the T/CU of the deal. Does this imply anything? The new constraint has spare capacity prior to the deal. How much T does the deal generate by using the spare capacity before the need to give up some current sales becomes effective? 5. There is a capacity constraint to start with. The deal has great T/CU — much better than most current sales. But, it takes a lot of capacity of another resource and turns that one into a constraint. The calculated T/CU considers the old and known constraint. But, now another one is emerging and interacting with the old one. Trimming some products according to the priorities set by the old constraint does not necessarily reduce the load on the new one to an acceptable level. The Limitations of T/CU All the cases above can be handled by TOC, but the use of T/CU is wrong. Hence, we should re-evaluate the rationale behind the concept of throughput per constraint’s unit and verbalize when it can be applied correctly. When we have an active capacity constraint, we assume it is fully utilized. Hence, when a new demand comes in, it has to be at the expense of something else.
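Cases 1 and 2 above are easy to put into numbers (all figures hypothetical). The deal's T/CU beats the weakest current product, yet freeing enough constraint time forces us to cut a better product as well, and total T falls:

```python
# Cases 1-2: deal T/CU > weakest product's T/CU, yet the deal destroys T.
# The deal needs 600 constraint-minutes; only 200 minutes' worth of the
# weakest product can be dropped, the rest must come from a better one.
# All numbers are invented for illustration.
deal_T, deal_cu = 9_000.0, 600.0   # deal: T/CU = 15 per constraint-minute
weak_T_per_cu = 12.0               # weakest current product: 12 per minute
weak_minutes_available = 200.0     # volume of the weakest product is small
next_T_per_cu = 20.0               # the next product to trim is much better

lost_T = (weak_minutes_available * weak_T_per_cu
          + (deal_cu - weak_minutes_available) * next_T_per_cu)
delta_T = deal_T - lost_T

print(deal_T / deal_cu)  # 15.0   -- looks better than 12.0
print(delta_T)           # -1400.0 -- yet total T goes down
```

The per-unit comparison says "accept"; the whole-volume comparison says "reject".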
Assuming we are fully flexible to accept/reject any particular demand, we can prioritize all the potential demands according to the throughput they generate for one unit of the constraint (usually measured by time). This prioritization should yield the maximum throughput out of the given demand. Yet, there are two critical assumptions behind the use of T/CU. The first was stated above and it says that there is an active capacity constraint. The other is hidden: that the decision considered does not change the constraint(s). Let's consider first the role of the first assumption. What happens when no capacity constraint is active? That means that the constraint is the market demand. This also means that the T/CU is infinite. Does it mean that we cannot prefer one incoming demand over the other, provided both yield positive throughput? I know some experienced and knowledgeable TOC people who claim that in such a case one should just prefer the orders with higher T. I don’t think so. Why should we prefer any order over the other? There can be two plausible reasons. One is that we are constrained by something that we do not normally regard as a "capacity constraint", like the perceived need to focus the sales approach. If the sales force can successfully push only a limited number of products and this limits the total throughput, then we do have a constraint and priorities should be calculated accordingly. The second reason is more significant. In elevating the market constraint we should consider the identity of the emerging capacity constraint. Hence, it seems that we should base our efforts to get more markets on the relative priority dictated by the future capacity constraint. As a matter of fact, this article gives a clue as to how to do that. The problem of trying to use the T/CU of the future constraint as a priority measure is that it would push another resource to emerge as a constraint. How come?
An order with a very favorable T/CU means the capacity required from the constrained resource is low relative to the generated T. Does it mean it also takes low capacity from other resources? Not necessarily. There is a fair chance that it takes a lot of capacity from another resource. If we concentrate on high-T/CU orders we might cause another constraint, while the chosen constraint is not fully loaded. Of course, when this happens we end up with much less total throughput than we’ve anticipated and the T/CU of the accepted orders according to the real constraint is not favorable at all. The T/CU priority is valid only when our current product mix already generates an active capacity constraint. Small changes to the product mix that would add better T/CU items at the expense of some lower T/CU items would have a positive impact on the profit. However, beware of overdoing it — interactive constraints might emerge and then the reliability of the company would dive down with possible severe impact on the future market. When the strategy of the company is discussed, like what should be the future constraint, what should be the offering to the markets and in what market segments should the company maintain presence, then the T/CU is definitely not the right guide. The T-OE Concept We need a valid way to assess the economic desirability of a large market opportunity. If the T/CU does not supply a valid guide, what does? Once we solve this problem we can evaluate the larger picture of assessing the desirable product mix and which strategic constraint to choose. The impact of any decision on the bottom line can be measured by ΔT − ΔOE. In other words, by the net change in the total throughput of the company minus the net change in operating expenses. The net change refers, of course, to the change due to the decision at hand, and not only the direct impact is calculated but all the indirect impacts as well. The ΔT factor contains two parts.
The first is the direct T generated by the decision at hand. The second is possible losses due to lack of capacity. In any case we don’t want to compromise customers’ satisfaction with the company’s performance. For the current article I’m ignoring a possible third part, which is the interdependencies between sales — like when accepting an order might bring more business in the future or when refraining from satisfying current demand might cause a negative impact. That part is ignored here not because it is not important, but because it is outside the scope. Losses of T might occur because of the current constraint or a new constraint that emerges because of the decision at hand. Note that any non-constraint that loses part of its protective capacity cannot properly subordinate to the constraint(s). Hence it has turned into an interactive constraint. Predicting that a certain addition of load might turn a non-constraint into an interactive constraint is one of the difficulties that we’d look for in a good enough solution. What about the OE? Is it fixed because all the truly variable costs are included in the T? Somehow there is an impression that according to TOC the OE is truly fixed. Some of the critiques of TOC aim at this false impression. TOC never claimed that the OE is fixed when there is a significant change of volume. The definition of truly variable costs (TVC) is the costs that invariantly occur per one unit of sale. When it is not obvious that producing one additional unit generates a certain expense, that expense is not part of the T. However, when we consider an additional 100,000 units, more expenses might occur that were not included in the definition of T. For instance, two additional shifts are definitely needed for such an additional volume. Estimating the ΔT part is based on noting the load/capacity profile of the resources participating in generating the proposed market opportunity.
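The net-change test described here — change in total throughput minus change in operating expenses — can be sketched for the 12% deal. The figures, and the two options compared, are invented for illustration:

```python
# Net-change evaluation of a large deal: delta_T - delta_OE.
# Option A: trim current sales to free capacity (no purchased capacity).
# Option B: keep current sales and buy overtime instead.
# All figures are invented.
deal_direct_T = 9_000.0

T_lost_by_trimming = 10_400.0  # T of the sales we would have to drop
option_a = (deal_direct_T - T_lost_by_trimming) - 0.0  # delta_OE = 0

overtime_cost = 3_500.0        # purchased capacity -> delta_OE
option_b = (deal_direct_T - 0.0) - overtime_cost       # nothing trimmed

print(option_a, option_b)  # -1400.0 5500.0
```

Under these numbers the deal is attractive only if the extra capacity can actually be purchased — which is exactly why the options must be evaluated, not assumed.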
We should see that those resources are properly loaded: only one is loaded close to its capacity while the others have enough excess capacity, meaning the protective capacity is maintained. ΔOE is mainly determined by additional purchased capacity. There is a striking difference between using available capacity and purchasing capacity. In any organization there is a certain level of capacity that belongs to the organization and is paid for whether it is utilized or not. Beyond that given level of existing capacity the organization sometimes succeeds in purchasing additional capacity, like overtime or outsourcing. However, purchasing capacity as needed is limited, and when it is done regularly, the costs of that type of flexible capacity should be included in the definition of T. In cases where the demand goes up (or down) the organization might consider adding (reducing) regular/internal capacity, but those are usually long-term moves, which are not based on one-time demand for additional capacity. When we consider the ΔT − ΔOE for a certain decision, several options should be evaluated. The simple case is that no additional purchased capacity is considered. In this case ΔOE is zero by definition, and the main burden of calculating the ΔT is on deciding what to trim from the current demand. Next a variety of options to enlarge capacity and thus accommodate the proposed deal should be examined. When such an option is considered we need to know in what quantities we can buy that particular capacity. Machines come in certain sizes; each one of them defines a quantum of capacity. Employees also usually come in whole numbers. It’d be impossible to employ 41% of a foreman. Freelancers are more flexible, but in most cases we cannot operate based on truly variable capacity that can be matched to the demand. Evaluating different options to add capacity and/or trim less desirable demand and calculating the appropriate ΔT − ΔOE
is the TOC way for supporting those decisions that today are based on the erroneous product costing. Software support for such a decision process can be very useful. For such a process to be effective we should add some insights and new terminology that stems from it. Products and T-Generators Since *It’s Not Luck*, the message that it is important to segment the market while not segmenting the resources has become properly verbalized. This insight makes a subtle difference between a product from the operations point-of-view and a product from the marketing and sales point-of-view. Operations management looks at a product according to the bill-of-materials and routings that are associated with it. In other words, the materials purchased from the vendors and the capacity investment of various resources. Sales has a very different perspective. A "product" is a whole package of items and services for a certain price. Hence, the use of the same word for the two different perspectives is confusing and misleading. I suggest the term t-generator (throughput generator) to denote a single sale that includes any package of physical products and services for a certain market for a given price. So, selling a single copy of a book to an incoming customer for $20, selling the same copy through the Internet to an overseas customer for $27.95 and selling 500 copies to a chain for $6,000 are three different t-generators, while the product is the same one: a copy of a certain book. Current Global Activity Any sales decision should be evaluated in reference to the global sales taking place at the same time. A decision to sell 100 books to a certain APICS chapter for $1,100 including shipping can be either good or bad depending on the other sales and the load/capacity profile that it imposes on the current state.
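The book example can be written down directly: one product, three t-generators, each with its own T. The TVC of $2 per copy is an assumed figure:

```python
# One product (a copy of a certain book), three t-generators.
tvc_per_copy = 2.0  # assumed truly-variable cost per copy

# (description, copies, total price) -- the three sales from the text
t_generators = [
    ("walk-in customer, single copy", 1, 20.00),
    ("internet sale, overseas",       1, 27.95),
    ("chain deal, 500 copies",      500, 6000.00),
]

for name, qty, price in t_generators:
    T = price - qty * tvc_per_copy  # throughput of this t-generator
    print(f"{name}: T = {T:.2f}")
```

Same bill-of-materials and routing in all three cases; what differs is the package, the market, and the price — hence the T.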
Hence, in order to evaluate large decisions, and possibly a whole array of decisions, we have to have the current global activity defined as all the t-generators that are sold now and the resulting load profile on existing resources and the T, I and OE generated. It is not enough to note the current capacity constraint. In order to evaluate the possible emergence of new constraints and the possibility of adding capacity, we need to have the current global activity as a reference and the basis for further calculations. Critical Resources The information needed for larger decisions and significant changes to the product mix includes maintaining the current global activity: all the t-generators sold in a period of time and their impact on the load/capacity profile of the resources. This is the time to think whether we really wish to evaluate all the resources. We need that information in order to assess whether, due to the decision at hand, a non-constraint might lose its protective capacity. This phenomenon cannot happen to every resource. The vast majority of the resources have much more spare capacity and they won’t become constraints before the others. So, instead of evaluating the capacity profile of all the resources, there is a need to look only at very few — those that might turn out to be constraints if the product mix changes significantly. The Load Profile This is probably the most difficult concept. It involves not only how much the constraint can be loaded without compromising customer satisfaction, but also how much the other critical resources can be loaded without turning into interactive constraints. Buffer management is targeted to monitor that sensitive relationship, but it only gives us feedback on an actual state. I think no internal capacity constraint should be loaded to 100% of its available capacity, because a fully loaded constraint means lousy due-date performance.
Note, if it is possible to add overtime on the constraint, then loading it to 100% (without the overtime) only means there is still available capacity when needed. But, the tricky question relates to the most loaded non-constraints. How much protective capacity should we provide? It is tricky because when most resources have more excess capacity it is possible to squeeze even more of the constraint and also from the "second-most-loaded-resource". We probably cannot get any quantitative formula for that. But, it does not mean we know nothing. First, we do know that the "second-most-loaded-resource" needs to be less loaded than the constraint. And we do have the intuition of the production manager. Yes, there is a gray area where even the most experienced production manager is not certain whether the load profile is enough for good due-date performance. Let’s agree that in such cases we’d be reluctant to take chances. Still, we can fix a maximum load on the critical resources that are non-constraints to ensure adequate protective capacity. Certainly the person with the best intuition should dictate such a maximum load. Remember that in TOC intuition is a legitimate input to the decision making process. A TOC Decision Support System The full replacement of the flawed cost accounting for decision-making is by bringing all the pieces together to provide a TOC decision support. For that purpose it is not enough to know the current constraint and how much T is generated per one unit of the constraint’s capacity. We need the following data items as inputs for the supporting information: ? A representative list of the current sales or short-term predicted sales ? The list is organized as t-generators: various packages of the basic products along with their actual or predicted quantity and the price tag ? For every product the amount of truly variable costs (TVC) that is associated with it ? Based on the above, every t-generator appears with its associated T ? 
- A list of the basic products and services as understood by the operational system
- The truly variable costs that are associated with every unit of product or service
- A list of the critical resources. Only resources that have a fair chance of becoming constraints should appear.
- For every resource, its relative capacity investment for processing each product
- Various ways where capacity can be added or reduced. Every way should note the minimum unit of capacity that can be added and its cost. For instance, overtime may be such a way. Suppose the minimum quantity is one hour. Adding an employee may add a full man-month of capacity for a different level of cost.
- The current monthly capacity available for that resource
- User-defined variables
- The maximum load on the constraint the company's performance can tolerate
- The maximum load on a non-constraint the operational system can tolerate and still perform well

Based on the above inputs, the impact can be fairly estimated for every market opportunity that is considered. The algorithm adds the opportunity to the full current list of sales and calculates the total T generated and the load profile. The user checks the load profile and either confirms it is doable, or trims some of the less desirable t-generators, or adds capacity. The user takes responsibility that the actions are realistic and have no ramifications that are not considered. Then recalculation of T and OE takes place, where the user looks for a total T-OE that is larger than before. It is possible to let the software look for the optimal actions that would yield the maximum T-OE, but be careful: don't expect that every action suggested by the software is possible in reality. For instance, the software may suggest trimming the sales of a certain t-generator by 50%, but if that t-generator is one deal then in reality you cannot just trim half of it.
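As a rough illustration of the "what-if" procedure described here, the following sketch recomputes total T and the load profile when a new market opportunity is added to the current sales list. All data structures, names, and numbers are hypothetical assumptions for illustration, not from the original text:

```python
def load_profile(t_generators, resources):
    """Hours required on each critical resource for a given sales mix."""
    load = {name: 0.0 for name in resources}
    for g in t_generators:
        for name, hours_per_unit in g["load"].items():
            load[name] += hours_per_unit * g["qty"]
    return load

def evaluate(t_generators, resources, max_load=0.90):
    """Total T and a feasibility flag for the proposed sales mix.

    max_load is the user-defined maximum fraction of capacity that may
    be consumed (the protective-capacity threshold from the text)."""
    total_T = sum((g["price"] - g["tvc"]) * g["qty"] for g in t_generators)
    load = load_profile(t_generators, resources)
    feasible = all(load[n] <= max_load * cap for n, cap in resources.items())
    return total_T, load, feasible

# Monthly hours available on the few critical resources (assumed figures)
resources = {"CCR": 160.0, "second_most_loaded": 160.0}

# Current sales, organized as t-generators: quantity, price, TVC,
# and capacity consumed per unit on each critical resource
current = [
    {"name": "A", "qty": 100, "price": 50.0, "tvc": 20.0,
     "load": {"CCR": 0.5, "second_most_loaded": 0.25}},
]

# A new market opportunity under consideration
opportunity = {"name": "B", "qty": 80, "price": 70.0, "tvc": 30.0,
               "load": {"CCR": 1.0, "second_most_loaded": 0.5}}

T_before, _, _ = evaluate(current, resources)
T_after, load, ok = evaluate(current + [opportunity], resources)
# With these numbers T rises from 3000 to 6200 and the profile stays
# within the 90% cap on both critical resources
print(T_before, T_after, load, ok)
```

Here the user would inspect the resulting load profile and either accept the mix, trim less desirable t-generators, or add capacity, as the text describes.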
Recalculation of the total T and the load profile is quite easy for software to do but is cumbersome to do manually. The OE is touched only when changes in the capacity levels are considered. These "what-if" capabilities link together Operations and Sales. A more strategic analysis that looks ahead for the "should-be constraint" needs to involve Marketing in the process. This gives substance to any "Operations and Sales Planning" process and establishes the TOC view as a real replacement for cost accounting as a decision support system. The direction of the solution suggested here is not truly precise, as all that is checked is the average load for a period of time and not the actual distribution of the load throughout the period. It seems to me that for generic decision making this is good enough. Once we do have the exact firm orders at hand, then DBR software would load the constraint according to a finite-loading algorithm. This direction of solution also leads to strategy planning, where all the potential market of the company is examined against several optional strategic constraints. The idea here is to merge the market assessments with possible different load profiles, which dictate the amount of capacity, including excess capacity, needed for such a product mix. This issue is beyond the scope here, as the obvious negative branch is how to deal with the unreliability of predicting the potential market. This issue is certainly non-trivial, but I believe a method can be developed that is much better than the current support for strategy planning. It is not particularly difficult to develop a software package that would meet the above generic requirements. DBR packages are natural candidates because they contain all the operational data needed. What needs to be added is the financial data and the "what-if" capabilities. Certainly ERP packages contain all the data elements needed. The problem for software companies is to assess the market for such a package.
Such a package certainly calls for users who are quite knowledgeable in TOC and ready to develop their intuition in order to better exploit their companies' constraints. At least one software company has developed a prototype based on the above principles, but being a non-TOC software company they wonder what the market for their "TOC product" looks like.

----
(c) Copyright 2001 TOCreview. All rights reserved. Eli Schragenheim, CEO, Elyakim Management Systems, elyakim@netvision.net.il. "Eli S" is one of the pioneers of TOC and is recognized as an authority in ERP-related simulations. He co-authored "Necessary but not Sufficient" with Eli Goldratt and Carol Ptak.

+( Throughput Dollar Days TDD

Date: Tue, 13 Mar 2001 06:50:49 -0500
From: Brian Potter

Perhaps confusion STILL reigns regarding just what money-time metrics (like $-days) measure. I have noticed proposals for these metrics in two applications:

1. Estimating the relative impact of different undesirable scheduling alternatives (e.g., _The Haystack Syndrome_) where the delivery schedule customers want is infeasible and one must begin considering which deliveries will be how late. The proposal involves computing the dot product of money late times days late (or negative time for money arriving early) over all late deliveries. The assertion was that the schedule minimizing that dot product minimized financial damage to the shipper. Said dot product would include early delivery bonuses and late delivery penalties altered by the proposed schedule changes. Naturally, issues like the customers' tolerance for late deliveries have a major impact on planning of this kind, too.

2. Capital investment planning (e.g., the last chapter of _Critical Chain_): Each project would have an investment weight equal to the dot product of its cash flows (negative cash for investments and positive cash for throughput, each multiplied by time from the cash flow until the end of the project).
The most desirable capital project set would be the feasible project set with the most positive total investment weight. Note that the money-time metrics measure neither money nor time. Money-time metrics measure INVESTMENT (or loss of ability to invest). Think of an investment as a mass of money moving through time. Money-time metrics measure the "area" (money "high" by time "wide") swept by the investment. Investments which sweep equal "areas" are in some important senses (carrying costs among them) "equal." For some applications (like the scheduling one), money-time offers some simplicity advantages over NPV ...
- Independence from a discount rate (no need to squabble over the "cost of capital")
- Simplicity of a dot product relative to an NPV calculation
Naturally, one may wish to question the validity of a money-time calculation when either ...
- Multiple discount rates may apply to different cash flows ... or ...
- A long planning horizon may create significantly different weights for cash flows at significantly different times.

Original Message
Date: Mon, 12 Mar 2001 13:41:44 -0500
From: "Dr. Clarence J. Maday"

This is a matter of units. T is $/day and throughput-days (TDD, but it should really be TD) is $. If T = $1,000 per day and if we lose 10 days production, then lost TD = $10,000. A late order is a little different. Using the example of a $1,000 order being late 10 days, we have lost the use of the $1,000 for 10 days or we have had to pay interest on it. Performance measures are not as simple as T, I, and OE would lead you to believe. Whatever, they are not that complicated either. Just make sure you know what information you really need. Follow the money.

> Norm Henry wrote:
> >
> > Time period 1: One order worth $1,000 is late 10 days.
> > Time period 2: One order worth $5,000 is late 1 day.
> > Time period 3: Ten orders, $1,000 each are one day late.
> > Time period 4: Five orders, $1,000 each are one day late.
> >
> > Now one wants to compare how the company is doing from period to period. How would you compare the different time periods with each other?
> >
> > Using TDD one can equate one period to another on equal terms.
> > Time period 1: 10,000 DD
> > Time period 2: 5,000 DD
> > Time period 3: 10,000 DD (hey, this is the same as period 1)
> > Time period 4: 5,000 DD (hey, this is the same as period 2)
> >
> > Now we can show that time period 2 got much better than time period 1. Time period 3 then went back to equal with time period 1, and then time period 4 improved again to the level of time period 2.
> >
> > In addition to enabling comparisons between periods, however, each measurement identifies the financial impact of the value of lateness.
> >
> > Norman Henry
>
> The financial impact of being late is, well, you're late. There is absolutely zero correlation between TDD and the financial impact of lateness. I.e., you've shown the same TDD for periods 1 and 3, and then for periods 2 and 4. Yet the financial impact is vastly different. Periods 2 and 4 both represent $5,000 tied up...late delivery...Throughput you stand to realize by getting the orders out the door. Periods 1 and 3, on the other hand, represent $1,000 and $10,000 of orders tied up or late...the Throughput you stand to realize by getting the orders out the door is not equal in the two cases.
>
> Now what is your risk in late penalties, or future lost orders by being late? Again, no correlation with TDD; more important is days late. 3 of your 4 examples were only a day late... perhaps some peeved customers, but you probably won't lose them over this one order. Your first example had the order late 10 days - you may very well be at risk of losing future business on this one, or at risk for a high penalty.
> In fact, the ONLY way I see TDD as being at all representative of anything close to financial impact is if there are uniform penalties imposed on late orders such that there is an x% penalty for every day late. In this case, the days would be multiplicative, and the % would factor on the dollar value of the item.
>
> So, if there were a 1% penalty each day on every late order, the penalties in your above cases would be:
> 1) 0.01/TDD * $10,000 TDD = $100
> 2) 0.01/TDD * $ 5,000 TDD = $ 50
> 3) 0.01/TDD * $10,000 TDD = $100
> 4) 0.01/TDD * $ 5,000 TDD = $ 50
>
> HEY! A real value in knowing TDD. But again, you cannot derive anything else useful from this in regards to either how fast you are getting goods out the door, or how you are doing keeping customers satisfied.

---
From: Norm Henry
Date: Tue, 13 Mar 2001 07:34:50 -0800

I think perhaps you are still missing the dollar DAYS concept. When you multiplied $1,000 x 10 days you indicated that this is not $10,000 dollar-days. You are correct. It is not 10,000 "dollars" dollar-days. It is 10,000 "dollar-days". It is not a matter of calculating the interest on the period. Instead, dollar days are a way of providing equivalent units (not dollars, but dollar-days) for comparison. It has nothing to do with the interest rate. If you loan me $1,000 for 10 years, this is the same as if you loaned me $10,000 for one year. We can determine this by equalizing the two, which can be done by using dollar-days. We now have comparable units. THEN you can apply an interest rate to a dollar-day. If you know the interest value of a day you can apply this to the dollar days. The concept of dollar days is not intended to show you the interest amount. It is to provide comparable units for comparison purposes. You can then apply the interest amount if you are dealing with an investment, but you need not do so simply in order to determine which scenario is preferable. The dollar days will tell you this.
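The arithmetic in this exchange, TDD as the sum of (order value x days late) with a uniform daily penalty then applied per dollar-day, can be sketched as follows. The period data are the figures quoted in the thread above:

```python
def tdd(orders):
    """Throughput-dollar-days: orders is a list of (dollar value, days late) pairs."""
    return sum(value * days for value, days in orders)

# Norm Henry's four time periods, as quoted in the thread
periods = {
    1: [(1000, 10)],       # one $1,000 order, 10 days late  -> 10,000 DD
    2: [(5000, 1)],        # one $5,000 order, 1 day late    ->  5,000 DD
    3: [(1000, 1)] * 10,   # ten $1,000 orders, 1 day late   -> 10,000 DD
    4: [(1000, 1)] * 5,    # five $1,000 orders, 1 day late  ->  5,000 DD
}

# Under a uniform 1%-per-day penalty, the penalty is 0.01 x TDD,
# giving $100, $50, $100, $50 for the four periods as in the thread
for p, orders in periods.items():
    print(p, tdd(orders), 0.01 * tdd(orders))
```

This makes the point of the exchange concrete: periods 1 and 3 share the same TDD (and the same uniform penalty) even though their order values and lateness risks differ greatly.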
-Original Message- From: Tom Turton [mailto:tturton@ntx.waymark.net] Sent: Monday, March 12, 2001 1:55 PM Dr Maday, Now YOUR explanation I can understand and agree with! T (= $1000 / day) x Days (10 days) = $10,000 Payment Owed (= $1000) x Days (10 days) == ? Not $10,000 dollar-days, but something more like $1,000(1+i)^10 (late interest/penalty, where 'i' is some daily rate) --- Date: Wed, 14 Mar 2001 16:17:02 -0500 From: "Dr. Clarence J. Maday" It's a matter of units and money. The term Throughput-Dollar-Days is unfortunate. It should have been Throughput-Days (TD). T is still $/day and therefore TD is $. Before we take the next step, let's consider an example. If we take care of all sales and raw material purchases on one day of the month (30 days) and our net is $1,000, then T for that day is $1,000/day. For the other 29 days, T = 0. For the math majors out there, we have taken the first step to describe the Dirac delta function. Next, we integrate the delta function (giving us the step function) with respect to time (over 30 days) to get money on hand, TD, or $1,000. We are not considering OE for this example. Now, let's take the next step and consider the case of a late delivery. We have two issues here, a performance measure and money lost. Fortunately both can be handled by carrying out one more integration with respect to time to get the ramp function or Throughput-Days-Days or T(D)^2 with units $-days. This we can use to measure the performance of the organization and calculate lost interest. In a similar vein, inventory ($) and time (D) would be combined as Inventory Days with units $-days. None of the above addresses intangibles such as customer perceptions or customer patience. Their effect on the bottom line can be estimated, however. That's another story. --- From: "Potter, Brian (James B.)" To: "CM SIG List" Date: Wed, 6 Jun 2001 09:01:51 -0400 You have captured the essence of both the metric and appropriate responses to it. 
In cases where the potential to earn early delivery bonuses (higher "T" before a certain date) exists or late delivery penalties may lower "T" after a certain date, the calculation becomes more complex. Greater calculation complexity does not alter the principles outlined in your example. If you capture the impact bonuses and penalties have on "T" when you compute TDD, you will be in good shape.

-----Original Message-----
From: Aaron M. Keopple [mailto:aaron.keopple@apn-inc.net]
Sent: Wednesday, June 06, 2001 9:55 AM

The way that I understand TDD is as follows. Example: I have several late shipments. Shipment A is late by 10 days and has a T value of $10. Shipment B is late by 1 day and has a T value of $15. Shipment C is late by 5 days and has a T value of $25. The TDD per order is as follows:
Shipment A - $100 TDD
Shipment B - $15 TDD
Shipment C - $125 TDD
If our goal is to make more money now and in the future, we must focus on shipping shipment C first, then shipment A and then shipment B. Is my understanding of TDD correct, or has some new technology been added to the measure?

Eli Schragenheim
Sent: Tuesday, June 05, 2001 10:54 PM

Aaron, I suggest you use the efforts you are putting into a real-time T measure to develop the TDD (Throughput-Dollar-Days) measure instead. This measure is focused on timely shipments, points to where special actions need to take place, and gives a notion of the damage. If T per month seems too infrequent, do it twice a month. This has to be your judgement. Use the TDD to achieve the right focus of your people when things go somewhat wrong. This is my advice; the responsibility, fortunately, is all yours.

"Aaron M. Keopple" wrote:
>
> That is a good negative branch that you surface about morale fluctuating with the daily numbers. We do have substantial fluctuations on a daily basis. T on a monthly basis seems like it is too long.
> My intention was to have the measure provide immediate communication on how the actions of each employee impacted T. Maybe weekly or bi-weekly T measures would still be valid.
>
> To entertain your other point, we do, and will continue to, measure the on-time shipment of each line item ordered. If an employee chose to work on the wrong order at the wrong time in order to increase T, the on-time shipment measure would be negatively impacted. I continue to stress with the employees that our customers really do not care about the amount of T we generate and that our customers truly care about on-time deliveries. Without that there is no T to be generated!!
>
> -----Original Message-----
> Sent: Tuesday, June 05, 2001 5:24 AM
> Subject: [cmsig] Re: "Truly" Variable Expenses
>
> Aaron,
>
> As you described it, your "consumables" are truly variable expenses. Hence, including them in the calculation of T is the right decision.
>
> However, I would like to argue with you about the necessity of having real-time T. My concern is that your people may try too hard to increase their daily (very short-term) T at the expense of long-term T.
>
> Suppose the operator of your CCR has to decide which order out of two to do first. Order 1 takes 3 days, generates $10,000 T and is due in 4 days. Order 2 takes 2 days, generates $25,000 T and is due in 6 days. The operations downstream of the CCR are capable of processing any order in one day.
>
> Certainly the CCR should process first order 1, then order 2, with a good chance to deliver both on time. As a matter of fact, I believe the amount of T for each order is NOT relevant at this stage. The T/CU considerations should have been done when the commitment was made. Once the commitment is made, it is not important.
>
> We know how daily data can fluctuate. Many people do not really understand variation. Do you want the morale of your people to fluctuate with the daily T?
> What actions will be done better by providing daily T numbers? Isn't monthly T a good enough measure?
>
> Throughput-dollar-days is a local measurement designed to let people comprehend the critical importance of a DELAYED order. This is an excellent measurement for real time. But daily generated T is too confusing a parameter.
>
> Just out of curiosity, do you see a significant 'end of the quarter' peak of throughput?
>
> Eli Schragenheim
>
> "Aaron M. Keopple" wrote:
> >
> > I am in the process of creating a real-time scoreboard for our employees. My intention is to provide as close to real-time Throughput to the employees as possible. We pay our people a quarterly bonus based on the performance of the company, and as we all know the performance of the company is based on T.
> >
> > My journey is to create that "real time" throughput measure so that people can see the results of their actions.
> >
> > We know exactly what was shipped each day, we know the exact material cost that went into the product that shipped, and we also know the amount of freight to ship the order. Taking the dollar amount shipped each day and subtracting the amount of material and freight used to ship the product gives me a fairly decent throughput number. However, in our business, a certain product line uses a great deal of "consumables" to produce. These are truly variable expenses in that when the volume of that product line increases, these variable expenses increase. There are some variable expenses that are a choopchik, and I intend to place those in a "fixed expense" category.
> >
> > I would like to include the significant variable expenses in my throughput per day/week/month measure without creating a management nightmare. I am open to suggestions.

---
From: "Aaron M. Keopple"
Subject: [cmsig] Re: "Truly" Variable Expenses
Date: Wed, 6 Jun 2001 12:23:20 -0700

I am not seeing the benefit of TDD.
Using our Constraints Management approach to manufacturing we consistently ship orders 99+% on time. If we manage our resource, replenishment and shipping buffers effectively, then we will ship on time. I also have some negative branches with the TDD measure. I think it may tend to push the organization to deliver high-T orders instead of pushing the organization to ship orders that belong to customers who globally provide high T. In other words, if we get an order from somewhere other than our primary market and it is very high T, am I willing to push that order through while jeopardizing shipments to my primary market? We have the case, as I am sure others of you do, that some of our customers view us as a one-stop shop. If they cannot acquire the entire product line from us, they will order none. In order to get all of those customers' orders, we may have to sacrifice some T on some orders. But globally, those customers provide extensive T.

-----Original Message-----
Sent: Wednesday, June 06, 2001 6:16 AM

Aaron, you understand it right. I'd also adopt Brian's suggestion to count TDD for those cases where you get a bonus for early shipments. I'd like to count "late orders" based on 'zone 1' penetration of the buffer. That means you start to look at the damage some time before it is really a damage, and then you have a chance to push the order and be on time. No new technology has been applied to TDD. Its use has been extended to measuring vendors, but that is not what we are talking about now. What you further do is allocate the TDD to the work centers that hold the last piece of the order. That makes the TDD a local performance monthly or weekly measure - something you expect to be able to reduce in the future.

Eli S.

---
From: "J Caspari"
Subject: [cmsig] TDD Metric (was: "Truly" Variable Expenses)
Date: Wed, 6 Jun 2001 12:45:13 -0400

On 6-6-2001 Aaron wrote (subj: "Truly" Variable Expenses): << The way that I understand TDD is as follows. Example: I have several late shipments.
Shipment A is late by 10 days and has a T value of $10. Shipment B is late by 1 day and has a T value of $15. Shipment C is late by 5 days and has a T value of $25. The TDD per order is as follows:
Shipment A - $100 TDD
Shipment B - $15 TDD
Shipment C - $125 TDD
If our goal is to make more money now and in the future, we must focus on shipping shipment C first, then shipment A and then shipment B. Is my understanding of TDD correct, or has some new technology been added to the measure? >>

To which Eli S replied, in part: << You understand it right. >>

However, on 5-18-2001 Tony had written (subj: Bummed Out): << It's not new. It's rather old. What's new is that only now am I able to describe it succinctly: "Late Work First." >> Which seems to be inconsistent with the use of DD (dollar-day) metrics.

Eli S also wrote: << I'd like to count "late orders" based on 'zone 1' penetration of the buffer. That means you start to look at the damage some time before it is really a damage, and then you have a chance to push the order and be on time. >>

All of which leaves me wondering why you need a routinely calculated TDD metric to guide actions when you have a sufficiently robust buffer management system to routinely identify 'zone 1' penetration.

Eli S. also wrote: << I'd also adopt Brian's suggestion to count TDD also for those cases where you get a bonus for early shipments. >>

I would like to think about this idea a little further. If we have effective buffer management in place, then our buffers have been sized appropriately based on our experience. Now we have an order that has an early completion bonus, so we are going to give it "head of line" privileges. Isn't that likely to put the remainder of the orders (those for which the completion date has been pushed back to allow the priority processing of the bonus order) in danger of being late?
I have the feeling that the time to deal with the early completion bonus is in the original contract negotiation stage, where your customer has an idea of how long the contract should take and you believe that, due to your superior constraint management based system, you can complete the order much faster. --- Date: Fri, 15 Jun 2001 13:52:48 -0400 From: "Dr. Clarence J. Maday" Subject: [cmsig] Optimality Discussions about T and TDD overlook the inconsistency in the units assigned to throughput. In the beginning, the units of T were ($ generated)/time. When TDD was described in The Haystack Syndrome the unit of throughput became $ generated or $ not generated. Time was not involved. (Perhaps we should invent new names such as Unit Throughput or Order Throughput. Or??) This practice was carried over into a recent TDD Metric thread. This can lead to some incorrect decisions about how to schedule and fill late orders. Certainly the schedule should be optimal in terms of a Performance Metric or Performance Index to maximize profits. In this thread we consider an example which starts on page 179 of The Next Phase of Total Quality Management by Robert E. Stein. The tables on page 179 and at the top of page 180 are OK but shouldn't be used for making decisions. The lower table on page 180, however, does not consider optimization correctly. The author should have used Richard Bellman's Principle of Optimality. In non-mathematical terms the Principle states " No matter how you got to where you are, the next and future steps should be optimal with respect to your Performance Metric (Index)". This was the foundation for Dynamic Programming. It is also consistent with Pontryagin's Maximum Principle (a supercharged version of the Calculus of Variations). Back to the example. Two products, A and B, are made on the CCR. There is no market constraint. Product A is made with no defects and returns $2.50/CCR minute. 
Product B, however, appears to return $3.33/CCR minute but has a 10% reject rate so that the actual return is $3.00/CCR minute for the time required to produce a batch of 100 or more with 10% rejects. At this point we must make a decision about the rejects. Do we rework (no rejects here) them and realize $3.33/CCR, scrap them and incur a unit penalty of $25 and then start with new raw material (and the 10% reject rate) with a return of $2.50/CCR minute, or make Product A with a $2.50/CCR minute return. The answer is obvious. Rework the rejects!! Stein incorrectly recommends the scrap route on the basis of calculations that average what has gone on before. The problem becomes more interesting when we consider late shipments. That will be considered in another discussion. Whatever, the issue is easily resolved by plotting cash-on-hand vs time. This beats heuristics, arm-waving or best guesses any time. The above situation is small enough to do by hand. For larger systems software can be used. If software is not available to make these plots, it isn't too difficult to write such programs. --- Date: Mon, 2 Jul 2001 09:59:33 -0700 (PDT) From: Todd Canedy Here is my first try at translating manufacturing concepts to service concepts. 
> For reliability he suggests to use 'Throughput Dollar Days':
> = The sum of (sales dollars) x (days of delay)

Throughput Dollar Days = The sum of (contract dollars) x (days past due date)

> For effectiveness he suggests to use 'Inventory Dollar Days':
> = The sum of (inventory dollars) x (days on hand)

Resource Dollar Days = The sum of (contract man-day dollars) x (man-days on hand)

---
From: "Potter, Brian (James B.)"
Date: Mon, 2 Jul 2001 16:24:20 -0400

Assuming TDD should estimate adverse impacts from reliability failures and RDD (Resource Dollar Days) should estimate adverse impacts of carrying too much "inventory" (consultants without revenue work), I think your definitions may have flaws caused by failure to consider lost-opportunity impacts in the TDD calculation ... What do you think of ...

TDD = Sum over all contracts( contract_revenue x days_past_due_date )
    + Sum over all lost contracts( expected_revenue_from_lost_contract
      x if[ contract_lost_for_want_of_consultants then 1 else 0 ]
      x days_until_consultant_availability )
(adds the lost-contract impact of unavailable consulting resources)

RDD = Sum over all consultants( daily_consultant_salary
      x days_consultant_performing_nonrevenue_work )
(same as Todd's metric, I think)

Note that these definitions put TDD and RDD into direct conflict, because available consultants will increase RDD and a lack of available consultants will increase TDD. Striving toward minimizing "TDD + RDD" will explicitly surface the conflict. This conflict parallels a similar (but perhaps less sharp) conflict between IDD and TDD. "Extra" consultants will buffer against opportunities lost (or deferred) by delays in other contracts, increase the flexibility of the resource pool (increase the number of kinds of work the firm can undertake), increase capacity (increase revenue-earning potential), and increase OE (reflected by a rise in RDD).
The direct conflict between capacity and OE is sharper than the conflict between inventory and timeliness one may find in a production environment. --- From: "Tony Rizzo" Subject: [cmsig] Re: Braess Date: Tue, 10 Jul 2001 14:37:21 -0400 I don't disagree with you. If every link in the value stream wins, then every link of the value stream can win almost indefinitely. But, there's one point that we haven't taken into account yet. It'll bite us badly, if we ignore it. It is time, or, rather, it is the time constant of the value stream. If the time constant of the value stream is large, and if I agree to defer my revenue until the entire value stream generates revenue, then I am exposed to a severe cashflow crunch. Will anyone defer my costs along with my revenue? Further, what happens to my revenue, if somebody downstream from me screws up? Can anyone assure me that no one will screw up? Is anyone willing to insure my investment? Finally, what happens to me if, after having contributed to the development of a new product or service and contributed to the creation of a new value stream, a downstream partner decides to go to somebody else for the component that I'm supplying? If that happens, then I will have spent a good deal of capacity only to have it generate zero return. I'm having second thoughts about playing in your value stream, with these rules. If you want me to play, you'll have to assure me that I won't get hurt. ----- Original Message ----- From: "Potter, Brian (James B.)" Sent: Tuesday, July 10, 2001 2:10 PM > Tony, > > That is the position I voice. When links of the chain begin making money by lightening one another's wallets, the focus is off. The total supply chain competes with other total supply chains which deliver competing end products to those who do SOMETHING other than (perhaps after reconfiguring, combining, or mixing) reselling the goods. 
NOBODY in the entire supply chain eats unless enough end customers open their wallets deeply enough to pay the complete supply chain.
>
> Failure to create a win for the end customer AND EVERY link in the chain will eventually break the chain at some link which does not win. With enough adequate replacement links near to hand, losing a link may not be so bad for the chain, but it WILL be disruptive. Links may belong to more than one chain. Such multiple affiliations may drive divided focus, which may also be a potential loss for the chain. Dealing with the total supply chain is not easy (probably why you floated the trial balloon).
>
> ----- Original Message -----
> From: "Brian Potter"
> Sent: Tuesday, July 10, 2001 1:15 AM
>
> > Jim,
> >
> > I see your point. On one level "B" and "C" in the first cloud (below) appear to belong at "D" and "D'" as in the second cloud (further below). I put the statements at "B" and "C" in the first cloud precisely because when you put them at "D" and "D'" the result DOES closely resemble the false conflict between the "best" short-run action and the "best" long-run action. That seemed too easy for this situation. When a for-profit enterprise lives in the middle of a complex supply chain (e.g., an automotive Tier I or Tier II supplier [ask H. P. Staber in Salzburg if you want a first-person account]), the whole chain must win for any link to win AND every link must win for the whole chain to win.
> >
> > Reasoning on that logic, I assumed that the "short run" versus "long run" conflict was ALREADY BROKEN (at D-D') in recognition of the need to do both. With that foundation, I promoted "serve the chain" and "serve the organization" to the "B" and "C" positions as in the first cloud. I'm STILL not sure which representation makes more sense, but from my perspective, it feels right to attack D-D' either way.
> > > > Tony, > > > > In consultation with Kevin Conflict (soon to be a _Marvel Comics_ > superhero), I offer a few possible assumptions: > > > > A-B: As a member of the supply chain the organization has a vested > > interest in the continued health of the total chain. If any > > link in the chain fails, that failure puts the entire chain > > (including this specific enterprise) at risk. Thus, long run success > > depends upon the success of "enough" supply chains of which > > the organization is a member. > > > > A-C: The organization has obligations to its owners, employees, > > suppliers, customers who are NOT part of any given supply > > chain, and other stakeholders. Overbalancing the firm so > > that subordination to any supply chain places the firm at > > risk endangers the firm, its owners, its employees, and > > other supply chains relying on the firm. > > > > B-D: Must subordinate to the chain to attain "B." > > > > C-D': Must look out for number one to attain "C." > > > > D-D': Looking out for the firm conflicts with "looking out for > > the complete supply chain." > > > > The classical approach (as often practiced by automotive OEMs) suggests > hammering one's suppliers for every concession (low price, near perfect > quality, JiT delivery (rain, shine, snow, sleet, hail, hurricane, ..., or > ...; no excuses), ... Similarly, ask the customers to absorb every "cost > increase," settle for what one can deliver, and pry the maximum piece price > out of one's customers. Obviously, if every link in the chain acts this way, > the whole chain will eventually fall. If any single link acts this way, it > might "thrive" briefly until its greed dragged the entire chain into the > sewer. > > > > > > >>>>>>>> Original Message <<<<<<<< > > Subject: [cmsig] Re: Braess > > Date: Tue, 10 Jul 2001 00:24:34 +0100 > > From: "Jim Bowles" > > Brian > > > > Not comfortable with B and C as stated. They appear to be in conflict.
It's > > the classic global versus local cloud with these as D and D' > > > > Jim Bowles > > > > OK, I'll bite. Here are two initial efforts ... > > > > > > D: Act to serve the end customer(s) > > using products or services this > > organization delivers as one > > component of a system sold by > > a customer, a customer of a > > customer, ..., or a customer > > of a customer of a customer ... > > > > B: Serve the interests of the total > > supply chain(s) of which this > > organization is one segment. > > > > A: Make more money now and in the future, > > meet the other NCs, too. > > > > C: Serve the interests of this > > organization. > > > > D': Interact with direct suppliers > > and direct customers considering > > the best continuing interests of > > the organization. > > > > > > ... or ... > > > > > > D: Serve the interests of the total > > supply chain(s) of which this > > organization is one segment. > > > > B: Operate as a link in a successful > > supply chain. > > > > A: Make more money now and in the future, > > meet the other NCs, too. > > > > C: Operate as a successful enterprise. > > > > D': Serve the interests of this > > organization. > > > > > > My first impression is that we can break both above clouds at D-D'. If > > this is in fact the case, there is no intrinsic conflict. We face only the > > perception of a conflict when one fails to take a sufficiently broad view > or > > a view with an adequately long planning horizon. > > --- From: Norm Henry Subject: [cmsig] RE: Dollar-Days and Net Profit Date: Wed, 11 Jul 2001 09:54:05 -0700 John, No. NP (T-OE) and ROI ((T-OE)/I) are still the measurements for assessing the overall performance of the organization. While we can measure the overall performance with NP and ROI, it is helpful to have control measurements to measure the reliability and effectiveness of our operations in order to achieve the desired NP and ROI.
To improve NP and ROI we should measure what is NOT done properly. The things that should have been done but were not are an issue of RELIABILITY. The things that should not have been done but were are an issue of EFFECTIVENESS. We can measure reliability (i.e., due-date performance) by using Throughput Dollar Days. We can measure effectiveness (i.e., making excess inventory) by using Inventory Dollar Days. Thus the dollar days measurements are simply useful monitors to help us check our reliability and effectiveness. We need reliability and effectiveness in order to achieve the desired NP and ROI which are the overall system measurements. The above thoughts are not original with me. This is based on my understanding, which is hopefully fairly close to being valid, of what Eli Goldratt presented at TOC World 2001 last month. +( throughput for R&D activities From: "larry leach" Date: Tue, 08 Jan 2002 08:09:11 -0600 > > The question concerns the definition of throughput on the design and > > development effort. Throughput for a development project equals the expected business benefit of the project; i.e. the net operating profit (T of the product) minus cost of the development project. T does not start for a development project until the project is in distribution and making money for the company; thus the need for speed (and for assuring that the project scope goes all the way to making money). The root problem many companies have with internal projects is that they do not quantify the expected business benefit. You must do so. (Units may be different for not-for-profits, but the same thinking applies.) If you are stuck in a situation where you do not have the benefit estimates, as a minimum you can assume that the benefit must exceed the project cost (otherwise, you should not be doing the project.) Thus, you can use the project budget as your initial (hopefully minimum) estimate of the expected T of the project.
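A minimal sketch of the two dollar-days monitors described above. The definitions assumed here — throughput value of an overdue order times days late, and inventory value times days in the system — are one common reading of the measures, not a verbatim specification:

```python
from datetime import date

def throughput_dollar_days(orders, today):
    """Reliability monitor: throughput value x days late, summed over overdue orders."""
    return sum(o["t_value"] * (today - o["due"]).days
               for o in orders if today > o["due"])

def inventory_dollar_days(stock, today):
    """Effectiveness monitor: inventory value x days held, summed over stock on hand."""
    return sum(s["value"] * (today - s["arrived"]).days for s in stock)

# Hypothetical data for illustration only.
orders = [{"t_value": 5000, "due": date(2002, 1, 6)},
          {"t_value": 2000, "due": date(2002, 1, 12)}]
stock = [{"value": 800, "arrived": date(2002, 1, 1)}]
today = date(2002, 1, 10)

print(throughput_dollar_days(orders, today))  # 5000 x 4 days late = 20000
print(inventory_dollar_days(stock, today))    # 800 x 9 days held = 7200
```

Both monitors read zero when everything ships on time and nothing lingers, which is what makes them control measurements rather than goal measurements.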
(PS: All of the money you have invested in uncompleted projects; i.e. those not creating T, is I until the project completes. A good way to focus on the need for speed, and for sequencing of projects.) From: "Tony Rizzo" Date: Wed, 9 Jan 2002 13:03:34 -0500 Subject: Re: [tocexperts] Question on Preliminary TOC Investigation. I can speak to the hypothetical situation of a hypothetical D&D organization. To begin the discussion we have to understand how the organization makes its money. In the case of an OEM that does D&D, the organization makes its money from bringing new products to market. Thus, the benefit that the D&D segment of the enterprise can bring to the bottom line comes from an increase in the frequency with which products are brought to market, as well as from the development of the most effective portfolio of products. In this case, the D&D segment of the enterprise generates no throughput by itself. It is merely a component within a greater system, and the system generates the throughput. The impact of an improvement in the multi-project logistics of the enterprise, which takes place largely within the D&D operation, can be quantified by representing the throughput anticipated from each project as an equivalent cash flow that takes place at the completion of each project. If you can imagine all those cash flows being pulled in and compressed, so that they happen at a much higher frequency, you begin to see the huge impact that speed can have on profitability. A more difficult situation occurs when the D&D operation is its own business. In this case, the D&D operation doesn't develop its own product. It develops the products of its clients. The D&D operation makes its money only from the sale of engineering hours. This is a most unfortunate, additive model. The throughput of the D&D organization, in this case, is exactly equal to the sum of the individual contributions to throughput that are made by the individual contributors.
Their contributions, of course, are made as hours billed to client projects. In this most unfortunate situation, the management of the D&D operation faces a difficult conflict. Their mechanism for maximum profit is based on having people charge their time against the customers' projects as much as possible. This, of course, causes the managers to continually bring new projects into the D&D organization, so that everyone always has some work to do and some project number to charge. The resulting overflow of projects and work in process, coupled with the policy of demonstrating progress on all active projects, causes the resource managers and the resources to adopt the infamous dilution solution, which often is carried to the absurd extreme known as massive multitasking. However, there is an even greater irony here. If the D&D operation really did improve its multi-project logistics, then it could cause an increase in the rate at which its customers brought their products to market. The huge increase in their customers' profitability would make the D&D operation wildly competitive, even with significantly higher hourly rates. With the higher rates, of course, the D&D operation would be proportionately more profitable. But, this sort of strategy would require some rather enlightened leadership and an equally enlightened sales force that could make the customers aware of the benefits of speed. From: "Tony Rizzo" Date: Sun, 13 Jan 2002 04:23:52 -0500 Subject: Re: [tocexperts] [cmsig] Question on TOC and Development project ----- Original Message ----- From: "Brian Potter" Sent: Sunday, January 13, 2002 2:18 AM Subject: [tocexperts] [cmsig] Question on TOC and Development project > Jim, Todd, John, Bill, Tony, et al, > ...snip... > Now, it seems that all questions of that kind involve SOME linkage > between D&D and Throughput. When D&D is THE PRODUCT, > the connection is (as we have already noted) easy. 
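Tony Rizzo's earlier point about pulling in and compressing project cash flows can be made concrete with a small discounted-cash-flow sketch. The portfolio, the project values, and the 1% monthly discount rate are all invented for illustration:

```python
def npv(cash_flows, rate):
    """Present value of (month, amount) cash flows at a monthly discount rate."""
    return sum(amount / (1 + rate) ** month for month, amount in cash_flows)

# Four projects, each delivering 1.0 (say, $1M of throughput) at completion.
baseline = [(12, 1.0), (24, 1.0), (36, 1.0), (48, 1.0)]   # one finish per year
compressed = [(6, 1.0), (12, 1.0), (18, 1.0), (24, 1.0)]  # same projects, twice as fast

rate = 0.01  # assumed 1% per month cost of capital
print(round(npv(baseline, rate), 3), round(npv(compressed, rate), 3))
```

Beyond the time value of money, finishing in 24 months instead of 48 frees the same resources to complete a second wave of projects within the same horizon, which is where the larger gain lies.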
What about when > D&D is "in house" and other arms of the same organization handle > production, distribution, and sales? Ah! There's the rub. > Why is there a difference? The goal of the D&D contractor is to maximize his own profit, not the profit of his customers. The D&D contractor behaves as an independent function much more than he behaves as an integral component of an interacting system. Just take a look at the defense industry, where the driving measurement is overhead hours. Companies in the defense industry will tell you that they want to maximize customer value. But they behave in such a way as to maximize their profit rate, by keeping the overhead rate to an absolute minimum. A D&D contractor in the commercial sector will do precisely the same thing, if his profit comes from the sale of engineering hours. > Either way, we have a cycle from marketing > to D&D to production to distribution to sales to customers to market > back to marketing (closing the cycle). Of course we do. This cycle is a fact of life. The question isn't, do we use this cycle. It is, how well do we use it?! > If the D&D contractor has a > throughput metric, why does it lose that metric if it becomes a wholly > owned subsidiary of its customer? The D&D arm of a company need not lose that goal-seeking measurement. But the measurement cannot be attributed to the D&D arm, because the throughput that the company makes is the result of interactions between the various functions. Consider the electrical equation, R = V/I. What part of an R measurement can we attribute to V or to I? V and I are needed together, for an R measurement to exist. The same is true of the throughput generated by a complex organizational system. Further, if we arbitrarily allocate a portion of the system's throughput to the various functions, then we immediately create dissimilar goals within the organization. As a result, the system is destroyed and its performance is diminished.
> What Throughput contribution does a > D&D contractor's customer gain from the association with the D&D > contractor? Great question! The value that the customer of a D&D contractor perceives, in fact, is precisely the increase in throughput generated by the customer's INTERACTION with the contractor. But that value evaporates rather quickly, if the customer is asked to share a portion of the increased throughput with the contractor. In any case, how would the customer attribute any portion of his own throughput to the D&D contractor? The other side of this coin is that the contractor assumes no risk. He just charges a fee for the project. The customer assumes the risk and also gets the additional throughput caused by any favorable interaction. Consider a defense contractor that sees clearly how to deliver outstanding value to his customers. That contractor cleans up his multi-project logistics, and as a consequence begins delivering programs in half the time and, since he charges by the hour, at half the price. Since defense industry customers are highly inclined to spend their entire budgets each year, the contractor would make the same money initially. But he would deliver twice the number of programs to his customers. How long would it take for that contractor to start getting business that would otherwise go to his competitors? Oops! The cat's out of the bag now. This is how a defense contractor can gobble a larger share of a limited pie. > How long will a customer keep coming back if the D&D > contractor shoots too many blanks? In this case, the customer's product definition process is shooting the blanks, not the D&D contractor's engineers, unless those engineers are a part of the product definition process (feature set). Even if they are, the system of customer plus contractor is shooting those blanks. But the contractor expects to get paid anyway. > How will the customer know which > D&D contractors are helping deliver throughput?
(OK, they will "just > know," but how? How can we find [or make] a substitute for profound > insight?) The customer can certainly measure his own throughput, can't he? But you're asking that customer to attribute a portion of his throughput to the D&D contractor. That's mathematically impossible. An interaction is attributable only to the interacting factors, together. > How does the question about D&D effectiveness differ for > "in house" and "outside" D&D sources? Why is there a difference? There is no difference. There is a measurement available. It is the throughput of the entire enterprise. But if we try to attribute any portion of that throughput to any one of the components of the enterprise, then we get into trouble. That allocation has no mathematical basis. It's like trying to attribute a portion of a car's speed to the oil pump or to the injectors. All the speed of the vehicle is affected by the oil pump. If you don't think so, just try to drive without an oil pump. The measurement is available for the system. But it applies only to the complete system. No part of the measurement can be applied to any one component or to any subset of components of the system. Like the D&D customer and the contractor, we could NEGOTIATE a split of the system's throughput measurement, and we could assign a portion to each component of the organization. But the resulting calculation would be nothing short of bullshit. Also, recognize that the D&D customer and his contractor NEGOTIATE a price up front. During this negotiation, the customer is in essence allocating a portion of anticipated throughput to the contractor, and the customer is taking all the risk as well. But, it is a NEGOTIATION, not a mathematically buttressed quantity. > Letting D&D skate away from a commitment to goal contribution just > plain feels wrong. Where is the connection? How will we measure it? > Suppose you had several different D&D shops (e.g., the different GM > automotive divisions).
How would you make strategic judgments about > which internal D&D shops were doing a better job? I like best the one > doing the things that yield maximal Throughput now and in the future. > Which one do you pick? Why? Ah! We're back to component measurements. Let's use the car analogy again. Clearly, it's quite stupid for us to allocate a portion of a car's speed to the oil pump. But the oil pump does provide a useful function. If it stopped providing that useful function, we should fire it and replace it with one that had a better work ethic. So the question becomes, how do we know if the oil pump is providing its useful function satisfactorily? Clearly, there must be some performance measurement that we can apply to the oil pump. Know of any? The same is true of any organizational system. First, identify the useful function (a TRIZ term). Then, measure the useful function. So long as any portion of the system is providing its useful function satisfactorily, the component is ok, and the system is delivering the throughput that it was designed to deliver. If the system's performance declines, then we should look for a malfunctioning component, no? Which component malfunctioned, as Pontiac brought to market the Aztek? From: "larry leach" Date: Sun, 13 Jan 2002 09:21:51 -0600 Subject: [tocexperts] D&D and other mysterious work Some time ago, we had a discussion about infrastructure projects, and how to prioritize them. I suggest that the gist of that point was the same as for internal D&D projects: we do not know how to quantify T. Since we know we have to prioritize to operate the system effectively, and to do so we must prioritize by maximizing T (now and in the future) per constraint use, we do not know how to prioritize such project work. I am assuming in all cases management has decided that the projects are worth doing. I have no problem using that economic analysis for prioritizing (i.e. expected T increase or OE reduction from the project).
Some rejected that notion, regarding such predictions as similar to ABC. I do not see it that way at all, and suggest it is more a difference between an operations (production) mind-set and a project mind-set. Projects are, by definition, unique efforts we have not done before. There is no historical basis for judging T. There is a historical basis for operations, and ABC or product costing is an attempt to allocate costs on that basis. Not the same thing at all. Second, projects by our understanding do not deliver incremental T (exception: projects on contract that you get progress payments on.) Whereas each order through a production facility (analog to a project task) may generate incremental T. I suggest that those having trouble with defining T on D&D projects probably have a deeper problem: they do not have adequate justification to do the projects at all. You must estimate project T to select and prioritize the projects it is worthwhile doing. No exceptions. Let's try it this way: IF you can't figure out the expected T for a project, and IF the project takes company resources, THEN you should not do the project. From: Brian Potter Date: Sun, 13 Jan 2002 07:29:55 -0500 Subject: [tocexperts] Question on TOC and Development project Agreed, one may not arbitrarily slice the throughput up into pieces and give a piece of the throughput to each organization which contributes to the total product cycle which yielded the throughput. That is the classic mistake of allocation in reverse. Operating Expenses happen by operating function. Organizations may (and probably should) assign expenses (and management responsibility for the expenses) to functions spending the money. That subdivision may properly happen at any detail level desired as long as the responsibility and the expense match with the organizational component making the spending decision. Revenue, Unit Variable Expenses, and Throughput happen by product.
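Brian Potter's bookkeeping rule — throughput attaches to products, operating expense stays with the functions that spend it — can be sketched as follows. All figures are invented, and the product names are just labels:

```python
# Throughput is tracked per product sold; OE is tracked per function and
# is never allocated to products.
products = {
    "Aztek": {"revenue": 21_000, "unit_variable_expense": 14_000, "units": 30_000},
    "Inca":  {"revenue": 23_000, "unit_variable_expense": 15_000, "units": 10_000},
}
operating_expense_by_function = {  # hypothetical annual figures
    "product_development": 60_000_000,
    "manufacturing": 120_000_000,
    "sales_and_distribution": 40_000_000,
}

def throughput(p):
    # T per product = (revenue - unit variable expense) x units sold
    return (p["revenue"] - p["unit_variable_expense"]) * p["units"]

total_T = sum(throughput(p) for p in products.values())
total_OE = sum(operating_expense_by_function.values())
net_profit = total_T - total_OE  # NP = T - OE, meaningful only system-wide
```

Note that nothing in this layout lets you compute a "profit per product" or a "throughput per function" — which is exactly the point.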
If an organization sells more than one product, tracking the sales revenue and unit variable expenses for each product will give a precise, accurate, and appropriate Throughput contribution for each product. One may properly subdivide those Throughput contributions by sales person, by customer, or by option configuration. Any subdivision that helps answer an honest question about the organization while preserving the one-to-one and onto connection between unit variable expenses for product sold with revenue from product sold will yield a valid throughput measure for that sold product. This remains true for aggregations of product as well. Any attempt to slice pieces of operating expense to different products rather than leaving the expenses with the organizational components creates a fallacy because it was the total organization as a system which created and sold each product. Since one needs the entire system to create and sell each product, any division of the system's operating expenses among the products will be one or more of arbitrary, pointless, silly, and misleading. There is no basis for such a division in the dynamics of the organization. Similarly, as you pointed out with the Pontiac Aztek example, there is no system dynamics rationale for dividing Throughput among the components of the organization. The total system generated the Throughput. However, look what happens when the partition in the organization exactly matches the partition by product. The GM Pontiac Division was the only GM division which envisioned (in Pontiac marketing), designed and developed (in Pontiac PD), manufactured (in Pontiac manufacturing), distributed and sold (through Pontiac dealers) the Aztek. The beneficial (or damaging) effects on GM resulting from the Aztek clearly "belong" to the Pontiac Division. Each link in the Pontiac Division product cycle "owns" the total Throughput (not some arbitrarily decided fraction, the totality) generated by the Aztek.
If (hypothetically) some other GM division had marketed a very similar vehicle (say a Buick Inca) from the same platform developed collaboratively between the two divisions, then the responsibility and credit would "belong" jointly (and indivisibly) to both divisions. Further, one could distinguish Throughput generated by Buick Inca sales from Throughput generated by Pontiac Aztek sales. Yet, one could NOT properly assign part of the platform Throughput to Pontiac PD and part of the platform Throughput to Buick PD because the platform PD effort (in this hypothetical case) arose from the synergy between the Buick PD and Pontiac PD teams. ALL of the Throughput from both Aztek sales and Inca sales indivisibly "belongs" to both Pontiac PD and Buick PD. However, if Pontiac had done all the design and development work for both Aztek and Inca, then Pontiac PD would (by itself) "own" the total Throughput for BOTH Aztek and Inca. However, if the Buick PD folks contribute so much as where the "Buick" tag goes or what it looks like or a paint color, any division of the Aztek/Inca throughput between Buick PD and Pontiac PD becomes impossible because both PD teams have now contributed to the common product sold under both names. The more the Buick PD team contributes, the more clearly ludicrous a division of Aztek/Inca Throughput between the two divisions becomes. As long as Throughput stays entirely attached to the product generating the Throughput, the TOTAL Throughput for each product reasonably "belongs" to EVERY organization component which contributed toward the Throughput. In the hypothetical Aztek/Inca example, the sales and distribution product cycle segments may properly distinguish between "Aztek Throughput" and "Inca Throughput."
If each division has distinct manufacturing and supply chains (possible, but not likely in niche vehicles off a common platform), the manufacturing product cycle segment may also properly distinguish between "Aztek Throughput" and "Inca Throughput," but when the two PD or manufacturing teams collaborate (even a little), one can no longer properly divide the two. At the PD and Marketing segments of the cycle, one may correctly speak only of "Aztek/Inca Throughput." To the extent that the various GM divisions had independent product cycles, the original concept of those divisions strengthening one another via "internal competition" had some basis. When two or more GM divisions collaborate on a common product sold by all collaborating divisions (the usual case), the distinction between the collaborating divisions becomes a misleading fallacy. The net effect of the Aztek on GM is probably positive. Some people who bought Azteks MIGHT have bought some other GM car or SUV (for little net difference). More likely, they would have bought a small SUV or other "crosstrainer vehicle" from a competitor. Further, the buzz I get is that Aztek buyers LOVE their vehicles. Many are young, and they may well go back to GM (rather than the competitor from whom they did not buy) for another vehicle in a few years. You can bet that GM did not sell Azteks at a price below unit variable expenses so Throughput was positive. The assembly plant capacity GM committed to Azteks was available (probably with headroom to make a million or two more each year if the demand had been there) as were the UAW folks doing the assembly work (depending upon circumstances and the details of the union contract, GM probably pays laid-off UAW employees 80% to 95% of their straight time pay for up to two years whether they come to work or not). Offering the Aztek to the market had a relatively small impact on GM's OE.
Would some other vehicle that GM did not offer to the market have done BETTER than the Aztek? Maybe, but since that "some other vehicle" is only a "might have been," there is no way to know. +( throughput per constraint unit From: Eli Schragenheim Date: Wed, 30 Jan 2002 09:42:43 +0200 Subject: Re: [tocexperts] Throughput versus profit Sorry I don't buy {Delta(T*)-Delta(OE*)}/(ConstraintTimeUnits) at all. The use of T/CU is misleading for "large decisions". Large decisions means those decisions that might impact the identity of the constraint. Adding a project to the current portfolio is in most cases such a "large decision". I wrote an article for TOCReview about the limitations of T/CU. It can be reached at their website.
There are three different aspects to T/CU when applied to a portfolio of projects. First, this is planning with discrete events, even large events. You might find out that including project X is preferable, even though the net return per constraint time is smaller than that of Project Y. X might use more time than Y, but there is no clear alternative for the free time of the constraint (it is the drum resource - but not a bottleneck, see later). Second, who says the constraint is fixed? If we can get significantly more profit why shouldn't we ELEVATE the constraint? This might increase OE and I, but it might yield more profit. The whole idea of T/CU is valid only when the constraint is fixed. Third, in a multi-project environment the constraint is not a bottleneck. Even the current CCPM software does not try to "exploit" the available time of the constraint. Let me give you an example: MX is the constraining resource that the multi-project drum is built upon. Project-1 needs MX for two months, then there is no need for MX for one month (somebody else has to process the output of MX), then MX is needed for another two months. Project-2 needs the same amount of work from MX. Note, the drum should release MX to work on Project-2 only after 5 months (plus capacity buffer) of being dedicated to Project-1. Why not utilize the one month? Because projects are held as one unit and should not be split. This is my view, which I have verified also with Eli Goldratt. Danny Walsh and I wrote an article about the difference between manufacturing planning and multi-project planning. It should appear in The Performance Advantage in February (I hope, haven't heard otherwise). And another aspect of the above measure: delta-T I assume is the T generated by a specific project. However, delta-OE may be initiated for a specific project but eventually used for other projects as well. Hence, any delta-OE should be viewed for the whole portfolio.
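Eli Schragenheim's first aspect — that T/CU can misrank discrete, lumpy choices — can be sketched with invented numbers. Project Y has the better ratio, but if no third project exists to use the drum time Y would leave free, X is the better portfolio:

```python
drum_months_available = 6  # drum resource capacity in the planning horizon

projects = {
    "X": {"t_minus_oe": 900, "drum_months": 6},  # ratio 150 per drum-month
    "Y": {"t_minus_oe": 400, "drum_months": 2},  # ratio 200 per drum-month
}

def t_per_cu(p):
    """Net return per unit of constraint (drum) time."""
    return p["t_minus_oe"] / p["drum_months"]

best_by_ratio = max(projects, key=lambda name: t_per_cu(projects[name]))
# T/CU picks Y, yet with nothing to fill Y's 4 idle drum-months,
# taking X alone yields 900 against Y's 400 for the whole portfolio.
print(best_by_ratio)
```

This is why the recommendation that follows is to compare whole portfolios on overall T-OE rather than to rank individual projects by ratio.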
My recommendation: Look for the portfolio that gives maximum overall T-OE, and T-OE/I, after validating that an acceptable drum schedule can be set. You need to fix the time frame where you check the alternatives. This is a process that might take several hours, but that is cheap. Another related topic is how you handle risk/uncertainty. Do you really know the T to be generated? What is the probability that very little would be generated? Here too the concept of checking the whole portfolio is beneficial. But that is another discussion. > {Delta(T*)-Delta(OE*)}/(ConstraintTimeUnits) > > where Delta(T*) is the time-value equivalent of the > increase in the throughput generation rate of the enterprise, > brought back to the expected end of the project. > Delta(OE*) is the time-value equivalent of the operating > expense rate increase of the enterprise, brought back to > the expected end of the project. > And ConstraintTimeUnits is the staff-days of the > constraint resource of the enterprise. > > As a first cut and an outstanding improvement over > current practices, the use of this measurement as the > means for prioritizing revenue-generating projects > seems pretty good. > > ----- Original Message ----- > > From: "Ronald A. Gustafson" > > Sent: Tuesday, January 29, 2002 10:04 PM > > > Clare -- It's been a while (25+ years) since I've been involved in flight > > test of fighter aircraft, but it seems like the "grunt-level" approximation > > > for minimum time-to-climb could be obtained graphically by estimating the > > > points where Es was tangential to Ps. Also, while great precision in those > > > numbers (even with the graphical technique) is an interesting analytical > > > problem, the practicality of actually flying those profiles usually takes > > > precedence over the precise, theoretical answer. So, maybe there is a > > > simple approach to Tony's problem that is good enough for practical > > > application.
> > > > > > -----Original Message----- > > > From: Clarence Maday [mailto:cmaday@nc.rr.com] > > > Sent: Tuesday, January 29, 2002 1:48 PM > > > > > > Consider the problem of a supersonic fighter climbing to 60,000 ft at Mach > > > 1, from takeoff, in minimum time. That altitude is greater than sustainable > > > cruise altitude. Real pilot took 11 minutes. Trajectory optimization is a > > > Pontryagin Maximum Principle nonlinear TPBVP > > > (two-point-boundary-value-problem). Lots of fun to solve! I know, I did > > some > > > eigenvalue problems that weren't quite as bad. Solution was first > > described by Art Bryson, ca. 1960, of Stanford. > > > > > > Solution to aircraft problem. Take off and climb to about 38,000 ft at Mach > > > 1, dive 3,000 ft to Mach 1.4 and resume climb to 60,000 ft. These numbers > > > are approximate but I remember the results...331 sec!!!! Later verified in > > > actual flight. The least time to climb problem is similar to the min fuel > > > problem with the same boundary conditions. Blackbird follows this trajectory > > > on its way to 100,000 ft or wherever. I wonder how long it would have taken > > > to find the solution by trial and error instead of by computer simulation. > > > > > > So what's this got to do with the optimum sequence of projects, priorities, > > > or whatever. You have at least a TPBVP with variable operating expenses, > > > maybe a multi-point problem. One goal would be to determine the optimum > > > strategy of OE(t). The only technique I know of that might work is Dynamic > > > Programming but I wouldn't bet the house on it. > > > There may be other methods. +( TOC - OPT and LP (linear programming) From: "Jim Bowles" Subject: [cmsig] Re: TOC vs OPT vs LP Date: Sat, 26 Oct 2002 14:01:06 +0100 I will attempt to see how much I can recall regarding inputs that I have received as a member of the Goldratt Network about the difference between these three items.
I recall Dr Goldratt giving a presentation where he said that in his early attempts to create a scheduling system he tried to use LP, but found that the number of iterations required to come to the right solution simply took too long to provide a practical tool. He concluded that, back in the real world of being a production manager, it wasn't so important to have such a "precise" answer as to have one that was "good enough" to give results that were reliable and consistent. In the OPT software the initial task was to accurately model the plant in terms of a network of material flows, routings and bills of material. The next step was to load this with a given order book and to determine the loading on the resources. Once the drum had been determined, a schedule was produced for the constraint and for material release. I came to TOC (1987) after the sale of the OPT software, so I was never employed in building such a network. During my initial training it was stressed that we didn't need to go to such an extent and that we could achieve very significant improvements with the "Thoughtware": encouraging companies to focus on their constraint and, once this was done, to use appropriate scheduling rules and buffer management methods to achieve the degree of control required and the means to improve. In other words we weren't looking for the ultimate solution (the goal, say, in using LP); we were providing a sound basis for the organisation to establish control and to build in a mechanism for continuous improvement. "Disaster", later to be called The Goal System, was being developed at this time. "Disaster" was designed using different criteria to those of OPT. This software allowed for more interaction between the manager and the schedule. It allowed the planner to "offload", work overtime, and split the order batch so as to answer many what-if questions before fixing the schedule. In my experience only a few companies progressed towards the need for such software. 
Not many organisations needed that level of sophistication in order to sustain their improvement. One of the best examples I have seen of this is shown in a video produced by one of Ford's plants in Canada. They achieved a one-day manufacturing cycle time using some very simple rules and the DBR methods. In three years they went from a 16-day manufacturing lead-time to less than one day. After "The Goal" was published in 1984 more and more companies latched onto the idea of identifying the system's constraint. They set out to find "Herbie", the bottleneck, but many couldn't find one, mainly because the plant didn't have one or because it wasn't something physical that blocked them from improvement. So Eli and his associates set out to develop the "Thinking Processes" to provide a tool that allowed the constraint to be identified in a systematic way. During the late 1980s Eli produced work which showed the difference between the Cost World and the Throughput World, i.e. in which we are asked to consider the difference between the weight of a chain and its strength. The rules for one are entirely different to those for the other: e.g. weight = the addition of the weights of each link, but strength = the strength of the weakest link. The difference between these two "thinking bridges", i.e. between actions and results, is still what blocks most Western companies from improving their performance. To summarise, I would say that TOC started with attempts to solve the problems of production planning by using LP, but this was found to be too complex to apply in the real world of production. OPT was then the first algorithm to provide a best fit between the order book and the plant's resources, using an expensive piece of software. The difficulties in implementing such a solution led to developments in software and thoughtware. 
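The chain analogy in the message above boils down to two different aggregation rules, which a few lines of code make explicit (the link values are arbitrary examples):

```python
# Cost World vs Throughput World, via the chain analogy:
# the weight of a chain is additive, but its strength is set
# entirely by the weakest link.

links = [12, 9, 15, 7, 11]  # weight (or capacity) of each link

weight = sum(links)    # Cost World rule: every link contributes
strength = min(links)  # Throughput World rule: only the weakest link matters

print(weight, strength)
```

Strengthening any link but the weakest leaves `min(links)` unchanged, which is why local cost savings and global throughput improvement follow entirely different arithmetic.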
Since the mid 1990s TOC has come to mean the combination of applications (such as DBR for production, CCPM for project management, Replenishment for distribution, and Throughput Management [TOC's approach to financial measures]) and the Thinking Processes. TOC applications show us how to manage statistical fluctuations and dependent events using methods that are approximately right and good enough, rather than providing the "perfect" solution that would be the case if we were able to apply LP in the dynamics of a production environment. If anyone can add to or correct anything that I have said here, please do so. ----- Original Message ----- I've heard some people saying that Linear Programming is already obsolete and no longer useful; that it has been replaced by TOC and OPT (Optimized Production Technology). What's the truth about this? Does TOC differ greatly from Linear Programming? Are they opposite methods/philosophies? Could TOC be considered a branch of LP? Or perhaps they're different but complement each other? In case they are different, what are the advantages of one above the other? +( TOC and LEAN From: "Potter, Brian (AAI)" Subject: [cmsig] Relationships between "Lean" and ToC Date: Tue, 22 Jun 1999 14:18:22 -0400 As many have recognized, "Lean" and ToC harmonize. Both management philosophies attack the damage unplanned variation (Murphy) inflicts on an organization. Lean (by this and other names) typically attacks variation directly. Often the attack comes across a broad front via many local improvements. In such cases, the direct benefit from most local improvements will be negligible (the "improved" system component(s) did not measurably limit global system performance). Some "improvements" may yield a future benefit (when other improvements would have exposed a weakness had it not been corrected by the earlier action). ToC typically attacks the system's vulnerability to variation. 
The initial ToC actions often recognize points where variation can damage system performance and protect those points with buffers of time or material. Once a ToC action has identified a vulnerable location, the exploitation and elevation steps in the ToC process may take the form of "Lean" improvement measures. Lean-style variation reduction will allow smaller buffers (variation reductions will allow equal or better protection with smaller buffers). Smaller buffers normally imply shorter lead times, reduced investment, reduced scrap, and improved quality. Just like "Lean," yes? Important organizational gains from ToC derive from its ability to focus "Lean" improvements in the places where they will yield maximal benefit (the points which throttle the global system). Goldratt has related that Ohno claimed he could have developed the Toyota Production System in half the actual time if he had employed ToC methods. Fortunately for other automobile companies, Ohno's work preceded Goldratt's. --- From: "Jim Bowles" Subject: [cmsig] Lean versus TOC (DBR and Replenishment) Date: Wed, 23 Jun 1999 09:33:10 +0100 Paul wrote: Jim: Very interesting response, but I would propose an alternative to TOC. Lean principles focus on the heart of the problem, namely long lead times, large amounts of slow moving inventory, etc. Paul, I have only a short comment to make to you: "Check your assumptions." There is an excellent video produced by Ford Electronics, called "One Day MCT", produced by Ford Communications, American Ford. It describes their experiences over several years, firstly with JIT and then later with DBR and buffer management. Manufacturing cycle times fell from weeks to days with their lean efforts. But they plateaued at 8.6 days with these approaches. Within months of using DBR their MCT came down to 1 shift (4hrs). They have won a great deal of business because of their response times, including work from the Pacific basin. 
In reality systems don't come any leaner than they do with DBR and Replenishment (TOC's method for Supply Chain Management, more aptly called Demand Chain Management). Give me an hour of your time and I will demonstrate the difference with "JOSI". This is the name given to the UK Competitive Manufacturing Group's table-top simulator (in full, a JIT/OPT Simulator). Jim Bowles --- Date: Tue, 06 Jul 1999 18:46:11 -0400 From: Michael Cuellar Subject: [cmsig] Re: Visual Scheduling/kanban Laurie: You should read: The Quantum Leap in Speed to Market by John Costanza. In it Costanza outlines what he calls demand flow technology (we call it Synchronous Manufacturing since he copyrighted DFT!). As I'm sure you know, companies using DFT have made dramatic improvements in inventory turns, cycle time, customer performance, etc. He is not very specific about the implementation details because he (as well as we) is interested in getting consulting revenue to help you do that. I also have a presentation-format white paper on what Synchronous Manufacturing is and its benefits; however, I think you are looking for implementation specifics. Let me know in particular what you are interested in and perhaps we can have a conference call to give you some assistance. I will have one of our manufacturing consultants speak with you gratis on your situation. I can recommend to you some plants in Atlanta and perhaps the Chicagoland area that practice Flow that we could take you into. =================== A number of people have asked for information about the difference between Lean and TOC, not necessarily on this list alone. Fundamentally, Lean and TOC are not similar. They are very dissimilar. Lean sends the message that prosperity is achieved through the reduction of waste. This message is disseminated throughout the adopting organization. It also sends the message that waste reduction anywhere is as good as waste reduction anywhere else. 
> This is absolutely true, seems simple, and yet is NOT understood by most > of the "lean" crowd. They will argue that, over the long haul, the little > increments of savings will add up to thruput. This has not been my > experience, however. Well, I've yet to see a company that saved its way to prosperity! In addition, focusing the entire organization on waste reduction, when the organization's markets are growing at 20% to 30% annually, is like rationing bullets and gasoline to troops in the midst of a battle. Screw the bits of waste here and there! The name of the game is speed with quality, for the purpose of capturing market share and maximizing throughput, all the while keeping operating expenses well under control. > I would even argue that you can afford to "waste" quite a bit -- IF you are > focused on improvement to a constraint. TOC sends the latter message throughout an organization. More importantly, it provides the means with which to achieve stunning speed, the most effective cost control imaginable, and extreme profitability. It does this by focusing improvements on the leverage points of a system. > > So, if anybody tells you that Lean and TOC are basically the same, tell him to put THAT in his Lean pipe and smoke it! Tony Rizzo --- From: "Potter, Brian (James B.)" To: "CM SIG List" Subject: [cmsig] Difference between lean and toc. Date: Wed, 22 Mar 2000 14:37:53 -0500 RIGHT! When you do operating expense reductions, you are making TACTICAL improvements. You trust local supervisors and individual contributors to do those things subject to global considerations (e.g., THINK about constraint exploitation and subordination to the constraint while you plan your tactical improvements). Awareness of strategic issues and practice (in the context of making local improvements) with handling them can prepare local leadership and individual contributors for responsibilities with more scope. 
Senior managers and executives should become involved only to point out the occasional untrimmed negative branch or approve necessary resource allocations for good local improvements. Some OE reductions neither risk quality nor attack protective capacity. Sound OE reductions (e.g., scrap reduction nearly any place) incrementally increase profits (agreed, no big deal), AND they allow the organization to SURVIVE on less throughput. The improved "survivability" can be important if a product bombs, a competitor steals a march on you, or economic changes (e.g., higher interest rates) apply external pressures. Intelligent "Leanness" has a place in the ToC toolbox (a secondary place, perhaps, but a place in any case). "Lean" may have fewer (and less important) sound applications in the marketing-development-launch domain than in the operations domain. When you are aimed at the future, it becomes very difficult to understand what is "waste" and what is "capability" you are not using right at the moment. In a production environment, legitimate examples of "waste" can be easier to see and easier to attack. Perhaps that explains your strong (possibly domain-specific) distaste for "Leanness." --- From: Michel Baudin Date: Mon, 03 Jul 2000 16:26:07 -0700 Subject: Re: NWLM: Lean and TOC Lean manufacturing and TOC are fundamentally different. The former is predicated on paying excruciating attention to the details of how the work is done on the shop floor. As Crispin Brown put it, it's about "what happens when the guy picks up the wrench?" The detailed analysis of the work leads to reengineering the shop floor, moving equipment, redesigning operator jobs, etc. TOC, by contrast, is about understanding the high-level dynamics of what you have and making the best of it, without getting bogged down in details. The creator of TOC is Eli Goldratt, who is a genius, according to himself. 
In his book on TOC, he describes himself as the Isaac Newton of manufacturing, where Taiichi Ohno is only Kepler. In terms of track records, let's see... Lean manufacturing has had a major impact on the automobile industry worldwide. About TOC, what exactly has it accomplished? --- From: "Kluck, William" Date: Tue, 4 Jul 2000 14:07:11 -0700 Nothing promotes feverish dialog on this list like a good 'Lean vs. TOC' posting. It has been several months since the last battles on this were fought here, so (for expediency's sake) I'll recap the basics of both camps: Lean manufacturing is the development and implementation of a system of production which constantly strives to identify and eliminate waste (that which does not add value). Within lean manufacturing, there are many tools and techniques that are used to systematically eliminate waste, ensure continuous flow of product, and allow customers to pull value through the system. Lean thinkers believe that reducing the various types of waste will reduce costs and improve both quality and delivery (the 3 things customers value most). TOC (the Theory of Constraints) is a tool that focuses on the 'bottlenecks' in a production system, and is associated with a series of analytical tools designed to 'elevate' those constraints and improve T, I, and OE (throughput, inventory, and operating expense). TOC advocates believe that the bottom line can only be affected once the system constraints (bottlenecks) have been identified and exploited. The controversy stems from extremists, who either believe that the techniques are in direct opposition, or who don't believe that they are compatible. However, there are many organizations which have applied both successfully. 
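The "identify and exploit the bottleneck" logic in the recap above can be made concrete with a toy serial-line model; the station names and capacities below are invented for illustration:

```python
# Toy serial production line: each station's daily capacity (units/day).
# System throughput is capped by the slowest station (the bottleneck),
# so a local improvement anywhere else leaves throughput unchanged.

capacities = {"cut": 120, "weld": 80, "paint": 100, "assemble": 95}

bottleneck = min(capacities, key=capacities.get)
throughput = capacities[bottleneck]

# A "local improvement" at a non-constraint station: no effect on the system.
capacities["paint"] += 30
assert min(capacities.values()) == throughput

# "Elevating" the constraint does move the system limit -- and note
# that the bottleneck then shifts to the next-slowest station.
capacities["weld"] += 20
print(min(capacities.values()))
```

This is also why the two camps can coexist in practice: Lean-style improvements applied at the station the constraint analysis points to do move the system number, while the same effort elsewhere mostly does not.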
It has been my experience that, because TOC techniques are highly complex and often require highly knowledgeable 'experts' to be successfully applied, while lean mfg has many simple techniques that can be applied by literally anyone, lean mfg is the more popular of the two. But I hesitate to compare the two directly, because it is also my experience that companies leaning out their systems can receive a significant boost by applying TOC techniques after their lean implementations are firmly rooted in their mfg and mgmt cultures. I prefer to think of lean mfg as a set of tools in a toolbox. Not all tools will be applicable in all circumstances. One of those tools can be TOC, if it can support an organization's goals. SO PREPARE YOURSELF for the traffic on this issue. Some of the conversation might seem heated, but take it in the spirit in which it is meant (to expand our knowledge of individual techniques that we might not have experience with). From: "James Oberloh" Date: Wed, 5 Jul 2000 07:54:27 -0700 I will lend one example from the Toyota Production System. All over Toyota you see buffers, because they know that things are not perfect all the time, and they would like to have the ability to shut down a particular process and not stop the whole line. Toyota consists of three main departments: body shop, paint shop and assembly. Body shop, due to the number of robots that are there, normally runs the lowest operating rate, therefore the buffer between body shop and paint shop is usually equivalent to about two hours of production. This enables the shops downstream to continue to run if body shop goes down. Up to two hours of overtime is mandatory at Toyota, so at the end of the day body shop will normally work additional OT to fill its buffer back up. The other place that a buffer is prevalent at Toyota is between the assembly lines in the assembly shop. 
Each line, which consists of about 20 stations, usually has a buffer of between 5 and 15 minutes; this will enable one line to shut down without affecting the rest of the shop. +( TOC and Marxism - throughput rates in relation to production factors Date: Fri, 5 Nov 1999 13:59:12 -0500 (EST) From: Hans Peter Staber To: "CM SIG List" Subject: [cmsig] TOC and Marxism :-) Now Karl Marx and Friedrich Engels wrote about "productivity factors" in capitalist economies (sorry for my bad translation from German to English):
- money = cash, assets, inventory, machinery
- soil = raw materials, production surface
- human resources = obvious
What do you think about tying TPR to these general findings of Marx? Does it make sense to use as the denominator in TPR:
- m2 for business activities in Tokyo or NYC, where space is dear
- machine hours for highly invested industries
- HR for activities in Mexico, China or any low-labour-cost country operation
Date: Sun, 14 Nov 1999 18:44:18 +0100 From: Hans Peter Staber To: JmBPotter@aol.com Subject: Off-List answer: TOC and Marxism :-) Brian, <> Well said. I agree. > That's what the P-Q example explains and it's what is taught in Germany with respect to "Deckungsbeitragsrechnung" - contribution margin analysis. I wanted to tie the contribution margin analysis to the more general productivity factors identified by Marx. <> In my opinion dNP, dRoI and dCF are outcome measures - lagging indicators. For decision making I'd like to have leading indicators. They should be simple enough to be understood and interpreted by common people. [Especially for strategic (macro level) considerations "NP," "RoI," and "CF" (cash flow) give one all the performance information one needs.] The first two are sufficient. CF is a different interpretation of NP. RoI is now often substituted by ROCE and more often now EVA. <> I think your dT is incremental throughput. I tend to use TP (I am aware that this is the abbreviation of the thinking process). We seem to agree. 
The decision-making process needs leading indicators, which might be TP/CU. The verification of the decision making is done by calculating the correct outcome measures NP and RoI. YOU MAY ALSO SAY: dTP = deltaTP/deltaCU, which closes the loop between what I say and what you say :-) [However when the time comes to choose among the alternatives, dNP, dRoI, and dCF will identify the superior alternative, right?] You would have to define the functions f1 = NP = f1(TP,OE) and f2 = RoI = f2(NP,I) and differentiate and solve them for dNP = 0 and dRoI = 0. The problem seems to be to find the correct functions if you go that route. The easier way should be to apply the rule of calculating dTP only and accept OE as a given. <<{Is dNP > 0 (implies dT > dOE) a restatement of our goal?}>> The correct equation would be dNP = 0. At the maximum (and the minimum) of the function the tangent is horizontal - the differential therefore is zero 8-) <> I don't understand your repeated pointer to CF:
  Net Profit
  + Depreciation
  + Change in Provisions
  + Goodwill
  = CASH FLOW
  - Investments
  +/- change in inventory
  +/- change in trade receivables
  +/- change in trade payables
  = NET (FREE) CASH FLOW
CF is just a different way of NP or OpInc to me; nCF is a different way of RoI, ROCE or EVA, as it ties up NP and Inventory. +( TOC as science or religion From: "Richard E. Zultner" Subject: [cmsig] Is ToC a Science, or a Religion? (Are you sure?) Date: Wed, 21 Jun 2000 21:28:54 -0400 In previous posts it was suggested: 1> Here are some facts with which you will have to live. TOC > is the first serious attempt to apply scientific principles > to the management of human systems. ... 2> Here's a thought. If we want TOC to spread, then we must > prevent the perception that TOC is a religion rather than a > management science. ... 3> What's the difference between science and religion? Science > tolerates personal preference and personal opinion. 
Well, if our aim is to convince people that ToC is a science, and NOT a religion, then we are off to a bad start. 1. How can anyone who has taken any courses in psychology, sociology, systems dynamics, or management take seriously the proposition that "TOC is the first serious attempt to apply scientific principles to the management of human systems"? The social sciences are not sciences? The professionals in those fields over the past 100 years did not apply the scientific method? (What, they did all their work on inhuman organizations?) All the experiments cited in basic texts in those fields count for nothing? Take the work of just one famous researcher: Frederick Taylor. He didn't apply scientific principles? His approach was not called "Scientific Management" for no reason. Sorry, Eli was not the first. Such sweeping generalizations (and such massive disregard of a huge body of scientific work) suggest to those who are interested in ToC that ToC is NOT a science (since it ignores large areas of scientific study) -- and so must be more like a religion -- or a management fad. [To learn about the history of science in management, I suggest starting with the works of Peter Drucker.] 2. Why are we concerned about a perception that ToC is seen as a religion? Because it IS seen as a religion by some right now. How did people get such a perception? Because many of the practices and attitudes of ToC organizations and practitioners are similar to a religion's -- like Scientology's. If you compare ToC and Scientology (a religion, according to the US government) you will find MANY parallels: a charismatic founder, extensive use of a special vocabulary, high prices for training, public "testimonials", claims of miraculous results, teaching by parables, a rigid orthodoxy, mandatory intellectual "celibacy" of the "highest" practitioners (they must be "dedicated", no "ecumenical" use of other methods), intolerance of "heretics", etc. We are acting like a religion! 
How could some people NOT conclude ToC is a religion? Why aren't we seen as a science? If I pick up a science book, or attend a course in a scientific field, I find MANY references to prior work. A book or course with no references is either: (1) a breakthrough into a brand new field; or (2) written by someone who is ignorant of the existing work in the field -- that is, a novice; or (3) a religious tract; or (4) entertainment. To ignore prior work is the sign of an untutored undergraduate. It is unscientific. If ToC wants to be perceived as a science, then its proponents must at least act like graduate students -- and review the literature! Some of it does apply, and failure to cite it leaves a bad impression -- especially on those with some scientific background. (Or should ToC be seen as the "revealed word" of a prophet?) 3. The most distinguishing characteristic of a "science" vs. a "religion" is not tolerance, it is experimental testing of hypotheses. There have been many logical, persuasive theories put forward -- which were "common sense" for their time -- that, when put to the test, were proved WRONG. Look in any book on the history of science. Some very smart people fell in love with some very silly ideas (silly to us, with the benefit of hindsight and experimental results!). How do we know we haven't fallen to a similar fate? Have we put our beliefs to the test? Consider the Six Layers of Resistance. A nice logical concept. I use it myself. But is it true? Has any research been done to test it? (The fact that some people get "results" when using it is not a scientific test. You will find in any history of science book many silly ideas that were "proven" in similar fashion.) Are there really six layers? Not five? Not seven? (After all, there weren't six layers to start with...) Does resistance really go layer by layer? No jumping? No regressing? Are there NO similar ideas in the field of psychology? 
NO ONE has done any relevant prior research on resistance to change? Scientific fields test their hypotheses -- and publish the results. Where are our tests? Where are our publications? I think the responses to this post will be an interesting test of whether ToC is a de facto religion, or an emerging science... --- +( TOC at Ford and Visteon - success story Brian, you describe a situation similar to what existed in Ford's Electronic Devices Division, three plants in (I think) Allentown PA, Saline or Ann Arbor MI, and Markham ON. These were all captive plants shipping directly to Ford's assembly or component assembly (instrument panel) plants. EDD had no ability to "sell or develop new products or customers". These plants were constraints in the supply chain in Ford operations, so it would not have made any difference anyway. Once the EDD plants implemented TOC they succeeded in improving their capability considerably. Total inventory was reduced by 100 million dollars, cycle time was reduced from the antiquated JIT-Kanban approach of four days to DBR of one day, and the constraints were elevated to the point of no internal constraint. The EDD component plants could now outproduce the demand in their internal market. The lack of market became their constraint. Eventually Ford woke up and realised that the only way to improve the EDD business was for EDD to get new products and new customers. As I seem to recall, they developed 19 new, non-automotive customers. The EDD group is now a major part of Visteon. And Ford again woke up and said to the whole group: go out and find yourself a new customer. As a matter of fact, go find yourself a new owner. Ford is spinning off EDD, the assumption being that as long as EDD is owned by Ford it will be limited in the amount of new automotive business it can attract from outside of Ford. Even in your situation there are lots of opportunities to increase T, not only reduce I and OE. 
Harvey Opps +( TOC for children From: "Ward, Scott" Subject: [cmsig] RE: Born with TOC knowledge. Date: Fri, 21 Dec 2001 11:33:37 -0600 Let's revise the apple question for students: A bag of apples sells for $5. The bag contains 8 apples. The apples cost $0.50 each from the wholesaler. The bag costs $0.50. What's the throughput for a sale of varying quantities on a given day? What's the daily net profit at different sales volumes? The apple seller is offered $3 for 6 apples with the bag (so it has to be replaced before the next sale of 2 apples). Will net profit go up or down? The apple seller can purchase 10 apples for $0.45 each but can only sell 8 on a daily basis. How does this affect net profit over time? -----Original Message----- From: Gilbert, Rick [mailto:rick.gilbert@weyerhaeuser.com] Sent: Thursday, December 20, 2001 11:31 Actually, a suitable question might be as follows (and this is the way it was usually written before soccer became so big): A bag of apples sells for $5. The bag contains 8 apples. How much does each apple cost? One could theoretically buy one apple (or sell one from the bag). One cannot normally buy/sell a part of a soccer team membership. -----Original Message----- Tony Rizzo Sent: December 18, 2001 22:05 To: CM SIG List Subject: [cmsig] Born with TOC knowledge. Oh good Lord! How early the cost conspiracy begins in our education! My 10-year-old, Stephen, complained that he didn't understand his math homework this evening. That was his excuse for not doing it. With some parental urging, he sat at the kitchen table again and read the problem aloud, so that I might understand it. "It cost $50 to join the soccer team," stated the worksheet. "The team played 8 games during the season. How much did each game cost?" asked the homework question. I was baffled by my child's inability to understand the homework question. 
Therefore, I asked him to read each sentence again, and I made sure that he understood each sentence before moving on to the next sentence. "It costs $50 to join the soccer team," he read per my request. "Do you understand what that means?" I asked. "Yes!" stated Stephen. "OK! Read the next sentence," I said. "The team played 8 games during the season," he read aloud as instructed. "Do you understand what _that_ means?" I asked again. "Yes!" stated Stephen. "They played 8 games." "OK! Read the question," I said. "How much did each game cost?" read Stephen. "Do you understand the question?" I asked. "No!" said Stephen. "How can you not understand?" I asked, baffled. "They want to know how much each game cost," I said with emphasis. "How much did each game cost?" "It didn't cost anything!" stated Stephen emphatically. Only then did I realize the degree to which my child understood TOC and the degree to which the creator of the math assignment did not. +( TOC frequently asked questions From: "Kenneth S. Moser, Jonah, CIWA, CNA, CNSA" Date: Mon, 14 May 2001 14:33:38 -0400 Constraints Management Frequently Asked Questions Ver 1.2 cmsig@lists.apics.org 03/21/2000 ================================================================ --- Foreword --- In July 1995, I issued a draft of this document requesting comment. Version 1.1 of the FAQ was released in December 1997. This is Version 1.2, which never quite got out of draft. Contributors to this FAQ are too numerous to list individually, but I would like to thank everyone who wrote me for their assistance. For the present, I will serve as coordinator for changes to the FAQ. You may send your comments and ideas to me at: k_moser@apicshq.org =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- Topics Q. What is the Theory of Constraints? Q: Who is Dr. Goldratt? Q: What's a constraint? Q: How does Goldratt define "throughput"? Q: Who or what is a Jonah? Q: What is "The Goal"? 
Q: How does TOC define "Inventory" and "Operating Expense"? Q: Why are these definitions important? Q: What are the five focusing steps? Q: Can you explain these steps in a bit more detail? Q: What is an Evaporating Cloud? Q: How does the Evaporating Cloud methodology work? Q: What is a Reality Tree? Q: Acronyms: What are CRT, DBR, EVA, FRT, PRT, TP, etc.? Q: Where can I read more about the Theory of Constraints? Q: Are there any TOC resources on the Internet? Q: What is APICS? =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- Q. What is the Theory of Constraints? A. In broad brush, the Theory of Constraints (TOC) is about change and how best to effect it. More specifically, TOC is a set of management principles that help to identify impediments to your goal(s) and effect the changes necessary to remove them. TOC recognizes that the output of any system that consists of multiple steps, where the output of one step depends on the output of one or more previous steps, will be limited (or constrained) by the least productive steps. In other words, as paraphrased in "The Goal", the strength of any chain is dependent upon its weakest link. Where manufacturing is concerned, TOC postulates that the goal is to make (more) money. It describes three avenues to this goal: 1. Increase Throughput 2. Reduce Inventory 3. Reduce Operating Expense As Dr. Goldratt notes, the opportunities to make more money through reductions in Inventory and Operating Expense are limited by zero. The opportunities to make more money by increasing Throughput, on the other hand, are unlimited. More than that, though, TOC challenges us to define a goal and re-examine all of our actions and measurements based on how well or how poorly they serve it. This is done through a set of tools including: o The Socratic method o Goldratt's five focusing steps o Evaporating Clouds o Reality Trees that help us identify and resolve bottlenecks. 
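The three avenues map onto TOC's standard throughput-accounting relations, Net Profit = T - OE and ROI = (T - OE) / I. A minimal sketch, with figures invented for illustration, shows why the avenues are not symmetric:

```python
# TOC throughput-accounting relations, per the FAQ's three avenues.
# T = Throughput, I = Inventory, OE = Operating Expense.
# All figures below are invented for illustration.

def net_profit(T, OE):
    return T - OE

def roi(T, OE, I):
    return (T - OE) / I

T, I, OE = 1_000_000, 2_000_000, 800_000

base = net_profit(T, OE)
base_roi = roi(T, OE, I)

# Avenues 2 and 3: reductions in I and OE are bounded below by zero,
# so net profit from cost cutting alone can never exceed T.
best_oe_case = net_profit(T, 0)

# Avenue 1: increasing Throughput has no such ceiling.
growth = net_profit(T * 1.5, OE)

print(base, base_roi, best_oe_case, growth)
```

The asymmetry is exactly Goldratt's point quoted above: the cost-cutting avenues are limited by zero, while the Throughput avenue is, in principle, unlimited.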
For more information on the terms and tools given here, continue reading.

----------------------------------------------------------------

Q: Who is Dr. Goldratt?

A: The TOC list server described him as follows: "Goldratt is a physicist by education and a business consultant by profession. His theories talk about how to improve manufacturing processes by finding the 'bottlenecks' in the manufacturing process and then exploiting those bottlenecks to either increase the flow of product through them or bypass them with other systems." "He talks in terms of throughput (money coming into a business), inventory (the money inside the business) and operating expenses (the money it takes to get inventory turned into throughput)." "He also talks about problem solving -- determining what to change, what to change to and how to change, using current reality trees, future reality trees and the Socratic method. The current reality tree cuts through the symptoms of a problem to find the core problems. The future reality tree is created by evaporating clouds -- by finding the assumptions behind the objections. Then the Socratic method is used to find ways to overcome those objections."

----------------------------------------------------------------

Q: What's a constraint?

A: It is any resource that prevents you or your organization from increasing throughput. Technically, it is anything that prevents you from achieving a higher performance relative to the goal. Something might not limit T but might cause OE to increase inordinately and be a constraint (e.g. environmental legislation). There are three types of constraints: resource, market, and policy. [Blackstone]

----------------------------------------------------------------

Q: How does Goldratt define "throughput"?

A: In "The Goal", Jonah defines Throughput as the rate at which a system generates money through sales.
As Jonah points out here, it is critical that we distinguish sales from production: while manufacturing operations traditionally measured production at each stage of production, the only Throughput that counts is that which comes off the end of the line to be sold. Why? Because this serves the goal (see above). Mathematically, we express Throughput as Sales minus the raw material inventory content of the sales. In generic terms, Throughput is a quantitative measure of the entity that the organization seeks to maximize.

*** Editor's note: There is some dispute over the next portion of this definition, but I have not been able to resolve it yet. One participant on the TOC list defined Throughput as, "The rate at which the organization creates added value to its owners". As he observed, however, added value is sometimes limited by the global interests of the owners. This is the case for any department within an organization that does not contain any global constraint. One may still attempt to produce better quality and faster response -- but even here the added value is restricted by the overall quality and responsiveness of the global system. What is left is to try and do the same with less expense. For reasons outlined below, "Throughput" is one of the most critical elements of TOC.

----------------------------------------------------------------

Q: Who or what is a Jonah?

A: Jonah is the name of the physicist that advises Alex Rogo, the narrator of Dr. Goldratt's book, "The Goal", as he attempts to identify and solve the problems plaguing his manufacturing plant. In a general sense, a Jonah is someone who has completed the Goldratt Institute's two-week Jonah course, during which the Jonah learns how to apply the five thinking process tools and the categories of legitimate reservation. A Jonah also applies those tools with at least some regularity. [Rizzo]

----------------------------------------------------------------

Q: What is "The Goal"?
A: In a concrete sense, it is the title of Dr. Goldratt's first book on the Theory of Constraints. In that book, the narrator -- Alex Rogo -- defines the goal of his manufacturing company as making money. Working with an accountant, he settles on three measures for this goal: net profit, ROI, and cash flow. More recently, the goal for a profit-making business has been defined as making more money both now and in the future. Also, we have recognized that cash flow is not a measure of the goal but a necessary condition. It is something that you want within certain limits, not too low, not too high, like blood pressure -- as opposed to a measure of the goal, which you always want to increase if you can. [Blackstone] Another Jonah on the TOC-L forum paraphrased it as achieving as much as possible from an entity while minimizing the cost to produce the throughput.

----------------------------------------------------------------

Q: How does the Theory of Constraints define "Inventory" and "Operating Expense"?

A:
o Inventory is defined as all funds that the system has invested in purchasing things that it intends to sell. In other words, Inventory is the current value of all the things that the organization owns and uses to create its product or to deliver its service to the market.
o Operating Expense is defined as all funds the system spends in order to turn inventory into throughput.

----------------------------------------------------------------

Q: Why are these definitions important?

A: Throughput, Inventory, and Operating Expense are the three operational measures by which the performance of any profit-making organization is gauged. From these measures, one can define metrics with which to gauge the performance of smaller groups (divisions, departments, teams...) within the organization. These metrics are designed so as to make the goal of each local group consistent with the goal of the organization. [Rizzo]

Recall that the goal is to make money.
In "What is this thing called Theory of Constraints...?" Dr. Goldratt states that there are three avenues open to increase our ability to make money:

1. Increase Throughput
2. Reduce Inventory
3. Reduce Operating Expense

As Dr. Goldratt notes, the opportunities to make more money through reductions in Inventory and Operating Expense are limited by zero. The opportunities to make more money by increasing Throughput, on the other hand, are unlimited.

Recall that Alex Rogo settled on three measures -- net profit, ROI, and cash flow. The first two are described mathematically as follows:

NP = T - OE
ROI = (T - OE) / I

where T is Throughput, OE is Operating Expense, and I is Inventory.

----------------------------------------------------------------

Q: What is the Socratic Method?

A: The Socratic method is an approach that attempts to help students find the solution on their own. The instructor guides students by posing a problem and then asking questions to direct their thinking. This approach works whether one is dealing with concrete or metaphysical forms of inquiry. For example, if a student knows how to calculate the area of a square, teachers can use the Socratic method to help them find the area of a right triangle. The same techniques can be used to help a student determine the nature of moral and immoral behavior in any given context. This is a powerful teaching method because concepts and ideas are better retained when one goes through a process of discovery. For example, most people are more likely to remember the Pythagorean theorem if they work their way through it than if a teacher simply says, "The sides of any right triangle are defined by the equation, A squared plus B squared equals C squared". More to the point, the process of discovery teaches students to think. Socratic teaching methods have been criticised, however, because they are somewhat manipulative and can be misused.
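The NP and ROI formulas defined earlier in this FAQ can be checked with a couple of one-line functions. The dollar figures below are invented purely for illustration:

```python
# Throughput accounting measures from the FAQ:
#   NP  = T - OE          (Net Profit)
#   ROI = (T - OE) / I    (Return on Investment)

def net_profit(t, oe):
    """Net Profit: Throughput minus Operating Expense."""
    return t - oe

def return_on_investment(t, oe, i):
    """ROI: Net Profit divided by Inventory."""
    return (t - oe) / i

# Invented annual figures: T = $1,000,000, OE = $700,000, I = $500,000.
t, oe, i = 1_000_000, 700_000, 500_000
print(net_profit(t, oe))               # 300000
print(return_on_investment(t, oe, i))  # 0.6
```

The arithmetic also makes the FAQ's asymmetry visible: OE and I can only be reduced toward zero, so their contribution to NP and ROI is bounded, while growth in T is not.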
----------------------------------------------------------------

Q: What are the five focusing steps?

A:
1. IDENTIFY the system's constraint.
2. Decide how to EXPLOIT the system's constraint.
3. SUBORDINATE everything else to the above decisions.
4. ELEVATE the system's constraint.
5. If, in any of the previous steps, the constraint has been broken, return to step 1 -- don't let INERTIA become the system's constraint!

In "What is this thing called Theory of Constraints...?" Dr. Goldratt paraphrases these steps more generally as:

1. What to change? - Pinpoint the core problems!
2. What to change to? - Construct simple, practical solutions!
3. How to cause the change? - Induce the appropriate people to invent such solutions!

----------------------------------------------------------------

Q: Can you explain these steps in a bit more detail?

A:
1. IDENTIFY the system's constraint. If you were to pick a single resource to add more of, which one would allow you to increase Throughput? Physical in nature, it will be:

Materials...The input to the process
Capacity....Insufficient amount of a specific resource relative to market demand
Market......Insufficient sales to consume available capacity

Later works deal with one other kind of constraint:

Policy......Any internal or external policy that limits profitability

2. Decide how to EXPLOIT the system's constraint. Determine how to work with the system's constraint so as to maximize throughput. For instance, if the constraint is a specific raw material, it means ensuring that there is no waste of that material. If the constraint is in sales, it means deciding how to capture more sales. If the constraint is a specific internal resource, it means ensuring that it is productive all of the time. Jonahs on the TOC list note that this is a difficult process -- squeezing the most Throughput from the system entails strategic decisions.
They further note that if any derivative of cost accounting is used to make that decision, it will not be an optimal decision!

3. SUBORDINATE everything else to the above decisions. This is the means by which the rest of the organization is synchronized with the capabilities of the constraint and the decisions made regarding how best to utilize it. For instance, if the constraint is a machine on the line, you might establish buffers to protect its ability to produce, and base the release of materials into the plant on the schedule for that constraint and the amount of buffer time that has been established. This is where most of our common measures in the plant must be changed. By default, every single resource that is NOT the constraint will do severe damage to the organization if it strives for 100% utilization. However, that is exactly how such resources are measured!

4. ELEVATE the system's constraint. In the previous steps, you ensured that the organization is optimized via nothing more than policy changes. In this step, you actually alter the constraint. For instance, when the constraint has been a machine in the plant, this is the step in which you will add physical capacity. You may do this through:

- reducing setup and process times
- investing in other process improvements
- overtime
- hiring more staff
- buying another machine

or any other action that removes the constraint.

5. Don't let INERTIA become the system's constraint. Once you have "broken" a constraint, go back to Step One! This is a reminder that all of the policies you have established in the organization based on one constraint will likely not apply once the constraint lies elsewhere!

----------------------------------------------------------------

Q: What is an Evaporating Cloud?

A: This is a term used to describe a methodology developed by Goldratt to resolve conflicts in a "win-win" manner.
The name relates to the idea that conflicts, like clouds, are often indistinct (i.e., people are unable to articulate the real reasons for the conflict). The "evaporating" part refers to the tool's ability to dissipate the confusion surrounding the conflict, clearly identify the key elements, and provide a means for resolving the conflict. [Dettmer]

----------------------------------------------------------------

Q: How does the Evaporating Cloud methodology work?

A: The method makes more sense when it is diagrammed visually through example, but Mr. Dettmer summarizes it as follows: Identify the FIVE elements of every conflict (right-to-left): the two Prerequisites that directly conflict with one another, the Requirement each Prerequisite is trying to satisfy, and the Objective each Requirement is necessary to achieve. The diagram looks like "home plate" lying on its side, pointing left. Unspoken Assumptions are identified relating to the arrows connecting each element of the EC. Solutions are proposed that replace invalid assumptions. [Dettmer]

----------------------------------------------------------------

Q: What is a Reality Tree?

A: A Reality Tree is a cause-and-effect tree (current or future), construction of which is governed by rigorous rules of logic (eight of them). It starts with "roots" in a cause of some kind, and develops upward through a "trunk" and "branches" of several layers of intermediate effects to the "leaves", which are the ultimate effects. In a Current Reality Tree, the "leaves" are undesirable effects, and the "roots" are the core problem or root causes of the Undesirable Effects. Root Causes constitute "what needs changing". A Future Reality Tree starts with a proposed solution to a core problem at the "root" and builds upward through intermediate effects (the "trunk" and "branches") to Desired Effects (the "leaves").

----------------------------------------------------------------

Q: Acronyms: What are CRT, DBR, EVA, FRT, PRT, TP, etc?
A: Common acronyms related to discussions of TOC include: BM............Buffer Management CC............Critical Chain (for projects) CCR...........Capacity Constraint Resource CRT...........Current Reality Tree DBR...........Drum-Buffer-Rope DE............Desirable effects EVA...........Economic Value Added FRT...........Future Reality Tree IO............Intermediate Objective (for PRTs) MSW...........Management Skills Workshop NBR...........Negative Branch Reservation OE............Operating Expense PIG...........A reference to Flying Pigs POOGI.........Process of Ongoing Improvement PRT...........Prerequisite Tree TA............Throughput Accounting TOC...........Theory of Constraints TRT...........Transition Tree (also TrT) TVA...........TOC "Throughput" Value Added TVC...........Totally Variable Costs TP............Thinking Process UDE...........Undesirable Effects In addition, one or two folks sent me full-blown definitions which may or may not be complete, but I thought might be worth sharing: CCR Capacity Constraint Resource [Haystack] This occurs when there's no true bottleneck, but a resource with, on average, tight capacity. Used for DRUM in most cases where no bottleneck exists. This is one big difference from computing based on The Goal, and computing based on Haystack Syndrome. POOGI Process Of Ongoing Improvement [reference?] In this instance, it refers to going through the cycle of: 1) Identify system's constraint 2) Exploit the constraint 3) Subordinate other elements of the system to this constraint 4) Elevate the capacity of the constraint 5) Loop back to 1, but beware of inertia. TP Thinking Process A five step process of logic trees to perform creative and structured problem solving. 
The easiest way to see these is to read Goldratt's books:
-- Critical Chain has a good discussion
-- It's Not Luck has more detail
-- "Uncommon Sense", http://www.goldratt.com/ucs.htm
-- Bill Dettmer's intro to TOC/TP, located at http://www.goalsys.com/apics.htm

CRT Current Reality Tree A structured method to list UDEs (symptoms) and diagnose the problem (find one or two problems which, if solved, would cause most or all symptoms to disappear). This is the first of the five structured processes to solve the problem, not just mask the symptoms with a band-aid solution.

OE Operating Expense Defined as all money required to achieve your goal (not necessarily to make more money, as the definition works for non-profit situations as well). This definition is reasonably precise in that OE is everything you would continue to spend if you did not purchase raw materials to turn into product (that is Inventory).

DBR Drum-Buffer-Rope A scheduling process that is master-slave: the master is the constraint (the weakest link in the chain of events to get from raw material to finished product). It goes full pace and sets the drum (the pace at which things go). A buffer is required to keep the master (the drum) beating continuously, so that processes feeding the master can have variation in output without affecting the drum (the pace). The rope is what keeps other processes from working ahead once they have filled the buffer for the master, so that they produce at the master's pace. By definition, if the master is going full pace on parts that are required, the only thing other processes can do is build up inventory, which has real costs in material costs, storage costs and opportunity costs (because the increased inventory will increase lead (or lag) time in market response). Note that not producing at 100% for these processes has an apparent unit cost increase (since you are still paying your fixed costs over a smaller number of units).
This is fallacious, since the fixed costs are there whether you produce or not; by continuing to produce at these processes more material than can be used, you actually increase the variable costs noted above.

EC Evaporating Cloud This is the second of the five thinking processes. Once the CRT brings out what the core problem to be solved is, the EC is used to identify a family of possible solutions that concentrate on a win-win, rather than a compromise or win/lose, approach. See Dettmer's article for an excellent example of this in a labor-management issue. By the way, he calls the EC a Conflict Resolution Diagram.

-- Flying Pig Something that seems, on the surface, as feasible as a flying pig, usually related to injections in an FRT or intermediate objectives in a PRT.

Tom McMullen offered some additional possibilities:

ABPC Allocation-based product cost As opposed to TOC's totally variable cost, TVC.

ABC-1 Activity Based Costing (Type 1) The use of an activity basis for allocating overhead amounts to form allocation-based product costs (ABPC).

ABC-2 Activity Based Costing (Type 2) A range of approaches for analyzing and altering the sources of overhead and operating expense. These approaches are often also called things like process modelling, process improvement, identification of "non-value-added" vs. "value-added" activities, reengineering, etc. TOC practitioners, working in the "Throughput World" context, would do a lot of things like this too, but typically call it "cause-and-effect analysis" rather than "activity based costing." Also, in contrast to some ABC-2 practitioners, TOC practitioners would be performing this work in support of growth in Throughput and in reduction of relative expenditures, not solely in support of cost reduction efforts, which can lead to a downward spiral.

----------------------------------------------------------------

Q: Where can I read more about the Theory of Constraints?
A: Several books have been published on the subject, including:

"The Goal" (Second revised edition), (c) 1992 Eli Goldratt, ISBN 0-88427-061-0 - The novel that introduces the Theory of Constraints. APICS Stock Number: 03341

"What is this thing called Theory of Constraints and how should it be implemented?" Goldratt, ISBN 88427-085-8

"The Haystack Syndrome" Goldratt. APICS Stock Number: 03125

"It's Not Luck" Goldratt. North River Press, 1994. 1-800-486-2665. APICS Stock Number: 03291

"Production the TOC Way" Eliyahu Goldratt - Billed as a training package for implementing TOC. APICS Stock Number: 03620

"The Theory of Constraints and its implications for management accounting" Noreen, Smith, and Mackey, ISBN 0-88427-116-1. APICS Stock Number: 03356

"Goldratt's Theory of Constraints: A Systems Approach to Continuous Improvement" Dettmer, H.W. University of Southern California, 1995. 800-477-8620 - Recommended for guidance on constructing reality trees

"Theory of Constraints -- Applications in Quality and Manufacturing" Robert E. Stein, CPIM. APICS Stock Number: 03338

"Constraints Management Handbook" James F. Cox II, CFPIM and Michael S. Spencer, CFPIM. APICS Stock Number: 03522

"The Race" APICS Stock Number: 03202

"The Theory of Constraints Journal"

Related publications recommended by the APICS Constraints SIG include:

"Re-engineering Performance Management" APICS Stock Number: 03160
"Regaining Competitiveness" APICS Stock Number: 03431
"Synchronous Manufacturing" APICS Stock Number: 03602
"Synchronous Manufacturing Workbook" APICS Stock Number: 03657

Note: APICS also offers a library containing several of the books and materials listed here at a substantial discount. Ask about APICS Stock Number 03400.

----------------------------------------------------------------

Q: Are there any TOC resources on the Internet?

A: Yes:

o There is an active discussion group on a list server at lyris@lists.apics.org. This list is a forum to talk about Dr.
Goldratt's theories and their real-life application. It's also an opportunity to practice those theories by involving the greater community available through the Internet. You may subscribe to the list by sending to it the message:

JOIN CMSIG your_email_address

o Crazy about Constraints! web site at: http://www.lm.com/~dshu/toc/cac.html Administered by David Shucavage (dshu@telerama.lm.com), this site is meant to be a more lighthearted web site devoted to TOC, the Thinking Processes, Drum-Buffer-Rope, and other methodologies stemming from Eli Goldratt's work. It provides listings of the books written on the subject and where they can be purchased. A list of consultants working in the area is included, as are conferences and workshops. It even has a section on "Unconstrained Humor" where you can share your best Jonah joke. Contributions are welcome.

o Another Web site was reported by Earl Mott (earl_mott@i2.com) at http://fohnix.metronet.com/~intell.

o APICS also hosts several pages dedicated to constraints management on its online service, which is located at http://www.apics.org/.

----------------------------------------------------------------

Q: What is APICS?

A: APICS, the educational society for resource management, is an international not-for-profit organization with more than 70,000 members in over 270 chapters throughout North America. APICS offers a full range of programs and materials on the latest business management concepts and techniques. These offerings, developed under the direction of integrated resource management experts, are available at the local, regional, and national levels, and in over 25 countries worldwide.
Since 1957, APICS members have been at the forefront of such management and manufacturing achievements as the widespread use of material requirements planning (MRP), Just-in-Time (JIT), computer-integrated manufacturing (CIM), Theory of Constraints (TOC), and other dynamic management practices and principles that affect all areas of manufacturing and service industries. APICS' internationally recognized Certified in Production and Inventory Management (CPIM) program is a standard by which individuals voluntarily assess their understanding of production and inventory control concepts and methods. APICS administers more than 57,000 CPIM exams each year. Answering the need for a more cross-functional, integrated workforce, APICS developed the Certified in Integrated Resource Management (CIRM) program. In conjunction with the CIRM body of knowledge, APICS offers a variety of educational programs and services that provide information regarding the interdependencies of all business disciplines. APICS materials and programs are available to members at discount prices. Domestic orders for books and materials are shipped free, and expedited shipping is available for a small fee. For information about APICS programs and activities, call APICS Customer Service at (800) 444-2742 or (703) 237-8344. ---------------------------------------------------------------- Please note that this FAQ borrows heavily from Dr. Goldratt's books, especially "The Goal" and "It's Not Luck". Many thanks to the many contributors to this FAQ. Conspicuous among them are: * Karl & Matt Bentz * John Blackstone * Bill Dettmer * Charlie Fried * Tom McMullen * Frank Patrick * Sarah Rahamim * Tony Rizzo * James Semerad, CPIM * David Shucavage * Len Zaifman +( TOC glossary, dictionary, abbreviations, index Additional Cause Reservation This is often the end of a torture process, in which the sadist is obliged to go on and say, "And I propose that the additional cause is...." 
This torture process is sometimes called scrutiny, in nice company, and is related to the medieval torture tool called the Categories of Legitimate Reservation, defined below.

Banana A soft fruit that grows on sufficiency trees, shaped like a flat ellipse. Bananas connect two or three cause arrows that enter an effect entity, and are read 'AND.' Bananas can be either vertical or horizontal, if you use FlowChart. They are not found frequenting necessity trees.

Bottleneck The constraint in a production flow process. The limiting-capacity process step.

Buffer In-process inventory, time, or budget allowance used to protect scheduled throughput (little t), delivery dates, or cost estimates on a production process or project.

Categories of Legitimate Reservation (CLR) A new medieval torture instrument invented by Dr. Eli Goldratt, and ruthlessly applied by sadists of the cult 'scrutinizer.' This instrument is carefully constructed to strain your brain in four dimensions, and is supposed to cause you to recant any dumb propositions you put into your tree. The instrument is ruthlessly applied until you shout to the watching crowd, "All right, I'll change this entity to #$%@#."

Causality Reservation A nice way of saying, "Bullshit. Just because both things happen doesn't mean they are related."

Choopchick This you are going to have to figure out for yourself. "Trying to pin the air with a sharp needle".

Clarity Reservation A pair of words you are guaranteed never to want to hear again after Jonah training. They mean that way down deep the person who said them knows your tree is screwed up, but isn't going to tell you what, or how to fix it, and is going to keep bothering you until you figure it out yourself. Most of the time, this reservation is only polite conversation (i.e., a lie) because the insidious scrutinizer really does understand that entity, and is just trying to confuse you before getting to the meat of your foolishness.
Cloud (EVAPORATING) A fixed-format necessity tree used to develop win-win solutions to action alternatives. The action alternatives are best expressed as opposites, e.g., "Do D; don't do D." The cloud has five entities and arrows, in the illustrated format. (See the Thinking Process description.) You identify the assumptions underlying the arrows to resolve the cloud. You develop injections that will invalidate an assumption, and therefore invalidate the arrow and 'dissolve' the cloud.

Conflict Resolution Diagram (CRD) A more confusing and big-worded title for a Cloud, used by people in universities so they can charge a lot for their courses, or maybe because graduate students would not feel that they are getting their money's worth with simple words like 'Cloud.'

Constraint A process or process step that limits Throughput.

Core Problem A primary cause of most of the UDE symptoms in your system. You identify the Core Problem as an entry point on your CRT that traces, in cause-effect-cause relationships, through at least 2/3 of the UDEs, and which you have the stamina and energy to change.

Critical Chain The longest set of dependent activities, with explicit consideration of resource availability, to achieve a project goal. The Critical Chain is NOT the same as what you get from performing resource allocation on a critical path schedule. The Critical Chain defines an alternate path which completes the project earlier by resolving resource contention up front.

Critical Chain Completion Buffer (CCCB) A time buffer placed at the end of the critical chain in a project schedule to protect the overall schedule.

Critical Chain Feeding Buffer (CCFB) A time buffer at the end of a project activity chain which feeds the critical chain.

Critical Chain Resource Buffer (CCRB) A buffer placed on the critical chain to ensure that resources are available when needed to protect the critical chain schedule.
This buffer is insurance of resource availability, and does not add time to the critical chain. It takes the form of a contract with the resources that ensures their availability, whether or not you are ready to use them then, through the latest time you might need the resource.

Cucumber A green vegetable used primarily in salads and to make pickles. In an AGI training session, usually a very bright young person, sometimes with a really nifty accent, who says mean things about your tree, such as "House-on-fire reservation" or, "Oh my, all of this stuff does not connect UDEs, and has to be trimmed," while being as 'cool as a cucumber.' Sometimes also mean old people who take even more delight in doing the same thing, and don't even smell nice.

Current Reality Tree (CRT) A sufficiency tree connecting together all of your UDEs on a particular system. The CRT is the first step in the Thinking Process, and is created to identify the Core Problem of your system and to aid in developing your Future Reality Tree.

Desired Effect (DE) The positive effect you want to have in Future Reality to replace your UnDesired Effect of current reality.

DBR Drum-Buffer-Rope method for production scheduling. The drum is the capacity of the plant constraint, and is used to set the overall throughput schedule. The buffers are in-process inventories strategically located to eliminate starving the constraint due to statistical fluctuations. The rope is the information connection between the constraint and material release into the process.

Drum The bottleneck processing rate, which is used to schedule an entire plant.

Entry Point An entity on a sufficiency tree which has no causes (arrows) leading into it.

Cause An entity which inevitably leads to a certain result (effect). Causes may be single or may require other conditions to lead to the effect.

Dependent Events Events in which the output of one event influences the input to another event.
Effect An entity representing the result of one or more causes.

Entity A condition that exists.

Existence Reservation This means, "Prove it."

Five Focusing Steps The five-step process to identify and elevate constraints. See the process diagram.

Flush A project measure for making decisions. Flush is the time integral of net profit times days, in units of dollar-days.

Future Reality Tree A sufficiency tree connecting INJECTIONS to Desired Effects.

Goal See the definition for Jonah, below.

Hockey Stick The shape of a curve that is relatively flat and then rises rapidly, representing, for example, the amount of effort one puts out as a deadline approaches.

House-on-fire Reservation Pig Latin for, "Ah, you can't seem to tell the difference between cause and effect." The original definition is based on the logic statement, "IF there is smoke and fire engines, THEN the house is on fire." The smoke and fire engines are not the CAUSE of the house being on fire, but rather cause us to KNOW that the house is on fire. Since the TP trees are cause-effect trees, House-on-fire is not correct.

Injection An action or effect which will be created in the future.

Intermediate Objective (IO) An action or effect which is a necessary prerequisite to an injection or another IO.

Inventory All of the investment (material) in the equipment necessary to convert raw material into throughput.

Jonah A nice old man who never heard of the Categories of Legitimate Reservation. What Dr. Goldratt hopes to be some day. In a more serious tone, if you don't know Jonah, you got to this page by mistake. The price of admission is that you have to read The Goal, by Dr. Eli Goldratt, before you come back. Also, a title bestowed upon those who complete the AGI Jonah course, and who are therefore prepared to go forth and replenish the rain forests with all kinds of trees. As for the whale: this is the guy that swallowed Jonah, and took him where God told him to go. I think I like him much more than I like Jonah.
Goldratt said that Jonah is named Jonah because he was the only prophet in the Bible that people listened to, next to Aaron. Much to his surprise and disgust. He was a reluctant prophet in the first place (thus heading the other way in a ship), and he got mad when God didn't kill the bad guys (who repented because of Jonah's preaching) anyhow. Not what you would call open-minded or forgiving. Hmm.

Need: The requirement(s) which MUST be met in order to achieve an objective or goal.

Necessity Tree: A logic tree in which each item at the tail of an arrow MUST exist in order for the item at the head of the arrow to exist, BECAUSE of some assumption or obstacle represented by the arrow.

Necessary Condition #1: Satisfy customers now and in the future. (A necessary condition to meet the Goal of any enterprise.)

Necessary Condition #2: Satisfy and motivate employees now and in the future. (A necessary condition to meet the Goal of any enterprise.)

Negative Branch: A sufficiency logic tree (potential FRT) stemming from an INJECTION, which may lead to undesirable effects.

Obstacle: An entity which prevents an effect from existing.

Operating Expense: All of the money it costs to convert raw material into throughput.

Predicted Effect Reservation: This means, "That entity can't be right, because if it existed we would see...the predicted effect." This is one of the less painful CLRs, as the sadist has to tell you what they are predicting.

Prerequisite Tree (PRT): A logic tree representing the time phasing of actions to achieve a goal, connecting intermediate objectives with effects that overcome obstacles. The PRT is read, "In order to have ENTITY AT HEAD OF ARROW we must have ENTITY AT TAIL OF ARROW because of OBSTACLE."

Red curve-Green curve: The Red Curve represents the typical process of ongoing improvement, which increases for a while and then decreases.

Root Cause: The cause which, if changed, will prevent recurrence of an UDE.
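The Throughput, Inventory and Operating Expense entries relate by simple arithmetic: net profit is Throughput minus Operating Expense, and return on investment is net profit over Inventory. A sketch with hypothetical monthly figures (the numbers are illustrative only):

```python
# Hypothetical monthly figures illustrating the three TOC measures
sales = 500_000             # all the money our customers pay us
raw_material = 180_000      # raw material cost of what was sold
operating_expense = 250_000 # money spent converting raw material to throughput
inventory = 400_000         # money invested in things intended for sale

throughput = sales - raw_material          # T, per the glossary definition
net_profit = throughput - operating_expense
roi = net_profit / inventory               # how well the investment performs

print(throughput, net_profit)  # 320000 70000
```

This is also why the glossary's Flush measure is quoted in dollar-days: it weights net profit by time, so a delay at the constraint shows up directly in the decision measure.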
Rope: The information flow from the Drum (bottleneck resource) to the front of the line (material release), which controls the release of work into the plant.

Scrutiny: Inspection of a tree to ensure that none of the categories of legitimate reservation apply, and that all of the entities are necessary to connect the UDEs.

Statistical fluctuations: Common cause variations in output quantity or quality.

Sufficiency Tree: A tree construction in which the existence of the entities at the tail of the arrow makes the existence of the entity at the head of the arrow an unavoidable result.

Throughput: All of the money our customers pay us minus the raw material cost.

Thinking Process: The five-step process which identifies What to change?, What to change to?, and How to cause the change? The sequence of steps starting with the CRT, through the Evaporating Cloud, FRT, PRT and resulting in the TRT.

Transition Tree (TRT): An effect plan specifying the effects to be achieved, the starting conditions, the actions necessary to create the effects, the logic of why the action will create the effect, and the logic for the sequence of the effects.

UnDesired Effect (UDE): (Pronounced "You-Dee," a pronunciation invented by AGI partner Dee Jacob so that all Jonahs seem to give positive feedback.) UDEs are the things you don't like about your system, and about which you can say, "It really bothers me that UDE." Note that UDEs are EFFECTS. They cannot contain 'IF...THEN' statements.

Want: The effect that one believes MUST exist in order to satisfy a need, because of some set of ASSUMPTIONS.

+( TOC international certification organisation From: "Bill Dettmer" Subject: [cmsig] TOC Body of Knowledge (Draft) Date: Tue, 19 Mar 2002 11:24:57 -0800 Some of you may know that last November in Atlanta, 200 TOC practitioners and teachers convened to establish a formal world-wide organization to certify TOC knowledge AND the ability to apply it.
This organization is now called the Theory of Constraints International Certification Organization (TOC-ICO). The purpose of such certification, which will be open and available to anyone anywhere in the world who desires to qualify, will be to designate people with an acceptable standard level of knowledge in constraint management principles, tools, and techniques. In the same way that ASQ, APICS, ISO-9000, or other certifying-body certifications signify comprehensive knowledge in designated areas, a TOC-ICO certification will identify to the world at large a person with mature knowledge and well-developed capabilities to apply TOC in specific or general circumstances. Eventually, a TOC-ICO certification will become the world-wide standard for this purpose. Its qualifications and requirements will exceed (and be more rigorous than) any other comparable designation (e.g., "Jonah"). One of the first tasks that must be completed, before certification opportunities can even be offered, is the definition of a standardized body of knowledge. A committee of the TOC-ICO board of directors, made up of Eli Schragenheim, Jim Cox, and me, has begun the task of defining this body of knowledge. The first step in this task is the construction of a general outline of what should be included. This outline will eventually (within the next couple of months) be amplified and expanded into much more detail, suitable to guide prospective certification candidates in preparing for their certification evaluation. However, for now, we want to be sure we haven't omitted anything critical. So, we're circulating this draft outline for general review and comment. The primary purpose of this e-mail message is to advise the TOC community at large of what's afoot in this domain of certification---in other words, what you can expect to see coming down the pike in the future.
A secondary purpose is to enlist the aid of any who might care to offer it in identifying any obvious omissions, or suggesting additions. Before suggesting any additions or changes, please keep the following guidelines in mind: 1. Any topic included in the BOK is subject to testing. This means that prospective candidates must have ACCESS to written or verbal (via video or audio tape) documentation on the topic. Unique or proprietary uses, approaches or applications of principles or tools of TOC may not be suitable for inclusion in the BOK unless the what-how-and-why have been introduced into the public domain, accessible to everyone, through proceedings papers, books, magazine articles, professional journal articles, etc. 2. Untested (i.e., not proven through repeated application in the real world) approaches, uses, or applications of principles or tools MAY be suggested, but it is likely that these would be included in the BOK at some later date---probably at the time of a revision to the BOK. What we are after here is "tried-and-true" tools and principles in "commonly accepted use" to begin with. This is not to say that we're excluding "cutting edge" developments---rather, it means that the BOK used to test for certification should not include experimental or developmental concepts and tools. With those qualifications in mind, we solicit your comments on the TOC Body of Knowledge (draft) that follows. You may return comments to me at: gsi@goalsys.com P.S. To relieve possible concerns you might have after reading this outline, be assured that as part of the BOK, the TOC-ICO will provide a comprehensive bibliography of study resources (and sources) for prospective certification candidates to use in preparing for a certification exam. Answers to all certification evaluation questions will be found within these materials, which will be available to everyone. P.P.S.
Please don't mistake the BOK for a "statement of inclusion" concerning any particular certification exam. Construction of the whole BOK comes first. The guidelines for each certification exam will specify which parts of the BOK are subject to testing for that certification. The TOC-ICO board of directors hasn't even formally decided exactly what certifications will be offered initially. At this point there will probably be two categories of certification. One might be called a "strategic application expert," which will require a comprehensive knowledge of the Thinking Process and TOC concepts and principles. It will also require a good general knowledge of discrete TOC tools such as Drum-Buffer-Rope (DBR) and Critical Chain Project Management (CCPM). A second category of certification might be called "tactical application expert" and will require comprehensive knowledge of tactical applications of TOC, such as DBR and CCPM, along with a general knowledge of concepts and the Thinking Process. More information on actual certification categories and process will follow in a later update. (Don't put too much stock in the names of the categories I used above---the formal name of the certification will undoubtedly differ.)

The TOC Body of Knowledge: Main Topics

CONCEPTS AND PRINCIPLES
* The Basics of the Throughput World
  o The definition of constraints
  o The Five Focusing Steps
  o Statistical fluctuations and dependent events
  o T, I and OE
* Strategy or the holistic approach
  o The link between local improvement and how to turn it to global improvement
  o Analysis of various cases with "school solutions"
    * The Cadillac case
    * Samsonite
    * Applied Materials (selling turnkey solutions rather than separate items)
  o Dealing with written case studies
* The Layers of Resistance
* Marketing and sales
  o Definitions of marketing and sales
  o The 5, 6 or 9 layers of resistance
  o Segmentation
  o Unrefusable offer / Mafia offer
  o That dog's tail and its impact.
A breakthrough in a certain area that is only at the 'tail' of the value to customers might change the market perception for the whole market.
  o Preparing the client's CRT and using it for the "mafia offer"
  o In how many market segments should a company show presence
  o Using the TT in sales to develop the sales pitch and the presentation
  o SPIN selling and TOC in sales; managing the sales process
  o The devastating impact of cost accounting on marketing and sales, and the use of T/CU in sales
  o Integrating sales and production planning: mutual agreement on the desirable product mix, what additional features can be bundled together (like much-reduced response time for some additional price), what customization efforts can be granted

THINKING PROCESS
* The Thinking Processes
  o The road map
  o Cause and effect basic logic
  o Reservations categories
  o Current Reality Tree
  o Evaporating Cloud (conflict resolution diagram)
  o Future Reality Tree
  o Prerequisite Tree
  o Transition Tree
  o The three-cloud technique
  o Communication trees
* Management Skills (applications of discrete TP tools in daily environment)

PRODUCTION AND DISTRIBUTION
* Manufacturing
  o What dictates batch sizing? Including transfer batch versus process batch and the fallacy of EOQ
  o The efficiency and variance performance measurements - the fallacy and the damage
  o The notion of capacity-constrained resource (CCR)
  o Drum-Buffer-Rope for make-to-order
  o DBR for make-to-stock - replenishment
  o I, V, A, T
  o Buffer Management for make-to-order
  o Buffer Management for make-to-stock
  o Software support for DBR and BM: the additional features of DBR software relative to manual DBR; how to force non-DBR software systems to support the DBR efforts
  o Performance measures in manufacturing
  o Line design, the notion of space-buffer
  o Strategic location of the capacity constraint
  o Engineering improvements - investment justification
  o How quality control is affected by the location of the CCR
  o Outsourcing - the right and wrong
  o Common policy constraints in manufacturing
* Managing the Supply Chain
  o Managing a distribution company - the replenishment buffer
  o Manufacturing and distribution: the concept of the central warehouse
  o Maintaining buffers between links in the chain (vendor responsibility)
  o Maintaining trust between links
    * The Inventory-Dollar-Days measurement
    * The Throughput-Dollar-Days measurement
  o Splitting T between the partners along the chain
  o The typical downstream dilemma
  o The typical upstream dilemma
  o Using the TP to achieve a match of interests along the chain. Certainly risk sharing, especially when forecasting is involved, is a case that can be handled with clouds.

PROJECT MANAGEMENT
* Project Management
  o The three basic problems (what project? what features? scheduling)
  o The concept of the critical chain
  o The impact of Parkinson's law
  o Managing the uncertainty (buffers)
  o Managing multi-projects
    * Bad multi-tasking
    * The concept of the Drum in PM
    * Capacity buffers
  o Buffer Management in PM
  o Performance measures in PM
  o Dependent projects

CONSTRAINTS (THROUGHPUT) ACCOUNTING
* Throughput/Constraints accounting
  o Why product cost is a mirage
  o The kernel of the conflict with ABC allocation
  o T/CU and its limitations: T/CU is valid only for "small decisions" and only when there is one clear CCR; any decision that might shift the CCR cannot be addressed with T/CU
  o The value of inventory
  o Dollar-days in investment justification (flush)
  o Judging the economic desirability of large opportunities
  o The strategic constraint from the TA view
* Measurements
  o The dual role of measurements
    * A status of one aspect of a situation
    * A motivational tool
  o The notion of T as added value
    * T for profit and for non-profit organizations
  o I and OE
  o Local performance measurements - the problems
    * Efficiencies etc.
    * Local OE, local I and contributions to the T
    * Holes in the buffers
    * TDD and IDD as departmental performance measurements

TOC Body of Knowledge (c)TOC-ICO, 2002

+( ToC proliferation problem From: "Tony Rizzo" Date: Fri, 8 Apr 2005 01:21:47 -0400 Subject: RE: [cmsig] [tocleaders] TOC Acceptance CRT Reply-To: tocleaders@yahoogroups.com "...but as soon as the constraint moves out of production then the next constraint(s) are always about policies." It's always about policies, and it goes something like this: 1) The policy makers think that by maximizing workforce utilization they maximize system output. 2) They require everyone to maximize the workforce utilization measurement. 3) They put in place defining policies that align operational decisions with the operational goal of maximizing workforce utilization. 4) Managers and workers consistently take decisions and actions that maximize workforce utilization. Unfortunately, the workforce utilization measurement is in direct conflict with system output and shareholder value. So everyone compromises shareholder value for the sake of complying with the defining policies and surviving. Why do I call them defining policies? I do so because they define the operational model of the enterprise, by forcing a specific set of decisions and actions consistently. But at the core of it there is the false belief that workforce utilization and system output are directly proportional to each other. That used to be the case, about two hundred years ago. But it ain't so any more.
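Rizzo's claim that utilization and output are not proportional has a standard quantitative illustration from queueing theory (not from his post): in an M/M/1 queue the average time a job spends in the system is W = 1/(mu - lambda), which explodes as utilization approaches 100%. A sketch with a hypothetical resource rate:

```python
def time_in_system(service_rate, utilization):
    """Average time a job spends in an M/M/1 queue.

    W = 1 / (mu - lambda). As utilization approaches 100%, flow time
    (and WIP, by Little's law) grows without bound."""
    arrival_rate = utilization * service_rate
    return 1.0 / (service_rate - arrival_rate)

mu = 10.0  # jobs/day a resource can process (hypothetical)
for rho in (0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"utilization {rho:.0%}: {time_in_system(mu, rho):.2f} days per job")
```

Pushing utilization from 80% to 99% multiplies flow time twenty-fold (0.5 to 10 days here), which is exactly the mechanism by which maximizing the utilization measurement compromises system output.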
+( TOC Theory of Constraints From: "Jim Bowles" To: "Constraints Management SIG" Subject: [cmsig] Re: FCS and Production Strategies Date: Sun, 5 Dec 2004 01:35:40 -0000 Some time ago we discussed what TOC is; the following short description was offered. The Theory of Constraints, commonly referred to as TOC [Tee O Cee], is a body of knowledge that has been accumulating for more than 25 years. It comprises two parts: a set of applications for improving different functions of an organisation, and a set of scientifically based tools for resolving different types of problem or for creating new applications. These are known as the Thinking Processes. The best-known application is for Production/operations (described in the form of a novel called The Goal, 1984). This application today is referred to as "Drum Buffer Rope" (DBR). Others include Finance and measures, Project management/Engineering, Distribution, Management of People, Marketing and Sales, and the all-embracing Strategic and Tactic approach, now called the "Viable Vision". On its own, each application provides the means to change the performance of a function by several orders of magnitude. But the most powerful of all is to use the combination of applications and the Thinking Processes in a strategic way in what can be called a Process Of On-Going Improvement. We do this to set the direction and best practice for developing the organisation as a whole. The Thinking Processes (which have their origins firmly rooted in the hard science of physics) are a set of tools/processes or procedures that allow you to "break out of the box". They can be used singly or in combination (known as the TP Road Map). Each tool addresses a different problem depending on what it is that blocks you from moving forward towards your chosen Goal. In essence they help you address three questions: What to change, What to change to, and How to cause the change.
Of course these questions only have relevance if you know what you want to achieve (a Goal) and know how to measure your progress towards that Goal. In TOC the analogy of a chain is used to focus attention on how we view an organisation. On the one hand we can view it as a set of independent links. This is the most common view in all types of organisation, and it encourages people to do things that are good for their link alone. We call this focusing on the local optima. On the other hand we can consider the chain as a whole, and then we need to consider its weakest link(s). We call this focusing on the Global Optimum. Identifying the weakest link(s) is synonymous with finding the system's "constraint". For those more familiar with the TQM philosophy or Deming, this is our way of "using the few to control the many". The most commonly encountered problem is the differences in the way that people view and measure their chain or part of it. By focusing on the "weight" or "cost" we come to one set of decisions, actions and conclusions. But by focusing on its strength or its ability to deliver results we come to an entirely different set of actions and solutions. This has led TOC practitioners to a consciousness of two opposing paradigms. One we call the "Cost World"; the other we call the "Throughput World". --- From: "Jim Bowles" Date: Thu, 3 Nov 2005 15:13:20 -0000 Subject: RE: [tocleaders] Testable (Refutable) Hypothesis For Theory of Constraints Hi Guys This afternoon I have had great fun (Ahah) sorting through a huge pile of papers. I found several items on TOC and the attached notes struck a chord. How's this for a simple explanation of TOC and its assumptions? The notes were taken at one of the Jonah Workshops I attended during the 1990s. Probably after 1994, when Eli accepted the letters TOC, which he had been averse to do until then. Bad experience with OPT, he said. WHAT IS TOC?
Purpose: To bring human organisations to the same level of science as the hard sciences, i.e. effect-cause-effect. In other words, show that management is science-based, not merely an art.

1. Definition: Complexity. The more data you have to supply to explain things, the more complex the system.
   a. Belief: In reality every system is extremely simple. In physics there is a search for an underlying theory of everything covering the four forces: gravity, electromagnetic, strong nuclear force, weak nuclear force.
2. Definition: Problem.
   a. The more degrees of freedom the system has, the more complex the system.
   b. Belief: Conflict does not exist in reality.
3. Jonah: Conflicts can be removed by surfacing an assumption that can be changed.

--- From Eliyahu M. Goldratt in "Das Ziel", McGraw Hill 1997: THE GOAL IS TO MAKE MONEY AND EARN PROFITS, now and in the future, expressed in OpInc, ROCE, fCF, or in Throughput, Inventory and Operating Expense (in that order). THROUGHPUT is the amount of money per unit of time that the SYSTEM earns through sales. INVENTORY is all the money invested in the SYSTEM to purchase things (stock, machines...) that are intended for sale. OPERATING EXPENSE is all the money the SYSTEM spends to turn Inventory into Throughput. Throughput and Inventory can be changed by orders of magnitude; Operating Expense only to a small degree! So set the priorities correctly (first reduce Inventory, and only then Operating Expense)! A SYSTEM is a set of dependent events (process steps in the APL) that are exposed to statistical fluctuations (order variations, disruptions...). In the system there are bottlenecks and non-bottlenecks. Bottlenecks must be supplied only with parts of flawless quality and should show maximum utilization. They must not produce spare parts, safety stock, minimum lot sizes and so on, but only quantities for customer orders. Bottlenecks must be dealt with first of all.
The throughput of a factory is determined exclusively by its bottlenecks, never by the non-bottlenecks. It follows that non-bottlenecks can stand idle or be set up more often without reducing output. Letting non-bottlenecks stand idle avoids unsellable inventory! Almost all competitive products that contain bottleneck parts are sold immediately and are rarely in stock for longer than a few days. An hour lost at a bottleneck is an hour lost for the entire system (= the company). An hour saved at a non-bottleneck ("productivity") is a waste of time and a productivity illusion, nothing more! If you reduce lot sizes at non-bottlenecks, costs do not rise (as long as labour and machine capacity are available); they FALL (inventory reduction)!

OPT METHOD: How do you determine the core problem (black-box view)? What should we change? What does the replacement solution look like? How do you set the change in motion (e.g. through Socratic questioning, IF -> THEN)?
1) IDENTIFY the system's constraints/restrictions/bottlenecks (factory, organisation).
2) Decide how to (better) EXPLOIT the system's constraints/restrictions/bottlenecks.
3) SUBORDINATE everything else to these decisions, especially old rules, guidelines, customs and so on.
4) RELIEVE (elevate) the system of its constraints.
5) ATTENTION! When a constraint is broken, go back to step 1). Beware of constraints that create inertia within the system.

CONSULTING: Creative Output Ltd./UK; DMA-Planungs GmbH, Pagenstecherstrasse 4, D-65183 Wiesbaden, Tel 0611 524743/599676, Fax 0611 520770. See also the entry "TOC network".

There are three potential goals to choose from:
1. Make Money now and more in the future.
2. Satisfied Customers.
3. Satisfied and secure Employees.
It doesn't matter which one you select as your goal, as long as you make the other two "necessary conditions" never to be violated by any action taken to achieve your goal. Given the above rule, Eli chooses #1 as "The Goal" because it is a lot easier to measure $ than customer or employee satisfaction. Dennis Marshall Certified Associate of AGI >> It would seem to make sense Dennis, but there are too many negative implications (or negative branches) of making #1 the goal. #2 will eventually cause #1, but #1 will not necessarily cause #2; therefore it is my claim that #2 should always be the goal. However #2, as stated, is not nearly enough. "Who is the customer that we have decided to serve?" requires clear criteria for accounts that are a part of our market and accounts that are not. This is necessary to provide guidance on a host of tactical decisions, from manufacturing to product development to sales. So in my world, #2 is really no better than #1 until it has much better definition. And most of the rest of this forum is familiar (probably too familiar!) by now with my views of what criteria are necessary to properly define the targeted markets. Bill Hodgdon +( TOC training programs, JONAH course, Effective Decision Making from the AGI-UK homepage on 19-NOV-2000 The Jonah Programme =================== This is a 10-15 day process designed to give people the full set of the advanced "Thinking Process" tools available as part of the TOC body of knowledge. Often, improvement initiatives and problem-solving methods address only symptoms of a deeper core problem, and hence the benefit gained from addressing them is extremely limited. For results in line with the direction of your goals, you need to find the real leverage point that can bring you substantially more for your efforts. This programme is a must for anyone who wants or needs to:
- Learn and apply a holistic approach to business performance
- Find breakthrough solutions for their own company or their clients
- Develop and implement breakthrough solutions to systemic core problems
- Accelerate the improvement process in a particular department/subject area
- Effectively address the long-standing and seemingly intractable issues facing their organisation
- Generate increased value by focusing on a leverage point rather than incremental improvement
- Analyse, document and communicate full cause-and-effect relationships in systems
- Move towards becoming an AGI Certified Expert
Overall, it answers the question facing many leaders today: "How can we accelerate the rate of ongoing improvement?" In summary, each participant learns a proven process to identify, with cause-and-effect logic in any given area, WHAT needs to change, WHAT it should be changed TO, and HOW to cause the changes. Systematically and logically asking and answering The Three Questions is essential to accelerating any process for ongoing improvement. "The Jonah Programme teaches the use of the logical Thinking Processes which, in my experience, are the vital ingredient for anyone wishing to bring real performance improvement to their organisation. Most managers already have the intuition and emotion that are pre-requisites for improving the organisation; however, intuition and emotion alone are frequently not sufficient. The thing that ties everything together is logic. I know of no more effective tools than the TOC Thinking Processes." J. Murphy, Vice President R&D - Jonah Programme graduate Detailed Programme Description 1) The process starts with answering the first question - What to change? Prior to the programme, each participant will have agreed with the tutor a subject matter to use for the full duration of the programme. This means the participant can practise applying the tools to, and find answers to the Three Questions for, a real-life subject area rather than a case study.
We then begin to build the first tool in the programme, the "Current Reality Tree", a map of cause and effect logic to fully understand the issues, beliefs and assumptions around the problems to gain a greater understanding at a deeper level. This process starts with creating a list of the issues as "Undesirable Effects" in their given subject area. A few of these from a broad perspective are taken and the tool called the "Cloud", which identifies the conflict or dilemma that sits behind each Undesirable Effect, is used on each one. The result is a greater understanding of each individual issue but now there is the need to bring them together through a consolidation technique to develop the "Core Conflict Cloud" that sits at the bottom of all the issues as the major common cause. This is represented as the dilemma which prevents the core problem from being resolved. This is the one conflict that causes or contributes in a major way to most (80%) of the Undesirable Effects listed earlier. This work must then be validated to check that what has been identified as the core problem really is the leverage point to affect the other issues. This is done through mapping the remaining issues with robust cause and effect logic and linking this map to the Core Conflict Cloud created earlier. This completes the creation of the Current Reality Tree and answers the question of what really needs to change. 2) The process continues to answer the next question - What to change to? The first task is to find a way to break out of the Core Conflict Cloud. The participants find the erroneous assumptions that created the core conflict and that now hold the problem together. These assumptions will often be policies (either explicit or implicit), measurements or behaviours that constrain the desired results. One of these will be the key to the direction of the solution that will give rise to a breakthrough idea, an "Injection" of something not done before in this reality. 
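The validation step described here, checking that the core conflict really connects to most (80%) of the Undesirable Effects, can be pictured as a reachability check over the cause-and-effect arrows of the tree. A toy sketch (all entity names are hypothetical, invented for illustration):

```python
# Toy Current Reality Tree: cause -> the effects it leads to (hypothetical)
crt = {
    "core conflict": ["late releases", "expediting"],
    "late releases": ["UDE: missed due dates"],
    "expediting": ["UDE: high overtime", "UDE: quality escapes"],
}
udes = {"UDE: missed due dates", "UDE: high overtime", "UDE: quality escapes"}

def reachable_udes(tree, node, udes):
    """Follow the cause-and-effect arrows and collect every UDE downstream."""
    hit = set()
    for effect in tree.get(node, []):
        if effect in udes:
            hit.add(effect)
        hit |= reachable_udes(tree, effect, udes)
    return hit

# The candidate core problem is the entity whose effects cover the most UDEs.
core = max(crt, key=lambda n: len(reachable_udes(crt, n, udes)))
print(core)  # core conflict
```

In a real CRT the check is done by scrutinising the logic of each arrow, not by counting edges, but the structure is the same: the core conflict earns its name only if the UDEs really sit downstream of it.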
However, as this is only the initial direction for a solution, it is not yet a complete solution. The Undesirable Effects listed earlier are used for the participant to identify what effects they would like to see instead. Since the initial "Injection" is usually not sufficient to cause all of these "Desired Effects", the "Future Reality Tree" tool is used to check what exactly will result from the breakthrough idea, "Injection", and what additional "Injections" are needed. The process highlights the missing elements to allow the development of a more complete solution. The "Future Reality Tree" is then upgraded using rigorous cause and effect to include the additional elements required to achieve the full map of the Desired Effects. It is also important that the solution does not create any new, devastating undesirable effects, so participants plan to identify and deal with potential negative outcomes ahead of time. The "Negative Branch Reservation" tool is used here to identify, define and address the potential, significant negative outcomes. 3) The last part answers the third and final question - How to cause the change? The next step now is to create a logical implementation plan for the breakthrough ideas that have to be brought into reality to achieve the vision contained in the "Future Reality Tree". Here the participants use the "Pre-Requisite Tree" tool to collect the major obstacles that block implementation of the required changes. These obstacles become leverage points to identify and sequence the intermediate objectives that must be achieved to create the new mode of operation. The process of developing the "Pre-Requisite Tree" leads to a clearer understanding of the buy-in required by groups and individuals and also provides a clear roadmap for the implementation plan. This can then be converted from a strategic level plan into a tactical action plan with the use of the "Transition Tree" tool. 
All organisational improvement efforts require the active collaboration of others. In order to properly achieve the buy-in of others, it is important that it be done in a way that works with the normal process people employ when mentally evaluating a proposed solution. Failure to work within this process usually creates the impression that people are resisting change. This natural process is what has been defined in TOC as the "Six Layers of Resistance". Systematically addressing the Six Layers minimises resistance to change, and the solution is enhanced through the collaboration of those whose buy-in is needed. Additional information: During the process, participants are expected to validate their early work with others outside of the main programme. One pre-requisite for this programme is that each participant must understand the three basic tools of the Thinking Processes - the Cloud, the Negative Branch and the Pre-Requisite Tree. This can be achieved by attending what is called the Effective Decision-Making programme, described below. We also recommend that anyone wanting to complete the Jonah process watches the Goldratt Video Programme. Being certified by AGI (The Goldratt Institute) as a Jonah gives individuals certain privileges in relation to continuing education in TOC, and means that the person has joined a growing body of people dedicated to excellence in personal performance and contribution to others. Effective Decision Making ========================= This is a 4-5 day process designed to give people the basic Thinking Process tools contained within the TOC Body of Knowledge to increase their personal effectiveness. Unlike many other such programmes, this programme provides a specific process for developing win-win solutions, refining solutions to avoid dysfunctional outcomes and creating implementation strategies for those solutions.
It doesn't tell you what you should do but instead focuses on teaching the PROCESS to get results no matter what the problem is, through the use of easy-to-follow techniques. This programme is a must for anyone who wants or needs to:
- Be more valuable, personally, to their organisation
- Be more effective at handling day to day issues that arise
- Develop a focused and logical approach to the way they personally manage
- Improve the rate at which decisions are made and true consensus is reached with others
- Develop their problem solving skills and ability to find real win-win solutions
- Develop a plan to achieve an ambitious deliverable
- Prepare for the more advanced Thinking Process tools programme

Overall, it is a personal development / problem-solving programme that teaches how the systematic and logical approach of the TOC Thinking Processes can provide a consistently successful means to achieving these objectives.

"The EDM programme provided some very useful insights into the way we were approaching decisions. As a management team we now use the tools on a daily basis and I think we are making better decisions, more quickly and without the frustrations and conflict we suffered from before. The programme taught us a way of better understanding problems and then how to work together to solve them. I would recommend EDM to any individual or team wanting a powerful method of dealing with any problem, in business or elsewhere." Chris Hocking, Managing Director - Effective Decision-Making Programme graduate

Detailed Programme Description: The programme begins with an introduction into TOC, Goldratt and the Thinking Processes. We cover the fundamental aspects of the TOC approach - the 5 Steps of Focusing and the 6 Layers of Resistance/Buy-In - which must be understood as a basis for using the Thinking Processes. In the next part of the programme, participants learn the basic logical connections for investigating problems and developing solutions.
They learn and practise the basic tool of TOC, called a Cloud, a logic diagram used to define the problem as a dilemma or conflict between two opposing pressures. Then the technique is built on in order to find a breakthrough idea that will remove the conflict and resolve the dilemma. The breakthrough provides the direction for a full solution. However, there are no perfect solutions. Especially not the 'half-baked' solutions that our bosses 'dump' on us and expect us to implement! Every solution also contains a fear of something going wrong - perceived negative outcomes of the successful implementation. So next, the participants learn how to handle the natural "Yes, but…" response to new ideas by converting them into a full cause and effect diagram and surfacing basic assumptions that cause the concerns. This tool is called an NBR and shows the Negative Branch of the logic that arises from the idea about which the person has a Reservation or concern. Every breakthrough solution must have one or more supporting ideas to trim or pre-empt its risks. Time is devoted to learning and practising this tool to enhance the ability to constructively address how to critique new ideas. Once the two basic tools have been understood (Cloud and NBR) the participants are ready to tackle deeper issues. Taking UnDesirable Effects (UDEs), problems that have repeated themselves often enough for their negative effects to be considered as the norm, participants can apply the "3 Cloud method" to reveal the "Core Conflict" that lies behind the majority of the problems. They use the techniques learnt so far to build a breakthrough solution and trim the risks. Having a written solution has not yet given us the benefits! We have to actually implement it. Many practical obstacles wait for us on the way. Using the Pre-Requisite Tree tool (PRT), each participant collects all the obstacles standing in the way of implementing a rounded solution or a major personal deliverable. 
They then develop the necessary intermediate objectives to achieve the overall successful implementation of the solution. This method demonstrates how to address these concerns and use them as the foundations for a 'milestone plan' for the implementation. This serves as a skeleton project structure that could later be enhanced into a project network.

+( TOC webpages

INTRODUCTION TO TOC, GENERAL
Basic info on TOC - links to other webpages - http://www.rogo.com/cac/index.html
Goldratt Institute Homepage - http://www.goldratt.com
Collection of Links - http://users.aol.com/caspari/toclist.htm
Dr Holt TOC Power Point Presentations - http://www.vancouver.wsu.edu/fac/holt/em526/ppt.htm
Constraints Management SIG - Theory of Constraints - http://www.apics.org/sigs/CM/TOC.htm
IOWA State University Center of Ind. Research and Service - http://www.ciras.iastate.edu/toc/index.html
Webpages of consultants include papers and presentations on TOC matters.

TOC and PROJECT MANAGEMENT
TOC Program Management - Multi-Project Management with TOC - http://www.focusedperformance.com/articles/CCPM.htm and http://www.focusedperformance.com/articles/multipm.html
The Product Development Institute - For Speed! TOC CCPM - http://www.pdinstitute.com/
Georgia Southern University - Dr. Ed Walker - http://www2.gasou.edu/facstaff/edwalker/ccpm.PDF, http://www2.gasou.edu/facstaff/edwalker/mpccpm.PDF, http://www2.gasou.edu/facstaff/edwalker/apics.PDF
Managing Design Capacity in an Uncertain World - http://www.cadence.com/features/designCap.html

TOC and ACCOUNTING
TOC-L Goldratt Postings on accounting - http://users.aol.com/caspari0/toc/MAIN.HTM
Constraint Accounting Measurements, John Caspari - http://members.aol.com/caspari/

TOC CONSULTANTS
Chesapeake Consulting - http://www.chesapeak.com
TOC 1999, by Anders Claesson - http://home1.swipnet.se/~w-19265/anders/index.html
Tu Nguyen's Theory of Constraints Homepage - http://www.saigon.com/~nguyent/toc.html
The Decalogue, Quality for business - http://www.thedecalogue.com/Index.htm
Larry Leach, Advanced Projects Institute - http://www.srv.net/~lleach/
Fran Stone Enterprises - TOC Jonah's Jonah - http://members.aol.com/FStoneEnt/index.html
Focused Performance - http://www.focusedperformance.com/
APT Concepts - http://www.aptconcepts.com/
TOC in Europe - http://www.toc.co.uk/ and http://www.goldratt.nl
Bill Dettmer, Goal Systems International - http://www.goalsys.com/
AMC Homepage, TOC consulting - http://www.wvamc.com/

TOC SOFTWARE
Thru-Put Technologies - http://www.thru-put.com/
ProChain Solutions, Inc.
Theory of Constraint software - http://www.prochain.com/

MISC
TOC The Human Benome (Business Memetics) Project by Tony Rizzo - http://www.jps.net/eurogrfx/benome/

+( TOD theory of delays

Date: 18 Dec 2001 23:15:31 -0000 To: List Member From: "Superfactory" Subject: Superfactory Newsletter - December 2001 If you cannot easily read this email, please see the newsletter online at: http://www.superfactory.com/newsletter/newsletter121801.htm Superfactory Newsletter - Volume 2 Number 7 - 19 December 2001 One of the many tools and articles to be posted in the public domain on the Job Shop Lean group is "Theory of Delays" by Vincent Bozzone of Delta Dynamics. The first article created some rather intense discussion, dissension, and agreement. Here's the second part. Go to the JSLEAN page to see the first part as well as other articles and tools on lean manufacturing in job shop environments.

The Theory of Delays (TOD) - Part II
Improving Performance and Profitability in Job Shops & Custom Manufacturing Environments
Vincent Bozzone, Delta Dynamics, Inc.

Introduction: There are a number of types of theories in science, each of which has a different function or purpose. For example, models are simplified representations of the real thing that allow us to conceptualize phenomena that are too complex to understand in their entirety. Heuristic theories provide an organized method for exploration and learning, generally for problem solving. Predictive theories involve making a statement, or forming an opinion, about what will happen in the future, often based on taking some action in the present. Science develops theories through inductive logic (from the specific to the general, built on a sound data base), and then tests the theory by generating predictions through deductive logic. The validity of a theory is tested by empirically verifying those predictions.
The Theory of Delays (TOD) is a predictive theory based on a business model specific to job shops and custom manufacturing businesses. The principal hypothesis (from the Greek hypothesis 'foundation, base') is that continuously reducing lead time is the single most powerful strategy a business owner or manager can pursue to bring about profitable growth in these types of businesses.

The Logic is Straightforward:
- Companies that can bid and ship an order more quickly will realize a competitive speed advantage and so increase sales. The company that can deliver in 2 weeks has a significant advantage over one that requires a 12-week lead time.
- Faster service can command a premium price and produces winning bids. Our research shows, for example, that getting your quote in front of a buyer before your competitors gives you a huge advantage in getting that order.
- Because custom manufacturers are 'order driven,' additional sales (order backlog) create momentum and greater efficiency. When the backlog is down, work has a tendency to get stretched out, as employees want to make the existing work last and management wants to maintain the skill base. There is less pressure to produce when the backlog is low than when it is high.
- It is a 'law' of production that the longer an order remains on the shop floor, the more it costs to get out the door. Orders accumulate costs as they wend their way through a shop. Thus, the less time an order stays on the floor, the less opportunity for costs to add up.
- Although some people believe that quality and productivity are opposites that cannot co-exist, this is not true. A company does not have to sacrifice quality to meet output goals. In fact, the opposite is more often the case. A shop that is operating at a productive pace will generally be able to meet quality goals more consistently than one in which the pace is disjointed, lackadaisical, or chaotic.
- The greater the volume of orders through a company, the lower the fixed overhead that must be carried by each order. This creates an opportunity for overall profit improvement and/or improved price competitiveness.
- Cash flow is improved. Less working capital is required when the time from order entry to shipping is compressed (i.e., when the time from 'money out' to 'money in' is shortened, less working capital is required).

A Note on Terminology: Although the term 'job shop' is often construed to mean 'machine shop,' job shop is used more broadly here to refer to businesses that:
- Produce on an order-by-order basis to meet customers' specifications.
- Secure work through a bidding process, and thus tend to be highly competitive.
- Serve other companies and/or distributors as opposed to consumers or end users.
- Are highly specialized. Product differentiation is generally limited to variations within a basic product category as opposed to product variety.
- Are extremely diverse in terms of output, technology, operations, and size. Output can range all the way from single parts to complex sub-assemblies. Materials can include metals, plastics, paper, rubber, cloth, ceramics - virtually any material with commercial applications. Production technologies are equally diverse.
- Are not all manufacturers. Printing, engineering services, architectural design, advertising agencies, construction companies, and others all operate on a similar order-driven business model.

Evidence in Support of TOD: There are innumerable case studies of companies that have achieved profitable growth by applying TOD. Hyde Tools, which is my primary 'laboratory,' has seen their business double in the last 3 years and their profits grow fourfold since they reduced their lead time from sixteen to two weeks on average. The Center for Quick Response Manufacturing, which operates on the same theory, has reported outstanding results in their client base as well.
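The overhead-absorption and cash-flow points above lend themselves to simple arithmetic. Here is a minimal sketch; every figure (overhead, order counts, daily spend, lead times) is invented for illustration and is not taken from the article:

```python
# Hypothetical illustration of two TOD claims: fixed overhead absorbed
# per order falls as order volume rises, and working capital tied up
# falls as the order-to-cash interval is compressed. All numbers invented.

def overhead_per_order(fixed_overhead, orders_per_year):
    """Fixed overhead carried by each order."""
    return fixed_overhead / orders_per_year

def working_capital_tied_up(daily_spend, days_order_to_cash):
    """Cash locked up between 'money out' and 'money in'."""
    return daily_spend * days_order_to_cash

# Overhead absorption: the same $500k overhead, spread over more orders.
print(overhead_per_order(500_000, 1_000))   # 500.0 per order
print(overhead_per_order(500_000, 2_000))   # 250.0 per order

# Working capital: $10k/day spend, 12-week vs 2-week order-to-cash time.
print(working_capital_tied_up(10_000, 84))  # 840000
print(working_capital_tied_up(10_000, 14))  # 140000
```

Doubling order volume halves the overhead each order must carry, and cutting the order-to-cash interval from 12 weeks to 2 releases most of the working capital, which is the arithmetic behind the two bullets above.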
There is no question about the predictive validity of the theory.

Application of TOD: A number of changes in perspective and thinking are required in order to apply TOD effectively in practice.
1. Management must recognize the difference between managing the processes that produce results vs. managing activities, which is more typical (e.g., learn horizontal management).
2. Management must recognize the difference between task time and chronological time, and refocus its perspective. For example, whereas businesses have traditionally concentrated on reducing task time, because labor is paid by the hour and productivity (more output per hour) has a direct bearing on profitability, the objective of TOD is to reduce chronological time in the business process from quotes to cash. Lead time is cut by eliminating or reducing delays in this process.
3. A delay can be caused by a physical constraint in the production system (TOC), but not all delays are the result of constraints. The large majority of delays occur between process steps, not within a step.
4. Management must recognize that it is in a service business first, and a manufacturing business second. Applying performance improvement programs and techniques from the mass production world will not bring about significant improvement in a job shop environment.
5. Management must recognize that faster service is more valuable than slower service in today's just-in-time, lean manufacturing world. Speed to market has value.
6. It is often necessary to reconceptualize the business in order to achieve better alignment with markets and customers, especially when there is more than one value stream or conversion process involved.
7. This requires management to learn to see their business in the context of the larger environment, as opposed to the more commonly held, restricted view of the shop in the context of the business.

Methods: There are three primary methods.
One involves perspective, insofar as this can be considered a 'method.' However, it is extremely important to view a particular business in a way that enables the overall conversion process to be seen in the proper context. This perspective is necessary for managing the process and for improving it at the same time. TOD also requires the ability to change perspectives and levels of abstraction as required. (We have a conceptual tool we use, Organization Improvement Strategies by Level of Impact, that serves this purpose.) A second dimension of the TOD methodology involves plain old process improvement. We have a number of tools and techniques we use to make organizational processes explicit, a necessary first step before redesigning and streamlining can take place. Again, the focus is primarily on chronological time, although task time and elimination of unnecessary activities are not ignored. TOD is not something that you do; it is something that you keep doing. Hence the third dimension involves the design and implementation of what we call a non-bureaucratic continuous improvement infrastructure. This operates on two levels: on the business process as a whole, via the weekly management report and associated activities; and on the production level, via closing the loop - comparing planned (estimated) to actual results in conjunction with a performance improvement team (PIT). Modifications are made to the organizational design, and new routines are established, so that performance data can be converted into information with meaning, and acted upon appropriately.

Measuring Results: In addition to broadening the system scope and focusing attention on delays, TOD uses two operational measures that are meaningful. One is similar to that used by TOC: throughput productivity, which is measured in sales dollars per payroll hour. The problem with using this metric by itself is that customer service can be sacrificed for throughput efficiency.
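As a rough illustration of the throughput-productivity measure just described (sales dollars per payroll hour), here is a minimal sketch; both input figures are hypothetical, not drawn from the article:

```python
# Hypothetical computation of TOD's throughput-productivity measure:
# sales dollars shipped per payroll hour worked. Figures are invented.

def throughput_productivity(sales_dollars, payroll_hours):
    """Sales dollars per payroll hour."""
    return sales_dollars / payroll_hours

weekly_sales = 250_000   # $ shipped this week (assumed)
weekly_hours = 2_000     # payroll hours worked this week (assumed)

print(throughput_productivity(weekly_sales, weekly_hours))  # 125.0
```

A rising number week over week suggests the shop is converting payroll hours into shipped sales more effectively, though, as the text notes, it must be read alongside lead-time and on-time-shipping measures.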
Therefore, it is also necessary to measure lead time (and on-time ship performance) to ensure improvement is happening in these central areas as well.

Summary: The Theory of Delays recognizes that job shops are service businesses, and is specific to these types of businesses. It is based on the proposition that cutting lead time, which equates to faster service, is the most effective strategy management can employ to bring about profitable growth in these types of businesses. Why? Because faster service is more valuable than slow service in today's lean, just-in-time manufacturing world. Identifying and eliminating delays in the total business process from quotes to cash accomplishes this objective. There is no question about the validity of the hypothesis. TOD is not something you do, such as a program; rather, it is something you keep doing, and so requires changes in organizational structure, management systems, concepts in use, and behavioral routines. TOD is not a way to study organizations as an academic theory; it is a way to manage them. And although TOD shares a kinship with lean manufacturing and TOC, it is distinctly applicable to job shops and so stands apart from these more general approaches. (See the white paper, The Theory of Delays: A Tool for Improving Performance and Profitability in Job Shops, for more on this, as well as Speed to Market: Lean Manufacturing for Job Shops, AMACOM, New York: 2001.)

+( Tony Rizzo : Real Cost Control +c Rizzo

Date: Fri, 20 Aug 1999 12:40:20 -0400 From: Tony Rizzo Subject: [cmsig] REAL COST CONTROL

IS COST CONTROL REALLY IN THE GENES OF YOUR BUSINESS?

************************************************
*                                              *
*  "Throughout this period you continue        *
*  to pay salaries and benefits for all.       *
*  It almost sounds patriotic."                *
*                                              *
************************************************

Tony Rizzo tocguy@lucent.com (908) 230-5348

Are you responsible for a new-product development business?
Take a close look at your organization, particularly if you find yourself in a rapidly growing market. Is your organization growing rapidly too? Chances are quite good that your managers are trying to hire resources as fast as they possibly can. Now, try to answer a deceptively simple question: why are your managers hiring so many people in so many areas? Very probably, they are convinced that (to use the words of a popular fiction hero, Alex Rogo) their existing resources "... are maxed out, ... don't have enough capacity." Your managers have concluded that they've reached the limit of capacity of the current employees. If your organization is to take advantage of its growing market, then it needs more people with which to develop a greater number of products. So your managers think. Unfortunately, massive hiring is likely to have only a marginal effect on the organization's overall ability to perform projects. Since the organization is undergoing a scale-up, the absolute best that your managers can expect is a percentage increase in capacity equal to the percentage increase in resources. This can be expected only under the best of circumstances, and only after a long transient period during which all your new employees try desperately to learn the ropes of their new organization, your business. Over the short term, the massive hiring is very likely to have a detrimental effect on the overall capacity of your organization to develop new products, because all those new hires need to be trained, housed, taught, indoctrinated, mentored, etc. From where will the effort come to bring the new hires up to speed? It will come from your existing employees and managers, of course. Therefore, your organization is very likely to see only a minor long-term benefit and a severe, near-term productivity penalty from the rapid hiring. But, throughout this period, you continue to pay salaries and benefits for all. It almost sounds patriotic.
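The argument above, that new hires drain existing staff before they contribute, can be put into a toy model. The ramp period, mentoring cost, and headcounts below are all invented assumptions for the sake of the sketch, not figures from the article:

```python
# Toy model of the hiring-spree argument: each new hire needs months of
# mentoring (which drains veterans) before contributing fully, so a big
# spree *reduces* near-term capacity. All parameters are invented.

def net_capacity(veterans, new_hires, month, ramp_months=9, mentor_cost=0.5):
    """Organizational capacity in veteran-equivalents during `month`
    (0-based) after the hiring spree."""
    if month >= ramp_months:
        return veterans + new_hires          # everyone is up to speed
    ramp = month / ramp_months               # new-hire productivity, 0..1
    drain = mentor_cost * (1 - ramp)         # mentoring load per new hire
    return veterans + new_hires * (ramp - drain)

# 100 veterans hire 40 people at once.
print(net_capacity(100, 40, 0))   # 80.0  - capacity drops below 100
print(net_capacity(100, 40, 3))   # ~100  - break-even months later
print(net_capacity(100, 40, 9))   # 140   - full benefit only after the lag
```

Under these assumed parameters the organization is worse off than before for months, and the full capacity gain arrives only after the transient, which is the lag the next paragraph describes.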
One might say that the capacity of your organization to develop new products lags the hiring spree by months, many months. With luck, the period of market growth exceeds the lag between the massive hiring and the effective use of the new resources. If not, then the massive hiring spree only sets you up for your next bashing from the Wall Street gang. Do you feel a bit like Sisyphus? You lose sleep trying to figure out how to push profitability up, and with your hard work the bottom line improves. A few months later you watch in pitiable pain as your profitability plunges to lamentable lows, due to the massive increase in operating expenses that accompanies the hiring. THE INFORMATION FACTORY Before we can solve this problem, we need to look at a product development organization from a different perspective. You'll see that the profitability of your business is determined not by the number of people in it but by the system model that your managers (and you) have imposed upon it. As you'll see shortly, the scale-up of such an organizational system does nothing to improve its overall efficiency, because the design of the system is usually unaffected by a scale-up. In fact, when a scale-up does have an effect on the system's overall efficiency, it is usually a damaging effect. In many ways, the product development process is simultaneously different from and similar to a manufacturing process. One obvious difference is in the material that is processed during product development. That material is information. At the input to your product development system, your marketing people provide the information-equivalent of raw material. They refer to this as a feature list. If they've done their jobs well, then the feature list begins to define a product that, when developed, solves many severe problems for your customers. But, of course, a feature list is not a product.
This information-equivalent of raw material, therefore, must be processed; the first processing step happens at the functional design station, where some of your most talented and creative people invent design concepts that have a reasonable probability of becoming successful products. At the output of this station, you can expect to find block diagrams of products. The pieces of the block diagrams indicate parts of the product that are expected to provide the primary functions. If your products consist of complex software packages, then the block diagrams represent the architecture of those complex software packages. The arrows, scribbles, boxes, and notes all help to define the various functions and the required, internal flows of data. The output of the functional design station becomes the input to the system engineering station. If you don't believe this, just ask any system engineer to give you a set of product requirements without the use of a functional diagram. It's not possible, because the functional diagram is the high-level definition of the system that is to become the product. The requirements for the downstream development work cannot be defined or even understood, without first defining the overall architecture of the product. Therefore, the logical flow of the information-equivalent of raw material is from the conceptual design station to the system engineering station. But, it is no longer raw material. Now, it is work in process (WIP). The output of the system engineering station, which usually takes the form of a requirements document, becomes the input to many work centers, most of which do their work in parallel. These are the work centers where your people develop the information packages that define the smaller components and the subassemblies that ultimately become your products. If your products consist primarily of software, then these work centers are where your people develop the many code modules.
At this point, the WIP in the system tends to be significant, because the system engineering station's output fans out to quite a few of these detailed design work centers. In other words, at this stage many use the output of few. The many developers who use the output of the few system engineers, of course, generate many other pieces of output. These may be designs for physical components, if your product is a physical product. Or, they may be many distinct pieces of software. Whatever they happen to be, they are still pieces of information. Also, they have to come together, before you have a product. This is where most product development organizations that develop products with a heavy software content have their demolition derbies. It is at these assembly points that many projects pile up, for months at a time and for a variety of reasons. After the assembly and the tests are completed, the information package that is the definition of your product begins its transition to manufacturing. There, the information package undergoes two transformations. First, it is transformed by your manufacturing engineers into a collection of highly complex instructions, equipment, and skilled workers, which we know as a manufacturing line. Second, it is transformed by that very manufacturing line into the most complete, most precise, most accurate information package that describes your product, i.e., it becomes the product itself. Necessarily, the process continues until your customers buy, use, and ultimately discard the product. But, for the sake of this discussion, let's focus our attention on the process steps that span from the initial feature list to the General Availability of the product (GA). This is where most product development organizations need help badly today. The important thing to remember is this. Although there may be many parallel steps in any product development process, the various phases are sequential. 
The information packages come into the product development system in a rather raw form, and they leave the system as the best possible descriptions of the products. They leave as the products themselves. Even when there appear to be loops in the process, the process is still sequential, because the information that describes the products is refined continually throughout the process. So, if the product development process does little more than continually refine information packages, then why is it such a difficult thing for us to manage? WHAT ARE WE REALLY MANAGING? Product development appears difficult to manage only because we continually try to manage it not as the system that it deserves to be but as a collection of individual projects, which inevitably compete with each other for resources. Think about your own product development organization. Do your managers make any attempt to create effective resource schedules that bridge multiple projects? If your organization is similar to most, then your managers have at best only a highly muddled view of the resource needs of all the projects. Sure! They may put together staffing profiles for the coming year. But a high-level staffing estimate doesn't even begin to provide the information that they really need, to pay attention to the logistical requirements of your product development system. Consequently, your managers aren't able to effect the proper logistics. What does this really mean? It means, for example, that when release 1.3 of your software development reaches the assembly and test phase, it crashes into release 1.2 and possibly into release 1.1 as well. In other words, it competes with the earlier releases, for the same resources. It means, also, that release 1.2 possibly is stuck in the assembly and test phase, because a couple of code modules didn't show up on time. 
But this is only the visible damage, which, like the proverbial tip of the iceberg, only hints at the underlying, massive damage that really exists. The real damage is due to multitasking. This is the widespread practice of time-sharing resources across multiple tasks and across multiple projects. Due to this widespread practice, each of your projects takes at least twice as long as it really needs to take. What's the conclusion? It is this. Your product development organization is a very inefficient system. If it were the family car, its operating efficiency would be rated in gallons per mile, not in miles per gallon. Your trips would be unpredictable and long. Most of the time, you'd miss your appointments. And many times you'd never reach your destination. Now, your managers want to scale up this highly inefficient system, by hiring hundreds of people. Should you let them? Before you approve even one additional request for a new employee, ask yourself this question. If the current system is so inefficient, will it really be more efficient when it is bigger? Will it really turn out more products per operating expense dollar, when it's twice its current size? Think about that badly running family car again. Would it run better if it were bigger? Not by a long shot! A BETTER ALTERNATIVE If you hire all those people for whom your managers are pleading, you'll be doing nothing more than scaling up the same, old, inefficient system. There is a better alternative. Improve your system. Tune it. Supercharge it. Make it run at peak overall efficiency, BEFORE you make it bigger. If you cause your managers to run your product development organization as a highly efficient system, before they even think about increasing its size, then you achieve the most effective form of cost control imaginable. You AVOID SPENDING MILLIONS UNNECESSARILY, forever! However, be careful. Don't confuse local efficiency with the efficiency of your overall system. 
They are not the same, and they don't happen under the same set of circumstances. In fact, they are mutually exclusive. If you want real efficiency for the total system, then you must let go of the notion of keeping everyone in your organization constantly busy. But this is an easy notion to let go, if you keep the goal of your business squarely in sight. How can you supercharge your product development system? Well, you could learn about Synchronous Product Development, also known as The Theory of Constraints' Multi-Project Management Method. But don't be unnecessarily hasty. It's easier if you wait for your competitors to show you how it works. +( Toyota Production System To: "CM SIG List" From: "Harvey M Opps/Amer/Auto" Date: Mon, 11 Sep 2000 11:19:07 -0400 My sense of it is that Deming felt it was profound knowledge because too few really got it. His direction was towards growing the business, or getting more Throughput. He did not focus on costs. Deming's process led to more capability = more market = more bottom line. This is what happened at Toyota. This is the Throughput world focus. The Cost world focus overrides the "T" world focus. Activity Based Costing goes hand in hand with the application of Lean: REDUCE COSTS. The reason they call it Lean is that it is not the Toyota Production System. The quality folks who bought into this Lean Paradigm didn't get it. The profound knowledge relates to what Dr. Ohno said at Toyota: "My hardest job was getting the cost accountants thrown out of the factory". The glass is really half full. The ones who don't get it think the glass is half empty. The reason it's "profound knowledge" is that it is so simple and obvious and all around us, and yet most of us don't get it. Fish don't realise they are in water; it's all around them. They only realise this once they are taken out of the water; then look at how they behave.
"Potter, Brian (James B.)" @lists.apics.org on 09/06/2000 04:43:54 PM Perhaps "awareness of the constraint" (and the influences non-constraints may have on the constraint) is the "profound knowledge" of which Deming spoke and the "profound knowledge" of which you wrote last week. How does a Deming-esque attack on quality differ from an attack on a market constraint created by inadequate quality? How does employing "awareness of the constraint" differ in effect from employing "profound knowledge?" -Original Message- From: Harvey M Opps/Amer/Auto [mailto:oppshm@meritorauto.com] Sent: Friday, September 01, 2000 10:48 AM Larry, it's quite the opposite of what you think. TPS was first and became very holistic quickly. When Dr. Ohno started putting it together, all his processes were really geared to getting more market. The Lean and JIT people who looked at TPS only saw the opportunity to be more efficient, that is, cut costs, eliminate waste, etc. If two companies, Toyota and Nissan, are doing the same thing, why are the results so different? Toyota has all the money in the bank and Nissan is suffering relative losses. With the direction now to break up the keiretsu system, Nissan is setting everyone free; Toyota is buying up whatever it can. Toyota will take control and make it better. Everyone else will outsource. TPS is a business philosophy; just like TOC, it is Throughput focused: get more revenue. The other guys are cost focused: drive down expenses. One half empty, one half full. TOC is not just DBR to find the waste (protective capacity) and get rid of it. It's using DBR to identify the variation that affects the bottom line. Minimise the variation, and shrink the cycle time of the process, and you increase the revenue per period of time. Maybe TPS requires profound knowledge; if you don't have the knowledge, turn it into Lean.
- Original Message - From: Christopher Mularoni Sent: Wednesday, August 30, 2000 5:59 PM No one fully understands the Toyota Production System, not even Toyota. Much of it is implicit, and Toyota often claims "the system is the people." Lean and JIT are attempts to emulate TPS. One important note: TPS is a path, not a place, and Toyota has been on the path for some time. +( TPR Performance Analyzer From: Craig Homann To: Hans Peter Staber Subject: RE: [cmsig] Re: TOC accounting Date: Mon, 8 Nov 1999 07:52:13 -0800 At Thru-Put Technologies we wrestled with this for a few years. We had hoped that a Thru-Put accounting company would come along and offer a package (we did not want to carry the banner of DBR for manufacturing and also TOC accounting). However, we got tired of waiting and concentrated on several key DBR/TOC measurements that help in the manufacturing environment. It is a new module for us, but our users have said that we hit a home run with it (it is a bolt-on product to our core DBR Scheduling and Planning System called Thru-Put Manufacturing). The new module is called Performance Analyzer. You may want to check it out on our web page, www.thru-put.com . It may give you some ideas. +( TRIZ Theory for the Resolution of Inventive Problems The Theory for the Resolution of Inventive Problems (in Russian, the word for problems begins with the letter Z). Some folks pronounce it TREES. But they are just blindly copying the Russians' pronunciation of the acronym, and Russian accents are thicker than fog in a swamp. For the rest of us, TRIZ works just fine. A good webpage: www-personal.engin.umich.edu/~gmazur/triz/ Founder: Genrich Altshuller, mechanical engineer, long-time employee of the Soviet patent office. Purpose: Identify, codify, improve, and teach the process by which truly creative, breakthrough inventions are achieved. By so doing, greatly accelerate the benefits that those inventions bring to society. 
For example, while Edison took a shotgun approach, with trial and error experimentation, TRIZ would have produced the same results with far fewer experiments. In a very real sense, the TRIZ tools achieve great focus and provide a rifleshot approach, rather than the usual grenade approach to invention. Components of TRIZ: S-Field analysis; 40 Inventive Principles; Technology Evolution; A.R.I.Z. (Algorithm for the Resolution of Inventive Problems). Categories of Inventive Solutions: Category 1) Something is changed slightly, such as more insulation is added, or a dimension is changed in a component, which operates more effectively with the change. Altshuller did not consider these to be true inventions worthy of patents. These are the most frequent improvements that we see. Category 2) Something is added to the system at hand. For example, we add wheels to luggage, and the luggage becomes easier to pull through the airport. Altshuller did not consider these to be true inventions worthy of patents. These also happen with considerable frequency, albeit not so frequently as those of category 1. Category 3) A component or subsystem is essentially changed, to achieve much higher performance. For example, the bias ply construction of tires was replaced by the radial ply construction. The outcome was much higher performance in both road holding and durability. Altshuller considered this and the later categories to be worthy of patents. These are achieved with less frequency. Category 4) A system is replaced entirely. For example, propellers were replaced by jet engines. Copper cables and electrical signals were replaced by optical fibers. These inventions happen with much less frequency. In fact, they are rare. Category 5) These are inventive solutions that cannot be achieved with the currently available science. They require the discovery of new science. Edison's phonograph was in this category. In my opinion, so was the discovery of soliton transmission in optical fibers. 
These inventions are extremely rare, they happen perhaps once in a lifetime. S-Field Analysis: This is a purposely simplistic language with which to describe the technical problem at hand. The intent is to remove the technical jargon and, with it, the psychological inertia that the technical jargon creates. In a very real sense, this is the mental equivalent of the mathematical transform. First, we transform the problem statement from the language of the technology into the simple, generic language of S-Field analysis. For example, we would speak in terms of the substance (S) rather than say "wheel" or "transistor." Once we have a usable problem statement in the S-Field language, we seek one or more of the generic solutions identified by Altshuller and his followers. These are the 40 inventive principles. With one or more of these generic solutions identified, we then transform the SOLUTION statement back to the language of the technology. Imagine being lost in a forest, rising 10,000 feet above your current dilemma, and then aiming directly at your solution destination, via the generic solution of choice. Language guides thinking, and Altshuller sought to exploit this. 40 Inventive Principles: These are the truly generic solutions that Altshuller and his followers identified, first from their study of the patent database of the Soviet Union and later from their study of the world's patent database. Some estimate that millions of patents were screened and hundreds of thousands were studied. From all these, the researchers found that virtually all inventions were achieved with only 40 generic solutions. These 40 inventive principles have been categorized. Example: (Apply a change in phase) plus (do it ahead of time). Problem: How do you fill little chocolate bottles with rum? Solution: First freeze the rum in little molds of the shape of the inside of the chocolate bottles (do it ahead of time). Then dip the frozen rum shapes in molten chocolate. 
Finally, let the frozen rum thaw (apply a change in phase). Example: How do you lower a one-ton transformer from a cement pedestal, when you don't have a crane available and won't have one available for weeks? Solution: (apply a change in phase). Build an adjacent pedestal from blocks of ice. Slide the transformer onto the pedestal of ice. Let the ice melt and lower the transformer to the ground. Neat, huh! Technology Evolution: Altshuller and his followers identified a few effective rules with which to predict the evolution of technical systems. With these, a product development organization can see the approaching end of the useful life of its current technology and know when it's time to begin developing the replacement technology. The organization can also use the rules to guide its development of the replacement technology. Example: Technical systems tend to decrease in size and increase in numbers. Specific example: Mainframe computers became smaller and were eventually replaced by a larger number of minicomputers. The minicomputers, too, were replaced by many more workstations. The workstations also were replaced, by desktop PCs. The PCs are also being replaced, by an even greater number of handheld computers. If AT&T's Computer Systems Division had paid attention to this in the early 1980s, that company would not have invested billions of dollars in the development of reliable minicomputers, just as the minicomputer market was about to be overtaken by Sun workstations. A.R.I.Z: This is a true, step-by-step algorithm with which to tackle inventive problems. It is updated with some regularity. It is worthy of study. But it is also a process that requires discipline. I've seen it used. To me it appeared rather tedious. But I also saw the value that it brought. Interesting point: Altshuller notes that truly inventive solutions always require that the inventor first identify a technical contradiction and finally a physical contradiction. Read that as a conflict. 
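The working pattern behind the 40 inventive principles (state the contradiction generically, look up the candidate generic solutions, translate back into the technology's own terms) can be sketched as a simple table lookup. The parameter pairs and principle assignments below are illustrative stand-ins, not Altshuller's actual contradiction matrix:

```python
# Toy sketch of a TRIZ contradiction-matrix lookup. The matrix entries and
# parameter names are illustrative stand-ins; Altshuller's real matrix maps
# pairs of 39 engineering parameters to subsets of the 40 principles.

PRINCIPLES = {
    1: "Segmentation",
    10: "Preliminary action (do it ahead of time)",
    35: "Parameter change (e.g. change of phase)",
    36: "Phase transition",
}

# (improving parameter, worsening parameter) -> suggested principle numbers
MATRIX = {
    ("ease of manufacture", "loss of substance"): [10, 35],
    ("strength", "weight"): [1, 36],
}

def suggest(improving, worsening):
    """Generic solutions suggested for a stated technical contradiction."""
    return [PRINCIPLES[n] for n in MATRIX.get((improving, worsening), [])]

# The rum-filled chocolate bottle pairs "do it ahead of time" with a phase change:
for principle in suggest("ease of manufacture", "loss of substance"):
    print(principle)
```

An unknown parameter pair simply returns no suggestions, which mirrors the practice of restating the problem until it matches a known contradiction.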
The invention emerges from the resolution of the physical contradiction, without any compromise (sound familiar?). The S-Field language was developed in part to be able to identify the physical contradictions more easily and effectively. Suggested Reading: "And Suddenly The Inventor Appeared." Genrich Altshuller. "Creativity As An Exact Science." Genrich Altshuller. A couple of other books have been written more recently. I just don't have their author and title information on hand. I'm sure that you'll find them easily enough if you search Amazon with the keyword TRIZ. Perhaps somebody else can offer those citations. ===================== From: "Richard E. Zultner" To: "CM SIG List" Subject: [cmsig] RE: What is "missing" from ToC? Date: Thu, 23 Mar 2000 16:16:27 -0500 A good example of what is "missing" from ToC is TRIZ. Now, let me add some clarity! ToC can certainly address physical conflicts, just as it addresses organizational conflicts. But with ToC you are starting "bare-handed" to break the conflict, and you are limited by your domain knowledge (or lack thereof). (If you don't have a scientific or engineering background, good luck!) So if you want to design a better pizza box, where the box needs to support a floppy pizza but not conduct heat away from the pizza (so it stays warm), how do you break this physical conflict? With TRIZ (let's take "Classical TRIZ" first -- or TRIZ of the 1920's) you have a set of systematic ways for breaking all physical conflicts (as actually demonstrated in over 2 million patents, taken from a wider set of engineering disciplines than any one engineer is likely to possess). And for software engineers like me, TRIZ makes me dangerous in solving physical conflicts -- and with only two days of instruction! (The pizza box example is a favorite TRIZ case study, and even in a half-day workshop you too will have multiple inventive design solutions.) 
So (even classical) TRIZ is better than (modern) ToC for breaking physical conflicts -- which is what Eli said in Atlanta at the Facilitator's seminar. This is an area where the TRIZ folks have been working for over 70 years, so it should not be too surprising that they have a better focused solution than ToC's general solution in the domain they have concentrated on: physical conflicts. If I were any kind of hardware designer, I think TRIZ would be of more immediate (and personal) benefit than ToC... and certainly much easier to implement (no organizational paradigm shifts required). Comments? =================================================== From: "Bryan Bloom" To: "CM SIG List" Subject: [cmsig] RE: TRIZ Date: Tue, 4 Apr 2000 22:13:26 -0700 For TRIZ expertise with real-world examples you should contact Ellen Domb at 909-949-0857 or email her at her website www.triz-journal.com . Alternatively, contact Bill Dettmer at 360-683-7034 or by email at gsi@goalsys.com. =================================================== Date: Wed, 24 Jan 2001 08:56:37 -0500 From: Tony Rizzo Here are the two with which you should start, in my opinion: And Suddenly The Inventor Appeared; Genrich Altshuller Creativity As An Exact Science; Genrich Altshuller. The second book is a very tough read. But it is worth the effort, as it gives you the perspective of the founder of the body of knowledge. --- From: "Stratton, Roy" Date: Fri, 26 Jan 2001 09:55:59 -0000 I have been looking at the links between TOC and TRIZ over the last two years, and there are some very interesting and fundamental comparisons and practical outworkings. A paper on some of these links, which a TRIZ colleague and I wrote, can be found in the TRIZ Journal, April 2000, with earlier TRIZ Journal references. The journal is monthly and open access on the Internet. www.triz-journal.com I have other papers linking these concepts with manufacturing strategy and lean/agile if you are interested. 
--- Date: Wed, 14 Feb 2001 14:09:55 -0500 From: "MARK FOUNTAIN" Below is an expanded version of the TRIZ breakout of the clouds from a couple of people's comments:

CONTRADICTION TABLE - New Products
Most Desired: Profitable Co. (PUF)   Most Undesired: Not Sleep (PHF)
Cause of Desired: Keep Employees (UF4)   Cause of Undesired: Not Tell (HF4)
Second Desired: No Rumors (UF1)   Second Undesired: Shooting (HF1)
Third Desired: Protect Co. (UF3)   Third Undesired: Protect Empl. (HF3)
Fourth Desired: E (UF2)   Fourth Undesired: Z (HF2)

1a. Find a way to eliminate, reduce or prevent Not Sleep under the condition of Profitable Co.
1b. Find a way to benefit from Not Sleep
2a. Find an alternative way to provide Profitable Co. that eliminates, reduces or prevents Shooting but does not cause Not Sleep
2b. Find a way to enhance Keep Employees
2c. Find a way to resolve the contradiction: Keep Employees eliminates Shooting and should not cause Not Sleep
3a. Find a way to eliminate, reduce or prevent Shooting under the condition of Protect Co. and E that does not require Not Sleep
3b. Find a way to benefit from Shooting
4a. Find an alternative way to provide Protect Co.
4b. Find a way to enhance Protect Co.
5a. Find an alternative way to provide E that provides or enhances Profitable Co. and does not cause Shooting
5b. Find a way to enhance E
5c. Find a way to resolve the contradiction: E should provide Profitable Co. and should not cause Shooting
6a. Find an alternative way to provide Profitable Co. that does not require Keep Employees and E
6b. Find a way to enhance Profitable Co.
7a. Find an alternative way to provide No Rumors that provides or enhances Profitable Co.
7b. Find a way to enhance No Rumors

>>> "Danley, John" 02/13/01 05:49PM >>> Along the line of the current discussions on layoffs, as well as a personal conflict I found myself in, I threw together this cloud. 
It summarizes the conflict we sometimes find ourselves facing where we are torn between obligations to our superiors (read: The Company) and our subordinates. I'm sure it could be formulated in a variety of ways, but I think the essence remains the same. My questions to the list are: 1) What are your perceived underlying assumptions behind the arrows? 2) What policy changes/attitude changes would be necessary to break the cloud? I look forward to your responses.

Manage well <-- Take care of employees <-- Inform employees of potential layoffs
Manage well <-- Protect the company <-- Do not inform employees of potential layoffs

+( variation is the enemy From: "Philip Bakker" Date: Sun, 4 Mar 2001 13:32:07 +0100 About a month ago I read the following book by Donald J. Wheeler: "Understanding Variation - The Key to Managing Chaos" (2000). In this book of about 151 pages Wheeler writes things that make very much sense to me. In bullets: - The first principle of understanding data: No data have meaning apart from their context. - The second principle of understanding data: While every data set contains noise, some data sets may contain signals. Therefore, before you can detect a signal within any given data set, you must first filter out the noise. He also pays attention to many of the principles laid out by Dr. Deming (exceptional variation and routine variation). He states somewhere that, according to Brian Joiner, people are left with three ways to proceed when there is no or little awareness of variation: 1) They can work to improve the system 2) They can distort the system 3) They can distort the data. On page 144 and beyond he discusses the traditional definition of trouble and another way of getting improvements beyond "If it ain't broke, don't fix it". Wheeler discusses how the traditional wisdom about process improvement is: --> Ignore (or tweak) the good predictable processes, and reengineer the bad predictable processes. 
He further discusses new consequences relating to process improvement which are exactly the opposite: --> Reengineer the good processes, and tweak the bad processes. (In this definition, 'good' means predictable and 'bad' means unpredictable. This way of improvement is the right one for 'Process trouble' and 'Double trouble'.) Wheeler distinguishes 4 possibilities: 1) No trouble: Predictable process with little or no nonconforming product 2) Product trouble: Predictable process with too much nonconforming product 3) Process trouble: Unpredictable process with little or no nonconforming product 4) Double trouble: Unpredictable process with too much nonconforming product. If you have a process that is characterized by 'Process trouble' or 'Double trouble', then you have a process that is unpredictable. Regardless of the suitability of process outcome, the unpredictability of the process will undermine all the predictions and all the process modifications that you may try to make. Moreover, the process is not operating up to its full potential. Wheeler mentions: "When a process displays unpredictable behavior, you can most easily improve the process and process outcomes by identifying the assignable causes of unpredictable variation and removing their effects from your process." From: "Christopher Mularoni" Date: Sat, 03 Mar 2001 19:41:31 -0500 I am compelled to add to Brian's comments (in my simple and rambling way). Failure to recognize the existence of variation is the underlying problem. Accepting numbers as accurate just because they are given to several decimal places, accepting budget numbers as given and accurate, accepting time estimates as accurate. My personal favorite: tweaking a process till it produces a few good parts and expecting it to continue to produce good parts. (You have to realize that the few good parts could have been due to variation, and even if not, that variation is likely to result in bad parts if the process or specs are not robust.) 
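Wheeler's advice to filter out the noise before looking for signals is commonly operationalized with a process behavior (XmR) chart, whose natural process limits are the mean plus or minus 2.66 times the average moving range. A minimal sketch, with made-up measurements:

```python
# Process behavior (XmR) chart limits in the spirit of Wheeler's book:
# points inside the natural process limits are routine variation (noise);
# points outside are signals worth hunting an assignable cause for.
# All measurement values below are made up for illustration.

def xmr_limits(baseline):
    """Natural process limits: mean +/- 2.66 * average moving range."""
    mean = sum(baseline) / len(baseline)
    moving_ranges = [abs(a - b) for a, b in zip(baseline, baseline[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

baseline = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3, 10.0, 10.4]  # predictable period
lo, hi = xmr_limits(baseline)

for x in [10.2, 12.5]:  # new observations to judge against the limits
    verdict = "signal" if (x < lo or x > hi) else "noise"
    print(f"{x}: {verdict}")
```

With this baseline the limits come out near 9.09 and 11.21, so 10.2 is noise and 12.5 is a signal; reacting to every point inside the limits is exactly the tampering Wheeler warns against.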
--- From: "Stefan van Aalst" To: "Constraints Management SIG" Subject: RE: [cmsig] Where is the Constraint Role? Date: Thu, 23 Dec 2004 05:58:09 +0100 MIME-Version: 1.0 Content-Type: text/plain; charset="US-ASCII" Content-Transfer-Encoding: 7bit X-Mailer: Microsoft Office Outlook, Build 11.0.6353 Thread-Index: AcToeF+X6awpyTZBSyawcMQlYu7AmAAADXpAAAxlJxA= X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2800.1441 In-Reply-To: X-AntiVirus: checked by Vexira MailArmor (version: 2.0.1.16; VAE: 6.29.0.5; VDF: 6.29.0.31; host: postbode01.zonnet.nl) List-Unsubscribe: Deming made an interesting observation: - focus on reducing defects results in an equal decline in jobs. For some that jump to the wrong conclusion, Deming wasn't against reducing defects and variation. If this causes some kind of paradox, see what assumption you make that Deming didn't make. He further advises how to get out of this spiral. Deming has a strong believe were the weakness of most not excelling organizations lay ...TOC calls this the constraint, but what's in a name :-) There is an interesting German Baron, known for his great achievements, Baron Von Munchausen. One of his amazing achievements was that once when he was stuck in a swamp, he freed himself by pulling himself by his own hairs out of the swamp. Impossible?? Perhaps if you're in a real swamp, but if you're in a swamp of economic downturn it doesn't need to be ...but this goes right back to where Deming believe is the weakness of most not excelling organizations. What can TOC do? It can help to pinpoint the problem, it can help to evaluate the solutions, it can help to ease the implementation ...what it can't do is MAKE people change. A huge weakness ...and so is 6-sigma, lean and all those other fancy things ...none of them kan MAKE people to give commitment in behavior/action. 
So the weakness you address isn't a TOC weakness, for TOC never even claimed to be able to MAKE people change (in contrast to some other things out there); all it claims is that if people will make the effort, using TOC will give the biggest return on investment (not just the financial part) because it gives a specific focus. This is in strong contrast to a lot of other 'quality' movements that try to improve every single part ...including those parts that don't give any return now. --- +( VAT product/manufacturing structure VAT as it relates to TOC describes the "Logical Product Structure Analysis" and is the abbreviation for three generic types of product/manufacturing layout: T-shape: the most common structure; you have raw materials and semifinished components (platforms) which run into a broad diversity of finished goods. V-shape: the second most common structure; it represents a fixed-flow product structure (identical routings for all products of a family) and is typical for process industries. Goldratt explained it in the satellite sessions based on the steel mill example. A-shape: the opposite of the V-shape; a number of raw materials and semifinished goods assemble into a few finished goods. Depending on the type of your product/manufacturing structure, your bottlenecks are more likely to be upstream or downstream. +( viable vision From: Caspari@aol.com Date: Sun, 18 Dec 2005 06:42:41 EST Subject: Re: [tocleaders] Paradox of Systemism In a message dated 12/15/05 10:32:35 AM Eastern Standard Time, you wrote: If the company has implemented Viable Vision and there is an internal constraint in operations, a necessary condition of achieving a Viable Vision is to not have an internal constraint. You have to elevate the constraint until operations is no longer the constraint. In VV this has to be broken to a significant extent, meaning you have to have a great deal of protective capacity in order to significantly grow sales. 
We did not realize that not having an internal physical constraint was a necessary condition for a successful Viable Vision (VV) implementation. As we came near to the end of our book, Management Dynamics: Merging Constraints Accounting to Drive Improvement, we wrote (on page 267): <> We would agree that substantial elevations will be needed, but they should be accomplished while being careful to maintain the constraint at an internal location. --- From: "Nicos Leon" Date: Mon, 23 Jan 2006 13:16:59 +0200 Subject: [tocleaders] VV projects follow up Back in June 2004, I received an email from Goldratt UK with the subject "Dr Goldratt's Viable Vision in the UK", in which it was mentioned that: Results to date: In the USA a PCB manufacturer with current turnover of $65M has signed a four-year contract worth $15M to achieve the target of current turnover as profit within the time period. In India TATA Steel has signed a contract worth $25M to achieve the same result and, also in India, a much smaller company (SME) has signed up for a $3M contract. In June 2005 I received through a Gmail alert the announcement of TATA's launch of "Aspire Unlimited", which was the name they gave to the VV project. It was launched in a ceremony held on June 2, 2005 at the Tata Auditorium. You can read the article here (http://www.telegraphindia.com/1050603/asp/jamshedpur/story_4821055.asp) and here http://www.tatasteel.com/newsroom/press247.asp Today Bloomberg announced a 15% drop in Tata Steel's results for the third quarter, the first drop Tata has presented in 18 quarters. Reading the article, one can understand that Tata is in tough competition with China. You can read the relevant article here (http://www.bloomberg.com/apps/news?pid=10000080&sid=axEFhlWugbpc&refer=asia# ). I suppose that the VV project is in full development. I am curious to know how the project is going and whether these results are a consequence of the first changes introduced by VV or not. 
How are all the other VV projects in progress going? I have heard some rumors that some projects have been stopped. 2006-07-25 +( WHAT TYPE IS YOUR ORGANIZATION? +c Rizzo Date: Mon, 11 Sep 2000 00:52:28 -0400 From: Tony Rizzo WHAT TYPE IS YOUR ORGANIZATION? Tony Rizzo The Product Development Institute, Inc. TOCguy@PDInstitute.com (908) 230-5348 +-+-+-+-+-+-+-+-+-+-+-+-+ Did you ever try to drink from a fire hose? Your lips get bloodied; your clothes get all wet; and after all that, you're still thirsty. +-+-+-+-+-+-+-+-+-+-+-+-+ Is your organization a Type-A or a Type-B? What's the difference? Is there a difference? There is. Let's begin with the Information Cycle model. Recall. The information cycle begins with customers, who provide the raw information input to the cycle. This raw input goes through a marketing process, which generates a feature list and a set of project deliverables, i.e., product and project descriptions that are intended to satisfy the customer needs described implicitly and explicitly by the raw information inputs. Then, the feature list is converted to a project plan, which is put through the design process. This process creates first a prototype and then a new or modified production line, which builds the new product. Finally, the product goes through a distribution process, which delivers it to customers. Customers, in turn, begin bitching and moaning about their next set of needs, and the cycle continues. Here's the diagram:

  Customers --> Define --> Design --> Make --> Sell --> Customers
                                       ^
                                       |
                           Raw Mat. & Pur. Comp.

The diagram is useful, because with it we can trace the entire cycle of information that ultimately leads to satisfied, paying customers. Information starts at the customers, with their expressions of problems. 
It moves through the "Define" process, which generates a feature list and a set of project deliverables. This contains the same "customer needs" information. The information then goes through the "Design" process and it becomes embedded into the new production line. The line, in turn, imparts its customer needs information onto the raw materials and purchased components that pass through it. Finally, the finished product is sold and delivered to customers. This is a cycle, because customers are continually defining new problems and new needs. The cycle goes on indefinitely. But, this cycle does not describe all new product introduction organizations. It describes only a subset, which we might call Type-B organizations, which brings us to a question: What are Type-A organizations? Consider this. Many organizations compete for development projects. For these organizations, the "Define" step is also the "Sell" step. Customers are sold not just the finished product but also the project with which that product will be developed. Virtually all government contractors are of this type, i.e., they are Type-A organizations. Type-B organizations don't operate this way. For Type-B organizations, there's no up-front contract. No customer is sold the development project's completion date as well as the finished product. Thus, the real difference between Type-A's and Type-B's is that IN TYPE-A ORGANIZATIONS, THE SALE HAPPENS BEFORE THE DEVELOPMENT PROJECT. Whereas, in Type-B organizations, the sale happens after the development project. For people in Type-B organizations, life is really quite a bit easier. This is not so, for those in Type-A organizations. For Type-A organizations, the people who do the product definition also do the selling. 
These people are almost always measured and rewarded on the basis of how much revenue they sign on, and the measurement is recorded at the time of signing, not at the time that the yet-to-be-developed product is built and delivered to the customer. What's the effect of this measurement on a Type-A organization? It is this. The "Define" process of a Type-A organization creates a fire hose of projects, and it aims that fire hose right down the throat of the design organization. Did you ever try to drink from a fire hose? Your lips get bloodied; your clothes get all wet; and after all that, you're still thirsty. The outcome of having a fire hose of projects aimed straight down the throat of the design organization is massive multitasking, of the most damaging kind. Is the distinction between Type-A organizations and Type-B organizations important, if, say, we are trying to implement the TOC Multi-Project Management Method (TOC-MPM)? Think about this. Most of the failed TOC-MPM implementations have been with Type-A organizations. They failed, because the fire hose was never throttled back. No form of flow control was ever put in place in those organizations. The massively damaging multitasking, as a consequence, never stopped. Does this mean that a Type-A organization can never implement TOC-MPM? It does not. It means, however, that the implementation must be done with great care, step by step, and with the buy-in of the right people at every little step. The pitfalls are many.
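The cost of the fire hose can be seen in a back-of-the-envelope comparison of sequential execution versus full multitasking. A sketch with made-up numbers (three projects, nine months of work each; task-switching overhead is ignored, which only understates the damage):

```python
# Sketch of the multitasking penalty Rizzo describes. Three projects, each
# needing 9 months of work. Durations are illustrative.

def sequential_finish_times(durations):
    """Finish each project completely before starting the next."""
    finishes, t = [], 0
    for d in durations:
        t += d
        finishes.append(t)
    return finishes

def multitasked_finish_times(durations):
    """Round-robin one month at a time across all open projects
    (full multitasking, with zero switching overhead assumed)."""
    remaining = list(durations)
    finishes = [0] * len(durations)
    t = 0
    while any(r > 0 for r in remaining):
        for i, r in enumerate(remaining):
            if r > 0:
                t += 1
                remaining[i] -= 1
                if remaining[i] == 0:
                    finishes[i] = t
    return finishes

print(sequential_finish_times([9, 9, 9]))   # [9, 18, 27]
print(multitasked_finish_times([9, 9, 9]))  # [25, 26, 27]
```

Same total work either way, but full multitasking pushes the first delivery from month 9 to month 25, which is why throttling the fire hose (flow control) is the first step of a TOC-MPM implementation.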