How to identify the constraint of a system? Part 3

Inventories and Work In Progress (WIP) can be helpful clues to visually identify the bottleneck or constraint in a process, but they can also be insufficient or even misleading as I explained in part 2 of this series.

It is often also necessary to study material and parts routes to really understand where they get stuck and delayed. Chances are that the missing or delayed items are waiting in a queue in front of the constraint. Or have been stolen by another process…

In the search for the system’s constraint, experienced practitioners can somewhat “cut corners” by first identifying the organization’s typology among the 3 generic ones: V, A or T. Each category has a specific structure and a particular set of problems. Being aware of the specific problems and possible remedies for each of the V, A and T categories may speed up the identification of the constraint and improvement of Throughput.

V, A & T in a nutshell

Umble and Srikanth, in their “Synchronous Manufacturing: Principles for World Class Excellence”, published in 1990 by Spectrum Pub Co (and still sold today), propose 3 categories of plants based on their “dominant resource/product interactions”. Those 3 categories are called V, A and T.

V, A & T plants

Each letter stands for a specific category of organization (factories, in Umble’s and Srikanth’s book) where the raw materials are supplied mainly at the bottom of the letter and the final products delivered at the top of the letter.

V-plants

V-type plants process one or a few raw materials into a large variety of products. V-plants have divergence points where a single material or product is transformed into several distinct products. V-plants are usually highly specialized and use capital-intensive equipment.

V-plant

You may imagine a furniture factory transforming logs of wood into various types of furniture, a dairy plant transforming milk into various dairy products, or a steel mill supplying a large variety of steel products, etc.

The common problems in V-plants are misallocation of material and/or overproduction.

As products, once transformed, cannot be un-made (it is impossible to un-cook a dish to regain its ingredients), misallocated material extends the lead time for the expected product until a new batch is produced.

The misallocated products wait somewhere in the process until a future order requires them, or are processed into finished goods and sit in finished goods inventory.

The transformation process usually relies on huge, not very flexible equipment that runs more efficiently with big batches. Going for local optimization (Economic Order Quantity (EOQ), for example) regardless of real orders leads to long lead times and overproduction.

V-plants often carry large inventories and deliver poor customer service, especially with regard to On-Time Delivery. A commonly heard complaint is “so many shortages despite so much inventory”.

Misallocations and overproduction before the bottleneck burden the bottleneck even more. Sales, wanting to serve their upset customers, often force unplanned production changes, which leads to chaos in planning and amplifies the delays (and the mess).

Identification of the bottleneck should be possible visually: Work In Progress piles up before the bottleneck while process steps after the bottleneck sit idle, waiting for material to process.
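A minimal sketch of this visual check, with hypothetical step names and WIP counts: the prime bottleneck suspect is simply the step with the largest queue waiting in front of it.

```python
# Hypothetical WIP counts observed in front of each process step
# during a walk along the flow (all names and figures assumed).
wip_before = {
    "cutting": 40,
    "milling": 35,
    "heat_treat": 480,   # large pile waiting here
    "grinding": 5,       # nearly starved downstream
    "packing": 2,
}

# The step with the largest queue in front of it is the prime suspect.
suspect = max(wip_before, key=wip_before.get)
print(suspect)
```

Such a snapshot only points to a suspect; it still has to be confirmed as the constraint.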

Note: while the bottleneck is probably a physical resource in a transformation process, the constraint might be a policy, like imposing minimum batch sizes for instance.

A-plants

A-plants use a large variety of (purchased) materials, parts and components processed in distinct streams until sub-assembly or final assembly, making one or a few products: shipbuilding or motor manufacturing, for example.

A-plant

Subassembly or final assembly is often waiting for parts or subassemblies because ensuring the synchronization of all parts necessary for assembly is difficult. Expediters are sent hunting down the missing parts.

Expediting is likely to disrupt the schedule on a machine, a production line, etc. If the wanted part is pushed through the process, it is at the expense of other parts, which will in turn be late. The pattern repeats as the chaos gets worse.

In order to keep subassembly and assembly busy, the plan is changed according to the available kits. Therefore some orders are completed ahead of time while others are delayed.

The search for the bottleneck(s) starts from subassembly or final assembly, based on an analysis of late and early completions. Parts and subassemblies used in late as well as in early assemblies are not going through the bottleneck. Only parts that are constantly late will lead to the bottleneck. For those, follow the upstream trail until you find the faulty resource where the queue accumulates.
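A minimal sketch of this filtering, with hypothetical part and order identifiers: parts that are late in some assemblies but never early are the ones whose upstream trail leads to the bottleneck.

```python
# For each part, the (hypothetical) assembly orders in which it arrived
# late and those in which it arrived early.
late_in = {
    "P-101": {"ORD1", "ORD2", "ORD3"},   # always late
    "P-202": {"ORD1", "ORD3"},           # sometimes late...
    "P-303": set(),                      # never late
}
early_in = {
    "P-101": set(),
    "P-202": {"ORD2"},    # ...but also early: not on the bottleneck route
    "P-303": {"ORD1"},
}

# Parts late somewhere but never early point toward the bottleneck.
suspects = [p for p in late_in if late_in[p] and not early_in[p]]
print(suspects)
```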

T-plants

T-type factories have a relatively common base, usually fabrication or assembly of subassemblies, and a late customization / variant assembly ending in a large array of finished goods. Subassemblies are made to stock, based on forecasts, while final assembly is made to order and, to a lesser extent, to stock. In the latter case it is to keep the system busy even when there are not enough orders; assembly is made to stock for the top-selling models.

T-plant

Computers assembled on demand, for instance, use a limited number of components, but their combinations allow a large choice of final goods.

In order to swiftly respond to demand, final assembly generally has excess capacity, therefore the bottleneck is more likely to be found in the lower part – subassemblies – of the T.

The top and bottom of T-plants are connected via inventories acting as synchronization buffers. Identification of the bottleneck(s) starts at final assembly with the list of shortages and delayed products. The components or subassemblies with chronic shortages or long delays point to a specific process. The faulty process must then be inspected until the bottleneck is found.

Yet bear in mind that assembly cells, lines or shops may “steal” necessary parts or components from others, or “cannibalize”, i.e. remove parts or subsystems from some products to complete the assembly of others. If this happens, following the trail of missing and delayed parts upstream can get tricky.

Combinations of V, A and T plants

V, A & T-plants are basic building blocks that can also be combined into more sophisticated categories, for instance an A base with a T on top, typical for consumer electronics. Yet the symptoms and remedies remain the same in each V, A & T category, combined or not.

Wrapping up

As we have seen so far along the 3 parts of this series, the search for the constraint in a system is more of an investigation, testing several assumptions and checking facts before closing in on the culprit.

There are some general rules investigators can follow, like looking for large inventories in front of a resource while the downstream process is starved of parts or material, but it is not always that obvious.

Knowledge about V, A & T-plants can also help, though it will not spare you the pain of the investigation. And we are still not done with the search for the constraint! There is more to learn in part 4!

Readers may be somewhat puzzled by my alternating use of the terms bottleneck and constraint, despite the clear distinction to be made between the two. This is because, at the investigation stage, it is not clear whether the bottleneck is really the system’s constraint. Therefore, once identified, the critical resource is first qualified as a bottleneck, and further investigation will decide whether it qualifies as the system constraint or not.

Bibliography about V, A & T-plants

For more information about V, A and T plants:

  • Try a query on “VAT plants” on the Internet
  • “Synchronous Manufacturing: Principles for World Class Excellence”, Umble and Srikanth, Spectrum Pub Co
  • “Theory of Constraints Handbook”, Cox and Schleier, McGraw-Hill

View Christian HOHMANN's profile on LinkedIn


How to identify the constraint of a system? Part 2

When trying to find the system’s constraint, why not simply ask the middle management? At least when Theory of Constraints was young, our world spinning slower and processes simpler, the foremen usually had a common-sense understanding of their bottleneck. They knew which machine to look after, and which part of their process to give more attention.

If you haven’t read part 1, catch up: How to identify the constraint of a system? Part 1

Even if they may not have discovered all 9 rules about managing a bottleneck by themselves, they intuitively applied some of them.

Nowadays the picture is blurred by complexity and frequent changes. Nevertheless, asking middle management can still give precious hints about bottlenecks. Ask what the most troublesome spot of the process is and whom or what the downstream process steps are usually waiting for. Chances are that many managers will point in the same direction.

This works for project management or (software) development as well. In those cases I would also ask who the superstar (programmer) is, the one everyone wants on his/her project and in every meeting. Chances are that that person turned into a constraint without even noticing it.

Now, while this tip can be useful, refrain from rushing to conclusions from these answers and check for yourself. Many managers may tell you the same thing just because they all heard the same complaints in a meeting. A meeting where all managers meet…

Go to the gemba, look for Work In Progress

Let’s start the shop floor investigation, searching for the bottleneck as described in the early textbooks.

Go to the gemba, follow the flow (which is easier and somewhat more natural than walking upstream, but the choice of direction is yours) and visually assess the work in progress (WIP) and inventories in front of the machines or work cells.

Usually the highest piles of inventory or work in progress sit in front of the bottleneck, and the following downstream process steps are starved of material or parts.

Yet if it were that easy, it would be no fun. The above works well in simple processes that are neat and tidy. Most often, inventories are scattered wherever it is possible to store something, FIFO (First In, First Out) rules are violated, and downstream processes, incentivized on productivity, work on whatever they can for the sake of good-looking KPIs. Finding the bottleneck in such chaos needs more than a visual check.

It is also possible that excess inventory and work in progress are temporarily stored in remote warehouses and thus not in plain sight.

Another pitfall is confusing work waves, which periodically release parts or information, with real bottlenecks. An example could be a slow process which is not a true bottleneck but needs more than the regular shifts to catch up with its workload.

Imagine a slow machine (sM) amidst a process. The process upstream (P1) works 8 hours with best possible productivity and WIP piles up in front of sM. The downstream process (P2) works at best possible productivity and has some WIP in front of it.

At the end of the shift P1 and P2 are shut down. They both fulfilled their daily scheduled work. sM goes on for a second shift, processing the WIP in front of it.

By the end of the second shift, there is no more WIP (or very little) in front of sM, and what was waiting in front of sM is now waiting after it, in front of P2. This is the picture early the next morning:

An observer, depending on when he or she looked at the process, could have come to wrong conclusions about a bottleneck. Early in the morning it looks like the first machine of P2 is holding back the flow. In mid-afternoon sM is the culprit, when in reality there is no true bottleneck: sM has enough capacity provided it can work more than one shift.
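The two-shift picture can be sketched numerically; all capacities below are assumed for illustration, with sM running at half the rate of P1 and P2.

```python
# Parts per 8-hour shift (assumed figures).
P1_SHIFT = 800   # upstream process P1
SM_SHIFT = 400   # slow machine sM
P2_SHIFT = 800   # downstream process P2

wip_sm, wip_p2 = 0, 0   # queues in front of sM and in front of P2

# Day shift: P1, sM and P2 all run.
wip_sm += P1_SHIFT                # P1 releases its daily output
done_sm = min(wip_sm, SM_SHIFT)   # sM works off what it can
wip_sm -= done_sm
wip_p2 += done_sm
wip_p2 -= min(wip_p2, P2_SHIFT)   # P2 consumes everything sM fed it

# Night shift: only sM keeps running, processing its leftover queue.
done_sm = min(wip_sm, SM_SHIFT)
wip_sm -= done_sm
wip_p2 += done_sm

print(wip_sm, wip_p2)   # early morning: the pile now sits in front of P2
```

The queue has simply migrated overnight from one side of sM to the other, which is why a single snapshot can mislead.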

Some would mention wandering bottlenecks, jumping from one place to another. This is something I will elaborate on in a separate post. Or series…

We are not done yet with our bottleneck safari. To learn more, proceed to part 3.



Continuous Improvement: Prevent frustrations related to the S curve

When implementing solutions, as in continuous improvement, project managers had better take care of the frustrations related to the S curve.

S curve

The “S curve” is the shape of the performance curve over time. It describes a latency (t1) before the performance p1 takes off after the improvements have been implemented, then a more or less steep rise before stabilization at the new level of performance p2.

This latency between the first improvements and the moment their effects become noticeable has several possible causes and can pose different problems.
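The shape can be sketched with a logistic function; real performance curves are rarely exactly logistic, and all figures below are assumed for illustration.

```python
import math

def s_curve(t, p1, p2, t_mid, k):
    """Logistic sketch of the S curve: performance stays near the old
    level p1 during the latency, rises around t_mid, then stabilizes
    at the new level p2. t is time, k sets the steepness of the rise."""
    return p1 + (p2 - p1) / (1 + math.exp(-k * (t - t_mid)))

# Baseline 60, target 90, midpoint at week 6 (assumed numbers).
print(round(s_curve(0, 60, 90, 6, 1.2), 1))    # still close to the baseline
print(round(s_curve(12, 60, 90, 6, 1.2), 1))   # close to the new plateau
```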


The most trivial reason for a lack of significant effects after a while is that the solutions put in place do not produce the expected effects. It is therefore advisable to estimate in advance, at the moment improvements are implemented, when their effects should become noticeable, in order to raise an alert when the estimated time has elapsed.

Another trivial reason is a long cycle time. This may be the case with lengthy transformation processes, long processing times or latency inherent to the process before the success of the operation can be judged. Typically, these are technical lead times, time required for chemical or biological transformation processes, or the “responsiveness” of third-party organizations, etc.


Let’s assume that in the process below the improvement is a new setting on machine R1 and the effect can only be measured or assessed at the entrance of machine R6; it will take all the time needed for a sample to travel the whole process from R1 to R6, including the buffer between R2 and R3.

Industrial Process

Similar cases can be found in chemistry or biology when reactions need some time to happen, in curing or drying processes, etc.

The delay may be due to the improvement process itself, which may require several steps such as initial training, implementation of the first improvements, measurement of their effects and time to analyze them.


Another reason, possibly coupled with the previous one, is Little’s law. It states that the lead time through an inventory or queue of work in progress (WIP) is equal to the value of this inventory divided by the average consumption. This means that if the improvement occurs at a point decoupled from the measurement point of its effectiveness by inventory or WIP, the effect must first propagate through the queue before it can be detected, everything else being equal.
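This propagation delay can be sketched directly from Little’s law (all figures assumed for illustration):

```python
# Little's law: lead time = inventory (or WIP) / average consumption.
wip_between = 500   # parts queued between the improved step and the measurement point
consumption = 100   # parts consumed per day

propagation_delay_days = wip_between / consumption
print(propagation_delay_days)   # days before the improvement becomes measurable
```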

Please note that this delayed improvement phenomenon or “S curve” described here in the context of continuous improvement can be found in the implementation of any project.

This discrepancy can be a problem for top management awaiting a return on investment and wishing for it as quickly as possible. This is all the more true if the activity is highly competitive, because an improvement can determine the competitiveness and/or profitability of a project, an offer, or even the whole organization.

It is therefore recommended that the project leader remind everyone of the likelihood, or rather the certainty, of the S curve, even managers who claim to know about it. Under business pressure they tend to “forget” it.

The second problem with delayed effects concerns those closer to execution who expect some benefits from improvement, such as problem solving, elimination of irritants, better ergonomics, etc.

Assuming that the operational, shopfloor staff have been involved in the improvement, their frustration and their impatience to see changes are even greater. Without promptly demonstrating that “it works”, there is a significant risk of losing their faith, attention and motivation.

In order to prevent this, the project manager must choose intermediate objectives in short intervals in order to be able to communicate frequently on small successes.

The recommendation is to aim for a weekly interval and not exceed a month. The week represents a familiar time frame for operational staff, and the month is, in my opinion, the maximum limit. Beyond a month it usually becomes an abstraction and attention gets lost.

About the author, Chris HOHMANN


What is Throughput Accounting?

Throughput Accounting (TA) can be understood as a simplified accounting system based on Theory of Constraints (ToC) principles. TA makes growth-driven management and decision making simpler and understandable even for people not familiar with traditional accounting.

Beyond simplifying, TA takes a different approach from traditional accounting. The latter focuses on cost control (cost of goods sold) and minimizing the unit cost, while TA strives to maximize profit.

Throughput Accounting sets the base for Throughput Analysis, helping to make decisions in the ToC way.

Simplifying accounting

Throughput Accounting will probably not replace GAAP in the short or medium term, but it provides a limited set of simple KPIs, sufficient to:

  • Manage and make decisions in a growth-oriented and ToC way
  • Allow faster reporting and near to real-time figure-based management
  • Help people in operations to understand the basics of accounting
  • Set a common base for controllers and operations to discuss decisions, investments, etc.

Throughput Accounting uses 3 KPIs and 2 ratios:

Throughput (T)

Throughput is defined as the rate of producing goal units (usually money) and translates, in accounting terms, as revenue or sales minus totally variable expenses.

Totally variable expenses can be simplified to the cost of direct materials, because labor is nowadays paid as a (relatively) fixed amount per time period, hence a constant expense to be considered part of Operating Expenses.

Operating Expenses (OE)

Operating Expenses are all the expenses, except the totally variable expenses previously mentioned in the calculation of Throughput, required to run and maintain the system of production. Operating Expenses are considered fixed costs, even though they may have some variable cost characteristics.

Investments (I)

Investments, formerly called Inventory, is the amount of cash invested (formerly “tied up”) in the system in order to turn as much of the Investments into Throughput as possible. This encompasses the stored raw materials waiting to be transformed into sellable products as well as investments in capacity / capabilities to produce more units.

Net Profit (NP)

Net Profit is defined as Throughput minus Operating Expenses, or Sales – Total Variable Costs – Operating Expenses.

Return On Investment (ROI)

Return On Investment is the Net Profit compared to Investments (ROI = NP/I).
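The three measures and two ratios above can be tied together in a few lines; all figures are assumed for illustration.

```python
# Illustrative figures in money units (assumed).
sales = 1_000_000
totally_variable_costs = 400_000   # direct materials
operating_expenses = 450_000
investments = 750_000

throughput = sales - totally_variable_costs    # T = Sales - TVC
net_profit = throughput - operating_expenses   # NP = T - OE
roi = net_profit / investments                 # ROI = NP / I

print(throughput, net_profit, roi)
```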

Drivers for achieving the Goal

Throughput Accounting offers a simplified way to identify and use the drivers to achieve the Goal, assuming the Goal is to make money now and in the future.

In a very simple way this can be summarized by the following picture, which means: strive to maximize Throughput while minimizing Operating Expenses and Investments.

ToC practitioners recognize that Throughput has no limit, while Operating Expenses and Investments can only be reduced down to limits beyond which safe operation can no longer be envisioned.

The priority focus on improving T (focusing on exploiting the constraint) rather than going for all-out cost cutting explains the (usually) superior results of the ToC way compared to unfocused improvements.

Throughput Accounting KPIs can be presented in a Dupont-inspired model in order to make the levers and consequences clear. (graphics to come)

Throughput Analysis

Beyond the simplification compared to traditional accounting, Throughput Accounting sets the base for Throughput Analysis, helping to make decisions in the ToC way.

Reminder: in a system with a capacity constraint, the Throughput is limited and controlled by the sole constraint. As the capacity is fully used and no spare is available to exploit, what goes through the constraint must be chosen wisely in order to make the best use of this very precious resource.

It then becomes obvious that utmost attention must be paid to maximizing the passing of the highest-profit-generating products through the constraint. Decision making is then based on the Throughput per constraint minute. The higher the T per constraint minute, the better.
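A minimal sketch of this ranking, with hypothetical products: each is scored by Throughput per minute consumed on the constraint, and the highest scores get priority.

```python
# Hypothetical products: Throughput per unit (money) and minutes each
# unit consumes on the constraint (assumed figures).
products = {
    "A": {"t_per_unit": 60, "constraint_min": 10},
    "B": {"t_per_unit": 90, "constraint_min": 30},
    "C": {"t_per_unit": 40, "constraint_min": 5},
}

def t_per_constraint_minute(name):
    p = products[name]
    return p["t_per_unit"] / p["constraint_min"]

# Rank products: the higher the T per constraint minute, the better.
ranking = sorted(products, key=t_per_constraint_minute, reverse=True)
print(ranking)
```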

Other decisions Throughput Analysis helps to make concern anything likely to alter Throughput, Operating Expenses or Investments. Basically, any incremental increase of OE and/or I should lead to an incremental increase of T.

Conversely any decrease of OE and/or I should NOT lead to an incremental decrease of T.


This post is partly inspired by the work of Steven Bragg. I recommend his blog and his post about Throughput analysis: http://www.accountingtools.com/throughput-analysis



What is Little’s law? What is it good for?

Little’s law is a simple equation explaining how Waiting Time, Throughput and Inventory are related.

Wait Time = Inventory (or WIP) / Throughput

Here is a video about Little’s law:

Fine, what is Little’s law good for?

Well, if a process lead time is too long, chances are that work-in-progress (WIP) is too high. For a given processing rate (Throughput), the lead time will be equal to WIP/Throughput. To reduce the lead time the process Throughput must be increased or the WIP reduced.
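Both levers can be sketched in a couple of lines (all figures assumed for illustration):

```python
def lead_time(wip, throughput):
    """Little's law: lead time = WIP / throughput."""
    return wip / throughput

# Assumed figures: 600 jobs in process, 50 completed per day.
print(lead_time(600, 50))    # baseline lead time in days
print(lead_time(300, 50))    # halving WIP halves the lead time
print(lead_time(600, 100))   # doubling throughput has the same effect
```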

Throughput is usually limited by some constraint: machining speed, resources available and so on. It may not be easy to increase Throughput.

WIP on the other hand can generally be controlled by limiting the inventory or the work to be done entering the system upfront.

In this video, Philip Marris explains how to reduce WIP by controlling the flow of work entering the system. Even though he does not mention Little’s law, it is indeed what is used to reduce inventory.



Introduction to Throughput Accounting

Throughput accounting comes early for anyone studying Theory of Constraints. The simplest part is about the 3 KPIs: Throughput (T), Operating Expenses (OE) and Inventory (I) – later renamed Investment – and their relationship to higher profits.

Later, Throughput accounting is used to make sound decisions to maximize profit despite limited means, favoring the products with highest “octane”, which is the Throughput per time unit of the constraint.

Here is an 18-minute ‘essentials’ video about Throughput Accounting provided by the London School of Business and Finance (LSBF).



ABC analysis for efficient picking

When it comes to machine layout, workplace setup or storage space, a prior ABC analysis of the frequency of events, or of value, volume, weight, etc., is a good start for making it efficient.

Reminder

The ABC analysis is based on the Pareto principle, sometimes called the 20/80 law (20% of causes accumulate 80% of effects), and defines three classes A, B and C:

  • Class A: items accumulating 80% of the effect
  • Class B: items accumulating the next 15%
  • Class C: items accumulating the last 5%

According to the 20/80 law, class A items are few (about 20%) but concentrate the main part (80%) of the effect. In the case of an inventory, the few class A items will accumulate the main value or biggest weight, move the fastest, earn the most profit, etc., depending on the parameter considered.

Class B is made of a relatively large share of part numbers or items accumulating altogether only 15% of all occurrences. The numerous items of class C make up a tail accumulating the remaining 5% of occurrences, despite their great number.
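A minimal sketch of such a classification, here by picking frequency; part numbers and counts are assumed for illustration, and the 80% / 95% cut-offs follow the class definitions above.

```python
# Hypothetical picking frequencies per part number.
picks = {"AA000": 520, "AB010": 260, "BC120": 120,
         "CD300": 60, "DD440": 25, "ZZ999": 15}

total = sum(picks.values())
classes, cumulated = {}, 0.0
# Walk the items from most to least picked, accumulating their share.
for part in sorted(picks, key=picks.get, reverse=True):
    cumulated += picks[part] / total
    classes[part] = "A" if cumulated <= 0.80 else ("B" if cumulated <= 0.95 else "C")

print(classes)
```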

Application example: Inventory and picking management

Let’s consider a warehouse where operators prepare kits or orders from parts or products stored on shelves. The shelves are traditionally set up in a U-shaped layout, providing a logical path for picking.

Let’s assume the part numbers are placed on the shelves in alphanumerical order, the first being AA000 and the last ZZ999, to simplify picking and make it straightforward. Picking lists are consistently printed in alphanumerical order, and the operator walks the U, picking up the articles (part numbers) according to his list.

This system is pretty simple, yet not very efficient. For each preparation, even one of few articles, operators have to walk the whole U. If the stored items are bulky or the U cell is widespread, the time spent walking and transporting becomes significant.

Remember, time spent moving or transporting material or parts is considered non-value-added, a waste in lean thinking.

Applying the rules of ABC analysis to our inventory, and focusing on the “stock turns” index or “picking frequency” per part number, we’ll see a class A (probably around 20% of all items) accumulating 80% of all picks.

The class A items being so often picked, it is only common sense to place them closest to the entry/exit (green zone) in order to reduce reaching distance, hence the time spent picking.

Stored quantities must be consistent with picked quantities; therefore, despite the fact that these items represent only a limited share of all part numbers, class A deserves additional storage space on the shelves.

Statistically in lower demand, class B items are placed behind class A (orange zone) and class C items (the least picked) are stored even further away (red zone).

Now, thanks to the new layout, picking times and distances are globally reduced. In most cases the journey in the U cell will be restricted to the front (green zone). Only in a few, statistically less frequent cases will the operator have to walk the whole U cell.

Other benefits

On top of time and distance reduction, we can imagine that the lights of our green zone stay on constantly, while the orange and red zones could be equipped with sensors or switches to light them up only when somebody is in. In the same way, for the comfort of workers, heating or air conditioning isn’t necessary in zones where they seldom go. These are opportunities to reduce energy expenses.
