Continuous Improvement: Prevent frustrations related to the S curve

When implementing solutions, as in continuous improvement, project managers had better take care of the frustrations related to the S curve.

S curve

The “S curve” is the shape of the performance curve over time. It describes a latency (t1) during which performance remains at its initial level p1 after the improvements have been implemented, then a more or less steep rise before stabilization at the new level of performance p2.
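As a rough sketch (not from the original post), the S curve can be modeled with a logistic function. The parameters p1, p2 and t1 follow the notation above; the steepness k is an assumed extra parameter:

```python
import math

def s_curve(t, p1, p2, t1, k=1.0):
    """Performance over time: close to the initial level p1 before t1,
    a steep rise around t1, then stabilization at the new level p2."""
    return p1 + (p2 - p1) / (1.0 + math.exp(-k * (t - t1)))

# Performance rising from 60 to 90 units, take-off around week 8
for week in (0, 4, 8, 12, 16):
    print(week, round(s_curve(week, p1=60, p2=90, t1=8, k=0.8), 1))
```

The point to notice is the flat start: for several periods after implementation, the measured performance barely moves, which is exactly when frustration builds up.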

This latency time after the first improvements until improvements become noticeable has several possible causes and can pose different problems.


The most trivial reason for a lack of significant effects after a while is that the solutions put in place do not produce the expected effects. It is therefore advised to estimate in advance, at the moment improvements are implemented, when their effects should become noticeable, in order to have an alert when the estimated time has elapsed.

Another trivial reason is a long cycle time. This may be the case with lengthy transformation processes, processing times or latencies inherent in the process before the success of the operation can be judged. Typical examples are technical lead times, the time required for chemical or biological transformation processes, or the “responsiveness” of third-party organizations.

The delay may be due to the improvement process itself, which may require several steps such as initial training, implementation of the first improvements, measurement of their effects and time to analyze them.

Another reason, which may be coupled with the previous one, is Little’s law. It states that the lead time through an inventory or queue of work in progress (WIP) is equal to the value of this inventory divided by the average consumption rate. This means that if the improvement occurs at a point decoupled from the measurement point of its effectiveness by inventory or WIP, the effect must first propagate through the queue before it can be detected, everything else being kept equal.
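A minimal illustration of this propagation delay, with made-up figures:

```python
def propagation_delay(wip_units, throughput_per_day):
    """Little's law: time for an upstream change to propagate
    through a queue of WIP to the downstream measurement point."""
    return wip_units / throughput_per_day

# 400 units queued between the improved step and the measurement
# point, consumed at 50 units per day
delay = propagation_delay(400, 50)
print(delay)  # 8.0 days before the improvement becomes visible
```

With eight days of queue between the improvement and the measurement point, even a perfectly working solution looks ineffective for more than a week.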

Please note that this delayed-improvement phenomenon or “S curve”, described here in the context of continuous improvement, can be found in the implementation of any project.

This delay can be a problem for Top Management awaiting a return on investment and wishing it as quickly as possible. This is all the more true if the activity is highly competitive, because an improvement can determine the competitiveness and/or profitability of a project, an offer or even the whole organization.

It is therefore recommended that the project leader remind stakeholders of the likelihood, or rather the certainty, of the S curve, even managers who claim to know about it. Under business pressure they tend to “forget” it.

The second problem with delayed effects concerns those closer to execution who expect benefits from the improvement, such as problem solving, elimination of irritants, better ergonomics, etc.

Assuming the operational, shopfloor staff have been associated with the improvement, their frustration and impatience to see changes are even greater. Without promptly demonstrating that “it works”, there is a significant risk of losing their faith, attention and motivation.

In order to prevent this, the project manager must choose intermediate objectives at short intervals, so as to be able to communicate frequently about small successes.

The recommendation is to aim for a weekly interval and not to exceed a month. The week is a familiar time frame for operational staff, and the month is, in my opinion, the maximum limit. Beyond a month the objective usually becomes an abstraction and attention gets lost.


What is Throughput Accounting?

Throughput Accounting (TA) can be understood as a simplified accounting system based on Theory of Constraints (ToC) principles. TA makes growth-driven management and decision making simpler and more understandable, even for people not familiar with traditional accounting.

Beyond simplifying, TA takes a different approach compared to traditional accounting. The latter focuses on cost control (cost of goods sold) and minimizing the unit cost, while TA strives to maximize profit.

Throughput Accounting sets the base for Throughput Analysis, helping to make decisions in the ToC way.

Simplifying accounting

Throughput Accounting will probably not replace GAAP in the short or medium term, but it provides a limited set of simple KPIs, sufficient to:

  • Manage and make decisions in a growth-oriented and ToC way
  • Allow faster reporting and near real-time, figure-based management
  • Help people in operations to understand the basics of accounting
  • Set a common base for controllers and operations to discuss decisions, investments, etc.

Throughput Accounting uses 3 KPIs and 2 ratios:

Throughput (T)

Throughput is defined as the rate of producing goal units (usually money); in accounting terms it translates to revenue or sales minus totally variable expenses.

Totally variable expenses can often be simplified to the cost of direct materials, because labor is nowadays paid as a (relatively) fixed amount per time period, hence a constant expense to be considered part of Operating Expenses.

Operating Expenses (OE)

Operating Expenses are all expenses required to run and maintain the production system, except the totally variable expenses previously mentioned in the calculation of Throughput. Operating Expenses are considered fixed costs, even though some of them may have variable cost characteristics.

Investments (I)

Investments, formerly called Inventory, is the amount of cash invested (formerly “tied up”) in the system in order to turn as much of it as possible into Throughput. This encompasses stored raw materials waiting to be transformed into sellable products as well as investments in capacities/capabilities to produce more units.

Net Profit (NP)

Net Profit is defined as Throughput minus Operating Expenses, or Sales – Total Variable Costs – Operating Expenses.

Return On Investment (ROI)

Return On Investment is the Net Profit compared to Investments (ROI = NP/I).
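The definitions above translate directly into a few one-line formulas. Here is a sketch with purely illustrative figures (none of them come from the article):

```python
def throughput(sales, totally_variable_expenses):
    """T = Sales - Totally Variable Expenses (mostly direct materials)."""
    return sales - totally_variable_expenses

def net_profit(t, operating_expenses):
    """NP = T - OE."""
    return t - operating_expenses

def roi(np_, investments):
    """ROI = NP / I."""
    return np_ / investments

# Illustrative figures for a period, in any currency unit
T = throughput(sales=1_000_000, totally_variable_expenses=400_000)  # 600000
NP = net_profit(T, operating_expenses=450_000)                      # 150000
print(T, NP, roi(NP, investments=750_000))                          # 600000 150000 0.2
```

Three subtractions and a division: this is the whole arithmetic, which is precisely why TA figures can be produced and discussed much faster than full cost accounting.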

Drivers for achieving the Goal

Throughput Accounting offers a simplified way to identify and use the drivers to achieve the Goal, assuming the Goal is to make money now and in the future.

In a very simple way this can be summarized by the following picture, which means: strive to maximize Throughput while minimizing Operating Expenses and Investments.

ToC practitioners recognize that Throughput has no upper limit, while Operating Expenses and Investments cannot be reduced below certain limits without endangering safe operations.

The priority given to improving T (focusing on exploiting the constraint) rather than going for all-out cost cutting explains the (usually) superior results of the ToC way compared to unfocused improvements.

Throughput Accounting KPIs can be presented in a DuPont-inspired model in order to make the levers and their consequences clear. (graphics to come)

Throughput Analysis

Beyond the simplification compared to traditional accounting, Throughput Accounting sets the base for Throughput Analysis, helping to make decisions in the ToC way.

Reminder: in a system with a capacity constraint, the Throughput is limited and controlled by the sole constraint. As the capacity is fully used and no spare is available to exploit, what goes through the constraint must be chosen wisely in order to make the best use of this very precious resource.

It becomes obvious then that utmost attention must be paid to passing the highest profit-generating products through the constraint. Decision making is then based on the Throughput per constraint minute (T/mn). The higher the T/mn, the better.
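A small sketch of that ranking rule, with invented products and figures. Note that the product with the highest throughput per unit is not necessarily the best user of constraint time:

```python
products = [
    # (name, throughput per unit, constraint minutes per unit)
    ("A", 40.0, 10.0),
    ("B", 36.0, 6.0),
    ("C", 25.0, 5.0),
]

# Rank by Throughput per constraint minute (T/mn), highest first:
# the scarce constraint time should go to the best T/mn products.
ranked = sorted(products, key=lambda p: p[1] / p[2], reverse=True)
for name, t_unit, minutes in ranked:
    print(name, round(t_unit / minutes, 2))
```

Here product A generates the most throughput per unit (40) but ranks last, because it consumes the most constraint minutes per unit; B wins with 6.0 per constraint minute.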

Other decisions Throughput Analysis helps to make concern anything likely to alter Throughput, Operating Expenses or Investments. Basically, any incremental increase of OE and/or I should lead to an incremental increase of T.

Conversely any decrease of OE and/or I should NOT lead to an incremental decrease of T.
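This pair of rules reduces to a single check on incremental Net Profit. A minimal sketch (the figures are invented):

```python
def improves_net_profit(delta_t, delta_oe):
    """ToC-style acceptance check: a change is worthwhile only if
    incremental Throughput exceeds incremental Operating Expenses,
    i.e. delta NP = delta T - delta OE > 0."""
    return delta_t - delta_oe > 0

print(improves_net_profit(delta_t=50_000, delta_oe=30_000))    # True: T grows faster than OE
print(improves_net_profit(delta_t=0, delta_oe=-10_000))        # True: cost cut with no T loss
print(improves_net_profit(delta_t=-20_000, delta_oe=-10_000))  # False: the "saving" costs T
```

The third case is the classic trap: a cost reduction that degrades Throughput more than it saves still looks good in a cost-focused report, but fails the Throughput Analysis check.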


This post is partly inspired by the work of Steven Bragg. I recommend his blog and his post about Throughput analysis: http://www.accountingtools.com/throughput-analysis



What is Little’s law? What is it good for?

Little’s law is a simple equation describing how Waiting Time, Throughput and Inventory are related.

Wait Time = Inventory (or WIP) / Throughput

Here is a video about Little’s law:

Fine, what is Little’s law good for?

Well, if a process lead time is too long, chances are that work in progress (WIP) is too high. For a given processing rate (Throughput), the lead time equals WIP/Throughput. To reduce the lead time, either the Throughput must be increased or the WIP reduced.
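A quick sketch comparing the two levers, with made-up numbers:

```python
def lead_time(wip, throughput_rate):
    """Little's law: lead time = WIP / Throughput."""
    return wip / throughput_rate

base = lead_time(wip=200, throughput_rate=20)     # 10.0 days
wip_cut = lead_time(wip=100, throughput_rate=20)  # 5.0 days: WIP halved
faster = lead_time(wip=200, throughput_rate=25)   # 8.0 days: Throughput +25%
print(base, wip_cut, faster)
```

Halving the WIP halves the lead time, while a hard-won 25% Throughput increase only brings it from 10 to 8 days, which is why limiting WIP is usually the first lever to pull.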

Throughput is usually limited by some constraint: machining speed, resources available and so on. It may not be easy to increase Throughput.

WIP on the other hand can generally be controlled by limiting the inventory or the work to be done entering the system upfront.

In this video, Philip Marris explains how to reduce WIP by controlling the flow of work entering the system. Even though he does not mention Little’s law, it is indeed what is used to reduce Inventory.



Introduction to Throughput Accounting

Throughput accounting comes early for anyone studying the Theory of Constraints. The simplest part is about the 3 KPIs: Throughput (T), Operating Expenses (OE) and Inventory (I) – later changed to Investment – and their relationship for higher profits.

Later, Throughput accounting is used to make sound decisions to maximize profit despite limited means, favoring the products with the highest “octane”, i.e. the Throughput per time unit of the constraint.

Here is an 18-minute ‘essentials’ about Throughput Accounting provided by the London School of Business and Finance (LSBF).



ABC analysis for efficient picking

When it comes to machine layout, workplace setup or storage space, a prior ABC analysis of the frequency of events, or of value, volume, weight, etc., is a good start for making it efficient.

Reminder

The ABC analysis is based on the Pareto principle, sometimes called the 20/80 rule (20% of causes account for 80% of effects), and defines three classes A, B and C:

  • Class A: items accumulating 80% of the effect
  • Class B: items accumulating the next 15%
  • Class C: items accumulating the last 5%

According to the 20/80 law, class A items are few (around 20%) but concentrate the main part (80%) of the effect. In the case of an inventory, a few class A items will accumulate most of the value or the biggest weight, move the fastest, earn the most profit, etc., depending on the parameter considered.

Class B is made of a relatively large share of part numbers or items accumulating altogether only the next 15% of occurrences. The numerous class C items make a long tail accumulating the remaining 5% of occurrences, despite their great number.
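The classification itself is straightforward: sort items by the chosen measure, cumulate shares, and cut at 80% and 95%. A minimal sketch with invented picking counts:

```python
def abc_classes(items):
    """Assign A/B/C classes by cumulative share of a measure
    (picking frequency, value, ...): A = first 80%, B = next 15%,
    C = last 5%."""
    total = sum(v for _, v in items)
    classes, cumulated = {}, 0.0
    for name, value in sorted(items, key=lambda x: x[1], reverse=True):
        cumulated += value
        share = cumulated / total
        classes[name] = "A" if share <= 0.80 else "B" if share <= 0.95 else "C"
    return classes

# Invented picking counts per part number over a period
picks = [("P1", 500), ("P2", 300), ("P3", 90), ("P4", 60), ("P5", 30), ("P6", 20)]
print(abc_classes(picks))
```

In practice the thresholds are a convention, not a law: an item straddling the 80% line can reasonably land in either class, and some organizations use slightly different cut-offs.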

Application example: Inventory and picking management

Let’s consider a warehouse where operators prepare kits or orders from parts or products stored on shelves. The shelves are traditionally set up in a U-shaped layout, providing a logical path for picking.

Let’s assume the part numbers are placed on the shelves in alphanumerical order, the first being AA000 and the last ZZ999, to simplify picking and make it straightforward. Picking lists are consistently printed in alphanumerical order; the operator walks the U and picks up the articles (part numbers) according to his list.

This system is pretty simple, yet not very efficient. For each preparation of a few articles, operators have to walk the whole U. If the stored items are bulky or the U cell is widespread, the time spent walking and transporting becomes significant.

Remember, time spent moving or transporting material or parts is considered a non-value-adding operation, a waste in lean thinking.

Applying the rules of ABC analysis to our inventory, and focusing on the “stock turns” index or “frequency of picking” per part number, we’ll see an A class (probably around 20% of all items) accumulating 80% of all pickings.

The A class items being picked so often, it is only common sense to place them closest to the entry/exit (green zone) in order to reduce reaching distance, hence time spent picking.

Stored quantities must be consistent with picked quantities; therefore, and despite the fact that these items represent only a limited share of all part numbers, the A class deserves additional storage space on the shelves.

Statistically in lower demand, class B items are placed behind class A (orange zone), and class C items (the least picked) are stored even further away (red zone).

Now, thanks to the new layout, picking time and distances are globally reduced. In most cases, the journey in the U cell will be restricted to the front (green zone). Only in a few, statistically less frequent cases will the operator have to walk the whole U cell.

Other benefits

 

On top of time and distance reduction, we can imagine that the lighting of the green zone stays on constantly, while the orange and red zones could be equipped with sensors or switches to light them up only when somebody is in them. In the same way, for the comfort of workers, heating or air conditioning isn’t necessary in zones where they seldom go. These are opportunities to reduce energy expenses.
