What is Throughput Accounting?

Throughput Accounting (TA) can be understood as a simplified accounting system based on Theory of Constraints (ToC) principles. TA makes growth-driven management and decision making simpler and more understandable, even for people not familiar with traditional accounting.

Beyond simplifying, TA takes a different approach from traditional accounting: the latter focuses on cost control (cost of goods sold) and minimizing unit cost, while TA strives to maximize profit.

Throughput Accounting sets the base for Throughput Analysis, helping to make decisions in the ToC way.

Simplifying accounting

Throughput Accounting will probably not replace GAAP in the short or medium term, but it provides a limited set of simple KPIs, sufficient to:

  • Manage and make decisions in a growth-oriented and ToC way
  • Allow faster reporting and near real-time, figure-based management
  • Help people in operations to understand the basics of accounting
  • Set a common base for controllers and operations to discuss decisions, investments, etc.

Throughput Accounting uses 3 KPIs and 2 ratios:

Throughput (T)

Throughput is defined as the rate at which the system produces goal units (usually money); in accounting terms it translates to revenue or sales minus totally variable expenses.

Totally variable expenses can be simplified to the cost of direct materials, because labor is nowadays paid as a (relatively) fixed amount per time period and is therefore a constant expense, to be considered part of Operating Expenses.

Operating Expenses (OE)

Operating Expenses are all expenses required to run and maintain the system of production, except the totally variable expenses previously mentioned in the calculation of Throughput. Operating Expenses are considered fixed costs, even though they may have some variable cost characteristics.

Investments (I)

Investments, formerly called Inventory, is the amount of cash invested (formerly “tied up”) in the system, with the aim of turning as much of the Investments into Throughput as possible. This encompasses the stored raw material waiting to be transformed into sellable products, as well as investments in capacities / capabilities to produce more units.

Net Profit (NP)

Net Profit is defined as Throughput minus Operating Expenses, or Sales – Total Variable Costs – Operating Expenses.

Return On Investment (ROI)

Return On Investment is the Net Profit compared to Investments (ROI = NP/I).
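These definitions translate into very simple arithmetic. The sketch below shows the three relations in Python; all figures are made-up illustrative values, not taken from any real case:

```python
# Throughput Accounting relations: T = Sales - TVC, NP = T - OE, ROI = NP / I.
# All input figures below are invented example values.

def throughput(sales, totally_variable_costs):
    """T = sales minus totally variable costs (mainly direct materials)."""
    return sales - totally_variable_costs

def net_profit(t, operating_expenses):
    """NP = Throughput minus Operating Expenses."""
    return t - operating_expenses

def roi(np_, investments):
    """ROI = Net Profit compared to Investments."""
    return np_ / investments

T = throughput(sales=1_000_000, totally_variable_costs=400_000)  # 600_000
NP = net_profit(T, operating_expenses=450_000)                   # 150_000
ROI = roi(NP, investments=750_000)                               # 0.2

print(T, NP, ROI)
```

Note how the chain makes the levers visible: anything that raises T or lowers OE improves NP, and anything that lowers I improves ROI for the same NP.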

Drivers for achieving the Goal

Throughput Accounting offers a simplified way to identify and use the drivers to achieve the Goal, assuming the Goal is to make money now and in the future.

In a very simple way, this can be summarized as: strive to maximize Throughput while minimizing Operating Expenses and Investments.

ToC practitioners recognize that Throughput has no limit, while Operating Expenses and Investments have limits beyond which safe operation can no longer be envisioned.

The priority focus on improving T (by focusing on constraint exploitation), rather than going for all-out cost cutting, explains the (usually) superior results of the ToC way compared to unfocused improvements.

Throughput Accounting KPIs can be presented in a Dupont-inspired model in order to make the levers and consequences clear. (graphics to come)

Throughput Analysis

Beyond the simplification compared to traditional accounting, Throughput Accounting sets the base for Throughput Analysis, helping to make decisions in the ToC way.

Reminder: in a system with a capacity constraint, the Throughput is limited and controlled by the sole constraint. As the capacity is fully used and no spare is available to exploit, what goes through the constraint must be chosen wisely in order to make the best use of this very precious resource.

It becomes obvious, then, that utmost attention must be paid to maximizing the flow of the highest-profit-generating products through the constraint. Decision making is then based on the Throughput per constraint minute. The higher the T/minute, the better.
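The T/minute rule can be sketched as a simple product ranking. The product names and figures below are invented for illustration:

```python
# Product-mix sketch: rank products by Throughput per constraint minute.
# All names and numbers are illustrative.

products = [
    # (name, throughput per unit in $, constraint minutes per unit)
    ("A", 120, 4),   # 30 $/constraint-minute
    ("B", 100, 2),   # 50 $/constraint-minute
    ("C", 150, 6),   # 25 $/constraint-minute
]

def t_per_constraint_minute(product):
    name, t_per_unit, minutes = product
    return t_per_unit / minutes

# Highest T/minute first: this is the priority order for constraint time.
ranking = sorted(products, key=t_per_constraint_minute, reverse=True)
print([name for name, _, _ in ranking])  # ['B', 'A', 'C']
```

Note that product C generates the most Throughput per unit, yet ranks last: it consumes too much of the constraint's time per dollar earned.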

Other decisions Throughput Analysis helps to make concern anything likely to alter Throughput, Operating Expenses or Investments. Basically, any incremental increase of OE and/or I should lead to an incremental increase of T.

Conversely, any decrease of OE and/or I should NOT lead to an incremental decrease of T.
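These two rules can be combined into a minimal decision check. The function name and the `min_roi` threshold are my own illustrative choices, not a standard TA formula:

```python
# Incremental decision sketch: accept a change only if the extra
# Throughput covers the extra Operating Expenses, and any extra
# Investment earns at least a minimum return. Illustrative only.

def change_is_worthwhile(delta_t, delta_oe, delta_i, min_roi=0.0):
    """delta_t, delta_oe, delta_i: changes in T, OE and I (can be negative)."""
    delta_np = delta_t - delta_oe  # change in Net Profit
    if delta_np <= 0:
        return False               # the change hurts (or doesn't help) NP
    if delta_i > 0:
        # Extra Investment must earn at least the required return.
        return delta_np / delta_i >= min_roi
    return True

# Spending 10k more OE and 50k more I to gain 30k T (incremental ROI 0.4):
print(change_is_worthwhile(30_000, 10_000, 50_000, min_roi=0.15))  # True
# Cutting 5k OE at the cost of 20k T:
print(change_is_worthwhile(-20_000, -5_000, 0))  # False
```

The second call captures the "conversely" rule: a cost cut that reduces Throughput more than it saves is rejected.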

This post is partly inspired by the work of Steven Bragg. I recommend his blog and his post about Throughput analysis: http://www.accountingtools.com/throughput-analysis



Critical Chain Project Management alone is not enough

Critical Chain Project Management (CCPM) alone is not enough to drastically reduce a project’s duration and improve the efficiency of the development process.

CCPM is a proven project management approach to ensure that a project, any project, will meet its finishing date without compromising quality or any of the requirements. And even though CCPM can lead to finishing projects earlier, CCPM alone will not squeeze out all the improvement potential still hidden in the development process.

What CCPM does well is reconsider, in a very smart way, the project’s protection against delays. Individual protective margins are confiscated and pooled into a project buffer, allowing everyone to benefit from this shared and common protection.

There is a bit more to it than this protective project buffer, but for the sake of simplicity let us just keep it that… simple.

The visual progress monitoring with a Fever Chart provides early warning if the project completion date may be at risk, and helps spot where the trouble is.

Fever Chart

Fever Chart in a nutshell: x axis = project completion rate, y axis = protective buffer burn rate. Green zone = all ok, don’t worry. Amber zone = watch out, the project is drifting and the finishing date may be jeopardized. Red zone = alert, the project is likely to be delayed if no action brings the plot back into the Amber, and preferably the Green, zone.
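A minimal sketch of that zone logic, assuming straight-line zone boundaries (real Fever Charts are tuned per organization and the boundary values below are invented):

```python
# Fever Chart zone sketch: classify project status from completion rate (x)
# and buffer burn rate (y). Boundary values are illustrative assumptions.

def fever_zone(completion, buffer_burn):
    """Both inputs in [0, 1]. Returns 'green', 'amber' or 'red'."""
    # The tolerable buffer burn grows with completion:
    # burning buffer early is riskier than burning it late.
    amber_limit = 0.3 + 0.4 * completion  # green/amber boundary
    red_limit = 0.6 + 0.4 * completion    # amber/red boundary
    if buffer_burn <= amber_limit:
        return "green"
    if buffer_burn <= red_limit:
        return "amber"
    return "red"

print(fever_zone(0.2, 0.1))  # green: little buffer used early on
print(fever_zone(0.5, 0.6))  # amber: drifting, watch out
print(fever_zone(0.3, 0.9))  # red: likely delayed without action
```

The rising boundaries capture the chart's key idea: consuming most of the buffer near project completion is normal, while the same consumption early on is an alert.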

After a while, with proof that all projects can finish without burning up all of the protective buffer, meaning ahead of the estimated finish date, this arbitrary margin confiscation can be refined and some task durations trimmed down. At the same time, some of the common flaws in the process can be fixed: incomplete Work Breakdown Structures, poor linkage between tasks, ill-defined contents or missing requirements.

When this is done, projects may be shorter thanks to the reduced original protective margins and the other fixes, but the tasks themselves are seldom challenged about their value.

For instance, many of a project’s gate reviews were set up to monitor progress and give confidence to management. They were countermeasures to drifts and tunnel effects, the periods during which management is blind to progress. But with the early warning and easy visual monitoring provided by the Fever Chart, and more agility in the process, many of these reviews are now useless.

Thus, the time spent preparing documents, KPIs and presentations and attending meetings can be redirected to value-creating activities or simply eliminated.

Other tasks may clutter the project, like legacies of fixes for older issues, long obsolete but still kept because the project template carries them over. Changes in technology, or downstream process steps that were suppressed or made unnecessary but never fed back, may also leave unnecessary tasks in the project.

This is where a Lean Thinking approach complements CCPM, challenging the added value of each task, questioning the resources required (both in qualification or competencies and in quantity) and even the linkage to preceding and following tasks.

When considering a development process, embracing Lean Engineering can go even further. Lean Engineering fosters learning and the reuse of proven solutions. Libraries of such solutions and ready-for-use modules can save significant time, which can be reinvested in experimenting for the sake of further learning, or in shortening projects and engaging more development cycles with the same resources within the same time span.


What is Negative Branch Reservation?

Have you ever experienced the utmost frustration of implementing a solution or countermeasure to a problem, only to see a new issue arise, brought up by that very fix?

This is what Negative Branch Reservation (NBR) intends to prevent.

Negative Branch Reservation is a robustness test usually associated with a Future Reality Tree (FRT). It checks what could go wrong in the intended change process, in order to anticipate possible negative outcomes.

In a Future Reality Tree, identified Undesirable Effects (UDE) are combined with “injections”, which are solutions or countermeasures to neutralize the UDE, a cure to the pain if you will, hence the name “injection”.

Yet some injections may have negative side effects, opening a chain of causes and effects that develops into what is called a Negative Branch, leading to new UDEs.

How does this happen?

Injections combine with existing reality to produce some effect. This effect is either undesirable in itself, or the cause of a new UDE further up the tree, as the new effect can in turn combine with existing reality, and so on.

Being aware of this risk, Logical Thinking practitioners mitigate it with a scrutinizing technique called Negative Branch Reservation.

How to spot possible Negative Branches?

The search for possible Negative Branches is part of the scrutinizing of the Future Reality Tree, once the FRT is built. External* scrutinizers are invited to consider each effect entity of the tree and check whether an effect other than the expected one can arise. This includes the Desired Effects (DEs) at the top of the tree.

*people not involved in the construction of the FRT

If somewhere a Negative Branch is likely to grow, the next step is to check if the injection causing it can be replaced by another one, without the negative side effect.

Chances are that the initial injection must be kept because no better alternative can be found. In this case, the Negative Branch has to be trimmed.

How to trim Negative Branches?

In order to neutralize the UDE brought up by the Negative Branch, go back to the branch’s origin and surface the underlying assumptions, using if…then…because statements, following the arrow from the injection that caused the Negative Branch to grow.

Look for a possible injection to neutralize or minimize the UDE. If none can be found at that spot, move up the Negative Branch and repeat the process. At some point, an injection will “cure” the UDE.

As the cure for the Negative Branch’s UDE is an injection, this too must be checked for possible new side effects with negative outcome.

The robustness of the Future Reality Tree, and of every tree in the Logical Thinking Process, is guaranteed by thorough scrutinizing.


Bill Dettmer refining some points about Logical Thinking Process

In this video, an excerpt of Bill Dettmer‘s Logical Thinking Process (LTP) training course in Paris, June 2015, Bill refines some points about LTP.

First, the first two tools (trees) of the LTP, namely the Goal Tree (GT) and the Current Reality Tree (CRT), are based on facts. The others, the Evaporating Cloud (EC), Future Reality Tree (FRT) and Prerequisite Tree (PRT), are based on the high probability that things will happen as planned.

The components of a Goal Tree, called Necessary Conditions (or NCs) are factual needs necessary to achieve the Goal. The Current Reality is described in a CRT in a way that can be checked and proven, based on facts.

As soon as people work on the future, from the Evaporating Cloud on, it can only be described in probabilities, as things may not turn out exactly as planned. The Theory of Constraints’ Logical Thinking Process may need to rely on tools and methods other than Negative Branch Reservation (NBR) to mitigate the risks.

Furthermore, the time to accomplish the necessary tasks is, at best, an estimate.

Finally Bill summarizes what Logical Thinking Process is:

a structured way to move from an ill-defined system level problem to a fully implemented solution.
