The Executive Guide to Breakthrough Project Management – Book Review

The Executive Guide to Breakthrough Project Management is about combining Critical Chain Project Management with “alliancing”, or collaborative contracting, into a win-win, efficient way to manage huge (or small) construction projects.

Early in the reading of the guide, it becomes obvious that what the authors describe as efficient for construction and capex projects can be used in many other trades.

Watch the author’s conference

My takeaways from Throughput Accounting, the book

I knew the author, Steven M. Bragg, from his podcast series “Accounting Best Practices with Steven Bragg” before I came across his book “Throughput Accounting: A Guide to Constraint Management”, published by Wiley & Sons, 2007.

Book presentation

The hardcover book has 178 pages and 10 chapters. It is easy to read, with a neat layout, legible fonts, and numerous tables, graphs and illustrations backing up the examples and case studies.
It claims to give accountants, financial analysts, production planners and production managers the tools needed to improve company performance.

The book starts head-on by introducing the basics of the Theory of Constraints (ToC) in an uncommon and, to me, daring way: briefly explaining the Drum-Buffer-Rope (DBR) logic in chapter 1 (page 1!).

It is daring because putting DBR upfront is a shortcut: DBR is usually presented to newcomers (long) after explaining the bottleneck concept and the difference between traditional manufacturing, which tries to run every resource at full utilization, and the ToC approach, where “only” the bottleneck matters (another shortcut, but this one is mine…).

It goes on to present the different types of constraints, not all of which are bottlenecks, and discusses the nature of the constraint (page 5). The Throughput Accounting (TA) KPIs are presented on pages 7 and 8 before diving into the financial aspects of TA.
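For readers new to these measures, the core Throughput Accounting relations are commonly stated as Throughput = Sales − Totally Variable Costs, Net Profit = Throughput − Operating Expense, and ROI = Net Profit / Investment. Here is a minimal Python sketch of these standard ToC definitions; the figures are invented and the book’s exact presentation may differ.

```python
# Standard Throughput Accounting measures (generic ToC definitions, not quoted from the book)

def throughput(sales_revenue: float, totally_variable_costs: float) -> float:
    """Throughput (T): money generated through sales minus totally variable costs."""
    return sales_revenue - totally_variable_costs

def net_profit(t: float, operating_expense: float) -> float:
    """Net Profit (NP) = T - Operating Expense (OE)."""
    return t - operating_expense

def return_on_investment(np_: float, investment: float) -> float:
    """ROI = NP / Investment (I), where I is the money tied up in the system."""
    return np_ / investment

# Hypothetical example: $100k sales, $40k totally variable costs, $35k OE, $200k investment
t = throughput(100_000, 40_000)           # 60,000
npr = net_profit(t, 35_000)               # 25,000
roi = return_on_investment(npr, 200_000)  # 0.125 -> 12.5%
print(t, npr, roi)
```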

Chapter 2 is about constraint management in the factory, starting with how to locate the constraint and how to manage the constrained resource. The hints clearly target managers and readers who keep some distance from the shop floor, as they give enough insight without being overly detailed. Nobody will get bored going through them: each hint is condensed into a few lines without giving up anything important.

Four pages deal with policy constraints, again of interest to managers and readers who have enough influence within their organization to educate colleagues about the drawbacks of some policies and, hopefully, change them. The importance of the constraint buffer comes on page 25, followed by the importance of proper batch sizes and machine setups.

Chapter 3 is about Throughput Accounting (TA) versus traditional cost accounting concepts. It starts with the emphasis on cost versus Throughput and goes on to describe why traditional cost accounting, and the companies relying on it, suffers from several problems.

This chapter is important for people not very familiar with accounting, especially in operations, because it explains some of the decisions that make little sense when considered from an operations point of view. It is also important for those familiar with traditional cost accounting, to understand the limitations and problems brought by that approach.

Chapter 4 is about Throughput and Financial Analysis Scenarios and, from page 59 to 86, takes readers through 14 different scenarios, from the Low Price, High Volume Decision to the Plant Closing Decision.

Chapter 5 is on Throughput in the Budgeting and Capital Budgeting Process, chapter 6 on Throughput and Generally Accepted Accounting Principles, and chapter 7 on Throughput and Control Systems.

Chapter 8 details Throughput and Performance Measurement and Reporting Systems, interesting because it links operations’ reality to usable KPIs (the first one is sketched in code after the list), e.g.

  • Ratio of Throughput to Constraint Time Consumption
  • Total Throughput Dollars Quoted in the Period
  • Constraint Utilization
  • Constraint Schedule Attainment
  • Manufacturing Productivity
  • Manufacturing Effectiveness
  • Order Cycle Time
  • Throughput Shipping Delay
    And more.
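As an illustration of the first KPI in this list, throughput per unit of constraint time is typically used to rank products by how much throughput they generate per minute on the constraint. Below is a hypothetical sketch: the product data and names are made up, and the book’s exact formula and terminology may differ.

```python
# Hypothetical example: ranking products by throughput per constraint minute.

products = [
    # name, selling price, totally variable cost, minutes on the constraint per unit
    ("A", 120.0, 50.0, 10.0),
    ("B", 200.0, 90.0, 25.0),
    ("C", 80.0, 30.0, 4.0),
]

def throughput_per_constraint_minute(price: float, tvc: float, constraint_minutes: float) -> float:
    """Throughput (price - totally variable cost) generated per minute of constraint time."""
    return (price - tvc) / constraint_minutes

ranked = sorted(
    products,
    key=lambda p: throughput_per_constraint_minute(p[1], p[2], p[3]),
    reverse=True,
)

for name, price, tvc, minutes in ranked:
    value = throughput_per_constraint_minute(price, tvc, minutes)
    print(f"{name}: {value:.2f} $/constraint-minute")
# Product C ranks first (12.50), then A (7.00), then B (4.40)
```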

Chapter 9 is named Throughput and Accounting Management and addresses 12 decision areas, among which: Throughput Analysis Priorities, The Inventory Build Concept, Investment Analysis, and Price Formulation.

Finally, chapter 10 presents 7 Throughput Case Studies, each covered in a couple of pages.

My takeaways

The book is easy to read and all concepts are easy to understand thanks to the simple way the author puts them. Not being an accounting specialist at all, I have always liked the simple, pragmatic and concise way Steven Bragg explains accounting rules and practices. This book is no different.

Reading “Throughput Accounting: A Guide to Constraint Management” reinforced both my knowledge of and my interest in throughput accounting, as well as my conviction that it is a powerful and crucial decision-making approach.

I’ve marked dozens of pages with sticky notes highlighting my points of interest and/or inspirations for posts on my blog, reinforcing my consulting approach, etc.

Throughput accounting

Almost all companies have management heavily influenced by traditional cost accounting, and most of them make ill-oriented decisions as a result. With the help of the book’s content, it is easier to explain to CFOs and CEOs why their decisions are biased by false assumptions or outdated rules, something that can be quite shocking to them.

The book doesn’t come cheap but, as it explains, quit reasoning in terms of cost savings and consider instead how much (intellectual?) Throughput it can leverage.



My Takeaways from Big data, the book

I got my first explanations about Big Data from experts who were my colleagues for a time. These passionate IT guys, surely very knowledgeable about their trade, were not always good at conveying somewhat complex concepts in a simple manner to non-specialists. Yet they did well enough to raise my interest in learning a bit more.

I then did what I usually do: search and learn on my own. That’s how I bought “Big data: A Revolution That Will Transform How We Live, Work and Think” by Viktor Mayer-Schonberger & Kenneth Cukier.

Without turning into an expert myself, I got further in understanding what is behind big data and gained a better appreciation of its potential and of the way it surely will “Transform How We Live, Work and Think”, as the book cover claims.

My takeaways

Coping with mass and mess

Big data as a computing technique is able to cope not only with huge amounts of data, but with data from various sources and in various formats, showing order in an incredible mess that traditional approaches could not even start to exploit.

Big data can link together comments on Facebook, Twitter, blogs and websites with companies’ databases about a product, for example, even though the data formats are highly different.

In contrast, when using traditional database software, data need to be neat and comply with a predetermined format. It also requires discipline in the way data are entered into a field, as the software would be unable to understand that a mistyped “honey moon” meant “honeymoon” and should be considered, computed and counted as such.
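A toy illustration of this point (my own, not from the book): exact matching treats “honey moon” and “honeymoon” as different values, whereas even a simple normalization step reconciles them.

```python
# Toy sketch: exact matching vs. a simple normalization step.
from collections import Counter

entries = ["honeymoon", "honey moon", "Honeymoon", "honeymoon"]

exact_counts = Counter(entries)
print(exact_counts)
# Counter({'honeymoon': 2, 'honey moon': 1, 'Honeymoon': 1}) -> three "different" values

normalized_counts = Counter(e.lower().replace(" ", "") for e in entries)
print(normalized_counts)
# Counter({'honeymoon': 4}) -> all counted as the same value
```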

Switch from causation to correlation

With big data, the obsession for the “why” (causation) will give way to the “what” (correlation) for both understanding something and making decisions.

Big data can be defined as being about what, not why

This is somewhat puzzling, as we have long been used to searching for causation. It is especially weird with predictive analytics: the system will tell that a problem exists, but not what caused it or why it happens.

But for decision-making, knowing what is often good enough; knowing why is not always mandatory.

Correlation was known and used before big data, but now that computing power is no longer a constraint, analysis is no longer limited to linear correlations; more complex, non-linear correlations can be surfaced, allowing a new point of view and an even bigger picture to look at.
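A small sketch of what “beyond linear correlation” can mean (my illustration, not from the book): a perfectly monotonic but strongly non-linear relationship is understated by plain linear (Pearson) correlation, while a rank-based (Spearman) correlation surfaces the full dependence.

```python
# Toy example: linear vs. rank correlation on a non-linear relationship.
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(42)
x = rng.uniform(0.0, 1.0, 10_000)
y = x ** 10  # non-linear, but strictly increasing in x

linear_r, _ = pearsonr(x, y)
rank_r, _ = spearmanr(x, y)

print(f"Pearson (linear) correlation: {linear_r:.2f}")  # roughly 0.66
print(f"Spearman (rank) correlation:  {rank_r:.2f}")    # 1.00
```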

I like to imagine it as a huge data cube I can handle at will to look at from any perspective.

Latent, inexhaustible value

Correlation frees the latent value of data; therefore, the more data, the better.

What does it mean?

Prior to big data, the limitations of data capture, storage and analysis tended to restrict collection to data useful to answer the “why”. Now it is possible to ask a huge mass of data many different questions and find patterns, giving answers to (almost any?) “what”.

The future usage of data is not known at the moment it is collected but, with low-cost storage, this is not (anymore) a concern. Value can be generated over and over in the future, just by going through the mass of data with a new question, another investigation… Data retain latent value and can be used again and again without depleting.

That is why big data is considered the new ore, except it is not exhausted when used; its usage is, in a way, infinite. That’s why so many companies are eager to collect data, any data, a lot of data.

Do not give up exactitude, but the devotion to it

For making decisions, “good enough” information is… good enough.

With massive data, inaccuracies increase, but have little influence on the big picture.

The metaphor of telescope vs. microscope is often used in the book: when exploring the cosmos, a big picture is good enough even though many stars will be depicted by only a few pixels.

When looking at the big picture, we don’t need the accuracy of every detail.

What the authors try to make clear is that we do not give up exactitude, but the devotion to it. There are cases where exactitude is not required and “good enough” is simply good enough.

Big versus little

Statistics were developed to understand what little available data and/or computing power could tell. Statistics basically extrapolate the big picture from (very) few samples. “One aim of statistics is to confirm the richest findings using the smallest amount of data”.

Computing power and data techniques are nowadays so powerful that it is no longer necessary to work on samples only; analysis can be done on the whole population (N=all).
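To make the contrast concrete, here is a toy sketch (my illustration, not from the book) comparing a statistic estimated from a small classical sample with the same statistic computed on the whole dataset; with enough computing power, the sampling step can simply be skipped.

```python
# Toy sketch: small sample estimate vs. whole-population (N = all) computation.
import numpy as np

rng = np.random.default_rng(0)
population = rng.lognormal(mean=3.0, sigma=1.0, size=1_000_000)  # skewed "N = all" data

sample = rng.choice(population, size=1_000, replace=False)       # classical small sample

print(f"Sample mean (n=1,000):   {sample.mean():.2f}")
print(f"Population mean (N=all): {population.mean():.2f}")
```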

Summing up

I was really drawn into reading “Big data”, a well-written book for non-IT specialists. Besides giving me insight into the changes and potential of real big data, it really changed my approach to smaller data: the way I collect and analyse them, how I build my spreadsheets and how I present my findings.

My takeaways are biased, as I consider big data from the perspective of “industrial”, technical data and not personal data. The book also shares insights about the risks of the usage already made of personal data and what could come next in terms of reduced privacy or threats to it.


If you like this post, share it!