What is autonomous maintenance (TPM)?

Autonomous maintenance is one of the 8 Total Productive Maintenance (TPM) pillars. It aims to give operators both the competence and the responsibility for routine maintenance, such as cleaning, lubricating, and inspection.

The aims and targeted benefits of autonomous maintenance

The ultimate goal of Total Productive Maintenance is to enhance machines’ effectiveness. TPM is a participative approach, involving all stakeholders and taking into account all aspects of maintenance. In order to achieve this goal, TPM is split into 8 pillars or topics. Autonomous maintenance is one of these eight and deals with simple, mundane tasks that are nevertheless important. The expected outcomes are:

  • Operators’ greater “ownership” of their equipment
  • Increased operators’ knowledge of their equipment
  • Ensuring equipment is well-cleaned and lubricated
  • Identification of emergent issues before they become serious failures
  • Freeing maintenance personnel for higher-level tasks

Operators’ ownership

Operators’ ownership of their equipment is meant to close the divide between Production and Maintenance in cases where the former claims “my job is to produce” and the latter “my job is to repair”. This is mainly the case when production staff is incentivized on production output and maintenance jealously guards its technical skills and prerogatives.

What happens then is that production operators do not usually care much about the equipment and machines they use and are prone, for example, to exceed speed limits.

As they are not supposed to do anything about the machine breaking down, they soon find out that breakdowns are opportunities for an extra break, hence an extra smoke, one more coffee and so on.

As a result, machine stops last longer than they should: waiting for maintenance staff to come, discover the cause of the trouble and fix it, then waiting for the operators to come back and resume production.

It can go the other way when production is strongly incentivized on units produced: any stoppage or breakdown jeopardizes the bonus and is immediately resented when maintenance doesn’t fix the problem fast enough.

What TPM is trying to do is give operators a sense of ownership of their equipment so that they take care of it, use it well, help maintenance technicians find the causes of breakdowns by summarizing what happened beforehand, and so on.

In order to achieve this, training must be delivered to both production and maintenance staff, focusing on the required cooperation for the sake of overall performance improvement. It will be a win-win cooperation: operators enriching their jobs with technical aspects and maintenance technicians being freed of low-qualification tasks for a better use of their real technical expertise. However, this must be done step by step.

Increasing operators’ knowledge of their equipment

Operators will use their equipment and machines correctly if they are trained not only in their use but also a bit further, into technical details. When operators have a basic understanding of how a machine works, they may be able to discover some causes of malfunction by themselves and give precious indications to the maintenance team. With this, downtime can be reduced as maintenance does not have to go through a full investigation. If operators show interest and ability, they may be trained further, to the point where they can help maintenance with repairs, preventive maintenance tasks, adjustments, etc.

In my years as a production manager with Yamaha, we brought in teams of ladies to take care of the maintenance of automatic electronic component insertion machines. These ladies started as operators without any technical background, only feeding the machines. Step by step we trained them to take care of simple cleaning tasks, then adjustments, later exchanging more and more complicated mechanisms and finally being involved in major repairs.

Ensuring equipment is well-cleaned and lubricated

Before dreaming of repairing complex equipment, the journey starts with more mundane but important tasks: cleaning and lubrication.

But it’s more than that. Autonomous maintenance is about handing over to operators the basic cleaning of the machines, lubricating and oiling, tightening of nuts and bolts, etc.

With these new tasks, operators will soon be able to take over daily inspection, diagnosis of potential problems and other actions that increase the productive life of machines or equipment. With appropriate prior training, of course.

Identification of emergent issues before they become serious failures

Cleaning and lubrication by operators is not a trick to reduce manpower costs by pushing tasks to lesser qualified people. On the contrary: TPM considers daily cleaning as an inspection and operators as subject matter experts. Indeed, operators using the machines and equipment daily are the best qualified detectors of early signs of problems. While cleaning they can detect: wear, unusual noises, vibrations, heat, smell, leakage, change of color, etc.

Using the machines frequently, they know best what is “as usual” and what is unusual. Someone hired only to clean and lubricate machines without using them would not be able to notice the forerunning signs of potential big trouble.

This daily inspection is key to reducing breakdowns, by keeping the machine in good condition and by warning early – before breakdown – so that unusual warning signs can be remedied swiftly.

Freeing maintenance personnel for higher-level tasks

Putting skilled professionals in charge of challenges matching their expertise is certainly more attractive than asking them “to clean up others’ mess”, as maintenance staff frequently complain. Therefore, training production operators for simple tasks and handing those tasks over should not be a big deal for maintenance techs.

Production management should also see the opportunity to get better technical support for improvements and repairs, as skilled technicians become more available. Of course, this comes at the expense of a few minutes a day devoted to taking care of machines instead of producing parts. In the long run, this should be a good deal, because fewer breakdowns, less scrap, fewer minor stops and faster changeovers thanks to technical improvements will pay back in productive capacity.

Finally, for production operators, the deal is to enrich their job with more technical content. Those immediately claiming that the acquisition of new skills deserves a pay raise should first consider that taking care of the machines and equipment they are in charge of is a basic expectation, not an extra requirement. Time will be given to do the daily maintenance routine. For operators, it is a shift in the content of their work for a few minutes a day.

That said, the question of a raise is to be considered in context.


The fallacy of maturity assessments

Maturity assessments are a kind of qualitative audit during which the current “maturity” of an organization is compared to a maturity reference model and ranked according to its score.

As explained in the Wikipedia article about maturity models (https://en.wikipedia.org/wiki/Maturity_model), the implementation is either top-down or bottom-up, but in my experience it is mostly top-down. The desired maturity score is set by corporate top management in its desire to bring the organization to a minimum level of maturity about… Lean, Supply Chain practices, project management, digital… you name it.

The maturity assessment is usually quite simple: a questionnaire guides the assessment, each maturity level being characterized by a set of requirements. It is close to an audit.

The outcome of such an assessment is usually a graphic summary displaying the maturity profile or a radar chart, comments about the weak points / poor scores and maybe some recommendations for improvement.

The gap between the current maturity and the desired maturity state is then to be closed by an action plan or by following a prescribed roadmap.
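To make the mechanics concrete, here is a minimal sketch, in Python, of how such an assessment boils down to per-dimension scores compared with a target profile; the dimensions, scale and figures are invented for illustration.

```python
# Hypothetical maturity scores on a 1-5 scale, per assessed dimension,
# compared with the target profile dictated by corporate top management.
current = {"Lean": 2, "Supply Chain": 3, "Project Mgmt": 4, "Digital": 1}
target  = {"Lean": 3, "Supply Chain": 3, "Project Mgmt": 4, "Digital": 3}

for dimension, score in current.items():
    gap = target[dimension] - score
    status = f"gap of {gap} to close" if gap > 0 else "at or above target"
    print(f"{dimension:<13} current {score} / target {target[dimension]} -> {status}")

# These per-dimension scores feed the radar chart; the gaps feed the action plan.
```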

What’s wrong with maturity models/assessments?

1 – The fallacy of maturity assessments

A maturity assessment would be fine if it were considered for what it is: a maturity assessment. But this one-dimensional assessment is too often used as a two-dimensional tool, by assuming that the level of operational performance is positively correlated with maturity.

In other words: the better the maturity, the better the operational performance.

Indeed, such a correlation can frequently be found, but correlation isn’t causation, which means that there is no mechanical or systematic link between maturity and the level of performance.

Even if a high level of maturity often matches a high level of performance and vice versa, there is no guarantee that performance will rise if maturity is raised.

Furthermore, studies have shown that there are exceptions: some organizations with low maturity perform better than some high-maturity ones. You may be interested in reading my post “How lean are you, part 2”, about the awareness/performance matrix, on this subject.

Therefore, the belief that the positive correlation between maturity and performance is a kind of law is flawed, or nothing more than wishful thinking.

Many organizations boast about their high maturity, the number of kaizen events, workshops, colored belts, training sessions or workers’ suggestions, yet there is nothing impressive to be noticed on the gemba.

Now I can understand why most managers and improvement champions like the sole maturity assessment:

  • it is much easier to do
  • the assessment items can be common to very different units with different activities
  • the general roadmap and global target are easy to set
  • maturity objectives are qualitative

On the other hand:

  • measuring overall performance in a comparable way can be trickier, especially in an organization with several different core businesses
  • it is annoying to admit that all the efforts to raise maturity are not paying off in terms of performance, and painful to explain why

2 – The one-size-fits-all maturity targets

Another problem with maturity assessments is that some corporations dictate a minimum maturity level regardless of local realities.

That’s how some subsidiaries doing well in terms of performance get bad maturity scores because they do not apply SMED (Single Minute Exchange of Die, an approach to reduce changeover duration). The point is that these subsidiaries have more or less continuous production processes with huge batch sizes that barely change. Why would they go for SMED when they don’t need it? The same case can be made for one-piece flow or heijunka (load levelling) enacted as a must-do.

Others score poorly because they didn’t Value Stream Map (VSM) their processes. The fact is that those units had no problems a VSM could help to solve. The list of examples could go on and probably, dear reader, you have faced such situations yourself (leave your testimony in the comments!)

3 – Doing it to be compliant, not because it makes sense

This third point is a corollary of the previous one. Because the objectives have been set at a higher level, and in order to be compliant, most unit managers will pay lip service to the dictated targets, get scores that are good enough, and be left alone once the assessment is done.

The local staff recognizes the nonsense of the demanded score, yet goes for the least effort and, instead of fighting against the extra unnecessary work, chooses to display what top management wants.

This is the typical “tell me how you’re measured, I’ll tell you how you behave” syndrome, inducing counterproductive behaviors or practices.

While top management will be pleased with scores reinforcing its flawed belief, the local unit managers did not embrace the prescribed practices, tools or methods at all. They only camouflaged reality.

Wrapping up

Maturity assessments are not a bad thing per se, but their practicality and simplicity are often misused to assess more than just maturity (or awareness). This is most often misleading because of the false underlying assumptions, and it promotes wrong behaviors and practices.


PS: You may be interested to read Michel Baudin’s comments on his own blog about this post: http://michelbaudin.com/2017/08/22/the-fallacy-of-maturity-assessments-chris-hohmann/



OEE Rescue: OEE is composite and does not tell much per se

Overall Equipment Effectiveness (OEE) is probably the most widespread and well-known among KPIs in industry, which does not mean that everyone likes it. OEE rescue is a series of posts that aim to balance the love-hate comments about this KPI as well as debunking some myths and misconceptions.

In this post: OEE is composite and does not tell much per se

Yes, OEE is composite. OEE is expressed as a single dimensionless value. It’s a ratio, the product of 3 other ratios (availability, performance and quality).

Not familiar with OEE? Follow this link

What I immediately liked when I discovered OEE is the fact that multiplying 3 fractions, each smaller than 1, leaves the result smaller than the smallest of them, meaning it is a very aggressive and challenging KPI.

Any worsening of one of the 3 constituents is passed on in full to the OEE, which in turn should trigger quick countermeasures to stop the KPI from plunging.

I do not agree that OEE per se doesn’t tell much. The original intent, I assume, was to provide management with a single value giving an instant feeling of how overall performance stands, as well as a convenient benchmark when comparing machines, lines, workshops or factories. And it does that job well.

This advantage of being synthetic is also a drawback, as it is necessary to “disassemble” OEE into its components in order to understand which of availability, performance or quality is the evildoer.

But again, I see here an interesting “constraint” for management: the head of department will review OEEs and get a broad feeling about how well the various lines or cells of his/her realm are doing. When intrigued or alarmed by a poor OEE, he/she will turn to the supervisor or line leader to get more information.

The latter needs to know more precisely what’s going on, as it is his/her responsibility to keep OEE at its best. This required dialog is, from my point of view, a good way to get management to commit to interacting in both directions: top-down and bottom-up.

I suspect that the managers not liking OEE struggle to drive and maintain theirs at the expected level. Instead of looking at how to boost their OEEs, they probably prefer criticizing the concept.

It is one thing to display a quality rate of 93%, a machine availability of 90% and a performance rate of 95%, which at first glance look good, and another thing to report an OEE of 79.5%, which is exactly the same (0.795 = 0.93 × 0.90 × 0.95), except for the perception.
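As a minimal illustration of this arithmetic (in Python, reusing the figures from the example above), the sketch below computes OEE from its three constituents and shows how a drop in any one of them is passed straight through to the composite:

```python
def oee(availability: float, performance: float, quality: float) -> float:
    """OEE is the product of its three constituent ratios."""
    return availability * performance * quality

# The example above: three ratios that each look good at first glance...
print(f"OEE = {oee(0.90, 0.95, 0.93):.1%}")   # 79.5% -- humbling

# ...and a 10-point drop in availability alone pulls the composite down further.
print(f"OEE = {oee(0.80, 0.95, 0.93):.1%}")   # 70.7%
```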

Yes, OEE is humbling.

By the way, there are other composite KPIs, like On-Time-In-Full (OTIF). When OTIF is bad, is it the On-Time or the In-Full part that hurts? You don’t know until you dig deeper into the details. Would you dare reply to your furious customer, who measures your performance with OTIF, that this KPI is composite and does not tell much per se?
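For comparison, here is a hypothetical sketch of why OTIF also needs to be “disassembled” before it can be acted on; the order data and field names are invented for the example.

```python
# Each delivery is judged on two criteria; OTIF only credits those meeting both.
orders = [
    {"on_time": True,  "in_full": True},
    {"on_time": True,  "in_full": False},   # on time but short-shipped
    {"on_time": False, "in_full": True},    # complete but delivered late
    {"on_time": True,  "in_full": True},
]

on_time = sum(o["on_time"] for o in orders) / len(orders)
in_full = sum(o["in_full"] for o in orders) / len(orders)
otif    = sum(o["on_time"] and o["in_full"] for o in orders) / len(orders)

# OTIF alone (here 50%) does not say which criterion hurts; the split does.
print(f"On-Time {on_time:.0%}, In-Full {in_full:.0%}, OTIF {otif:.0%}")
```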


Standing in the Ohno circle. And then?

The Ohno circle is also known as “Taiichi Ohno’s Chalk Circle”, a circle drawn on the shop floor to materialize the observation point from which to learn to observe, see, analyze and understand.

The original method puts (commits?) the “disciple” in such a circle for an extended time with the instruction to watch and not leave the circle. After a time the master judged sufficient, he (would a lady master do this to others?) would ask the disciple to tell what he/she had seen, of course expecting feedback on something that had caught the master’s own attention.

I would probably never have impressed Ohno that way, nor would I have appreciated this kind of treatment. With such a vague assignment and a “creative brain”, my observation would probably have turned into a virtual mind stroll.

Getting a scolding afterwards for not having experienced an epiphany (i.e. the great revelation) while dreaming in my circle, or for having dared to step out of it, would not have pleased me at all.

As I was never told to do it and have never done it this way, the reservations and benefits I express in this article are merely assumptions.

My reservations about the chalk circle

Holding a static position in order to observe and understand, when there is no other reason than the master’s say-so, does not make sense.

Observation and understanding are certainly easier and more effective when observers can change their point of view and ask questions.

Executing a task without knowing its purpose is not very motivating, and just being told to “watch” without moving from the spot is not very respectful.

Hence my question: couldn’t it be nothing more than a manager-humiliating exercise disguised as a master’s skill?

Being told to watch may lead to having too much to look at, especially when not familiar with the environment. Chalk circle promoters will answer that this is precisely what the exercise is meant for: get an overall impression, then gradually become aware of things in the foreground and background, of what is normal and what is abnormal, and eventually focus.

So far so good, but does a manager need such a constraining method and to spend several hours to reach a fair level of understanding? In an era of high speed and volatility, the understanding-to-time ratio does not make the chalk circle a method with a reasonable ROI.

Lean Management has long promoted Gemba Walks, not Gemba Stands, where the motto is go see, ask why and show respect.

This way is probably far more effective than standing hours in a circle.

If it weren’t the case, Lean gurus and the Lean community would have made it clear a long time ago.

Being convinced of one’s observation and analysis skills and voluntarily spending time watching from a static standpoint may lead to erroneous conclusions, a risk easily mitigated by changing the vantage point and interacting with subject matter experts.

To me the chalk circle method looks outdated and rooted in Asian master-to-disciple apprenticeship, no longer fit for purpose in current times.

Benefits (Devil’s advocate)

Over the years and with more experience and wisdom, I’ve somewhat softened my first impression and can see some benefits in observing while “standing in a circle”.

There are some situations in which walking around freely to observe a situation and asking people questions is simply not possible.

In such cases, having developed the ability to watch, analyze and understand is indeed a great asset. Think about my trade as a consultant during diagnostics, or a buyer during a supplier assessment.

Organized factory tours are other instances with limited possibilities to move freely or get good answers to questions. Here again, the individual ability to observe and understand is a great asset as it will yield more information than the host is willing to share.

In some cases, the knowledge about something doesn’t exist and there is no one to ask for explanations. I experienced this in a factory, facing a machine with unstable performance, in a noisy and space-limited location. Spending several hours over several sessions, taking data and observing the machine’s cycles, helped me understand its kinematics and some of its malfunctions.

What to look for?

Alright, I have shared my cons and pros, now what can I recommend to look for when observing, in a circle or not?

  • Look for the sequence. In industrial production, in logistics or in services, what you’re looking at may be in some degree a repeatable process. What is the sequence? In what order are things done or do things happen?
  • Look for harmony. Mastered motions are seamless. Controlled processes are operating smoothly.
  • Count. Count the resources involved, the physical units moved, produced or consumed. Count the steps walked, the number of times a person has to stoop, pick up the phone or turn to someone.
  • Estimate. If counting is difficult or impossible: estimate. Get a sense of duration, of the time elapsed between two events.
  • Look for consistency. Do your counts or estimates repeat themselves sequence after sequence or do you see variations?
  • Look for disturbances. What/who is disrupting the flow? How frequent and how long is it?
  • Look for the bottleneck. Is some spot an accumulation point where the flow is significantly slowed down? Why? Is it managed?
  • Search for muri, mura and muda, the 3 evildoers from a Lean point of view. Muri and mura are less often mentioned, so try to spot them first. Chances are that muri and mura, if they exist, will induce some muda.

These are a few hints. The question list could go on endlessly. But if your observation exercise ends up with answers to most of these questions, it may have been worth the time spent.

Feel free to share your comments and experience.



7 questions to help you reduce projects’ duration

On one hand, in the current competitive environment, time to market and speed of response to customers’ needs are Critical Success Factors, often more important than sales price.

On the other hand, the project templates used in companies have “grown fat” over time with an inflation of additional tasks, milestones and reviews, thus extending projects’ duration.

Read this post in French

Why templates grew “fat”

Organizations dealing repeatedly with projects soon develop Work Breakdown Structure (WBS) templates holding the most common tasks and milestones. These canvasses somewhat speed up project initiation and ensure some degree of standardization.

Over time though, the copy-pasting from one project to the next, and the addition of “improvements”, requirements and countermeasures to problems, inflate the templates and the projects. This in turn extends the project’s duration, as every additional task not only adds its allocated time to completion, but also the safety margin(s) the doer and/or project manager will add on top.

Most of the countermeasures and special reviews were meant to be temporary, only to fix specific problems. But haste and lack of rigor soon leave them in the next copy-pasted template, and over time their original purpose gets forgotten and those specific, temporary fixes end up being… standardized!

This is how loads of unnecessary tasks extend project duration without anyone noticing it.

In order to stick to delivering on the due date, and to some extent reduce the project duration, Critical Chain Project Management (CCPM) proposes some solutions. Yet those solutions mainly concentrate on a smarter use of the margins, without challenging the value and necessity of the tasks themselves.

Therefore, by merely reducing the margins and sharing the risks through a common project buffer, everything else remaining equal, the reduction of the total project duration is limited.
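A minimal sketch of that limitation, using the common CCPM heuristic of cutting padded task estimates roughly in half and protecting the chain with a shared project buffer sized at about half the safety removed; the durations and percentages are illustrative assumptions, not a prescription:

```python
# Padded estimates for a chain of tasks: each includes its doer's safety margin.
padded_days = [10, 8, 12, 6]

aggressive = [d * 0.5 for d in padded_days]          # "focused" estimates, safety stripped out
removed_safety = sum(padded_days) - sum(aggressive)
project_buffer = removed_safety * 0.5                # one shared buffer at the end of the chain

traditional_duration = sum(padded_days)              # 36 days
ccpm_duration = sum(aggressive) + project_buffer     # 18 + 9 = 27 days

print(f"Traditional plan: {traditional_duration:.0f} days")
print(f"CCPM plan       : {ccpm_duration:.0f} days")
# The chain shrinks by only ~25%: the tasks themselves were never challenged.
```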

Now, combining CCPM with a Lean-inspired approach, projects can be shortened even more.

Challenging every task

The proposed approach is to scrutinize every task and investigate its usefulness and its added value, as well as the resources allocated to achieve it.

The idea is to get rid of unnecessary or low-value-adding tasks cluttering the WBS, reduce the workload placed upon the scarcest and most expert resources, reduce the related costs and, most of all, reduce the time required to complete the whole project.

In Lean Thinking terms, these kinds of tasks are wastes of resources and time and should be eliminated. If it’s not possible to eliminate them, is it at least possible to reduce them to the bare minimum?

Here are 7 questions to help you surface these kinds of resource drainers and waste generators in your WBS.

1. Is this task really necessary? Why?

As soon as the purpose of a task is not obvious and cannot be simply demonstrated, some investigation is advised. Before rushing to the conclusion that it is useless and can be eliminated, one must verify that the outcome of this task is not required elsewhere in the project as an answer to some regulatory, standard or technical requirement.

The next question can help to answer this one.

2. What would happen if this task weren’t completed?

A really useful task should answer a need. This need can be explicitly expressed in the requirements or in a procedure, for example. It can also be implicit and impose itself naturally.

Most projects embed lots of reviews, gates and reporting points. These are resource and time drainers added by anxious project managers and customers. Yet not every project has very high stakes, nor is every project jeopardized. What makes sense for a very sensitive project is not necessarily required for EVERY project.

What would happen if the tasks and deliverables related to these reviews, gates and reporting were omitted?
If a trial shows that nothing happens, it is evidence of either:

  • the no/low value
  • the lack of rigor in project management and follow-up
  • the lost sense of reviews as a management ritual

I remember a manager who deliberately stopped reporting progress and then put on hold the projects whose project managers had not demanded any report or review several weeks later.

It ultimately led to decluttering the project portfolio of several “nice-to-have” or “to-be-done-when-we-have-time” projects and freeing valuable capacity for sellable ones.

3. Who will benefit from the outcome of the task?

In a well-structured WBS, no task should end “nowhere”. The beneficiary of a task, usually its successor, should be directly readable in the WBS. If this isn’t the case, the value of the task as well as the robustness of the WBS must be challenged.

4. Is this task adding any value?

Value-Added is something the customer is willing to pay for. When assessing the value of a task, the right question is: can the outcome of this task be sold? Is anybody ready to pay for it?

An alternative in product, process or software development exists though, which is creating new, reusable knowledge (Lean Engineering/Lean Product and Process Development). This is considered a kind of investment.

If a task adds nothing worthy to a paying customer nor new knowledge to the company, it adds no value. Whether to keep it or not sends you back to question 1.

5. Does this task really require this resource? Why?

Once the task is assessed as useful, the next question is about the allocated resource.

One good practice is to allocate the least qualified resources able to do the job to any task, in order to save the more competent and expert resources, which are scarcer and thus more precious, from the mundane tasks that can be achieved by more common and cheaper resources.

If a task requires a scarce, expert resource, the next question is: how come?

Overburdening scarce and precious resources is one major reason projects take a long time: flushing their task backlog requires project managers to level the load, thus pushing back the completion of staggered tasks.

Many project managers compete to have the best resources allocated to their projects. Success and reliability attract attention, and every project manager wants the best team in order to meet his/her challenge. Always picking the same best people ends up overburdening them. Besides, not challenging the lower performers will not help them improve.

6. Can it be done differently?

The alternate ways to consider here are both technical solutions as well as alternate resources.

  • Technically: can it be done differently so that the scarce, bottleneck resource(s) is/are less required? A simpler solution may require a less expert resource to implement it, for instance.
  • At the resource level: is it possible to delegate to a less constrained resource? Is it possible to subcontract?

These alternatives should be considered first for the sake of reducing the project’s duration, then for cost efficiency.

7. Must this task be done at that moment/stage of the project?

Some tasks have a degree of liberty with regard to when they must be fulfilled. Moving their relative place in the project structure may help limit overload and level the load.

Wrapping up

Challenging the necessity and contribution of all tasks in a project helps reveal those that are useless or of low added value. Getting rid of them shortens the project’s duration accordingly, provided those tasks are on the Critical Chain.

This reservation is a hint about where to look first: the string of tasks on the Critical Chain.

The second benefit of this approach is to reduce the workload of the scarcest, most constrained resources, thus reducing the effect of load-leveling, hence project duration.


Could Six Sigma have more harmed than helped?

I started my career in the heyday of Total Quality Management (TQM) in France, at the beginning of the 1980s, and witnessed over the following years how TQM trainings and deployments built a quality-aware culture in companies and spread into everyday life.

Over time though, other “Japanese methods” became fashionable and the hype moved to the flavor of the month, something the rich Lean toolbox had plenty to offer.

Quality was still a hot topic, but more for getting ISO 9000 certified than for delivering greater value to customers. The latter became more of a goal for Lean, seamlessly taking over the TQM legacy.

Then came Six Sigma, revamping TQM with more math inside and those irresistible colored belts. Six Sigma was not intended to merge with anything else, hence the coining of “Lean Six Sigma” or “Lean Sigma” to describe the attempts to merge the two.

Reflecting on nearly 40 years of evolution of the business philosophies, approaches or methodologies, could Six Sigma have more harmed than helped?

Reminder about Total Quality Management (TQM)

What made Total Quality Management (TQM) “Total” was the aim of getting ALL employees to participate in working toward a common goal: satisfying the customer.

Group activities (quality circles) were organized in order to get everybody’s brainpower, knowledge or understanding about quality issues and the ways and means to solve them.

A simple toolbox holding basically 7 tools was made available to everybody. Those tools were simple enough that a light training and moderate coaching would suffice for anyone to understand and use them. The required level of math was not more than basic arithmetic operations. Everyone was considered a Subject Matter Expert and was invited to participate.

The 7 tools are (minor variations can be found): cause-and-effect diagram, check sheet, control chart, histogram, Pareto chart, scatter diagram, and stratification (or flowchart).

You may check my post about these tools >here<

Once problem solving started, continuous improvement would follow. Besides improving value for customers, TQM also brought a sense of purpose to all stakeholders: the understanding of how their daily tasks contribute to flawless products or great services, turning customers into loyal ones and ensuring the future success of the organization.

When Lean came to enlarge the scope, it embedded TQM so that there was a continuum and the same people would keep on with a greater variety of problems to solve.

Six Sigma and its colored layers of expertise

Six Sigma is a different approach in which (originally*) statistical math plays a key role. With new mathematical knowledge required, the participants roughly divided into three groups:

  • Those lacking the math
  • Those having limited knowledge of it
  • Those having the knowledge or able to get it

To distinguish the levels, martial-arts-inspired colored belts were created, making the hierarchy of knowledge and related prerogatives official. From then on, working groups switched from the total participation of TQM and Lean, regardless of rank and knowledge, to Six Sigma groups divided between the few geniuses and their many servants**.

*Over time some “Six Sigma” programs softened up on data and math, promoting almost only DMAIC and 5S.

**An approximate quote from Jim Collins’ “Good to Great”.
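To give a sense of the statistical content that drew this dividing line, here is a small, purely illustrative sketch converting a defect count into DPMO and a sigma level using the conventional 1.5-sigma shift; the figures are made up.

```python
from scipy.stats import norm

# Illustrative figures: defects observed, units inspected, defect opportunities per unit.
defects, units, opportunities = 37, 5000, 4
dpmo = defects / (units * opportunities) * 1_000_000

# Conventional conversion: long-term defect rate -> short-term sigma level (1.5-sigma shift).
sigma_level = norm.ppf(1 - dpmo / 1_000_000) + 1.5

print(f"DPMO = {dpmo:.0f}, sigma level = {sigma_level:.2f}")   # ~1850 DPMO, ~4.4 sigma
```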

The Six Sigma “aristocrats” define what is to be done and the lower-ranking take over the mundane tasks of preparing data and material for the masters to do their science on. The low-ranking participants, awarded white and yellow belts, are less and less involved and empowered, eventually losing their sense of purpose.

All that made TQM and later Lean so great and acceptable to shopfloor people fades away. Interest fades as well, as only a few people are attracted by the abstract side and difficulties of statistical math.

On top of that, the Black Belts and Master Black Belts, usually challenged and rewarded on the achievements they lead, keep close control of the rollouts, so that they (purposely) become bottlenecks, a limiting factor to solving more problems.

Now it seems to take ages to get a problem solved, while it felt much quicker to those who have experienced simpler, more practical and pragmatic approaches.

Could Six Sigma have more harmed than helped?

Here we come to the question: could Six Sigma have more harmed than helped? Well, there is no single answer. Six Sigma proved great at solving tricky problems that required more hard science than the simpler methods and tools could provide.

  • Six Sigma reinvigorated quality when this discipline was turning into ensuring compliance to standards, regardless of results.
  • Six Sigma, just like Lean, has sometimes been deployed properly and sometimes been misused.

So everything is not just black or white.

Yet I assume that the bad uses outnumber the good ones, that marketing dressed a scientific method in fancy clothes and that greedy promoters sold it as a one-size-fits-all trendy cure for all troubles.

I therefore do not believe in a true overall quality improvement and, most of all, I fear that this segregated approach, with savvy experts on one side and driven doers on the other, turned many people away from developing their own problem-solving skills and from being engaged in improvements.

For the latter, yes I believe Six Sigma has more harmed than helped.

Your comments are welcome.


Problem solving: what was the last change?

This post could be a sequel to “Yeah, problem solving”, in which I used Peter Senge’s quote: “Today’s problems come from yesterday’s solutions”.

Quite often, the people we consultants meet are puzzled by a problem they can’t understand:

  • a reliable process or machine suddenly seems out of control,
  • steady performance dropped unexpectedly and with no apparent reason,
  • sudden quality issues with trusted supplies,
  • etc.

Our experience leads us to investigate the last change made, precisely because of the “wisdom” of Peter Senge’s quote: chances are that a modification (fixing a problem) led to unexpected Undesirable Effects, causing a new problem to appear.

Of course, the modification to look for is seldom the worried person’s own, which he/she would most probably remember and connect to the possible cause-and-effect relationship.

No, the modification more likely happened outside the span of control and without the knowledge of the impacted people.

A modification leading to a problem in a lengthy process can happen far away (both in process steps and in location) from the point where the problem appears, leaving people perplexed about this reliable process now out of control.

Purchasing and procurement choices are unfortunately often the unintentional culprits: buying a slightly different grade of material, changing a supplier or accepting a lower-quality batch, with the best intentions of cutting costs or ensuring timely deliveries.

When facing a puzzling problem the investigation should follow “the last modification path”.

This isn’t always easy though. The Undesirable Effects brought about by the change may be minimized or even neutralized for a while, long enough for everybody to forget the nature of the change, when it happened and what its consequences were.

That’s precisely why some industries with strong safety and regulatory constraints, like aeronautics or pharmaceuticals, have to be cautious about any modification (which needs approval after a thorough risk assessment) and capture every piece of information about virtually anything (dates, manufacturing conditions, persons in charge, certificates…), in case an investigation must find the root causes of a deviation (or worse) long after the triggering action occurred.

When the problem can no longer be neutralized by the former, forgotten fix, it looks like a new problem.

Searching for the last change is often a good guess, yet it does not always lead to the root cause. Keep in mind that some modifications correlate nicely with the appearance of the problem, but correlation isn’t causation.



Yeah, problem solving

Most people love to solve problems and feel the satisfaction of getting rid of some nasty, tricky problem. It’s an outdated but still lasting belief that management is about problem solving. Problem solving turned, in some cases, into the managers’ and engineers’ holy mission, and in some minds, the more problems a manager/engineer solves, the better manager/engineer he/she is. This kind of problem solving can be addictive, hence the Arsonist Fireman Syndrome.

On the other hand, thanks to Lean Management, enlightened managers understand it is crucial to refrain from solving problems and develop their subordinates’ ability to solve problems themselves instead.

Note that all the above is about problem solving, not problem avoidance or problem prevention. And if today’s problems come from yesterday’s solutions, as stated in Peter Senge’s “The 11 Laws of the Fifth Discipline”, in a world requiring increasingly fast decisions (read solutions), we’ll never run out of new problems to solve.

So what’s wrong with problem solving?

There are at least 2 major issues with current problem-solving practices.

1. Quick fixes

Solutions to problems are most often quick fixes made of the first “best” idea that popped up. Problem solving is rarely a robust, standardized process, systematically rolled out. In fact, formal problem-solving processes seldom exist, even though everybody claims to solve problems.

If known, simple structured approaches like PDCA are disregarded and ignored, on the pretext that the situation requires quick reaction and not “unnecessary paperwork!”

Often, the problem seems to be fixed, giving credit to the firefighters and reinforcing their belief in their “way” of handling things.

It is not really surprising that the same problem keeps showing up, as the fixes did not eradicate the problem’s root cause, and the problem itself was never really studied, hence never understood.

2. No risk assessment / risk mitigation

If formal and structured processes to tackle problems are rare, risk assessment of the solutions is even rarer. And if the rush to quick fixes leaves no time for properly analyzing the problem’s possible root causes, there is no need to mention the non-existent attempts to figure out the possible risks these quick fixes bring with them.

Chances are that ill-prepared and hastily implemented solutions generate unexpected Undesirable Effects. What fixes one problem may well cause one or several others to appear.

That’s how quick-and-dirty troubleshooting usually comes at the expense of later, longer efforts to cope with a situation that has possibly grown worse, and how Peter Senge’s quote “Today’s problems come from yesterday’s solutions” makes the most sense.

What solutions?

  • Choose a structured problem-solving approach; there are several available. Try it and, if it proves suitable for your purpose, make it your standard way of approaching a problem.
  • Make sure the implemented solutions really kill the problem by measuring over a long time horizon whether the trouble has disappeared for good. The Quality Operating System is perfect for that.
  • Explore the Logical Thinking Process, the only complex problem-solving methodology I know that includes a systematic “Negative Branch” check to avoid or mitigate Undesirable Effects as by-products of the implemented solution.


The benefits of failing fast

In a recent conversation, a friend of mine, CEO of a small consulting firm, explained to me how he energized his small company using Lean Startup principles and tools.

Especially when it comes to answering calls for tender or requests for proposals, Frederic (his name) has gotten pickier.

“My test,” he said, “is to ask when I can come and present my proposal. If the person asks to receive it by e-mail or tries to escape the presentation, chances are there is no genuine interest. I can save myself precious time on something doomed from the beginning. I won’t inflict on myself the pain of starting to answer. Fail fast, save time.”

Being still somewhat old school, educated in a system and at a time when failing was not fashionable, I realized that “failing fast” is not only about physical widgets or apps that don’t work (even if called Minimum Viable Products), or services nobody cares about except their creators, but also about the more mundane, lukewarm requests from prospects.

I recalled how many proposals I wrote myself, for which I got stupid excuses to turn them down, if any answer ever came. I could have failed fast and saved myself a lot of time!

Indeed, some prospects ask for proposals just to gather intelligence on a subject, fuel their own creativity, get a free guideline to roll out the proposed program by themselves, or simply to please the purchasing department with more than their favorite proposal because the procedure requires at least three.

In a time of harsh competition it’s sometimes hard to discard an opportunity for business, but here one has to remember that not every inquiry or call for tender is a true opportunity.

And failing fast has a real benefit: it saves time!
