What data for changeover monitoring and improvement?

Maximizing the exploitation of critical Capacity Constrained Resources (CCRs), the so-called bottlenecks, is crucial for maximizing revenue. Changeovers usually have a significant impact on productive capacity, reducing it with every change made on the very resources that already have too little of it.

Yet changeovers are a necessary evil, and the trend is toward more frequent, shorter production runs of different products, the so-called high mix / low volume. Consequently, changeovers must be kept as short as possible in order to avoid wasting the precious, limited productive capacity of Capacity Constrained Resources (CCRs).

Monitoring changeover durations at bottlenecks is a means to:

  • reinforce management’s attention to appropriate CCR management
  • analyze the current ways of changing over
  • improve changeovers and reduce their duration

Management’s obsession should be about maximizing Throughput of the constraints.

To learn more about this, read my post “If making money is your Goal, Throughput is your obsession”.

What data for changeover monitoring?

When starting to have a closer look at how capacity is lost during changeovers, the question is: besides direct periodic observations, what data are necessary and meaningful for such monitoring?

Before rushing into a data collecting craze, here are a few things to take into account:

In the era of big data, it is commonly accepted that one never has enough data. Yet data must be collected somewhere and possibly by someone. The pitfall here is to overburden operators with data collection at the expense of their normal tasks.

I remember a workshop manager so passionate about data analysis that he had his teams spend more time collecting data than running their business.

Chances are that your data collection will be manual, done by people on the shop floor. Keep it as simple and as short as possible.

This is a matter of respect for people and a way to ensure data capture will be done properly and consistently. The more complicated and boring the chore, the more likely people will find ways to escape it.

Take time to think about the future use of data, which will give you hints about the kind of information you need to collect.

Don’t go for collecting everything. The essential few are better than the trivial many!

Be smart: don’t ask for data that can be computed from other data. For example, the day of the week can be computed from the date, so there is no need to capture it.

Example of data (collected and computed)

  • Line or machine number
  • Date (computed)
  • Week number (computed)
  • Changeover starting date and hour
  • Changeover ending date and hour
  • Changeover duration (computed)
  • Changeover type
  • Shift (team) id.
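
To make the “collected vs. computed” split concrete, here is a minimal sketch, assuming the raw record only holds what operators actually write down (field names are illustrative, not a prescribed format):

```python
from datetime import datetime

# Raw fields as captured on the shop floor (hypothetical record).
record = {
    "line": "L03",
    "changeover_type": "heavy",
    "shift": "B",
    "start": "2016-05-17 06:40",
    "end": "2016-05-17 07:55",
}

fmt = "%Y-%m-%d %H:%M"
start = datetime.strptime(record["start"], fmt)
end = datetime.strptime(record["end"], fmt)

# Everything below is computed, not captured.
computed = {
    "date": start.date().isoformat(),
    "week_number": start.isocalendar()[1],
    "day_of_week": start.strftime("%A"),
    "duration_min": (end - start).total_seconds() / 60,
}
print(computed)
# {'date': '2016-05-17', 'week_number': 20, 'day_of_week': 'Tuesday', 'duration_min': 75.0}
```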

Explain why you need these data, what they will be used for and for how long you presumably will ask for data capture. Make yourself a note to give data collectors regular feedback, in order to keep people interested or at least informed about the use of the data.

Data relative to resources with significant excess productive capacity can be ignored for the sake of simplicity and to avoid overburdening data collectors. Yet chances are that some day, soon enough, you’ll regret not having captured those data as well. Make up your own mind about this.

Monitoring: what kind of surveys and analyses?

There are roughly two types of analyses you should be looking for: trends and correlations. Trends are evolutions over time and correlations are patterns involving several parameters.

Trends

One key trend to follow up on is changeover duration over time.

Monitoring by itself usually leads to some improvement, as nobody wants to take the blame for poor performance, i.e. excessive duration. As things frequently tend to improve spontaneously as soon as measurement is put in place, I like to say that measurement is the first improvement step.

The first measurements set the crime scene, or original benchmark if you will. Progress will be appraised by comparing actual data against the original ones, and later the reference will shift to the best sustained performance.

In order to compare meaningful data, make sure the data sets are comparable. For instance certain changeovers may require additional specific tasks and operations. You may therefore have to define categories of changeovers, like “simple”, “complex”, “light”, “heavy”, etc.

Over time the trendline must show a steady decrease of changeover durations as improvement efforts pay off. The trendline should fall quickly, then slow down and finally reach a plateau* as improvements become increasingly difficult (and costly) to achieve, until a breakthrough opens new perspectives: a new tool, simplified tightenings, another organisation…

[Chart: changeover duration trend over time]

*See my post Improving 50% is easy, improving 5% is difficult
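
As a rough sketch of such a trend follow-up, assuming the records above have been exported to a CSV file (the file and column names are assumptions, not a standard), the weekly average duration and its distance to the original benchmark can be computed like this:

```python
import pandas as pd

# Hypothetical export of the collected changeover records.
df = pd.read_csv("changeovers.csv", parse_dates=["start", "end"])
df["duration_min"] = (df["end"] - df["start"]).dt.total_seconds() / 60

# Weekly average changeover duration at the CCR: the trendline to watch.
weekly = df.set_index("start")["duration_min"].resample("W").mean()

baseline = weekly.iloc[:4].mean()                # first weeks = original benchmark
best_sustained = weekly.rolling(4).mean().min()  # best sustained performance so far

print(weekly.tail())
print(f"Last week vs original benchmark: {weekly.iloc[-1] - baseline:+.1f} min")
print(f"Best sustained 4-week average: {best_sustained:.1f} min")
```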

Consider SMED techniques to recover capacity.

Correlations

Looking for correlations is looking for some patterns. Here are some examples of what to check:

Is there a more favorable or unfavorable day of the week? If yes, understanding the cause(s) behind this good or poor performance can lead to a solution to improve everyday performance.

Does one team outperform or underperform the others? The successful team may have better practices than the lower-performing ones. Can those be shared and standardized?

For instance, if one team consistently outperforms, it could be that this team has found a way to better organize and control the changeover.

If that is the case, this good practice should be shared or even become the standard, as it has proved more efficient.

I happened to see performance data from a night shift in a pharma plant that was significantly better than the day shifts’. Fewer disturbances during the night was the alleged cause.

Be critical: an outstanding team may “cut corners” to save time. Make sure that all mandatory operations are executed. Bad habits or bad practices should be eradicated.

Conversely, poor performing teams may need to be retrained and/or need coaching.

Is one type of changeover more difficult to master? Search for causes and influencing factors. Some engineering may be required to help improve.

These are only some examples of patterns that can be checked. Take time to consider what factors can have some influence on changeover ease and speed, then check how to test them with data and how to collect these data.

Note that correlation is not causation. When finding a pattern, check in depth to validate or invalidate your assumptions!
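
A minimal way to hunt for such patterns, again assuming the same hypothetical CSV export and column names, is a handful of group-by summaries:

```python
import pandas as pd

df = pd.read_csv("changeovers.csv", parse_dates=["start", "end"])
df["duration_min"] = (df["end"] - df["start"]).dt.total_seconds() / 60
df["day_of_week"] = df["start"].dt.day_name()

# Is there a more favorable or unfavorable day of the week?
print(df.groupby("day_of_week")["duration_min"].agg(["mean", "count"]))

# Does one team outperform or underperform the others?
print(df.groupby("shift")["duration_min"].agg(["mean", "median", "count"]))

# Is one type of changeover harder to master, and for which team?
print(df.pivot_table(values="duration_min", index="shift",
                     columns="changeover_type", aggfunc="mean"))
```

Any striking difference in these tables is a lead to investigate on the shop floor, not a conclusion in itself.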

Speak with data

All this data collection and analysis is meant to allow you and your teams to speak with data, conduct experiments in a scientific way and ultimately base your decisions on facts, not on beliefs or vague intuitions.




So many wasted data

In many organizations people capture a lot of data and… just ignore them, wasting their potential value.
The latest case, at the moment I write this post, is with an aircraft MRO company.

This post echoes a previous one: Trouble with manual data capture

Every aircraft undergoing MRO requires a lot of mandatory paperwork for the sake of traceability. The required information is either directly captured in an IT system or written on paper and later input into the IT systems.

As this company wants to drastically reduce the duration of aircraft grounding for MRO and improve the reliability of its planning, the primary source of information for understanding the causes of the problems is the data logbook.

I could easily figure out what kind of analysis to do and the correlations to look for, such as adherence to planning.
Alas, when I was shown the database, my enthusiasm quickly faded.

Some of the data supposed to be entered into the system simply wasn’t. Of course, it happened to be the most interesting data for my analysis.

The work breakdown is not always consistent across the portfolio, which makes comparisons challenging.
Mechanics would not always report their work on the appropriate work order, so any correlation between work order lead time and workload would be flawed.

It didn’t seem to worry management as much as it worried me, not so much because it could compromise my analysis, but because the clients would not be charged the right amount (hours spent on an aircraft are billed).

According to the data, some aircraft departed the MRO facility before they flew in. This is an indication of the lack of rigorous tracking, as well as of the software’s lack of input plausibility checks.

And the list of flaws goes on.

A bit troublesome in a business that boasts about safety, by the way.

The pity is, as so often, that companies allocate resources to capture data and then just ignore them. It would require only a little extra energy and rigor to exploit the data and use them to monitor, drive and improve the business.

Instead, just accumulating data without exploiting them is nothing more than wasting their value.


Play big on small data


This weird title, “Play big on small data”, suggests using big data principles on small data sets. “Small” is to be considered relative to the huge amounts of data big data can manage; it does not necessarily mean only a handful.


I came across big data with former colleagues who were IT experts and had a kind of epiphany about big data with the eponymous book.

Since that reading I no longer collect, structure and analyze data the same way. I tend to be more tolerant of inaccuracies, mess and lack of data, because what I am looking for is insight and the big picture rather than certainty and accuracy.

As poorly tended datasets are the norm rather than the exception, starting an analysis with this mindset saves some stress. The challenge is not to filter out valid data for a statistically significant analysis, but to depict a truthful, “good enough” picture, suitable for decision-making.

Playing big on small data does not mean applying the technical solutions for handling huge amounts of data or computing fast on them; it simply means getting inspired by an approach favoring the understanding of the “what” rather than the “why”, in other words, favoring correlation over causation.

In many cases, a good enough understanding of the situation is just… good enough. Going down to the very last details or making sure of the accuracy would not change much, but it would take time and divert resources for the sake of unnecessary precision.

When planning a 500km journey, you don’t need to know every meter of the way; some milestones are just good enough to depict the route.

Accepting, when it is meaningful, to trade causation for correlation helps to get around the scarce and messy data usually available. Even when data are plentiful, for a given analysis there are too often few that fit the purpose and come in the right format. It is then smart to look at other data sets, even if they are in the same state, and search for patterns and correlations that can validate or invalidate the initial assumption.

The conclusion is most of the time trustworthy enough to make a decision.



Trouble with manual data capture

Asking people to fill out forms in order to monitor performance, track a phenomenon or try to gather data for problem solving, too often leads to trouble when data is ultimately collected and analysed.

The case is about manual data capture into paper forms and logbooks on production lines. A precious source of information for a consultant like me. Potentially.

Alas, as I started to capture the precious bits of information from the paper forms into a spreadsheet, I soon realized how poorly the initial data were written:

Most of the forms were not thoroughly filled out: boxes unticked, fields left blank, totals missing or wrong, dates not specified, and a lot of bad handwriting leading to possible misinterpretation, among other liberties taken.

It seems obvious that the production operators understand neither the importance of the data they are supposed to capture nor the reasons why accuracy and completeness are required.

To them it is probably a mere chore, and not understanding the future use of what they are supposed to write, they pay minimal attention to it.

It is also obvious that management is complacent about the situation and does not use the data; otherwise somebody else would have pointed out the mess before me, and hopefully acted upon it.

Well, we can’t change the past and all data lost are definitely lost. The poorly input data were all I could get, so I had to make do with what I had.

Thanks to a relatively large (I dare not write big) amount of data, the flaws do not have too much impact and the big picture remains truthful. For me what matters is the big picture, not the accuracy of each single data point. (A takeaway from my exposure to big data!)

I noticed that most of the worst-filled forms related to “special events”, when production suffered a breakdown, shortages and the like. These dots on the performance curve would anyhow have been regarded as outliers and discarded for the sake of a more significant trend.

So it was not a big deal to disregard them from the beginning.

However, the pity was that no robust and deeper analysis could be conducted on these “special events”, which were not that unusual over a six-month period.

Some incomplete data could be restored indirectly, for example by calculating a duration from the start and end times or, conversely, by restoring a missing timestamp from another timestamp and a duration. Sometimes these kinds of fixes introduced some uncertainty in the values, but again, I was not after accuracy; I was trying to depict and understand the big picture.
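
As an illustration of this kind of indirect restoration, here is a minimal sketch in pandas (the file and column names are assumptions, not the plant’s actual format):

```python
import pandas as pd

df = pd.read_csv("logbook.csv", parse_dates=["start", "end"])
df["duration_min"] = pd.to_numeric(df["duration_min"], errors="coerce")

restored = df["duration_min"].isna() | df["end"].isna()

# Restore a missing duration from the start and end timestamps...
computed_duration = (df["end"] - df["start"]).dt.total_seconds() / 60
# ...or, conversely, a missing end timestamp from start + duration.
computed_end = df["start"] + pd.to_timedelta(df["duration_min"], unit="min")

df["duration_min"] = df["duration_min"].fillna(computed_duration)
df["end"] = df["end"].fillna(computed_end)

# Keep track of the fixes: these values carry some extra uncertainty.
df["restored"] = restored
```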

In order to be fair to the personnel on the lines, I have to admit that some of the forms had a poor design. A better one could have led to less misunderstanding or confusion. This acknowledged, the data reporting was not left to anybody’s discretion, as it is mandatory by regulation.

Because, to my great surprise and disappointment, this happened in the pharma industry.




Why Big data may supersede Six Sigma


In this post, I assume that in the near future correlation will be more important than causation* for decision-making, and that decisions will have to be made on “incomplete, good enough” information rather than on solid analyses, hence big data superseding Six Sigma.

*See my post “my takeaways from Big data” on this subject

In a world with increasing uncertainty, fast changing businesses and fiercer competition, I assume speed will make the difference between competitors. The winners will be those having:

  • fast development of new offers
  • short time-to-market
  • quick reaction to unpredictable changes and orders
  • fast response to customers’ requirements and complaints
  • etc.

Frenzy will be the new normal.

I also assume that for most industries, products will be increasingly customized, fashionable (changing rapidly from one generation to the next, or constantly changing in shapes, colors, materials, etc.) and with shorter life cycles.

That means production batches are smaller and a repeat of an identical production run is unlikely.

In such an environment, decisions must be made swiftly, most often based on partial, incomplete information, with “messy” data flowing in great numbers from various sources (customer service, social media, real-time sales data, sales reps reports, automated surveys, benchmarking…).

Furthermore, decisions have to be made as close as possible to customers, or wherever the decision matters, by empowered people. There is no more time to report to a higher authority and wait for the answer; decisions must be made almost at once.

There will be fewer opportunities to step back, collect relevant data, analyze them and find out the root cause of a problem, let alone design experiments and test several possible solutions.

Decision making is going to be more and more stochastic: given the number and urgency of decisions to make, what matters is making significantly more good decisions than bad ones, the latter being inevitable.

What is coming is what big data is good at: fast handling of lots of messy bits of information and revealing existing correlations and/or patterns to help make decisions. Hence, decision-making will rely more on correlation than on causation.

Six Sigma aficionados will probably argue that no problem can be sustainably solved if the root cause is not addressed.

Agreed, but who will care about trying to eradicate a problem that may be a one-off and whose solving time will probably exceed the problem’s duration?

In a world of growing interactions and transactions, and in constant acceleration, the time to get to the root cause may not often be granted. Furthermore, even when the root cause is known, it may lie outside the decision maker’s or the company’s span of control.

Let’s take an example:

The final assembly of a widget requires several subsystems supplied by different suppliers. The production batches are small as the widgets are highly customized and have a short life cycle (about a year).

The data survey, using big data techniques, foretells a high likelihood of trouble with the next production run, because of correlations between previously experienced issues and certain combinations of supplies.

Given the short notice, relative to the lengthy lead time to get alternate supplies, and the short production run, it is more efficient to prepare to overcome or bypass the possible problems than to try to solve them, especially if the likelihood of ever assembling these very same widgets again is (extremely) low.

Issues are not certain, they are likely.

The sound decision is then to mitigate the risk by adding more tests, quality gates, screening procedures and the like, supply the market with flawless widgets, make the profit and head for the next production.

Decision is then based on probability, not on profound knowledge.

But even when the causes of issues are well known, the decision must sometimes be the same: avoidance rather than solving.

This is already the case in quieter businesses, when parts, supplies or subsystems come from remote, unreliable suppliers, with no leverage to control them.

I remember a major pump maker facing this kind of trouble with pig-iron cast parts from India. No Six Sigma technique could help make a decision or solve the problem: the problem lay beyond the span of control.




My Takeaways from Big data, the book

I got my first explanations about big data from experts who were my colleagues for a time. These passionate IT guys, surely very knowledgeable about their trade, were not always good at conveying somewhat complex concepts simply to non-specialists. Yet they did well enough to make me want to know a bit more.

I then did what I usually do: search and learn on my own. That’s how I bought “Big Data: A Revolution That Will Transform How We Live, Work and Think” by Viktor Mayer-Schönberger & Kenneth Cukier.

Without turning myself into an expert, I got further in understanding what is behind big data and gained a better appreciation of its potential and of the way it surely will “Transform How We Live, Work and Think”, as the book cover claims.

My takeaways

Coping with mass and mess

Big data as a computing technique is able to cope not only with huge amounts of data, but with data from various sources and in various formats, and to reveal order in an incredible mess that traditional approaches could not even begin to exploit.

Big data can link together comments on Facebook, Twitter, blogs, websites and companies’ databases about a product, for example, even when the data formats are highly different.

In contrast, when using traditional database software, data need to be neat and to comply with a predetermined format. It also requires discipline in the way data are input into a field, as the software would be unable to understand that a mistyped “honey moon” meant “honeymoon” and should be considered, computed, counted… as such.
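
To illustrate the kind of tolerance involved (this is just a toy sketch with Python’s standard library, not big data technology itself), a messy entry can be mapped to the closest known term instead of being rejected:

```python
import difflib

# Known, "clean" vocabulary a traditional database field would expect.
vocabulary = ["honeymoon", "wedding", "anniversary"]

def normalize(raw: str) -> str:
    """Map a messy, mistyped entry to the closest known term, if any."""
    candidate = raw.strip().lower().replace(" ", "")
    matches = difflib.get_close_matches(candidate, vocabulary, n=1, cutoff=0.8)
    return matches[0] if matches else raw

print(normalize("honey moon"))   # -> 'honeymoon', counted as such
print(normalize("Honeymoon "))   # -> 'honeymoon'
```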

Switch from causation to correlation

With big data, the obsession for the “why” (causation) will give way to the “what” (correlation) for both understanding something and making decisions.

Big data can be defined as being about what, not why

This is somewhat puzzling, as we have long been used to searching for causation. It is especially weird when using predictive analytics: the system will tell that a problem exists, but not what caused it or why it happens.

But for decision-making, knowing what is often good enough; knowing why is not always mandatory.

Correlation was known and used before big data, but now that computing power is no longer a constraint limiting us to linear correlations, more complex, non-linear correlations can be surfaced, allowing a new point of view and an even bigger picture to look at.

I like to imagine it as a huge data cube I can turn at will to look at it from any perspective.

Latent, inexhaustible value

Correlation will free the latent value of data; therefore, the more data, the better.

What does it mean?

Prior to big data, the limitations of data capture, storage and analysis tended to concentrate efforts on data useful to answer the “why”. Now it is possible to ask a huge mass of data many different questions and find patterns, giving answers to (almost any?) “what”.

The future usage of data is not known at the moment it is collected, but with the low cost of storage, this is no longer a concern. Value can be generated over and over in the future, simply by going through the mass of data with a new question, another line of research… Data retain latent value; they can be used and used again without depleting.

That is why data are considered the new ore, an ore that is not even exhausted when used but allows a kind of infinite usage. That’s why so many companies are eager to collect data, any data, as much data as possible.

Do not give up exactitude, but the devotion to it

For making decisions, “good enough” information is… good enough.

With massive data, inaccuracies increase, but have little influence on the big picture.

The metaphor of telescope vs. microscope is often used in the book: when exploring the cosmos, a big picture is good enough, even if many stars are depicted by only a few pixels.

When looking at the big picture, we don’t need the accuracy of every detail.

What the authors try to make clear is that we should not give up exactitude, only the devotion to it. There are cases where exactitude is not required and “good enough” is simply good enough.

Big versus little

Statistics have been developed to understand what little available data and/or computing power could tell. Statistics are basically extrapolating the big picture from (very) few samples. “One aim of statistics is to confirm the richest findings using the smallest amount of data”.

Computing power and data techniques are nowadays so powerful that it is no longer necessary to work on samples only; the analysis can be done on the whole population (N=all).

Summing up

I was really drawn into reading “Big data”, a well-written book for non-IT specialists. Besides giving me insight into the changes and potential of real big data, it really changed my approach to smaller data: the way I collect and analyse them, how I build my spreadsheets and how I present my findings.

My takeaways are biased, as I consider big data for “industrial”, technical data and not personal ones. The book also shares insights about the risks of the usage already made of personal data and what could come next in terms of reduction of, or threat to, privacy.



Lean in digital age: sensors and data

In the near future, technology and especially connected objects – smart things stuffed with sensors and so-called wearable devices – will supercharge Lean improvements.

One example of such a device already in use is given in a Mark Graban podcast about Hand Hygiene & Patient Safety. In this podcast (Episode #205), Mark’s guest Joe Schnur, VP Business Development at Intelligent M, explains how his wearable solution, called smart band (see video below), helps gather huge amounts of accurate data compared to a human observer with a clipboard.

You may listen to the whole podcast or skip to 13:30 and more specifically to 15:00 to hear about the wearable smart band, 21:50 about the data gathering.
http://www.leanblog.org/2014/07/podcast-205-joe-schnur-hand-hygiene-patient-safety/

A human observer has limitations as to what information he/she can catch and how accurately it can be done. Think about fast events occurring often and/or tasks that are not easy to watch because of the layout. Human observations are therefore often limited to ticks on a pre-formatted check sheet.

As human observers are costly (compared to newer technology), they are used in limited numbers, for limited periods and usually with sampling techniques.

Appropriate technology can gather many data points for a single event: temperature, motions, duration, acceleration, applied force and whatever else the embedded sensors are designed for. These devices capture everything of each event, not only samples.

The cost per data point is obviously in favor of technology, not only because of the quantity of data but also because of its quality (read: accuracy). In the near future the cost of these technologies will drop further, making automatic data collection available almost for free.

The mass of data captured allows using big data techniques, even if data scientists may smile at the “big” in this specific case. Nevertheless, with more smart objects and sensors everywhere (Internet of Things, smart factories, etc.), the flood of data will grow really big and allow process mining, correlation searches on huge sets of parameters and more.

I am convinced that in the near future, most Value Stream Maps will be generated automatically and updated in real time by such devices and data sets, with the ability to zoom in on details or zoom out for a broader view at will, and more.

The same systems will be able to pre-analyze and dynamically spot bottlenecks and sub-optimized parts of the process, and make suggestions for improvements, if not corrections, by themselves.

  • Artificial intelligence with machine learning abilities will suggest improvements based on scenarios stored in its virtually infinite memory or on its predictions about potential problems.
  • The Internet of Things (IoT) will be made of objects communicating and interacting with each other.

What is likely to come are intelligent monitoring systems for any process that build and maintain themselves, hence smart factories.

So, when Lean goes digital to that point, what will be left to humans?

This is a topic for a next post and an opportunity for you to give your opinion in a comment.



You may also be interested in my series about What jobs in the factory of the future?



Can 5S survive big data?

The 5S are meant to be the foundations of operational excellence, as no efficient work is imaginable in a messy, dirty and unsuitable-for-quality environment. This has long been proven in the “physical world” and was, until recently, transposable into the virtual world of digital information.

In short, 5S is a framework for sorting, organizing and tidying, for setting housekeeping and behavioral rules and standards, and for improving operations. This “school of discipline” and its simple techniques yield fine results in business as well as in private life.

Yet with the rise of big data, this theorem may need revision, and it may happen that the laws governing physical efficiency are no longer true on the digital side.

This is an outline of more to come on the subject

From scarcity to abundance

From the very limited capacities of the past to today’s nearly endless ones, data storage is no longer a problem, neither for capacity itself nor for costs, which decrease continually. It was once mandatory to manage scarcity by getting rid of obsolete or non-essential data and files. This is no longer necessary, maybe just a nice-to-have option!

From necessary discipline to unavoidable chaos

In early times, limited data processing and storage capacities made data management and housekeeping discipline mandatory. With current features and apps to retrieve old data and manage different versions of documents, the chore is pushed onto IT tools, freeing users from the necessity of order and tidiness.
Worse, the ever-growing flood of new data makes it impossible to spend time managing the flow. Chaos is unavoidable, but don’t worry, technology takes care of it.

Numerous, messy data is the new ore

Big data is about… big data, meaning very large sets of data of different natures. The data don’t even have to be complete or consistent; technology now knows how to cope with messy data!

More and more companies are making big money exploiting big data; that’s why data are called the new ore.

Lean-educated people have learned to think in terms of just the necessary resources; big data is the exact opposite: the more, the better. And because more future value lies in yet unknown uses of data, data will be created, collected and stored greedily, with no intention of discarding a single bit (!). This ore is endlessly usable and recyclable.

Companies that haven’t started to collect their ore, started too late or are unable to collect it have no other choice than to buy it from those who have, just as happened with raw materials in the physical world.

5S won’t get over the gap

It is interesting to discuss whether the proven merits of 5S will survive big data. Furthermore, do younger generations, the so-called digital natives, see the point of 5S? When I see most of the offices in which younger people work, I can form my own opinion.

I assume that in the short term, 5S will only apply to the physical world, while other rules will prevail on the digital side.

Watch for updates

This is an outline of more to come on the subject. Follow me on Twitter or on this blog to stay posted on updates. An e-mail or tweet to encourage me would be welcome.

Remember, I am a Frenchman, non-native English speaker. If you have suggestions to improve my writing, do not hesitate to contact me!

Chris HOHMANN


Technologies alone will not regain competitive advantage

Smart factories, high levels of automation, robots, cobots and Industry 4.0 concepts will not be enough for Western European* companies to regain competitive advantage. The reason is very simple: these technologies will be available to everyone and there is no real barrier to entry. They won’t be very expensive, and the ease of mastering them is their core claim.

Thus, everything else being equal, technologies alone won’t change the contestants’ actual competitive advantages once they have all acquired and mastered them.

Will these innovations therefore be useless? Surely not: they’ll enhance tools and processes and open new perspectives, but technologies alone won’t regain competitive advantage.

* This post is written from a French perspective, which may be valid for Western Europe and the United States as well.

What can differentiate a competitor from its peers is the attractiveness of its offers, as it did before and still does ahead of the next techno revolution. Attractive offers are based on:

  • Innovative products and services
  • High level of customization
  • High perceived value
  • Fast deliveries

These features are responses to common customers’ expectations like:

  • the fascination for novelty, originality
  • the desire to distinguish from the mass with something custom made
  • the ratio of perceived quality and value to cost
  • the instant satisfaction of desires

In other words, it is not the means – read technologies – used to please customers that determine performance, but the way of using them. The keys to a competitive edge do not relate to machinery, automation or sophisticated IT alone, but to their smarter use.

Hints for future successes, with a bit of high-tech

Analyzing the voice of the customer, soon to be greatly improved with big data.

Big data brings all kinds of heterogeneous information together, analyzes it and refines customers’ preferences better than traditional surveys could, for a simple reason: surveys are based on limited questions with limited answer options and are too often biased. Respondents keep much of their expectations and desires unspoken, implicit and thus hidden. Big data allows gathering small pieces of information from tweets, Facebook posts, online orders, blog comments, etc. and finding correlations that help refine the offering toward customers’ unspoken and maybe unconscious longings.

Innovation

Innovation is not only about responding to customers’ wishes but also about surprising them with something new, something different. Here TRIZ may help. TRIZ is one of those powerful methods and tools that didn’t really make it into the limelight so far.

TRIZ is a problem solving method based on logic and data, not intuition, which accelerates the project team’s ability to solve these problems creatively. TRIZ also provides repeatability, predictability, and reliability due to its structure and algorithmic approach. “TRIZ” is the (Russian) acronym for the “Theory of Inventive Problem Solving.” G.S. Altshuller and his colleagues in the former U.S.S.R. developed the method between 1946 and 1985. TRIZ is an international science of creativity that relies on the study of the patterns of problems and solutions, not on the spontaneous and intuitive creativity of individuals or groups. More than three million patents have been analyzed to discover the patterns that predict breakthrough solutions to problems.

source: http://www.triz-journal.com/archives/what_is_triz/

The TRIZ pioneers used a big data approach at a time when big data as a technology and tool did not exist. Now that big data is maturing, methods like TRIZ and QFD (Quality Function Deployment) could be boosted and jointly used for invention.

Speed

Speed, both in frequently launching new products/services and in delivering them fast to market, is a key success factor. Additive manufacturing (3D printing) may be a technical response, but when it comes to speed, Lean can help a lot.

Lean is not only about reducing lead time, but also about avoiding loops (e.g. rework) and unnecessary dwelling (e.g. waiting for the next process step or for an inventory queue to flush). Lean also cares about doing things right the first time, improving in-process quality and doing only what is really necessary to deliver value, thus stopping overprocessing and needless tasks. While all this reduces lead time, it also reduces costs and improves quality.

Profitability

Profitability means that none of the above should be done at the expense of the company’s profit. Profit making is essential for the company’s sustainability. What’s the use of a one-shot success?
