The cobot controversy – Part 1

“The cobot controversy” is the title of a short article published on the website of Hannover Messe (“Hannover Fair”, the industry exhibition).

The article can be read in English as well as in German (presumably the original version). It proposes a “balanced” view of the impact of collaborative robots (cobots) on jobs in industry.

It caught my interest because articles on these subjects, i.e. robots and the future of jobs, are most often one-sided.

  • On the one hand, promoters of the factory of the future, Industry 4.0 and robotics highlight only the alleged benefits of the new technologies.
  • On the other hand, prophets of doom predict nothing less than a mass extinction of jobs.

Published by what can be considered the Mecca of Germany’s Industrie 4.0, the showplace of the most recent and finest developments in cyber-physical systems, automation and more, it is fair (no pun intended…) to present the flip side of the coin.

Furthermore, some of the studies cited in the article are interesting, for instance the finding that “robots are replacing tasks, not jobs”. Digging deeper into this one, I read that analysts usually assume that a whole job is taken over by automation or robots when in fact only specific tasks are, mainly because the analysts remain at a macro level.

Now, can this invalidate the initial assumption that robots won’t replace humans at work?

When observing any person in their daily work, many of the tasks performed are described neither in the work instructions nor in the procedures, and many tasks are not even part of the job description.

This can have several reasons:

  • people not sticking to the work instructions and taking liberties
  • reacting to unexpected situations that require decisions and action on the spot
  • the impossibility of describing every possible situation in work instructions and procedures
  • broad guidelines as instructions, relying on human know-how to carry out the tasks
  • etc.

Defenders of human workers will argue that humans are irreplaceable when facing an unexpected situation, something likely to happen (very) frequently. They may be right, but only with regard to old automation constraints and algorithm programming.

Until relatively recently, automation required accurate positioning and low variability for automated machines or robots to operate. Programming was linear and only able to adjust to programmed variations. With all the progress in various fields, object positioning is no longer a hard constraint and systems are increasingly able to adjust to unexpected situations.

Machines, in the broadest meaning of the word, are also increasingly able to learn and adapt. Therefore, the assumption of the irreplaceable human is losing its validity as machines’ abilities improve.

When observing humans at work, it is also common to see them take deliberate liberties with the list of tasks, because of their inability to stay focused over time, because they are convinced they know better, or because they lack the self-discipline to stick to instructions.

Humans introduce many variations, and not always for good reasons; therefore, praising the vast variety of tasks humans perform must be considered with care. For the same reason, stating that “robots are replacing tasks, not jobs” based on such observations, without critically discriminating the necessity and added value of the human tasks, might be wrong.

Why? When going for automation, engineers will analyze the process and concentrate on the core activities. They may well ignore many special issues a human takes care of, but also ignore all the unnecessary or deviant activities humans add. More or less, this analysis discriminates necessary from unnecessary tasks, value-added from waste.

Comments welcome.

View Christian HOHMANN's profile on LinkedIn

Cobots: more cooperation than collaboration

Cobot is a contraction of “collaborative” and “robot”, the name and concept of a new kind of robot able to work literally hand in hand with humans, without a safety fence between them.

Cobots are hyped, and the word tends to become generic for any kind of robot working in close proximity to humans. A study from the German Fraunhofer-Institut für Arbeitswirtschaft und Organisation IAO (2016) about first experiences with lightweight robots in manual assembly* distinguishes cooperation from collaboration.

*“Leichtbauroboter in der manuellen Montage – einfach einfach anfangen. Erste Erfahrungen von Anwenderunternehmen” (“Lightweight robots in manual assembly – simply start simple. First experiences of user companies”)

This post is in great part my translation of the original study, with my personal comments.

> Read this post in French

The study summarizes the different combinations in the use of robots near and with human operators, leading the authors to propose five classes:

  1. Robotic cell, in which a robot operates on its own, separated from humans by a safety fence. In such a case there is no human-robot collaboration.
  2. Coexistence of robot and human, a case in which both are close to each other but without a safety fence, yet have no common workspace. The robot has its own dedicated space distinct from the human one.
  3. Synchronized work: an organization in which human and robot share a common workspace, but with only one active at a time. The work sequence is like a choreography between human and robot.
  4. Cooperation: the two “partners” work on their own tasks and can share a common space, but not the same product or part.
  5. Collaboration: an organization with common and simultaneous work on the same product or part. Typically the robot handles, presents and holds a part while the operator works on it.
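
The five classes above can be read as a decision table over a few criteria (fence, shared workspace, simultaneous activity, same part). As a minimal sketch, the function below is my own simplification for illustration; only the class names come from the study, the boolean criteria and the function itself are hypothetical:

```python
def classify(fenced, shared_workspace, simultaneous, same_part):
    """Classify a human-robot setup into the five classes summarized
    from the Fraunhofer IAO study (class names per the study; the
    boolean criteria are my own simplification)."""
    if fenced:
        return "robotic cell"       # 1: no human-robot collaboration
    if not shared_workspace:
        return "coexistence"        # 2: close, unfenced, separate spaces
    if not simultaneous:
        return "synchronized work"  # 3: shared space, one active at a time
    if not same_part:
        return "cooperation"        # 4: simultaneous, but own tasks/parts
    return "collaboration"          # 5: same part, at the same time

print(classify(fenced=False, shared_workspace=True,
               simultaneous=True, same_part=True))  # collaboration
```

Reading the classes this way makes clear that each step toward collaboration removes one separation between human and robot.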

Based on this classification, the study reveals that collaboration is still rare. Workers and robots work side by side on their own dedicated tasks, leading me to conclude that, for the time being, “cobots” are more cooperative than collaborative.

The main motivation for investing in this kind of more expensive robot is productivity improvement; secondary objectives are improved ergonomics (avoiding heavy lifting, for example) and testing innovative technologies.

Choosing this kind of solution also requires new planning and management tools as well as consulting. New standards and regulations are in preparation and must be managed by the companies themselves, not the system provider. All this carries additional costs.

Companies with no or only limited experience with these kinds of robots remain hesitant; therefore, the authors of the study recommend implementing stepwise, starting simple and going from human-robot coexistence to collaboration.


Why Big data may supersede Six Sigma


Chris HOHMANN – Author

In this post, I assume that in the near future correlation will be more important than causation* for decision-making, and that decisions will have to be made on “incomplete, good-enough” information rather than solid analyses, big data thus superseding Six Sigma.

*See my post “my takeaways from Big data” on this subject

In a world of increasing uncertainty, fast-changing businesses and fiercer competition, I assume speed will make the difference between competitors. The winners will be those having:

  • fast development of new offers
  • short time-to-market
  • quick reaction to unpredictable changes and orders
  • fast response to customers’ requirements and complaints
  • etc.

Frenzy will be the new normal.

I also assume that for most industries, products will be increasingly customized, fashionable (changing rapidly from one generation to the next, or constantly changing in shapes, colors, materials, etc.) and with shorter life cycles.

That means production batches are smaller and repeating an identical production run is unlikely.

In such an environment, decisions must be made swiftly, most often based on partial, incomplete information, with “messy” data flowing in great numbers from various sources (customer service, social media, real-time sales data, sales reps reports, automated surveys, benchmarking…).

Furthermore, decisions have to be made as close as possible to customers, or wherever the decision matters, by empowered people. There is no longer time to report to a higher authority and wait for an answer; decisions must be made almost at once.

There will be fewer opportunities to step back, collect relevant data, analyze them and find the root cause of a problem, let alone design experiments and test several possible solutions.

Decision-making is going to be more and more stochastic: with the number and urgency of decisions to make, what matters is making significantly more good decisions than bad ones, the latter being inevitable.

What is coming is what big data is good at: quickly handling lots of messy bits of information and revealing existing correlations and/or patterns to help make decisions. Hence, decision-making will rely more on correlation than causation.

Six Sigma aficionados will probably argue that no problem can be sustainably solved if the root cause is not addressed.

Agreed, but who will care about trying to eradicate a problem that may be a one-off, and whose solving time will probably exceed the problem’s duration?

In a world of growing interactions and transactions, in constant acceleration, the time to get to the root cause may not often be granted. Furthermore, even knowing what the root cause is, it may lie outside the decision maker’s or the company’s span of control.

Let’s take an example:

The final assembly of a widget requires several subsystems supplied by different suppliers. The production batches are small, as the widgets are highly customized and have a short life cycle (about a year).

The data survey – using big data techniques – foretells a high likelihood of trouble with the next production run, because of correlations between previously experienced issues and certain combinations of supplies.

Given the short notice, relative to the lengthy lead time for alternate supplies, and the short production run, it is more efficient to prepare to overcome or bypass the possible problems than to try to solve them, especially if the likelihood of assembling these very same widgets again is (extremely) low.

Issues are not certain, they are likely.

The sound decision is then to mitigate the risk by adding more tests, quality gates, screening procedures and the like, supply the market with flawless widgets, make the profit and head for the next production.

Decision is then based on probability, not on profound knowledge.
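
To make the widget example concrete, here is a minimal sketch, with entirely invented data, of the kind of correlation-based flagging described above: count how often each pair of suppliers co-occurred in production runs that had issues, and flag pairs whose historical issue rate crosses a threshold. The data, threshold and variable names are hypothetical, and a real big-data survey would of course be far richer:

```python
# Toy risk screen: flag supplier combinations historically correlated
# with production issues (invented history, for illustration only).
from collections import Counter
from itertools import combinations

# Each record: (set of suppliers used in the run, whether the run had issues)
history = [
    ({"A", "B", "C"}, True),
    ({"A", "B", "D"}, True),
    ({"C", "D"}, False),
    ({"A", "C"}, False),
    ({"A", "B"}, True),
]

issues, totals = Counter(), Counter()
for suppliers, had_issue in history:
    for pair in combinations(sorted(suppliers), 2):
        totals[pair] += 1
        if had_issue:
            issues[pair] += 1

# Flag pairs seen at least twice with an issue rate above 60%
risky = {p for p in totals
         if totals[p] >= 2 and issues[p] / totals[p] > 0.6}
print(risky)  # {('A', 'B')}
```

Note that the output says nothing about why the combination A+B causes trouble; it only surfaces a pattern strong enough to justify mitigation, which is exactly the correlation-over-causation point.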

But even when the causes of issues are well known, the decision must sometimes be the same: avoidance rather than solving.

This is already the case in quieter businesses, when parts, supplies or subsystems come from remote, unreliable suppliers, with no grip to control them.

I remember a major pump maker facing this kind of trouble with cast iron parts from India. No Six Sigma technique could help make a decision or solve the problem: the problem lay beyond the span of control.

If you liked this post, share it!


My Takeaways from Big data, the book

I got my first explanations about big data from experts who were my colleagues for a time. These passionate IT guys, surely very knowledgeable about their trade, were not always good at conveying somewhat complex concepts simply to non-specialists. Yet they did well enough to raise my interest in knowing a bit more.

I then did what I usually do: search and learn on my own. That’s how I bought “Big Data: A Revolution That Will Transform How We Live, Work and Think” by Viktor Mayer-Schönberger & Kenneth Cukier.

Without turning into an expert, I got further in my understanding of what lies behind big data and gained a better appreciation of its potential and the way it surely will “Transform How We Live, Work and Think”, as the book cover claims.

My takeaways

Coping with mass and mess

Big data as a computing technique is able to cope not only with huge amounts of data, but with data from various sources in various formats, and able to reveal order in an incredible mess that traditional approaches could not even begin to exploit.

Big data can link together comments on Facebook, Twitter, blogs and websites and companies’ databases about a product, for example, even if the data formats are highly different.

In contrast, when using traditional database software, data need to be neat and compliant with a predetermined format. It also requires discipline in the way data are entered into a field, as the software would be unable to understand that a mistyped “honey moon” meant “honeymoon” and should be considered, computed, counted… as such.

Switch from causation to correlation

With big data, the obsession for the “why” (causation) will give way to the “what” (correlation) for both understanding something and making decisions.

Big data can be defined as being about what, not why

This is somewhat puzzling, as we have long been used to searching for causation. It is especially weird when using predictive analytics: the system will tell that a problem exists, but not what caused it or why it happens.

But for decision-making, knowing what is often good enough, knowing why is not always mandatory.

Correlation was known and used before big data, but with big data, as computing power is no longer a constraint, analysis is not limited to linear correlations; more complex, non-linear correlations can be surfaced, allowing a new point of view and an even bigger picture to look at.
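
A tiny, self-contained sketch of why being limited to linear correlation hides patterns: a perfect non-linear relation (y = x² on a symmetric range) has a Pearson (linear) correlation of zero, even though y is entirely determined by x. The data are invented for illustration:

```python
# Pearson correlation from scratch: strong non-linear dependence can
# score zero, while a plain linear relation scores 1.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

xs = [-3, -2, -1, 0, 1, 2, 3]
ys = [x ** 2 for x in xs]            # y is fully determined by x...
print(round(pearson(xs, ys), 6))     # 0.0 ...yet linearly uncorrelated
print(round(pearson(xs, [2 * x + 1 for x in xs]), 6))  # 1.0
```

Techniques that can surface such non-linear patterns are what give the “bigger picture” mentioned above.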

I like to imagine it as a huge data cube I can handle at will, to look at from any perspective.

Latent, inexhaustible value

Correlation will free latent value of data, therefore, the more the better.

What does it mean?

Prior to big data, the limitations of data capture, storage and analysis tended to concentrate efforts on data useful to answer the “why”. Now it is possible to ask a huge mass of data many different questions and find patterns, giving answers to (almost?) any “what”.

The future usage of data is not known at the moment it is collected, but with the low cost of storage this is no longer a concern. Value can be generated over and over in the future, simply by going through the mass of data with a new question, another search… Data retain latent value and can be used again and again without depleting.

That is why big data is considered the new ore, except that it is not exhausted when used; its usage is in a way infinite. That is why so many companies are eager to collect data: any data, lots of data.

Do not give up exactitude, but the devotion to it

For making decisions, “good enough” information is… good enough.

With massive data, inaccuracies increase, but have little influence on the big picture.

The metaphor of telescope vs. microscope is often used in the book: when exploring the cosmos, a big picture is good enough, even if many stars are depicted by only a few pixels.

When looking at the big picture, we don’t need the accuracy of every detail.

What the authors try to make clear is not to give up exactitude, but to give up the devotion to it. There are cases where exactitude is not required and “good enough” is simply good enough.

Big versus little

Statistics were developed to understand what little available data and/or computing power could tell. Statistics basically extrapolate the big picture from (very) few samples. “One aim of statistics is to confirm the richest findings using the smallest amount of data”.

Computing power and data techniques are nowadays so powerful that it is no longer necessary to work on samples only; analysis can be done on the whole population (N=all).
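
The sampling-versus-N=all point can be sketched in a few lines: on a (toy, randomly generated) population, the statistic computed on all records is exact by construction, while sample-based estimates carry a sampling error that only shrinks as samples grow. The population and numbers below are invented for illustration:

```python
# N=all vs sampling on a toy population: the full-population mean is
# exact; sample estimates deviate from it, less so as n grows.
import random

random.seed(42)
population = [random.gauss(100, 15) for _ in range(100_000)]
true_mean = sum(population) / len(population)   # exact: N=all

for n in (10, 100, 10_000):
    sample = random.sample(population, n)
    estimate = sum(sample) / n
    print(n, round(abs(estimate - true_mean), 3))  # sampling error
```

With cheap computing, the trade-off that justified sampling largely disappears for this kind of question.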

Summing up

I was really drawn into reading “Big Data”, a well-written book for non-IT specialists. Besides giving me insight into the changes and potential of real big data, it really changed my approach to smaller data: the way I collect and analyse them, how I build my spreadsheets and how I present my findings.

My takeaways are biased, as I consider big data for “industrial”, technical data, not personal data. The book shares insights about the risks of the usage already made of personal data, and what could come next in terms of reduction of, or threats to, privacy.


How much non-added value can 3D printing add?



3D printing is without doubt hyped. Not a day goes by without another breakthrough being announced, stunning news about materials, or extraordinary achievements made with the help of this new technology.

I am convinced 3D printing, and additive manufacturing at large, will completely change manufacturing and the consumer experience in the near future, and I have posted several blogs about it.

One of my posts was titled “Creativity breaks loose from constraints with additive manufacturing” and another “How much non-added value additive manufacturing can take out of actual processes?”. The latter is a sweet-and-sour description of the benefits of this technology and the flip side of the coin, but in my view an encouraging perspective for future manufacturing in Western countries.

Yet I came across a proud announcement about the PancakeBot – the world’s first pancake printer! – that echoes both of the above-mentioned posts.

3D printing, as one of the additive manufacturing techniques, does indeed boost creativity, allowing, for example, pancakes to be 3D printed at will. On the other hand, such creativity bursts question the added value of the extraordinary possibilities offered by 3D printing.

  • Who needs a customized 3D printed pancake?
  • How many trials does it take to have an Eiffel Tower pancake correctly cooked and delivered in one piece, still looking like the Eiffel Tower?

(by the way, thank you for flattering my French pride with such an example).

Back to industry: the extraordinary possibilities offered by additive manufacturing should not lead to just buying more expensive toys for big boys to have some temporary fun, or because it is hyped. Management must be aware of this aspect and, before agreeing to the investment, ask: how much non-added value can 3D printing add?



Is 3D printing the ultimate postponement? Part two

In the previous post of this series, I used somewhat extreme examples (space exploration, ships amidst oceans, warfare) to illustrate the benefits of postponement with additive manufacturing, i.e. 3D printing. In this post I use more common examples of how the promises of these new techniques will disrupt existing businesses and bring new benefits to competitors and customers.

Spare parts for automotive industry, appliances, etc.

Spare parts are needed for mending cars or appliances, for example. Until now, spare parts had to be produced and kept in inventory in case someone needed a part. This happens eventually, but it is hard to guess which parts will be required, when, and in what quantities.

Therefore, spare parts production is launched according to complex and more or less scientific guessing, based on statistics. Once these parts are produced, they are sent to various locations through the proprietary network or through importers, distributors, retailers and repair stations.
Huge amounts of cash are kept frozen in inventories, scattered in many warehouses in various locations.

  • These inventories are likely to grow with each new specification change that affects a part, as the adequate replacement part must be provided
  • These inventories’ value will have to be depreciated when parts become obsolete and the probability of selling them diminishes

Storing and distributing spare parts is a business per se, but the value-added remains limited (which does not mean it is not profitable!), especially for the “players in the middle” who act more like cross-docking platforms taking their share of profits and risks.

Over time, distributors and retailers have slightly changed their business model and drifted away from their original business: storage and retail.

In the old days, it was important to be the reliable parts provider, and huge inventories were the norm.

More and more, those companies embrace a financial, more profit-driven purpose, and keeping inventories is for them a necessary evil at best. Distributors and retailers try to get delivered at short notice in order to keep inventories – that is, frozen capital and risk – low.

They push the problem upstream to manufacturers, who are required to reduce delivery lead times, which ironically most often means holding inventories in order to serve “off the shelf”. Distributors and retailers become a kind of post office, collecting orders and passing them on to manufacturers, who in some cases have to deliver to the point of use, bypassing the distributor/retailer.

I have worked in industries facing this “problem”, and the distributor/retailer channel operated this way does not seem sustainable, as manufacturers try to get rid of these “order collectors”.

Now, with the rise of additive manufacturing techniques, new opportunities appear. Distributors and retailers may use them to become manufacturers themselves. What they need are the competencies to use such equipment and to manage CAD files from OEMs’ libraries, “printing” spare parts at will: at the right moment, in the right version, without holding huge, costly and risky inventories of parts in huge warehouses with high fixed costs.

Furthermore, customizing parts locally would yield additional revenue, as customers with specific and possibly urgent needs are willing to pay a premium.
So would scanning and redesigning no-longer-supported parts for which no CAD files are available.
This kind of service is an ultimate postponement, because the manufacturing of parts is on hold until the very last moment, when the orders are confirmed or the parts paid for!

This is one example of how additive manufacturing (i.e. 3D printing) techniques can disrupt existing businesses and bring new benefits to (some) competitors and customers. With the financial barriers to entry dropping significantly, OEMs could also reconsider re-integrating this kind of activity and keep the value creation for themselves.

As this post is a prospective analysis, I would be glad to read your comments.



Is 3D printing the ultimate postponement? – Part one

Imagine the first habitable base on Mars. Your challenge is to pack the first cargo spaceship with everything necessary for the staff to face any maintenance issue until the next cargo spaceship can lift off, say three months later.

Chances are you’ll include a 3D printer and enough printer raw material, simply because it would be the most efficient way to provide many of the things needed despite tremendous logistics constraints.

Now quit outer space and consider a tanker, an aircraft carrier or a container ship amidst the ocean. In some respects, these vessels share common traits with our base on Mars:

  • storage space for spare parts, raw materials and machines for maintenance purposes is scarce
  • they are far from everything and can be supplied only after some delay
  • supplying them is not without risk (weather, enemies, etc.)
  • supplying them is not only risky but comes at (very) high cost

In these cases too, 3D printing is a good option to consider, as printing what is needed at the very moment it is needed is the optimal solution and the ultimate postponement.

What is postponement?

In manufacturing and supply chain operations, postponement means delaying the completion or packaging of a product until a signal assigns a specific customer or destination. This is useful when many variants could lead to misallocation if completion were based on forecasts.

Put more simply, postponement delays a decision until what is expected is clearly specified. The reason is that most transformation steps in a process modify the product in such a way that returning to the previous state is impossible.

Example: if you cut a piece of fabric to make a handkerchief, it cannot be turned back into a piece of fabric for a trouser leg (unless it was a huge handkerchief or tiny trousers).

Materials usually lose flexibility along the transformation process. Once transformed there is no stepping back.

Postponement is used to delay completion or manufacturing up to a differentiation point, beyond which the item loses its flexibility (e.g. pack in a white box and add the customized label later).

Because postponement and later completion are not realistic options for our vessels or space base, they must embark spare parts for all possible cases, under constraints of volume and, in some cases, weight.

The embarked mix is a set of items based on forecasts and tradeoffs about what could possibly happen and what is most likely to be needed, still carrying the frightening risk that what will really be needed will not be included in the cargo.

Printing at will

Now, if you can trade the same finite volume and mass of many different spare parts, selected through complicated statistical computation, for a 3D printer and raw printer material, the risk drops to almost nothing, as any required part (as long as the material is suitable) can be printed when required, and even customized to some unexpected specification change.

This is why NASA, the navy and some private companies consider embarking 3D printers and training staff so that the unit can be independent of its supply base for a longer period.



Lean in digital age: sensors and data

In the near future, technology and especially connected objects – smart things stuffed with sensors and so-called wearable devices – will supercharge Lean improvements.

One example of such a device, already in use, is given in a Mark Graban podcast about Hand Hygiene & Patient Safety. In this podcast (Episode #205), Mark’s guest Joe Schnur, VP of Business Development at Intelligent M, explains how his wearable solution, called a smart band (see video below), helps gather a huge amount of accurate data compared to a human observer with a clipboard.

You may listen to the whole podcast or skip to 13:30 and more specifically to 15:00 to hear about the wearable smart band, 21:50 about the data gathering.

A human observer has limitations as to what information he/she can catch and how accurately it can be done. Think about fast events occurring frequently, and/or tasks that are not easy to watch because of the layout. Human observations are therefore often limited to ticks on a pre-formatted check sheet.

As human observers are costly (compared to newer technology), they are used in limited numbers, for limited periods, and usually with sampling techniques.

Appropriate technology can gather many data points for a single event: temperature, motion, duration, acceleration, applied force and whatever else the embedded sensors are designed for. These devices capture everything about each event, not only samples.

The cost per data point is obviously in favor of technology, not only because of the quantity of data but also its quality (read: accuracy). In the near future the cost of these technologies will drop further, making automatic data collection available almost for free.

The mass of data captured allows the use of big data techniques, even if data scientists may smile at the “big” in this specific case. Nevertheless, with more smart objects and sensors everywhere (Internet of Things, smart factories, etc.), the flood of data will grow really big and allow process mining, correlation searches on huge sets of parameters, and more.
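
To make the process-mining idea tangible, here is a minimal sketch with an invented event log: raw timestamped sensor events are aggregated into per-step durations, from which the slowest step stands out. The event format, step names and figures are all hypothetical, but this is the seed of the automatically generated value stream maps discussed below:

```python
# Turn raw timestamped events into per-step average durations and
# spot the slowest step (invented event log, for illustration only).
from collections import defaultdict

# (part id, process step, start time, end time) in seconds
events = [
    ("p1", "cut", 0, 40), ("p1", "weld", 55, 130), ("p1", "paint", 140, 180),
    ("p2", "cut", 45, 88), ("p2", "weld", 130, 210), ("p2", "paint", 215, 250),
]

durations = defaultdict(list)
for part, step, start, end in events:
    durations[step].append(end - start)

avg = {step: sum(d) / len(d) for step, d in durations.items()}
slowest = max(avg, key=avg.get)
print(avg)      # {'cut': 41.5, 'weld': 77.5, 'paint': 37.5}
print(slowest)  # weld
```

Fed continuously by sensors instead of a hand-coded list, the same aggregation updates itself in real time.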

I am convinced that in the near future, most value stream maps will be generated automatically and updated in real time by such devices and data sets, with the ability to zoom in on details or zoom out for a broader view at will, and more.

The same systems will be able to pre-analyze and dynamically spot bottlenecks and sub-optimized parts of the process, and make suggestions for improvements, if not corrections, by themselves.

  • Artificial intelligence with machine learning ability will suggest improvements based on scenarios stored in their literally infinite memory or on their predictions about potential problems.
  • The Internet of Things (IoT) will be made up of objects communicating and interacting with each other.

What is likely to come are intelligent monitoring systems for any process that build and maintain themselves; hence, smart factories.

So, when Lean goes digital to that point, what will be left to humans?

This is a topic for a next post and an opportunity for you to give your opinion in a comment.


You may also be interested in my series about What jobs in the factory of the future?


How lean can help shaping the future – compact factories

The factory of the future has to comply with several constraints, among them energy efficiency and respect for the environment, the latter meaning nature as well as the neighborhood.

Factories of the future will probably be close to housing areas, not only because space is scarce in some areas, but because commuting is a major source of waste and annoyance.

A factory’s leanness is directly and positively correlated to its compactness. The more compact the factory, the shorter the travel distances within it. Distance induces transportation and motion wastes; the shorter the distance, the less of these wastes.

The shorter the distance, the shorter the lead time, hopefully.

Compact factories do not allow large inventories. I remember Japanese factories and their mini trucks: if you cannot store, deliver more often! (I am not sure about the energy efficiency and environmental friendliness of those truck milk runs, though.)

Factories of the future will be built with a flow logic, unlike their centuries-old ancestors in which flows are just nightmares. Current greenfield sites easily supersede brownfield and older facilities on this point.

Best would be scalable units that can be plugged into one another, like plug-and-play shelters with utility ducts and cables pre-installed and pre-wired. Among these, Smart Industry or Industrie 4.0 (Europe) standard industrial buses for connecting anything from the Internet of Things (IoT).

Such shelters could be specialized, for example holding 3D printers, laser cutters or 3D scanners ready to use. They could be rented on demand, installed, connected and used for some period, then reused somewhere else. A kind of rent-a-factory!

Compact factories (in volume) need less heating, air conditioning and artificial light. Industrial compressed air – if still in use – or other gases need less piping and less volume in the pipes of a compact factory, fewer compressor units and less power.
Air leaks in bigger facilities often require an additional compressor for compensation.
All good points for the sake of energy efficiency.

Most of the principles listed above are lessons learned from lean experiences with existing factories. In such old-style factories, improvements are often limited by physical building-construction constraints. Taking these lessons into account is one way lean can help shape the future.


How much non-added value additive manufacturing can take out of actual processes?

It is a well-known fact: the sequence of all activities required to bring a product to a customer is called a value stream, and despite the name, value flows neither smoothly nor swiftly along streamlined processes. Value streams are cluttered with non-value-added processes, tasks and steps: so-called wastes.

Traditional manufacturing processes aren’t very efficient, especially when several different techniques are required, e.g. cutting, lathing, milling, drilling, welding, deburring, assembling, painting, etc.

All these machines require energy and floor space. The more complex the process, the more energy and space is required.

This remains true even if the process is partly subcontracted, which adds more transportation and management costs, maybe additional quality controls.

In such processes there are many hand-offs and transports between machines and workstations, and the different operations require different skills, thus a staff of qualified workers.

Production is launched in batches in order to achieve some economies of scale, but with carrying costs and all the trouble related to WIP and inventories.

Of course, lead time depends on the number of operations and the process’s efficiency. Measured as a time ratio, the ratio of value-added time to total lead time is often around 2% (poor efficiency) and around 10% (?) at best.
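
As a back-of-the-envelope sketch of that ratio, with entirely invented figures (30 minutes of actual transformation within a one-day total lead time), the arithmetic looks like this:

```python
# Value-added time ratio: touch time that transforms the product
# versus total lead time including queues, transport and storage.
# The figures are invented, for illustration only.
value_added_min = 30       # minutes of actual transformation
lead_time_min = 24 * 60    # one day of total lead time, in minutes

va_ratio = value_added_min / lead_time_min
print(f"{va_ratio:.1%}")   # 2.1%
```

Even a modest lead time of one day puts the ratio in the "poor efficiency" range cited above; with lead times of weeks, it sinks well below 1%.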

Customers pay for all this as until recently there was no alternative. Yet a tremendous change will affect some industries / businesses with additive manufacturing.

With these new techniques, when relevant and possible, the part or product is created in a single process by adding (“3D printing”) material one thin layer after another.

So how much non-added value additive manufacturing can take out of actual processes?

Well, considering the examples given above, I’d say a lot of handling, storing, energy, capital frozen in inventories and WIP, manpower costs, a large share of overhead, capital for different machines, a lot of floor space and the related costs (heating, cooling, light, locker rooms and other “social” rooms).

For the industries and businesses that will be threatened by the rise of these new manufacturing techniques, the disruption can be tsunami-like. Think of all the barriers to entry suddenly disappearing for new challengers and the irony of established companies, if caught unprepared, being suddenly locked-out from their own markets!

Some companies may not be able to switch quickly from traditional to additive manufacturing. It will probably take them time to acquire the new know-how, find a suitable business model and get rid of assets that have become a burden: machines, buildings and… some of the workforce. If additive manufacturing techniques supersede traditional ones, companies that cannot manage the turnaround will be pushed out of their markets.

For customers it should be good news: cost and lead time should drop significantly while customization makes a giant leap.

Sad for those who will lose.



You may also be interested in reading more posts about the factory of the future, like How disruptive 3D printing can be or Will 3D printing revitalize strategic analysis?