Did you already SWOT yourself?

SWOT stands for Strengths, Weaknesses, Opportunities, and Threats. A SWOT analysis is often used to assess a project or a venture, or to reflect on an organization’s strengths relative to its competitors and its business environment.

A SWOT analysis can be done on oneself in order to get clarity about one’s Strengths, Weaknesses, Opportunities, and Threats before a job interview or when facing an important decision.

“SWOTing” oneself can also be useful simply to take stock of one’s situation, so to speak. Being aware of these four dimensions helps to get clarity about yourself and to make decisions with more than just gut feeling. Let’s start with Strengths and Weaknesses.

Strengths and Weaknesses are very self-centered: they are all about individual traits and how they compare to other people’s. Of course, strengths and weaknesses are relative to the circumstances, and the self-assessment should be done with a specific “use case” in mind.

Strengths

List your distinctive strengths: what you are really good at, what makes you different from colleagues and other people, what makes you stand out from the crowd and would be a real advantage over similar profiles. Be honest.

Strengths should be specific, and difficult or time-consuming for your “competitors” to acquire.

Weaknesses

Being clear about your own weaknesses sets the boundaries of what you can and cannot do. Awareness of your own limitations and weaknesses will probably prevent you from attempting something out of reach or likely to fail.

Weaknesses may be disqualifiers when applying for a job or when looking for a promotion.

It is also an indication of what to improve – if possible – and of where your competition is potentially better.

Now to the external factors. Circumstances and the social and professional environment are changing constantly, providing new Opportunities but also exposing you to Threats.

Opportunities

Circumstances and the environment at large may provide personal development or new professional opportunities. Clarity about one’s strengths and weaknesses helps in deciding whether to seize an opportunity or to prepare for it. Sometimes the gap is too big, and again, clearly knowing one’s limits helps make the right decision.

Threats

Everything is going VUCA, an acronym for Volatile, Uncertain, Complex and Ambiguous. Threats are somehow the flip side of Opportunities: what can be a real chance for one person can be disastrous for another.

Threats can come in the form of a new “competitor” or a technology that trumps your skills or makes you, as a contributor… unnecessary.

A threat can be the obsolescence of your knowledge, or the decline of some abilities over time…

Threats can come in so many forms that it is wise to question their plausibility and probability. Take into account only the most plausible and likely ones.

Once you are clear about your exposure to risks, figure out how to mitigate or bypass them.


Related: When facing a choice, get clarity with the change matrix


3-color system for Goal Trees

In this 5-minute excerpt from the Logical Thinking Process (LTP) Alumni reunion with Bill Dettmer, June 2016 in Paris, France, I explain my 3-color system for assessing the current reality with a Goal Tree.

The 3-color system is a visual management tool to assess the organization’s readiness to achieve its Goal, and it shows where to act first.
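For readers who want to see the mechanics, here is a minimal sketch in Python of how such a color roll-up could work. The statuses, the capping rule and the example entities are my own assumptions for illustration; they are not taken from the video:

```python
# Minimal sketch of a Goal Tree with a 3-color roll-up.
# ASSUMPTIONS (mine, for illustration): each entity is assessed
# GREEN (fully in place), AMBER (partially) or RED (missing),
# and an entity cannot be "better" than its worst prerequisite.

from dataclasses import dataclass, field

RED, AMBER, GREEN = 0, 1, 2  # ordered from worst to best
NAMES = {RED: "RED", AMBER: "AMBER", GREEN: "GREEN"}

@dataclass
class Entity:
    name: str
    own_status: int = GREEN                 # assessed status of this entity alone
    prerequisites: list = field(default_factory=list)

    def effective_status(self) -> int:
        """Roll-up: capped by the worst prerequisite underneath."""
        statuses = [p.effective_status() for p in self.prerequisites]
        return min([self.own_status] + statuses)

def red_entities(entity):
    """List the RED entities: where to act first."""
    if entity.effective_status() == RED:
        yield entity.name
    for p in entity.prerequisites:
        yield from red_entities(p)

# Hypothetical mini-tree: Goal, Critical Success Factors, Necessary Conditions
goal = Entity("Goal: profitable growth", prerequisites=[
    Entity("CSF: reliable deliveries", AMBER, [
        Entity("NC: capacity under control", RED),
    ]),
    Entity("CSF: competitive offer", GREEN),
])

print(NAMES[goal.effective_status()])  # RED: the Goal is not secured yet
print(list(red_entities(goal)))        # priorities for action
```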

You’ll find several articles on this topic here on my blog.



TOC-based decision for best product mix

Theory of Constraints (TOC) provides a framework to identify, exploit, set the pace by, and elevate* the constraint – or, put in simpler words: to identify the bottleneck in the process and make the best of it.

*Identify, exploit, subordinate, elevate and prevent inertia are known as the “five focusing steps” of Theory of Constraints.

In this constraint- or bottleneck-centered approach, the aim is to give this particular resource privileged treatment, as it directly controls the whole system’s Throughput, and hence the profit.

But working on the constraint’s capacity is not enough to maximize Throughput; the product mix is also very important.

Theory of Constraints therefore developed Throughput Accounting, both to cope with the issues that arise when decisions are based on traditional cost accounting and to provide a new way to make product-mix decisions.
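The heart of the Throughput Accounting decision rule can be sketched in a few lines: rank products by throughput (selling price minus totally variable costs) per minute of constraint time, then load the constraint in that order. The figures below are hypothetical, in the style of the classic P/Q exercise:

```python
# Throughput Accounting heuristic for the best product mix:
# rank products by throughput per constraint minute, then load the
# constraint in that order until its capacity is exhausted.
# All figures are hypothetical (in the style of the classic P/Q exercise).

products = [
    # (name, selling price, totally variable cost, constraint minutes per unit, weekly demand)
    ("P", 90.0, 45.0, 15.0, 100),
    ("Q", 100.0, 40.0, 30.0, 50),
]
capacity = 2400.0  # constraint minutes available per week

def throughput_per_minute(product):
    _, price, tvc, minutes, _ = product
    return (price - tvc) / minutes  # money generated per constraint minute

plan, remaining, total_throughput = [], capacity, 0.0
for name, price, tvc, minutes, demand in sorted(
        products, key=throughput_per_minute, reverse=True):
    units = min(demand, int(remaining // minutes))
    remaining -= units * minutes
    total_throughput += units * (price - tvc)
    plan.append((name, units))

print(plan)              # [('P', 100), ('Q', 30)]
print(total_throughput)  # 6300.0: P first (3.0/min vs 2.0/min for Q)
```

Note that ranking by margin per unit instead of margin per constraint minute can give a different, and worse, mix: that is precisely the trap of traditional cost accounting the video discusses.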

This video is a 49-minute course about TOC-based decisions for the best product mix, by Prof. G. Srinivasan, IIT Madras.



Play big on small data


This weird title, “Play big on small data”, suggests applying big data principles to small data sets. “Small” is to be considered relative to the huge amounts of data big data techniques can manage; it does not necessarily mean just a handful of records.


I first came across big data through former colleagues who were IT experts, and got a kind of epiphany about it when reading the eponymous book.

Since that reading, I do not collect, structure and analyze data the same way anymore. I tend to be more tolerant of inaccuracies, mess and missing data, because what I am looking for is insight and the big picture rather than certitude and accuracy.

As poorly tended datasets are the norm rather than the exception, starting an analysis with this mindset saves some stress. The challenge is not to filter out valid data for a statistically significant analysis, but to find a way to depict a truthful, “good enough” picture suitable for decision-making.

Playing big on small data does not mean applying the technical solutions built for handling huge amounts of data or for fast computation on them; it simply means getting inspired by an approach that favors understanding the “what” rather than the “why” – in other words, favoring correlation over causation.

In many cases, a good enough understanding of the situation is just… good enough. Going down to the finest details or double-checking accuracy would not change much, but would take time and divert resources for the sake of unnecessary precision.

When planning a 500km journey, you don’t need to know every meter of the road; a few milestones are good enough to depict the way.

Accepting to trade causation for correlation, when it is meaningful, helps to get around the scarce and messy data usually available. Even when data are plentiful, for a given analysis too few of them often fit the purpose and come in the right format. It is then smart to look at other data sets, even if they are in the same state, and search for patterns and correlations that can validate or invalidate the initial assumption.

The conclusion is most of the time trustworthy enough to make a decision.
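As a minimal sketch of this way of working (the data set, column names and values below are hypothetical), pandas computes pairwise correlations while simply ignoring missing values, so a messy, incomplete table can still yield a usable big picture:

```python
# Sketch: hunting for correlations in a small, messy data set instead of
# chasing exact causes. pandas' corr() discards missing values pairwise,
# so the gaps do not block the analysis. Data and column names are
# hypothetical.

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "machine_downtime_h": [3.0, 1.5, np.nan, 4.2, 2.1, 5.0, 0.8],
    "late_deliveries":    [4,   2,   3,      6,   np.nan, 7, 1],
    "overtime_h":         [10,  4,   6,      14,  5,   np.nan, 2],
})

# Pairwise (rank-based) correlation matrix despite the missing values
corr = df.corr(method="spearman")
print(corr.round(2))

# Surface the strongest associations as candidates worth acting on
pairs = corr.where(~np.eye(len(corr), dtype=bool)).abs().unstack().dropna()
print(pairs.sort_values(ascending=False).head(3))
```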


Why Big data may supersede Six Sigma


In this post, I assume that in the near future, correlation will be more important than causation* for decision-making, and that decisions will have to be made on “incomplete, good enough” information rather than on solid analyses – hence big data superseding Six Sigma.

*See my post “my takeaways from Big data” on this subject

In a world with increasing uncertainty, fast-changing businesses and fiercer competition, I assume speed will make the difference between competitors. The winners will be those having:

  • fast development of new offers
  • short time-to-market
  • quick reaction to unpredictable changes and orders
  • fast response to customers’ requirements and complaints
  • etc.

Frenzy will be the new normal.

I also assume that for most industries, products will be increasingly customized, fashionable (changing rapidly from one generation to the next, or constantly changing in shapes, colors, materials, etc.) and with shorter life cycles.

That means that production batches are smaller and the repetition of an identical production run is unlikely.

In such an environment, decisions must be made swiftly, most often based on partial, incomplete information, with “messy” data flowing in great volumes from various sources (customer service, social media, real-time sales data, sales reps’ reports, automated surveys, benchmarking…).

Furthermore, decisions have to be made as close as possible to customers, or wherever the decision matters, by empowered people. There is no more time to report to a higher authority and wait for an answer; decisions must be made almost at once.

There will be fewer opportunities to step back, collect relevant data, analyze them and find out the root cause of a problem, to say nothing of designing experiments and testing several possible solutions.

Decision-making is going to be more and more stochastic: with the number and urgency of decisions to make, what matters is making significantly more good decisions than bad ones, the latter being inevitable.

What is coming is what Big data is good at: quickly handling lots of messy bits of information and revealing existing correlations and/or patterns to help make decisions. Hence, decision-making will rely more on correlation than on causation.

Six Sigma aficionados will probably argue that no problem can be sustainably solved if the root cause is not addressed.

Agreed, but who will care about trying to eradicate a problem that may be a one-off, and whose solving time will probably exceed the problem’s duration?

In a world of growing interactions and transactions, and in constant acceleration, the time to get to the root cause may not often be granted. Furthermore, even when the root cause is known, it may lie outside the decision maker’s or the company’s span of control.

Let’s take an example:

The final assembly of a widget requires several subsystems supplied by different suppliers. The production batches are small, as the widgets are highly customized and have a short life cycle (about a year).

The data survey – using big data techniques – foretells a high likelihood of trouble with the next production run, because of correlations between previously experienced issues and certain combinations of supplies.

Given the short notice, relative to the lengthy lead time for getting alternate supplies, and the short production run, it is more efficient to prepare to overcome or bypass the possible problems than to try to solve them – especially if the likelihood of ever assembling these very same widgets again is (extremely) low.

Issues are not certain, they are likely.

The sound decision is then to mitigate the risk by adding more tests, quality gates, screening procedures and the like, supply the market with flawless widgets, make the profit and head for the next production.

Decision is then based on probability, not on profound knowledge.
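To make the example concrete, here is a hypothetical sketch of how such a risk flag could be derived: count how often each combination of supplies co-occurred with an issue in past runs, and flag the next run’s combinations above a threshold. Field names, data and threshold are all invented for illustration; no causal analysis is attempted:

```python
# Hypothetical sketch: flag risky supply combinations for the next run
# from the historical co-occurrence of issues. The decision rests on
# observed frequencies (correlation), not on root causes.

from collections import Counter
from itertools import combinations

# Past production runs: (set of supplies used, issue observed?)
history = [
    ({"pump_A", "seal_X"}, True),
    ({"pump_A", "seal_Y"}, False),
    ({"pump_B", "seal_X"}, True),
    ({"pump_B", "seal_Y"}, False),
    ({"pump_A", "seal_X"}, True),
]

pair_runs, pair_issues = Counter(), Counter()
for supplies, issue in history:
    for pair in combinations(sorted(supplies), 2):
        pair_runs[pair] += 1
        if issue:
            pair_issues[pair] += 1

def risk(pair):
    """Observed issue frequency for a combination of supplies."""
    return pair_issues[pair] / pair_runs[pair]

next_run = {"pump_A", "seal_X"}
for pair in combinations(sorted(next_run), 2):
    if pair_runs[pair] and risk(pair) >= 0.5:  # arbitrary threshold
        print("High risk:", pair, f"{risk(pair):.0%} of past runs had issues")
        # -> prepare mitigations: extra tests, quality gates, screening
```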

But even when the causes of the issues are well known, the decision must sometimes be the same: avoidance rather than solving.

This is already the case in quieter businesses, when parts, supplies or subsystems come from remote, unreliable suppliers over whom there is no grip.

I remember a major pump maker facing this kind of trouble with parts cast from pig iron, sourced from India. No Six Sigma technique could help make a decision or solve the problem: the problem lay beyond the span of control.



My Takeaways from Big data, the book

I got my first explanations about Big Data from experts who were my colleagues for a time. These passionate IT guys, surely very knowledgeable about their trade, were not always good at conveying somewhat complex concepts in a simple manner to non-specialists. Yet they did well enough to raise my interest in learning a bit more.

I then did what I usually do: search and learn on my own. That’s how I bought “Big data: A Revolution That Will Transform How We Live, Work and Think” by Viktor Mayer-Schonberger & Kenneth Cukier.

Without turning myself into an expert, I got further in understanding what is behind big data and gained a better appreciation of its potential and of the way it surely will “Transform How We Live, Work and Think”, as the book cover claims.

My takeaways

Coping with mass and mess

Big data as a computing technique is able to cope not only with huge amounts of data, but with data from various sources and in various formats, and can reveal order in an incredible mess that traditional approaches could not even start to exploit.

Big data can link together comments on Facebook, Twitter, blogs and websites with companies’ databases about a product, for example, even though the data formats are highly different.

In contrast, when using traditional database software, data need to be neat and to comply with a predetermined format. It also requires discipline when entering data into a field, as the software would be unable to understand that a mistyped “honey moon” meant “honeymoon” and should be considered, computed, counted… as such.

Switch from causation to correlation

With big data, the obsession for the “why” (causation) will give way to the “what” (correlation) for both understanding something and making decisions.

Big data can be defined as being about what, not why

This is somewhat puzzling, as we have long been used to searching for causation. It is especially weird when using predictive analytics: the system will tell that a problem exists, but not what caused it or why it happens.

But for decision-making, knowing what is often good enough; knowing why is not always mandatory.

Correlation was known and used before big data, but now that computing power is no longer a constraint, analysis is not limited to linear correlations: more complex, non-linear correlations can be surfaced, allowing a new point of view and an even bigger picture to look at.

I like to imagine it as a huge data cube I can turn at will to look at from any perspective.
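A small illustration of my own (not from the book): when y depends on x through a non-linear, non-monotonic law, the linear (Pearson) coefficient sees almost nothing, while an information-based measure reveals the dependence clearly:

```python
# y depends on x (quadratic law plus noise), but not linearly:
# Pearson's r stays near zero, while mutual information is clearly > 0.

import numpy as np
from scipy.stats import pearsonr
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 1000)
y = x**2 + rng.normal(0, 0.5, 1000)

print("Pearson r:         ", round(pearsonr(x, y)[0], 2))
print("Mutual information:", round(mutual_info_regression(x.reshape(-1, 1), y)[0], 2))
```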

Latent, inexhaustible value

Correlation will free the latent value of data; therefore, the more data, the better.

What does it mean?

Prior to big data, the limitations of data capture, storage and analysis tended to concentrate efforts on data useful to answer the “why”. Now it is possible to ask a huge mass of data many different questions and find patterns, giving answers to (almost any?) “what”.

The future usage of data is not known at the moment they are collected, but with low-cost storage, this is no longer a concern. Value can be generated over and over in the future, just by going through the mass of data with a new question, another search… Data retain latent value, to be used and used again, without depleting.

That is why big data is considered the new ore – one that is not even exhausted when used, allowing a kind of infinite usage. That’s why so many companies are eager to collect data: any data, lots of data.

Do not give up exactitude, but the devotion to it

For making decisions, “good enough” information is… good enough.

With massive data, inaccuracies increase, but have little influence on the big picture.

The metaphor of the telescope vs. the microscope is often used in the book: when exploring the cosmos, a big picture is good enough, even though many stars will be depicted by only a few pixels.

When looking at the big picture, we don’t need the accuracy of every detail.

What the authors try to make clear is that we should not give up exactitude itself, but rather our devotion to it. There are cases where exactitude is not required and “good enough” is simply good enough.

Big versus little

Statistics were developed to understand what little available data and/or computing power could tell. Statistics basically extrapolate the big picture from (very) few samples. “One aim of statistics is to confirm the richest findings using the smallest amount of data.”

Computing power and data techniques are nowadays so powerful that it is no longer necessary to work on samples only; the analysis can be done on the whole population (N=all).
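A toy illustration with synthetic data (my own, not from the book): an estimate from a small sample fluctuates around the exact value that working on the whole population gives directly:

```python
# Toy illustration: a statistic estimated from a small sample vs. the
# exact value computed over the whole population (N = all).
# Synthetic, skewed data.

import numpy as np

rng = np.random.default_rng(42)
population = rng.lognormal(mean=3.0, sigma=1.0, size=1_000_000)

sample = rng.choice(population, size=100, replace=False)

print("Mean from a 100-item sample:", round(float(sample.mean()), 2))
print("Mean over N = all:          ", round(float(population.mean()), 2))
```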

Summing up

I was really drawn into reading “Big data”, a well-written book for non-IT specialists. Besides giving me insight into the changes and potential of real big data, it really changed my approach to smaller data: the way I collect and analyse them, how I build my spreadsheets and how I present my findings.

My takeaways are biased, as I consider big data for “industrial”, technical data and not personal data. The book also shares insights into the risks of the uses already made of personal data and what could come next in terms of reduced or threatened privacy.

