Author: Karl E. Weick and Kathleen M. Sutcliffe
Reading time: 19 minutes
Synopsis
Managing the Unexpected (2015) explores why some organizations handle surprises, setbacks, and crises far better than others. It shows how organizations can stop small problems from turning into major disasters by practicing mindfulness in everyday work: paying attention to small warning signs, staying aware of what is actually happening right now, and deferring to expert judgment.
What’s in it for me? Learn to see problems early, recover from difficulties, and lead when things are unclear.
Imagine working in a hospital, bank, or airline, where one bad decision can harm many people. Most of the time, nothing dramatic happens. Your normal routine can feel safe, but it can also make you stop asking hard questions, stop noticing odd details, and start assuming tomorrow will look just like today. That is how organizations slowly drift toward disaster, even while everyone believes things are fine.
The main idea here is that being reliable is something people make happen every day. It comes from how you see small issues, how you discuss them, how fast you change things, and who you trust when problems grow. Some places create ways that make small warning signs easy to see and talk about. Other places see these same signs as just distractions, until the problem is too big to ignore.
In this summary, you’ll learn how what you expect and how you understand things affect daily work. You’ll also learn how five rules for being careful help you see problems before they happen and recover from unexpected events. And you’ll see how a good company culture and respecting experts help companies do well even when the world changes.
Blink 1 – Companies deal badly with surprises because they ignore early warnings
When things go badly wrong, people often say they never saw it coming. Yet the warning signs were usually there from the start. The problem is not a lack of information; it's how quickly small anomalies get wrapped in a reassuring story, one that makes everyone feel they understand the situation and have it under control.
We often accept the first explanation that seems right. Especially if it matches what we already want to believe. Leaders like to see growth more than danger. They prefer good results over difficult questions. So, signs that something is wrong are quietly called noise, small mistakes, or bad luck. Over time, this makes people too positive about events. They clearly see possible benefits. But they don’t clearly see possible problems, or they think these problems are not important.
Big failures in money, business, and public safety often happen this way. At Washington Mutual Bank, which famously failed in 2008, fast growth and big risks were seen as signs of being strong. But worries about how good the loans were, or about the systems that checked them, were seen as too much worry. Things that seemed like single, separate problems were actually early signs of a bigger pattern. This pattern stayed hidden until it was too late.
So what do companies do to deal well with surprises? They keep asking if their idea of what is real is still correct. They keep updating what they see as a danger. And they see small unusual things as real information, not just annoyances. They don’t quickly try to explain away unclear things with a simple story. Instead, they stay with the strange feeling long enough to understand what it might be telling them.
To do this reliably, these companies need a different basic way of managing what they expect and understanding events. In the next section, you’ll see how that deeper system really works.
Blink 2 – Being careful and organized relies on a hidden system of daily tasks
Companies that handle surprises well have an unseen system. This system quietly shapes how daily work is done. It guides how people notice things, how they talk about them, and how they change what they do because of them. At its heart are four main ways of doing things. Together, these decide how the company stays connected to what is really happening.
First come expectations. People have an idea of what is normal. They know what would be a warning. And they know what they can safely ignore. These expectations save time. But they also decide which small unusual things get noticed and which ones are ignored.
When what is really happening is not what they expected, then sensemaking starts. People share information, look for patterns, and try out possible reasons. This is not just a one-time thing. It’s an ongoing effort to understand the real situation they are in.
Organizing is when those ideas turn into real changes. Jobs change. Daily tasks are updated. Working relationships are rearranged. This way, the new understanding of events shows in who does what, in what order, and with what tools.
Finally, managing brings everything together. It means choosing which problems to focus on. It also means deciding which ideas will guide action. And it means choosing how much uncertainty or risk the company is ready to accept while it learns.
These four practices become clearest when normal routines break down. Consider the B&O Railroad Museum, whose historic collection was badly damaged when heavy snow collapsed the building's roof. The staff had to suspend normal operations, rethink their assumptions about the building, change how they handled repairs and conservation, and reorganize their work to fit a changed environment.
When these tasks are done with great care, they support the system. This system is the main strength of reliable companies. In the next parts, we will look at more specific rules that help make companies reliable. There are five of them. The first rule is always thinking about what might go wrong.
Blink 3 – Reliable companies stay aware of small signs of problems
A main difference between reliable systems and weak ones is how they deal with small problems. They don’t just ignore strange events as unimportant. Instead, reliable companies see them as early signs that something bigger might be wrong. They watch carefully for the first hints that their actions are not going as planned.
When something is a bit different from what people expected, we call it an anomaly. At first, an anomaly is just a signal, something seen for a moment. As people talk about it and link it to other events, it can become a sign that a bigger problem is starting.
In daily work, it’s very tempting to ignore unusual things and call them small mistakes. This makes them seem less dangerous. And it stops people from wanting to know what is truly happening.
A way of thinking that focuses on possible problems fights against this habit. It makes people ask what they depend on. It makes them ask how it might fail them. And it makes them see ‘almost accidents’ as warnings that the system is weak. Not as proof that safety measures are working.
On an aircraft carrier, for example, everyone on the deck looks for tiny pieces of rubbish that could harm an engine. They are expected to speak up and act if they find something. This routine makes looking for problems part of normal work. It turns bad news into a chance to learn and change, instead of a reason to blame someone.
Blink 4 – Not wanting to simplify things protects against hidden problems
Simple stories feel safe. But in difficult systems, they can be dangerous. When unclear situations are put into simple names like “normal event” or “worker mistake”, important differences disappear. And early signs of trouble become easy to miss. People only focus on what matches the name. They stop looking for details that might show the situation is changing.
Much of this comes down to how we use categories. Labels help people work together, but they can also stop people from seeing what is really going on. Think about when the virus later known as West Nile virus first appeared. At first, patients with fever, confusion, and muscle weakness were given a familiar diagnosis that did not fit their actual condition. Dead birds and a cluster of cases in one area were dismissed because they did not match the chosen diagnosis. By the time these anomalies were taken seriously, precious time had been lost.
The same pattern appears in technical failures. In the space shuttle Columbia accident, foam breaking off the external tank and striking the shuttle had become an accepted, "normal" problem, part of a history that no longer raised alarm. Once that label stuck, later observations were treated as less important. But on the final flight, a larger strike damaged the left wing, and the shuttle broke apart on re-entry into Earth's atmosphere, killing the entire crew. The lesson from these cases is clear: don't rush to a simple explanation. Tolerate some confusion, and keep asking what is different before settling on a final answer.
It is easier to stay open like this when a company welcomes different ideas. Different tools, experiences, and opinions help to better understand a complex situation. Being healthily doubtful, having open discussions, and good people skills help keep different ideas alive. They stop these ideas from being shut down as problems. People learn to see names and groups as temporary. They learn to openly talk about what might not fit. And they change their ideas as new facts appear.
Blink 5 – Staying safe when under pressure needs close attention to daily work
Our third rule for reliable companies is about paying attention to what is really happening now. Not just what the plan says should happen. In difficult systems, every action changes what happens next. So people need to know how their work affects the whole system right now. This awareness is not only for workers on the front lines. Managers and support staff also work at “sharp ends”. These are points where choices about money, staff, and timetables affect what others can safely do.
You can see this kind of awareness in places like hospitals (surgery), mines, and power plants. Here, teams must do two things at once. They need to keep the work moving. And at the same time, they must protect the system from being used too much. To do this, they pay close attention to signs like small delays, strange noises, or difficult changes between tasks. These can show that things that were separate are now working together in unexpected ways. Being aware of operations means noticing these changes early. It means adjusting before they grow into bigger problems.
Keeping this kind of awareness needs people to share what they see and feel while working. Reliable teams create a shared understanding of a situation. They do this by talking about what they are doing, what they think will happen next, and where they see problems. This ongoing talk connects people in different jobs and at different levels. So, what is seen locally can update the bigger picture of the whole system. Being short on time, distractions, and only caring about getting work done make these talks harder. That is why successful groups make sure there is time and routines for people to share notes and update their shared understanding. This happens even when they are busy.
Of course, even with great awareness, surprises still happen. This brings us to our fourth rule for companies: learning how to recover and change when things go wrong.
Blink 6 – True resilience means learning and changing while a crisis is still happening
Even in the most careful systems, some surprises still get past the safety measures. And what happens next depends on how well you can recover (resilience). Here, resilience means being able to continue, to find new ways to act when under pressure. It also means accepting that things might get worse in a controlled way, instead of failing completely. Reliable companies don’t think they can stop every bad event. Instead, they expect some things to go wrong. They get ready to find out, right then and there, what needs to be done.
This preparation starts long before a crisis. Reliable teams practice working with incomplete information, learn to apply old skills in new ways, and get used to tracking how situations change over time. In the Sioux City accident, a DC-10 passenger jet lost nearly all of its flight controls after an engine failure destroyed the systems that steer the plane. The crew could only change direction by varying engine thrust. There was no procedure for this situation, so they had to invent, in flight, a way to control and land a barely flyable aircraft. They drew on long experience, kept a shared picture of what the plane was doing, and continuously tried something, watched the result, and adjusted together.
Such events show another, often difficult, part of resilience. It needs some extra time and backup parts in the system. When everyone is working too hard, or when backup plans are removed to save money, the ability to adapt becomes smaller. Places that need high reliability try to save time for training, backup workers, and extra equipment. They see these as ways to be resilient, not as waste. They are sure they can handle problems. But they also wonder if they have seen every possible way something could fail. They use this mix of trust and doubt to keep learning.
Blink 7 – Being reliable for a long time depends on expert knowledge, company culture, and daily habits of change
Our last rule for reliable companies concerns a question: when the pace quickens or conditions change, who actually decides what happens next? In the most reliable systems, decision-making authority moves to the people who best understand the situation, not automatically to the person with the highest rank. When that doesn't happen, local experts are ignored, technical warnings are downplayed, and the wrong person takes control just when the pressure is greatest.
Using expert knowledge in a reliable way needs a special way of thinking. The company needs to be seen as a group of different kinds of knowledge. When a surprise happens, the job is to connect the right people with the right experience. This needs humbleness about the work itself. People are sure of their skills. But they see the situation as bigger and more unclear than one person can handle. With experience, people who respond learn that even difficult events eventually calm down. This helps them avoid panic and quick, unplanned solutions.
All of this depends on the company’s culture. This is the shared idea of what is normal and okay. A strong culture that values paying attention, politely questioning things, and learning from mistakes can stay strong even during big problems.
Toyota is an instructive example. For years it was praised for its disciplined routines and continuous improvement. But when it later pursued rapid global growth and aggressive cost-cutting, it was slow to act on reports of sticking accelerators and brake problems, leading to massive recalls and public criticism. The same cultural strengths that once made it reliable began to hide bad news and shield leaders from it.
So, keeping up good performance is not about big change programs. It’s more about constantly and carefully updating things. They don’t wait for a disaster to force a hard check. Instead, reliable companies use softer self-checks to see how well they are: noticing small warning signs, questioning names, staying close to their work, changing when things are difficult, and letting real experts make decisions. The good result is dynamic nonevents. These are the many accidents that never happen. This is because people keep talking, keep noticing, and keep changing things together.
Final summary
The main idea from this summary of Managing the Unexpected by Karl E. Weick and Kathleen M. Sutcliffe is this: In a complex world, reliability is something people create together, every day. It’s not something built into systems once and for all. It means noticing small unusual things. It means not just using simple names. It means staying close to what is really happening. And it means being ready to change when plans don’t work. Also very important, it means letting true experts lead during a crisis. And it means creating a culture where people can speak up about problems without being afraid. When these habits are followed, many possible disasters never happen.
Okay, that’s all for this summary. We hope you enjoyed it. If you can, please take the time to leave us a rating – we always like to hear what you think. See you in the next summary.
Source: https://www.blinkist.com/en/books/managing-the-unexpected-en