Normalization Of Deviation

[Photo: Space Shuttle Challenger]

by Captain Alan W. Price, Delta Air Lines (Retired)

On the evening of the 27th of January, 1986, Bob Ebeling and four other Morton Thiokol engineers engaged in a heated discussion with their leadership team: it was too cold to launch the next day; the solid rocket booster seals might not function properly. In spite of their protestations, NASA decided to launch. That night, Bob despondently told his wife Darlene, "It's going to blow up!"

The next morning, 73 seconds after launch, the Space Shuttle Challenger exploded, taking the lives of seven astronauts. The O-ring seals failed in the low temperatures, allowing hot gas blow-by that destroyed the Challenger. How could this have happened, and who was responsible? Great questions, and central to today's discussion.

In her thought-provoking book, "The Challenger Launch Decision," Dr. Diane Vaughan coined a new phrase - Normalization of Deviation:

"Organizations establish safe best practices. One day it becomes expedient to deviate from one or more of these processes. Nothing untoward occurs. Over time, this becomes the new "normal". Other small steps away from this new normal occur. Then, a disaster happens. RCA (Root Cause Analysis) reveals this progressive movement away from safe practice."

NASA was guilty of Normalization of Deviation, the last and most egregious example of which was a decision to launch in conditions too cold to guarantee safety. Bob and his colleagues had feared this very thing. Sadly, they were correct. No one in a position to stop the launch thought these risks were too great for the rewards sought.

What does the Challenger disaster teach us about decision making in the modern world? Plenty. Let's dive right in.

Leaders are constantly challenged to maximize some preset metric while minimizing the cost of producing it. Fill in the blanks with your own figures; we all live in this risk-reward universe. Do well, and the future is bright; fail to meet the target, and darkness ensues.

Into this volatile mix we insert a complication: Is the operation safe, and is it wise? This is a very different consideration from profitability and productivity. I come from an industry where safety first is the key concept, not just in theory but also in practice. In my industry, the airline industry, safety is not only important; it is the key determinant driving the operation. In many other organizations, however, safety is a much more subtle concept.

"Safety is good business" is an airline motto. Safety has both tactical and strategic implications - on a daily basis, there may be costs associated with erring on the side of safety, but in the longer term, the costs of a fatal crash that kills hundreds of passengers and crew are far higher than the resources necessary to be "safe". Put simply, we cannot afford to be unsafe.

Safety, however, is not a one-size-fits-all idea. We each have differing views of what is or is not safe enough. Rather than try to define safety, let's all agree to operate in accordance with a standardized set of concepts:
  • First, understand and ensure every member of your team understands the concept of Normalization of Deviation. Understanding is the beginning of wisdom. Armed with that knowledge, your team is far more likely to see the implications of its actions in light of a "safe enough" process.

  • Next, develop and be guided by a defined set of metrics to detect deviations from standard practice. In 1994, the National Transportation Safety Board (NTSB) published what has become a seminal study on safety (NTSB/SS-94/01). In that report, the NTSB sought to determine the common factors leading to aircraft accidents by analyzing 37 major accidents that occurred from 1978 to 1990. Its findings are legendary in the aviation industry and have great relevance for any people-centric organization.
  • By identifying warning signs that precede an incident or accident, the NTSB created a framework for classifying and quantifying risk before it becomes a disaster - in effect, for predicting the future. These "Red Flags" fall into seven basic categories: Conflicting Inputs, Preoccupation, Not Communicating, Confusion, Violating Policy or Procedures, Failure to Meet Targets, and Not Addressing Discrepancies. Failure to monitor or challenge decisions was pervasive in most of these accidents.

    In the accidents studied, there were never fewer than four Red Flags per accident, with an average of seven! Moreover, we find very similar statistics in healthcare and other team-centered, high-risk organizations. Combinations of Red Flags predict accidents and breakdowns before they occur. (A simple sketch of how a team might log and count these flags follows this list.)

    The first order of business is to train your teams to be cognizant of, and to recognize, the Red Flags unique to your organization. Most importantly, when Red Flags occur, each and every team member must communicate their presence, thereby mobilizing the team's synergistic power to correlate data from multiple sources - and to act before an accident occurs. When seeing a Red Flag, speaking up is essential; we cannot know whether it is the first or the last Red Flag in an accident sequence.

  • Lastly, develop an audit process to ensure that safety standards are not only documented but actually implemented and faithfully practiced. This ability to know oneself is the critical last step in any safety process. If discrepancies are detected, analyze not only the "what" but the "why," to ensure that process changes correct back to the standardized operation.
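For teams that want to make the Red Flag counting concrete, here is a minimal sketch, in Python, of what a shared Red Flag log might look like. It is illustrative only: the category names mirror the NTSB list above, but the RedFlagLog class, the operation identifiers, and the escalation threshold of three distinct categories are my own assumptions, not part of the NTSB study or any airline's actual tooling.

```python
# A minimal sketch of a shared Red Flag log. Category names follow the NTSB
# list quoted above; the class, operation IDs, and the escalation threshold
# are illustrative assumptions, not any airline's actual tooling.

from dataclasses import dataclass, field
from typing import List, Set

RED_FLAG_CATEGORIES = {
    "conflicting_inputs",
    "preoccupation",
    "not_communicating",
    "confusion",
    "violating_policy_or_procedures",
    "failure_to_meet_targets",
    "not_addressing_discrepancies",
}

# Assumed threshold: escalate before reaching the historical minimum of four
# Red Flags per accident found in the NTSB study.
ALERT_THRESHOLD = 3


@dataclass
class RedFlagLog:
    """Collects Red Flag reports for one operation (a flight, a shift, a case)."""
    operation_id: str
    reports: List[dict] = field(default_factory=list)

    def report(self, category: str, reporter: str, note: str = "") -> None:
        """Record a Red Flag; any team member may call this."""
        if category not in RED_FLAG_CATEGORIES:
            raise ValueError(f"Unknown Red Flag category: {category}")
        self.reports.append({"category": category, "reporter": reporter, "note": note})

    def distinct_categories(self) -> Set[str]:
        return {r["category"] for r in self.reports}

    def should_escalate(self) -> bool:
        # Combinations of Red Flags, not any single one, predict breakdowns.
        return len(self.distinct_categories()) >= ALERT_THRESHOLD


# Usage: flags reported from multiple sources accumulate against one operation.
log = RedFlagLog(operation_id="example-flight-001")
log.report("confusion", reporter="First Officer", note="Unclear clearance readback")
log.report("not_communicating", reporter="Captain")
log.report("failure_to_meet_targets", reporter="Dispatch", note="Running far behind schedule")

if log.should_escalate():
    print(f"{log.operation_id}: {len(log.distinct_categories())} Red Flag "
          "categories present - stop and reassess before continuing.")
```

The point of the threshold is the same as the study's: no single flag proves anything, but combinations of flags demand a pause and a reassessment before the operation continues.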
Three weeks after the Challenger explosion, Bob Ebeling and fellow engineer Roger Boisjoly secretly relayed to NPR the events of the night of the 27th of January, 1986. They blamed themselves for not having done more to stop the launch. But Bob and his fellow engineers did all they could to prevent the Challenger disaster. Roger said of that night, "We were talking to the right people. We were talking to the people who had the power to stop that launch." Those people chose not to.

Now, 30 years later, Bob has spoken to NPR again, and he still holds himself accountable for not stopping the launch. In truth, Bob did his job. It was a system and process failure that led to the disaster. Normalization of Deviation had become standard practice. Ensure your organization never goes where NASA went that fateful night.

Captain Alan Price was a founder and leader in Delta's Human Factors Program (Crew Resource Management - CRM), and led "In Command" for five years before becoming Chief Pilot for Delta's Atlanta pilot base. He is a retired USAF Lt. Col. and Command Pilot, and works with airlines, hospitals, and other team-centered organizations to utilize teamwork, communication, and leadership skills to serve the passenger, the patient - the customer. He is the founder of Falcon Leadership, Inc. and can be reached on Twitter at @FalconLdrShip.