OH SH—

The difference between a snafu, a shitshow, and a clusterfuck

Disaster area.
Image: AP Photo/Elaine Thompson

Let’s say the situation at work is not good. The project (or product, or re-org, or whatever) has launched, and the best you can say is that things aren’t going as planned. At all. It’s a disaster, though the best word for it is the one you drop over drinks with your team and when venting at home: it’s a clusterfuck.

Clusterfucks hold a special place in public life, one distinct from the complications, crises, and catastrophes that mar our personal and professional existences. The F-Word, former Oxford English Dictionary editor Jesse Sheidlower’s comprehensive history of the term, defines a clusterfuck as “a bungled or confused undertaking or situation.” Stanford business professor Bob Sutton goes further, describing clusterfucks as “those debacles and disasters caused by a deadly brew of illusion, impatience, and incompetence that afflicts too many decision-makers, especially those in powerful, confident, and prestigious groups.”

The term dates at least as far back as the Vietnam War, as military slang for doomed decisions resulting from the toxic combination of too many high-ranking officers and too little on-the-ground information. (The “cluster” part of the word allegedly refers to officers’ oak leaf cluster insignia.)

“I have a weird obsession with clusterfucks,” Sutton tells Quartz at Work. He and Stanford Graduate School of Business colleague Huggy Rao took on the topic directly in their 2014 book Scaling Up Excellence: Getting to More Without Settling for Less, though publishers demanded that the softer substitute “clusterfug” appear in the final text. (This was not Sutton’s choice: His other books include The No Asshole Rule and The Asshole Survival Guide.)

To appreciate what a clusterfuck is—and to understand how to avoid one—it is first helpful to clarify some of the things a clusterfuck is not:

A fuck-up. “A fuck-up is just something all of us do every day,” Sutton says. “I broke the egg I made for breakfast this morning. That was kind of a fuck-up.” Whereas clusterfucks are perfectly preventable, fuck-ups are an unavoidable feature of the human condition.

A SNAFU. While sometimes used as a synonym for minor malfunctions and hiccups, this slang military acronym—“Situation Normal, All Fucked Up”—actually refers to the functionally messy state that describes many otherwise healthy companies (and many of our personal lives). A SNAFU work environment is usually manageable; one that is FUBAR (Fucked Up Beyond All Repair, another military legacy) probably isn’t. “When my students with little experience go to work at a famous company and it isn’t quite as they dreamed, I do ask them if it is FUBAR or SNAFU, and tell them SNAFU will describe most places they work,” Sutton says.

A shitshow. No less an authority than the Oxford English Dictionary describes a shitshow as a “situation or state of affairs characterized by chaos, confusion, or incompetence.” A clusterfuck may come to possess all those characteristics, but is more properly identified by the decisions that produced it than by its outcome.

The three main contributors to clusterfucks

Sutton and Rao analyzed countless cases of scaling and expansion, both successful ones and those that ended in disaster. In reviewing the most spectacular failures, they identified three key factors that resulted in the kind of expensive, embarrassing, late-stage collapse that is the hallmark of a clusterfuck. They were:

Illusion. A clusterfuck starts with the decision maker’s belief that a goal is much easier to attain than it actually is. The expectation that two car companies with different languages and different cultures would merge flawlessly, as the architects of the doomed Daimler-Chrysler merger apparently believed? Clusterfuck. The Bush Administration’s estimate that the invasion and reconstruction of Iraq would take no more than a few months and $60 billion? A clusterfuck prelude of tragic proportions.

Impatience. A misguided idea alone does not produce a clusterfuck. The idea also needs a champion determined to shove it along, usually over the objections of more-knowledgeable underlings. Sergey Brin’s reported insistence (paywall) on introducing Google Glass to the public against its engineers’ wishes turned a potentially groundbreaking piece of technology into a stupid-looking joke.

Incompetence. When errors of information and timing meet blatantly stupid decisions by people who should know better, disaster tends to ensue. Bear Stearns wasn’t the sole cause of the global financial crisis, of course, but former CEO Jimmy Cayne’s decision to spend 10 days of the 2007 subprime mortgage meltdown playing in a bridge tournament without phone or email access contributed to the firm’s collapse—and to the worldwide disaster that followed.

All three of these failings share a common root: people in power who don’t (or won’t) acknowledge the realities of their environment, and who don’t push themselves to confront what they don’t know. Nobody likes to spoil the heady euphoria of an exciting new project by discussing the possibility of failure. The problem is, if potentially bad outcomes aren’t addressed pre-launch, they are more likely to surface afterward, when the reckoning is public and expensive.

The antidote to clusterfuckery, Sutton argues, is a willingness to confront the possibility of failure and disappointment built into every new venture, and to plan accordingly.

He cites a favored decision-making tactic of the Nobel Prize-winning economist Daniel Kahneman (who in turn credits it to psychologist Gary Klein). Before a big decision, teams should undertake what Kahneman calls a “premortem.” Split the group in two. One is assigned to imagine a future in which the project is an unmitigated success. The other is to envision its worst-case scenario. Each group then writes a detailed story of the project’s success or failure, outlining the steps and decisions that led to each outcome. Imagining failure and thinking backwards to its causes helps groups identify the strengths and weaknesses of their current plans, and adjust accordingly.

“People make better decisions when they look into the future and they imagine that they already failed, and they tell a story about what happened,” Sutton says. With better planning, it won’t be a story that has to be bleeped out.