Ethics and Stupidity
Why do people do bad things? It’s an ancient question. Certainly, some people do bad things simply because they are bad people. Psychopaths and sociopaths exist, though thankfully they are very few. Whether those few should be classified as “evil,” or as “mentally ill,” or both, is not clear to me. Either way, they certainly have the capacity to do evil. But sometimes, surely — maybe quite often — people do bad things stupidly, rather than out of evil intent. Sometimes, as I’ve blogged before, people do bad things because they allow themselves to use invalid excuses. It’s likely that some people know (in their heart of hearts) that they’re using lame excuses. But probably some people sincerely believe those excuses, and simply don’t understand that their reasoning is flawed.
“Hanlon’s Razor” is the name for an adage attributed to one Robert J. Hanlon. It says the following:
Never attribute to malice that which is adequately explained by stupidity.
It’s a good rule of thumb, not least because it is so often true that bad outcomes owe more to poor decision-making than they do to evil intent.
Of course, if what we’re really interested in is why bad things happen, attributing them to stupidity rather than malice just pushes the question down one level. If so many people act stupidly, why?
There are at least 3 kinds of situations in which dumb things happen:
- Some dumb moves are made by people who, well, are not that bright. The truth is that people have different levels of ability. We don’t all have equally good judgment, and we’re not all equally good at foreseeing the consequences of our actions. In a corporate context, good hiring practices are supposed to weed out the untalented. But talent pools are always limited. And remember: screwups can in principle occur anywhere within a corporate hierarchy, so there’s no position so unimportant that a company can afford to fill it poorly.
- Some dumb moves are made by people — maybe even smart people — who lack the relevant skills. In some cases, that may mean they lack the relevant technical skills. If you’re not an accountant, for example, you simply may not understand the consequences of certain kinds of bookkeeping decisions. But people can also lack the skills to assess, for example, the quality of their own arguments and thought processes. I teach a course on Critical Thinking, and believe me, people are not all equally good at spotting fallacious arguments or flawed patterns of thought. But it’s a skill-set that can be taught, and learned.
- Some “dumb” decisions get made as a result of one or another of a number of well-studied cognitive biases. Those biases — the subject of an enormous body of psychological literature — go by names like “anchoring,” “confirmation bias,” and “the framing effect.” (Confirmation bias, for example, is our tendency to accept new evidence when it confirms what we already believe, and to reject new data that challenges our beliefs. It’s dangerous, and we all do it.) Basically, cognitive biases are persistent, and generally faulty, tendencies in the way humans think: patterns of error to which our thinking is pretty consistently subject. Alarmingly, these biases afflict smart people too, including people with the kind of technical training that you might hope would help them avoid such errors.
(For a bit more on why individuals do dumb things, see this Wired piece on Why Do Smart People Do Stupid Things?)
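To make one of those biases concrete, here is a toy simulation of confirmation bias. To be clear, this is my own sketch, not a model from the psychological literature: both agents below start with the same wrong belief, and the biased one simply discounts evidence that disagrees with what it already believes.

```python
import random

random.seed(42)

def run(trials=10_000, prior=2.0, discount=0.9):
    """Both agents start out believing the quantity is `prior`,
    when its true value is 0. The unbiased agent weights every
    observation equally (a running mean); the biased agent gives
    only (1 - discount) weight to observations that land far from
    its current belief, one crude way to model confirmation bias."""
    fair = prior
    biased = prior
    for n in range(1, trials + 1):
        x = random.gauss(0.0, 1.0)   # evidence centred on the truth (0)
        fair += (x - fair) / n       # standard running-mean update
        w = 1.0 if abs(x - biased) < 1.0 else (1.0 - discount)
        biased += w * (x - biased) / n
    return fair, biased

fair, biased = run()
print(f"unbiased agent's final estimate: {fair:+.3f}")   # lands close to 0
print(f"biased agent's final estimate:   {biased:+.3f}")  # still pulled toward 2
```

Run it and the unbiased agent lands very close to the truth, while the biased agent is still visibly dragged toward its starting belief. Discounting unwelcome evidence by 90% is crude, but the qualitative pattern is the point.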
So, there are lots of reasons why people — even smart people — end up doing dumb things. And sometimes those dumb things will have evil (or just bad) consequences. It’s worth understanding the difference between bad things that happen because someone did something bad, and bad things that happen because someone did something dumb, though in some cases the line will be pretty fuzzy.
And I suspect Hanlon’s Razor holds true of organizations just as it does for individuals, and maybe more so. So really, we need to distinguish between why individuals act stupidly, and why organizations do. That’s a topic for another day.
Deadly Crashes, “Agency Theory” & the Challenges of Management
Sometimes, for a corporation, “doing the right thing” requires excellent execution of millions of tasks by thousands of employees. It thus requires not just good intentions, but good management skills, too.
For an example, consider the story of the crash of a Concorde supersonic jet a decade ago. The conditions leading up to the crash were complex, but one factor (according to the court) was negligence on the part of an aircraft mechanic. Whether (or to what extent) that mechanic’s employer is responsible for that negligence, and hence at least partly responsible for the crash, is a difficult matter.
Here’s the story by Saskya Vandoorne, for CNN: Continental Airlines and mechanic guilty in deadly Concorde crash
The fiery crash that brought down a Concorde supersonic jet in 2000, killing 113 people, was caused partially by the criminal negligence of Continental Airlines and a mechanic who works for the company, a French court ruled Monday.
Continental Airlines was fined 202,000 euros ($268,400) and ordered to pay 1 million euros to Air France, which operated the doomed flight.
Mechanic John Taylor received a fine of 2,000 euros ($2,656) and a 15-month suspended prison sentence for involuntary manslaughter….
I don’t know the details of this story well enough to have any sense of whether the mechanic in this case really did act negligently. But what intrigues me here is the issue of corporate culpability. Note the difficulty faced by airline executives who (for the sake of argument) want desperately to achieve 100% efficiency and never, ever to risk anyone’s life. In order to achieve those goals, executives have to organize and motivate hundreds or perhaps thousands of employees. They need to design and administer a chain of command and a set of working conditions (including a system of pay) that is as likely as possible to result in all those employees diligently doing their very best, all of the time. That challenge is the subject of an entire body of political & economic theory known as “agency theory.”
Agency theory and the various mechanisms available to motivate employees in the right direction are things that every well-trained business student knows about, because those are central challenges of managing any corporation, or even any small team. What is recognized too seldom, I think, is just how central a role agency problems play in assessing and responding to ethical challenges in particular.
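For the curious, the core of the agency problem can be captured in a few lines of arithmetic. What follows is a toy sketch with invented numbers, not real agency theory (which worries about unobservable effort, risk, and noisy output): a self-interested employee picks the effort level that maximizes her own payoff, so a flat wage invites shirking, while pay tied to effort does not.

```python
# A toy principal-agent model; every number here is invented.
# The agent picks the effort level that maximizes her OWN payoff
# (wage minus the personal cost of effort). The principal cares
# about output minus wages, but only controls the wage contract.

EFFORTS = [0.0, 0.5, 1.0]             # shirk / coast / be diligent

def output(effort):
    return 100 * effort               # value the firm receives

def effort_cost(effort):
    return 30 * effort ** 2           # the agent's disutility of working hard

def agent_choice(wage_fn):
    """The effort a self-interested agent actually picks under a contract."""
    return max(EFFORTS, key=lambda e: wage_fn(e) - effort_cost(e))

flat_wage = lambda e: 40              # same pay no matter what
bonus_pay = lambda e: 10 + 50 * e     # pay tied to (here, observable) effort

for name, wage_fn in (("flat wage", flat_wage), ("bonus pay", bonus_pay)):
    e = agent_choice(wage_fn)
    print(f"{name}: agent chooses effort {e}, firm nets {output(e) - wage_fn(e):.0f}")
```

The design problem real managers face is much harder, since effort usually can't be observed directly and must be inferred from noisy outcomes. But even the toy version shows why good intentions at the top don't automatically translate into diligence below.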
Ethics and Economics (And Coffee Too)
A bit of economics can go a long way in helping us understand a range of issues in business ethics. I’m not an economist myself, but I’ve read a fair bit of economics here & there. And I want to read more. In order to arrive at sound ethical conclusions, you need more than just ethical beliefs: you need some understanding of how the world works. For many issues in business ethics, economics provides relevant facts.
For example, consider ethical issues related to price. Prices are clearly important to all of us: the price of a thing tells us how much we would have to pay to get it. But economists recognize that prices play two other very important social roles, roles that are important to the way the economy as a whole operates.
First, a price conveys information. When something is expensive, that tends to convey the fact that it is scarce — scarce enough that buyers are willing to pay a lot for it, and are perhaps even competing with each other and hence bidding up the price. Likewise, when something is cheap, that generally conveys the fact that it is plentiful. (Note that scarcity can be either natural, a straightforward matter of the amount of a thing in existence, or artificial, as when some person or company gains monopoly control over the supply of a thing.)
Second, a price provides motivation. People are generally (though unevenly) motivated by money, and by money-making and money-saving opportunities. (If you really don’t care about money, you should send me all of yours. Thanks.) Among those who want to buy a good, high prices tend to lower demand, and low prices tend to increase it. Price also affects suppliers. The fact that the price for a given good or service is high is going to tend to motivate people to want to get into that line of business. A low price is going to tend to deter people from making that their line of work.
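Both roles can be seen at once in a toy linear market. The curves and numbers below are invented purely for illustration:

```python
# A toy linear market; the curves and numbers are invented for illustration.
def demand(p):
    return 100 - 10 * p    # buyers want less as the price rises

def supply(p):
    return 20 * p          # producers offer more as the price rises

# The clearing price equates the two: 100 - 10p = 20p, so p = 10/3.
p_star = 100 / 30
print(f"clearing price {p_star:.2f}, quantity traded {demand(p_star):.1f}")

# Information role: if the good becomes scarcer (supply capacity halves),
# the clearing price rises, broadcasting the new scarcity to everyone.
# Motivation role: that higher price is also the reward awaiting anyone
# who can bring more supply to market.
p_scarce = 100 / 20        # now 100 - 10p = 10p, so p = 5
print(f"after the supply shock: price {p_scarce:.2f}, quantity {demand(p_scarce):.1f}")
```

The jump in the clearing price after the supply shock is both roles working together: it informs everyone that the good has become scarcer, and it rewards anyone who can respond by supplying more.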
Now, how does that understanding of the social role of prices bear on a real-life issue in business ethics? Here’s a simple example of the social function of prices at work, and of why economics matters for ethics. It’s an example I learned from The Undercover Economist, by the economist Tim Harford.
Consider coffee. Coffee is a hugely important commodity, often said to be second only to oil on the world market. Most people know they now have the option of buying ‘fair trade’ coffee, the aim of which is to make sure that the people who grow coffee get a fair deal for what they produce. (October is “Fair Trade Month,” by the way.)
Harford’s argument is this. Coffee farmers are poor, and will generally remain poor, because the thing they produce isn’t scarce. Coffee is relatively easy to grow, and can be grown in relatively many (hot) places. Buying fair trade coffee (at a premium price) means paying coffee farmers more. Now, recall what I said above about the role of prices in motivating people. Paying more for coffee is likely to draw more growers into the business. And drawing more growers into the business will increase the supply of coffee. And if you increase the supply of coffee, you inevitably depress its market price — and along with it the wages of those who labour on coffee plantations. So it’s hard to make coffee growers alone better off, until workers in other industries (like the garment industry) are well-enough off that they can’t be attracted into the coffee industry by (for example) fair-trade-driven higher wages. According to Harford (p. 229):
High coffee prices will always collapse, until workers in sweatshops become well-paid blue collar workers in skilled manufacturing jobs, who don’t find the idea of being even a prosperous coffee farmer attractive.
That makes it awfully hard, if not impossible, to boost net wages in the coffee industry, in the long run. Now, that by itself is nothing like a conclusive argument against fair trade coffee. But a sound understanding of the economic role of prices does give reason to pause before we accept the notion that we can make people better off simply by voluntarily paying more for a non-scarce commodity. (I’ve blogged before about other problems with the fair trade notion. See: What’s so Fair About Fairtrade?)
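For readers who like to see the mechanism run, Harford's dynamic is easy to sketch as a toy iteration. This is my own illustration, not Harford's model, and every parameter in it is invented:

```python
# A toy version of Harford's entry dynamic. My own illustration, not
# Harford's model; every parameter here is invented.
def market_price(supply):
    """Inverse demand: more coffee on the market means a lower price."""
    return 10.0 / supply

OUTSIDE_OPTION = 5.0   # what a grower could earn in, say, a garment job
PREMIUM = 1.0          # voluntary fair-trade top-up per unit sold

supply = 1.0
for year in range(10):
    income = market_price(supply) + PREMIUM
    print(f"year {year}: market price {market_price(supply):.2f}, "
          f"grower income {income:.2f}")
    # Price as motivation: income above the outside option draws new
    # growers in, expanding supply and pushing the market price down.
    supply *= 1 + 0.1 * max(0.0, income - OUTSIDE_OPTION)
```

In this toy world, grower income converges to the outside option whether or not the premium is paid; the premium just ends up financing extra supply and a lower market price, which is the gist of Harford's claim.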
As I noted above, I’m not an economist — so if someone reading this can help by correcting anything I’ve written here, or by adding any further detail, I’d be grateful.
----
Here are a few books about economics that I recommend (not all equally good, and I recommend them for different reasons). All of them are aimed at non-economists, and 2 of the 4 are even written by non-economists.
- Economics Without Illusions: Debunking the Myths of Modern Capitalism, by Joseph Heath
- The Undercover Economist, by Tim Harford
- The Rational Optimist, by Matt Ridley
- Predictably Irrational: The Hidden Forces That Shape Our Decisions, by Dan Ariely
Ethics, BP, & Decision-Making Under Pressure
Over the last couple of months, criticism of BP has become an international pastime. It’s hard not to get the impression that most members of the public believe that senior managers at BP (and quite possibly everyone employed at BP) are bungling fools. And probably lazy too.
But of course, that’s patently absurd. And maybe nobody actually believes it. We all know that the relevant people at BP are smart and highly-trained. They wouldn’t have the jobs they have if they weren’t. True, no one was very happy with the amount of time it took to get the oil well capped. And almost certainly mistakes were made. But the capping of the well was a feat of enormous technical difficulty and complexity, carried out under intense scrutiny. Few of us, if we are honest with ourselves, can imagine performing well under those circumstances.
Here’s a story that speaks to the difficulty of those circumstances, by Clifford Krauss, Henry Fountain and John M. Broder, writing for the NYT: Behind Scenes of Gulf Oil Spill, Acrimony and Stress. Here’s just a sample, though the whole article is well worth reading:
Whether the four-month effort to kill it was a remarkable feat of engineering performed under near-impossible circumstances or a stumbling exercise in trial and error that took longer than it should have will be debated for some time.
But interviews with BP engineers and technicians, contractors and Obama administration officials who, with the eyes of the world upon them, worked to stop the flow of oil, suggest that the process was also far more stressful, hair-raising and acrimonious than the public was aware of….
So, after reading the NYT piece, ask yourself these questions:
1) If, in the middle of the well-capping operation, you (yes, you) had been invited to stop playing armchair quarterback and join the team working on a solution, would you have agreed to help? Assume you had some relevant expertise. I’m not sure I would have. I would have been seriously reluctant to subject myself (and my family!) to that kind of experience.
2) Assuming you accepted the above invitation, how confident are you that you would have performed well?
3) Finally, setting aside your own willingness and ability to help, do you know of any organization that you are confident could have performed well given a) a task of that technical difficulty and complexity, and b) similar conditions of intense scrutiny?
None of this is intended to be fully exculpatory. It’s quite likely that there were ethical lapses that contributed to the blowout and the oil spill that resulted. But when we’re thinking about BP’s response to the disaster, our assessment of the company’s performance — and specifically the performance of the thousands of individuals who actually did the work — ought to be informed by an appreciation of the nature of the task performed. Ethical decisions are never made in a vacuum. And in some cases, they’re made in the middle of a hurricane.
The BP Disaster: Regulating (and Managing) Complexity
In my previous blog posting on the BP oil-rig disaster, I pointed to the disaster’s ethical complexity, measured in the sheer number of ethically interesting questions it raises.
But the issue of complexity arises in a much more straightforward way in the BP disaster, namely in the fact that the oil rig on which the disaster took place was itself a terrifically complex piece of technology.
See this nice piece by Harvard economist Kenneth Rogoff, The BP Oil Spill’s Lessons for Regulation.
The accelerating speed of innovation seems to be outstripping government regulators’ capacity to deal with risks, much less anticipate them.
The parallels between the oil spill and the recent financial crisis are all too painful: the promise of innovation, unfathomable complexity, and lack of transparency (scientists estimate that we know only a very small fraction of what goes on at the oceans’ depths.) Wealthy and politically powerful lobbies put enormous pressure on even the most robust governance structures….
Rogoff’s point is about regulation, but it could just as easily be about management, and/or the relationship between the two. And to Rogoff’s examples of complexity-driven disasters, you can add Enron and a couple of NASA shuttle explosions. Now, none of these cases can be explained entirely in terms of the difficulty of managing complex systems; each of them includes at least some element of bad judgment and probably unethical behaviour. But in each of them one of the core problems was indeed complexity — either for those inside the relevant organizations or for those outside trying to understand what was going on inside. When systems (financial or mechanical) are mind-numbingly complex, it becomes all the easier for poor judgment to produce catastrophic results. Complex systems also make good places to hide unethical behaviour.
So, if we’re going to build fantastically complex systems, we also need to learn how to manage those systems in highly-reliable ways. In other words, we need management systems — effectively, social technologies — that are as sophisticated as the physical and economic technologies they are intended to govern. We already know a fair bit about error-reduction and the design of high reliability organizations. Aircraft carriers are a standard example of one type of seriously complex organization that, through careful design of management systems, has managed to achieve incredibly high levels of reliability — i.e., incredibly low levels of error, despite their complexity. Similar thinking, and similar design principles, could presumably be applied pretty directly to the design and management of oil rigs. Presumably, that’s already the case to at least some extent, though as BP has proven, more needs to be done. The bigger question is whether business firms are ready and able to apply those principles to the design of all of their complex systems — whether mechanical or financial — such that we can continue to reap their benefits, without suffering catastrophic losses.
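One small piece of the high-reliability toolkit can even be put into numbers. This is the standard back-of-the-envelope arithmetic behind layered safeguards, nothing specific to BP or to aircraft carriers:

```python
# Back-of-the-envelope reliability arithmetic; nothing here is specific
# to BP, just the standard logic behind layered ("defence in depth") checks.
def p_system_failure(p_layer, layers):
    """If each independent safeguard misses a fault with probability
    p_layer, all of them miss it with probability p_layer ** layers."""
    return p_layer ** layers

for layers in range(1, 5):
    print(f"{layers} independent check(s): "
          f"P(failure slips through) = {p_system_failure(0.01, layers):.0e}")

# The catch: the multiplication only works if the layers really are
# independent. If one root cause (bad incentives, a shared blind spot)
# can defeat every layer at once, adding layers buys far less.
```

The multiplication is only as good as the independence assumption, which is exactly why high-reliability design pays so much attention to common-mode failures.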
(Thanks to Kimberly Krawiec for showing me Rogoff’s article.)