So, the 2016 Olympics are over. And while the Olympics always seem to serve up their share of controversy, this year’s event in Rio seems to have had more than the usual quantum of troubles. In fact, the Rio Olympics featured enough scandals and ethical dilemmas to keep a university Moral Issues course going for two full semesters.
The ethical issues at Rio began, of course, long before the Olympics did. To start, there was the controversy over holding a multi-billion dollar event in an underdeveloped country in the midst of political turmoil. And as the games approached, serious concerns were raised about the lack of essential infrastructure, not to mention lack of plausible assurances about safety and security.
Other concerns focused on the dangers posed by the Zika virus. Experts pleaded with the International Olympic Committee to delay or move the 2016 Olympics, fearing that letting the Rio games go on as planned could accelerate a burgeoning pandemic.
Ethical issues also tainted the competitive aspects of Rio, before the games even started: Russia’s entire track and field team was banned from participating, over concerns regarding widespread, systematic use of performance-enhancing drugs.
Once the Olympics began, well, things didn’t improve quickly. From day one there were concerns, for example, about security, and about the badly polluted water at the games’ sailing venue.
Ethical questions extended to media coverage as well. There was widespread criticism on social media of seemingly rampant sexism in how reporters covered the games. When American sharpshooter Corey Cogdell won a bronze medal in trapshooting, the Chicago Tribune referred to her not by name, but merely as “Wife of a [Chicago] Bears’ lineman.”
But back to safety. Predictions that safety would be a problem were not entirely unfounded. Take for instance the day that stray bullets zinged into the Olympic equestrian centre, narrowly missing causing injury or death. In retrospect, Ryan Lochte’s falsely reporting that he had been robbed overshadowed the fact that many people really were worried about hosting the Olympics in a city that can’t plausibly claim to have crime under control.
And then there was the corruption. The IOC has, of course, a checkered past in this regard. The organization has been widely criticized for its history of corruption. Rio continued the trend, with IOC member Patrick Hickey being arrested along with three other men in a ticket re-selling scam.
But, but, but…what about the good stuff? What about the spirit of friendly competition? Well, yes, competition was friendly, except when it wasn’t. International politics overflowed into sport when Egyptian Islam al-Shehaby refused to shake hands with his Israeli adversary.
So there you have it. Congrats to all the medalists. Congrats to those who fought hard and lost. And congrats to those athletes, officials, and sponsors who managed not to end up as fodder for the ethics professor’s classroom.
Diversity and equality of opportunity are good things. Discrimination, on the other hand, is both morally repugnant and economically foolhardy. And yet it persists.
So how on earth could programs designed to encourage diversity and opportunity and to discourage discrimination be a bad thing? That’s exactly the question asked and answered by Harvard prof Frank Dobbin and Alexandra Kalev of Tel Aviv University, in research summarized in their Harvard Business Review piece, Why Diversity Programs Fail.
The goal of diversity programs is a laudable one, namely to increase diversity as a way of fighting back against systemic discrimination. The corporate world is in many ways still a male world, and a white male world at that. In spite of advances, women and minority groups still make up a disproportionately small proportion of managers at big companies. If change is coming, it is coming painfully slowly. As Dobbin and Kalev point out, “Black men have barely gained ground in corporate management since 1985. White women haven’t progressed since 2000.” And it’s not for lack of qualified management candidates. “[B]oth groups,” the authors point out, “have made huge educational gains over the past two generations.”
So it’s easy to see why some might think that good intentions aren’t enough, and that proactive diversity programs would be a useful thing. Except, it turns out, they aren’t. For evidence, Dobbin and Kalev looked at a range of programs designed to encourage diversity—including diversity training, formal grievance procedures, hiring tests, and performance rating systems—and their conclusion is resoundingly negative. They base this conclusion on literally thousands of academic studies that have found, time after time, that diversity programs not only don’t work, they tend to be counterproductive. At companies that have instituted them, diversity has often actually decreased.
Why don’t such programs work? Dobbin and Kalev suggest three problems. One is that such programs tend to be framed negatively, focusing for example on legal implications: if we’re caught discriminating, we could be sued! People tend to react badly to that kind of reasoning. Second, some companies make diversity training courses compulsory, and employees tend to resent compulsory training, and then (so the hypothesis goes) blame the very disadvantaged groups the programs were aiming to help. Finally, Dobbin and Kalev hypothesize that when managers see such programs instituted, they feel blamed, and react badly to that. The result in all three cases is the potential for backlash, and for managers to find end-runs around programs they don’t like.
So why do big companies persist in using such programs? Dobbin and Kalev point to fear of legal liability. That is, managers need to look like they’re doing something, even if there’s no evidence that that “something” really works. It’s very much like a physician ordering extra, unnecessary diagnostic tests. The only thing that seems worse than doing it would be not doing it and then having trouble surface later.
The fact that such programs don’t work is further evidence for the truism that management, pretty generally, is a difficult task. Coordinating and motivating people to work together to achieve a goal—whether the goal is increased sales or increased diversity—is not easy. More specifically, it’s an example of the principle that the best way to institute change isn’t always the most straightforward-seeming way, which is to exert direct control by telling people what to do. As lawyer and legal scholar Scott Killingsworth argues, “command-and-control” approaches to compliance come with a number of inherent limitations and adverse side effects. When command-and-control doesn’t work, the better route is through the long, slow road of cultural change.
With regard to diversity, what does work? Dobbin and Kalev recommend three broad strategies, none of which focuses on control. First, they suggest that companies “engage managers in solving the problem.” For example, get them to act as mentors to people in disadvantaged groups, and get them personally involved in, for example, recruiting a more diverse range of job candidates. The second strategy is to make use of what psychologists call the “mere exposure effect,” a psychological mechanism according to which merely being exposed to a person, idea, or group tends to result in positive feelings about them. So, expose employees to people from different groups (for example by having them work together on diverse, self-managing teams). Finally, Dobbin and Kalev suggest making managers feel personally accountable for change. Not accountable in a legalistic way; accountable in a social way that comes with the feeling that people around you are aware of your behaviour. To this end, the authors recommend department-level transparency about stats regarding who gets hired and who gets promoted, and the institution of diversity task-forces with members drawn from various departments.
Perhaps most fundamentally, Dobbin and Kalev recommend that it be made clear, within the organizations, that top managers are paying attention to the issue of diversity. That is, what matters is not that the boss is telling you what to do; what matters is that the boss cares, and cares enough to pay personal attention.
What should Olympic sponsors and ‘partners’ like Coke and General Electric and Visa do in light of expert recommendations that the Summer Olympics in Rio be postponed or moved?
Nearly 200 prominent scientists, physicians, and ethicists from around the globe have signed a letter arguing that the 2016 Summer Olympics scheduled to be held in Rio de Janeiro this August be postponed or moved due to the risks posed by the mosquito-borne Zika virus. The letter is technically addressed to the head of the World Health Organization, urging WHO to conduct “a fresh, evidence-based assessment” of the risks that Zika poses, and asking WHO to use its powers of persuasion (and its close connection to the International Olympic Committee) to get the IOC to rethink things. In particular, the letter notes the risk implied by having 500,000 athletes and tourists visit Rio and then return home, potentially spreading Zika to every corner of the globe. To date, the WHO for its part seems unmoved.
But the letter omits any mention of the other powerful decision-makers in this situation, namely the corporations that will have their logos splashed all over every moment of the Summer Olympics, regardless of where and when it happens. The 2016 Olympics’ “Worldwide Olympic Partners” include Coca-Cola, Bridgestone, McDonald’s, General Electric, Visa, and others. Dozens of other companies are listed as “Official Sponsors,” “Official Supporters,” or “Suppliers.” Becoming a top-tier Worldwide Olympic Partner costs something on the order of $100 million. That kind of cash surely brings considerable influence. The question: should they use that influence with regard to the Zika issue, and what should their position be?
Ethically, these companies should be wary of contributing to an event that could globalize an ongoing epidemic. The trouble is that expert opinions on the degree of danger here differ. The letter-writers represent a very broad range of experts, but not all of the experts that there are. The head of the US Centers for Disease Control, Dr Tom Frieden, for example, says “There is no public health reason to cancel or delay the Olympics.” But there’s reason to be risk averse, here. The worst-case scenario if the Olympics proceed as planned is very bad, and includes unnecessary birth defects as well as potential neurological damage in adults. And the worst-case scenario isn’t science fiction: it’s a plausible hypothesis set forward by a substantial group of respected experts.
In reasoning about this, Olympic partners and sponsors face two dangers that could warp their ethical reasoning.
The first danger is the fact that, in terms of potential outcomes for sponsors, the situation is seriously asymmetrical. If the games get moved or postponed, this presumably throws a monkey-wrench into each sponsor’s scheduled advertising. On the other hand, if the games go ahead and if there’s then an up-tick in cases of Zika around the world, sponsors have a two-pronged defence: first, “you can’t prove it’s because of the Olympics” (which is probably true) and second, “the CDC and WHO said it was OK” (which they did). So it will be easy for Olympic partners and sponsors to say — and maybe actually believe — that there’s no downside to going ahead.
The second danger is the risk that the sponsors will fall prey to the IOC’s general “can-do,” “the Olympics must go on!” attitude. It’s widely recognized that a “can-do” attitude is what led NASA to launch the Space Shuttle Challenger, despite warnings that doing so could be unsafe. The results of that attitude are notorious.
In my view, Olympic partners and sponsors should resist the dangers noted above. In the end, this may well be a case where the corporations need to trust the experts, or the bulk of them, and at very least lend their weight to the argument in favour of giving the Summer Olympics a very serious second look.
The fist that landed on Jose Bautista’s jaw echoed around the baseball world almost as loudly as his famous “bat flip” last October. And whereas Bautista’s bat flip violated the unwritten rule against grandstanding, Texas Rangers second baseman Rougned Odor’s punch violated the written rules, but also followed from a different, unwritten rule that permits retribution. In particular, Odor was getting back at Bautista for a very aggressive slide into second base just seconds before — which may in turn have been retribution for a fastball to the ribs that Bautista had previously suffered at the hands of a Rangers pitcher, and which was presumed to be intended as — you guessed it — retribution for last fall’s bat flip. That’s how retribution often works, namely that it results in a string of tit-for-tat acts of violence with no natural end point.
But what’s important, here, from a business point of view, is to see the way all of this plays out within what has been structured, intentionally, as an adversarial system. This kind of eye-for-an-eye pattern of retribution would be seriously problematic in private life; but on a baseball field, it’s merely the working out of a set of informal rules designed to civilize a rather aggressive set of activities.
The point here is that in baseball — as in business — people on opposing “teams” aren’t supposed to get along. They’re supposed to compete, each trying to get the better of the other. And such competitive domains typically have their own rules, rules that permit behaviours not considered OK in everyday life. In everyday life, after all, throwing a ball towards someone at 96mph would be considered recklessly dangerous, possibly criminal. But that’s something major league pitchers are encouraged to do, if they can. And in everyday life, causing a person to lose their job would be a terrible thing to do. But in business if you invent a better mousetrap and force makers of lesser mousetraps out of business, that’s considered entirely justified in the name of innovation.
As philosopher Joseph Heath has convincingly argued, this idea of constrained competition serves as a strong foundation for an ethics of business grounded in the goals of markets themselves. Business is tough and competitive, but even tough, competitive games need rules if they are to achieve their purpose. In a business context this puts limits on the aggressive strategies that managers can use in pursuit of profit. Managers of competing companies are free to act aggressively, trying to outmanoeuvre each other, zealously seeking out efficiencies, devising devilishly clever new products and so on, all in an effort to drive the “other guy’s” market share to zero. Managers at all competing firms employ the same tactics, and generally it is the consumer who wins by gaining access to better and better products at lower and lower prices. But the permission to act aggressively in the market — permission granted as an exemption from the rules of polite society — is limited by requirements that the competitors avoid taking things too far, by for example sabotaging each other’s factories or lying to customers to boost sales. Those would certainly be competitive strategies, but anti-social ones.
My Ryerson colleague Hasko von Kriegstein argues, in a forthcoming paper, that this obligation to compete in a constrained way in principle really applies to corporate shareholders, not to managers. After all, shareholders are the ones seeking to profit in the market, so it’s their profit-seeking behaviour that must be constrained. But it still implies limits on the behaviour of managers because managers act as shareholders’ agents in the marketplace. When you’re the one “on the field,” you’re the one subject to the rules.
And in both business and in baseball, the rules — both written and unwritten — serve to protect a range of stakeholders. Some rules protect participants. Others protect innocent bystanders. In some cases, the written rules are controversial or unclear. And in others, the unwritten rules are uncertain. And so sometimes the former get changed or clarified, and the latter evolve. But we can’t begin to understand the point and the proper scope of particular rules — rules against aggressive slides, rules against insider trading, etc. — and the way those rules differ from the rules of everyday life, without understanding that they are rules whose logic is internal to the game, a way to civilize a justifiably aggressive activity.
Defenders of David and Collet Stephan are right about the Canadian healthcare system, and about the “mainstream” approach to healthcare. Sometimes the system kills. Sometimes errors are made. Some pharmaceuticals, in some circumstances, do more harm than good. Preventable “adverse events” may kill as many as 23,000 adult Canadians each year. Sometimes a trip to the hospital makes things worse, rather than better.
Mr and Mrs Stephan were recently convicted of failing to provide the necessaries of life to their toddler, Ezekiel. Their story has many elements, but a central one of them seems clearly to be a mistrust of the mainstream healthcare system. Rejecting that system, David and Collet Stephan opted instead to treat (or rather, “treat”) their child’s very serious illness with herbs and with vegetable smoothies. They didn’t seek the help of mainstream, evidence-based medicine until it was far too late.
There are plenty of people who mistrust mainstream medicine. That’s why “alternative” and “complementary” medicines sell so well. People object to a system that they see as being dominated by big pharma, a system that intrusively asserts control over our lives, telling us what’s wrong with us, and telling us what we must do in order to get better (as they choose to define “better”). It’s a system that is notorious for “medicalizing” everything. Menopause? That’s a disease, and we’ve got the cure! Baldness? There’s a chemical solution to that! Pregnancy? Let’s treat it like an illness!
The thing is, for all its flaws, mainstream medicine works. That is, it mostly works, and doctors and scientists search pretty relentlessly for the bits that don’t work, and they tend to toss those out. Is there an error rate? Yes. Do pharmaceutical companies have too much influence? Certainly. Do physicians sometimes prescribe medicines that pose risks but do little to help? Yes. But overall, mainstream healthcare works. Antibiotics work. Chemotherapy works. Vaccines work. The same simply cannot be said for almost any of the wide array of complementary and alternative “medicines.”
So failing to take your dying child to the hospital because you don’t trust “modern medicine” is literally like failing to get your kid out of a burning building, simply because you don’t like the look of the weather outside.
Those who mistrust mainstream medicine ought to think, before opting out, not just about what they’re jumping away from, but what they’re jumping into. Imagine you don’t like the way your physician is imposing his view of the world on you, and worry that his view is unduly influenced by the marketing dollars of big pharma. So you opt to visit a naturopath instead. What you get is your naturopath imposing his view of the world on you, a view that is likely to be unduly influenced by the marketing dollars of the big alternative medicine companies. The move — from a system that “medicalizes” your health to one that “alternativizes” it — is not clearly a positive one, even from an ideological point of view. And from the perspective of what we know about what actually works, the move is a disastrous one. And when the stakes are as high as the lives of our children, it’s a move that warrants considerable scrutiny.
What happens — what should happen — when you lose faith in your product, when you come to see that the product you’ve been selling all this time isn’t really what it’s been cracked up to be? Is it wrong to keep selling it? How bad does the product have to be for it to be wrong to keep selling it? How strong should the evidence be?
The question came to mind when I read recent reports accusing Dyson Airblade hand dryers of spreading germs at an apparently horrifying rate. Now to be clear, there are reasons not to overreact to the hyperbolic headlines. The stories you’ve read about Airblades are based on one study, conducted under lab conditions that might not reflect reality. But what if — what if — the reports turn out to be fair and accurate? What if the highly artificial scenarios used for the lab tests turn out to be validated by field trials? What if Dyson Airblades really are spreading filth? Should Dyson simply say “Oh well, so much for that!” and stop selling them?
The question of losing faith in your product also comes to mind with regard to various ‘complementary’ and ‘alternative’ healthcare products. Most people who sell such products, and most health practitioners (homeopaths, naturopaths, therapeutic touch practitioners, and so on) surely do what they do out of a genuine belief that they’re helping their patients. They believe they see positive effects. But the evidence generally doesn’t support that belief. Now, most practitioners and sellers simply never come to accept that fact, and so they go on selling and prescribing products that are physically incapable of doing what they claim they do. And though I’m a strong critic of such practices, I do have a degree of sympathy for the person who has spent, say, 20 years believing that homeopathy really works, and “seeing” it help thousands of people (a fact that can readily be explained by the operation of a whole range of well-documented cognitive biases). When such a person starts to realize the 20-year error they’ve made, they must find themselves in a rather awkward situation.
Some might ask whether the seller’s faith in the product really matters all that much. Isn’t the customer always right? Isn’t the customer’s opinion the one that matters? Yes, mostly. And so there are times when it’s absolutely OK to keep selling your product even after you’ve personally lost faith in it. Imagine you’re in sales for Coke, and you find yourself developing a taste for Pepsi. The fact that you prefer Pepsi doesn’t make it wrong to sell Coke. Your customers are buying based on their tastes, and de gustibus non est disputandum.
But that’s about questions of taste. What about questions of objective reality? Some products after all just don’t work. Selling only products that work is required by the basic ethical and legal requirement of “merchantability.” If you sell me a chair, it needs in fact to be capable of functioning as a chair. If you sell me a gizmo to attach to my car’s engine that you swear will provide better performance, then it had better actually provide better performance: me having the “feeling” that the car is now performing better isn’t enough.
And so some cases are pretty clear. Once you understand — really understand — the weight of evidence against most complementary and alternative medicines, it immediately becomes ethically imperative to stop selling them. And if Dyson finds its product really is spreading germs at an unconscionable rate, then it will be duty-bound to stop selling the product, though surely the restaurants and public buildings that are the company’s main customers will make that choice for them.
In the end, what’s required are vigilance and good faith. Anyone who sells a product is obliged to go to reasonable lengths to ensure their product works, and to grapple with credible evidence to the contrary. A sincere belief in your product is nice — it helps you get up in the morning and look yourself in the mirror — but it’s not enough.
Would FBI Compelling Apple Mean Using Forced Labour?
News surfaced recently that, in the event that the FBI is successful in getting a court to compel Apple to unlock an iPhone, the company’s engineers might simply not do the work.
In addition to raising interesting practical questions (still hypothetical at this point) for the FBI, this turn of events raises interesting ethical questions about forced labour. It’s easy enough in the abstract to accept the idea of a court forcing a corporation — a lifeless thing — to do something. But it’s somewhat more difficult to stomach the idea of a court order compelling a number of human individuals to do work over a period of weeks. We’re all familiar of course with the idea of courts forcing people to do things — to disclose a piece of information, for example. But forcing labour, in the absence of a criminal conviction of the individuals involved, is dramatically different. It may even be a violation of the 13th Amendment to the U.S. constitution, which forbids “slavery [and] involuntary servitude, except as a punishment for a crime whereof the party shall have been duly convicted.”
This is also a reminder of the complex relationship between corporations per se and the people that, roughly speaking, make them up.
Back in 2011, a lot of people criticized then-presidential candidate Mitt Romney for saying during a campaign stop that “corporations are people.” In context, it was pretty clear that Romney wasn’t referring to the controversial notion of corporate personhood, but rather to the simple fact that corporations are (in a practical sense) composed of people. When corporations profit, inevitably some people profit. And, more importantly for the present case, when corporations do labour, some people do labour.
The current Apple v FBI situation is a good example of Romney’s point. You can’t compel Apple to unlock an iPhone without compelling its human engineers to do certain work.
Of course, requiring people to do work they don’t particularly want to do is generally regarded as permissible within the context of a labour contract. When you sign up for a particular job, no one promises you that you’ll love every minute of it. So in complying with the (possible, potential) court order, Apple’s engineers would merely be doing their jobs. But on the other hand, there’s apparently now evidence that such a request would be considered sufficiently odious to make those engineers give up their jobs entirely. In that event, getting the work done would mean either literally compelling the individual engineers to do the work, or finding engineers with very specific skills to take their place.
And news emerged just today that the FBI may have found a way to get into gunman Syed Rizwan Farook’s iPhone without the help of Apple and its engineers. Whether the non-Apple route will work remains unknown. But the case still raises interesting questions about the rights and duties of corporations, and the way (or the extent to which) those rights and duties are ultimately held by humans. We offer legal protection to corporate property, not out of respect for corporations but out of respect for their human owners. And we should think twice before legally compelling corporate action, when that action would in practice imply forced labour for the corporation’s human employees.
Just what are corporate boards obligated to know? More precisely, what lengths are they obligated to go to in order to get to know the things they ought to know?
The topic came to mind when I read today’s story about how the salary of the CEO of Canada’s biggest bank, RBC, had gone up 44 per cent to $10.9 million during his first year on the job. One has to ask: just what information does RBC’s board have at hand that would justify that level of compensation, and that very substantial change in level of compensation?
The question is not a trivial one. In fact, it’s the topic of a program of research we are currently conducting at the Ted Rogers School of Management’s Ted Rogers Leadership Centre. The question, more generally, is about what boards are obligated to know. Of all the things a board could know, which things must it know, in order to do its job properly?
The question turns out to be harder than it sounds. Boards typically need quite a lot of information, and face plenty of obstacles to getting it.
What do boards need to know? Boards of directors are ethically and legally responsible for the oversight of firms. While it is not the job of directors to manage the firm, it is the job of directors to govern it. Both individually and collectively, directors have fiduciary responsibilities to govern the firm by selecting, paying, guiding, and assisting top management. Performing those tasks well requires considerable information. In general, directors need to have sufficient information about the firm they are directing, as well as about the industry within which it operates. They need to understand the relevant bits of corporate law, and to have basic financial literacy. With regard to specific decisions, directors may need very special information. With regard to a major strategic decision such as a merger or acquisition, for example, directors may need to have detailed information about not one but two organizations, as well as detailed valuations and reliable market forecasts. With regard to setting executive compensation, boards may need not just detailed information about performance, but also information about industry benchmarks as well as information about what a given CEO’s other employment options are.
Why is it so hard for boards to get the right information? The fundamental problem is that most directors are, at least vis-a-vis the specific organization, amateurs. They are (mostly, preferably) outsiders — they are outsiders on purpose — and so by definition they spend much less time in direct contact with the organization than, say, the CEO or other employees. So they are automatically subject to relative information poverty.
The result is that they have to rely on others. Who do they rely on? First and foremost, they rely on insiders, especially the insider with whom they have the most interaction, namely the CEO. But of course, there’s always the worry that the CEO will, shall we say, “filter” information. After all, if no one particularly wants to give the boss bad news, who on earth wants to give the board bad news? Boards may also get information from elsewhere within the firm. The board’s audit committee, for example, ought to be able to get information directly from the firm’s accounting department, but such direct access is not universally available.
Boards also sometimes look to outsiders, a category that includes consultants (such as compensation consultants, strategy consultants, and governance consultants) and professionals (such as external accountants and outside legal counsel). Compensation consultants are a key example here: many large firms make use of those. But anecdotal evidence, at least, suggests that directors often doubt the reliability and value of comp consultants, even after having paid good money for their advice.
That’s why our research is focused on the wide range of structural and procedural principles that we argue boards ought to attend to. The right structures and procedures need to be in place to make sure (or to make it more likely) that boards will be diligent and effective in their pursuit and use of information.
So, for example, how do you ensure that boards will seek and appreciate a wide range of information? Start by having a nominating committee that is dedicated to seeking out real diversity. How do you make sure that boards have the right expertise to make good use of the financial information available to them? Implement a ‘board skills matrix’ to identify gaps in their collective knowledge. How do you make sure that boards make proper use of consultants? Make sure the board has the appropriate budget, but also implement policies to reduce redundant use or other kinds of over-use of consultants of dubious value.
In the end, that’s what governance is about. It’s about not just doing the right things, but putting the right processes in place to make it more likely you’ll do the right things on an ongoing basis. So the shareholders (and other stakeholders) of RBC need to ask not just whether David McKay is worth $10.9 million, and not just whether the board gathered the right information in making that decision, but whether the board has put in place the policies and procedures to make sure that it has the right information, and uses it correctly, on an ongoing basis. That, after all, is what a board is really for.
By Chris MacDonald and Hasko von Kriegstein
It’s not all that surprising that restaurants that focus on fast, cheap food look to suppliers who focus on fast, cheap production methods. Fast food chains and factory farms, in other words, seem like a match made in heaven. But from the point of view of animals, it’s a match made in hell. The fast food industry’s capacity to efficiently turn ingredients into meals implies a huge demand for the products of the cruelty-prone meat and dairy industries. You don’t have to be a zealous animal rights activist to cringe every time you see images of battery hens, and imagine what a life — even with the very basic mental life of a chicken — would be like in a tiny, crowded cage.
But there are signs that things are changing in this regard. See, for example, the recent announcement by Burger King and iconic Canadian coffee chain Tim Hortons (both owned by the same parent company), promising to use only cage-free eggs by 2025.
Just how big a deal is such an announcement?
It’s easy to be cynical about a target that’s nearly a decade out. Given that the average tenure of a CEO is far less than that, it risks looking like a leadership team making a promise that someone else is going to have to keep. The key question, of course: Is there a plan in place? Is substantive action underway already? If so, then this is long-term planning, rather than punting.
There are also big questions about supply chains. A big restaurant chain can’t simply decide to change what it sells if it can’t find a source. So the announcement of the restaurant chains’ intentions implies a need for big changes within the egg industry. And, perhaps not surprisingly (given the purchasing power of the two chains), there are signs that the industry is listening: witness the recent announcement that Canadian egg farmers aim to abandon battery cages by 2036.
Finally, it also should be noted that this move sets a powerful precedent for other restaurants. A pair of restaurant chains, even popular ones, can’t change the practices of the egg industry on their own, and hence can’t make a meaningful dent in the total quantity of cruelty in the industry. But the move by these two (although not the first) could have knock-on effects in several ways.
First, it helps establish a supply chain (see above). Second, it signals to consumers that cruelty-free eggs can be had, even at a fast-food joint, and so it’s OK to expect that from a fast-food joint. Finally, it gives implicit permission to managers at other fast-food chains to live according to their own values. No one prefers eggs from unhappy chickens, but many managers may feel that competitive pressures won’t allow the alternative. Burger King and Tim Hortons have essentially signalled that they see a path to that alternative, and are willing to follow their conscience to it.
Could this all be a marketing ploy? Of course it could. But in the end, that may not matter. As long as BK and Tim’s see good reason to move to cage-free eggs, they will do it, and there’s good reason to believe that other restaurant chains will follow.
Is it unethical to watch the Super Bowl?
As evidence mounts that professional football is essentially a highly organized mechanism for inducing brain injury in large numbers of young men with few other options, the question arises whether watching the causing of all that brain injury is itself unethical.
If the game itself is ethically problematic, is watching problematic, too? Is it wrong to find joy in watching young men sustain brain damage?
The question is amplified with regard to the Super Bowl, which sees millions of non-football-fans tuning in, nachos and beer at their sides, to watch the spectacle. Are those millions of game-day converts complicit in the carnage?
We should start by acknowledging that there is of course no plausible causal connection between the casual viewer and the brain damage being done to players. An individual viewer tuning in on Super Bowl Sunday doesn’t matter a bit. But then, in the aggregate, we matter a lot — pro football wouldn’t be such a high-paying endeavour or such an enticement to the men who risk their brains to play it, if millions of us didn’t tune in on a regular basis.
But at the very least we might wonder about the extent to which our watching amounts to a kind of tacit endorsement. It might be considered unseemly to enjoy watching the brutality in the same way that it’s unseemly to enjoy watching a serious car accident. You didn’t cause the accident, let us assume, but that doesn’t mean it’s OK to enjoy watching it.
And an argument could be made that enjoying watching football is even more unseemly than enjoying watching a car crash, because the damage inflicted during football is intentional, and hence morally suspect. When you watch football, you’re enjoying not just watching brain damage, but watching young men lured into brain damage by large financial incentives. And in case you think the financial incentives justify the brain damage — “those guys are well paid to risk their brains!” — remember that most of them still probably don’t fully understand the risks. In part, that’s because probably no one has a full understanding of those risks, and because even when we know about risks, we tend to brush them aside in irrational ways when factors like pride are at play.
So hey, I hope you enjoyed the game. I did. But none of us should be altogether proud of that fact.