Ray Rice case shows how difficult it is for employers to deal with off-hours misconduct

What are an employer’s ethical obligations when an employee gets caught doing something bad off the clock? The example of the day, of course, is Ray Rice. As the entire universe now knows, Rice is the football player who was caught on video savagely hitting his then-fiancée (now wife), knocking her unconscious. The incident, once it became public, left his team (the Baltimore Ravens) and the National Football League with the question of what to do about it, and what to do about Rice.

Rice’s case apparently posed something of a dilemma for the court system, too: back in May, Rice was indicted for third-degree aggravated assault, but those charges were later dropped.

Consider also the case of Centreplate CEO Desmond Hague, who was caught on video viciously kicking a friend’s dog in an elevator. Hague was first suspended, and then eventually terminated in the wake of a public uproar.

Interestingly, in both cases the offences took place away from work. Neither offence was an offence against the employer, at least not directly. Yet it was widely believed that Rice’s and Hague’s employers needed to do something, something beyond whatever legal sanctions might apply.

Of course, in those two cases, the employers’ hands were forced by enormous public pressure. Bowing to such pressure is perhaps most understandable in the case of the NFL’s (eventual) response to the Rice case. Football players are exceedingly public figures, and many people see them as actual or potential role models for kids. Rice is a crummy role model, to say the least, and is therefore a public relations nightmare for the NFL. The same reasoning applies to athletes losing endorsement contracts: it’s no surprise at all that Tiger Woods, Mike Tyson, Kobe Bryant, and Michael Vick lost major endorsement contracts after their respective scandals. Advertisers are buying the athlete’s image and reputation. And when those are devalued, the athlete no longer has any value as a spokesman.

Legally, in Canada and the U.S., at least, employers don’t need to give much reason for firing an employee. But what about ethically? Is bad behaviour (or even criminal behaviour) away from work a good enough reason to sack someone?

There are a few circumstances in which an employer is clearly ethically justified in taking action. First, an employer may act if the bad behaviour suggests that the employee is liable to act badly on the job in a way that poses a risk to customers, to fellow employees, or to the general public. This might have been the case with Hague. Although we don’t have much evidence, the dog-kicking incident might suggest a man with a temper. What if he’s inclined to treat subordinates the way he treats helpless animals?

Second, an employer is likely to be ethically justified in acting if the bad behaviour directly implicates on-the-job performance. Consider, for example, an airline pilot caught buying cocaine. A coke-head pilot simply can’t be tolerated.

Third, if the bad behaviour in question suggests such poor judgment that the employee simply could no longer be trusted, then an employer might well be right to let him or her go. Some bad behaviour might just imply that the employee is a loose cannon. People are generally hired not just for their talent, but also for their judgment. No judgment means no job.

But what about beyond that? What about the sales clerk spotted smoking pot in the park? That’s technically illegal in most jurisdictions, but does the employer have any business firing her for it? Or what about the salesman who is known to have been arrested for hitting his wife, tried, convicted, and released after a minimal jail term? Should his employer fire him, or consider him already to have been punished? Certainly, domestic violence might make us worry about how he would treat female colleagues. But what if, for whatever reason, that’s not an issue? Is merely having done something bad in one’s personal life any of an employer’s business?

In some cases, such a wrongdoer would simply be impossible to work with, or impossible to have managing a team of employees. If the wrongdoing is widely known within the organization, reputation alone might be enough to make the employee a liability.

But it’s also worth considering that an employer who fires an employee simply based on the wrongdoing itself is effectively imposing a penalty — acting like judge, jury, and executioner — without any of the due-process protections that accompany a criminal trial. It also implies a kind of double jeopardy: being tried and possibly convicted twice for the same crime.

Further, it arguably represents an intrusion of the world of employment into our personal lives. Just maybe we want to keep those spheres separate, not to protect the wife beater but to protect the rest of us from nosy and self-righteous bosses.

There’s a saying in legal circles that “hard cases make bad law.” In other words, our judgments about extreme or unusual cases can induce us to generalize in unhelpful ways. I think that applies quite nicely to our ethical judgment about Ray Rice and Des Hague. Rice and Hague are both wealthy, powerful men who did things that most of us find unthinkable. Before we leap to the conclusion that hell yes they should lose their jobs, we ought at least to think through what that conclusion would imply for a few million lesser offences.

Apple’s new Watch and the Ethics of Disruption

Ok, pop quiz. How many people did Apple put out of work this week, when the tech giant announced the Apple Watch and the Apple Pay point-of-sale technology built into the new iPhone 6? How many hopes and dreams were dashed?

How many would-be smart-watch entrepreneurs are saying to themselves, “oh, well, maybe I won’t go ahead with that Kickstarter campaign after all”? How many credit card company employees are now contemplating other lines of work? How many people at Samsung and BlackBerry and Pebble and Sony are going to be out of work, as the relevant corporate divisions get downsized as those companies lose market share to Apple’s new products?

The exact number is hard to guess, but it’s certainly not zero.

To lose one’s job, even temporarily, is generally a very bad thing. It jeopardizes one’s ability to house and feed oneself and one’s family. Causing such an outcome would, in most circumstances, be a bad thing to do.

But for all the cynicism about the launch event and the products it featured, no one criticized Apple for having made life hard for executives and employees at other companies. No one is blaming Apple for the fact that its nifty new products are going to put people out of work.

Why? Because that sort of disruption is what capitalism thrives on. Capitalism is, and must be, subject to ethical constraints, but those are effectively just the rules of the game, not a denial of the nature of the game itself, and not an attempt to render it impossible to play the game.

Just imagine what it would look like if companies really were expected never to hurt anyone. That would mean never putting anyone out of work, which means never inventing anything new and never improving one’s own products and processes in a way that might risk putting a competitor, no matter how poorly run, out of business. Such a standard is simply not plausible, not a reasonable limit on doing business.

For the sake of comparison, consider a different set of limits on business competition, including things like the prohibition on violence, or the idea that you shouldn’t actively disrupt a competitor’s operations. Those are rules whose observance doesn’t stop you from getting on with your own work. It’s entirely feasible to compete while observing those rules, whereas it’s not possible to compete while promising not to put anyone out of a job.

The point here is a simple but deep one. Business ethics isn’t about being a saint, or an angel, or about trying to make everyone happy. At heart, it’s about finding reasonable limits on the pursuit of profit, or, more personally, on how we go about making a living.

Uber’s tactics against rival Lyft crossed the ethical double-yellow line

If all is fair in love and war, what about in business?

Uber is in the news again, and not for happy reasons. The car service company has been accused of trying to poach drivers from competitors like Lyft. And, in the process of poaching drivers, Uber has apparently been responsible, over a one-year period, for 5,000 or so cancelled Lyft rides — rides that were ordered and then unceremoniously cancelled. According to Casey Newton, whose piece for The Verge broke the story:

Uber is arming teams of independent contractors with burner phones and credit cards as part of its sophisticated effort to undermine Lyft and other competitors…. Using contractors it calls ‘brand ambassadors,’ Uber requests rides from Lyft and other competitors, recruits their drivers, and takes multiple precautions to avoid detection. The effort, which Uber appears to be rolling out nationally, has already resulted in thousands of canceled Lyft rides and made it more difficult for its rival to gain a foothold in new markets.

Uber, for its part, says its brand ambassadors never intentionally cancel rides. But as others have observed, doing so is a foreseeable consequence of their driver-recruitment strategy. If an Uber brand ambassador contacts a Lyft driver who happens already to have been contacted (and this can only be determined after the ride is booked), they realize their current call is pointless (and may well raise Lyft’s suspicions, resulting in the caller getting blocked) and so they cancel the ride.

Now, I’m no Uber hater. In fact, I’ll admit from the start to being an Uber fan. I use the service frequently here in Toronto, and I love the model. But that makes it all the more disappointing that a company with a great idea is using scummy tactics to gain and hold market share.

Vox’s Timothy B. Lee has defended Uber. Poaching other companies’ employees, according to Lee, is par for the course. That’s what companies do. And since the best way for Uber to reach Lyft’s drivers (not truly employees, but close enough) is by posing as customers and ordering a ride, that’s naturally what they’re doing. As Lee points out, this is completely legal (Uber’s brand ambassadors are, after all, paying for the drivers’ time) and is arguably beneficial to drivers to the extent that it makes them aware of new opportunities.

But that defence is off-target. It’s not clear that ordering a Lyft ride is the only way to find new drivers (what about the huge number of taxi drivers not yet affiliated with either company?). Of all the ways there are to recruit drivers (putting up posters near where cabbies congregate?), why choose the one that just happens to interfere with a competitor’s business? Lee claims that a few thousand cancelled rides pales in comparison to the size of Lyft’s fleet of drivers (roughly 60,000). But that argument fails. It’s the principle of the thing — sabotage is not OK — not just the actual degree of interference experienced. (Compare: shoplifting isn’t OK just because a store’s sales volume is large.)

Uber is not, contrary to what Andrew Leonard suggested in Salon, an example of “no-holds-barred free-market competition,” precisely because there is no such thing as no-holds-barred free-market competition, at least not in a vaguely capitalist economy. Capitalism embraces competition, but the kind of competition it embraces is not unrestricted. It is competition based on innovation, and on a dedication to producing a better product at a better price than the other guy does. As others have pointed out, failure to compete (say, when such failure takes the form of collusion) is itself unethical and illegal. But that fact certainly doesn’t license every imaginable competitive strategy. Hockey, too, is a rough game. And players are obligated not to generously share the puck with members of the other team. But the best hockey — and the best business — happens when competitors fight hard within the rules of the game, winning because of their superior talent, not because they busted the other guy’s knees.

Love it or hate it, the Ice Bucket Challenge is good for charity

The ALS Ice Bucket Challenge has been mind-bogglingly successful, raising tens of millions of dollars and becoming a bona fide internet phenomenon. But it has also garnered considerable criticism. So, are the critics right? Is the Ice Bucket Challenge really an example of a terrible approach to philanthropy?

I took the ALS Ice Bucket Challenge last week, after being challenged by a Facebook friend. As part of it, I also happily donated money to the cause. ALS (the neurodegenerative disease Amyotrophic Lateral Sclerosis) is a good cause, and I had fun doing my bit. I encouraged (and encourage) others to take part.

But many people have found the Challenge off-putting, and the criticisms are worth considering.

So OK, to begin with: yes, an internet meme is a pretty silly way to decide which charities to support. If the only thing that inspires you to support a good cause is the fact that Leonardo DiCaprio dumped water on his head, you might want to rethink your priorities.

Critics have in particular focused on the fact that it’s far from clear that ALS, currently enjoying the limelight, is the world’s most important charitable cause. After all, the number of people suffering from ALS pales in comparison to the number of people who die from cancer or heart disease. True, but that’s not a reason not to donate to it. There is no “most worthy” cause. Charities vary along many dimensions, and there’s nothing wrongheaded about donating — even collectively donating lavishly — to help cure a disease that afflicts relatively few in a relatively tragic way.

Other critics have lamented more specifically the fact that the ALS Challenge is—gasp!—taking money away from other charities. And there is some evidence that that’s true. Money is finite, and presumably many people will donate less to other charities if they have donated to ALS. But this applies to any charity’s fundraising efforts. If the Canadian Cancer Society or the Heart and Stroke Foundation has an especially successful fundraising year, it likely means some other charity (or perhaps a great many small charities) will have a comparatively miserable one. There’s no special reason to single out the Ice Bucket Challenge in this regard. As for me, like most people I know, I dug an additional $100 out of my pocket—$100 I was liable to spend on dinner out, or on iTunes—and donated it to ALS Canada. I made a donation in addition to the other causes I regularly donate to.

And consider this: Most of the criticisms launched against the Ice Bucket Challenge are ones that apply to your local 10k Fun Run in support of cancer research, too. Or the dance-a-thon to raise money to feed the hungry. Focused on me and my accomplishments, rather than on the charity? Check! Pressuring your friends into donating or sponsoring, independent of their own priorities? Check! Environmentally wasteful? Check! A non-thoughtful way to select a charity? Check!

One of the best things to come out of the Ice Bucket Challenge has been the vibrant discussion and the range of creative responses it has engendered. A friend of mine dumped water on her head (thus contributing to keeping the meme going) but donated to her own favourite charity, and in her video encouraged others to do exactly the same thing. Charlie Sheen dumped $10,000 in cash on his head, symbolizing the amount he was pledging to donate to the ALS Foundation. Matt Damon (co-founder of Water.org) dumped icy toilet water on his head, to draw attention not just to the stunt but to his own favourite cause, namely the provision of clean water.

In the end, the creativity and even the critical comments are good. It’s good for people to be talking about charity, and which charities to give to, and how to do it. Yes, the ALS Ice Bucket Challenge has faced considerable criticism. And that’s a good thing.

Is it Possible for a Corporation to be Patriotic?

What makes a Canadian company Canadian? What is it that makes an American company definitively all-American? Is it a matter of where the company is legally registered? Where it earns the bulk of its profits? Who its CEO is? Who owns its shares? And what about companies that have offices in multiple countries? Should companies have to swear allegiance to one flag or another?

The question of corporate nationality has arisen recently, in relation to the matter of corporate “inversions,” or “transactions in which American corporations [for example] move their tax residency abroad by being ‘bought’ by smaller foreign firms, in order to reduce their [for example] American corporate tax bills.” Not surprisingly, perhaps, such inversions are controversial. The notion of an American company (and so far all the controversy I’ve seen has been about US companies) abandoning the homeland to put down roots in a foreign land offends more than a few. For some, the act in itself amounts to a kind of treachery. For others, it has to do with the fact that because inversions allow a reduction in taxes paid, they might (or might not) imply big losses to particular national treasuries.

Naturally, rhetoric on the topic is in full bloom. US Treasury Secretary Jacob J. Lew has apparently said that inversions do violence to what he refers to as “economic patriotism.” And US President Barack Obama has waded into the debate, referring to inverting firms as “corporate deserters.”

On the other hand, the practice has its defenders. If the US corporate tax rate weren’t so high, US companies wouldn’t feel the need to find creative (and ostensibly disloyal) solutions. And inversion is perfectly legal, explicitly allowed, for example, by the U.S. tax code. Not that legality settles the ethical issue, but it’s odd to call something unpatriotic — disloyal to your country — if your country’s law explicitly allows your behaviour.

To me, rhetoric laced with words like “patriotism” and “deserters” seems hopelessly parochial in a global economy. It rings of jingoism. People want free markets — and the free flow of goods and services across borders — but they don’t want to be told that other places are better places to do business, and they don’t like the idea that another nation might grab a bigger share of corporate tax revenues.

But there’s also a point to be made here about corporate personhood. As I’ve pointed out before, corporate personhood, properly understood, is absolutely essential to modern economies and hence to modern societies. Personhood simply consists in the fact that courts identify corporations as having bundles of rights and responsibilities separate from the people who in some sense make up the corporation. That’s what lets corporations sign contracts and own property and honour warranties and be sued. Without personhood: no corporation.

Despite this fact, many people claim to be opposed to the very notion of corporate personhood. But that leads to a problem with regard to inversions. If you think you’re opposed to the notion of corporate personhood, and additionally find inversion distasteful, you need to ask yourself: just who is being unpatriotic when corporate inversion happens? Because if you are skeptical about personhood, then it can’t be the corporation that is deserting its country. Is the Board being unpatriotic? Even if their decision is consistent with their legal duty and arguably their ethical duty to do what’s best for the corporation?

As one commentator put it, “Corporations aren’t people, so it’s a lot to ask for them to be patriotic, especially when they operate all over the world.” No, they’re certainly not people, but they are persons. As long as you accept that fact, you can then talk seriously about just what bundle of rights and responsibilities corporations ought to have — that is, what form their personhood should take.

If a corporation is a person in this sense, is it then a thing that is capable of having a nationality? Can it have duties of citizenship, as Lew and Obama seem to imply? This isn’t a metaphysical question, but a practical one. Are the duties of citizenship duties that it would make sense to attribute to a corporation? Would that be conducive to important human ends? And if so, are the humans whose interests matter just the ones who happen to live where you do?

Commercial airlines negotiating the ethics of flying in, and over, conflict zones

Tel Aviv is not a place for the faint of heart to fly into, these days. Should Canadian and American and European airlines go back to avoiding the place, or should they bravely continue flying there? The conflict between Israelis and Palestinians along the Gaza-Israel border is, tragically, showing no signs of letting up, and the result is real risk to commercial aircraft.

Back on July 22, Air Canada briefly cancelled flights between Tel Aviv and Toronto, and in the US the Federal Aviation Administration issued an order banning U.S. carriers from flying in and out of Tel Aviv’s Ben Gurion International Airport. The European Aviation Safety Agency, on the other hand, merely issued an advisory recommending caution.

Then, after a few days, the FAA lifted its ban on flights, but the trouble is far from over. There was news in late July that rockets had been fired at the Tel Aviv airport as an Air Canada jet was preparing to land. Flight AC85 was forced to abandon its initial attempt to land, and to circle the airport while waiting for confirmation that landing was (reasonably) safe. Reports suggest that the airline is nonetheless going to continue flying to Israel.

Is that the right thing to do? How much risk is too much? With regard to the company’s own calculations, a spokesman for American Airways was quoted as saying “Nothing matters more than keeping our crews and customers safe.” OK, fair enough. But how safe is “safe”? No one in the post-9/11 world thinks air travel is perfectly safe, although it is still in general the safest way to travel. But is flying into Tel Aviv sufficiently dangerous (beyond the minimal dangers of “normal” air travel) to make it unethical for airlines to fly there?

One way out would be for airlines to defer to the relevant federal regulations and edicts. But laws and regulations only set the minimum standard. Airlines are free to opt not to fly into Tel Aviv, even when legally allowed to do so, so they still have a decision to make.

Some people will immediately say that yes, of course, airlines should avoid taking the risk. After all, every life is precious — you can’t put a price on a human life. Except, of course, you can, and we do it all the time. If every life were literally priceless, we would spend even more on air safety (not to mention auto safety) than we already do.

Another option would be to say, hey, it’s a matter of “buyer beware.” Airlines can fly into Tel Aviv, ethically, as long as their customers know how dangerous it is. And what passenger contemplating flying into Tel Aviv these days wouldn’t know about the dangers? But then, being aware of the conflict there doesn’t imply having a good understanding of the precise risk involved in flying there. Recall that just about everyone was surprised when a Malaysian passenger plane was shot down over the Ukraine back in July, killing nearly 300 people. Everyone knew about the armed conflict going on there, but no one apparently thought that it constituted a serious risk to air travel. So it is unrealistic to expect the average passenger — one without a fine appreciation of the precise geographical location of the latest round of skirmishes and not tutored in the capacities of the latest ground-to-air rocket technology — to make this call. Passengers rely on airlines to engage in reasoned risk assessment, and to keep them reasonably safe.

In the end, commercial airlines should err on the side of safety. After all, even if (let us suppose) all the passengers on a given flight into Tel Aviv are Israelis returning home, ones who are happy to thumb their noses at Palestinian rockets, the airlines still have a duty to their employees — in particular to the pilots and flight attendants who make up their flight crews. Those flight crews accept, as do passengers, that flying implies certain risks. But no one on the plane, whether passenger or pilot or flight attendant, has the information required to make a rational decision about flying into Tel Aviv, and so they shouldn’t be expected to do so.

Facebook’s Study Did No Wrong

It came to light recently that Facebook, in collaboration with some researchers at Cornell University, had conducted a research study on some of its users, manipulating what users saw in their news feeds in order to see if there was an appreciable impact on what those users themselves then posted. Would people who saw happy news then post happy stuff themselves? Or what? Outrage ensued. After all, Facebook had intentionally made (some) people feel (a little) sadder. And they did so without users’ express consent. The study had, in other words, violated two basic rules of ethics.

But I’m not so sure there was anything wrong with Facebook’s little experiment.

Two separate questions arise, here. One has to do with the ethics of the Cornell researchers, and whether Cornell’s ethics board should have been asked to approve the study and whether, in turn, they should have approved it. The other has to do with the ethics of Facebook as a company. But this is a blog about business ethics, so I’ll stick primarily to the question about Facebook. Was it wrong for Facebook to conduct this study?

With regard to Facebook’s conducting this study, two substantive ethical questions must be dealt with. One has to do with risk of harm. The other has to do with consent.

Let’s begin with the question of harm. The amount of harm done per person in this study was clearly trivial, perhaps literally negligible. Under most human-subjects research rules, studies that involve “minimal” risk (roughly: risks comparable to the risks of everyday life) are subject to only minimal review. Some scholars, however, have suggested a category of risk even lower than “minimal,” namely “de minimis” risk, which includes risks that are literally negligible and that hence don’t even require informed consent. This is a controversial proposal, and not all scholars will agree with it. Some will suggest that, even if the risk of harm is truly tiny, respect for human dignity requires that people be offered the opportunity to consent — or to decline to consent — to be part of the study.

So, what about the question of consent? It is a fundamental principle of research ethics that participants (“human subjects”) must consent to participate or to decline to participate, and their decision must be free and well-informed. But that norm was established to protect the interests of human volunteers (as well as paid research subjects). People in both of those categories are, by signing up to participate in a study, engaging in an activity that they would otherwise have no interest in participating in. Having someone shove a needle in your arm to test a cancer drug (or even having someone interview you about your sexual habits) is not something people normally do. We don’t normally have needles stuck in our arms unless we see some benefit for us (e.g., to prevent or cure some illness in ourselves). Research subjects are doing something out of the ordinary — subjecting themselves to some level of risk, just so that others may benefit from the knowledge generated — and so the idea is that they have a strong right to know what they’re getting themselves into. But the users of commercial products — such as Facebook — are in a different situation. They want to experience Facebook (with all its ups and downs), because they see it as bringing them benefits, benefits that outweigh whatever downsides come with the experience. Facebook, all jokes aside, is precisely unlike having an experimental drug injected into your arm.

Now think back, if you will, to the last time Facebook engaged in action that it knew, with a high level of certainty, would make some of its users sad. When was that? It was the last time Facebook engaged in one of its infamous rejiggings of its layout and/or news feed. As any Facebook user knows, these changes happen alarmingly often, and almost never seem to do anything positive in terms of user experience. Every time one of those changes is made (and made, it is worth noting, for reasons entirely opaque to users), the internet lights up with the bitter comments of millions of Facebook users who wish the company would just leave well enough alone. (This point was also made by a group of bioethicists who pointed out that if Facebook has messed with people’s minds, here, they have done so no more than usual.)

The more general point is this: it is perfectly acceptable for a company to change its services in ways that might make people unhappy, or even in ways that are bound to make at least some of its users unhappy. And in fact Facebook would never have suffered criticism for doing so if it had simply never published the result. But the point here is not just that they could have got away with it if they had kept quiet. The point is that if they hadn’t published, there literally would have been no objection to make. Why, you ask?

If Facebook had simply manipulated users’ news feeds and kept the results to themselves, this process would likely have fallen under the heading of what is known, in research ethics circles, as “program evaluation.” Program evaluation is, roughly speaking, anything an organization does to gather data on its own activities, with an eye to understanding how well it is doing and how to improve its own workings. If, for example, a university professor like me alters some minor aspect of his course in order to determine whether it affects student happiness (perhaps as reflected in standard course evaluations), that would be just fine. It would be considered program evaluation and hence utterly exempt from the rules governing research ethics. But if that professor were to collect the data and analyze it for publication in a peer-reviewed journal, it would then be called “research” and hence subject to those stricter rules, including review by an independent ethics board. That’s because publication is the coin of the realm in the publish-or-perish world of academia. In academia, the drive to publish is so strong that — so the worry goes, and it is not an unsubstantiated worry — professors will expose unwitting research subjects to unreasonable risks, in pursuit of the all-important publication. That’s why the standard is higher for academic work that counts as research.

None of this — the fact that Facebook isn’t an academic entity, and that it was arguably conducting something like program evaluation — none of this implies that ethical standards don’t apply. No company has the right to subject people to serious unanticipated risks. But Facebook wasn’t doing that. The risks were small, and well within the range of ‘risks’ (can you even call them that?) experienced by Facebook’s users on a regular basis. This example illustrates nicely why there is a field called “business ethics” (and “research ethics” and “medical ethics,” and so on). While ethics is essential to the conduct of business, there’s no particular reason to think that ethics in business must be exactly the same as ethics in other realms. And the behaviour of Facebook in this case was entirely consistent with the demands of business ethics.

A true leader would rename the Washington R*dskins right away

A leader has to be able to do hard things, including, perhaps especially, leading his or her organization through difficult changes. Indeed, many leadership scholars regard that as the key difference between the science of managing and the art of leading. Lots of people may be able to manage an organization competently in pursuit of well-established goals. Fewer can lead an organization when hard changes need to be made. And in the case of Daniel Snyder, the owner of a certain football team whose home base is Washington, DC, one of those hard changes should be to get on with it and change his team’s name.

Snyder has faced a groundswell of criticism over his team’s continued use of the “R*dskins” moniker. There have been vows to boycott the team and its paraphernalia. A growing list of media outlets have even vowed no longer to use the team’s current name in their coverage of the team. There’s even a Wikipedia page detailing the ethical debate over what many take to be an offensive, even racist name.

And if Snyder is going to change the team’s name (something he’s given no indication he is inclined to do), it needn’t be just because he’s worried about offending people. Two professors from Emory University have argued that there’s a good business argument for changing the team’s name. In particular, their analysis suggests that the name is bad for brand equity. “Elementary principles of brand management,” they state, “suggest dropping the team name.”

The U.S. Patent and Trademark Office has even entered the fray by canceling the team’s trademark registration. The PTO has rules, it seems, against trademarking racial slurs. This doesn’t mean that the team has to change its name, but it surely helps to devalue the brand and promises to reduce income from merchandising.

The whole sorry mess has the feeling of inevitability about it. The name can’t stay forever. The tide of history—and sound ethical reasoning—is against Snyder on this one. Snyder is an employer, most of whose employees are members of a historically-disadvantaged group. It is unseemly at best to resist so adamantly the pleas of members of another historically-disadvantaged group that he stop making money from a brand that adds insult to injury.

It is time for Daniel Snyder to act like a leader, to do the hard thing—the honourable thing—and change that name.

Anti-Homeless Spikes: Within Your Rights, but Wrong

Controversy has arisen recently regarding the installation of anti-homeless spikes on sidewalks. Spikes of various descriptions have reportedly been installed, for example, in the pavement outside an apartment building in London and a commercial building in Montreal. No doubt there are other examples of the use of such spikes. They are presumably intended to stop certain people — to wit, the homeless — from sitting, lounging, or sleeping in those locations. Outrage has naturally ensued. Advocates for the homeless criticized the spikes as a cynical, heartless approach to the problem of homelessness.

This example nicely illustrates the difference between having a right to do something, and it being right to do it. On one hand, property owners have the right to exclude anyone they want to exclude. In general, no one is obligated to share their property with random strangers.

So the owners of apartment buildings or commercial properties are within their rights (assuming they’ve installed these spikes on their own property, and not, say, on a public sidewalk). They are operating, in other words, within the limits of the set of conventions and legal protections that insist that we respect each other’s entitlement to control access to our stuff.

But being within their rights doesn’t imply that the owners are doing what’s right.

As philosopher Jason Brennan points out, even libertarians — and libertarians are, shall we say, fond of property rights — do not regard property rights as absolute. Property rights are important, and our society is predicated on the basic assumption that each of us has the right to control the stuff we own, to do what we want with it and to invite onto our property those we want to invite and to exclude those we want to exclude. But — to borrow Brennan’s example — if I need to step onto your lawn to avoid being hit by an oncoming car, I am ethically justified in doing so, despite the fact that I am thereby invading your space, your property.

And even if property rights were absolute, it would still sometimes be the right thing to do, ethically, to allow other people access to your property. One set of conditions under which allowing people access would be the right thing to do consists in situations in which the other person is in desperate need and lacks real alternatives, and in which allowing them access to your property doesn’t diminish your own enjoyment of your property in a meaningful way.

So even if property owners are within their rights to install anti-homeless spikes, they may be wrong to do so. But the fact that these property owners may not be doing right — the fact that they may, in other words, be acting immorally — doesn’t immediately license others to do anything about it. Such wrongdoing certainly doesn’t, for example, justify vigilante efforts on the part of private citizens to cover the spikes with fresh cement. If spikes are the “wrong solution” to homelessness, then vandalism is also the wrong solution to the spikes.

Nor does such wrongdoing warrant government action. The fact that you (or someone else, or all of us) find something morally abhorrent doesn’t automatically justify calling for government intervention. Consider, for example, the outrage over Canada’s new anti-prostitution law, which attempts to (re)criminalize behaviour more or less simply because some people think it morally reprehensible. Critics have rightly called the legislation wrong-headed. (A sane law would try to minimize dangers, but without criminalizing the behaviour of consenting adults.)

Taking ethics seriously isn’t simply about passionately insisting on ethical behaviour. It means a commitment to learning better and more subtle ways of thinking and talking about ethics. And that is especially important, perhaps, when the behaviour in question pushes our buttons emotionally.

Bribery: Ethical Failure and Competitive Failure

Last week saw the sentencing of Nazir Karigar to 3 years in jail, under Canada’s Corruption of Foreign Public Officials Act. This week, the RCMP have charged two Americans and one British businessman, demonstrating the force’s willingness to extend its reach to non-Canadians in its efforts to combat corruption. The three, all working with one branch or another of a company called Cryptometrics, are accused of having joined with Karigar in a failed attempt to bribe officials at Air India in order to land a contract to provide security to the airline.

Two words ought to stand out from that last bit, for anyone contemplating engaging in bribery: “failed attempt.” Karigar and his accused co-conspirators are in hot water for an act of bribery that didn’t even work. They didn’t get the contract. That, of course, is one of the big problems with bribery as a business strategy. It doesn’t always work. You may drop an envelope full of cash on a foreign official’s desk, without knowing that someone else has already dropped off an even fatter envelope. And given that bribery is illegal everywhere — even in places where it is reputed to be common — it’s not like you can go complaining to the police that you’ve been cheated. It’s really an extreme case of buyer beware.

The other words that should be front and centre, of course, are “jail time.” Karigar got jail time, and it seems likely that if these latest charges stick, prosecutors will be seeking jail time again. There’s no slap on the wrist here. No mere financial penalty levied against the faceless corporation involved. People who do business overseas have got to get it into their heads that anti-corruption laws are serious.

But back to the question of failure. Every time I hear about a case of bribery, I can’t help but think of failure. More specifically, it always seems to me that if you’re resorting to bribery, you’re essentially admitting failure. You’re opting to cheat because you know you can’t compete fairly. Your product isn’t good enough. Your marketing isn’t sharp enough. You’re not smart enough.

Yes, I know that won’t always be the case. I’m sure there are places where bribery is common enough that you “have to” engage in bribery to compete, and where the fact that you can’t do business honestly really isn’t your fault. But I think our presumption should favour honest business. There’s a difference between saying you lose some business by avoiding bribery and saying that you simply can’t survive without it. There are lots of honest businesses out there, getting by without bribery even in fairly corrupt markets. So we should presume in favour of honesty. We should presume that bribery just implies that you’re not good enough to compete fairly, on the strength of the services you provide or the product you make. It’s the best presumption, ethically, and it’s very likely just plain true.
