Archive for the ‘technology’ Category
Innovation is a hot topic these days. It’s been the subject of studies, reports, and news coverage. In fact, I spent the entire day this past Monday at the Conference Board of Canada’s “Business Innovation Summit,” listening to business leaders and civil servants talk about how Canada is lagging on innovation, and how much is left to be done to promote and manage innovation. And certainly technological innovations like Google’s new glasses and 3D printing make for compelling headlines.
So sure, hot topic. But how is it connected to ethics? What is an ethics professor like me doing at an event dedicated to innovation?
If you understand the domain of ethics properly, the connection is clear. In point of fact, innovation is an ethical matter through and through, because ethics is fundamentally concerned with anything that can promote or hinder human wellbeing. So ethics is relevant to assessing the goals of innovation, to the process by which it is carried out, and to evaluating its outcomes.
Let’s start with goals. Innovation is generally a good thing, ethically, because it is aimed at allowing us to do new and desirable things. Most typically, that gets expressed in the painfully vague ambition to ‘raise productivity.’ Accelerating our rate of innovation is a worthy policy objective because we want to be more productive as a society, to increase our social ‘wealth’ in the broadest sense. The 20th Century saw a phenomenal burst of innovation and increases in wellbeing, exemplified not least by the fact that life expectancies in North America have risen by more than half over the last hundred years. The extension and enriching of human lives are good goals, which in turn makes innovation generally a good thing.
Indeed, when looked at that way, innovation isn’t just a ‘good,’ but a downright moral obligation. Yes, lives for (most) people in developed countries are pretty good. But many still don’t have happy and fulfilling lives; many children, even here, still go to bed hungry. Boosting productivity through innovation is a key ingredient for making progress in that regard. And if less developed nations are going to be raised up to even a minimally tolerable standard of living, we need innovations that will help them, and we need innovations that will make us wealthy enough that we can afford to be substantially more generous toward them than we currently are.
Which brings us to ethical evaluation of the specific fruits of innovation. Some innovations are plainly good: they make human lives better in concrete ways. Penicillin was a very good innovation. So was the birth control pill. So was the advent of the smartphone. Other innovations are less good: nuclear weapons are a clear candidate here, as perhaps are complex financial instruments such as derivatives, which Warren Buffett famously referred to as “financial weapons of mass destruction.”
The problem, of course, is that innovation brings risks. Some of those risks are borne by the innovator, the entrepreneur; others are borne by society. For one thing, we often don’t fully understand which category a particular innovation will end up in until years later. Is the net benefit of splitting the atom positive or negative? The jury is still out.
But ethical evaluation doesn’t just apply to individual innovations: systems of innovation bring a mix of risks and benefits. If we set ten thousand entrepreneurs loose on the world, and tell them (or incentivize them) to make something innovative that sells, some will bring us the proverbial ‘better mouse trap,’ and others will bring us video lottery terminals, biological weapons, and other bits of detritus that only serve to increase human suffering. If you give your tech company’s R&D department free rein, someone may invent the next ‘killer app,’ and someone else may simply crash your server. And probably the only way a system can preclude ‘negative’ innovation is to discourage innovation altogether.
Hence the recent interest not just in innovation, but in managing innovation. The notion of managing innovation reflects the fact that innovation can be fostered — doing so is an obligation of ethical leadership — and is an activity rooted in creativity, not anarchy. So for practical purposes, the ethics of innovation ends up being a branch of the ethics of management and leadership. Organizations, from small teams to nations, face a range of ethical questions as a result. They need to figure out how much to spend on encouraging innovation, as compared to spending on existing programs. They need to figure out what combination of carrots and sticks to use to foster innovation. They need to figure out how much autonomy to give potential innovators, how much freedom to experiment. And finally, they need to figure out how to spread the risk of innovation, in order to make sure that risks and benefits are shared fairly, and to make sure that fear of risk doesn’t dampen our appetite for innovation. And all of those are fundamentally ethical questions.
Once again, the pharmaceutical industry is under attack, and once again it is for all the wrong reasons.
The problem this time is this: many of the new generation of blockbuster drugs are jaw-droppingly expensive, costing tens of thousands of dollars per patient per year or even per treatment. Part of the reason is that many of them are from a category of drugs known as “biologics.” Such drugs aren’t made with old-fashioned chemistry, but are instead produced inside living cells, typically genetically modified ones, inside giant vats known as bio-reactors. It’s an expensive new technology. And the big biotech firms that make these drugs are not fond of competition.
According to the New York Times, “Two companies, Amgen and Genentech, are proposing bills that would restrict the ability of pharmacists to substitute generic versions of biological drugs for brand name products.”
The companies claim they’re just trying to protect consumers. The generic versions, they argue, are typically similar, but not identical, to the originals. These aren’t simple drugs like Aspirin or the blood thinner Coumadin. These are highly complex molecules, and the worry is that even slight differences in the manufacturing process could lead to problematic differences in form and function.
The makers of generics, for their part, acknowledge that worry, and say they’re fine with pharmacists limiting substitution to cases in which the Food and Drug Administration has declared two drugs to be interchangeable. But they oppose any further restrictions, including ones that might be imposed at the state level and for which the name-brand manufacturers are lobbying mightily.
What are we to say, ethically, about efforts by name-brand manufacturers to limit competition and thereby keep prices and profits high? Is it wrong of them to do this in a context in which health spending is out of control, and in which patients can die from being unable to afford a life-saving drug?
But as strange as this may seem, there is arguably nothing wrong with pharma behaviour that harms patients and strains private and public healthcare budgets. They aren’t responsible for the fact that people get sick, and they’re not (usually!) responsible for the decisions made by governments or by insurance companies. A lot of the behaviour on the part of pharma that people complain about is no more wrongful than the behaviour of the woman who invents a better mousetrap, thereby putting employees of the less-good mousetrap maker out of business. Innovative, competitive behaviour is good in the long run, but net social benefit is consistent with less-good outcomes for some.
The real sin, here, isn’t against consumers or governments, but against the market itself.
Markets, and the businesses that populate them, can only promise to be socially beneficial when there is competition. When governments move to foster competition, businesses that profess to believe in free markets cannot rightly cajole governments to do otherwise. The same goes for using lobbyists to encourage government to make a market less competitive. After all, playing by the rules of the game is the fundamental obligation of business. But when it comes to changing the rules of the game, we have to look to the limits implied by the spirit of the game. That’s where pharma is going astray here. Using government to limit competition isn’t just bad ethics; it’s bad capitalism.
When new technology puts sweatshop labourers out of work, is that a good thing or a bad thing? It’s not an entirely hypothetical question.
Here’s the story, from Fast Company: Nike’s New Thermo-Molded Sneakers Are Like Sculptures For Your Feet
The classic Air Force 1, Dunk, and Air Max 90 Nike shoes get the Vac Tech treatment–a thermo-molding technique that produces one-piece, stitch-free sneakers.
As a centerpiece for the holiday season, Nike Sportswear has released three of its most venerable brands–the Air Force 1, Dunk, and Air Max 90–constructed using a thermo-molding technique, a kind of vacuum compression method that allows the shoe to be held together without any noticeable seams or stitching. The Nike Dunk VT, above, basically recreates the familiar silhouette of the original design as sculpture around your feet.
Now presumably — though details are sketchy — the lack of stitching will mean these babies will be cranked out by machines, rather than assembled by hand by underpaid people in underdeveloped nations. Critics who think there’s no such thing as a good sweatshop should rejoice. But will sweatshop workers be so happy?
I hasten to add that the word “sweatshop” in its most pejorative sense doesn’t really apply to Nike. Nike, once villainized for having its shoes made by poorly-paid workers working under appalling conditions, is now widely recognized as a garment-industry leader in terms of labour standards. But that’s not to say that a job in a factory that makes Nike shoes is peachy. It’s still a hard life, by western standards. So is it good, or bad, for such labourers if a machine is developed that makes their services redundant?
As I’ve pointed out before, the workers vs machines conflict is, in the grand scheme of things, a false one. Machines can make workers more efficient (and hence valuable), can save humans from dangerous tasks, and can improve net social productivity in a way that stands to benefit literally everyone, in the long run.
But such generalizations don’t obviate the fact that there are some cases in which a new technology comes along and puts you out of work.
Unemployment is bad. Sweatshop jobs are bad. So do we celebrate or mourn when someone with a sweatshop job is put out of work? And is this a matter of choosing the lesser of two evils? Or the greater of two goods? And what does our answer to that question imply about the ethics of buying products made in the sweatshop jobs that remain?
Facebook users should keep complaining, complaining bitterly, complaining in every possible forum.
Oddly, for all the controversy over Facebook implementing yet another round of changes to its layout and user experience, that controversy has almost been drowned out by arguments over whether it’s appropriate for users to complain about Facebook. Yes, the burning debate among users is over whether there should be a burning debate among users.
Much of the force of the “stop complaining!” camp is rooted in the claim that, hey, after all, it’s a free service and no one’s forcing you to use it anyway. But contrary to what you might have heard, Facebook isn’t optional, and it isn’t free. Let me explain.
First, let’s talk price. Lots of people have already pointed out that while Facebook doesn’t charge users for an account, that doesn’t mean it’s free. The service is supported by advertising, just like TV shows have been since the days of early soap operas. So you are “paying” to use Facebook — you’re paying with your eyeballs. You’re paying with attention, however fleeting, to those ads along the side of the page. And — the more worrying fact — you’re paying with your privacy, as Facebook uses what seem to be increasingly ornate ways to gather information about you, your preferences, and your web-surfing habits. As the saying goes, there’s no such thing as a free lunch. Facebook isn’t an exception.
Think of it this way: Facebook is like a gas-station bathroom. It might be “free”, but that doesn’t mean that quality doesn’t matter. In both cases, the “free” service being offered is there as an inducement. In the gas station’s case, it’s an inducement to stop there for gas (and increasingly for snacks, magazines, etc.). In Facebook’s case, being able to post stuff for “free” for your friends to see is an inducement to look at those ads, and to share your web-surfing habits with that advertising agency. So they have reason to want you to be satisfied, and you have every right to demand excellence in return for your attention.
Second, is Facebook optional? Whether a product is optional or not matters, ethically, because when a product is truly optional, customers can simply exit the relationship, either buying the product from someone else or not buying it at all. Given the option to exit, the dispute between producer and consumer evaporates as the two simply agree to disagree and go their separate ways. (The classic source on this is Albert O. Hirschman’s book, Exit, Voice, and Loyalty.) But Facebook isn’t optional. Ok, I know. Strictly speaking yes it’s optional. But then, so is email, or having a telephone, or having a car. Optional but, for many of us, functionally essential. In this regard, Facebook is a victim of its own success. It has no real competition, and the service is one that many of us cannot simply walk away from. In essence, Facebook has gained a virtual monopoly on what has become part of our social infrastructure. Complaining about Facebook is no sillier than complaining about the state of your local roads or the consistency of your supply of electricity.
So if you don’t like Facebook’s new layout, or if you don’t like Facebook’s approach to privacy, do not hesitate to complain. You’re well within your rights. And if Facebook listens, you might just help make the on-line world a better place.
Stem cell science is pretty sexy. And as the saying goes, “sex sells.” And if something sells, someone is liable to make a buck off it, whether it’s right to do so or not.
See this opinion piece (in The Scientist) by Zubin Master and David B. Resnik: Reforming Stem Cell Tourism.
As with many new areas of technological advancements, stem cell research has received its fair share of hype. Though much of the excitement is warranted, and the potential of stem cells promising, many have used that hype for their own monetary gain. … Young and elderly patients have died from receiving illegitimate stem cell treatments; others have developed tumors following stem cell transplantations….
Master and Resnik point to the need for patient education, and to the limits of international guidelines, but their main focus is on the ethical responsibilities of scientists — including the responsibility not to cooperate in various indirect ways with unscrupulous colleagues. (It is very, very hard to do clinical science in a vacuum, and so isolating unscrupulous scientists may be one way to put them out of business.)
But it’s important to point out that this is as much a story of business ethics as it is of scientific ethics. The unscrupulous individuals preying upon the sick aren’t doing it for free. What these clinics are doing is committing fraud, and endangering their customers in the process.
Now there’s nothing ethically subtle about that. You don’t need a Ph.D. in philosophy to know that fraud is bad. But there’s another, subtler, issue here, namely an underlying theme about the general lack of scientific literacy on the part of consumers and the ability of business to use it to their advantage. Companies of all kinds can do a lot of good in the world by promoting scientific literacy, and by being scrupulously careful about having the facts straight when they present their products to consumers and tell them, “this works.”
Now of course, we’re never going to prevent such behaviour entirely. As long as there are desperate people in the world, there will be snake-oil salesmen eager to make a buck from their misery. But as Master and Resnik suggest, that doesn’t mean we shouldn’t try.
The intersection of social media with social unrest is a massive topic these days. Twitter has been credited with playing an important role in coordinating the pro-democracy protests in Egypt, and Facebook played a role in helping police track down culprits after the Vancouver hockey riots.
But the mostly-unstated truth behind these “technologies of the people” is that they are corporate technologies, ones developed, fostered, and controlled by companies. That means power for those companies. And, as the saying goes, with great power comes great responsibility.
Fast-forward to early August 2011. London is burning, and the riots have spread to a couple of other major UK cities. The British government has called in a few thousand extra cops. And again, social media is playing a role. But this time the focus is specifically on Research in Motion’s (RIM’s) BlackBerry, and its use as a social networking tool. There have been all kinds of reports that the BlackBerry’s “BBM” messaging has been the tool of choice for coordination among London’s rioters. RIM is probably asking itself right now whether it’s really true that ‘there’s no such thing as bad publicity.’
Distancing itself from its role in the “BlackBerry Riots,” RIM issued (via Twitter) the following:
We feel for those impacted by the riots in London. We have engaged with the authorities to assist in any way we can.
The “in any way we can” part is intriguing. So, what can, and what should, RIM do? One thing they can do is to help authorities identify those inciting violence by breaking through the security of the BBM messages. But as reported here, “RIM refused to say exactly how much information it would be sharing with police.” The other, much more dramatic, thing that RIM could do would be to temporarily shut down all or part of its network. Whether that would be at all useful is open to question. It would certainly make a lot of people angry, including millions of people who are not involved in the riots, or who are relying on their BlackBerries to keep in touch with loved ones during this crisis. But I point out this option just to illustrate the breadth of options open to RIM.
The question is complicated by issues of precedent. Tech companies have come under fire for assisting governments in, for example, China, to crack down on dissidents. Of course, the UK government isn’t anything like China’s repressive regime. But at least some people are pointing to underlying social unrest, unemployment etc., in the UK as part of the reason — if not justification — for the riots. And besides, even if it’s clear that the UK riots are unjustifiable and that the UK government is a decent one, companies like RIM are global companies, engaged in a whole spectrum of social and political settings, ones that will stubbornly refuse to be categorized. Should a tech company help a repressive regime stifle peaceful protest? No. Should a tech company help a good and just government fight crime? Yes. But with regard to governments, as with regard to social unrest, there’s much more grey in the world than black and white.
Last weekend, a despicable “hashtag” trended* on Twitter, one promoting the idea that violence against women is OK. By Sunday morning, tweets using that hashtag were mostly critical ones, expressing outrage at any non-critical use of the hashtag. One prominent twitterer, Peter Daou (@peterdaou), asked why Twitter wasn’t preventing that hashtag from trending. He tweeted:
“Unbelievable: Is Twitter REALLY allowing #reasonstobeatyourgirlfriend to be a trending topic??!”
The outrage expressed by Daou and others is entirely appropriate. The hashtag in question is utterly contemptible. But the question of whether Twitter should censor it and prevent it from trending is another question altogether.
The central argument in favour of censorship is that the idea being broadcast is an evil one, and decision-makers at Twitter are in a clear position to stifle the spread of that evil idea, or instead to allow its proliferation. With great power comes great responsibility.
The most obvious reason against censorship is freedom of speech, combined with the slippery slope argument: if Twitter is going to start censoring ideas, where will it end? Freedom of speech is an important right, and that right includes the right to speak immoral ideas. Limits should only be imposed with great caution.
Now, it’s worth noting that the hashtag trending isn’t actually anyone’s speech: it’s the aggregate result of thousands of individual decisions to tweet using that hashtag. So if Twitter were, hypothetically, to censor the results of their trending-detection algorithm, they wouldn’t actually be censoring anyone, just preventing the automated publicizing of a statistic. But perhaps that’s a philosophical nicety, one obscuring the basic point that there is danger anytime the powerful act to prevent a message from being heard.
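To see why a trend isn’t really anyone’s speech, it helps to see how small the underlying mechanism could be. The sketch below is purely illustrative (Twitter’s actual trending algorithm is proprietary and far more sophisticated, and every threshold here is an assumption of mine); the point is only that a “trend” is a statistic computed over thousands of independent tweets, not a message any one person sent:

```python
from collections import Counter

def trending(recent_tags, baseline_counts, min_count=3, ratio=2.0):
    """Flag hashtags that are both non-trivially popular and well above
    their historical baseline -- roughly, 'novel and popular'."""
    recent = Counter(recent_tags)  # each tweet's hashtag is one 'vote'
    hits = []
    for tag, n in recent.items():
        base = baseline_counts.get(tag, 1)  # unseen tags count as novel
        if n >= min_count and n / base >= ratio:
            hits.append(tag)
    # Most-tweeted first
    return sorted(hits, key=lambda t: recent[t], reverse=True)

recent = ["#newtag"] * 5 + ["#oldtag"] * 4
baseline = {"#oldtag": 10}  # historically common, so not 'novel'
print(trending(recent, baseline))  # only #newtag qualifies
```

No single input to a function like this is a statement that “#newtag should trend”; the trend emerges from the aggregate, which is what makes suppressing it feel different from censoring a speaker.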
More importantly, perhaps, Twitter isn’t a government, it’s a company, and it doesn’t owe anyone the use of its technology to broadcast stupid ideas (or any other ideas, for that matter). We insist that governments carefully avoid censorship because governments are powerful and because for all intents and purposes we cannot opt out of their services as a whole. If a company doesn’t want to broadcast your idea, it’s not morally required to. Your local paper, for instance, isn’t obligated to publish your Letter to the Editor. The right to free speech isn’t the right to be handed a megaphone.
But then the challenging question arises: is Twitter a tool or a social institution? Just how much like a government is Twitter, in the relevant sense? It is, after all, in control of what many of us regard as a kind of critical infrastructure. This is a challenge faced by many ubiquitous info-tech companies, including Twitter, Facebook and Google. While their services are, in principle, strictly optional — no one is forced to use them — for many of us going without them is very nearly unthinkable. We are not just users of Twitter, but citizens. That perspective doesn’t tell us whether it’s OK for Twitter to engage in censorship, but it does put a different spin on the question.
*The fact that it was “trending” on Twitter means that Twitter’s algorithm had identified it as, roughly, a “novel and popular” topic in recent tweets. Trending topics are featured prominently on Twitter’s main page.
A recent item in the NY Times dealt with the fact that many companies these days seem relatively reluctant to invest in new employees, but comparatively willing to invest in new machinery. The evidence for that is mostly anecdotal, but interesting none the less.
Here’s the story, by Catherine Rampell: Companies Spend on Equipment, Not Workers
Companies that are looking for a good deal aren’t seeing one in new workers.
Workers are getting more expensive while equipment is getting cheaper, and the combination is encouraging companies to spend on machines rather than people….
The story gives the distinct impression that the issue here is not just one of machines or people; it’s about machines versus people, and machines are clearly winning the hearts and minds of employers these days. On the face of it, that sounds bad. Workers — people — matter, from a moral point of view, and machines don’t. So, other things being equal, it is better to spend money on doing something good for people (e.g., providing someone with a job) than it is to spend money on mere machines.
But two perhaps-not-obvious points need to be made, here.
The first point is that even when employers choose to purchase machines instead of hiring employees, that needn’t be a bad thing socially, nor bad for labour as a group. Machinery tends to boost productivity, and boosting productivity boosts wealth, so from a social point of view (including from the point of view of blue-collar workers) it is good when companies invest in machinery. Even if machines displace workers in a given industry, that needn’t spell trouble for workers as a class. In the early 19th Century, Luddites destroyed mechanized looms in a vain attempt to forestall the effect of the industrial revolution on employment patterns in the textile industry. And yet, in the long run, the industrial revolution did nothing to worsen the lot of labourers. Indeed, it ushered in an era of prosperity that made the lot of labourers as a whole vastly better. To be sure, changes in technology result in unemployment in the particular sectors in which new technologies are introduced. But that tends to be a temporary problem. The standard Econ 101 example is transportation. The advent of the automobile surely resulted in some unemployment among those who had formerly worked in the horse-and-buggy industry. But, in the long run, those workers eventually found jobs in the auto industry, and were no worse off. And so on.
The second point is that, even if we focus on the employees of a particular organization, labour and machines are not always (and maybe not even often) in competition. Machines and tools can make employees’ lives better, and in those cases, certainly, spending money on machines and tools is a good thing. The most obvious case is when the equipment purchased is, say, safety equipment, or when the machines purchased are ones with additional safety features or features that make work less back-breaking.
But purchase of equipment can also be good in another way. Machines and tools of various kinds can make labour more productive, and more productive labour is more valuable. Not everyone realizes that the productivity of labour — the amount of goods that can be turned out per hour of a worker’s time — varies vastly across the globe. An hour of an American worker’s labour, for example, produces far more output than an hour of a Chinese worker’s labour. And the reason has little to nothing to do with differences in work ethic or intelligence or talent. The difference lies in differences in access to tools, and in organizational and managerial strategies. So investing in better equipment can be a way of investing in the productivity of your workers.
Of course, past some threshold, when labour is more productive, employers may decide they need less of it. The most famous example of this is in farming, where one man with a big tractor now often does the work that a dozen men might have done in years gone by. But the devil is in the details. We should at least recognize that investment in machinery is not automatically contrary to the interests of labour.
Innovation is a hot topic these days, and has been an important buzzword in business for some time. As Simon Johnson and James Kwak point out in their book, 13 Bankers: The Wall Street Takeover and the Next Financial Meltdown, innovation is almost by definition taken to be a good thing. But, they also point out, it’s far from obvious that innovation is in fact always good. They focus especially on financial innovation, which they say has in at least some instances led to financial instruments that are too complex for purchasers to really understand. Innovation in the area of finance — often lionized as crucial to rendering markets more efficient and hence as a key driver of social wealth — is actually subject to ethical criticism, or at least caution. And the worry is not just that particular innovations in this area have been problematic. The worry is that the pace of innovation has made it hard for regulators, investors, and ratings agencies to keep up.
In what other cases is “innovation” bad, or at least suspect? One other example of an area in which innovation might be worrisome is in advertising. Consider the changes in advertising over the last 100 years. Not only have new media emerged, but so have new methods, new ways of grabbing consumers’ attention. Not all of those innovations have been benign. When innovative methods have been manipulative — subliminal advertising is a key example — they’ve been subject to ethical critique.
Some people would also add the design and manufacture of weaponry to the list. But then, almost all innovations by arms manufacturers have some legitimate use. Landmines and cluster bombs are controversial, largely because of their tendency to do too much “collateral damage” (i.e., to kill civilians). But they do both have legitimate military uses. So it’s debatable whether the innovation, itself, is bad, instead of just the particular use of the innovation.
Are there other realms in which innovation, generally taken to be a good thing, is actually worrisome? One caveat: the challenge, here, is to point out problematic fields of innovation without merely sounding like a Luddite.
The buzz over the appearance by IBM’s computer, Watson, on Jeopardy last week has me thinking about the capacities of computers.
Could a computer run a company, and if so what would we want to say about the ethical constraints on such a company? Well, one obvious worry is that ethics requires exercising judgment. Stanley Fish, in an editorial in the NY Times a couple of days ago (“What Did Watson the Computer Do?”), argues that what computers (from laptops on up through to Watson) are very good at is following rules. What they’re bad at, Fish points out, is adapting to new situations and figuring out whether the current situation is a valid exception to the rule.
So, let’s imagine a corporation without humans. It’s not science fiction, and it’s not far-fetched. I don’t know of any in operation today, but they’re certainly possible. There are some corporations today that, while they currently do have significant human personnel, could likely survive and continue to generate revenue for at least several days without human intervention. For example, basically any company that sells a product that can be bought and shipped via the Internet, such as ebooks or music files, can operate for at least a while without humans. (If you’re skeptical about that, please accept it for now, for the sake of argument.)
So imagine a guy named Dave sets up a company selling audio books. He builds a website, which allows customers to search, find the books they want, pay online, and receive the audio book as a download. Maybe he has a web-roaming software ‘bot looking around the web to find out which print books are popular enough for his online store to feature, and maybe even a decent piece of text-to-voice software to generate the voice files, without the need for human input.
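For concreteness, here is a toy sketch of how little such a pipeline requires. Every function below is a hypothetical placeholder of my own invention, stubbed out so the whole loop can run with no human (and no real web service) involved:

```python
# A toy sketch of Dave's human-free audiobook company. All names and
# return values are illustrative stubs, not real services.

def find_popular_titles():
    # Stand-in for the web-roaming 'bot that scouts out popular print books.
    return ["A Popular Novel", "A Business Bestseller"]

def text_to_speech(title):
    # Stand-in for the text-to-voice software; returns a pretend audio file.
    return title + ".mp3"

def process_order(catalogue, title, payment_ok):
    # The storefront itself: verify payment, then deliver the download.
    if payment_ok and title in catalogue:
        return catalogue[title]
    return None  # decline the order

# Build the catalogue and serve an order, end to end, with no human input.
catalogue = {t: text_to_speech(t) for t in find_popular_titles()}
print(process_order(catalogue, "A Popular Novel", payment_ok=True))
```

The interesting point is that nothing in this loop needs a person in it once it’s running, which is exactly what raises the question of who, if anyone, is answerable for what it does.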
Now, as long as Dave is around, monitoring the system, we’re likely to say that Dave “is” the company, and the computer is a tool he uses. And any ethical questions about the company’s conduct should be addressed to Dave. But what if Dave dies? The computer system would keep on chugging along, making money (barring failures of hardware or software). What ethical questions does such an autonomous electronic corporation pose? If the computer harms no one, and violates no rights, is it acting “ethically”, or does that notion require the kind of judgment that Fish says is impossible for computers? Would this robo-corporation have ethical obligations, or is the very idea of a non-human construct having ethical obligations nonsense? And if it’s nonsense, then does it make sense for corporations to have obligations, or are a corporation’s obligations merely the obligations of the persons that make it work?