Ethical Constraints on a Corporation Without Humans
The buzz over the appearance of IBM’s computer, Watson, on Jeopardy! last week has me thinking about the capacities of computers.
Could a computer run a company, and if so, what would we want to say about the ethical constraints on such a company? Well, one obvious worry is that ethics requires exercising judgment. Stanley Fish, in an editorial in the NY Times a couple of days ago (“What Did Watson the Computer Do?”), argues that what computers (from laptops on up through Watson) are very good at is following rules. What they’re bad at, Fish points out, is adapting to new situations and figuring out whether the current situation is a valid exception to the rule.
So, let’s imagine a corporation without humans. It’s not science fiction, and it’s not far-fetched. I don’t know of any in operation today, but they’re certainly possible. Some corporations today, though they currently have significant human personnel, could likely survive and continue to generate revenue for at least several days without human intervention. For example, basically any company that sells a product that can be bought and delivered via the Internet, such as ebooks or music files, could operate for at least a while without humans. (If you’re skeptical about that, please accept it for now, for the sake of argument.)
So imagine a guy named Dave sets up a company selling audiobooks. He builds a website that allows customers to search, find the books they want, pay online, and receive the audiobook as a download. Maybe he has a web-roaming software ’bot looking around the web to find out which print books are popular enough for his online store to feature, and maybe even a decent piece of text-to-speech software to generate the voice files, without the need for human input.
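To make the thought experiment concrete, here’s a minimal sketch of what Dave’s setup might look like as a single program. Everything in it is hypothetical: the function names (find_popular_titles, synthesize_audiobook, serve_storefront) are placeholders I’ve invented for the scraper, text-to-speech engine, and storefront the scenario imagines, not real libraries or services.

```python
import time

# A toy version of Dave's company. Every function below is a made-up
# stand-in; a real version would call a web scraper, a text-to-speech
# engine, and a payment/download service.

def find_popular_titles():
    """The web-roaming 'bot: decide which print books to feature."""
    return ["Moby-Dick", "Pride and Prejudice"]  # placeholder data

def synthesize_audiobook(title):
    """The text-to-speech step: render the book's text as a voice file."""
    return f"{title}.mp3"  # pretend an audio file was generated

def serve_storefront(catalog):
    """The website step: list titles, take payment, deliver the download."""
    for title, audio_file in catalog.items():
        print(f"For sale: {title} -> {audio_file}")

def run_company():
    """The whole company as one loop, with no human anywhere in it."""
    catalog = {}
    while True:
        for title in find_popular_titles():
            if title not in catalog:
                catalog[title] = synthesize_audiobook(title)
        serve_storefront(catalog)
        time.sleep(86400)  # sleep a day, then do it all again

if __name__ == "__main__":
    run_company()
```

The loop is the point: once it’s running, nothing in it waits on a person. Dave can walk away, and the catalog keeps growing and the downloads keep flowing.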
Now, as long as Dave is around, monitoring the system, we’re likely to say that Dave “is” the company, and the computer is a tool he uses. And any ethical questions about the company’s conduct should be addressed to Dave. But what if Dave dies? The computer system would keep on chugging along, making money (barring failures of hardware or software). What ethical questions does such an autonomous electronic corporation pose? If the computer harms no one, and violates no rights, is it acting “ethically”, or does that notion require the kind of judgment that Fish says is impossible for computers? Would this robo-corporation have ethical obligations, or is the very idea of a non-human construct having ethical obligations nonsense? And if it’s nonsense, then does it make sense for corporations to have obligations, or are a corporation’s obligations merely the obligations of the persons who make it work?