Ethical Constraints on a Corporation Without Humans

The buzz over the appearance by IBM’s computer, Watson, on Jeopardy! last week has me thinking about the capacities of computers.

Could a computer run a company, and if so, what would we want to say about the ethical constraints on such a company? Well, one obvious worry is that ethics requires exercising judgment. Stanley Fish, in an editorial in the NY Times a couple of days ago (“What Did Watson the Computer Do?”), argues that what computers (from laptops on up through to Watson) are very good at is following rules. What they’re bad at, Fish points out, is adapting to new situations and figuring out whether the current situation is a valid exception to the rule.

So, let’s imagine a corporation without humans. It’s not science fiction, and it’s not far-fetched. I don’t know of any in operation today, but they’re certainly possible. There are some corporations today that, while they currently do have significant human personnel, could likely survive and continue to generate revenue for at least several days without human intervention. For example, basically any company that sells a product that can be bought and shipped via the Internet, such as ebooks or music files, can operate for at least a while without humans. (If you’re skeptical about that, please accept it for now, for the sake of argument.)

So imagine a guy named Dave sets up a company selling audio books. He builds a website, which allows customers to search, find the books they want, pay online, and receive the audio book as a download. Maybe he has a web-roaming software ‘bot looking around the web to find out which print books are popular enough for his online store to feature, and maybe even a decent piece of text-to-voice software to generate the voice files, without the need for human input.
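To make the thought experiment vivid, here is a minimal sketch of the kind of fully automated storefront Dave might wire up. Every name and piece of logic below is invented for illustration; a real system would involve actual web scraping, text-to-speech, and payment APIs:

```python
# Toy model of Dave's human-free audiobook store.
# All functions are hypothetical stand-ins, not real services.

def find_trending_titles(catalog):
    """Stand-in for the web-roaming bot: feature titles above a popularity cutoff."""
    return [title for title, popularity in catalog.items() if popularity >= 0.8]

def synthesize_audio(title):
    """Stand-in for text-to-speech: just fabricates a download file name."""
    return title.lower().replace(" ", "_") + ".mp3"

def fulfill_order(title, price, ledger):
    """Charge the customer and deliver the download, with no human in the loop."""
    audio_file = synthesize_audio(title)
    ledger.append((title, price))  # revenue accrues whether or not Dave is alive
    return audio_file

catalog = {"Moby Dick": 0.9, "Obscure Memoir": 0.2}
ledger = []
featured = find_trending_titles(catalog)
download = fulfill_order(featured[0], 9.99, ledger)
```

The point of the sketch is that nothing in the loop asks for a human decision: selection, production, sale, and delivery are all rule-following of exactly the kind Fish says computers excel at.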

Now, as long as Dave is around, monitoring the system, we’re likely to say that Dave “is” the company, and the computer is a tool he uses. And any ethical questions about the company’s conduct should be addressed to Dave. But what if Dave dies? The computer system would keep on chugging along, making money (barring failures of hardware or software). What ethical questions does such an autonomous electronic corporation pose? If the computer harms no one, and violates no rights, is it acting “ethically”, or does that notion require the kind of judgment that Fish says is impossible for computers? Would this robo-corporation have ethical obligations, or is the very idea of a non-human construct having ethical obligations nonsense? And if it’s nonsense, then does it make sense for corporations to have obligations, or are a corporation’s obligations merely the obligations of the persons that make it work?

11 comments so far

  1. jilly on

    Dave “is” the company not only because he makes the decisions, but also because he benefits from its profits. The latter consideration would apply after his death: the responsibility (or ethical onus) would lie with whoever took the profits. [If Dave had no heirs, the property would, I believe, be transferred to the State.] Furthermore, Dave would be dependent on a web server of some kind, and whoever provided the hosting service would be responsible for certain potential breaches of law, at least. So I question whether the particular example is a possible one.

    But I believe that the same principle must apply to the larger question as well. Ethical onus lies not only with the individuals who perform (or commit) certain actions, but also with those who profit from them. Human nature being what it is, it is hard for me to imagine a profitable enterprise of any kind that did not have human beneficiaries.

    Cui bono?

    Finally, this post blends nicely with your earlier ones about consumer ethics. Perhaps those who benefit from the service itself would also have some responsibility for making judgments about its ethical acceptability?


    • Chris MacDonald on

      That makes a good deal of sense. But what if the profits are all donated, automatically, to a charity?

      • jilly on

        Is a charity automatically free of ethical obligations because it is a charity?

      • Chris MacDonald on

        No, not at all. But you suggested that responsibility lay with whoever received the profits. And it’s hard to imagine holding responsible an organization that, without asking, simply received money, without being involved in any decision-making.

  2. Courtney Wantink on

    I agree that Dave ‘is’ the company, and the fact is, he set up the system itself. Whether it can now operate without him is merely a side note. Dave had to set up the company to operate within legal standards, and was forced to cooperate with web hosts, PayPal or similar payment services, etc. The robo-corporation could not exist independently, and so could never be a primary decision-maker. Therefore it cannot operate ethically. If it fails to violate laws, or even succeeds in having a positive impact, that is not due to any desire on its part to fulfill ethical duties.

    The robo-corporation should not be assigned ethical obligations because this in essence is assigning it morals and values. Those who created that corporation, as with any in operation, are responsible for the impact on employees (if it’s a non-robo-corp), consumers, and the market within which it resides.

  3. sw on

    Isaac Asimov’s Three Laws of Robotics:
    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

  4. Veronique Luciani on

    Isn’t there an obligation *because* someone can be affected by the company’s actions, regardless of whether or not it is conceivable that a computer has the judgment necessary to “think ethically”? I guess the question is: does ethical obligation come from the person or thing who can act, or is its primary source the person who can be affected? My initial feeling is that it lies where the “right” lies: then it’s a question of finding the agent who has the obligation to act. And maybe the problem is that when you’re faced with a computer, you need someone to back up its obligation: a government? a person? (if not a CEO, then someone else who “owns” a company, like shareholders; where a company isn’t publicly traded, then maybe it becomes a government’s job to ensure “ethical surveillance”).
    You ask if the “robo-corporation [would] have ethical obligations”: I guess the robo-*corporation* would – just not the robot.

  5. Tim Ragan on

    Hi Chris,
    I think you have proposed a great thought experiment with this blog entry. It runs parallel to my thinking of “business as a machine” — I believe a corporation is literally “hard-wired” to operate in a specific way, and the more cut-and-dried the rules that govern the company and its operation can be made, the more automated it can become.

    In essence, then, the idea of “ethical obligations” of a corporation becomes preposterous — its function is to follow its business rules and the rules of law to seek a profit. Any other “behaviour” expected from it is outside of its stated operating parameters. This is why concepts like CSR — while well meaning — are really at best window dressing that go against the grain of the structure underpinning the modern corporation.

    Tim Ragan

    • Chris MacDonald on


      I agree in part. But I think that the machine metaphor opens up the possibility of an engineering perspective on the corporation. That is, a company may be wired in a particular way, but companies are not all wired in the same way, and that wiring is open to being changed. So, for example, I don’t see any reason (in principle at least) why CSR (of some flavour) cannot be wired into a company’s design.


  6. Tim Ragan on

    I agree, Chris, that “CSR-like concepts” need to be hard-wired into a company’s design if business is going to retain center stage in some kind of sustainable future. Given that a corporation is fundamentally an economic machine that suggests that the only way to make CSR “real” is to build it into the economic equation of the machine — put simply, we need to price in (environmental and social) externalities. Very straightforward in theory, but quite messy in practice.
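Tim’s suggestion to “price in externalities” can be made concrete with a toy calculation. The figures and the carbon price below are invented purely for illustration:

```python
def profit(revenue, operating_cost, emissions_tonnes, carbon_price=0.0):
    """Profit with an (optionally) internalized carbon externality.

    With carbon_price=0 the externality is ignored, as in the standard
    profit equation; a positive price builds it into the machine's economics.
    """
    return revenue - operating_cost - emissions_tonnes * carbon_price

# Hypothetical firm: $1M revenue, $700k costs, 5,000 tonnes of CO2.
before = profit(1_000_000, 700_000, 5_000)                   # externality ignored
after = profit(1_000_000, 700_000, 5_000, carbon_price=50)   # $50/tonne priced in
```

Here the firm's reported profit drops from $300,000 to $50,000 once the externality enters the equation — straightforward in theory, as Tim says, though setting the right price is the messy part in practice.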

  7. jilly on

    Hi Chris,

    Thank you for your clarification about charities. I did a little checking, and found that many charities have “gift acceptance” policies designed to ensure that staff recognize when accepting a gift will be too costly in terms of administration or reputation, and “sponsorship” policies that provide criteria for determining when a sponsorship or partnership should be rejected. I also found the following quote from the founder of the Salvation Army, which reflects a different view: “I shall take all the money I can get, and I shall wash it clean with the grateful tears of widows and orphans” (quoted in Garston, J. (2008), Ethics Q&A; retrieved February 28, 2011). Ms. Garston’s discussion of the issue seems nuanced and is certainly interesting.

    We’re moving some distance from the original question! However, I do think there is some value in doing so. Ethics is fundamentally a human striving, and, metaphors aside, a corporation is a human construct, not an animal or a natural process. In an ideal world, ethics would be built into the principles on which a corporation was founded.

