
How to be good

‘How to be good?’ is the pre-eminent question for ethics, although one that philosophers and ethicists seldom address head-on. It was the question Plato posed in a slightly different form in The Republic when he said, “We are discussing no trivial subject, but how a man should live.” Marcus Aurelius thought he knew the answer when he stated unequivocally in his Meditations: “A King’s lot: to do good and be damned.” He was himself a king and ruled almost all of the world that was known to him. He could with impunity both do good and be damned. Edward Gibbon famously remarked that “If a man were called to fix the period in the history of the world during which the human race was most happy and prosperous he would, without hesitation, name that which elapsed from the death of Domitian to the accession of Commodus.” Marcus Aurelius, the father of Commodus, ruled for the last 19 years of this period.

Recently, philosophers and scientists have tried to identify how to make the world better by making people more likely to do good rather than evil. Many of them have proposed ways of changing humankind by chemical or molecular means so that people literally cannot do bad things, or are much less likely to do so; in other words, by limiting or eradicating their freedom to do bad things.

This same problem has also faced those interested in artificial intelligence (AI). If we create beings as smart as or smarter than ourselves, how can we limit their power to eliminate us deliberately, or simply to act in ways that will have this result? How can we ensure that they act for the best? Many people have thought that this problem can be solved by programming them to obey some version of Isaac Asimov’s so-called “laws” of robotics, particularly the first law: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” The problem, of course, is how the robot would know whether its actions or omissions would cause danger to humans, or for that matter, to other self-conscious AIs. Consider that ethical dilemmas often involve choosing between greater or lesser harms or evils, rather than avoiding harm altogether: allowing or causing some to come to grief for the sake of saving others.
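To make the difficulty concrete, here is a minimal, purely hypothetical sketch in Python (the scenario, action names, and harm figures are invented for illustration, not drawn from any real robotics system) of the first law treated as a hard constraint. In a trolley-style dilemma, where every option, including inaction, harms someone, the constraint rules out every action and so offers no guidance at all.

    # A hypothetical toy model (no real robotics API) of Asimov's first law
    # as a hard filter: reject any action whose predicted outcome harms a human.

    # Candidate actions and the number of humans each is predicted to harm.
    # "do_nothing" counts too: the law also forbids harm through inaction.
    predicted_harm = {
        "divert_trolley": 1,  # acting harms one person
        "do_nothing": 5,      # inaction allows five to come to harm
    }

    def permissible(actions):
        """Return the actions a first-law robot may take: those harming no one."""
        return [action for action, harm in actions.items() if harm == 0]

    print(permissible(predicted_harm))  # prints [] -- the law permits nothing here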

NAO Robot by Stephen Chin. CC BY 2.0 via Flickr.

How would a human being who, for example, had been rendered unable to act violently towards other people, or in ways that caused pain, defend herself or others against murderous attack? How would an AI programmed according to Asimov’s laws do likewise?

John Milton knew the answer. In Paradise Lost, Milton reports God as reminding us that if we want to be good, to be “just and right,” then we need autonomy: “I made him [mankind] just and right, sufficient to have stood, though free to fall.”

This dilemma of how to combine the capacity for good with the freedom to choose, felt no less keenly by God than by the rest of us, now faces those trying to develop moral bio-enhancers and those working on the new generation of smart machines. This is what Stephen Hawking meant when he told the BBC in 2014 that “the primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race.” How could full AI, which would enable the machine which (who?) possessed it to determine its own destiny as we do, be persuaded to choose modes of flourishing compatible with those of humans? Of course, we currently have these problems with respect to one another, but at least we have not yet shackled our capacity to cope with them by foreclosing, through moral bio-enhancement, some of our options for self-defence.

In the future there will be no more “men” in Plato’s sense, no more human beings therefore, and no more planet Earth. No more human beings because we will either have wiped one another out through our own foolishness or through our ecological recklessness; and no more planet Earth because we know that ultimately our planet will die, and any surviving people or AIs along with it.

Initial scientific predictions about the survival of our planet suggested we might have 7.6 billion years to go before Earth gives up on us. Recently, however, Stephen Hawking said, “I don’t think we will survive another thousand years without escaping beyond our fragile planet.”

To be sure, we need to make ourselves smarter and more resilient. We may need to call on AI to help us achieve this if we are to find another planet on which to live when this one is tired of us, or even, perhaps, to develop the technology to construct another planet. To do so, we will have to change, but not in ways that risk our capacities to choose both how to live and the sorts of lives we wish to lead.

As Giuseppe Tomasi di Lampedusa had Tancredi say in The Leopard, “If we want things to stay as they are, things will have to change”… and that goes for people also!

Featured image credit: Synapse, by Manel Torralba. CC BY 2.0 via Flickr.
