
On AI Moral Advisors for Sustainability

Us and Technology · Systems and Sustainability
The year is 2027. Ann and Greg are a couple living in London. Like so many others, they are deeply concerned about the worsening health of the planet and are committed to making a positive difference for the environment. At the same time, they are both in their early 30s and dearly want to start a family.

Combined, overpopulation and overconsumption have an increasingly worrisome impact on the planet; both have taken centre stage in the global conversation about climate disaster and accelerating biodiversity loss. Ann and Greg believe that having a child would add a serious, negative environmental impact; they contemplate adoption or even having no children at all. What should they do? They are torn. One day, while debating their options, Ann remembers a “moral advisor” app a friend had told her about. Its algorithm is trained on a vast database of ethics articles and moral judgements about the right thing to do across a variety of situations; users supply relevant information about their moral commitments and their circumstances and ask for advice. The app then recommends a course of action, much as an ethics panel would. Ann and Greg install the app, submit the relevant details and ask: should they have a child, adopt a child, or have no child at all? The app’s answer is clear cut and, contrary to what they hoped, it states: have no child at all. Should they follow it?

Since the dawn of Western philosophy[1], philosophers have used so-called ‘thought experiments’ like this one (which we will return to below). They often thrust us into hypothetical scenarios where we must make exceedingly difficult choices, or they transport us to exotic worlds to make a point about our reality. Thought experiments are serious. Like good philosophy, their insights help us better understand ourselves and general concepts—knowledge, moral responsibility, trust, and so on—that are part of our human experience.

My first aim in this essay is to explore topics in ethics, sustainability, and artificial intelligence (AI). Imagine that we one day develop an AI moral advisor like the one Ann and Greg consulted. Would it be reasonable to follow its advice on what to do to live up to our ethical commitments, including the demands of sustainability?

My second and more speculative aim is to explore a future where humanity willingly puts these imagined AI moral advisors in charge. Instead of navigating life’s tough and easy choices using our own best ethical judgement, we have left this up to AI. What is this future like? Is it grim and dystopian, with humans effectively making ourselves slaves to AI masters? Or is it utopian, with justice and planetary prosperity prevailing? I offer no definite answers, but my conjectures will be an invitation for you to reflect on these questions yourself.

If you’re impatiently waiting to visit this future, jump straight to the section ‘Visiting a future with AI moral advisors in charge.’ In the sections that follow, I return to the case of Ann and Greg and the AI-based app they consulted for advice on whether or not to have a child.

Machines and moral advice

To shed light on Ann and Greg’s moral predicament, I first want to visit a recent argument that we shouldn’t follow moral advice from AI-based apps. The argument comes from a paper by Australian philosopher Robert Sparrow titled ‘Why machines cannot be moral’ (2021). Though I argue against Sparrow’s view later, his argument is worth taking seriously. It puts an eloquent chain of reasoning behind an instinctive feeling that many people have—living up to our moral commitments by consulting a smartphone app for answers just seems odd and wrongheaded.

Sparrow’s argument starts by noting a difference between moral advice and theoretical advice. While it is often appropriate to rely on the judgement of an expert for theoretical advice, we should only rely on someone’s ethical advice when certain requirements are met (spoiler alert: AI cannot meet these requirements). To see Sparrow’s point, let us consider the cases in turn.

Theoretical advice serves to guide our beliefs. We often seek out experts who are knowledgeable about theoretical subject matters, e.g., engineering, finance, medicine, botany or history. When we do talk to such experts, taking their advice almost goes unnoticed: if you have ever asked your financial advisor how much money you would save by choosing one loan over another, you probably didn’t think twice about the fact that you simply took their word for this. Instead of independently calculating this amount, you simply relied on their advice and expertise in certain subjects (finance and mathematics).

Moral advice serves to tell us what to do in particular situations: should we bring a new child into the world, given our wish to minimise our environmental impact? How do I promote social inclusion and diversity at work? Should I spend money to alleviate my negative impact on the environment, and how should I do so? We rarely find someone we consider a theoretical expert on ‘morality’ full stop. If we did encounter such a person, I suspect, like Sparrow, that it would not be because they had studied practical ethics or moral philosophy—and even more seldom would we follow their advice without further reflection of our own.

Sparrow uses the observed difference between theoretical and moral advice to consider when we should reasonably follow and act on someone else’s advice. He thinks it can be appropriate to follow the moral advice of someone when we establish their authority on some matter. Further, to have moral authority is to possess and display certain wisdom, compassion and trustworthiness. Those who give advice with such authority ‘have something to say’ and can ‘stand behind their words.’

What is the relevance of this for AI moral advisors? Sparrow claims that body language, facial expressions, and tone of voice are essential for determining whether someone is wise, compassionate, and trustworthy. Currently, no AI-based application has body language, facial expressions or tone of voice. Since it lacks these, we can never establish that it is wise, compassionate or trustworthy. This, in turn, means it cannot be a moral authority. This completes Sparrow’s argument: if we should only ever follow advice from a moral authority, we shouldn’t follow AI moral advice.

Where Sparrow’s argument goes wrong

Sparrow makes a clear case for why we shouldn’t follow AI moral advice. At the same time, he can explain why it is sometimes perfectly reasonable to follow human moral advice: we have bodies, voices and facial expressions, and mastery of these lets us convey our wisdom and compassion. This can give us the authority to speak up about certain matters.

I think Sparrow’s argument rests on three crucial claims, each of which fails. The first is that we should only follow moral advice when the person delivering it is a ‘moral authority.’ The second is that we must determine someone’s wisdom, compassion, and trustworthiness in order to establish their moral authority. The third is that body language, facial expressions, and tone of voice are needed to determine whether someone is wise, compassionate, and trustworthy. I will refute these claims through counterexamples, starting with the third.

The third claim

We often establish someone’s trustworthiness, wisdom or compassion through written media, where nothing is conveyed through facial expressions or body language. When reading books, we often judge these qualities in the author through the text alone[2]. Perhaps more rarely, we may also seek moral advice from those who are fully paralysed and lack both body language and a natural tone of voice. A paralysed army veteran with a robot-enabled voice might still share ethical advice grounded in their lived experience. What they tell us, not merely how they say it, can still serve to establish that this is a wise, compassionate and trustworthy person. We should reject Sparrow’s third claim.

The second claim

Similarly, Sparrow’s second claim is open to objection. Why should wisdom, compassion and trustworthiness be necessary for establishing moral authority? When present, these usually suffice to establish moral authority. However, we should insist that other routes are available. To illustrate this, think about when we take advice from a friend of a friend. I might ask a trusted friend for advice, and they might suggest asking someone else the same question on my behalf. If the person they ask is a moral authority who gives thoughtful moral advice, it is natural to say that I establish their moral authority on the basis of my friend’s testimony, without ever witnessing their wisdom or compassion myself. So, establishing someone’s wisdom and compassion first-hand is not necessary for establishing their moral authority, i.e., Sparrow’s second claim is false.

The first claim

Last, Sparrow’s claim that moral authority is required for moral advice is problematic. Moral authority is certainly relevant, since moral authorities usually have good judgement. But it can be sensible to follow the advice of someone who is not a moral authority. Imagine a person whom we can call Henry; he is an avowed Christian who has deliberately lived his life according to Christian values, regularly attends his local church and has committed to Bible study for years. He is an excellent judge of Christian morality. This fact about Henry is little known, even to others in his community: Henry is exceedingly shy, stutters and prefers not to advise people on ethical matters. The exception is those near and dear to Henry; they see behind his shyness and recognise his superb moral judgement. When he advises them on Christian moral matters, they mostly follow his advice.

Even if Henry does not speak with moral authority, he judges well on Christian moral matters in his community. Those who learn this—through his close family—have reason to follow his advice. The example shows how moral authority and good judgement can come apart; when they do, the latter is what matters for whether we should follow someone’s advice.

A new look at moral machines

I have tried to argue that we should reject three of Sparrow’s claims and his argument against AI moral advisors. But if moral authority is not what matters for moral advice, what does? I here suggest a simple answer—better moral judgement. Just as we are right to rely on our financial advisor’s calculations if we are bad at maths ourselves, we are right to follow someone else’s advice when we think they are better moral judges than we are[3].

I claim that it is reasonable to follow someone’s moral advice when they are a better judge than we are with regard to the matter at hand. Why might that be? Many philosophers have made the point that it takes competence and good judgement to correctly apply our moral vocabulary to situations. Think of words such as ‘bullying,’ ‘lie,’ or ‘coward.’ Applying these correctly is a skill that can be exercised well or poorly: overly sensitive people may describe even the slightest factual misdescription as a ‘lie,’ while members of a minority group may be more sensitive to, and better at spotting, racist remarks.

It is reasonable to follow someone’s moral advice when they are more competent at judging whether some concept applies to a situation or not. Imagine that I care about diversity and inclusion at work and worry that a colleague is being harassed. Because I am friends with this colleague outside of work, I recognise that I might be biased toward them. In such a case, it is natural to ask someone external to the situation for advice: is this harassment? Should I step in and do something about it? A friend who doesn’t work there is likely more impartial than I am. Perhaps that friend has had experiences that make them knowledgeable about how to spot cases of harassment. So, it is reasonable for me to ask for their moral advice about the situation and act on it.

To see the broader relevance for sustainability, it is worth reflecting on the ethical nature of so many of the concepts associated with it. For example, concepts such as ‘eco-friendly,’ ‘animal welfare’ and ‘gender equality’ all import values and can figure in our ethical commitments. In other cases, the aims of sustainability (e.g., as articulated by the United Nations Sustainable Development Goals) are themselves moral aims: ending poverty and hunger, reducing inequality, and sustaining diverse life on land and below water, to name a few. No matter how we define sustainability, those committed to parts (or all) of this agenda may find themselves seeking moral advice on how to live up to their commitments[4] – from mundane challenges such as recycling properly to weightier decisions like going vegan, choosing how much to give to those in need or actively engaging in social justice efforts.

In short, it is reasonable to follow the advice of better moral judges. On this account, if the AI advising Ann and Greg is a better judge of how to live up to some commitment, and if Ann and Greg are sincere about wanting to live up to their commitment, it can be reasonable for them to follow its advice. That said, Ann and Greg would need a way to determine that the moral AI app is, in fact, a better judge than themselves – how might they do that?

Anyone can say that they are an excellent moral judge. That does not mean we should follow their advice (typically, it would be a reason not to). What credentials are relevant for determining someone’s better moral judgement? I suspect we often rely on those who happen to be around and whom we trust, whether they have good judgement or not. Nevertheless, I believe we can do better by looking for at least two types of ‘credentials’: relevant moral experience and being well-positioned to make judgements[5].

Credential 1: relevant moral experience

At a basic level, morality is about lived experience: we learn by doing. When experience confronts us with a moral situation, we try to do what we believe is right or best. If we observe the consequences of our actions, we may learn whether what we did was right, how we might have achieved a better outcome, and so on. Reflection, discussion and ethical theory certainly aid in this, but they won’t take us far on their own. So, the first ‘credential’ of a good moral judge is whether they have relevant experience of what to do in the kind of situation at hand.

Credential 2: Being in a good position to judge correctly

‘Being in a good position’ is a metaphorical way of saying that someone’s moral judgement is not distorted by morally irrelevant influences. Psychological studies have documented pervasive influences that distort judgement[6]. For example, so-called implicit racial bias distorts purportedly egalitarian judgements: I may think I am assessing a situation fairly when I am in fact favouring people of one race over another. Thus, if you are free of such bias, you are better positioned to judge matters concerning race—and, all other things being equal, better positioned to judge moral matters pertaining to racism.

While it is hard to recognise our own biases, we can become aware of them. This can happen through informal means, e.g., conversations with friends. It can also happen through formal means, e.g., the tests offered by Project Implicit, which measure implicit associations related to race, age and more.

Revisiting Ann and Greg, and a future with moral AI

I have suggested we should follow advice from AI and humans alike when they have credentials that make them better moral judges than us. To my knowledge, no current AI-based application has such credentials or better judgement than an ‘average’ adult human being. That said, some recent AI-based applications pull in the direction of moral advisors: for example, Ask Delphi is a research prototype that aims to model common-sense morality. When presented with a moral situation described in free-form text, Ask Delphi can respond whether this, e.g., ‘lying to my partner to avoid them being hurt,’ is morally wrong, disgusting, understandable, OK, or cruel[7]. Delphi was trained using what the researchers behind the effort call a ‘common-sense norm bank’: 1.7 million examples of people’s moral judgements about everyday situations. Prototypes like Ask Delphi suggest that AI might come closer to this notion of a credentialed moral advisor in the coming years.
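To make the input and output of such a system concrete, here is a minimal, illustrative sketch in Python. It is not Ask Delphi’s actual code or API; the names, the label set and the toy one-rule ‘model’ are my own assumptions, standing in for a classifier trained on a large bank of moral judgements.

```python
# Illustrative sketch only: hypothetical names, not the real Ask Delphi API.
from dataclasses import dataclass

# Judgement labels of the kind mentioned in the text above.
LABELS = ["it's wrong", "it's disgusting", "it's understandable", "it's okay", "it's cruel"]

@dataclass
class MoralQuery:
    situation: str  # free-form text, e.g. "lying to my partner to avoid them being hurt"

def judge(query: MoralQuery) -> str:
    """Stand-in for a model trained on ~1.7M crowd-sourced moral judgements.
    A single hard-coded rule is used here purely to show the input/output shape."""
    if "lying" in query.situation.lower():
        return "it's wrong"
    return "it's okay"

if __name__ == "__main__":
    print(judge(MoralQuery("lying to my partner to avoid them being hurt")))  # -> it's wrong
```

The point of the sketch is only the interface: a situation goes in as text, and a verdict drawn from a fixed set of moral labels comes out, with all the judgement work delegated to whatever model sits behind the function.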

Whether we will one day confront moral decisions by consulting an app remains to be seen. It is worth remembering that even if it can be reasonable to follow AI advice, we are free to reject it—after all, our decisions are up to us: we are responsible for our actions, whether we acted on our own judgement or because an AI said so. I should add that I am personally sceptical that we will ever comfortably make our most difficult choices by consulting an app. However, there are many moral choices where we could benefit from AI advice. One reason is that our psychology leaves us open to failure. Another is that we may lack experience in applying certain ethical concepts, leaving us unsure whether they apply in a given situation (as with the example of harassment at work).

We can also fail to live up to moral commitments simply because we do not act on the things that we rationally say we want to do. I may want to adopt a vegetarian diet, but when a meat dish presents itself, I may give in to temptation. Certainly, a moral advisor app is unlikely to be much help in withstanding temptation. It could, at best, be a remedy for some, but not all, of the psychological forces that influence decision-making.

By way of summary, the case for a moral AI advisor is this: relying on AI advisors could make sense, but for controversial and life-changing decisions such as having a child, it is hard to imagine us actually following the advice, even when it would be reasonable to do so. For other choices, AI moral advice could improve our judgement about what to do, but it cannot guarantee that we do not give in to temptation and take the morally worse option. AI can nudge us in a direction or give us confidence that we are doing the right thing. Nevertheless, at the end of the day, we are the ones who must do the hard work if we want to live up to our commitments.

Visiting a future with AI moral advisors in charge

I have suggested that there might be a place for moral advisor AI in the future. Here, I ponder a related question: what might happen in a hypothetical future where AI advisors oversee and take all moral decisions?

Note: The exploration that follows is an exercise in imagination rather than rigorous philosophy (this, I hope, will come as a happy surprise to readers who have made it this far!). Imagination is everyone’s game—I invite you to take part in exploring this future, in whichever way you choose. So put on your imagination helmet and read on.

Fast forward to 2050. Technology is powerful enough that AI moral advisors are installed on smartwatches. From the moment we are old enough and developed enough to be morally responsible, each one of us is required to wear an AI moral advisor watch. Referred to as Philosopher Kings, these devices monitor our lives and decisions. They communicate via telepathy, telling us what decisions we should take (and which we are required by law to follow).

Philosopher Kings are always up and running, ready to tell us what to do. While they allow some room for customisation, they are programmed to uphold fundamental values and rights. In the professional realm, their introduction improves many fields: the legal system, healthcare and the public sector benefit immensely from outsourcing moral decisions to these superhuman moral advisors. Decisions previously affected by human bias and stereotypes are now made on the basis of sound reasoning and respect for human dignity—no matter someone’s skin colour, gender, religious views and so on. By monitoring all decisions, Philosopher Kings have also reduced corruption.

Another area where most see the Philosopher Kings as welcome is business. From startups to multinational corporations, the devices are now a constant moral check on business decisions and claims made by organisations. Companies no longer engage in greenwashing, predatory pricing or illicit business practices. This has changed the role of business entirely and has enabled us to tackle grand challenges such as climate change and biodiversity loss more effectively.

In private life, Philosopher Kings receive mixed reviews. Always living up to the demands of morality is hard work. It often involves a degree of self-sacrifice. Most of us previously overlooked our own moral failings while cheerfully pointing out those of others. This is no longer possible; Philosopher Kings are not prone to such tendencies and give moral advice on just about anything: what to eat, how to treat our friends and even how to treat our enemies.

While Philosopher Kings are free of human bias, they are not unsympathetic to the human point of view. Early models were perceived as cold and emotionless, which ultimately caused human suffering, so later models have been course-corrected to recognise human compassion, moral imperfection and even friendship. If we have a bad day, the Philosopher King recognises this and adapts its advice. It also coaches us to improve our own moral character over the long term.

One area where the watches are perceived negatively by many is their impact on certain communities. Though the advisors respect human diversity, including religion, they have no tolerance for religious or other practices at odds with human liberty (which is ironic, since the Philosopher Kings have themselves removed liberty in the moral realm). For example, male circumcision, a Jewish religious practice, is strictly forbidden until boys are old enough to make an informed decision about it on their own.

This brings us to the last, greatest and most fundamental loss that the Philosopher Kings bring with them: human moral deliberation and free choice. Whether we aspire to live up to the commitments of morality or not, making the moral choice is no longer up to us. Humanity is decisively split about this. Some say that the improvements yielded throughout society justify the cost, however great. Others insist that the ‘perfect’ world we now live in is a mere shadow of the previous one, even considering its flaws. Whichever side we are on, it is clear to everyone that something was lost as we relinquished control and let the AI advisors take over…

Closing remarks

We have finished our foray into two territories: the ethics of following AI moral advice and an imagined future where such advisors were handed the moral reins.

Our first part led us to look at the nature of moral advice; personally, I am cautiously optimistic about the prospects of AI improving decision-making in some areas where we humans struggle.

The second part took the first to an extreme, imagining that all human moral decision-making is handed over to AI on an involuntary basis. While speculative, this scenario can help us think hard about human nature, morality, and the role technology might play in reshaping both. We humans are more flawed than we sometimes admit; experiments and studies over the last 50 years or so show just how often we fall short of ideals of rational judgement and behaviour. Technology might help us live up to those ideals, though, as we have seen, it is not necessarily something we should want if it comes at the cost of human freedom. Last, I hope visiting this future reminds us that technology is not merely de-humanising. Dystopian AI scenarios often portray technology as the antithesis of humanity, but I hope to have shown why AI might also serve and promote parts of humanity, including our concern for a sustainable future.


August 2022

Nikolaj Møller

Nikolaj Møller is an ethicist and strategy advisor at the Copenhagen-based think tank and consultancy DareDisrupt. His work focuses on leveraging ethics and futures thinking to nurture organisations that serve people and the greater good. Nikolaj holds a BA (Hons.) in Philosophy from the University of Cambridge and is enrolled in the MSt in Practical Ethics at the Oxford Uehiro Centre for Practical Ethics.

From our book on Futures of AI for Sustainability

1. By no means the only philosophical tradition around. For some accessible writings about Eastern philosophy in a modern, technological context, I recommend Shannon Vallor’s book Technology and the Virtues.

2. In her 2002 BBC Reith Lectures, Onora O’Neill reminds us that “we need to place or refuse trust far more widely” than in face-to-face relationships, which can happen because information, e.g. found in books or online, lets us assess the trustworthiness of other people.

3. Moral advice usually concerns a particular situation, and I believe we should follow someone’s advice if they are a better moral judge with regard to the relevant moral matter (this is implicit throughout this section).

4. Science often gives advice on living up to our commitments, e.g. the recent sixth IPCC Assessment Report highlighted the phenomenon of ‘climate maladaptation’ to try to steer and improve climate transition efforts.

5. As moral philosopher Paulina Sliwa puts it (in her article ‘In defense of moral testimony’), “In relying on someone else’s moral judgement, we acknowledge that the other person is in a better epistemic position with respect to the particular moral judgement than we are.”

6. As the book Blindspot: Hidden Biases of Good People by M. Banaji and A. G. Greenwald explains, even the best of us are ‘victims’ of psychological bias.

7. Note that Ask Delphi makes no claim to provide moral advice, and specifically states that it is a research prototype intended to “model people’s moral judgements on a variety of everyday situations.” (Ask Delphi website, 2022)
