Contemporary socio-technical imaginaries1 of AI pull in opposite directions.
The last few decades have revealed a multitude of challenges brought on by the digital transformation of society, and many concerns are voiced about where the current development of AI might end up and where the trajectory of established development patterns is taking us. There seems to be a broad consensus that AI offers the potential to solve a variety of vaguely defined challenges for humankind, along with the suspicion that “AI taken to the extreme” holds dangers and threats. On the utopian side, many techno-optimistic projects declare AI the better half of humankind, evening out the fallibility of human bias and our inability to “know it all”2. This potential takes the form of lofty projections, such as helping humanity understand itself on a deeper level, uncovering new perspectives and opening paths towards global unification, harmony and balance. Machine learning has enabled some genuinely innovative approaches to the complex demands of contemporary society. In a world that is hyper-connected, racing towards climate catastrophe and marked by raging inequality, fast and powerful tools are required to respond to the rising challenges. In this framing, AI can address major societal hurdles, and the harms already done, on a scale that we might not manage without it. For example, AI and data can help us identify discriminatory patterns that would otherwise be hard to communicate3, or create new sustainable building materials, and can thus become the solution we need to redress the harm we have already done to the planet. AI can also potentially help us de-centre humanity and move beyond the Anthropocene by decoding the languages and developments of natural ecosystems and other species, allowing us to communicate with animals and ecologies, such as smart forests. These projects and approaches claim that if we can rethink AI creatively, we can treat it as a social practice rather than a purely technical or even design-making task: a practice that is radically re-politicised to address power imbalances and provide foresight for social needs4.
Discourses based on these narratives push a large part of ethical responsibility towards these technical solutions5: AI takes over all the difficult aspects at which humanity is failing, such as coordinating production cycles that honour planetary health, building long-term sustainable economic systems, enabling global communication, negotiating interests between nations (or other social groupings such as tribes) and representing nature as an equal party with rights and interests. AI can calculate the “true costs” of decisions and predict and estimate outcomes on a global and long-term scale. As a result, we can then utilise AI to make better decisions, optimise holistically and hold humans accountable for their actions.
While all these projections paint AI as a potential solution to human failings, the scenarios within which these AI agents are set are perceived by many as potentially dystopian. The use cases of data-driven technologies have grown much faster and broader than our understanding of the interconnected consequences and implications of their enmeshment in our socio-technical environments. Worries about the elimination of free will and individual choice, and about a general abandonment of human development, are embedded in many critiques and discourses. Some scenarios predict a deepening divide between haves and have-nots, while others foresee a unified global society in which AI levels all needs and interests.
In these dystopian imaginaries, AI will either make humanity obsolete or turn humans into overly optimised cogs in a machine with an unclear purpose. AI and ML have drawn criticism, in particular, for the far-reaching consequences of short-sighted technical implementations. Cases of harmful outcomes on various scales have been discussed repeatedly in the media: for example, COMPAS, an algorithm used in the US legal system whose risk scores contributed to racially biased sentencing, and The Facebook Files, a recent investigation into the extent to which ethical problems are known and tolerated by the social network. AI supports the concentration of power in the hands of the already powerful. The means required to build, train and utilise AI systems are limited to those with massive economic and technical infrastructure already in place. This reinforces the separation between the economically, socially and digitally privileged and consumers, who in turn double as data providers and hence as building material for this new infrastructure. The same infrastructure harms the planetary ecosystem on a dramatic scale, from lithium mines to the construction of massive data centres6. Further, training these models emits huge amounts of carbon for small increases in model accuracy. These and plenty of other examples highlight that technological progress left unattended does not by itself provide better solutions for all parts of society, and that it negatively affects already vulnerable groups and individuals. Those suffering the consequences of this separation are the ones already most affected by the breaks and errors in infrastructure, targeted by data drawn from a racist, ableist, classist, misogynist world7. AI trained on data from this neoliberal, violent society then creates a future based upon the past, reinforcing a bureaucratic form of violence that privileges scientific authority and solutionism, where quantitative correlations are praised regardless of substance or causality8.
These and similar cases have left many with the impression that to unlock the potential of AI, we need to address the functional oversights that led to unforeseen and unintended harmful consequences. The general sense seems to be that AI as a technology can save us from dangers that humankind has caused through unsustainable resource management and production practices. To leverage this potential, we have to “solve the problem of ethics” and so address the potential negative side effects of the dystopian speculations. While many of the discussed scenarios position AI as benevolent, it is also expected to continually weigh the needs and interests of the individual against those of a global society, including nature, the planet and so on; in short, to engage in the process of ethical decision making. This framing of AI acts as a solution to the fear of making wrong and/or flawed decisions. Here we encounter a structural dilemma in the engaged AI imaginaries: in order to successfully deploy AI to make the right ethical decisions, we first need to solve the problem of ethics to avoid the undesirable unethical consequences.
As a result, there is increasing investment in designing ethical AI systems. Under these conditions, however, it is questionable whether any kind of debiasing or reforming performed by corporate or governmental actors can change the systemic structures upholding the harms inflicted through current AI systems, especially since the notion of what constitutes ethical design of AI remains fuzzy.
Traditionally, ethics has most commonly been described through three frameworks: deontological ethics (duty-focused), consequentialist ethics such as utilitarianism (outcome-focused) and virtue ethics (character-focused), each presenting a different way of assessing the morality of a decision. Consequentialist viewpoints in particular are broadly established in discussions around ethical AI, manifested in risk assessments and simulations, with much design and computing practice relying deeply on traditional risk-based approaches originally shepherded by Moor’s work on computer ethics9. Virtue ethics, focusing on the character of the moral agent rather than on duty or outcomes, recognises that static frameworks and guidelines struggle with contextual interpretations of ethical decision making. When practical everyday life comes between good intentions and applied implementation, trade-offs and compromises can lead to scenarios in which the difference between utopia and dystopia relates more to perspective and individual interpretation than to factual reality. This reality will most likely lie somewhere in between, in an uncomfortable grey zone of compromises, trade-offs and negotiations over values and desired futures. In these negotiations, moral values can be interpreted and actualised in many different ways. These grey zones need to be acknowledged as an embedded part of technical development processes, and as an important aspect of the ethical considerations entangled in the socio-technical fabric of our society, rather than being dismissed as an inconvenience. Most of all, engaging in these grey zones needs to be validated as an act of productive ethics-making, together with shared dreaming. While we need to critically examine the positions and values that we manifest through this technology, we also need hopeful visions and stories that motivate structural change and engage us in the intentional reconstruction of the futures we want to live in, with and through AI, in a caring manner.