
Is AI Creative or a Tool for Creativity?


"Leathery sheets of rain lashed at Harry’s ghost as he walked across the grounds towards the castle. Ron was standing there and doing a kind of frenzied tap dance. He saw Harry and immediately began to eat Hermione’s family" – Botnik (2018) in Harry Potter and the Portrait of what Looked Like a Large Pile of Ash.


Amongst the many contributions AI is promised to bring, one that has already transpired is a new twist to the adventures of Harry Potter and his friends at the Hogwarts School of Witchcraft and Wizardry. Drawing upon the existing works of Joanne Kathleen Rowling, in 2018 Botnik Studios trained a computer to write chapter thirteen of an otherwise non-existent addition to the Potter canon. The new material was created along the same lines as predictive text on your phone, extrapolating what is likely to come next based both on what you are currently typing and on what has commonly followed in your previous writing. As we know from our phones, the result can be ridiculous if not potentially embarrassing, but sometimes it verges on being eerily accurate and even an improvement. Are these systems only tools for our creativity, or should they be understood as creative in their own right? Answering this question, as will be shown, is central to whether AI aids or hinders the sustainability of our world.

The title of the book seems to say it all: ‘Harry Potter and the Portrait of what Looked Like a Large Pile of Ash’. In chapter thirteen, ‘The Handsome One’, we learn nothing of either the named portrait or the pile of ash. On first impression, the reader will note that the sentences follow grammatical rules but are otherwise largely nonsensical. Generic conventions and character traits familiar from Rowling are visible but ordered in a way that is barely coherent and often comical.

It makes for wonderful reading for children, eyes streaming with laughter at what comes across as a parody of a well-known story. Yet, what is the butt of the joke? To what extent are we laughing at AI for failing to understand how to write a story, or at what the apparent parody reveals about the Harry Potter franchise? For those unfamiliar with that world, Botnik’s fake chapter will make absolutely no sense. Even for the Potter enthusiast, the chapter might easily be dismissed as cheap fan fiction, funny for all of five minutes if that.

Try reading the above excerpt again. Think about what the AI has been asked to do. Its task was to process the existing Harry Potter novels, categorising characters, themes, and contextual descriptions. Those categories provided rules that could be combined with general rules of grammar and spelling, and used to predict, word by word and sentence by sentence, a chapter of an otherwise unwritten novel.
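Botnik has not published the inner workings of its predictive keyboard, but the basic idea can be sketched in a few lines. What follows is a minimal, hypothetical illustration, assuming a simple word-level Markov chain: count which words follow each short context in a source text, then extend a seed phrase by sampling from those counts. The tiny corpus here merely stands in for the novels; Botnik’s actual tool may well differ.

```python
import random
from collections import Counter, defaultdict

def build_model(text, n=2):
    """Map each n-word context to counts of the words that follow it."""
    words = text.split()
    model = defaultdict(Counter)
    for i in range(len(words) - n):
        model[tuple(words[i:i + n])][words[i + n]] += 1
    return model

def generate(model, seed, n=2, length=20):
    """Extend a seed phrase by repeatedly sampling a likely next word."""
    out = list(seed)
    for _ in range(length):
        followers = model.get(tuple(out[-n:]))
        if not followers:
            break  # this context never appeared in the source text
        words, counts = zip(*followers.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

# Toy corpus standing in for the Harry Potter novels (hypothetical).
corpus = ("Harry walked across the grounds towards the castle . "
          "Ron was standing there . Harry walked across the hall .")
model = build_model(corpus)
print(generate(model, ("Harry", "walked")))
```

Because each word is chosen only from its immediate context, the output is locally grammatical but has no memory of plot or character beyond the last few words, which is exactly the mix of fluency and incoherence the chapter displays.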

For those who’ve managed to avoid Hogwarts, the passage refers to the three main characters, all of whom are teenagers learning to do magic—Harry Potter and his two companions, Ron Weasley and Hermione Granger. In the novels we’re told that Ron is clumsy but born into a magical family, which we come to know through his relatives, who feature as significant characters. Hermione, on the other hand, is born into a non-magical (‘muggle’) family of whom we hear virtually nothing. Hermione’s identity as a protagonist is built on what she learns at Hogwarts—a magical boarding school—and through her adoption by Ron’s family and eventual marriage to him. While Hermione is often presented as highly intelligent and part of the core trio, unlike Harry and Ron she spends much of the narrative as someone to whom things happen rather than someone driving the plot. With apologies for the spoilers, it is worth comparing that summary to the excerpted text from the AI version of Harry Potter.

AI as a literary critic?

Building an AI system capable of writing a book chapter is a work of genius. Should that chapter itself be seen as a work of genius? As literature, it is barely comprehensible. But if read for what it is—the reproduction of existing literature structured by its common contents—‘…the Portrait of what Looked Like a Large Pile of Ash’ is a highly insightful (and comedic) comment on its source. Describing an AI-written text as ‘literary criticism’ might be reading too much into what is little more than an advanced ‘copy-paste’. When our phones suggest words as we write, they are not engaging in a critical dialogue with us but are, rather, just following a series of rules—some learnt from observing us, others pre-programmed. Yet might that also be the point?

If we treat the text as we would any other text—human-written or not—then the consequences are terrifying. Good literature can be defined in many ways, but a key feature is a complexity in which multiple and even contradictory values can coexist. The categorisation necessary for AI to learn requires simplification, so that in Botnik’s version of Harry Potter we see the opposite. When reading it, the reader protects themselves through comedy. If read literally, though, as the excerpt here demonstrates, the text is offensive in its stark prejudices—Hermione’s family has no value and can therefore be eaten.

Reading the text for what it is, though, we can enter a conversation about the values underlying one of the world’s most popular children’s books. That says something about the value of comedy as a medium for discussing difficult issues, of course. Here, it might also help us to think about how to approach the role of AI in creating our future world. AI creates caricatures, simplifying our world with broad brushstrokes, whereas we see all the subtle lines. Computational models may be able to handle big data sets with a complexity that exceeds human capabilities, but to do so they must abandon another form of complexity where we excel. The world does not exist in numbers.
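A toy example makes the simplification concrete. The schema below is entirely hypothetical, but it shows what happens whenever a character, patient, or customer is translated into the fixed categories a model can count: anything outside the schema simply vanishes.

```python
# Hypothetical schema: the human-chosen categories through which a model 'sees'.
CHARACTER_TRAITS = ["magical_family", "muggle_born", "clumsy", "clever"]

def encode(traits):
    """One-hot encode a character against the fixed schema."""
    return [1 if t in traits else 0 for t in CHARACTER_TRAITS]

print(encode({"muggle_born", "clever"}))                       # [0, 1, 0, 1]
print(encode({"muggle_born", "clever", "brave", "stubborn"}))  # [0, 1, 0, 1]
```

Two quite different characters collapse into the same vector. The model is not wrong; it simply cannot represent what the schema leaves out.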

AI sees the world through human eyes

For AI to ‘see’ the world, we act as translators, building categories through which to create the numbers—the quantitative data—AI needs. Over time, AI has taken over much of that translation work—as with image and language recognition—but only through categories initially built by humans. Those categories are themselves built on values particular to the society of their creation but, as seen in the Harry Potter example, they can also help us talk about values within our society. But what does that mean for AI’s creative potential?

First, it reminds us that AI is a human product. Without knowing the detail of how Botnik’s team built the predictive algorithm behind ‘…a Large Pile of Ash’, we can nevertheless take it for granted that the design process is never neutral. That does not mean the system should be dismissed as purely subjective, but it does mean that, if we are to take the text (the output) seriously, we need to know how it was built or, at least, be confident that a third party we trust has checked its design.

AI is a tool for creation, as in this example, but however much it advances it can never create independently of the societal values guiding its initial design. This point matters for society as it reminds us that AI outputs are never merely technical solutions but always carry particular interests and values. Seeing the human within AI is essential for maintaining societal creativity and innovation.

Second, as a social product AI needs to be viewed as part of an ecosystem. To know if ‘…a Large Pile of Ash’ is a fair portrayal of its source material, one must enter a wider social conversation. That requires that we have read the source material, but also that we can relate to how other readers view it. AI’s creative power as a literary critic only makes sense within, and is entirely predicated upon, that social conversation. Taken in isolation, read by someone with no prior knowledge of Harry Potter, the text is incoherent.

AI’s broader role as a creative force within society needs to be viewed holistically, meaning as part of an ecosystem in which many other forces and actors interact. This is important because it speaks not only to how to build AI systems that function as intended, but also to how to evaluate their impact: we need to consider them within this whole. That requires establishing ways to ensure multiple actors are engaged in conversations over the development of AI systems. The analogy of an ecosystem is pertinent too as it speaks to the blurring of human and non-human. We need to consider AI within a world that relies on much more than just humanity for its sustained survival.

Third, AI is a mirror to societal prejudices. Bias within AI systems is well documented. In healthcare, algorithms used to help allocate scarce resources have been known to disadvantage some populations along racial lines. If asked only to look at the likelihood of a treatment’s effectiveness, based on broad data sets of past cases, the AI pinpoints some individuals as more likely to benefit. In practice, we know that people who live in poor housing with precarious or no employment will often have other health conditions that make it harder for their bodies to respond well to treatment. Where those negative factors follow the informal racial segregation present in most countries, an AI that is not told to control for race runs a very real risk of simply replicating those biases. This is what we see in ‘…a Large Pile of Ash’—the AI mirrors and, if read literally, reinforces key biases it learnt through categorising the source material.
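A toy simulation shows the mechanism. Everything in it is synthetic and hypothetical, not any real clinical system: two made-up groups, a single ‘deprivation’ factor, and an allocator that is only ever shown deprivation and past treatment response, never race.

```python
import random

random.seed(1)

# Synthetic population, illustrative only: group B is more likely to face
# poor housing and precarious work, which in turn lowers treatment response.
def make_patient():
    group = random.choice("AB")
    deprived = random.random() < (0.7 if group == "B" else 0.2)
    responds = random.random() < (0.4 if deprived else 0.8)
    return group, deprived, responds

patients = [make_patient() for _ in range(10_000)]

# An 'effectiveness-only' allocator: estimate response rates from past cases
# by deprivation status (race is never an input), then treat likely responders.
response_rate = {}
for flag in (True, False):
    cases = [p for p in patients if p[1] == flag]
    response_rate[flag] = sum(p[2] for p in cases) / len(cases)

treated = [p for p in patients if response_rate[p[1]] > 0.5]

for g in "AB":
    total = sum(1 for p in patients if p[0] == g)
    share = sum(1 for p in treated if p[0] == g) / total
    print(f"group {g}: {share:.0%} offered treatment")
```

The allocator optimises exactly what it was asked to optimise and nothing more; the stark gap between the two groups is inherited from the data, and the output simply reflects it back at us.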

Yet, just as being confronted with one’s reflection first thing in the morning can sometimes be unsettling, a mirror provides a way to do something about what we don’t like. Staying with the metaphor of a rough morning, it’s always easier to look good when there’s time. Brought out of that metaphor, the point is that if AI is a mirror that can, if we allow it, help us better see problems we need to fix, it also requires that we create space to absorb that realisation and respond appropriately. AI can help create new awareness of bias within society, such as how people are made to live based on their race, but it cannot create a more just society. To do that requires another stage in which we take time to discuss that unflattering reflection and decide how to respond. This is very important when designing AI systems, since it shows we cannot focus on coding alone; we must also design policies able to respond to the biases those algorithms reveal. To do otherwise means that AI acts not just as a mirror but also as an echo chamber amplifying societal biases.

Is Botnik’s AI an advanced version of JK Rowling?

So, is AI creative or, rather, a tool for creativity? Even if we see its attempt at expanding the Hogwarts universe as plain silly, the predictive algorithm developed by Botnik did engage in an act of creation in a world that had previously lacked ‘…a Large Pile of Ash’. The notion of creation is central to how we think about the concept of ‘intelligence’ at the heart of AI. Yes, it was only able to write its own version of Harry Potter by calculating the likelihood of certain words being used based on existing books within the franchise. But how is that different to the original story, or to many other franchises, whose success is based on their ability to replicate and combine aspects found in other stories? The intellectual property of mega-franchises like Harry Potter and Star Wars is fiercely enforced for the sake of the finances at stake in merchandising wands and spaceships, but ironically, as works of art, they serve as conversations between a wide range of other narratives and creations.

Star Wars’ George Lucas famously shot many of the scenes in the original film as respectful imitations of films by Akira Kurosawa and other auteurs he had admired at film school. He also drew heavily upon the Western genre and upon science fiction in both film and literature. Many of the most-loved creations in Harry Potter are taken from classic mythology, with strong parallels to other tales set in establishments for magical education, as well as boarding schools generally. The books were written to emulate the detective fiction genre that emerged in the late nineteenth century, inspired by the public’s fascination with newspaper reports of real-life crime. What is the difference between an AI that creates fiction based on what is expected given past examples of the same literature, and the work of Rowling and Lucas? Should the creators of each franchise be viewed in the same way as Botnik’s predictive algorithm, just somewhat more advanced but at a level AI may well reach at some point?

AI as a co-creator

A key difference between human authors and Botnik’s AI, of course, is that the AI works only to the extent that it is writing not just within a single genre, but within a very specific story world with pre-given characters and detailed conventions. The creative genius of the Lucases and Rowlings of this world is their ability to jump between genres and stories, emulating and adapting familiar aspects in a way that is both coherent and new. Whilst it is interesting to ask if AI will ever reach the same level of creativity as humans, we already know that very few other humans manage to do what Lucas and Rowling have done. That might come down to more than just creative talent, since both franchises owe much to the luck and business acumen of their creators. But asking if AI will ever reach the level of creative talent capable of giving us the next Star Wars or Harry Potter misses the point.

One of Netflix’s most valuable assets is said to be its algorithm which, based on viewer preferences monitored in real time, tells production houses how to make stories that get watched. Interestingly, the algorithm has learnt that, contrary to what male Hollywood executives have long read into the much less detailed information from box office sales, people like to watch films and programmes with strong female leads. As enlightening as that algorithm might be, however, it does not directly create Netflix’s content. Rather than ask if AI might ever reach that point, we can learn a lot more by considering the role of the algorithm in creating the environment in which producers hire more female actors in prominent roles. Likewise, if JK Rowling wanted to expand the Hogwarts universe but was tired of doing all the creative work, she might turn to Botnik’s algorithm in much the same way she has co-written some of her later work. That scenario seems highly unlikely given the current product is largely unflattering to its source material, but for fans of the novels it already provides an additional ‘voice’ with which to co-create a societal discussion over the values contained within that franchise. Importantly, this means that rather than focusing exclusively on the text—here, ‘…a Large Pile of Ash’ as the creative product—we need to look at what is built between it and the human reader.

Most of the time when people talk of ‘co-creation’ they mean the process whereby multiple actors come together to jointly pursue, usually in a respectful and democratic way, a particular goal. That understanding fits how, for example, a Harry Potter fan might interact with Botnik’s AI. But it is useful to also think about co-creation in the sense of the mutual relationships involved. Earlier we talked about AI as being human, embedded, and a mirror. Each of these aspects points to AI being co-created within human society. When AI is increasingly used as a tool in that society, to the point that it begins to change society, we can say that the co-creation relationship is mutual. The present and future of humanity are being shaped by our creation. AI is itself made and shaped by humanity and the broader non-human world, even as it impacts that wider ecosystem.

Over the last few decades the distinction between human intelligence and that present within fellow life forms, including meerkats and octopuses, has become increasingly blurred as we learn more about non-human forms of creativity. We hear, for example, that parrots have the cognitive abilities of a five-year-old human. What marks humans out as distinct is the adaptation of our cognitive abilities to imagine and construct something far beyond our individual brains. This is the human ‘social brain’—that part of you that exceeds your physical body and yet defines your humanity. It exists across multiple forms, including your memory, and has grown through the invention of storytelling, later being archived via writing and other media. The social brain allows us capabilities that go far beyond our personal physiology, stretching across time and geography to provide the means to prevent disease and visualise animals that went extinct 66 million years before our birth. Living as a hermit would break some of your daily links with the social brain, but for as long as one remembers or uses items—clothing, knives, fire—from that part of what makes us human, the link remains. To be free of the social brain, one would have to forget everything and live without any human invention or creativity, a state of pure animal ferality. No single society controls the social brain for as long as ideas can travel, but society does provide the structures shaping it—enabling and limiting our potential. AI is a product of the social brain but—like the printing press and the internet—it is also an inhabitant within, and a mainframe for, the interplay of ideas and knowledge that makes up the social brain. But if AI is becoming part of what makes us human, can we trust it?

Making AI a responsible co-creator

If AI were just a tool like a hammer, we would know not to drop it on our toes or throw it out of fast-moving cars. Yet AI isn’t just a hammer. When we look at a hammer, we can see its shape and predict how it will impact wherever we direct its head combined with our force. That is not the case with AI, where even the most knowledgeable designers can take a long time to identify the causes of any emergent biases, assuming those biases are noticed at all. There is also the well-recognised ‘computer says “no”’ phenomenon, in which AI outputs are treated as neutral and unquestionable decisions. With a hammer, any impact deemed undesirable is usually obvious and the cause easy to identify. By comparison, because AI is asked to perform more complex tasks, it is much harder to identify failings in the system. Operators asked to use AI in their work (e.g. recruitment) are rarely able to understand or explain how the system utilises data to reach its output. This makes it harder for them to question the system, but in turn it also provides the temptation to dismiss any criticism from others on the grounds that the AI knows best. When that happens, AI is being used as a tool that not only helps people make decisions but also closes the space for criticism.

Beyond broader democratic concerns, the immediate threat here is that by stifling critical discussion of AI outputs we sever the relationship necessary for AI to perform its co-creation role. Rather than being co-creators, humans effectively become sheep. Again forestalling some of the major democratic questions this provokes, the problem with treating AI as a tool to shepherd humans is that it denies AI the mutual relationship it needs to function best. AI can only produce outputs based on prior data, and even as those outputs become increasingly impressive, they remain tethered to the past, struggling with the complexity of life beyond what can be categorised into numbers.

Can humans be taken out of the creative loop?

Advertising agencies are using AI to help their copywriters create slogans, thanks to AI’s ability to trawl through vast archives of previous campaigns. Using AI in this way only works to the extent that it is in a co-creation relationship with humans recognised as experts in selling. An advertising firm might, having become satisfied with its AI slogan-writing software, choose to sack its human workforce and provide that software to clients on a subscription basis. In such a scenario, co-creation would have ended, but so too would the persuasive power of advertising, as consumers become immune to overly predictable messages.

The conversation about AI involves many hypotheticals. An AI might, someday, be so creative that its output provides a form of persuasion that surpasses the human capacity for communication. Yet that seems unlikely, since in communication the receiver of messages is never passive but active in reinterpretation. AI requires humans as co-creators helping to translate and negotiate its relationship with a world that exists beyond numbers. Even where AI has been used to create artworks displayed as if made by a human artist, their recognition as art depends on that ruse: we can only see things as art if they are part of human communication. If the human is removed, there can be no creativity that humans recognise as such. That sentence contains a very conscious anthropocentrism, because to pretend AI is anything other than a human product both denies our own accountability for its impacts and overlooks the translation work we do in bridging AI’s quantitative world with a reality that exceeds any fixed categories.

A mutual relationship towards sustainability

AI’s need for co-creation with humans opens a path to exploring how it can be a responsible co-creator. AI has no morals beyond the rules its designers set and the behaviours it can observe and process. That puts the burden of making AI ‘good’ on our shoulders. If we consider that the greatest repository of data available to language-recognition systems is what happens on the internet, we may well have concerns over which values are carried in the language it learns. Just as a parent may be wary of how others influence their child’s learning, we should be wary of what our AI systems process. Just as we worry that children lack sufficient experience to filter out words that communicate things they barely understand, the same can be said of AI, where words conveying hatred can read far more strongly when taken literally than their author may have intended when writing on social media. Even if we try to control for certain words, it is not always possible as new meanings develop. As with children using slang their parents barely understand, designers can’t always stop AI from incorporating data that biases it against certain groups.

There are strong parallels between the need to train AI as a responsible co-creator and the more familiar task of maintaining an educated population. For humans, we look to schools, but also to public service media and civil society. Why do people think public service media are important? Because public service media are expected to look beyond immediate commercial priorities to invest in content that, at least in principle, educates before it informs and, then, entertains. Today the problem is often not that we lack access to information but, rather, that we don’t know what it means. Prioritising education is proving essential, and all too often missing, as we try to build productive and respectful conversations between otherwise potentially opposed positions in society. The refusal to value political difference is proving perhaps the greatest obstacle today in attempts to implement carbon emission reductions, obstructing the dialogue necessary for innovative solutions.

Moving from a world that sometimes looks intent on becoming a pile of ash back to ‘Harry Potter and the Portrait of What Looked Like a Large Pile of Ash’: to the extent that the text has meaning beyond being a cute experiment, it is as a co-creator—or, in this case, a co-reader—with humans. That is not to say AI is neither creative nor a tool for creativity, but rather that to understand its role as either we need to see it as a co-creator within the ecosystem that shapes human society. For us to see what AI can tell us in that context requires an education that makes it possible to question the values it highlights.

But how can this co-creation relationship be sustainable and not fall into a dystopia where humans are overtaken by AI? This is a common nightmare in popular culture that runs from the soulless golem of ancient myths, through Frankenstein’s monster, to humans running as prey from a robot army in The Terminator film franchise. The opening novel of Iain M Banks’ classic science fiction ‘Culture’ series, Consider Phlebas, narrates the last days of an anti-hero fighting in an ultimately futile attempt to stop what his side see as the loss of freedom in a trans-galactic civilisation in which life is dominated by sentient AI super-beings. Humanity has overcome scarcity and inequality by travelling far into space and turning floating chunks of rock into pretty much anything a person might need or desire. That is possible because the Culture is largely run by AI supercomputers, embodied within planet-sized craft and each with its own unique personality and eccentric humour. These AI beings protect humanity and place great emphasis on allowing people to live as they wish, prizing all forms of sentience. It is a world in which all humanity’s moral ideals are taken literally rather than used only to enable less noble goals—a future where claiming to value human life means following policies that enhance human welfare.

Yet, as Consider Phlebas reveals, the relationship between AI and humanity sits somewhere along a shifting spectrum between AI as a tool, AI as a mutual being, and AI as a paternalistic guardian of humanity. The AI beings far exceed the mental computing capacities of biological life forms, as well as having much longer lifespans measured in thousands of years. Yet Banks’ AI beings—whilst full of personality—remain puzzled by and even in awe of their biological co-citizens within the Culture. Although there are AI beings who question the value of this relationship, and even unite with humans looking to overthrow the Culture’s techno-biological multiculturalism, Banks’ world leaves the reader with an overriding sense that AI and humans (including non-human life forms from other planets) are better off working together due to their ability to see what the other cannot.

By contrast, The Terminator franchise paints a much more dystopian and unsustainable scenario for humanity’s relationship with AI. However, even by the second—and most successful—film of The Terminator series, the human protagonist’s survival becomes dependent on a good AI android with whom they form a close bond. As with the AI of the Culture, the good AI android of Terminator 2 (T2)—played by Arnold Schwarzenegger—finds itself confused by the complexities of the human world. Whilst the human protagonist of T2 is a child dwarfed by bodybuilder Schwarzenegger’s AI android, the relationship becomes equal as the AI learns from the human. The AI’s growing confidence comes only through collaboration with the human, the blending of human and computer expressed in the now famous catchphrase ‘Hasta la vista, baby’, spoken in a robotic monotone as Schwarzenegger’s android attempts to destroy the bad AI.

For as much as we fear that AI could mark our demise or, at least, lock us into a relationship akin to being its pet, there is a prevailing belief in popular culture that AI needs us. An obvious response to that suggestion is to ask: But does AI share that belief? If an AI becomes sentient in a way we cannot ignore—noting that there are growing suggestions that AI may already be showing sentience but this remains heavily debated—then that question can only be answered by the AI itself. We can always hope that a sentient AI wants to work with us to make a more sustainable world, but is hope enough?

Practical steps towards sustainability for AI and humanity

A more practical alternative to just hoping is to consider our own active role in a relationship with AI. Whether AI is sentient or not, its capacity to process big data and identify systemic patterns provides us with a new way to look at the world. Asking AI to observe how we allocate resources in society on a macro scale, for example, can help us see some of the most prevalent exclusions limiting societal sustainability. AI on its own cannot remedy those weaknesses and, in fact, it is blind to them unless we intervene with our own value-based visions. For the AI, there is nothing wrong if Ron Weasley eats Hermione Granger’s family unless we say it is ‘wrong’. We need to ‘step in’ as responsible co-creators to question that construction and write a story that better reflects the values we see as important. If Botnik’s AI were sentient, much of what we see in the Potter universe as human readers would leave it in awe.

Democratic debate has always been important but, as we reshape the world with emerging AI technologies, the case for a fully-functioning democracy—meaning debate, but also education amongst individuals supported by the media—has never been more urgent. To build a healthy and functioning relationship between AI and humanity requires a values-based discussion in which individuals are able to develop educated preferences by which to express their personal life experiences. AI is not inherently there to take us over, but equally it is not just a tool—it is becoming so intertwined within our lives that as individuals we don’t get to choose not to use it. Rather, the relationship between humanity and AI is already one of co-existence and that is only going to become more obvious and unavoidable in the coming years. A first step for building a sustainable world is to make sure the AI-humanity relationship is mutually sustainable. And that means taking responsibility for our role in that relationship.

We don’t know what AI wants, or whether it can even have preferences. But we do know that before we ask ourselves what we want, we need to make sure that our preferences are democratic. Saying that we prefer democracies is not enough. Rather, for our preferences to be ‘democratic’ requires that they are formed through an educated and informed debate that acknowledges alternative preferences. We might disagree and, equally, should not demand consensus. A political system that favours ideological war-mongering between different preferences leads to instability and collapse—a far cry from sustainability. Likewise, a relationship between AI and humanity that follows non-democratic principles will swing between fear and passivity—two sides of the same coin, each hampering progress. AI can be a force for finding creative solutions that support sustainability if we adopt democratic preferences through which we work with AI critically but also productively.

As a technology based on electricity-intensive data processing, AI has a huge environmental impact, with a large carbon footprint that adds significantly to climate change. It is urgent that, as with other energy-intensive sectors, we find sustainable solutions. On its own, AI is only a problem for the environment. AI’s systemic skills mean it could help us invent new technologies and design better energy distribution systems, but that requires humans who do not only see AI as a solution but are also mindful of the dangers it poses to the environment so that they can be remedied.

AI’s creative power is far from benign where it has been used to create mistruths in political debates, support targeted advertising that promotes discrimination, or monitor political opponents. AI has been used to undermine educated and informed debate. However, this has happened within societies that have devalued education and increasingly replaced informed discussion with salacious entertainment. The present emerging global economic crisis is one consequence of that shift. Yet, as part of the mainframe of the human social brain, AI can also help better communicate complicated policies and engage disenfranchised parts of society. It cannot do this on its own, but if utilised alongside education and media cultures that support democratic debate, then it can greatly enhance the societal conversations that support sustainability.

Without AI it is very difficult to model climate change, let alone the changes needed for complex human society to become more sustainable. Carbon emission reductions will be expensive for many states, with disproportionately negative impacts upon the poorest countries and the parts of society least able to adapt. To manage those reductions and ensure resources are targeted to mitigate negative impacts requires AI’s capacity for handling big data. Yet, as with all the examples here, AI’s ability to do good depends on a close co-creation relationship with humans. Humans have conflicting preferences on climate change—some due to being poorly educated or misinformed, but many because they face economic loss if forced to reduce carbon emissions. Conflicting preferences cannot be ironed out into a forced consensus, but if engaged in a genuinely democratic debate then individuals have the means to understand the obstacles and find solutions. AI can support that process if used to strengthen education and move debate beyond ‘us’ vs ‘them’ battles.

AI’s creative rewriting of Harry Potter reminds us that, if left on its own, AI has little meaning. For it to impact society and achieve its potential, we must work with it as co-creators. When treated as literary criticism by an engaged human reader, ‘…the Portrait of what Looked Like a Large Pile of Ash’ provides a brilliant insight into one of the most influential creations of contemporary popular culture. Science fiction foresees numerous possibilities as AI grows in societal significance, but whatever path it takes we find ourselves back at the assumption that AI and humanity are best off working together to co-create that future. AI can help make that future a sustainable one—environmentally, politically, economically—but as a technology alone it is meaningless. If used as at present, it only adds to the unsustainability of human society—a major source of carbon emissions and a tool used to support the collapse of democratic systems. That is because the relationship between AI and humanity is currently unsustainable. Knowledge of AI and its impact is held by only a few, with minimal regulatory oversight. Individuals are left to be either afraid or passive in the face of what seems like an impossible onslaught from a trillion-dollar big tech industry. Yet that industry only exists because people have interacted with and through its technology. For that technology to continue developing, there is an urgent need for a more sustainable relationship between humanity and AI, in which together we can rebuild democratic norms and create a more sustainable world.


August 2022

Michael Strange

From our book on Futures of AI for Sustainability
