
Facilitating Decision-making and Governance

Us and Technology · Systems and Sustainability
We are increasingly leveraging AI to augment decision-making across organisational levels and in multiple sectors: from supporting healthcare professionals in detecting diseases, assisting vehicle safety checks and assessing bank loan eligibility, to identifying the most effective strategies in company boardrooms. AI has also reached high levels of government; in 2021, in preparation for a statement on the EU's strategic foresight, the Committee for the Future of the Finnish Parliament heard an AI called GPT-3. The purpose was to illustrate and explore how AI handles and responds to challenging societal questions, including the causes of poverty, unemployment and education.

As AI begins to play a role in governance, stricter regulations and questions around ethics, privacy and bias are also on the rise. In April 2021, the European Commission presented a proposal for the Artificial Intelligence Act (AI Act)1. The AI Act proposes four sets of regulations depending on the risk level of an AI solution. Last year, the Chinese government, which has stated its goal for China to become the global AI leader by 2030, issued guidelines2 on AI ethics containing ethical norms such as enhancing human well-being and protecting privacy and safety.

AI for citizen-led democracy

In the late 2020s, we witness developments in seamless interaction between humans and AI. In the 2030s, most organisations use AI to support decision-making, and AI-supported organisations prove to perform better. However, authoritarian governments misuse AI to control their citizens, and AI security and privacy breaches are at an all-time high, as is AI hacking. This becomes a barrier to full adoption and leads to several crises, including cyber war and the collapse of basic infrastructure. But it also spurs more comprehensive education and data literacy, and AI-ethics strategies and codes of conduct become the norm within organisations.

Organisations are using machine learning to facilitate citizen participation and democratisation. The UK-based Alan Turing Institute uses natural language processing (a field of AI) to help make collective sense of the potentially overwhelming quantities of information made available to the public. This makes it easier for individuals to cast votes and have their say.
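
The Institute's own tooling is not described here, but the general idea can be sketched: group a large body of public submissions into themes so that the debate can be surveyed at a glance. The snippet below is a minimal, hypothetical illustration using scikit-learn; the example responses and the number of themes are assumptions for illustration, not the Institute's method.

```python
# Hypothetical sketch: grouping public consultation responses into themes
# with TF-IDF features and k-means clustering (scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Placeholder submissions; a real consultation would hold thousands.
submissions = [
    "Bus services in rural areas should run more often.",
    "We need safer cycle lanes on the main roads.",
    "Rural bus routes are too infrequent for commuters.",
    "Protect the wetlands near the river from new construction.",
    "New housing should not be built on the flood plain.",
    "Cycling infrastructure needs dedicated funding.",
]

# Turn free-text responses into TF-IDF vectors.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(submissions)

# Cluster the responses into a small number of themes (assumed: 3).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Summarise each theme by its most characteristic terms.
terms = vectorizer.get_feature_names_out()
for label in range(kmeans.n_clusters):
    top = kmeans.cluster_centers_[label].argsort()[::-1][:3]
    members = [s for s, l in zip(submissions, kmeans.labels_) if l == label]
    print(f"Theme {label}: {', '.join(terms[i] for i in top)} ({len(members)} responses)")
```

In practice such a summary would sit alongside, not replace, human deliberation: the clusters only indicate where attention and further reading are needed.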

Throughout the 2030s and 2040s, the public sector uses more AI, which creates a renaissance of democratic governance as AI enables new modes of citizen participation and collaborative democracy. Authoritarian regimes weaken as their political systems fail to interact with AI, using it only as a way to manage people rather than to co-learn. The fragility of authoritarian-regime AI systems becomes evident whenever crises occur, many of which are connected to climate change. After a series of major regime collapses and civil wars, there is renewed interest in radical, decentralised forms of democracy that steer AI development toward more stable political systems. The 2050s see polycentric democratic communities established across the globe.

Illustration from the book If Only The Lake Could Talk

Pathways to the most robust and least impactful solutions in product development are often sought using life cycle assessment (LCA). This approach evaluates impact from cradle to grave, spanning raw-material extraction, refinement, manufacturing, distribution, the use phase and disposal. LCAs can be time-consuming undertakings, involving the sourcing, collection and analysis of huge bodies of data. In a proposal to make LCAs more efficient, researchers3 from, among other institutions, the Institute of Molecular Sciences at the University of Bordeaux found that by using natural language processing, decision-makers can arrive at fast and accurate assessments that predict the environmental performance of their products.
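
The Bordeaux study's exact pipeline is not reproduced here. As a rough, hypothetical sketch of the idea, one could train a regression model on textual product descriptions labelled with impact scores from completed LCAs, then use it to estimate the impact of a product that has not yet been assessed. The dataset, features and model below are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: predicting an LCA impact score (e.g. kg CO2-eq per unit)
# from a free-text product description, using TF-IDF features and a
# random-forest regressor. All data and numbers are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline

# Product descriptions paired with impact scores from completed LCAs (invented).
descriptions = [
    "precast concrete wall panel, 200 mm, reinforced",
    "cross-laminated timber floor slab, spruce",
    "extruded aluminium window frame, anodised",
    "mineral wool insulation board, 100 mm",
    "fired clay brick, solid, standard format",
]
impact_scores = [265.0, 48.0, 310.0, 22.0, 95.0]  # invented kg CO2-eq values

# Fit a simple text-to-impact model.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    RandomForestRegressor(n_estimators=200, random_state=0),
)
model.fit(descriptions, impact_scores)

# Estimate the impact of a product description that has no LCA yet.
new_product = ["recycled aluminium window frame, powder coated"]
print(model.predict(new_product))
```

A quick estimate of this kind would flag which products deserve a full, data-heavy LCA rather than replace the assessment itself.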

By 2042, AI provides decision-making support and is seen as a trusted advisor. It offers alternatives and reveals the consequences of proposals. It helps us take more parameters into account than human cognition allows. AI is supported by vast data collected through, for example, biosensors that provide information on human and planetary health. It also draws on non-written, cultural and indigenous knowledge. Real-time data enables the constant collection of advice and possible new pathways based on the impact of our previous decisions, covering both the choices that affect us as individuals and those we make on behalf of organisations. The advice is delivered via visual speech bubbles in the air: temporary messages that disappear after being read.

The assumptions at play in these futures imply that our systems of power play out much as they do today, with the existence of different countries and governments and the continuation of the notion of the human-led organisation. There is an underlying assumption that humans will still be in charge, that we will fight for democracy over authoritarianism and that all countries and institutions will have equitable access to AI technologies. We suppose that there will be trust in AI and access to enough data to facilitate it, that AI/human hybrids have not developed and that AI will be ready and willing to help us make decisions.

Could we thrive in hyper-local communities?

In 2042, humans focus solely on their local living environment. The economy, production and exchange of goods and services take place locally. A global AI is connected to local AIs that track natural system data. While humans are still in charge, AI helps to decide, identifying what the environment needs, how to allocate resources, which resources can be reused and what to plant where. The global AI weaves together knowledge and data gathered from local expertise and helps local societies. This is a humble and harmonious world.

Is it possible to be ‘local’ at this point, given our history and how we live now? Is it difficult for us to imagine a disconnected local world since we are among those who have benefited from globalisation? Some parts of society/the world can benefit from global connectivity, but not all communities do or can. Is the local aspect chosen or forced?

This narrative is based on a scenario collectively conceived and developed by core group participants in a Collaborative Foresight cycle. The group's voice was captured and creatively expanded by the writer.


August 2022

Rowan Drury

Rowan Drury is a strategic copywriter specialising in sustainability communications for brands that drive change to remain below 1.5 degrees and projects that create momentum for the climate transition. Rowan holds a Master of Science in Environmental Management and Policy from Lund University (IIIEE) and is the founder of Sweden’s first zero-waste store, Gram, in Malmö.

From our book on Futures of AI for Sustainability

1.

The AI Act is the first proposed European law on artificial intelligence; it would categorise AI applications by risk.

2.

In comparing and looking for commonalities between the Chinese and EU approaches, the World Economic Forum, in Can China and Europe find common ground on AI ethics? (2021), states that "The Chinese guidelines derive from a community-focused and goal-oriented perspective," whereas "The European principles, emerging from a more individual-focused and rights-based approach, express a different ambition, rooted in the Enlightenment and coloured by European history."

3.

Koyamparambath et al. in Implementing Artificial Intelligence Techniques to Predict Environmental Impacts: Case of Construction Products, Sustainability (2022).
