Artificial intelligence is one of the mega-trends of our time and is already reshaping industries, governance systems, and diplomacy. Its use can multiply human efforts, but it also sparks concerns related to policymaking, decision accuracy, and job security. But what is artificial intelligence? The term was coined by John McCarthy in 1956, who defined AI as “the science and engineering of making intelligent machines, especially intelligent computer programs.” More recent definitions describe it as the ability of computers to “imitate intelligent human behavior.” Among its most popular forms is generative AI, which can create new content using techniques such as Natural Language Processing (NLP). Large Language Models (LLMs), such as those behind ChatGPT or Copilot, are a class of NLP models focused on understanding and generating human language.
Multilateralism remains at the heart of diplomacy, guiding nations to collaborate on global challenges, from climate change to security and technological disruption. Yet multilateral processes have grown increasingly complex over the decades. At the UNFCCC climate conferences (COPs), participation has surged from 30,372 delegates at COP 21 in 2015 to 54,148 at COP 29 in 2024. Over the same period, the number of agenda items increased by 21% and the number of official documents grew by 27%. This escalating complexity places heavy demands on delegations, especially smaller or under-resourced ones, which must navigate technical, data-heavy, and fast-paced negotiations.
Against this backdrop, AI emerges as a potential tool to enhance delegations’ capacity, synthesize documents, and identify strategic alignments. It has already been brought into the COP process by the Technology Mechanism through its #AI4ClimateAction initiative, which aims to strengthen the capacity of Small Island Developing States (SIDS) and Least Developed Countries (LDCs) to use AI for climate action.
This raises the central question: to what extent can AI redefine how global policy decisions are negotiated, and in which areas of multilateral environmental processes might its impact be largest, and its risks most serious?
The transformative impact of AI in multilateralism
The growing complexity of multilateral processes described above has put increasing pressure on delegations to process large volumes of information while still negotiating effectively. LLMs can assist decision-making in several capacities: as an assistant (performing specific tasks), as a critic (reviewing completed work), as a second opinion (comparing results), or as a consultant (offering advice based on given information).
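In practice, these four roles are largely different framings of the same underlying model. The minimal sketch below expresses them as prompt templates; the templates, the sample draft text, and the omission of the actual model call are all assumptions, since the real call depends on whichever provider a delegation uses.

```python
# A sketch of the four decision-support roles above, expressed as prompt
# templates. The call to an actual model is deliberately omitted: it depends
# on the provider, so this only illustrates how the framing differs by role.

ROLE_PROMPTS = {
    "assistant": "Summarize the key requests in the following submission:\n{text}",
    "critic": "Review this draft intervention and flag weak or ambiguous points:\n{text}",
    "second opinion": "Independently assess this negotiation text so the result can be compared with our own analysis:\n{text}",
    "consultant": "Using only the information below, advise on possible landing zones for agreement:\n{text}",
}

def build_prompt(role: str, text: str) -> str:
    """Return the prompt that frames the model in one of the four roles."""
    return ROLE_PROMPTS[role].format(text=text)

# Example: frame the model as a critic of a (hypothetical) draft intervention.
draft = "Parties are invited to submit views on the scope of the indicator framework."
print(build_prompt("critic", draft))
```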
AI-powered data analytics and predictive modeling are promising tools for reshaping negotiation dynamics. These technologies can help delegations synthesize meeting documents, analyze trends across negotiation texts, and even identify potential allies based on shared priorities.
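To make the ally-identification idea concrete, the sketch below compares invented position statements using simple TF-IDF vectors and cosine similarity. The delegation names and texts are hypothetical; a real workflow would use full submissions and a stronger embedding model, but the principle of ranking alignment by textual similarity is the same.

```python
# A rough sketch of "identifying allies based on shared priorities":
# vectorize short position statements and rank pairwise similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented, illustrative position statements (not real submissions).
positions = {
    "Delegation A": "Prioritize adaptation finance and loss and damage for vulnerable states.",
    "Delegation B": "Scale up adaptation finance and operationalize the loss and damage fund.",
    "Delegation C": "Focus on mitigation ambition and carbon market integrity.",
}

names = list(positions)
vectors = TfidfVectorizer(stop_words="english").fit_transform(positions.values())
similarity = cosine_similarity(vectors)

# For each delegation, print its closest match: a crude proxy for alignment.
for i, name in enumerate(names):
    scores = [(similarity[i, j], names[j]) for j in range(len(names)) if j != i]
    best_score, best_match = max(scores)
    print(f"{name}: closest alignment with {best_match} (score {best_score:.2f})")
```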
AI has the potential to accelerate consensus-building through fast, focused analytics and modeling. AI systems can also support sentiment tracking and predictive analytics, allowing diplomats to gauge public opinion, monitor geopolitical trends in real time, and anticipate emerging challenges before they escalate into crises. More and more delegations, particularly from developed countries, are equipping their negotiators with such tools. However, integrating these tools into multilateralism raises important questions about equity of access, inclusivity, and transparency.
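Sentiment tracking need not involve anything exotic. As one hedged illustration, the sketch below scores two invented statements with NLTK’s off-the-shelf VADER analyzer; a real monitoring pipeline would ingest speeches, press releases, or social media posts over time rather than hard-coded strings.

```python
# A minimal sentiment-tracking sketch using NLTK's VADER analyzer.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off lexicon download
analyzer = SentimentIntensityAnalyzer()

# Invented sample statements standing in for a real stream of public remarks.
statements = [
    "We welcome the progress made on the adaptation fund.",
    "The current draft text is deeply disappointing and unbalanced.",
]

for text in statements:
    # The compound score ranges from -1 (most negative) to +1 (most positive).
    score = analyzer.polarity_scores(text)["compound"]
    print(f"{score:+.2f}  {text}")
```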
At the institutional level, AI tools aimed at efficiency may also accelerate the pace of reaching intergovernmental agreements. Solutions such as synthesizing written submissions from Member States, generating real-time summaries of plenary positions, assisting with note-taking in meetings, or offering translation support are being developed within the UN system. If implemented responsibly, such applications could free up human resources and reduce information bottlenecks, giving delegations more timely access to negotiation-relevant insights. Yet this promise of AI as a ‘force multiplier’ for diplomacy is not without risks.
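As a sketch of the submission-synthesis idea, the snippet below groups hypothetical Member State submissions by agenda item and assembles a single synthesis prompt per item. The parties, agenda items, and placeholder texts are assumptions, and the final model call is left out, since the actual service depends on what the secretariat or delegation uses.

```python
# Group (hypothetical) written submissions by agenda item and build a
# synthesis prompt for each item; the LLM call itself is intentionally omitted.
from collections import defaultdict

submissions = [
    {"party": "Party X", "item": "Global Goal on Adaptation", "text": "..."},
    {"party": "Party Y", "item": "Global Goal on Adaptation", "text": "..."},
    {"party": "Party Z", "item": "Article 6 guidance", "text": "..."},
]

by_item = defaultdict(list)
for s in submissions:
    by_item[s["item"]].append(f'{s["party"]}: {s["text"]}')

for item, texts in by_item.items():
    prompt = (
        f"Synthesize the following submissions on '{item}'. "
        "Identify points of convergence, divergence, and any bridging proposals.\n\n"
        + "\n\n".join(texts)
    )
    print(prompt[:200], "...\n")  # in practice, pass `prompt` to the chosen model
```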
Equity and access: the small delegations perspective
Many developing countries struggle to field large teams at major international meetings. For them, AI could provide real-time translation or help digest lengthy documents. Properly deployed, it could level the playing field, but only if access and training are provided equitably. Without deliberate capacity-building, AI may instead deepen the divide between developed and developing delegations.
Risks and power imbalance in AI adoption
While AI has the potential to democratize climate diplomacy, it also raises equity risks that require careful attention. A small group of private companies and research institutions – mainly based in the Global North – currently dominates AI development, creating a geopolitical imbalance in control over AI infrastructure. These systems often carry hidden biases, as they rely on datasets that overlook perspectives from the Global South. Such biases risk reinforcing existing inequalities and further deepening global divides, privileging countries with resources while leaving vulnerable states at a disadvantage.
Another layer of risk comes from the lack of transparency in many AI systems. These ‘black-box’ algorithms – AI models whose internal decision-making processes are opaque or difficult to interpret – produce recommendations without clearly showing how they reached their conclusions. This lack of explainability can undermine trust, especially for smaller delegations that may lack the technical capacity to question or challenge AI-generated outputs.
The digital divide further entrenches this imbalance. Many SIDS and LDCs face infrastructural and capacity constraints that limit their ability to integrate AI tools, turning what could be an equalizer into an instrument of marginalization. Without explicit equity-focused interventions, AI adoption risks becoming another arena of environmental injustice: those already sidelined will find themselves even more voiceless in shaping global responses to their own ecological crises.
Ethical and inclusive AI for climate diplomacy
To prevent AI from deepening climate injustice, we must embed equity and inclusion at the core of AI governance. That starts with investing in capacity-building initiatives tailored to SIDS and LDCs, and with embracing open-source tools, training programs, and technology transfers that could help rebalance existing power dynamics.
UNESCO’s 2021 Recommendation on the Ethics of AI – endorsed by 193 nations – offers a critical ethical foundation, emphasizing transparency, fairness, and climate stewardship.
Advocacy must go beyond voluntary norms toward binding, multilateral frameworks. A Global AI Fund, proposed by the UN Secretary-General’s High-Level Advisory Body, could finance infrastructure, datasets, and training for Global South actors. Meanwhile, initiatives like the Global Partnership on Artificial Intelligence (GPAI), which India has chaired and which includes Global South voices, demonstrate how polycentric governance can empower diverse actors. By insisting on enforceable standards, particularly on transparency and the inclusion of indigenous and local stakeholders, we can help ensure that AI becomes a tool for equity rather than a means of domination in climate diplomacy.
AI could be a game-changer in multilateral processes, with monumental potential to accelerate decision-making through faster access to information, support for building strategic alliances, and streamlined procedural tasks. This could lead to a more agile and responsive system of global governance. However, this promise is matched by significant risks around data sovereignty, algorithmic bias, and unequal access, particularly for smaller or under-resourced delegations.
Embedding UNESCO’s ethical principles into future AI agreements could create a shared baseline for transparency, accountability, and inclusivity.
To ensure AI becomes a tool for inclusion rather than exclusion, multilateral institutions must act now by embedding equity into AI governance, investing in capacity-building, and ensuring that no voice is left behind in shaping our shared environmental future.
Ultimately, the way forward depends on whether multilateral actors adopt enforceable ethical standards and inclusive capacity-building measures. Two further steps remain critical: establishing shared protocols for AI safety in sensitive security domains, and developing joint frameworks for AI ethics in international relations. Nevertheless, the fundamental question remains: can AI truly democratize multilateral climate processes, or will it ultimately reinforce and amplify existing inequities?
Special thanks to Dino Cataldo Dell’Accio, UNJSPF Deputy Chief Executive of Pension Administration, and Cecilia Kinuthia-Njenga, Director, Intergovernmental Support and Collective Progress, UNFCCC, for curating this article.