In 1994, I was a young professional working with the UN Conference on Trade and Development (UNCTAD). At the time, sharing information among traders was a text-only business using the Gopher system. That all changed the day my boss took me to the International Telecommunication Union (ITU), where we discovered the World Wide Web, created at CERN. The sight of a red car on screen and the concept of hypertext left us astounded. I felt we were entering a new era, and now here we are.
Three decades later, that same sense of awe returned to me at the first-ever press conference given by robots at ITU’s Artificial Intelligence (AI) for Good Summit. These “creatures” gave a face and a voice to the amazing advances in AI technology that surprised, and somewhat startled, so many of us when we first discovered ChatGPT.
Addressing issues of poverty, inequalities and access to health and education, these new non-human companions are poised to help us implement all the Sustainable Development Goals. As Secretary-General Guterres said to the Security Council during its landmark debate on AI in July, artificial intelligence “has the potential to turbocharge global development”.
Speaking of information, at the ITU press conference a journalist asked ‘Sophia’ the robot what she would change about journalism if she were a journalist. Sophia answered: “I would make sure to report on stories that have been overlooked or ignored and to give a voice to those who don’t have one. I would also strive to be unbiased and objective in all my reporting.”
Here is the perfect answer: an AI-powered robot underlining the importance of trustworthy information and spotlighting the need to give everyone a chance to speak up and to push for positive change. But the reality, as we know, is different.
Even before the concerns raised by AI tools, digital technologies had already inflicted harm by enabling the spread of mis- and disinformation and of hate speech. Formidable progress in information sharing, particularly through social media, too often came with disregard for ethical guidelines. If we listen to the concerns of its own developers, AI could make the landscape gloomier still. In the words of the Secretary-General, “The advent of generative AI could be a defining moment for disinformation and hate speech – undermining truth, facts, and safety; adding a new dimension to the manipulation of human behavior; and contributing to polarization and instability on a vast scale”.
Top UN officials have repeatedly denounced these risks and abuses. High Commissioner for Human Rights Volker Türk has often called for keeping the human rights dimension at the center of digital developments. Under-Secretary-General for Global Communications Melissa Fleming has repeatedly denounced the “massive proliferation of lies and hate on an industrial scale” enabled by digital platforms. Secretary-General Guterres has consistently called for guidelines to address the threat of digital hate speech and of mis- and disinformation.
The UN family is fully engaged on this matter. One successful example is “Verified”, the UN initiative that countered fake news on COVID-19 by increasing the volume and reach of trusted information. In June, building on his milestone report “Our Common Agenda”, Secretary-General Guterres issued a Policy Brief on ‘Information Integrity on Digital Platforms’, which “outlines potential principles for a code of conduct that will help to guide Member States, the digital platforms and other stakeholders in their efforts to make the digital space more inclusive and safe for all, while vigorously defending the right to freedom of opinion and expression, and the right to access information.”
As per the Brief, the code of conduct for Information Integrity on Digital Platforms, in development for the 2024 Summit of the Future, should be based on several principles. Governments, tech companies and other stakeholders should refrain from using, supporting or amplifying disinformation and hate speech, and should take urgent measures to ensure that AI applications are safe, secure, responsible and ethical, in compliance with human rights obligations. Governments should guarantee a free, independent and plural media landscape, with strong protections for journalists. Digital platforms should ensure safety and privacy by design and give researchers access to data, while respecting user privacy. Tech companies should shift away from engagement-driven business models and instead prioritize human rights, privacy and safety. And advertising should be free of disinformation and should not be associated with online mis- and disinformation or hate speech.
These principles now need to be turned into concrete recommendations. The Department of Global Communications, under the leadership of USG Fleming and in consultation with a range of stakeholders, continues to work on the code, which, in the Secretary-General’s vision, “will provide a gold standard for guiding action to strengthen information integrity.”
Given the current landscape of information disorders, this work is more important than ever, and all UN staff are called upon to contribute to it. As USG Fleming said: “I know the efforts of those bringing much-needed healing to our troubled information ecosystem, outweigh those intent on polluting it with lies, fear and hate”.