In recent years, generative AI has become a powerful tool, revolutionizing the digital landscape. Its ability to swiftly create a broad range of content, including articles, reports, visuals, videos, and voiceovers, with minimal explicit instruction has garnered significant attention.

The technology resembles seasoning a dish with salt: it enhances the flavor but must be used judiciously. The rise of generative AI has certainly improved productivity, efficiency, and creativity, but as AI systems become increasingly embedded in digital platforms, it also presents substantial risks, particularly to information integrity and human rights.

Generative AI’s influence on information integrity

AI tools are gaining popularity not only because of their capacity for high-quality output, but also because of their user-friendly interfaces. Unlike traditional AI systems, which require precisely structured inputs, generative AI leverages Natural Language Generation (NLG) and Large Language Models (LLMs) to interpret plain-language prompts and produce content at an unprecedented rate.

On most digital platforms, recommendation algorithms prioritize content that drives user engagement over content that is accurate, creating fertile ground for the amplification of falsehoods.

This can result in the rapid dissemination of mis- and disinformation, with AI-generated content increasingly weaponized to distort facts. Despite advances in fact-checking technologies, AI algorithms and digital platforms still lack robust systems to consistently verify content credibility. Moreover, fact-checking mechanisms are uneven across jurisdictions, so even when false information is identified, it can take hours or days to correct.

Left unchecked, this spread of false information poses a significant challenge, deepening concerns about authenticity, bias, and information privacy while endangering democratic institutions and fundamental human rights.

Human rights at risk

Article 19 of the Universal Declaration of Human Rights states that everyone has the right to freedom of opinion and expression. That right, however, does not license the misuse or manipulation of information to harm individuals or societies. Unfortunately, AI-generated content has increasingly been used to spread hate speech, xenophobia, and discriminatory rhetoric, targeting vulnerable populations such as refugees and ethnic minorities.

A 2019 study by Deeptrace Labs revealed that 96% of deepfake videos online were non-consensual pornography. AI-generated content can also be used in human trafficking, luring victims, particularly women and children, into exploitation.

The reliance on vast amounts of data—mostly collected from social media and other digital platforms—presents further human rights concerns. AI models, while powerful, lack the nuanced understanding required to discern harmful biases in the data they process. 

Moreover, the labor conditions of those tasked with training and moderating AI systems raise another human rights concern. These individuals, often underpaid and exposed to disturbing content, work under conditions that can lead to psychological distress, an issue that is only now starting to receive attention. For example, more than 150 workers involved in the AI systems of Facebook, TikTok, and ChatGPT gathered in Nairobi and pledged to establish the first African Content Moderators Union.

Towards responsible AI development

As AI continues to evolve, a responsible approach is critical—one that aligns technological innovation with the preservation of truth, dignity, and human rights. To mitigate the risks posed by generative AI, a multifaceted approach is required.

From a technological standpoint, the development and deployment of AI systems must be guided by transparency and accountability, which, together with the responsible use of data, are key to building public trust. AI-generated content should be rigorously tested for factual accuracy and bias before dissemination.

This entails collaboration between AI developers, policymakers, and human rights organizations to ensure that AI algorithms operate within ethical frameworks.

Human rights organizations must work alongside legal institutions to enforce stronger regulations against invasive data collection and ensure that AI respects privacy, freedom of expression, and non-discrimination. Special attention should be paid to protecting vulnerable groups, including minorities, women, and individuals with mental health challenges, from AI-generated misinformation that disproportionately affects them.

Furthermore, digital platforms that use AI to curate and promote content must be held accountable. Content monitoring systems should be mandated to detect, flag, and remove false or misleading information.

The call for responsible AI practices is not just an option but a necessity to ensure a just and equitable digital future. 

