Artificial intelligence, once confined to sci-fi movies and graphic novels, is becoming increasingly integral to our daily lives. From the rise of home voice assistants to life-altering decisions in employment and criminal justice, AI is shaping our world in profound ways. Yet as this transformation accelerates, a critical challenge is emerging: the pervasive bias embedded in its algorithms. While many of the AI tools we use every day seem innocuous, the biases hidden in their algorithms pose serious threats that can perpetuate and amplify inequality.

When the initial versions of AI voice assistants rose in popularity, many pointed out that these virtual assistants overwhelmingly featured default female-sounding voices. The common critique was that most of the tasks carried out by Siri, Alexa and Cortana — like writing a shopping list or setting appointment reminders — are often regarded as duties of a domestic assistant. Critics argued that associating female voices with subservient roles would inadvertently reinforce outdated gender norms and perpetuate harmful stereotypes.

Issues of bias and discrimination in AI systems have also been observed in other contexts, with serious implications. In 2020, a decision by the UK government to use an algorithm to predict A-level grades led to students from disadvantaged backgrounds being disproportionately downgraded, while those from private schools and affluent areas were favored. The resulting nationwide backlash led the government to abandon the algorithm.
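To see how such downgrading can happen without any explicit intent to discriminate, consider a deliberately simplified sketch (not the actual UK algorithm, and with made-up grades): if students are ranked and then mapped onto their school's historical grade distribution, a strong student at a school that rarely awarded top grades is capped by that history.

```python
# Hypothetical illustration of grade standardization (invented data,
# not the real algorithm): the i-th ranked student this year receives
# the i-th ranked grade from the school's historical results.
GRADES = ["A*", "A", "B", "C", "D", "E", "U"]  # best to worst

def standardize(teacher_grades, historical_grades):
    """Map this year's ranked students onto last year's grade distribution."""
    rank = GRADES.index
    this_year = sorted(teacher_grades, key=rank)
    history = sorted(historical_grades, key=rank)
    n, m = len(this_year), len(history)
    return [history[min(i * m // n, m - 1)] for i in range(n)]

# A school whose historical results never exceeded a B cannot produce
# an A this year, regardless of teacher assessments:
result = standardize(["A", "A", "B", "C"], ["B", "B", "C", "C", "D", "D"])
print(result)  # -> ['B', 'B', 'C', 'D']
```

The skew favoring small classes follows the same logic in reverse: cohorts too small to standardize were reportedly left with their (often generous) teacher-assessed grades.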

Perhaps most alarmingly, inaccuracies in AI-driven predictive policing systems can lead to wrongful arrests, detentions, and even violence. A tool deployed in a United States suburb showed a clear bias against black neighborhoods, leading to over-policing and increased tensions between law enforcement and communities. Similar issues have been reported with facial recognition technologies in China and South Africa, where darker-skinned individuals were more likely to be misidentified or mischaracterized. Such incidents not only threaten the justice system’s integrity but also perpetuate systemic racism. 
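One reason such over-policing compounds is a feedback loop: patrols are sent where the most incidents were recorded, and patrol presence itself generates new records. A deliberately simplified sketch with hypothetical numbers shows how a small initial gap between two neighborhoods snowballs:

```python
# Feedback-loop sketch (hypothetical numbers): each step, all extra
# patrols go to the neighborhood with the most recorded incidents,
# and that presence adds new records there.
def patrol_step(recorded, new_records=5.0):
    hot = max(range(len(recorded)), key=lambda i: recorded[i])
    updated = list(recorded)
    updated[hot] += new_records  # more patrols there -> more records there
    return updated

records = [60.0, 55.0]  # two nearly identical neighborhoods
for _ in range(10):
    records = patrol_step(records)
print(records)  # -> [110.0, 55.0]: a 5-point gap becomes a 2:1 disparity
```

The recorded disparity reflects where the system looked, not necessarily where crime occurred, yet it is fed back in as ground truth.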

These examples underscore a crucial point: AI systems are only as unbiased as the data they are trained on and the people who create them. While fingers are often pointed at the skewed datasets used to train these systems, the root of the problem is often the lack of diverse AI developers who can recognize and mitigate the built-in bias in the data, model or system.
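The mechanism is easy to demonstrate with a minimal sketch on made-up data: a model that simply learns the majority outcome per group faithfully reproduces any skew in its training records, even though the learning rule itself mentions no group preference at all.

```python
# Minimal sketch (invented data): learn the majority decision per group
# from historical records and show that the data's skew becomes the rule.
from collections import defaultdict

# hypothetical historical decisions: (group, approved?)
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 40 + [("B", 0)] * 60

def train(records):
    counts = defaultdict(lambda: [0, 0])  # group -> [denied, approved]
    for group, approved in records:
        counts[group][approved] += 1
    return {g: int(c[1] > c[0]) for g, c in counts.items()}

model = train(history)
print(model)  # -> {'A': 1, 'B': 0}: the historical skew is now policy
```

Spotting this requires someone to ask where the historical records came from, which is precisely where a diverse development team matters.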

The tech industry remains male-dominated, with women representing only 22% of AI professionals globally. Geographic disparity is also an issue, with the majority of AI research and development concentrated in a handful of countries, primarily in North America, Europe, and parts of Asia. This homogeneity in the AI workforce often leads to AI systems that are blind to the experiences and perspectives of underrepresented groups, many of them in the Global South.

To nurture a diverse pool of AI developers, strengthening AI literacy for everyone is crucial. This would arm society with the skills needed to recognize, and hopefully mitigate, algorithmic bias and misinformation. An AI-literate population is better equipped to question the assumptions underlying AI models and demand transparency in their development. In Singapore, for instance, the government recently announced a national initiative to build AI literacy among students and teachers. By 2026, training in classroom uses of AI will be offered to teachers at all levels, including those still in training.

Yet the path to eliminating AI bias through education is not without challenges. The importance of literacy cannot be overstated, but we must acknowledge the deep inequalities that exist in access to education and technology. Those with limited access to quality education and digital resources are less likely to enter the AI field, resulting in a lack of diverse perspectives in AI development. 

Moreover, the digital divide — which separates those with access to technology from those without — exacerbates the AI diversity dilemma. To this day, a significant portion of the world remains offline, hindering their ability to adapt to the new digital norm. 

A substantial percentage of the population in developing countries still lacks basic digital literacy, let alone advanced AI skills. Individuals with lower digital proficiency are at higher risk of being adversely affected by biased technologies, often lacking the knowledge to identify and contest unfair AI decisions. This perpetuates the cycle of discrimination.

Recognizing that AI and emerging technologies offer humanity both exciting new possibilities and significant risks — including existential threats — the international community has begun to act. Last September, the UN Summit of the Future adopted the Global Digital Compact, which aims to address the ethical and inclusive use of digital technologies, including AI. The United Nations University (UNU) is also at the forefront of addressing AI challenges through education. Last year, it launched the UNU Global AI Network and is in the process of establishing a new institute specializing in AI and big data. Moreover, UNU recently created an Action Group on “Futures of Higher Education and Artificial Intelligence.” Through its partners across the globe, the action group is exploring the impact of AI on higher education through the lens of sustainability, leveraging experts from the Global South and pioneering women in AI.

Some universities have also launched comprehensive AI strategies. Through its “AI Across the Curriculum” initiative, the University of Florida aims to become the first institution of its kind in the US. The initiative will integrate artificial intelligence education across various disciplines by hiring more than 100 AI expert faculty members and offering over 200 AI courses to all students. These courses will be designed to equip students with relevant skills in their fields, boosting their career prospects in the AI-driven future.

Until AI literacy becomes a fundamental part of global education efforts, the cycle of bias and discrimination embedded in AI data and systems will persist. Investing in education that prepares all individuals, regardless of their backgrounds, to be active participants in the AI revolution can be transformative. Only then can we hope to navigate towards a future in which AI serves to unite rather than divide, and to empower rather than marginalize. This is about far more than fixing biased algorithms; it is about reshaping human values and knowledge in a future where AI is a force for good.

