Here is a truth we cannot ignore: artificial intelligence has entered its ubiquitous phase. It is in the news, in our homes, and in the palms of our hands. Yet, for all its ubiquity, AI remains strangely opaque. We find ourselves in a gap between knowing of AI and knowing AI. The former means downloading an app, asking Alexa to play a song, or marveling as your smartphone finishes your sentences with alarming accuracy. But what does it really mean to know AI, to grasp its underlying mathematics and the biases lurking in its data? As AI seeps into every corner of daily life, the question is no longer whether we should learn about it, but how.
The problem with most AI courses today is that they overemphasize technical proficiency while neglecting the human dimensions of technology. Meaningful community engagement is sparse, and when it happens, it frequently sidelines the voices most affected by algorithmic bias. The absence of ethical frameworks in schools compounds the issue, leaving students ill-prepared to confront the consequences of the systems they build. The result? A generation of AI natives fluent in Python but illiterate in empathy.
A different blueprint
One proposal gaining traction comes from unlikely sources: cultural heritage and Indigenous knowledge. At first glance, the pairing seems out of place. AI is shorthand for innovation; heritage, for tradition. Yet centuries-old cultural practices and Indigenous knowledge systems offer powerful frameworks for making AI more “human.” This wisdom, particularly in four areas (context, community, stewardship and sustainability), can provide a blueprint for an AI literacy that is culturally responsive and deeply human.
Context
Cultural heritage is inherently contextual. Its artifacts are interpreted as products of historical, geographical and social environments. AI requires the same. Consider voice assistants that struggle to understand non-Western accents. This is not a mere technical glitch but a symptom of a systemic problem: datasets stripped of cultural nuance. When AI systems are trained on datasets that omit minority languages, they don’t just underperform; they exclude entire communities from digital participation.
Courses like “AI and Social Justice” at the University of Tokyo attempt to correct this blind spot. Students trace historical patterns of technological disruption to imagine the ripple effects of today’s innovations. By studying these historical contexts, students develop the foresight to anticipate how emerging technologies might fortify or fracture the social fabric.
Without context, AI education programs risk producing graduates who trust the outputs of machines more than their own judgment. With it, students learn that datasets are never neutral; they are shaped by the biased decisions of people.
Community
Heritage is a living dialogue between past and present. At its heart is a commitment to community. In this sense, heritage is preserved not for communities but by them.
Bringing this ethos into classrooms entails inviting students to co-design AI tools alongside communities. Rather than solving abstract technical puzzles, they would grapple with real-world dilemmas. In Australia’s Kakadu National Park, for instance, scientists worked with Indigenous custodians to monitor ecosystems using both AI and traditional ecological knowledge. The project demonstrated reciprocity, pairing cutting-edge tools with ancestral wisdom. This approach builds trust, transparency and accountability, qualities lacking in the current paradigm of black-box AI models.
Stewardship
In many traditions, knowledge is more than information; it is a trust handed down. The Hudhud chants of the Ifugao in the Philippines encode generations of rice-harvesting wisdom; Micronesian canoe-making traditions embody navigational lore. These practices survive through stewardship: elders pass knowledge to pupils who treat it as a responsibility.
By contrast, much of today’s AI education lacks this depth. Students are frequently left to explore AI tools on their own, without proper guidance. A 2023 UNESCO study found that fewer than one in ten schools has a formal policy on the use of generative AI. Some schools, fearing misuse, ban AI tools outright, leaving students unprepared for the realities of an AI-driven job market.
The alternative is to cultivate technologists who see themselves as stewards. At MIT, the Moral Machine project confronts students with ethical dilemmas before they write a single line of code. Students should be encouraged to imagine themselves in different societal roles: policymakers, advocates, or activists who shape the future. In this framing, responsibility is not an add-on but a foundational competency to be practiced.
Sustainability
In Japan, the forestry technique of Daisugi cultivates cedar trees not for immediate harvest, but for future generations. The philosophy is simple: cultivate today for tomorrow, a mindset AI educators can learn from.
Modern AI systems are resource-intensive, from energy-hungry models to data centers that risk causing water stress in surrounding communities. Just as heritage practices value continuity, AI literacy programs can teach circularity: building systems made to last, not to be discarded. Initiatives such as the Green Software Foundation are already charting this path, providing resources on energy-efficient algorithms, data minimalism and green computing. Data centers equipped with closed-loop cooling systems, incorporating wastewater recycling and rainwater harvesting, can reduce water consumption by 50 to 70%. Embedding such principles into AI education fosters a mindset that what we build today should continue to serve us tomorrow.
A turning point
How we design AI literacy programs carries profound implications. Without a shift from knowing of AI to knowing AI, we risk creating technologies that redefine our world in ways we cannot control.
Humanizing AI literacy programs involves thoughtfully embedding context, community, stewardship and sustainability into the core of the curriculum. By drawing on the enduring wisdom of cultural heritage and Indigenous knowledge, we can design education that prizes responsibility as much as innovation.
Decades from now, AI’s legacy will not be measured in teraflops or model size; it will be measured in the choices humans make today. What began as a technological revolution has the potential to become something far more profound: a moral and cultural turning point. We have the choice to make AI a testament to the best of human intelligence, one that bridges us to futures not yet imagined.
That future is ours to shape. That choice is ours to make now.
