29 May 2024
Emma Ruttkamp-Bloem

Towards Fairness and Justice in AI Education Policymaking

In this blog post, previously published in NORRAG’s 4th Policy Insights publication on “AI and Digital Inequalities”, Emma Ruttkamp-Bloem warns of the risks of digital poverty, “monolithic societies” and misinformation through AI.

The main elements of an ethical and just policy approach to AI in education should be considered without giving in to either hype or panic. The landscape of digital technologies, structured as it is by inequities, biases and potential and actual harms, is in fact a reflection of the real world, which we already have measures in place to navigate.

Willingness to revisit why, what and how we learn in this context implies willingness to take up the challenge to reflect on the long-term implications of generative artificial intelligence (GenAI) applications in education for the creation, acquisition, representation, validation and communication of knowledge.

A framework for such reflection should include social values such as affirmation of the interconnectedness of all humans with each other, equity and human agency; human rights values such as privacy, transparency and accountability; and research values such as honesty and integrity. These values can play out in actions such as identifying shared concerns about the impact of GenAI on the cultivation of autonomous reasoning skills, conducting impact assessments to determine what is needed in each region of the world to enable inclusive and meaningful participation in GenAI, and teaching students the value of engaging in robust and trustworthy knowledge production, validation and communication.

An integral part of actualising the values in such a framework would be to enable students to develop critical awareness of GenAI machine models: to understand how they work in general, investigate where their biases come from, and determine and understand why their content is often shallow or false. To instil such awareness, it would be invaluable to engage students in discussions on the social impact of GenAI, such as “the racial implications of automated decision-making, the increasing carbon footprint of cloud computing, the long histories of technological change, and the dangerous stereotypes that internet data amplifies” (Goodlad & Baker, 2023).

Additional actions that might support and realise this framework include building capacity for teachers and researchers to make proper use of GenAI and encouraging motivation among students to remain engaged in their learning, such that they come to appreciate the value of the writing process in their overall cognitive evolution.

Three of the biggest obstacles to attaining these goals are digital poverty, the creation of monolithic societies and misinformation.

Digital poverty refers to the fact that countries without adequate infrastructure for GenAI methods, such as computing power and sufficient access to data, cannot make appropriate digital progress. Furthermore, GenAI models are trained on data that reflect the values and norms of the Global North, and as such, digitally poor countries face a real risk of data colonisation.

The creation of monolithic societies carries a real risk of reducing the pluralism of opinions and increasing the marginalisation of vulnerable groups in the Global South. The reason is that the only views reflected in GenAI-generated content are those dominant at the time the training data for the model in question was produced, and these are, as already noted, heavily biased towards Global Northern values and the norms of those who frequent the internet.

Misinformation arises because AI-generated content is polluting the internet. When incorrectly generated text is posted online, not only are humans misled, but generative AI systems are then trained on this content. It is therefore important to consider the long-term issues that could arise when the reliability of the knowledge produced is compromised, not only for what students learn but for society as a whole.

A final, more subtle concern, perhaps best understood by those of us from the Global South, is finding the best way to navigate the central tension between, on the one hand, the role digital technologies might play in opening up and democratising knowledge and education and, on the other, the potential for digitalisation to reinforce and entrench existing inequalities at global and local levels.

To overcome these and other obstacles, our most important task is to enable students to engage with this technology in a responsible and critical manner.

Key takeaways:

  • Sufficient action should be taken to counter or address the potential negative implications for education in the Global South resulting from the increasing inequality in the training of generative AI systems.
  • Critical AI awareness and skills to analyse the social impact of AI technology should be introduced at appropriate levels in schools.
  • Sufficient support should be given to encourage and maintain the development of local AI ecosystems, including local AI and AI ethics capacity development that is focused on developing solutions particular to a specific region.
  • Actions for ensuring the reliability of knowledge generated by AI systems and developing awareness of misinformation linked to AI processes should be put in place.

About the Author:

Emma Ruttkamp-Bloem, University of Pretoria, South Africa

Member of the UN Secretary-General’s AI Advisory Body and UNESCO Women4EthicalAI

Chair, UNESCO World Commission on the Ethics of Scientific Knowledge and Technology (COMEST)

