30 May 2024
Alejandro Artopolous

AI and Unequal Knowledge in the Global South

In this blogpost, which was previously published in NORRAG’s 4th Policy Insights publication on “AI and Digital Inequalities”, Alejandro Artopolous reflects upon the three levels of AI’s digital inequalities — the sociolinguistic nature of textual generative AI, AI literacy as human cognitive capacity and digital education policy.

One of the key issues in debates around artificial intelligence (AI) and education is that it exacerbates inequities. In addition to the biases that AI possesses by design, systematic errors, arising both from data infrastructures and from algorithms, can discriminate by race, gender, sexual orientation, physical condition or political bubble. Generative AI has the capacity to empower students who have already reached their information literacy thresholds, while at the same time disconnecting from society those who, for various reasons, have not been able to overcome their digital divides. AI's digital inequalities operate at three levels: the sociolinguistic nature of textual generative AI, AI literacy as a human cognitive capacity, and a new digital education policy.

Sociolinguistic

Generative AI produces disparities between language regions. Because it is an English-centric technology, learners who use other languages do not have access to the same quality of technology. Producing generative AI requires significant investment in machine learning, so when businesses make choices, English becomes the priority language. Their models are also trained on data that is, by and large, in English. This results in lower-quality responses in other languages and higher costs to develop reliable AI systems in non-English languages. As a result, English speakers have an advantage; the underprivileged may be Catalan, Igbo, Quechua or Spanish speakers. Although the issue appears similar across the Global North and Global South, it produces different levels of disparity for each language region and population (Schneider 2022).

Cognitive

Challenges such as misinformation and hate speech confront democracies with the dilemma of building critical computational thinking skills and digital citizenship. Generative AI undermines public trust by making available chatbot services that do not guarantee truthful content in their responses.[1] AI augments and automates the fragmentation of public dialogue through polarised opinion networks. With weak AI,[2] mistrust spreads in public debate, but with generative AI, our students' mistrust of truthful knowledge can grow (Heintz 2022).

It was long assumed that the digital divide could be solved by granting access to connectivity and basic digital literacy; we now struggle to add a new layer of AI literacy on top of that. A new kind of digital literacy is required, one that addresses data literacy and applied computational thinking. This AI literacy varies by field of activity, from art to marketing, agriculture or engineering, which implies that there is no generic computational thinking that applies to every field of practice. It is therefore necessary to rethink curriculum design at all levels of education to integrate conventional literacy with algorithmic and data literacy. The level of response to this reconfiguration of digital education policies varies across the Global North and Global South. Corporations in the Global North have created a disruption that can break the stable practices of modern schooling, but the antidote is only available to OECD education systems (Tedre & Vartiainen 2023).

The cognitive dimension of AI inequalities in education is particularly defining because it depends not only on the readiness of school leaders but also on the strategic direction they choose to take. Socio-technical imaginaries about the application of AI to education are crucial to designing democratic models of citizenship development. Educational policies with a solutionist approach tend to promote inequalities: a government that encourages the adoption of AI as a robot mentor for teaching discrete content neglects teachers' new AI competencies. In contrast, a government that acknowledges AI, like other devices, as a cognitive capability to be incorporated into curricular designs is aware that the restitution of equality always relies on the human side of these ensembles.

Policy

Even if Global South educational leaders were aware of how to implement AI literacy, they would not have the financial resources to prepare schools for such a challenge. The COVID-19 pandemic highlighted the global divide in access to technology between OECD countries and the Global South. While in OECD countries only 10-20% of students suffered from a homework divide, in the Global South this gap could in extreme cases reach 90%. At this level, however, talking about a single divide is an oversimplification; far from a black-and-white situation, there are several grey areas.

In middle-income countries such as Argentina, Uruguay and Chile, we can find three segments: one similar to the OECD, another similar to the typical Global South, and an in-between segment in which children and teachers could be connected during the pandemic but, when they went back to school, lost the ability to learn and teach in a connected way. We call this little-explored third situation "Silvester platformisation": the feasible experimentation in the Global South (without infrastructure or teacher training) with the transition from the modern school to a cloud-ready classroom. Only a tiny fraction of the population can access a cloud-ready classroom. As the process of platformisation in education, including the use of AI, creates inequity by design, low- and middle-income countries tend to develop unplugged (disconnected from the internet) computational education policies. We are facing an advancement of the socio-technical trajectory of the cloud-ready classroom that is in fact restricting freedom of access to information and, in turn, creating a geopolitical digital divide in access to the educational cloud (Tedre & Vartiainen 2023).

Footnotes

  1. Tech companies, in their narrative attempts to anthropomorphise AI, tend to talk about "hallucinations". They explain that a hallucination "is a phenomenon wherein a large language model (LLM) perceives patterns or objects that are nonexistent, creating outputs that are nonsensical or altogether inaccurate" (https://www.ibm.com/topics/ai-hallucinations). The word hallucination anthropomorphises AI, something we are trying to avoid in this publication.
  2. Generative AI has the potential to cause a leap in scale from artificial narrow intelligence (ANI) to artificial general intelligence (AGI). ANI, or weak or narrow AI designed to specialise in a specific task, is limited to a specific or narrow area and cannot operate outside the parameters predefined by its programmers, so it cannot make decisions on its own—that is, the ultimate decision rests with the human. In contrast, AGI is achieved if a machine acquires human-level cognitive capabilities. Additional advancements would be required for that leap to occur, including in reasoning, memory and contextual and ethical decision-making.

Key takeaways:

  • Even if school leaders are not proficient in English, they need to recognise how generative AI behaves differently across languages. It is therefore necessary to train them in AI literacy.
  • AI literacy is a novel practice that needs research and development independent of AI-producing companies. Governments and academia must take the lead in developing a deep AI literacy curriculum that engages the development of diverse applied computational thinking in the Global South.
  • Since the pandemic, governments in the Global South have been improvising an unplugged digital education policy agenda that increases digital inequalities. It is necessary to set a renewed agenda for a new decolonial digital education policy.


About the Author:

Alejandro Artopolous, University of San Andrés, Argentina

