30 May 2024
Ben Williamson

Making Education AI-Friendly

In this blogpost, which was previously published in NORRAG’s 4th Policy Insights publication on “AI and Digital Inequalities”, Ben Williamson argues that far from being merely technical foundations for teaching and learning, AI infrastructure includes social and political technologies with potentially profound and unpredictable impacts on public education. 

After OpenAI released ChatGPT, industry analysts reported it was the “fastest growing internet app in history” (Bove, 2023). Characterising ChatGPT as an app, however, is misleading. As has since become clear, ChatGPT (v.3.5) was launched strategically as a marketing demo to entice users to subscribe to OpenAI’s fee-paying artificial intelligence (AI) models and to encourage institutional customers to purchase enterprise licences for GPT-4. Educational companies including Khan Academy and Duolingo were early partners, gaining first-mover advantage to promote AI in education. Now, educational technology (EdTech) businesses are rushing to incorporate AI into their products, backed by expectant investors and political enthusiasm (Komljenovic et al., 2023).

As OpenAI’s strategy shows, AI development is not about building “apps”. It is about building AI infrastructure as the technical foundation for all industries and sectors to operate upon. This explains why Microsoft has invested billions in OpenAI and why other Big Tech companies, such as Google, Amazon and Meta, are also racing to release AI “foundation models”. The competition for infrastructural dominance in AI is fierce, with huge financial rewards expected for the winners (Williamson, 2023).

Building the foundations is one challenge. The other is getting people to use the services that will be built on top of them. Studies of infrastructure highlight that complex systems are never just technical. They also require people to use them. A successful infrastructure therefore requires the accustomisation of users to its affordances, “making the user friendly” so they will amenably respond to what the technology allows (Gorur & Dey, 2021). 

In education, making the user friendly to AI infrastructure has become a key aim of a wide range of organisations and individuals. Teachers and learning institutions are to be made AI-friendly through a variety of training courses and guidance that might be characterised as “PedagoGPT”. PedagoGPT captures how pedagogic advice is being formulated to accustom educational actors to AI. Examples include training courses in AI provided by entrepreneurial educators or via online programs as well as guidance offered directly by Big Tech companies as part of their strategic aim to stretch AI infrastructure throughout the sector. A prime example is the “Teaching with AI” resources produced by OpenAI, a “guide for teachers using ChatGPT in their classroom”.

Through these PedagoGPT initiatives, schools are being targeted as potential users and customers of AI. The intended result is a synchronisation of AI with pedagogic routines and administrative procedures. The effects of synching schools with AI could be profound. Infrastructure is never merely a technical backdrop upon which other activities take place but actively shapes the practices of its users.

Generative AI infrastructure will be generative of particular effects, including unintended consequences (Holmes, 2023). For example, the entrepreneurs behind foundational AI infrastructure and EdTech applications privilege narrow conceptions of “personalised” and “mastery” learning that reduce education to measurable gains in individual achievement. The capacity of AI to power “personalised learning tutorbots” reinforces this reductionist, privatised and atomised vision of education.

Embedding education in AI infrastructure also privileges commercial technologies as “co-pilots” in the classroom, potentially degrading teachers’ pedagogic autonomy by outsourcing responsibilities to automated technologies (Kerssens & van Dijck, 2022). This risks displacing teachers’ subject expertise and contextual knowledge to computerised data-processing machines.

Another risk is that AI will put degenerative pressure on the quality of knowledge taught in schools. AI language models often produce plausible but false information or biased and discriminatory content due to the available material they draw upon (Goodlad & Baker, 2023). The danger here is that teachers and students may find it increasingly difficult to tell whether AI is delivering them authoritative and accurate sources or just convincing but fallacious content.

Making education AI-friendly therefore poses distinct challenges to learning, teaching and curricula. AI infrastructure is set to coordinate a wide range of educational practices, with PedagoGPT guidance intended to synchronise schools with its affordances. This is despite the lack of independent evidence that AI can improve education in the ways claimed, or of serious consideration of the risks and unintended consequences of introducing AI into schools. Far from being merely technical foundations for teaching and learning, AI infrastructure includes social and political technologies with potentially profound and unpredictable impacts on public education. Rushing to make education AI-friendly would serve to amplify technological power over schooling. Instead, educators and schools should be supported to take a cautious and critical stance to AI (McQuillan et al., 2023).

Key takeaways:

  • Independent evaluations should be commissioned to assess the effectiveness of AI applications targeted at schools, since the weak evidence base currently means schools may be sold products on the basis of exaggerated marketing.
  • Research funders should fund interdisciplinary research to examine the implementation of existing AI applications in schools and to understand their intended effects as well as unintended consequences.
  • Authorities should ensure that schools are not used as live testing sites for AI by requiring independent testing for any policy decisions concerning widespread use.
  • Teacher unions and representative organisations should support teachers’ capacity to critically evaluate AI applications and campaign to ensure that teachers’ professional pedagogic autonomy is protected from automation.

  • School leaders should not mandate the use of AI applications by staff and students and should promote forms of AI literacy to enable them to evaluate and make informed decisions about AI.


About the Author:

Ben Williamson, Centre for Research in Digital Education and Centre for Sociodigital Futures, University of Edinburgh, UK


