Event Highlight: AI and Digital Inequities Summit

Wednesday 27 September 2023

14:00 – 17:00 CEST

Online

Co-organisers: Centre for Sociodigital Futures (CenSoF)

In collaboration with the Centre for Sociodigital Futures, the NORRAG AI and Digital Inequities Summit explored perspectives from research and practice to ask what an ethical and justice-oriented response to AI in education might look like. The online event was held on 27 September 2023 and brought together researchers, students, educators and education policy makers. The aim was to examine how generative AI, and the large language models (LLMs) that underpin it, can be understood within a contested landscape of digital technologies structured by inequities, biases, and potential and actual harms for minoritised and global majority people.

At the start of the summit, Moira Faul, NORRAG’s Executive Director, welcomed participants and opened the event by noting the prominence given to AI, which has recently become a high priority on UNESCO’s digital learning agenda and at the UN General Assembly in New York. She highlighted the expertise of the keynote speaker and panelists invited to the summit, who have long been working at the forefront of AI and digital inequities, justice and ethics.

Sobhi Tawil, Director of the Future of Learning and Innovation in UNESCO’s Education Sector, shared opening remarks acknowledging the collaboration with NORRAG. He highlighted the #TheSouthAlsoKnows campaign, which aligns with UNESCO’s approach to knowledge as a common good and to linguistic and cultural diversity as a wealth of humanity. He acknowledged the Centre for Sociodigital Futures as co-organiser of the Summit and recognised the alignment with UNESCO’s work on digital learning futures. He also highlighted some of UNESCO’s efforts to provide guidance on AI and its application in different domains, including education and research, and the promise of technology for equitable educational opportunities and inclusion, particularly in the post-COVID-19 era.

 

Keynote on “Critical perspectives on AI in education” 

Dr. Ben Williamson, a Chancellor’s Fellow at the Centre for Research in Digital Education and the Edinburgh Futures Institute, University of Edinburgh, then took the floor for a keynote on “Critical perspectives on AI in education”. In his introduction, he highlighted that the subject has attracted considerable hype and hope, as well as significant anxiety and despair, globally. He acknowledged that AI, particularly generative AI, has produced some impressive technical advances as well as controversies. Ben’s discussion focused on a series of critical tensions and problems with the introduction of AI in education. Being critical, he explained, means challenging some of the assertions and taken-for-granted assumptions that AI will inevitably lead to positive transformation. He indicated that introducing AI into education will not be a seamless process and pointed out that the Centre for Sociodigital Futures is currently tracing how desirable digital futures of learning are constructed and circulated. He then presented the three waves of AI: the expert systems or knowledge models that emerged in the 1960s, followed by the big data revolution as the second wave, and a third wave of generative AI characterised by large language models. He stressed that AI is an infrastructure that serves as a foundation for other applications to be built on top of, with great potential in education systems that will be difficult to do away with. Embedding AI in education, he suggested, depends on educating people in its use. He maintained that making AI user-friendly means rethinking our assumptions about educational expertise and authority, while also being amenable to greater involvement of the tech industry in public education systems. However, he said that AI also raises critical questions and ethical controversies about its degenerative effects on the content and knowledge that is taught and learned in schools. He added that AI privileges innovation in the Global North but evades ethical responsibility, social justice and rights-based concerns. He concluded that if educators focus entirely on the ethical issues of AI, they may become unfriendly towards it.

Panel 1: Justice, Knowledge & AI 

This was followed by the first panel discussion, featuring Dr. Radhika Gorur, Associate Professor of Education, Deakin University (moderator); Dr. Alejandro Artopoulos, Academic Director of the Centro de Innovación Pedagógica, Universidad de San Andrés; Dr. Harini Suresh, Postdoctoral Researcher in Computer Science, Massachusetts Institute of Technology (MIT); Dr. Lulu P. Shi, Research Associate at the Oxford Internet Institute (OII); and Dr. Amber Sinha, Senior Fellow at the Mozilla Foundation. The panel focused on justice, knowledge and AI, with the following questions guiding the discussion: What is the relationship between generative AI and LLMs and knowledge inequity and injustice? What forms of response and redress are students, educators and practitioners exploring, including in Southern contexts?

Here are summaries of each panelist’s intervention:

Dr. Alejandro Artopoulos

Alejandro discussed the differential impact of AI and other technologies on inequities, and its dynamics within the Global South and North and across different countries. He argued that this debate should be kept separate from older debates that treat biases built into policies by design as social problems. He noted that AI exacerbates inequalities in education at three levels: sociolinguistic, socioeconomic and educational policy. At the sociolinguistic level, AI limits access to other languages by offering information primarily in English, resulting in lower-quality learning in other languages. At the socioeconomic level, he argued that the formalisation of innovation experienced during the pandemic made AI a main component of educational platforms, with disparities across countries. The last level relates to educational policies: the machinery around the application of data, and how it relates to the design of democratic policies in education, is the main challenge. He pointed out that a solutionist educational approach tends to promote inequalities. To conclude, he stated that if governments approach AI as a black-box device with great cognitive content, they will be encouraged to incorporate it into their curricula.

Dr. Harini Suresh

Harini spoke about private access and the huge proliferation of EdTech, and their impact on knowledge processes. She highlighted two aspects of knowledge and machine learning systems: first, how knowledge is encoded into the system; and second, the distribution of and access to the knowledge produced. She mentioned that, with the proliferation of giant language models, data is usually collected in an unsupervised way. She believes there are specific kinds of knowledge and perspectives that are overrepresented and, conversely, others that are underrepresented or non-existent. She argued that as these systems are deployed as dissemination tools, they amplify hegemonic perspectives. On state-of-the-art models in EdTech, Harini noted that the risk is insidious, because data at this scale is difficult to measure, and the inability to probe or quantify it leads to a lack of accountability. She highlighted that more transparency is needed to ask and answer questions about which perspectives are represented in these tools. To conclude, she asked whose expertise is considered valuable enough to be included in the development and evaluation of these systems.

Dr. Lulu P. Shi

Lulu’s contribution related to the dominant discourses in the EdTech landscape and how these relate to justice and knowledge. She drew on an analysis of media narratives about EdTech in the UK. She highlighted that discussions and voices on EdTech are heavily dominated by tech actors. She argued that media conversations on digital inequality are superficial, as they do not engage in a deeper analysis of the structural inequalities that underlie digital inequality. The focus lies on how tech could fix education, without paying attention to social, economic and political problems. She further noted that data privacy and ethics were other issues raised in the media: there were many data breaches, and situations where students’ data was hacked, during the pandemic. A critical issue that emerged was therefore the ownership of data, whether by students or by tech companies. Lulu mentioned that there is a lack of consensus on data ownership; while concerns are raised about governments’ incapacity to handle data, there is also the fear that in certain countries the government may gain access to student data. To conclude, she highlighted data capitalism, whereby tech companies collect data from people and sell it back to them for profit. It is therefore important to think about data ownership in a more democratic way, to ensure that the people who provide the data have a say.

Dr. Amber Sinha

Amber focused on the issue of regulating technologies and AI systems. From a human-centred perspective, he agreed with Harini about the kind of data that is fed into these systems and its accessibility. He noted that one of the challenges encountered in data collection is the lack of representation and equity, and that current collection practices can be extremely exploitative. Amber also stressed the lack of transparency, scrutiny and self-regulation, since the nature of the knowledge produced is shaped by modern research, which is no longer the prerogative of academia alone but also of large tech companies. He shared that, in terms of regulation, countries are focusing largely on human rights violations, particularly in the Global South, where capacity is limited. Amber also discussed emerging standards and conventions on good practice. However, he underlined the large effort required to address questions of inclusivity and the integration of automation. He added that regulation of AI will not happen rapidly, as it is not considered a policy priority. He concluded that engaging in critical thinking is key to developing conversations and standardising best practices in a way that does not compromise the inclusion of technological automation.

 

Panel 2: Ethics and AI 

A second panel, moderated by Dr. Emma Harden-Wolfson, Assistant Professor in the Faculty of Education, McGill University, Canada, presented a discussion on ethics and AI, with Dr. Janja Komljenovic, Lancaster University, UK; Raïssa Malu, Investing In People non-profit association; and Prof. Emma Ruttkamp-Bloem, University of Pretoria and Chair of the UNESCO World Commission on the Ethics of Scientific Knowledge and Technology (COMEST), serving as panelists.

Here are summaries of each panelist’s intervention:

Prof Emma Ruttkamp-Bloem

Emma discussed the main elements of an ethical and just policy approach to AI in education. She argued that the landscape of digital technologies is characterised by structural inequalities, biases and potential harms that are reflected in the real world. She called on educators to beware of the hype and to revisit the long-term implications of generative AI in education for the acquisition and validation of knowledge. She noted that social values in the context of education, such as equity, inclusiveness and human agency, should be anchored in a spirit of accountability, transparency and research values. These can play out in actions such as identifying shared concerns about the impact of generative AI on the cultivation of reasoning skills. Similarly, conducting assessments will help determine what is needed in each region of the world to enable inclusive and meaningful participation in generative AI; this will also support the promotion of plural opinions and the expression of ideas. She described digital poverty as the biggest obstacle to an ethical and just approach to AI. Emma argued that it is therefore important that educators and researchers take a critical view of the value orientations, cultural standards and social customs embedded in training models, which shape what is generated. She concluded that AI should be at the service of developing human capabilities, in order to build inclusive and sustainable futures with a positive impact on human dignity and cultural diversity, in the spirit of common knowledge.

Raïssa Malu

Raïssa was asked how ethical frameworks for AI should be adapted to suit the unique cultural and linguistic diversity of Francophone Africa. Raïssa responded that, rather than contributing to more inclusive or sustainable learning, AI has already begun to compound existing inequalities. She stated that in French-speaking Africa the main challenge remains the language barrier, as technologies are developed and made readily available in English. She added that even though the Democratic Republic of Congo (DRC) is the largest French-speaking country in the world, assessments show that teachers do not reach minimum proficiency in French; since French itself is not yet mastered, English does not seem to be a good option. Raïssa stressed that while online resources and digital platforms should be made available in French, making them available in local languages is most important. This involves collaboration with local people, linguists and cultural experts who can provide guidance. She argued that AI content should be translated so that it is culturally adapted. Raïssa said there is also a need to support African researchers in developing tools that explore innovative ways to integrate oral storytelling into AI-driven educational materials. She stressed the importance of community engagement, inviting educators, parents, teachers, students and community leaders to discuss and provide input and feedback so that ethical guidelines reflect local values, concerns and needs. She highlighted that by taking into account areas with varying levels of digital access, we foster a sense of ownership and trust in AI for education.

Dr. Janja Komljenovic

Janja addressed micro-level concerns about generative AI in education. She began with user consent, noting that users have to agree to terms and conditions set unilaterally by proprietary platforms, or to the various contracts that educational institutions hold with them; this relates to the agency of users. She also noted the lack of democratic and relational decision making in data privacy regulations. Janja argued that there is an asymmetry of power regarding who is able to make digital user data available. She called for the setting up of sectoral data trusts in which user data would be collected and governed, separating the sites of data production from those of data control. She shared that such a governance structure would not only foster ethical and just innovation but also set standards for user rights and user contribution to decision making. To conclude, she suggested that generative AI for education must be discussed internationally to examine issues of development, motivation, monitoring and evaluation in relation to local knowledge, and highlighted that these complex issues should be unpacked and addressed.

The AI and Digital Inequities Summit was closed by Moira Faul, who thanked the keynote speaker, moderators and panelists, as well as the participants for the questions they shared during the event.

 

Speakers:

Dr. Alejandro Artopoulos – speaker

Alejandro Artopoulos is a Professor of Technology and Educational Change at Universidad de San Andrés and of Science & Technology Studies (STS) at Universidad de Buenos Aires, and a Researcher at CIC. He holds a PhD from the Universitat Oberta de Catalunya and specialises in digital STS. He is interested in sociotechnical transitions towards hybrid education and smart agriculture, digital inequities and the development of information capacities, and applied computational thinking.

 

Dr. Radhika Gorur – moderator

Radhika Gorur is Associate Professor of Education at the School of Education, Deakin University. Radhika’s research examines how the world is translated into numbers, and how quantification transforms the world, with an empirical focus on education sites. Her current projects are ‘Global Policy Networks and Accountability in Education in the Indo-Pacific’, and ‘The Techniques and Politics of Standardisation and Contextualisation: PISA for Development.’

 

Dr. Emma Harden-Wolfson – moderator

Dr Emma Harden-Wolfson is Assistant Professor in the Faculty of Education at McGill University, Canada. She is an international and comparative higher education policy specialist. Over the past two decades, Emma has worked in higher education research, teaching, policy analysis and university administration across four continents. Prior to joining McGill, Emma was Head of Research and Foresight at UNESCO’s International Institute for Higher Education in Latin America and the Caribbean where she led flagship projects on the right to higher education, artificial intelligence and digital transformations of higher education, and the futures of higher education.

 

Dr. Janja Komljenovic – speaker

Janja Komljenovic is a Senior Lecturer at Lancaster University. She is a Management Committee member of the Centre for Global Higher Education (CGHE). Her research focuses on the political economy of higher education and higher education markets. Komljenovic is especially interested in the relation between the digital economy and the higher education sector, and in the digitalisation, datafication and platformization of universities. She led the ESRC-funded research project “Universities and Unicorns: building digital assets in the higher education industry”, which investigated new forms of value construction in digital higher education and employed a theoretical lens of rentiership and assetization. Komljenovic has published internationally on higher education policy, markets and education technology.

Raïssa Malu – speaker

Raïssa Malu is an international education consultant. She directed the Education Project for the Quality and Relevance of Teaching at Secondary and University Levels (PEQPESU) for the Ministry of Primary, Secondary and Technical Education of the Democratic Republic of Congo. This project was financed by the World Bank. She is also director of the non-profit organization Investing In People, which organizes Science and Technology Week in the Democratic Republic of Congo, the 10th consecutive edition of which was held from April 15 to 22, 2023.

 

Prof Emma Ruttkamp-Bloem – speaker

Emma Ruttkamp-Bloem is a philosopher of science and technology, an AI ethics policy adviser, and a machine ethics researcher. She is the Head of the Department of Philosophy at the University of Pretoria, the AI ethics lead at the Centre for AI Research (CAIR), and the chair of the Southern African Conference on AI Research (SACAIR). She was recently appointed chair of the UNESCO World Commission on the Ethics of Scientific Knowledge and Technology (COMEST). Emma is a member of the African Union Development Agency Consultative Roundtable on Ethics in Africa and of the African Commission on Human and Peoples’ Rights (ACHPR) task team working on the Resolution 473 study. She is a member of the Global Academic Network at the Center for AI and Digital Policy, Washington DC, and a participant in the Design Justice AI Global Humanities Institute.

Dr. Lulu P. Shi – speaker

Lulu Shi is a lecturer at the Department of Education, Oxford University, and a research associate at the Oxford Internet Institute. She is a sociologist and her research spans technology, education, work and employment, and organisations. Lulu leads a project funded by the British Educational Research Association, which investigates the political and economic agenda behind the push for the digitalisation of education. She has also recently completed a British Academy-funded project, in which she developed an index that traces EdTech usage in the UK.

 

Dr. Amber Sinha – speaker

Amber Sinha works at the intersection of law, technology and society, and studies digital technologies’ impact on socio-political processes and structures. His research aims to further the discourse on regulatory practices around the internet, technology, and society. Until June 2022, he was the Executive Director of the Centre for Internet and Society, India where he led programmes on civil liberties research, including privacy, identity, AI, cybersecurity and free speech. He is currently a Senior Fellow-Trustworthy AI at Mozilla Foundation studying models for algorithmic transparency.

Dr. Harini Suresh – speaker

Harini is currently a postdoc at Cornell University and an incoming assistant professor of computer science at Brown University. Her work asks how societal context and diverse participation can shape the ML lifecycle, from problem conceptualization to evaluation. Her research has considered these issues across contexts including x-ray diagnostics, gender-based violence monitoring, and online content moderation. She is also an enthusiastic proponent of interdisciplinary collaboration for thinking about and addressing the societally relevant impacts of technology.

Dr. Ben Williamson – keynote

Ben Williamson is a senior lecturer and co-director of the Centre for Research in Digital Education at the University of Edinburgh, UK. He is an editor of the international journal Learning, Media and Technology, and of the forthcoming World Yearbook of Education 2024: Digitalisation of Education in the Era of Algorithms, Automation and Artificial Intelligence.

 

 


Partner: 

Centre for Sociodigital Futures (CenSoF) – Website

The Centre for Sociodigital Futures is an international centre of excellence for sociodigital futures research and collaboration. It will run for an initial five years, from 2022 to 2027. It brings together world-leading expertise from across the Social Sciences, Engineering and the Arts, and is led by the University of Bristol in collaboration with a growing number of renowned universities, strategic partners from both industry and policy, and a network of leading global universities.
