30 Nov 2023
Kathryn Conrad

Sneak Preview: A Blueprint For An AI Bill of Rights For Education

This blog post was originally published by Critical AI and is part of the Digitalisation of Education series.

In this blog post, Kathryn Conrad argues that it is time to develop lasting principles that proactively put the needs of individuals and the general public at the fore of technological development, in particular the development and use of generative AI. She proposes a “bill of rights” for educators and students.

Since November 2022, when OpenAI introduced ChatGPT, so-called generative AI[1] has been a subject of broad public conversation and debate. Although the conversation is relevant also to large image models (LIMs), discussions about generative AI in education have tended to focus on ChatGPT, a large language model (LLM) engineered for question answering and dialogue. Almost immediately, media commentators began to hail the tool as the end of the high school and college essay and to urge teachers of writing at both levels to adopt such chatbots in their teaching.[2]

There is some truth to the media’s obsessive focus on plagiarism or “cheating”: the ease with which students can generate ostensibly passable work on a range of assignments, combined with the unreliability of AI-detection software, has compelled educators to reassess their syllabi, rubrics, and statements of academic integrity in the effort to ensure that students consistently meet learning goals.[3] They have met this challenge, moreover, in the wake of pandemic-driven pedagogical disruptions: shifts to (and from) online learning, often in tandem with layoffs, austerity, and heightened workloads. From K-12 to elite research universities, educators have managed this technology-driven turbulence with minimal training, support, or guidance—all while contending with clickbait articles that portray them as pearl-clutching technophobes.

The pressure to “teach with” generative tools has continued to mount, driven partly by technology companies that have long perceived education as a lucrative market. However, in producing these much-hyped commercial tools, these companies neither focused on education nor consulted with educators or their students. These models were not only designed without consideration of educational goals, practices, or principles; they also emerge from a technocratic landscape that often denigrates higher education, imagines teaching to be a largely automatable task, conceives of human learning as the acquisition of monetizable skills, and regards both students and teachers as founts of free training data. This disregard for experienced professionals reflects Big Tech’s tendency to ignore domain experts of any kind.[4]

As this special issue elaborates at some length, today’s AI entails a host of ethical problems, including the nonconsensual “scraping” of human creative work for private gain, amplification of stereotypes and bias, perpetuation of surveillance, exploitation of human crowdworkers, exacerbation of environmental harms, and unprecedented concentration of power in the hands of a few corporations that have already proven themselves poor stewards of the public interest.[5] The reality of increasing harm in the deployment of these systems has led the EU to place “AI systems intended to be used for the purposes of assessing students” and “participants in tests commonly required for admission to educational institutions” in their highest category of risk, alongside those used for law enforcement and administration of justice (EU AI 2023, n.p.).

Teaching critical AI literacy (Bali 2023) includes making this larger context visible to students. Advancing such literacy does not preclude the possibility of envisioning AI tools that work. As the anthropologist Kate Crawford (2021) argues, AI is not simply a technology; it is also “a registry of power” (8). Law professor Frank Pasquale (2020) shows how the failure so far to regulate technology corporations adequately has perpetuated such power; the new rules he calls for must be “co-developed with domain experts,” “diverse,” and “responsive to community values” (229). At present, educators and other professionals have tended to react to each technological rollout in the effort to contend with deleterious effects.[6] But as we move to consider whether and how generative tools have a place in our classrooms, it is time to develop lasting principles that proactively put the needs of individuals and the general public at the fore of technological development. The point is not to banish LLMs but, rather, to encourage the development and potential adoption of systems that are trained on ethically obtained datasets; designed in collaboration with educators, students, and community stakeholders; and developed with careful attention to access, equity, and learning goals.

My blueprint for educators and students builds on a document that the Biden administration’s Office of Science and Technology Policy released in 2022: “Blueprint for an AI Bill of Rights.” Although at present the Blueprint serves merely as an aspirational guide, I believe that the following principles, quoted verbatim from each of its five sections, should be enshrined in law and enforced.[7]

  • Safe and Effective Systems: You should be protected from unsafe or ineffective systems.
  • Algorithmic Discrimination Protections: You should not face discrimination by algorithms and systems should be used and designed in an equitable way.
  • Data Privacy: You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.
  • Notice and Explanation: You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
  • Human Alternatives, Consideration, and Fallback: You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.[8]

With these principles as a starting point, I propose a supplemental set of rights for educators and students.

These are intended as the beginning rather than the end of the conversation, a foundation on which policies and protections can be based. Ultimately, however, educators must lead this conversation, guided by aspirations for and collaboration with our students.

Rights for Educators

Input on Purchasing and Implementation

You should have input into institutional decisions about purchasing and implementation of any automated and/or generative system (“AI”) that affects the educational mission broadly conceived. Domain experts in the relevant fields should be informed and enabled to query any consultants, vendors, or experts who have promoted the systems before such systems are adopted. Institutions should also set up opportunities for students to participate in and advise on policies that involve the mandatory use of any such applications. Institutions interested in exploring coursework devoted to the use of automated and/or generative tools should enable instructors to work with developers and vendors to ensure that any adopted tools are appropriate for educational contexts and do not subject students or educators to surveillance or data theft.

Input on Policies

You (or your representative in the appropriate body for governance) should have input into institutional policies concerning “AI” (including automated and/or generative systems that affect faculty, students, and staff). By definition, educators are at the heart of the educational mission and must be given the opportunity to lead the development of “AI”-related policies.

Professional Development

You should have institutional support for training around critical AI literacy. Critical AI literacy includes understanding how automated and/or generative systems work, the limitations to which they are subject, the affordances and opportunities they present, and the full range of known harms (environmental as well as social). Such literacy is essential, but educators cannot be expected to take on the work of gaining it without such support.

Autonomy

So long as you respect student rights (as elaborated below), you should decide whether and how to use automated and/or generative systems (“AI”) in your courses. Teaching about “AI” is increasingly important to educating students, but commitment to teaching critical AI literacy (as elaborated above) does not imply any mandatory student use of an automated system. Educators should not be pressured into adopting new systems or penalized for opting out. Educators should be given resources to evaluate best practices for teaching in consultation with other domain experts and peer-reviewed research on pedagogy.

Protection of Legal Rights

You should never be subjected to any automated and/or generative system that impinges on your legal rights (including but not limited to those stated above). 

Rights for Students

Guidance

You should be able to expect clear guidance from your instructor on whether and how automated and/or generative systems are being used in any of your work for a course. These guidelines should make clear which specific systems or tools are appropriate for any given assignment.

Consultation

You should be able to ask questions of your instructor and administration about the use of automated and/or generative systems prior to submitting assignments without fear of reprisal or assumption of wrongdoing. Critical AI literacy, especially in an environment of rapid technological development, requires honest conversations between all stakeholders. This includes students being able to ask why any given system is required for a given assignment. Students who have been using AI tools in other courses or in their private lives should be treated respectfully on this as on any other matter.

Privacy and Creative Control

You should be able to opt out of assignments that may put your own creative work at risk for data surveillance and use without compensation. Educational institutions have an obligation to protect students from privacy breaches and exploitation.

Appeal

You should be able to appeal academic misconduct charges if you are falsely accused of using any AI system inappropriately. If you are accused of using technology inappropriately, you should be invited to a conversation and allowed to show your work. Punitive responses to student abuse of generative technologies must be based on the same standard of evidence as any other academic misconduct charges. Critical AI literacy means that all parties recognize that detection tools are at present fallible and subject to false positives.

Notice

You should be informed when an instructor or institution is using an automated process to assess your assignments, and you should be able to assume that a qualified human will be making final evaluative decisions about your work. You should always have the ability to choose to be assessed by a human and to appeal automated assessments.

Protection of Legal Rights

You should never be subjected to any automated and/or generative system that impinges on your legal rights (including but not limited to those stated above).


I especially want to thank the Critical AI editorial team for extensive and thoughtful feedback on several drafts of this essay; Anna Mills, Maha Bali, Autumm Caines, Lisa Hermsen, and Perry Shane for their input on my earliest draft of this framework; and the Kansas and Missouri educators who attended the June 2023 AI & Digital Literacy Summit at the Hall Center for the Humanities at the University of Kansas, whose concerns and dialogue helped sharpen my thinking about this work.


Footnotes

[1] I use the industry’s preferred term, generative AI, while recognizing that “artificial intelligence” (AI) is a loaded term with a complicated history of extracting data. Though “generative AI” is increasingly common, I concur with Bender (2023, n.p.) that a more apt term for this cluster of technologies might be “synthetic media machines.” The novelist Ted Chiang refers to this elaborate data-mining technology (Murgia 2023) simply as “applied statistics.”

[2] For “end of the essay” predictions, see Herman 2022 and Marche 2022. For journalists urging educators to “teach with” AI, see, for instance, Heaven 2023, Roose 2023, and Rim 2023.

[3] On flawed efforts to detect AI-generated work, see Wiggers 2023 and Fowler 2023; evidence already suggests that vulnerable populations are more likely to be accused of cheating (e.g., Liang et al. 2023). On one notable case of a false accusation by an instructor who assumed that the LLM could detect its own outputs, see Klee 2023.

[4] See, for example, Pasquale (2020, 6-88) and Goodlad and Baker (2023). While Pasquale’s first proposed law specifies that “Robotic systems and AI should complement professionals, not replace them” (3), the public release of GPT-4 was accompanied by widely hailed claims that falsely implied that chatbots capable of passing, say, the LSAT are thereby equipped to practice law. As Goodlad and Stone write in the introduction to this special issue, “The same mentality that finds tech companies eager to portray LLMs as gifted educators also finds them keen to uphold chatbots as the ideal replacement for psychotherapists, lawyers, social workers, doctors, and much else.” For the recent case of one lawyer’s disastrous reliance on ChatGPT for legal research, see Armstrong 2023 and Milmo 2023. Notably, the release of OpenAI’s GPT-4, supported by Microsoft, coincided with the firing of Microsoft’s AI Ethics and Society team in 2023 (Torres 2023). In this, Microsoft followed in the footsteps of Google’s firing of ethicists Timnit Gebru and Margaret Mitchell in late 2020 and early 2021, respectively (see, e.g., Metz and Wakabayashi 2020, Schiffer 2021). To be sure, some educators have been in productive, collaborative conversations with tech companies and fellow educators about the use of generative technologies in education (e.g., Mills, Bali, and Eaton 2023).

[5] Describing human creative work as training data is already “deliberately alienating reductionism” (Conrad 2023, n.p.), stripping such work “of the critical essence by which it avails itself of copyright protection: its expressive value and human creativity” (Kupferschmid 2023). For important work on these problems, see Bender et al. 2021, Crawford 2021, Weidinger et al. 2021, Whittaker 2021, Acemoglu and Johnson 2023, Caines 2023, D’Agostino 2023, Fergusson et al. 2023, Furze 2023, Gal 2023, Hendricks 2023, Perrigo 2023, Sweetman and Djerbal 2023, Turkewitz 2023, and van Rooij 2023.

[6] Adopting a principles-first approach helps to avoid situations similar to those encountered by copyright law, which emerged in reaction to the printing press (ARL n.d.) and whose “fair use” stipulations (Turkewitz 2023) have been exploited by large corporations. While the Russell Group of UK universities has articulated a loose set of principles (2023), their likely impact is attenuated by vague language around the notion of “appropriate use” as well as an unsubstantiated assumption that “AI” provides a “transformative opportunity” that these institutions are “determined to grasp.” The MLA-CCCC Joint Task Force on Writing and AI Working Paper, published as this essay was copy-edited, makes a strong set of principled recommendations focused on building critical AI literacy.

[7] As its extensive legal disclaimer page makes clear (OSTP 2022b), the Blueprint is not enforceable; indeed, the US White House and Congress have actively supported companies known to have violated these very rights (see White House 2023a, White House 2023b, Krishan 2023).

[8] The quoted text comprises each of the headers and the statement of principles in the Blueprint as of July 2023. Full text can be found at https://www.whitehouse.gov/ostp/ai-bill-of-rights/.


Bibliography

Acemoglu, Daron, and Simon Johnson. “Big Tech Is Bad. Big A.I. Will Be Worse.” New York Times, June 9, 2023. https://www.nytimes.com/2023/06/09/opinion/ai-big-tech-microsoft-google-duopoly.html

Armstrong, Kathryn. “ChatGPT: US lawyer admits using AI for case research.” BBC News, May 27, 2023. https://www.bbc.com/news/world-us-canada-65735769

Association of Research Libraries (ARL). “Copyright Timeline: A History of Copyright in the United States.” [n.d.] https://www.arl.org/copyright-timeline/

Bali, Maha. “What I Mean When I Say Critical AI Literacy.” Reflecting Allowed [blog]. April 1, 2023.  https://blog.mahabali.me/educational-technology-2/what-i-mean-when-i-say-critical-ai-literacy/

Bender, Emily. Twitter post, June 17, 2023, 8:47 am. https://twitter.com/emilymbender/status/1670065739196420096?s=20

Bender, Emily, Timnit Gebru, et al. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?🦜” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21), Virtual Event, Canada, March 2021, 610–623.

Caines, Autumm. “Prior to (or instead of) using ChatGPT with your students.” Is a Liminal Space [blog], January 18, 2023. https://autumm.edtech.fm/2023/01/18/prior-to-or-instead-of-using-chatgpt-with-your-students/ 

Conrad, Kathryn (Katie). “Data, Text, Image: How We Describe Creative Work Matters.” Pandora’s Bot [blog], May 4, 2023. https://kconrad.substack.com/p/data-text-image

Crawford, Kate. 2021. Atlas of AI. Yale University Press.

D’Agostino, Susan. “How AI Tools Both Help and Hinder Equity.” Inside Higher Ed, June 5, 2023. https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2023/06/05/how-ai-tools-both-help-and-hinder-equity

European Union. “Artificial Intelligence Act.” 2023. https://artificialintelligenceact.com/. Accessed July 4, 2023.

Fergusson, Grant, Catriona Fitzgerald, et al. “Generating Harms: Generative AI’s Impact and Paths Forward.” White paper, Electronic Privacy Information Center, May 2023. https://epic.org/documents/generating-harms-generative-ais-impact-paths-forward/

Fowler, Geoffrey A. “We tested a new ChatGPT-detector for teachers. It flagged an innocent student.” The Washington Post, April 3, 2023. https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin/

Furze, Leon. “Teaching AI Ethics.” Leon Furze [blog], January 26, 2023. https://leonfurze.com/2023/01/26/teaching-ai-ethics/

Gal, Uri. “ChatGPT is a data privacy nightmare, and we ought to be concerned.” ArsTechnica, February 8, 2023. https://arstechnica.com/information-technology/2023/02/chatgpt-is-a-data-privacy-nightmare-and-you-ought-to-be-concerned/ 

Goodlad, Lauren, and Sam Baker. “Now the Humanities Can Disrupt ‘AI’.” Public Books, February 20, 2023. https://www.publicbooks.org/now-the-humanities-can-disrupt-ai/ 

Goodlad, Lauren, and Matthew Stone. “Introduction.” Critical AI, forthcoming, February 2024.

Heaven, Will Douglas. “ChatGPT is going to change education, not destroy it.” MIT Technology Review, April 6, 2023. (online).

Hendricks, Christina. “Some Ethical Considerations in ChatGPT and Other LLMs.” You’re the Teacher [blog], February 2, 2023. https://blogs.ubc.ca/chendricks/2023/02/02/ethical-considerations-chatgpt-llms/

Herman, Daniel. “The End of High-School English.” The Atlantic, December 9, 2022. https://www.theatlantic.com/technology/archive/2022/12/openai-chatgpt-writing-high-school-english-essay/672412/

Klee, Miles. “Professor Flunks All His Students After ChatGPT Falsely Claims It Wrote Their Papers.” Rolling Stone, May 17, 2023. https://www.rollingstone.com/culture/culture-features/texas-am-chatgpt-ai-professor-flunks-students-false-claims-1234736601/ 

Krishan, Nihal. “Congress gets 40 ChatGPT Plus licenses to start experimenting with generative AI.” Fedscoop, April 24, 2023. https://fedscoop.com/congress-gets-40-chatgpt-plus-licenses/

Kupferschmid, Keith. Copyright Alliance, AI Accountability Policy Request for Comment, Docket No. 230407–0093 (June 12, 2023). https://copyrightalliance.org/wp-content/uploads/2023/06/NTIA-AI-Comments-FINAL.pdf 

Liang, Weixin, et al. “GPT detectors are biased against non-native English writers.” ArXiv, April 6, 2023. https://arxiv.org/abs/2304.02819

Luccioni, Alexandra Sasha, Christopher Akiki, Margaret Mitchell, and Yacine Jernite. “Stable Bias: Analyzing Societal Representations in Diffusion Models.” ArXiv, March 20, 2023. https://arxiv.org/abs/2303.11408

Marche, Stephen. “The College Essay Is Dead.” The Atlantic, December 6, 2022. (online).

Metz, Cade, and Daisuke Wakabayashi. “Google Researcher Says She Was Fired Over Paper Highlighting Bias in A.I.” New York Times, December 3, 2020. https://www.nytimes.com/2020/12/03/technology/google-researcher-timnit-gebru.html

Milmo, Dan, et al. “Two US lawyers fined for submitting fake court citations from ChatGPT.” The Guardian, June 23, 2023. https://www.theguardian.com/technology/2023/jun/23/two-us-lawyers-fined-submitting-fake-court-citations-chatgpt

Mills, Anna, Maha Bali, and Lance Eaton. “How do we respond to generative AI in education? Open educational practices give us a framework for an ongoing process.” Journal of Applied Learning and Teaching 6, no. 1 (2023). https://doi.org/10.37074/jalt.2023.6.1.34

MLA-CCCC Joint Task Force on Writing and AI. “MLA-CCCC Joint Task Force on Writing and AI Working Paper: Overview of the Issues, Statement of Principles, and Recommendations.” MLA and CCCC, July 2023. https://aiandwriting.hcommons.org/working-paper-1/

Murgia, Madhumita. “Sci-fi writer Ted Chiang: ‘The machines we have now are not conscious’.” Financial Times, June 2, 2023. https://www.ft.com/content/c1f6d948-3dde-405f-924c-09cc0dcf8c84?shareType=nongift 

OpenAI. “GPT-4 Technical Report.” March 27, 2023. (online).

Pasquale, Frank. 2020. New Laws of Robotics: Defending Human Expertise in the Age of AI. Harvard University Press.

Perrigo, Billy. “OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic.” Time, January 18, 2023. https://time.com/6247678/openai-chatgpt-kenya-workers/

Rim, Christopher. “Don’t Ban ChatGPT—Teach Students How To Use It.” Forbes, May 3, 2023. https://www.forbes.com/sites/christopherrim/2023/05/03/dont-ban-chatgpt-teach-students-how-to-use-it/?sh=581ea1b8245b 

Roose, Kevin. “Don’t Ban ChatGPT in Schools. Teach With It.” New York Times, January 12, 2023. https://www.nytimes.com/2023/01/12/technology/chatgpt-schools-teachers.html 

Russell Group. “Russell Group principles on the use of generative AI tools in education.” July 4, 2023.

Schiffer, Zoe. “Google fires second AI ethics researcher after internal investigation.” The Verge, February 19, 2021. https://www.theverge.com/2021/2/19/22292011/google-second-ethical-ai-researcher-fired

Sweetman, Rebecca, and Yasmine Djerbal. “ChatGPT? We need to talk about LLMs.” University Affairs, May 25, 2023. https://www.universityaffairs.ca/opinion/in-my-opinion/chatgpt-we-need-to-talk-about-llms/ 

Torres, Jennifer. “GPT-4 Is Here, Microsoft Gives Its AI Ethics Team the Boot, More AI News.” CMSWire, March 16, 2023. https://www.cmswire.com/digital-experience/gpt-4-is-here-microsoft-gives-its-ai-ethics-team-the-boot-more-ai-news/

Turkewitz, Neil. “The Fair Use Tango: A Dangerous Dance with [Re]Generative AI Models.” Neil Turkewitz [blog], February 22, 2023. https://medium.com/@nturkewitz_56674/the-fair-use-tango-a-dangerous-dance-with-re-generative-ai-models-f045b4d4196e

Turkewitz, Neil. “Fair Use, Fairness, and the Public Interest.” Neil Turkewitz [blog], February 20, 2017. https://medium.com/@nturkewitz_56674/fair-use-fairness-and-the-public-interest-27e0745bee86

U.S. Office of Science and Technology Policy. “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People.” October 2022. https://www.whitehouse.gov/ostp/ai-bill-of-rights/ 

U.S. Office of Science and Technology Policy. “About This Document” (“Blueprint for an AI Bill of Rights”). October 2022. https://www.whitehouse.gov/ostp/ai-bill-of-rights/about-this-document/ 

Van Rooij, Iris. “Stop feeding the hype and start resisting.” Iris van Rooij [blog], January 14, 2023. https://irisvanrooijcogsci.com/2023/01/14/stop-feeding-the-hype-and-start-resisting/

Weidinger, Laura, John Mellor, et al. “Ethical and social risks of harm from Language Models.” ArXiv, December 8, 2021. https://arxiv.org/abs/2112.04359

White House. “FACT SHEET: Biden-⁠Harris Administration Announces New Actions to Promote Responsible AI Innovation that Protects Americans’ Rights and Safety.” May 4, 2023. https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/04/fact-sheet-biden-harris-administration-announces-new-actions-to-promote-responsible-ai-innovation-that-protects-americans-rights-and-safety/

White House. “FACT SHEET: Biden-⁠Harris Administration Takes New Steps to Advance Responsible Artificial Intelligence Research, Development, and Deployment.” May 23, 2023. https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/23/fact-sheet-biden-harris-administration-takes-new-steps-to-advance-responsible-artificial-intelligence-research-development-and-deployment/

Whittaker, Meredith. “The Steep Cost of Capture.” interactions 28, 6 (November – December 2021), 50–55. https://doi.org/10.1145/3488666 

Wiggers, Kyle. “Most sites claiming to catch AI-written text fail spectacularly.” TechCrunch, February 16, 2023. https://techcrunch.com/2023/02/16/most-sites-claiming-to-catch-ai-written-text-fail-spectacularly/
