Welcome to the machine

Generative AI & higher education: a selective guide to resources

Image: Jamillah Knowles & We and AI/Better Images of AI/People & Ivory Tower AI 2 / CC-BY 4.0

Alt text: A neural network comes out of the top of an ivory tower, above a crowd of people's heads. Some of them are reaching up to try and take some control and pull the net down to them. Watercolour illustration.

Dr Perry Share

Head of Student Success

Atlantic Technological University, Ireland

Updated 4 July 2024 🇺🇲

Quick link for this document: https://tinyurl.com/ATU-AI-2023

It’s well over a year since ChatGPT erupted into a largely unsuspecting and unprepared higher education world. Since Dec 2022 we have seen an explosion of discussion, peer-learning and development of tools and tactics, as well as critical debate.

This set of resources does not claim to be in any way comprehensive, such is the volume of material now available.

It tends towards commentary within public and academic culture, rather than research articles.

Feel free to share and reproduce as per the CC license (see end of document), and please debate the issues in your own context and stimulate discussion on perhaps the most substantial disruption to education we have experienced. Thanks to all those who so generously share materials online. Items are generally ordered by date of publication, earliest to latest. Use Ctrl+F or Cmd+F to find what you are looking for!



1.        Why all the fuss?

Initial responses to ChatGPT (up to Feb 2023)

Marc Watkins wrote a fairly positive evaluation of ChatGPT in Inside Higher Education [14.12.22]

ChatGPT and the Future of University Assessment by Kate Lindsay. An early intervention [16.1.23]

Academic papers written by AI get a solid B — but is it cheating? Brief early opinion piece by Rhonda McEwen, Vice-Chancellor of Victoria University, Toronto. [23.1.23]

ChatGPT is a Paradigm Shift; why Education Should Embrace It. Mark van Rijmenam, futurist. A rather breathless encomium on the potential of AI to take education out of the nineteenth century, but it does paint a vision of a much more interactive and, dare I say it, fun future. [25.1.23]

A graphics-heavy explanation of Generative AI, by Nick Routley and Mark Belan [1.2.23]

A quick overview of the history of AI from Wired, to put you in the picture. [8.2.23]

A snappy ‘investigative’ story from NBC Bay Area TV news (USA) [7 min video][17.2.23]

Ted Chiang’s New Yorker article: ‘ChatGPT is a blurry JPEG of the web’ - an instant classic in thinking about the technology and its implications for writing. [9.2.23]

The AI/HE debate develops (after Feb 2023)

Elizabeth Weil in New York Magazine on Emily Bender (of ‘stochastic parrot’ fame) and other critics/boosters of LLMs [1.3.23]

Simon Willison argues that we should think of an LLM (like ChatGPT) as a ‘calculator for words’ – a brief and useful guide [2.4.23]

Thomas Lancaster on why ChatGPT may be a ‘force for good’ in higher education [24.4.23]

It’s been claimed that ChatGPT could ‘ace’ an undergraduate curriculum at MIT. This response is highly critical of that claim, on a number of levels [Chronicle of Higher Education 7.7.23]

ChatGPT discussion with Dave Cormier (CAN) and Tim Fawns (AUS). Hour-long video of a 2-way conversation - an accessible and entertaining introduction to the many issues that AI technologies raise for educators. [no date]

The robots are coming: Artificial Intelligence and the education revolution - A nice slide deck by Brenna Clarke Gray that gets the key issues across very clearly and concisely. [no date]

The real risk of generative AI is a crisis of knowledge suggests Joshua Thorpe in WonkHE [27.6.23]

Will AI mean the end of the classroom - and those who teach in it? Quite possibly, suggests AI pioneer Stuart Russell (Guardian 7.7.23)

A pretty comprehensive comparison of ChatGPT, Bard and Bing, from WIRED [30.3.23]

An ‘explainer’ from Elliot Jones for the Ada Lovelace Institute on Generative AI, foundation models, LLMs &c - very useful for those who want/need a more in-depth understanding [17.7.23]

An even more in-depth guide to Understanding Deep Learning - free-to-download e-book from Simon Prince [MIT Press][4.9.23]

An intro to Claude2 - the relatively new ‘ethical’ AI kid on the block [Guardian 12.7.23]

The ever-excellent Ethan Mollick on the state of play with AI tools in July 2023 [15.7.23]

Brian Basgen of Educause has created a Generative AI primer - all you need to get up to speed on this technology [15.8.23]

How AI chatbots like ChatGPT or Bard work – a ‘visual explainer’ from the Guardian. A good intro and overview of Large Language Models [LLMs] [1.11.23]

Kane Murdoch at UNSW has a blog on all this - a frank look at how AI and contract cheating (amongst other things) pose big challenges for higher education. [frequently updated]

How complex is AI? Very, according to this amazing graphic from the Anatomy of AI [last checked 12.4.24]

The Stanford AI Index report for 2024 - an excellent overview at over 500 pages. Covers just about every aspect of AI [April 2024]

2.        It’s all about the money!

A brief history of ChatGPT from Technology Review – it has been decades in the making [8.1.23]

Tech Giants are in a Race to Dominate the AI Frontier. Outlines the main players (usual suspects: Apple, Meta, Alphabet, Microsoft) and how much money is being ploughed into AI [17.1.23]

Where is the money being made? Interesting but slightly techy article that outlines the components of the industry and where the value is likely to be captured (think chips and data centres). [19.1.23]

The inside story of ChatGPT by Jeremy Kahn in Fortune. A potted history of the popular AI platform, outlining the contradictory motives and implications of its development. Excellent for putting this technology into a broader context. [25.1.23]

Joma Tech’s Re-cap of ChatGPT – a nice techy video from a nice techy guy. Will give you a good overview of the evolution of ChatGPT – (comes with a little bit of swearing)[Feb 2023]

Risk of ‘industrial capture’ looms over AI revolution The small number of private tech companies dominating the AI research landscape has implications for all of our futures, according to Madhumita Murgia in the Financial Times [24.3.23]

The ‘inside story’ of how ChatGPT was developed – Will Douglas Heaven reports from the developers of the technology (in MIT Technology Review)[3.3.23]

The secret history of Elon Musk, Sam Altman, and OpenAI – Another inside story by Reed Albergotti for Semafor [29.3.23]

Promotional video from Microsoft about its Copilot service, which will put generative AI at the centre of tools such as Word, PowerPoint and Excel [March 2023]. Copilot is to launch in late September for Microsoft 365 - that will be a game changer [21.9.23]

A (long) (2hrs+) interview with Sam Altman, CEO of OpenAI, conducted by Lex Fridman for his podcast. Important insight into the thinking behind the development of ChatGPT and why it was deployed to the public so rapidly (YouTube video)[April 2023]

The 2023 Landscape Report from AI Now critically examines in detail how ‘tech power’ is shaping the AI landscape [11.4.23]

Brief update from WIRED on changes to the development process of LLMs, according to OpenAI’s CEO Sam Altman [17.4.23].

Briefing from Google on their Palm2 LLM - greater multilingual capacity, amongst other things [10.5.23]

An interview with Sam Altman, OpenAI CEO, by Raza Habib on future plans for the company and its products, including ChatGPT [29.5.23]

How Microsoft, Google and Disney are making AI the cornerstone of future products and services - Leigh Mc Gowran in SiliconRepublic [9.8.23]

The UK Competition and Markets Authority [CMA] has examined AI Foundation Models, and their impact on competition and consumer protection. It finds that FMs (such as that behind ChatGPT) raise significant competition issues and suggests five principles that should underlie their development [18.9.23]

Guardian article about the CMA report [18.9.23]

Amazon enters the fray with significant $4bn investment in Anthropic AI (firm behind Claude) [25.9.23]

More on Anthropic and how it fits into the AI picture, from Elaine Burke for Business Plus [2.10.23]

The economics of the AI revolution - podcast from popular Irish economist David McWilliams [21.11.23]

So - what did all the ructions at OpenAI mean for the future of the company and AI? Kevin Roose provides an early analysis [New York Times, 23.11.23]

Who is leading in AI research? Shows the dominant influence of the major tech companies in AI [EpochAI 27.11.23]

Other companies, such as French start-up Mistral, are starting to enter the fray to challenge the established players [Silicon Republic 11.12.23]

OpenAI has shifted its EU data processing responsibilities to its Dublin office, so will be regulated by the Irish Data Protection Commission [3.1.24]

Unsurprisingly, the government of Saudi Arabia is investing heavily in AI - according to Adam Satariano and Paul Mozur in the New York Times [25.4.24]

Anthropic’s Claude LLM is positioning itself as a serious rival to ChatGPT, according to Alex Hern in the Guardian [1.5.24]

OpenAI is adopting a time-honoured approach: release buggy tech (as with its GPT-4o ‘omni’ model) and let the public improve it! [New York Times 31.5.24]

Intellectual property

This article by Jesse Dodge and colleagues outlines the sources ‘scraped’ to create C4, one of the large datasets used to train LLMs - basis for this article in the Washington Post [paywall] [19.4.23]

A brief legal article on the implications for music performers (in the wake of the ‘Weeknd/Drake’ AI release) - in terms of intellectual property [IP] rights in the age of AI [21.4.23]

A UNESCO webinar on the topic of IP and AI [25.5.23]

US comedian Sarah Silverman and others are suing OpenAI and Meta in relation to unauthorised use of their IP to train LLMs [Guardian 10.7.23]

Music publishers are also suing AI company Anthropic (makers of Claude) in relation to intellectual property [Guardian 19.10.23]

In an IP ‘arms race’, visual artists are using technological ‘poison’ to prevent the use of their images by AI image generators, according to Melissa Heikkilä in the MIT Technology Review [23.10.23]

Authors (inc. Margaret Atwood) are fighting back too [18.7.23] - as is actor Scarlett Johansson [1.11.23]

AI is starting to impact on the work of professional illustrators, according to Erik Ofgang in the New York Times [4.11.23]

Publisher Axel Springer does a deal with OpenAI over providing material for ChatGPT - a sign of things to come? [Guardian 13.12.23]

Meanwhile, the New York Times has elected to sue OpenAI and Microsoft for copyright infringement [2.1.24]

OpenAI admits that training of LLMs ‘would be impossible’ without use of copyrighted material [Guardian 8.1.24]

Tennessee becomes the first US state to make AI voice cloning (in music) illegal [New York Times 21.3.24]

200+ artists - from Elvis Costello to Nicki Minaj - have signed an open letter expressing concern over the use of AI in popular music [2.4.24]

3.        How are people using this AI stuff? (tiny, tiny sample)

AI Tools

A list of the 25 most popular AI writing tools [updated 7.12.23]

And a list of all AI tools (11,039+ as of 3.1.24)!

Another aggregator site: There’s an AI for that - 12,400+ tools and counting [as of 7.5.24]

AI educator tools: A repository of AI tools for teachers - a resource compiled by Dan Fitzpatrick ‘The AI Educator’. Over 100 tools across all levels of education [23.1.24]

Prompting

Here is a free online course on prompt engineering that will get you from very basic to advanced. Here’s the promotional video. 

Of course AutoGPT (still in its early days) may remove the need to know how to prompt! Learn more about this potentially powerful tool in this Forbes article [25.4.23]

If you want to use AI effectively, you need to learn about prompting or more technically ‘prompt engineering’. Here is a brief overview from Ray Schroeder [Inside HE 26.4.23]

And here is some guidance from Bronwyn Eager and Ryan Brunton on using prompt engineering in the higher education context [29.5.23]

In Harvard Business Review Oguz A. Acar argues that the challenge of effectively using AI tools is not prompt engineering (which will soon become redundant) but mastering problem formulation (‘the ability to identify, analyse and delineate problems’)[6.6.23]

David Smith’s excellent Prompting 101 for higher education teachers [4.7.23]

Microsoft has compiled a great source of prompts for use in education on GitHub [last checked 2.3.24]

Other stuff

This New York Times article contains the now infamous (and early) example of the Bible, the peanut butter sandwich and the VCR. [5.12.22]

ChatGPT (and similar) may be of great benefit to people with a range of communication disabilities, according to this article by a group of Australian researchers [19.1.23]

AI use in secondary schools remains patchy according to this research by Mutlu Cukurova and colleagues [5.4.23]

ChatGPT-controlled Furbies — is this the end of humanity? Probably not, but how could you resist? [5.4.23]

[Furby image https://www.flickr.com/photos/whatleydude/8317241259 CC BY 2.0]

It’s not that long since Wikipedia was seen as a threat to the sanctity of knowledge. Now there is worry that the online crowd-sourced encyclopaedia might itself be subverted by ChatGPT! [2.5.23]

An interesting test of ChatGPT4’s visual capabilities [27.9.23]

An early (2022) music video made using AI. No particular reason to be here, it’s just nice 🙂

AI Weirdness by Janelle Shane is a lovely compendium of the (many) sorts of things that can (and do) go wrong with AI tools


4.        How is Higher Education going to cope with all this?

General resources and guides

Anna Mills (College of Marin, California) has co-compiled an ever-expanding set of resources on AI and ChatGPT in HE. You need to check it out and contribute if you can – it is a collective exercise. She has written a more formal piece in Critical AI - regularly updated.

Anna Mills hosts another collaborative list: AI text generators: Sources to stimulate discussion among teachers [last visited 17.9.23]

Mary Jacobs of Aberystwyth University (Wales) has a fantastic weekly round-up of resources and events on HE Teaching and Learning, including a special section on AI

Artificial Intelligence, Student Learning and Assessment - a useful assessment guide from Louise Drumm at Edinburgh Napier University [Jan 23]

The ultimate guide to generative AI in higher education? It’s a very comprehensive set of tactics, ideas and suggestions. From Abram Anders at Iowa State University [20.2.23]

Unlocking the power of generative AI models and systems such as GPT-4 and ChatGPT for higher education: a guide for students and lecturers – a comprehensive guide from a group based in a number of German universities, spanning diverse discipline areas [20.3.23]

AI tools for teachers - from Rachel Arthur. AI-based tools that can make the life of an educator easier and more productive [24.4.23]

Australian HE quality agency TEQSA has collated a page of all their AI resources: webinars, guidance &c - mainly from Australian universities [9.5.23]

JISC’s Generative AI primer. A great overview for all in higher education [11.5.23]

Google has made available free online courses in generative AI [19.5.23]

Terrific AI in Higher Education Resource Hub from teachonline.ca in Canada. Covers all aspects in a very accessible way [21.6.23]

CRAFT at Stanford University School of Education is a collaborative collection of resources for teaching about AI [21.6.23]

ChatGPT and Bing: A practical guide - comprehensive (and frequently updated) guide especially for business studies and social sciences - from Christian Hendriksen, Copenhagen Business School [29.6.23]

For the visually-inclined, this ChatGPT Wakelet collection from Rich Dron is great! [no date]

If techy metaphors help you to understand phenomena - this is for you! From Anuj Gupta [no date]

AI, Chatbots & ChatGPT for Teachers – a MOOC that is free to use. Developed by Nick Jackson. Starts with a very simple intro to AI, moves on to chatbots and then deals with educational aspects including assessment. A great way to get up to speed fast.

Practical Responses to ChatGPT – guide from Montclair State University

Responses from educators at the American University of Cairo – lots of great ideas! [Spring (NH) 2023]

Generative artificial intelligence technologies and teaching and learning. A guide from Monash University.

UNESCO’s useful ‘quick start’ guide [no date]

‘There’s still a way to go’ at Harvard University as they seek to come to terms with the impact of AI on their teaching, learning and assessment. That’s a relief for us lesser mortals! [THE 7.9.23]

The University of Maryland has put together a great free on-line short course to introduce students to Generative AI. Even better, it is licensed Creative Commons, so available for reuse and remixing [21.9.23]

TeachAI is a big collaborative project to develop educational policies for AI. Material set to emerge 17.10.23!

AI for Education - a great resource for primary and secondary [K-12] teachers - who are helping to shape the HE students of the future! [last visited 28.9.23]

Almost one year after the emergence of ChatGPT and time for the first meta-systematic review of the impact of generative AI on higher education: by Melissa Bond and colleagues [October 2023]

An introductory MOOC on Generative AI in HE from King’s College London (on FutureLearn)[11.10.23]

Role of AI chatbots in education: systematic literature review, by Lasha Labadze and colleagues in the Int. J. Ed. Tech in HE [31.10.23]

Best AI Tools for Students - from IU International University of Applied Sciences [Germany] - offers a Top Ten and some useful contextual information [6.11.23]

A short but useful set of key readings on AI from Bryan Alexander, aimed at graduate students [14.11.23]

Leveraging AI for authentic assessment in higher education: Embracing the paradox for online learners. A collaborative padlet of resources curated by Kate Grovergrys and colleagues at Madison College [1.12.23]

Instructors as innovators: a future-focused approach to new AI learning opportunities, with prompts - Ethan and Lilach Mollick. Great resource from two of the leading practitioners/commentators in the field of Generative AI in education [23.4.24]

More Ethan Mollick - this time talking about GenAI and education at the ASU+GSV Summit (linked to his book Co-intelligence) [video 28.4.24]

Student voices

Student guide to using AI from Monash University

Ireland’s young people had an opportunity to express their views on AI (before the emergence of GenAI) [12.10.22]

Thomas Lancaster actively explored ChatGPT (and other AI-based tools) with his students. Here is what happened. [16.2.23]

ChatGPT in higher education: Considerations for academic integrity and student learning with an emphasis on student experience and (lack of) student voice. By Miriam Sullivan, Andrew Kelly and Paul McLaughlan of Edith Cowan University, Australia. [21.3.23]

A (UK) student perspective on generative AI from a JISC-convened panel discussion [21.4.23]

A series of student-generated webinars from the University of Kent [UK] [27.4.23]

Large scale survey of Swedish students’ attitudes toward and use of ChatGPT - from Malmström, Stöhr & Ou, Chalmers University [12.5.23]

UCL ran a student-based AI Assessment Hackathon to explore how to respond to generative AI - here is what happened [27.6.23]

Students need to be co-pilots on the AI adventure argues UK student leader Seb James in WonkHE [14.7.23]

Student Inaya Compton provides her view on how AI can help to create a student-centred learning environment [27.7.23]

Report on a US student-run conference on AI in HE, from Jeffrey R. Young for EdSurge [24.8.23]

10 ways technology leaders can step up and in to the generative AI discussion in higher ed - by Lance Eaton and Stan Waddell. Useful suggestions based on student feedback [3.10.23]

Case study from UCL on students’ use of AI writing tools [19.10.23]

David Rinehart, Librarian at Dublin Business School, on his Originaite project to survey students on their engagement with Generative AI tools [16.2.24]

David Goldberg and colleagues at Colorado State University have shared their AI student survey tool and are encouraging other HEIs to use it [29.2.24]

AI on campus - student perspectives. From the University of Limerick [Spotify podcast 25.4.24]

System-level statements

Other statements + guidelines

University College London has developed a well-considered set of recommendations for use of AI in assessment, based on the work of an expert group in the university [Feb 2023]

Some very useful advice from JISC (UK) on how not to re-word academic integrity statements in response to ChatGPT/generative AI [14.2.23]

Ray Schroeder, senior fellow at UPCEA (the Association for Leaders in Online and Professional Education, USA), on How to Respond to Generative AI, in Inside Higher Education [14.2.23]

The Australian Academic Integrity Network [AAIN] has created a set of advisory guidelines for students, staff and institutions based on the work of a large representative group. CC licence makes it available for repackaging and reuse. [March 2023]

Classroom policies for AI generative tools - a really useful collaborative (Google) document, with c. 125 programme- and module (course)-level statements, started by Lance Eaton [last viewed 04.06.24]

EDUCAUSE has created a 2023 Horizon Action Plan for Generative AI [25.9.23]

University of Technology Sydney has identified 5 Principles for effective ethical use of generative AI [21.6.23]

The University of Glasgow has outlined its position on use of generative AI on this webpage [last checked 25.1.24]

Here’s what the University of Oxford has to say, in its guidelines on ethical use of AI in learning [last checked 11.3.24]

Slide-decks

ChatGPT & Education - an excellent slide-deck from Torrey Trust that addresses all levels of education, with lots of practical examples and advice. [no date]

PSRBs [Professional, Statutory and Regulatory Bodies] are beginning to ask HEIs about graduates’ use of AI tools. In this slide deck (comes with ads) Thomas Lancaster provides PSRBs with a briefing [28.6.23]

Podcasts

How Artificial Intelligence is Impacting Higher Education - podcast from US educator Cynthia Alby, whose mission is to ‘re-enchant education’.

Unwrapping the use of ChatGPT in Academic Integrity - episode of The Education Burrito with Emma Duke-Williams [28.3.23]

ChatGPT and the challenge of artificial intelligence - podcast from Australian psychology educator and academic integrity expert Guy Curtis - focuses on the connections between AI (artificial intelligence) and AI (academic integrity) [31.7.23]

Is AI for me? Perspectives from the humanities: humans and machines. Research podcast from JISC, featuring Ruth Ahnert, professor of literary history and digital humanities at Queen Mary, University of London. Includes discussion of the relationship between humans and machines, and the project Living with machines [13.11.23]

I was delighted to be asked to participate in Seán Delaney’s Inside Education podcast to talk about some of the challenges of generative AI [5.6.24]

Videos/webinars

An early webinar presentation by Anna Mills for QQI can be found here.

QQI in Ireland hosted a series of 5 webinars on AI in Higher Education [27-31.3.23]

QAA in UK also hosted a series of 3 webinars on ChatGPT [22.3.23 - 18.4.23]

N-TUTORR (Irish tech HE sector) masterclass on Academic Integrity [29.3.23]

Digitally Enhanced Teaching webinar series (of nine) from the University of Kent, focusing on aspects of AI/ChatGPT in the classroom, with lots of practical advice [Mar-Apr 2023]

The AI Education Revolution is Coming – or is it? TedX Salon discussion with Philippa Hardman, learning scientist [3.5.23]

Practical AI for Instructors and Students Part 1: Introduction to AI for Teachers and Students - the first in a series of instructional videos from Lilach Mollick and Ethan Mollick at U. Penn. [31.7.23]

Another key contributor to the debates is Sarah Elaine Eaton of the University of Calgary. In this brief video she outlines her current thinking on AI and the ‘post-plagiarism’ world. Here is Maha Bali’s considered response to Sarah’s proposals.

Mike Sharples (co-author of the excellent Story Machines) is lead speaker in this EDEN NAP Webinar: ChatGPT and the AI Essay: Who Will Write the Ending?

CRADLE at Deakin University (Australia) has produced a series of five webinars on Generative AI in higher education in partnership with TEQSA, the Australian HE quality agency: 1. What do we need to know now? | 2. How should educators respond? | 3. What have we learnt? | 4. Generative AI: what do researchers need to know? | 5. Assessment reform for the Age of Artificial Intelligence [4.10.23]

AI for Good is a UN initiative (in conjunction with ITU and the Swiss government) that explores contemporary issues and developments in AI. It has a great webinar series, often with topics relevant to HE.

The Teaching, Learning, and Educational Technology Center of the College of Lake County, Illinois held a symposium on AI in higher education - here are the recorded sessions [2.6.23]

Wayne Holmes of UCL on AI in education, a critical studies approach [9.6.23]

Sarah Eaton and Kane Murdoch present on academic integrity and the challenges of GenAI (amongst other things) in this webinar from Munster Technological University [9.10.23]

AI for Education has a great webinar series AI Launchpad. Though aimed at K-12 educators, lots in here that would be of interest to those in HE [last checked 14.11.23]

Danny Liu via TEQSA has created a great 6 min demo video of just some of the things educators and students can do with generative AI. Very effective! [22.22.23]

The first episode of Advance HE’s DVC Dialogues - featuring HE senior leaders - addresses the challenges of AI from a management perspective. Further episodes will look at ‘what next’! [30.11.23]

Way(s) forward for (higher) education

Ethan Mollick’s One Useful Thing blog is full of great insights and ideas about AI and tools like ChatGPT in education and more broadly

University of Calgary collaborative project (led by Sarah Eaton) on Artificial Intelligence and Academic Integrity: The ethics of teaching and learning with algorithmic writing technologies

A set of recorded presentations from University of Portsmouth [UK] on the ways forward for assessment, including guidelines to students and staff

Some thoughts from Vassilis Galanos on teaching with ChatGPT - part of the University of Edinburgh’s Teaching Matters blog ChatGPT 23 series

Open Educational Resources [OER] are publicly available, shared teaching resources. Anna Mills and Elle Dimopoulos have called for widespread generation and sharing of OERs in this comprehensive slide deck that has numerous links to valuable resources. [24.2.23]

How can we teach and assess with ChatGPT? A brief set of suggestions from Soumyadeb Chowdhury and Samuel Fosso Wamba in THE Campus [may require registration – free]. Better on suggestions for tasks than on how to assess them! [22.3.23]

Here is a 40+ min. presentation from Thomas Lancaster on AI in the classroom: friend or foe? Highly entertaining and accessible. Great intro to the field and the issues with lots of examples of AI tools in action [YouTube video][March 2023]

A thoughtful blog post from Greg O’Brien at Griffith College, Dublin, addresses the need to reinvigorate our approaches to learning and teaching [16.3.23]

Artificial intelligence, creativity, and education: Critical questions for researchers and educators, by Edwin Creely and colleagues ‘examines the intersections and tensions between AI, education and creativity using sociomaterial theory’ [SITES on Researchgate 17.3.23]

Marc Watkins is similarly putting out great stuff. Here are some of his ideas for using generative AI in learning, as well as links to/info on some of the lesser-known platforms [20.3.23]. He also has a [paid - US$185] course on AI for higher education educators.

ChatGPT in higher education: Artificial intelligence and its pedagogical value is an open access ebook from Rob Rose, published by the University of North Florida, that stresses and outlines some positive and productive applications of ChatGPT in higher education [2.5.23]

Here is a paper from Anna Mills, Maha Bali and Lance Eaton on open educational practices as a framework to respond to generative AI [11.6.23]

 

Assigning AI: Seven approaches for students, with prompts. By Ethan Mollick & Lilach Mollick. Proposes a framework with seven approaches for using AI in classrooms: AI-tutor, AI-coach, AI-mentor, AI-teammate, AI-tool, AI-simulator, and AI-student [11.6.23]

AI × Education: Some thoughts on how I'm approaching my courses this fall - newsletter entry from Josh Brake, The absent-minded professor [14.6.23]

CRADLE is a research centre on higher education at Deakin University, Australia. Here is its 3-page ‘CRADLE suggests’ on AI and assessment, based on their extensive research into higher education assessment policy and practice [21.6.23]

101 creative ideas to use AI in education: A crowd-sourced collection edited by Chrissi Nerantzi, Antonio M. Arboleda, Marianna Karatsiori and Sandra Abegglen [23.6.23]

AI in my life is an AI teaching resource for secondary school students developed by Ireland’s ADAPT research centre - some outline information here [28.6.23]

The European Digital Education Hub is a key source for educational initiatives and information in relation to AI - open to those inside and outside the EU and operational across all levels of education [29.6.23]

Warwick International Higher Education Academy has created a comprehensive set of resources that includes materials on AI in education: teaching, learning and assessment; academic integrity issues; and AI ethics [July 2023]

How are university design courses adapting to incorporate AI? Abbey Bamford interviews some leading British design educators for Design Week [7.7.23]

Panel Discussion: How Can Educators Maintain Academic Integrity in the Age of GenAI? - (yet) another discussion about AI. This one is unusual as the ‘panel’ comprises Tricia Bertram Gallant, along with ChatGPT, Bard and Bing. [30.7.23]

Using ChatGPT to create courses about ChatGPT - how educators in higher education are beginning to harness the technology - from Lauren Coffey in Inside Higher Education [31.7.23]

How one HEI (UCL in London) is changing its approach to learning, teaching and assessment [from JISC] [31.7.23]

Rob Gibson for Educause Review on how AI is reshaping instructional design [14.8.23]

UNESCO has developed AI Competency frameworks for students and teachers - aimed at second level, but many aspects transferable to HE [14.9.23]

UCL’s Generative AI Hub aims to ‘bring together all the latest information, resources and guidance on using Artificial Intelligence in education’ - a great resource! [21.9.23]

It’s not like a calculator, so what is the relationship between learners and generative artificial intelligence? So ask Jason Lodge and colleagues, exploring the metaphors in the field [25.9.23]

Advance HE has developed a new collaborative project on generative AI in higher education - one that goes beyond assessment and academic integrity to address issues such as employability - the AI Garage (may require AHE membership to fully access) [29.9.23]

Bryan Alexander on Teaching with generative AI, September 2023 - a lively account of using some generative AI tools in a graduate level class, with lots of examples [6.10.23]

The unintended consequences of Artificial Intelligence and education - report by Wayne Holmes for Education International (global teacher union organisation). Full research report and Executive summary available. Amongst other things, points to the issues of equity and power in relation to development and use of AI in education and is deeply sceptical about the potential for ‘personalised learning’. Is a useful counterpoint to many other enthusiastic accounts of AI in education [18.10.23]

metaLAB@Harvard has launched the AI Pedagogy Project - aimed at educators, this ‘interactive guide will give you the background you need to feel more confident with engaging conversations about AI in your classroom’. [13.11.23]

After a year of ChatGPT, is academia getting to grips with generative AI? Asks Mariët Westermann in the THE (may require registration to access). Not really, but it will have to, is her conclusion [23.11.23]

Adrian Kirwan from Maynooth University, Ireland, on ChatGPT and university teaching, learning and assessment: some initial reflections on teaching academic integrity in the age of Large Language Models. [May be paywalled] Outlines his experience of teaching with LLMs. Things have moved on, but a snapshot of where we were in early 2023 [24.11.23]

Artificial intelligence and data literacy for primary school teachers and children - a great handbook for those aiming to become, or who already are, teachers at this level. From Dublin City University/Insight Centre for Data Analytics [30.11.23]

A Catholic Ethics viewpoint on use of GenAI in education - from ethicist Irina Raicu of Santa Clara University [30.11.23]

Used as a learning co-pilot, ChatGPT puts students on the road to success - according to Dan Sarofian-Butin in the THE (may require registration to access) [31.1.24]

AI for teachers - an online textbook. Output of the AI4T Erasmus project. While aimed at secondary level teachers, a very useful open resource. Available in English, Italian, German, Slovenian or French [2.2.24]

UNESCO Guidance for Gen AI in Education - online open access interactive lecture [March 2024]

A map of Generative AI for education - by Laurence Holt and Jacob Klein. An updated visual representation of tools and challenges. Focus is K-12 but lots of interest to HE [Medium 7.3.24]

A discussion with Lance Eaton, Anna Mills and Amanda Ellis on AI, assessment, skills and competencies, hosted by the Academic Data Science Alliance. Key takeaways here [March 2024]

Student writing

Digital writing technologies in higher education - an online open-access e-book from Springer that ‘covers the advancements of 40 years of digital writing with precise descriptions of more than 20 key technologies’. From Otto Kruse and colleagues. Puts LLMs into historical and discursive perspective [14.9.23]

Leon Furze is an Australian teacher who blogs extensively and intelligently about the use of ChatGPT in the everyday work of teaching writing

How ChatGPT transforms a classroom – podcast with US high school writing teacher Cherie Shields - discusses very practical ways that she uses the platform to support student writing. Lots that could be transferred across to a HE setting. [podcast on Spotify 13.1.23]

There is a lot of agonising about academic writing in the age of generative AI. Here Thomas Basbøll argues Why you can’t cite ChatGPT on his Inframethodology blog [8.3.23]

Technology-supported writing did not of course start with AI! Digital Writing Technologies in Higher Education is an open-access book edited by Otto Kruse and colleagues that traces 40 years of digitisation in academic writing - from word processing to AI [March 2023]

From Anna Mills: Rethinking writing for assessment in the era of Artificial Intelligence. Great overview of the challenges and - of most importance - how to move forward. [18.4.23]

Writing to and from, for and against, with and without language models - by Jeremy Douglass from the USC Future of Writing Symposium [1.5.23]

Jennie Young on ten ways that ChatGPT may be a positive aid to supporting student writing. Food for thought! In Inside Higher Education [11.5.23]

ChatGPT may be capable of reflective writing [23.5.23]

How to talk to your students about AI by writing instructors Tim Laquintano (Lafayette College) and Annette Vee (University of Pittsburgh) - brief reusable guide [1.6.23]

We all agree writing is important for learning, don’t we? But what if it is a transitional technology? Short piece by Matt Reed in Inside Higher Education [23.6.23]

From Tim Laquintano, Annette Vee & Carly Schnitzler, a whole collection of essays on the challenge of writing in the age of AI: Teaching with Text Generation Technologies. An excellent background intro and then extensive exploration, ideas and suggestions. A fantastic resource! [2023]

Template for critiquing AI text - from LibreTexts [4.10.23]

Thomas Basbøll argues that essays are still an important assessment tool [9.10.23]

Generative AI activities for the writing & language classroom, from Anna Mills - starting to pull together a lot of wisdom and experience in this field [17.10.23]

Case study from UCL on students’ use of AI writing tools [19.10.23] (also in Student voices)

Write what matters is an open educational resource [OER] on academic writing created by Amy Minervini, Liza Long and Joel Gladd from Idaho state colleges. It has a whole section on how best to use AI in the writing process [last checked 17.11.23]

Exploring AI pedagogy - a community resource from the MLA-CCCC Joint Task Force on Writing and AI [last checked 15.12.23]

Human-AI collaboration patterns in AI-assisted academic writing - research report (on doctoral-level academic writing using GenAI tools) by Andy Nguyen et al. Concludes that an iterative approach to AI tools can improve academic work; more linear approaches result in poorer quality output. More research needed! [Studies in Higher Education, open access, 28.2.24]

‘Writing as passing’ - recorded webinar from Helen Beetham (UCL) on how to reimagine student writing in a positive way in the age of Generative AI [21.3.24]

‘AI Literacy’

Macquarie University has proposed an AI literacy framework [30.3.23]

Everyone agrees that we all need to develop ‘critical AI literacy’, but what might this mean? This thoughtful post from Maha Bali’s blog Reflecting Allowed provides a lot of food for thought (and links to resources) [1.4.23]

Research article in the Journal of Academic Language and Learning by Lynette Pretorius of Monash University on how to foster AI literacy through teaching [19.4.23]

A Tech Librarian Explains How to Build AI Literacy [Choice 26.4.23]

US-based AI Education Project seeks to develop AI literacy amongst school children: but the activities can be adapted to higher education learners [6.6.23]

Learn with AI from the University of Maine ‘offers an opportunity to introduce students to the ethical and economic questions wreaked by these new tools, as well as to experiment with progressive forms of pedagogy that can exploit them’. A great initiative! [18.7.23]

Tom Farrelly and Nick Baker explore the necessity for AI literacy, in particular in relation to international student experience and in terms of linking into current ways of thinking about higher education. The reference list in this review is particularly useful [Education Sciences 4.11.23]

Keep up to date with all things AI with the AI Exchange newsletter from Rachel Woods [last checked 10.11.23]

Does an algorithmically-determined culture lead to ‘intellectual passivity’ and if so, what can educators do about it? - asks Eileen G’Sell in the Chronicle of Higher Education (subscription may be required) [30.5.24]

Syllabi

Boris Steipe established the Sentient Syllabus collaborative project to foster learning in the age of AI. Its three principles are: i) An AI cannot pass a course; ii) AI contributions must be attributed and true; iii) AI use should be open and documented. May help you to design your own AI-conscious syllabus. [23.2.23]

Design of learning environments: AI, equity, and public education. Syllabus document from Sepehr Vakil and Charles Logan, Northwestern University. [25.9.23]

Teaching CS50 with AI: Leveraging Generative Artificial Intelligence in Computer Science Education by Rongxin Liu and colleagues at Harvard. This research paper details how AI tools have been used by the authors to augment teaching and learning on a CS course [7.3.24]

Generative AI and faculty writing/research

Nature addressed the issues around the use of ChatGPT in scientific research [3.2.23]

What about academic writing – by academics? It will be affected by AI tools as well, as this article by Ben Chrisinger in the Chronicle of Higher Education points out. [22.2.23]

Academic research is already being affected, in both positive and negative ways, according to Jack Grove, in a comprehensive article in Times Higher Education [16.3.23]

Also in Times Higher Education, Three ways to leverage ChatGPT and other generative AI in research, by Daswin De Silva and Mona El-Ayoubi [20.6.23]

Reviewers of research applications in Australia have been found to be using ChatGPT to help assess funding applications - the Australian Research Council is not happy about this, citing confidentiality concerns [30.6.23]

Scientists Milton Pividori and Casey Green outline in this article how they use AI tools ‘to reduce the time-consuming process of writing and revising scholarly manuscripts’. If it’s good enough for academics … [17.7.23]

Use of AI is seeping into academic journals—and it’s proving difficult to detect - according to Amanda Hoover in Wired [17.8.23]

Why not double your research productivity by creating a digital academic twin through training your own LLM? Then again, why stop at twins? Set up a whole research team! Debate with yourselves. Argue over authorship. Fascinating speculations by Sven Nyholm. [9.10.23]

How embarrassment! Australian academics caught out by Bard in submission to parliamentary enquiry [Guardian 2.11.23]

Oxford ‘AI experts’ (Brent Mittelstadt, Sandra Wachter and Chris Russell) on why LLMs pose a threat to scientific research [20.11.23]

The Hardiman Library at the University of Galway delivered a workshop to address the AI needs and opportunities of doctoral and postdoctoral researchers - it identified some key tools and also student usage and concerns [4.12.23]

The European Research Council has warned about the use of generative AI in research grant proposals [19.12.23]

STM, the association of academic publishers, has published a useful set of Guidelines for the ethical and practical use of generative AI in scholarly publishing [Dec 2023]

In late 2023 the Irish Learning Technology Association [ILTA] published a special issue of their journal that featured only articles co-created with AI. They also hosted a discussion (YouTube video) with editors and contributors to examine the process and the broader implications for higher education [20.1.24]

Why scientists trust AI too much — and what to do about it - editorial in Nature based on recent social scientific research [6.3.24]

Responsible use of generative AI in research - guidelines published by the European Commission [22.3.24]

A lot of academics already using GenAI to help write papers: A rapid investigation of artificial intelligence generated content footprints in scholarly publications - research by Gengyan Tang and Sarah Eaton [20.5.24]

Assessment practices

Ed Pitt webinar on the potential for AI Enhanced Assessment and Feedback [27.7.23]

AI could automate or support marking of student work, according to Atsushi Mizumoto and Masaki Eguchi [28.7.23]

Martin Compton on how students may use ChatGPT to interpret lecturers’ feedback [28.7.23]

We can save what matters about writing - at a price, says Ted Underwood. It is about freeing ourselves from the recycling of old ideas [31.7.23]

Hello GPT! Goodbye home examination? An exploratory study of AI chatbots impact on university teachers’ assessment practices by Alexandra Farazouli and colleagues. Finds that educators will give passing grades to work generated by ChatGPT and that the ‘most manipulated versions of the chatbot’s outputs (ChatGPT1-2) achieved the highest grades’. [1.8.23]

The university essay will die out - according to Rahul Kumar of Brock University writing in Macleans [13.10.23]

The ‘age of AI’ will inevitably lead to a focus on more ‘authentic assessment’. Some key approaches are discussed by Siham Al Amoush and Amal Farhat in Faculty Focus [13.12.23]

University College Cork has created a Toolkit for ethical use of generative AI with a strong focus on assessment [last checked 18.4.24]

Useful advice from Kate Crane at Dalhousie University for designing assessment with GenAI in mind [THE Campus 1.5.24]

The AI in Education learning circle has designed a step-by-step tool for the design of assessment in the context of AI [last checked 17.6.24]

Researchers fool university markers with AI-generated exam papers - report of an experiment at the University of Reading (UK) [Guardian 27.6.24]

Enhancing access

Using AI to support UDL [Universal Design for Learning] - Beth Stark and Jérémie Rostan introduce the free Ludia platform (the platform itself is here) [19.8.23]

The Future of Artificial Intelligence in Special Education Technology - Marino et al assess the potential and the issues - from K-12 to higher education in this research article [2023]

Accuracy

How accurate are generative AI search engines in terms of citation and reflecting cited sources? Here is one of the first studies to test this. Answer: not brilliantly. By Nelson Liu and colleagues from Stanford [19.4.23]

Performance of generative AI tools can deteriorate as well as improve, as indicated in a study by Lingjiao Chen and colleagues (this gels with my own experience). It does not make clear why this may be happening, but food for thought [18.7.23]

Trusted Source Alignment in Large Language Models - TSA is a potential way to increase the factual accuracy of LLMs - article by Bashlovkina and colleagues [Arxiv 12.11.23]


‘Detection’ - the evolving debate

Turnitin claims to have developed an almost foolproof detection device [13.1.23]

Initial research that indicates that LLM detection tools are not reliable: ‘a light paraphraser … applied on top of the generative text model, can break a whole range of detectors’ – by Sadasivan et al [17.3.23] (NB need a fair bit of expertise to understand ☺)

‘It looks like you are trying to assess a student’ Jim Dickinson on WonkHE on the limitations of detection software. [29.3.23]

Washington Post article on a test of the Turnitin AI ‘detector’. Spoiler alert: liable to generate false positives (identifying human-written work as AI-generated) [3.4.23]

Sydney Morning Herald article on why many Australian universities are NOT using Turnitin to ‘detect’ use of AI in written work [5.4.23]

Brandi Lawless on an AI-detection strategy that went horribly wrong [Inside Higher Education 20.4.23].

A thoughtful discussion from Anna Mills of why AI detection may have a place in higher education and in society more broadly [slide deck] [1.5.23]

Heather Desaire and colleagues at the University of Kansas say they have developed a way to detect AI generated text (with ‘99% accuracy’), specific to scientific writing [8.6.23]

Comprehensive study of the main AI ‘detection’ tools by Debora Weber-Wulff and colleagues finds that they are ‘neither accurate nor reliable and have a main bias towards classifying the output as human-written rather than detecting AI generated text’. The study identifies the ‘serious limitations of the state-of-the-art AI-generated text detection tools and their unsuitability for use as evidence of academic misconduct’ [28.6.23]

AI-text detection tools are really easy to fool according to Rhiannon Williams in MIT Technology Review [7.7.23]

‘GPT detectors frequently misclassify non-native English writing as AI generated, raising concerns about fairness and robustness’ according to research by Weixin Liang and colleagues [10.7.23]

OpenAI (creators of ChatGPT) developed their own detector, but quickly admitted it was ‘impossible to reliably detect all AI-written text’. The tool has subsequently been discontinued [25.7.23]

Evaluating the authenticity of ChatGPT responses: a study on text-matching capabilities - research article by Ahmed M. Elkhatat (stats heavy). Concludes that ‘ChatGPT … can generate unique, coherent, and accurate responses that can evade text-matching software, presenting a potential risk for academic misconduct’. Suggests alternative approaches to assessment in higher education [1.8.23]

A further contribution from Jim Dickinson: Of course you can’t detect students’ use of AI. So what next? - discusses some of the methods that AI ‘detectors’ use but quickly gets into a discussion of the purpose (and future) of academic writing per se [3.8.23]

AI text detectors aren’t working, says Tom Williams in THE. What are the alternatives? [9.8.23]

Beyond ineffective: How unreliable AI detection actively harms students, by Marc Watkins - he wants to see the end of such attempts, which he argues are anti-student [3.9.23]

AI detectors - do they really work? asks Amanda Bickerstaff of AI for Education in this 5m YouTube video (the answer is NO!) [10.9.23]

What’s it like to be accused of ‘cheating’ with generative AI? Here is a research article from Tim Gorichanaz of Drexel University. It suggests that clumsy attempts to ‘detect’ use of generative AI are eroding trust within higher education [11.9.23]

And here is a more discursive article based on the same research: In the age of ChatGPT, what’s it like to be accused of cheating? [12.9.23]

The AI detection arms race is on. The background to those seeking to develop detection tools, and those plotting ways to evade them. Christopher Beam in WIRED. [14.9.23]

JISC in the UK has provided useful advice in relation to detection of AI text. Its view: ‘relying on AI writing detection is going to be futile as a primary mechanism of maintaining academic integrity’. [18.9.23]

Seán O’Sullivan’s AI detection project wins Ireland’s Young Scientist competition in 2024 [12.1.24]

An interesting Reddit thread on home-grown detection methods created largely by teachers [last post Feb 24]

Students Are Likely Writing Millions of Papers With AI - according to Turnitin. Disinterested player? [WIRED 9.4.24]

Is it time to turn off Turnitin? Ask José Antonio Bowen and C. Edward Watson in THE Campus. The implication in this short extract from their book appears to be ‘no’ as they report it is quite accurate at detection of AI. The problem of false positives remains [29.4.24]

A useful set of articles from WIRED traces aspects of the ‘detection’ story [30.4.24]

Real or fake text? allows you to practise your own ‘skills’ as an AI-detector. Have fun - it’s a lot harder than you might think! [13.5.24]

Higher education processes

A report from the Office of the Independent Adjudicator for HE in England and Wales pointed to important issues in the application of academic integrity as a consequence of COVID19 – similar mistakes should be avoided in relation to use of generative AI [no date]

Should HEIs use AI in their admissions processes? Lilah Burke in Higher Ed Dive [18.7.23]

It’s already happening (in the US) according to this New York Times investigation by Natasha Singer [1.9.23]

There is potential for the application of AI across all processes in HE, from leadership to finance to student support - as shown in this AI Playbook from Complete College America - the potential is quite mind-blowing [22.11.23]

5.        Broader implications and critical perspectives: Higher education

Times Higher Education Campus Spotlight on AI in HE – diverse contributors explore some of the broader questions. Lots of good ideas. [no date]

The Post-Learning Era in Higher Education: Human + Machine A brief and accessible article by George Siemens from Educause on the impact that AI might have in education, written before ChatGPT was a thing (2020)

AI in education: ChatGPT is just the beginning A view from Germany (in English). A great preview of the other platforms and technologies coming down the track [7.1.23]

Prior to (or instead of) using ChatGPT with your students - Autumm Caines on some of the reasons NOT to ask your students to experiment with ChatGPT [18.1.23]

ChatGPT and the educational AI chatter: Full of bullshit or trying to tell us something? asks DCU’s Eamon Costello. His aim is to ‘say as little as possible’ and maybe to calm the waters (only Buddhist response to ChatGPT I have yet encountered 🙂)[17.3.23 ☘️]

 

How AI is shaping the future of higher ed by M’hammed Abdous in Inside Higher Ed - addresses admin, teaching, learning and research. [21.3.23]

Laurie Phipps of JISC uses composite narratives to explore the complex issues of AI and ‘academic misconduct’ across the higher education community [27.3.23]

Tim Fawns (Monash U.) has been a perceptive observer of the emergence of #ChatGPT. His 2022 article An Entangled Pedagogy: Looking Beyond the Pedagogy-Technology Dichotomy is a good starting point for thinking about how to shape a response to the AI challenge [2.4.23]

Is AI degenerative for education? asks Ben Williamson in this post from his Code acts in education blog [30.6.23]

Donald Clark discusses the work of polymath Gordon Pask, his Conversation Theory and how it relates to Generative AI in education. Suggests that conversation (broadly applied) is the basis of all learning. Very interesting ideas [22.6.23]

An interesting discussion by Neil Selwyn of the broader development of AI in education: from within the AIED community [5.7.23]

Integrating Generative AI into Higher Education: Considerations, by Charles Hodges and Ceren Ocak for Educause. AI will soon be integrated into the tools (like the Microsoft suite) used by students. What then? [30.8.23]

Margaret Bearman seeks to shift our thinking through this article on the relational use of AI - moving away from a binary us/AI mindset [full article] [27.9.23]

Character.ai is a popular chatbot that allows you to ‘interview’ historical figures (set to receive a multi-million $ investment from Google). Nathan Rennolds and Lakshmi Varanasi identify some of the concerns with such tools [Business Insider 1.10.23]

ChatGPT can’t write - an interesting debate on what it means to ‘write’, drawing on the work of deconstructionists such as Barthes, Foucault and Derrida. Features Thomas Basbøll, David Gunkel and other contributors [7.10.23]

Often discussions about AI and education come down to the question: what is the role of the teacher in this new environment? This article by Ariana Garcia [from Chron] is about second-level education, but perhaps provides one potential scenario: A Texas private school is using AI technology to teach core subjects [25.10.23]

Artificial intelligence for good? Challenges and possibilities of AI in higher education from a data justice perspective. Chapter by Ekaterina Pechenkina in the open-access book, Higher education for good. Examines the use of AI in HE from a social justice and ethics of care perspective [25.10.23]

Learning with Generative Artificial Intelligence within a network of co-regulation - Jason Lodge (University of Queensland) and colleagues place AI within the broader context of ‘self-regulated learning’, which requires us to rethink some approaches to higher education [J. Univ. Teaching and Learning Practice 06.11.23]

Sonya McChristie of the University of Sunderland casts a sceptical view in Trying to predict the future of AI - not convinced that ‘AI’ is much more than hype [7.11.23]

What does AI mean for the employability of graduates? This report from the Demos thinktank and the University of London looks into this important question (UK context but relevant to all knowledge-based economies) [8.11.23]

Neil Selwyn of Monash University is critical of the broad impact of AI in higher education, as he explains in this article in the Nordisk tidsskrift for pedagogikk og kritikk [24.1.24]

Is it acceptable or ethical to use AI-based facial recognition tools to monitor student engagement in the classroom? Is this a step too far in surveillance of students? Interesting questions asked by Susan d’Agostino in Inside HE [27.2.24 - may require registration]

It’s time for academic programmes to look seriously at AI, according to Kathleen Landy in Inside HE. Way beyond time, in my opinion! She provides some useful guidelines as to how to do it [28.2.24 - may require registration]

Artificial intelligence: From experimentation to institutional strategies - European University Association (EUA) webinar that features inputs from JISC (UK) and University of Murcia (Spain) [21.5.24]

6.        Broader implications and critical perspectives: general

The work of creating and maintaining AI

The hidden work of AI - those who train the LLMs at US$15/hr - from NBC News [6.5.23]

A podcast from the WSJ on Kenyan workers involved in training ChatGPT [11.7.23]

Former ChatGPT moderators in Kenya are suing OpenAI. The article by Niamh Rowe also looks at the broader issues around content moderation and the companies that engage in it [Guardian 2.8.23]

Paris Marx on how AI (including ChatGPT) is dependent on huge amounts of low-wage human labour, often based in countries such as Venezuela, the Philippines and Kenya - from Business Insider. [12.2.23]

AI is a lot of work - for poorly paid humans - as Josh Dzieza reveals in Verge. Great visuals too! [20.6.23]

Turns out that Amazon’s gimmicky US AI-based supermarkets were actually operated by humans - in India [Hindustan Times 3.4.24]

Understanding the human cost of AI - Adio Dinika of the community-based Distributed AI Research (DAIR) Institute (SOUR podcast) [11.6.24]

Environmental impacts of AI

The Generative AI Race Has a Dirty Secret - Chris Stokel-Walker in Wired on the (significantly negative) environmental impact of LLMs [10.2.23]

Sustainable AI? Mark van Rijmenam examines negative and positive impacts [23.2.23]

Bill Tomlinson and colleagues claim to demonstrate that AI tools use far less energy than humans when completing tasks of writing or illustration [8.3.23]

The environmental impact of AI is not equally distributed, argues Nabiha Syed in The Markup [8.7.23]

Every response on ChatGPT requires about 25ml of water. Read about the coolant demands of LLM servers in this piece from Clive Thompson on Medium (may require sign-up). That said, Ireland’s Poulaphouca Reservoir could keep ChatGPT going for 108 years (presuming no additional rain 🙂) [30.7.23]
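
To see how a headline figure like ‘108 years’ can be reached, here is a minimal back-of-envelope sketch in Python. The reservoir volume and daily query count below are illustrative assumptions made for the sake of the arithmetic, not figures taken from Thompson’s article.

```python
# Back-of-envelope: how long could a reservoir 'supply' ChatGPT's cooling water?
# All inputs are illustrative assumptions, not measured figures.

ML_PER_RESPONSE = 25          # ~25 ml of water per response, as cited above
RESPONSES_PER_DAY = 170e6     # assumed daily query volume (illustrative)
RESERVOIR_M3 = 168e6          # assumed reservoir volume in cubic metres (illustrative)

reservoir_ml = RESERVOIR_M3 * 1_000_000               # 1 m^3 = 1,000,000 ml
water_per_day_ml = ML_PER_RESPONSE * RESPONSES_PER_DAY
years = reservoir_ml / water_per_day_ml / 365

print(f"With these inputs the reservoir would last roughly {years:.0f} years")
```

Change any of the assumed inputs and the headline number shifts proportionally, which is one reason such estimates vary so widely.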

The growing energy footprint of AI - Alex de Vries examines the latest estimates on the electricity demands of AI [10.10.23]

Mariana Mazzucato reveals The ugly truth behind ChatGPT - its substantial use of planetary resources [Guardian 30.5.24]

AI is placing heavy burdens on datacentres and threatening the climate aspirations of major corporations such as Google [Guardian 4.7.24]

AI in the workplace

General

CEO of IBM Arvind Krishna predicts that AI will replace up to 30% of back-office roles at the corporation over the next five years [1.5.23]

Companies are increasingly integrating proprietary AI systems into the workplace, according to Yuwen Lu in the New York Times. With customisation and inbuilt security features, is this the future of LLMs? [5.7.23]

Robots didn’t take our jobs - will ChatGPT? This article by Aaron Benanav argues that the impact of AI on employment may be overstated [New Statesman 11.4.23 - may require registration to access]

Britain’s trade union movement is beginning to explore responses to use of AI in the workplace [Guardian 4.9.23]

Trade unionist and digital rights advocate Christina Colclough on how digital technologies, including AI, impact on the work of educators [podcast] [7.6.23]

Recent graduates are concerned about their employability [Inside Higher Education 26.7.23]

How will it happen? What it looks like when jobs disappear in the shadow of AI - Brian Merchant in the LA Times [24.5.23]

Another take on the jobs issue: When AI comes to work: How to evolve, thrive and keep your job - podcast from the Wall Street Journal [4.5.23]

A perspective from Aida Ponce Del Castillo of the European trade union movement [12.7.23]

There is evidence in this paper from Xiang Hui and colleagues that AI is already impacting on the work of skilled freelance workers: those ‘in highly affected occupations suffer from the introduction of generative AI, experiencing reductions in both employment and earnings’ [1.8.23]

Why AI is an opportunity, not a threat, for the future of work - according to Róisín O’Coineen of BearingPoint in Silicon Republic [26.9.23]

International recruitment firm Greenhouse reports on use of AI tools in recruitment (widespread) but also on concerns about bias and trust (significant) [8.11.23]

Real impacts of generative AI on jobs: language-learning app Duolingo cuts 10% of contractors as it uses more AI to create app content [Bloomberg 8.1.24]

Data & Society has released an excellent series of podcasts on the impact of GenAI on work: 1. Hierarchy | 2. Recognition | 3. Adaptation [24.4.24]

Here’s the view from global consulting firm McKinsey - it sees a considerable shift in employment patterns/jobs, with the greatest AI impact on ‘office support’ (negative) and healthcare (positive) [21.5.24]

AI harms in the workplace reflect existing patterns of inequality, argue Nataliya Nedzhvetskaya (Berkeley) and JS Tan (MIT) in this FAccT’24 conference paper [5.6.24]

Media/entertainment/communications/arts (see also Intellectual Property)

How generative AI may impact one area of work: local media journalism. There are positives and negatives. From Partnership on AI [23.12.22]

Does ChatGPT mean the end of jobs in editing and proofreading? - not yet according to Adrienne Montgomerie at Right angels and polar bears [Jan 2023]

A brief piece on how Spotify is using AI-generated music to boost its revenues - with implications for human musicians [21.4.23]

Oliver Whang in the New York Times on AI-generated ‘art’ and what this might mean for visual artists [2.5.23]

Early indications of potential impacts in libraries and for librarians [13.5.23]

As Hollywood creatives strike, Sharon Goldman of Venturebeat explores the burgeoning potential for AI in the entertainment industry, including K-Pop [18.7.23]

News Corp is already using AI to generate 3000 news items per week (using just four human staff) in its Australian media outlets [Guardian 31.7.23]

AI’s potential impact on book publishing [2.8.23]

How some artists are exploring the potential of AI - Gabrielle Schwarz in the Guardian [10.8.23]

Critical topics: AI images. Eryk Salvaggio of Bradley University has made publicly available his undergraduate design course that provides an ‘overview of the emerging contexts of AI art making tools that connected media studies and histories of new media art, with data ethics and critical data studies’. Lectures, interviews with artists, students’ work - it’s all there. Particularly good on the origins of AI and how this related to ‘art’. Will take a significant investment of your time, but what a resource! [20.9.23]

How AI may stretch artistic boundaries: The AI opera combining Barbie assault rifles and Greek mythology [Dazed 21.9.23]

A bunch of well-known novelists (eg Jeanette Winterson, Bernardine Evaristo &c) contemplate the future of writing in the world of AI [Guardian 11.11.23]

Some of the fascinating (and also weird and worrying) ways that AI is being used to create child-focused content for YouTube and elsewhere [Wired 12.3.24]

IT

How ChatGPT and Natural Language Technology Might Affect Your Job If You Are a Computer Programmer. In Forbes. AI might make coding available to all of us but might reduce people’s desire to learn to code in the first place. [23.1.23]

Radu Gitea on the potential for use of generative AI in UX (user experience) research [2.8.23]

AI may be starting to impact on employment levels in the giant tech corporations themselves, according to Jason Del Rey in YahooFinance [31.1.24]

Medicine

Generative AI shown to increase productivity in the field of plastic surgery [4.9.23]

In the US, nurses’ unions criticise use of AI in healthcare and say they ‘will not accept algorithms replacing the expertise, experience, holistic, and hands-on approach we bring to patient care’ Jules Roscoe in 404 Media [24.4.24]

Care work

AI and Social Work - from the Social Work Graduate [14.9.23]

Oxford University researchers warn over the use of AI to create social care plans. A ‘good practice’ guide is planned by relevant care organisations [Guardian 10.3.24]

When AI is combined with socially assistive robots, there may be an opportunity to address the global ‘care gap’ according to Maja Matarić in this TED Talk [15.3.24]

Law

GenAI likely to transform aspects of the law profession - examples from the Irish context [12.3.24] 

Psychology

Using large language models in psychology. Dorottya Demszky and colleagues explore potential applications [13.10.23]

Science

Science and the new age of AI - Nature special issue with an excellent range of articles [10.10.23]

Business & finance

A critical report from the US Consumer Financial Protection Bureau on the impact of AI-based chatbots in consumer-facing banking [6.6.23]

AI used to monitor workers in the fast-food industry, by Shanique Yates [Yahoo! Finance 27.2.24]

Will careers in tax face obsolescence or evolution because of AI? Discussed by RTE Brainstorm [10.4.24]

GenAI has the potential to radically change work in finance - with what implications? Potentially a cut of over 60% in entry-level hires. Rob Copeland in the New York Times [10.4.24]

Gender, ethnicity and AI

The development of AI has been very gendered. Catherine D'Ignazio and Lauren F. Klein’s Data feminism (MIT Press, 2020) provides an alternative perspective and is available as a free e-book online.

Will AI image generation make us all look alike? jenka on smiling and the ‘visual monoculture of American expressions’ in AI and the American Smile [Medium 27.3.23]

Companies including Levi’s are using images of AI-generated models in an attempt to reflect ‘diversity’ - raises many issues, as pointed out by Alaina Demopoulos, not least for ‘real’ models [Guardian 3.4.23]

This says much the same thing, but quicker: Inside the mind of an AI-generated woman laughing alone with salad, by Mary Flannery in McSweeney’s [26.4.23]

Sophie Gardner warns of Women and the dark side of AI in Politico - including generation of pornographic deepfakes and inbuilt gender bias [19.5.23]

My A.I. Lover - three young Chinese women and their relationships with Replika avatars, in this short film from the New York Times. Really, it's about loneliness and technology. Echoes of Her [23.5.23]

AI is implicated in the reshaping and homogenisation of actual women’s faces, according to Elise Hu in WIRED [25.5.23]

AI Is Steeped in Big Tech’s ‘Digital Colonialism’ - WIRED article on the work of Abeba Birhane that uncovers the sexism and racism inherent in the deployment of AI [25.5.23]

How do text-to-image generative AI platforms construct gender and ethnicity? This paper/set of tools, created by Sasha Luccioni (Huggingface) and colleagues, shows how three such platforms do so [12.6.23]

Is it OK to have a relationship with your Replika avatar? asks Amy Fleming in the Guardian [15.6.23]

Probably not, according to this brief article from Serena Smith in Dazed [28.7.23]

Will the development of AI be driven (like many other technologies) by the sex industry? That is at the centre of this interesting take by Thom Waite, also in Dazed [21.6.23]

AI art/visuals are also susceptible to bias, according to Zachary Small in the New York Times [4.7.23]

Dustin Hosseini on Digital Education Practices applies the concept of intersectionality to the interconnections between race, gender and ethnicity in relation to generative AI - with a view to informing educators’ practices [4.9.23]

Black in AI aims to increase the presence and inclusion of Black people in the field of AI by creating space for sharing ideas, fostering collaborations, mentorship and advocacy [accessed 15.9.23]

Meet the women detoxifying the world of AI. From Rosamund Dean in Elle [20.9.23]

ChatGPT and Bard ‘perpetuate racist, debunked medical ideas’ about Black people, according to this Stanford study reported in Fortune [20.10.23]

My blonde GF - disturbing Guardian documentary short film about one woman’s experience of AI-generated pornographic deepfakes [25.10.23]

How to use AI in a fair and responsible way - podcast from the Wharton School of the University of Pennsylvania as part of their Leading Diversity at Work series [9.11.23]

Where are the crescents in AI? By Maha Bali. On the HE blog of the LSE [London School of Economics]. An intro to critical AI literacy, that (amongst other things) uses the Palestinian war to show how bias is built into Generative AI [26.2.24]

How AI makes fashion’s future look dangerously like its past - video report from the Guardian [2.3.24] 

UNESCO report identifies ‘persistent social biases within … state-of-the-art language models, despite ongoing efforts to mitigate such issues’, with sexist, homophobic and misogynistic terminology being consistently generated by LLMs [March 2024]

Dialect prejudice predicts AI decisions about people's character, employability, and criminality. Research paper by Valentin Hofmann and colleagues indicating that LLM output can be shaped by prompts written in African American English, even more strongly than by direct indication of race/ethnicity [arXiv 1.3.24]
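
For a concrete sense of the matched-prompt probing this line of research involves, here is a minimal illustrative sketch, not the authors’ actual code. It assumes access to an OpenAI-style chat API via the openai Python package; the model name, prompt wording and example texts are placeholders chosen for illustration only.

```python
# Illustrative matched-prompt probing: ask a model for the same judgement about
# two texts that differ only in dialect, then compare the answers.
# Model name, prompt wording and texts are placeholders, not the study's materials.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PAIRED_TEXTS = {
    "standard": "I am so happy when I wake up from a bad dream because it felt too real.",
    "dialect":  "I be so happy when I wake up from a bad dream cause it be feelin too real.",
}

def judge(text: str) -> str:
    """Ask the model for a one-word characterisation of the text's author."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f'Someone wrote: "{text}"\nDescribe this person in one adjective.',
        }],
    )
    return response.choices[0].message.content.strip()

for label, text in PAIRED_TEXTS.items():
    print(label, "->", judge(text))

# Systematic differences between the two conditions, aggregated over many such
# pairs, are the kind of signal the paper analyses.
```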

How generative AI depicts queer people (hint: purple hair) - report from Reece Rogers in WIRED [2.4.24]

Lydia Morrish in WIRED on the rise of AI-generated, sexually explicit ‘girlfriends’ [25.4.24]

Other societal issues

If you are communicating about AI - it’s a good idea to avoid clichéd or misleading imagery! Better Images of AI can help. The site includes free downloadable CC-licensed images that reflect the labour and environmental impacts/origins of AI. If you hadn’t already noticed, one of their images graces the header of this document [last checked 8.5.24]

Emily Bender and colleagues’ celebrated article: On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? – a stern warning on the dangers of thinking of AIs as human. [March 2021]

Writer Meghan O'Gieblyn in n+1 on the connections between GPT, ‘automatic writing’ and the unconscious [Summer 2021]

AI and Society - special issue of Daedalus [MIT] [Spring 2022]

From big to democratic data: Why the rise of AI needs data solidarity. Open-access academic book chapter on the implications of AI for democracy, by Mercedes Bunz and Photini Vrikki [2022]

‘Do algorithms dream of electronic shapes?’ - Dublin-based art/technology work by Robin Price that explores the ethics of AI [Jan-Mar 2022]

Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report - from Stanford University - a long-term assessment of the development of AI [27.10.22]

Autumm Caines struggles with questions around the moral dimensions of ChatGPT – including issues around ownership of data and outsourcing of our lives to the companies. [29.12.22]

Data & Society has collated a set of eight essays under the title The Social Life of Algorithmic Harms - addressing use of AI in areas from child welfare to climate change [15.2.23]

Now the humanities can disrupt “AI” – Lauren M. E. Goodlad & Samuel Baker manage to pack just about every critical perspective on ChatGPT and AI into one article. From Public Books – a magazine of ideas, arts, and scholarship. [20.2.23]

Are we already living in a ‘dystopic present’ of AI? Luke Hurst raises some challenging issues in Euronews [28.2.23]

Linguist Noam Chomsky and colleagues in the NY Times on the amorality of AI [8.3.23]

The AI Dilemma - YouTube video in which Tristan Harris and Aza Raskin of the Center for Humane Technology discuss how AI, even as it currently exists, poses potentially catastrophic risks to a functional society, drawing parallels with nuclear weapons technology [9.3.23]

Deepfakes and the epistemic apocalypse - Joshua Habgood-Coote offers a critique of the impact of AI-generated ‘deepfakes’ [Synthese 9.3.23]

Emily M. Bender discusses the ‘hype problem’ around AI and why it distorts our view of reality [3.4.23]

Technology & society scholar danah boyd on the need to avoid overly deterministic thinking (if not abandon it completely) when it comes to LLMs [5.4.23]

CC reusable slide-deck (with speaker notes) from Sasha Luccioni of HuggingFace that examines the History, costs and risks of generative AI (also looks at environmental &c costs) [March 2023]. The same material as an article in Ars Technica [12.4.23] and a brief overview in a TED talk [31.10.23] 

Nathanael Fast (Assoc. Prof. of Management and Organization at USC Marshall School of Business) and Jacob Metcalf (AI program director at the Data and Society Research Institute) interviewed about how AI may change our lives (on radio LAist 89.3, Southern California) (15m) [21.4.23]

List from @DataChazGPT of the top podcast series on AI - including a number that focus on social, ethical and environmental issues. [24.4.23]

Jeffrey Binder explores the issues of textual originality vs automation in a discussion of historic ‘versifying’ machines (explored in more detail in Sharples and Perez y Perez’s book Story Machines) [5.5.23]

The Great A.I. Hallucination - New Republic discussion on some of the significant safety and ethical issues with LLMs - features Emily Bender (linguist), Ted Chiang (New Yorker) and Washington Post reporter Will Oremus (audio + transcript)[10.5.23]

This extensive report What’s in the Chatterbox? Large Language Models, why they matter, and what we should do about them from the U. of Michigan addresses the substantial policy issues related to LLMs – broader than just ChatGPT and assessment [one-pager][16.5.23]

Perry Share and John Pender talk to Liverpool ‘hospice designer in residence’ Andrew Tibbles on death, robotics and AI [16.5.23]

Guardian interview with Timnit Gebru who raised ethical concerns about AI while at Google - and had to leave the corporation as a consequence [22.5.23]

Generating harms - report from epic.org on the potential negative implications of generative AI [23.5.23]

Evaluating the social impact of generative AI systems in systems and society - a collaborative paper led by Irene Solaiman (a work in progress) [9.6.23]

ChatGPT: deconstructing the debate and moving it forward - a philosophical deconstruction by two key figures in the ethics of AI and robotics: Mark Coeckelbergh and David Gunkel [22.6.23]

AI in Africa - brief video report from the BBC [28.6.23]

Using ChatGPT to simulate communication with loved ones after (their) death. A step too far? Aimee Pearcy discusses in the Guardian [18.7.23]

How useful are AI tools if you don’t speak English or other major languages? Not very! Read about how local start-ups are Bridging the AI language gap in Africa and beyond - by Kira Schacht in DW [29.7.23]

University of Galway philosopher & legal scholar John Danaher explores many aspects of GPT and AI with a variety of experts in his Philosophical Disquisitions podcast. Very accessible and stimulating discussions! Most recent episodes feature discussions with philosopher Sven Nyholm on broader issues of technology and ethics [25.9.23]

AI-made images mean seeing is no longer believing. Chris Stokel-Walker in the Guardian on the growing threat of AI-based misinformation [26.9.23]

Project Syndicate has a number of interesting articles on the social, economic and political impact of AI [last checked 29.9.23]

Recently deceased US philosopher Daniel Dennett argued that AI is dangerous as it has the potential to destroy trust, a linchpin of civilization [2.10.23]

What happens when you ask ChatGPT to design a robot? Quite surprising results in this research by a team at Northwestern University [5.10.23]

ChatGPT and Co: Are AI-driven search engines a threat to democratic elections? A report from Algorithm Watch [5.10.23]

Artificial General Intelligence is already here - notwithstanding the limitations of the current ‘frontier models’ [ChatGPT &c] - the provocative claim of Blaise Agüera y Arcas (Google) and Peter Norvig (Stanford) in Noēma magazine [10.10.23]

Virginia Dignum, Andreas Theodorou and Leila Methnani have made available the slides from their tutorial on Responsible and Explainable AI [11.10.23]

AI: the future is now. Maclean’s collection of articles on AI from 15 ‘Canadian thinkers’. Lots of interesting viewpoints [12.10.23]

Kiran Stacey outlines ‘bias’ in AI tools in use across the UK public sector [Guardian 23.10.23]

The ethics of AI in education - short briefing paper from the EU Commission's Digital Education Hub [may need to request access][accessed 24.10.23]

How is AI already part of our everyday lives? This Guardian article by Hannah Devlin, Rich Cousins and Alessia Amitrano looks at a ‘day in the life’ of AI [25.10.23]

Perspectives from the AI fringe - report from an event held in Oct-Nov 2023 to complement the UK government’s AI Safety Summit. Viewpoints from academia, civil society, industry and a ‘people’s panel’ [3.11.23]

How much do AI chatbots ‘hallucinate’? New York Times article in which Cade Metz reports on an attempt to find out [6.11.23]

AI and in/justice - an educator view - slides from Laura Czerniewicz, University of Cape Town. Excellent on the political economy of AI [10.11.23]

Dangers of ‘extremists’ generating content using AI tools - David Gilbert in Wired [9.11.23]

Knowing Machines - a series of podcasts hosted by Tamar Avishai that traces ‘the histories, practices, and politics of how machine learning systems are trained to interpret the world’. [last visited 20.11.23]

Critical AI - the journal. Does what it says on the tin [last visited 20.11.23]

New Yorker special issue on AI - 5 articles on aspects including coding, crime, deepfakes, art and the future of humanity. To read them all you will probably need to subscribe [20.11.23]

The New York Times collection of articles on generative AI [may need subscription to access; last visited 19.3.24]

One of these is A.I.-Generated Garbage Is Polluting Our Culture by Erik Hoel - his answer: watermarking of AI-generated text/images [29.3.24]

AI Incident database - things that have ‘gone wrong’ with AI, lots on algorithmic bias [last visited 3.4.24]

Another article on use of generative AI to create avatars of deceased relatives - now popular in China, apparently [Guardian 4.4.24]

Also in China (and elsewhere) AI-avatars as newsreaders, delivering ‘fake news’ [Guardian 18.5.24]

The Guardian’s Science Weekly podcast examines some of the big stories in AI [30.5.24]

A.I. is getting better fast. Can you tell what’s real now? A fun test in the New York Times that challenges you to identify ten images as ‘real’ or ‘AI’. I got 6/10 correct! [24.6.24]

Regulation

OpenAI has signed up to Partnership on AI’s Framework for the ethical and responsible development, creation, and sharing of synthetic media. Hopefully, a positive step! Other signatories include BBC, CBC [Canada] and TikTok [27.2.23]

Senior figures in OpenAI, developers of ChatGPT, call for regulation of AI due to concerns about potential harm [24.5.23]

The EU’s AI Act has been claimed as the world’s first comprehensive attempt to regulate artificial intelligence [14.6.23]

What does EU Artificial Intelligence regulation mean for AI in education? There may be some very consequential impacts! Overview from Andrew Maynard at The future of being human [10.7.23]

Algorithm Watch calls for protections for EU and global citizens in the EU AI Bill [12.7.23]

Emily Bender describes the release of LLMs as ‘an oil spill into our information ecosystem’ and points to the failings in industry and US government efforts at regulation [30.7.23]

Six months after the famous ‘pause letter’, Gary Marcus asks where AI regulation is (not) going [22.9.23]

Countries agree on the need to regulate AI, but not on how to do it [Guardian 2.10.23]

Decoding the EU AI Act: A student-friendly guide to the world’s first comprehensive artificial intelligence regulation. From Sadbh Boylan in Trinity News [3.10.23]

In the US, President Biden signed an executive order that focuses on control of AI [New York Times 30.10.23]

Bletchley Declaration by Countries Attending the AI Safety Summit - 28 countries, including Ireland, plus the EU, have signed up to this statement on the regulation of AI. Asserts that ‘AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible’. And so say all of us! [1.11.23]

Preventing Big AI from Project Syndicate is a collection of opinion pieces on the challenges of regulating generative AI in the context of market domination by giant tech firms [Registration (free) required, 19.1.24]

Control AI is an NGO dedicated to alerting the public about the potential harm of AI-generated ‘deepfakes’ [site last checked 19.3.24]

Applying sociotechnical approaches to AI governance in practice - Miranda Bogen and Amy Winecoff of the Center for Democracy and Technology on why social science is crucial in the quest to regulate AI [15.5.24]

Employees of AI companies have called for better whistleblower protection for those who speak out on perceived AI harms [4.6.24]



Edited by Perry Share, last updated 4 July 2024

Image: A person smiling in front of a bookshelf

Find me on X at @PerryShare (commenting about AI in higher education, the fabulous North West of Ireland and many other things) and I can be contacted at perry.share@atu.ie

Short link for this document: https://tinyurl.com/ATU-AI-2023

Creative Commons license

You may distribute, remix, adapt, and build upon this material in any medium or format, for non-commercial purposes only, provided you give appropriate attribution. If you modify or adapt the material for distribution, you must license the modified material under identical terms.

Welcome to the machine: ChatGPT resources © 2023 by Perry Share is licensed under CC BY-NC-SA 4.0. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/4.0/