AI for T&L in Education: AI and society

AI governance and risks

Risks and considerations

We have been using AI for years and have already realised benefits in the realms of science, healthcare, and manufacturing (amongst many others), as well as in our academic and personal lives. However, the explosive recent interest in AI and the rapid developments of the past few years have raised questions about what ubiquitous AI will do to our society. Will AI take over our jobs? Monitor our every move? And what if it becomes sentient?

Artificial intelligence is already impacting society. As early as 2015, leaders in science and technology such as Stephen Hawking, Elon Musk and Peter Norvig, along with thousands of others, signed an open letter created by the Future of Life Institute calling for consideration of the benefits and risks of AI to humanity.

The trope of the evil superintelligence is well entrenched in Western popular culture. From Asimov's Robot stories, written from the 1940s onward, to movie franchises such as The Terminator and The Matrix and countless others, distrust of artificial intelligence is part of the fabric of our culture. A 2021 survey (KPMG & University of Queensland, 2021) found that while people were willing to tolerate AI, few approved of or embraced it (and this was pre-ChatGPT!).

Max Tegmark, a co-founder of the non-profit Future of Life Institute (one of the world's leading voices on the governance of AI and other technologies), had this to say about the trope of evil AI in a 2017 interview with The Guardian (Anthony, 2017):

“This fear of machines turning conscious and evil is a red herring. The real worry with advanced AI is not malevolence but competence. If you have Super Intelligent AI, then by definition, it’s very good at attaining its goals, but we need to be sure those goals are aligned with ours. I don’t hate ants, but if you put me in charge of building a green-energy hydroelectric plant in an anthill area, too bad for the ants. We don’t want to put ourselves in the position of those ants.”

Like cloning, AI presents both incredible opportunities to advance human capabilities and existential challenges.

The misuse of AI has already led to a range of issues, from social media algorithms designed to enrage us in order to engage us, to the proliferation of misinformation, to deepfake images, audio and video used for propaganda and extortion. Future worries include misuse of the technology to hack the encrypted systems of major financial and government infrastructure, biological attacks, and weapons of precisely targeted and mass destruction.

There is also the looming spectre of loss-of-control risks.

International governance

In November 2023, representatives of 28 countries (including Australia), AI executives, academics and scientists met at Bletchley Park, England, for an AI Safety Summit to address concerns around the governance of AI development.

The resulting "Bletchley Declaration" (UK Gov't, 2023) makes statements of principles regarding AI development and advocates for a united approach between governments, international organisations and other stakeholders regarding AI safety and ethics to harness AI's benefits and mitigate its harms. The key statements in the document include:

  • AI is a transformative force and should be developed and used in ways that are safe, human-centric, trustworthy and responsible.
  • Risks, particularly those at the frontier of AI, such as the ability to produce misinformation and propaganda at scale or to threaten the security of essential systems, must be addressed.
  • International cooperation on the research into AI safety, as well as governance, is essential.

However, not everyone is impressed with the results of the summit. Firstly, not all countries with AI development capability attended: Russia, for instance, did not attend or sign the agreement. Secondly, AI experts such as Gary Marcus are reported as saying (Sparkes, 2023) that the declaration doesn't go far enough and that we need to move past position statements. Clark Barrett of Stanford University, however, is reported in the same piece as saying that the intent of "building a shared scientific and evidence-based understanding of these risks" is a sensible approach (Sparkes, 2023).

Countries around the world, as well as blocs such as the EU, are either working on or implementing AI governance. The OECD attempts to track AI policies worldwide in the AI section of its policy observatory website.

In August 2023, China released groundbreaking legislation (Roberts & Hine, 2023) targeting different aspects of AI, including generative AI.

In the same week the Bletchley Declaration was released, US President Joe Biden issued an executive order (Hsu, 2023) directing a range of US government agencies to develop guidelines for testing and using AI systems. To have real impact, however, the order must be backed by legislation, which may be difficult to pass during a contentious period in the US Congress and the 2024 election year. One part of the order does address AI foundation models, using existing legislation to require companies developing such AIs to notify the government about the training process and to share the results of all risk assessment testing.

The World Economic Forum held an AI forum on 13-15 November 2023 in Davos to discuss AI governance and global cooperation on AI issues. 

The EU is finalising laws to regulate the use of AI. A draft report from the EU Parliament Special Committee on AI released in November 2023 recommends that the level of regulation depend solely on the type of risk associated with a specific AI, so as not to hamper AI's potential to help humanity solve problems and make life-changing breakthroughs.

International and Australian standards bodies have published several standards regarding AI. However, the roles those standards should play and how they should be enforced are still being discussed as new types of AI emerge that may require different standards, and both international governance and governance by local authorities are still being finalised (Pouget, 2023).

The Australian government has released eight voluntary ethics principles for AI companies and has a task force working on AI in the Australian Public Service. Although there are existing IT laws, calls for legislation specifically related to AI are coming from academia and human rights groups. 

From June to August 2023, public consultation on AI governance was sought by the Australian Government Department of Industry, Science and Resources. The next steps are, as of this writing (November 2023), still being determined.

AI and IP

Currently, courts and government bodies are wrestling with intellectual property issues concerning the content used by AI companies to train their large language models and who owns the copyright to the output of these models.

Use of copyrighted work to train LLMs

AI companies are facing a number of lawsuits from individual content creators and corporations accusing them of illegally "scraping" their work to train AI systems and produce output. Microsoft, GitHub and OpenAI are among those being sued. Coders (Walsh, 2023), writers (David, 2023) and visual artists (Wiggers, 2023) claim that their work was, and still is, being used by these models to produce output without their permission and without due compensation and attribution. In the complaint against OpenAI put forward by well-known authors, the argument is made that OpenAI could have used public domain works to train its models and that ChatGPT could be used to produce works that will harm their markets.

Use of an artist's name to sell an AI-generated work

Another IP issue related to AI is the fraudulent publication of books created using AI but published under a writer's name without their knowledge, with online marketplaces used to sell them. Once these books appear on sites like Amazon, authors have found it a time-consuming battle to get the illegitimate publications removed, let alone to sue the fraudulent sellers. By the time issues are resolved, there could be dozens more books putting writers' reputations and income at risk.

Who owns an AI-generated work?

This leads to another IP/Copyright issue. If you work with AI to create an image, animation, piece of music, article, or other work - can that product be copyrighted?

In Australia, human-generated work is copyrighted automatically from the moment an "independent intellectual effort", including an idea or creative concept, is documented (Arts Law Centre, n.d.; Copyright Agency, n.d.).

AI bias

Right before ChatGPT was released in November 2022, the Lensa app was making headlines for all the wrong reasons. When the app added "magic avatars", which took the user's photo and created AI-generated glam and fantasy shots, women noticed a sharp difference between their images and those of their male friends. As journalist Melissa Heikkilä observed in the MIT Technology Review, men had their images morphed into superheroes in heroic stances, or into warriors or knights, while women got hypersexualised images in sultry poses.

People program AIs, choose the datasets used to train AIs and then provide feedback to AIs, and those people come with their own sets of cultural, socioeconomic and educational biases. They are also largely male: only 12% of AI researchers and only 6% of programmers are female, and worldwide women make up only 25% of data scientists.

More than a third of the world's population doesn't have access to the internet, let alone the means to create content for it. Medical resources and AI tools often lack images of people of colour. AI speech recognition software is notorious for not understanding accented English, even though most of the world's internet users don't speak English as their first language and aren't white. Yet the most talked-about AI companies, those with a seat at the Bletchley Park summit in 2023, are xAI, OpenAI, Meta Platforms and Alphabet: all US-based companies.

Of course, much of this is down to where AI development happens first. Naturally, innovators start by serving markets with which they're familiar, and as new AI companies and research crop up in other countries, their populations will be served by AI that reflects their cultures. But do these silos of culturally and nationally contextual AIs risk a decrease in international cultural understanding as we each use our own country's AI tools for our personal information and entertainment?

One year on from the Lensa app, the Washington Post published a story showing that stereotypes based on white, Western, male, middle-class (and wealthier) perspectives are still very much a problem with the general-purpose generative AI tools accessible to anyone with internet connectivity.

AI bias can happen at every stage of the process. A Forbes magazine article (Knapton, 2023) outlines several biases that can emerge during the training and implementation of generative AI systems:

  • Machine bias - present in the data set used to build the LLM
  • Availability bias - when the model over-relies on easily accessed public data, prevailing opinions and misinformation that is more prevalent than factual content
  • Selection bias - if data sources are not sufficiently diverse 
  • Confirmation bias - either in the training data or the prompts used to retrieve data
  • Group attribution bias - when AI has limited exposure to the complexities of a group and makes assumptions based on limited data
  • Contextual bias - misunderstanding context
  • Linguistic bias - when AI favours certain cultural references or linguistic styles over others
  • Anchoring bias - when the AI model relies too heavily on an initial dataset and perpetuates these biases even when new content is added
  • Automation bias - this one is on us: when we assume machines are smarter and better informed than we are and accept what they give us uncritically

Data biases are inevitable in human-created data (Narayanan, 2018), so to manage them throughout a system, AI companies and data scientists must employ a range of techniques and measures.
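As a simple, hypothetical illustration of one such measure, the sketch below runs a basic "demographic parity" check, comparing how often a model produces a favourable outcome for two groups in a small set of made-up predictions. The data, the group labels and the 80% rule-of-thumb threshold are assumptions for illustration only; real bias audits use richer metrics, larger samples and domain expertise.

```python
# A minimal, hypothetical sketch of one bias measure: demographic parity.
# It compares how often a model produces a favourable outcome for two groups.
# The data, group names and the 0.8 "rule of thumb" threshold are illustrative only.

def positive_rate(predictions, groups, group_name):
    """Share of favourable predictions (1s) received by members of group_name."""
    outcomes = [p for p, g in zip(predictions, groups) if g == group_name]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

# Made-up model outputs (1 = favourable decision) and group membership.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
groups = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

rate_a = positive_rate(predictions, groups, "A")
rate_b = positive_rate(predictions, groups, "B")
print(f"Favourable rate, group A: {rate_a:.2f}")
print(f"Favourable rate, group B: {rate_b:.2f}")

# One common heuristic (the "80% rule"): flag the model for review if the
# lower group's rate is less than 80% of the higher group's rate.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Disparity ratio: {ratio:.2f} -> {'review for bias' if ratio < 0.8 else 'within heuristic'}")
```

A check like this surfaces only one narrow symptom of bias; it says nothing about the availability, selection or linguistic biases listed above, which is one reason human judgement remains central.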

In a 2019 McKinsey article, the authors stated that resolving AI bias "...requires human judgment and processes, drawing on disciplines including social sciences, law, and ethics, to develop standards so that humans can deploy AI with bias and fairness in mind. This work is just beginning."

Four years on from that article, researchers are still grappling with AI bias. In a July 2023 piece in The Conversation titled "Eliminating bias in AI may be impossible - a computer scientist explains how to tame it instead", the author (Ferrara, 2023) argues that the pursuit of fairness in generative AI systems demands a "holistic approach" requiring "not only better data processing, annotation and debiasing algorithms, but also human collaboration among developers, users and affected communities".

Noortje Marres, a professor and expert in digital sociology at the University of Warwick, has reportedly pointed out that the Bletchley Declaration did not suggest any consultation mechanisms that would give the general populace or marginalised groups scope to contribute their views or concerns to the governance of AI.

Yet AI tools are already being widely integrated into products and used by a wide range of industries, and even by law enforcement, where predictive policing tools based on historical arrest data can reinforce existing patterns of racial profiling and stereotypes. IBM's global AI adoption index for 2022 showed that 35% of companies reported using AI in their businesses and an additional 42% were exploring it.

The need for competent humans in the loop

Recently, it has been revealed that even a company like Microsoft, which should have experts who could have foreseen the consequences, has caused damage by replacing human editors with AI algorithms. Its msn.com page, which features news and current events, is the default page for millions of browser users. The company specifically told human editors they were being replaced with AIs, and the results have been disastrous: the site has published extremist misinformation, propaganda, inappropriate polls ("What do YOU think is the reason this woman died?"), outright false news under an MSN tagline, and content violating the IP of other publishers.

An emeritus professor who, with other academics, helped to prepare a submission to Parliament recently had to apologise for not vetting content that contained incorrect information generated by Google's Bard AI.

Understanding how to use AIs effectively and ethically is essential, especially when getting it wrong can result in the spread of misinformation, propaganda or false information that can affect legal and legislative outcomes.

This is where we, as teachers of current and future AI users, can make a difference to our students. We can consider AI literacies, such as those set out elsewhere in this guide under "Preparing students for an AI future" - "AI skills and literacies". We can use AI ourselves, applying the PAIR framework to our use and reflecting on the output we receive.