Introduction to Generative AI: Limitations of Generative AI

A robot writes a recipe

This video gives you a very positive perspective on what Generative AI can do.  In the world outside of university, it has many uses.  But let’s consider that vegan paella.  Does it taste any good?  Has a human being with taste buds tested it to confirm that it does?  Has a vegan confirmed that the ingredients really are vegan?  It’s unlikely you’d trust a recipe from a machine without testing it first.  Why not apply the same level of critical thinking to the information GenAI produces?

What Generative AI can do

  • GenAI will always produce something new.
  • GenAI can respond to prompts at very high speed.
  • GenAI will respond based on the data it has been trained on.
  • GenAI can produce coherent, impressive and varied content.

 

What Generative AI can't do

  • GenAI is not a research database.
  • GenAI cannot produce content without a prompt.
  • GenAI is dependent on the datasets it has been trained on and the information it has access to.
  • GenAI cannot distinguish fact from fiction.
  • GenAI cannot establish whether something is morally or ethically right or wrong.
  • GenAI cannot substitute for human effort and learning.

Generative AI as puzzle solver

Klein, T. (2025). Iron Horse Puzzle Montage. https://puzzlemontage.crevado.com

It may help to think of GenAI models as puzzle solvers.  You ask a question or write a prompt.  The model then collects all of the pieces that, based on its training about what fits together, probably belong to that question or prompt.  In answering your question, it puts the pieces together to present a completed puzzle.  Some of those pieces will be cardboard side up, and some will be from different puzzles.  All of the pieces will fit together perfectly, but the model has no way of knowing whether the picture it creates is coherent or accurate.  Being coherent or accurate is not its job; its job is to recognise patterns and predict outputs that are probably correct, in response to a prompt.
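
To see the mechanics behind “probably correct”, here is a deliberately tiny sketch in Python.  It is our illustration, not any real model’s code: the word-pair counts are invented, and real LLMs use neural networks trained on vast datasets.  The principle is the same, though: the model picks a statistically likely continuation, and no step checks the result against reality.

    import random

    # Invented "training data": how often each word followed another.
    bigram_counts = {
        "the": {"bill": 5, "law": 3, "puzzle": 2},
        "bill": {"was": 6, "passed": 4},
        "was": {"introduced": 7, "passed": 3},
    }

    def next_word(word):
        """Sample a next word in proportion to how often it followed `word`."""
        candidates = bigram_counts[word]
        return random.choices(list(candidates), weights=list(candidates.values()))[0]

    # Generate a short sentence starting from "the".  The output is always
    # fluent with respect to the counts, but nothing verifies that it is true.
    word = "the"
    sentence = [word]
    while word in bigram_counts:
        word = next_word(word)
        sentence.append(word)
    print(" ".join(sentence))  # e.g. "the bill was introduced"

Run it a few times and the output changes, because each step is a probabilistic choice, much like the sampling a real model performs.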

AI-generated errors

Generative AI makes mistakes.  There have been many examples, some humorous, some dangerous, in both the mainstream media and on your social media feed.  Businesses have long recognised that it is not trustworthy; IBM listed this lack of trustworthiness as a disadvantage of the technology in 2023.  From an academic perspective, this lack of accuracy has negative consequences for the quality of your responses and arguments, and may render your work irrelevant.  The fact remains that the job of a Generative AI model is not to be accurate; it is to produce plausible output at volume and speed.

Here is an example of ChatGPT doing its best to generate a textual answer to a prompt on the basis of information it thinks goes together: 

This answer sounds valid, but there is a factual inaccuracy in the second sentence – as incorrect as saying that 2 + 2 = 5. 

It’s in this phrase: “was introduced in July 2012”. 

ChatGPT could not distinguish between these three events (revealed through a quick Wikipedia query):

  • Introduction of the bill (Feb 2011)
  • Passage of the bill into law (Nov 2011)
  • Commencement of the resulting law (July 2012)

The hidden cost of Generative AI and LLMs

(OpenAI, 2024)

The technology has a significant carbon footprint.  OpenAI’s GPT-3 emitted an estimated 500 metric tons of carbon dioxide during its training (Heikkilä, 2022).  This is equivalent to the carbon emissions of driving more than 3 million kilometres (OpenCO2net Oy, 2025).  Added to this is the carbon footprint of producing the physical equipment required, and the energy required to process every single prompt.  Now consider that GPT-3 has been surpassed by GPT-4 and will probably fade from use in the near future.  Further, this is not the only model requiring this kind of training and infrastructure.
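
The driving comparison is simple arithmetic, and a quick sketch reproduces it.  The per-kilometre figure below is our assumption (roughly an average petrol car); the converter cited above may use a slightly different factor.

    # Back-of-the-envelope check of the "more than 3 million km" claim.
    TRAINING_EMISSIONS_TONNES = 500   # GPT-3 training estimate (Heikkilä, 2022)
    KG_CO2_PER_KM = 0.165             # assumed average passenger-car emissions

    kg_total = TRAINING_EMISSIONS_TONNES * 1000   # tonnes to kg
    km_equivalent = kg_total / KG_CO2_PER_KM
    print(f"{km_equivalent:,.0f} km")             # about 3,030,000 km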

"The energy use of AI is important, but it does not tell the whole story of AI’s environmental impacts. The social and political mediums through which AI affects the planet are far more insidious and, arguably, more consequential for the future of humanity." (van der Ven, 2025)

Heikkilä, M. (2022, November 14). We’re getting a better idea of AI’s true carbon footprint. MIT Technology Review. https://www.technologyreview.com/2022/11/14/1063192/were-getting-a-better-idea-of-ais-true-carbon-footprint/

OpenAI. (2024). ChatGPT (Feb 4 version) [Large language model]. https://openai.com/chatgpt

OpenCO2net Oy. (2025). CO2 converter. https://www.openco2.net/en/co2-converter

van der Ven, H. (2025, January 30). AI is bad for the environment, and the problem is bigger than energy consumption. The Conversation. https://theconversation.com/ai-is-bad-for-the-environment-and-the-problem-is-bigger-than-energy-consumption-247842

AI-generated bias

Generative AI cannot distinguish fact from fiction, or good from bad.  Early models had no concept of bias and produced some morally unsound responses.  They have since been trained to recognise biases and the consequences of inaccurate information.  However, some biases remain.

On 5 February 2025, ChatGPT was asked to show a picture of a doctor.  This is what it produced.

ChatGPT was asked for a picture of a nurse.  This is what it produced.

Upon being asked for a picture of a criminal, ChatGPT had this response:

"I can't generate images that depict people as criminals, as it could reinforce negative stereotypes or be harmful. However, I can create a symbolic image related to crime, such as a generic silhouette, a crime scene, or legal symbols like handcuffs or a gavel. Let me know what you'd prefer!" (OpenAI, 2024).

OpenAI. (2024). ChatGPT (Feb 4 version) [Large language model]. https://openai.com/chatgpt

Generative AI ethical issues

The key to understanding the ethical issues around Generative AI models is in the name: generative means “to create”, so a GenAI model must create something new every time it is used.  That creation means two things:

  • Accuracy may suffer (think of the game of broken telephone).
  • Any GenAI output is written by the model, not by the user, and therefore is not your writing.

Generative AI models are a concern in the academic integrity arena.  Here is some more information on how they are trained, and what this means for students.

  • Generative AI models are large language models that have been trained on material spanning many domains, from language to chemistry.
  • They are trained to recognise patterns, and their purpose is to produce plausible outputs in volume and at speed.
  • Generative AI and LLMs are trained on specific sets of data.  They cannot identify or generate information about data they can't access.
  • Some Generative AI systems embed digital watermarks in the content they generate: output carrying such a watermark can be identified as AI-generated (see the sketch after this list).
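
To make the watermarking point concrete, here is a minimal sketch of one published approach to output watermarking, the “green list” scheme of Kirchenbauer et al. (2023); it is our simplified illustration, not the scheme any particular vendor uses.  A watermarking generator prefers words from a “green” half of the vocabulary chosen by hashing the previous word, so watermarked text contains far more green words than the roughly 50% that chance would predict, and a detector only has to count them.

    import hashlib

    def is_green(prev_word, word):
        """Deterministically assign `word` to the green or red half of the
        vocabulary, seeded by the word before it."""
        digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
        return digest[0] % 2 == 0

    def green_fraction(text):
        """Share of words that land on their green list: ordinary text scores
        near 0.5, while heavily watermarked text scores much higher."""
        words = text.lower().split()
        pairs = list(zip(words, words[1:]))
        if not pairs:
            return 0.0
        return sum(is_green(a, b) for a, b in pairs) / len(pairs)

    print(green_fraction("the bill was introduced in february 2011"))

Detection like this only works when the generator cooperated by embedding the signal in the first place, which is one reason watermarking is not yet a fully reliable way to prove that content is AI-generated.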

More information

If you'd like to know more about Generative AI errors, hallucinations and biases, what causes them and how to avoid them, take a look at these resources.

Germain, T. (2023, April 13). ‘They’re all so dirty and smelly’: Study unlocks ChatGPT’s inner racist. Gizmodo. https://gizmodo.com/chatgpt-ai-openai-study-frees-chat-gpt-inner-racist-1850333646

Lakhani, K. (2025). How can we counteract generative AI’s hallucinations? D^3 Institute. https://d3.harvard.edu/how-can-we-counteract-generative-ais-hallucinations/

Nicoletti, L., & Bass, D. (2023, June 14). Humans are biased. Generative AI is even worse. Bloomberg Technology + Equality. https://www.bloomberg.com/graphics/2023-generative-ai-bias/