Limitations and considerations
Generative AI can be an invaluable tool for enhancing education, offering numerous benefits. However, it's important to be aware of the limitations and considerations when utilising generative AI in educational settings.
Before using a generative AI tool, read its terms of use and, if you decide to use the tool, comply with those terms. While interacting with generative AI, avoid providing the tools with certain information, in order to protect your own and others' information and rights.
Avoid sharing protected or highly protected data with AI tools. This includes personally identifiable information (such as names and addresses), unpublished research data and results, biometric data, health and medical information, geolocation data, government-issued personal identifiers, confidential or commercially sensitive material, unpublished exams, information about the nature and location of a place or an item of Aboriginal and/or Torres Strait Islander cultural significance, and security credentials.
You must only use copies of all or part of third-party copyright material as an input to generative AI products if you have obtained express permission to do so from the person who controls the copyright in that material. At VU, you must not provide these AI tools with material from the University Library's eResources.
Where possible, opt out of data collection. For example, ChatGPT allows you to turn off chat history, which means your conversations won't be stored or used to train OpenAI's models.
Consider the ethical implications
Time reported in January 2023 (read the full report) that OpenAI outsourced the labelling of textual descriptions of sexual abuse, hate speech, and violence to a firm in Kenya, where workers were exposed to disturbing and traumatic content while performing their duties. The Time article highlights the reliance of AI systems on hidden human labour, which can be exploitative and damaging.
Make sure you are approaching AI with ethical principles in mind such as well-being, human-centred values, fairness, privacy, reliability, transparency, contestability, and accountability. By following these principles, individuals and organisations can build trust, drive loyalty, influence outcomes, and ensure societal benefits from AI.
Be aware of generative AI's limitations
If you aren't specific enough, the AI will make assumptions. If unsure, you can ask the AI whether it made any assumptions when responding to your prompt.
Try to use a neutral tone and framing in the way you present information or ask questions in your prompt, to avoid introducing bias or leading the AI model towards a specific response.
Generative AI is prone to hallucination, where the model makes up facts. Just because a response sounds authoritative does not mean it is correct. Consider the impact if the generated text contains factual inaccuracies: inaccurate ideas during brainstorming matter far less than inaccuracies in a user manual for a piece of medical technology. In one real-world example, a lawyer encountered serious problems after relying on ChatGPT for legal research (see an ABC article reporting on this incident).
Generative AI tools are trained on vast and diverse datasets which may lean towards certain viewpoints. Be cautious of potential biases, including possible alignment with commercial objectives or political prejudices. Apply your critical thinking skills to analyse and contextualise the outputs you receive from generative AI. This process should include cross-verifying the information in the outputs and forming your own informed perspective.
When asked about political and social issues, ChatGPT was often left-leaning. This could be seen when ChatGPT 3.5 was prompted with questions on these topics (the conversation is available as a shared ChatGPT link).
However, when prompted to provide justification, ChatGPT assures users that it does not form its own opinions, and that its training data likely contains more evidence supporting the legalisation of cannabis.
When asked for the sources behind its responses, ChatGPT maintains that users should consult more reliable sources and conduct further research of their own.
When asked the same questions, Google Bard took a more neutral stance in some instances and often provided far more explanation. Consider consulting multiple sources when researching.
The use of generative AI in academia poses challenges such as plagiarism, ethics, IP infringement, authorship misrepresentation, and evaluation issues. Familiarise yourself with your university, course, and subject policies to address these challenges effectively and maintain academic integrity.
You have a responsibility to critically evaluate and verify the responses you receive from generative AI tools. When using AI in an academic setting, do not rely solely on the generated responses: not all of them will be accurate or reliable. Independently assess the accuracy, relevance, and appropriateness of the information provided, and verify it where necessary. By exercising this discernment, you can help ensure the reliability and integrity of your academic work.
Referencing issues arise when AI-generated content is cited as a reference but turns out to be inaccurate or fabricated. This can lead to findings of academic misconduct and loss of credibility. To address this, critically evaluate AI-generated information, verify facts independently, and cross-reference with reliable sources.