There is an ever-evolving suite of AI tools marketed to assist with academic research. They make a variety of claims about their special suitability to the research landscape, such as enhanced data protections or the use of only verifiable academic sources. It is important to remember that this is vendor marketing, and that:
- None can or should claim that they are a secure or safe repository for research data
- None can or should claim that they have eliminated bias in literature search methods
- None can or should claim that they can replicate the depth of insight of a human researcher
The fact remains that AI tools for academic literature searches and summaries have serious limitations that may mean more work for researchers down the track.
These include:
- Lack of contextual understanding. AI academic literature searches decontextualise knowledge, minimising the situation and place of its creation. We know, however, that the contextual conditions of knowledge creation are just as important to meaning as the final output. The decontextualisation of knowledge has long underpinned colonial claims to objectivity and universality.
- AI researcher tools favour mainstream research, representing popular research outputs as authoritative literature. This is often referred to as "algorithmic prejudice". It introduces bias into your literature search while further marginalising diverse, less mainstream knowledges. It can also circulate research outputs that misrepresent certain identities and experiences, reiterating harmful stereotypes.
- Despite marketing claims, research AI tools can still hallucinate and falsify results.
- They are no better at copyright compliance or confidentiality.
See:
Ciriello, R. (2025). OpenAI's new 'deep research' agent is still just a fallible tool – not a human-level expert. The Conversation.
Muldoon, J., & Wu, B. A. (2023). Artificial Intelligence in the Colonial Matrix of Power. Philosophy & Technology, 36, 80.