Academic integrity is a core value of Coventry University, underpinning all academic study.
AI tools can be beneficial to support your learning and assessments, but you must follow the individual assignment guidelines set by your module leaders. See Tasks for ethical examples of using AI in your studies.
Overuse of AI can limit the development of your research, critical analysis, and writing skills. These are valuable life skills to acquire whilst studying at Coventry University and will be required by many employers when you graduate.
When undertaking academic research, remember to evaluate your sources for credibility, accuracy, timeliness and usefulness.
Using AI tools such as Copilot or ChatGPT for research is no different. In fact, you will need to be even more vigilant when using AI tools and evaluating their output.
AI tools are built on probability: they produce the most statistically likely next answer based on the data they were trained on. They have no means of verifying or evaluating the quality of that answer. That is up to you.
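This idea can be sketched with a toy model. The probabilities and words below are purely illustrative (real systems use neural networks trained on vast amounts of text), but the principle is the same: the tool chooses a probable continuation, and no step ever checks whether the result is true.

```python
# Hypothetical next-word probabilities "learned" from training text:
# for each word, the likelihood of each possible following word.
next_word_probs = {
    "the": {"capital": 0.4, "moon": 0.3, "answer": 0.3},
    "capital": {"of": 0.9, "city": 0.1},
    "of": {"france": 0.6, "spain": 0.4},
    "france": {"is": 1.0},
    "is": {"paris": 0.7, "lyon": 0.3},
}

def generate(start_word, length):
    """Greedily extend a sentence by always choosing the most
    probable next word - truth is never consulted."""
    words = [start_word]
    for _ in range(length):
        options = next_word_probs.get(words[-1])
        if not options:
            break
        words.append(max(options, key=options.get))
    return " ".join(words)

print(generate("the", 5))  # -> "the capital of france is paris"
```

Here the output happens to be true, but only because the training data made it probable; if the data had favoured "lyon", the model would assert that just as confidently.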
See an explanation of the P.A.R.C. test on the Finding Resources guide for an example of one model to help you evaluate your sources.
Below are further areas to consider to engage with AI tools responsibly.
Any bias in the training data or algorithms will be reflected in the results. This perpetuates prejudices in society, particularly as the main source of training data is the internet, where most content is written in English and originates from the Western world.
Example: AI tools amplify majority views and mute minority views. Stable Diffusion, an image generator, rarely depicts women as doctors, lawyers, or judges.
Generative AI tools can give false information that is presented as truth, because their probability-based predictions can misconstrue or misrepresent information.
Example: References given by AI tools are often incorrect. They can be entirely fabricated, pair a real author with an incorrect title, or pair a correct title with the wrong journal.
The training data can be of very poor quality, e.g. Reddit posts, leading to incorrect conclusions (misinformation). Malicious writers will also deliberately manipulate content to create inaccurate output (disinformation).
Examples: Conspiracy theories regarding the COVID vaccine or manipulated political messages may be repeated by AI tools. Steps to take: check for evidence from verifiable sources and do not become overly reliant on AI for your research. Use the research resources available on Locate, the library catalogue.
Some of the content being used to train AI could include personal data. Some AI tools may record your personal data, or any data you enter into the tools, for future use. As with all technology, you should always check the terms of use and the privacy statement to understand how your data will be used.
Example: Some AI tools will retain and use your information.
AI tools aggregate information and artwork, often without the creators' knowledge or permission. It is also difficult to know exactly which work has been used to train AI. At present there is legal uncertainty at a national and international level about copyright and AI.
Example: Who owns the copyright when an image is generated in the style of a famous artist? The artist whose work the generative AI tool was trained on, the person writing the prompt, the AI developer, or the AI tool? Steps to take: check the terms and conditions of the generative AI tool to see who owns the copyright of its output. Consider using alternative means of originating images and texts, and developing your own skills.
Remember to evaluate your research findings before referencing content in your assignment.
Example: Generative AI content may not be the most appropriate source to use. Have you exhausted the sources available to you from your subject guide? Steps to take: always follow any link provided by generative AI to view the original source. Evaluate and reference this content, if appropriate.