AI belongs in a conversation about Academic Honesty and Plagiarism because of its ability to generate sophisticated responses to prompts, such as essays and summaries. The general expectation for schoolwork is that students turn in work that is their own, including their own thinking, reasoning, research, and writing, unless the instructor communicates other expectations.
Turning in work created wholly or partially by AI as your own is therefore academic dishonesty and plagiarism.
When you do use ideas and/or content (language, images, code, etc.) created by an AI tool, we recommend that you be transparent about it.
An additional point to consider: one of the reasons we cite sources is to provide a clear path back to the information we use, so others can check our evidence and test its validity. Content generated by AI tools is not stable. While you may be able to provide a link back to your specific session, that content was generated uniquely in response to your query; it reflects the information available to the AI at that moment and your specific prompt. The same search on another day, or run by another person, will produce different results. And none of it is edited or fact-checked.
For any academic work, be sure to check with your instructor for information on what uses of AI are allowed in their class.
A generative AI tool like ChatGPT can be useful in many ways, as long as your instructor approves. (It is very important to check with them FIRST.)
For example, it can help you do the following (and you can cut and paste these example prompts directly into ChatGPT if you wish):
When you include sources in a researched essay, presentation, or other original work, you try to use the most reliable evidence you can find. That need for reliability points to one of the most significant drawbacks of AI-generated content.
Generative AI tools are fed huge amounts of information, generally from freely available, open sources on the Web, though some of that content may be proprietary. Freely available generative AI tools are not set up to dig within locked databases. AI tools "want" to provide a response when prompted, but those responses are limited by the information the AI has to work with and by the prompts it is given. AI responses may completely miss the point, provide poor information, or even manufacture, or "hallucinate," fake studies when asked to provide evidence for claims.
Generative AI tools have also been used to create disinformation, or misinformation designed with malicious intent. The fact that it can be very difficult to tell when information is disinformation is a significant problem for us all. Examples of AI-generated disinformation discussed in the press include photographs, videos, news stories, and other content fabricated for political purposes, which gain traction by being spread through social media. The link below from the technology news publication CNET is just one news story on the issue.
While you should always cite AI tools to acknowledge any information gathered or created with their aid, the creators of many AI tools are themselves not necessarily using information ethically. That is because:
News publications, such as The New York Times, are considering lawsuits against AI companies for the practice of "scraping" their content from the Web, while copyright infringement lawsuits from well-known artists and authors are already in the courts.