Discussion Board
I think a real dilemma for students using AI tools is that they often don't really understand to what extent they can rely on AI to do the work. Some time ago I came across a post online in which a student had been accused of academic misconduct but did not think he had done anything wrong. In that post, he said he was in the habit of prompting AI to generate a coherent text from a set of ideas he had come up with himself, which he then paraphrased slightly and submitted as his assignment. The problem is that he believes that as long as the main ideas (usually just a few keywords) are his own, he is not really plagiarising and is only using AI as a tool to refine his language. That is obviously not the case: even though the initial ideas are his, the entire logical chain and the subtle details of the text are AI-generated, and he is taking them all as his original work, which leads to a classic ethical problem of unacknowledged and improper AI use. If he really wishes to use AI to improve his writing, the right sequence would be to first produce an original text with his own ideas, then ask AI for suggestions for improvement, and then revise the text himself in light of that advice. This case shows how much it matters for students to have a clear idea of what it means to use AI ethically and effectively; much of this kind of misconduct could be avoided if students were given proper training on where the boundary between proper and improper AI use lies.
Case chosen: A student used AI to rewrite an essay but did not acknowledge it.
Reflection:
This case involves the dishonest use of AI tools and constitutes academic plagiarism. Academic integrity emphasises authenticity in academic work; by concealing the use of digital tools, the student makes it impossible to evaluate their achievement accurately.
Actions should be taken to address the issue. An immediate conversation with the student should be arranged to clarify the nature of their behaviour. The student should have the chance to explain their reasons, and the teacher should explain the harms and consequences of their actions. Appropriate adjustments to the assessment result should be made after the conversation.
To prevent similar behaviour in the future, explicit rules for AI use in academic tasks should be established in the class. Assessment methods could also be adapted to the digital era, e.g. by focusing on more process-oriented and context-dependent assignments.
In early 2023, while working as a TA for an undergraduate course, I noticed that a student's final assignment showed signs of possible plagiarism. The content of his essay was inconsistent and lacked a personal perspective, which is unusual in humanities subjects. I ran a similarity check on Daya, and the result showed a 49% repetition rate. However, most of the repeated parts were proper citations. The tricky part was that he had blended several sources together: the highest similarity rate with any single source was 6%, and there were about six to eight sources that reached this same percentage. As a result, I had no solid "evidence" to prove plagiarism; maybe I was wrong and it was just a poorly written paper. I reported it to the professor. What I still do not fully understand is the relationship between extensive "proper citations" and academic plagiarism.
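As an aside (purely illustrative, not taken from the actual similarity report), here is a minimal sketch of why the overall rate can be high while every single source stays low: if the matched passages from different sources are mostly disjoint, the overall repetition rate is roughly the sum of the per-source figures, so six to eight sources at about 6% each already approach the 49% total. The numbers below are hypothetical placeholders.

```python
# Hypothetical per-source overlap fractions (illustrative placeholders, not real report data).
per_source_overlap = [0.06] * 8  # eight sources, each matching ~6% of the essay

# If the matched passages from different sources are mostly disjoint,
# the overall repetition rate is approximately the sum of the per-source figures.
overall = sum(per_source_overlap)

print(f"Highest single-source similarity: {max(per_source_overlap):.0%}")  # 6%
print(f"Approximate overall repetition rate: {overall:.0%}")               # 48%
```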
We had a 3D digital sculpture course before. Since we were required to upload our digital models after class as an archive, and the teacher could access all of them, this created a potential issue. Someone found that the teacher had directly used part of a student's model, without asking, as part of his own exhibition piece.
So in this case, the main ethical issue is "misuse of intellectual property."
The teacher used a student’s digital model without permission, violating the student’s rights and trust. This breaches academic integrity and professional ethics, as it involves taking credit for someone else’s creative work.
Appropriate actions include the teacher acknowledging the mistake, apologizing to the student, immediately withdrawing the piece from display, and removing or properly crediting the student’s work in the exhibition. To prevent similar issues, institutions can implement policies that protect students’ intellectual property, restrict unauthorized access, and educate both teachers and students about ethical practices in digital art.
Case: A student used translation tools, such as Google Translate or DeepL, to write an essay but didn’t acknowledge it.
What ethical or integrity issue is involved?
I think fairness and honesty issues are involved in this case, because the essay is not entirely the result of her own effort. But it sometimes depends on the situation: everyone may need some tools and may regard using them as conventional practice. So which tools do not require acknowledgement?
What actions would be appropriate?
- Maybe just acknowledge every tool we use?
- Maybe check the syllabus/assignment brief. If unclear, email the instructor with a concise description of what tool you used and how.
Generative AI is trained on large amounts of data collected from the internet, from databases, and even from AI-generated content itself; such data may be reliable or entirely meaningless.
When applied to academic purposes, this background creates a double-edged sword: on the one hand, such data can guide thinking; on the other hand, ethical and quality concerns arise, which manifest in two ways:
1. How reliable is such data, given concerns about model collapse when AI-generated content feeds back into training?
2. Using data from diverse sources without proper citation raises significant copyright concerns.
During my postgraduate studies I majored in translation, and now many translation tools and AI systems are good at translation, sometimes performing better than human translators. Many students use ChatGPT, DeepSeek, and other AI tools to finish their daily translation assignments and even their graduation translation projects. I think this situation violates academic integrity and sometimes the copyright of the original texts. In the future, students should be taught to use AI as a tool to polish their translations rather than rely on it entirely, and review procedures for AI usage should be adopted in translation projects.
Consider a case where a student used an AI tool to generate an essay before the deadline, taking perhaps only several minutes to finish the assessment. The university did not accuse the student of plagiarism because its regulations regarding the use of AI were unclear and confusing. On the other hand, another student wanted to avoid breaching academic integrity related to AI use, so he attempted to finish the assessment by himself but missed the deadline. In the end, the student who did not use an AI tool failed the assessment, while the one who used an AI tool to generate answers within several minutes passed. This is in fact a real-life case.
The dilemma is that the student who actually put in effort got an F, while the one who finished the assessment simply by typing the questions into an AI chat box passed. The one who failed learnt more from the assessment, while the one who passed learnt little. Hence, on the one hand the outcome is not fair; on the other hand, it does not reflect the students' true learning outcomes.
I am used to using AI tools in my daily assignments, as they help me brainstorm ideas and organize them in different ways. I find this process inspiring, as some advanced AI tools work like personal assistants as well as instructors, helping me explore and analyze projects. One thing is that if I have the original idea and only need to expand and polish it a little to finish the project, the handiest and quickest way is to chat with ChatGPT. If I seek help from AI tools and take some of their suggestions for polishing my sentences and structuring my ideas, should I cite ChatGPT in the text? But the content is almost entirely from me... Sometimes I feel a little confused, as digital ethics seems abstract and the "degree" of acceptable AI engagement is often unclear.
If a student used AI to rewrite an essay but didn't acknowledge it,
I think there are some questions:
What ethical or integrity issue is involved?
- Academic misconduct.
- It violates the principles and rules regarding AI use.
- It means the student did not submit work done by themselves.
How to avoid it:
- Encourage students to do the work themselves, and tell them that if the first attempt isn't good enough, they can redo it.



I also want to know about this case.