Discussion Board

Jiayuan YU on 2025-10-27 at 11:47

1. Case: a dilemma over whether to use AI tools such as Grammarly to correct the grammar of an essay, as the result may be detected as AI-generated.
2. Reflect: This reflects the problem of how to define originality. To address it, we could ask students to save the original draft of the essay and mark the adjustments made by AI. In the future, we should set clearer guidelines on using AI; since the technology develops rapidly, these guidelines should be updated in a timely manner.

Ruimengyuan Z on 2025-10-27 at 11:46

During my academic life, I witnessed something really special. I still remember it was a course about music, and all the students were actually not from music-related majors—we were only taking it because the university required us to. Throughout the course, we had multiple quizzes, most of which were closed-book. Every time we had a quiz, several students, who were called "specialists," would search for all the answers on Google and share them with everyone in the course via WhatsApp. I know that sounds crazy and is a complete violation of academic integrity, but it's also important to understand why this happened and why almost all the students took part in this "cheating" process. Maybe it was because they knew nothing about music, and maybe this kind of course shouldn’t have been designed with so many closed-book quizzes.

Ho Wai Lee on 2025-10-27 at 11:45

I have seen people using AI to generate references for their articles, but most of the references are false or do not exist. It is strange to say the AI commits fabrication, since it is not an autonomous agent, at least not yet. However, as more people use AI, this issue might get amplified. Yet here is the conundrum: since it is not the AI that intentionally commits this (I would argue AI is incapable of that intention), and the AI only grabs information online or from its database, it is the humans who are responsible for the fabrication in the first place. However, I think it would be extremely difficult to regulate, since some cases might be intentional and some might just be confusion; it would be hard to write laws that account for these nuanced motives, and also hard to trace, as the internet can be anonymous.

Lorenzo Maria RATTO VAQUER on 2025-10-27 at 11:44

Fairness is not about giving everyone the same conditions and tools. It is about creating the conditions under which everyone has the same opportunities, given their different circumstances. AI use is not the same for everyone, and AI ethics should account for this. When a teacher establishes how AI can and should be used, both in class and for assignments, he or she should consider students' different conditions. For example, a student whose mother tongue is not the language of the assignment may find it helpful to use AI to double-check a translation. Even though the student should obviously remain the main author of the translation, it may still be fair to allow them to use AI to check it. Some native speakers may consider this unfair, since they were not allowed to double-check their own possible grammar mistakes in the assignment. In my opinion, it is the teacher's duty to explain to the class why fairness is not absolute, and why it is fair for different people to use tools differently.

LUO Yueming on 2025-10-27 at 11:43

I have come across tips shared on social media about how to lower the AI rate. For example, one can first generate a paragraph entirely with AI, then translate it into a random language other than English, have AI paraphrase it again, and finally translate it back into English. In this way, the AI rate can come out very low, even though the paragraph is entirely AI-generated. Other methods include using AI tools specifically designed to lower the AI rate or "humanize" the language. Meanwhile, sometimes when we put our own writing into an AI checker, the AI rate comes back quite high, while texts that really were generated by AI can pass the check.
 

I feel this is really a dilemma. On the one hand, we may need AI-checking tools to encourage unique and original human creation, but on the other hand, there are always ways to "trick" such tools, while original human work may be labeled with a high AI rate. I am not sure what we can do about this situation; it seems ethical AI use really depends on the agency and ethics of individuals, as there are always ways to cheat and plagiarize: as the saying goes, "as virtue rises one foot, vice rises ten" (道高一尺，魔高一丈). Also, I feel this issue has always existed and is not unique to AI; for example, when simple translation tools came out, there were similar dilemmas. So maybe it is just a natural process and consequence of humans evolving together with technology.

Kunxiao Du on 2025-10-27 at 11:41

1. Case example: Classroom privacy and consent
An instructor records a Zoom class, including breakout rooms, names, faces, and chat logs where students share personal experiences. To “help future cohorts,” the instructor uploads the full recording to an unlisted YouTube link and posts it on the course site. A student later discovers the video appears in search results and includes their identifiable comments.

 

2. Reflect:

  • Issue: Privacy breach and lack of informed consent—posting a full class recording (faces, names, chat) publicly can expose students and chill participation.

  • Actions: Ask for immediate removal; restrict any recording with access controls; redact names/chat; notify students and offer opt-out; escalate to the program privacy officer if unresolved.

  • Prevention: Use opt-in consent; default to privacy-by-design (no chat capture, hide names, limited retention); state policy in the syllabus; offer non-recorded participation options; provide faculty training.

Kunxiao Du on 2025-10-27 at 11:45

From ChatGPT.

Jade Jianing Zhang on 2025-10-27 at 11:41

Case: In September 2023, a paper published in Physica Scripta contained the suspicious ChatGPT phrase “Regenerate Response.” The paper was ultimately identified as AI-generated using ChatGPT and was retracted, becoming the first SCI paper withdrawn for suspected AI ghostwriting.

I have some questions about this case. The phrase "Regenerate Response" certainly serves as proof that an AI tool was used in the paper. However, in scientific papers it is the data and conclusions that matter, not the prose itself. Some scientists are not very good at writing up their papers and might want to save time by using AI tools for the actual writing. So my question is: why would this constitute a major integrity issue if the researchers only used ChatGPT for the writing?

Zhe WU on 2025-10-27 at 11:40

Once in one of my undergraduate courses, the professor shared a video made by a student who had taken her course previously. The video was shot by the student and owned by the student; in fact, the student had even used it for a competition and other purposes. The professor had the video only because the student once sent it to her for advice. However, when I told the student that his video was displayed in that class, he was very shocked, because he had never allowed the professor to use it publicly. This is an issue related to intellectual property. I think if the professor wants to share that video, she should ask the student for permission. If it is an assignment for the course, it would be effective to have a consent form or "contract" signed through the assignment submission platform.

Emma Tsoi on 2025-10-08 at 11:05

I can’t imagine a life without AI now. I feel like I cannot complete most of my work without it. Is this reliance unhealthy? Is it even ethical to rely on a second pair of hands at work?

Jackie Zhang on 2025-10-14 at 10:19

You are fine, but I do understand the anxiety sometimes. Tools are for humans to use. It doesn’t always have to be you handcrafting everything from scratch. I hope you’re fine and have a nice day!

Sue Xu on 2025-10-15 at 12:44

I can relate to that…

Jackie Zhang on 2025-10-27 at 14:38

Usually I hatch out my plan by myself instead of asking AI to brainstorm for me. AI should follow our instructions, not the other way around.

Sue Xu on 2025-10-28 at 09:23

I’m from the field of the arts. I think the human touch is essential; even if we make some mistakes, that’s what’s beautiful.

Jackie Zhang on 2025-10-06 at 12:46

With AI increasingly generating creative content, how do we ethically handle authorship and credit in academic work?

Joanna Liu on 2025-10-08 at 11:04

I think students and teachers should clearly state when AI tools contributed, but also highlight their own input.

Sue Xu on 2025-10-17 at 14:30

I agree and I think schools and governments need to update plagiarism policies to include AI-generated content, so everyone understands the boundaries.

Joanna Liu on 2025-10-27 at 14:37

Yeeeees.~
