Discussion Board

Saman Rasheed on 2025-09-20 at 21:43

Should there be strict limits on the use of AI-generated content with regard to the creation and distribution of work? I wonder whether there should be some restrictions on the political and social use of AI, especially when it comes to information. AI can create a lot of misinformation that may mislead people who are trying to learn about important historical facts. What would a balance between free speech and the control of AI look like? What roles can governments or developers play in ensuring that our information maintains its integrity?

Jake Wilson on 2025-09-22 at 09:55

Great points! I agree that limits on AI-generated content are important, especially to prevent misinformation about historical facts. Striking a balance between free speech and control is tricky. Governments could require clear labeling of AI content and set rules for political ads. Education also matters: people need the skills to spot misinformation. Transparency and accountability are key to maintaining trust in information while still allowing the positive uses of AI. What restrictions do you think would work best in practice?

Saman Rasheed on 2025-09-20 at 21:37

Is it possible for AI platforms to be unbiased at their core? Or is bias inevitable given the nature of their design and training? Since AI is a product of human data, which carries a lot of prejudice from historical and systemic inequities, can the bias truly be eliminated? Can developers do something different, or does their processing of the data largely determine the results AI can offer? I often evaluate my own use of AI and find myself pondering the nature of the results it gives me.

Jackie Zhang on 2025-09-23 at 07:26

I don’t think it’s possible for AI to be completely unbiased, since AIs learn from human data, which always contains some level of bias. Developers can try to reduce bias by carefully selecting and curating training data, and by testing AI outputs for fairness. Total elimination of bias is probably unrealistic, but being aware of these limitations and working to minimize bias can lead to more responsible AI use.
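To make "testing AI outputs for fairness" a bit more concrete, here's a minimal sketch of one common check, demographic parity, using invented example data (a real audit would pair actual model decisions with the demographic group of each input):

```python
# Minimal sketch of one fairness check: demographic parity difference.
# The data below is invented for illustration only.

def demographic_parity_gap(outcomes, groups):
    """Difference in positive-outcome rates between the groups."""
    rates = {}
    for g in set(groups):
        members = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    values = sorted(rates.values())
    return values[-1] - values[0]

# 1 = favourable model decision, 0 = unfavourable (hypothetical data)
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(outcomes, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.00 would mean equal rates
```

A check like this won't prove a system is fair, but a large gap is a useful red flag that the training data or model needs another look.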


Ocean Bautista on 2025-09-19 at 12:51

The reality of our situation is that AI is becoming a widespread resource for everyone, which raises the question of how we teach it. In fact, my old high school has already incorporated AI studies into its curriculum. I'm wondering at what stage of learning students should be taught the application and ethics of AI. Should it be introduced at an early age, or should it be gatekept and used only in tertiary education?

In my opinion, if we start too early, skills essential for human growth may be affected: those children may become over-reliant on AI and lose their critical analytical skills. However, if it is introduced too late, they may fall behind in terms of the current demands of the workforce.


These are my thoughts, but I'm curious about other aspects that I've missed.

Mike Badillo on 2025-09-24 at 19:22

You raise a great point.  Introducing AI too early might make kids overly reliant on tech and weaken their critical thinking, but waiting too long could leave them unprepared for the modern world. Maybe the answer is a gradual approach: start with digital literacy and basic AI concepts early on, then build up to ethical discussions and hands-on applications in high school. That way, students develop both technical and analytical skills. What do you think about a phased curriculum like this?

Emily Zhang on 2025-09-17 at 11:05

What are your strategies for teaching students to maintain digital wellness during online learning?

Alice Yu on 2025-09-17 at 14:30

It's not easy to do this, because most tasks are done online nowadays. But I also believe that my students are far more adapted to the Internet than I am, since they've been online almost since birth.

Jackie Zhang on 2025-09-18 at 08:45

Digital mindfulness is often overlooked in course design. We have schedules to follow, so the only thing I can do is remind them to look at the real world more in their free time.

Marcelle Lamarche on 2025-09-12 at 18:07

With AI-generated content mixed ever more indistinguishably into the media we consume, should guidelines on AI-generated imagery in education be tighter? I've seen professors and students alike using AI videos, AI platforms, and even AI memes in slides and lectures to explain concepts, and these materials tend to be oversimplified or plain wrong. I don't see any problem with human-generated content in most academic topics, and hence no reason to switch to AI-generated explanation videos.
Moreover, I fear that in the near future students won't take the steps to corroborate information from these videos, especially when professors are the ones providing them. Have you made, or have you noticed, this shift?

Saman Rasheed on 2025-09-15 at 22:50

I agree with you! I'm noticing this shift, and its growing popularity is indeed quite alarming. Most AI-generated videos I've seen contain misleading or confusing information, which often creates more stress than comprehension for viewers. It also takes away from the human effort that goes into traditional explanation videos, where experts or those interested in the field take the time to explain things in depth with the best available resources. Any thoughts?

Mike Badillo on 2025-09-16 at 14:29

I think part of the issue is that it saves time for professors. Maybe there should be some sort of disclaimer or review process before this kind of content is used in lectures. Otherwise, like you said, students might just accept it at face value because it comes from a trusted source.

Paul Cheung on 2025-09-19 at 10:00

Maybe what we need is more guidance or digital literacy training so students know how to spot AI content and double-check facts. I don’t think AI content is always bad, but it definitely needs to be used carefully, especially in education where accuracy is so important!

Alice Yu on 2025-09-10 at 16:40

Quick question: if teachers use AI-made stuff in class, do you think they should just tell students?

Emma Tsoi on 2025-09-10 at 18:12

I think you should, if only to remind yourself of it.

Jackie Zhang on 2025-09-11 at 09:00

Even if it's just used to polish your writing?

Rodrigo Alonso on 2025-09-12 at 17:24

They totally should, to ensure students are able to differentiate between human- and AI-generated media in the future.

Cao Xinrui on 2025-09-19 at 09:37

Definitely yes. Because the information generated by AI may sometimes be biased, you should let your students know that the material comes from AI, and before class you should check whether there is any problem with the document.

Joe Leung on 2025-09-03 at 10:15

Should we limit students' use to particular AI models, like only ChatGPT or DeepSeek, to prevent unforeseeable bias across different models?

Sue Xu on 2025-09-03 at 13:22

Never really thought about this! Thanks for the perspective.

Joanna Liu on 2025-09-04 at 09:05

I agree that bias is a concern, but encouraging students to compare results from different models could help them identify inconsistencies and develop a more balanced understanding, rather than relying on just one tool.
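A side-by-side comparison like that is easy to script. Here's a minimal sketch assuming the `openai` Python client; the DeepSeek base URL and the model names are assumptions rather than verified values:

```python
# Minimal sketch: ask two models the same question and print the answers
# side by side so students can spot inconsistencies themselves.
# Assumes the `openai` Python client; the DeepSeek endpoint and model
# names below are assumptions, not verified values.
from openai import OpenAI

QUESTION = "Summarize the main causes of the 2008 financial crisis."

clients = {
    "gpt-4o-mini": OpenAI(),  # reads OPENAI_API_KEY from the environment
    "deepseek-chat": OpenAI(
        base_url="https://api.deepseek.com",  # assumed OpenAI-compatible endpoint
        api_key="YOUR_DEEPSEEK_KEY",          # placeholder
    ),
}

for model, client in clients.items():
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": QUESTION}],
    )
    print(f"--- {model} ---")
    print(reply.choices[0].message.content, "\n")
```

Even a simple exercise like this makes disagreements between models visible, which is exactly what prompts students to go check a third source.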

Chandani Ghising on 2025-08-28 at 04:20

The social media apps we use are designed to maximize user engagement, and in order to do this they often exploit human psychology. While, yes, this keeps us entertained and connected, it can also lead to digital addiction and decreased productivity.

So, what do you think? Do social media companies have an ethical responsibility towards users to limit these addictive designs, or is it completely up to the user to manage their own usage?

Mike Badillo on 2025-09-01 at 10:42

Interesting question! Personally, I think it's a balance. On one hand, users should be mindful and proactive about managing their time online; on the other, companies should make their platforms less addictive, or at least offer features to help with self-control. What do you think?

Kevin Cheng on 2025-09-02 at 14:21

I think social media companies definitely have an ethical responsibility here. At the very least, they should offer tools to help, like time limits or reminders. That said, I agree that users also need to take responsibility for their own habits and set boundaries. Do you feel you're in control of your social media use?

Alice Yu on 2025-08-25 at 19:12

What's the line between self-censorship and being considerate of people's feelings?

Jackie Zhang on 2025-08-25 at 21:03

I think the key is intention and context. Being considerate means choosing words carefully to avoid harm while still expressing your honest opinion. Self-censorship becomes problematic when fear or external pressure stops you from engaging in meaningful dialogue or sharing important perspectives. All my conversations with my students are based on creating a safe space for respectful communications.

Alice Yu on 2025-08-25 at 22:06

Thanks! You're so right. I've been very conscious of everything my students and I say. Sometimes I'm afraid of offending someone online simply by discussing something, even a topic that may not be sensitive at all.

Rodrigo Alonso on 2025-09-12 at 17:30

I believe the line lies in respect towards yourself and others; there are always ways to convey your ideas appropriately. Another issue is whether the people listening to your ideas are open to different points of view.

Joanna Liu on 2025-08-20 at 19:20

How do you deal with bias in the sources ChatGPT provides?

Sue Xu on 2025-08-21 at 15:47

I also wonder if anyone has a solution to this. Sometimes I see different students cite very similar sources, and I reckon it's probably because of ChatGPT…

Mike Badillo on 2025-08-23 at 18:56

Same :<

Chandani Ghising on 2025-08-28 at 03:59

Oh yes, I also face these problems in certain instances, like when I have the urge to understand hypothetical or controversial topics.

It might not be perfect, but in order to tackle this, I try to apply a few things (a rough sketch of how steps ii to iv could be scripted follows the list):

i. First, I acknowledge the fact that ChatGPT doesn't have opinions of its own; it's just mirroring patterns in the data it was trained on.

ii. I ask for multiple perspectives, with a prompt that leads me to a neutral viewpoint, so that I can understand the topic without prejudice.

iii. After getting the information, I try to cross-check it against trusted sources.

iv. If I feel the answer has swayed towards one side, I start questioning why and prompt further to get a more balanced understanding.
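Here's that sketch of steps ii to iv, assuming the `openai` Python client; the model name and the prompts are just illustrations, not recommendations:

```python
# Rough sketch of steps ii-iv using the `openai` Python client.
# The model name and prompts below are illustrations only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

topic = "universal basic income"  # any controversial topic

# Step ii: request opposing viewpoints instead of a single verdict.
perspectives = ask(
    f"Give the strongest arguments both for and against {topic}, "
    "presented neutrally, without a final verdict."
)

# Steps iii-iv: surface the claims that should be cross-checked against
# trusted sources, and prompt further where the answer seems one-sided.
to_verify = ask(
    "List the factual claims in the following text that a reader "
    f"should verify against trusted sources:\n\n{perspectives}"
)

print(perspectives)
print(to_verify)
```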
