AI tools can help us enhance learning efficiency, but we should also be mindful of relevant ethical guidelines. The core of AI academic ethics lies in the responsible use of AI tools to ensure the integrity and quality of academic research.
Generative AI Usage Reminders
I. Identify and Mitigate Potential AI Risks
AI-generated content may be inaccurate or biased, and its uncritical use can lead to academic misconduct such as plagiarism, fabrication, or falsification. Entering personal data or unpublished research data into AI tools also poses privacy and security risks.
II. Establish Responsible and Transparent Usage Practices
AI cannot be listed as a paper author, because it cannot be held accountable for the research content; ultimate responsibility rests with the researcher. If AI tools are used during research, their use must be disclosed honestly in the paper, a requirement now explicitly stated by many international journals.
III. Enhance AI Ethical Literacy
Users should not rely on AI at the expense of independent thinking; they must develop the ability to critically evaluate and verify AI-generated content. The academic community should also continue to develop guidelines and frameworks to address the challenges posed by emerging technologies.
Image source: Center for Taiwan Academic Research Ethics Education (2023). Six key considerations when employing generative AI for academic and research activities.
※ For regulations regarding the use of generative AI tools at KMU, please refer to the announcements from the Office of Academic Affairs. >> Guidelines for the Use of Generative AI Tools (Student Edition)



