AS GENERATIVE Artificial Intelligence (GenAI) becomes increasingly integrated into education, academic institutions face challenges in regulating students’ dependence on it for required outputs. Consequently, concerns related to authorship, critical thinking, and academic integrity continue to rise.
Acknowledging the prevalence of GenAI, the Ateneo released a policy concerning its use in teaching, learning, research, and creative work last January. Four months later, the Office of the Vice President for Higher Education released a revised version of the policy, with updated Security Guidelines to reinforce the University’s core values of integrity and transparency.
However, as controversies surrounding AI technologies persist, ongoing efforts stir discussions on their responsible and ethical use in education.
GenAI in the classroom
In classrooms, instructors are mandated to include the Ateneo’s GenAI policy in their syllabi, detailing the restrictions for its use in their class. However, Ateneo Science & Art of Learning & Teaching (SALT) Institute Director Galvin Radley Ngo shares that these policies are general, and that GenAI use still varies across academic departments.
To regulate AI use, some instructors including Ngo have redesigned their teaching practices and assessments to be more process-based and to ensure that student outputs reflect Course Learning Outcomes (CLOs).
For instance, Philosophy Instructor Federico Jose Lagdameo, PhD focuses his class discussions on students’ critical thinking, now prioritizing recitations over written outputs. Similarly, Ngo highlights the importance of using his CLOs as a guide. For example, he considers image generators unlikely to hinder student learning, as visualizing is not an intended outcome in his classes.
From a student’s perspective, School of Social Sciences students Marty Apuhin (3 BS PSY) and Ara De Silva (3 BS PSY) share that they encounter GenAI when using search engines and exploring personal study techniques.
Likewise, School of Management student David Lance Paez (1 BS ME) uses AI for synthesis, ideation, and learning. In his experience, these tools make work efficient when used responsibly.
For Ngo, however, efficiency should not always be the priority. He further shares that using GenAI to summarize readings might result in the loss of reading ability, critical thinking, and other intellectual skills.
Building on this, GenAI’s implications for students’ ways of learning, thinking, and creating remain a pressing concern. According to Lagdameo, overreliance on GenAI debilitates a person’s ability to perform these cognitive functions.
In the context of higher education, Lagdameo believes that the general usage of GenAI for academic purposes makes it difficult to determine which insights are the student’s own. With this, he emphasizes the role of instructors in guiding learners to engage with such technologies meaningfully.
When bots do the learning
In an academic setting, issues with GenAI use can be tied to the way Large Language Models (LLMs) like ChatGPT work—they are trained on mountains of data to predict the most likely words in response to a user’s questions and input. Hence, though they can churn out coherent results, they lack the consciousness and capacity for deep, subjective, and critical thought.
This raises questions regarding GenAI’s place in education. De Silva believes that using GenAI could undermine the processes of learning, analyzing, and humanizing, which she believes are the pillars of Humanities and Social Sciences (HUMSS) subjects.
Following this, Apuhin posits that persistent GenAI use could undermine the value of one’s learning process overall. “[AI] takes away your process of connecting learning from before [to new material] and becoming critical of that [knowledge],” he explains. For instance, he observes that students sometimes rely on ChatGPT to “read” books, when instead they could have read them themselves.
Meanwhile, Lagdameo points out that GenAI can help students process large volumes of information, give feedback, and cater to personal needs. However, he acknowledges that it still introduces risks such as diminished creativity and overreliance, as well as ethical concerns such as plagiarism and data bias.
The sheer efficiency that GenAI offers has also led some people to believe it can replace the reasoning and analysis needed in HUMSS courses. Paez observes that people who consider those subjects “menial” would use GenAI to complete requirements.
Lagdameo also recognizes the relevance of technologization in the context of GenAI. He stresses that GenAI’s increasing ability to replicate human intellect underscores the need for stronger humanities education.
Apuhin shares the sentiment, saying AI has only emphasized the need for HUMSS courses. “In the Ateneo, we’re all about community service, and the preferential option for the poor. What can AI do when it comes to building services for people in need?” he explains, in a mix of English and Filipino.
As GenAI continues to evolve, Ngo encourages discernment in its use, emphasizing that students must carefully consider resisting it for tasks they can do or learn themselves. Conversations on GenAI thus continue to evolve as institutions like the Ateneo learn to adapt to its impact.
A future with AI
Since GenAI’s rise to prominence, it has been associated with academic dishonesty, shaping how students and teachers discuss its use. Ngo, in particular, observes that many link GenAI to cheating, which may hinder important conversations about navigating its use in the classroom.
With this, Ngo suggests that transparency in using GenAI is essential to examine how it can be utilized ethically and productively. Doing so, he asserts, can encourage teachers to be firm with boundaries on how AI is used in a class setting.
For Lagdameo, the Ateneo’s policy already emphasizes this transparency, as he believes it clarifies the extent to which students can use GenAI. However, Paez points out that the policy may fall short in addressing AI-checker false positives and in setting clear boundaries for ‘proper usage,’ which varies per course and teacher.
Still, in navigating these policies, Lagdameo encourages adaptation instead of rejection. “In education, we have to rethink and rework our approach [to GenAI] because AI isn’t going anywhere, and we have to learn to coexist with it,” he emphasizes.
In this light, the Ateneo faculty has been making strides to adapt to GenAI’s growing influence. According to Ngo, a course is set to be offered to Ateneo faculty this academic year to help them better understand the implications and limitations of AI.
Alongside these institutional efforts, Lagdameo calls for a “shift” in priorities for higher education. Reworking higher education towards creativity and problem solving, he contends, necessitates treating GenAI as a tool that can be used to support these aims.
Ngo also proposes a nuanced take: AI can be a collaborator. “Use AI to enhance your thinking, not replace your thinking,” Ngo expresses, encouraging both teachers and learners to approach GenAI with measured enthusiasm.
As technology continues to advance, GenAI will remain at the forefront of conversations in education—whether as a catalyst for learning or a replacement for it. AI must then be used in a manner that preserves humanity’s ability to learn, to think critically, and to reason for itself.