WITH THE 2025 Midterm Elections fast approaching, concerns are mounting over key political players' heavy use of digital technology to distort narratives and shift discourse for their own gain.
Most notably, generative artificial intelligence (AI) tools, such as ChatGPT, have emerged as powerful amplifiers of disinformation on social media platforms, raising legal and ethical questions about their use for political purposes.
AI-powered smear campaigns
Disinformation is defined as “deliberately spreading false or inaccurate information.” In electoral contexts, it manifests through fabricated narratives that target opposing candidates with the aim of advancing certain candidates or agendas.
According to Information Systems and Computer Science Department Professor John Paul Vergara, PhD, many social media platforms are designed to maximize user engagement, allowing disinformation to proliferate. He highlighted the direct link between controversy and engagement, explaining how disinformation spreads quickly on social media.
“Because we react to falsehoods and we react to biases, disinformation—as a form of controversy—gets users to engage, contradict, and [take a] side, which maximizes engagement as a result,” Vergara explained.
He also pointed out that AI has worsened existing concerns surrounding disinformation by enabling increasingly sophisticated tactics. One of the primary strategies is the use of deepfakes—media manipulated through deep learning, a form of AI, to generate convincing fake videos of individuals doing or saying things they never did.
“Before, you could say something like, ‘I’d quote a person even though he didn’t say it.’ That’s already controversial in itself, and that’s just a quote, and sometimes people will just believe it because it looks like a quote. Now, you could make it such that it looks like they’re actually saying it,” Vergara said.
A recent case involved Alliance of Concerned Teachers (ACT-Teachers) party-list representative and senatorial aspirant France Castro and her Makabayan coalition, who were targeted by deepfakes falsely connecting them to communist insurgents.
In response to such cases of AI misuse, the Commission on Elections implemented Resolution No. 11064, which seeks to regulate social media activity and the use of AI in the lead-up to the 2025 midterm polls, following earlier concerns about the technology's potential to spread disinformation.
As AI’s increased accessibility and expanded capabilities continue to worsen disinformation concerns, it can be expected to transform and reshape the real-world political environment, starting with the upcoming local and national midterm elections.
Controlled narratives
Even before generative AI gained prominence, disinformation was already influential in shaping previous Philippine elections. Notably, in the 2022 presidential elections, thousands of individual political influencers used social media to propagate misleading narratives and shape public perception of the candidates and their platforms.
Electoral disinformation in the Philippines often stems from grassroots activity, particularly through the followers of such individual social media personalities, who may be tolerated or even tacitly backed by the candidates they support.
Further compounding the issue is the country’s low media and information literacy. A March 2025 Social Weather Stations survey showed that 65% of Filipinos struggle to detect falsehoods in both traditional and social media—a significant increase from 51% prior to the 2022 elections.
When asked about AI’s role in this trend, Political Science Department Instructor Gino Antonio Trinidad, MA, explained that AI further polarizes Philippine politics by enhancing both the speed and quality of disinformation, as well as its use of alternative forms of content, such as short-form videos.
According to Trinidad, AI and disinformation work together toward “galvanizing [and] cementing reasons to vote for and reasons not to vote for a particular candidate.” He elaborated that disinformation usually relies on eliciting emotional responses and tapping into confirmation bias—which is an individual’s tendency to believe in information confirming their preconceived notions—to persuade people.
“Confirmation bias, fundamentally, has an emotive component. […] [Thus, disinformation content is] trying to intensify particular emotions that are visceral, rather than a rational way of dealing with things,” he stated.
As such, Trinidad argued that the use of AI and disinformation helps augment a political candidate’s “image-making” capabilities, allowing them to shape how they are perceived by specific audiences and ultimately increasing their chances of a successful campaign.
Governing AI
While AI continues to be integrated into existing systems, there are viable approaches to directly tackle the effects of its usage in electoral campaigns.
Many jurisdictions across the world have already implemented concrete regulations and policies specifically aimed at preventing the abuses tied to AI usage, especially in sociopolitical contexts.
For instance, the European Union (EU) enforces comprehensive AI regulations across all its member states. AI applications classified as high-risk, such as those in law enforcement and employment, are subject to strict requirements.
Meanwhile, applications deemed to pose unacceptable risks to security, such as those involving behavioral manipulation or certain uses of biometric identification, are banned outright.
Aside from adopting existing policies, Vergara proposed other possible solutions, such as user verification—a system similar to how student IDs, bank accounts, or even GCash or PayMaya accounts are verified.
“Platforms can only be more responsible if there are actual people in there. You can’t make the platforms responsible if they allow non-people or fake people to be there,” Vergara argued.
Though such changes seem to address the novelty of the disinformation problem, the issue itself is nothing new to the Philippine political landscape. Previous elections have revealed a consistent pattern of propaganda relying on emotional manipulation and falsehoods to sway public opinion, with emerging technologies threatening to worsen the problem.
Ultimately, comprehensive regulatory models are crucial to maintaining clean and fair elections. While the political use of AI may not be completely eliminated, proper restrictions on its application are necessary to prevent its unchecked influence from disrupting a major democratic process.