Those who watched the video may be forgiven for thinking ghosts are real. Before their eyes was the familiar face of the late former Indonesian dictator Suharto, seated at a desk against a yellow background, wearing a traditional black kopiah hat and a batik shirt.
Flanked by the flags of Indonesia and his party, Golkar, Suharto urged his listeners to vote for Golkar representatives in the upcoming elections.
There was just one problem: the video came out in January 2024, a month before a presidential election whose frontrunner was Prabowo Subianto, a former general and Suharto’s former son-in-law. Suharto himself had died 16 years earlier, in January 2008.
Racking up over 4.7 million views as of December 22, the video was a “deepfake,” in which technology was used to mimic the appearance and voice of the late dictator. It was shared by Erwin Aksa, the deputy general chair of Golkar, on X (formerly Twitter).
Aksa clarified in the video caption that it was made using artificial intelligence technology, or AI.
After exhorting viewers to elect the “right representatives,” Aksa said: “This video was made using AI technology to remind us how important our votes are in general elections which will determine the future so that the hopes of the Indonesian people are realised and prosperous.”
In neighboring Malaysia, fast and affordable Internet connections allow citizens to consume online content at greater rates than ever before, including short-form videos on platforms like TikTok.
This has helped shape elections as well. The rise of the Malaysian Islamist Party (PAS), which won the most seats (49 out of 222) in parliament in the 2022 general election, could be largely attributed to its mastery of TikTok, according to experts.
2024 has been a remarkable year for elections, and particularly for elections in Asia.
We’ve seen voters in Taiwan, Bangladesh, South Korea, Iran, Japan, India, Sri Lanka and Pakistan head to the polls, in many cases resulting in changes of government, or at least significant losses for the incumbents.
When that happens, long-running policies or plans for the future may be shelved, as the country either adjusts to a new government with different priorities, or the incumbents revise their policies to appeal to more voters before they are thrown out of power. With such high stakes, it is important that elections are kept free and fair, and that voters have all the necessary information before they make up their minds.
But if AI technology can be misused to generate misinformation and manipulate voters, could elections be tipped one way or the other by nefarious actors, some of whom may not even reside in the country?
We shine a spotlight on how AI, social media and electoral politics have already become enmeshed and entrenched in two recent elections in Malaysia and Indonesia.
Indonesia is a massive country in Southeast Asia, with a population of over 278 million, and it also has a high level of Internet penetration. Indonesian news site Antara News, citing the Indonesian Internet Service Providers Association in January 2024, reported that Internet penetration had reached 79.5%.
The figure is even higher among the younger generations (over 87% for Generation Z, aged 19 to 27), and in July 2024 Indonesia surpassed the United States to record the most TikTok users in the world. It seems likely that Internet videos will remain an indelible element of future elections.
Indonesia eventually elected Prabowo Subianto, then defense minister and chair of Gerindra, as its next president. But Golkar improved its own performance, gaining 15.3% of the vote compared with 12.3% in 2019 and remaining the second-largest party in the legislature.
Perhaps the AI video helped them on their way.
In Malaysia, the use of the Internet has become far more widespread.
“The environment has changed completely,” said James Chin, Professor of Asian Studies at the University of Tasmania, about modern social media.
Thanks to upgraded Internet access and cheaper telecommunications technology, more people in Malaysia have access to social media than ever before.
“For example, you can get unlimited broadband for your mobile phone in Malaysia, the cost is about 25 ringgit (US$6) a month,” he said. And what do people use their high-speed Internet connections for? Getting online and sharing content they may have found elsewhere, without being too concerned about its authenticity.
The Republic of Indonesia is somewhat overlooked when democratic elections are discussed, but it happens to be the world’s third-largest democracy, after India and the United States of America.
With a GDP of US$1.37 trillion (2023), and a formidable military, Indonesia is one of the most significant countries in the Southeast Asia region. It is also a major diplomatic power in the Association of Southeast Asian Nations (ASEAN) and arguably in the greater Asia Pacific region.
Given Indonesia’s prominent position, its elections have a significant impact on the region and the wider world. Hundreds of people run for elected office to help shape the country’s destiny.
One such person is Anindya Shabrina, 29, a legal affairs specialist who joined Indonesia’s Labour Party and ran as a candidate in the February 2024 elections.
Describing herself as politically active since she was a student, Anindya decided to join the Labour Party, attracted to its left-leaning stance and open approach to recruitment.
“Traditionally, young people who can compete in electoral politics here are mostly from political or wealthy families, but in the Labor Party, anyone can run.”
Despite their concerns, Anindya’s parents gave their full support to her bid for a seat on the Regional People’s Representative Council of East Java, an ultimately unsuccessful effort.
And perhaps they were right to be concerned. Anindya called the experience “incredibly challenging”, citing the financial disparity between herself and her more established opponents, who could promise tangible monetary assistance while she could only expound on her proposals.
Anindya said that during the course of her campaign, she had come across several instances of AI-generated video clips used to support candidates, including the Suharto video.
Even her own party got involved, although it decided to stop using such clips after creative industry workers raised concerns.
However, there was another, far nastier hurdle she faced.
Anonymous online detractors leveled personal attacks and harsh criticism at her, and she even faced attacks from supposed political allies who objected to her even taking part in the election.
“There have been attempts to cancel me, including spreading strange rumors, and an anarchist group even created a poster calling for violence against me.”
When asked if she thinks AI technology would have made the situation worse, Anindya had zero doubt. “Especially for women,” she added, raising the possibility of using AI to create fake nude images.
This is not an unfounded fear, with women politicians in other countries being harassed, insulted and attacked online. During the rule of right-wing Brazilian president Jair Bolsonaro, there was a surge of online gendered attacks, particularly on social media networks like Facebook.
Manipulated or edited photographs have been around for decades.
Soviet Union dictator Josef Stalin had photos edited when the people in them drew his ire. In the computer age, Photoshop became a common tool, and was quickly used to transform photos for political purposes.
One widely circulated example was a picture of Sarah Palin, the Republican vice presidential candidate in the 2008 US election, edited to make it look as though she was toting a gun and wearing a bikini.
But although Photoshop was widely available, anyone who intended to create digitally-manipulated images with the programme had to have some skills in the first place.
In contrast, using a publicly available AI program has a much lower barrier to entry.
As Benjamin Ang explained, “AI opens up the capability to more people” than Photoshop ever did.
Previously, one needed skills in Photoshop or in video and audio editing. Now, such skills are practically unnecessary, because all the tools are available to the public, even to those who do not understand the language of their target audience.
Ang is a Senior Fellow at the S. Rajaratnam School of International Studies and Head of its Centre of Excellence for National Security. In his concurrent role as Head of Digital Impact Research, Ang is very familiar with the development of AI and its widespread use in society, including in the political arena.
Ang shed some light on just how and why AI has advanced by leaps and bounds within the last few years, going from science fiction and the mostly theoretical realm to widespread, everyday use.
Calling it a “hockey-stick” effect, where progress is flat for a long time before it suddenly shoots up, Ang pointed to two factors behind the rapid development of AI.
The first is the development of computer chips to the point where processing power is fast enough to handle the demands of AI. The second, Ang pointed out, is that there have been roughly two decades of social media use, during which people have uploaded a staggering amount of personal information online.
This, he said, allowed programmers to feed such information into data sets to train machine learning algorithms.
Ang also highlighted that AI has made generating such content much faster.
“Something which would have taken you an hour or several hours to do in Photoshop, or maybe a day to do video or audio editing can now be done in seconds. And because it can be done in seconds, you can keep on iterating it. You do it once, you can see ‘is it working’? You can do it again and again until you can really refine it, at a scale that has never before happened.”
Roy Lee, an Assistant Professor of Information Systems Technology and Design at the Singapore University of Technology and Design, agrees.
“Modern AI tools are designed with user-friendly interfaces and require minimal technical expertise. Unlike complex software like Photoshop, which demands specialised skills, AI platforms often offer intuitive prompts and automated features, enabling average users to generate high-quality content effortlessly.”
The role of social media
While AI has come on by leaps and bounds, its combination with social media is like putting a flame to touchpaper: you get fireworks.
Chin elaborated on the rise of the Malaysian Islamist Party (PAS) in the 2022 general election.
The key to their victory, in his view, was their mastery of using TikTok to win the hearts and minds of voters. The party backed influencers who made “very slick, professionally-run” videos that, combined with its popular religious messaging, created a “powerful machine.”
But what about AI-generated videos? As in Indonesia, Chin foresees the same thing happening in Malaysia.
While such content is currently “simple stuff,” using cartoon figures and the like, he has no doubt that come the next election cycle, political parties will be investing their resources in creating such content.
“If you speak to all the political parties in the old days, a major portion of [their] money goes to ground campaigning, paying campaign workers, setting up booths, holding night ceramahs (night rallies).
If you talk to them now, right, most of them said that they’re going to shift the bulk of those resources now towards social media.”
Chin also believes that in the wrong hands, misleading content created by AI will be very effective, especially once high-quality videos start being produced in earnest.
He pointed out that such content did not need to be entirely faked or created out of whole cloth.
Perhaps an existing video could be edited to add a few words, or remove them, to produce a misleading message. And if it’s uploaded to TikTok, where attention spans are notoriously short?
“I doubt very much that the ordinary voter would be able to tell the difference.”
Indonesia has taken a few tentative steps to address the problem.
About a week after the presidential election in February 2024, then-President Joko “Jokowi” Widodo signed legislation that requires digital platforms to pay media outlets that provide them with content.
While it does not directly tackle the misuse of AI content, it could help to ensure digital platforms are more circumspect about the kind of content they share.
In September 2024, the Jakarta Globe reported that Indonesia’s Ministry of Communication and Information Technology was preparing to issue new regulations to establish “clear guidelines” for the use of AI technology in Indonesia.
Deputy Minister Nezar Patria said it would be carefully studied, and require consultation with the “AI development ecosystem.”
The existing circular, issued on a temporary basis by the ministry, merely outlines “ethical guidelines” for the use of AI, including “respecting human rights” and the “need for transparency.”
Despite the change in presidential administrations, however, Nezar was re-appointed to his post by the new president, Prabowo, who took over in October 2024, so there is some hope that Indonesia will continue to work on creating AI regulations.
Meanwhile in September 2024, Malaysia’s Ministry of Science, Technology and Innovation introduced the National Guidelines on AI Governance and Ethics, which seek to support safe and responsible AI development.
However, Indonesia and Malaysia’s neighbor Singapore is not waiting around, and has already introduced and passed legislation specifically governing the use of AI-generated content in an election, with one eye on its own upcoming general election.
The city-state has recently gone through a rare leadership change, just its third in its near-60-year history. The new prime minister, Lawrence Wong, took over in May 2024.
Unlike leaders in some other countries, such as Japan, Wong did not call a snap election soon after being sworn in. He must call a general election by November 2025, which left ample time for a new bill to be introduced in parliament to directly address the issue of misleading, manipulated content.
In October 2024, Singapore’s Minister for Digital Development and Information Josephine Teo played a “deepfaked,” AI-generated video of herself in parliament.
The virtual Teo said: “It only took one person one hour to create this, using easily accessible software that anyone can use right now from the Internet. Imagine if someone produced realistic deepfakes, depicting Members of this House saying or doing something we did not actually say or do, and disseminated it. Such technology will only improve, and deepfakes may become even more realistic, convincing, and easy to make.”
The bill, which was passed by Singapore’s parliament, is a very narrowly-targeted one. It prohibits the publication of online content that “realistically depicts a candidate saying or doing something [they] did not.” It covers misinformation from both AI-generated content and non-AI techniques used to create content, such as Photoshop or audio dubbing.
However, the law only applies from the time the Writ of Election is issued until the close of the polls, which means it is only active during Singapore’s election season. It doesn’t matter whether the content boosts or denigrates a candidate; both are prohibited. Reposting or sharing such content is also not allowed.
Measures include directing users to take down the content, or requiring the social media service to disable access to it for users in Singapore. Punishments include fines of up to S$1,000 (US$760) or jail for up to a year for individuals, and fines of up to S$1 million (US$760,000) for a social media service that does not comply.
Harsh? Effective? It remains to be seen.
Lee called the legislation a “commendable step,” but said the main challenge lies in its execution.
“Social media companies will need to regulate and investigate digitally manipulated content swiftly when requested to take down such content. Given the vast amount of content they handle daily, this poses a significant challenge in terms of scalability and response time.”
Cross-border content is another issue: getting users who are not in the country to take down misleading content could take some time.
To Lee, strengthening collaboration between regulators and social media platforms is important to ensure “comprehensive protection.”
But what else can be done, if not through government legislation?
According to Lee, enhancing public awareness and media literacy is also crucial. “Educating citizens on identifying manipulated content empowers them to critically assess information,” he said.
And in what may be an illustration of the old adage, “set a thief to catch a thief,” Lee shared that AI solutions can also be used to detect misinformation on social media platforms, and other platforms with user-generated content.
He believes fostering partnerships between academia and industry to continuously improve such tools could significantly reduce the spread of misinformation.
Chin is more skeptical. He pointed out that such AI-generated misinformation is likely to be shared with you by a trusted individual, like a friend or family member, making it less likely to be questioned.
Chin also feels that Singapore’s attempt to tackle AI-generated misinformation may not be easily replicated in other countries.
“It’s a small city state, it’s much easier to handle. But for countries like Malaysia, I think, is increasingly becoming difficult. Even for Singapore, right, I think it will be very difficult in the coming years because of new platforms like Starlink, which allows you to link directly to the satellite. So whatever filters you put in at the platform level, the ISP level, even that can be bypassed with new technology.” – Rappler.com
Sulaiman Daud is a 2024 #FactsMatter fellow of Rappler. He is a writer and editor at Mothership, Singapore’s youth-focused digital news platform.