In a TikTok video, Lee Kuan Yew literally pushes off his tombstone and rises from a grave, just as the founding prime minister of Singapore once said he would, if his country were in trouble.
In another clip, Opposition leader Chee Soon Juan is seen gesticulating angrily at Prime Minister Lawrence Wong in public, leading to the two squaring up to each other. Both videos are fake.
These AI-generated clips, highlighted last week in a CNA report, offer an early sample of how deepfakes might be used to portray politicians and influence voters, as Singaporeans head to the polls on May 3 for the next general election.
They also show how easy it is to create a fake image or video today, given the plethora of generative AI tools available to all. What’s worrying, say experts, is how much of an impact the technology will have on GE2025.
Today, deepfake audio is getting “very good for very little effort or cost,” said Chester Wisniewski, director and global field chief information security officer at cybersecurity firm Sophos.
“Video is much more difficult to do in a convincing way,” he warned, “but nation-states and even well-funded domestic adversaries could likely afford to make believable video deepfakes.”
Just last week, news came of a finance director here who almost lost US$499,000 after being duped by a deepfake video call where his company’s chief executive officer and other stakeholders were impersonated with AI help.
While deepfakes don’t appear to have swung an election yet, the underlying social media networks that distribute them have proven potent in helping Donald Trump win in the United States back in 2016.
Singapore already has strict rules to rein in falsehoods, and a new law passed last year specifically bans the publication of digitally generated or manipulated content that falsely depicts an election candidate saying or doing something they did not.
Social media companies have to comply with an order from the election authorities to take down an offending post or even block access to the content by Singapore users during the election period.
Despite these proactive steps, however, technology experts say the sheer reach and broad impact of deepfakes will be difficult to restrain, as false content can easily go viral.
“In a tightly governed and digitally connected society like Singapore, the risks are just as real,” said Takanori Nishiyama, senior vice president for Asia-Pacific sales at Keeper Security, a cybersecurity firm.
“Deepfakes could be deployed to simulate public figures making inflammatory or divisive remarks, fabricate policy stances or even impersonate journalists and influencers with significant sway over public sentiment,” he told Techgoondu.
Earlier this week, former Singapore president Halimah Yacob filed a police report over a deepfake video that featured her criticising the government.
“This is scary how AI is being used to influence voters during this critical period,” she wrote on Facebook two days ago.
While many of the deepfake videos and images of Singapore politicians today are clearly fake or at least appear slightly awkward, the technology is rapidly improving.
Thanks to rapid advances in AI over the past two years, it has become increasingly easy to recreate a person’s tone, cadence, facial expressions and mannerisms.
And deepfakes may not even have to be very realistic to be convincing, said Nishiyama. “Typically, people do not analyse content frame by frame and tend to trust what they see and hear, allowing deepfakes to come across as believable.”
“Playing on emotion, a message can override logical analysis, resulting in even mediocre deepfakes being effective,” he added.
“Furthermore, if these clips are shared from friends and family, they may be viewed as ‘trusted’, easily convincing people that fake news could be real news,” he noted.
Perhaps most worrying is the way a fake image or video of a candidate can spontaneously evoke reactions from people during a period of heightened emotion.
The speed at which such deepfakes spread also plays to moments when people may be particularly concerned about a certain issue in the lead-up to the big day.
These deepfakes may also be sent over WhatsApp or other private messaging apps, where the authorities may not have the same oversight as public social media networks.
For example, a fake video of a party’s members quarrelling with one another, or one purportedly showing political leaders making harmful comments to a community, released close to Election Day, may leave little time for the authorities – or indeed, the public – to correct a falsehood that has already spread.
“In the cauldron of election hustings, and time pressure, you can see the potential for damage can be significant,” said Bryan Tan, a partner at law firm Reed Smith Singapore.
Existing laws, he noted, are useful in correcting any potential falsehoods, but as long as people still fall for common scams, they can become unwitting victims of such deepfakes.
It is hard to say if these deepfakes will have an impact on GE2025, but it is important that people are able to spot the fraudulent content, especially since it will be appearing for the first time during a Singapore election, he stressed.
Legal deterrents, particularly those compelling social media networks to take a more proactive role in combating fake content, are an important foundational step in reining in large-scale fraud, say experts.
In the longer term, they stress the importance of inoculating the public against deepfakes – or any fake news – by educating people to tell the real from the fake.
As the technology evolves, so must people’s ability to decipher the new content that appears on their screens, especially in a so-called “post-truth” world that will be filled with more deepfakes.
“Public education is key, but it must evolve from traditional media literacy to include digital forensics – teaching citizens how to identify telltale signs of synthetic content and verify sources,” said Keeper Security’s Nishiyama.
The Cyber Security Agency of Singapore last year published a guide to help the public discern what may be a deepfake image or video. Its advice includes looking out for signs that the content may be manipulated.
For example, there may be blurring around the edges of a person’s face in a video, or unnatural blinking, or a lack of it, in the eyes. Lip movements may also not be synchronised with the speech, and there may be incongruent background noise.