Imagine, sometime in the not-too-distant future, that while casually walking down a busy main street in downtown Kuala Lumpur, your attention is suddenly drawn to a crowd huddled closely, abuzz with intense and thunderous chatter, in front of a TV screen in a shop window.
On the broadcast, you watch as the Prime Minister publicly announces his abrupt resignation at a press conference in Putrajaya, saying he cannot cope with pressure from the civil service and intends to retire permanently.
Emotions run high; the crowd disperses in anger, shouting profanities and vulgarities as it leaves the scene, while you stand there in amazement.
Frantic, you return home to your computer to uncover the reasons behind the Prime Minister’s decision, when you notice something rather strange.
The Prime Minister made no such statement; he was still very much abroad, attending an international summit.
You discover that his voice and likeness had been accurately replicated by deepfake technology, the deceptive broadcast created by malicious parties conspiring, as part of a political ploy, to tarnish the Prime Minister’s reputation, sow mass confusion and incite social unrest in Malaysia.
Everyone had been fooled by near-authentic footage of a broadcast that simply never happened.
Given the steady advances in this technology, this grim reality is not too far off.
Authorities should urgently address the issue of deepfakes and how their potential weaponization could threaten national security and the well-being of Malaysians.
Deepfakes entered the cultural lexicon only recently, rising to prominence a few years ago.
The term itself reflects the fact that the technology relies on artificial intelligence software that undergoes a process of “deep learning” to create convincing fakes.
The software in question is trained through deep learning, a rigorous process in which an artificial intelligence model is exposed to and analyses vast data sets on a given subject, be it Instagram posts, YouTube videos and the like, to gather information and develop a comprehensive profile.
It is from this very profile that the program is able to produce images or videos of the subject in question, who can be made to appear to say or do virtually anything.
Enough information has been gathered about the subject that the software can accurately simulate the subject’s speech patterns and facial appearance, even without a recording of the subject saying any specific thing.
It can nevertheless be trained to represent the subject realistically.
This makes it possible, for example, to produce fake videos showing Hollywood celebrities committing outrageous acts, American presidents saying the worst things, and public figures in compromising positions, all in a way that is virtually indistinguishable from reality.
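At its core, the “deep learning” described above is iterative model fitting: an algorithm repeatedly adjusts its internal parameters to shrink the gap between its output and real examples of the subject. The sketch below is not deepfake code; it illustrates the principle on a deliberately tiny scale, fitting a one-parameter-pair model to invented example data by gradient descent.

```python
# Toy illustration of "learning from examples": gradient descent tunes
# parameters (w, b) until the model's output matches the training data.
# Real deepfake models do the same with millions of parameters and
# images instead of numbers; everything here is invented for illustration.

def train(samples, lr=0.05, steps=2000):
    w, b = 0.0, 0.0
    n = len(samples)
    for _ in range(steps):
        # gradients of the mean squared error with respect to w and b
        gw = sum(2 * (w * x + b - y) * x for x, y in samples) / n
        gb = sum(2 * (w * x + b - y) for x, y in samples) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# "Examples of the subject": points drawn from the pattern y = 2x + 1
data = [(x, 2 * x + 1) for x in range(-3, 4)]
w, b = train(data)
print(round(w, 2), round(b, 2))  # the model has "learned" the pattern
```

The same loop, scaled up enormously and fed with images and audio rather than number pairs, is what lets deepfake software build the comprehensive profile described above.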
The potential destructiveness of deepfake technology has been repeatedly emphasized by critics since its inception.
Since its emergence, users have employed the technology to digitally manipulate existing footage by overlaying a specific person’s face onto it.
Indeed, this was the purpose it served in the technology’s early days.
The technology gained notoriety on Reddit in 2017, when an anonymous user posted digitally altered pornographic videos that superimposed prominent celebrities’ faces onto existing footage, making it appear as if the celebrities in question were in the videos themselves.
The videos quickly attracted public interest and went viral.
The very first use of the technology thus already amounted to its weaponization: humiliating innocent people who had no involvement in the porn industry by forcibly implicating their identities in these lewd videos.
In the absence of strategic safeguards, deepfake technology allows widespread attacks on human dignity to go unchallenged.
There have been other instances of deepfake technology being used to create sexually explicit content modeled after high-profile internet personalities.
Female streamers on Twitch, an online live-streaming platform, suffered from the mass proliferation of deepfakes that appropriated their likenesses, causing an uproar in the internet community.
Owing to the uncanny combination of the viral properties of online media and the unchecked capabilities of deepfake technology, virtually no action could be taken as the videos were shared and replicated ever more widely.
The subsequent democratization of this technology, as it was made available to the public, marked a significant shift in online media.
Seeing the potential for satire, netizens produced relatively “harmless” videos for parody purposes.
The technology was still in its infancy, and in the eyes of the public there seemed little danger in distributing videos that could be immediately identified as fraudulent when they served internet humor.
However, over the years it has become clear that the consequences of deepfake technology were not trivial and indeed had the potential to cause damage of near-epic proportions.
In 2022, a fraudulent video of Ukrainian President Volodymyr Zelenskyy calling on Ukrainian soldiers to surrender and lay down their arms to the Russian military circulated on social media.
Ukrainian TV stations, in what appeared to be a geopolitical attack, were hacked to air the fake broadcast in order to sow mass confusion.
Fortunately, the Ukrainian authorities duly removed the video and provided clarifications to the general public.
It is important to note that while the deepfake was easily identifiable as fraudulent at the time, as the video exhibited certain irregularities and distortions, the incident nevertheless showed that the technology could be used to compromise the integrity of a sovereign state.
Deepfakes also pose a threat to international organizations and institutions.
In another case, a person who digitally altered his video feed to mimic the likeness of the mayor of Kiev managed to get senior European Union officials to agree to video calls with him.
This showed that deepfakes could be exploited for state espionage.
It can therefore be firmly established that the technology in question is in fact a matter of national security for the government and its citizens.
The technology is on an upward trajectory of adoption, and if little is done to strategically contain its influence, it could very well weaken Malaysian security and harm the lives of many innocent Malaysians, who are the most vulnerable to it.
The potential of deepfake technology in the criminal arena is limitless.
An advanced variant of this technology could trick financial institutions into legitimizing fraudulent transactions, disseminate politically provocative content to stoke geopolitical tensions, facilitate identity theft, blackmail individuals through synthetic revenge porn, and drive deliberate campaigns of disinformation and misinformation. The list goes on.
Despite the downsides of deepfake technology, it would not be fair to rule out discussion of the positive effects it could bring to society if tightly regulated.
Deepfake technology could be used in the film and advertising industries to make realistic footage of remote locations more accessible.
It could also be integrated into education and research to enable richer simulations of historical re-enactments and experiments.
What is needed is a middle ground that recognizes the detrimental effects of deepfake technology while enabling beneficial advances in technology.
The government must develop a comprehensive strategy to counter and combat deepfake technologies.
One priority for the Ministry of Communications and Digital should be to consider stricter legislation.
In the early months of 2023, the Cyberspace Administration of China, acting under the powers of the Chinese government, introduced new policies that ban outright the creation of deepfake media without individuals’ explicit consent.
National policies may also be modeled on those of the European Union and the United States, which prohibit the distribution of deepfakes in areas that raise political concerns and implicate individuals in pornographic material.
Consideration should also be given to extending current legislation by revising the definition of personal data to cover more aspects of a person’s identity, in a way that prevents the digital imitation of individuals.
Since the technology in question is still in its infancy, efforts must also be made to conduct national campaigns that raise awareness of the existence of the technology and its harmful effects.
This could help the public spot more sophisticated forms of deepfake scams.
Investing in the development of new technologies would be crucial in this area.
Deepfake detection technologies would be immensely helpful for both the authorities and the public, enabling harmful fakes to be reported immediately.
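Detection tools of the kind mentioned above generally work by searching for statistical artifacts that generative models leave behind in footage. The sketch below is not a real detector; it illustrates the idea with an invented, drastically simplified signal (per-frame brightness values and an arbitrary threshold), flagging clips whose frame-to-frame variation looks unnaturally smooth.

```python
# Toy sketch of artifact-based detection (illustrative only: real
# detectors use trained neural networks on actual video frames).
# Idea: genuine footage shows natural frame-to-frame variation, while
# crude fakes can exhibit unnaturally smooth or repetitive patterns.

def frame_variation(frames):
    """Average absolute brightness change between consecutive frames."""
    diffs = [abs(b - a) for a, b in zip(frames, frames[1:])]
    return sum(diffs) / len(diffs)

def looks_suspicious(frames, threshold=0.5):
    """Flag clips whose variation falls below a chosen threshold."""
    return frame_variation(frames) < threshold

# Invented example data: per-frame average brightness values
natural_clip = [10.0, 11.5, 9.8, 12.1, 10.4, 11.9]
smooth_clip = [10.0, 10.1, 10.0, 10.1, 10.0, 10.1]

print(looks_suspicious(natural_clip))  # False
print(looks_suspicious(smooth_clip))   # True
```

Production-grade detectors apply far richer versions of this idea, learning the telltale artifacts directly from large collections of real and fake footage rather than relying on a hand-picked threshold.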
It is vital that Malaysia strengthens its data borders.
The government’s recent announcement of the creation of a Cyber Security Commission could be paired with dedicated study of deepfake technology.
Last year, Europol, the European Union’s law enforcement agency, warned of the dangers of foreign actors using deepfake technology to undermine public trust in government institutions.
This fractured relationship between the public and government could be further exploited in ways that destabilize countries.
We should consider ourselves fortunate that the potential issues deepfake technology could cause can still be resolved, but a time may well come when the technology, left to its own devices, would simply be too overwhelming to stop.
This situation therefore needs to be addressed urgently before it becomes a source of lasting harm to the country.
Comments: [email protected]