02.04.2024 Author: Konstantin Asmolov

The problem of deepfakes – in South Korea and beyond


On February 23, 2024, with less than two months to go before the parliamentary elections in April, the Republic of Korea’s presidential administration said it would respond strongly to the appearance of a fabricated video on the Internet “featuring” President Yoon Suk-yeol. “We express serious concern over the fact that … certain outlets are labeling the false and fabricated video as a satirical video or reporting on it as if it is okay because it is marked as fake.”

The video, which was uploaded to Instagram, Facebook and TikTok, purports to show Yoon apologizing for the corruption and incompetence of his government, which has committed heinous acts and is guilty of grave injustice, destroying the nation and causing harm to the public. The video was reportedly created using deepfake and artificial intelligence technology.

The Korea Communications Standards Commission (KCSC), a telecommunications watchdog, removed and blocked the video. Police initiated an investigation, which revealed that the “exposé video” first appeared back in November 2023 and was based on Yoon’s campaign video in which he criticized the Democratic Party and Moon Jae-in’s government. The fake video’s author has been found, and if convicted of defamation and of violating the Public Officials Election Law, he could face a jail term of up to seven years or a fine of up to 50 million won (about $38,000).

But although in this case the fake video was less sophisticated than might have been expected, the present author considers this a good opportunity to discuss a very interesting topic.

As readers may be aware, the term “deepfake”, a portmanteau of “deep learning” and “fake”, refers to digitally created images or videos that viewers may mistakenly perceive as real. Deepfakes are often used to disseminate fake news, to commit fraud and to defame individuals, and they open up very wide scope for the manipulation of public opinion.

Readers no doubt are familiar with Arthur Conan Doyle’s story A Scandal in Bohemia, in which Sherlock Holmes is tasked with recovering a compromising document – a signed photograph. Back then the photograph was irrefutable evidence: handwriting could be forged, a personal stamp could be imitated, and witnesses bribed. But – “We were both in the photograph.” That, as Holmes says, is very bad.

In the current century, photography is no longer considered cast-iron evidence of wrongdoing, as almost everyone is aware of the capabilities of image-editing software such as Photoshop. But the technology continues to advance, and we are now faced with a range of tools that make it possible not only to fabricate individual facts, but to create digital doubles of existing personalities.

The first of these is deepfake technology itself. What started out as attempts to “paste” the faces of movie stars onto other people’s bodies in amateur pornographic images and videos is beginning to take on a political dimension, as the faces of politicians are used in propaganda both to promote their cause and to demonize them. In the past, a blurred, low-quality photograph of “a person who looks like the Prosecutor General”, in which the viewer’s imagination interprets a bunch of pixels as the face of the official in question, was all that was required; today, the image of the “Prosecutor caught unawares” will be far more realistic. To be fair, the converse situation is also important: a person who really is caught in the act can now claim that the evidence is a “set-up”.

The second technology is voice cloning with trainable neural networks, which can now be used for digital fraud: a sample of a person’s voice is enough to create fake audio messages purporting to come from them. The voice of the digital doppelganger is often indistinguishable from that of the victim. Moreover, if the neural network is “fed” a sufficient amount of audio, it can imitate not only the voice itself but also the speaker’s intonation and quirks of speech, making the fake even more difficult to expose.
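
To illustrate how low the barrier to entry has become, here is a minimal sketch of voice cloning from a short reference recording, using the open-source Coqui TTS library and its XTTS v2 model (one of several publicly available tools of this kind). The file names are hypothetical, and this is an illustration of the general technique, not of any specific tool mentioned in this article:

```python
# A minimal voice-cloning sketch using the open-source Coqui TTS library
# (pip install TTS). The reference and output file names are hypothetical.
from TTS.api import TTS

# Load a multilingual voice-cloning model (XTTS v2).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# A few seconds of reference audio are enough to condition the model
# on the target speaker's timbre and intonation.
tts.tts_to_file(
    text="This is a synthetic message spoken in a cloned voice.",
    speaker_wav="reference_sample.wav",  # short clip of the target voice
    language="en",
    file_path="cloned_output.wav",
)
```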

Similarly, if provided with a voice sample from, say, a politician, together with the texts of a number of their speeches, a neural network may be able to generate “speeches” on a given topic in which not only the politician’s voice but also his or her unique communication style is indistinguishable from the original.

This combination represents a breakthrough in the creation of fakes: the doppelganger politician will have the face, voice and other features of the original, and it will be very difficult to prove that it is a fake, especially since the vast majority of people readily believe what they want to hear, or what they are afraid to hear.

However, this digital doppelganger technology can be used for far more interesting purposes than disseminating political fakes during election campaigns. For example, there is what the present author would call “digital necromancy”. Imagine that a now-deceased politician has left behind a large legacy of texts from which their position on most key issues is clear. In such a situation, a neural network can essentially turn their image into a sort of oracle, able to speak its mind on an issue or answer questions. In South Korea, at an event hosted by the Kim Dae-jung Foundation, a holographic image of the late president delivered a speech which, for those who only saw the broadcast, looked very much like the original, and which, moreover, set out his views on current events as if he were still alive. And in Russia, we saw the presentation of a “virtual politician” modeled on the late Vladimir Zhirinovsky, which mimicked his voice and style of speech with striking accuracy.

However, let us leave the realm of sci-fi speculation, and return to earth, specifically, the South Korean election campaign.

In South Korea, where political struggle is largely a matter of pelting the enemy with dirt rather than contesting their programs, such technologies are a convenient but very dangerous political weapon. A fake video published right before an election can easily make a strong emotional impact, which is then followed by a political effect. Investigations come later, because they take time, and by the time they are complete the fight will be long over.

In the 2022 presidential election, both Yoon Suk-yeol and Lee Jae-myung used AI-generated “copies” of themselves in their campaign videos, allowing Yoon to “appear” in three places at once on the same day. The candidates used these AI-generated avatars in the hope of appealing to young voters, but genuine fakes soon followed. During the campaign for the June 2022 local elections, a fake video clip depicting Yoon supporting a conservative candidate for the position of head of Namhae County circulated on social media, leading viewers to mistakenly believe that the President was failing to maintain political neutrality. And in early 2024, a fake video was posted on YouTube which purported to show interim ruling (People Power) party leader Han Dong-hoon comparing liberals to gangsters during a press conference.

Therefore, since the end of 2023, the South Korean authorities have been cracking down on the use of deepfakes in political propaganda.

On December 5, 2023, a special parliamentary committee passed an amendment to the Election Law which bans the use of deepfakes in election campaigning in the 90 days before voting. Violators of the ban face a maximum of seven years in prison or a fine of up to 50 million won (about $38,000). Creators of deepfake videos are required to declare the presence of synthetic content in such materials, even if they are published before the 90-day period begins.

The ban went into effect on January 29, 2024, but according to the Central Election Commission, between January 29 and February 16, 129 deepfakes were identified in the country.

On March 5, 2024, the National Investigation Directorate of the National Police Agency announced that it had developed a software tool for detecting deepfakes which can be used to prevent crimes committed using this technology. It is claimed that in just 5–10 minutes the software can determine with roughly 80% accuracy whether a digitally produced video is genuine or fake. The results can then be checked by experts in the field of AI to minimize the risk of false negatives or positives.
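
The police have not published the internals of their tool, so as an illustration only, here is a sketch of the general approach used by many video deepfake detectors: sample frames from the clip, score each frame with a binary real/fake classifier, and aggregate the scores. The classifier below is an untrained stand-in (a real system would load weights trained on deepfake datasets such as FaceForensics++), and the video path is hypothetical:

```python
# Illustrative frame-level deepfake scoring -- NOT the police agency's tool.
# The classifier is an untrained placeholder; a real detector would load
# weights trained on deepfake datasets such as FaceForensics++.
# Requires: pip install opencv-python torch torchvision
import cv2
import torch
from torchvision.models import resnet18

# Stand-in binary classifier: class 0 = real, class 1 = fake.
model = resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.eval()

def fake_probability(video_path: str, sample_every: int = 30) -> float:
    """Average the per-frame 'fake' probability over sampled frames."""
    cap = cv2.VideoCapture(video_path)
    probs, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:
            # BGR -> RGB, resize to the network's input size, scale to [0, 1].
            rgb = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2RGB)
            batch = torch.from_numpy(rgb).permute(2, 0, 1).float().div(255).unsqueeze(0)
            with torch.no_grad():
                logits = model(batch)
            probs.append(torch.softmax(logits, dim=1)[0, 1].item())
        idx += 1
    cap.release()
    return sum(probs) / len(probs) if probs else 0.0

# print(fake_probability("suspect_clip.mp4"))  # hypothetical file
```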

As South Korean media have noted, these moves are in line with a global trend to regulate the use of deepfakes in political advertising. Major technology companies such as Google, Adobe, Amazon, Microsoft, TikTok, OpenAI and Meta Platforms have all updated their policies to require users to add disclaimers when posting political advertising created using deepfake technology. The US Federal Election Commission has also begun procedures to regulate such campaign videos ahead of the 2024 election.

Two of Korea’s biggest Internet platforms have announced that they will take steps to counter deepfakes created by artificial intelligence. Naver, the country’s largest search engine and Internet platform, has said that its chatbot-based artificial intelligence service will not respond to user requests to create “inappropriate content” such as a lineup of facial images. Naver added that it runs a monitoring group dedicated to identifying online posts that violate election rules and analyzing new examples of offensive content such as deepfakes.

Kakao, Korea’s leading mobile messaging app, has said that it is considering introducing watermarking technology for content generated by its artificial intelligence service, adding that a specific timetable for implementation has yet to be determined.
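
Kakao has not disclosed which watermarking scheme it has in mind, so the following is only a toy illustration of the general idea of invisible watermarking: hiding a short tag in the least significant bits of an image’s red channel. A naive LSB mark like this does not survive recompression or resizing; production systems rely on far more robust watermarks or on provenance metadata standards such as C2PA. The file names below are hypothetical:

```python
# Toy invisible watermark: hide a tag in the least significant bits of the
# red channel. Purely illustrative -- not Kakao's (undisclosed) scheme.
# Requires: pip install pillow numpy
import numpy as np
from PIL import Image

TAG = "AI-GENERATED"

def embed(in_path: str, out_path: str) -> None:
    """Write TAG, bit by bit, into the LSBs of the first red-channel pixels."""
    img = np.array(Image.open(in_path).convert("RGB"))
    bits = [int(b) for byte in TAG.encode() for b in f"{byte:08b}"]
    flat = img[..., 0].flatten()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.array(bits, dtype=flat.dtype)
    img[..., 0] = flat.reshape(img[..., 0].shape)
    Image.fromarray(img).save(out_path, format="PNG")  # lossless, keeps the bits

def extract(path: str) -> str:
    """Read the LSBs back and decode them as text."""
    flat = np.array(Image.open(path).convert("RGB"))[..., 0].flatten()
    n = len(TAG.encode()) * 8
    bits = "".join(str(v & 1) for v in flat[:n])
    return bytes(int(bits[i:i + 8], 2) for i in range(0, n, 8)).decode(errors="replace")

# embed("generated.png", "tagged.png"); print(extract("tagged.png"))  # hypothetical files
```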

South Korean experts consider that deepfakes represent a very serious threat. Jeon Chang-bae, chairman of the International Artificial Intelligence Ethics Association, notes that the most concerning scenario would be one in which a fake video defaming a candidate is uploaded just one day before an election and garners millions of views, leaving no time for the press or the government to verify it before voters head to the polls.

The passage of anti-deepfake laws is, in his view, “a big step forward. However, what is more important than the laws is raising awareness among the public that any videos and images they view on the internet could be fabricated, and not trust them at face value.”

Kim Myung-joo, a professor of information security, also believes that AI-generated misinformation could have a significant impact on swing voters’ perceptions, given that it has become “surprisingly easy” to create a convincing hoax using deepfake technology. It takes only about 10 minutes to create a deepfake image using a mobile app, so political YouTubers and bloggers can easily use this technology to spread misinformation about politicians they dislike.

An editorial in the Korea Herald echoes these concerns: “Given the relentlessly fast speed at which online posts are produced, and AI-based editing tools are becoming more available, it is only a matter of time before a torrent of political deepfakes could spread through online communities and mobile messengers.”

But, in the present author’s view, as the ability to create deepfakes grows, so too may the importance of digital media literacy and critical thinking. And while any fake, not necessarily a deepfake, is calculated to bolster the faith of those who never bother to double-check what they see in the news, he hopes that at least analysts and experts who are able to remain uninfluenced by the “echo chamber effect” will be able to separate the wheat from the chaff.


Konstantin Asmolov, Candidate of Historical Sciences, Leading Research Fellow at the Center for Korean Studies of the Institute of China and Modern Asia of the Russian Academy of Sciences, exclusively for the online magazine “New Eastern Outlook”.
