In February, a video shown on TV screens at the U.S. Department of Housing and Urban Development (HUD) headquarters in Washington, D.C. depicted President Donald Trump kissing the bare foot of Tesla CEO Elon Musk. The video, captioned 'LONG LIVE THE REAL KING,' was an AI-generated fake.
An AI cover video in which a model trained on singer IU's voice performs 'Bamyangyeong' has surpassed 430,000 views, yet IU, the owner of that voice, has not received a single won. Current law does not recognize neighboring rights in voices used for AI cover songs.
As AI-generated content that is hard to tell from the real thing, from deepfakes to AI cover songs, floods the internet, YouTube is stepping up its response. Behind the move is a concern about the social harm of fabricated content that distorts reality and infringes on creators' rights.
◇ YouTube throws its weight behind the U.S. 'anti-fake law'
On the 9th (local time), Google's video service YouTube announced on its official blog that it has selected six popular creators as a pilot group for its 'likeness detection' technology, which automatically detects and filters unauthorized reproductions of celebrities' faces and voices.
First unveiled last year, the technology will be rolled out more widely once YouTube improves its detection accuracy and management tools.
YouTube also officially declared its support for passage of the bipartisan 'NO FAKES Act.' Introduced last year, the bill aimed to block the spread of AI-generated content that imitates another person's likeness or voice, but it failed to pass.
In response, Google became involved in the new legislative push, coordinating the draft bill with Senators Chris Coons (Democrat-Delaware) and Marsha Blackburn (Republican-Tennessee) and working with industry groups such as the Recording Industry Association of America (RIAA) and the Motion Picture Association (MPA). The two senators are expected to hold a press conference this week to formally announce the bill's reintroduction.
Under the bill, when the people affected by a digital replica report it directly to a platform, the platform must promptly delete the content and notify the uploader. Notably, services and platforms specializing in deepfake creation are explicitly held liable, with no immunity as mere intermediaries.
YouTube's response reflects an awareness that deepfakes threaten not only content quality but also social stability, and expose the platform itself to legal liability. Fake videos that misappropriate celebrities' faces and voices can be weaponized for political ends or malicious manipulation, and the platform hosting them cannot escape accountability.
Leslie Miller, YouTube's vice president of public policy, said, 'We have led the development of content-protection technology for the past 20 years, and now we want to apply that know-how to the new challenges of the AI era,' adding, 'the anti-fake law is the starting point for responsible AI innovation and a key foundation for protecting creators.'
◇ “It's hard to stop deepfake videos due to the balloon effect”
In the United States, political messages have repeatedly been distorted, as in the synthesized Trump-Musk video, while in Korea, AI cover songs trained without permission on famous singers' voices are gaining popularity, exposing gaps in the law.
Korea currently has no revenue-sharing standards for AI cover songs. The issue was raised during the National Assembly Culture, Sports and Tourism Committee's audit last year, but the responses from the Ministry of Culture, Sports and Tourism and the Korea Copyright Commission remain stuck at the 'working-group discussion' stage.
The situation has grown more serious as deepfake apps that exploit celebrities' faces, with pitches like 'make a kissing video with Cha Eun-woo,' flood app stores.
A legal revision last year made even the mere creation of sexually exploitative fake videos punishable, but some in the industry argue that stronger measures, such as tightening identity verification at sign-up or tracing producers, are urgently needed. Meanwhile, the Korea Creative Content Agency, under the Ministry of Culture, Sports and Tourism, recently launched a research project to lay the groundwork for a tentatively named 'AI-based Content Industry Promotion Act.'
A platform industry insider said, 'The balloon effect, in which content deleted from one platform simply resurfaces on another, is hard to avoid because deepfake technology has already spread as open source. We need to help users learn to tell real videos from fakes themselves, and the industry needs to jointly establish a global standard for fairly compensating original creators.'