The European Union (EU) is facing backlash from big tech corporations ahead of the implementation of the general-purpose AI (GPAI) rules under the AI Act, its first-ever artificial intelligence (AI) regulatory law. Meta, the parent company of Facebook, has decided not to sign the code of practice tied to the AI Act. In Korea, the enforcement decree of the 'AI Basic Act,' which takes effect in January next year, is expected to be unveiled soon, and calls for regulatory relaxation are growing.
GPAI refers to a general-purpose AI model that can be applied to a wide range of tasks, such as chatbots, automatic translation, document summarization, and image generation. Large language model (LLM)-based services such as ChatGPT, Gemini, and Claude are representative examples.
According to the IT industry on the 21st, Joel Kaplan, Meta's Chief Global Affairs Officer, stated in a LinkedIn post on the 18th (local time) that "Meta has thoroughly reviewed the EU's GPAI model code of practice and will not sign it." He added, "Europe is heading in the wrong direction on AI. This code introduces a number of legal uncertainties for model developers, as well as measures that go far beyond the scope of the AI Act."
The code of practice is a practical guideline for GPAI under the AI Act, which the EU passed last year. The European Commission published the final draft of the code last week, leaving each corporation to decide for itself whether to sign. Signatories receive greater legal certainty and simplified administrative procedures when demonstrating compliance with the AI Act. Key points include ▲ strengthened transparency (documentation of model architecture and training data, and disclosure to downstream users) ▲ copyright compliance (obligation to remove data at the request of copyright holders and to disclose the sources of training data) ▲ safety and security (prior assessment of high-risk AI models and risk mitigation measures) ▲ a voluntary signing system (non-signatories may face stricter review by regulatory authorities).
The AI Act, formally enacted by the EU last year, is the world's first comprehensive AI regulatory law. It classifies AI into four tiers by risk level: ▲ unacceptable-risk AI ▲ high-risk AI ▲ limited-risk AI ▲ minimal-risk AI. Obligations differ by tier and have been phased in since the law took effect last year, with transparency obligations for GPAI applying from the 2nd of next month. The Act requires GPAI providers to supply technical documentation and user guides, comply with copyright rules, and disclose summaries of the data used for training. Because the AI Act's definition of GPAI covers AI services built on large language models (LLMs) such as OpenAI's ChatGPT and Meta's LLaMA, global AI corporations are expected to fall under the regulation.
As a result, there has been strong pushback not only from Meta but also from IT corporations within Europe. According to foreign media reports, more than 110 European corporations and organizations sent an open letter to the European Commission President earlier this month urging the Commission to postpone implementation of the AI Act and adopt a more innovation-friendly regulatory approach. Signatories included Mercedes-Benz Group, BNP Paribas, Deutsche Bank, Mistral, Lufthansa, Siemens, L'Oréal, Sanofi, and Spotify. They requested a two-year grace period both for the GPAI model rules taking effect next month and for the high-risk AI system rules taking effect in August next year.
The global IT lobbying group CCIA (Computer & Communications Industry Association) is also calling for a temporary suspension of the EU AI Act's implementation. The group, whose members include Google, Meta, and Apple, warned that "hasty implementation could endanger the vision for Europe's AI industry." It emphasized in particular that corporations still lack clarity about their obligations because key details of the law remain unsettled. Daniel Friedlander, CCIA's Senior Vice President for Europe, said, "The implementation of the regulations is just weeks away, yet the key details are still missing," adding, "If this proceeds as is, Europe's AI innovation could grind to a halt." A survey commissioned by Amazon found that more than two-thirds of European corporations do not understand their obligations under the AI Act.
Attention is also turning to Korea, where the enforcement decree of the AI Basic Act is expected to be disclosed within this month ahead of the law's entry into force in January next year. Korea became the second jurisdiction in the world, after the EU, to pass an AI regulatory law when it enacted the 'AI Basic Act' in December last year. The Ministry of Science and ICT is expected to solicit opinions from industry stakeholders, including corporations and associations, after unveiling the draft decree. The industry is watching the decree's direction closely, arguing that it should prioritize promotion over regulation if Korea is to accelerate its bid for AI leadership.
With the Lee Jae-myung government focusing on intensive investment in the AI industry, expectations are growing that the enforcement decree will lean toward regulatory relaxation. The appointment of corporate figures to government posts, such as AI chief Heo Jung-woo (formerly of Naver) and Minister Bae Gyeong-hun (formerly of LG), has further raised those expectations. Notably, Minister Bae stated during his confirmation hearing on the 14th that a grace period for fines is needed under the AI Basic Act taking effect in January next year.
An IT industry official said, "With the EU's AI Act drawing criticism for its ambiguity, corporations at home and abroad are pushing back," adding, "Korea should follow the global trend, but it must actively reflect industry opinions once the draft enforcement decree is released."