[Editor’s note] SK Telecom, YES24, the National Research Foundation of Korea (NRF), and other organizations have recently suffered cyber attacks that led to personal information leaks and service disruptions. In the era of artificial intelligence (AI), cyber threats are escalating and attack methods are becoming more sophisticated, while the response capabilities of the Korean government and corporations remain weak. This article diagnoses the problems in the cybersecurity system through domestic and international cases and seeks solutions.

Illustration=ChatGPT

“It was the exact same face and voice as the actual boss. I received instructions via video call, so I had no suspicion.”

In January last year, an employee at the Hong Kong branch of the British engineering firm Arup was deceived by an AI-generated deepfake video call into transferring $25.6 million (about 34.8 billion won). The video perfectly replicated the executive's face, voice, accent, and even subtle eye movements. The employee followed the instructions, bypassing the security system, and realized it was a scam only after receiving a call from headquarters saying, “We did not make such a request.”

AI is no longer a mere auxiliary tool for cyber attacks. Hackers are wielding AI as a core weapon, and the speed and precision of their attacks are undermining existing cybersecurity systems. According to the Thailand National Cyber Security Agency (NCSA), a total of 1,002 cyber attacks were recorded between January and May of this year, and 63% of the targeted corporations suffered actual damage. More than half of them paid money to the hackers. Of attempted login attacks, 94% were attributed to automated AI bots.

◇ Sophisticated attacks using deepfake impersonations and document forgery

Cyber attacks exploiting generative AI are fundamentally different from traditional phishing. Hackers train AI on internal electronic approval forms, approval workflows, and employees' messenger conversation patterns, then generate plausible work scenarios such as money transfer instructions or contract deliveries. Combined with deepfake audio and video that imitate a real person's speech patterns, vocabulary habits, and even speaking speed, these attacks lead victims to follow instructions without suspicion. This intricate method, which blends real-time communication with document forgery, is called “Deep Phishing.” Unlike conventional attacks that lure victims into clicking email links, it directly deceives human senses and judgment. The forgeries are so precise that even security personnel cannot distinguish genuine instructions from fabricated ones.

Some hackers create fake websites (phishing sites) impersonating generative AI services and secretly install malicious programs (spyware) that steal personal information the moment a user visits. Fake login pages and app download sites imitating OpenAI's “ChatGPT” are spreading in particular. According to the global security company Check Point, about 4% of the more than 1,000 newly registered ChatGPT-related domains last year were identified as malicious. Some of them mimicked the actual login screen to steal users' authentication credentials.
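To illustrate how such lookalike domains can be screened, below is a minimal sketch in Python. The list of newly registered domains and the thresholds are hypothetical placeholders, and the heuristic (brand string embedded in an unrelated domain, or a near-identical spelling of the legitimate domain) is a simplified illustration, not the method used by Check Point or any specific vendor.

```python
# Minimal sketch: flag newly registered domains that resemble a legitimate brand domain.
# The domain list and thresholds below are hypothetical examples, not data from the article.
from difflib import SequenceMatcher

LEGITIMATE = "chatgpt.com"
BRAND = "chatgpt"

def looks_suspicious(domain: str, similarity_threshold: float = 0.8) -> bool:
    """Heuristic check: brand string embedded in an unrelated domain,
    or a spelling very close to the legitimate domain (typosquatting)."""
    d = domain.lower().strip(".")
    if d == LEGITIMATE:
        return False  # the real domain itself is not suspicious
    # Case 1: the brand name appears inside a different registered domain
    if BRAND in d:
        return True
    # Case 2: typosquatting - spelling is nearly identical to the legitimate domain
    similarity = SequenceMatcher(None, d, LEGITIMATE).ratio()
    return similarity >= similarity_threshold

if __name__ == "__main__":
    # Hypothetical newly registered domains, for demonstration only
    new_domains = ["chatgpt-login.app", "chatqpt.com", "openai.com", "weather-news.net"]
    for name in new_domains:
        print(f"{name}: {'SUSPICIOUS' if looks_suspicious(name) else 'ok'}")
```

In practice such screening would also weigh registration date, hosting infrastructure, and page content, but the basic idea of comparing new domains against a protected brand is the same.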

Attacks that use generative AI brands exploit user trust, infiltrating through browser extension installations or offers of free access. Kaspersky's global research team (GReAT) warned that “a backdoor malware called 'PipeMagic' was distributed through a fake ChatGPT app,” noting that “the app was distributed via a third-party link rather than through the Google Play Store or Apple App Store, and once installed it begins probing the internal network and exfiltrating data.”

Graphic=Son Min-kyun

◇ 74% of global cybersecurity experts say AI-based threats have serious impacts

According to a recent survey of 1,500 cybersecurity experts worldwide by the British cybersecurity company Darktrace, 74% of respondents said that “AI-based threats are already having a serious impact on our organization.” Forty-five percent said, “Our ability to respond to AI attacks is insufficient.”

Attackers are using AI to break down language barriers, modify malware in real time, and adjust attack scenarios on the fly. Major hacking organizations in Russia, China, Iran, and North Korea are reportedly operating their own AI systems based on large language models (LLMs), and some share them with other hackers through open repositories. Existing security solutions are increasingly struggling to detect AI-based threats in real time or block them preemptively.

Lee Hee-jo, a professor in the Department of Computer Engineering at Korea University (former AhnLab CTO), said, “AI is a weapon that benefits both attackers and defenders,” adding, “On the offensive side, it enables automated vulnerability analysis and rapid recycling of attack techniques; on the defensive side, it allows security vulnerabilities to be eliminated proactively from the development stage.” He further noted, “Building an AI-based security system in at the design stage is far more effective than retrofitting one onto a system already in operation.”

◇ The U.S. mandates disclosure, the EU legislates… Korea lags behind

The U.S., Europe, and the UK are responding quickly to AI-based security threats. In 2023, through its 'National Cybersecurity Strategy,' the U.S. required publicly traded companies to disclose cyber incidents in their annual reports, aiming for transparency about cyber risk. Federal agencies have also adopted a 'Zero Trust' security framework, which trusts no user or device by default and requires separate authentication and verification every time the system is accessed. Unlike the previous model, in which anyone who got inside could move freely across the internal network, it treats every access attempt as a potential threat and grants only the minimum permissions each time.
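As a rough illustration of the Zero Trust idea described above, the sketch below shows per-request verification in Python. The user store, device-posture flag, and permission scopes are hypothetical placeholders; real deployments rely on identity providers, device management, and policy engines rather than a single function.

```python
# Minimal sketch of Zero Trust-style access control: every request is re-verified,
# and nothing is trusted merely because it comes from "inside" the network.
# All names and data below are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    token_valid: bool        # result of verifying the user's credential for THIS request
    device_compliant: bool   # e.g., disk encryption on, OS patched
    resource: str            # resource being requested
    action: str              # "read" or "write"

# Minimum permissions granted per role; anything not listed is denied.
ROLE_SCOPES = {
    "finance-staff": {("payments", "read")},
    "finance-manager": {("payments", "read"), ("payments", "write")},
}

USER_ROLES = {"alice": "finance-staff", "bob": "finance-manager"}

def authorize(req: AccessRequest) -> bool:
    """Re-check identity, device posture, and scope on every single request."""
    if not req.token_valid or not req.device_compliant:
        return False  # no implicit trust, even for known users on the internal network
    role = USER_ROLES.get(req.user_id)
    if role is None:
        return False
    return (req.resource, req.action) in ROLE_SCOPES.get(role, set())

if __name__ == "__main__":
    # Alice may read payments data but is denied a write, even from a healthy device.
    print(authorize(AccessRequest("alice", True, True, "payments", "read")))   # True
    print(authorize(AccessRequest("alice", True, True, "payments", "write")))  # False
```

The key design choice is that authorization is evaluated on every request with the narrowest possible scope, rather than granting broad access once at the network perimeter.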

The European Union (EU) has enacted the 'Cyber Resilience Act,' which mandates security-by-design for all digital products and services, while the UK brought its telecommunications security law into force in 2022, requiring telecom operators to detect AI-driven attacks and put security measures in place. Violators can be fined up to 10% of annual revenue.

Professor Lee said, “In an era when AI and cloud-based systems are the foundation, Korea still clings to the mindset of 'responding after an incident occurs,'” adding, “It is time to consider a system that, as in the U.S., includes security budgets, responsible parties, and response strategies among corporate disclosure requirements.”

According to the 'Cybersecurity Readiness Index' released by Cisco last year, only 3% of Korean corporations reached the 'mature' stage of security readiness. This falls far short of the Asian average (10%) and the global average (15%). Korea scored below average in every category, including security strategy, threat detection capabilities, and personnel.

Park Chun-sik, a professor in the Department of Cybersecurity at Ajou University (former Head of the National Security Research Institute), stated, “Korea is still stuck in a blame game after incidents occur,” and noted, “A structure should be established that imposes significant legal liability for security incidents, similar to the U.S., to facilitate substantial security investments by corporations.”

The National Intelligence Service said in last month's '2025 National Information Security White Paper' that the security response capabilities of government and public institutions are weak. Only 67.1% of institutions operate a dedicated information security department, and just 18.6% have five or more dedicated staff. The share of institutions maintaining their information security budget as a portion of the total IT budget was 57.3%, down from 64.7% in 2022.

Professor Park said, “We need to recognize AI-related cyber threats at the level of a real-world disaster and fundamentally reset our response strategies.”

※ This article has been translated by AI.