On the 8th, Professor Lee Su-in of the University of Washington, USA, emphasizes the necessity of explainable artificial intelligence (AI) at a press briefing held at the Korea Science and Technology Center./Courtesy of Hong Areum

In recent years, the rapid development of generative artificial intelligence (AI) has made machines that understand and generate human language a reality. However, the ‘black box’ problem, the difficulty of knowing why an AI produces a given result, has become increasingly apparent. As AI is introduced into fields with significant social impact such as healthcare, autonomous driving, law, and policy-making, this opacity becomes a risk factor. The concept that has gained attention as a solution is XAI (eXplainable AI), or explainable AI.

At a press briefing held on the 8th at the Korea Science and Technology Center in Gangnam, Seoul, Prof. Lee Su-in (45) from the University of Washington’s Department of Computer Science noted, “When AI makes mistakes, humans must be able to understand and correct those mistakes,” adding, “Explaining the complex reasoning processes or results provided by AI in a way that is easy for humans to understand is the core of XAI.”

Prof. Lee Su-in is a leading researcher in XAI and developed SHAP, one of the most widely used XAI frameworks. The technique quantifies how much each piece of input information influenced the judgment made by an AI model. SHAP is now used across fields including the biotech and semiconductor industries, and has been cited more than 35,000 times.

For instance, when an AI model that predicts biological age receives health data such as a person’s body mass index (BMI), blood pressure, and exercise habits, SHAP shows precisely which factors most influenced the prediction. Recognized for these contributions, she received the Samsung Ho-Am Prize in Engineering last year, becoming the first woman to do so. She visited Korea to deliver a keynote speech at the ‘2025 World Korean Science and Technology Conference’ on the 9th.
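As a rough illustration of the biological-age example above, the sketch below trains a toy model on synthetic data and asks the shap library for per-feature attributions. The features, the data, and the model are all invented for demonstration; only the shap calls reflect the actual library API.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for health records: BMI, systolic blood pressure,
# and weekly exercise hours. A real study would use actual patient data.
X = np.column_stack([
    rng.normal(25, 4, 500),     # BMI
    rng.normal(120, 15, 500),   # blood pressure (mmHg)
    rng.uniform(0, 10, 500),    # exercise (hours/week)
])
# Invented "biological age" signal, for demonstration only.
y = 0.8 * X[:, 0] + 0.2 * X[:, 1] - 1.5 * X[:, 2] + rng.normal(0, 2, 500)

model = RandomForestRegressor(random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree-ensemble models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Per-feature contribution (in years) to the first person's prediction.
for name, value in zip(["BMI", "blood pressure", "exercise"], shap_values[0]):
    print(f"{name}: {value:+.2f}")
```

Each printed value is the amount that feature pushed this one prediction above or below the model’s average output, which is exactly the kind of quantitative attribution described above.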

Prof. Lee emphasized the need for XAI by citing skin cancer diagnosis apps as an example. After her research team analyzed the models behind several such apps, she reported, they found numerous misdiagnoses, including cases where the AI missed skin cancer and cases where it flagged normal skin as cancerous.

Prof. Lee explained, “In such cases, applying XAI visually shows which parts of an image influenced the judgment and lets us trace back through the reasoning process. If generative AI plays the role of providing answers, XAI acts as an assistant that shows the reasons for those answers. They are not competing technologies but complementary ones.”
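The kind of visual attribution she describes can be sketched with the same shap library. Everything model-related below is a hypothetical stand-in: a trivial darkness score instead of a trained dermatology network, and a random image instead of a clinical photo. The point is only the workflow of masking image regions and plotting how much each region moved the score.

```python
import numpy as np
import shap  # the Image masker below also requires opencv-python

# Hypothetical stand-in for a skin-lesion classifier: it scores an image by
# average pixel darkness. A real diagnosis app would use a trained CNN here.
def predict(images: np.ndarray) -> np.ndarray:
    darkness = 1.0 - images.mean(axis=(1, 2, 3)) / 255.0
    return np.stack([1.0 - darkness, darkness], axis=1)  # [benign, malignant]

# One random 64x64 RGB "image" so the example runs end to end.
rng = np.random.default_rng(0)
X = rng.integers(0, 256, size=(1, 64, 64, 3)).astype(np.float64)

# Hide image regions by inpainting them and measure how the scores move.
masker = shap.maskers.Image("inpaint_telea", X[0].shape)
explainer = shap.Explainer(predict, masker, output_names=["benign", "malignant"])
shap_values = explainer(X, max_evals=300, batch_size=32)

# Overlays heatmaps showing which regions raised or lowered each class score.
shap.image_plot(shap_values)
```

The resulting plot highlights the image regions that pushed the model toward or away from each diagnosis, which is what lets a reviewer backtrack through the model’s judgment.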

She added, “Research is also actively underway to integrate XAI into generative AI,” stating that in fields where accountability is important, such as healthcare, law, and policy-making, it is crucial for humans to understand the logic behind the judgments made by AI.

Prof. Lee cited her own Alzheimer’s research as a representative application of XAI. She stated, “While AI has been used to identify genes related to Alzheimer’s disease, XAI can explain how those genes lead to the disease through specific biological pathways,” indicating that it can help researchers understand disease mechanisms and design personalized treatment strategies for patients.

However, explainable AI is not perfect either. Prof. Lee noted, “Since there are various XAI methodologies, explanations can be inconsistent or incorrect, so theoretical validation must be carried out thoroughly,” adding, “Even the XAI available today can catch a significant portion of AI’s mistakes, so it is much better to use these tools even if they are not perfect.”

Prof. Lee graduated from the Department of Electrical and Electronic Engineering at KAIST and earned her master’s and doctoral degrees from Stanford University in 2009. She has been researching AI since her undergraduate years in the late 1990s, witnessing the technology’s ups and downs firsthand. She said, “Even the same technology changes form according to the demands of the era,” stressing the importance of researchers understanding the flow of the times.

Prof. Lee also expressed her views on the direction of Korea’s AI policy. She remarked, “AI research capability is a national competitive advantage. It is very desirable to make AI a national strategic priority and invest in it,” adding, “Especially in fields like healthcare and bio, where trust is crucial, XAI technology must go hand in hand.” She emphasized, “XAI is not an add-on to AI development; it is an essential element.”

At the same time, she remarked, “While developing a foundation model that can serve a wide range of fields is meaningful, it is more realistic to flexibly adopt excellent technologies from abroad and build on them in the fields where Korea has strengths,” adding, “AI is still an evolving discipline, so a sustainable, stable research environment and long-term funding must be provided.”
