
Kakao has developed and released its own artificial intelligence (AI) model, "Kanana," which has been rated highly for its performance. It is drawing attention as another case demonstrating the competitiveness and potential of domestic AI models.

According to the industry on the 27th, Kakao's recently released open-source model "Kanana-1.5-8b-instruct" took first place among models with up to 8 billion parameters (8B size) on the "Horang-i leaderboard," a benchmark platform designed to evaluate the performance of Korean large language models (LLMs).

The Horang-i leaderboard was established by the U.S. AI developer platform Weights & Biases (W&B) to rank Korean LLMs on metrics such as ▲general Korean-language performance ▲alignment ▲information retrieval ability. It is considered an important standard for assessing the competitiveness of Korean LLMs tailored to the domestic user environment.

"Kanana-1.5-8b-instruct" recorded a total score of 0.691, placing first among models of 8B size or smaller. On the leaderboard's broader under-15B ranking it placed fourth overall, just 0.04 points behind the first-place "Qwen2.5-14B." This is the highest ranking among domestic LLMs built "from scratch," meaning the architecture, datasets, and training process were all developed from the ground up.

The "from scratch" approach differs from methods that merely fine-tune existing foreign models. Because the model is domestic, trained on Kakao's own data with an architecture it optimized itself, the result is considered all the more significant.

A Kakao official said, "It is a general language model that delivers strong performance in both Korean and English, designed to balance performance against cost. By taking overall first place across benchmarks covering translation, reasoning, knowledge, Q&A, and syntactic parsing, it has shown excellent competitiveness against numerous global models."

The "Horang-i" leaderboard ranking of models with up to 8 billion parameters (8B size). /Kakao provided

◇Government develops Korean AI model… Kakao "continues to enhance Kanana's technology."

The government is currently promoting the development of a Korean-style AI model. The Ministry of Science and ICT has been recruiting companies to participate in its "independent AI foundation model" development project since the 20th. The aim is to develop high-performance homegrown K-AI models comparable to OpenAI's ChatGPT or Google's Gemini, creating an environment in which all citizens can use AI.

Kakao plans to continue enhancing Kanana's performance with its own technology. The Kanana line consists of a series of sub-models of different sizes, types, and characteristics. It currently comprises ▲three language models ▲three multimodal language models (MLLMs) ▲two visual generation models ▲two voice models.

Kakao has released some of these models as open source to expand the domestic AI ecosystem and improve access to the technology. At the end of February, it published a technical report on the Kanana language models to arXiv and distributed the language model "Kanana Nano 2.1B" as open source. Last month, the 8B and 2.1B models followed. Notably, the recently released models are distributed under the Apache 2.0 license, allowing anyone to modify them freely and use them commercially.

Kakao also recently unveiled the performance of "Kanana-o," an integrated multimodal language model that can understand and process multiple forms of information, including text, voice, and images, simultaneously. It can handle queries that combine several types of input and is designed to respond with contextually appropriate text or natural-sounding speech.

Regarding Kanana-o, Kakao stated, "It performed at a level comparable to the world's best models on Korean and English benchmarks," adding, "It showed a particularly significant advantage on the Korean benchmark." It further explained, "In emotion recognition, it posted a considerable lead in both Korean and English, proving the potential of an AI model that understands and communicates emotions."