Kakao has released the 8B and 2.1B parameter models of its in-house artificial intelligence (AI) language model, 'Kanana', as open source.
On the 23rd, Kakao released four models from the 'Kanana-1.5' lineup on Hugging Face under the Apache 2.0 license: 8B-base, 8B-instruct, 2.1B-base, and 2.1B-instruct. Anyone can modify the models and use them commercially.
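Since the checkpoints are distributed through Hugging Face, they can in principle be loaded with the standard `transformers` API. The sketch below illustrates this; the repository IDs are assumptions inferred from the lineup names in the article, so the exact IDs should be confirmed on the kakaocorp organization page on Hugging Face.

```python
def load_kanana(repo_id: str):
    """Download and return the tokenizer and model for a Kanana checkpoint.

    Imports are done lazily so the module can be inspected without
    having `transformers` installed.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id)
    return tokenizer, model


# Hypothetical repository IDs for the four released Kanana-1.5 variants;
# verify the real names on Hugging Face before use.
KANANA_1_5_MODELS = [
    "kakaocorp/kanana-1.5-8b-base",
    "kakaocorp/kanana-1.5-8b-instruct",
    "kakaocorp/kanana-1.5-2.1b-base",
    "kakaocorp/kanana-1.5-2.1b-instruct",
]

# Example usage (downloads several GB of weights):
#   tokenizer, model = load_kanana(KANANA_1_5_MODELS[3])
#   inputs = tokenizer("안녕하세요", return_tensors="pt")
#   print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```

The Apache 2.0 license mentioned above is what permits this kind of direct reuse, including in commercial products, provided the license terms are carried along.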
The upgrade strengthens the models' agentic AI capabilities: performance in coding, mathematical problem-solving, and function calling improved by an average factor of 1.5, while Korean-language performance was maintained. Long-context understanding and more concise answers also improve the user experience.
Kakao is also developing 'Kanana 2', which will handle longer inputs and perform more sophisticated reasoning. With these releases, the company aims to help invigorate the domestic large language model (LLM) ecosystem and foster a collaborative AI ecosystem.
Kim Byung-hak, leader of the Kanana project at Kakao, said, "Through the open-source release, we will create an environment where competition and growth in AI technology coexist, and lay the groundwork for technological advancement," adding, "We will pursue both model performance and open-source values at the same time."
Meanwhile, Kakao has been actively sharing its research since last year by releasing the Kanana lineup along with multimodal language models, visual generation models, and more. In February of this year, it distributed the 'Kanana Nano 2.1B' model on Hugging Face, and it recently disclosed the performance of 'Kanana-o', the country's first integrated multimodal language model, which can process text, voice, and images simultaneously.