LG CNS announced on the 10th that it has co-developed a 111-billion-parameter reasoning-based large language model (LLM) with the Canadian artificial intelligence (AI) startup Cohere. The super-large model comes two months after the company released a lightweight, 7-billion-parameter model specialized in Korean in May.
The model supports 23 languages, including Korean, English, Japanese, Chinese, Hebrew, and Persian, and can run on just two graphics processing units (GPUs) thanks to model compression technology. LLMs with more than 100 billion parameters generally require at least four GPUs.
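A rough back-of-the-envelope calculation shows why compression changes the GPU count. The sketch below is illustrative only: the article does not specify the compression method, GPU model, or memory overhead, so the 80 GB per-GPU figure, the ~20% runtime overhead, and the precision levels are assumptions, not LG CNS or Cohere specifics.

import math

PARAMS = 111e9          # reported parameter count of the model
GPU_MEMORY_GB = 80      # assumed memory of one high-end GPU (80 GB class)
OVERHEAD = 1.2          # assumed ~20% extra for activations, KV cache, buffers

def gpus_needed(bytes_per_param: float) -> tuple[float, int]:
    """Return (weight memory in GB, GPUs needed) at a given precision."""
    weight_gb = PARAMS * bytes_per_param / 1e9
    total_gb = weight_gb * OVERHEAD
    return weight_gb, math.ceil(total_gb / GPU_MEMORY_GB)

for label, bytes_per_param in [("FP16  (2 bytes/param)", 2.0),
                               ("INT8  (1 byte/param)", 1.0),
                               ("4-bit (0.5 byte/param)", 0.5)]:
    weights, gpus = gpus_needed(bytes_per_param)
    print(f"{label}: weights ~{weights:.0f} GB -> ~{gpus} GPU(s)")

Under these assumptions, the uncompressed FP16 weights alone exceed 200 GB and need roughly four 80 GB GPUs, while 8-bit compression brings the footprint down to a level that fits on two, which is consistent with the figures cited in the article.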
Reasoning-based LLMs derive logical answers to complex problems by weighing multiple variables, and are seen as essential technology for realizing 'agentic AI' services, in which AI makes decisions and performs tasks on its own.
LG CNS plans to offer the LLM in an on-premises format, allowing client companies to process sensitive data securely within their own infrastructure without risk of external leakage. Because the newly unveiled model runs on just two GPUs, clients can also adopt it at a lower cost.
The jointly developed LLM showed strong reasoning performance in Korean and English. According to the companies' internal testing, it scored higher than global LLMs such as GPT-4o, GPT-4.1, and Claude 3.7 Sonnet on 'Math500' and 'AIME 2024,' representative benchmarks for validating reasoning capability.
Kim Tae-hoon, head of the AI Cloud Business Division at LG CNS, said, 'Based on our differentiated AI capabilities and competitiveness, we will position ourselves as the best partner for providing agentic AI services tailored to clients' businesses and for leading clients through their AI transformation (AX).'