Courtesy of Liner

Artificial intelligence (AI) search startup Liner announced on the 3rd that its 'Liner Search LLM' has surpassed OpenAI's GPT-4.1 on some performance measures. In particular, the model showed superior results to GPT-4.1 in evaluations of the key components used to generate AI search responses.

Liner Search LLM is an in-house model that integrates the eight key components needed to analyze and process user questions.

Liner ran a systematic internal validation process that compared its model against OpenAI's GPT-4.1 on the same tasks, analyzing performance (accuracy), processing speed, and expense (cost per token).
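The article does not describe the validation setup in detail; a minimal sketch of how two models could be compared on the same tasks along those three axes might look like the following, where the stubbed run_model, the task list, and the per-token prices are all illustrative assumptions rather than Liner's actual harness:

```python
import time

# Hypothetical harness: run_model is a stand-in for calling either model's
# API; the task list and per-token prices are made-up illustrations.

def run_model(model: str, prompt: str) -> tuple[str, int]:
    # Returns (answer, tokens_used) -- stubbed for illustration.
    answer = f"{model} answer to: {prompt}"
    return answer, len(prompt.split())


def evaluate(model: str, tasks: list[tuple[str, str]], cost_per_token: float) -> dict:
    correct, tokens = 0, 0
    start = time.perf_counter()
    for prompt, expected in tasks:
        answer, used = run_model(model, prompt)
        tokens += used
        correct += int(expected.lower() in answer.lower())
    elapsed = time.perf_counter() - start
    return {
        "accuracy": correct / len(tasks),           # performance
        "seconds_per_task": elapsed / len(tasks),   # processing speed
        "cost_usd": tokens * cost_per_token,        # expense (cost per token)
    }


tasks = [("Which category does this question belong to: latest GPU prices?", "category")]
print(evaluate("liner-search-llm", tasks, cost_per_token=5e-6))
print(evaluate("gpt-4.1", tasks, cost_per_token=1e-5))
```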

As a result, Liner Search LLM outperformed GPT-4.1 on all three measures (performance, speed, and expense) in four key components: ▲category classification ▲task classification ▲external tool execution ▲intermediate answer generation.

The company also said its model held an advantage on two or more of the three measures in the remaining four components: ▲determining question decomposition ▲identifying necessary documents ▲generating intermediate answers with sources ▲task management.
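The article does not explain how these eight components fit together. Purely as an illustration, and assuming a simple sequential orchestration with hypothetical component functions (none of the names or rules below come from Liner), a search pipeline of this kind could be wired up roughly like this:

```python
from dataclasses import dataclass, field


@dataclass
class SearchContext:
    """State passed between pipeline components (illustrative only)."""
    question: str
    category: str = ""
    task: str = ""
    sub_questions: list[str] = field(default_factory=list)
    documents: list[str] = field(default_factory=list)
    drafts: list[str] = field(default_factory=list)
    cited_answers: list[str] = field(default_factory=list)


# Each function stands in for one of the eight components named in the
# article; the bodies are placeholder rules, not Liner's actual logic.

def classify_category(ctx: SearchContext) -> None:
    ctx.category = "academic" if "paper" in ctx.question.lower() else "general"


def classify_task(ctx: SearchContext) -> None:
    ctx.task = "research" if len(ctx.question.split()) > 8 else "lookup"


def decompose_question(ctx: SearchContext) -> None:
    # Decide whether the question needs to be split into sub-questions.
    parts = [p.strip() for p in ctx.question.split("?") if p.strip()]
    ctx.sub_questions = parts if ctx.task == "research" else [ctx.question]


def identify_documents(ctx: SearchContext) -> None:
    # Identify which documents are needed for each sub-question (stubbed).
    ctx.documents = [f"doc for: {q}" for q in ctx.sub_questions]


def run_external_tools(ctx: SearchContext) -> None:
    # Execute external tools such as a web search call (stubbed).
    ctx.documents.append(f"web result for: {ctx.question}")


def generate_intermediate_answers(ctx: SearchContext) -> None:
    ctx.drafts = [f"draft answer based on '{d}'" for d in ctx.documents]


def cite_sources(ctx: SearchContext) -> None:
    # Intermediate answers that carry their sources.
    ctx.cited_answers = [f"{a} [source: {d}]" for a, d in zip(ctx.drafts, ctx.documents)]


def manage_task(ctx: SearchContext) -> str:
    # Task management: assemble the final response from the pieces above.
    return "\n".join(ctx.cited_answers)


def answer(question: str) -> str:
    ctx = SearchContext(question)
    for step in (classify_category, classify_task, decompose_question,
                 identify_documents, run_external_tools,
                 generate_intermediate_answers, cite_sources):
        step(ctx)
    return manage_task(ctx)


if __name__ == "__main__":
    print(answer("What do recent papers say about retrieval-augmented generation?"))
```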

Liner Search LLM specializes in solving problems flexibly and deriving accurate answers, and its processing cost per token is on average 30-50% lower than GPT-4.1's.
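As a back-of-the-envelope illustration of what a 30-50% lower per-token cost implies, with an assumed baseline price and token volume (neither figure is actual Liner or OpenAI pricing):

```python
# Assumed baseline price; not actual GPT-4.1 or Liner pricing.
BASELINE_USD_PER_1K_TOKENS = 0.01
TOKENS = 5_000_000  # e.g. five million tokens of monthly search traffic

baseline_total = TOKENS / 1_000 * BASELINE_USD_PER_1K_TOKENS

for reduction in (0.30, 0.50):  # the 30-50% range cited in the article
    print(f"{reduction:.0%} cheaper: ${baseline_total * (1 - reduction):,.2f} "
          f"vs. baseline ${baseline_total:,.2f}")
```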

Jo Hyun-seok, Liner's tech lead, said, “How data is learned and how questions are processed are key to reducing AI hallucination,” adding, “We will continue to provide an optimized, accurate search experience focused on research and actively target the global market.”
