NC AI announced on the 30th that it has released a lightweight multimodal vision-language model (VLM).
The new model, "Barco Vision 2.0 1.7B," is a multimodal artificial intelligence (AI) model that can run in on-device environments. It understands images and text simultaneously, can analyze multiple images at once, and handles complex data such as documents, tables, and charts.
It is a lightweight model with 1.7B (1.7 billion) parameters, following the previously released mid-sized model "Barco Vision 2.0 14B." According to the company, Barco Vision 2.0 1.7B was validated on benchmarks such as MT-Bench, K-SEED, K-LLaVABench, CORD, and ICDAR, with a focus on text processing and Korean-language performance, in comparisons against InternVL3 2B and Ovis2 2B.
NC AI also said the model achieved results comparable to models twice its size on major benchmarks such as MMMU, AI2D, MathVista, and MM-Vet.
The 1.7B model is designed to run not only in cloud environments but also on general-purpose devices such as smartphones and PCs. Because inference happens on the device, user data is not transmitted to external servers, the model works without a network connection, and it avoids issues such as communication latency and server load.
The model will be released as open source for research purposes. NC AI, which has pledged to open-source the entire Barco Vision 2.0 series, is making the model freely available to developers and researchers. The company said the release can also support research and education, improving the transparency and accessibility of the technology.
NC AI is currently considering participating in the government-led independent AI foundation model project. It plans to broaden the applications of its AI models based on experience across various industries and to keep working toward technological independence and accessibility.
Lee Yeon-soo, CEO of NC AI, said, "We have laid the foundation for applying AI in a variety of environments through lightweight models," adding, "We will continue to develop AI models in forms accessible to everyone."