KT Cloud announced on the 30th that it has signed a business agreement with Intel to collaborate on artificial intelligence (AI) and cloud businesses, accelerating the expansion of the 'AI Foundry' ecosystem.
The agreement aims to combine Intel's advanced semiconductor technology with KT Cloud's AI service platform, 'AI Foundry', maximizing the performance of AI and cloud services while improving cost efficiency. Through this, the two companies plan to deliver innovative solutions to customers.
KT Cloud is continuously exploring ways to improve the cost-effectiveness of the inference infrastructure at the center of AI Foundry, and is considering adopting Intel's AI accelerator, Gaudi, as one option. The two companies also plan to cooperate on developing cloud-specialized products and on related technologies.
'AI Foundry' is a project that works with verified partners in AI fields such as RAG (Retrieval-Augmented Generation), AI models, and inference infrastructure to meet enterprises' AI demands 'end to end'. KT Cloud plans to offer lightweight AI models and modular RAG services together with Upstage, Denotitia, Polaris Office, and Rebellion, enabling companies to easily build highly reliable AI systems.
Ahead of the official launch of the AI Foundry service, KT Cloud plans to run a customer-participation pilot program, offering the service free of charge to selected companies throughout August. The participation method and selection process will be announced in a webinar broadcast on the KT Cloud portal on July 24.
Hans Chuang, head of Intel's sales and marketing group for the Asia-Pacific region, said, 'We will work together so that Intel's technology delivers practical results from the proof-of-concept (PoC) stage through commercialization.'
Choi Ji-woong, CEO of KT Cloud, said, 'Through the cooperation between our two companies, we will be able to present economically viable and scalable alternatives for serving large-scale AI models. We aim to address the problem of rising inference costs and make a practical contribution to the AI ecosystem.'