Google Cloud, which has guaranteed data residency in Korea for generative AI features in Vertex AI since 2023, has recently expanded the service so that machine learning processing can also be carried out in Korea.
Ji Gi-seong, CEO of Google Cloud Korea, said at a media briefing during the Google Cloud Day Seoul event at COEX in Gangnam-gu, Seoul, on the 8th, "We are continuously expanding our infrastructure around the Seoul region to strengthen our presence in the Korean market." Google Cloud Day is an experiential conference on AI, data, and infrastructure technologies that Google hosts in key regions worldwide; developers and information technology (IT) leaders attend to see the latest cloud solutions and innovation cases.
Google Cloud announced on that day that it would strengthen its 'Sovereign Cloud' strategy, under which data is both stored and processed domestically. CEO Ji Gi-seong noted, "Google Cloud supports customers for whom data boundaries matter by enabling them to operate services in the Seoul region," adding, "Furthermore, we uniquely offer a version that can run entirely within the customer's own domain, without any connection to Google Cloud."
Google Cloud established its Seoul region data center in 2020. The Seoul region is equipped with large-scale infrastructure, including servers, silicon chips, storage devices, and network equipment. It is also connected to Google's private network, which spans more than 200 countries, offering high bandwidth with minimal latency. On that day, Google Cloud announced that not only data storage but also machine learning processing can now be carried out in the Seoul region.
CEO Ji Gi-seong stated, "Gemini 2.5 Flash, recently launched in the Seoul region, delivers strong performance for its price, and many Korean companies are adopting it," adding, "The significance of Gemini 2.5's launch in the Seoul region is that it does not merely process data in the Seoul region; the machine learning itself is performed in Seoul."
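In practice, data residency on Vertex AI comes down to which region a workload targets; Google Cloud's region code for Seoul is asia-northeast3, and regional API endpoints follow the pattern {region}-aiplatform.googleapis.com. As a minimal sketch (the project ID is a placeholder and the exact model ID should be verified against current Vertex AI documentation), a Seoul-pinned request endpoint for Gemini 2.5 Flash could be constructed like this:

```python
# Sketch: building a region-pinned Vertex AI endpoint so that requests are
# both stored and served in the Seoul region. "my-project" is a placeholder.
SEOUL_REGION = "asia-northeast3"  # Google Cloud region code for Seoul
MODEL = "gemini-2.5-flash"        # model named in the article

def regional_endpoint(project: str, region: str, model: str) -> str:
    # Vertex AI regional endpoints use the {region}-aiplatform.googleapis.com
    # host; targeting asia-northeast3 keeps processing in Seoul rather than
    # routing through a global endpoint.
    return (
        f"https://{region}-aiplatform.googleapis.com/v1/"
        f"projects/{project}/locations/{region}/"
        f"publishers/google/models/{model}:generateContent"
    )

print(regional_endpoint("my-project", SEOUL_REGION, MODEL))
```

Calling the endpoint still requires authentication and a Vertex AI-enabled project; the point of the sketch is only that the region appears in both the hostname and the resource path, which is how residency is enforced.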
Google Cloud emphasized that it continually introduces hardware optimized to support customers' AI businesses. In April, it unveiled 'Ironwood,' Google's seventh-generation Tensor Processing Unit (TPU), designed specifically for large-scale AI inference. Ironwood is the most powerful and energy-efficient TPU Google has built to date and is expected to be officially released soon. In addition, Google Cloud provides Korean companies with access to the latest AI model research from Google DeepMind through 'Vertex AI.'
CEO Ji Gi-seong remarked, "The arrival of Ironwood signifies that we have entered an era in which AI shifts toward models that proactively generate interpretations and insights," and "Leading models such as Gemini 2.5 and AlphaFold, which earned a Nobel Prize, run on TPUs." He added, "Gemini 2.5 Flash is about 24 times more cost-efficient than GPT-4o mini and roughly five times more cost-efficient than the DeepSeek model," highlighting its competitiveness.
Meanwhile, Yoo Young-jun, Chief Operating Officer (COO) of Wrtn Technologies, a major domestic startup that uses Google Cloud, also attended the briefing and explained the advantages of Google's models and infrastructure. COO Yoo said, "Based on Wrtn's own test results, the Gemini 2.5 model family meets our standards, and we use it where appropriate within our service alongside various large language models (LLMs)," adding, "Among the LLMs we use, Gemini 2.5 offers better cost-effectiveness and a broader range of applications than existing models, so we expect more high-performance models to be launched going forward."