When OpenAI was established in 2015, Tesla Chief Executive Officer (CEO) Elon Musk and Sam Altman, now OpenAI's CEO, presented a shared philosophy. Their relationship dates back to the early 2010s, when Altman visited Musk's SpaceX factory, and the two went on to co-found OpenAI in 2015.
However, following an internal power struggle in 2018, their relationship soured, descending into harsh words and mockery, including labels such as "fraud" and "an organization turned into a devil." Today, the two continue to clash over who is truly developing AI for humanity, with conflicts spanning technology, safety, platforms, legal battles, and political interests.
◇ xAI fails to implement safety measures… OpenAI strengthens risk response
According to industry sources on the 15th, xAI, led by CEO Musk, unveiled a draft artificial intelligence (AI) safety framework at the 'AI Seoul Summit' in February and promised to release a revised version within three months. To date, however, there has been no announcement or official explanation. The watchdog group 'The Midas Project' criticized that "CEO Musk has not even implemented the global agreement he personally signed."
The draft contains ambiguous wording, stating that it "only applies to models developed in the future," and lacks crucial risk-identification and mitigation measures. Given that CEO Musk has consistently warned of the dangers of AI, xAI's passive response can only be seen as contradictory.
A bigger issue lies in day-to-day operational ethics. xAI's chatbot "Grok" has complied with requests to digitally strip clothing from images of women and has used profanity more frequently than competing services. The nonprofit AI watchdog Safer AI assessed in a recent report that xAI has "the lowest risk management capabilities in the industry."
Myungju Kim, head of the AI Safety Research Institute at the Electronics and Telecommunications Research Institute (ETRI), noted that "CEO Musk was the first to sound the alarm over the potential threats of AI, but xAI's actions contradict that belief." He added, "Beyond the failure to release the revised safety framework promised at the AI Seoul Summit, there is a consistent pattern of disregarding ethical governance, as seen in the dismantling of the ethics team after the acquisition of Twitter (now X)."
On the 14th (local time), OpenAI CEO Altman announced the establishment of a Safety Evaluations Hub, through which the company will transparently disclose the results of testing its models for harmful content generation, hallucination, and jailbreaks.
OpenAI had previously drawn criticism over its handling of safety reports for some flagship models, either keeping them confidential or rushing releases.
OpenAI also came under fire when GPT-4o allegedly affirmed risky statements by responding excessively positively to user requests, prompting the company to roll the update back to a previous version. The company announced that some features would be limited to an "alpha test" format, gathering feedback from test groups before wider release.
◇ Competition for AI leadership expanded to SNS, supercomputers, and legal disputes
The conflict between the two has broadened beyond technology and safety into a contest for industry leadership. Last year, CEO Musk announced plans to deploy up to 1 million high-performance 'H100' graphics processing units (GPUs) at xAI and unveiled an expansion of 'Colossus,' the world's largest AI supercomputer. In February, he released 'Grok 3,' which some evaluations rated as surpassing OpenAI's 'GPT-4o,' signaling his determination to catch up with OpenAI.
OpenAI CEO Altman is reportedly developing an image-centric social media (SNS) platform similar to 'X' that uses ChatGPT's image generation capabilities to let users create and share content. This foreshadows a direct business clash with CEO Musk, who owns X.
Legal and emotional disputes have also intensified. Late last year, CEO Musk filed suit in a California court, claiming that OpenAI had abandoned its original goal of developing nonprofit, open-source AI and had transformed into a closed corporation serving Microsoft (MS). He requested a court order halting the transition to a for-profit entity and the partnership with MS, stating, "The nonprofit ideal expected by early investors has been destroyed."
In response, CEO Altman countered that "it was CEO Musk who first proposed commercialization in 2017, and when his proposal was rejected, he left the company to start his own venture." OpenAI has also filed a counterclaim, arguing that CEO Musk is obstructing the normal management of the company.
The court declined CEO Musk's request for a temporary restraining order, but the main suit will proceed to a jury trial early next year. The conflict has escalated beyond a mere legal dispute into a confrontation between two founders who once declared "We must do this" when establishing OpenAI.
Director Kim analyzed that "CEO Musk places a strong premium on freedom of expression and minimizes regulatory or ethical intervention, while CEO Altman leans toward securing corporate trust and following global regulatory trends." He noted that "the conflict between the two is not merely an emotional battle but a framing war over who will hold public and market leadership in AI technology."
He added, "OpenAI's internal ethics and safety functions have operated systematically so far, but with key personnel continuing to depart, it is uncertain whether that stance can be maintained."