In context: OpenAI exclusively depends on Nvidia's GPUs to train and run its powerful AI models. However, it's looking to change that. Sam Altman is considering partnering with a silicon designer to manufacture an OpenAI chip. He is reportedly negotiating with Broadcom and other chip designers, but his ambitions go far beyond producing a proprietary AI chip.
The Information reports that Sam Altman's vision of OpenAI developing its own AI chips to reduce its dependence on Nvidia's GPUs has led him to meet with various semiconductor designers. The talks are part of a broader push by Altman to shore up not only OpenAI's supply of components but also the infrastructure – including power infrastructure and data centers – needed to run these powerful AI models.
As part of this initiative, OpenAI is hiring former Google employees who worked on its Tensor Processing Unit (TPU). Earlier this year, reports emerged that Altman was seeking to raise billions of dollars to set up a network of semiconductor factories.
A partnership with Broadcom makes sense for OpenAI. Broadcom has significant experience designing custom AI accelerators, most notably through its collaboration with Google on the TPU. The success of Google's widely deployed TPUs, now in their sixth generation, demonstrates Broadcom's ability to deliver high-performance AI accelerators at scale.
Broadcom also has deep expertise in custom ASIC design, which aligns well with OpenAI's need for an AI accelerator tailored to its specific requirements. As a fabless chip designer, Broadcom offers a wide range of silicon crucial to data center operations, including networking components, PCIe controllers, SSD controllers, and custom ASICs, so OpenAI could leverage Broadcom's complete vertical stack to meet its data center needs. Broadcom's offerings in intra-system and system-to-system communication could also give OpenAI a more comprehensive solution for its AI infrastructure.
OpenAI is unlikely to rival Nvidia's technological prowess anytime soon, as even an optimistic timeline would not see a new chip produced until 2026. However, the company has been exploring ways to become more self-reliant in its quest for artificial general intelligence. Earlier this year, for example, it opened an office in Japan to tap new revenue streams and collaborate with local businesses, governments, and research institutions. It also partners with organizations like Khan Academy and Carnegie Mellon University to develop personalized learning experiences using AI.