
Chipmakers Bitten by the AI Bug

After initially working with Microsoft on a bespoke chip project, Taiwanese chipmaker TSMC has shifted its focus to building chips suited for AI-based applications

The world’s largest chipmaker, TSMC (Taiwan Semiconductor Manufacturing Co.), which had been associated with a bespoke project to develop a proprietary AI chipset for Microsoft, is now going full steam ahead with developing chips suited for AI-based applications and high-performance computing. 

Media reports over the past few years had suggested that this shift was planned with the lower offtake of smartphones in a plateauing market in mind. The company reportedly wants to avoid repeating the mistake Intel made in the semiconductor business when it failed to shift from the PC market to the smartphone market. 

An old story that’s bearing fruit now

TSMC co-chief executive Mark Liu had said during an earnings call some years ago that the company expected high-performance computing to become a major growth engine in 2020, and that demand for high-performance computing products would really pick up by 2025. So the move appears to be yet another case of TSMC being ahead of the curve in chipmaking. 

The company had said that more than half of its customers placing pre-orders for its 7-nm chips planned to adopt them in high-performance computing applications. In fact, a report in Digitimes said that sub-7nm process manufacturing had already brought in significant AI chip orders amid a surge in demand for generative AI applications. 

Among the companies that have placed orders for AI chips at TSMC are Nvidia, Apple, and AMD. In response to the growing demand for its top AI GPUs, such as the A100 and H100, Nvidia has placed additional wafer supply orders with TSMC. 

However, this increased demand has put a strain on TSMC’s monthly CoWoS (Chip-on-Wafer-on-Substrate) packaging capacity of 8,000 to 9,000 wafers, causing its CoWoS supply to become tight. Despite this, TSMC remains optimistic about the growth of its CoWoS technology and is actively seeking ways to meet the increased demand.

Microsoft’s Athena is also on the cards

Earlier, Microsoft had worked on its own AI chipset, internally called “Athena”. Work began in 2019, with the initial version planned on TSMC’s 5-nm process, though it is possible that the 7-nm process has since replaced it. Microsoft was also considering multiple generations of chips as part of this project. 

A report published in The Information had suggested that the company’s goal was to save money on chips from suppliers like Nvidia. Given that Microsoft has pumped billions into OpenAI and its ChatGPT-enabled Bing search to beat the competition in AI and large language models (LLMs), this is a very purposeful move. 

In fact, reports suggest that the Athena chips, or whatever they eventually get called, would be designed to train LLMs and similar software, as well as run inference on the data they acquire during training. The report said these chips are already in use among a small subset of Microsoft employees as well as at OpenAI. 

So far, several companies training LLMs, both locally and in the cloud, have gone with Nvidia offerings built around its powerful graphics cards. Market experts believe that if Microsoft gets Athena right, it could cut microprocessor costs by as much as a third compared with current offerings. 

Given that Microsoft appears to be taking on other big tech companies by adding AI capabilities to its products beyond Bing Chat, the Athena move could save it billions. Of course, there is already competition in this space, as Google, Amazon and Meta are known to be experimenting with their own in-house silicon. 
