Alphabet, the parent company of Google, on Tuesday debuted Trillium, the latest chip in its artificial intelligence data center chip family. The company claims this version is nearly five times faster than its predecessor.
In a press call with reporters, Alphabet CEO Sundar Pichai said, “Industry demand for (machine learning) compute has grown by a factor of 1 million in the last six years, roughly increasing 10-fold every year,” adding, “I think Google was built for this moment, we’ve been pioneering (AI chips) for more than a decade.”
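Pichai's two figures are consistent with each other: 10-fold annual growth compounded over six years works out to a factor of one million. A minimal back-of-envelope check, using the growth rate and time span stated in the quote:

```python
# Sanity check on the quoted claim: ~10x annual growth in ML compute
# demand, compounded over six years.
annual_growth_factor = 10
years = 6

total_growth = annual_growth_factor ** years
print(total_growth)  # 1000000, i.e. the "factor of 1 million" Pichai cites
```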
Alphabet’s effort to build custom chips for AI data centers is one of the few practical alternatives to Nvidia’s market-dominating top-tier processors. Together with the software closely tied to Google’s tensor processing units (TPUs), the chips have allowed the company to capture a sizeable share of the market.
Nvidia controls about 80% of the market for AI data center chips, with Google’s TPUs accounting for the great majority of the remaining 20%. Google does not sell the chips themselves; instead, it leases access to them through its cloud computing platform.
According to Google, the sixth-generation Trillium chip—which powers technology that produces text and other media from massive models—delivers 4.7 times the computational performance of the TPU v5e. The Trillium processor is also 67% more energy-efficient than the v5e.
Cloud customers will be able to purchase access to the new chip starting in “late 2024,” according to the company.
Google’s engineers achieved these performance gains in part by increasing the chip’s high-bandwidth memory capacity and overall bandwidth. The large quantities of advanced memory that AI models require have been a bottleneck to further speed improvements.
The chips are designed to be deployed in pods of 256, which the company says can be scaled up to hundreds of pods.
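The article's pod figures imply deployments in the tens of thousands of chips. A rough illustration, where the pod count is an assumption taken as a lower bound for "hundreds of pods" (the article does not give an exact number):

```python
# Illustrative scaling arithmetic only; 256 chips per pod is from the
# article, while the pod count is an assumed lower bound for "hundreds".
chips_per_pod = 256
pods = 100  # assumption: lower end of "hundreds of pods"

total_chips = chips_per_pod * pods
print(total_chips)  # 25600 chips at the low end of the stated scale
```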