Last week, Nvidia announced a new cloud service that brings AI supercomputing to the market. The company’s founder and CEO, Jensen Huang, introduced the artificial intelligence service at the GTC event.
Simply put, the well-known graphics card maker is launching a cloud service that lets customers rent the capabilities of powerful supercomputers, the same class of systems used to train ChatGPT and other artificial intelligence technologies. Nvidia also announced its DGX AI supercomputing system, which combines eight of its flagship A100 or H100 chips.
Unbeknownst to many, the Ampere and Hopper chips sold in China are the A800 and H800, variants of the A100 and H100, and these are the chips Chinese developers primarily use to build large language models. Businesses will be able to lease access to the service for roughly US$37,000 per month. This kind of access could well accelerate the pace of artificial intelligence development.
Furthermore, according to Jensen Huang: “We will work with cloud service providers in Europe and America to provide the AI supercomputing capability of NVIDIA’s DGX systems. In China, we offer specially adapted Hopper and Ampere chips. Chinese cloud service providers such as Alibaba Group Holding Ltd, Tencent, and Baidu Inc. will supply the capability and deployment. Chinese start-ups will have the opportunity to build their own large language models, and I have full confidence in their ability to deliver top-notch system services.”