Speech recognition had an error rate of about 16% when Apple’s Siri launched early last decade. That meant it wouldn’t understand many of the words and sentences we spoke to her, and so it gave no answer, or a wrong one. But as we spoke to her more, she learnt. Today, speech recognition systems have significantly lower error rates, and they can even understand accents. But it has taken years to get there.
If you need to build great AI systems quickly, you need to throw a lot of data and compute power at them. More and more use cases are emerging where the AI system needs to understand instantaneously what is going on in order to respond to it. Braking by autonomous cars is a classic one.
Chips with AI acceleration, and chips designed specifically for AI, are emerging to deal with this. Semiconductor companies, startups, and even companies like Google, Apple, Amazon and Facebook, for whom AI is central to what they do, are all developing such chips. A lot of this work is happening in India, one of the world’s foremost chip design hubs.
Srikanth Velamakanni, cofounder & CEO of analytics company Fractal Analytics, says such chips are essential to deal with the massive volumes of data that many systems now generate. “One flight of an aircraft generates more data than Google generates in a day, because it’s got so many sensors and such high-velocity data coming through,” he says. The story is similar in factories and industrial equipment. “You have to comb through all this data in real-time to see what may be failing. Human beings are not capable of that. It also needs hybrid computing, a combination of edge and server. We are looking at Intel’s new processors with AI acceleration to see how much of a performance boost we can create in these kinds of applications,” he says.
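The hybrid edge-and-server pattern Velamakanni describes can be sketched in a few lines. The Python example below is a minimal illustration, not Fractal’s implementation: a cheap statistical check runs on the edge device against a window of recent sensor readings, and only the rare suspicious reading is escalated to a heavier server-side model. All names, thresholds and the stand-in classifier are hypothetical.

```python
from collections import deque
from statistics import mean, stdev

WINDOW = 50           # recent readings kept on the edge device
EDGE_Z_THRESHOLD = 3  # z-score beyond which a reading looks anomalous

history = deque(maxlen=WINDOW)

def edge_check(reading: float) -> bool:
    """Return True if the reading deviates sharply from the recent window."""
    if len(history) < WINDOW:
        history.append(reading)
        return False
    z = abs(reading - mean(history)) / (stdev(history) or 1e-9)
    history.append(reading)
    return z > EDGE_Z_THRESHOLD

def server_classify(reading: float) -> str:
    """Hypothetical stand-in for a heavier model running server-side."""
    return "failure_imminent" if reading > 100 else "transient_spike"

def process(reading: float) -> str:
    # Fast path: most readings never leave the edge device.
    if not edge_check(reading):
        return "ok"
    # Slow path: escalate the rare suspicious reading to the server.
    return server_classify(reading)

if __name__ == "__main__":
    import random
    stream = [random.gauss(50, 2) for _ in range(200)] + [120.0]
    print([process(r) for r in stream][-5:])
```

The point of the split is the one made in the quote: the edge handles the high-velocity firehose in real time, while the server is reserved for the small fraction of readings that actually need heavy computation.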
Fractal is also looking at these chips for a solution it calls Customer Genomics, which mines massive volumes of customer data, such as at banks, and recommends the next best action for each customer.
Ruchir Dixit, India country manager at semiconductor tools maker Siemens EDA, says such analytics can be done in software, but not fast enough. Many have used GPUs, which are designed for heavy-duty graphics processing, but even those fall short of emerging requirements. “A machine learning algorithm implemented in hardware is always orders of magnitude faster. When I launch a software programme on my laptop, it has to find time on the CPU, even as the CPU deals with other computations it may be involved in, like an ongoing video call. But if you put it in hardware, it doesn’t care what else you are doing; it will do it immediately, because that’s what it is designed to do,” he says.
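The offloading pattern Dixit describes is visible even at the framework level. The sketch below is a minimal PyTorch example that uses a GPU as a stand-in for dedicated AI silicon: the same inference call either competes for CPU time with everything else the machine is doing, or is handed to an accelerator that does nothing else. It illustrates the general pattern, not any specific chip mentioned in the article.

```python
import torch

# A small model whose inference we can run on either device.
model = torch.nn.Sequential(
    torch.nn.Linear(512, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 10),
)
batch = torch.randn(256, 512)

def infer(device: str) -> torch.Tensor:
    """Run the same forward pass on the given device."""
    m = model.to(device)
    x = batch.to(device)
    with torch.no_grad():  # inference only, no gradients
        return m(x)

# On the CPU, this call is scheduled alongside the rest of the
# machine's work; on a dedicated accelerator it runs unimpeded.
out_cpu = infer("cpu")
if torch.cuda.is_available():  # GPU as a stand-in for AI silicon
    out_gpu = infer("cuda")
```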
Different AI functions may require different chips. Understanding images in a car, with its limited power budget and limited ability to dissipate heat, may require a different AI chip from one in a factory, which has AC power coming in and where how much heat the hardware gives off may not be a concern.
Alok Jain, VP of R&D at semiconductor tools company Cadence Design Systems, says there are chips built specifically for speech recognition and for face recognition. “It all depends on the level of complexity, the number of cores that are required, the size of data available to you. It depends on how dependent the variables it is dealing with are – if they are dependent, the communication between the cores becomes important,” he says.
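Jain’s point about dependent variables can be seen in a toy example. The Python sketch below (not from the article) contrasts an embarrassingly parallel workload, where cores never need to talk to each other, with a reduction, where each worker’s partial result must be communicated back and combined, so inter-core communication becomes part of the cost.

```python
from concurrent.futures import ProcessPoolExecutor
from functools import reduce

def square(x: int) -> int:
    # Independent work: each input is processed in isolation,
    # so cores never need to exchange data.
    return x * x

def partial_sum(chunk: list[int]) -> int:
    # Dependent work: each worker produces a partial result that
    # must be sent back and combined with the others.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]  # split across 4 workers

    with ProcessPoolExecutor(max_workers=4) as pool:
        # Independent: a pure map, no cross-worker communication.
        squares = list(pool.map(square, data[:8]))

        # Dependent: the combine step is where communication
        # between cores enters the picture.
        total = reduce(lambda a, b: a + b, pool.map(partial_sum, chunks))

    print(squares, total == sum(data))
```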
Given the variety of chips needed, even startups, he says, have found a great opportunity in serving various niches. SambaNova, Groq and Cerebras in the US are among them; in India, there are the likes of AlphaICs and QpiAI.
IBM recently announced Telum, its first processor with on-chip acceleration for AI inferencing, targeted at the financial services industry. Google is known to be building a large chip design team in India.
Prakash Mallya, MD of Intel India, says the choice between a chip designed for a specific AI application and a general-purpose chip with AI acceleration will depend partly on the software capabilities within the organisation. The former, he says, requires more software capability to program and to build the IT stack around.
Source: timesofindia.indiatimes.com