A new generation of computer hardware is making artificial intelligence processing faster and more energy efficient.
The amount of time, effort, and money needed to train ever-more-complex neural network models is soaring as researchers push the limits of machine learning. However, analog deep learning, a new branch of artificial intelligence, promises faster processing with lower energy use.
Just as transistors are the essential components of digital computers, programmable resistors are the fundamental building blocks of analog deep learning. By tiling arrays of programmable resistors in intricate layers, researchers have developed a network of analog artificial “neurons” and “synapses” that can perform calculations much like a digital neural network. Such a network can then be trained on complex AI tasks such as image recognition and natural language processing.
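To make the analogy concrete, the sketch below simulates stacked crossbar arrays in Python, with each array’s conductances standing in for a layer’s weights. All sizes, conductance values, and the crossbar_forward helper are illustrative assumptions, not details of the MIT device.

```python
# Illustrative sketch only: stacked crossbar arrays acting as network layers.
# Each resistor's conductance G[i, j] plays the role of a weight; applying
# input voltages and summing row currents (Ohm's law + Kirchhoff's current
# law) yields a matrix-vector product in a single parallel analog step.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical conductance matrices (siemens); sizes are arbitrary.
G1 = rng.uniform(1e-6, 1e-5, size=(8, 4))   # layer 1: 4 inputs -> 8 outputs
G2 = rng.uniform(1e-6, 1e-5, size=(3, 8))   # layer 2: 8 inputs -> 3 outputs

def crossbar_forward(G: np.ndarray, v_in: np.ndarray) -> np.ndarray:
    """Row currents I[i] = sum_j G[i, j] * v_in[j] for one crossbar array."""
    return G @ v_in

v = np.array([0.2, 0.5, 0.1, 0.3])          # inputs encoded as voltages
h = np.maximum(crossbar_forward(G1, v), 0)  # nonlinearity applied off-array
y = crossbar_forward(G2, h)                 # second layer's output currents
print(y)
```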
A multidisciplinary MIT research team set out to increase the speed of a particular kind of artificial analog synapse it had previously created. By using a practical inorganic material in the fabrication process, the researchers made their devices a million times faster than earlier versions, which is also roughly a million times faster than the synapses in the human brain.
In addition, this inorganic material makes the resistor exceptionally energy-efficient. Unlike the materials used in the previous generation of the device, the new material is compatible with silicon fabrication processes. This advance has enabled the fabrication of nanometer-scale devices and may pave the way for their integration into commercial computing hardware for deep-learning applications.
These programmable resistors greatly accelerate neural network training while significantly lowering its cost and energy use. As a result, they could help researchers develop deep learning models much more quickly for applications such as fraud detection, self-driving cars, and medical image analysis.
Accelerating deep learning
There are two key reasons analog deep learning is faster and more energy-efficient than its digital counterpart. First, computation is performed in memory, so massive amounts of data never have to be shuttled between memory and a processor. Second, analog processors perform operations in parallel: because all of the computations occur simultaneously, an analog processor needs no additional time as the matrix grows larger.
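The constant-time claim can be illustrated with a simple counting sketch; the two functions below are hypothetical bookkeeping that contrasts how the work scales, not measurements of real hardware.

```python
# Bookkeeping sketch, not a benchmark: contrast how compute steps scale.
def digital_mac_ops(n_rows: int, n_cols: int) -> int:
    """A sequential digital core needs one multiply-accumulate per weight."""
    return n_rows * n_cols

def analog_parallel_steps(n_rows: int, n_cols: int) -> int:
    """An idealized crossbar settles on all row currents in one step."""
    return 1

for n in (64, 512, 4096):
    print(f"{n}x{n}: digital ops = {digital_mac_ops(n, n):,}, "
          f"analog steps = {analog_parallel_steps(n, n)}")
```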
A protonic programmable resistor is the central component of MIT’s new analog processor technology. These resistors, measured in nanometers (one nanometer is one billionth of a meter), are arranged in a chessboard-like grid.
Learning occurs in the human brain through the strengthening and weakening of the synaptic connections between neurons. Deep neural networks have long mimicked this approach, with training algorithms programming the network weights. In this new processor, increasing and decreasing the electrical conductance of the protonic resistors enables analog machine learning, and that conductance is controlled by the movement of protons.
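As a rough illustration of conductance-based learning, the sketch below models training updates as discrete programming pulses that nudge each resistor’s conductance up or down by a fixed step. The bounds, step size, and program_weights helper are assumptions for demonstration, not the device’s measured behavior.

```python
# Hypothetical model of pulse-programmed conductances; all constants assumed.
import numpy as np

G_MIN, G_MAX = 1e-6, 2e-5   # assumed conductance bounds (siemens)
DELTA_G = 1e-7              # assumed conductance change per programming pulse

def program_weights(G: np.ndarray, grad: np.ndarray) -> np.ndarray:
    """Apply one pulse per cell: raise conductance where the gradient says the
    weight should grow, lower it where the weight should shrink."""
    pulses = -np.sign(grad)                       # +1 raises G, -1 lowers it
    return np.clip(G + pulses * DELTA_G, G_MIN, G_MAX)

G = np.full((2, 3), 1e-5)                         # example crossbar state
grad = np.array([[0.3, -0.1, 0.0],
                 [-0.2, 0.4, 0.1]])               # example loss gradient
print(program_weights(G, grad))
```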
Conclusion
Nanoscale ionic programmable resistors for analog deep learning are 1,000 times smaller than biological cells, but it is still unclear how much faster they can be than neurons and synapses. Scaling analyses of ion-transport and charge-transfer reaction rates point to operation in a nonlinear regime, in which strong electric fields develop across the solid electrolyte and its interfaces.
In this project, the researchers fabricated silicon-compatible nanoscale protonic programmable resistors that exhibit highly desirable characteristics under strong electric fields. This operating regime allowed protons to be shuttled and repositioned in a controlled way within nanoseconds, at room temperature and with low energy consumption. In addition, the devices showed symmetric, linear, and reversible modulation with many conductance states spanning a 20× dynamic range. The performance of all-solid-state artificial synapses in space, time, and energy can therefore far exceed that of their biological counterparts.
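To see what such modulation characteristics mean in practice, here is a hedged numerical sketch of a linear, symmetric ladder of conductance states spanning a 20× dynamic range; the state count and minimum conductance are assumed purely for illustration.

```python
# Assumed numbers: 100 states and G_min = 1 microsiemens, for illustration.
import numpy as np

G_MIN = 1e-6                  # assumed minimum conductance (siemens)
DYNAMIC_RANGE = 20            # G_max / G_min, as described above
N_STATES = 100                # assumed number of programmable states

# A linear ladder of states from G_min up to 20 * G_min: every up-pulse and
# down-pulse moves the device by the same amount (symmetric modulation).
states = np.linspace(G_MIN, DYNAMIC_RANGE * G_MIN, N_STATES)
steps = np.diff(states)
assert np.allclose(steps, steps[0])   # linear: all steps are equal

print(f"{states[-1] / states[0]:.0f}x dynamic range over {N_STATES} states")
```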
Source: indiaai.gov.in