Deep neural networks (DNNs) are changing how information is processed, and their exponential growth is straining computer hardware. This has made optical neural networks (ONNs), with their high clock rates, parallelism, and low-loss data transport, an attractive platform for running DNN workloads.
ONNs, however, suffer from high energy consumption due to low electro-optic conversion efficiency, and from low compute density due to large device footprints and channel crosstalk. The researchers experimentally demonstrate a spatial-temporal-multiplexed ONN system that addresses all of these challenges simultaneously.
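To make the multiplexing idea concrete, here is a minimal toy model (not the authors' implementation; all sizes and values are illustrative) of how a spatial-temporal-multiplexed system can compute a matrix-vector product: each spatial channel corresponds to one output neuron, the input is streamed over time steps, and a photodetector integrates the transmitted light to accumulate each dot product.

```python
import numpy as np

# Toy sketch of spatial-temporal multiplexing (illustrative only).
# W: weights, here imagined as optical modulator transmissions.
# x: input vector, here imagined as light intensity per time step.
rng = np.random.default_rng(0)
W = rng.uniform(0, 1, size=(3, 4))   # 3 spatial channels, 4 time steps
x = rng.uniform(0, 1, size=4)

# At time step t, channel i transmits W[i, t] * x[t];
# each detector integrates its channel's light over all time steps.
detector = np.zeros(3)
for t in range(4):                   # temporal multiplexing
    detector += W[:, t] * x[t]      # spatial channels act in parallel

# The integrated detector signals equal the matrix-vector product.
assert np.allclose(detector, W @ x)
```

The point of the sketch is only the bookkeeping: parallel spatial channels plus time integration together realize the multiply-accumulate operations at the heart of a DNN layer.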
Enhanced systems
Because it can produce articles, emails, and computer code based on a user’s instructions, ChatGPT has made headlines all over the world. A team of scientists has now created a technique that could lead to machine-learning systems far more powerful than the one underlying ChatGPT. Their technology could also consume far less energy than the supercomputers that currently power the most sophisticated machine-learning models.
Atomic-scale lasers
The researchers present the first experimental demonstration of the new system, which performs calculations with light rather than electrons. Compared with state-of-the-art digital computers for machine learning, the team reports a 25-fold increase in compute density and a more than 100-fold increase in energy efficiency.
In the study, the team writes that the work “opens a way for large-scale optoelectronic processors to speed up machine-learning tasks from data centers to decentralized edge devices.” In other words, software that today can only run in large data centers could eventually run on smartphones and other small devices.
Deep neural networks
DNNs like the one underlying ChatGPT are based on large machine-learning models that simulate how the brain processes information. While machine learning continues to grow, the digital technologies powering today’s DNNs are plateauing; they also require a great deal of electricity and are typically confined to very large data centers. This mismatch is fostering innovation in computing architecture. Performing DNN computations with light rather than electrons may overcome these limits, since optical computations can use far less energy than their electronic counterparts.
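A rough sense of why optics targets DNN workloads so directly: in a dense layer, almost all of the arithmetic is the linear matrix-vector product, which is exactly the operation light can carry out in parallel. The layer sizes below are illustrative, not taken from the paper.

```python
# Hedged back-of-the-envelope count for one dense layer (sizes are
# hypothetical): the linear matrix-vector product dominates the work.
n_in, n_out = 1024, 1024

macs_linear = n_in * n_out    # multiply-accumulates in the W @ x step
ops_nonlinear = n_out         # one activation evaluation per output

ratio = macs_linear // ops_nonlinear
print(ratio)                  # linear ops per nonlinear op
```

For these sizes, each nonlinear activation is outnumbered by roughly a thousand multiply-accumulates, which is why accelerating the linear part pays off.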
Optical neural networks
Today’s optical neural networks (ONNs), however, face significant challenges. For instance, they consume a lot of energy because they convert electrically encoded input data into light inefficiently. The necessary components are also large and take up considerable space. And while ONNs excel at linear calculations such as addition, they struggle with nonlinear ones such as multiplication and “if” statements.
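The linear/nonlinear split above can be sketched in a few lines. This is a generic illustration, not the paper's system: a dense layer's matrix-vector product is the linear step that optics handles well, while the activation function behaves like an element-wise "if" and is the part ONNs struggle to realize.

```python
import numpy as np

# Illustrative dense layer (random weights, not from the paper).
rng = np.random.default_rng(1)
W = rng.standard_normal((3, 4))
x = rng.standard_normal(4)

# Linear step: well suited to optical hardware.
z = W @ x

# Nonlinear step (ReLU): an element-wise "if z > 0 keep z, else 0",
# the kind of branching operation that is hard to do with light alone.
a = np.where(z > 0.0, z, 0.0)

# Equivalent formulation, confirming the conditional behaves as claimed.
assert np.allclose(a, np.maximum(z, 0.0))
```

In practice this is why many ONN designs compute the linear step optically and fall back to electronics for the nonlinearity.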