Researchers from the Indian Institute of Science (IISc), Bengaluru have demonstrated how a brain-inspired image sensor can go beyond the diffraction limit of light to detect minute objects, such as cellular components or nanoparticles, that are invisible to current microscopes. Their ground-breaking method, which combines optical microscopy with a neuromorphic camera and machine learning algorithms, represents a significant step forward in localizing objects smaller than 50 nanometers.
Since the development of optical microscopes, researchers have worked to overcome a challenge known as the diffraction limit: if two objects are closer together than a certain distance, typically 200–300 nanometers, the microscope cannot distinguish between them. According to Deepak Nair, associate professor at the Centre for Neuroscience (CNS), IISc, and lead author of the paper, efforts so far have mostly been directed at either improving the molecules being imaged or developing better illumination techniques.
The neuromorphic camera measures around 40 mm in height, 60 mm in width, and 25 mm in depth, and weighs about 100 grams. It has several benefits over traditional cameras and mimics how the human retina converts light into electrical impulses. In a standard camera, each pixel records the amount of light falling on it over the entire exposure period while the camera is focused on the object, and these pixels are then combined to form an image.
In neuromorphic cameras, each pixel operates independently and asynchronously, generating events or spikes only when the amount of light falling on that pixel changes. This produces sparser data and far less of it than typical cameras, which record every pixel value at a fixed rate regardless of whether the scene changes. The findings have been published in Nature Nanotechnology.
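The event-generation principle described above can be illustrated with a minimal sketch. This is not the camera's actual circuitry or the authors' pipeline, just a simplified software model in which a pixel emits an ON or OFF event whenever its log-intensity changes by more than a threshold since its last event; the function name and threshold value are illustrative assumptions.

```python
import numpy as np

def generate_events(frames, threshold=0.2):
    """Simplified event-camera model: emit (t, x, y, polarity) events
    whenever a pixel's log-intensity changes by more than `threshold`
    since that pixel's last event. `frames` is a (T, H, W) array."""
    eps = 1e-6
    ref = np.log(frames[0] + eps)          # per-pixel reference level
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        log_i = np.log(frame + eps)
        diff = log_i - ref
        ys, xs = np.where(np.abs(diff) > threshold)
        for y, x in zip(ys, xs):
            events.append((t, x, y, 1 if diff[y, x] > 0 else -1))
            ref[y, x] = log_i[y, x]        # reset reference after the event
    return events
```

Note how a static scene produces no events at all, which is why the data stream stays sparse under the fixed-rate frame capture that conventional cameras use.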
Such neuromorphic cameras can capture images under a wide variety of lighting conditions, from extremely dim to extremely bright. Neuromorphic cameras are well-suited for use in neuromorphic microscopy because of their asynchronous nature, high dynamic range, sparse data, and high temporal resolution, according to Chetan Singh Thakur, assistant professor in the department of electronic systems engineering (DESE), IISc, and co-author.
In the current study, the team used their neuromorphic camera to pinpoint individual fluorescent beads smaller than the diffraction limit by shining laser pulses at both high and low intensities and recording the variation in the fluorescence levels.
The scientists employed two techniques to precisely locate the fluorescent particles within the frames. The first was a deep learning algorithm, trained on approximately 1.5 million simulated images that closely matched the experimental data, which predicted where the centroid of the object might be, according to Rohit Mangalwedhekar, a former research intern at CNS and the study’s first author.
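To make the localization idea concrete, here is a sketch of the classical baseline that learned centroid predictors refine: the intensity-weighted centroid of a small image patch, which can place an emitter with sub-pixel accuracy. This is an illustrative textbook computation, not the study's trained network.

```python
import numpy as np

def centroid(patch):
    """Intensity-weighted centroid (x, y) of a small image patch.
    Weighting each pixel coordinate by its brightness localizes a
    point emitter to sub-pixel precision, below the pixel grid."""
    patch = np.asarray(patch, dtype=float)
    total = patch.sum()
    ys, xs = np.indices(patch.shape)
    return (xs * patch).sum() / total, (ys * patch).sum() / total
```

A deep network trained on simulated point-spread functions plays the same role, but can stay accurate in noise and background conditions where this simple estimator degrades.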
Molecules can become immobilized during biological processes such as self-organization, so being able to pinpoint a molecule’s center with the greatest possible accuracy is essential for understanding the general principles that permit self-organization. With this method, the team was able to closely monitor the motion of a fluorescent bead moving freely in an aqueous solution. According to the researchers, the approach can be applied broadly to precisely track and understand stochastic events in biology, chemistry, and physics.
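Once a bead has been localized in each frame, its stochastic motion is commonly characterized by the mean squared displacement (MSD) of the trajectory, which distinguishes free diffusion from immobilization. The sketch below is a standard single-particle-tracking computation offered for illustration, not a procedure taken from the paper.

```python
import numpy as np

def track_msd(positions):
    """Mean squared displacement of a trajectory as a function of lag.
    `positions` is an (N, 2) sequence of (x, y) localizations over time.
    A linear MSD-vs-lag curve indicates free (Brownian) diffusion; a
    flat curve indicates an immobilized particle."""
    positions = np.asarray(positions, dtype=float)
    n = len(positions)
    msd = np.empty(n - 1)
    for lag in range(1, n):
        disp = positions[lag:] - positions[:-lag]   # displacements at this lag
        msd[lag - 1] = (disp ** 2).sum(axis=1).mean()
    return msd
```

Because the event camera localizes the bead with high temporal resolution, such an analysis can resolve fast stochastic motion that frame-based imaging would blur.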