IISc researchers develop neuromorphic camera boosted by machine learning

The experimental setup. (Image credit: Rohit Mangalwedhekar).

The setup allows for nanoscopic imaging of cellular components and nanoparticles

  • The deep learning algorithms were trained on simulated data to predict the locations of objects.
  • A wavelet segmentation algorithm was also used to determine the centroids of objects.
  • Combining the deep learning and wavelet segmentation algorithms gave more accurate predictions of object positions than conventional methods (a minimal illustrative sketch follows this list).
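
As a rough illustration of how such a fusion might work (this is not the authors' code: the synthetic spot, the mocked `cnn_predict_xy` model and the simple averaging rule are assumptions made here for demonstration only):

```python
# Minimal sketch: fuse a deep-learning position estimate with a
# wavelet/centroid-style estimate for a single fluorescent spot.
# The CNN is mocked; `cnn_predict_xy` and the averaging fusion rule
# are illustrative assumptions, not the published method.
import numpy as np

def simulate_spot(size=32, center=(15.3, 16.7), sigma=2.0, photons=500.0, noise=2.0):
    """Synthetic diffraction-limited spot: Gaussian PSF plus camera noise."""
    y, x = np.mgrid[0:size, 0:size]
    psf = np.exp(-((x - center[1]) ** 2 + (y - center[0]) ** 2) / (2 * sigma ** 2))
    return np.clip(photons * psf + np.random.normal(0.0, noise, (size, size)), 0.0, None)

def centroid_estimate(img, k=3.0):
    """Intensity-weighted centroid of above-background pixels -- a simple
    stand-in for the wavelet-segmentation step described in the article."""
    bg, spread = np.median(img), img.std()
    w = np.where(img > bg + k * spread, img - bg, 0.0)
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return (w * y).sum() / w.sum(), (w * x).sum() / w.sum()

def cnn_predict_xy(img):
    """Placeholder for the trained deep-learning model's prediction."""
    return centroid_estimate(img)  # a real model trained on simulated data would go here

def fused_estimate(img):
    """Average the two estimates; the paper's actual fusion rule is not given here."""
    (cy, cx), (py, px) = centroid_estimate(img), cnn_predict_xy(img)
    return (cy + py) / 2.0, (cx + px) / 2.0

print(fused_estimate(simulate_spot()))  # should land near (15.3, 16.7)
```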

Researchers at the Indian Institute of Science (IISc) have developed a novel technique that combines optical microscopy with a neuromorphic camera and machine learning algorithms to go beyond the diffraction limit of light and detect minute objects such as cellular components and nanoparticles. The diffraction limit is a barrier that prevents conventional microscopes from distinguishing two objects that are closer together than a particular distance, typically between 200 and 300 nanometers. Scientists have tried to overcome the diffraction limit of optical microscopes using a number of techniques, including modifying the molecules being investigated and developing superior strategies for illuminating the subjects.
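
For context, the 200–300 nanometer figure matches the classical Abbe limit on lateral resolution, a textbook relation rather than a result of the paper; for green light (λ ≈ 550 nm) and a high-numerical-aperture objective (NA ≈ 1.4) it works out to roughly 200 nm:

```latex
% Abbe lateral resolution limit (standard textbook relation, not from the paper)
d = \frac{\lambda}{2\,\mathrm{NA}}
  \approx \frac{550\ \text{nm}}{2 \times 1.4}
  \approx 196\ \text{nm}
```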

The 2014 Nobel Prize in Chemistry was awarded to Eric Betzig, Stefan W Hell and William E Moerner ‘for the development of super-resolved fluorescence microscopy’. The researchers from IISc have tackled the problem with a unique approach. Deepak Nair, corresponding author of the study, says, “Very few have actually tried to use the detector itself to try and surpass this detection limit.” The neuromorphic camera used in the method mimics the human retina in the way it converts light into electrical impulses. Unlike in conventional cameras, each individual pixel generates events, or spikes, only when the intensity of the light falling on it changes.
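
The general operating principle of such an event-driven (neuromorphic) pixel can be sketched as follows; the contrast threshold and the frame-based sampling below are illustrative assumptions, not the parameters of the camera used in the study:

```python
# Sketch of an event-camera pixel: it emits an ON/OFF event only when the
# log-intensity change since its last event crosses a contrast threshold.
# The threshold value and frame-based sampling are illustrative assumptions.
import numpy as np

def events_from_frames(frames, threshold=0.2, eps=1e-6):
    """Yield (t, y, x, polarity) events from a stack of intensity frames."""
    log_ref = np.log(frames[0] + eps)              # per-pixel reference level
    for t, frame in enumerate(frames[1:], start=1):
        log_now = np.log(frame + eps)
        diff = log_now - log_ref
        ys, xs = np.where(np.abs(diff) > threshold)
        for y, x in zip(ys, xs):
            yield (t, int(y), int(x), 1 if diff[y, x] > 0 else -1)
            log_ref[y, x] = log_now[y, x]          # reset reference after the event

# A dim scene in which a small patch brightens: only the changed pixels fire.
frames = np.ones((3, 8, 8))
frames[1:, 3:5, 3:5] = 2.0
print(list(events_from_frames(frames)))            # four ON (+1) events at t=1
```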

The researchers used these variations in intensity at individual pixels to differentiate between objects smaller than the diffraction limit. Chetan Singh, co-author of the study, says, “Such neuromorphic cameras have a very high dynamic range (>120 dB), which means that you can go from a very low-light environment to very high-light conditions. The combination of the asynchronous nature, high dynamic range, sparse data, and high temporal resolution of neuromorphic cameras make them well-suited for use in neuromorphic microscopy.”
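
For reference, under the usual 20·log10 convention for sensor dynamic range (our reading, not a figure from the paper), 120 dB corresponds to roughly a million-to-one ratio between the brightest and dimmest detectable intensities:

```latex
% Converting the quoted dynamic range to an intensity ratio
% (standard sensor convention, not a figure from the paper)
\frac{I_{\max}}{I_{\min}} = 10^{\,\mathrm{DR}/20} = 10^{120/20} = 10^{6}
```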

A paper describing the findings has been published in Nature Nanotechnology.
