You’re likely familiar with the eerie green glow that colours the nighttime surroundings when viewed through a pair of night-vision goggles. That colour is no accident: the human eye is most sensitive to light with wavelengths around 555 nanometres, which appears vivid green. As a result, under the green hue of night vision we can pick out details, such as the leaves on a tree, better than under other shades.
However, monochromatic technology is about to get a technicolour makeover.
In a new study published in the journal PLOS ONE, researchers at the University of California (UC), Irvine used machine learning to turn the view through a night-vision scope or camera into a genuine rainbow of colours. The advance could assist not only the military but also medical technology and even more specialised activities such as art restoration.
People see in the visible light spectrum, which extends from around 400 nanometres (where the colour violet sits) to 700 nanometres (where we see red). So in a scene illuminated only by infrared light at, say, 800 nanometres, it’s understandable that we can see almost nothing.
Some technologies, however, allow humans to view such a scene as if they were looking through an infrared camera. Infrared imaging can also be safer than the alternative: “relying on visible light can destroy sensitive tissues like the eye or other fragile biological samples in a lab,” according to Dr Andrew Browne, the study’s lead author and an ophthalmologist and biomedical engineer at UC Irvine.
Scientists have used machine learning to help cameras see in the dark before. Browne and his colleagues took a different tack, feeding colour information to neural networks (computer programs loosely modelled on the brain) trained on hundreds of printed images.
“The way neural networks are trained is just like if I gave you 100 pictures of a person’s face and I circle the nose in every single one of those pictures, then the neural network would learn to recognize labelled objects,” Browne explained.
“What we did with [our] neural network is we gave it hundreds of pictures containing data on the visible and infrared spectrum.”
Using deep learning, the UC Irvine researchers predicted the visible-spectrum image from infrared photos taken at three different wavelengths. With this training, the neural networks were then asked to reconstruct the colour of new images captured by a night-vision camera, and the results weren’t bad at all.
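The idea of learning a colour mapping from paired infrared and visible images can be illustrated with a toy sketch. This is not the paper’s actual model (the study used a deep convolutional network, and its exact wavelengths and architecture are not reproduced here); as an assumption for illustration, the example below learns a simple per-pixel linear mapping from three infrared intensities to three visible (RGB) channels using synthetic data and gradient descent.

```python
import numpy as np

# Toy illustration (not the study's model): learn an IR -> RGB mapping
# from paired examples, then apply it to "new" infrared pixels.

rng = np.random.default_rng(0)

# Synthetic ground truth: pretend each RGB channel is a fixed linear mix
# of the three infrared bands. (Purely illustrative numbers.)
true_mix = np.array([[0.6, 0.3, 0.1],
                     [0.2, 0.5, 0.3],
                     [0.1, 0.2, 0.7]])

ir_train = rng.random((1000, 3))    # 1000 pixels, 3 IR band intensities
rgb_train = ir_train @ true_mix     # paired visible-colour targets

# Fit a 3x3 mapping by gradient descent on mean squared error.
W = np.zeros((3, 3))
lr = 0.5
for _ in range(2000):
    pred = ir_train @ W
    grad = ir_train.T @ (pred - rgb_train) / len(ir_train)
    W -= lr * grad

# "Colourise" infrared pixels the model has never seen.
ir_new = rng.random((4, 3))
rgb_pred = ir_new @ W
residual = np.max(np.abs(rgb_pred - ir_new @ true_mix))
print(residual)  # close to zero: the mapping was recovered
```

The real system faces a far harder version of this problem: the true IR-to-colour relationship is non-linear and context-dependent, which is why the researchers used deep neural networks trained on hundreds of images rather than a hand-fit linear map.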
“There is some variability because you can put them side-by-side and see some differences here and there,” said Browne. “But they’re basically indistinguishable like you wouldn’t even know you were looking at a predicted image.”
While this proof of concept is an excellent first step toward improving night vision, Browne pointed out that the neural network’s predictions are only as good as its training data. The researchers are therefore working on adding more training data sets and upgrading the hardware the networks run on so that they can gather and retain more data.
The technology would obviously appeal to the military, but it could also prove helpful in eye surgery, where night vision could protect sensitive retinal tissue from light damage, and in art restoration, where visible light can harm delicate works.
“Artificial neural networks are something that is going to support a host of different scientific application endeavours. And it’s why they’re a very powerful tool in the context of medical care; they can enhance a clinician’s ability to function,” said Browne.
“In the context of new technologies, they can enhance the performance of that technology to perform a specific task.”