Researchers at the University of Central Florida have developed an AI technique that mimics the human eye.
This technology could lead to advanced artificial intelligence that instantly understands what it sees, with applications in robotics and self-driving cars.
Researchers at the University of Central Florida (UCF) have built a device for artificial intelligence that replicates the retina of the eye.
This research could lead to cutting-edge AI that can instantly identify what it sees, such as automatically describing photos taken with a camera or phone. The technology could also be used in robots and self-driving cars.
The technique, described in a recent study published in the journal ACS Nano, also outperforms the eye in the range of wavelengths it can perceive, from the ultraviolet through the visible to the infrared spectrum.
Its ability to combine three different operations into one further contributes to its uniqueness. Currently available intelligent imaging technologies, such as those found in self-driving cars, require separate devices for sensing, storing, and processing data.
The researchers say that by integrating these three steps, the UCF-designed device is much faster than existing technology. It is also very compact: hundreds of the devices can fit on a chip one inch wide.
“It will change the way artificial intelligence is realized today,” says Tania Roy, assistant professor and principal investigator in UCF’s Department of Materials Science and Engineering and NanoScience Technology Center. “Today, everything is a discrete component running on traditional hardware. Here we have the ability to do in-sensor computing with a single device on one tiny platform.”
The technology expands on previous work by a research team that created brain-like devices that allow AI to function in remote locations and in space.
“We had devices that behaved like synapses in the human brain, yet we weren’t feeding them images directly,” says Roy. “Now, by adding image-sensing capabilities to them, we have synapse-like devices that act like ‘smart pixels’ in cameras by simultaneously sensing, processing, and recognizing images.”

The study’s lead author, Molla Manjurul Islam, a PhD student in the Department of Physics at UCF, examines a retina-like device on a chip. Credit: University of Central Florida
For self-driving cars, the device’s versatility will enable safe driving in a variety of conditions, including at night, said Molla Manjurul Islam ’17MS, the study’s lead author and a PhD student in UCF’s Department of Physics.
“If you’re in a self-driving car at night and the car’s imaging system only operates on certain wavelengths, say visible wavelengths, you can’t see what’s ahead,” says Islam. “But in our case, with our device, we can actually see the entire scene.”
“No such device has been reported that can operate simultaneously in the ultraviolet, visible, and infrared wavelengths, and that is the most unique selling point of this device,” he says.
The key to the technology is the engineering of nanoscale surfaces made of molybdenum disulfide and platinum ditelluride, which enable multi-wavelength sensing and memory. The work was carried out in close collaboration with Yeonwoong Jung, an assistant professor with joint appointments in UCF’s NanoScience Technology Center and Department of Materials Science and Engineering, part of UCF’s College of Engineering and Computer Science.
The researchers tested the device’s accuracy by having it sense and recognize a mixed-wavelength image — an ultraviolet number “3” and an infrared part that is the mirror image of the digit, placed together to form an “8.” They demonstrated that the technology could discern the patterns and identify them both as a “3” in ultraviolet and an “8” in infrared.
“We got 70 to 80% accuracy, which means they have very good chances that they can be realized in hardware,” says study co-author Adithi Krishnaprasad ’18MS, a doctoral student in UCF’s Department of Electrical and Computer Engineering.
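To make the idea of that test concrete, here is a minimal sketch in Python of reading a mixed-wavelength image channel by channel. It is an illustration only, not the authors’ method, device physics, or data: the 5×3 digit stencils and the simple template match are assumptions standing in for the device’s per-band sensing and recognition.

# Toy illustration (hypothetical stencils and matching, not the UCF device itself):
# a UV "3" and its infrared mirror image are overlaid on the same pixels, and each
# band is read out separately so the pattern in each channel can be recognized.
import numpy as np

# 5x3 binary stencils for two digits (1 = lit pixel)
DIGITS = {
    "3": np.array([[1, 1, 1],
                   [0, 0, 1],
                   [1, 1, 1],
                   [0, 0, 1],
                   [1, 1, 1]]),
    "8": np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]]),
}

uv_channel = DIGITS["3"]             # pattern presented in the ultraviolet band
ir_channel = np.fliplr(DIGITS["3"])  # mirror image presented in the infrared band

# The scene seen by the sensor: both bands superimposed on the same pixels
combined = np.clip(uv_channel + ir_channel, 0, 1)

def classify(pattern):
    """Return the digit whose stencil agrees with the pattern on the most pixels."""
    return max(DIGITS, key=lambda d: np.sum(DIGITS[d] == pattern))

print("UV channel reads as:    ", classify(uv_channel))  # -> 3
print("Combined image reads as:", classify(combined))    # -> 8

The sketch shows why per-wavelength readout matters: the same pixels that look like an “8” when both bands are merged resolve into a “3” once the ultraviolet channel is sensed on its own.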
The researchers say the technology could become available for use in the next five to 10 years.
Reference: “Multiwavelength Optoelectronic Synapse with 2D Materials for Mixed-Color Pattern Recognition” by Molla Manjurul Islam, Adithi Krishnaprasad, Durjoy Dev, Ricardo Martinez-Martinez, Victor Okonkwo, Benjamin Wu, Sang Sub Han, Tae-Sung Bae, Hee-Suk Chung, Jimmy Touma, Yeonwoong Jung and Tania Roy, 25 May 2022, ACS Nano.
DOI: 10.1021/acsnano.2c01035
The work was funded by the U.S. Air Force Research Laboratory through the Air Force Office of Scientific Research, and the U.S. National Science Foundation through its CAREER program.