Researchers in Korea have developed a vision sensor that efficiently and accurately extracts object outline information, even under fluctuating brightness conditions, by mimicking the principles of neurotransmission in the human brain. The technology is expected to help autonomous vehicles, drones, and robots recognize their surroundings faster and more accurately.
A research team led by Professor Choi Moon-ki from the Ulsan National Institute of Science and Technology (UNIST) announced on the 4th that they have developed a synapse-mimicking robotic vision sensor in collaboration with Senior Researcher Choi Chang-soon from the Korea Institute of Science and Technology (KIST) and Professor Kim Dae-hyung from Seoul National University. The research findings were published online on May 2 in the international journal Science Advances.
Vision sensors serve as a machine's eyes: information detected by the sensor is transmitted to a processor, which acts as the brain, for processing. If that information is transmitted without filtering, the data volume grows, slowing processing and potentially reducing recognition accuracy because of unnecessary information. These problems are most pronounced when lighting changes drastically or when bright and dark areas are mixed in the same scene.
The research team developed a vision sensor that selects high-contrast visual information, such as outlines, by mimicking the dopamine-glutamate signaling pathway at synapses between neurons. In the brain, dopamine modulates glutamate signaling to amplify important information, and the sensor was designed to reproduce this principle.
Professor Choi Moon-ki explained, "By applying in-sensor computing technology that endows the sensor itself with some brain functions, it can autonomously adjust the brightness and contrast of video data and filter out unnecessary information. This can fundamentally reduce the burden on robotic vision systems that need to process video data at rates reaching dozens of gigabits per second."
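To put the "dozens of gigabits per second" figure in perspective, a rough back-of-envelope calculation shows what an unfiltered camera stream amounts to. The resolution, frame rate, and bit depth below are assumed example values, not numbers from the article:

```python
# Raw bandwidth of one uncompressed video stream.
# All parameters are illustrative assumptions.
width, height = 3840, 2160      # 4K sensor
fps = 60                        # frames per second
bits_per_pixel = 24             # 8-bit RGB

bits_per_second = width * height * fps * bits_per_pixel
gbps = bits_per_second / 1e9
print(f"Raw stream: {gbps:.1f} Gbit/s")  # ≈ 11.9 Gbit/s for a single camera
```

A robot or vehicle carrying several such cameras quickly reaches the tens of gigabits per second the researchers describe, which is why filtering at the sensor itself matters.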
In experiments, the vision sensor was confirmed to reduce the volume of transmitted video data by approximately 91.8% compared with existing methods, while raising object-recognition accuracy in simulations to about 86.7%.
The sensor is built around an optical transistor whose current response changes with the gate voltage. The gate voltage plays a role similar to dopamine in the brain, regulating the strength of the response, while the current from the optical transistor mimics the stimulation signals corresponding to glutamate. By adjusting the gate voltage, the device becomes more sensitive to light, allowing clear detection of outline information even in dark environments. It is also designed so that the output current varies with the brightness difference from the surroundings, producing stronger responses at boundaries where brightness changes sharply, that is, at outlines, while suppressing areas of constant brightness.
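The behavior described above can be sketched in software as a per-pixel response proportional to the local brightness difference, scaled by a tunable gain that stands in for the dopamine-like gate voltage. This is a toy illustration of the idea only; the function, the 4-neighbour averaging, and the linear gain model are assumptions, not the paper's device equations:

```python
def edge_response(image, gain=1.0):
    """Per-pixel output proportional to local brightness difference,
    scaled by a dopamine-like gain factor (toy model)."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Average brightness of the 4-connected neighbours.
            neighbours = []
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    neighbours.append(image[ny][nx])
            local_avg = sum(neighbours) / len(neighbours)
            # Strong output at brightness boundaries (outlines),
            # suppressed output in regions of constant brightness.
            out[y][x] = gain * abs(image[y][x] - local_avg)
    return out

# A flat dark region next to a flat bright region: only pixels at the
# boundary produce a strong response; flat interiors stay near zero.
frame = [[0, 0, 255, 255] for _ in range(4)]
response = edge_response(frame, gain=0.5)
```

Raising `gain` here plays the role the article assigns to the gate voltage: it amplifies the boundary signal so that outlines remain detectable even when the overall scene is dark.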
Senior Researcher Choi Chang-soon noted, "This technology can be widely applied to vision-based systems such as robots, autonomous vehicles, drones, and Internet of Things (IoT) devices. By increasing both data processing speed and energy efficiency, it could serve as a core solution for next-generation artificial intelligence vision technology."
References
Science Advances (2025), DOI: https://doi.org/10.1126/sciadv.adt6527