Summary:
UCLA researchers in the Department of Electrical Engineering have developed an innovative approach to improving image quality under suboptimal lighting conditions that can be readily integrated with different machine learning techniques for more accurate object and face recognition.
Background:
The computer vision market was valued at more than US$12.5 billion in 2021 and is expected to grow at a compound annual growth rate (CAGR) of 12.5% from 2022 to 2026. Low-light conditions are ubiquitous, and because of various technical and environmental constraints, many photographs and videos are captured in suboptimal lighting. These low-light images often have compromised aesthetic quality and may fail to convey important information accurately, which degrades the viewer's experience and can even lead to inaccurate object recognition. Although longer exposure times can improve image quality, they may introduce unwanted blur. Recent research has focused on machine learning techniques for enhancing low-light images, and these efforts have shown promise in several emerging areas of computer vision. However, they are often unstable and may not consistently deliver superior performance, especially in unknown real-world scenarios. Data-driven approaches also generally require training time and domain-specific data for each application. A generalizable enhancement algorithm that can restore an image or video captured in low light is therefore desired.
Innovation:
Professor Bahram Jalali and his research team have invented a novel method, Vision Enhancement via Virtual diffraction and coherent Detection (VEViD), to enhance visual image quality for human perception. The algorithm can significantly improve the quality of an image taken under unbalanced or inadequate lighting, preserving both the aesthetic quality of the image and the information it conveys. Because it emulates the physical propagation and coherent detection of light, it can be implemented in real time with a low computational burden, and mathematical approximations can further accelerate the process without significantly compromising image quality. VEViD has also been demonstrated as an effective pre-processing tool for object detection in low-light conditions: its performance is comparable to that of Zero-DCE, a current state-of-the-art deep learning algorithm, while scaling better with frame size, an advantage that becomes significant for 4K frames. The accelerated version of VEViD can enhance low-light 4K video at over 200 frames per second. Additionally, VEViD can be easily integrated with various machine learning techniques for emerging object/face recognition applications such as autonomous driving, security surveillance, and defense.
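For illustration only, the physics-inspired pipeline described above (a low-pass spectral phase applied in the Fourier domain to emulate diffraction, followed by a phase readout with a gain term to emulate coherent detection) can be sketched in a few lines of Python. The parameter names S (phase strength), T (phase variance), b (brightness bias), and G (phase activation gain) follow the related paper cited below, but the specific kernel shape, default values, readout, and normalization here are assumptions, not the authors' reference implementation.

    # Minimal, illustrative VEViD-style sketch using NumPy only.
    import numpy as np

    def vevid_like_enhance(v, S=0.2, T=0.01, b=0.16, G=1.4):
        """Enhance a normalized brightness channel v (2-D array in [0, 1])."""
        rows, cols = v.shape
        # Spatial-frequency grid for the "virtual diffraction" phase kernel.
        kx = np.fft.fftfreq(cols)
        ky = np.fft.fftfreq(rows)
        KX, KY = np.meshgrid(kx, ky)
        k2 = KX**2 + KY**2
        # Low-pass spectral phase: strongest at low spatial frequencies
        # (assumed Gaussian shape; the exact kernel is an assumption).
        kernel_phase = S * np.exp(-k2 / T)
        # "Virtual diffraction": apply the phase kernel in the Fourier domain.
        field = np.fft.ifft2(np.fft.fft2(v + b) * np.exp(-1j * kernel_phase))
        # "Coherent detection": read out the phase with activation gain G
        # (taking the magnitude of the imaginary part is an illustrative choice).
        out = np.arctan2(G * np.abs(field.imag), v + b)
        # Normalize back to [0, 1] for display.
        return (out - out.min()) / (out.max() - out.min() + 1e-12)

    if __name__ == "__main__":
        # Synthetic underexposed test image: a dim gradient plus a bright spot.
        y, x = np.mgrid[0:256, 0:256] / 255.0
        dark = 0.05 * x * y + 0.2 * np.exp(-((x - 0.5)**2 + (y - 0.5)**2) / 0.01)
        enhanced = vevid_like_enhance(dark)
        print("input mean brightness: %.3f, enhanced mean: %.3f"
              % (dark.mean(), enhanced.mean()))

Because the only heavy operations are two FFTs and an elementwise phase readout, this style of sketch suggests why the method carries a low computational burden compared with trained deep networks.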
Patent:
Systems and methods for vision enhancement via virtual diffraction and coherent detection
Potential Applications:
• Low-light image enhancement framework
• Preprocessing tool for autonomous driving, security surveillance, etc.
• Medical imaging/live-cell tracking
• Live sports broadcasting (e.g., Hawk-Eye technology in soccer)
• Movie/motion picture shooting
Advantages:
• Produces enhanced images under low-light conditions
• Color enhancement for realistic tone matching
• Low computational burden (simplicity)
• Compatible with various computer vision algorithms (generalizability)
Related Papers:
Jalali, Bahram, and Callen MacPhee. "VEViD: Vision Enhancement via Virtual diffraction and coherent Detection." eLight 2.1 (2022): 1-16. https://elight.springeropen.com/articles/10.1186/s43593-022-00034-y
Reference:
UCLA Case No. 2023-083
Lead Inventor:
Professor Bahram Jalali